Published on March 11, 2018 by

An estimated 5–10% of the world's population is deaf or hard of hearing, and many rely on sign language as their primary form of communication. In this project, I set out to prototype a real-time system that translates the American Sign Language fingerspelled alphabet from video into text. I will walk you through the current pipeline, which utilises convolutional neural networks, transfer learning and a webcam with Keras and OpenCV, and share some of the lessons I learned while implementing deep learning for real-time classification.
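To make the shape of such a pipeline concrete, here is a minimal sketch of the two glue steps that sit between the webcam and the network: preprocessing a captured frame into a CNN-ready batch, and decoding the network's softmax output into a letter. This is an illustrative assumption, not the talk's actual code: the input size (64×64), the crude strided resize (a real pipeline would use `cv2.resize`), and the 24-letter label set are all hypothetical choices made to keep the sketch dependency-free.

```python
import numpy as np

# Static ASL fingerspelling letters; J and Z are commonly excluded
# because signing them requires motion, which a single frame cannot capture.
LETTERS = [c for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ" if c not in ("J", "Z")]

def preprocess(frame, size=64):
    """Turn a raw webcam frame (H, W, 3 uint8) into a CNN input batch.

    Center-crops to a square, downsamples by striding (a stand-in for
    cv2.resize, to keep this sketch NumPy-only), scales pixels to [0, 1],
    and adds a batch dimension.
    """
    h, w = frame.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    crop = frame[top:top + side, left:left + side]
    step = side // size
    small = crop[:step * size:step, :step * size:step]  # crude nearest-neighbour resize
    x = small.astype(np.float32) / 255.0
    return x[np.newaxis, ...]  # shape: (1, size, size, 3)

def decode(probs):
    """Map the network's softmax output vector to a predicted letter."""
    return LETTERS[int(np.argmax(probs))]
```

In the live loop, each frame from `cv2.VideoCapture` would go through `preprocess`, into `model.predict`, and then through `decode` before the letter is drawn on screen.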

---

www.pydata.org

PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.

PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
