Break language barriers with Signify ISL — real-time gesture detection, ISL translation, video calls, interactive learning, and educational modules, all in one inclusive, AI-powered platform.
Signify is a revolutionary AI-powered communication platform that translates Indian Sign Language (ISL) into real-time text and speech, empowering the deaf and hard-of-hearing community to interact seamlessly with the world around them. Whether it’s education, everyday conversation, or professional communication, Signify brings inclusive interaction to the forefront.
1. Real-Time ISL to Text & Speech Translation:
Leverages a 2D CNN-based deep learning model to interpret sign gestures from a webcam feed and convert them into readable text and synthesized speech in real time.
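A minimal sketch of this translation loop, assuming a hypothetical saved model file (`isl_cnn.h5`), a toy label set, a 64×64 input size, and pyttsx3 for speech output (the source names TensorFlow and OpenCV but not the TTS engine):

```python
# Sketch of the webcam -> CNN -> text -> speech loop. Model path, input
# size, labels, and confidence threshold are illustrative assumptions.
import cv2
import numpy as np
import pyttsx3
import tensorflow as tf

model = tf.keras.models.load_model("isl_cnn.h5")   # hypothetical model file
LABELS = ["hello", "thanks", "yes", "no"]          # hypothetical label set
tts = pyttsx3.init()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame to match the CNN's expected input shape.
    img = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    if probs.max() > 0.8:                          # confidence threshold
        word = LABELS[int(probs.argmax())]
        # Overlay the recognized sign as text, then speak it aloud.
        cv2.putText(frame, word, (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
        tts.say(word)
        tts.runAndWait()                           # blocks briefly per word
    cv2.imshow("Signify", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```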
2. AI-Powered Video Calling with ISL Support – Built for Education:
Enables real-time ISL gesture translation during video calls, with live text transcription and multi-user call sharing.
What sets Signify apart is its educational focus: teachers can conduct live virtual classes for deaf students using sign language or voice, while the platform automatically translates their input into text and speech, making inclusive digital learning a reality.
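A hedged sketch of how live captions could be relayed to everyone in a call. The source names Flask but no realtime library, so Flask-SocketIO stands in here; the event names (`join`, `caption`) and the `call_id` room scheme are illustrative assumptions, not Signify's actual protocol:

```python
# Sketch: share recognized-sign captions with all participants in a call
# room. Flask-SocketIO, event names, and room scheme are assumptions.
from flask import Flask
from flask_socketio import SocketIO, emit, join_room

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on("join")
def on_join(data):
    # Each video call maps to one room; every participant joins it.
    join_room(data["call_id"])

@socketio.on("caption")
def on_caption(data):
    # Relay a recognized sign (as text) to everyone else in the call.
    emit("caption", {"text": data["text"]},
         room=data["call_id"], include_self=False)

if __name__ == "__main__":
    socketio.run(app, port=5000)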
3. Interactive ISL Learning Hub:
Includes an integrated ISL learning module with animated tutorials, practice tests, and progress tracking – making Signify not just a tool but an educational companion.
4. Multilingual Support:
Ensures translations are available in multiple Indian languages, making the platform accessible across diverse linguistic regions.
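The source does not say which translation backend powers this, so the sketch below stands in deep-translator's GoogleTranslator as an assumption; standard language codes such as `hi` (Hindi), `ta` (Tamil), and `bn` (Bengali) select the target language:

```python
# Hedged sketch of the multilingual step; the actual backend is not
# named in the source, so deep-translator is an illustrative stand-in.
from deep_translator import GoogleTranslator

def translate_caption(text: str, target: str = "hi") -> str:
    """Translate recognized English text into a target Indian language."""
    return GoogleTranslator(source="en", target=target).translate(text)

print(translate_caption("hello", target="hi"))  # e.g. "नमस्ते"
```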
5. Fallback Mechanism – Robustness Beyond AI:
To ensure consistent performance even when the model encounters difficulty (due to poor lighting, occlusion, or cluttered backgrounds), Signify integrates fallback logic built on hand landmark tracking (see the sketch after this list):
If the CNN model fails to classify a gesture, the system captures hand keypoints using MediaPipe.
Based on the relative positions of landmarks (like fingertip distances, angles, and hand orientation), the system infers the likely sign.
This fallback ensures that users still receive accurate text and speech translations even under challenging conditions, increasing reliability and usability.
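A minimal sketch of that fallback path. MediaPipe is confirmed by the stack below, but the landmark-to-sign rules here are a toy finger-count heuristic, not the project's actual inference logic:

```python
# Fallback sketch: infer a coarse sign from MediaPipe hand keypoints when
# the CNN's confidence is too low. The rule set is illustrative only.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def infer_sign_from_landmarks(frame_bgr):
    """Return a coarse sign guess from hand keypoints, or None if no hand."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    # A finger counts as extended when its tip sits above its middle joint
    # (image y grows downward). (tip, pip) landmark index pairs per finger:
    fingers = [(8, 6), (12, 10), (16, 14), (20, 18)]
    extended = sum(lm[tip].y < lm[pip].y for tip, pip in fingers)
    # Toy mapping from extended-finger count to a sign label.
    return {0: "fist", 4: "open palm"}.get(extended, f"{extended} fingers")
```

In practice this runs only on frames the CNN rejects, so the (slower) per-frame `static_image_mode` setup cost is acceptable for a sketch.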
Tech Stack:
Frontend: HTML, CSS, JavaScript
Backend & APIs: Flask, TensorFlow
Model: Custom-trained 2D CNN for gesture classification
Real-time Processing: OpenCV, MediaPipe
Deployment: Vercel + GitHub Integration
"Over 63 million Indians face hearing impairments. Signify gives them a voice—literally."
By enabling barrier-free, real-time communication, Signify contributes to inclusive education, accessibility in workplaces, and a more empathetic society.