SignAI has been experimenting with sign language datasets, motion capture, and prototype sign generation since August 2021. This page documents the real work behind the vision.
These videos show early-stage but functional sign language AI workflows, documented from as early as March 2021.
Studio workflow capturing hand, face, and body movement using motion capture technology for structured sign language training data.
Text input producing early signed output through an avatar interface, demonstrating practical iteration on generation workflows.
Early sign-to-avatar concept showing Joel signing "Hello my name is Joel", with real-time recognition overlays detecting individual letter signs.
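The letter-sign overlays in that clip hint at a common pattern in fingerspelling recognition: a classifier emits one letter prediction per video frame, and repeated or blank predictions are collapsed into a final word. A minimal sketch of that collapsing step, with frame predictions invented for illustration (this is not SignAI's actual decoder):

```python
def collapse_frames(frame_predictions, blank="-"):
    """Collapse per-frame letter predictions into a fingerspelled word.

    Consecutive duplicates are merged (a held sign spans many frames)
    and blank frames (no confident letter) act as separators, similar
    in spirit to CTC decoding.
    """
    letters = []
    previous = blank
    for prediction in frame_predictions:
        if prediction != blank and prediction != previous:
            letters.append(prediction)
        previous = prediction
    return "".join(letters)

# A toy stream of per-frame predictions for the name "JOEL":
frames = ["J", "J", "J", "-", "O", "O", "-", "E", "E", "E", "-", "L", "L"]
print(collapse_frames(frames))  # JOEL
```

The blank symbol matters: without it, a genuinely doubled letter would be merged away along with the frame-level repeats.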
SignAI did not appear overnight. It grew from years of work in Deaf accessibility, BSL resources, and a personal understanding of the communication gap.
Joel Kellhofer MBE founded SignLive (2014 to 2023), a remote BSL interpreting platform that grew to thousands of registered users and was adopted by a growing number of organisations. He also created the Sign Dictionary, a free BSL learning resource used by millions of people and thousands of schools. Joel is no longer involved in SignLive.
Exploration started in 2021. Joel began experimenting with motion capture techniques and sign language datasets, capturing hand, face, and body movement in studio workflows, laying the groundwork for what would become SignAI.
Working prototype videos documented sign-to-text recognition with real-time overlays, motion capture workflows for structured training data, and text-to-sign avatar generation. The SignAI journey began here.
Ongoing development of datasets, model architectures, and generation pipelines. Iteration focuses on pose estimation, temporal sequence modelling, and BSL-aligned sign production: structurally accurate signing rather than word-for-word substitution.
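Temporal sequence modelling of this kind has to cope with noisy per-frame predictions. One small, illustrative piece of such a pipeline (a generic technique, not SignAI's actual code) is a sliding-window majority vote that smooths frame-level labels before they reach the sequence model:

```python
from collections import Counter

def smooth_labels(labels, window=3):
    """Majority-vote smoothing over a sliding window of frame labels.

    Pose-based classifiers often flicker between signs on single frames;
    voting over a short window suppresses those spurious flips before
    the sequence is passed on to a temporal model.
    """
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        lo = max(0, i - half)
        hi = min(len(labels), i + half + 1)
        smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return smoothed

noisy = ["HELLO", "HELLO", "NAME", "HELLO", "NAME", "NAME", "NAME"]
print(smooth_labels(noisy))  # ['HELLO', 'HELLO', 'HELLO', 'NAME', 'NAME', 'NAME', 'NAME']
```

The single-frame flicker to "NAME" at position 2 is voted away, while the genuine transition later in the sequence survives.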
SignAI is now presented at signai.com as an independent AI-powered sign language access platform. SignWow operates as the current Deaf-led interpreting, translation, and accessibility service, providing the commercial foundation and real-world testing ground for SignAI's technology.
The workload is multimodal, real-time, and video-heavy. SignAI develops two core AI workflows, designed to complement, not replace, human interpreters.
Temporal modelling of hand shape, facial expression, gaze, and body movement to produce understandable text or structured language representations. Uses pose estimation pipelines and transformer-based sequence models.
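The shape of that sign-to-text data flow can be sketched in a few lines. In a real pipeline a transformer encoder models the temporal structure; here mean pooling and a nearest-prototype lookup stand in for it, and the keypoint numbers and gloss prototypes are invented for illustration:

```python
import math

def pool_sequence(frames):
    """Mean-pool a sequence of per-frame keypoint vectors.

    Each frame is a flat list of keypoint coordinates for hands, face,
    and body, as produced by a pose estimation pipeline. Pooling turns
    the variable-length clip into one fixed-size representation.
    """
    dims = len(frames[0])
    return [sum(f[d] for f in frames) / len(frames) for d in range(dims)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recognise(frames, prototypes):
    """Map a pose sequence to the gloss with the most similar prototype."""
    pooled = pool_sequence(frames)
    return max(prototypes, key=lambda gloss: cosine(pooled, prototypes[gloss]))

# Toy 4-dimensional "keypoint" data; real frames carry hundreds of values.
prototypes = {"HELLO": [1.0, 0.0, 0.2, 0.1], "THANK-YOU": [0.0, 1.0, 0.1, 0.2]}
clip = [[0.9, 0.1, 0.2, 0.1], [1.1, 0.0, 0.3, 0.1]]
print(recognise(clip, prototypes))  # HELLO
```

Swapping the pooling step for a learned sequence model is what lets the real system capture ordering, facial expression, and co-articulation rather than just average hand position.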
Generation workflows that render signed output through avatar or video-based interfaces, aligned to BSL grammar and structure rather than word-for-word substitution.
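One way to make "BSL grammar rather than word-for-word substitution" concrete is a gloss sequence: generation maps an English sentence to an ordered list of sign glosses, which then drives the avatar. The lookup table and glosses below are invented for illustration and are a stand-in for a trained model, not a BSL grammar engine:

```python
# Hypothetical English-to-gloss table; "fs-" marks fingerspelling.
GLOSS_TABLE = {
    "hello my name is joel": ["HELLO", "NAME", "ME", "fs-JOEL"],
    "thank you": ["THANK-YOU"],
}

def text_to_glosses(text):
    """Return the signed gloss sequence for a known phrase.

    Note the output is not a word-for-word copy of the English input:
    glosses are reordered and English function words are dropped.
    """
    key = text.lower().strip()
    if key not in GLOSS_TABLE:
        raise KeyError(f"no gloss sequence for: {text!r}")
    return GLOSS_TABLE[key]

print(text_to_glosses("Hello my name is Joel"))  # ['HELLO', 'NAME', 'ME', 'fs-JOEL']
```

The gap between the English word order and the gloss order is exactly what a real translation model has to learn, which is why word-for-word substitution is not enough.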
SignAI complements live interpreters and VRS/VRI services, unlocking everyday communication moments that cannot always wait for human availability.
Deaf entrepreneur with over a decade of experience building products and services for the Deaf community. Joel was awarded an MBE for services to the Deaf community. He has been working on AI-powered sign language technology since November 2022.
SignAI is not a speculative story. It comes from someone who understands Deaf users, sign language content, service delivery, and the operational reality of accessibility products.
Whether you're interested in the technology, exploring a partnership, or want to learn more about AI-powered sign language access, we'd love to hear from you.