Zachary is an undergraduate Computer Science student, currently studying at the University of Plymouth.
Originally from Torbay, Devon, Zachary developed his design after a friend described their frustration at trying to communicate with people who don't understand sign language, and he set out to find a solution to this language barrier. He was inspired by the AI modules at university, and hopes to build a career in cyber security after graduating.
Zachary has designed a sign-to-text system that allows sign language users to communicate more effectively with people who don't understand it. The system interprets hand signals and converts them into text for the person trying to understand.
The intended client is anyone who uses sign language instead of speech, whether due to a disability or other circumstances. The end goal is to release a mobile application, so the system can be used anywhere, by anyone. The product will use a camera to view real-time gestures (sign language) and output what is being said as text. My deliverables will be an artificial intelligence system, most likely a neural network, that detects gestures and outputs text.
The system has three main stages: an AI to detect the gestures; teaching the AI a full language; and implementing the system in something user-friendly (a mobile application). I expect the first and second stages to take longer than the project lifetime, so I will use an agile prototyping approach, slowly adding features throughout the project and therefore always having a deliverable, however small. I'll be doing all development work on my PC with a Microsoft Kinect. I chose this camera because it has depth perception as well as skeleton mapping, both of which could be very helpful when first developing the system.
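The first stage described above can be sketched in miniature. The following is a minimal, illustrative example of classifying a sign from Kinect-style skeleton data: the gesture names, the two-sign vocabulary, and the use of a simple softmax classifier (a stand-in for the neural network the project proposes) are all assumptions for demonstration, not part of the actual design.

```python
# Minimal sketch: classifying a gesture from a Kinect skeleton frame.
# The "hello"/"thank_you" vocabulary and synthetic poses are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 20                      # Kinect v1 tracks 20 skeleton joints
N_FEATURES = N_JOINTS * 3          # (x, y, z) per joint, flattened
GESTURES = ["hello", "thank_you"]  # hypothetical two-sign vocabulary

def make_samples(centre, n):
    """Synthetic skeleton frames: a fixed joint layout plus small noise."""
    return centre + rng.normal(scale=0.05, size=(n, N_FEATURES))

# Two made-up "canonical" skeleton poses, one per gesture.
pose_a = rng.normal(size=N_FEATURES)
pose_b = rng.normal(size=N_FEATURES)
X = np.vstack([make_samples(pose_a, 50), make_samples(pose_b, 50)])
y = np.array([0] * 50 + [1] * 50)

# One-layer softmax classifier trained by gradient descent -- the
# simplest possible stand-in for the proposed neural network.
W = np.zeros((N_FEATURES, len(GESTURES)))
b = np.zeros(len(GESTURES))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):
    grad = softmax(X @ W + b)
    grad[np.arange(len(y)), y] -= 1.0    # gradient of cross-entropy loss
    W -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean(axis=0)

def sign_to_text(frame):
    """Map one skeleton frame to its predicted gesture label."""
    return GESTURES[int(np.argmax(frame @ W + b))]

print(sign_to_text(pose_a))  # -> "hello"
print(sign_to_text(pose_b))  # -> "thank_you"
```

A real system would replace the synthetic poses with recorded Kinect joint streams and the linear classifier with a network that handles sequences of frames, since signs are movements rather than static poses.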
To complete this work, I'll need to learn how to create a neural network that can interpret moving images. I will also need to learn sign language to both teach and test the system. Future development will require building for a mobile platform, which I would also need to learn.
There are several risks involved in the project. The main one is my lack of knowledge: I've never taken on this sort of project before, and it presents several difficult obstacles to overcome. The second is hardware limitations: although I'm developing with a very sophisticated camera, it still may not cope with deciphering sign language, which is a complicated language with varying gestures and movements used to relay speech. The third biggest risk is the efficiency of the AI: even after development is complete, it may take too long to decode gestures in real time, rendering the whole project unusable in a real-world environment.
There isn't currently a comparable system on the market, so this would be the leading application of its kind. I am implementing the design as my final-year project, though I expect to get only as far as a working AI; the mobile application will be developed in my own time after my degree has concluded.