Engineering for Access 2019 shortlist: Zachary Price

Zachary Price

About Zachary Price

Zachary is an undergraduate Computer Science student, currently studying at the University of Plymouth.

Originally from Torbay, Devon, Zachary came up with his design after speaking with a friend who described the frustration of trying to communicate with people who don't understand sign language; he set out to find a solution to this language barrier. He was inspired by the AI modules at university, and hopes to build a career in cyber security after graduating.

Shortlisted idea

Zachary has designed a sign-to-text system that allows users of sign language to communicate more effectively with people who don't understand it. The system interprets hand signals as text, making them easy for the other person to follow.

Zachary’s entry

The intended client is anyone who uses sign language rather than speech, whether due to a disability or other circumstances. The end goal is to release a mobile application, so that the system can be used anywhere, by anyone. The product will use a camera to capture gestures (sign language) in real time and output what is being signed as text. My deliverable will be an artificial intelligence system, most likely a neural network, that detects gestures and outputs text.

The system has three main stages: an AI to detect gestures, teaching the AI a full language, and implementing the system in something user-friendly (a mobile application). I expect the first and second stages to take longer than the project lifetime, so I will use an agile prototyping approach, slowly adding features throughout the project and always having a deliverable, however small. I'll do all development work on my PC using a Microsoft Kinect. I chose this camera because it offers depth perception as well as skeleton mapping, both of which could be very helpful when first developing the system.
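The skeleton-mapping idea above can be sketched in miniature. This is an illustrative toy only: the joint names, coordinates, and templates below are invented for the example, and a real system would use the actual Kinect skeleton stream and far richer features. It shows the basic shape of the first stage, matching an observed hand pose against stored gesture templates.

```python
# Toy sketch: classify a static hand pose from skeleton keypoints by
# nearest-neighbour distance to stored templates. Joint names, coordinates,
# and template values are illustrative, not real Kinect output.
import math

# Each pose is a dict of joint name -> (x, y), e.g. from a skeleton
# stream, normalised relative to the wrist.
TEMPLATES = {
    "hello": {"thumb": (0.1, 0.9), "index": (0.0, 1.0), "pinky": (-0.1, 0.9)},
    "yes":   {"thumb": (0.2, 0.1), "index": (0.0, 0.2), "pinky": (-0.2, 0.1)},
}

def pose_distance(a, b):
    """Sum of Euclidean distances over the joints both poses share."""
    return sum(math.dist(a[j], b[j]) for j in a if j in b)

def classify(pose):
    """Return the label of the template closest to the observed pose."""
    return min(TEMPLATES, key=lambda label: pose_distance(pose, TEMPLATES[label]))

observed = {"thumb": (0.12, 0.88), "index": (0.02, 0.97), "pinky": (-0.09, 0.91)}
print(classify(observed))  # the closest template wins
```

A template matcher like this only handles static poses; the neural network Zachary proposes is what would let the system generalise to moving, varied gestures.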

To complete this work, I'll need to learn how to create a neural network that can decipher moving images. I will also need to learn sign language, both to teach and to test the system. Future development will require building for a mobile platform, which I would also need to learn.
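The core learning task, a network that classifies short windows of moving frames, can be sketched at its simplest. The data below is synthetic (clips that drift upward are one class, clips that drift downward the other) and the model is a single-layer network trained by plain gradient descent, a deliberate simplification of whatever architecture the final system would use.

```python
# Illustrative only: a minimal single-layer network over short windows of
# frames, trained with gradient descent on synthetic data. Real sign
# recognition would need a much larger network and real video features.
import numpy as np

rng = np.random.default_rng(0)

def make_clip(direction):
    """A 'clip' of 5 frames x 4 features, flattened to one input vector."""
    base = rng.normal(size=4)
    frames = [base + direction * t * 0.5 + rng.normal(scale=0.05, size=4)
              for t in range(5)]
    return np.concatenate(frames)

# Label 1 clips drift upward over time; label 0 clips drift downward.
X = np.array([make_clip(+1) for _ in range(4)] + [make_clip(-1) for _ in range(4)])
y = np.array([1] * 4 + [0] * 4)

w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):                        # gradient descent on logistic loss
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probability per clip
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print((pred == y).mean())  # training accuracy on the toy clips
```

Flattening a fixed window of frames is the simplest way to feed motion to a classifier; recurrent or convolutional architectures are the usual next step for variable-length gestures.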

There are several risks involved in the project. The main one is a lack of knowledge: I've never taken on this sort of project before, and it presents several difficult obstacles. The second is hardware limitations: although I'm developing with a very sophisticated camera, it still may not cope with deciphering sign language, which is a very complicated language with varying gestures and movements. The third biggest risk is the efficiency of the AI: even after development is complete, the system may take too long to decode gestures in real time, rendering the whole project unusable in a real-world environment.
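The real-time risk above is measurable early on. A hypothetical sketch of such a check, where `classify` is a stand-in for whatever model is eventually used and the 30 fps budget is an assumed camera rate:

```python
# Hypothetical real-time check: measure average per-frame inference time
# against the camera's frame budget. classify() is a placeholder for the
# eventual model; the 30 fps figure is an assumption for illustration.
import time

FRAME_BUDGET_S = 1 / 30          # a 30 fps camera allows ~33 ms per frame

def classify(frame):
    # stand-in for the real model; here just a trivial computation
    return sum(frame) > 0

frames = [[0.1] * 1000 for _ in range(100)]
start = time.perf_counter()
for f in frames:
    classify(f)
elapsed = (time.perf_counter() - start) / len(frames)

print(f"avg {elapsed * 1000:.3f} ms/frame, real-time: {elapsed < FRAME_BUDGET_S}")
```

Running a check like this against each prototype would flag the efficiency risk long before the mobile version is attempted.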

There is currently no comparable system on the market, so this would be the leading application available. I am implementing the design as my final-year project, though I expect I'll only get as far as a working AI; the mobile application will be developed in my own time after my degree has concluded.

Click here to return to the Engineering for Access shortlist.
