Software Production for Microsoft Store

MotionInput Architecture & Compilation Processes, App Templating, Debug Strategies and API Pathways

MotionInput 3.1 Delivery

MotionInput 3.2 Compilation

Future-Proofing MotionInput

MFCs template

Abstract

In the digital age, the ability to interact with a computer is a near necessity; as such, the barriers to entry for using one should be lowered as much as possible. Hence the need for a way for people with many different needs to control and interact with a computer when they may be unable, or feel uncomfortable, using the traditional methods of keyboard and mouse.

In light of this, MotionInput provides a touchless control system that uses gestures to interact with a computer. By relying on the webcam instead of a keyboard and mouse, control opens up to the entire human body rather than just the hands. Speech commands are also integrated, allowing control without any movement of the body, and intuitive GUIs let users modify and adapt MotionInput to whichever method of use works best for them.
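To make the webcam-to-cursor idea concrete, the sketch below shows one plausible way such a pipeline might map a tracked fingertip to screen coordinates and detect a "pinch" click. This is a hypothetical illustration, not the MotionInput codebase: the screen size, pinch threshold, and function names are assumptions, and the normalized landmark coordinates stand in for whatever a hand-tracking library would supply per frame.

```python
import math

# Hypothetical sketch (not MotionInput's actual implementation):
# map a hand landmark from normalized camera space to screen pixels,
# and detect a "pinch" click from thumb-index fingertip distance.

SCREEN_W, SCREEN_H = 1920, 1080   # assumed display resolution
PINCH_THRESHOLD = 0.05            # normalized distance; tuned per setup

def to_screen(x_norm, y_norm, mirror=True):
    """Convert a landmark in [0, 1] camera space to pixel coordinates.
    Webcams mirror the user, so flip the x-axis by default."""
    if mirror:
        x_norm = 1.0 - x_norm
    return round(x_norm * SCREEN_W), round(y_norm * SCREEN_H)

def is_pinch(thumb_tip, index_tip):
    """Treat thumb and index fingertips nearly touching as a click."""
    dx = thumb_tip[0] - index_tip[0]
    dy = thumb_tip[1] - index_tip[1]
    return math.hypot(dx, dy) < PINCH_THRESHOLD

# Example frame: index fingertip left of centre drives the cursor,
# and thumb/index tips close together register a click.
cursor = to_screen(0.25, 0.5)                      # → (1440, 540)
click = is_pinch((0.40, 0.50), (0.42, 0.51))       # → True
```

A real pipeline would feed per-frame landmarks from a tracking library into these mappings and add smoothing so the cursor does not jitter with small hand tremors.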

The impact of this can be vast: people who cannot use their arms can use their legs or face as controls, give speech commands, and navigate their computers with a wave of a hand. Beyond this, version 3.2 brings many advancements, from Custom Gesture recording for gaming, offering a pseudo-VR experience, to Stereoscopic Image Navigation for medical use and beyond. These are just a few of the software's many applications, with even more to come.

Team Members

Joseph Marcillo-Coronado

Team Lead (Term 2) / Client Liaison / Tester

Nerea Sainz De La Maza Melon

Team Lead (Term 1) / Client Liaison / Tester

Abriele Qudsi

Technical Lead / UI Designer / Tester

Chaitu Nookala

Research

Our Timeline with Gantt Chart

Timeline