Evaluation

At the end of project development, we reflected on our work over the past year.

Final MoSCoW List

As we approached the end of project development, we revisited the MoSCoW list of project goals that we drew up at the requirement-gathering stage, and reflected on the progress we made towards completing the goals we set out.
ID Description Priority State Contributors
1 Using multiple ML libraries (modules) at once MUST Carmen
2 Switching events by gestures on runtime MUST Carmen
3 Increased frame by frame performance MUST Carmen
4 Decreased startup time MUST Carmen & Adi
5 Functionality widely configurable through just JSON files MUST Carmen
6 Extendable code - easy to add events, gestures and handlers MUST Carmen
7 Clear documentation MUST Full Team 32, Team 30 & Adi
8 Ability to create reduced functionality compiled builds, e.g. removing an unused module and its model files MUST Carmen
9 Simple API for the frontend to use MUST Carmen & Ponmile
10 Reduced storage size of compiled builds MUST Adi
11 Work with the final year students to integrate the new features in MotionInput V3 MUST Carmen
12 System can be closed without causing a window freeze MUST Carmen
13 Move over all Hand module functionality from v2 MUST Carmen
14 Move over all Body module functionality from v2 MUST Jason
15 Move over all Head module functionality from v2 MUST Andrzej & Radu
16 Move over all Eye module functionality from v2 MUST Yadong & Alexandros
17 Add functionality for an in-air virtual keyboard SHOULD Carmen & Team 34
18 Add functionality (a module) for speech recognition SHOULD Carmen & Samuel
19 Add functionality for gesture recording SHOULD Carmen & SSE
20 Ability to detect exercises and extremity triggers simultaneously, allowing for combined modes SHOULD Jason
21 Add functionality for a new mode combining the extremity triggers + walking on the spot, allowing for walking on the spot to trigger a key hold, and the extremity triggers to change the keybind set SHOULD Jason
22 Add functionality for a new "Gamepad" mode, similarly allowing triggers to be used in conjunction with walking on the spot, but with the triggers acting as Gamepad buttons, optimising for gaming SHOULD Jason
23 Add functionality for a new "FPS" mode, adapting the "Gamepad" mode to allow for the cursor to be controlled by the hands (potentially best for FPS games) SHOULD Jason & Team 33
24 Optimise system using multithreading technology SHOULD Carmen
25 Optimising gesture detection in each module by only performing calculations on the primitives required from the loaded events SHOULD Carmen
26 Allowing the exercise detection to switch between equipment and no equipment modes without restarting MotionInput SHOULD Jason
27 Automated compilation of source code COULD Adi & Andrzej
28 Automation of creation of micro-builds, which are compiled executables of individual modes COULD
29 Compilation into a DLL COULD


Percentage of key functionalities (MUST & SHOULD) completed: 100%

Percentage of optional functionalities (COULD) completed: 33%
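Goal 5 (functionality configurable through just JSON files) can be illustrated with a small sketch. The structure, keys and values below are purely hypothetical and do not reflect the actual MotionInput V3 configuration schema; they only show the general idea of mapping a gesture to an event and its handler in data rather than code:

```json
{
  "events": [
    {
      "name": "left_click",
      "gesture": "index_pinch",
      "handler": "mouse_press",
      "args": { "button": "left" }
    }
  ]
}
```

With this style of configuration, adding or remapping an event is a data change rather than a code change, which is what makes reduced-functionality builds (goal 8) practical.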

Individual Contribution

Here is the breakdown of each of our individual contributions to the project:
Task Carmen Jason Radu Mari
Client Liaison 45 45 10 0
Liaison with other teams 50 30 20 0
HCI 40 40 20 0
Requirement Analysis 40 40 20 0
Pitch Presentations 40 40 20 0
Coding 45 40 15 0
PR reviews 95 5 0 0
Blog 50 50 0 0
Testing 25 25 50 0
Report Writing 50 30 20 0
Report Website 10 90 0 0
Video Editing 80 10 10 0
Overall 40% 40% 20% 0%

Developer Feedback

As a major part of our project was to support other developers, we reached out to students from other MotionInput teams for feedback on how well we succeeded in the following aspects of our project:

  • Creating clear documentation for developers to expand the functionality of MotionInput.
  • Designing a system that would be more easily expandable than MotionInput V2.
  • Designing a system that would be noticeably more efficient than MotionInput V2.



We were able to gather feedback from the following 8 students:

  • Team 5: Alexandros Theofanous
  • Team 30: Oluwaponmile Femi-Sunmaila, Yadong(Adam) Liu, Andrzej Szablewski
  • Team 33: Phoenix Sun
  • Team 34: Fawziyah Hussain, Siam Islam
  • Team 35: Raquel Sofia Fernandes Marques da Silva


Documentation

Future Work

As our time constraints limited our development to less than 2 terms, there are a number of potential functionalities and optimisations that we believe would benefit MotionInput V3. We would have implemented these given more time, and we recommend them to future developers of MotionInput.
  • Addition of unit tests for the system
  • Create hand module calibration
  • Refine body module calibration
  • Refine head module calibration
  • Refining the hand, body, head and eye modules. The Position and LandmarkDetector classes for all modules (and the classes they use) remain very similar to the v2 modules. As we were fitting the functionality to the new architecture, we left most of the landmark and primitive detection code the same; however, a lot of that code does not follow best practices and is poorly documented.
  • As it was out of the scope of our project, we did not change how primitives like finger pinched/stretched/folded are calculated. However, we believe the accuracy of hand gesture detection could be greatly improved by improving the calculations in the HandPosition class.
  • Create more gestures for the head module (currently many gestures are reused across multiple events because there are not enough of them).
  • Improving error management.
  • Improving the method of distinguishing between a click and a press in GestureEvents.
  • Allow the backend to be used as a DLL. The architecture was designed with that possibility in mind but due to time constraints we were not able to properly support this.
  • Create a GestureSequence class - a class that would improve the flexibility of GestureEvents by providing a framework to track multiple gestures (possibly with movement) performed in a row, and only then activate an event (e.g. to support something like first making a circle with your hand and then saying a phrase; this is currently possible, but each combination requires a new GestureEvent class).
  • Add the DisplayElement classes to draw the skeletons for each module on the view window.
  • Improve the front end to make full use of the backend's configurability - dynamically creating new events based on the user's wishes, mapping events and gestures to handlers as the user requests.
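The proposed GestureSequence class above could be sketched as follows. This is a minimal illustration only, not MotionInput code: the Gesture stand-in, the `active` flag and the callback signature are all assumptions made for the sketch.

```python
# Hypothetical sketch of the proposed GestureSequence class. The Gesture
# class below is a minimal stand-in for a MotionInput gesture; in the real
# system its active state would come from the detection pipeline.
from typing import Callable, List


class Gesture:
    """Stand-in gesture: a name plus an active/inactive state."""

    def __init__(self, name: str):
        self.name = name
        self.active = False


class GestureSequence:
    """Tracks an ordered list of gestures and fires a callback only once
    every gesture in the sequence has been seen, in order."""

    def __init__(self, gestures: List[Gesture], on_complete: Callable[[], None]):
        self.gestures = gestures
        self.on_complete = on_complete
        self._index = 0  # position of the next gesture we are waiting for

    def update(self) -> None:
        """Call once per frame, after gesture detection has run."""
        if self._index < len(self.gestures) and self.gestures[self._index].active:
            self._index += 1
        if self._index == len(self.gestures):
            self.on_complete()
            self._index = 0  # reset so the sequence can trigger again


# Usage: "circle with your hand, then say a phrase" as one sequence,
# instead of writing a dedicated GestureEvent class per combination.
circle = Gesture("hand_circle")
phrase = Gesture("speech_phrase")
fired = []
seq = GestureSequence([circle, phrase], on_complete=lambda: fired.append(True))

circle.active = True
seq.update()          # first gesture seen, now waiting for the phrase
circle.active = False
phrase.active = True
seq.update()          # sequence complete, callback fires
```

The point of the design is that arbitrary gesture combinations become data (a list passed to one class) rather than new GestureEvent subclasses.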