Testing

We conducted thorough testing to evaluate the quality of the application's functionality and identify areas for improvement.

Testing Strategy


We have developed a client-side application inside Teams. It is therefore pivotal to use testing methods that give assurance that the usability, stability and functionality of the system satisfy our MoSCoW requirements.

Testing Scope
The application should be tested against mimicked user behaviour. Since it serves both patients and physiotherapists, testing should emulate their respective behaviours across a host of different scenarios. For example, this involves assessing the Teams app's functionality when MotionInput with NDI is run externally for a patient, versus when the patient runs it locally on their own machine.

Testing Methodology
We integrated unit tests into our NDI script as well as into our Teams application. These allowed us to verify the core functionality of each part of our system. Furthermore, user acceptance testing was carried out with pseudo-users, since finding appropriate test subjects in need of physiotherapy was challenging; even so, we gathered useful feedback on the user experience of our UI.
Usability testing was used to uncover potential usability issues users might face within our application, whereas user acceptance testing helped validate requirements.

User Acceptance Testing

We conducted user acceptance testing to verify that the actual functionality of the system matches the desired functionality with respect to our requirements. Among its many benefits, this helps prevent product failure by catching bugs, errors and imperfections.
All user acceptance tests were in person. The environment was set up for the user: they were placed inside an empty Teams call. Some testers were by themselves in the call, whereas others were placed with a ‘physiotherapist’, whose purpose was to use the testing subject’s Teams video to run MotionInput for them.

A consent form was drafted and handed out to each testing subject. It is shown below:

Consent Form



Testers


All pseudo-testers are from non-technical backgrounds, so we could receive authentic feedback. Each real tester embodied a specified pseudo-persona, and the personas span a variety of age ranges.

  • Tester 1 - Stefan Rodriguez


  • 45-year-old physiotherapist who works for the NHS.
    He has been delivering treatment to patients for nearly two decades and often encounters patients in need of physiotherapy.

  • Tester 2 - Karen Baker


  • 58-year-old retired tennis player.
    She struggles to move her upper body and spends most of her time in a wheelchair as a result. She is keen to improve her shoulder motion, which previously suffered greatly due to tennis injuries; this will enable her to spend quality time with family.

  • Tester 3 - Saif Iqbal


  • 20-year-old student at UCL.
    Suffered a broken elbow and is now in recovery. They are in need of a physiotherapy program that they can easily follow and stay motivated to follow whilst studying.

  • Tester 4 - Jodie Numayer


  • 35-year-old disabled person.
    Unfortunately, due to a condition that Jodie was born with, she has partial paralysis from the waist down. An effective and consistent physiotherapy program may be able to help her.


Test Cases


We created 4 key test cases:

Test Case 1: Physiotherapist running MotionInput on behalf of a patient inside a Teams call.
Test Case 2: Patient joins a Teams call and plays MotionInput games without running MotionInput, since the physiotherapist is running it for them.
Test Case 3: Patient joins a Teams call, launches MotionInput and plays games.
Test Case 4: Patient joins a Teams call, launches MotionInput and plays a game with someone else on the call. In this case, the other person on the call was a member of our team.

Feedback


Throughout testing, users were asked to review certain acceptance requirements. We gathered both positive and negative feedback, summarised below:




Conclusions


Our solution was met with general excitement and enthusiasm from all age groups, demographics and types of users. Repeated feedback has mostly been omitted here. Common themes suggested for improvement have been taken on board and worked on.

Test case 1 mainly surfaced issues with the layout of the application, in particular duplicate information being shown to the user. Since then, we have placed all the game information inside modals, which are only visible once the user clicks to view them; this avoids showing excessive duplicate information. The button layout has also been refactored into a more consistent format.

The tester for test case 2 wanted more games suited to her slower motions, as she may still be at the earlier stages of recovery. Furthermore, cursor visibility issues with the NDI solution limit the number of games available on the app for the patient. Since then, we have implemented a static version of Duck Hunt, as well as many other games with no time pressure that require only a single click.

The tester for test case 3 was a more youthful patient. Having grown up with gaming consoles and gaming websites, they expected a raft of games beyond our current selection. We are regularly working on new games to add to our application.

The tester for test case 4 was happy with the implementation of the LiveShare SDK when playing games, and latency was not an issue. However, they wanted more games available to play through the LiveShare SDK. This is a point of extensibility for the project, since we have had only limited time with this SDK and are laying out a framework for expansion.

Usability Testing

Usability tests are an inherent part of our design process, ensuring the application is intuitive, useful, and easy to use for its intended audience. Our goal was to collect valuable feedback from users to inform improvements to the application and ultimately enhance the user experience.

Again, all usability tests were in person and used a consent form similar to that of our user acceptance tests (users were given fake names for anonymity). A similar method was also deployed: the environment was set up for the user inside an empty Teams call.


Testers


The testers were chosen to represent the two main consumers of our application: a physiotherapist and a patient.
Tester 1 will have MotionInput run for them through our NDI solution, whereas Tester 2 will launch MotionInput themselves.
Both will play games collaboratively and individually.

Test Subjects

Tester 1: Aadam Khan - 40 y/o - Early arthritis patient - has unusually prominent stiffness in his arms, shoulders and wrists.
Tester 2: George Gable - 25 y/o - Ski crash recoverer - tore his rotator cuff and is now seeking to improve his angular motion.
Physiotherapist: Bret Harp - 35 y/o - Physiotherapist


Checklist

We closely observed each user and the actions they took while going through our software. We kept a hidden checklist, of which the user was unaware, and ticked items off as the user completed the corresponding tasks.

Checklist


Post Testing Questions

After using and testing our software, we asked the users a few follow-up questions based on their experience with the application, simply to acquire some general feedback.

Post-Testing Questions


Feedback


Our feedback was based on the checklist and questions we used. Checklist items that are crossed out were not applicable to that user.


Tester Checklists




Conclusions


Generally, we have received positive feedback regarding the usability and UI of our Teams application, as well as our NDI solution integrated into MotionInput. A common issue was that users would not navigate to our demo video page, which gives examples of how to perform different gestures.
We will take that feedback on board and move the page to a more visible part of the application. The feedback will help us make gradual optimisations as the project progresses, and it adds to the list of soft requirements we design in accordance with.

Unit Testing

Front-End Unit Tests


We first began implementing front-end unit tests in order to define the specific behaviour of individual components in our Teams app and to ensure they behaved as intended. The unit tests were written using React Testing Library, the most common convention for React testing; it allows you to test the DOM tree that React actually renders in the browser, which is useful for testing routes.

We also used Jest, a JavaScript testing framework with a feature-rich API, which let us emulate unique scenarios in our application. We successfully constructed 26 passing tests, giving our application a layer of robustness.



Using React Testing Library we constructed our different tests, and with Jest we were able to mock specific functionality. This allowed us to mock individual functions as well as create mock routers and dispatchers where needed.


Component Testing


These tests exercise the functionality of many parts of our front end. To test navigation, most component tests consist of rendering the specific component and using the `createMemoryHistory` function from the ‘history’ library to obtain a mock environment in which the test begins; a button click is then emulated using `fireEvent`, and the history object lets us assert that navigation went to the correct link.
Because this takes place in the mock environment, the route is tested away from webpage- or localhost-connected browsing, which makes it robust. The ‘expect’ function asserts that the result of any action matches the desired outcome. This is just the logic of one of our tests; it is displayed in the code screenshots above (Lines 24-39).

To test our modal pop-ups, the first question was how to test the React portal. As shown in the code screenshot (lines 208-213), we emulate the HTML with the portal element ahead of time, which sets up the correct environment with both ‘portal’ and ‘root’, instead of ‘root’ by itself.
Similar logic to the other tests is then used; however, to assert that a modal popped up, we emulate a click on the modal’s “Close” button. If we can successfully press this “Close” button, the modal must have rendered on the screen.



Data Collection Unit Tests


In order to test the reliability of our motion data collection methods, we tested every action taken by our program, and the same tests exist for all of our data collection systems.
To carry out these tests, we used the unittest framework in Python, the standard library's testing framework, which offers many built-in facilities.
Furthermore, it supports test organisation and the reuse of test suites. We coupled this with the os module so we could construct and check specific file paths without hardcoding path names manually, and pandas was used to check the formatting of how our data is stored.
Covering every functionality of each system in this way not only provides great coverage of our features but also makes for a trustworthy system.

Here are the tests we implemented for gathering a patient’s elbow angles.
First, we test whether the file containing the patient’s data is created and stored at the correct file path.
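Purely as an illustration of this pattern (the real test appears in the screenshot; the `save_elbow_angles` helper, the file name and the use of the standard csv module here are assumptions made to keep the sketch self-contained), such a test might look like:

```python
import csv
import os
import tempfile
import unittest


def save_elbow_angles(angles, out_dir):
    """Hypothetical stand-in for our data collection code: writes the
    recorded elbow angles to a CSV file and returns the file's path."""
    path = os.path.join(out_dir, "elbow_angles.csv")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "elbow_angle"])
        for frame, angle in enumerate(angles):
            writer.writerow([frame, angle])
    return path


class TestElbowAngleFile(unittest.TestCase):
    def test_file_created_at_expected_path(self):
        with tempfile.TemporaryDirectory() as out_dir:
            path = save_elbow_angles([90.0, 120.5, 150.2], out_dir)
            # os.path.join builds the expected path without hardcoding it
            self.assertEqual(path, os.path.join(out_dir, "elbow_angles.csv"))
            self.assertTrue(os.path.exists(path))
```

Using `tempfile.TemporaryDirectory` keeps the test independent of any real file path on the testing machine.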



Following on from the previous test, we must assert that the expected column names actually exist inside the corresponding CSV file.
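A minimal sketch of this column check, again with an assumed file layout (our real tests inspect the CSV with pandas; the standard csv module is used here so the sketch stands alone):

```python
import csv
import os
import tempfile
import unittest

# Assumed column names for the patient data file (illustrative only)
EXPECTED_COLUMNS = ["frame", "elbow_angle"]


class TestElbowAngleColumns(unittest.TestCase):
    def test_csv_has_expected_columns(self):
        with tempfile.TemporaryDirectory() as out_dir:
            path = os.path.join(out_dir, "elbow_angles.csv")
            # Stand-in fixture for the file our collection code produces
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(EXPECTED_COLUMNS)
                writer.writerow([0, 90.0])
            # Read back the header row and assert each column name exists
            with open(path, newline="") as f:
                header = next(csv.reader(f))
            self.assertEqual(header, EXPECTED_COLUMNS)
```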




This test checks that the graphs are plotted and saved at the correct file path.
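A sketch of the idea, assuming a hypothetical `plot_elbow_angles` helper that saves a matplotlib figure (the helper name, file name and axis labels are illustrative, not our actual code):

```python
import os
import tempfile
import unittest

import matplotlib
matplotlib.use("Agg")  # headless backend: no display needed in tests
import matplotlib.pyplot as plt


def plot_elbow_angles(angles, out_dir):
    """Hypothetical stand-in for our plotting code: saves a graph of the
    recorded elbow angles and returns the image's path."""
    path = os.path.join(out_dir, "elbow_angles.png")
    plt.figure()
    plt.plot(range(len(angles)), angles)
    plt.xlabel("frame")
    plt.ylabel("elbow angle (degrees)")
    plt.savefig(path)
    plt.close()
    return path


class TestElbowAnglePlot(unittest.TestCase):
    def test_graph_saved_at_expected_path(self):
        with tempfile.TemporaryDirectory() as out_dir:
            path = plot_elbow_angles([90.0, 120.5, 150.2], out_dir)
            self.assertEqual(path, os.path.join(out_dir, "elbow_angles.png"))
            self.assertTrue(os.path.exists(path))
```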




To test whether our calculate_angle function works correctly, we set arbitrary coordinates for a, b and c and compare the result with expected_angle, the value we expect calculate_angle to return for those parameters. We then assert that the return value of calculate_angle and expected_angle are equal.
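A sketch of this test, assuming a common atan2-based formulation of `calculate_angle` (the actual implementation in our codebase may differ):

```python
import math
import unittest


def calculate_angle(a, b, c):
    """Angle at point b (in degrees) formed by points a-b-c.
    A common atan2 formulation, assumed here for illustration."""
    radians = (math.atan2(c[1] - b[1], c[0] - b[0])
               - math.atan2(a[1] - b[1], a[0] - b[0]))
    angle = abs(math.degrees(radians))
    # Always report the interior angle (0 to 180 degrees)
    return 360 - angle if angle > 180 else angle


class TestCalculateAngle(unittest.TestCase):
    def test_right_angle(self):
        # a directly above b, c directly to the right of b -> 90 degrees
        a, b, c = (0, 1), (0, 0), (1, 0)
        expected_angle = 90.0
        self.assertAlmostEqual(calculate_angle(a, b, c), expected_angle)

    def test_straight_line(self):
        # collinear points -> 180 degrees
        a, b, c = (-1, 0), (0, 0), (1, 0)
        expected_angle = 180.0
        self.assertAlmostEqual(calculate_angle(a, b, c), expected_angle)
```

`assertAlmostEqual` is used rather than `assertEqual` to avoid spurious failures from floating-point rounding.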