Testing Strategy
Our project was largely research-based, so major parts of our code could not be covered by unit tests. This includes all the code for creating, training, and using the machine learning (ML) models for scent identification, as well as the code that handles responses from the hardware (i.e. the olfactometer and the sensor).

Machine Learning Testing
For the ML part of our project, we first wrote the model training and identification code in a Jupyter notebook. To verify its functionality, we ran each individual function in the notebook against various input datasets and observed whether it produced the expected result. For each model we built, we compared the model's output with the input data, which recorded the ID of the scent used in each emission. If there was a large discrepancy in the classification results, we checked whether the error lay in our code or whether poor input data had produced an inaccurate model.
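
The sketch below illustrates this kind of label comparison, assuming a scikit-learn-style classifier and a pandas DataFrame with a hypothetical scent_id label column; the exact names in our notebook differ.

```python
# Minimal sketch of the label-comparison check. Assumes a scikit-learn-style
# classifier and a DataFrame with a hypothetical "scent_id" label column.
import pandas as pd
from sklearn.metrics import accuracy_score, confusion_matrix

def check_model_against_labels(model, data: pd.DataFrame) -> float:
    """Compare model predictions with the scent ID recorded for each emission."""
    features = data.drop(columns=["scent_id"])
    labels = data["scent_id"]
    predictions = model.predict(features)

    accuracy = accuracy_score(labels, predictions)
    print(f"Accuracy: {accuracy:.2%}")
    # A large discrepancy here prompts a review of both the code and the
    # quality of the input dataset.
    print(confusion_matrix(labels, predictions))
    return accuracy
```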

To resolve issues caused by poor input data, we tested various heater profiles, varying the heater temperature and heating duration, to find the profile that gave us the best dataset for training a model. Since our final model was not 100% accurate, small discrepancies between the model's classification and the actual scent used in any identification run are to be expected.
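
A sweep of this kind could be scripted along the lines below; this is a hypothetical sketch in which collect_dataset() and train_and_score() stand in for our actual collection and training code, and the profile values are illustrative only.

```python
# Hypothetical sketch of the heater-profile sweep. collect_dataset() and
# train_and_score() are placeholders for our collection and training code.
def collect_dataset(profile: dict):
    """Run a data collection cycle with the given heater profile (placeholder)."""
    raise NotImplementedError

def train_and_score(dataset) -> float:
    """Train a model on the dataset and return its accuracy (placeholder)."""
    raise NotImplementedError

# Candidate profiles; the temperatures and durations here are illustrative.
profiles = [
    {"temperature_c": 200, "duration_s": 30},
    {"temperature_c": 250, "duration_s": 30},
    {"temperature_c": 300, "duration_s": 60},
]

scores = {}
for profile in profiles:
    dataset = collect_dataset(profile)
    scores[(profile["temperature_c"], profile["duration_s"])] = train_and_score(dataset)

best_temp, best_duration = max(scores, key=scores.get)
print(f"Best profile: {best_temp} °C for {best_duration} s")
```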

Hardware Testing
To test the hardware, we manually exercised the software that controls it, observing whether each function produced the correct response from the olfactometer and sensor. For the olfactometer, we observed whether each scent emission occurred at the correct time, for the correct duration, and on the correct channel. For the sensor, we printed the readings to the terminal to check that they were taken at the correct intervals and fell within the expected range. When a response differed from what we expected, we traced through the code to find and fix the error.
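
The range check on the sensor readings looked roughly like the sketch below, where read_resistance() is a placeholder for our sensor-driver call and the range limits and sampling interval are illustrative values.

```python
# Hypothetical sketch of the manual sensor check: print each reading and flag
# values outside the expected range. read_resistance() is a placeholder.
import time

EXPECTED_MIN_OHMS = 1_000        # illustrative lower bound
EXPECTED_MAX_OHMS = 1_000_000    # illustrative upper bound
INTERVAL_S = 1.0                 # expected sampling interval (illustrative)

def read_resistance() -> float:
    """Placeholder for the sensor-driver call returning resistance in ohms."""
    raise NotImplementedError

while True:
    start = time.monotonic()
    reading = read_resistance()
    status = "OK" if EXPECTED_MIN_OHMS <= reading <= EXPECTED_MAX_OHMS else "OUT OF RANGE"
    print(f"{reading:.0f} ohm  {status}")
    # Sleep for the remainder of the interval so readings stay evenly spaced.
    time.sleep(max(0.0, INTERVAL_S - (time.monotonic() - start)))
```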

We also tested the data collection system by manually checking that all the data printed to the terminal during a collection cycle was stored in each CSV file accurately, in order, and in the correct format, and that each CSV file carried the correct date and time stamps for its collection cycle.
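
These spot checks can be expressed as a small validation routine; the sketch below assumes each row begins with an ISO-format timestamp, which is an assumption about the column layout rather than our exact file format.

```python
# Sketch of the CSV spot checks: verify column counts and timestamp ordering.
# Assumes the first column of each row is an ISO-format timestamp.
import csv
from datetime import datetime

def check_csv(path: str, expected_columns: int) -> None:
    """Verify the column count of every row and that timestamps are in order."""
    previous = None
    with open(path, newline="") as f:
        rows = csv.reader(f)
        next(rows)  # skip the header row
        for row_number, row in enumerate(rows, start=2):
            assert len(row) == expected_columns, f"row {row_number}: bad column count"
            timestamp = datetime.fromisoformat(row[0])
            assert previous is None or timestamp >= previous, \
                f"row {row_number}: timestamp out of order"
            previous = timestamp
```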

A major issue we encountered with the sensor was that repeated use over long periods without rest, or use at high heater temperatures, damaged the sensor and burnt it out, after which it gave abnormally high resistance readings. We manually monitored the readings to see whether the resistance started trending upwards over time in the absence of scent emissions. We then tried various heater profiles to find one that would not damage the sensor while still providing data suitable for ML. In the process, we burnt out three sensors before settling on the heater profile we eventually used.
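
The upward-trend check we performed by eye could be automated along these lines; this is a minimal sketch in which the window size and the 1.2 ratio threshold are illustrative values, not ones we calibrated.

```python
# Hypothetical sketch of the burnout check: compare the mean baseline
# (no-emission) resistance in a recent window against an earlier window.
from statistics import mean

def is_trending_up(baseline_readings: list[float], window: int = 50,
                   threshold: float = 1.2) -> bool:
    """Return True if the recent baseline is noticeably higher than before.

    baseline_readings: resistance values taken with no scent emission.
    threshold: ratio above which we treat the rise as possible sensor
    damage (1.2 is illustrative, not a calibrated value).
    """
    if len(baseline_readings) < 2 * window:
        return False  # not enough data to compare two windows yet
    earlier = mean(baseline_readings[-2 * window:-window])
    recent = mean(baseline_readings[-window:])
    return recent > threshold * earlier
```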

Unit and Integration Testing
Unit testing
We tested the database handler system with unit tests written in Python, using the unittest module as the testing framework. We tested each of the following database operations under different conditions to ensure that they gave the expected output:

  • Reading from the database file
  • Writing data to the database file
  • Searching for records by specific fields
  • Adding a new record
  • Deleting a record

This included testing edge cases such as searching for or editing non-existent records, and handling invalid inputs.
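
The sketch below shows the shape of these tests; DatabaseHandler, its module, and its method names are hypothetical stand-ins for our actual handler's API.

```python
# Sketch of our unittest structure. DatabaseHandler, its module, and its
# method names are hypothetical stand-ins for the real handler's API.
import unittest

from database_handler import DatabaseHandler  # hypothetical module name

class TestDatabaseHandler(unittest.TestCase):
    def setUp(self):
        # Each test gets a fresh handler backed by a throwaway file.
        self.db = DatabaseHandler("test_records.csv")

    def test_add_and_search_record(self):
        self.db.add_record({"id": 1, "scent": "lavender"})
        results = self.db.search(field="scent", value="lavender")
        self.assertEqual(len(results), 1)

    def test_search_nonexistent_record(self):
        # Edge case: searching for a record that was never added.
        self.assertEqual(self.db.search(field="id", value=999), [])

    def test_delete_missing_record_raises(self):
        # Edge case: deleting a non-existent record should fail cleanly.
        with self.assertRaises(KeyError):
            self.db.delete_record(record_id=999)

if __name__ == "__main__":
    unittest.main()
```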

As seen below, we had a total of 47 unit tests for the database system, and all tests passed.

[Figure: Unit testing results, all 47 tests passed]

Integration testing
We tested the integration of our user interface (UI) with the hardware by running through various use cases from the UI. For each feature, we checked that it produced the expected response from the hardware, including that the olfactometer and sensor ran as expected when we started and stopped data collection from the UI:
  • Olfactometer started and stopped each emission according to the schedule set from the UI
  • Sensor readings were taken according to the heater profile set from the UI
  • The olfactometer and the sensor started and stopped their emissions and measurement cycles respectively when we pressed the start / stop button on the UI

User Acceptance Testing
We carried out user acceptance testing for the interface and ML parts of the project. Our users are the engineers in the company, so we met with our supervisor Richard, a hardware engineer, and had him test the software as a user. The table below shows the test cases we ran, the result of each, and the feedback Richard gave:

Test case | Result | Feedback
Creating new olfactometer / sensor schedule | Pass | Intuitive design and works as expected.
Creating new scent | Pass | Scent is added, but the page needs to be reloaded before the new scent can be used. Would be better if it were immediately added to the dropdown fields.
Invalid input on olfactometer / sensor configuration page when creating / editing a file | Pass | Helpful that the interface does not allow the user to submit incorrect inputs.
Setting configuration for olfactometer / sensor | Pass | Configuration is set correctly and confirmation is given.
Viewing selected configuration file | Pass | Displays the correct information.
Renaming existing configuration file | Pass | File is renamed successfully.
Editing existing configuration file | Pass | File can be edited easily through the UI.
Starting data collection cycle | Pass | Nice that the configuration is displayed in the confirmation popup. Data collection works as expected.
Viewing live data | Pass | Live data from the sensor set-up is plotted on the graph, and the correct values are plotted for the selected data type.
Stopping data collection cycle | Pass | Data collection stops as expected.
Adding training to existing model | Pass | Good that there is an "easy" mode and an "advanced" mode. Advanced mode is intuitive for entering training configurations, and the user is prevented from entering wrong inputs.
Viewing model information | Pass | Correct information displayed.
Viewing training results | Pass | Easy to zoom in and out of the graphs to view training results.
Running identification | Pass | File selection works correctly and lists files within the selected date range. Identification with the trained model was mostly accurate; the few misidentified scent emissions are to be expected.
Viewing identification history | Pass | Newest identification was correctly added to the history page. Search bar works as well.
Viewing scent database | Pass | New scents are added correctly and can be viewed. Search bar works correctly.