Requirement

Project background and Client introduction




Project goal





Requirement gathering

We initially knew that this project was to design a voice guidance mobile APP for visually impaired people, but we wanted to learn more about its requirements, so we arranged a meeting with our clients on Microsoft Teams. During the meeting we identified further requirements and functions for our APP, and in subsequent meetings we gathered more ideas from the clients.
Because of COVID-19, we did not have the chance to conduct any surveys for our project, so the main requirements come from the meetings with the clients. The following are examples of the questions we asked:

1. Does this voice guidance system support multiple cameras?
2. How do we connect the cameras to our mobile APP?
3. What functions should this APP have to help visually impaired users avoid obstacles?
4. Do we need to store the data collected from the cameras in a database?
5. What other things do we need to do for this APP? (e.g. a web portal for the APP)




Personas



Peter is 45 years old and became blind due to illness many years ago. He has tried many assistive aids, such as canes and guide dogs, but felt that none of them could give him precise information about his environment. After using this new system, with its voice recognition and prompts, he knows many details about the objects around him and can walk on the road more easily and confidently. He found the app very easy to use (it has only one button and is controlled by voice). Unfortunately, he always needs to carry the camera with him to capture images of his surroundings, which is not so convenient. Also, the whole system is a little expensive for him.



Use cases

Use case Diagram




Use Case Description

Visually impaired people (for the full user manual, see the APPENDIX):

1. Download the APP
2. Connect the NUC computer to the same network as the phone (mobile hotspot)
3. The users can use voice commands to:

  • Start the remote system
  • Change the Speed of the system
  • Change the Volume of the system
  • Change the Pitch of the system
  • Turn the Vibration on/off of the system
  • Use Location to access static cameras at other locations
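
The voice commands above could be dispatched with a simple keyword-matching sketch like the one below. The class, command phrasings, and value scales are illustrative assumptions, not the actual Sight++ implementation.

```python
class SystemController:
    """Holds the settings that the voice commands adjust (assumed defaults)."""

    def __init__(self):
        self.running = False
        self.speed = 1.0       # speech rate multiplier (assumed scale)
        self.volume = 5        # 0-10 scale (assumed)
        self.pitch = 1.0
        self.vibration = False

def dispatch(controller, transcript):
    """Map a recognized voice transcript to an action; return True if handled."""
    words = transcript.lower().split()
    if "start" in words:
        controller.running = True
    elif "speed" in words:
        controller.speed = float(words[-1])
    elif "volume" in words:
        controller.volume = int(words[-1])
    elif "pitch" in words:
        controller.pitch = float(words[-1])
    elif "vibration" in words:
        controller.vibration = "on" in words
    else:
        return False
    return True
```

For example, `dispatch(controller, "set volume to 7")` would set the volume to 7, while an unrecognized phrase returns `False` so the APP can prompt the user again.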


Developer:

Developers retrieve the data about objects recognized by the cameras from the SQLite database for further development
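
A developer might query the stored recognition data as sketched below. The table name and column layout are assumptions based on the fields listed in the requirements (location, date, time, recognized object), not the actual Sight++ schema.

```python
import sqlite3

# In-memory database for illustration; in practice, open the APP's .db file.
conn = sqlite3.connect(":memory:")

# Assumed schema based on the fields mentioned in the requirements.
conn.execute("""
    CREATE TABLE IF NOT EXISTS recognitions (
        id INTEGER PRIMARY KEY,
        location TEXT,
        date TEXT,
        time TEXT,
        object TEXT
    )
""")
conn.execute(
    "INSERT INTO recognitions (location, date, time, object) VALUES (?, ?, ?, ?)",
    ("entrance", "2021-03-01", "10:15:00", "chair"),
)

# Example query: all objects recognized at a given location.
rows = conn.execute(
    "SELECT date, time, object FROM recognitions WHERE location = ?",
    ("entrance",),
).fetchall()
```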



MoSCoW Requirements List

* Must have

  • We must develop an app template for end users with a phone.
    • A mobile app that controls the functionality of the system and is available for both Android and iOS (cross-platform).
    • Mobility in object recognition - the cameras are with the user.
    • Static in object recognition - the cameras are in a fixed position (in a building).
    • The APP is easy to operate for visually-impaired people.
    • The APP allows the user to change settings.
    • Text to speech functionality of the APP.
  • An SQLite database on the phone that stores Sight++ data retrieved from the mobile Intel RealSense camera (such as location, date, time, and the objects recognized).

* Should have

  • A web portal for app developers who are building apps for visually impaired people. It includes components, examples, and a video showing how developers would use it.
  • The user can report if the system has made a mistake.

* Could have

  • A service for Sight++ zones.
  • Users can control the mobile APP using voice commands.
  • Haptic feedback for commands in the mobile APP.

* Would like to have

  • A Help button that enables a video call between a remote assistant and a person on the move.