Evaluation
Achievement
ID | Description | Priority | State | Contributors |
---|---|---|---|---|
1 | The X5QGEN API can generate different types of questions from a text passage | MUST | ✔ | Mathushan & Utku |
2 | The web app can interactively take in a text passage and display questions | MUST | ✔ | Orhun & David |
3 | The web app can store users' progress in a database | MUST | ✔ | Mathushan & Utku |
4 | The system can check whether the user answered correctly | MUST | ✔ | Orhun & David |
5 | Different question generation models can be plugged into the X5QGEN API | MUST | ✔ | Mathushan & Utku |
6 | Integration of unit testing into the system | MUST | ✔ | Mathushan |
7 | The web app allows the user to see the text passage while solving the questions | SHOULD | ✔ | Mathushan & Utku |
8 | The web app can create a code for users to share the questions | SHOULD | ✔ | Mathushan |
9 | The system allows the user to skip a question or attempt it again | SHOULD | ✔ | Orhun & David |
10 | The system can generate questions with the number of options the user requested | COULD | ✔ | Mathushan & Utku |
11 | The system has authentication and a profile page for users to track their performance | COULD | ✔ | All |
12 | The X5QGEN API can generate wh- type questions | COULD | ✔ | Mathushan & Utku |
13 | The difficulty of the questions is decided by the system | COULD | | |
14 | Wikipedia-based word clouds for visualising the user's performance | COULD | | |
15 | Questions that require long written answers | WILL NOT HAVE | | |
16 | Questions with answers beyond the input text | WILL NOT HAVE | | |
17 | Questions with images or non-textual data | WILL NOT HAVE | | |
Key Functionalities: | 100% | | | |
Optional Functionalities: | 60% | | | |
Individual Contribution
Work Packages | Utku | Orhun | Mathushan | David |
---|---|---|---|---|
Partners liaison | 30% | 20% | 30% | 20% |
Requirement analysis | 20% | 30% | 20% | 30% |
Research and Experiments | 25% | 25% | 25% | 25% |
Coding | 20% | 30% | 30% | 20% |
Testing | 22% | 22% | 34% | 22% |
Report Website | 27% | 20% | 27% | 26% |
Overall Contribution | 25% | 24% | 27% | 24% |
Critical Evaluation
User Experience
We spent a great deal of time on the user interface and tried to think of it from the user's perspective.
• The web app has a simple layout where it is clear what each button does, and each button is named accordingly.
• When the entered text passage is too short, the system displays a clear message saying the passage is not long enough to create questions from.
• We chose contrasting colours so that users can clearly distinguish the interactive elements of the web app.
• The user can get an insight into their performance through the profile page.
• The system can generate different questions from the same material which provides even more testing resources for both teachers and students.
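The passage-length check mentioned in the list above can be sketched as follows; the word threshold and the message text here are illustrative assumptions, not the values used by the deployed web app:

```python
from typing import Optional

# Hypothetical threshold -- the real minimum length lives in the web app's code.
MIN_WORDS = 50

def validate_passage(passage: str) -> Optional[str]:
    """Return an error message if the passage is too short, else None."""
    if len(passage.split()) < MIN_WORDS:
        return "The text passage is not long enough to create questions."
    return None
```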
Functionality
Our main focus was on making the question generation software as high quality as possible, so that teachers and students can use it effectively to revise or to create assignments. This meant most of our time was spent on the question generation logic and on making it accessible to anyone with an internet connection. The system was also designed so that the question generation logic can easily be swapped out while everything else keeps working.
All other key functional requirements were completed, and the optional requirements we did not have time for would not have improved the functionality of the system significantly. New functional requirements were added even after the development phase had started, and we did our best to complete them within the given time frame.
Throughout the development process we prioritised functionality, and the final product surpassed our expectations in terms of what it could achieve.
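The plug-and-play design described above can be illustrated with a minimal sketch: any object exposing a `generate(passage)` method can serve as the question generation model, so models can be swapped without touching the rest of the system. All names here are illustrative, not the project's actual classes.

```python
from typing import List, Protocol

class QuestionModel(Protocol):
    """Any model that turns a passage into a list of questions fits here."""
    def generate(self, passage: str) -> List[dict]: ...

class DummyModel:
    """Stand-in model; a real one would run an NLP question-generation pipeline."""
    def generate(self, passage: str) -> List[dict]:
        first_word = passage.split()[0]
        return [{"question": f"What does the passage say about '{first_word}'?"}]

def generate_questions(model: QuestionModel, passage: str) -> List[dict]:
    # The rest of the system depends only on the interface,
    # so the model behind it can be replaced freely.
    return model.generate(passage)
```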
Stability-Efficiency
The only performance issue we observed is with extremely long text passages, where generating the questions can take a while. This time could be cut down significantly by using virtual machines with more GPU power, but unfortunately we cannot afford that. On the other hand, the waiting time should not be a problem for our target users, since in most cases they will not be waiting longer than a minute.
Apart from this waiting time, we have not experienced any stability or performance issues during testing, so we can conclude that the deployed system is stable. You can visit the testing page to look through our testing methodologies.
Compatibility
Our web app is connected to our API, which is deployed on an Azure virtual machine, so anyone can connect to the API endpoints and use the question generation logic. Anyone who wants to use the API can read the documentation on how to send requests to the endpoints and start using it straight away. There are therefore no compatibility issues with the system, and it is quite simple to connect to.
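A client request might look like the following standard-library sketch. The endpoint URL and payload field names are placeholders, not the documented API; the real values are in the API documentation.

```python
import json
import urllib.request

# Placeholder endpoint -- the real URL and payload fields are in the API docs.
API_URL = "http://example.invalid/generate"

def build_request(passage: str, num_options: int = 4) -> urllib.request.Request:
    """Build a JSON POST request asking the API to generate questions."""
    payload = json.dumps({"passage": passage, "num_options": num_options}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Actually sending it requires the deployed server:
# with urllib.request.urlopen(build_request("Some passage ...")) as resp:
#     questions = json.loads(resp.read())
```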
Maintainability
The web app and the question generation API are separated, which makes it easier to do maintenance on the system. The MVC structure of the web app also allows us to locate bugs easily. Maintaining this system should therefore not be a hassle; the only concern is the monthly virtual machine fees, which can get expensive.
Project Management
The project has been well managed. From the start we split the group into two: one half worked on back-end functionality such as the question generation logic, the API endpoints and the routing of the web app, while the other half worked on the front-end, covering the visual design of the web app and the user interface. We used a Gantt chart to schedule deadlines for each part of the project, and we stayed in contact with each other and with the clients through regular weekly meetings and the Slack channel. We did not fall behind on any of the deadlines, which shows that the project was well managed.
Bug Lists
ID | Bug Description | Priority |
---|---|---|
1 | Sometimes database sessions are not killed, so users are not automatically logged out | Medium |
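One possible mitigation for this bug, sketched under assumed field names rather than the actual schema, is to expire sessions by timestamp: even if a session is never explicitly killed, it stops being accepted once it has been idle too long, so the user is still logged out eventually.

```python
import time
from typing import Optional

# Illustrative sketch only: the field name and TTL are assumptions, not the schema.
SESSION_TTL = 30 * 60  # seconds of inactivity before a session is considered dead

def is_session_alive(session: dict, now: Optional[float] = None) -> bool:
    """Timestamp-based expiry check, independent of explicit session killing."""
    now = time.time() if now is None else now
    return now - session["last_seen"] < SESSION_TTL
```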
Future Work
1. The plug-and-play structure of our API allows anyone to change the question generation model to improve the questions, so in the future the model can be swapped for an improved version.
2. The question generation logic is not tied to a specific domain, so it can be retrained to generate questions for a particular domain depending on the user's needs.
3. With more time, we could have added more question types on top of multiple choice and true or false, such as drag-and-drop or fill-in-the-blank.
4. A feature could be added that lets users edit the generated questions; the edited questions would build up a database of improved questions that the system could use to improve question generation.