Evaluation


Requirement List
| ID | Requirement Description | Priority | Achieved | Contributors |
|----|-------------------------|----------|----------|--------------|
| M1 | Contain a questionnaire form to gather data | Must | Yes | Sami + Zekun |
| M2 | Allow users to submit their answers to the questionnaire | Must | Yes | Sami + Zekun |
| M3 | Provide a way for users to provide data without letting them input text (e.g. using a drop-down selection) | Must | Yes | Sami + Zekun |
| M4 | Provide a question to find out what level of degree the users are interested in (e.g. undergraduate, master's) | Must | Yes | Sami + Zekun |
| M5 | Ensure that users submit an answer to the level-of-degree question | Must | Yes | Sami + Zekun |
| M6 | Display a list of course rankings, where the courses chosen are based on the data the user has entered | Must | Yes | Sami + Zekun + Mate |
| M7 | Provide a way for users to go back, edit the questionnaire and find new courses | Must | Yes | Zekun |
| M8 | Provide a question to find out what the user's predicted grades are | Must | Yes | Sami + Zekun |
| M9 | Inform the user what the entry requirements (predicted grades) are for each of the courses | Must | Yes | Sami + Zekun |
| S1 | For each course displayed to the user, provide a link that redirects to the official UCL course page | Should | Yes | Sami + Zekun |
| S2 | Provide help/guidance about how to answer questions (e.g. information about converting foreign grades to A-level grades) | Should | Yes | Sami |
| S3 | Allow users to leave some answers blank (however, degree level and predicted grades must be submitted) | Should | Yes | Sami + Zekun |
| S4 | Inform the user if the course is accredited | Should | Yes | Sami + Zekun |
| S5 | List the modules of each course | Should | Yes | Sami + Zekun |
| C1 | Display the fees for each course | Could | Yes | Sami + Zekun |
| C2 | Display the course rankings as a table | Could | Yes | Zekun |
| C3 | Inform the user of the course ranking in relation to other universities | Could | No | - |
| C4 | Have a compare option, which helps compare additional details between different courses | Could | Yes | Sami + Zekun |

Key functionalities (Must and Should) completed: 100%
Optional functionalities (Could) completed: 75%

We also have a technical video that demonstrates the features listed above and goes into depth about the unique parts of our project. You can watch it here.


Individual Contribution Table
| Work package | Sami (Team Leader) | Zekun | Mate |
|--------------|--------------------|-------|------|
| Project Partner Liaison | 100% | 0% | 0% |
| Requirement Analysis | 50% | 25% | 25% |
| Research and Experiments | 25% | 25% | 50% |
| UI Design | 5% | 95% | 0% |
| Coding | 40% | 50% | 10% |
| Testing | 50% | 50% | 0% |
| Bi-Weekly Report | 100% | 0% | 0% |
| Report Website | 60% | 0% | 40% |
| Poster Design | 5% | 0% | 95% |
| Video Editing | 0% | 100% | 0% |
| Overall Contribution | 37% | 38% | 25% |
| Main roles | Systems architect, tester, project partner liaison | DevOps engineer, UI designer, programmer, tester | Researcher, programmer |

The List of Known Bugs
| ID | Bug Description | Priority | Fixed |
|----|-----------------|----------|-------|
| 1 | There were two options for "medical" in the drop-down menu for the interests question | Low | Yes |
| 2 | Program would crash after the following sequence of actions: (1) user entered any options and pressed submit; (2) back button pressed to re-access the questions; (3) user modified their answers; (4) questionnaire resubmitted | High | Yes |
| 3 | Guidance on how to answer the predicted grades question was geared towards students who did A levels | Medium | Yes |
| 4 | "Learn more" button did not work | Medium | Yes |
| 5 | Azure MySQL resource crashed and no longer worked | High | Yes |
| 6 | Website did not work on Safari due to security issues | Medium | Yes |

Critical Evaluation of the Project

User Interface

When designing our user interface, we had to keep several things in mind. Firstly, the project is intended as a proof of concept that lays the groundwork for future projects, so we chose to spend more of our time on functionality than on appearance. However, we still put effort into the UI, as we wanted to emphasize how simple and effective the tool would be to use. We therefore kept the layout simple and neat, while trying to ensure HCI standards were met.

Firstly, we developed the tool keeping in mind the heuristic of "recognition rather than recall". This means we used common UI elements such as drop-down menus, submit buttons and checkboxes, so that users would naturally know how to use the tool. We also included instructions and guidance where we thought it was necessary, such as explaining how to answer certain questions and providing default values for users who were unsure of what to submit.

However, there are some features that could be improved. We implemented course cards to provide more information about certain courses, but when we presented our prototype to pseudo users, some said it was not obvious how to access the course cards (a checkbox had to be ticked), and voiced concerns that the cards appeared at the bottom of the page. We could also have made the UI more exciting and colourful, as it is currently quite plain, but this was a trade-off we chose to make in order to focus more energy on the functionality and back end.

To measure the effectiveness of the user interface, we wanted to demo our final prototype to current IHE students, who would have been ideal pseudo users as they were once prospective students themselves. Unfortunately we did not get the chance to interview them, as our group was greatly impacted by the coronavirus and many of us left the country. As a substitute, however, we had the honour of presenting our tool at an IHE education delivery group meeting attended by important IHE stakeholders. This was beneficial, as they helped critique the tool, although they were more concerned with the functionality than with the UI design.

Functionality and Project Management

We are quite happy with the functionality of our project. We measured our success in this category by comparing the functionality of our tool against the MoSCoW requirements we created at the project's inception. We are proud to say that we completed 100% (14/14) of the required functionality (Must and Should), and 75% (3/4) of the optional functionality (Could). We also judged our project management on this basis, as it was ultimately our planning and hard work that allowed us to finish the project in the time frame given to us.

The reason we are proud of finishing our project is that our progress depended on obtaining data from UCL's database of courses, and subsequently sorting and displaying this data to the user. Unfortunately, we never got access to the database for bureaucratic reasons, which put us in a difficult position for quite a long time. We decided to take matters into our own hands and gathered the data ourselves, using our own web scraper together with a student resource called UCL API to access all the course data we needed. We are therefore proud of finding a way around a tough situation while still meeting our requirements and producing all the required functionality.

Two things were essential to our project management. Firstly, we tried to maximise productivity by matching tasks to each group member's strengths. For example, Zekun has very good design skills, which made him a very effective UI designer, and Sami has strong communication skills, making him a good project partner liaison. The second was planning and time management. This was especially important in this project, as there were a large number of deliverables to account for, so we required strong communication and planning to complete everything that was required of us.

Unfortunately, our schedule was severely affected by the coronavirus: some of our team members were displaced by the virus, which took our focus away from the project for an extended period of time.

Stability and Maintainability

As a result of the way we deployed our project, there are both advantages and disadvantages to its maintainability. Firstly, the fact that the project is hosted on AKS (Azure Kubernetes Service) makes it easier to maintain, because everything is packaged neatly in one cluster plus an additional VM. This is beneficial, as the project partners will not need to worry about hosting or maintaining a server; Azure handles all of this. Moreover, in our Kubernetes YAML file we created a backup back end that can be used in the event that our main back end crashes.
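
As a rough illustration of that pattern, a backup back end can be declared as a second Deployment in the same YAML file. This is only a sketch: the names and container image below are hypothetical, not our exact configuration.

```yaml
# Hypothetical sketch of a backup back-end Deployment; the names and
# container image are illustrative, not the project's actual configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-backup
spec:
  replicas: 1                # kept running alongside the main back end
  selector:
    matchLabels:
      app: backend
      tier: backup
  template:
    metadata:
      labels:
        app: backend
        tier: backup
    spec:
      containers:
        - name: backend
          image: example.azurecr.io/decision-tool-backend:latest  # hypothetical image
          ports:
            - containerPort: 8000
```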

The main trade-off is the cost of hosting the project. Because of our setup, it is easy to scale the project up and allocate the cluster more resources, but some may consider a whole Kubernetes cluster overkill for a decision tool: it costs more to host than a simpler deployment method, such as a single Docker container, which could mean higher financial costs for the project partners. In our case, we chose one of the cheaper node sizes (D1_v2), which costs about £39 per month, while larger nodes with more RAM can cost £100+ per month. Moreover, the cluster complicates changing the front-end or back-end code, as the code needs to be re-containerised and redeployed into the cluster.

Efficiency

As we mentioned in the Stability and Maintainability section above, some people may consider our project deployment inefficient, on the grounds that a Kubernetes cluster over-engineers what is actually required to host a decision tool, and therefore uses more resources (electricity, money and CPU power) than necessary.

We did try to keep efficiency in mind when developing our code base, and it was one of the most influential factors in choosing our frameworks. For example, we chose FastAPI because of its efficiency and speed in handling requests, which stem from its concurrent request handling and its Starlette and Pydantic foundations. As mentioned previously, we considered alternatives such as Django and Flask, but ultimately chose not to use them, as FastAPI significantly outperforms both in the TechEmpower benchmarks (https://bit.ly/2yC8qbV).
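
To illustrate the style this enables, here is a minimal FastAPI sketch; the route, model and field names are hypothetical, not our exact API:

```python
# Minimal FastAPI sketch; the route, model and field names are hypothetical.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Questionnaire(BaseModel):
    degree_level: str          # e.g. "undergraduate" or "masters" (required)
    predicted_grades: str      # e.g. "AAB" (required)
    interests: List[str] = []  # optional answers may be left blank

@app.post("/rank-courses")
async def rank_courses(answers: Questionnaire):
    # Pydantic validates the request body before this handler runs, and the
    # async handler lets FastAPI serve other requests while any I/O is awaited.
    return {"degree_level": answers.degree_level, "courses": []}
```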

We also considered efficiency when building our code base. We opted to use libraries such as Beautiful Soup in the web scraper, Axios in our front end, and regular expressions, as we knew that library functions would be much more efficient than writing the code ourselves. This also improved our time management, as the libraries saved us precious time. One area of inefficiency is our course ranking algorithm, which does not have great time complexity; however, only a very small number of courses need to be considered by the tool, so this is not a significant issue (even if the tool were extended to compare every single department, there would still be a relatively low number of courses to rank).
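
As a hedged sketch of the kind of scraping Beautiful Soup makes easy, consider the following; the URL and tags are hypothetical, not our actual scraper:

```python
# Hypothetical scraping sketch; the URL and tag choices are illustrative,
# not taken from the project's actual web scraper.
import requests
from bs4 import BeautifulSoup

def scrape_course_title(url: str) -> str:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Library calls like find() replace hand-written HTML parsing,
    # which is both faster to write and less error-prone.
    heading = soup.find("h1")
    return heading.get_text(strip=True) if heading else ""
```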

Compatibility

We have tested our website on the following operating systems and browsers:

  1. macOS: Safari, Chrome, Firefox
  2. iOS: Safari, Chrome
  3. iPadOS: Safari, Chrome
  4. Android: Chrome
  5. Windows 10: Chrome, Firefox

Future Work: How the Project Could Be Extended with Another 6 Months

If we had more time, there are numerous things we would do to improve our project. We completed every one of our MoSCoW requirements except for one 'could have', which states "Could inform the user of the course ranking in relation to other universities". With another 6 months, we could address this requirement by extending our web scraper, or by using APIs, to obtain university course rankings from websites such as the QS World University Rankings. We would then modify our database to store this information, making it simple to display to users. We could also look at additional data, such as future employment statistics, which would make the tool more informative for prospective students.

Given more time, we could also have increased the scope of the tool to cover every course at UCL, rather than just the IHE. Extending the tool in this way would be quite simple: when developing our systems architecture, we put a lot of effort into ensuring that the tool could scale up easily. To include additional courses, all that needs to be done is to write the course names into a txt file in the web scraper and rerun it. The scraper will then scrape the courses provided and generate a CSV file whose new course information can be added to the database. This change would be easy to make, as we could write a simple for loop to obtain all of the course names and append them to the text file, as sketched below. The reason we did not do so is that our client specifically asked us to look at the IHE courses in detail.
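
A minimal sketch of that loop, assuming a hypothetical source of course names and input file name (our scraper's actual input file may be named differently):

```python
# Hypothetical sketch: append extra course names to the scraper's input file.
# The file name and the list of course names are assumptions for illustration.
course_names = ["Computer Science BSc", "Medical Physics MSc"]  # e.g. gathered from UCL's course listing

with open("courses.txt", "a", encoding="utf-8") as f:
    for name in course_names:
        f.write(name + "\n")  # one course per line, as the scraper expects
```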

The final improvement we would make with 6 more months would be packaging the web scraper into the virtual machine that contains the database. We would then create a script that runs the web scraper at a constant interval (say, weekly), generates a new CSV file, and updates our database with the new information. This automation would be beneficial, as it would ensure that our decision tool always shows up-to-date information to users and reacts to any changes in course structures.
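
A minimal sketch of such a script, assuming hypothetical names for the scraper entry point, CSV file and database loader (none of these are our actual code):

```python
# Hedged sketch of the weekly update job described above; the scraper script,
# CSV path and database loader are hypothetical names, not the project's code.
import subprocess
import time

WEEK_SECONDS = 7 * 24 * 60 * 60

def update_once() -> None:
    # Rerun the scraper to regenerate the CSV of course data...
    subprocess.run(["python", "scraper.py"], check=True)
    # ...then load the fresh CSV into the MySQL database.
    subprocess.run(["python", "load_csv_into_db.py", "courses.csv"], check=True)

if __name__ == "__main__":
    while True:
        update_once()
        time.sleep(WEEK_SECONDS)  # in production this would more likely be a cron job
```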