Evaluation

Summary of achievements

Requirements
ID | Requirement | Priority | Completion status
F1 | Ability to choose more than one item of clothing. | Must have | Complete
F2 | Ability to try on more than one item of clothing at the same time. | Must have | Complete
F3 | Ability to place, rotate and resize items of clothing within the application. | Must have | Complete
F4 | Ability to use image recognition technology to project a garment hologram, such as a bracelet. | Must have | Complete
NF1 | Show buttons to navigate through the different categories of clothing. | Must have | Complete
F5 | Use voice recognition to change the item of clothing worn, based on categories. | Should have | Complete
F6 | Use voice recognition to display product information, including purchasing and sizing (chatbot). | Should have | Complete
F7 | Use voice recognition to place, rotate and resize garments in the user’s space. | Should have | Complete
NF2 | Help available by tapping on a button. | Should have | Complete
NF3 | Display the help available via voice commands. | Should have | Complete
UI1 | Easy to use and navigate through the pages of the menu. | Should have | Complete
F8 | Ability to shortlist clothing to purchase on the website (add to basket). | Could have | N/A
F9 | Use voice recognition to add to the basket. | Could have | N/A
NF4 | Avatar has to look like the user. | Could have | N/A
NF5 | Show a basket page. | Could have | N/A
NF6 | Simulate the texture of clothing. | Would have | N/A
NF7 | Show clothing virtually worn on the user’s body. | Would have | N/A
Challenges
  • Loading and unloading clothing objects: to let the user pick which clothes to interact with, we needed a way to load and unload clothing objects. We did this using the Renderer component of each garment’s GameObject in Unity: when certain UI buttons are pressed, a script in the background stops rendering some clothes and starts rendering others (a minimal sketch of this approach is given after this list).

  • Using gestures to manipulate clothing: we used the HoloToolkit, which provides scripts that facilitate the use of gestures, along with various helper scripts that make recognition more reliable. We then connected the gestures to scripts that adjust the Transform components of the clothing objects, allowing the user to move, rotate and resize garments (see the second sketch after this list).

  • Using the Chatbot API: we had to use a class in Unity to make HTTP requests to the AI Chatbot API, so that users can enter queries and quickly receive and display responses containing the relevant information (see the third sketch after this list).
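
The following is a minimal sketch of the renderer-toggling approach; the class and method names (GarmentToggler, ShowOnly) are illustrative assumptions, not the project’s actual script names.

```csharp
using UnityEngine;

// Illustrative sketch: garments stay loaded in the scene and are shown or hidden
// by enabling or disabling their Renderer components when a UI button is pressed.
public class GarmentToggler : MonoBehaviour
{
    public GameObject[] garments;   // assigned in the Inspector

    // Wired to a UI button's OnClick event, passing the index of the garment to show.
    public void ShowOnly(int index)
    {
        for (int i = 0; i < garments.Length; i++)
        {
            bool visible = (i == index);
            foreach (Renderer r in garments[i].GetComponentsInChildren<Renderer>())
            {
                r.enabled = visible;   // stop or start rendering without unloading the object
            }
        }
    }
}
```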
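
The second sketch shows how gesture input can drive a garment’s Transform. The HoloToolkit event wiring is omitted, and the method names and speed values are assumptions for illustration.

```csharp
using UnityEngine;

// Illustrative sketch: applies move, rotate and resize deltas to a garment's Transform.
// In the app these methods would be driven by HoloToolkit gesture events; that wiring
// is omitted here.
public class GarmentManipulator : MonoBehaviour
{
    public float rotationSpeed = 60f;   // degrees per unit of gesture input
    public float scaleSpeed = 0.5f;

    public void Move(Vector3 handDelta)
    {
        transform.position += handDelta;                   // reposition the garment
    }

    public void Rotate(float horizontalInput)
    {
        transform.Rotate(Vector3.up, horizontalInput * rotationSpeed, Space.World);
    }

    public void Resize(float scaleInput)
    {
        float factor = 1f + scaleInput * scaleSpeed;
        transform.localScale *= Mathf.Max(factor, 0.01f);  // avoid collapsing to zero scale
    }
}
```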
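
The third sketch shows one way to make an HTTP request to the chatbot from Unity using UnityWebRequest; the endpoint URL and query format are placeholders, not the real API built by team 28.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Illustrative sketch of querying the chatbot over HTTP. The URL and response
// handling are placeholders.
public class ChatbotClient : MonoBehaviour
{
    private const string Endpoint = "https://example.com/chatbot/query";   // placeholder URL

    public void Ask(string query)
    {
        StartCoroutine(SendQuery(query));
    }

    private IEnumerator SendQuery(string query)
    {
        string url = Endpoint + "?q=" + UnityWebRequest.EscapeURL(query);
        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError("Chatbot request failed: " + request.error);
            }
            else
            {
                // Display the raw response; the app would parse this and show the
                // relevant product information in the UI.
                Debug.Log("Chatbot response: " + request.downloadHandler.text);
            }
        }
    }
}
```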

Incomplete features/bugs
  • The chatbot does not display relevant information such as the product image or the URL of the product on the NET-A-PORTER website.
  • The information about the clothing on the main menu is not dynamic; it is there only as a concept for the clothing we have available.
Work distribution
Work package | Category | Contributors
Bi-weekly Reports | Reports | All
Elevator Pitch | Presentation | All
Website | Deliverable | All
Video | Deliverable | Vania
Final Presentation | Presentation | All
Poster | Deliverable | Yll and Vania
Client meetings | Client Liaison | All
Client communication | Client Liaison | Vania
Scanning garments | Project | All
Testing | Project | All

Critical evaluation


Architecture Design

In terms of architecture, we strongly feel that we made the correct decisions, whether in the applications we used for development or the APIs we integrated. Unity was the only real option for developing the app, as it is currently the only 3D development engine with support for the HoloLens. Although we were able to develop a very good application, Unity was not the easiest software to use; if Microsoft extends HoloLens support to other engines such as Unreal Engine in the future, they might be worth looking into for another HoloLens project.

Similarly with the chatbot, we did not really have a choice but to work with the API that team 28 had created. Given considerably more time we could have built our own API, with a recommendation system better suited to our app, but their API was relatively easy to use and fitted our purpose given the time constraints.

Finally, for the image recognition we did have a choice: another team had used something different because Vuforia did not work properly for them. For us, however, it worked nearly perfectly, and we think Vuforia is the best choice for image recognition based apps, as it is a well-established API that has been used and tested by a wealth of companies and individuals. So, even though it took some time to set up, we believe we made the correct decision here as well.

User Interface Design and User Experience

Our user interface is very user friendly and has an elegant design that matches the “feel” of the company, a high-end fashion retailer; we have more or less used a colour scheme that complements the company logo and other branding. Beyond aesthetics, to improve the user experience we included a help button on each page of the UI and a tutorial that guides the user through the app before they start using it on their own. As an improvement, we could allow the user to repeat certain parts of the tutorial in case they forget how to use a feature, although the “Help” function typically takes care of this.

Functionality

In terms of functionality, given the time we had, we believe we implemented as many features as was feasibly possible, and to a high standard that exceeded not only our own expectations but also those of the client. If we had more time we would refine the features we have, such as making the image recognition hologram projection more realistic than it currently is, but beyond that there is little in the functionality we can genuinely critique.

Stability

Our app is very stable and has not crashed once while actually running on the HoloLens. It has crashed while running on the emulator, but this is understandable for two reasons: firstly, the emulator is very processor-heavy for the host computer, as it has to render 3D objects in real time; secondly, two of the three team members were running it non-natively, on Windows installed through Boot Camp on a Mac. In the image recognition scene, although the app does not crash, some of the 3D models occasionally hang and get stuck, but this resolves itself after a few seconds.

Efficiency

Once a user reaches the main menu page, where all the manipulation buttons are, the app is very responsive and efficient. Before that point, however, it takes a long time to load: roughly 20 seconds for the first screen to appear after selecting the app from the home screen, and roughly a minute to load after saying “Yes” to the tutorial. Although we found and implemented some ways to reduce this, such as changing the way we loop through garments, we were unable to decrease loading times considerably given the amount of resources the app has to load. A sketch of one general mitigation of this kind follows.
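
As an illustration of the kind of mitigation available (not necessarily the exact change we made), preparation work can be spread across frames with a coroutine so that no single frame blocks on setting up every garment:

```csharp
using System.Collections;
using UnityEngine;

// Illustrative only: prepare garments one per frame instead of all at once,
// so the menu becomes responsive sooner.
public class GarmentPreloader : MonoBehaviour
{
    public GameObject[] garments;

    private IEnumerator Start()
    {
        foreach (GameObject garment in garments)
        {
            garment.SetActive(true);   // or enable renderers, load textures, etc.
            yield return null;         // wait a frame before handling the next garment
        }
    }
}
```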

Compatibility

Since our application was built directly for the HoloLens, there is no other platform it could run on as-is, because many of its features are very specific to the HoloLens, such as the way the HoloLens camera and microphone are accessed. Despite this, we feel it would not be too difficult to port it to a desktop app for Windows/Mac OS X, because we did all of our testing locally on our machines, even for the image recognition, using a laptop webcam.

Maintainability

In terms of maintainability, although the app is reasonable, it is far from the easiest thing to maintain: finding a small bug can take a very long time because of the number of scripts running in the application and the GUI. We have also found some very strange errors; for example, the chatbot keyboard shows up when the Unity project is built as XAML but not when it is built as D3D, which is the default format for this type of app. Thankfully both builds work on the HoloLens, so errors like this, although unusual, are fixable.

Evaluation of Testing

We carried out an extensive range of testing, both using systematic methods and involving live users. Our unit and integration testing was done using Unity’s built-in testing tools and was of a sufficient standard to test appropriate things such as the implementation of the chatbot (an example of the style of test this supports is sketched below). Our performance testing was mainly based on intuition; it would have been more useful to make further use of tools like the Unity Profiler, which displays CPU usage while the application is running. This was of course made more difficult by the fact that we worked in separate environments, depending on whether we had access to the HoloLens or had to settle for the emulator. Our user acceptance testing was very useful to our development and, based on user feedback, highlighted many things we could improve, such as the standard of our tutorial and whether it fulfilled its purpose.
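
For illustration, a unit test run through Unity’s Test Runner is an ordinary NUnit test; the class under test here is the hypothetical GarmentToggler sketched earlier, not our actual test code.

```csharp
using NUnit.Framework;
using UnityEngine;

// Example of the style of edit-mode test Unity's built-in Test Runner executes.
public class GarmentTogglerTests
{
    [Test]
    public void ShowOnly_HidesAllOtherGarments()
    {
        // Arrange: two primitive "garments" and a toggler that knows about both.
        GameObject first = GameObject.CreatePrimitive(PrimitiveType.Cube);
        GameObject second = GameObject.CreatePrimitive(PrimitiveType.Cube);
        var toggler = new GameObject("Toggler").AddComponent<GarmentToggler>();
        toggler.garments = new[] { first, second };

        // Act: show only the first garment.
        toggler.ShowOnly(0);

        // Assert: the second garment is no longer rendered.
        Assert.IsTrue(first.GetComponent<Renderer>().enabled);
        Assert.IsFalse(second.GetComponent<Renderer>().enabled);
    }
}
```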

Project Management

On the whole, we believe that we managed the project well throughout, which is shown by our meeting all of the deadlines with work of a high standard. We have also managed to work through all of the project tasks harmoniously as a group.

Future work

Because this project was meant as a concept for such a big company, we were encouraged to innovate as much as we could. This has left many different avenues we could go down if we had more time with the project.

Create an avatar

Right now the clothes hang in the air in realistic positions, but the next step would be to attach them to an avatar. We could allow users to create an avatar true to their body shape: they would enter values such as waist size and shoulder width, and from these we would generate a 3D avatar model and place it in the game world (a hypothetical sketch follows). We could take this a step further and incorporate some kind of face creator, either via technology that builds a 3D model from 2D photos the user takes, or via a manual creator they would use themselves.
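
A purely hypothetical sketch of how entered measurements might drive an avatar’s proportions; the reference measurements and the simple scaling approach are illustrative assumptions only.

```csharp
using UnityEngine;

// Hypothetical future-work sketch: scale a base avatar model according to
// user-entered measurements before placing it in the game world.
public class AvatarBuilder : MonoBehaviour
{
    public GameObject avatarPrefab;

    private const float ReferenceWaistCm = 80f;      // waist size the base model was built for
    private const float ReferenceShoulderCm = 45f;   // shoulder width of the base model

    public GameObject CreateAvatar(float waistCm, float shoulderCm, Vector3 position)
    {
        GameObject avatar = Instantiate(avatarPrefab, position, Quaternion.identity);

        // Very rough: widen or narrow the base model to match the user's measurements.
        Vector3 scale = avatar.transform.localScale;
        scale.x *= shoulderCm / ReferenceShoulderCm;
        scale.z *= waistCm / ReferenceWaistCm;
        avatar.transform.localScale = scale;

        return avatar;
    }
}
```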

Further use of the chatbot

We could also make further use of our fellow group’s AI Chatbot API. Right now we have integrated it as an option within the application and provide a UI to interact with it. We could intertwine it further with our concept and use it to get live information about the 3D models of clothes in the application; for example, a separate UI, available when a garment is selected, could display information such as price, name, brand and material.

Use NET-A-PORTER's API

We would like to use NET-A-PORTER’s API in a similar way to how the Chatbot has used it, offering our application as a means of also purchasing clothes, which brings it closer to a realistic proposition rather than just a concept. We could offer a basket and checkout: users would search for clothes sold by NET-A-PORTER, the 3D model for a garment would be retrieved from a database and loaded immediately so that the user can view it, and they could then purchase it by placing it in their basket and going through the checkout process, including payment.

Further Development