Helping shoppers try on items without changing clothes
(Contribution: 50%)
FXMirror allows shoppers to try on clothes without physically changing. The solution consists of a 75” flat-panel display, a console, a Kinect camera, and proprietary software built on patented technologies. As the first virtual mirror of its kind, the product had no industry standard or widely recognized user interface to follow. The design team therefore focused on identifying and solving key interface issues that surfaced with real-world users.
The FXMirror system scans the user’s body with its camera and virtually recreates the skeleton by analyzing the visual data. With a 3D model of the body ready, customers can try on individual clothes by category (e.g. t-shirts) and mix and match items across categories (e.g. tops with bottoms or jackets). They can download and share a picture of themselves wearing the clothes, along with information about the merchandise.
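The try-on flow described above can be thought of as one slot per clothing category, so picking a new top replaces the current top while the bottoms and jacket stay on. A minimal sketch of that idea, with illustrative category and item names (not FXMirror's actual implementation):

```python
class TryOnOutfit:
    """Tracks one worn item per clothing category, mirroring the
    category-based try-on and mix-and-match behavior described above.
    Category names ("tops", "bottoms", ...) are hypothetical."""

    def __init__(self):
        self.slots = {}  # category name -> item currently worn

    def try_on(self, category, item):
        # Trying on an item replaces whatever was worn in that category,
        # while items in other categories stay on.
        self.slots[category] = item

    def current_outfit(self):
        # Snapshot of everything currently worn, e.g. for the shareable photo.
        return dict(self.slots)


outfit = TryOnOutfit()
outfit.try_on("tops", "striped t-shirt")
outfit.try_on("bottoms", "jeans")
outfit.try_on("tops", "hoodie")  # replaces the t-shirt, keeps the jeans
```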
Teaching users new hand gestures was difficult. Even simple gestures like "grab and drag" were hard to pick up, even with a salesperson demonstrating right next to them. I learned that teaching users novel interactions on such an unfamiliar device was not a good decision. We redesigned the interactions to resemble those of devices users were already accustomed to, computers and mobile phones, and this helped them understand and use the new device more easily.
The 3D scanning camera can recognize the body correctly only when the arms and legs are spread apart. Although we included a visual guide and text instructing users to do so, many ignored or did not notice the guide. After changing the static guide to an animation, we found that the recognition error rate decreased by 67%. The animation could still be visually improved, but I learned that animation can work as a powerful visual nudge for users.
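The recognition condition above, arms and legs spread away from the body, can be sketched as a simple check on tracked joint positions. Everything here is a hypothetical illustration: the joint names, the screen-space coordinates, and the angle thresholds are assumptions, not the product's actual tracking logic.

```python
import math

def is_pose_spread(joints, min_arm_angle=30.0, min_leg_angle=15.0):
    """joints maps a joint name to an (x, y) position in screen space,
    with y growing downward. Returns True when both arms and both legs
    are angled away from straight-down by at least the given thresholds,
    i.e. the spread pose the scanner needs before it can lock on."""

    def angle_from_vertical(origin, tip):
        dx = tip[0] - origin[0]
        dy = tip[1] - origin[1]
        # Angle (in degrees) between the limb and the straight-down axis.
        return abs(math.degrees(math.atan2(dx, dy)))

    left_arm = angle_from_vertical(joints["left_shoulder"], joints["left_hand"])
    right_arm = angle_from_vertical(joints["right_shoulder"], joints["right_hand"])
    left_leg = angle_from_vertical(joints["left_hip"], joints["left_foot"])
    right_leg = angle_from_vertical(joints["right_hip"], joints["right_foot"])

    arms_ok = left_arm >= min_arm_angle and right_arm >= min_arm_angle
    legs_ok = left_leg >= min_leg_angle and right_leg >= min_leg_angle
    return arms_ok and legs_ok
```

A guide (static or animated) would keep prompting the user until a check like this passes, at which point scanning begins.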