Helping shoppers to try on items without changing clothes

FXMirror TF @FXGear
UI & Interaction design
(Contribution: 50%)
2016 - 2018

FXMirror provides real-time 3D virtual fitting service with its AR mirror kiosk

FXMirror allows shoppers to try on clothes without physically changing. The solution consists of a 75” flat panel display, a console, a Kinect camera, and proprietary software with patented technologies. As the product was the first of its kind as a virtual mirror, there was no industry standard or widely recognized user interface. The design team focused on identifying and solving key interface issues that arose from real-world use.

My impact
  • Developed wireframes and user flows
  • Collaborated closely with developers to implement UI/UX changes
  • Created visual user interface and artifacts including the icons and the animation guides
  • Created storyboards to visualize solutions
  • Designed the logo, web pages, and created editorials including pamphlets and posters

How it works

Customers first scan their bodies using FXMirror’s camera, then virtually try on clothes on the screen

The FXMirror system scans the user’s body with its camera and virtually reconstructs the skeleton from an analysis of the visual data. With a 3D model of the body ready, customers can try on individual clothes by category (e.g. t-shirts), and mix and match items across categories (e.g. tops with bottoms or jackets). They can download and share a picture of themselves wearing the clothes, along with the merchandise information.


Building on experiences users already know helps them understand and use a new interface

Teaching new hand gestures to users was difficult. Even simple gestures like 'grab and drag' were not easy, even with a salesperson coaching them side by side. I learned that teaching users new interactions on such an unfamiliar device was not a good decision. We redesigned the interactions to resemble what users were already accustomed to - computers and mobile phones - which helped them understand and use the new device more easily.

Motion and animation can be useful when asking users to follow guidance

The 3D scanning camera can recognize the body correctly only when the arms and legs are spread apart. Though we included a visual guide with text instructing users to do so, many ignored or did not notice it. After changing the static guide to an animation, we found that the recognition error decreased by 67%. The animation could still be visually improved, but I learned that animation can work as a powerful visual nudge for users.

Allowing viewers to immerse themselves in virtual reality