The goal of our project is to work with the R-Pad Robotics Research Lab to build a usable, intuitive virtual reality (VR) environment in which users complete a range of tasks that a robotic arm can then learn from or mimic. There are two major applications to prototype: the VR environment, with its spatial interfaces and control mechanisms, and a two-dimensional screen interface for the researcher moderating the task collection from the VR user. In addition to these two deliverables, we created a final proof-of-concept video to showcase the VR interface in its ideal future state.
For this project, I worked primarily on the development of low- to high-fidelity mockups of the moderator interface, along with crafting and conducting meaningful user testing between each iteration. I was also heavily involved in researching and testing the design paradigms behind first- vs. third-person control of the system, and in 3D animating the final concept video to show the ideal future state.
Scope:
16 weeks, 6 designers
To better understand our end goal, let's first take a look at one of our final deliverables. Our solution places users in an environment that mimics reality, where they have the autonomy to complete tasks of their choice in first person. A significant theme we focused on throughout this project was enabling people with varying disabilities to perform everyday tasks through the use of VR.
How can we create a VR interface that allows users to intuitively instruct robots? To help us answer this question, we conducted exploratory think-aloud usability tests with 11 participants, who all had little or no experience using VR. These participants went through think-aloud exercises with 3 existing VR programs — Microsoft Maquette, Virtual Virtual Reality, and Google Earth VR. With this study, we aimed to answer the following questions:
1. What are the pain points & points of confusion for a user in a VR environment given instructions and a series of small tasks?
2. What design patterns are commonly being used in popular VR applications? (for menus, cursors, controls, etc.)
We created a storyboard to define potential solutions to the various usability problems faced in VR environments. The storyboard details how a user would enter the VR environment, view a tutorial for a cup stacking task, and then attempt to complete that task.
We then created mid-fidelity 3D prototypes of key interactions using Microsoft Maquette. These included iterations of different VR controllers, scene layouts, and tools for controlling precision.
We speed dated our storyboards and Maquette prototypes with 6 users in order to gain insight into the pros and cons of different design decisions (first vs. third person, moderated vs. unmoderated, etc.).
First vs Third Person Perspective:
1. First person perspective is more intuitive, as it more closely mimics real life.
2. Third person perspective more closely mimics the setup of the robot within the lab environment.
Path Planning vs Live Control:
1. Path planning accounts for the lag between the user's actions in the VR environment and the robot's real-time movement.
2. Live control is more intuitive for users completing tasks from remote locations, who may not even be aware of the robot setup.
Moderated vs Unmoderated Interface:
1. When the interface is moderated by a researcher, the researcher can more accurately monitor errors, instead of depending on the user to track their own errors. The user can simply focus on completing the task in front of them, without being overwhelmed with this secondary task.
2. When the interface is unmoderated, less manpower is required and tasks can more easily be completed remotely.
In order to closely portray the actual VR experience, we wanted to create a fully functional prototype connected to the actual robot's movements (the robot would follow movements made in the VR environment). We chose to move forward with a combination of first- and third-person perspective, where the user's arm within the VR environment resembled a robotic arm. The interface would be moderated, as this would allow researchers to collect more accurate data. The interface would use path planning — when a user made a movement within the VR environment, a ghost trail would appear until the actual robot could catch up to this movement.
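The ghost-trail behavior can be thought of as a waypoint queue: the user's motions are recorded immediately, while the slower robot consumes them at its own pace, and the not-yet-reached waypoints are what the VR scene renders as the translucent trail. The sketch below illustrates this idea only; the class and method names are hypothetical, not part of our actual system.

```python
from collections import deque

class GhostTrailPlanner:
    """Conceptual sketch of the path-planning idea: user motions are
    queued as waypoints instantly, and the (slower) robot dequeues
    them one per control tick. Waypoints still in the queue form the
    ghost trail shown in VR until the robot catches up."""

    def __init__(self):
        self.pending = deque()      # waypoints the robot has not reached yet
        self.robot_position = None  # last waypoint the robot reached

    def record_user_motion(self, waypoint):
        # Called whenever the VR user moves; extends the ghost trail.
        self.pending.append(waypoint)

    def ghost_trail(self):
        # Waypoints that would be drawn as a translucent trail in VR.
        return list(self.pending)

    def step_robot(self):
        # Called once per robot control tick; advance one waypoint.
        if self.pending:
            self.robot_position = self.pending.popleft()
        return self.robot_position

# The user makes three quick movements; the robot lags one tick behind.
planner = GhostTrailPlanner()
for point in [(0, 0), (1, 0), (1, 1)]:
    planner.record_user_motion(point)

planner.step_robot()          # robot reaches (0, 0)
print(planner.ghost_trail())  # the two remaining waypoints are the trail
```

A real implementation would interpolate smoothly between waypoints rather than jumping, but the queue-and-lag structure is the core of why path planning tolerates latency better than live control.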
Task Screen
The task screen allows the moderator to view testing sessions as they happen and also view data collected from previous testing sessions.
Moderator View
The moderator will have three concurrent feeds: the physical robot in real time, the VR user in their physical environment, and a livestream of what the VR user is seeing in the virtual space.
Error Tracking
Researchers can manually log non-fatal and fatal errors by pressing a button in the upper-right menu. Users are informed through the VR interface that they have made an error.
Messages & Notes
Researchers can send messages to the user while they are in the VR environment to provide hints or instructions. They can also create internal notes during the user test, which are not seen by the user.
Task Completion
When the user sufficiently completes the task, the moderator marks the task as complete, sending the user a message showing their completion time. The moderator can then view aggregate data or return to the task screen.