Human-Robot Interaction Lab

As a research assistant and software developer at Indiana University’s Human-Robot Interaction Lab, I worked on the following projects:


Extensible Visualization Framework

My favorite project at the HRI Lab was building – from the ground up – an extensible visualization framework that integrates directly into the existing HRI Lab software, “ADE” – a platform for developing and running distributed robotics applications. The visualizations can run individually or as part of a central visualization component, and can be distributed on multiple host computers. As of summer 2011, about two dozen visualizations – written by nearly everyone at the Lab – utilize the framework.
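To give a flavor of the framework’s design, below is a minimal sketch of what a pluggable visualization contract might look like; the interface and its method names are hypothetical stand-ins, not the actual ADE API:

```java
import javax.swing.JPanel;

/**
 * Hypothetical contract for a pluggable visualization. Each visualization
 * renders into a Swing panel, so it can appear either in its own standalone
 * window or embedded inside a larger container such as SystemView.
 */
public interface Visualization {
    /** Human-readable name, e.g., "Laser Scan" or "Camera Feed". */
    String getTitle();

    /** The Swing panel that draws this visualization. */
    JPanel getPanel();

    /** Called periodically (or on state changes) to refresh the display. */
    void update();
}
```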

“SystemView”, the central visualization component that I wrote for displaying and managing the visualizations, is itself built with the visualization framework: a fine example of “eating one’s own dog food” and of the Composite design pattern. Aside from the fun fact that SystemView can consequently visualize a SystemView inside a SystemView within a SystemView ad infinitum, it provided the first-ever functional graphical user interface for our HRI Lab software, making it much easier to keep track of distributed robotics components that often span multiple host PCs. SystemView also greatly eased the learning curve for the Lab’s collaborators: actions such as launching new components and saving and loading system configurations can be performed via a GUI rather than through meticulous command-line parameterization. A screenshot of SystemView, running a number of the Lab’s visualizations, is shown below.

[Screenshot: SystemView running a number of the Lab’s visualizations]

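The “SystemView inside a SystemView” trick falls out naturally from the Composite pattern: the container implements the same interface as its children. Here is a hedged sketch, reusing the hypothetical Visualization interface from above (again, not the actual SystemView code):

```java
import java.util.ArrayList;
import java.util.List;
import javax.swing.BoxLayout;
import javax.swing.JPanel;

/**
 * Hypothetical composite container: it implements the same Visualization
 * interface as its children, so a SystemView can embed other visualizations,
 * including another SystemView, and be treated uniformly by the framework.
 */
public class SystemViewSketch implements Visualization {
    private final List<Visualization> children = new ArrayList<Visualization>();
    private final JPanel panel = new JPanel();

    public SystemViewSketch() {
        panel.setLayout(new BoxLayout(panel, BoxLayout.Y_AXIS));
    }

    /** Embeds a child visualization (possibly another SystemViewSketch). */
    public void add(Visualization v) {
        children.add(v);
        panel.add(v.getPanel());
    }

    @Override public String getTitle() { return "SystemView"; }
    @Override public JPanel getPanel() { return panel; }

    /** Refreshing the composite refreshes every child in turn. */
    @Override public void update() {
        for (Visualization v : children) {
            v.update();
        }
    }
}
```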
Complex Multi-Robot Simulator

Another project was to create a robot simulator capable of supporting rich cognitive architectures, faithfully representing environmental objects (boxes, doors, etc.), and providing a 3D visualization. The simulator also needed to allow for the visual manipulation of objects within the environment: for example, drawing new walls, moving existing objects, and opening and closing doors.

When I started, the Lab’s existing simulator was at the end of its life, patched beyond repair and lacking most of the above functionality. In designing the new simulator, I paid particular attention to building a clean and scalable architecture that would remain a versatile Lab tool for years to come. So that the simulator could run hundreds of robots on the university’s high-performance supercomputer, I used the Model-View-Controller design pattern to separate the visual display from the underlying simulation model. The simulator also uses the Command pattern to dispatch environment updates to all of the simulated robots, the Proxy pattern to maintain environment snapshots within each simulated robot instance, various Observers to be notified of state changes, and Decorators to attach simulated sensors (lasers, cameras, etc.) to the robots, as sketched below. I am proud of my work on the simulator: its code is well thought-out, fully modularized, carefully implemented, and meticulously documented.
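As one concrete illustration, here is a minimal sketch of how the Decorator pattern can stack simulated sensors onto a robot; the class and method names below are hypothetical, not the simulator’s actual code:

```java
/** Hypothetical base abstraction for a robot living in the simulated world. */
interface SimulatedRobot {
    void tick();  // advance the robot by one simulation step
}

class BasicRobot implements SimulatedRobot {
    @Override public void tick() {
        // update pose from commanded wheel velocities, check collisions, etc.
    }
}

/** Decorator base: wraps a robot and forwards the simulation step to it. */
abstract class SensorDecorator implements SimulatedRobot {
    protected final SimulatedRobot inner;
    protected SensorDecorator(SimulatedRobot inner) { this.inner = inner; }
    @Override public void tick() { inner.tick(); }
}

/** A simulated laser rangefinder stacked onto any robot. */
class LaserSensor extends SensorDecorator {
    LaserSensor(SimulatedRobot inner) { super(inner); }
    @Override public void tick() {
        super.tick();
        // cast rays against the environment model and record the ranges
    }
}

/** A simulated camera stacked onto any robot. */
class CameraSensor extends SensorDecorator {
    CameraSensor(SimulatedRobot inner) { super(inner); }
    @Override public void tick() {
        super.tick();
        // render the robot's viewpoint into a simulated camera frame
    }
}

// Sensors stack freely, e.g., a robot carrying both a laser and a camera:
// SimulatedRobot robot = new CameraSensor(new LaserSensor(new BasicRobot()));
```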

Below is a screenshot of the simulated HRI Lab hallway.  Though the simulator can run without any visualizations (e.g., on the high-performance cluster) or with only a “standalone” window, displaying the simulator through SystemView makes it easier to navigate amongst multiple robots’ visualizations and to spawn new robots on the fly.

[Screenshot: the simulated HRI Lab hallway, displayed through SystemView]

Documentation

At the HRI Lab, I spearheaded the use of an internal wiki for lasting, collaborative documentation. I wrote or contributed to over half of the Lab’s wiki articles, ranging from documentation for particular components (e.g., the simulator and visualization framework that I developed) to broader articles on integrating the Lab software with the Eclipse IDE, creating JAR files, using Subversion, and so forth. I also wrote the official guide to the Lab’s software, describing its terminology, installation, and use (see the PDF file below).


[PDF: official guide to the Lab’s software]

Working with Robots

Working at the Human-Robot Interaction Lab, I got my fair share of interacting with robots. Some, like the Pioneer robot, I taught to respond to basic hand gestures (e.g., “stop”, “move right”, “move left”), building on my undergraduate senior project on Dynamic Associative Networks. For other robots, like the quadrotor shown in the video below, I integrated the robot’s internal API with our existing Lab software and taught it to follow a moving target (in this case, a Videre Era robot).

[Video: the quadrotor following a Videre Era robot]
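For a sense of what the follow behavior involves, here is a minimal sketch of a proportional controller that steers toward the target’s observed position; the Drone and Pose types and their methods are stand-ins, not the quadrotor’s actual API:

```java
/** Stand-in types; the quadrotor's real API differs. */
class Pose {
    double x, y;
    Pose(double x, double y) { this.x = x; this.y = y; }
}

interface Drone {
    Pose getPose();
    void setVelocity(double vx, double vy);  // horizontal velocity command (m/s)
}

/** Minimal proportional-control step for following a moving target. */
public class FollowTarget {
    private static final double GAIN = 0.5;      // proportional gain (1/s)
    private static final double STANDOFF = 1.0;  // desired distance to target (m)

    /** Call once per control cycle with the target's latest observed pose. */
    public static void step(Drone drone, Pose target) {
        Pose self = drone.getPose();
        double dx = target.x - self.x;
        double dy = target.y - self.y;
        double dist = Math.hypot(dx, dy);
        if (dist > STANDOFF) {
            // command a velocity proportional to the remaining distance,
            // directed along the line toward the target
            double scale = GAIN * (dist - STANDOFF) / dist;
            drone.setVelocity(dx * scale, dy * scale);
        } else {
            drone.setVelocity(0, 0);  // close enough: hold position
        }
    }
}
```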