By the mid-1980s, all the important components of today’s virtual reality systems existed in one form or another, awaiting an inventive mind to bring the pieces together so that the exploration of virtual worlds could begin in earnest.
In 1981, Michael McGreevy, who was studying for a PhD in cognitive engineering, and Dr Stephen Ellis, a cognitive scientist, began a program of research in spatial information transfer at NASA Ames, emphasizing the interpretation of 3-D displays. Aware of the pioneering work by Sutherland and Furness in using head-mounted displays, McGreevy put forth a proposal in 1984 to craft a similar system for NASA called a virtual workstation.
Based on the ideas and applications in the proposal, McGreevy obtained a small amount of seed money ($10,000) from division management to build a prototype display system. He had followed with interest the work done at Wright-Patterson Air Force Base by Thomas Furness III on the state-of-the-art Visually Coupled Airborne Systems Simulator (VCASS) for pilots (top image).
VCASS had the high resolution and fast rendering of complex images that McGreevy needed to pursue his research in the simulation of virtual environments. But it had one prohibitive drawback: the helmet alone cost a million dollars. McGreevy would simply have to build his own.
The most expensive part of the VCASS helmet was the use of custom CRTs that generated the high-resolution images seen by the pilot. If these could be replaced with a less expensive display and combined with special lenses that allowed a much wider field of view, McGreevy would have his helmet. He sought the help of contractors Jim Humphries, Saim Eriskin, and Joe Deardon to develop his low-cost alternative to VCASS.
Fortunately, a consumer product had recently appeared on the scene that solved their most important problem: the need for a small, inexpensive display that could be worn on the head. Black-and-white hand-held TVs based on LCD technology (Sony called theirs the “Watchman”) had recently become available. A quick trip to Radio Shack netted two such devices. These early LCD displays had a limited resolution of 100×100 pixels (compared with the millions of pixels in the VCASS displays), but they were a start, and the price was right.
Next, the LCD displays were mounted on a frame similar to a scuba mask that was then strapped onto the wearer’s face. Special optics in front of the displays focused and expanded the image so it could be viewed without effort. McGreevy dubbed the odd-looking device the Virtual Visual Environment Display (VIVED, pronounced “vivid”). It was the only $2,000 head-mounted display on the planet, and it is still cheaper than some VR headsets today!
To test their novel display, they needed to create independent left- and right-eye images, or stereo pairs. Without a computer to do this, they turned to a different source. Two video cameras, mounted side by side, were wheeled up and down the hallway to create stereo videotapes. Their first production was a walking tour from NASA’s human factors lab, through the offices of the division, and on to the hangar where the XV-15 Tilt-Rotor aircraft was being developed. When users watched the videos through the VIVED system, they had a sense of being there, a sensation otherwise known as immersion.
McGreevy and Amy Wu, his support programmer, proceeded to develop the hardware and software necessary to create the rest of the virtual workstation. They patched together a Picture System 2 graphics computer from Evans and Sutherland, two 19″ display monitors, a DEC PDP-11/40 host computer, and the same Polhemus head-tracker used by Furness.
The Evans and Sutherland graphics system generated separate (stereo) wide-angle perspective images on each of the two display monitors. To convert the video signal into the proper format for the head-mounted display, two video cameras were mounted so that each pointed directly at one of the 19″ displays. Next, the Polhemus head-tracking sensor was mounted on top of the VIVED display, communicating the position and orientation of the wearer’s head to the PDP-11/40. Users who strapped the odd contraption onto their faces suddenly found themselves immersed in a computer-generated world, and at the time this was nothing short of amazing.
Data from one of McGreevy’s earlier projects studying air-traffic control issues was used for this first virtual environment. Users felt as if they were standing on a horizontal computer-generated grid that stretched out to the horizon. Turning their heads, they saw a featureless grid extending to infinity in all directions. Simple 3-D wireframe models of tiny aircraft hung suspended in mid-air, and users could walk around and inspect each aircraft in turn. At first the aircraft were fixed in space, but with a little more programming users could find themselves at the centre of a swirling confusion of planes busily landing and taking off again.
As word of McGreevy’s achievements got out, a steady stream of visitors from industry, academia, and the military made their way to the small, cluttered lab where a revolution was taking place. By 1985, McGreevy and his team had created the first practical, low-cost head-mounted stereoscopic display system. Unlike its predecessors, this one would eventually capture the attention of the public and trigger a small industry.
Scott Fisher joined NASA’s VIVED project in 1985, the same year that Michael McGreevy headed east for a two-year training stint in Washington, D.C. Fisher’s background in the Architecture Machine Group at MIT and his more recent tenure at Atari’s Research Center (ARC) gave him valuable insight for directing the research at NASA. Fisher was interested in extending the initial system with a wired glove, voice recognition, 3-D sound synthesis, and tactile feedback devices. His objective was to develop a system that could serve as the foundation for many different forms of research into virtual environments. While Fisher conceptualized, McGreevy fought the necessary funding battles to keep the program alive at NASA. Without that funding, VR motion controllers as we know them today might well have remained science fiction.