Artificial Intelligence + Virtual Reality = Holodeck (Almost)

As the technology develops, some are looking towards a combination of both platforms.

In 2014, in an article that chronicled the emergence of holodeck-like technologies, New York Times tech columnist Nick Bilton wrote: “Some scientists and researchers say we could have something like holodecks by 2024.” (See “Disruptions: The Holodeck Begins to Take Shape”) Without the backing of a named source, that timeline appears to be more speculative than definitive. Yet, recent advancements in the artificial intelligence (AI) and virtual reality (VR) frontiers suggest the date is not too far-fetched.

The holodeck as depicted in the TV series “Star Trek” is a hologram-powered simulation environment. Its use is primarily recreational, but the show also proposes pragmatic applications of the holodeck’s technology. The “Star Trek: Voyager” series features a holographic medical doctor, the Emergency Medical Hologram Mark I. In some episodes, the holodeck is pressed into service as a platform for battle training and forensic analysis.

The concept of reality-mimicking environments constructed with solid holograms remains theoretical. However, similar environments built with convincing pixel-objects are well within reach. From the Google Cardboard ($15) to the more advanced HTC Vive ($799) and Oculus Rift ($599), the headgear required to deploy and display VR content is becoming more affordable.

From CAD to VR

There’s no shortage of accurate, detailed 3D content—the cumulative legacy of CAD use in engineering. VR content is usually put together in game engines, such as Unity, Unreal or CryENGINE. Converting CAD data from the engineering world into game engine-compatible VR is not a straightforward task.
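To make the hand-off concrete, the sketch below shows one small piece of such a pipeline: writing tessellated CAD geometry out as a Wavefront OBJ file, a neutral mesh format that engines like Unity and Unreal can import. This is an illustrative Python sketch under simplifying assumptions (the triangle list stands in for the output of a CAD kernel’s tessellator, and the `write_obj` helper is hypothetical); a production CAD-to-VR pipeline also has to carry materials, units, assembly hierarchy and level-of-detail data.

```python
# Illustrative sketch: export tessellated CAD geometry to a Wavefront OBJ
# file that a game engine (Unity, Unreal) can import as a VR asset.
# The triangle list below stands in for a CAD tessellator's output.

def write_obj(triangles, path):
    """Write triangles (each a tuple of three (x, y, z) vertices) to an OBJ
    file, de-duplicating shared vertices along the way."""
    vertices = {}          # vertex -> 1-based OBJ index
    faces = []
    for tri in triangles:
        face = []
        for v in tri:
            if v not in vertices:
                vertices[v] = len(vertices) + 1
            face.append(vertices[v])
        faces.append(face)

    with open(path, "w") as f:
        for v in sorted(vertices, key=vertices.get):
            f.write("v {:.6f} {:.6f} {:.6f}\n".format(*v))
        for a, b, c in faces:
            f.write("f {} {} {}\n".format(a, b, c))

# A single quad (two triangles) standing in for one tessellated CAD face.
quad = [((0, 0, 0), (1, 0, 0), (1, 1, 0)),
        ((0, 0, 0), (1, 1, 0), (0, 1, 0))]
write_obj(quad, "part.obj")   # drop the .obj into the engine's asset folder
```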

“CAD software makers might want to think about the hand-off process to the engine,” said Michael Kaplan, NVIDIA’s global lead for Media and Entertainment. “Or, would they prefer to develop their own CAD-to-VR conversion software? After all, they’re software companies.” For some, that might be biting off more than they could chew. Kaplan pointed out: “Most of them have never written real-time VR software.”

AI, once considered the stuff of science fiction, is picking up momentum. AI programs like autonomous navigation and image recognition in self-driving cars are expected to come from deep learning, a process that uses massively parallel computation to teach programs to see and react to their physical surroundings.
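For readers who want a feel for what “teaching a program to see” looks like in practice, here is a minimal sketch of the kind of convolutional network used for image recognition. It assumes PyTorch purely for illustration (the article names no framework), and the tiny architecture, class count and image size are invented; real self-driving perception stacks are vastly larger and are trained on labeled driving data.

```python
# Minimal sketch of a convolutional image classifier (PyTorch assumed).
# Training on labeled images is the "deep learning" step that teaches
# such a network to recognize what it sees.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # shape-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyClassifier()
dummy_batch = torch.randn(8, 3, 64, 64)   # 8 fake 64x64 RGB images
scores = model(dummy_batch)               # class scores, shape (8, 10)
print(scores.shape)
```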

At NVIDIA’s GPU Technology Conference, NVIDIA CEO Jen-Hsun Huang unveiled the DGX-1, described as “the world’s first deep-learning supercomputer.” He also introduced the Tesla P100, “for deep learning [at] 16-bit performance,” according to Huang. The new products are augmented by the company’s VRWorks software suite for VR application developers.

An attendee gets ready to experience VR at this year’s NVIDIA GPU Technology Conference. Image courtesy of NVIDIA.

CPU maker Intel pursues the same market with its Intel Deep Learning Framework. IBM Watson, the supercomputer that defeated its human opponents in the TV game show “Jeopardy!” in 2011, also wants a piece of the deep learning action. As of February, Watson has learned to detect signs of human emotion.

Benefits of Convergence

Currently, AI and VR are separate industries, each pursuing its own objectives, each wrestling with its own challenges. In my view (which, I must admit, is influenced by an overactive imagination), the convergence of the two could propel digital simulation to new heights. A VR environment running on autonomous agents and deep learning algorithms could give us unprecedented insights into products and processes.

In a holodeck-like simulation environment, a design study is more than a quality assessment. A virtual product trial could be a stirring, searing, unforgettable experience that induces laughter, tears or fear. If powered by AI and deep learning algorithms, such an environment could suggest design changes and configurations that engineers might never have conceived by intuition and expertise alone.

If IBM Watson’s emotional intelligence is a harbinger of things to come, future simulation programs might be able to detect how we feel about suggested design changes. If next-gen simulation environments are to look more like the holodeck and less like the standard finite element analysis software interface, then the challenge for developers is to come up with a new way of posing questions—not through dropdown menus and input fields, but through a more natural interaction. Voice command, for example, would be consistent with the holodeck.
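As a toy illustration of what replacing input fields with natural phrasing might involve, the sketch below maps an already-transcribed voice request onto a simulation parameter change. Everything here is hypothetical: the command grammar, the `parse_command` helper and the `wall_thickness` parameter are invented for illustration, and a real system would sit behind a speech-recognition and natural-language layer far richer than a regular expression.

```python
# Hypothetical sketch: turn a transcribed voice command into a design
# parameter change, instead of using a dropdown menu or input field.
import re

PATTERN = re.compile(
    r"(increase|decrease)\s+(\w[\w\s]*?)\s+by\s+([\d.]+)\s*(mm|percent)",
    re.IGNORECASE,
)

def parse_command(utterance, design):
    match = PATTERN.search(utterance)
    if not match:
        return "Sorry, I didn't understand that."
    verb, parameter, amount, unit = match.groups()
    key = parameter.strip().lower().replace(" ", "_")
    delta = float(amount) if verb.lower() == "increase" else -float(amount)
    if unit.lower() == "percent":
        design[key] = design.get(key, 0.0) * (1 + delta / 100)
    else:
        design[key] = design.get(key, 0.0) + delta
    return f"{key} is now {design[key]:.2f}"

design = {"wall_thickness": 3.0}
print(parse_command("Increase wall thickness by 0.5 mm", design))
```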

VR-powered simulation is still too far away to worry about, you say? By my calculation, 2024 is only eight years away.



About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
