By Kaden Strand
There is a great deal of hype around how virtual reality and augmented reality (VR/AR) technologies, collectively referred to as “mixed reality,” will radically transform how we share ideas, with impacts on education, training, visualization, and social communication. While many excellent prototypes of mixed reality experiences show potential in these areas, it can be challenging for individual professionals, educators, students, and enthusiasts without a strong technical background to create and share mixed reality experiences.
Beyond gaming and entertainment experiences, mixed reality has untapped potential for individuals to explore and share their own ideas and data, especially in the sciences. New authoring tools, shareable 3D formats, and web-based VR are among the emerging technologies rapidly lowering the barrier of entry for individuals to create and share their ideas in mixed reality.
Most mixed reality experiences today are created by content development studios using game development software, such as Unity or Unreal engines. These platforms are tremendously powerful and enable skilled users to create sophisticated virtual worlds. Individuals from many backgrounds are spending time learning these tools, and with some programming experience, it is not too challenging to learn the basics of building a simple mixed reality experience.
However, if you are an educator or scientist working with your own data, and you simply want to view a 3D model, plot 3D variables and mathematical functions, examine GIS topography, view a chemical structure, or explore other data-driven scenarios in mixed reality, it can be prohibitively time-consuming to develop a custom program within a game engine just to view and share your data in VR. In many cases, data must be entirely reformatted, or different programming languages must be used to work with the game development platform, and each feature must be carefully integrated into a final executable program. As new headsets, interaction methods, and use cases arise, these programs must then be updated and re-distributed. Non-technical users often partner with software developers to bring their ideas to life in virtual environments.
Creating VR with VR
Many companies are working to address these challenges, and some of the most exciting advances enable non-technical individuals to create, share, and annotate virtual environments. For example, Google’s Tilt Brush and Oculus’s Quill and Medium tools allow users to draw, sculpt, and animate 3D objects and environments directly inside VR. Microsoft’s new Maquette software allows the authoring of more sophisticated virtual environments from within VR, with the ability to include 3D models from content libraries; annotate scenes with text, icons, pictures, and videos; and set up viewpoints and transitions between scenes—all with a point-and-click interface requiring no programming. The quality of these experiences depends on individual artistic vision and clarity of thought.
The 3D file formats used to share the objects and environments created in these programs are now well supported on the web and in desktop programs, such as PowerPoint. For instance, models can be uploaded to Sketchfab and then easily shared or embedded into web pages. The Chrome, Firefox, and Edge browsers have even introduced WebVR support, making fully immersive views of VR web pages, such as Sketchfab scenes, accessible directly in the browser.
A Soup of Ideas: The Value of Composable Design
Consider how content is created and shared on the Internet. A single blog post can include text, pictures, videos embedded from YouTube, and interactive widgets, all composed into a cohesive story. It is easy to mix-and-match content on the web. With a basic knowledge of HTML, it is straightforward to create a web page with a variety of content types, and web development platforms and services further simplify this process for non-technical authors.
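To make the mix-and-match point concrete, here is a minimal sketch of such a page; the image path and video ID are placeholders, not real resources:

```html
<!-- A single page composing text, an image, and an embedded video.
     "figure1.png" and "VIDEO_ID" below are illustrative placeholders. -->
<article>
  <h1>Field Notes</h1>
  <p>Text, media, and interactive widgets can all share one page.</p>
  <img src="figure1.png" alt="A sample chart">
  <iframe src="https://www.youtube.com/embed/VIDEO_ID"
          title="Embedded video" allowfullscreen></iframe>
</article>
```

Each element is independent of the others, which is exactly the composability that makes the web such a flexible publishing medium.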
Because many fields have become increasingly data-driven, data visualizations have grown more sophisticated and are used more frequently in professional and educational settings. Many popular visualization tools, such as D3.js, Plotly, and CartoDB, are web-based, allowing users to flexibly incorporate customized data and embed the interactive results directly within their own web pages to be easily shared. The Distill web journal for machine learning research is a leading example of interactive, domain-specific communication.
The A-Frame web framework from Mozilla brings the composable nature of web pages into mixed reality. A-Frame extends the hierarchical structure of HTML into 3D WebVR scenes, allowing for the creation of mix-and-match components sharing the same virtual space. You can access these scenes simply by navigating to a URL with a web browser. An immersive view of these web pages can then be experienced with a VR or AR headset, or viewed as a 2D perspective from a desktop, tablet, or mobile device.
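A complete A-Frame scene really is just an HTML page. The sketch below uses A-Frame's standard primitives; the release version in the script URL is illustrative, so substitute a current one:

```html
<!-- A minimal A-Frame page: 3D objects declared like ordinary HTML tags.
     The release version below is illustrative; use a current A-Frame build. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-text value="Hello, WebVR" position="0 2.5 -3" align="center"></a-text>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Opening this page on a desktop shows a 2D view of the scene; on a WebVR-capable browser with a headset connected, the same page can be entered immersively.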
Data visualization and educational tools developed as A-Frame components benefit from this composable design, allowing users to flexibly create and share 3D scenes with a variety of content and automatic support for mixed reality headsets.
Example Scenario: Chemical Visualization
One example of a leading web-based scientific visualization tool for 3D data is the NGL Viewer, used for examining chemical models. Over 140,000 complex chemical structures in the Protein Data Bank, such as antibodies, DNA, and viruses, can be directly viewed on a web page. NGL is open-source software funded by grants and sponsorships, and must prioritize which features to include in the chemical visualization software. This means that extra features, such as VR/AR headset support, multi-user networking, advanced animation tools, and artistic post-processing, are outside the scope of the project.
If the NGL viewer were designed as an A-Frame component, a user could incorporate these extra features by including additional A-Frame components in a custom WebVR scene. To explore this idea, Blue Penguin is working with Colorado State University to develop a proof of concept NGL component for A-Frame. Using this component, molecular structures can be easily mixed with other components, such as text annotations and additional 3D models, in a WebVR scene. This design provides much greater flexibility and extensibility to meet custom use cases in education and research, allowing students and instructors to easily create and share VR-enhanced websites describing complex chemical concepts.
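A scene built this way might look like the following sketch. The component name "ngl-mol" and its attributes are hypothetical stand-ins, not the actual proof-of-concept API; the PDB entry 1CRN (crambin) is a real Protein Data Bank structure:

```html
<!-- Hypothetical sketch: "ngl-mol" and its attributes are illustrative,
     not the real proof-of-concept component's API. -->
<a-scene>
  <!-- Fetch and render a Protein Data Bank entry as a 3D object -->
  <a-entity ngl-mol="pdbId: 1CRN; representation: cartoon"
            position="0 1.6 -2"></a-entity>
  <!-- Annotate the structure with an ordinary A-Frame text component -->
  <a-text value="Crambin (PDB 1CRN)" position="0 2.8 -2"
          align="center"></a-text>
</a-scene>
```

The molecule and its annotation are independent components sharing one scene, so either can be swapped, restyled, or extended without touching the other.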
A-Frame scenes and components include additional benefits:
- Automatic, cross-platform support for 2D browsers, as well as virtual reality and augmented reality headsets
- Loading of in-scene pictures, videos, and 3D models
- Complex scene and background components such as 3D.io, allowing full interior design of virtual spaces for educational or aesthetic purposes
- Additional VR/AR interactions as controllers, gestures, and speech inputs evolve
- Networking components for cross-device, multi-user collaboration
- Powerful animation system components and artistic post-processing components, enhancing the quality of visual explanations
- Virtual video capture and export, as well as recording of full 3D motion capture
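As a hedged illustration of the networking item above, the community-maintained networked-aframe component synchronizes entities across connected users; the room name and avatar template here are placeholders:

```html
<!-- Sketch using the community networked-aframe component; the room name
     and avatar template are illustrative placeholders. -->
<a-scene networked-scene="room: demo-room">
  <a-assets>
    <template id="avatar-template">
      <a-entity><a-sphere color="#5985FF" radius="0.2"></a-sphere></a-entity>
    </template>
  </a-assets>
  <!-- Each connected user appears to others as an avatar-template instance -->
  <a-entity id="player" networked="template: #avatar-template" camera></a-entity>
</a-scene>
```

Because networking is just another component, the same pattern applies to any scene, including the chemical visualization scenario described earlier.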
Tip of the Iceberg
The introduction of VR as a web-accessible communication medium opens new possibilities for organizing and sharing ideas. For example, instead of multiple pages or long-form scrolling, there may be improved spatial layouts for communicating ideas. Imagine a reader engaging with an interactive, museum-style space where they are free to inspect the flow of information in a non-linear manner. Soon, visiting a web page will mean stepping into a cornucopia of sophisticated, interactive holograms, crafted by individuals ready to share their ideas in a new way.
Distill, a web journal for machine learning research, https://distill.pub/.
KADEN STRAND (firstname.lastname@example.org) is Founder and CEO at Blue Penguin, a VR/AR software services company blending software engineering, creative design, and practical problem solving to impact enterprise adoption and novel research for modern virtual and augmented reality technologies.