XR technology encompasses a spectrum of software and hardware that forms the basis for both Augmented Reality and Virtual Reality. These emerging technologies are evolving rapidly, so staying up to date with the terminology can be a challenge. Below is a list of several of the most common use cases for XR, each with a brief description of the application.
Sometimes referred to as Remote Assistance, this is a direct, live video connection with a remote support expert, who sees from the point of view of the technician (or other frontline worker). The expert can thus guide the technician by voice and by drawing, or placing, instructions in the technician's field of view. The technician can use a variety of hardware to facilitate this interaction, from tablets to head-mounted displays.
This is where digitized Work Instructions or SOPs are accessed by the technician on their connected handheld or head-mounted device, allowing them to work step by step through prescribed checklists.
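The checklist model behind digitized Work Instructions can be sketched as a simple ordered data structure. This is a minimal illustration only; the class and field names here are hypothetical, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One step of a digitized work instruction."""
    text: str
    done: bool = False

@dataclass
class WorkInstruction:
    """A hypothetical SOP modelled as an ordered checklist
    that the device walks the technician through."""
    title: str
    steps: list

    def next_step(self):
        # Return the first incomplete step, or None when finished.
        return next((s for s in self.steps if not s.done), None)

sop = WorkInstruction("Pump inspection", [
    Step("Isolate power"),
    Step("Check seals"),
    Step("Log pressure reading"),
])
current = sop.next_step()   # "Isolate power"
current.done = True         # technician confirms the step on-device
```

After each step is confirmed, `next_step` advances the technician to the next item, which is the essence of the step-by-step guidance described above.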
An authoring system (software and hardware) for creating step-by-step Augmented Reality instructions that help frontline staff do their work. Task guidance or training content can usually be captured on the fly, and the authoring tool is typically template-driven so that non-coders can use it.
This is where the trainee observes 3D objects or equipment, and related guidance, overlaid on the real world. It has some similarities to self-serve Work Instructions, but the differentiator is that the equipment and machinery are simulated when training in Augmented Reality. This is an active experience, in which animations, audio and other cues lead the user through a series of interactions. An advantage over Virtual Reality training is the ability to interact realistically with nearby colleagues in the real world.
Training in Virtual Reality is where the trainee is fully immersed in a simulation of reality. All 360 degrees of what they see is computer generated or 3D modelled, including the background environment, the room, and all the equipment within it. This is an active experience, in which animations, audio and other cues lead the user through a series of interactions. An advantage over Augmented Reality training is the ability to interact with colleagues anywhere in the world, via their VR characters or avatars.
Virtual Reality tours are fully immersive experiences that take the user on a guided journey through a simulation. They are characterized by information pop-ups and other annotations tied to items in the user's field of view. They can combine linear (passive) and interactive elements but are differentiated from Virtual Reality training applications by following more of a narrative arc, or story. Virtual Reality tours are aimed at a more general audience, with more accessible language and delivery than a training scenario.
Augmented Reality tours bear many similarities to Virtual Reality tours, with the major difference being that the background environment is the real world rather than computer generated. The tour therefore takes place in a live setting and synchronizes informational elements to the user's position and direction of gaze. As with a Virtual Reality tour, it can combine linear (passive) and interactive elements but is differentiated from Augmented Reality training applications by following more of a narrative arc. Augmented Reality tours are aimed at a more general audience, with more accessible language and delivery than a training scenario.
Unlike designing or prototyping in traditional 2D PC applications such as AutoCAD, VR allows users to visualize models in three dimensions. In a virtual environment, a user can view creations at full scale and, depending on the platform, interact with the models. Prototyping in VR is often a collaborative process, with multiple users able to view a model simultaneously.
The use of AR for design and prototyping adds context to the creative process. Using binocular, spatially aware AR headsets, the user can manipulate 3D models while remaining aware of the environment in which the model may be used. Interaction and control in these applications may be achieved with physical gestures or some form of PC interaction. Models may incorporate motion, allowing product functionality to be previewed and tested without the expense of physical fabrication.
It is relatively inexpensive and fast to create realistic Virtual Reality renderings of real operational environments such as a manufacturing floor, laboratory, or office space. The virtual space can then be manipulated quickly, accurately and affordably to test out several configurations for equipment, desks, remodels, and so on. Within the virtual environment a user can explore not just variations in configuration but also different surface textures (to imagine paint colors or building materials), the impact of the outside environment (to visualize, for example, how daylight at different times affects the ambiance of a room), and many other features.
With 3D models paired with AR glasses, a user can visualize how new equipment would fit into an existing environment, or consider new configurations for existing equipment (to improve lean optimization in a manufacturing setting, for example). Many AR glasses can scan their environments in real time, removing the need for a separate model of the room. Some workflows let users manipulate models on a desktop in 2D, then preview the changes at full scale, in context, before approving them.
Several full-function XR platforms allow live IoT-generated data to be visualized in AR head-mounted displays. At the most basic level, the user scans a QR code or other 'trigger' located near a machine, device, or location to access the live data feed. At the other end of the spectrum, headsets equipped with sensors and advanced computer vision systems can determine their own location and overlay relevant information directly on the equipment in the room. The data a user sees when looking at a machine, device, or location can be customized based on the user's profile. For example, a technician's profile may deliver live data about the temperature of a unit, while a foreman's profile may deliver live updates about throughput.
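The trigger-plus-profile pattern above can be sketched in a few lines: a scanned QR code identifies a machine's telemetry feed, and the user's role determines which fields the overlay shows. All machine IDs, field names, and role names below are illustrative assumptions, not a real platform's schema:

```python
# Pretend live feed, keyed by the machine ID encoded in the QR code.
TELEMETRY = {
    "mixer-07": {"temperature_c": 64.2, "throughput_uph": 118, "vibration_mm_s": 2.1},
}

# Which fields each user role's profile is configured to see.
PROFILE_FIELDS = {
    "technician": ["temperature_c", "vibration_mm_s"],
    "foreman": ["throughput_uph"],
}

def overlay_data(qr_payload: str, role: str) -> dict:
    """Return only the telemetry fields this role's profile allows."""
    feed = TELEMETRY.get(qr_payload, {})
    return {k: v for k, v in feed.items() if k in PROFILE_FIELDS.get(role, [])}

print(overlay_data("mixer-07", "technician"))  # temperature and vibration only
print(overlay_data("mixer-07", "foreman"))     # throughput only
```

In the same spirit as the technician/foreman example in the text, the filtering happens server-side or on-device before anything is rendered in the headset.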
When operating in high-containment or biologically sensitive environments, it is desirable to reduce footfall in the area. Virtual Reality models of equipment can be integrated with IoT-generated data so that executives, trainees, and other non-mission-critical staff can view the data they need without introducing health and safety risks. The ability to adjust the scale of models in VR means a user can monitor multiple data sources in a smaller space.
A digital twin is a virtual model of a product, service or process. It pairs the real world with the virtual world in a way that allows data analysis and system monitoring to head off problems before they occur, helping to prevent downtime and plan for the future through simulation.
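The "head off problems before they occur" idea can be illustrated with a minimal sketch: a twin object mirrors live readings from its physical counterpart and projects a simple trend to warn before a limit is breached. The asset, threshold, and extrapolation method here are all hypothetical simplifications of what a real twin platform would do:

```python
class PumpTwin:
    """Toy digital twin of a pump: mirrors temperature readings and
    warns when the trend projects past a safe operating limit."""

    MAX_TEMP_C = 80.0  # assumed safe limit for this illustration

    def __init__(self, asset_id: str):
        self.asset_id = asset_id
        self.history = []  # mirrored readings from the physical asset

    def ingest(self, reading_c: float) -> None:
        self.history.append(reading_c)

    def predict_overheat(self, horizon: int = 3) -> bool:
        # Naive linear extrapolation of the last two readings.
        if len(self.history) < 2:
            return False
        trend = self.history[-1] - self.history[-2]
        projected = self.history[-1] + trend * horizon
        return projected > self.MAX_TEMP_C

twin = PumpTwin("pump-12")
for temp in (70.0, 74.0, 78.0):   # rising readings streamed from sensors
    twin.ingest(temp)
print(twin.predict_overheat())    # True: 78 + 4*3 = 90 exceeds the 80 limit
```

A production twin would use far richer physics or data-driven models, but the pattern is the same: live data flows into the virtual model, and the simulation flags trouble early enough to prevent downtime.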