Microsoft has its hands full with augmented reality (AR). The release of the Azure Kinect camera and the HoloLens 2 headset signals Microsoft's intent to become a pioneer in this next big technology.
Microsoft Azure combines several technologies at once. Rather than using every available tool to build a complete user experience for HoloLens or another standalone device, it takes your models and fixes them to a particular physical location. Once Azure has captured that data, you can access it through Google's ARCore or Apple's ARKit.
Microsoft's new AR solutions rely on links that connect the physical and virtual realms; Microsoft calls them spatial anchors. Spatial anchors are maps that lock virtual objects onto the physical world hosting the environment. These anchors provide links through which you can display a model's live state across several devices, and you can connect your models to different data sources, offering a view that integrates well with IoT-based systems as well.
Spatial Anchors
Spatial anchors are intentionally designed to work across multiple platforms. The appropriate libraries and dependencies for client devices can be obtained through package managers such as CocoaPods, letting you work in native languages such as Swift.
You have to configure accounts in Azure to handle authentication for the Spatial Anchors service. While Microsoft currently targets Unity, there are indications that it may support Unreal Engine in the near future.
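As a minimal sketch of what that client-side setup looks like on iOS, the code below follows the naming used by the Azure Spatial Anchors iOS SDK; the account values are placeholders for your own Azure credentials, and exact names may vary by SDK version:

```swift
import ARKit
// Assumes the Azure Spatial Anchors iOS SDK, typically added
// via CocoaPods (pod 'AzureSpatialAnchors').

/// Creates and starts a cloud session bound to the local ARKit session.
func makeCloudSession(for arSession: ARSession) -> ASACloudSpatialAnchorSession {
    let cloudSession = ASACloudSpatialAnchorSession()

    // Tie the cloud session to the device's local ARKit session.
    cloudSession.session = arSession

    // Placeholder credentials from the Spatial Anchors account in Azure.
    cloudSession.configuration.accountId = "<your-account-id>"
    cloudSession.configuration.accountKey = "<your-account-key>"

    cloudSession.start()
    return cloudSession
}
```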
To use this service, you first need to create a suitable application in Azure. Spatial anchors work with Microsoft's existing mobile back-end services, so they can be used alongside those tools and the learning curve does not become too steep.
After you create and run an Azure App Service instance, you can use REST APIs to handle communication between your spatial anchors and client apps.
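For example, a client might share a newly created anchor's identifier through such a back end with a plain HTTP call. The endpoint path and payload below are hypothetical, standing in for whatever REST API your App Service exposes:

```swift
import Foundation

/// Posts an anchor's identifier to a hypothetical anchor-sharing
/// endpoint hosted on Azure App Service.
func shareAnchor(identifier: String, completion: @escaping (Error?) -> Void) {
    // Placeholder URL; replace with your own App Service endpoint.
    let url = URL(string: "https://<your-app>.azurewebsites.net/api/anchors")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(["anchorId": identifier])

    URLSession.shared.dataTask(with: request) { _, _, error in
        completion(error)
    }.resume()
}
```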
Basically, spatial anchors can be seen as a map of the environment that hosts your augmented reality content. That may involve having users scan an area with an app before its corresponding map can be created. HoloLens and similar AR tools can build this map automatically; in other AR environments, you may have to process and analyze an area scan to construct it.
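Once a map exists, a client can look up previously saved anchors by their identifiers and let the session re-establish them against the scanned space. The sketch below assumes the watcher API of the iOS SDK:

```swift
/// Asks the cloud session to locate previously saved anchors by ID;
/// the identifiers would typically come from your sharing back end.
func locateAnchors(in cloudSession: ASACloudSpatialAnchorSession,
                   identifiers: [String]) {
    let criteria = ASAAnchorLocateCriteria()
    criteria.identifiers = identifiers

    // The session fires anchor-located callbacks as matches are found
    // against the device's current view of the environment.
    cloudSession.createWatcher(criteria)
}
```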
It is important to note that anchors are generated by the application's AR tools, after which Azure saves them as 3D coordinates. An anchor may also carry extra information: you can use properties to describe how it is rendered and how it links to other anchors.
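A sketch of that flow, again assuming the iOS SDK's naming; the property keys here are arbitrary examples rather than a fixed schema:

```swift
import ARKit

/// Wraps a local ARKit anchor in a cloud anchor, attaches descriptive
/// properties, and saves it to Azure.
func saveAnchor(_ localAnchor: ARAnchor,
                to cloudSession: ASACloudSpatialAnchorSession) {
    let cloudAnchor = ASACloudSpatialAnchor()
    cloudAnchor.localAnchor = localAnchor

    // Extra information the app can read back when the anchor is located,
    // e.g. which model to render and which anchor comes next.
    cloudAnchor.appProperties = ["model": "pump-station",
                                 "next-anchor": "<id-of-linked-anchor>"]

    cloudSession.createAnchor(cloudAnchor, withCompletionHandler: { error in
        if let error = error {
            print("Failed to save anchor: \(error)")
        }
    })
}
```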
Anchors do not have to be permanent: you can set an expiration date on a spatial anchor, and once that date has passed, users can no longer see it. Similarly, you can remove anchors outright when you no longer want to display their content.
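Both operations are simple in the client SDK. The sketch below assumes the same iOS naming as above; the exact Swift name of the delete call may differ between SDK versions:

```swift
import Foundation

/// Gives an anchor a one-week lifetime; once the date passes,
/// the service no longer returns it to clients.
func expireInOneWeek(_ cloudAnchor: ASACloudSpatialAnchor) {
    cloudAnchor.expiration = Date().addingTimeInterval(7 * 24 * 60 * 60)
}

/// Removes an anchor immediately when its content should no longer
/// be shown. (Method name assumed from the SDK's Objective-C selector.)
func removeAnchor(_ cloudAnchor: ASACloudSpatialAnchor,
                  from cloudSession: ASACloudSpatialAnchorSession) {
    cloudSession.delete(cloudAnchor, withCompletionHandler: { error in
        if let error = error {
            print("Failed to delete anchor: \(error)")
        }
    })
}
```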
The Right Experience
One of the best things about spatial anchors is the way they support in-building navigation. With linked anchors and an appropriate map, you can build navigation between them. To guide users, your app can include hints such as arrows that indicate the distance and direction of the next anchor. By linking and placing anchors thoughtfully in your AR app, you give users a richer experience.
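The way-finding hint itself is ordinary vector math. The helper below is a hypothetical sketch: given the user's position and the next anchor's position in world coordinates, it returns the distance and a unit direction for placing an arrow:

```swift
import simd

/// Distance and unit direction from the user to the next linked anchor,
/// both expressed in the AR session's world coordinates.
func guidance(from userPosition: SIMD3<Float>,
              to anchorPosition: SIMD3<Float>)
    -> (distance: Float, direction: SIMD3<Float>) {
    let offset = anchorPosition - userPosition
    let distance = simd_length(offset)
    // Avoid dividing by zero when the user is standing on the anchor.
    let direction = distance > 0 ? offset / distance : .zero
    return (distance, direction)
}
```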
Getting anchor placement right matters: good placement gives users an immersive experience, while poor placement can disconnect them from the app or game. According to Microsoft, anchors should be stable and tied to physical objects. Think about how they appear to your users, consider all the possible viewing angles, and make sure that other objects do not obstruct access to them. Using initial anchors as well-defined entry points also reduces complexity, making it easier for users to enter the environment.
Rendering 3D Content
Microsoft plans to introduce a remote rendering service that uses Azure to create rendered images for devices. Constructing a convincing environment takes a great deal of effort and detail, and while hardware like HoloLens 2 offers a more capable platform, delivering rendering in real time remains a complicated prospect. You need high-bandwidth connections along with a remote rendering service, so you can pre-render high-resolution images and then deliver them to users. The strategy also scales across multiple devices: the rendering runs once, after which the result can be used several times.
Devices fall into two types: untethered and tethered. Untethered devices with low-powered GPUs cannot process complex images; as a consequence, they limit image content and deliver fewer polygons. Tethered devices, by contrast, can make full use of the GPUs installed in modern, robust workstations and are able to display fully rendered imagery.
GPUs have been prominent in the public cloud for a while now. Most of Nvidia's GPU support in Microsoft Azure focuses on large-scale, cloud-hosted compute and CUDA, but Azure also provides NV-class virtual machines that can be used for cloud-based visualization apps and render hosts.
Currently, Azure Remote Rendering is still a work in progress, and its pricing has not yet been announced. By pairing it with devices like HoloLens, however, you can offload complex, heavy rendering tasks from portable devices while still delivering high-quality images.