On Sunday, Microsoft unveiled plans for its second-generation HoloLens headset, announcing that the next design of the augmented-reality glasses will incorporate a dedicated AI coprocessor. That coprocessor will allow the device to analyze sensory data independently, including what a user sees and hears, without sending that data off to the cloud. This will save processing time, making the device faster and more responsive while still preserving the user’s mobility. (For a deeper look at this type of device-native AI technology, read this Bloomberg piece.)
Microsoft’s news also came just a few days after Google’s announcement of a new Glass Enterprise offering. Lisa Eadicicco goes over the differences between the devices for Time:
While the basic concepts behind HoloLens and Google Glass overlap, in execution they couldn’t be more different. Google Glass is meant to be as physically unobtrusive as a literal pair of glasses, noticeable only when someone needs it for a specific task. It displays a small virtual screen above the wearer’s eye, which can be glanced at without disrupting other visual tasks. The new version is even friendlier, clipping onto existing eyeglasses and making the technology more accessible for those who need prescription glasses or protective eyewear in their jobs (though it must remain in wireless range of a smartphone to work properly).
HoloLens, by contrast, is much more immersive, displaying larger graphics that fill the wearer’s field of view. And unlike Glass, it’s a fully self-contained device, requiring no smartphone connection or virtual-reality-style computer tether to operate. All of HoloLens’s necessary computing components are baked into the headset.