First thoughts, Apple Vision Pro
Amazing UX and visual quality but weight & comfort still a blocker
In high school in the 1980s I stood in a long line at a defense technology conference (I don’t remember how I got in) to try on a $40,000 Kaiser Electro-Optics military HMD. Around 1989 I travelled to UC Berkeley to try the first ‘Dactyl Nightmare’ VR videogame. I got the Oculus ‘Valve’ demo the day after Mark Zuckerberg got his. I flew to Taiwan to try the HTC Vive before the hand controllers had been finished. So, yeah, I’ve tried a lot of these things.
Aaron Frank of Singularity University - a friend from the IRL415 Lab community - proposed an investigative reporting field trip: go stand in line at one of the SF Apple stores and get the demo on the first day. So we showed up at 7:45am (thankfully the rain had stopped) and found about a dozen people outside, most of whom were actually picking up pre-ordered AVP devices. We were among only a handful of people there just to get the demo.
Next week someone who actually bought one of these things is bringing it to the lab, and I will have to wait until then to experience the ‘persona’ (AKA avatar) of myself or someone else. The ability to communicate with someone else wearing another device is ultimately at the core of any kind of broad adoption, and the in-store demo was limited and ‘single-player’ - no communication with others. Here is what we learned from what we were able to try:
Great passthrough quality, but stay clear of fluorescent lights!
Passthrough video had none of the warping and distortion that you see in a Quest 3, which was truly next level. Acuity was better than anything I’ve tried yet, but still not quite equivalent to your regular vision. There is a smoothing they do when you hold still that upscales the resolution, which is a nice effect. Fluorescent lights, however (and the SF store was filled with them), flicker badly in the passthrough - I would worry that those with epilepsy might actually need to avoid them altogether. Hopefully LED lights don’t flicker.
UX: ‘Gaze-Clicking’ is fast and magical
To click on something, you look at it and tap your finger and thumb together. You can rest your hand in your lap while doing this, which is great. Sometimes the gesture is missed, but not often, and I suspect you get better quickly. It feels unusual at first to point with your gaze, but once you get the hang of it, you can open and close screens incredibly fast. Many operations feel substantially faster than using a mouse, which is pretty amazing. For example, selecting images from photo tiles to enhance and then close feels telepathic. I can imagine application developers coming up with extremely compelling ways to interact smoothly with UX using this approach.
8K/3D Video is indeed impressive
There are a couple of very well-recorded demos of 180-degree panoramic videos that are truly amazing and will certainly make people’s jaws drop. As I’ve written and talked about at length before, video consumption is very often enjoyed with other people, so I don’t think that any level of video quality can make up for not being able to share the experience with real people beside you. But… this was much better than I expected. Better than an IMAX theater.
Heavy and Uncomfortable
I found myself at one point actually resting my chin in my hand with my elbow on the table because… it’s that heavy. And I am a large, strong person used to wearing these things. It is way too heavy for prolonged use. Because of the single strap, you have to tighten it against your face really hard, and it felt more uncomfortable at the contact points with your face than the Quest. When Aaron took his AVP off, the red ring on his face was comical… I’ll leave it to him to decide whether to share those photos.
Summary
While a stunning demo, I don’t think this device could get wide everyday consumer adoption even at $500, because of both the weight and the sense of isolation from those around you. We are not going to walk down the street wearing digital goggles anytime soon (and I think culturally that is a huge positive). As with other devices like the Quest, Magic Leap, and Varjo, I predict it will find use in narrow verticals like industrial training. I do think the UX, and in particular using the gaze as a pointer, shows a path to genuinely new ways of interfacing with computers, once the weight can be reduced by at least 2X. But there is no short-term line of sight to how that level of weight reduction could be achieved, given that we can be sure Apple tried very hard, and they are excellent industrial designers. And as I mentioned earlier, the most vital capability for healthy usage - being able to easily communicate with other people while wearing it - remains to be explored.
I have had one and been using it since launch day (Feb 2), and so far it has exceeded my expectations. Totally agree about the weight - the device is full of electronics, sensors, etc., and the aluminum frame with the glass front piece really adds up.
Overall it has exceeded my expectations vs. the other HMDs I have. The ability to control it with eye tracking and hand gestures is almost magical in its implementation. The displays are amazing and watching video content on it is a joy.
One fun thing - I loaded up the Second Life Mobile Alpha Viewer using TestFlight and was able to log on and visit my Museum of Computing History in-world. Since it was the iPadOS version, the controls were a bit "interesting" (using eyes/hands spatially to drive the virtual touch pad controller), but I was able to move around and look around without much issue.
It is not for everyone - it is a minimum of $3.5K ($4.5K if you add some storage, prescription lenses, and AppleCare coverage), and there are tradeoffs in the implementation. But as a view into the things Apple is thinking about and working on, it is pretty exciting for what it is.
Great summary. What are the components of the headset that are the main drivers of weight and/or the most tricky to miniaturize?