In the future where VR/AR/MR technology is highly advanced, would we spend our time more in reality or inside the virtual world?

This is a long-winded answer to a Quora question, reproduced here.

TLDR: They will blend together into an always-on digital overlay that you will live in 24/7. And you’ll love it.

It’s actually much closer than you think. Very soon we will have headsets that are lightweight, long-lasting, comfortable and fashionable. You already have a high-powered computing device with always-on connectivity in your pocket right now – your cell phone. In the short term we will have headsets that are either tethered or wirelessly connected to your phone, and they will serve as your VR/MR display. There’s already research on contact lenses with display technology built in, so it’s likely that headsets will eventually disappear altogether.

It’s not going to be VR so much – VR is a special-use case – but MR will be something you get used to having constant access to. Just like most people don’t actually use their cell phones as phones but as more general-purpose computing and communication devices, MR will just blend into the background of your everyday life.

Let’s just take a look at one such “killer app” that will come into use – the “Mirror World” and what it enables.

(TLDR: Mirror World is (AFAIK) the “rest” of the world – the “inside” that Google Maps doesn’t cover plus a bunch of variously-sourced additional location-specific, real-time AR metadata)

Your phone’s GPS can place you within 3 meters, and as the Mirror World comes online your location will be derived with far greater accuracy. Your high-bandwidth, always-on internet connectivity will let you interact with a plethora of Mirror World databases providing a wealth of information about the world, including information on the people and places around you. This includes the insides of buildings and other locations of interest.

Governments, organizations and individuals will be able to place virtual anchors at real-world locations that provide pop-up information about that particular location.

This can be anything from your doctor providing route information to their office – which will show up as a series of directional markers down the street (as you currently see in Maps), through the building, into the elevator to the correct floor, then with waypoints right to their front door – to serving up nearby PokéStops and Pokémon.

All this information will be filtered by you selecting which AR layers you’re interested in, and it will show up as information geotagged to the real world in your MR headset.

We already have the various “Maps” apps to get you to a location on the outside; the Mirror World will cover the last bit of information “inside”. For example, the Waze app’s road database is constantly updated and refined by actual users’ data as they drive a route – location data and crowd-sourcing will soon map the rest of the world in constantly-updating detail, available for you to use. Niantic already has a global database they use for Pokémon Go. There are more than a few startups (like 6d.ai) providing the tools to map the real world – this tech race has already started.

I’m going to gloss over all of the other compelling use cases, from instantly knowing about email, IMs from friends and video conferencing to having an AI personal assistant, etc. MR will be your interface to the digital world, and as the real world gets digitized it will all start blending together.

So it’s not a question of spending time “in” VR or MR or not – it will simply always be there, and you’ll dial it up or down as you like, just to differing degrees.

Posted in Augmented Reality, Virtual Reality, VR/AR/XR

The Limit: VR Review

TLDR: Ugh – pass. Low budget, breaks a lot of VR film-making rules.

Filmmaker Robert Rodriguez has a debut VR film called The Limit – a 20-minute action short starring Michelle Rodriguez and Norman Reedus. I’m not going to cover the plot; you can watch the trailer here. Unfortunately, while I had high hopes for Rodriguez to pull it off, he fails miserably by falling back on traditional techniques you’d see in 2D cinema – things that absolutely DO NOT work in VR. I fail to see why this was done in VR at all.

I’ll ignore the sometimes cheesy acting, the abominable (with a few exceptions) SFX, and why-oh-why do the computer screens have duplicated, non-animated images pasted on them? Even the plane cockpit has two identical, static images on the displays. Would it have killed you to hire a web or XAML programmer for a week to knock them out? I will say the acting by Michelle Rodriguez is OK for the most part (Reedus’ part is ultra cheesy and he’s on screen for too short a time to form any opinion), and there are two action scenes that are acceptable. I fail to see why groups of armed bad guys run up to you without shooting just to get punched out. OK OK, what’s good –

The Good

  • Michelle Rodriguez’s acting (mostly).
  • It’s a 180º display that is mostly well used. (Thank God)
  • It’s a VR film – we need more of these.

The Bad

  • It’s B-movie quality
  • The 3D stereo is almost non-existent – there’s really NO depth effect visible
  • Traditional Cinema Techniques:
    • Abrupt cuts between scenes
    • 1st-person viewpoint – until suddenly it’s not
    • Focusing the action in the center of the screen
    • Controlling where I’m looking
  • Head-bob? Really? Head-bob? Thankfully it gets dropped quickly.
  • Why was this done in VR again?

OK – the 180º display was a good choice. But next time try to let the action unfold within that 180º arc. DO NOT move my head for me. If you need to do it, do it gradually – or better yet, draw my attention and have me turn my head. HINT – you CAN rotate the video sphere in the HMD – it’s a computer program and (for most HMDs) you DO know how much my head is rotating. You can then gradually recenter it while I’m focusing on the action. If you are subtle about it, I probably won’t notice you’re doing it. Take some notes from Hardcore Henry on how to film in 1st person. And study up on redirected walking to figure out how to rotate the viewpoint without the user knowing about it.
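For the curious, here’s roughly what that gradual recentering could look like. A minimal Python sketch of the idea (the real thing would live in your player’s per-frame update loop); the gain and the head-motion threshold are made-up values you’d have to tune, not numbers from any shipping player:

```python
import math

def recenter_step(video_yaw, target_yaw, head_yaw_rate, dt,
                  gain=0.1, min_head_rate=math.radians(20)):
    """Nudge the 180-degree video sphere toward target_yaw, but only while
    the viewer's head is already turning, so the correction hides inside
    their own motion (the same trick redirected walking relies on).

    video_yaw      current yaw of the video sphere (radians)
    target_yaw     where we'd like 'front' to be (radians)
    head_yaw_rate  the viewer's measured head yaw rate (radians/sec)
    dt             frame time (seconds)
    gain           fraction of the head's own rotation we steal per frame
    """
    # shortest signed angle from current to target
    error = math.atan2(math.sin(target_yaw - video_yaw),
                       math.cos(target_yaw - video_yaw))
    if abs(head_yaw_rate) < min_head_rate:
        return video_yaw                       # head is still: don't move anything
    # cap the correction at a fraction of the head's own motion this frame
    max_step = gain * abs(head_yaw_rate) * dt
    step = max(-max_step, min(max_step, error))
    return video_yaw + step
```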

Oh, and NEVER use head-bob in VR. NEVER EVER.

Re: the lack of stereo. Stereo is IN FACT the entire reason to film this in “VR” (it’s not really VR – it’s a 180º, slightly stereo video). The folks at STXsurreal really need to do a better job of filming in stereo.

A still from the video

So this is a left/right video frame (30fps, 2304×4096 H.264) – a typical Unity video. Stare at the image – stare hard. You should be able to see some left-right separation between objects, especially the closer ones. I can’t see any, and I used to do this for a living. I would call BS on this, but there were frames where I did see some separation – just not on the closer objects. Watching the film I never got the impression that Michelle Rodriguez was closer to me than the background. This is kinda the point of stereo, guys. I could forgive all else if it made me feel like I was there and not just watching a flat video. Fail.
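If you’d rather check a claim like this numerically than by squinting, one crude approach is to split a side-by-side frame into its two eye images and find the horizontal shift that best aligns them – a best shift of roughly zero everywhere means essentially no stereo separation. A rough sketch, assuming the frame has already been decoded into a 2-D grayscale numpy array (a single global match like this is a simplification, not how a real stereo analyzer works):

```python
import numpy as np

def estimate_disparity(frame, max_shift=40):
    """Crude stereo check for a side-by-side frame (left half = left eye).

    Tries horizontal shifts of the right-eye half and returns the one that
    best matches the left-eye half, in pixels.
    frame: 2-D (grayscale) numpy array, e.g. decoded with imageio or OpenCV.
    """
    h, w = frame.shape
    left = frame[:, : w // 2].astype(float)
    right = frame[:, w // 2 :].astype(float)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.mean((left - np.roll(right, s, axis=1)) ** 2)
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift   # dominant horizontal offset between the two eyes
```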

Not worth the discount $4.99 I paid, definitely not worth the $10 list price.

Posted in FILM, Virtual Reality

The Great C: VR Review

TLDR: It’s great! Buy it for one of the best passive VR experiences you can get today.

When I reviewed Google’s 360° action flick Help, it became the standout example of horrible 360° filmmaking – basically trying to use conventional film techniques in a 360° video and getting pretty much everything wrong. I am happy to say that Secret Location’s implementation of The Great C is a fantastic example of how to do it right!

The prolificness of author Philip K. Dick is well known. He cranked out many, many sci-fi stories of varying length and quality during his time as a writer. The Great C is a short story, about 10 pages, written in 1952. The Wikipedia summary is:

The story is about a human tribe living in the distant future in a post-apocalyptic world, where a computer called the Great C has destroyed the world. Each year, the tribe sends a young man with three questions to the Great C, and if the computer cannot answer the questions, it will leave the tribe alone. On the other hand, if the computer can answer, the young man will be killed by the computer.

A young man from the tribe is sent to the Great C, with three questions prepared by the wise men of the tribe. He reaches the Great C in a destroyed building in the ruins of a city. The man asks his questions: first, he asks where rain comes from. The Great C answers with ease. His two other questions (why doesn’t the sun fall out of the sky? and how did the world come into existence?) are also easily answered by the computer. The Great C then absorbs the man and uses his body as energy, awaiting next year and the next young man. Meanwhile, the tribe prepares for next year, coming up with difficult questions.

Your typical happy PKD story. It’s perfect grist for VR storytelling. How did Secret Location do?

Normally when I review something like this I’d pick apart the ineptness of the director and the horrible use of “VR storytelling”. Surprisingly, there’s none of that here – plus they actually told a great story as well (if a somewhat predictable one). They’ve made great use of some of the traditional techniques of cinema, including sometimes taking over camera motion – and even including a long zoom – things you typically don’t see in well-done VR. But Secret Location did a lot of experimentation and kept what worked, so they were able to adapt some traditional film techniques into VR without nauseating the user – kudos to Secret Location for breaking ground and showing folks how it can be done.

The techniques used here – fading to black on viewpoint changes, the scene gradually coming into focus during a viewpoint flyover, excellent lighting and shading, 3D audio – set the mood and carry the audience along rather than tossing them into a scene abruptly.

Mood is very important to the story and is maintained throughout. Plus there’s a “comfort” mode for those who have trouble with fully immersive VR. Take the scene below – you can tell it’s got great ambiance – but it’s quite different experiencing it in VR, where you can look around and see destruction all around you and hear all the sounds associated with the scene. It’s quite the effective technique for storytelling.

Despite taking some liberties with the story – which generally improved it for VR storytelling – they presented a compelling, magical and somewhat reinterpreted version of the story, making excellent use of the medium and breaking almost no cardinal rules of VR. And when they did, they did it right. In fact, I’d say it’s the best use of a 360º “video” I’ve ever seen. The plot is great, the story poignant, the dialog compelling, the voice acting great, and the soundtrack is composed by Junkie XL. The experience in VR is almost magical – it’s that good.

Written in Unreal 4.19 (in the current version of the app on Steam), it’s about a 35–40 minute experience. It’s fully 360, but most of the action takes place in front of you, so there are no awkward head swings to keep the action in view. But if you want, you can follow minor action that way – it’s a full-on 3D VR application, not a video! Though it keeps pretty tight control over your viewing location, it does let you look around – and there’s lots to see. It’s one of the best $5.99 you can spend on a VR experience. Available on Vive, Rift, and (supposedly) PSVR.

Posted in FILM

Eye Tracking – what it means for XR Developers and Designers

Technical Overview: Eye Tracking

Eye tracking gives you the direction of gaze for each eye, plus detection of eyelid closure (i.e. blinks). With a little mathematics it’s possible to turn this into a view direction in the rendered scene and determine which objects the user is gazing at. Additionally, humans typically do not slowly track their eyes to a new direction, but tend to move both eyes simultaneously in a rapid shift of gaze direction called a “saccade”. Not only are saccades incredibly fast, but the brain also temporarily blinds you while you make them – a phenomenon called “saccadic masking” – which is why you don’t see a blurry sweeping image when you shift your gaze. Instead, the brain replaces the view during the saccade with a still image of the new direction.
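As a rough illustration of how a developer might detect a saccade from raw gaze samples: compute the angular velocity between consecutive gaze directions and compare it against a threshold. A minimal sketch – the 300°/s figure is a commonly used ballpark, not a value from any particular SDK:

```python
import math

SACCADE_DEG_PER_SEC = 300.0   # illustrative detection threshold; tune per tracker

def angle_between(a, b):
    """Angle in degrees between two unit gaze vectors (x, y, z)."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

def is_saccading(prev_gaze, curr_gaze, dt):
    """True if the gaze moved fast enough between samples to count as a saccade.
    prev_gaze, curr_gaze: unit direction vectors from the eye tracker
    dt: time between samples in seconds"""
    if dt <= 0:
        return False
    return angle_between(prev_gaze, curr_gaze) / dt > SACCADE_DEG_PER_SEC
```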

Eye tracking also provides positional eye information – that is, where each eye sits, usually in relation to the HMD. This opens up the possibility of unique user identifiers. (Eye tracking hardware requires a driver to talk to the hardware and hand you the information, but for security reasons the data is filtered to provide only eye-tracking information – actual eye images are never shared outside the driver.) Provided with this positional information it’s possible to adjust the inter-pupillary distance (IPD) for each user, so that the lenses can be moved to that user’s optimal position.
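Measuring the IPD from that positional data is then just the distance between the two reported eye positions. A minimal sketch, assuming the tracker reports per-eye (x, y, z) positions in millimetres relative to the HMD (the example numbers below are made up):

```python
import math

def measure_ipd_mm(left_eye_pos, right_eye_pos):
    """Inter-pupillary distance from per-eye positions reported in the
    HMD's coordinate frame (assumed here to be in millimetres)."""
    return math.dist(left_eye_pos, right_eye_pos)

# e.g. a tracker reporting (x, y, z) per eye, in mm, relative to the HMD:
ipd = measure_ipd_mm((-31.5, 0.0, 12.0), (32.0, 0.1, 12.2))   # ~63.5 mm
```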

Some tracking solutions (like the popular Tobii) also offer pupil dilation – offering up yet more possibilities to gauge the user’s interest.

In the vernacular of eye tracking, head-mounted displays and directional trackers/input devices, this position-direction (and sometimes implied “up” vector) triad is called a “pose”. Just as you need to know the HMD’s pose to render an XR scene correctly, you (or the game engine) need to orient the eye position and gaze relative to the HMD to get them into world coordinates. But once you have that, there are all sorts of possibilities for mischief.
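Here’s a small sketch of that transform: composing the eye’s local position and gaze direction with the HMD’s world pose to get a world-space ray you can cast into the scene. It assumes the HMD pose is given as a position plus a unit quaternion; any engine will expose an equivalent helper, so treat this as illustration rather than an API:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def gaze_ray_world(hmd_pos, hmd_rot, eye_pos_local, gaze_dir_local):
    """Turn an eye pose reported in HMD space into a world-space ray.

    hmd_pos        HMD position in world space (3-vector)
    hmd_rot        HMD orientation as a unit quaternion (w, x, y, z)
    eye_pos_local  eye position relative to the HMD
    gaze_dir_local gaze direction relative to the HMD (unit vector)
    Returns (origin, direction) for casting into the scene.
    """
    origin = np.asarray(hmd_pos) + quat_rotate(hmd_rot, np.asarray(eye_pos_local))
    direction = quat_rotate(hmd_rot, np.asarray(gaze_dir_local))
    return origin, direction / np.linalg.norm(direction)
```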

TLDR:

  1. Eye tracking will provide a view direction – typically per eye and averaged – and it might be up to you to transform that into world space. You can then use this to know what the user is looking at in your scene.
  2. Eye positional information can be used to measure (and possibly adjust) the IPD, perhaps retrieving a user’s profile if the IPD is distinctive.
  3. While the user saccades they are effectively blind – the brain shows the image of the new direction instead of a transitioning one. (Yes, you read that right.)
  4. Same with blinks – which are a little longer.
  5. You might also have the option of gauging the user’s interest by monitoring pupil dilation.

Practical/Ethical Options

So suddenly there’s a whole realm of possibilities open to the XR designer.

There are the “benign/helpful” ones, like:

  • Treating a long gaze on an object as an “open” or “activate” (sketched in code after this list)
  • Fusing with foveated rendering to use high quality rendering only where the user is looking (saving both GPU effort and energy)
  • Gauging user interest by creating a “gaze” heat-map of the user’s view.
  • Making sure the user has “seen” an item of interest (like an alert) in the scene – and possibly using that fact as an acknowledgement instead of a “Press OK”
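As an example of the first item above, here’s a rough dwell-to-activate sketch: accumulate time while the gaze ray keeps hitting the same object and fire once a threshold is crossed. The one-second dwell time and the object-ID interface are assumptions for illustration, not any particular SDK’s API:

```python
class DwellActivator:
    """Treat a sustained gaze on one object as an 'activate'.

    Feed it the ID of whatever the gaze ray currently hits (or None) each frame."""

    def __init__(self, dwell_seconds=1.0):
        self.dwell_seconds = dwell_seconds
        self.current = None
        self.elapsed = 0.0

    def update(self, gazed_object_id, dt):
        if gazed_object_id != self.current:
            # gaze moved to a different object (or to nothing): restart the timer
            self.current, self.elapsed = gazed_object_id, 0.0
            return None
        self.elapsed += dt
        if self.current is not None and self.elapsed >= self.dwell_seconds:
            self.elapsed = 0.0          # re-arm so we don't fire every frame
            return self.current         # caller treats this as "activate"
        return None
```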

Then there are the “creative” ones:

  • Redirected walking is an opportunity to alter the topology of the virtual space compared to the real – either to make it seem virtually larger or to adjust the user’s trajectory through the real space – perhaps avoiding obstacles or other users.
  • Dynamically “adjusting” the scene in some way – either to bring attention to an object the user isn’t currently looking at or to remove or alter some object they aren’t currently looking at.
  • Changing the scene topology when they aren’t looking at it. This is a bigger version of redirected walking – you can alter the layout of the space itself while the user’s gaze is elsewhere.

Redirected walking can be used in many ways – either scaling the amount of virtual head rotation versus actual (increasing or decreasing it as desired) or adding in extra head rotation while the user is moving, to “guide” them in a desired direction – it’s amazing how much you can get away with.
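A bare-bones version of that rotation trick might look like the following: scale the user’s real head rotation when applying it to the virtual camera, and inject a little extra yaw per metre walked to curve their path. The gains are illustrative – roughly in the range the redirected-walking literature reports as hard to notice – and would need tuning per experience:

```python
import math

def redirect_yaw(virtual_yaw, real_yaw_delta, distance_walked,
                 rotation_gain=1.1, curvature_rad_per_m=0.045):
    """One frame of a simple redirected-walking controller.

    real_yaw_delta       how much the user's head actually rotated this frame (radians)
    distance_walked      metres walked this frame
    rotation_gain        >1 amplifies real turns, <1 dampens them
    curvature_rad_per_m  extra yaw injected per metre walked (steers their path)
    """
    virtual_yaw += rotation_gain * real_yaw_delta           # scaled rotation
    virtual_yaw += curvature_rad_per_m * distance_walked    # gentle curvature while moving
    # wrap to [-pi, pi] so downstream code sees a sane angle
    return math.atan2(math.sin(virtual_yaw), math.cos(virtual_yaw))
```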

Questionable Options

As we strap more computing power to our bodies, we expect this power to be used to make the experience better. Unfortunately it also means that we’re handing over more intimate physiological information to be processed. This can be used for good or evil.

Scenario 1:

Imagine that you’re creating a location-based VR group experience – say, a derelict alien spaceship. You have a physical location with a virtual landscape overlaid on it. Since you control the audio as well as the visuals, you let the group start off down a dark and creepy hallway – they are chatting it up, keeping their spirits up as they move as a group down the hallway.

You decide it’s time to up the excitement – you pick a subject and trigger an alternate experience for them, splitting them off from the main group. Using various redirected walking techniques you slowly guide the victim down an alternate physical route. Meanwhile, everyone still seems grouped together – they can all still clearly hear each other, so visually and audibly nothing appears to have changed – but the main group now only has the victim’s avatar, and the victim is actually walking with a virtual copy of the group.

You spring your trap.

The group suddenly sees the victim attacked and killed, with lots of screaming and giblets (remember, you own both audio and visuals, and the two groups are now on separate rides), while the victim sees one of the group attacked in a similarly horrible manner. Hilarity ensues.

Scenario 2:

You’re playing an XR poker game and manage to scrape the other players’ pupil dilation values – giving you a quantitative, real-time measure of their interest in the various cards. Seems like a profitable effort.

Scenario 3:

You’re selling ad time on your XR platform, and one of the things you provide is users’ physical measurements, including saccade times, blink measurements and pupil dilation. Some of the fun facts you can correlate with this information include:

  • Above-average saccade times can indicate a brain condition or the influence of drugs.
  • Above-average pupil dilation can indicate either a brain condition or the influence of drugs.
  • Reduced upwards saccadic movement can indicate that the user is elderly.
  • Excessive blinking can indicate the onset of a stroke, Tourette’s syndrome or some other disorder of the nervous system.
  • Blink rates for females are usually higher than for males.
  • Blink rates for females on oral contraceptives are 32% higher than for those not on oral contraceptives.

New market opportunities!

Posted in Uncategorized, Virtual Reality, VR/AR/XR

Ray Tracing is here! (and why it won’t matter for a while)

Nvidia brings the first ray-tracing-capable video cards to market.

Nvidia announced their Turing-based video cards – the professional Quadro RTX 8000, RTX 6000 and RTX 5000 GPUs at Siggraph, and their consumer-level cards, the GeForce RTX 2070, RTX 2080 and RTX 2080 Ti GPUs, at Gamescom in Germany (you can read about them here) – with the promise of real-time ray tracing performance for visual effects software and games. They neglected to share any benchmarks…

So let’s talk about the professional cards and what they mean. Before you go out and buy one: they aren’t available till October, and the three versions range in cost from about $6,000 to about $10,000. Still, there are some cool things you can do with them.

Epic/Nvidia/ILMxLabs recently demo’d the Unreal Engine showcasing some ray-traced video. Of course Captain Phasma in her chrome armor makes an excellent reflective surface.

And while this is a cool tech demo, it was running on a $60,000 NVidia DGX Station (on sale for $49,000 for a limited time!). Here’s one.

And it was rendered at 24 fps. And even then some of the scene was rasterized and not ray-traced – why do you think it took place in an elevator with matte walls? Maybe they could redo it on a brand new DGX-2 (with 16 Tesla V100 GPUs – a $400,000, 350 lb behemoth – or just think of it as 5 Tesla Model Xs…)?

OK, so while it was brute-forcing the ray-tracing aspect, overall it was a nice demo. Though IMHO the best thing about it was the Imperial March elevator music 🙂

What’s this Ray Tracing thing?

Ray tracing is about how light bounces around (simple explanation video here). Right now (and for the last 40 years) game engines have rasterized everything: surfaces compute their own lighting based upon where the camera is, so things like reflections, transparency, shadows, ambient occlusion, etc. have to be faked.

Ray tracing, simply put, lets light rays bounce around the scene, and some of them make it to the camera. It’s actually a very simple method that just happens to be incredibly computationally expensive. But you get realistic results, and things like shadows, transparency, reflection, refraction, etc. all just fall out. The Battlefield demo from the Nvidia twitch stream shows off some of the differences.
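To make “follow the ray and see what it hits” concrete, here’s a toy sketch: trace one camera ray against a single sphere and shade the hit point. It’s nothing like a production path tracer (no bounces, no materials, no acceleration structures), just the core intersect-and-shade step that rasterization replaces with per-surface approximations; the scene values are made up:

```python
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Distance along the ray to the first hit on a sphere, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c                      # direction is unit length, so a == 1
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def shade(origin, direction, center=np.array([0.0, 0.0, -3.0]), radius=1.0,
          light_dir=np.array([0.577, 0.577, -0.577])):
    """Trace one camera ray, shade the hit with simple diffuse lighting.
    A full tracer would now spawn shadow/reflection rays from the hit point."""
    t = ray_sphere(origin, direction, center, radius)
    if t is None:
        return 0.1                              # background brightness
    hit = origin + t * direction
    normal = (hit - center) / radius
    diffuse = max(0.0, np.dot(normal, -light_dir))
    return 0.1 + 0.9 * diffuse

# one ray straight down -z from a camera at the origin: hits the sphere, ~0.62
print(shade(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0])))
```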

So does this matter? Not presently. Games (and most VFX tools) use OpenGL or DirectX – the rasterization interface between the game and the hardware. No ray tracing. The whole GFX pipeline will have to be rewritten. Exciting stuff, but it’s not going to happen overnight. Or next year. Or the year after that…

So what was the reason behind this announcement? When I was working at a VFX company we wrote a custom high-resolution video capture & playback plugin. The goal was to be able to use the Unreal engine to render an ultra-high-quality video of the scene – as opposed to the usual Blender or Autodesk pipeline. Much faster to iterate.

So, what you see above is intended for VFX houses, not games. Epic (which demonstrably HAS rewritten their GFX pipeline for ray tracing, reportedly using Microsoft’s new DXR (DirectX Raytracing) framework, an API working through the current DirectX 12) and Nvidia realize that this is a huge market for them, and they are chasing it with all their might. So this is great. But I’m not going to buy a $6,000 video card for the few games that will have better visual effects in them. VFX houses, on the other hand – if you can promise they can ditch their render farms AND get faster turnaround AND dump their very expensive site licenses for VFX tools in favor of a free rendering engine, they will line up. Ray tracing support in Unreal will supposedly show up in version 4.22, so about Q1 2019.

Games and Virtual Reality

Nvidia also announced their consumer-focused RTX cards – these are also ray-tracing capable but are designed for gamers, and are more modestly priced from $500 to $1,200. They are claiming a 6x improvement – again with no benchmark data, so we will see. They go on sale Sept. 20th. While the ray tracing is nice, I will still be playing OpenGL or DirectX based games for the next few years, and THOSE are the performance numbers I want to see.

One of the more interesting things announced was the adoption of the VirtualLink standard, which allows VR headsets to connect to the PC with just a single USB-C cable. HMDs that plug directly into the video card (or connect wirelessly) are a good thing for the XR industry.

So while this is an inevitable step in the evolution of graphics, I don’t see much of an effect on the typical gamer in the next few years. XR folks might see some near-term benefit if most HMD manufacturers adopt VirtualLink. The real benefit will be to VFX houses, as their cost of business is going to drop rapidly. So good news overall – this is the beginning of the adoption of ray tracing and more physically-based rendering, so things will look better. But not anytime soon.

 

Posted in Augmented Reality, Graphics Hardware, Virtual Reality, VR/AR/XR

HTC demos 16 simultaneous Vive Lighthouse Play Areas

If you’ve had the pleasure of setting up both Oculus and Vive VR rigs, then you’re aware that the Vive is a dream to set up compared to the Oculus – Oculus is incredibly, annoyingly finicky, while the HTC Vive usually just works (HTC support notwithstanding).

HTC’s test area  (Image grabbed from @AGraylin)

The Vive’s outside-in tracking is sub-millimeter and usually rock-steady. It’s something to throw a newb into a Vive HMD and waggle the controllers out in front of them – “Can you see these?” “Yes.” “Then grab them” – and the newb latches on with an awestruck face. VR becomes real.

The original Lighthouses came with a sync cable, but later versions didn’t need it, and there were hints that the range of Lighthouses was easily extensible beyond the 3m range recommended in the Vive documentation.

I find this intriguing because of the setup a lot of the VR location-based entertainment companies – The Void, ILMxLAB, etc. – currently use, which really leverages mocap experience: typically a slew of OptiTrack cameras. This turns out to be a headache in terms of support (many expensive cameras, a lot of calibration, crunching numbers to get the positions of everything and then disseminating them).

Lighthouses, on the other hand, take care of a lot of this for you, leaving you just to worry about occlusion (something cameras also suffer from).

In fact, Alan Yates (@vk2zay, Valve Lighthouse engineer) suggested in a bunch of tweets that the current crop of Lighthouses exposed just one of the “modes” they were designing around, and that they would expose more soon. OK, it’s been about three years, but we are now seeing a hint of that.

A recent post from HTC’s Alvin Wang Graylin shows some of the recent experiments HTC has engaged in – linking up to 16 Lighthouse pairs to create an extended play area.

In fact the docs claim that you can use a Lighthouse pair to cover a 100m-square area (A. Yates notes that only 4 pairs are supported out of the box). 100m – that’s a football field in length. For multiple rooms, imagine 16x that coverage – which should be able to deal with any occlusion issues. You could cover an entire museum with trackers for an immersive VR experience.

That plus the fact that you can create a tracked device for as little as $3 in parts is pretty incredible.

The price for location-based entertainment just dropped significantly – I look forward to entrepreneurs creating some really immersive VR experiences and ushering in the next era of location-based entertainment.

Posted in Augmented Reality, Hardware, Vive, Windows 8

Magic Leap hidden documentation reveals FOV values

Magic Leap published its Creators guidelines, which had a section marked FOV that contained just the text “Coming soon on launch day!”

If you looked at the page source, however, it included the actual text of the document (since removed) in a commented-out block of markdown. Ahh, Internet 🙂

It did contain some hand-wavy BS about how the further away something is in screen space (from the HMD), the “larger” the item can be – before delivering the bad news.

Magic Leap One has a horizontal FOV of 40 degrees, a vertical FOV of 30 degrees, and a diagonal FOV of 50 degrees.

So: a crappy 4:3 aspect ratio, only slightly larger than 2016’s HoloLens FOV and MUCH smaller than pretty much any VR HMD out there (Vive, Rift, PlayStation, etc.). Even Daqri Smart Glasses have a bigger FOV – and those are professional-level industrial AR glasses, which include thermal imaging (which is surprisingly useful).
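As a quick sanity check on those numbers (the diagonal combines through the tangents of the half-angles, not the angles themselves), a few lines of Python reproduce the quoted figures closely enough:

```python
import math

h_fov, v_fov = 40.0, 30.0                      # degrees, from the leaked doc

# aspect ratio of the view frustum, from the half-angle tangents
aspect = math.tan(math.radians(h_fov / 2)) / math.tan(math.radians(v_fov / 2))

# diagonal FOV: combine the horizontal/vertical half-angle tangents
diag_half = math.atan(math.hypot(math.tan(math.radians(h_fov / 2)),
                                 math.tan(math.radians(v_fov / 2))))
diag_fov = 2 * math.degrees(diag_half)

print(f"aspect ratio ~ {aspect:.2f}:1")        # ~1.36:1, i.e. roughly 4:3
print(f"diagonal FOV ~ {diag_fov:.1f} deg")    # ~48.6 deg, close to the quoted 50
```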

Posted in Augmented Reality, Virtual Reality, VR/AR/XR

Latest Magic Leap reveal – more underwhelming than anything else.

Another Magic Leap video, and a few crumbs of information emerge.

The “Magic Leap One Creators Edition” is now known to be:

  • Powered by an NVidia Tegra X2
  • Runs a custom 64-bit Linux-based OS called Lumin OS

… and that’s about it.

The demo was pretty underwhelming for a company that’s supposedly been working on this tech for 4 years (since the first VC money came in). Despite many, many requests, the FOV question was ignored – probably because the answer isn’t that impressive. The demos shown looked really unimpressive, and the FPS looked really low, perhaps as low as 24 FPS.

Pros:

  • They seem to have spent a lot of time working on supporting (8) gestures
  • It will “map” the environment – i.e. provide a room mesh and (soon) an API for “surfaces” (e.g. where’s the floor, where’s a big table area, etc.)
  • Looks to support AR out-of-the-box
  • Unity support now, Unreal coming.
  • GL 4.5, GLES 3.1, Vulkan
  • Described as a “game console”

Cons:

  • No VR – this was brushed off repeatedly as the device is targeted for AR
  • Shared memory – i.e. the GPU and CPU share memory – a typical mobile arrangement, but it means a computation-vs-graphics tug-of-war for system resources. It also means a tiled renderer.
  • It’s an additive display.
  • Not clear on triangle count – at one point a scene was quoted as “200 to 4K triangles” (seems small), at another as 800K triangles (that’s a tad big). From the demos, the former seems about right.
  • Described as a “game console”
  • This is the “Creators Edition” – i.e. a developer edition, like an Oculus DK1/DK2 versus the consumer CV1. Does this mean the final consumer “game console” will be different?
  • No mention of price.

So this is an AR-focused HMD with an (apparently) small field of view and an (apparently) low framerate, plus an external belt-worn “battery pack”. It’s based upon an (until now) automotive-console-focused SoC, runs a customized Linux kernel (based upon NVidia’s Vibrante kernel?), and has an apparently nice (if small) UI library designed to make building gesture-based apps easy.

So. It seems like this is an underdeveloped AR platform with reasonable software support and low specs. Maybe too low. Small FOV, low refresh rate. It’s an additive display – which means no VR and, most importantly, NO VIDEO/MOVIE support – you’re not going to watch Netflix on this guy. There was some talk about fill rate – well folks, video is ALL fill rate (and in this case battery life as well).

They also failed to show the killer app of AR, social interaction – multiplayer is DIY via the WiFi connection. They really should spend some of that money and demo a chat room – but of course networking is yet another resource hog, and adding a network communication layer is a lot of work. The hardest nut to crack would be multiple users sharing an AR environment when each HMD has its own local map. Not really something that should be left as an exercise for the developer.

AT&T is the US distributor. Why AT&T – does it also have a SIM card? It’s a developer edition, which usually means $$$, so why use a consumer store to sell them?

It’s coming out in the summer – hello – it’s mid-July now, not much summer left.

The AR headset space is actually getting crowded, with players that are on their 2nd or 3rd generation of hardware. Microsoft is coming out with HoloLens 2 soon, so ML had better have some really compelling aspect if they expect this to take off – or even be mildly competitive.

 

Posted in Augmented Reality, Hardware, VR/AR/XR