A visual positioning system built on computer vision and machine learning.

It works on anything that’s been documented by Street View, like buildings, but not trees, because branches get cut and leaves fall.

Notes taken on a mobile device. Pardon any auto-corrections or other errors.

Spatial Audio in the Magic Leap

September 2018

Should you have to walk across a map to access something in AR?
Does AR have to start hidden?
Does it have to be something you could find?

Does audio have to be defined by different locations?

Spatial Audio next to a whiteboard


AR does not have to be something hidden that must then be found. AR should be contextually visible and accessible, not a challenge to access. In fact, AR should be the interface to all of the technological advancements that we could not otherwise see.

AR should not be hidden. AR should be interactive. In this case, the sound is spatial and therefore interactive, because you can turn away from the sounds (three degrees of rotational freedom) and walk away from them (three more degrees of translational freedom).

If the concept is music, sound, or audio across space, then make the audio spatial.

Here I took static Cinema 4D animations that were intended to be linear, non-interactive Snap lenses, and I deployed them to the Magic Leap.

While wearing the Magic Leap, I adjusted the radius of the spatial audio in Unity for the next build.
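The radius I was adjusting maps onto a distance rolloff curve. Here is a minimal sketch of a linear rolloff in Python (my own toy function, not Unity's actual implementation, though Unity's AudioSource exposes similar minDistance/maxDistance parameters):

```python
def spatial_gain(distance, min_distance=1.0, max_distance=10.0):
    """Linear rolloff: full volume inside min_distance, silent
    beyond max_distance, linear falloff in between (in meters)."""
    if distance <= min_distance:
        return 1.0
    if distance >= max_distance:
        return 0.0
    return 1.0 - (distance - min_distance) / (max_distance - min_distance)
```

Growing `max_distance` is what makes a loop audible from farther across the room.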

There were different beats or tunes and with these loops placed in different locations, you could walk around the music.

Play the video above with the sound on to hear the spatial audio.

Ironically, in the application I built there was no way to reset the origin of the experience, and with Magic Leap’s persistence, even when I went to the center of the building’s floor plan to launch the music in the cafe, the spatial audio was half a block away.

2D Screenshots from Snapchat

Notes taken on a mobile device. Pardon any auto-corrections or other errors.

Magic Leap Hello Cube at R/GA

September 2018

The horizontal tracking started at the height of a standing table (bar height, at the elbows), so the 1.6-meter-tall cube was above my view.

The basic “Hello World” example from Magic Leap is “Hello Cube.”

Fittingly, at R/GA.

Notes taken on a mobile device. Pardon any auto-corrections or other errors.

Savannah Niles of Magic Leap at MIT Media Lab Reality Virtually Hackathon

Notes from Savannah Niles’ talk at MIT Media Lab Reality Virtually Hackathon.


On localized and contextualized apps: the idea of bespoke apps naturally suggests applications tailored to your exact locale, based on local knowledge.

See the video below. The notes in the full post (below) include the Q&A featured in the video.


Not one interface to rule them all

At MIT we don’t like to think only about singleton solutions

But dealing with the locale, the space, and the context

As the medium emerges, something more powerful

To authentically integrate digital content into the real world

Start innovating solutions that are site-specific and relative to context


[Started with a tight input palette]

James, formerly of Graffiti Research Lab, made the initial eye-tracking graffiti app


5-way controller - able to grow to 6 DoF, but the simplest version can also be used for accessibility

Number of calories burned = similar to number of clicks

Gentle interfaces, highly responsive

Redundant input is natural and gives users a choice


Embrace collisions and physics when you can

Interactivity is reactive - make something that can react to your environment

Graceful degradation

(E.g. being derezzed)

Eg particle effects and portals - whenever something breaks in your environment reconstruction


Classically, we’re taught recognition over recall - this is Don Norman’s thought

That works on screens, but

The more minimal you render your interface

Normally you’d mash it on and off with buttons, but reveal UI with headpose gaze - gaze signals attention
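Gaze-driven reveal can be sketched as a simple angular test against head pose. This is a toy Python sketch under my own naming, not Magic Leap's API: reveal the UI element when it lies within a cone around the head's forward vector.

```python
import math

def gaze_attention(head_forward, to_object, reveal_angle_deg=15.0):
    """True when to_object lies within reveal_angle_deg of the
    head-pose forward direction (gaze signals attention)."""
    def normalize(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    f = normalize(head_forward)
    d = normalize(to_object)
    cos_angle = sum(a * b for a, b in zip(f, d))  # dot product
    return cos_angle >= math.cos(math.radians(reveal_angle_deg))
```

Looking straight at an object reveals it; an object off to the side stays minimal until gaze sweeps near it.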


There has to be a button to go back to reality.

(There has to be a way to dismiss everything / remove everything from your view).

In testing QA said “Hey, there’s a bug. When you press this button you can’t see anything.”


Use affordances of the physical world to interact with digital things

We'll communicate with apps like we communicate with each other

From gesture to conversation, to gaze, to direct focus

E.g. the "put that there" demo that we've all been told about since the 70s


Her full-time focus is on shared experiences

Spatiate - a multi-person, multi-platform drawing application


Future of input or interaction?

She personally thinks it’s interacting on physical surfaces - e.g. tapping on a table

Being able to switch from fine to ballistic control - from nearby objects to objects further in the distance



Skeuomorphism for touch devices was necessary

We'll develop a model for using this device

We use headpose as an input - it is fundamental (even in normal human-to-human culture)

But people got tripped up on headpose

We'll probably have buttons for a while


Everyone will try this as the technology is going viral

Consistency across apps and the ecosystem


Hire more diverse teams

Underrepresented people with diverse points of view


Educate ourselves

Read more and look at models like the wiki

Collective Authorship


Notes taken on a mobile device. Pardon any auto-corrections or other errors.

A Focus on Training in VR

Training in VR saves money and probably increases learning, retention and value creation over time at scale.
I believe AR in manufacturing immediately increases efficiency, output and value.

Virtual reality training by companies like Microsoft is saving lives, millions of dollars, and ensuring the future of mixed reality
— CNBC

Notes taken on a mobile device. Pardon any auto-corrections or other errors.

CES 2019

AR ruled.
Despite the sad goodbye of some AR innovators at the end of 2018, there’s no slowdown; in fact, there is one more SLAM HMD (glasses that understand the physical space around you).
SLAM (simultaneous localization and mapping) is in many devices: robots, vacuum cleaners, drones, cars, glasses. Computer vision is so good that SLAM can run from an RGB camera in a mobile web browser (see 8thwall).
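Full SLAM is beyond a note, but the "localization" half starts from something as small as dead-reckoning a pose from motion, which camera observations of mapped landmarks then correct. A toy 2D sketch in Python (my own illustration, not any real SLAM library):

```python
import math

def integrate_odometry(pose, forward, turn):
    """Dead-reckon a 2D pose (x, y, heading in radians): apply the
    turn, then move forward along the new heading. This is the
    prediction step that a SLAM system refines with observations."""
    x, y, theta = pose
    theta += turn
    x += forward * math.cos(theta)
    y += forward * math.sin(theta)
    return (x, y, theta)
```

Without the correction step, small errors in each update accumulate, which is why pure dead reckoning drifts and mapped landmarks matter.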

Two products that my work touched launched at CES this year.

  1. The Apprentice platform - for telepresence and remote collaboration.

  2. Lovot - a heart-warming robot from Japan.

    (That doesn’t mean the next step is to control a SLAM robot via your glasses).


Notes taken on a mobile device. Pardon any auto-corrections or other errors.

✌️ BlippAR

This year, we've seen a few AR companies land on the far side of the Hype Cycle's Peak of Inflated Expectations. BlippAR is the latest. Everyone I met from BlippAR was sharp and responsive. Despite what AdWeek says, BlippAR collaborated with agencies very well. BlippAR, their tools, example applications, and mindset for how AR can apply to our daily lives were more advanced than most companies will be by this time next year. They were advanced in practice, not just theory. And they wanted to advance the practice of AR. Critics say BlippAR may have been too early. But that's better than being way too late. Thanks for giving us a preview of what the next computing platform feels like.

Notes taken on a mobile device. Pardon any auto-corrections or other errors.

Glasses Reviews

It is obvious we are moving toward a heads-up future, with information contextually displayed over places, objects, and people.

Handheld devices will still play a role, often as a control device (like Magic Leap's controller application, which for the interim is a better source for text input than voice).

On my Instagram stories, I've reviewed some of the most well-known HMDs, from binocular 3D with SLAM to sleeker, more subtle monocular glasses.


Notes taken on a mobile device. Pardon any auto-corrections or other errors.