Savannah Niles of Magic Leap at MIT Media Lab Reality Virtually Hackathon

Notes from Savannah Niles’ talk at MIT Media Lab Reality Virtually Hackathon.

Savannah-Niles-Magic-Leap-2.jpg

On localized and contextual apps. The thought of bespoke apps naturally suggests applications tailored to your exact locale, based on local knowledge.

See the video below. The full post (below) includes the Q&A notes featured in the video.

LOCALIZED AND CONTEXTUAL APPS

Not one interface to rule them all

At MIT we don’t like to only think about singleton solutions

But dealing with the locale, the space, and the context

As the medium emerges, something more powerful

To authentically integrate digital content into the real world

Start innovating solutions that are site-specific and relative to context

INPUT

[Started with a tight input palette]

James made the initial eye-tracking graffiti app (former Graffiti Research Lab)

@jamespowderly

5-way controller - able to grow to 6DoF, but also the simplest version can be used for accessibility

Number of calories burned = similar to number of clicks

Gentle interfaces, highly responsive

Redundant input is natural and gives users a choice

———

Embrace collisions and physics when you can

Interactivity is reactive - make something that can react with your environment

Graceful degradation

(E.g. being derezzed)

E.g. particle effects and portals - whenever something breaks in your environment reconstruction
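The fallback idea above can be sketched as a tiny state chooser. This is a hypothetical illustration (the function name, mode names, and confidence thresholds are my assumptions, not Magic Leap's API): when mesh reconstruction degrades, swap grounded content for a forgiving effect instead of letting it glitch.

```python
def degrade_gracefully(render_mode: str, mesh_confidence: float) -> str:
    """Pick a rendering fallback when environment reconstruction breaks.

    Thresholds and mode names are illustrative assumptions only.
    """
    if mesh_confidence >= 0.8:
        return render_mode            # tracking is good: render normally
    if mesh_confidence >= 0.4:
        return "particle_effect"      # partial mesh: dissolve into particles
    return "portal"                   # mesh lost: retreat into a portal


# Example: a solid object dissolves rather than visibly breaking.
degrade_gracefully("solid_object", 0.5)   # -> "particle_effect"
```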

———

Classically we're taught recognition over recall - this is Don Norman's thought

That works on screens, but

The more minimal you render your interface

Normally toggle it on and off with buttons, but reveal it with head pose / gaze - gaze signals attention
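One way to picture gaze-driven reveal is a dwell timer: the interface appears only after sustained gaze, and fades when attention leaves. A minimal sketch, with an assumed dwell threshold (the class and values are illustrative, not any vendor's SDK):

```python
from dataclasses import dataclass


@dataclass
class GazeReveal:
    """Reveal a minimal interface only while head-pose gaze dwells on it."""
    dwell_threshold: float = 0.3   # seconds of sustained gaze (assumed value)
    _dwell: float = 0.0
    visible: bool = False

    def update(self, gazing_at_ui: bool, dt: float) -> bool:
        """Advance by dt seconds; return whether the UI should be shown."""
        if gazing_at_ui:
            self._dwell += dt
            if self._dwell >= self.dwell_threshold:
                self.visible = True    # gaze signaled attention: reveal
        else:
            self._dwell = 0.0
            self.visible = False       # looked away: hide the UI again
        return self.visible
```

A glance shorter than the threshold never reveals the UI, which keeps the rendered interface minimal until attention is clear.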

———

There has to be a button to go back to reality.

(There has to be a way to dismiss everything / remove everything from your view).

In testing QA said β€œHey, there’s a bug. When you press this button you can’t see anything.”

———

Use affordances of the physical world to interact with digital things

We'll communicate with apps like we communicate with each other

From gesture to conversation, to gaze, to direct focus

E.g. the "put that there" demo that we've all been told about since the 70s

———

Her full-time focus is on shared experiences

Spatiate - multi-person multi-platform drawing application

———

Future of input or interaction?

Personally, she thinks interaction will move to physical surfaces - e.g. tapping on a table

Being able to switch from fine to ballistic control - from nearby objects to objects further in the distance
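The fine-to-ballistic idea can be sketched as a distance-dependent cursor gain: 1:1 precision up close, amplified movement for far targets. The function name, near limit, and gain curve below are illustrative assumptions, not a description of any shipping system.

```python
def control_gain(target_distance_m: float, near_limit_m: float = 1.5) -> float:
    """Map target distance to a pointer gain.

    Within near_limit_m: fine, 1:1 control.
    Beyond it: gain grows with distance (ballistic reach for far objects).
    All constants are illustrative assumptions.
    """
    if target_distance_m <= near_limit_m:
        return 1.0                                  # fine control nearby
    return 1.0 + (target_distance_m - near_limit_m)  # amplify for far targets
```

The single continuous function avoids a jarring mode switch as the user's focus moves from a nearby object to one across the room.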

Q&A

———

Skeuomorphism for touch devices was necessary

We'll develop a model for using this device

We use headpose as an input - it is fundamental (even in normal human-to-human culture)

But people got tripped up on headpose

We'll probably have buttons for a while

———

Everyone will try this as the technology is going viral

Consistency across apps and the ecosystem

———

Hire more diverse teams

Underrepresented people with diverse points of view and representations

Educate ourselves

Read more and look at models like the wiki

Collective Authorship

Savannah-Niles-Magic-Leap.jpg

Notes taken on a mobile device. Pardon any auto-corrections or inaccuracies.