[image_hover image='http://visualise.com/wp-content/uploads/2015/09/OC2-party-12.jpg' hover_image='http://visualise.com/wp-content/uploads/2015/09/OC2-party-12.jpg' link='' target='undefined' animation='undefined' transition_delay='undefined']

Last week Henry and Will were in LA for Oculus Connect 2. They kept a daily diary on all the latest virtual reality announcements, demos and talks. Over the next few days we'll be sharing these insights, so grab a brew and see what's in store for us over the coming months…

Our first day at OC2 has been a pretty epic one. We started with some incredible demos from Oculus covering gaming, interactive/social applications and storytelling, and finished the afternoon off with talks on optimising 360 video streaming, low light in VR, and best practice and ideas for user interfaces in VR.


Oculus VR Headset – Very light, looks very pro. The built-in headphones are better than expected, but since they sit on top of your ears, a lot of ambient sound is let in. Resolution looks really good, perhaps a little better than Gear VR. One interesting thing to note is that the Rift now has Fresnel lenses. While these lenses provide a wider field of view, they also create what look like shafts of light around bright objects, such as white text on a dark background.

Tracking – We estimated that a single tracker camera can cover about 2m x 2m, which is roughly twice the area of DK2. Two tracker cameras were used for the Touch demos, but two aren't required.

Touch Controllers – They're wireless and have a number of buttons as well as triggers under your fingers. The triggers activate grab actions, and it feels pretty natural to reach toward an object and grab it this way. The controllers can recognise simple gestures like pointing and thumbs up, but these weren't mapped to any function in the demos.


These demos gave us a good idea of what content for the consumer Rift will deliver. Oculus has decided to focus on seated experiences for home use, and these demos show quite a lot of promise for making fun and engaging experiences where the user is sitting.

  • Chronos – a static camera view that lets you watch a character move around a mystical world, attacking comical-looking medieval/fantasy characters. The viewpoint is always a well-chosen static camera position. Very beautiful modelling and game design.
  • Edge of Nowhere – slightly uncomfortable, as the camera chases the character and some of the movement isn't too comfy, but exciting and compelling: the floor is constantly crumbling away, driving you forward, jumping over the abyss and onto rope ladders.
  • AirMech VR – starts as an experience where you control a micro-machine-sized toy and fire at enemy mechs, tower-defence style, all on a tiny table just in front of you. However, this is just the tutorial; once you complete it, the whole world expands below you so you're floating over a battlefield. Great game.
  • Esper – clever use of VR, but a little dull.
  • Lucky's Tale – a third-person platformer, essentially a VR version of Mario 64. It was pretty fun being able to track behind a character as you move through the platforms. Motion was sufficiently comfortable, though not completely nausea-free. One very cool detail: if you lean too far into the world, your own head affects things, like knocking over boxes which then fall into your path. This adds quite a bit of presence.

Our favourite demos were Chronos and AirMech VR – miniature things look great when you can lean in for a closer look. Scaling the world down like this gives you the weird feeling of having these incredible toys in front of you to play with.

[image_hover image='http://visualise.com/wp-content/uploads/2015/09/OC2-Toybox-demo-1024.jpg' hover_image='http://visualise.com/wp-content/uploads/2015/09/OC2-Toybox-demo-1024.jpg' link='' target='undefined' animation='undefined' transition_delay='undefined']


The Toy Box set of experiences gives you a very good idea of just how powerful social interaction is in VR. When you start playing around in this virtual toy box with another person, you are very aware that there is a human with you, even though they're represented by an abstract ghost figure (a floating head and hands).

There's a bunch of kids' toys on the table; you can reach down and pick them up, throw them and smash vases. Then the ghost character – actually a real person in another room – picks up a silver orb and smashes it on the ground. The room changes to the interior of a space station, gravity disappears and all the toys float toward the ceiling. The tutor smashes another orb and we're back to normal gravity.

The most incredible part of this demo for me was the ping pong paddle and ball. You pick up the paddle with one hand and the ball with the other. Without even really thinking, I tried bouncing the ball on the paddle and it reacted exactly as you would expect.

The Touch controllers represent super intuitive input. Within seconds of having the button functions explained, I was able to interact with objects very naturally. Leaning in and picking up things works just like in reality. There is no abstraction of user interface, menus, or buttons.

Other things we tried included a room full of fireworks: you start everything off with a Zippo – a simple flick of the wrist and the flame starts – then you lean down to the nearest firework and all hell breaks loose.

They really rattled through the demos – so many first experiences, so fast, it's a bit of a sensory overload – there's just so much cool new stuff:

  • My first interaction with another person in VR.
  • My first use of touch controllers.
  • First truly real sport experience in VR (ping pong).
  • First pyromaniac experience in VR.
  • First ‘zero gravity’ experience.
  • First time being shrunk.
  • First time picking up and firing weapons.

Oculus has done a really great job with these controllers. This is a big step forward for VR in general.

[image_hover image='http://visualise.com/wp-content/uploads/2015/09/Henry-A-VR-Experience-1024.jpg' hover_image='http://visualise.com/wp-content/uploads/2015/09/Henry-A-VR-Experience-1024.jpg' link='' target='undefined' animation='undefined' transition_delay='undefined']


Henry is a fantastic first step into realtime CG VR storytelling. All cheesiness aside, it really does feel like a magical experience. It essentially puts you inside a Pixar film.

The whole 10-minute experience takes place inside Henry's house. It's a brilliant, simple story of a lonely hedgehog making a birthday wish for some friends and having it granted by a group of party balloons that come to life on his wish. There were a number of laugh-out-loud moments, such as when Henry dive-bombs his birthday cake – I even took evasive action myself.

When Henry walks into the room he's down by your knees, so the idea is to sit on the floor and get down to eye level with him. On the floor (in reality) is a soft rug, which plays a key part in the feeling of immersion: inside Henry's world, you are sitting on an identical rug. Being able to move around on the rug, so your face is right by Henry's, or to look under the table between you to see his face, is magical.

The quality of the animation is so high that the character really emotes, and you can read those emotions clearly. When Henry is happy, sad or excited, it's all expressed. What makes this particularly impressive is that the entire experience runs in realtime. Scripting and programming all those animations was probably insanely difficult.

The story was very easy to follow and, as with most VR experiences, nearly all the action is in front of you 90% of the time. With very simple stories like this, the challenges of VR storytelling are easy to tackle; it'll be interesting to see how storytelling in VR film is dealt with in the future.

Come back tomorrow for Henry’s review of day 2.
