Reimagining Adventure Games With Virtual Reality

It’s no secret that the King’s Quest series from Sierra Entertainment is by far my most treasured video game series, with King’s Quest VI holding the spot of my favorite game ever. I’ve always had a soft spot for classic point-and-click adventure games, and I’ve certainly noticed their absence in modern-day gaming. I thought we might see them reemerge when mobile gaming was getting big, but no real notable titles have come from that. I couldn’t help but question why that might be, and Ken Williams of Sierra Entertainment makes a great point in a recent interview with Game Informer:

Game Informer: Point-and-click adventures may not be as popular as they once were, but they were the top-tier games in the ’80s and ’90s. What about the genre do you think made it so popular with audiences for so long?

Ken: Personally, I never liked being pigeon-holed as “point and click”. I like to think of Sierra’s products as interactive stories, and point/click was nothing more than the best we could do at the time to influence the story. The idea was to make you feel a part of the story, and neither text parsing or point/click were perfect answers. A perfect answer would probably be 3-d glasses and motion sensors in gloves. Then you are talking! And, that’s the right experience. It’s like a good book or a good movie. The goal is to immerse the audience in the story. A mouse or a keyboard pulls the player out of the game. I don’t know the magic solution but know that saying things like this is a “touch-game” or a “click-game” or a “parser-based-game” are all dead-ends. They lock in a point in time and miss the fact that interactive technologies are advancing faster than games are being built. The right answer is to say, “what new technologies will be around two years from now when this game releases, and how do we use it to immerse the player in a cool new universe?” Saying, “Point/Click games sold 3 million copies last year, and the market is growing 5 percent a year, so next year there will be 3.15 million point/click games sold” – that’s a sucker trap. Sure death.

tl;dr: Point-and-click was simply the best option at the time, but it was never optimal. I found this response particularly interesting, as I was already contemplating how VR could change adventure games before this interview came out, and Ken specifically calls out “3-d glasses”. The motion-sensing gloves, however, weren’t exactly the route I was going down.

Head Gestures

There’s generally a small set of actions you can take in adventure games (or at least in King’s Quest VI, which I’m using as a model): Touch, Talk, Walk, Use, etc. Dialog is predetermined and generally takes no real input from the user (except for cases where the player uses an item on the person they are speaking with).

But that’s the old-school way, isn’t it?

I started pondering: if you move away from point-and-click, what becomes more important as a game mechanic? It’s certainly not outside the realm of sanity to think that massively overhauling the input methods drastically changes how the game is played. I totally agree with Ken that something like motion-sensing gloves would be an ideal solution for interacting with the environment and managing an inventory system, where you could rummage through the goodies you’ve collected via some world-space menu, but what about that predetermined dialog? Parser-based games offered some (but limited) control over dialog, but mouse-based games such as KQ6 gave you no option to answer questions or express emotion.

This is where I thought things could go in a new direction with VR. All of the current-day VR devices offer one thing in common: head tracking. Not all offer positional tracking, but the Oculus Rift, Google Cardboard, etc. all allow the user to turn their head side to side, look up and down, and tilt their head. The first new game mechanic that came to mind was head gestures: nodding for yes, shaking for no, the ability to bashfully look down at one’s feet in embarrassment or shyness. Head tracking allows for a new portrayal of emotion via body language, and in an era where companies like Telltale Games turn a profit on games built entirely around the idea of your responses driving the plot (e.g. The Walking Dead and The Wolf Among Us), offering a similar, more immersive experience to fans of those series doesn’t sound like the craziest idea in the world.

Proof of Concept

I’m not the first to have this idea, of course. You can see an open-source implementation here, and a tech demo built around the idea already exists under the title ‘Trial of the Rift Drifter’. However, I’ve never let the existence of something stop me from building it myself to learn.

The result, as you can see above, is a brief offer of cold tea from Sir Squarington (give him a break, he’s been stuck on an empty island in space for an eternity with no teapot). You can accept his offer by nodding your head up and down, or decline it by shaking it left and right. Additionally, Mr. S changes colors based on your head state (a rough sketch of the detection idea follows the list):

Blue: Looking
Green: Nodding (“Yes”)
Red: Shaking (“No”)
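
Since the detection is the interesting part, here’s a minimal sketch of one way it can work. This is a simplified illustration rather than the exact code from my prototype, and all of the names and thresholds below are placeholders: accumulate how much the head pitches versus how much it yaws over a short window, and whichever axis dominates past a threshold counts as the gesture.

using UnityEngine;

// Illustrative sketch: classify nod vs. shake by accumulating pitch and yaw
// movement of the VR head transform over a short rolling window.
public class HeadGestureDetector : MonoBehaviour
{
    public Transform head;              // the VR camera / head transform
    public float windowSeconds = 1.0f;  // how long a gesture has to complete
    public float triggerDegrees = 25f;  // total swing needed to count as a gesture

    private float pitchAccum;
    private float yawAccum;
    private float windowTimer;
    private Vector3 lastEuler;

    void Start()
    {
        lastEuler = head.localEulerAngles;
    }

    void Update()
    {
        Vector3 euler = head.localEulerAngles;

        // Accumulate the absolute change on each axis since the last frame
        pitchAccum += Mathf.Abs(Mathf.DeltaAngle(lastEuler.x, euler.x));
        yawAccum   += Mathf.Abs(Mathf.DeltaAngle(lastEuler.y, euler.y));
        lastEuler = euler;

        windowTimer += Time.deltaTime;
        if (windowTimer >= windowSeconds)
        {
            if (pitchAccum > triggerDegrees && pitchAccum > yawAccum)
                Debug.Log("Nod (yes)");
            else if (yawAccum > triggerDegrees && yawAccum > pitchAccum)
                Debug.Log("Shake (no)");

            // Reset for the next window
            pitchAccum = yawAccum = 0f;
            windowTimer = 0f;
        }
    }
}

In the actual demo, the “yes”/“no” result is what drives Sir Squarington’s color change.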

I don’t believe a VR-centric translation of old-school point-and-click adventure games like KQ6 would take ONLY this as input. Ken is right, a secondary input method is needed. A mouse and keyboard isn’t it, but instead of motion-sensing gloves, there’s some promising tech like Nimble VR’s hand and finger tracking, whose team was recently acquired by Oculus.

Distribution of the HeadGesture Asset

Even though I wasn’t the first to the idea, I do want to package up the HeadGesture controller I’m working on into a prefab. A great thing about Unity is how easy it is to redistribute assets, and the idea with this prefab is that you will be able to drop it into your game, link up your VR camera, be it Oculus, Cardboard or anything else (these all work off the mechanic of two normal cameras slightly offset from each other), and then have your dialog system subscribe to the HeadGesture asset, which will fire off events as the head state changes.
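
To make the “subscribe” part concrete, the surface I have in mind looks roughly like the sketch below. The class and event names (HeadGesture, OnNod, OnShake) are placeholders and may well change as the asset takes shape:

using System;
using UnityEngine;

// Hypothetical shape of the prefab's component: it exposes events that
// fire as the head state changes (the detection logic lives elsewhere).
public class HeadGesture : MonoBehaviour
{
    public event Action OnNod;    // fired when a "yes" gesture is detected
    public event Action OnShake;  // fired when a "no" gesture is detected
}

// A dialog system then only needs to subscribe and unsubscribe:
public class TeaOfferDialog : MonoBehaviour
{
    public HeadGesture headGesture; // dragged in from the prefab in the Inspector

    void OnEnable()
    {
        headGesture.OnNod += Accept;
        headGesture.OnShake += Decline;
    }

    void OnDisable()
    {
        headGesture.OnNod -= Accept;
        headGesture.OnShake -= Decline;
    }

    void Accept()  { Debug.Log("Player nodded - pour the cold tea."); }
    void Decline() { Debug.Log("Player shook their head - no tea today."); }
}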

Virtual Reality - Oculus Rift and Google Cardboard

Yesterday, a good friend and I took the day to sit down and really evaluate the Oculus Rift as users and as developers. Neither of us is a professional game developer by a long shot, but we’ve gotten to the point where we can comfortably call ourselves “hobbyists”. Earlier in the week I also received my Google Cardboard and was pleasantly surprised at how well it worked, all things considered.

Using the Oculus Rift

Initial setup of the Rift was not terribly involved. A couple of USB cables, a sync cable between the camera and the Rift, power, and we were up and running. Windows 8.1 actually picked up the Rift on its own, but we elected to pull down the latest runtime (at the time, 0.4.4-beta). Past that, I jumped right into a few things I had pre-downloaded, such as the “Oculus Tuscany Demo”, in which you walk around a beautiful home and yard, the “Welcome to Oculus” (video) experience, which details the paradigm shift that current VR tech can offer, and “A Chair in a Room” (video), which offers an immersive horror/scare experience.

The first thing you notice with the Rift is that this isn’t just a screen strapped to your face. Each eye is rendered separately, which gives a true feeling of 3D and lets you properly judge space and distance. The Tuscany demo has butterflies and plant seeds floating through the air, passing by your face, and it makes you feel like you need to brush them out of the way. This isn’t the same 3D effect you get with passive glasses at the movie theater; objects have real depth to them.

Past that, what amazed me was the head tracking. I’d read that the head tracking in the DK2 was good, but especially in the tech demos, it was near perfect. I’d be standing at a railing, lean forward, and look down below me. In part of the “Welcome to Oculus” experience, you’re placed in a movie theater while clips from popular films play, and I was able to lean back and look behind my chair. It wasn’t just guessing; it was actually translating every movement of my head into the virtual space. Turning, leaning and tilting my head all translated perfectly (in most tests), which has a huge impact on immersion.

We proceeded to try out some full games/simulations with the Oculus, including:

  • iRacing – One of the closest real-to-life racing simulators that’s commercially available. I was racing a Mazda MX-5 and a prototype Ford car at Laguna Seca, looking around the car, reading the speedometer and tachometer. Combined with a force-feedback racing wheel and pedals, this was the one we kept coming back to.
  • Euro Truck Simulator 2 – A…truck driving simulator. Not exactly the most fun game we own, but with the Rift strapped on, the wheel and pedals hooked up and some of our favorite routes loaded, it was great to be able to look around the truck cabin, actually turn our heads to check the mirrors as we backed the trailer up to the loading dock, and eventually jackknife the thing.
  • Half-Life 2 – This, to me, was the most surreal experience. Sure, the game is 10 years old and Oculus support is in beta, but I’m not alone in having it on my Top 5 Games of All Time list. I’ve played through the game countless times, but standing in front of the Metro Police officer as he knocked a soda can to the ground and forced me to pick it up was the closest I’ve felt to actually being in the shoes of Gordon Freeman, shoes I’ve spent countless hours in since the late ’90s.

Among a few others, each one offered an amazing, immersive (there’s that word again) experience. We couldn’t stop trying new games and tech demos, and every new one we tried had some cool feature or detail that outdid the last.

Developing for the Oculus Rift

We spent so much of the day playing with the Oculus that we nearly forgot the other major plan for the day: to actually make something for it. We had hoped to have a solid plan for what we wanted to make, but by the time we loaded up Unity to begin developing, we had no idea what we could build in a night that would leverage everything the Oculus had to offer as a true VR experience. So we did what so many others before us have done: copied whatever it was that people liked and remade it. The result?

A room with a comfortable couch and a massively oversized TV. We did not make the models or textures ourselves, as we were on a tight schedule, so we grabbed some pre-made furniture models and found some nice textures to go along with them. More than anything, I wanted to see how a scene I built in a very traditional way translated to the Oculus, and it did not disappoint. We figured, what better use of a virtual world with a giant TV than to do something we can never do in real life…

…we watched The Walking Dead.

But this wasn’t about what the TV played; it was about how things translated automatically from our traditional development over to the Oculus. As we sat on the couch, we looked around the room and down at the coffee table. We looked behind us and leaned over the couch, all these movements translating 1:1. This was enough to show us the possibilities of what we could make with the Oculus Rift.
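
For anyone curious how the TV itself works: the video is just a MovieTexture applied to the screen’s material (which, as you’ll see below, is exactly the piece that doesn’t survive the move to mobile). A minimal version of that setup looks something like the sketch below; the component name and the public field are mine, not anything from a specific package:

using UnityEngine;

// Minimal sketch of a TV screen playing a video via a (desktop-only) MovieTexture.
public class TvScreen : MonoBehaviour
{
    public MovieTexture movie; // imported video clip, assigned in the Inspector

    void Start()
    {
        // Put the movie on the screen's material and start it looping,
        // routing its audio through an AudioSource on the same object.
        GetComponent<Renderer>().material.mainTexture = movie;

        AudioSource audioSource = GetComponent<AudioSource>();
        audioSource.clip = movie.audioClip;

        movie.loop = true;
        movie.Play();
        audioSource.Play();
    }
}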

Google Cardboard

Google Cardboard, for those unfamiliar, is Google’s answer to the currently high cost of entry into the world of VR. The Oculus Rift is $350 for just the development kit, and it is not even a commercial-grade product yet. On the other hand, Google Cardboard and its third-party brothers and sisters can be had for the price of a few cups of coffee. Cardboard is literally that: a piece of cardboard (plus a couple of lenses, some tape and a magnet). The trick is that apps are rendered in a split view (much like on the Oculus), and when your phone is placed into the Cardboard, the lenses turn the two separate, flat images into a single 3D view (there’s a toy sketch of that two-camera trick at the end of this section). There are still plenty of issues, including:

  • Extreme “screen door effect” – As the resolution is so (relatively) low and the screen is so close to your face, you can make out the individual pixels, which makes it look like you are looking through a screen door.
  • No positional tracking – This was one of the great parts of the Oculus. You can turn and tilt your head and everything translates fine, but there is no leaning.
  • Lack of input – The magnet acts as an input device with Google Cardboard and translates to essentially a single tap on the screen. Some versions of Cardboard do exactly that: they use a conductive material to physically tap the screen. The Oculus, on the other hand, has a wide array of input devices inherently available, as it’s meant to be used with a computer.
  • Apps are extremely taxing on the device – The Nexus 5 has pretty terrible battery life to begin with, but a short 10-15 minute demo of one or two Cardboard apps drains about 25% of my battery, and the device gets extremely warm.

However, for being less than 1/10th the price of an Oculus Rift, it is very impressive in its own right. You still get the 3D feel of space and distance, and the freedom of mobility gives you some interesting options as far as games go.
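
Both the Rift and Cardboard build on the trick I mentioned earlier: two ordinary cameras, slightly offset, each rendering to half of the screen. Here’s a toy illustration of just that part (the real Oculus and Cardboard SDKs also handle lens distortion, head tracking and plenty more for you, so treat this purely as a sketch of the idea):

using UnityEngine;

// Toy illustration of side-by-side stereo rendering: two cameras, offset by an
// eye separation, each drawing to one half of the screen.
public class SimpleStereoRig : MonoBehaviour
{
    public Camera leftEye;
    public Camera rightEye;
    public float eyeSeparation = 0.064f; // roughly the average interpupillary distance, in meters

    void Start()
    {
        // Offset each eye camera sideways from the rig's center
        leftEye.transform.localPosition  = new Vector3(-eyeSeparation / 2f, 0f, 0f);
        rightEye.transform.localPosition = new Vector3( eyeSeparation / 2f, 0f, 0f);

        // Left camera renders to the left half of the screen, right camera to the right half
        leftEye.rect  = new Rect(0f, 0f, 0.5f, 1f);
        rightEye.rect = new Rect(0.5f, 0f, 0.5f, 1f);
    }
}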

Cardboard Integration with Unity

As I don’t currently own an Oculus Rift, I wanted to see what it was like to develop for Cardboard, since Google provides a full Cardboard SDK for Unity. As it turns out, it’s just as easy to develop with Cardboard in mind as it is any other 3D game. There are certain considerations to take into account, such as how input changes and how you focus your core game mechanics, but I took the same scene I built for the Oculus, simply swapped out the camera prefab that Oculus provides with the one Google provides for Cardboard, and built for Android. I did have to remove the MovieTexture that was playing The Walking Dead, as Unity does not support MovieTextures on mobile devices, but other than that, it was a single-step process and my on-the-cheap VR goggles dropped me right back on that familiar couch.
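
On the input side, the tap-style Cardboard viewers simply register a touch on the screen, so one simple way to handle the trigger is to treat it as a “click” on whatever the player is looking at. A rough sketch of that, attached to the Cardboard camera and assuming a conductive-pad trigger rather than the magnet switch:

using UnityEngine;

// Sketch: treat the Cardboard tap trigger as a "click" on whatever is in view.
// On mobile, Unity reports a screen tap through the same API as a mouse click.
public class GazeClickInput : MonoBehaviour
{
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Cast a ray straight out of the camera, i.e. where the player is looking
            Ray gaze = new Ray(transform.position, transform.forward);
            RaycastHit hit;
            if (Physics.Raycast(gaze, out hit, 100f))
            {
                Debug.Log("Gaze-clicked: " + hit.collider.name);
            }
        }
    }
}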

I also took the same FPS Demo that I’ve been working on, as detailed in the previous post, and did the same process of simply swapping the cameras. Lo and behold, we have mobile VR!



These past few days really gave me a feel for the place VR holds, not just in gaming but in countless other applications. This isn’t the first stab at VR, far from it. But it is the first implementation that makes me feel like it finally has value, rather than being just a gimmick.

Unity3D - a Checkpoint

About a month ago, I wrote that I had begun learning the Unity game engine. A little over five weeks in, I want to share some of the lessons I’ve learned in this short time and demonstrate that you can achieve some pretty awesome things in Unity in an extremely short amount of time. I’ve also had to dive into some extremely amateur 3D modeling (which I’m less proud of).

First Stab at an FPS

This is the genre I usually default to; I’ll generally play anything, but FPS is my go-to. I figured I should start with something I was comfortable with, something whose mechanics I knew well enough to tell when something was “off”.

I felt like a shooter would hit on a lot of topics: a 3D world, flexible game mechanics, networking, and it would give me a good excuse to fudge some things into the game that might not make sense elsewhere. I learned some 3D modeling, learned what lightmapping was, picked up basic networking, and created…

…an ugly monstrosity. But it was good enough for a start! Let’s break down what’s behind this mess.

Photon Unity Networking

Despite the complete lack of player models (I opted for simple cylinders), this game does have multiplayer. It was actually pretty easy to accomplish using a very popular third-party add-on called Photon Unity Networking (PUN). It’s completely free for up to 20 concurrent players and makes what would otherwise be a very complex problem very, very easy. I’ve chopped out some code that handles things specific to my game, but as you can see, joining a lobby and a room doesn’t take a lot of code:

using UnityEngine;

public class NetworkController : MonoBehaviour
{
    public GameObject mainCamera;
    private int MAX_PLAYERS = 16;

    void Start()
    {
        Connect();
    }

    void OnJoinedLobby()
    {
        JoinOrCreateRoom();
    }

    private void JoinOrCreateRoom()
    {
        // Setup room options and join the room (or create it if it does not exist)
        RoomOptions roomOpts = new RoomOptions();
        roomOpts.isOpen = true;
        roomOpts.isVisible = true;
        roomOpts.maxPlayers = MAX_PLAYERS;

        PhotonNetwork.JoinOrCreateRoom(GameController.defaultNetworkRoom, roomOpts, TypedLobby.Default);
    }

    void OnJoinedRoom()
    {
        SpawnPlayer();
    }

    public void Connect()
    {
        PhotonNetwork.ConnectUsingSettings(GameController.gameVersion);
    }

    private void SpawnPlayer()
    {
        // Instantiate a new player on the network 
        GameObject fpPlayer = (GameObject)PhotonNetwork.Instantiate("PlayerController", new Vector3(0f, 2f, 0f), Quaternion.identity, 0);
        PhotonView photonView = (PhotonView)fpPlayer.GetComponent<PhotonView>();

        // If this is the player's spawn, enable all of the control scripts, disable the lobby cam and lock the cursor
        if (photonView.isMine)
        {
            mainCamera.SetActive(false);
            fpPlayer.transform.FindChild("Main Camera").gameObject.SetActive(true);
        }
    }
}

And just like that, the player is in a networked game. Granted, this means there’s only ever one room for all players to join, it assumes there’s ground under the player at coordinates (0, 2, 0), and I’ve removed the code that enables the movement scripts only for the local player, but it works nonetheless. From there, you can create a generic script that you put on ALL objects that should be updated over the network, and PUN will handle updating their location, rotation, and whatever else you include in the script’s send and receive logic. Below is the code I wrote to not only update the local and remote player locations, but also do extremely basic smoothing of the movement to reduce the jitteriness of remote players on the local player’s screen.

using UnityEngine;
using System.Collections;

public class NetworkedComponent : Photon.MonoBehaviour {

    private Vector3 realPosition = Vector3.zero;
    private Quaternion realRotation = Quaternion.identity;

    private float lastUpdate;
    private float lerpRate;

    void Start()
    {
        lastUpdate = Time.time;
    }

    void Update()
    {
        // Lerp remote players to their real position and rotation
        if (!photonView.isMine)
        {
            transform.position = Vector3.Lerp(transform.position, realPosition, lerpRate);
            transform.rotation = Quaternion.Lerp(transform.rotation, realRotation, lerpRate);
        }
    }

    void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.isWriting)
        {
            // Local player, send our real position and rotation
            stream.SendNext(transform.position);
            stream.SendNext(transform.rotation);
        }
        else
        {
            // Remote player, receive and record their real position and rotation
            // This will be updated in the game world in the update function
            realPosition = (Vector3)stream.ReceiveNext();
            realRotation = (Quaternion)stream.ReceiveNext();
        }

        // Experimental dynamic lerp rate, however there is an issue:
        // the rate is based on Photon update n-1, so it may not be valid for this update
        lerpRate = Time.time - lastUpdate;
        lastUpdate = Time.time;
    }
}

You’ll see I was experimenting with some of the logic behind how it smooths the movement, which at this point is still questionable at best. The nice thing about Unity is that I can place this script on ANY GameObject (players, random crates in the level, vehicles) and I don’t have to do anything else to have it update its position and rotation over the network.
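
For reference, a more conventional approach I may switch to is a fixed smoothing speed scaled by frame time, which sidesteps the “update n-1” problem entirely. The snippet below is a drop-in replacement for the Update method in the class above (the smoothing speed of 10 is just a guess to tune later):

    void Update()
    {
        // Smooth remote players toward the last received state at a fixed speed,
        // instead of relying on the time between Photon updates
        if (!photonView.isMine)
        {
            float t = Time.deltaTime * 10f; // arbitrary smoothing speed, tune to taste
            transform.position = Vector3.Lerp(transform.position, realPosition, t);
            transform.rotation = Quaternion.Lerp(transform.rotation, realRotation, t);
        }
    }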

The wonderful player models currently

3D Modeling

Several times in the past I’ve attempted 3D modeling and it’s never gone well. But, for game design, I had to have some basic level of understanding to at least prototype my ideas. I followed a few tutorials, one to make a coffee cup, which turned out great, and then I attempted to recreate a pint glass I just happened to have on my desk for…reasons. The pint glass left something to be desired, but for my first blind attempt, I’m still proud.

These were just learning exercises to see what I could apply to my games, and then I took an extremely rough stab at modeling a level for the FPS I was working on.

Does it look good? Absolutely not, but again, this is a learning experience.

Want to play it? I’ve been pushing up regular development builds here.

Future Ideas

One thing that struck me was how unbelievably easy it is to integrate the Oculus Rift, the leading technology in virtual reality and the first device that really makes it feel like VR has a future. I debated purchasing a Development Kit 2 (DK2), but with the next Oculus prototype, Crescent Bay, expected within the next year, I’ve decided to wait. Luckily, I do have a buddy with access to a DK2, and we’ve set aside a day this holiday to see what we can create using Unity and the Oculus Rift.

How does the integration work? Well, Unity ships with two pre-made “Character Controllers”, objects you literally drag and drop into the game to get basic character movement and control. I say basic, but there are parameters for all kinds of things you can tweak, such as how steep a slope can be before the player can’t scale it, how tall a step can be before the character has to jump rather than walk up it, etc. The two pre-made ones are for first-person control (which I’m using in my FPS demo) and for third-person control. The Oculus team ships a third character controller that you can drag and drop into your game, an OVRCharacterController. Just as you do with the FirstPersonCharacterController, you drop this into your game, and instantly you get control of your character using the Oculus Rift.
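
Those tweakable parameters map onto Unity’s CharacterController component, which (at least in the standard first-person prefab) sits underneath the drag-and-drop controller, and you can read or set them from code as well as in the Inspector. A small sketch with made-up example values:

using UnityEngine;

// Sketch only: the slope and step parameters described above live on the
// CharacterController component. The numbers here are just example values.
public class ControllerTuning : MonoBehaviour
{
    void Start()
    {
        CharacterController cc = GetComponent<CharacterController>();
        cc.slopeLimit = 45f;   // steepest slope, in degrees, the character can walk up
        cc.stepOffset = 0.3f;  // tallest step, in meters, it will climb without a jump
    }
}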

I’m simplifying things a bit, as some pieces don’t work out of the box. For example, your UI may need some tweaking, depending on how you structure it. But getting the Oculus working in your existing game with little to no work is, to me, absolutely amazing.

I’m extremely excited to develop on the DK2 later this month, and will be documenting the process as well.