How OpenStack Is Changing The Enterprise

It may be cliché these days, but I should point out that obviously nothing I
express here represents the views of my current or former employers. I’m
extremely grateful to them, and proud that I had, and continue to have, the
opportunity to work with so many great minds! However, their views are
completely their own and not in any way represented here.

Over the past several years I’ve witnessed first-hand the evolution of the current and coming Enterprise cloud. The cloud’s very concept challenges the traditional notions of how the Enterprise operates today, and has required a careful re-examination of how these businesses think and work in the modern era. My zealous interest in this area has led to a great number of ongoing discussions with peers, colleagues, and others working in this space. Thankfully, it has also led to some of the common observations, and trending solutions, described below.

What Enterprises REALLY Want

The Enterprise is a demanding entity which, thankfully, is driven solely by business logic. The giant companies of the world are not interested in technology because it is new, cool, or trendy. They are interested in how that technology will impact the bottom line. This is a simple, but key observation. Arguments for cloud technology have to be measured against, and impact, concrete business results.

So what cloud benefits does the Enterprise truly care about?

  • Agility
    • Faster time to market
    • Reduction In Effort Hours
  • Insight And Control
    • Ability to more easily govern a huge landscape
  • Lowering Costs
    • Data center optimization
    • Built-in disaster recovery
  • Rapid Innovation
    • Test ideas faster
    • Ability to try new things and fail with less risk
  • Ultimately, Google-esque IT infrastructure which never goes down
    • Self-healing
    • Auto-scaling
    • Mass Orchestration

Dealing With Enterprise Governance

Why Govern IT?

Enterprise governance is a necessary evil, and is often seen as the biggest enemy of the cloud. However, without it, large companies could in no way consider themselves compliant across their infrastructure. Today, these infrastructures are incredibly large, and a standard methodology for maintaining them needs to be in place. ITIL has certainly led the way, but it currently poses some real challenges for cloud due to the Enterprise interpretation of so many of its tasks (often as manual processes). Fortunately, more and more automation is being accepted, provided that the outcomes are the same and there is full transparency in the process.

A Standard Operating Environment

Maintaining an SOE is absolutely essential to Enterprise computing. When managing a large field of servers (sometimes numbering in the 100,000s), businesses cannot tolerate lots of snowflakes (one-off servers). To provide an ideal landscape, one wants to make certain every server is secure, patched, audited, compliant, etc. This process involves creating a set of SOEs by combining:

  • A core OS at a specific patch-level (often Linux or Windows)
  • A set of standard required products (Security, etc.)
  • A standard set of products for a server role (Web, Database, etc.)
  • All the necessary configurations for above

Software like Red Hat Satellite v6.x makes this a central tenet of its product philosophy, and for good reason. Overcoming snowflakes and keeping servers compliant is critical to successfully managing modern IT. OpenStack opens up new doors for solving this problem. With a new delivery model, and a wide-open cloud landscape, we are free to revisit how we build, deploy, and manage servers. By discarding traditional manual processes and relying on RESTful orchestration, image catalogs, and cloud services, we can carve out new enforceable standards with ease. This leads to an interesting paradigm shift trending in cloud-enabled IT today:

Service Delivery Transformation

Many service delivery organizations are adopting new models for cloud. Common models involve delivering standardized “ready-to-go” application instances available through a catalog. This parallels public cloud delivery models. However, it is quite different from the traditional service delivery work of setting up and micro-managing endless farms of servers. Thankfully, removing that burden opens up new avenues for innovation and broader product support. Using emerging technologies like Docker and Puppet, the delivery process is far more streamlined and template-based. Further, adoption of data grid technologies and an Enterprise service bus makes refactoring traditional applications to modern horizontal/elastic models much easier.

Template and Automate

Manually making classic “golden images” to place in Glance would certainly suffice; however, that runs against the cloud concept of being inherently agile. We also need to concern ourselves with ease of deployment and absolute consistency. Finally, we need to maintain a careful, verifiable record of these transactions. Therefore, creating and placing these templates in a version-controlled repository like git makes a lot of sense. In the cloud era, these application (or environment) architecture definitions will become the de facto method for powering automation. They become the “single source of truth” from which to blueprint all of IT. Today these are often documented in common formats such as cloud-init configs, Dockerfiles, Puppet manifests, and Heat templates. New standards like TOSCA (which is intersecting closely with HOT) are starting to provide an agreed-upon way to define even very complex architectures in a simple YAML file. Not only is the Enterprise becoming entirely virtual, but even the architectures for critical applications and environments are essentially becoming code.
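To make that concrete, here is a minimal sketch of such a definition in HOT form. The template body is invented for illustration; “soe-rhel-web-v1” is a placeholder image name from an imagined SOE catalog, not a real image. Writing the file to disk is all it takes to put an architecture under version control:

```shell
# Write a minimal, hypothetical HOT template to disk.
# The image and flavor names below are placeholders.
cat > /tmp/web-tier.yaml <<'EOF'
heat_template_version: 2013-05-23
description: One web server built from the SOE image catalog (illustrative)
resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: soe-rhel-web-v1
      flavor: m1.small
EOF

# In a real workflow this file would be committed to git and fed to Heat;
# here we simply confirm it was written.
grep heat_template_version /tmp/web-tier.yaml
```

From there, something like `heat stack-create -f /tmp/web-tier.yaml web-tier` would hand the definition to Heat.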

With templates in place, automation becomes easy to accomplish. With all the infrastructure and applications defined in a repository, it is a simple task to invoke tools like disk-image-builder/Oz, Heat and cloud-init, Puppet, and so on to perform the orchestration of your defined infrastructures. Provided that it is all hidden behind a nice service catalog (like OpenStack Murano), you are able to create a simple end-user experience wired to an Enterprise-compliant, version-controlled, and transparent automation process.

Pulling It All Together

Moving infrastructure to CI/CD is part of an evolution to the next generation of cloud. Continuous integration and continuous delivery are excellent concepts for developers; however, until now infrastructure itself has not been defined in code. Through the cloud and this paradigm shift, the industry has encountered a brand new way to automate and deliver environments. Whole static DEV/QA environments can be replaced through integration of DevOps processes with Jenkins and OpenStack. This can enable automated provisioning and testing on an isolated exact replica of production environments. Further, when testing is complete, this infrastructure can be returned to the pool. Successful applications can be manually promoted, or automatically integrated into production with canary/blue-green CI deployment patterns. Changes to upstream templates could even be set to trigger automatic (no-downtime) upgrades of infrastructure company-wide. The possibilities are mind-boggling!
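As a rough sketch of that flow, here is what a CI job stage might look like. This is a dry run: the commands are echoed rather than executed (they need a live cloud), and the stack, template, and script names are all invented:

```shell
#!/bin/sh
# Dry-run sketch of a CI job stage: provision a throwaway replica of
# production, run tests against it, then return the capacity to the pool.
# "echo" stands in for real execution; all names are placeholders.
STACK="qa-replica-${BUILD_NUMBER:-1}"

echo heat stack-create -f environments/prod-replica.yaml "$STACK"  # provision replica
echo ./run-integration-tests.sh "$STACK"                           # automated testing
echo heat stack-delete "$STACK"                                    # release resources
```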

Notes On Event Based Management

When dealing with Enterprise inventory requirements, like integration with CMDBs or auto-ticketing systems, make ample use of the OpenStack message queue (AMQP). Many popular products, including CloudForms/ManageIQ, utilize it to address the record keeping necessary to support a constantly changing OpenStack environment. Simple integration with OpenStack event notifications makes writing a custom implementation for most back-ends trivial.

The Future: Dawn Of The Immutable World

We are just at the cloud’s opening act of moving the Enterprise away from worrying about servers, and towards caring about workloads. As the idea starts to set in, the obvious implication of a world of transient servers becomes apparent. If these servers are indeed transient (just template-based cogs in a machine), why should we ever access them directly? Wouldn’t we most desire these cogs to be unchanging and untouched? Ideally, these would only be modified through changes to a single Enterprise “source of truth” (git). The modern application-based cloud servers and new container technologies are providing a great path to the clever realization that we only care about “what goes in” and “what comes out”. IMHO, the future Enterprise will eventually want everything “inside the box” to be completely immutable, governed, and transparent. No access directly to servers, and certainly no changes outside of git.

Let me know what you think, and if you have seen other trends (or flaws in the current ones), please point them out in the comments section below!




Running Windows 7 guests on OpenStack Icehouse

VDI is a great way to enable end-users to take their corporate desktop with them on any device, anywhere in the world. Implemented correctly, it is also a great money saver for enterprises. However, to make this real, you will most certainly find yourself dealing with Windows 7 guests and a healthy dose of cloud automation.

Today OpenStack is picking up pace in the VDI sphere. Companies like Virtual Bridges and Leostream are dotting the OpenStack ecosystem, providing VDI brokering platforms. Some companies have also utilized in-house talent to write cloud automation for the VDI basics. Today we won’t get too deep into the roll-out of VDI on OpenStack. Instead, we will focus on the first problem: getting a Windows 7 desktop on the cloud to begin with.

There are some great tools like Oz which are trying to simplify the process of getting every OS into the cloud. However, there are still some bits being worked on in the Windows space there. In light of that, the road to getting a Windows 7 cloud image created and installed is a manual and somewhat tricky chore. To alleviate the pain, I’m going to walk step-by-step through the process I use to create Windows 7 guests.

There are a few things you will need:

  • A Windows 7 image
  • A Windows 7 product key
  • A Linux box running KVM
  • The KVM Windows Drivers ISO

Once you have those together, it’s time to start the process!

Step 1. Install Windows 7 in KVM

Fire up virt-manager on your Linux server, and you should be greeted with the following friendly GUI:

Screenshot from 2014-05-13 20:59:48

It’s not quite VirtualBox, but it works! :) Click the “Create new virtual machine” button, give the new instance a name and click forward. On the next screen, select your Windows 7 ISO and set the OS properties:

Screenshot from 2014-05-13 21:51:38

Click forward and give yourself 2 GB of RAM and 1 CPU, per the minimum system requirements. On the next screen select 20 GB of space, and uncheck “Allocate entire disk now”:

Screenshot from 2014-05-13 22:21:18

Click forward and review your setup. Be sure to check the customize button before hitting finish:

Screenshot from 2014-05-13 22:23:15

You should now be at a screen where you can be a little more specific in your setup. Switch the network and disk to use virtio as shown:

Screenshot from 2014-05-13 22:26:40 Screenshot from 2014-05-13 22:26:26

Now we need to add a CD-ROM for the KVM Windows drivers. To do this click “Add Hardware”, select Storage, and add a CD-ROM backed by the virtio ISO:

Screenshot from 2014-05-13 22:32:05

Finally, we are ready to click “Begin Installation”! Go through the usual screens, and you will eventually get to here:

Screenshot from 2014-05-13 22:41:49

Uh… where are the drives? No worries, this is what we brought the virtio drivers along for. Click “Load drivers” and browse to E:\WIN7\AMD64:

Screenshot from 2014-05-13 22:46:00

Click “OK” and select the “Red Hat VirtIO SCSI controller”. Your 20 GB partition should now appear. Click next, and go grab some coffee while Windows does its thing.

When it finally prompts you for a user name, enter “cloud-user”. Set a password and enter your product key. Then set the time, etc. At some point you will get a desktop and find you are without Internet connectivity. Time to install more drivers! Open the windows device manager and you should see something like this:

Screenshot from 2014-05-13 23:27:21

Right-click the ethernet controller and navigate to the drivers in E:\WIN7\AMD64\. It should auto-detect your device after hitting OK.

Always Trust Software From “Red Hat, Inc.”!

Repeat this process for the other two broken devices. Finally verify the system can reach the Internet. If everything looks okay, then shutdown the guest OS and open the info panel:

Screenshot from 2014-05-13 23:38:46

Remove both cdroms, and restart the Windows guest.

Step 2. Install Cloudbase-Init

When the instance comes back up, open a browser in the guest, download the latest Cloudbase-Init installer for Windows, and run it:

Screenshot from 2014-05-13 23:50:07

For now, accept the defaults and continue the install. When everything finishes, don’t let the installer run sysprep. Also, before you shut down, edit the Cloudbase-Init configuration in C:\Program Files (x86)\Cloudbase Solutions\Cloudbase-Init\conf and make it look something like this:

bsdtar_path=C:\Program Files (x86)\Cloudbase Solutions\Cloudbase-Init\bin\bsdtar.exe
logdir=C:\Program Files (x86)\Cloudbase Solutions\Cloudbase-Init\log\

Now disable the Windows firewall:

Screenshot from 2014-05-14 00:30:24

All connections to this server will be controlled by the security groups in OpenStack. We should also allow RDP access:

Screenshot from 2014-05-14 00:32:00

Now we can shut down again, this time by manually running sysprep:

C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

Step 3. Upload Image To OpenStack

Now for the easy part! Let’s convert the image to a qcow2, and push it into glance:

# qemu-img convert -c -f raw -O qcow2 /var/lib/libvirt/images/win7.img ./win7.qcow2
# glance image-create --name="Windows 7 (x86_64)" --is-public=True --container-format=bare --disk-format=qcow2 --file=./win7.qcow2

When the upload completes, log into Horizon and verify the image is available:

Screenshot from 2014-05-14 01:06:08

Then try creating a new instance — and don’t forget to set the Admin password:

Screenshot from 2014-05-14 01:08:38

It will take a bit to spin up due to the size (around 4 GB). When the task completes, head over to the instances console and verify you have Windows 7 running [Note: you may need to update the product key in the console on the first boot]:

Screenshot from 2014-05-14 01:33:28

Now you can provision a floating IP and edit your OpenStack security group to add port 3389 (RDP). Then sit back and test connecting to your instance from something fun like an iPad :)

2014-05-14 18.16.08


Now you have a fully functional Windows 7 OpenStack image! With this you can start down the road to a slick OpenStack VDI solution. The first steps on that path are using this image to make a few customized snapshots for the various user groups in your company. These could include system wide changes particular to each division, like customized software or settings. With a little automation magic, you can take these base images, along with persistent volumes tied to each user, and create a nifty “stateless” VDI environment:
OpenStack VDI
In the above example, the user requests a VDI instance. A cloud automation tool communicates with OpenStack to provision a new Windows 7 instance and attach the user’s persistent storage. The user then accesses the desktop through RDP, VNC, or SPICE. When they are finished, they log off and the instance is destroyed. The user’s data, living in a Cinder volume, is reattached to a fresh new image on the next session. The user gets a brand new instance, in a known “perfect state”, every time they log in. This could be bad news for PC support :)

The BYOD movement should not be underestimated either. Employees favor it, it cuts IT costs, and it arguably leads to increased productivity. With cloud VDI, you can answer one of the most important risks in BYOD: maintaining control. No more lost/stolen devices, user-corrupted systems, malware, or viruses. Just transient desktops and data. Anytime, anywhere, any device.


OpenStack Icehouse Feature Review

I’ve been playing with devstack over the past few months, and I’ve been really impressed with the progress on Icehouse leading up to its release last week. There are some key new features, and updates, which I will touch on below:

Compute (Nova)

  • The improved upgrade support is great, and will allow upgrades of the controller nodes first, followed by rolling updates of the compute nodes (no downtime required!)
  • The KVM / libvirt driver now supports reading kernel arguments from Glance metadata.
  • KVM / libvirt also got some security boosts. You can now attach a paravirtual RNG (random number generator) for improved encryption security. This is also enabled through Glance metadata with the hw_rng property.
  • KVM / libvirt video driver support. This allows specification of different drivers, video memory, and video heads. Again, this is specified through Glance metadata (hw_video_model, hw_video_vram, and hw_video_head).
  • Improved scheduler performance
  • Scheduling now supports server groups for affinity and anti-affinity.
  • Graceful shutdown of compute services by disabling processing of new requests when a service shutdown is requested but allowing requests already in process to complete before terminating.
  • File injection is now disabled by default! Use ConfigDrive and metadata server facilities to modify guests at launch.
  • Docker driver removed from the Icehouse release. :-( The driver still exists and is being actively worked on, however it now has its own repo outside Nova
  • Important note: Nova now requires an event from Neutron before launching new guests. The notifications must be enabled in Neutron for this to work. If you find guests failing to launch after a long wait and an error indicating “virtual interface” issues, give the following a shot to disable this check in Nova:

    vi /etc/nova/nova.conf
    Set vif_plugging_is_fatal=False and vif_plugging_timeout=0
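For example, the edit can be made non-interactively. The snippet below generates a small stand-in for /etc/nova/nova.conf so the sed commands can be demonstrated end-to-end; on a real controller you would point sed at the real file instead:

```shell
# Stand-in for the [DEFAULT] section of /etc/nova/nova.conf
cat > /tmp/nova.conf <<'EOF'
[DEFAULT]
vif_plugging_is_fatal=True
vif_plugging_timeout=300
EOF

# Disable the Neutron event check (the same edit you would make by hand)
sed -i -e 's/^vif_plugging_is_fatal=.*/vif_plugging_is_fatal=False/' \
       -e 's/^vif_plugging_timeout=.*/vif_plugging_timeout=0/' /tmp/nova.conf

grep vif_plugging /tmp/nova.conf
```

Remember to restart the Nova services after changing the real file.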

Object Storage (Swift)

  • The new account level ACLs in Swift allow for more fine grained control of object access.
  • Swift will now automatically retry on read failures. This makes drive failures invisible to end-users during a request.

Image Service (Glance)

Nothing has been reported in the official changes, but there has been some activity on GitHub. Much of the work seems to be stability- and cleanup-related.

OpenStack Dashboard (Horizon)

  • Live Migration Support
  • Disk config option support
  • Support for easily setting flavor extra specs
  • Support explicit creation of pseudo directories in Swift
  • Administrators can now view daily usage reports per project across services

Identity Service (Keystone)

  • There is now separation between the authentication and authorization backends. This allows holding identity information in a source like LDAP, and using authorization data from a separate source like a database table.
  • The LDAP driver updates added support for group based role assignments.

Network Service (Neutron)

  • New OpenDaylight backend.
  • Most work on Icehouse’s Neutron went towards improved stability and testing.

OpenStack Orchestration (Heat)

  • HOT template format is now the recommended format for authoring Heat templates.
  • The OS::Heat::AutoScalingGroup and OS::Heat::ScalingPolicy now allow the autoscaling of any arbitrary collection of resources.

Database as a Service (Trove)

  • Experimental support for MongoDB, Redis, Cassandra, and Couchbase

Overall, there are a ton of features and changes beyond what I documented here. Check out the official release notes for more info.


Installing OpenStack Icehouse On RHEL 7

The public “release candidate” of RHEL 7 (Red Hat Enterprise Linux) came out yesterday, and I decided to take a shot at installing the latest OpenStack RDO on it. The install was smooth, and surprisingly easy. To try it out yourself, follow the steps below.

Install RHEL 7

Grab the RHEL 7 Release Candidate from here [ Note: You must have a current Red Hat Enterprise Linux subscription. ] You can also download an OpenStack / KVM ready qcow2 image to quickly get up and running. Install RHEL 7 on your host server, or in a VM. Make sure to register with:

# subscription-manager register --auto-attach

Update the system:

# yum -y update

Reboot if necessary (kernel update, etc.)

If you are running an instance using the rhel7 qcow2, you should log in and edit root’s ssh authorized_keys. This will allow ssh to root, and generally make things easier when we run packstack:

cloud-user$ sudo -i
# vi /root/.ssh/authorized_keys (remove everything on the first line before "ssh-rsa")

Install EPEL 7

Add the EPEL 7 beta repo on each host with:

# yum -y install

Install Icehouse RDO

For each host install the Icehouse RDO repo:

# yum install -y

On the controller node run:
# yum install openstack-packstack

Create ssh keys (optional)

If you have multiple hosts you should create root ssh keys, and add them to the authorized_keys file on each host. Log into the host where you will be running packstack (the cloud controller node), and execute the following as root:

# ssh-keygen (accept defaults)
# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

For each hostN:

# perl -e '$pub=`cat /root/.ssh/id_rsa.pub`; chomp $pub; print "ssh root\@hostN echo \"$pub >>/root/.ssh/authorized_keys\"\n"' | sh
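The same key distribution can be written as a plain loop, which is easier to read. Shown here as a dry run (the commands are echoed, not executed) with placeholder host names and a placeholder key:

```shell
# Append the controller's public key to each host's authorized_keys.
# Dry run: "echo" prints the command that would run for each host.
# The key below is a placeholder for the contents of /root/.ssh/id_rsa.pub.
PUB="ssh-rsa AAAAexample root@controller"
for host in host1 host2; do
  echo ssh "root@$host" "\"echo '$PUB' >> /root/.ssh/authorized_keys\""
done
```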

When you are finished test logging into one of the other servers as root. You shouldn’t be prompted for a password.

Run packstack

The best approach to using packstack is to run:

# packstack --gen-answer-file=config.txt

Edit config.txt for your environment, then execute:

# packstack --answer-file=config.txt

If you are in a hurry a packstack --allinone will get you up and running all on one node.
Likewise, a packstack --install-hosts=host1,host2 will install on two hosts, making host1 the cloud controller, and host2 a compute node.

packstack will take a while to run, but on a clean install of RHEL 7 you should soon see:

**** Installation completed successfully ******

Congratulations! It’s a cloud!

Check Out Your Cloud!

Source your keystonerc_admin file and verify services are up:

# source /root/keystonerc_admin
# openstack-status

You should see a lot of “active” components, and some additional info. If you have no errors, then it is time to connect to the dashboard!

First, allow connections to the OpenStack dashboard (horizon):

# vi /etc/openstack-dashboard/local_settings

(Add the hosts you’d like to ALLOWED_HOSTS. Be sure to add the floating IP if you are running this on top of another OpenStack install!)

# systemctl restart httpd

At this point you should be able to log into the dashboard. Go to http://the-address-of-the-controller-node/dashboard/ and you should see the login page.


Cat the keystonerc_admin created by packstack, and log in as the admin user with the supplied password.


Fedora: Encrypting Your Home Directory

There are a number of steps for encrypting your home directory in Fedora, and enabling system applications like GDM to decrypt your files on login. I’ll walk through the process of how I got this set up on my own machine.

First, make sure you have ecryptfs and related packages installed:

# yum install keyutils ecryptfs-utils pam_mount

Then you can either go the easy way:

    # authconfig --enableecryptfs --updateall
    # usermod -aG ecryptfs USER
    # ecryptfs-migrate-home -u USER
    # su - USER
    $ ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase (write this down for safe keeping)
    $ ecryptfs-insert-wrapped-passphrase-into-keyring ~/.ecryptfs/wrapped-passphrase

[All done! Now you can log in via GDM or the console (“su – user” will not work without running ecryptfs-mount-private)]

OR the hard way, which I followed. There are some benefits to going this route: it is a much more configurable install, which allows you to select the cipher and key strength.

First enable ecryptfs:

# authconfig --enableecryptfs --updateall

Move your home directory out of the way, and make a new one:

# mv /home/user /home/user.old
# mkdir -m 700 /home/user
# chown user:user /home/user
# usermod -d /home/user.old user

Make a nice random-ish passphrase for your encryption:

# < /dev/urandom tr -cd '[:graph:]' | fold -w 64 | head -n 1 > /root/ecryptfs-passphrase
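The pipeline should yield exactly one 64-character line. A quick sanity check (re-running it into a temporary file rather than /root) looks like this:

```shell
# Re-run the passphrase pipeline into a temp file and verify its length:
# 64 printable characters plus the trailing newline is 65 bytes.
tmp=$(mktemp)
< /dev/urandom tr -cd '[:graph:]' | fold -w 64 | head -n 1 > "$tmp"
wc -c < "$tmp"
rm -f "$tmp"
```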

Mount the new /home/user with ecryptfs:

# mount -t ecryptfs /home/user /home/user
(choose passphrase, any cipher, any strength, plain text pass through, and encrypt file names)
# mount | grep ecryptfs > /root/ecryptfs_mount_options

Add to the /etc/fstab (with the mount options from ecryptfs_mount_options above) like so:

/home/user /home/user ecryptfs rw,user,noauto,exec,relatime,ecryptfs_fnek_sig=113c5eeef8a05729,ecryptfs_sig=113c5e8ef7a05729,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough,ecryptfs_unlink_sigs 0 0

Wrap up the passphrase with the user’s login password:

# ecryptfs-wrap-passphrase /root/.ecryptfs/wrapped-passphrase

Copy over files to the new home dir:

# su - user
$ rsync -aP /home/user.old/ /home/user/

Unmount /home/user and set up the initial files for ecryptfs and pam_mount:

# umount /home/user
# usermod -d /home/user user
# mkdir /home/user/.ecryptfs
# cp /root/.ecryptfs/sig-cache.txt /home/user/.ecryptfs
# cp /root/.ecryptfs/wrapped-passphrase /home/user/.ecryptfs
# touch /home/user/.ecryptfs/auto-mount
# touch /home/user/.ecryptfs/auto-umount
# chown -R user:user /home/user/.ecryptfs
# su - user -c "ecryptfs-insert-wrapped-passphrase-into-keyring /home/user/.ecryptfs/wrapped-passphrase"

Now it gets interesting! Edit /etc/pam.d/postlogin and add the ecryptfs and pam_mount lines shown below:

# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        optional      pam_ecryptfs.so unwrap
auth        optional      pam_mount.so
password    optional      pam_ecryptfs.so unwrap
session     optional      pam_ecryptfs.so unwrap
session     [success=1 default=ignore] pam_succeed_if.so service !~ gdm* service !~ su* quiet
session     [default=1]   pam_lastlog.so nowtmp silent
session     optional      pam_lastlog.so silent noupdate showfailed
session     optional      pam_mount.so

Edit /etc/security/pam_mount.conf.xml and replace the whole file with:

<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<pam_mount>
<debug enable="0" />
<luserconf name=".pam_mount.conf.xml" />
<mntoptions allow="*" />
<mntoptions require="" />
<logout wait="0" hup="0" term="0" kill="0" />
<lclmount>/bin/mount -i %(VOLUME) "%(before=\"-o\" OPTIONS)"</lclmount>
</pam_mount>


# su - user -c "vi /home/user/.pam_mount.conf.xml"

And add this:

<volume noroot="1" fstype="ecryptfs" path="/home/user" />

Now you can login and see your decrypted files! (“su – user” will not work without running ecryptfs-mount-private.)

You should set up swap encryption for both of these methods with:

# ecryptfs-setup-swap

If you want to go that extra mile, you can symbolically link your /home/user/.ecryptfs/wrapped-passphrase to a flash drive and mount it at boot, or use autofs or some scripting to mount it on login (just in time for PAM to access it). However, if you are going to go that far, you should look into more CIA-level disk encryption, like TrueCrypt.


Cloud Gaming Explained

The next generation of consoles is almost upon us. Before we save up for that glorious new $500 gaming experience, it’s a good idea to understand just what we are paying for. A big part of the next generation is Cloud Gaming and Streaming Games. These innovations are very exciting, but not everyone understands what they actually mean.


All previous generations of consoles have been restricted by the power of the client (the actual hardware device.) This is because our console is dedicated to doing all the required work to get the game to function. It needs to be powerful enough to process the physics, control the AI, perform collision detection, render complex HD scenes, etc. So it is only reasonable to assume that every device has to be powerful enough to actually run the game we want to play… right?


Not anymore. Average network speeds are moving up around the globe, and cloud technologies are stabilizing, standardizing, and taking hold in every industry, including gaming. We are entering a new era in what is possible by leveraging these strengths. My first brush with this concept was way back in the 1990s. At that time, it was common to see “dumb terminals” in schools, computer labs, and libraries across the US. These were very simple machines that hooked up to a central computer via a serial port and provided a rudimentary text console. The devices themselves lacked much capability. They could turn on, proxy data through the serial port, and print things to the amber or green monochromatic display. All the work was done on the back-end server, and the thin client had just enough horsepower to allow user interaction. That simple concept, “pushing all the work to the server”, is the basis of Cloud Gaming.


In the 2000s we began to see some kewl browser-based games powered by Flash or Java. Unfortunately, there was no elegant way at that time to leverage the graphics hardware capabilities of the host. Finally, WebGL was introduced in 2010 (the first stable release was in 2011). It provided a new standard with OpenGL and HTML5 Canvas integration, a JavaScript API, and hardware GPU acceleration. It’s now a cross-platform, royalty-free standard built into most web browsers. I became interested in seeing the possibilities of WebGL right away. I scoured the net looking for something to provide a good showcase, and I came across a nifty project called quake2-gwt-port. I have a screencast below which I made in April 2010. I was running the server on localhost, using a test release of Chrome, and while there is no sound in the video, it was playing perfectly for me through HTML5 <audio> elements!


WebGL Quake II


This is a great example of “how” Cloud Gaming will work. Your console will have to shoulder much less of the responsibility. It will communicate through some proprietary protocol to servers in the cloud which do all of the heavy lifting. Your device just needs enough power to display the interface, and transmit user interaction. If a web browser can do this, imagine what a specifically cloud-designed console could do! The technology evolution to cloud gaming will allow these future devices to be cheaper, smaller (think iPhone sized), and have a much longer life span. Their internal technology could remain static (even get cheaper), while the content they provide has the potential to become infinitely more complex and powerful.


Cloud Streaming is how companies like Sony plan to tie this into a business model. They will most likely provide a subscription service which gives users access to a huge library of games, much like Netflix does for movies. When a user selects a game to play, a properly sized cloud instance will spin up (in a nearby availability zone) and begin transmitting the content to the user’s console. This provides some deeply interesting cloud-based cost models for the provider. Time will tell if those models pay off, but I have a feeling they will.


If you are like me, you’re probably wondering how you can check out some of that cloud gaming awesomeness right now! Well, you can download the Quake II port at the link above and stick it on a cloud instance. I’ll be doing that myself later in the week, and I’ll post a brief howto. I’m also playing around with a tool called Emscripten that compiles C and C++ into JavaScript. I want to get a cloud-ified ScummVM (or some other emulator) up and running in the cloud, and see what the end-user experience is like. I’ll keep the blog updated with my adventures.

