Virtual Desktop Infrastructure (VDI)

It seems that these days pretty much every cold call I get from a vendor is about VDI.  If you're an enterprise, it's pretty much all anyone wants to talk to you about.  It's clearly the topic of the day, but what does it actually mean?

At work I've been looking at options for VDI for a while, though to be honest I prefer to refer to the topic as Centralised Desktops.  For me 'VDI' implies a particular solution, whereas if centralisation is what you want, you should be looking beyond simple virtualisation.

[Diagram: Virtualised Desktop Reference Architecture]

So what is it?  Well, simply put, it's about moving the execution of your desktop environment away from your users' desks and into a managed central location, probably a data centre.  So no more desktop computers (well, probably… but we'll come to that another time).

Instead, on each desk you put a Thin Client.  These are small, cheap, power-efficient devices that really don't do much more than receive the 'screen' from the newly centralised desktop and send the keyboard and mouse information back down the wire.  The actual OS and applications are running in a far-off data centre.  This is where it gets interesting, as there are many platforms they can run on.
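To make that division of labour concrete, here's a minimal Python sketch of the loop a thin client conceptually runs.  To be clear, this is my own toy framing purely for illustration; real protocols like ICA and RDP are vastly more sophisticated, and the host name and port below are invented.

```python
import socket
import struct

# Illustrative only: a toy framing, not a real remote display protocol
# such as ICA or RDP. Host name and port are invented.
DESKTOP_HOST = "desktop01.datacentre.example"
DISPLAY_PORT = 9000

def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("desktop session closed")
        buf += chunk
    return buf

def draw_to_screen(frame):
    pass  # stub: a real client would blit these pixels to the display

def pending_input_events():
    return []  # stub: a real client would drain the keyboard/mouse queue

def run_thin_client():
    with socket.create_connection((DESKTOP_HOST, DISPLAY_PORT)) as sock:
        while True:
            # Receive one screen update: a length-prefixed blob of pixels.
            (length,) = struct.unpack("!I", recv_exact(sock, 4))
            draw_to_screen(recv_exact(sock, length))
            # Send any pending keyboard/mouse events back up the wire.
            for event in pending_input_events():
                sock.sendall(struct.pack("!B", event))
```

The point is simply that all the heavy lifting happens at the far end; the device on the desk just shuttles pixels one way and input the other.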

The solution most people think about with VDI is running the desktop OS and apps on a virtual machine.  In this scenario you'd typically have a server running a hypervisor such as VMware ESX, Hyper-V or Citrix's XenServer.  That server would host a number of desktop OSes that can be presented out to the thin clients on people's desks.

Now that's a good approach for most people, but it's not the only one.  For high-end users, the guys doing CAD or analysis work, a VM isn't going to cut it.  A share of CPU time and memory might not be enough.  For these sorts of users something like HP's Blade Workstations could be the answer.  These are basically high-spec computers squeezed into a blade form factor.  If you're familiar with blade servers, they're essentially the same thing but with better graphics capability.

So using blades you can give end users very high-end computing capacity from a remote location.  But what about the other end of the spectrum, the people in your organisation who have very low computing requirements?  There's a pretty good chance that for some people even a desktop VM is over-spec'd.  For these guys, more traditional Citrix/Terminal Services type solutions are still a very good fit.

In that sort of scenario you'd have a single server OS that many people connect and log on to.  They then share the OS and applications running on it as they are presented back to the thin client.  Of course, each user is only getting a share of the server and OS resources, but that's exactly the point: each user consumes a share of a single server (and its costs) and a single OS (and its costs).  Per user, it's cheap!
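The economics are easy to illustrate with a back-of-an-envelope sum.  The figures below are invented purely for illustration, but they show why a shared server OS wins for light users:

```python
# Invented, illustrative figures: per-user cost of a shared terminal
# server versus a dedicated desktop VM for each user.
server_cost = 5000.0      # one server: hardware plus server OS licence
users_per_server = 50     # light users happily sharing one OS

vm_cost_per_user = 400.0  # a dedicated desktop VM plus its OS licence

shared_cost_per_user = server_cost / users_per_server
print(f"Shared terminal server: {shared_cost_per_user:.0f} per user")  # 100
print(f"Dedicated desktop VM:   {vm_cost_per_user:.0f} per user")      # 400
```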

So that's what the two ends of the solution look like.  But how do you link them up?  First, let's talk about how the 'screen' gets from the centralised desktop to the thin client (and, of course, how the keyboard and mouse get back the other way).

There are a number of protocols for achieving this.  For years Citrix has had ICA.  It's tried and tested; I'd hazard a guess that most larger businesses are probably using it in some way or other.  Microsoft has RDP, which has shipped with every version of Windows since Windows 2000.  Again, it's tried and tested: Remote Assistance uses it, for example, and I imagine pretty much every Windows server in the world uses it for management.  The problem with these protocols has been that whilst they're great for running bog-standard Windows and Office apps, as soon as you throw anything complicated like graphics or media at them they start to choke.  They've improved a lot over the past few years, but there are still limitations.

In addition to the Citrix and Microsoft protocols there are more specialised alternatives that aim to improve the experience for media-intensive applications, or for users on long, high-latency connections.  Good examples are HP's RGS protocol and Citrix's HDX.  Last year we ran a proof of concept using RGS that saw people in our Bangalore office happily using AutoCAD on desktops hosted out of an office in Bristol.  It works very well indeed.

There are other solutions such as Teradici's PC-over-IP (PCoIP), which originally used hardware acceleration at both ends to improve performance, but is now being used by VMware in a software-only capacity as part of its View product.  On paper this looks very good, but I've not really had a chance to try it first hand yet.

What connects the thin client to the centralised desktop?  In the simplest of deployments you can actually hard-code a thin client to talk to a specific desktop/server.  In essence this gives you a 1:1 connection.  That's not necessarily the smartest route, though.  Most solutions will now use a connection broker to negotiate the right central desktop for each thin client or user.  To my mind, a good broker is where the intelligence comes into the solution.

Personally I feel that there isn't a one-size-fits-all solution for VDI.  Perhaps for some organisations that's not true, but for many I think a blend of solutions will be the best choice.  A broker helps you do this.  Say you have a mix of virtual desktops and blade workstations.  How do you make sure your users get the right desktop?  Well, a broker will look at the connection request, who it's from or where it originates, and connect the thin client to the right back end.
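As a sketch, that routing decision is essentially what a broker is doing for you under the hood.  The group names and pool types here are invented, and real brokers have far richer policy engines, but the idea looks something like this:

```python
# A toy connection broker: route each user to the right kind of back end.
# Group and pool names are invented for illustration.
POOLS = {
    "cad_engineers": "blade_workstations",  # high-end users
    "office_staff":  "desktop_vms",         # the typical case
    "call_centre":   "terminal_services",   # light, shared sessions
}

def broker_route(username, groups):
    """Pick a back-end pool for this connection request."""
    for group in groups:
        if group in POOLS:
            return POOLS[group]
    return "desktop_vms"  # sensible default

print(broker_route("alice", ["cad_engineers"]))  # blade_workstations
print(broker_route("bob", ["office_staff"]))     # desktop_vms
```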

What's more, because this process is dynamic it doesn't necessarily have to connect the user to the same desktop each time.  Say some of your central desktops are down for maintenance; the broker will direct users to ones that are working.  Even better, if you have, say, 10,000 people in your company, it's a fair bet that only 80-90% of them are working at any given time.  In that case why have 10,000 desktops and licences?  Just have, say, 8,500 and let the broker make sure they are utilised.  Depending on the solution, the broker can even go off and provision more VMs should extra people show up.
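The oversubscription sum, and the 'provision more on demand' behaviour, can be sketched in a few lines.  The concurrency figure and the overflow logic are assumptions for illustration:

```python
import itertools

# Provision for concurrent users, not total headcount (figures invented).
total_users = 10_000
peak_concurrency = 0.85  # assume at most 85% are ever logged on at once
pool_size = int(total_users * peak_concurrency)
print(f"Desktops to provision: {pool_size}")  # 8500, not 10,000

idle_desktops = list(range(pool_size))
overflow_ids = itertools.count(pool_size)

def allocate_desktop():
    """Hand out an idle desktop; 'provision' a new one if the pool is dry."""
    if idle_desktops:
        return idle_desktops.pop()
    # Pool exhausted: a real broker would ask the hypervisor for a new VM.
    return next(overflow_ids)
```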

Of course, in truth it's not quite that simple.  For one thing, if your desktops aren't going to be persistent (i.e. not tied to a single user/thin client), you need to work out what to do with your users' applications, 'profile' information and data.

Data is the easy one: just don't have any of it local.  Put everything on network shares, in SharePoint or in some other system.  If your desktops are in a data centre next to those storage systems they'll get fast access to everything they need – faster than a traditional desktop would get.  Local data is pretty much always a bad idea anyway.  The one exception might be blade workstations, where demanding apps might need local storage for caching data and the like.

Your users' 'profile' information is slightly more tricky.  If they are effectively moving to a different computer every day, you need to make sure that their settings follow them across those different desktops.  One solution would be Windows Roaming Profiles.  These have been around for years and can work well.  Other solutions, such as AppSense or RTO's Virtual Profiles, do things in a slightly different, more efficient way, but achieve the same goal.
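Conceptually, a roaming profile is just 'copy the settings down at logon, copy them back at logoff'.  Here's a crude sketch of that idea; the paths are invented, and real roaming profiles are considerably cleverer than a blind full copy:

```python
import shutil
from pathlib import Path

# Crude sketch of the roaming-profile idea: settings live on a file share
# and follow the user to whichever desktop they land on. Paths invented.
PROFILE_SHARE = Path(r"\\fileserver\profiles")
LOCAL_PROFILES = Path(r"C:\Users")

def logon(username):
    """Pull the user's settings down to this desktop."""
    shutil.copytree(PROFILE_SHARE / username, LOCAL_PROFILES / username,
                    dirs_exist_ok=True)

def logoff(username):
    """Push any changed settings back to the central share."""
    shutil.copytree(LOCAL_PROFILES / username, PROFILE_SHARE / username,
                    dirs_exist_ok=True)
```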

Applications, now that's the difficult one.  If you think of a normal PC, apps are almost always installed locally, either from CD/DVD or, in business, probably over the wire using something like SMS/SCCM.  That installation takes time, and it's not something you can afford to do every time a user logs on to a centralised desktop to make sure they've got the right applications.

There are two answers to this: Application Virtualisation and (once again) Terminal Services.  App virtualisation has been around for a few years, but has only really taken off over the last year or so.  It's a complex technology, but basically it separates the application from the OS, allowing it to run in its own mini virtual environment.  With the app separated from the OS, you're not restricted to traditional installations.  Most app virtualisation technologies will allow you to 'stream' the application down to a computer as and when it is needed.  Again, this is complex under the hood, but for an end user it means that when they click on the icon, the technology downloads the application components as they are needed, so there's no long installation, just a small initial delay.
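The 'stream on demand' idea boils down to fetching application components lazily, the first time they're touched.  The sketch below is a deliberately naive file-level version; the store layout and cache paths are invented, and real products like App-V work at a much finer grain than whole files:

```python
from pathlib import Path

# Toy on-demand streaming: fetch an app component from the central
# package store the first time it's needed, then serve it from cache.
# Paths and layout are invented for illustration.
PACKAGE_STORE = Path("/srv/app-packages")
LOCAL_CACHE = Path("/var/cache/streamed-apps")

def load_component(app, component):
    """Return a component, streaming it into the local cache on first use."""
    cached = LOCAL_CACHE / app / component
    if not cached.exists():
        # First touch: pull just this piece down from the package store.
        cached.parent.mkdir(parents=True, exist_ok=True)
        cached.write_bytes((PACKAGE_STORE / app / component).read_bytes())
    return cached.read_bytes()

# Launching the app pulls only what's needed to start; the rest arrives
# later, as and when the user exercises those features.
```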

There are, however, some limitations to app virtualisation, which means that other solutions like Terminal Services may still have a place in a VDI environment.  Say you have an app that just won't work in App-V, XenApp or other virtualisation tools?  In that case you can install it natively on a Windows OS and present it out to the virtual desktops using Terminal Services.  It may sound a little convoluted, but it works.

So… that's a real high-level view of what VDI is.  Hopefully it all made sense.  I'm planning to do some follow-up posts with more detail, but for now the diagram above shows a reference architecture for a VDI implementation.  Again, it's quite high level, but I think it shows how these things all fit together.
