So what is Windows 7 DirectAccess?

A couple of weeks ago I was fortunate to be offered a place on a two-day class at Microsoft to learn about the DirectAccess functionality of Windows 7 and Server 2008 R2.  The class was run by Fernando Cima from MS in Brazil (who, it has to be said, really knows his stuff).

For many years now private company networks have been completely separate from the hostile environment of the Internet.  This separation is achieved through the use of firewalls etc, and in many cases internal networks use private IP address ranges that can’t be directly addressed from the internet.

Whilst this separation provides a level of security, it does make life difficult for mobile workers; after all, they’re on the Internet wanting to gain access to the very resources companies want to protect.  Many organisations use solutions like VPNs, Citrix and application publishing (for example Outlook Web Access published through a reverse proxy) to get around these limitations and make services available to remote workers.

These solutions are definitely useful, but the user experience for mobile workers is often less than ideal.  To maintain security they’ll almost certainly have to log on to these remote services separately, probably using complex passwords or two-factor authentication like RSA SecurID.  It’s hardly seamless.  Using a VPN probably also means that any traffic between the laptop and the Internet is routed through the VPN, into the company network, out of the company Internet connection and back again.  Again, not exactly ideal (though split-tunnelling can help).

What’s more, remote clients are often a support nightmare.  Devices not on the network are hard to manage.  With a VPN they are only accessible if and when the user launches the VPN.  Whether the connection is up and running long enough for patches to download etc. is often down to chance.

So all in all there’s a lot of room for remote access solutions to be improved.  This is where DirectAccess tries to help.  It provides remote clients with seamless access to both the Internet and the internal network (intranet).  If the remote client has Internet access, it automatically has access to the intranet and the services within it.  No client software to launch, no additional authentication, nothing.  Sounds good, eh?

So how does it work… well the first thing to note is that DirectAccess isn’t a fancy new product in its own right – it’s really just a clever implementation of IPv6 and IPSec.  There is however a DirectAccess server role, and a wizard to set the whole thing up. 

At a high level it makes sure that both the client computer and the Intranet resources are globally addressable, and secures communication between them.  Of course to do this there are a number of problems to overcome:

Addressing
As mentioned, internal networks often use private address ranges, and even if they didn’t there simply aren’t enough IPv4 addresses available for companies to use globally unique addressing. 

Fortunately IPv6 is here to save the day… (this is the scary part).  IPv6 provides globally unique addressing, allowing the client and intranet services to address each other.  The techies among you are probably now thinking about how your networks are all IPv4 and that this is never going to work.

From Windows Vista onwards, the IP stack in Windows has natively supported IPv6, and enables it by default.  In fact, Windows now favours IPv6 and will use it to communicate with other Vista/2008/7 nodes if it can.  So there’s a fair chance that some of your computers have an IPv6 address already.  
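
If you want to check this for yourself, here’s a tiny illustrative Python sketch (nothing DirectAccess-specific, just the standard library) that asks whether a name resolves to an IPv6 address on your machine:

```python
import socket

def has_ipv6(hostname: str) -> bool:
    """Return True if the name resolves to at least one IPv6 address."""
    try:
        return bool(socket.getaddrinfo(hostname, None, socket.AF_INET6))
    except socket.gaierror:
        return False

print(has_ipv6("localhost"))  # usually True on a dual-stack Vista/7/2008 machine (::1)
```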

In many cases of course computers will be sitting on existing IPv4 networks.  IPv6 traffic would therefore need to traverse these older networks to be of any use.  To achieve this, IPv6 can be encapsulated within IPv4 packets.  Windows now natively supports a number of protocols for doing this, and will automatically encapsulate IPv6 should it determine that there is only IPv4 connectivity between two IPv6 nodes (it can also be forced).  On an intranet, ISATAP (Intra-Site Automatic Tunnel Addressing Protocol) is used to achieve this, and is enabled when DirectAccess is set up.  Older IPv4-based resources (for example Windows Server 2003 and earlier) can be accessed through the use of IP translation devices such as Network Address Translation-Protocol Translation (NAT-PT).
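
For the curious, an ISATAP address simply embeds the host’s IPv4 address in the interface identifier of an IPv6 prefix (the 0:5efe:w.x.y.z format from RFC 5214).  A rough Python sketch of the mapping, where the prefix and IPv4 address below are made-up examples rather than anything DirectAccess generates for you:

```python
import ipaddress

def isatap_address(prefix: str, ipv4: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with the ISATAP interface identifier ::0:5efe:w.x.y.z."""
    net = ipaddress.IPv6Network(prefix)
    v4 = int(ipaddress.IPv4Address(ipv4))
    iid = (0x00005EFE << 32) | v4  # 0000:5efe followed by the embedded IPv4 address
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(isatap_address("2001:db8:1:1::/64", "10.0.0.5"))  # 2001:db8:1:1:0:5efe:a00:5
```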

The other place where IPv6 traversal over IPv4 networks has to be considered is the Internet itself, which is IPv4, along with the client’s local network, which is also likely to be IPv4.  DirectAccess clients will attempt to use traversal technologies to gain access to the DirectAccess server, in these cases either 6to4 or Teredo depending on whether the client has a public Internet address or is on a NAT’ed network.  Again this is automatic and handled by the Windows IP stack.  As a last resort, if 6to4 and Teredo traffic is blocked, a new protocol, IP-HTTPS, is used to encapsulate IPv6 packets inside HTTPS.
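
To give a flavour of how these transition technologies work, 6to4 derives a client’s IPv6 prefix directly from its public IPv4 address (that’s the RFC 3056 mapping, not anything DirectAccess-specific).  A minimal Python sketch, using a documentation address as the example:

```python
import ipaddress

def derive_6to4_prefix(public_ipv4: str) -> ipaddress.IPv6Network:
    """Build the 2002::/16-based 6to4 prefix for a public IPv4 address (RFC 3056)."""
    v4 = int(ipaddress.IPv4Address(public_ipv4))
    # The 32-bit IPv4 address sits straight after the 2002: prefix, giving a /48.
    prefix_int = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

print(derive_6to4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Teredo does something similar for NAT’ed clients, embedding the Teredo server address and the client’s external address and port behind the 2001::/32 prefix.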

So in short, you should be able to use IPv6 for DirectAccess with only minimal work to the underlying network.  MS themselves are very clear that IPv6 is a long term goal, and have provided a heap of technologies in Windows to help deal with a long transition.

Security
If both your remote clients and your internal services are globally addressable, security is going to be a big concern.  DirectAccess uses the IPSec features of IPv6 to authenticate the client connecting to the DirectAccess server, and protect the traffic that passes between them. 

Computer authentication certificates can either be issued from a central CA in the domain, validating that the remote client is a company asset, or be issued by the health authority within a Network Access Protection (NAP) environment.  In that case computers are validated as being ‘healthy’ before being granted access.

Name Resolution
Using DirectAccess, clients effectively have access to both the Internet and the intranet.  You therefore need to ensure that traffic intended for the intranet goes there, and not out to the Internet.  Windows 7 does this using a Name Resolution Policy Table (NRPT).  This table contains entries for internal namespaces and corresponding internal DNS servers.  When an application wants to access a resource, the resource’s name is compared to entries in the NRPT: if there’s a match, the internal DNS server is used to resolve the name; if not, the Internet connection’s DNS server is used.  In this way traffic is routed to the correct location.
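
Conceptually the lookup is just a suffix match against that table.  A toy Python sketch of the idea (the namespace and server addresses are invented examples, and this isn’t how Windows implements the NRPT internally):

```python
# Toy NRPT: internal namespaces mapped to internal DNS servers (made-up values).
NRPT = {
    ".corp.example.com": "10.0.0.53",   # hypothetical internal DNS server
}
INTERNET_DNS = "192.168.1.1"            # hypothetical local/ISP resolver

def pick_dns_server(name: str) -> str:
    """Use the internal DNS server if the name matches an NRPT namespace, else the normal resolver."""
    for suffix, server in NRPT.items():
        if name.lower().endswith(suffix):
            return server
    return INTERNET_DNS

print(pick_dns_server("intranet.corp.example.com"))  # 10.0.0.53 (resolved internally)
print(pick_dns_server("www.bbc.co.uk"))              # 192.168.1.1 (resolved on the Internet)
```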

So at a high level that’s how DirectAccess works.  There’s a lot more to it than that, and I’ll try to post more over the next few days.  It’s a pretty complex thing to deal with – my head hurt after the two-day class having struggled to take it all in.  I’ve never done anything with IPv6, so that was a hell of a learning curve, but once you go through it a few times it makes a lot of sense.

IPv6 is probably going to be Microsoft’s biggest obstacle in gaining traction with DirectAccess; many people I’ve spoken to about it run away when you point out that IPv6 is required.  Having worked through the implementation in a few labs, however, it’s not all that bad.  Whilst there are things you would want to do before enabling IPv6, it’s far less work than I thought it would be – at least to get to a point where DirectAccess works.

Do the benefits of DirectAccess make it worthwhile?  Well, it’s certainly a great solution.  Once it’s working, remote clients are on the internal network, so communication is bi-directional and remote clients can not only access internal resources, but can also be accessed from those internal resources for management and support.  Group policy applies as the computer starts up and users log on, patches apply and applications can be delivered.  For organisations with large numbers of remote users that’s pretty compelling functionality.

Whilst IPv6 might be perceived as an obstacle for DirectAccess, DirectAccess is probably the first ‘killer app’ for IPv6.

Google Moon

To help celebrate the 40th anniversary of the moon landings, Google has added the Moon to Google Earth.

“Forty years ago, two human beings walked on the Moon. Starting today, with Moon in Google Earth, it’s now possible for anyone to follow in their footsteps,” said Moon in Google Earth Product Manager, Michael Weiss-Malik. “We’re giving hundreds of millions of people around the world unprecedented access to an interactive 3D presentation of the Apollo missions.”

Buying a car should be easier

Over the past couple of months I’ve been shopping for a new car.  I’m doing a lot of miles these days as work is 50 miles or so from home, and my old Peugeot just isn’t great doing a hundred miles a day.  Unfortunately my own indecision has seen me bounce between different cars on almost a daily basis!  I just can’t decide.

Whilst I’m sure my girlfriend is probably very bored of me announcing different cars I intend to buy, talking to the various dealerships and seeing how they work has been quite interesting.  It’s actually surprised me how there seems to be a curious separation between car companies’ web presences and their dealerships on the ground.

These days pretty much any information you could want to find out about a car is on the web somewhere.  If you want to read reviews there are sites like Drivers Republic, Evo, 4Car or Autocar that offer one-off reviews and long-term reports.  Lots of marques have owner-run forums where you can read about day-to-day life with the car you’re interested in.  And of course the companies’ own websites have all the specs and configuration tools to pick out what options you’d want and the retail costs.  I say retail costs because you can use places like Drive the Deal or Broker4cars to work out what a good price might be and how much discount you should be able to get elsewhere.

With all this info available on the internet by the time you actually speak to someone at your local dealership, the chances are you probably know what you’re after and just want to see it in the flesh and take a test drive.  It seems to me that at the moment car dealerships aren’t setup to deal with customers in this situation.

Often I’ve found that they’re closed after work and run a skeleton crew of sales people at weekends, just the sort of times people are able to drop in.  Last Sunday I tagged along with a friend who is looking to get a new car.  We went to four dealerships, one was closed, and the other three had a single salesman trying to deal with more people than they could cope with.

Most manufacturer websites will let you configure yourself a car – model, colour, options etc. – and then save it for future reference.  Despite having this information about exactly what the customer wants, so far none of the dealers I’ve spoken to have had the ability to recall that saved spec into their own systems.  Each time I’ve had to run through the whole process again, using a different system, with some poor sales guy – wasting both our time.  In fact, to be honest, the sales guys add very little value to the process, other than being someone to negotiate with.  Having an IT background, it strikes me as a business process crying out for some integration.

From my perspective as a customer, it would seem like the car companies should try to reinvent the way they sell their cars.  I don’t think it would even take that much effort.  Just by shifting opening hours and making better use of the IT systems they already have they could massively improve the customer experience.

Idle thoughts about Azure and the Cloud

Yesterday whilst I was checking my mail I noticed this tweet from Steve Clayton at MS:

[Embedded screenshot of the tweet]

As I’d had a few conversations about Azure earlier in the day it got me thinking.  At the time I replied back saying that maybe there’s some confusion out there about where Azure would fit within a company’s overall infrastructure.  Hopefully most large companies will be on the ball and understand how cloud services can be used, but smaller organisations that might have less mature IT capabilities may not yet understand where they fit or how best to use them.

Whilst I was sitting in traffic earlier today I started to think about ways that platforms like Azure or Amazon’s EC2 might be useful to me, either personally or at work (can you tell I’m a geek?).  To be honest there are loads, but a really basic example might be something like this (it may be stating the obvious!)…

If other organisations are anything like the ones I’ve worked in, they’ll use – and rely on – dozens of internally developed applications.  These might range from trivial custom room booking tools to more critical ticketing systems or internal shopping carts for services.

Traditionally these web or client-server apps would end up needing their own servers in a rack somewhere, burning power and depreciating nicely.  This is ok in the short/medium term, and you might use some virtualisation to get better utilisation out of the hardware.  Even so, the chances are you’ll still be paying for things like software maintenance and you’ll still need to support those systems as well as the app itself.  What’s more, as these apps and their servers age, the level of support they need will probably increase, but at the same time the willingness of the business to pay for upgrades or updates will probably decrease.  After all, it’s worked fine for years, so why should they pay more now?  This is where cloud services can help…

What if, rather than hosting your shiny new application on a server that you buy, rack and support yourself, you instead upload your application to (for example) the Azure Services Platform?  It supports many of the common platforms like .NET, PHP etc., so there shouldn’t be too many changes to the underlying code (I’ll caveat that by saying I’ve not done it myself, so I’m basing this on the conversations I’ve had with MS and those that have).  In effect you have the same application running out in the cloud rather than on your own kit.

There is of course more to consider, basic things like cost through to more in depth subjects like authentication and security.

In terms of cost parity, it largely depends on how utilised the servers are.  Cloud services like Azure and EC2 tend to be billed based on usage, i.e. so many cents per hour of CPU time, and so many cents per GB of storage used.  It’s hard to generalise whether this is cheaper or more expensive than owning your own kit, but you have to remember that those cents per hour of CPU include all the running costs – hardware, OS, power, cooling, hardware support, software support, ongoing patching, upgrades over time etc.  I can say that where I’ve looked at this sort of thing in the past, costs have looked pretty good in comparison.  Especially when you consider that the initial setup cost is far lower (no need to buy kit) and you don’t need to worry about old servers going out of support and having to chase your business/customers for funds to upgrade them in five years’ time.
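
To make that concrete, the comparison boils down to usage-based billing versus amortised hardware plus running costs.  Here’s a back-of-the-envelope Python sketch; every rate and figure is made up purely to show the shape of the sum, not real Azure or EC2 pricing:

```python
# Assumed, illustrative rates only; real cloud pricing varies by provider and region.
CPU_HOUR_RATE = 0.12     # $ per compute hour
STORAGE_GB_RATE = 0.15   # $ per GB-month

def monthly_cloud_cost(instance_hours: float, storage_gb: float) -> float:
    """Usage-based bill: compute hours plus storage, with running costs baked in."""
    return instance_hours * CPU_HOUR_RATE + storage_gb * STORAGE_GB_RATE

def monthly_owned_cost(server_price: float, years: int, monthly_running: float) -> float:
    """Crudely amortise the hardware over its life and add power, support and licences."""
    return server_price / (years * 12) + monthly_running

print(monthly_cloud_cost(instance_hours=730, storage_gb=50))                # one small instance, always on
print(monthly_owned_cost(server_price=4000, years=5, monthly_running=150))  # a modest owned server
```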

Having an app out in the cloud is all well and good, but how do people sign into it?  Is it another username and password for people to remember?  In some cases the answer is probably yes, but where I think MS have a huge advantage is their work to improve the authentication experience for apps hosted on Azure – particularly for business customers. 

Their federation tool, currently called Geneva, allows you to federate Active Directory with Azure (I’ve written about this before here).  In effect, if you have Geneva set up, then accessing an application hosted on Azure gives the same user experience as if it were hosted on your own network and domain.  Users’ usual usernames and passwords will authenticate them, and in most cases they’ll be signed in transparently using integrated authentication.

The security of cloud services is always a question that comes up, and as this week’s news about leaks from Twitter has proved, it’s something you have to consider very hard.  Whether it’s more or less risky than publishing an application to the internet yourself is up for debate.  Nonetheless it’s a question that you’ll probably have to answer when asked.

Anyway, that’s a pretty basic example, but it’s probably a scenario that’s fairly common.  Where private clouds might fit within this is another matter altogether!

What’s in Petrol

Last week I was reading my way through the Seloc Lotus forum (which I read quite a lot) and stumbled on this post by Guy from Opie Oils, one of the advertisers there (fine purveyors of all things oily!).

It’s a little off topic here I suppose, but I thought it was interesting so asked if he minded if I reposted it here.  He’s a wealth of information and has posted quite a bit of interesting stuff on various car forums in the past.  Anyways, here it is:

What’s in Petrol

Well…………! In The Beginning there was Carbon and Hydrogen.

These got together in accordance with rules forged in the Big Bang (yes, really!) to make methane, one carbon atom with 4 hydrogens stuck on.

A bit later, (only 4000 million years) other atoms started getting together and finally came up with Life, a self-reproducing chemical mix. The reproducing bit was quite fun, but after 600 million years even that gets boring.

So, a more or less intelligent life-form invented The Car and the Motorcycle, the ultimate boredom cure. This was, and is, powered by the Internal Combustion Engine, which must have fuel.

Methane is a fuel, which means it burns in air to produce energy, but unfortunately it’s a gas; a tank-full would propel a Honda 50 for about half a mile.

But! Methane had not been idle since the formation of planet Earth, and had joined up with more carbons and hydrogens to make chains called ‘hydrocarbons’. Well, they weren’t called that at the time. They had to wait for a life-form to evolve that liked giving things names, and a hundred and twenty-odd years ago chemists had to learn Latin, so they called the one with five carbons ‘pentane’, the 6-carbon one ‘hexane’, then ‘heptane’, then ….wait for it…. the 8-carbon one ‘octane’ and so on. (If we were naming them now the last one would be called ‘eightane’ so you would need 95 minimum REN for your engine.)

All these things were liquids, very thin and volatile, and pure concentrated energy. The Hildebrand and Wolfmuller (rough 1894 equivalent of the Honda 50) now did 100 miles to the tank full.

Unlike water, these liquids don’t stand around in lakes. They are hidden underground in porous rock so you have to drill for them. The old name was ‘petroleum’ meaning ‘rock oil’ but this was soon shortened to ‘petrol’. The petrol came out of the wells mixed with heavy oil, so it had to be distilled off in an oil refinery.

Early on, the pale coloured stuff that evaporated easily and caught fire very easily was sold as internal combustion engine fuel. It was as simple as that. ‘Octane Number’ hadn’t been invented, but in modern terms this ‘light petroleum fraction’ was about 50 octane. Now we all know that in the GCSE Science engine The Piston squeezes the air/fuel mixture, then The Spark Plug ignites it to produce The Power Stroke.

The trouble is, with 50 octane fuel if The Piston squeezes too much the heat generated by compression makes the stuff Go Bang prematurely before The Spark Plug gets a look in, giving a Power Stroke with as much push as a fairy’s fart. This is why early engines couldn’t use compression ratios above 4 : 1, and 10BHP per litre was seen as hot stuff.

Engines improved but petrol didn’t and even some time after WW 1 a touring 1000cc engine only turned out about 25BHP, and a hot-shot Sport version with the latest overhead valves would need a good tuner to get 50BHP.

So finally some effort was made to stop primitive petrol going bang too soon, and a variable compression engine was invented for research. (The ‘CFR’ engine, as used for finding Research and Motor Octane Numbers, RON and MON, to this very day.) Early on researchers found that the bung in the CFR head could be really screwed down if a heavy liquid called ‘TEL’ (tetra ethyl lead) was added. This was really effective and cheap, and allowed the ‘straight’ petrol to be upped to 90 or even 100 octane, and a whole load of exciting high-power engines were designed around these fuels.

This leaded fuel survived into the late 1990s, but much earlier an amazing discovery had been made. The shape of the petrol molecules was very important. ‘Octane’, the ‘straight eight’ version with 8 carbons in a row, had an ‘octane number’ of 25. It was only the mutant octane with 5 carbons down the middle and the others sticking out from the sides that gave the best results at high compression. (This special octane is still used as a standard for 100 octane. Proper name is 2,2,4-trimethylpentane.)

Today, ‘petrol’ is really a synthetic fluid built up from oil industry feedstocks. Very little of it is unmodified distillate from crude oil. It is tailor-made to include the best compression-resisting molecules so that no poisonous and polluting lead compounds are needed to reach 95 or even 98 octane. Nothing much is added, apart from a touch of detergent to keep the engine top end clean. Quite a lot of petrol now has 5% ‘renewable’ alcohol as a planet-saving gesture, but this also improves the octane number (by about 1), so there’s nothing wrong with that.

Anyway, if you have a motoring holiday instead of flying ComaJet, you are keeping that carbon footprint down….and paying too much tax as well…..but that’s another story.