Monthly Archives: September 2011

Amazon Kindle Fire: finally the tablet market got interesting

The fanfare around the Amazon Kindle Fire has been fascinating – firstly because the market seemed intent on comparing it to the Apple iPad. In truth, people seem to compare everything to the Apple iPad – and inevitably the iPad comes out on top. But let’s be clear: even with Apple’s mighty buying power, the cost to customers is still $499-$829, which makes it a premium device.

Despite what people say, the iPad is a device that the relatively well-off purchase as a toy. You see it on sofas, in cafes and in meetings, and Apple have sold a shedload of them – some estimates run as high as 46m units in 2011. The iPad outsells the entire Mac range put together.

But this makes it a niche product. And along came a bunch of other devices. HP’s TouchPad. RIM’s Playbook. All also premium devices, from organisations without Apple’s buying power and design capability. Oddly, when you compete with a fine product but without the financial backing, you are unlikely to succeed – and the market agrees.

Google have fared a little better than RIM and HP, but across the whole Android range of tablets from Samsung, Motorola, LG, Acer and others, only 11m are expected to sell in 2011. Because they all directly compete with Apple and – well – Apple do it better.

Along comes Amazon, who have had success with their Kindle e-Reader. It’s pretty unclear how many they have sold this year (CEO Jeff Bezos prefers to focus on e-books sold), but it was 8m last year. Sightings of a Kindle are pretty common, especially on public transport, where it is a fantastic and convenient way to read.

So Amazon know how to sell devices, and it’s by creating commodity devices that are cheap and sold at very low margin – so they can upsell their massive content back catalog.

And along comes the Amazon Kindle Fire. It’s no iPad killer – be serious. But it’s $199! It’s small, it’s cute and it’s cheap enough that the market is no longer niche: $199 opens the tablet up to a huge number of people who could never justify spending $400-800 on a toy.

Probably the Kindle Fire is sold at very low margin or a loss, but Amazon have a business model that already works for the Kindle and you can bet that the model will work just fine for the Fire.

The Kindle Fire will sell as fast as Amazon can make them. I can’t tell you how many that will be, but I heard they can produce about 5m this year. Next year – you can bet the Kindle Fire will outsell the iPad in terms of volume, although not necessarily in terms of revenue or profit.

And having that number of devices out there suddenly makes the tablet market a much more interesting place – one where there isn’t just a single dominant player. And whilst Amazon and Apple don’t compete directly on device sales, they will compete for sales of content like e-books, movies and music. With Amazon’s purchase of Audible and its audiobook back catalog, they are clearly positioning themselves as a very interesting cloud content provider.

Which should finally make things interesting.

SAP HANA and the strange details of hardware certification

I’ve been pondering SAP’s HANA hardware certification strategy this morning and I have to say, it makes little sense. This may well be because I don’t understand the hardware behind it, so if there are greater minds than me out there, then please correct me.

Let’s break down the SAP HANA hardware stack and discuss what’s required – and then try to understand why so few hardware configurations are supported. The hardware vendors tell me that SAP are very prescriptive as to what hardware can be used – presumably so that SAP HANA appliances are as fast as can be, and perform consistently. But if that’s the case, why do the supported platforms vary so widely?

Server Platform

There are two server platforms supported right now – the Nehalem EX, from Dell, Fujitsu, HP and IBM, and the newer (and 40% faster) Westmere EX, from Cisco and IBM. This makes sense, but why not support any Nehalem EX or Westmere EX system? The CPUs all come from Intel in the end, so the systems should perform very similarly.

And to add to this, SAP only certified the Nehalem EX Intel X7560 CPU (which was the fastest), but now that the Westmere EX is out – why not support the whole of that range, rather than just the very expensive E7-x870 parts? All of them are faster than the X7560.

Memory Requirements

There’s also the issue of memory volume – some supported servers, like the IBM x3690, only take up to 256GB RAM – and I thought this was the reason blades weren’t supported: only HP has a 2TB blade, and it has just 40 cores.

Certainly it seems for now that you need one CPU (8-10 cores) per 128GB RAM – so why not just make this the standard? It would mean that certain systems could only support a certain amount of capacity, and customers would have to procure systems accordingly. That kind of choice should be a good thing.
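
To make that rule of thumb concrete, here is a trivial sketch of what it would imply – purely illustrative, using nothing but the 128GB-per-socket ratio quoted above:

```python
# Sketch of the "one CPU (8-10 cores) per 128GB RAM" rule of thumb described above.
# Purely illustrative - not an official SAP sizing tool.

GB_PER_SOCKET = 128  # the ratio quoted in the text

def max_ram_gb(sockets: int) -> int:
    """Maximum RAM an appliance would carry under this rule of thumb."""
    return sockets * GB_PER_SOCKET

for sockets in (2, 4, 8):
    print(f"{sockets} sockets -> up to {max_ram_gb(sockets)} GB RAM")
# 2 sockets -> up to 256 GB RAM (an IBM x3690 class system)
# 4 sockets -> up to 512 GB RAM
# 8 sockets -> up to 1024 GB RAM
```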

Log Volume

Here’s where things get really weird. There is almost no rhyme or reason to the standards for log volumes in SAP HANA. Basically, log volumes are somewhere to store transient information so that, in the event of a database crash, recent transactions can be replayed.

Due to the large data volumes in SAP HANA, a lot of log can be produced very quickly, and you need at least as much log volume as you have RAM. But log volumes generate sequential writes, which the right disk subsystem can handle quite easily.

It appears that SAP HANA requires something like 600MB/sec of sequential write performance. If I were architecting this, I would use a RAID10 SAS array with 8x 146GB disks (for a 512GB appliance, say). It would be cheap and work well – RAID10 is excellent for write performance.
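
As a back-of-envelope check of that suggestion – the per-disk figure here is my own assumption (roughly 150MB/sec of sequential write for a 15k SAS drive), not a vendor number:

```python
# Back-of-envelope check of the RAID10 log volume suggested above.
# PER_DISK_MB_S is an assumed sequential write rate for a 15k SAS drive.

DISKS = 8                       # 8x 146GB SAS drives
PER_DISK_MB_S = 150             # assumed sequential write rate per drive
DATA_SPINDLES = DISKS // 2      # RAID10: half the spindles hold unique data

throughput_mb_s = DATA_SPINDLES * PER_DISK_MB_S
usable_gb = DATA_SPINDLES * 146

print(f"~{throughput_mb_s} MB/s sequential write, ~{usable_gb} GB usable")
# ~600 MB/s sequential write, ~584 GB usable - enough log space for a 512GB appliance
```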

However, all the appliances use solid-state storage for the log. Some use one (or two) Fusion-io cards, which cost about $15,000 per 320GB card. Yes – that’s roughly $60,000 for the log storage alone on a 1TB appliance.

This provides the required performance, but these cards are insanely expensive and a poor fit for several reasons. First, they really shine at random I/O – and log volumes, by their nature, don’t need random I/O.

But most of all, the Fusion-io ioDrives that appear to be used in SAP HANA appliances are based on MLC flash, which has a write endurance limit of around 4PB. Given the amount of data written to a SAP HANA appliance, these will last no time at all and the appliances will start to fail.

And to add to that, all the 1TB SAP HANA appliances have two 640GB Fusion-io drives configured in RAID-0 – so there is no redundancy, and if one fails you lose the log storage and the appliance, and have to restore from backup. Seriously.

Data Storage

Data storage makes the most sense – although again, the configurations are pretty random for a given memory size. Basically you need 4x RAM, and performance doesn’t really matter: the faster the storage subsystem, the faster the appliance starts from cold.
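
Putting the ratios in this post together (log volume roughly 1x RAM, data volume roughly 4x RAM), a trivial sizing sketch would look like this – again, just the numbers quoted above, not an official sizing:

```python
# Storage sizing from the ratios quoted in this post: log ~ 1x RAM, data ~ 4x RAM.

def hana_storage_gb(ram_gb: int) -> dict:
    return {"log_gb": ram_gb, "data_gb": 4 * ram_gb}

for ram_gb in (256, 512, 1024):
    print(f"{ram_gb}GB appliance -> {hana_storage_gb(ram_gb)}")
# 256GB appliance -> {'log_gb': 256, 'data_gb': 1024}
# 512GB appliance -> {'log_gb': 512, 'data_gb': 2048}
# 1024GB appliance -> {'log_gb': 1024, 'data_gb': 4096}
```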

Current storage subsystems are all direct attached storage with SAS arrays, but they vary from IBM’s 256GB model with 8x 300GB 10k SAS disks to HP, who require 24x 146GB 15k SAS disks for the same 256GB appliance. Such discrepancies make no sense – presumably HP wanted their appliance to start faster: their disk subsystem performs 4-5x faster than IBM’s!

The good news is that architecturally, using SAS-based storage makes sense – at least for a single appliance. It’s cheap and cheerful and works well. But why don’t SAP just issue some guidelines for storage performance requirements and let the vendors meet them?

Conclusions

SAP HANA hardware certification is quite new and I’m sure this will bed down, but on the one hand SAP appear to have been very prescriptive about what is required for SAP HANA, while on the other there is huge variation in the configurations provided by different vendors – and therefore, presumably, in the relative performance of different SAP HANA appliances.

And if that is the case, why don’t SAP just create a performance benchmarking tool that runs on Linux? It would measure the size of your main memory and then check whether the appliance you have built is fast enough to run SAP HANA reasonably. If your system passes, you are supported.

By the way, this is exactly what SAP did for the BWA appliance: when you install it, it measures the system and tells you whether CPU, memory and disk meet the minimum requirements. If they don’t, the installer won’t continue.
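
To give a flavour of what such a tool might look like, here is a minimal sketch – the thresholds and the log volume path are my own assumptions, not SAP’s actual criteria:

```python
#!/usr/bin/env python3
# Minimal sketch of a self-certification check: read installed memory and measure
# sequential write speed on the log volume, then pass/fail against thresholds.
# The thresholds and mount point below are illustrative assumptions only.

import os
import time

MIN_RAM_GB = 128
MIN_LOG_WRITE_MB_S = 600
LOG_TEST_FILE = "/hana/log/.bench.tmp"   # hypothetical log volume mount point


def installed_ram_gb() -> float:
    """Read total memory from /proc/meminfo (Linux only)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 * 1024)  # kB -> GB
    return 0.0


def sequential_write_mb_s(path: str, total_mb: int = 1024) -> float:
    """Time a simple 1GB sequential write, fsync'd so it really hits the disks."""
    block = b"\0" * (1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed


if __name__ == "__main__":
    ram = installed_ram_gb()
    mb_s = sequential_write_mb_s(LOG_TEST_FILE)
    ok = ram >= MIN_RAM_GB and mb_s >= MIN_LOG_WRITE_MB_S
    print(f"RAM: {ram:.0f}GB, log write: {mb_s:.0f}MB/s -> {'PASS' if ok else 'FAIL'}")
```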

Isn’t this certification enough?

SAP HANA: an analysis of the major hardware vendors

When I first wrote this blog I didn’t realise a few things: first, that it would become the reference location on the internet for HANA material, and second, that it would therefore require updating!

So here we are, some nine months after the initial post, with a full revision of this information. Hardware vendors: this is based on the publicly certified systems on SAP’s Product Availability Matrix, which SAP update regularly with new appliances and vendors.

SAP released the first version of their in-memory platform, SAP HANA 1.0 SP02, to the market on June 21st 2011. We are now at the release of SAP HANA 1.0 SP04 in May 2012 and things have moved on hugely. We now have High Availability, system monitoring and scale-out appliances: up to 16TB certified and 100TB in the lab.

SAP now have an open hardware platform, allowing multiple hardware vendors (currently seven), so that customers can choose who they want to procure from. This should, in theory, produce a level playing field where prices become commoditised and customers get choice and value.

This article gets into the detail of what it looks like if you actually want to purchase an appliance and it’s based on my experience of working with the hardware vendors over the last 18 months.

Note that my high-level message is pretty clear: SAP HANA hardware is ready for the masses and stable for databases up to 16TB of HANA data (roughly equivalent to 80TB of compressed Oracle data).

What is the SAP HANA Technical Architecture?

SAP HANA is pretty simple. It’s based on the following components:

  • Server – based on Intel’s Nehalem EX or Westmere EX platforms – X7560 or E7-2870, 4870 or 8870 CPUs, respectively. These are big rack-mount systems that take up to 8 CPUs and 80 cores. It’s commodity hardware that you can buy off the web, but for example the Dell PowerEdge R910 with 1TB RAM is $65k list price on their website. I’ve now removed all the Nehalem hardware from this post because it’s no longer sold.
  • RAM – lots of it, and matched to the CPUs. 20 cores allow 256GB RAM, leading to a maximum of 1TB of RAM with current CPUs. Think along the lines of $35k list price for 1TB RAM.
  • Fast Log Storage – Sized to 1x RAM and usually the very expensive Fusion-io ioDrive Duo. These are $15-30k a pop for the 320GB and 640GB drives, respectively. In some configurations, the log and data storage are shared. Fusion-io ioDrive2 is now released, though I have yet to see certified hardware using it. It is half the price of the ioDrive for the same capacity, and much faster too.
  • Data Storage – 4x RAM. On all the certified single-node configurations this is cheap SAS direct storage. You need this so you can power down the appliance and do things like backups. Budget $15-20k for a 1TB storage system. For multi-node configurations it uses some form of shared storage – either a SAN or local storage replicated using IBM’s GPFS filesystem. Prices vary for scale-out.

So theoretically at least, you should be looking at $145-150k for a basic 1TB appliance, based on Dell’s website list prices. Note that this is hardware only – and all of the SAP HANA hardware partners offer a pre-built system with installation services and likely require a support contract. It may add up!

The other big difference since I first wrote this blog is that we now get scale-out appliances from Cisco, Fujitsu, HP and IBM from 1TB to 16TB. And in the lab, SAP have a 200-node 100TB test system which means about 1PB of uncompressed data. Things have moved on!

In addition, SAP have invested in a company called Violin, which uses InfiniBand and SSD storage. This would be an awesome way to get compact scale-out HANA appliances when Intel’s Ivy Bridge server platform, which enables 1TB HANA blades, arrives.

IBM – remains the safe choice

There’s an adage: "no-one ever got fired for buying IBM". I’m sure someone has been, but it’s good marketing. IBM sold by far the most of SAP’s last-generation in-memory appliance, the BWA (formerly BIA), and they currently have the greatest choice of SAP HANA appliances.

I have a total of 9 appliances on my list, from 128GB to 16TB, with various configurations depending on customer requirements. At the time of writing they are also the only vendor to scale out to 16TB – at least for now, IBM remain ahead of the game.

My colleagues also tell me that IBM promise to have any hardware configuration available in as little as 5 days. Certainly they provided a 200-node test system for SAP in less than 2 weeks!

That said – IBM isn’t the cheapest of the vendors, and there are some hidden costs, like the licence for their high-performance GPFS filesystem. But as one CIO told me: "We priced up SAP HANA appliances and the vendors seemed very varied in price. But as we got close to negotiations, the variance evaporated."

HP – Solid Hardware, availability concerns?

Since Meg Whitman took the helm at HP, things seem to have gotten better (though they could hardly have gotten worse!) and they have consolidated their hardware.

HP now have 5 certified appliances from 128GB to 1TB; they also have a scale-out appliance with up to 8TB. The amount of disk hardware required is concerning – 12 disks per node or 192 disks for a 16-node 8TB system.

The concerns I have heard about HP are that they are now very strict on loan hardware and have extremely long lead times. With Hitachi and IBM being aggressive on delivery schedules, this could put them at a disadvantage.

Dell – Services Provider or Channel Partner?

Dell now have 3 certified single-node appliances from 128GB to 512GB. I heard rumours that they have lost interest in SAP HANA and their services page says “ERP in the Cloud”. Certainly I tried to buy a SAP HANA appliance from them and summarily failed. They said:

With regards to SAP HANA, after a significant amount of research throughout Dell, I must advise that we are not in a position to supply these solutions at this time. While we at Dell strive to offer complete solutions to our customers, we will only do so when we have the capability to do so effectively.

In any case, I haven’t seen Dell at any customer sites. Has anyone seen a Dell SAP HANA appliance in the wild? Let me know!

Fujitsu – Dark Horse

I’ve not had too many dealings with Fujitsu, but they have been quick to respond and appear to know what they are doing from a sales-enablement perspective. They have the same 5 appliances as HP, covering the same 128GB to 1TB sizes.

They also have the same scale-out as HP, with the same enormous number of disks – up to 192 for a 16-node system, using either NetApp or Eternus disk fabric.

Cisco – IKEA of HANA appliances?

Cisco have expanded their portfolio and I hear their UCS server business, built around the C260 and C460, is doing well in the enterprise. They now have 128, 256 and 512GB appliances as well as a 16-node 512GB (8TB) scale-out appliance, like HP and IBM.

Their appliance requires even more disks! Up to 300 for the 16-node system. Wow!

Cisco guarantee fast delivery but your HANA appliance will arrive as a bag of parts that need assembling by a Cisco Services Partner. And then shipping to you.

Hitachi Data Systems

Hitachi used to call their appliance the Blade System 2000 or BS2000 for short. Thankfully they had the common sense to rename it the Compute Blade 2000 and it is available as a blade chassis from 256GB to 1TB, using their AMS2000 shared storage.

Theoretically this should allow them to build out a neat scale-out solution using their HDS storage arrays and 2TB per blade chassis but this has not been released yet.

One thing that is worth noting with Hitachi is that they have SAP HANA hardware on the shelf in standard configurations and have a promised ship time of 2-3 weeks.

NEC – New kid in town

NEC have arrived with a single appliance – a 1TB system using Virident SSDs instead of Fusion-io. It is bigger than all the other vendors’ systems at a massive 7U, and can accept 2TB of RAM (which has no value for HANA). I’m guessing NEC have plans to certify more hardware, but I have not seen one in the wild.

Conclusions

As I predicted, the hardware market has increased in volume and consolidated, and other vendors have indeed come on board. This will continue through 2012.

Scale-out: Scale-out is now a reality and there are systems running IBM’s X5 platform up to 200x512GB nodes or 100TB. The concern I have is that without IBM’s proprietary GPFS technology, a lot of shared storage is required to make HANA work. Can HP and others prove large scale-out capability?

Blades: Let’s face it – SAP HANA was meant to run on blades. But there’s no suitable blade platform yet, because you can’t get 8 CPUs and 2TB RAM in a single blade. Plus, you are even more limited from an expansion (e.g. Fusion-io cards) and network bandwidth perspective if you use a blade chassis. It now looks as though, when Intel’s Ivy Bridge platform arrives in late 2012, the hardware vendors will have high-density systems designed to run SAP HANA.

But to conclude, the SAP HANA hardware business has come a long way in the last 9 months. If it continues to scale at this rate, Teradata had better be concerned.

SAP HANA and the death of the mainframe

Only ten years ago, if you wanted to store a lot of data, you went and talked to one of a small number of vendors that sold equipment that could support it. You paid a handsome sum of money to Digital (Compaq), HP, IBM, Sun or perhaps Silicon Graphics, Fujitsu or Sequent, and they gave you a massive computer which you hooked into a power station.

Then you went to talk to Oracle, Informix or IBM and they sold you some expensive database software so you could make it work. A big database licence on one of these systems could cost in excess of $10m.

The noughties: the decade of consolidation & commoditisation

During the first decade of the 21st century, this market consolidated. HP bought Compaq and with it the Digital UNIX systems, which it subsequently butchered. SGI bought MIPS and eventually stopped making its own CPUs, while HP moved to CPUs based on the Intel Itanium platform. Oracle bought Sun. SGI stopped making expensive custom UNIX hardware – they now make big systems based on Intel x86 – and Fujitsu make supercomputers based on the SPARC architecture they share with Oracle. IBM bought Informix and Sequent.

If that confused you, don’t worry. You can still buy a mainframe from Oracle (Sun), HP and IBM. And database software from IBM or Oracle. It’s worth bearing in mind that HP’s platform is crap and the lack of a sensible roadmap (and lack of Oracle support) suggests that it is being sunsetted. To be honest, Itanium (dubbed Itanic) was a lame duck that never got off the ground. Sure, HP is suing Oracle over this, but that’s just a charade to make customers feel better.

The rise of Wintel/Linux/Virtualization dominance

Three additional dimensions have emerged. First, Intel’s x86 range of CPUs has become much, much faster, and for most purposes a single pizza-box system can serve 99% of computing needs – even for a multi-terabyte database.

Second, Linux has emerged as a stable and more-or-less free operating system (in many enterprise scenarios you pay for support from one of the large vendors like Red Hat or SUSE). It runs on just about anything, from your cellphone to very large systems.

Third, we have achieved massive consolidation of equipment using virtualization software like VMware. This works because most systems sat there idle, so you can take 50 or 100 of them and put them on one physical piece of equipment. It isn’t that relevant to this discussion, though, because we’re talking about large-scale systems, which VMware doesn’t help with.

What’s more, for commodity applications, Microsoft’s Windows and SQL Server database are pretty attractive. They’re cheap and easy to use for mid-size data volumes, which isn’t really the focus of this article. Microsoft would argue that they compete with DB2 and Oracle, but this isn’t the case for databases >10TB: IBM and Oracle still rule the roost.

The stop-gap: Teradata and Exadata

First Teradata, and now Oracle with Exadata, have built custom hardware based around largely commodity components. Both are based on the Intel x86 platform and the Linux operating system, and both are built on largely the same premise: make everything parallel.

But Teradata and Exadata only benefit customers from a performance perspective. Despite using commodity components, they are extremely expensive and profitable for Teradata and Oracle respectively. Plus you are tied into their platform and they will come knocking for their maintenance dollars.

For my money, both these technologies represent an opportunity in the market for their vendors, but they are not the end-game.

Where SAP HANA fits in

SAP HANA, and other solutions like Hadoop, are truly fascinating. SAP HANA runs on high-end commodity hardware and, provided you have enough memory to hold your database, it doesn’t matter what you run it on from a technology standpoint.

Initial certified hardware solutions are still quite expensive – I priced up a 1TB Dell system for $75k – not including disk storage, which will probably double that number at retail price, so think $150k. But really that’s nothing compared to what an equivalent 10TB database cost 10 years ago (SAP HANA compresses 10:1 compared to databases back then).

And worryingly for Oracle, Teradata, IBM and HP, it is nothing compared to what Mainframe or Teradata/Exadata hardware costs.

Where does this leave us today?

In the short term we have IBM posting higher profits from mainframe sales, HP in all kinds of organizational trouble, and Oracle focussing on Exadata. This figures, as people will keep buying mainframes to stay in support. This market won’t die out for many years, but it will start to tail off by 2015.

The decline of the mainframe

IBM at a global level don’t care, because they already know their hardware business is in terminal decline. They sold off their PC business to Lenovo, and their services business is growing at a sufficient rate that it’s all good. Although I’m not sure the SVP of Hardware at IBM feels the same way – you can bet his targets aren’t decreasing year on year. Besides, IBM have the best and dominant mainframe platform, for various reasons.

Teradata should know by now that they’re screwed long-term, and HP have bigger problems, like how to stay alive. Mainframes aren’t a focus for them.

Oracle remain arrogant (plus ça change!) – and in reality may innovate fast enough for this not to be a problem for them. They have an in-memory database called TimesTen, which will no doubt be developed to compete with SAP HANA. And Oracle have traditionally run on the kitchen sink, including hardware from most of the above vendors. I suspect they will move away from the appliance game again over a period of time.

Attempting to balance consulting with health – Part 2: The Calorie Deficit

In the first part, Attempting to balance consulting with health – Part 1: Self-Awareness, I described how we need to become aware of what we’re doing, and the effect it has on us, before we can really make a change. If you’re in that place then read on – if not, go back to the first part and think again :-)

As I said before, at the beginning of 2011 I found myself at 220lb and feeling pretty overweight. Knowing that I needed to lose 40lb to look really great felt like a long journey ahead. For now, if you’re in that position, I think it’s safe to ignore the size of the task.

Because in the end, if you lose just a little weight, you will start to feel good about yourself. Shirts that are a little looser, belts a little tighter – all those things will make you feel great. But there are dangers ahead.

The Calorie Deficit

The human body is a bit like a bank. We consume food and we place a deposit. And we spend from our deposits a little every day, and a lot more if we exercise. If we deposit more than we spend, we get fat.

And so it goes – if you can adjust your lifestyle so you spend more than you deposit, then you will lose weight. At my height and age I needed around 2,500 calories a day (with no exercise) to maintain my weight. If I create a deficit of 500 calories a day – by either reducing food or increasing exercise – then I will lose roughly 10kg over 6 months.
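
The arithmetic behind that, using the common rule of thumb of roughly 7,700 calories per kilogram of body fat (my assumption, not a medical claim):

```python
# Sanity check of the deficit arithmetic above. The 7,700 kcal/kg figure is a
# widely used rule of thumb for body fat, assumed here for illustration.

DAILY_DEFICIT_KCAL = 500
DAYS = 182                  # roughly six months
KCAL_PER_KG_FAT = 7700      # assumed conversion factor

loss_kg = DAILY_DEFICIT_KCAL * DAYS / KCAL_PER_KG_FAT
print(f"~{loss_kg:.1f} kg over six months")   # ~11.8 kg, so 10kg is if anything conservative
```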

The best thing is that 500 calories a day is easily saved. A bottle of wine, for example, is 500 calories. Or 2 pints of Stella. Or a chocolate bar and a can of coke. The important thing is not to cut out the stuff that matters – and by that I mean the stuff you really enjoy. Cut out a few things that you can do without.

Exercise – your way

The other half for me was exercise. Another 250 calories a day can easily come from 30 minutes a day of light exercise – or a 40 minute brisk walk. But it has to be whatever makes you happy.

What makes me happy is running – I really enjoy it. Some days I will run a mile, and others I might run 10. At this stage I did whatever I felt like, and it was just fine. The best thing about running is that your running kit can travel wherever you do – important with consulting.

If you don’t like running then there’s the hotel gym, just plain walking or perhaps putting a bike in the back of your car, if you travel by car. But whatever you do, make sure it’s something you enjoy, or it will become a chore.

Eat – but without guilt

Once you’re in deficit overall, don’t worry about the details. Try to eat what you want, because if you don’t, you will binge. Just worry about the overall situation – and if you get the urge to have a chocolate bar or a bag of chips, just run with it.

You ate all that stuff from time to time before, and the overall effect is negligible compared to the changes above. However if you start feeling bad about it you will then go on a big calorie binge at some stage, and that is a bad thing!

The most important thing is not to feel guilt: what is important is the overall loss and not whether you follow some strict regime.

Making time

This is probably the big one, and it’s so personal. It’s so easy just to go back to the hotel after the long day of work and head to the bar. For a quick drink of course. And then after a couple of drinks and a meal to check your emails and flop into bed. The morning after, the snooze button is hit repeatedly of course – before a quick shower and heading into the office after the fried eggs at breakfast.

It’s not exactly a virtuous circle and breaking it is really tough. But if you have become self-aware you might be willing to try to do something about it.

For me, it was the travel which helped – being away from home meant that there was little to do in the evenings, and I started to use that to my advantage. At the beginning I ran – a few miles here and there.

And at the weekends, I would take off for a long run – 7-10 miles in the early morning one day. I ran the London Marathon in 2009 so I’m ok with distance, but please, if you do the same – work up to distance running slowly.

The thing to bear in mind is that it’s a simple choice and an easy one. Run rather than have a drink at the bar?

4 Months later

So for me – 4 months later, and even with this not-particularly-regular regime, I had lost some 20lb. I was below 200lb, or 90kg, and I felt a lot better. The jeans were looser and I was lighter on my feet. I fitted into suits I had never fitted into before, and jeans I hadn’t worn in 4 years.

The slight paradox, though, is that for each pound I lost, I became more aware of the pounds that were left – and of the lack of tone. I’m guessing you may feel the same. And at this point the summer was looming and I wanted to look good. But that’s for another time.