Blistering IOPS at a sensible price

Storage has come on in leaps and bounds over the last few years, especially now that ‘all-flash arrays’, virtualised storage and software-defined storage solutions are being pushed. This is all fuelled by the demand for more input/output operations per second (IOPS), scalability, service automation and increasing capacity.

The All-Flash Array

‘All-flash arrays’ are a great way of maximising IOPS, but as with all performance there’s a price attached: the cost per GB of storage rises significantly.

You will find all-flash arrays capable of 300,000 IOPS and upwards, sustained across reads and writes with latency below one millisecond. But buying an all-flash array and dropping it into your environment may not deliver the performance you expect. You need to look at the infrastructure servicing the storage network: there is no point in putting in the best flash storage array on the market if the supporting network bandwidth and host HBAs aren’t performant enough to make use of it.

So in short, all-flash arrays can deliver outstanding performance for your business, but they require significant buy-in to upgrade storage networks, and that’s before you include the cost of the array itself.

The Hybrid Array

Dot Hill has helped the SMB/SME market in this area by bringing it high-performance storage at a reasonable price. Enter the Dot Hill 4004 series of SANs with its real-time tiering feature. The tiering firmware is the code from the more expensive Pro 5000 series, adapted to run on the 4004 series of SANs.

What this storage array achieves is that it allows us to forget about the expensive all-flash array and instead use real-time tiering to get the performance we require. Real-time tiering is a technology that monitors blocks on your disks and moves them to faster disk as and when required. The difference between Dot Hill’s solution and the competition is that it actually moves data in real time. Many competitors scan storage usage on a set schedule, such as every four hours, and only then move the data. The risk is that you have missed the point at which the data was hot and needed faster I/O, and the user has already experienced poor performance. With Dot Hill’s real-time tiering option, data gets access to fast disk when it needs it, so you meet SLAs on application performance.
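To make the concept concrete, here is a minimal sketch of the idea in Python. This is not Dot Hill’s firmware; the threshold, tier names and window handling are assumptions purely for illustration. The point is that promotion happens on the I/O path itself rather than waiting for a four-hourly scan.

```python
# Conceptual sketch of real-time tiering (illustrative only).
# The access threshold and tier names are made-up values for the example.
from collections import defaultdict

HOT_THRESHOLD = 100  # accesses per monitoring window (hypothetical figure)

class TieringPool:
    def __init__(self):
        self.tier = defaultdict(lambda: "nl-sas")   # block -> current tier
        self.access_count = defaultdict(int)        # block -> recent accesses

    def record_io(self, block):
        """Called on every read/write. Promotion happens immediately,
        which is what distinguishes real-time tiering from a scheduled scan."""
        self.access_count[block] += 1
        if self.access_count[block] >= HOT_THRESHOLD and self.tier[block] != "ssd":
            self.tier[block] = "ssd"   # promote the hot block straight away

    def end_of_window(self):
        """Demote blocks that have gone cold, then reset the counters."""
        for block, count in self.access_count.items():
            if count < HOT_THRESHOLD and self.tier[block] == "ssd":
                self.tier[block] = "nl-sas"
        self.access_count.clear()
```

A scheduled tiering product would, in effect, only run the promotion logic at the end of each window, which is exactly when the burst of hot I/O may already be over.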

Dot Hill offers a range of configurations based on budget, including the size of the shelf, autonomic tiering options, controller port speeds, and replication and snapshot licences. This is all in a modular format, so you only buy the options you require and are not forced to purchase, for example, replication licences when you don’t use replication on your storage arrays.

The Business Requirement

A scenario of how this solution can be tailored is as follows:

I am an IT Manager looking to upgrade a Windows Server 2003 file server to Windows Server 2012 R2, hosting 5TB of storage and growing daily.

I also have a database server that hosts many databases for my company applications. Recent performance issues have shown that this requires higher I/O.

As always, I need to do this on a tight budget but still deliver a solution that matches the scope set out by the business. Part of the budget is for storage to serve databases and file data. In order to spec a cost-effective storage option I would suggest these actions (a rough sizing sketch follows the list):

  • Assess my backend storage network, in this case running 8Gb fibre
  • Assess my hosts’ HBAs, in this case already utilising dual 8Gb fibre
  • Look at storage requirements, existing plus estimated growth, allowing for three years
  • Note that database peak periods can see I/O bursting above 100,000 IOPS
  • Establish the percentage of reads vs writes on all systems
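As a rough sizing sketch, the sums look something like this. The 20% annual growth rate and 8KB average I/O size are assumed figures for illustration, not part of the scenario; substitute your own measured numbers.

```python
# Back-of-the-envelope sizing for the scenario above (assumed figures marked).

current_capacity_tb = 5.0
annual_growth = 0.20          # assumed growth rate
years = 3

required_capacity_tb = current_capacity_tb * (1 + annual_growth) ** years
print(f"Capacity to provision: {required_capacity_tb:.1f} TB")

# Sanity-check the 8Gb FC fabric against the 100,000 IOPS burst.
peak_iops = 100_000
avg_io_kb = 8                                  # assumed average I/O size
throughput_mb_s = peak_iops * avg_io_kb / 1024
fc_8gb_mb_s = 800                              # roughly 800 MB/s usable per 8Gb FC link
links_needed = throughput_mb_s / fc_8gb_mb_s
print(f"Peak throughput: {throughput_mb_s:.0f} MB/s "
      f"(~{links_needed:.1f} x 8Gb FC links)")
```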

The Solution

So to achieve this I would mix SAS, NL-SAS and SSD disk in one array and create two tiering pools. A tiering pool is a group of base disks, tiered by disk speed, which is then presented as usable capacity for volumes to reside on and tier between as required in real time (see Fig. 1).

So for the file servers in Pool A, I would create two tiers: cheaper near-line SAS disk and faster SAS disk. The near-line SAS tier would make up roughly 85% of the required storage and the SAS tier the remaining 15%.

What we have achieved here is to use slower, cheaper disks for file data that is rarely accessed, while providing faster disks to cope with hot data, such as user profiles during login.

Now for the database servers in Pool B: here we would build an array of SAS disk with a header tier of 400GB SSD storage. Again this achieves highly performant I/O for the database server; at times of bursting, the hot blocks move to the SSD, guaranteeing performance. Here I am achieving flash-array performance at a fraction of the cost.
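To put numbers on the split, a quick worked example. The Pool A capacity carries over from the growth estimate above; the Pool B SAS capacity is an assumed figure purely for illustration.

```python
# Worked tier split for the two pools (illustrative figures).
pool_a_capacity_tb = 8.6                       # from the growth estimate above
pool_a_nl_sas_tb = pool_a_capacity_tb * 0.85   # bulk, rarely-accessed file data
pool_a_sas_tb = pool_a_capacity_tb * 0.15      # hot-data header tier
print(f"Pool A: {pool_a_nl_sas_tb:.1f} TB NL-SAS + {pool_a_sas_tb:.1f} TB SAS")

# Pool B: the SAS base capacity is an assumed figure for the databases,
# topped with the 400GB SSD header described above.
pool_b_sas_tb = 2.0
pool_b_ssd_gb = 400
print(f"Pool B: {pool_b_sas_tb:.1f} TB SAS + {pool_b_ssd_gb} GB SSD header")
```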

Fig. 1: Tiered storage diagram

Conclusion

By utilising hybrid arrays with real-time tiering we can provide I/O when required by our applications and file data.

To achieve this we tier our arrays, with a higher percentage of the storage being slower, cheaper disk and a header tier utilising faster, more expensive disk. The reason for this is that studies show 90% of data written is never accessed again, so building large flash arrays for this type of data is not cost-effective. That’s not to say all-flash arrays don’t have their place (indeed they do), but it’s knowing when and where to use them.

Dot Hill’s real-time tiering is a breakthrough for those small and mid-market firms who still require those heavy-hitting applications and virtualisation platforms. They can now be supported on a storage platform that won’t break the bank.

Dan Mulliss

Cloud - Is a bigger provider better?

When choosing cloud, is a big name always best? Over recent years we’ve seen a number of significant outages at a good number of the larger cloud providers and platforms out there. Some have been just blips and some outages have lasted days. And to prove it’s not just small providers facing downtime issues, here’s evidence that even the big players sometimes stumble:

The above is, of course, a small snapshot of the biggest names. But what I’m showing here is a simple demonstration that bigger doesn’t necessarily mean better – and you shouldn’t ignore that fact.

A big name doesn’t mean better service

I am forever frustrated by those who go out and buy a core business service from businesses that, in effect, sell primarily on cost, backed up by their size. I’ve lost count of how many adverts I’ve seen and sales calls I’ve had (clearly not reading my website) telling me how great their cloud platform is, how big it is, how cheap it is, often how new it is‽ So, through experience, what have I learnt?

1. You are probably a drop in the ocean

Bigger providers simply mean that you are one of perhaps 10,000, 100,000, 1,000,000 or more customers. If there are issues, which we’ve seen, your voice is lost and your cry means nothing. You get what you pay for when it comes to IT infrastructure and cloud services – certainly if you are comparing providers in a sensible manner. If something is cheap, you have to seriously understand where that saving has come from. To hear “we’re bigger, thus we get better pricing due to volumes” is not usually a sensible answer – scratch beneath that.

2. Unless a cloud service provider has been providing the services you are buying for more than 3 years, be careful

It does take a number of years to bed in a new environment and iron out issues, no matter what the size of the organisation. Also understand whether it’s actually that business providing and managing the service – in essence, whether they have some control if something goes wrong. I was approached by a very large hosting company the other day that had moved into IaaS, but the platform was built by a 3rd party – a month later I saw they had a 6-hour outage on that new platform.

3. SLAs mean nothing

Don’t think that an SLA on paper means anything. What’s your comeback on a Service Level Agreement? Very little, if you really look into it. Will you speak to someone who really has the clout to make anything happen on your behalf? Very unlikely. The bigger the organisation delivering your service, the less important you really are.

4. Resource demand

I was speaking to one of the largest cloud providers in the UK, who were advertising fast or slow storage. In effect, this meant they were selling SAS storage and SATA storage. That distinction on its own is largely irrelevant, as the speed is also dictated by the RAID set and the number of clients sharing a particular RAID set. If you don’t know what IOPS are, read my explanation on the QuoStar Blog. In short, the individual speed of a disk is generally irrelevant.
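As a rough illustration of why, here is the standard rule-of-thumb write-penalty calculation, with made-up disk counts and read/write ratios. The same disks give very different host-visible IOPS depending on the RAID level and the workload mix, which is why the disk type alone tells you little.

```python
# Rule-of-thumb effective IOPS for a RAID set (illustrative figures only).
# Typical write penalties: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6.

def effective_iops(disks, iops_per_disk, read_ratio, write_penalty):
    """Approximate host-visible IOPS once the RAID write penalty is applied."""
    raw = disks * iops_per_disk
    write_ratio = 1 - read_ratio
    return raw / (read_ratio + write_ratio * write_penalty)

# Identical SAS disks, very different results depending on the RAID layout.
print(effective_iops(disks=12, iops_per_disk=175, read_ratio=0.7, write_penalty=2))  # RAID 10
print(effective_iops(disks=12, iops_per_disk=175, read_ratio=0.7, write_penalty=4))  # RAID 5
```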

5. Understand the billing

When you take into account the resource demand, many providers can deal with it – but they will bill you accordingly. Just make sure that you understand whether you are better off on a fixed service rather than an elastic one, which is often advisable for fairly static loads such as an Exchange server or a Citrix server. Also understand what they bill you for data going in and out of your environment, in terms of both disk I/O and network bandwidth. You’ll often sign up on a low basic package that starts to ramp up as you use it.
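As a toy example of the kind of comparison worth doing (all rates here are hypothetical, not any provider’s actual pricing):

```python
# Toy comparison of fixed vs elastic billing (all rates are hypothetical).
fixed_monthly = 400.0                     # flat fee for a fixed-size service

elastic_vm_hourly = 0.60                  # hypothetical per-hour VM rate
egress_per_gb = 0.08                      # hypothetical charge for outbound data
hours_per_month = 730
egress_gb = 500                           # e.g. backups / user downloads

elastic_monthly = elastic_vm_hourly * hours_per_month + egress_per_gb * egress_gb
print(f"Fixed:   £{fixed_monthly:.2f}/month")
print(f"Elastic: £{elastic_monthly:.2f}/month")
# For a static workload that never scales down, the per-hour and egress
# charges of the elastic option can easily overtake the fixed price.
```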

The issues with smaller cloud providers

I’ve obviously learnt a lot more, but the five points above are particularly relevant when looking at a large provider. I should state that I’m not necessarily saying you should always go for a smaller provider, as I’ve seen a whole raft of issues at that end of the spectrum too, including problems like:

1. They often outsource

They may have no control over your systems, as they just make a sales margin on stuffing your systems into a larger provider – often one of the large providers mentioned above.

2. They don’t have the migration skills

Sure, the cloud is great for a large percentage of businesses, but the technical side is the easy part. You have to have a significant amount of experience to really make cloud a success; not everything works without some real understanding of the issues and the relevant mitigation techniques. This can only really be gained through experience, no matter how good the technical skills.

3. Cloud’s not right for every environment and business

Many of the smaller providers – and, in effect, the larger ones too – will just push cloud because it’s all they know, it’s what they know how to sell and implement, or it’s simply what the sales team is incentivised on.

4. Underspecification

Many smaller providers, and actually most of the larger organisations as well, will just specify what you ask for, or make a judgement call on what they deem you need. You may not really understand all the implications, so you sign a contract based on that initial quote. You’ll often find that you are then locked into the contract, and when you complain that the speed is slow they’ll simply say you need to pay more to up the resources and address the issue.

5. A home-built system

The cloud is big business, so many smaller providers have built their own systems over the last few years. Many will have environments that are sound, but many will not. You need to understand at the very least:

  • How old is the cloud platform (how long they have been using it to deliver business-class services)?
  • Who runs that system? Is it them or is it managed/supported by a 3rd party?
  • Who built the system? Did they build it, or did a 3rd party?
  • Do they have accreditation? They’ll tell you the data centre is ISO 27001 compliant (audited from a security-systems standpoint), but is the company actually managing your systems accredited too? If they aren’t, it doesn’t really mean anything.
  • Is the connectivity diverse (numerous carriers)? If it isn’t, then walk away. Also, make sure they have at least N+1 redundancy (one plus a spare) on every element of their cloud platform.
  • Know where your services are running from. I’ve actually seen some providers running ‘cloud services’ from their offices!

6. Going too local

If you are evaluating the cloud then chances are that a portion of that decision will be about business continuity – either you’ll specify that or the smaller provider will state the benefits to you. As a rule, I’d generally advise a data centre out of your area, perhaps at least 25 miles away; you don’t want a serious, large-scale event to affect you and the data centre simultaneously.

All the above are things I’ve seen time and time again. Of course, I’ve just given you a small snapshot of a few key areas, but you need to understand the detail. It all depends on your business, but just understand that the big names are not always the best option.