
Simon Munro

SQL Data Services Does Not Scale

Scratch deep enough into the available material on SQL Data Services (SDS) and you will uncover limitations of the platform that make it very difficult to scale. It does not scale upwards because of the size limitation of an instance, and it does not scale outwards because the SQL model does not handle partitions very well.

The information has been floating around for a while, going back to when SDS was in its previous guise as the ACE model, and after referencing various old and new materials and videos I came to the conclusion that SDS doesn’t scale. That hit me like a tonne of bricks – the natural assumption when talking about the cloud is scalability, and scalability is, after all, a big buzzword. Scalability is still mentioned in a roundabout way, such as the more recent ‘All of the guarantees around scalability and consistency still apply’, which seems to be PR-approved, non-committal and probably legally correct. The official marketing-speak page on the Azure Services Platform states:

With SDS, you can easily provision and deploy relational database solutions to the cloud, and take advantage of a globally distributed data center that provides enterprise-class availability, scalability, and security with the benefits of built-in data protection, self-healing and disaster recovery.

You see what they did there? They pointed out that the ‘globally distributed data center’ is scalable (and you can take advantage of that), not SDS itself – the inference that SDS is scalable was your own.

Before getting to my argument, let me ask you, dear reader:

Do you think that a SQL database limited to 10GB in size, running on an average-sized machine, can be considered scalable?

Me neither.

On Saturday at the SQLBits conference I presented “Comparing Azure Storage and SQL Data Services” where, in one of the earlier slides, I made the statement that SDS doesn’t scale and asked who agreed, who disagreed and who did not have an opinion. There were a couple of people in agreement and most of the people had no opinion – so I spent the next 45 minutes presenting some material so that they could at least arrive at their own opinion by the end. The talk was largely on the difference between the EAV (Entity Attribute Value) model and the SQL model, BASE, partitioning and Brewer's CAP conjecture. The image below is an excerpt from the presentation (which will soon be available as a video from SQLBits):

[Slide excerpt from the presentation]

It depicts my basic understanding of how SDS works in the fabric:

  1. SDS databases have to run completely within a single instance. You can’t have a database, supporting TDS and ACID as we know it, that spans more than one instance.
  2. Moving data to another instance implies the use of a separate partition, which SQL cannot do (very easily).
  3. The multiple ‘nodes’ within an instance that are managed by the fabric are there for availability – data is copied between the nodes by the fabric in an ACID manner. If the primary node fails, the fabric can route TDS to one of the backup nodes while it sorts out the one that failed.
  4. A fundamental principle of the MS data centre is to provide/use commodity hardware and therefore the size of each node is limited by what would be considered an ‘average server’ specification.
  5. In addition to the limits imposed by the commodity hardware, databases that are too big will utilise too much bandwidth and horsepower to practically keep three copies of the data up and available at any one time.
  6. There is not (yet) any way to specify the size of a node, so you can’t request a bigger ‘machine’ in the data centre, which means you cannot scale an instance up.

The only (and suggested) way of getting scalability out of SDS is to scale out using partitions. Generally this is a really difficult thing for the relational model and SQL to do, as a big part of the SQL model is the ability to have consistent data. Brewer's CAP conjecture implies that SQL, by being Consistent and Available, has to forgo Partition Tolerance – and I believe that this is the case with SDS. The MIX09 presentation “What’s New in Microsoft SQL Services” by Nigel Ellis provides an interesting link to a case study, “Microsoft Exchange Hosted Archive to Store Petabytes with SQL Services Infrastructure”. There have been mentions that Hosted Exchange is a big internal customer of SDS, and the scale-out abilities of this solution have been provided by partitioning the data – in the case of “customer Big”, across 768 partitions in the case study's diagram. You should take a look at the document and see if you can decode the fuzzy diagram to glean some information.

Anyone who has tried to build a solution based on partitioned databases will know that the architecture in the above case study is no simple achievement and takes a lot of work: an entire layer needs to exist just so that the system can identify the correct partition for a given piece of data. In the same MIX09 presentation Nigel Ellis hints at efforts to add some partition support to SDS, which may, through the use of configuration, provide for better querying across partitions (although ACID across partitions may be more difficult).
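
To make that concrete, below is a minimal sketch of what such a routing layer might look like. It is entirely my own illustration in C#; the class name, the hashing scheme and the connection strings are assumptions, not anything that SDS or the case study actually provides:

    using System;

    // Hypothetical partition-routing layer: maps a partition key (a customer id,
    // say) to one of a fixed set of SDS databases. Nothing here is provided by
    // SDS itself - the application has to own and maintain this mapping.
    public class PartitionResolver
    {
        private readonly string[] _connectionStrings;

        public PartitionResolver(string[] connectionStrings)
        {
            _connectionStrings = connectionStrings;
        }

        // Deterministically picks a partition for a given key using a simple,
        // stable hash. Changing the number of partitions changes the mapping,
        // which is one reason re-partitioning data later on is so painful.
        public string GetConnectionString(string partitionKey)
        {
            int hash = 0;
            foreach (char c in partitionKey)
            {
                hash = unchecked(hash * 31 + c);
            }
            int partition = (hash & 0x7FFFFFFF) % _connectionStrings.Length;
            return _connectionStrings[partition];
        }
    }

Every data access call has to resolve a partition before it can even open a connection, and any query that spans partition keys cannot be answered by a single database, which is exactly the extra layer of work the case study architecture has to absorb.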

I think it would be great if the SDS team were a bit more open about the possible limitations of SDS scalability. This question and answer appeared on a recent SDS blog post:

Are we losing some BASE capabilities to grant ACID capabilities?

The whole theory behind BASE (basically available, soft state, eventually consistent) is to gain scalability at the cost of consistency. We have always supported full ACID capabilities in the service and will continue to do so.

I think that it is a valid question and a meaningless answer. The question originates from the assumption, based on the previous ACE model of SDS, that SDS would be more EAV-based than SQL-based – and therefore more BASE than ACID. As it turns out, that is not the case: the SQL model is the chosen approach, which does indeed lose BASE capabilities in order to grant ACID capabilities.

At every opportunity I reiterate my belief in SDS and think that Microsoft has a really good offering – being the only vendor that offers a really good enterprise RDBMS in SQL Server 2008 as well as a cloud offering that can leverage existing database skills, technologies and investments. Microsoft is not (yet) targeting enterprise-scale cloud-only databases (apart from the Hosted Exchange product) but can satisfy a huge chunk of the market by providing a good product for small to medium projects, as well as the ability to combine cloud and on-premise databases using data sync services. We are a long way from your average enterprise wanting to simply move a multi-terabyte database into the cloud – security, regulatory and other issues need to be resolved first.

Since the ability to handle partitions effectively sits outside the (SQL) database, the metadata model for partition information needs to be hosted in another application or service. Although initially this will be a roll-your-own approach, it is highly likely that Microsoft will offer something in its stack. Perhaps the silver bullet for the partitioning problems that SDS faces will come in the shape of the Entity Framework (EF). In theory the EF model could contain enough information about the spread of data across partitions, be they on-premise, in the cloud or across service boundaries – in which case, if developers stick to coding against EF, they will get a degree of scalability built into their architecture for free.
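
As a rough illustration of the kind of metadata such a layer (or, speculatively, an EF model) would have to hold, here is a sketch of a range-based partition map, again in C#. The types and names are my own invention; neither SDS nor EF offers anything like this today:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical partition metadata, held outside the databases themselves.
    // Each entry records which database owns a contiguous range of keys.
    public class PartitionRange
    {
        public int LowKey { get; set; }     // inclusive
        public int HighKey { get; set; }    // inclusive
        public string ConnectionString { get; set; }
    }

    public class PartitionMap
    {
        private readonly List<PartitionRange> _ranges;

        public PartitionMap(IEnumerable<PartitionRange> ranges)
        {
            _ranges = ranges.OrderBy(r => r.LowKey).ToList();
        }

        // Resolves the database that owns a given key. A mapping layer such as
        // EF could, in theory, do this resolution behind the scenes so that
        // application code never sees a connection string.
        public string Resolve(int key)
        {
            PartitionRange range = _ranges.FirstOrDefault(r => key >= r.LowKey && key <= r.HighKey);
            if (range == null)
            {
                throw new InvalidOperationException("No partition owns key " + key);
            }
            return range.ConnectionString;
        }
    }

Whether this metadata ends up in configuration, in a service or in an EF model, the point is the same: knowledge of where the data lives has moved out of the database and into the application stack.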

Until the cool tools emerge, I think the SDS team should be a bit clearer about what they mean by scalability and perhaps put a warning label on the product:


WARNING

Although this product supports scalability by allowing scaled-out partitions, it is unlikely that your current application architecture is compatible with that style, and your developers probably don’t have much experience developing partition-aware or partition-tolerant applications. Please seek professional advice before making incorrect assumptions about the scalability features of this product.

Disclaimer: The opinions formed in this post are based on publicly available information specific to SDS and general architectural principles that apply to cloud architectures. Nothing in this post is, as far as I am aware, covered by an NDA (I have not noticed them on the SDS blog or MIX09 videos), nor has this been discussed with any party that has inside information (I did not get an MVP drunk and ask lots of questions).

Simon Munro

@simonmunro

Published Tuesday, March 31, 2009 5:11 PM by simon.munro


Comments

 

jamie.thomson said:

"so that they cloud at least arrive "

Freudian slip? :)

March 31, 2009 5:40 PM
 

simon.munro said:

Good spot Jamie!  Updated.

March 31, 2009 5:43 PM
 

SQL Server and Cloud Links for the Week | Brent Ozar - SQL Server DBA said:

April 3, 2009 1:05 PM
 

Mike Amundsen said:

well put.

April 3, 2009 6:53 PM
 

RBarryYoung said:

Great stuff, simon.

April 4, 2009 2:45 PM
 

Jason said:

It seems SimpleDB is similar in that it recommends partitioning. I haven't seen any documentation for Google App Engine that says anything about partitioning yet. I'm sure it's there too. Sneaky cloud-marketing bastards.

May 1, 2009 4:59 PM
 

simon.munro said:

Jason,

I would suggest that Amazon's SimpleDB and Google's Bigtable are explicitly recommending partitioning because the intended audience (those that know about EAV storage models) are more mature in their understanding of non-relational storage. Microsoft having 'no comment' on the scalability of SDS does mean that their marketing is getting ahead of real-world architectural implications.

May 1, 2009 5:19 PM
 

Delivery Focus said:

As much as I avidly support cloud technologies and as much as I prefer using SQL Server, I can’t come

May 5, 2009 4:51 PM
