
Claypole's World - The SQL Server Side

Concerning development of bespoke applications and database administration.

  • Denali : Using EXECUTE WITH RESULT SETS (example using SSIS Source Adaptors)

    One of my pet peeves about stored procedures is the lack of clarity they provide a user on the data types of returned columns.  This is especially applicable when a returned column has been calculated. Consider a statement such as SELECT 10.0/5. It’s not obvious what the data type of the result is. Now consider the fact that, as the SQL developer, I might decide to change the statement to SELECT 10/5.

    As I am sure you know, the first returns a decimal and the second an int.  The interface therefore is inherently weak; there is nothing in the stored procedure programming paradigm that hardens that interface. Given that the result set of a stored procedure is actually a public interface, an external contract that could potentially be consumed by any number of data consumers, I’ve often thought of this as a bit of a problem.  The only way to work around this lack of hardening is to place an explicit cast on each column to guarantee that, regardless of changes to the column, the data type remains consistent, i.e. SELECT CAST(10/5 as decimal(8,6)). Note though that this isn’t a complete solution; it doesn’t enable us to say anything about nullability.
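    The type drift is easy to see for yourself; a quick sketch (the column alias is mine):

    ```sql
    -- The result type follows the literals used in the expression:
    SELECT 10.0/5 AS Result;                     -- decimal
    SELECT 10/5   AS Result;                     -- int

    -- An explicit cast pins the type regardless of later edits to the expression:
    SELECT CAST(10/5 AS decimal(8,6)) AS Result; -- always decimal(8,6)
    ```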

    This has now changed in Denali.  Denali offers an extension to the EXECUTE statement called WITH RESULT SETS.

    I won’t bore you with a re-iteration of Books Online; however, that article is available here.

    A modified example (the Books Online version returns an error) from the CTP1 version of Books Online is below.

    USE AdventureWorks2008R2;
    GO
    
    EXEC uspGetEmployeeManagers 16
    WITH RESULT SETS
    ( 
       ([Reporting Level] int NOT NULL,
        [ID of Employee] int NOT NULL,
        [Employee First Name] nvarchar(75) NOT NULL,
        [Employee Last Name] nvarchar(150) NOT NULL,
        [Employee ID of Manager] nvarchar(max)  NOT NULL,
        [Manager First Name] nvarchar(75) NOT NULL,
        [Manager Last Name] nvarchar(150) NOT NULL )
    );
    N.B. I have modified the data types from the actual values for illustration purposes.

    What can we see from this?

    1. The column names have been changed
    2. The data types have been set
    3. Nullability can be enforced

    A good step forward I think you’ll agree. This is especially handy in SSIS as I can use this much friendlier syntax when creating queries against data sources.

    All you need to do is drop in the query above:

    [Screenshot: the query dropped into the SSIS source editor]

    and either press Preview or check the Columns page (below) to see the column changes

    [Screenshot: the Columns page showing the column changes]

    The advanced editor shows us the data type changes

    [Screenshot: the Advanced Editor showing the data type changes]

    That’s very cool, I think. However, this hasn’t resolved the stored procedure problem. After all, I can still call this:

    EXEC uspGetEmployeeManagers 16

    and get the original data types.
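    Incidentally, Denali also ships a new system procedure, sys.sp_describe_first_result_set, which will report the default shape of a procedure's first result set; a sketch:

    ```sql
    -- Ask the engine what uspGetEmployeeManagers returns by default
    EXEC sys.sp_describe_first_result_set
        @tsql = N'EXEC dbo.uspGetEmployeeManagers 16';
    ```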

    This is still useful, but it does mean one would need to carry around and include the interface definition wherever the stored procedure was used.

    Not a great single version of the truth story there then.

    However, I can also augment the stored procedure to create that interface like so:

    ALTER PROCEDURE [dbo].[uspGetEmployeeManagers]
        @BusinessEntityID [int]
    AS
    
    SET NOCOUNT ON;
    
    EXECUTE sp_executesql 
        N'-- Use recursive query to list out all Employees required for a particular Manager
        WITH [EMP_cte]
            ([BusinessEntityID]
            ,[OrganizationNode]
            ,[FirstName]
            ,[LastName]
            ,[JobTitle]
            ,[RecursionLevel]
            ) -- CTE name and columns
        AS (
            SELECT     e.[BusinessEntityID]
                    ,e.[OrganizationNode]
                    ,p.[FirstName]
                    ,p.[LastName]
                    ,e.[JobTitle]
                    ,0 -- Get the initial Employee
            FROM    [HumanResources].[Employee] e 
            JOIN    [Person].[Person] as p        
            ON        p.[BusinessEntityID] = e.[BusinessEntityID]
            WHERE    e.[BusinessEntityID] = @BusinessEntityID
            UNION ALL
            SELECT     e.[BusinessEntityID]
                    ,e.[OrganizationNode]
                    ,p.[FirstName]
                    ,p.[LastName]
                    ,e.[JobTitle]
                    ,[RecursionLevel] + 1 -- Join recursive member to anchor
            FROM    [HumanResources].[Employee] e 
            JOIN    [EMP_cte]                
            ON        e.[OrganizationNode] = [EMP_cte].[OrganizationNode].GetAncestor(1)
            JOIN    [Person].[Person] p        
            ON        p.[BusinessEntityID] = e.[BusinessEntityID]
        )
        -- Join back to Employee to return the manager name 
        SELECT     [EMP_cte].[RecursionLevel]
                ,[EMP_cte].[BusinessEntityID]
                ,[EMP_cte].[FirstName]
                ,[EMP_cte].[LastName]
                ,[EMP_cte].[OrganizationNode].ToString() AS [OrganizationNode]
                ,p.[FirstName] AS ''ManagerFirstName''
                ,p.[LastName] AS ''ManagerLastName''  -- Outer select from the CTE
        FROM    [EMP_cte] 
        JOIN    [HumanResources].[Employee] e 
        ON        [EMP_cte].[OrganizationNode].GetAncestor(1) = e.[OrganizationNode]
        JOIN    [Person].[Person] p 
        ON        p.[BusinessEntityID] = e.[BusinessEntityID]
        ORDER BY [RecursionLevel]
                ,[EMP_cte].[OrganizationNode].ToString()
        OPTION (MAXRECURSION 25)'
        ,N'@BusinessEntityID [int]'
        ,@BusinessEntityID = @BusinessEntityID
    WITH RESULT SETS
    ( 
       ([Reporting Level] int NOT NULL,
        [ID of Employee] int NOT NULL,
        [Employee First Name] nvarchar(75) NOT NULL,
        [Employee Last Name] nvarchar(150) NOT NULL,
        [Employee ID of Manager] nvarchar(max)  NOT NULL,
        [Manager First Name] nvarchar(75) NOT NULL,
        [Manager Last Name] nvarchar(150) NOT NULL )
    );
    

    Instinctively I like this. I like the separation. I feel like an interface developer: a contract has been included with this proc.

    
    

    The casts are not cluttering up the rest of the syntax and neither are the aliases.

    I am not limited to a single result set either. 
    I can even say that no result will be returned by using the WITH RESULT SETS NONE option.
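    For illustration, a sketch of both (the procedure names here are hypothetical):

    ```sql
    -- A procedure expected to return two result sets:
    EXEC dbo.uspTwoResultSets
    WITH RESULT SETS
    (
        ([Order ID] int NOT NULL),           -- shape of result set 1
        ([Customer Name] nvarchar(100) NULL) -- shape of result set 2
    );

    -- A procedure that must not return any result set at all:
    EXEC dbo.uspDoMaintenance
    WITH RESULT SETS NONE;
    ```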

    However, WITH RESULT SETS will not allow you to return a different number of columns from the actual procedure. Every column must be handled.

    So a request such as this:

    EXEC uspGetEmployeeManagers 16
    WITH RESULT SETS
    ( 
       ([Reporting Level] int NOT NULL,
        --[ID of Employee] int NOT NULL,
        [Employee First Name] nvarchar(75) NOT NULL,
        [Employee Last Name] nvarchar(150) NOT NULL,
        [Employee ID of Manager] nvarchar(max)  NOT NULL,
        [Manager First Name] nvarchar(75) NOT NULL,
        [Manager Last Name] nvarchar(150) NOT NULL )
    );
    

    returns an error like this:

    Msg 11537, Level 16, State 1, Procedure uspGetEmployeeManagers, Line 9
    EXECUTE statement failed because its WITH RESULT SETS clause specified 6 column(s) for result set number 1, but the statement sent 7 column(s) at run time.
    

    I haven’t yet tested this for performance, but a quick check shows no obvious differences between the two query plans.

    This does suggest that the WITH RESULT SETS rules are applied after the fact and that there might be some cost in that translation.

    However, I will save that particular investigation for later.

    Cheers, James
  • Focus on Fast Track : My SQLBits VI Presentation

    For those of you interested in the principles of Fast Track I thought it’d be worth putting out my SQLBits VI presentation slide deck on my blog. It will of course be available from the SQLBits website in due course. In about 6 weeks time that will be the better resource as the session recordings will be available at that time. Only then will you be able to truly relive the JRJ and naked Neo experience. I expect both these resources (the slides and the session recording) will be available from here.

    For now though you’ll have to make do with the slides.  Simon Munro upped the ante with some natty embed from Slide Share. Imitation is still the best form of flattery (even though his slides are way cooler) and so I have gone for something remarkably similar.

    Finally a big thank you to my friend and SQLBits Committee colleague Allan Mitchell for teaming up with me and providing the SSIS demonstrations that formed the second half of this presentation.  He might think I have a slightly perverse interpretation of the word “team” after being subjected to persistent name calling throughout my section of the presentation. Actually I think he rather enjoyed it… Although it’s not often you see Allan struggling for words. :o)  See for yourself when the videos are released on www.sqlbits.com!

    Cheers, JRJ

  • Focus on Fast Track : Understanding the –E Startup Parameter

    One of my focuses at the moment is the work that Microsoft have done with various hardware providers to deliver “Fast Track” – A reference architecture for SQL Server Data Warehousing that delivers predictable blistering performance on prescribed commodity hardware. Sounds great I think you’ll agree.  What is even better is that the architecture comes with a series of best practices on how to make the most of your Fast Track system. 

    As you might expect there are some SQL Server settings you need to change.  However, I am impressed by the fact that the team have gone further than this.  There is also a focus on the build of your solution and how you should load data into a Fast Track Data Warehouse.

    It’s inspired me to get blogging again and I’ve decided to write a series called “Focus on Fast Track”.  This should follow on nicely for those of you who attended my session at SQLBits VI entitled “Fast Track Foundations : Serious about Sequential I/O”. If you didn’t see this presentation it shouldn’t matter too much – especially as all the sessions are recorded at SQLBits and are made available for free over the internet once all the editing / post production work has been done. 

    Fast Track is an important step for Microsoft. They are helping customers get the most from SQL Server, and not just from the software perspective.  What is being offered is a clear deployment solution for data warehousing.  Tackling the physical deployment challenge with real hardware specifications is a big step forward. I hope it is the first of many.

    In this first post I’d like to talk about one of the recommended settings for SQL Server in a Fast Track system: the use of the –E startup parameter. Actually, it’s very relevant to all SQL Server data warehousing scenarios.  It has also had a chequered past in terms of how it has been documented. I’d therefore like to try and bring a new insight into what this parameter does and also why it is important.

    Life without –E

    Before identifying what –E does or doesn’t do I think it would be helpful to clearly state what happens when –E has not been configured on a SQL Server.

    Imagine a database with a primary and a secondary filegroup. The primary contains a single file and only holds the internal tables etc., and the secondary holds the user data.  The secondary filegroup contains multiple files that are evenly sized and spread across multiple LUNs.  It would look something like the database below.

    CREATE DATABASE [EParam] ON  PRIMARY 
        (NAME = N'EParam'
        ,FILENAME = N'C:\Temp\EParam.mdf' 
        ,SIZE = 51200KB 
        ,MAXSIZE = UNLIMITED
        ,FILEGROWTH = 1024KB 
        ) 
        ,FILEGROUP [Data]  DEFAULT 
        (NAME = N'EParam1'
        ,FILENAME = N'C:\Temp\EParam1.ndf' 
        ,SIZE = 10240KB 
        ,MAXSIZE = UNLIMITED
        ,FILEGROWTH = 1024KB
        )
        ,
        (NAME = N'EParam2'
        ,FILENAME = N'C:\Temp\EParam2.ndf' 
        ,SIZE = 10240KB 
        ,MAXSIZE = UNLIMITED
        ,FILEGROWTH = 1024KB 
        )
        ,
        (NAME = N'EParam3'
        ,FILENAME = N'C:\Temp\EParam3.ndf'
        ,SIZE = 10240KB 
        ,MAXSIZE = UNLIMITED
        ,FILEGROWTH = 1024KB
        )
        , 
        (NAME = N'EParam4'
        ,FILENAME = N'C:\Temp\EParam4.ndf' 
        ,SIZE = 10240KB 
        ,MAXSIZE = UNLIMITED
        ,FILEGROWTH = 1024KB
        )
    LOG ON 
        (NAME = N'EParam_log'
        ,FILENAME = N'C:\Temp\EParam_log.ldf' 
        ,SIZE = 1024KB 
        ,MAXSIZE = 2048GB 
        ,FILEGROWTH = 10%
        )

    When a database has been laid out as above, SQL Server will use a “round robin proportional fill” algorithm to evenly distribute the data.  Every time a table requests an extent, SQL Server satisfies it by allocating an extent from the next file in the filegroup. This ensures that the files fill up evenly. Note that the round robin is not determined at the table level.  We can see this in the following example script.

    Use EParam
    
    Create Table NoET1 (ECol1 char(8000)) ON Data;
    Create Table NoET2 (ECol1 char(8000)) ON Data;
    
    declare @cnt int = 0
    
    While @cnt <8
    BEGIN
        Insert into NoET1
        Values (REPLICATE('X',8000));
        
        Set @cnt = @cnt +1
    END
    GO
    declare @cnt int = 0
    
    While @cnt <8
    BEGIN
        Insert into NoET2
        Values (REPLICATE('X',8000));
        
        Set @cnt = @cnt +1
    END
    GO
    declare @cnt int = 0
    
    While @cnt <8
    BEGIN
        Insert into NoET1
        Values (REPLICATE('X',8000));
        
        Set @cnt = @cnt +1
    END
    GO
    declare @cnt int = 0
    
    While @cnt <8
    BEGIN
        Insert into NoET2
        Values (REPLICATE('X',8000));
        
        Set @cnt = @cnt +1
    END
    --and so on

    I’ll use the excellent Internals Viewer to illustrate what has happened.

    [Screenshot: Internals Viewer showing the extents distributed evenly across the data files]

    Each time a new extent has been required it has been granted from the next data file in the sequence. This has kept the data distribution even.

    If proportional fill operated at the table level then the NoET1 (blue/purple) table would have data in all four files, but it does not.

    The Gaps (Slight Tangent)

    You may be wondering why there are a couple of odd-looking things going on in the picture above.  Let’s look at file 5 as a good example.

    1. Why is the first set of squares empty?
    2. Why are there holes and mixed colours at the start of the allocation process but not at the end?

    [Screenshot: Internals Viewer close-up of file 5]

    At the beginning of every file is a set of pages that define the file: the File Header (5:0), PFS (5:1), GAM (5:2), SGAM (5:3), DCM (5:6) and BCM (5:7) pages. The numbers in brackets are (File Number : Page Number). That takes care of the first eight grey squares; (5:4) and (5:5) are not allocated.

    The reason the next extent has three colours is because it is a mixed extent. It contains one page from NoET1, six pages from NoET2 and an IAM page.

    IAM pages are distributed as needed through the files; each manages the extents for a 4 GB range of a given database file for a given allocation unit.  Each table can have up to three types of allocation unit associated with it: IN_ROW_DATA, LOB_DATA and ROW_OVERFLOW_DATA. All our rows are IN_ROW_DATA. Furthermore, you will get one IAM page for each file on which an allocation unit has extents.

    So in our case let’s quickly look at NoET1. This table only contains IN_ROW_DATA and so is associated with a single allocation unit.  However, it contains data in three files, so we have three IAM pages for this table.
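    If you want to find those IAM pages yourself, the (undocumented but well-known) DBCC IND command lists every page allocated to a table; IAM pages show up with PageType 10:

    ```sql
    USE EParam;
    -- List all pages for NoET1 across every index/allocation unit (-1)
    DBCC IND ('EParam', 'NoET1', -1);
    ```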

    Abbreviation    Meaning
    GAM             Global Allocation Map
    SGAM            Shared Global Allocation Map
    DCM             Differential Change Map
    BCM             Bulk Changed Map
    PFS             Page Free Space

    Mixed extents, GAM, SGAM, DCM, BCM and IAM pages are all covered in Books Online here.

    Where is the Confusion?

    I don’t believe the functionality of –E has been completely documented. There is a subtlety to how –E works that hasn’t, in my view, been adequately described. Even the Fast Track paper references an incorrect/out-of-date KB article.  Let’s just highlight the sources of incorrect info and look at what they say.

    1. Fast Track White Paper
    2. KB Article 329526
    3. My Blog :o(
    4. Data Loading Performance Guide

    Fast Track White Paper

    For the clarity of the post I've copied verbatim what the Fast Track White Paper says:

    “-E should be added to the start-up options. This increases the number of contiguous extents in each file that are allocated to a database table as it grows. This improves sequential disk access. Microsoft Knowledge Base Article 329526 describes the -E option in more detail.”

    What is wrong with this statement? –E does not increase the number of contiguous extents in each file that are allocated to a table.  Maybe it should (an interesting argument) but it does not.

    KB Article 329526

    The source: 329526.

    “The default allocation mechanism allocates one extent per file, using a round robin proportional fill factor algorithm, before it switches to the next file in the filegroup.
    In the SQL Server 64-bit release, you can allocate four consecutive extents per given file, before switching to the next file in the filegroup.”

    So what is wrong with this statement? You do not allocate four consecutive extents per file before switching to the next file in the filegroup; certainly not on SQL Server 2008. The KB article hasn’t been updated to say it covers 2008, but I don’t believe the code changed here. I would ignore this KB article completely.

    My Blog

    The SQL gods may strike me down for this one. Rule no. 1: don’t believe everything you read.  I’d referenced 329526 for a series of posts about enterprise options but clearly hadn’t tested this one properly. Naughty Claypole. I would ignore that post if I were you.

    Data Loading Performance Guide

    The Data Loading Performance Guide says the following about –E :

    “When SQL Server bulks load into a table allocated to this filegroup, extents are allocated in a round robin fashion. In the default configuration, SQL Server allocates one extent to each file before moving to the next in the chain. It is possible to increase this to 64 extents using the –E startup flag.”

    File    Default Extent Allocation    With –E Startup
    0       Extent 1                     Extents 1-64
    1       Extent 2                     Extents 65-128
    0       Extent 3                     Extents 129-192
    1       Extent 4                     Extents 193-256
    etc     etc                          etc

    This is easily the closest to the actual answer. I’d say the wording “It is possible to increase this to 64 extents using the –E startup flag” is slightly ambiguous and doesn’t deal with a small subtlety, which is what I hope to show you in the next section. Suffice it to say that SQL Server does not increase the allocation to 64 extents when –E has been enabled.  Remember, an allocation means a reservation. If a table received 64 extents every time it received an allocation then the total_pages column in sys.allocation_units would increase by 512 pages with each allocation. It does not.
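    You can watch this for yourself by checking total_pages before and after an insert. A sketch against the ETestTable table used in the walkthrough below (the hobt_id join covers in-row data):

    ```sql
    -- total_pages counts reserved pages for the allocation unit, so a
    -- fresh extent allocation shows up as +8 pages whether or not -E is set.
    SELECT  au.type_desc,
            au.total_pages
    FROM    sys.allocation_units AS au
    JOIN    sys.partitions AS p
            ON au.container_id = p.hobt_id
    WHERE   p.[object_id] = OBJECT_ID(N'dbo.ETestTable');
    ```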

    So what does –E do?

    -E changes the rate at which the proportional fill moves between files. The Data Loading Performance Guide alludes to this, especially in the table.  Rather than moving to the next file after a single extent allocation, the movement occurs after 64 extents have been allocated.  These extent allocations do not need to come from a single table: if I were loading two tables in parallel with –E enabled, both tables would consume those 64 extents in a single file before moving on to the next file.

    -E does not alter the fact that SQL Server allocates one extent at a time.

    Therefore, when –E is enabled you will only see total_pages increase by 8 pages (one extent) every time an allocation occurs.

    That sounds like I might be splitting hairs, but I believe the distinction is important to understand. It is especially important for Fast Track, as Fast Track is completely dependent on sequential I/O for its performance. –E is enabled to enhance the sequential scanning performed in a read-ahead, so that more data is contiguously available in a single file.  If two tables that share the same files/filegroup are loaded in parallel then the advantage of –E is removed: the extents will be fragmented, and this will in turn reduce the performance of the read-ahead.

    Now to show that I have learned my lesson. Here comes the proof.

    Proving –E Functionality

    First of all we need to enable the –E startup parameter.  You can do this by amending the startup parameters on the SQL Server Service.  You should be using SQL Server Configuration Manager to do this.

    Step 1 : Enable –E Startup Parameter

    I have actually documented setting the –E startup parameter in another post. There are a couple of pitfalls that I thought were worth bringing out and so if you are unsure about setting Startup Parameters I’d urge you to read it. It includes a step by step guide to setting this correctly (yes you can set it incorrectly).

    However, if you just want to get on with the show, make sure you append exactly ;-E to the end of the list in the Startup Parameters property of the SQL Server service, using SQL Server Configuration Manager.

    Step 2 : Create a multi file database

    Something like this should be fine.

    CREATE DATABASE [EParam] ON  PRIMARY 
        ( NAME = N'EParam'
        , FILENAME = N'C:\Temp\EParam.mdf' 
        , SIZE = 51200KB 
        , MAXSIZE = UNLIMITED
        , FILEGROWTH = 1024KB 
        )
    , FILEGROUP [Data]  DEFAULT 
        ( NAME = N'EParam1'
        , FILENAME = N'C:\Temp\EParam1.ndf' 
        , SIZE = 10240KB 
        , MAXSIZE = UNLIMITED
        , FILEGROWTH = 1024KB 
        )
        , 
        ( NAME = N'EParam2'
        , FILENAME = N'C:\Temp\EParam2.ndf' 
        , SIZE = 10240KB 
        , MAXSIZE = UNLIMITED
        , FILEGROWTH = 1024KB 
        )
        , 
        ( NAME = N'EParam3'
        , FILENAME = N'C:\Temp\EParam3.ndf' 
        , SIZE = 10240KB 
        , MAXSIZE = UNLIMITED
        , FILEGROWTH = 1024KB 
        )
        , 
        ( NAME = N'EParam4'
        , FILENAME = N'C:\Temp\EParam4.ndf' 
        , SIZE = 10240KB 
        , MAXSIZE = UNLIMITED
        , FILEGROWTH = 1024KB 
        )
         LOG ON 
        ( NAME = N'EParam_log'
        , FILENAME = N'C:\Temp\EParam_log.ldf'
        , SIZE = 1024KB 
        , MAXSIZE = 2048GB 
        , FILEGROWTH = 10%
        )
    GO

    Step 3 : Create two tables

    Create Table ETestTable (ECol1 char(8000)) ON Data;
    Create Table ETestTable2 (ECol1 char(8000)) ON Data;

    Internals Viewer shows that the database is still empty. As no data has yet been added nothing has been allocated.

    [Screenshot: Internals Viewer showing the empty database]

    Step 4 : Load in 64 extents of data into ETestTable

    declare @cnt int = 0
    
    While @cnt <504
    BEGIN
        Insert into ETestTable
        Values (REPLICATE('X',8000));
        
        Set @cnt = @cnt +1
    END

    Internals Viewer shows that all 504 records went into a single file; the round robin has not yet kicked in.  We load 504 records because our first 8 page requests will be allocated to mixed extents. The situation gets slightly muddled because one of the pages in the first extent is an IAM page, meaning that we spill over into a second mixed extent; the first TWO extents allocated are therefore mixed extents. The 9th page request allocates a uniform extent, as we can see below. That gives 8 pages in two mixed extents plus 496 pages in 62 uniform extents: 64 extent allocations from the file in total.

    [Screenshot: Internals Viewer showing all allocations in a single file]

    Step 5 : Load another 64 extents of data into ETestTable

    This time we have no issue. We can happily load in 512 records for our 64 extents.

    Internals Viewer shows us that EParam3 has allocated all the extents.

    [Screenshot: Internals Viewer showing EParam3 allocating all the extents]

    Clearly then we can see 64 extents being allocated from one file before the prop fill round robin kicks in.

    However, notice that there is no IAM page in  the EParam3 file. Where is it?

    Well it is actually in the mixed extent on EParam4.

    [Screenshot: Internals Viewer showing the mixed extent on EParam4]

    The two pages I have highlighted in the image above are the IAM Pages.  The first IAM page on the left handles allocations from IN_ROW_DATA for ETestTable in the EParam4 file and the IAM page on the right handles IN_ROW_DATA extent allocations from ETestTable in the EParam3 data file.

    Step 6 : Parallel Load Two Tables

    Step 6a : Load 64 Extents into second table

    First of all, let’s even up proceedings with an initial load into the second table.

    declare @cnt int = 0
    
    While @cnt <512
    BEGIN
        Insert into ETestTable2
        Values (REPLICATE('X',8000));
        
        Set @cnt = @cnt +1
    END
    [Screenshot: Internals Viewer after loading ETestTable2]

    This time we can load 512 pages in our first attempt and get to a full 64 extents, because the previously allocated mixed extents have consumed 5 of our initial 8 page requests. Also note that ETestTable2 needs two IAM pages, as its data has now spread into two files.

    [Screenshot: Internals Viewer showing ETestTable2’s data spread over two files]

    Step 6b : Parallel load ETestTable and ETestTable2 with 32 Extents each

    To simulate the parallel load I’ve inserted a row for each table in the following script and halved the number of loops:

    declare @cnt int = 0
    
    While @cnt <256
    BEGIN
        Insert into ETestTable
        Values (REPLICATE('X',8000));
        Insert into ETestTable2
        Values (REPLICATE('X',8000));    
        Set @cnt = @cnt +1
    END

    What are the results?

    [Screenshot: Internals Viewer showing the interleaved extents]

    We have a striped and perfectly interleaved set of extents. 

    Clearly then, –E does not allocate any more than one extent at a time and does nothing to prevent this situation from occurring.

    For completeness let’s just see where those IAM Pages are:

    [Screenshot: Internals Viewer showing the IAM pages in the mixed extent]

    They are of course in that mixed extent that we used earlier.

    Conclusion

    So there we have it. Clearly –E does not allocate more than one extent at a time. What adding it as a startup parameter changes is the round robin behaviour of the proportional fill algorithm: the round robin does not move to the next file until 64 extents have been allocated from the current file.

    Many thanks to Thomas Kejser for his help and to Henk van der Valk for suggesting Internals Viewer as the best way to showcase this behaviour.

    In my next post in the Focus on Fast Track series I will look at the implications of –E before moving on to look at Sequential I/O and the impact –E has on it.

    Until the next time,

    JRJ

  • Enabling SQL Server Start-up Parameters – A Couple of Pitfalls

    In SQL Server there are a couple of ways to set startup parameters: 

    1. SQL Server Configuration Manager
    2. Command Prompt Net start

    Of these, only one can be relied upon as a permanent option: Configuration Manager.  I have been toying with startup parameters recently as part of my interest in Fast Track Data Warehousing, and I have noticed there are a couple of things you really need to be mindful of when setting these values.

    To illustrate these little gotchas I thought it would be worthwhile stepping through a scenario whilst configuring the –E startup parameter.

    How To : Enable –E

    Below are the steps you need to follow:

    Step 1 : Fire Up SQL Server Configuration Manager

    Start >> All Programs >> Microsoft SQL Server 2008 >> Configuration Tools >> SQL Server Configuration Manager

    [Screenshot: locating SQL Server Configuration Manager in the Start menu]

    Step 2: Right Click on SQL Server Service and Select Advanced Tab

    [Screenshot: the SQL Server service properties, Advanced tab]

    Step 3 : Click on the drop down arrow and update the Start-up Parameter property

    To set the –E parameter, tack ;-E onto the end of the list of startup parameters

    [Screenshot: the Startup Parameters property with ;-E appended]

    Step 4 : Click on Apply and then Clear the warning box

    [Screenshot: the properties dialog with the updated parameter]

    Click Apply

    [Screenshot: the restart warning box]

    Click OK

    Step 5 : Click on the Logon Tab  and Restart SQL Server by clicking on the “Restart” button

    [Screenshot: the Log On tab with the Restart button]

    So What’s the Problem? Where are the gotchas?

    Assuming you followed the steps above EXACTLY, nothing will be wrong and –E will work perfectly well for you.

    However, gotchas do exist.  Look back at Step 3.

    This is where the gotchas are. The Startup Parameter property box is very sensitive.

    • If you put your new parameter at the end of the list you must precede it with a semi-colon
    • If you put your parameter at the beginning of the list you must follow it with a semi-colon (documented, but not clearly)
    • You must NOT have any spaces between the semi-colon and the hyphen.

    The last one is the real killer

    If you put a space between the semi-colon and the hyphen then the parameter will be ignored.

    This works:

    [Screenshot: startup parameters ending ;-E]

    This does not:

    [Screenshot: startup parameters with a space between the semi-colon and -E]

    SQL Server interprets the startup parameter as “space-E” not “–E”. It will therefore not have the desired effect.

    The SQL Server Log

    To compound the matter the SQL Log shows something quite interesting.

    With ;-E

    2010-04-20 09:27:36.91 Server      Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (X64)
        Mar 29 2009 10:11:52
        Copyright (c) 1988-2008 Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: )

    2010-04-20 09:27:36.91 Server      (c) 2005 Microsoft Corporation.
    2010-04-20 09:27:36.91 Server      All rights reserved.
    2010-04-20 09:27:36.91 Server      Server process ID is 1456.
    2010-04-20 09:27:36.91 Server      System Manufacturer: 'Dell Inc.', System Model: 'Latitude D830'.
    2010-04-20 09:27:36.91 Server      Authentication mode is MIXED.
    2010-04-20 09:27:36.91 Server      Logging SQL Server messages in file 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG'.
    2010-04-20 09:27:36.91 Server      This instance of SQL Server last reported using a process ID of 5876 at 20/04/2010 09:27:30 (local) 20/04/2010 08:27:30 (UTC). This is an informational message only; no user action is required.
    2010-04-20 09:27:36.91 Server      Registry startup parameters:
         -d C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\master.mdf
         -e C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG
         -l C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\mastlog.ldf
         -E
    2010-04-20 09:27:36.92 Server      SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.

    Illustration of ;-E

    I will use the great Internals Viewer to demonstrate the impact of setting this parameter.

    [Screenshot: Internals Viewer showing all three extents allocated from a single file]

    In the graphic above the blueish boxes represent data pages that have been allocated from three extents. It can be clearly seen that all extents have been allocated from a single file.

    Without going into the details here (please see my next post to get the juice on –E) the default behaviour of SQL Server has changed. Proportional fill has not occurred.

    With ;space-E

    2010-04-20 09:47:19.86 Server      Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (X64)
        Mar 29 2009 10:11:52
        Copyright (c) 1988-2008 Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: )

    2010-04-20 09:47:19.86 Server      (c) 2005 Microsoft Corporation.
    2010-04-20 09:47:19.86 Server      All rights reserved.
    2010-04-20 09:47:19.86 Server      Server process ID is 1604.
    2010-04-20 09:47:19.86 Server      System Manufacturer: 'Dell Inc.', System Model: 'Latitude D830'.
    2010-04-20 09:47:19.86 Server      Authentication mode is MIXED.
    2010-04-20 09:47:19.86 Server      Logging SQL Server messages in file 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG'.
    2010-04-20 09:47:19.86 Server      This instance of SQL Server last reported using a process ID of 1456 at 20/04/2010 09:47:13 (local) 20/04/2010 08:47:13 (UTC). This is an informational message only; no user action is required.
    2010-04-20 09:47:19.86 Server      Registry startup parameters:
         -d C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\master.mdf
         -e C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG
         -l C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\mastlog.ldf
          -E
    2010-04-20 09:47:19.87 Server      SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.

    Illustration of ; -E (with a space)

    As you can see from the graphic below, the purple boxes representing data pages are now spread across three different data files. Three extents have again been allocated and the proportional fill algorithm has spread this allocation evenly across the files.

    [Image: Internals Viewer showing pages spread evenly across three files]
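The even spread seen above can be thought of as a weighted round-robin. The following toy Python sketch is my own illustration of the idea (it is emphatically not SQL Server's actual allocation code): each new extent goes to the file with the most free space, so equally sized files take turns.

```python
# Toy sketch of proportional fill (an illustration only, not SQL Server's
# actual algorithm): each new extent is allocated from the file with the
# most free space, so files with equal free space take turns.

def allocate_extents(free_pages_per_file, extent_count, extent_pages=8):
    allocations = [0] * len(free_pages_per_file)
    free = list(free_pages_per_file)
    for _ in range(extent_count):
        # pick the file with the most free pages remaining
        target = max(range(len(free)), key=lambda i: free[i])
        allocations[target] += 1
        free[target] -= extent_pages
    return allocations

# Three equally sized files, three extents: one extent lands in each file,
# matching the even spread in the graphic above.
print(allocate_extents([1000, 1000, 1000], 3))  # [1, 1, 1]

# A file with much more free space soaks up the allocations first.
print(allocate_extents([1000, 200], 4))         # [4, 0]
```

This is only a mental model (the real algorithm weights allocations proportionally to free space per file rather than always picking the single fullest file), but it captures why three extents end up one per file when the files are equally sized.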

    So what does the SQL Server log show?

    What’s the difference in the log? Almost nothing!  One could very easily be led to think that the –E startup parameter had been set, as it shows up in both logs. You have to look pretty closely to see that in the second log –E is mis-aligned.

    However, the actual difference in behaviour is significant.

    In the first instance SQL Server behaviour has been altered. In the second the change has no impact on SQL Server.  However, by looking at the log one could be led to believe that –E had in fact been set. Confusing? I think so.
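My guess at what is going on (an assumption on my part – I have no sight of the actual parser) is that the registry value is split on semicolons and each token is matched exactly, so the stray space turns -E into " -E", which matches nothing and is silently dropped. A hypothetical sketch:

```python
# Hypothetical sketch of startup-parameter parsing (assumed behaviour,
# not SQL Server source): tokens are split on ';' and matched exactly,
# so '; -E' yields ' -E', which fails the match and is silently ignored.

KNOWN_FLAGS = {"-E"}  # illustration only; the real set is much larger

def parse_startup_parameters(registry_value):
    recognised, ignored = [], []
    for token in registry_value.split(";"):
        if token in KNOWN_FLAGS or token[:2] in ("-d", "-e", "-l"):
            recognised.append(token)   # exact match: parameter takes effect
        else:
            ignored.append(token)      # no match: echoed in the log, no effect
    return recognised, ignored

print(parse_startup_parameters("-dmaster.mdf;-E"))   # -E is recognised
print(parse_startup_parameters("-dmaster.mdf; -E"))  # ' -E' is ignored
```

Either way the log simply echoes the raw tokens back at you, which is exactly the confusion described above.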

    SQL Server Log Conclusions

    Discovering this issue has led me to an interesting conclusion.

    The SQL Server Log is a statement of intent not a statement of fact.

    I was expecting the log to tell me that –E had not been enabled when I had configured ; -E (with the space). It did not. Instead, it tells me that SQL Server was started with ' -E'.

    The log is therefore spitting out the configured values not the running values (the ones that actually have an effect).

    What would be better is if the log told me three things:

    1. What options were configured as Startup parameters
    2. What options had been set – i.e. the Runtime values
    3. What options weren’t recognised and therefore had been ignored

    I think it’d be much more helpful (and a lot less confusing) if SQL Server did something like this:

    2010-04-20 09:47:19.86 Server      System Manufacturer: 'Dell Inc.', System Model: 'Latitude D830'.
    2010-04-20 09:47:19.86 Server      Authentication mode is MIXED.
    2010-04-20 09:47:19.86 Server      Logging SQL Server messages in file 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG'.
    2010-04-20 09:47:19.86 Server      This instance of SQL Server last reported using a process ID of 1456 at 20/04/2010 09:47:13 (local) 20/04/2010 08:47:13 (UTC). This is an informational message only; no user action is required.
    2010-04-20 09:47:19.86 Server      Configured registry startup parameters:
         -d C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\master.mdf
         -e C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG
         -l C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\mastlog.ldf
          -E

    2010-04-20 09:47:19.86 Server      Running registry startup parameters:
         -d C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\master.mdf
         -e C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG
         -l C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\mastlog.ldf

    2010-04-20 09:47:19.86 Server      Unrecognised Registry startup parameters:
          -E

    It would be even nicer if the log actually explained what these options are. It could even provide a link to the BOL documentation. That’d be useful.

    Do you agree?  I have posted a connect item up. If you think this is worth changing in SQL Server then please vote!

    https://connect.microsoft.com/SQLServer/feedback/details/553439/sql-server-log-and-startup-parameter-logging

    You may have noticed that I haven’t really explained in full what the –E parameter is. That is because I want to cover that in my next post and kick off my series entitled "Focus On Fast Track".  It’s nearly done so don’t worry, you won’t have to wait long!

    Until the next time, James

  • Last Call for SQLSocial London Event 16th March 2010 : Meet Itzik Ben-Gan, Greg Low, Davide Mauri & Bill Vaughn

    Simon Sabin has organised a SQLSocial event for this coming Tuesday. It’s at the Smithfield Tavern and will start at 18:30.

    It features some of the world’s most famous SQL Server celebrities in an informal setting. These guys are over in the UK for DevWeek and have kindly given up an evening to meet you all.

    85 people have already registered – so this is very much the last call for any stragglers who have yet to make up their minds.

    There aren’t many times you get the opportunity to just walk up to the likes of Bill, Greg or Itzik and ask them a question so I’d urge you to take advantage. You might even want to ask Davide about his plans for DTLoggedExec – his free SSIS logging tool that’s available now on codeplex. Failing that you can always ask Simon about what’s going to happen at SQLBits VI.

    The event is free and your first beer is also free so there really is no reason to miss this.

    Head to http://www.sqlsocial.com/Events/10-02-16/Evening_with_Itzik_Ben-Gan_Greg_Low_and_Davide_Mauri.aspx for registration.

    Agenda

    6:30 Welcome, network, eat, post your question topic
    7:00 Small session on something interesting
    7:15 Q&A panel with Itzik, Greg, Davide and Bill
    8:30 just socialise

    Address

    Smithfield Tavern
    105 Charterhouse Street
    London
    EC1M 6HR

    Unfortunately I won’t be able to be there as I am working out in the Dam at the moment. However, I hope you all have fun.

    Cheers, James

    P.S. If your mind goes blank ask about enhancements to the OVER clause… Almost guaranteed to generate a comment from a certain someone :o).

  • PASS Nederland Chapter Meeting 17/11/2009

    For those of you who aren’t aware I am currently working out of the Netherlands. That’s one of the joys of being a consultant for a global organisation – you can literally end up anywhere! However, last night I had the pleasure of speaking to the PASS Chapter here in the Netherlands http://www.sqlpass.nl/. I covered some of the Data Warehousing features in SQL Server 2008. It was a lot of fun and great to speak to a completely new audience for me.  Some people may have become wary of my presentation “style” after seeing me at SQLBits but here I had 30 or so willing victims who had no knowledge of me or my pop quizzes :o).  If you are in the UK and would like to know more about these features I will be presenting this session on Friday 20th November at the SQLBits 2008 & R2 day.  It’s the last session of the day but what else are you going to do? Sit in traffic on the M4? Surely it’ll be less painful than that :o)…

    I’d like to thank everyone who came along and look forward to attending next month’s session. If I’m not greatly mistaken (my Dutch is still a bit basic to say the least) Marcel van der Holst will be presenting.  Personally, I’d come over from the UK for that kind of session. Marcel is THE MAN and has helped me out with my book and also in understanding the curious case of the dubious deadlock and the not so logical lock.

    A couple of things came up last night that people have asked me to post links to that I included in my presentation.

    1. The excellent Data Loading Performance Guide - http://msdn.microsoft.com/en-us/library/dd425070.aspx
    2. Stuart Ozer’s ManagePartition.exe  - http://sqlpartitionmgmt.codeplex.com/

    I will post the slides and the presentation up onto the SQLBits website once I’ve presented it this week...

    Big thanks to André Kamman (not just another SQL DBA) for the invite and to Compu’Train for hosting the event.

    See you all at SQLBits! I think Andrew Fryer still has a few tickets left so if you want to get to SQLBits on the Friday for FREE then send him a mail.

    Cheers, James

    @jrowlandjones

    P.S. Are you interested in becoming a BI consultant for EMC Consulting? If the answer is YES then please feel free to Contact Me and let’s start that discussion…

  • SQLBits V : Friday Ticket Give-away!

    Andrew Fryer, Microsoft IT Pro evangelist extraordinaire, has 25 free tickets to give away for this Friday’s SQLBits SQL Server 2008 and R2 day!  Head over to his blog here to find out how you can claim a ticket and bag yourself a great geeky day out. You are also most welcome to stay over and enjoy the community day on the Saturday although you will need to foot the bill for your own accommodation. There are limits you know :o).

    Look forward to seeing you all there!

    Cheers, James

    @jrowlandjones

  • My new SQL 2008 Internals Book!!!

    I say “my” but really of course I mean “our” :o).  It’s been a fantastic experience and I have got to know some great people and had a lot of fun, stress, heartache and now joy.  Big thanks to the team at Wrox, especially Ami, and of course my fellow authors and contributors, Jonathan Kehayias and Cindy Gross.

    [Image: book cover]

    Not much more to say than it’ll be available in all good bookshops in January but don’t let that stop you getting your pre-orders in now :).

    So in the style of Murray Walker “Buy! Buy! Buy!”

    Cheers, James

    @jrowlandjones

  • Bob Muglia demonstrates 192 core SQL Server 2008 R2 – with record performance

    Today is the beginning of PASS 2009.  I am currently sat in the keynote and have just seen Bob Muglia demonstrate SQL Server 2008 R2 running on 192 cores.

    I think that’s worth sharing. As I am sat next to Simon Sabin I thought I’d share this using pictures rather than bang on for pages of text (he hates it when I do that).

    [Images: photos of the 192-core keynote demo]

    However, if you want to see what can be done with a mere 96 cores, why not check out Henk van der Valk’s awesome work on YouTube here.  You can meet Henk and see 96 cores in action at SQLBits 2009. Henk works in the European Performance Center for Unisys.

    Finally some new benchmarks announced today…

    Check out the new SQL Server 2008 R2 record for TPC-E.  Of interest to me is that R2 is not only the fastest (2,012.77 tpsE as opposed to 1,568.22) but also at a lower Price/tpsE (958.23 USD compared with 1,180.01 USD).

    [Image: TPC-E benchmark results]

    Also of interest on the benchmarking front is the new SQL 2008 TPC-H benchmark.  TPC-H is a data warehousing benchmark. SQL Server 2008 R2 has featured rather well in the price/performance for the 10,000 GB results. Yes, that is SQL Server in the no.1 slot :o).  Whilst the QphH isn’t the same as some of the others, it clearly demonstrates SQL Server scales way beyond common misconceptions.  It is also delivering scale at a much keener TCO.

    [Image: TPC-H 10,000 GB price/performance results]

    I think that’s quite enough for now.  Simon is starting to hurt me (too many words).

    Cheers, James

     

     

  • Get Mugged at #SQLPASS 09

    Tomorrow I am hosting a table at the Birds of a Feather lunch and am following this up with my stint in the Ask The Experts Area (Data Warehousing). Please feel free to come by and say hi.  I know of two people now that have stumbled upon my blog so please don’t be shy. I rarely bite and the UK hasn’t had a case of rabies since November 2002. My aim is to get this up to double figures this week.  By way of inducement, I’ll have plenty of SQL bucks to give out and also should have a stack of SQLBits mugs to distribute too.  You won’t believe the effort we went to to get them at PASS this year.  I am still walking a little awkwardly…

    So if you fancy getting your hands on one of these highly sought-after collector’s items then all you need to do is make sure you are

    1. In Seattle
    2. Attending #sqlpass 09
    3. At the Birds of a feather lunch or
    4. Lurking around the Ask the Experts area

    Please don’t be shy. I’d rather not be sat there like Billy no mates if it’s all the same to you.  Ok starting to sound a bit desperate so let’s have a look at what you could win…

    [Image: the SQLBits mugs]

    I should point out that not all the mugs made it in one piece and so those that survived are now extremely collectable.  Mugs will be offered on a first come, first served basis – until I decide otherwise. One mug per mug/person. When they are gone they are gone. Caution: contents of mug may be hot when filled with contents that are hot. Referee’s (my) decision is final. A SQLBits mug will however make you more attractive to the opposite sex (after all, you now own a collector’s item).  Tests on improved levels of attractiveness to the same sex are on-going. Early tests are encouraging.

    I look forward to meeting you.  I have a lot of mugs.

    Cheers, James

    @jrowlandjones
