
Steve Wright's Blog (2005 - 2012)

I have now left EMC Consulting. If you wish to continue to receive new content then please subscribe to my new blog here: http://zogamorph.blogspot.com

  • A Business Intelligence project's ALM

    Table of Contents

    1   Introduction
    2   Automation process components
      2.1 The Build
      2.2 The Deployment script
      2.3 Automated deployment and execution
    3   Visual Studio tips
      3.1 Reducing unresolved reference warnings
      3.2 Composite Projects
    4   Resources

    1 Introduction

    As I have mentioned in previous posts, I have been working on a data warehouse project. One of my main roles on this project was to work on the build, deployment and development processes. Within this post, I would like to show how I, with help from my colleagues, was able to automate the build, deployment and execution of the data warehouse code.

    The reason for developing this automation was to try to find problems before we released to user acceptance testing (UAT). Once this automation started running it did capture errors. One error in particular, which would only appear in a release, was missing permissions. Because the developers and the testing team all have different elevated rights, so that they can do their jobs, the problem had been masked.

    2 Automation process components


    2.1 The Build

    To develop our data warehouse solution the project used the Microsoft development tools: Team Foundation Server (TFS) 2010 and Visual Studio (VS) 2010 / 2008. TFS 2010 was used as the source control repository and automated build platform (Team Build). VS 2010 was used to develop our database code, and VS 2008 Business Intelligence Development Studio (BIDS) was used for creating the Integration Services packages and Analysis Services cubes.

    As we were using the above tools I was able to create, using my company's IP, a build that would drop deployable artefacts. This was then wrapped into a TFS build which would be triggered by a number of check-ins. This gave us our continuous build for the project, which was helpful for finding problems like incompatible code between team members and incomplete check-ins with files missing from the change set.

    However, a successful build didn't mean that the build was deployable. An example of this is an unresolved reference warning, which should have been an error rather than a warning. This happened to us: one of our builds had several unresolved reference warnings, and the deployment failed because a view referenced a column within a table that was invalid.


    2.2 The Deployment script

    We used a PowerShell script to manage the process of deploying our databases, SSIS packages and Analysis Services cubes in the correct order. The script, which my colleagues had created, used a configuration file to list the artefacts to deploy, along with how and where to deploy them. The script was driven by a text menu interface, based on the example at the following location: http://mspowershell.blogspot.com/2009/02/cli-menu-in-powershell.html

    This was helpful in reducing the work that we had to do when writing a release process, as all we had to write was: select menu option blah, select menu option blah, and so on. The script helped reduce deployment errors from mistyped commands or artefacts deployed in the wrong order.
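
    As an illustration of the pattern (not our actual script), here is a minimal PowerShell sketch: a configuration file lists the artefacts in deployment order, and a simple text menu drives which of them to deploy. The configuration format, file names and deployment commands are assumptions for the example.

    # Minimal sketch of a configuration-driven deployment menu (illustrative only).
    # deployconfig.csv is assumed to contain: Order,Type,Artefact,Target
    $config = @(Import-Csv "deployconfig.csv" | Sort-Object { [int]$_.Order })

    function Deploy-Artefact($item)
    {
        switch ($item.Type)
        {
            "Database" { & sqlcmd -S $item.Target -i $item.Artefact }                    # database scripts
            "SSIS"     { & dtutil /FILE $item.Artefact /COPY "SQL;$($item.Target)" }     # one way to deploy a package
            "SSAS"     { Write-Host "Deploy cube $($item.Artefact) to $($item.Target)" } # placeholder for the cube deployment
        }
    }

    # Text menu: each numbered option deploys one artefact, A deploys everything in order.
    do
    {
        Write-Host ""
        Write-Host "--- Deployment menu ---"
        $i = 1
        foreach ($item in $config) { Write-Host "$i. $($item.Type): $($item.Artefact)"; $i++ }
        Write-Host "A. Deploy all    Q. Quit"
        $choice = Read-Host "Select an option"

        if ($choice -eq "A") { $config | ForEach-Object { Deploy-Artefact $_ } }
        elseif ($choice -match '^\d+$') { Deploy-Artefact $config[[int]$choice - 1] }
    }
    while ($choice -ne "Q")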


    2.3 Automated deployment and execution

    To create the automated build, deployment and execution process I reused the build and deployment components. The build was refactored and reused within a scheduled trigger that ran out of office hours. The deployment script was extended to support being called by an automation tool. The automation tool was TFSDeployer, a service which can be configured to listen for certain build events and then run a PowerShell script when they are captured. For more details about the application, please use the following link: http://tfsdeployer.codeplex.com/.

    The script, which TFSDeployer would execute when the out-of-hours build completed, was configured to do the following steps (a hedged sketch follows the list):

    • Get the TFS build details.
    • Copy the contents of the build drop location to a deployment area.
    • Deploy the data warehouse solution using the deployment script.
    • If the deployment failed, update the TFS build to failed and the status to broken.
    • If the deployment was successful, execute a SQL Server job, which was configured to run the ETL.
    • If the SQL Server job reported failure, set the TFS build to failed and the status to broken.
    • If the SQL Server job reported success, set the TFS build to successful and the status to ready for testing.
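
    The outline below is a hedged sketch of that script rather than the real one: the parameter names, paths, SQL Agent job name and the way the build details arrive from TFSDeployer are illustrative assumptions, and the TFS 2010 client API is used here to update the build quality (the status update mentioned above is noted in a comment but not shown).

    # Illustrative sketch only - parameter names, paths and the job name are assumptions.
    param
    (
        [string]$TfsCollectionUrl = "http://tfsserver:8080/tfs/DefaultCollection",
        [string]$BuildUri,                                     # assumed to be supplied from TFSDeployer's build event data
        [string]$DropLocation,                                 # the team build drop folder
        [string]$DeploymentArea = "\\deployserver\DeploymentArea",
        [string]$EtlJobName = "Data Warehouse - Nightly ETL"
    )

    # Load the TFS 2010 client API so the build record can be read and updated.
    Add-Type -AssemblyName "Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
    Add-Type -AssemblyName "Microsoft.TeamFoundation.Build.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"

    $collection  = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection([uri]$TfsCollectionUrl)
    $buildServer = $collection.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])
    $build       = $buildServer.GetBuild([uri]$BuildUri)

    function Set-BuildQuality([string]$quality)
    {
        # The real script also flipped the build status; only the build quality is shown here.
        $build.Quality = $quality
        $build.Save()
    }

    try
    {
        # Copy the drop to the deployment area and run the deployment script unattended (-Unattended is assumed).
        Copy-Item -Path (Join-Path $DropLocation "*") -Destination $DeploymentArea -Recurse -Force
        & "$DeploymentArea\Deploy.ps1" -Unattended

        # Kick off the SQL Agent job that runs the ETL. sp_start_job is asynchronous,
        # so the real script also polled msdb for the job outcome before deciding success.
        Invoke-Sqlcmd -ServerInstance "etlserver" -Database "msdb" -Query "EXEC dbo.sp_start_job @job_name = N'$EtlJobName'"

        Set-BuildQuality "Ready for Testing"
    }
    catch
    {
        Set-BuildQuality "Broken"
    }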

    This then gave us the ability to test our builds and, as mentioned before, allowed us to find many problems before we went into UAT. It also gave us a platform for performing automated regression testing.

    3 Visual Studio tips

    3.1 Reducing unresolved reference warnings

    Here are some tips on how we managed to reduce some of our unresolved reference warnings.

    • Converting some derived tables to use the common table expression syntax with a column list: WITH <CTEName> (Col1, Col2, Col3) AS
    • Adding server and database variables for references which were going to be made via a linked server, and wrapping those variables in square brackets within the code.
    • Including objects which were created by code, for example SELECT ... INTO tables and indexes. With indexes we also took another approach, where possible, which was to disable and re-enable them.

    3.2 Composite Projects

    During one of our early development cycles we came across a circular reference problem, which caused issues at deployment time: I had to deploy the two databases about three times each to get a complete deployment.

    The method I used to reduce the chance of a circular reference was to use composite projects. The way I chose to split the databases into composite projects was to put the storage objects, like tables, into one project, then create another project for the code objects, like views, which referenced the storage project. The idea for this was based on some information I read from: http://vsdatabaseguide.codeplex.com/

    4 Resources

  • Integrating a rules engine with Integration Services

    Introduction

    As I have mentioned in several previous posts, I have been working on a data warehouse project. One of my main roles within this project was to work with our client's JRules / Java consultants to help integrate IBM JRules, a rules engine, with Microsoft SQL Server 2008 R2 Integration Services (SSIS).

    The goal of the integration was to have the rules engine as part of the ETL process to help evaluate business rules. The reasons to use a rules engine within the ETL process were as follows:

    • To allow business people to write the business rules for the data warehouse in a language they know: English.
    • To give the business people the responsibility for maintaining the rules to ensure that the data warehouse is up to date.
    • To be able to update the rules without having to deploy the entire ETL solution.

    IBM JRules was used within the project because it was already in use within their organisation. The challenge was how to cross the divide between Java and Microsoft .Net.

    Options to consider for integrating with the rules engine

    Below is a list of options which I and some of my colleagues thought about for integrating JRules and SSIS:

    • How the rules engine will access the data.
      • Receiving the data: should the data be supplied with the request to execute the rules engine?
      • Reading the data: is the rules engine responsible for reading the data when requested to run?
    • Which processing method to use.
      • Batch: process a batch of data within the rules engine.
      • Row by row: make separate requests to the rules engine row by row.
    • The communication / execution method.
      • Using a web service (SOAP or RESTful) method.
      • Coding directly against the rules engine API libraries. For our integration we would have had to use some sort of interoperability classes, like JNBridge or JIntegra.
    • How to apply the results of the rules engine: does the rules engine apply the results directly, or are they returned to the ETL for processing?
    • Error handling.
      • Rules engine errors: how to record the errors from the rules engine.
      • Integration errors: how to handle errors caused by a failure of the integration.

    Our approach

    Our approach to the integration was to use a batch processing model with a SOAP web method for triggering the rule processing. The rule processing would be responsible for reading the data, processing it and recording the results of the rules execution. The SSIS packages would prepare the batch of data, trigger the rule processing and then process the results. Below is a diagram to illustrate the integration approach:

    [Diagram: SSIS integration with JRules]

    Web service method

    The web service method, which was written by our client's JRules / Java consultants, was defined with the following parameters: the batch identifier and the name of the collection of rules to evaluate.

    This method would then start running the rules processing procedures which are as follows:

    • start reading the data
    • execute the rules over each complete source row once fetched
    • store the results back to the database

    After the completion of the rule processing the web service method would give a response indicating the success of the rules execution. The web service method was optimised to use multiple threads and to stream-read the data. The way that the rules are implemented within JRules also helps to optimise the rule processing.
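
    A quick way to exercise such a method from the Microsoft side of the divide (roughly what our console test harness did through its shared library) is PowerShell's New-WebServiceProxy. The WSDL address, operation name and parameters below are illustrative assumptions, not the real service contract.

    # Illustrative only: the service URL, operation name and parameters are assumptions.
    $proxy = New-WebServiceProxy -Uri "http://rulesserver/RuleExecution/RuleService?wsdl" -UseDefaultCredential

    # Ask the rules engine to process one prepared batch using a named collection of rules.
    $batchId  = 12345
    $ruleSet  = "OrderStatusRules"
    $response = $proxy.ExecuteRules($batchId, $ruleSet)   # hypothetical operation name

    # The response indicates whether the rule execution succeeded.
    if ($response) { Write-Host "Batch $batchId processed successfully." }
    else           { Write-Host "Batch $batchId failed rule processing." }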

    Data Integration

    The way that we passed data between SSIS and JRules was to use a couple of tables which use a name-value pair structure. One table, Inbound, was used to store the prepared data for rule processing. The other table, Outbound, was used to store the results from rule processing, which SSIS would then process.

    The reasons to use a generic structure are as follows:

    • If the rule parameters changed then only the ETL process would need to be updated.
    • The rule integration was easier to code.
    • Having the integration use only two tables helped with debugging.
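
    The shape of the two tables was roughly as sketched below; the column names and sizes are assumptions based on the description above, and Invoke-Sqlcmd is just one way to create them.

    # Illustrative sketch of the Inbound / Outbound name-value pair tables (names and types are assumptions).
    $ddl = "
    CREATE TABLE dbo.Inbound
    (
        BatchId        INT            NOT NULL,   -- the batch being processed
        RowId          INT            NOT NULL,   -- identifies a logical source row within the batch
        ParameterName  NVARCHAR(128)  NOT NULL,   -- rule parameter name
        ParameterValue NVARCHAR(4000) NULL        -- value supplied to the rules engine
    );

    CREATE TABLE dbo.Outbound
    (
        BatchId     INT            NOT NULL,
        RowId       INT            NOT NULL,
        ResultName  NVARCHAR(128)  NOT NULL,      -- name of the rule result
        ResultValue NVARCHAR(4000) NULL           -- value returned by the rules engine
    );"

    Invoke-Sqlcmd -ServerInstance "etlserver" -Database "RulesIntegration" -Query $ddl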

    SSIS Integration

    The SSIS packages would be responsible for creating the batch data, then triggering the rules engine processing method and waiting for the response. Once the response was received, the results would be applied and/or any errors logged.

    The method I used to call the web method was to write a .Net control flow component. The main reason I chose to write a component was to make code reuse easier. Maintenance was also easier, as I only had to update one component and all packages would pick up the new version.

    The approach I took with my custom component was to separate the SSIS integration from the actual call to the web service: a code library handled the interaction with the web service, and this was then wrapped with another code library that handled the SSIS integration.

    Taking this approach also allowed me to write a console application to test the interaction with the rules engine using the same calling code as the packages would use. This console application allowed us to debug any integration problems that we came across.

    Integration results

    Currently we are able to process a batch of 100,000 source rows within a five minute period. This is adequate for a normal daily run, where we only have to process the changes for one day. However, when there is a requirement to re-run the rules against the entire warehouse data the process can take a little while, because a number of batches are required to complete the rules processing. An improvement is currently being investigated, by another team member, to reduce the amount of data sent to the rules engine for processing. The method currently being looked at is as follows: find the unique rows to send to the rules engine for processing and then apply the results back to all the rows that need them.

  • Comparing Master Data Services instances

    As I have mentioned in previous posts, I've been working on a data warehouse project. Within the project we decided to use SQL Server 2008 R2 Master Data Services (MDS) to store all the warehouse specific reference data. Here are some reasons why MDS was used:

    • Allowed us to manage the reference data.
    • Reference data could be loaded into the warehouse like a source system
    • Would allow the client’s data governance team to easily update the reference data and keep the warehouse up-to-date

    For active development of entities and data loads we use a sandpit instance. When the ETL was ready to use the new entities or loaded data, a cut of the sandpit instance would be promoted to the development environment. We came across a problem when we needed to identify some changes which were accidentally made on the development instance.

    I came up with a method which helped to identify the changes fairly easily; below is the method I used. I will say that it's not a perfect solution and might not work for everyone, or continue to work once MDS updates have been applied.

    Run the following SQL script on both instances of the MDS database servers:

    CREATE DATABASE MDSCompare

    DECLARE @vColList AS VARCHAR(MAX)
    DECLARE @vViewName AS SYSNAME
    DECLARE @vSQL AS VARCHAR(MAX)
    DECLARE @vEntityID AS INT
    DECLARE @vModelID AS INT = (SELECT id FROM mdm.tblModel WHERE Name = '<ModelName,Char,Master Data>')

    -- Loop over every entity in the model and copy its data into the MDSCompare database
    DECLARE EntityID_Cursor CURSOR LOCAL FORWARD_ONLY FAST_FORWARD READ_ONLY
    FOR SELECT ID FROM mdm.tblEntity e WHERE model_id = @vModelID ORDER BY id

    OPEN EntityID_Cursor

    FETCH NEXT FROM EntityID_Cursor
    INTO @vEntityID

    WHILE @@FETCH_STATUS = 0
    BEGIN

        -- Build the column list for the entity's attributes (including the names of domain-based attributes)
        SELECT @vViewName = REPLACE(e.Name,' ','')
             , @vColList  = COALESCE(@vColList + ', [' + a.name + ISNULL('_' + la.name,'') + ']', '[' + a.name + ISNULL('_' + la.name,'') + ']')
        FROM mdm.tblAttribute a
            INNER JOIN mdm.tblEntity e
                ON e.id = a.entity_id
            LEFT OUTER JOIN mdm.tblAttribute la
                ON a.domainEntity_Id = la.entity_id
                AND la.attributeType_id = 1
                AND la.IsSystem = 1
        WHERE a.entity_id = @vEntityID
        AND a.attributeType_id <> 3

        -- Copy the entity data into a table of the same name in the MDSCompare database
        SET @vSQL = 'SELECT ' + @vColList + ' INTO MDSCompare.dbo.' + @vViewName + ' FROM mdm.' + @vViewName

        EXEC (@vSQL)

        FETCH NEXT FROM EntityID_Cursor
        INTO @vEntityID

        SELECT @vColList = null
             , @vViewName = null
             , @vSQL = null

    END

    CLOSE EntityID_Cursor
    DEALLOCATE EntityID_Cursor

    Then, to test that all the entities which are required have been created, use the following script:

    DECLARE @vModelID AS INT = (SELECT id FROM mdm.tblModel WHERE Name = '<ModelName,Char,Master Data>')

    ;WITH MDSEntity (Entityname)
    AS
    (
        SELECT REPLACE(e.Name,' ','') Entityname
        FROM mdm.tblEntity e WHERE model_id = @vModelID
    )
    , CompareTables (TableName)
    AS
    (
        SELECT Table_Name
        FROM MDSCompare.INFORMATION_SCHEMA.TABLES t
        WHERE t.table_schema = 'dbo'
    )
    SELECT *
    FROM MDSEntity e
    LEFT OUTER JOIN CompareTables c
        ON e.Entityname = c.TableName
    WHERE c.TableName IS NULL

    Then use the Visual Studio 2010 schema compare tool against the two instances of the MDSCompare databases to highlight any structural changes which have been made to the entities.

    To find data changes, use the following script on the MDSCompare databases to generate the primary key statements (the generated statements then need to be executed):

    USE MDSCompare

    SELECT 'ALTER TABLE [' + table_schema + '].[' + table_name + '] WITH NOCHECK ADD CONSTRAINT PK_' + table_name + '_Code PRIMARY KEY CLUSTERED (Code)'
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE column_name = 'code'
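
    Because the script above only generates the ALTER TABLE statements, they still need to be run against each MDSCompare copy. A small hedged loop such as the sketch below (the server name is a placeholder) will execute them.

    # Run the generated ALTER TABLE statements against an MDSCompare database (server name is a placeholder).
    $pkScript = "
    SELECT 'ALTER TABLE [' + table_schema + '].[' + table_name + '] WITH NOCHECK ADD CONSTRAINT PK_' + table_name + '_Code PRIMARY KEY CLUSTERED (Code)' AS Cmd
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE column_name = 'code'"

    $statements = Invoke-Sqlcmd -ServerInstance "MDSServer1" -Database "MDSCompare" -Query $pkScript

    foreach ($statement in $statements)
    {
        Invoke-Sqlcmd -ServerInstance "MDSServer1" -Database "MDSCompare" -Query $statement.Cmd
    }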

    Then use the Visual Studio 2010 data compare tool to highlight any data differences.

    While using the above methods to do the comparisons I found some differences which had to be ignored, because of the following scenario:

    While loading data through the batch staging process we had to set the default name attribute of the entities to an empty string, because it would not allow nulls and we did not want to use this attribute. However, the development instance had the default name attribute set to null. I believe this was because the MDS deployment tool had converted the empty element of the deployment package, which was created because of the empty string, to null while uploading.

  • Choosing the right CDC tool for the job

    Introduction

    I have, as mentioned in a previous blog, been working on a data warehouse project using CDC for extracting the source system data. I would like to share some of the experiences and evaluation criteria used for selecting a CDC tool for our project.

    The reasons for using a specialist tool were as follows:

    • We couldn't move all the source databases to SQL Server 2008.
    • We needed the captured change data to be sent to another server instance.
    • We wanted a consistent management experience for CDC across all our source systems. The two database vendors the tool needed to support were SQL Server and IBM AS400.

    Timescales

    First, we underestimated how long it would take to select a CDC tool. We planned for one month, to cover both drawing up the evaluation criteria and running the tests, and it took us about two to three months. The time was taken up evaluating the tools against the criteria and gaining access to database systems from other projects.

    However, our development of the ETL solution was able to continue while the CDC tool hadn't been selected, as we used generic columns for the CDC metadata. Our requirements allowed this approach because only a selected set of metadata was needed from the CDC tool, which most tools offered. The method used to ensure that the changes were picked up in the right order was a row ID, a big integer identity seed, within the CDC destination table. This worked because most CDC tools record the changes in the same order as they are made.
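
    As a rough illustration of those generic columns, a CDC destination table looked something like the sketch below. The metadata column names and example source columns are assumptions; the important part is the RowId big integer identity, which preserves the order in which the changes were committed.

    # Illustrative sketch of a CDC destination table with generic metadata columns (names are assumptions).
    $ddl = "
    CREATE TABLE dbo.OrderHeader_Changes
    (
        RowId           BIGINT IDENTITY(1,1) NOT NULL,  -- big integer identity seed: preserves processing order
        ChangeOperation CHAR(1)              NOT NULL,  -- e.g. I / U / D
        ChangeDateTime  DATETIME2(3)         NOT NULL,  -- when the change was captured
        OrderId         INT                  NOT NULL,  -- the captured source columns follow
        OrderStatus     NVARCHAR(20)         NULL,
        CONSTRAINT PK_OrderHeader_Changes PRIMARY KEY CLUSTERED (RowId)
    );"

    Invoke-Sqlcmd -ServerInstance "cdctargetserver" -Database "CDCStaging" -Query $ddl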

    Evaluation criteria

    Here are the categories of criteria, which our client used, to select which CDC tool to evaluate; they are common-sense categories:

    • Impact: how CDC would impact both the source and target systems' servers and databases. The server impact was measured by how many resources it would use on the server: memory, CPU, etc.

    The database impact was whether it made any schema changes and whether it had performance implications for the application. This was important to know, as there was a third-party application whose support contract would have been invalidated if any schema changes were made to its database. To test for schema changes I used a simple procedure: use the VSDBCMD tool to import the schema into a schema file before installing CDC, import it again to another file after the install, and use the VS 2010 schema compare tool to compare the two schema files.

    • Schema changes: does the CDC tool cope with schema changes being made on the source system? Could the tool continue to work if schema changes were made to columns which weren't being captured, or if new columns were added?
    • Speed: how quickly were changes committed to the target server? The metrics for this were the volume of changes and the speed of committing them, plus how long the initial synchronisation took to complete.
    • Management: what was the management tool, how easy was it to use, and how quickly was CDC recoverable from errors or disasters?

    While evaluating CDC tools we also found that we had to consider some other factors. One consideration was how the tool impacted the operational procedures of the source systems, for example backups, recovery and deployments.

    Another factor was the number of databases and tables which we wanted to capture data from. We had 20 source databases, which had about 15 tables each. Depending on the CDC tool, more time would have to be spent on developing and deploying the CDC solution.

    To develop or to configure, that is the question (sorry, couldn't resist the Shakespeare pun)

    That is the question that we found ourselves asking, as some tools required development of code to create a CDC solution. With the tools which required development, the development experience and environment had to be considered as part of the evaluation: the level of privileges required on the development computer; whether there was any integration with source control, or how to protect against loss of work; and how easy it was to transfer development from one developer to another.

    Another aspect of the CDC tools evaluation was how easy it was to deploy from our development environment to production: how easy was the CDC solution to deploy, and was there any automation through scripting or exporting?

    Observations

    While evaluating we found that the same tool gave a different level of change data capture experience depending on the database vendor it was configured against. For example, when configured against the AS400 the tool was able to give a full row of data, but when configured against SQL Server it was only able to give the columns which had changed. The reason for this was how the tool had implemented CDC for SQL Server. There was no requirement for the SQL Server replication components to be installed, and without that component the SQL Server log file only records the columns which had changed. Hence the tool was only able to give the changed columns' data and not the full row of data.

    We also found that the same CDC tool behaved differently when running against different processor architectures (x86 and x64). The difference was in the settings and memory requirements.

  • Source system data to warehouse via CDC.

    Introduction

    I have been working on a data warehouse project with a difference: the ETL is not doing the classic extract of source system data; instead the source system is going to send its data to the data warehouse by using change data capture (CDC). The reasons for using CDC were as follows:

    • To have the ETL only process the data that has changed within the source system between each run of the ETL. This would help the ETL perform, as it would only have to process the changed data and not work out what the changes were first.
    • Not to have the ETL processes impact the central source system database. CDC would be responsible for delivering the changes of the source system to another database server. The ETL could then be run at any time within the day and wouldn't be responsible for blocking the transactional source system. There were plans to have the ETL run at the end of the day for two time zones, e.g. US and UK.

    I would like to share some of the experiences, decisions and challenges that we faced while trying to build a data warehouse using CDC. The first decision, which I will put into another post, was which CDC software to use. As a consequence of using CDC we faced a challenge: how to extract data from source systems where CDC couldn't be applied. One such system was Master Data Services (MDS), due to how the data is stored within its database. This would have meant the transforms for this source system using a different approach to those where CDC was used. What we decided to do was mock CDC through the use of SSIS and then store the delta changes that we found through our dataflow process. The reason for choosing this approach was so that all the transforms had a consistent approach.

    Development

    Early into development we discovered that we had to define what was meant by processing intraday changes. Did this mean every row which was captured by CDC within that day? In other words, every change which was captured should create a history change within the data warehouse, leaving the last change as current. Or did this mean capturing event changes within the day? For example, scanning the changes for that day and finding the flags for the records that are required. For us this meant capturing event changes. Understanding the meaning of intraday changes had an impact on how we approached coding the transform procedures.

    Another challenge that we faced was how to deal with transforms which require data from two tables. The reason why this was a challenge is because of the following scenario: an entity within the warehouse has a business rule which requires data from two tables from the source system before it can be evaluated, but within the source system data has been changed and captured for one table only. For example, the order status business rule requires data from the OrderHeader table and the OrderLine table, as illustrated below.

    [Diagram: BusinessRules]

    The source system updates the OrderLine table, and CDC captures the changes and sends them to the ETL. The ETL process then runs with only the OrderLine changes and tries to evaluate the order status business rule, but how can it evaluate this when the OrderHeader data has not been supplied? As illustrated below.

    [Diagram: MissingBusinessRules]

    There are many ways this can be resolved, here are some examples:

    • Change the application process to create updates to the required linked tables. This might not solve the problem as the CDC software may not have committed all of the changes before the ETL process starts running, unless the CDC is part of the ETL.  Also this may cause performance problems for the application team.
    • To design the data warehouse schema and business rules so that the transforms don't require joins between tables to build the data warehouse. I will highlight that this may not be practical, as it could mean the warehouse basically becomes a copy of the source systems' schemas, which would give you no value for reporting and analysing.
    • To get the transform to read the missing data from the data warehouse if CDC hasn't passed on the data. This was a good option to consider, however there are a few considerations to be aware of:
      • There could be performance issues with the transform, for some of the following reasons: using left outer joins on warehouse tables (which, depending on the warehouse schema, could require a large number of tables); the use of function calls like IsNull or Coalesce; and the size of the data warehouse tables, which could make query tuning very difficult.
      • Added complexity in translating business rules or transforms if the source system data isn't being stored natively.
      • Harder to read and maintain, as the transform isn't just transforming source data.
      • The data warehouse could become even wider or larger if the source systems' native data is stored within the warehouse to help the transform. This would have an impact on querying and processing performance, and also add an overhead cost for storage.

    We solved the problem by extending one of our client's requirements. The requirement was to keep a copy of the source system data changes for each day for a further 30 days. To meet this we configured the CDC to send all the changes to an archive database and had the ETL process extract the day's changes from the archive database, rather than have CDC drop the changes into a landing database and then have the ETL copy the data again to the archive database as well as transform it. The extension of the requirement was to always keep the latest version of each primary key within the archive database. This then allowed us to write a post-extract process to pull in any missing data required for any of the transforms where data wasn't present, which in turn allowed us to write the transforms in a manner that always assumed that all data was in the same location.
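
    A hedged sketch of the idea behind the post-extract step is shown below: for the order headers that a transform needs but that the day's captured changes did not supply, pull the latest archived version of the row across. All database, table and column names here are illustrative.

    # Illustrative post-extract: fetch the latest archived OrderHeader row for any OrderLine change
    # whose header was not part of the day's captured changes. All names are assumptions.
    $postExtract = "
    WITH LatestHeader AS
    (
        SELECT h.OrderId, h.OrderStatus, h.RowId,
               ROW_NUMBER() OVER (PARTITION BY h.OrderId ORDER BY h.RowId DESC) AS rn
        FROM Archive.dbo.OrderHeader_Changes h
    )
    INSERT INTO Extract.dbo.OrderHeader (OrderId, OrderStatus)
    SELECT lh.OrderId, lh.OrderStatus
    FROM LatestHeader lh
    WHERE lh.rn = 1
      AND lh.OrderId IN     (SELECT ol.OrderId FROM Extract.dbo.OrderLine ol)
      AND lh.OrderId NOT IN (SELECT oh.OrderId FROM Extract.dbo.OrderHeader oh);"

    Invoke-Sqlcmd -ServerInstance "etlserver" -Query $postExtract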

    Testing

    We found that we had to change the way that we prepared the datasets used to test our ETL process. Normally we would create some datasets which would mirror the source system in a particular state, for testing certain scenarios for transforms and business rules. However, this time we only needed to create change records for testing, and the problem was how to ensure that the change records were fed in a controlled manner, so that consistent mirrored states of the source system and valid transitions were maintained. We resolved this problem by investing time in creating a test harness solution which mimicked our CDC tool of choice by inserting the test data into the archive database and then running the ETL process. One thing I would recommend is that you trial your CDC tool over your source systems to see what data changes will actually be captured. This will help you to create valid test data for testing transforms and business rules within your ETL process.

  • Configuring Excel Services & PowerPivot on a multi-server topology

    I have been working with a couple of colleagues, James Dawson and Russell Seymour, on installing and configuring PowerPivot within a SharePoint 2010 beta 2 farm.

    We used the following instructions to install PowerPivot on one of the application tier servers: http://msdn.microsoft.com/en-us/library/ee210616(SQL.105).aspx.

    After following the instructions we were able to verify the installation, as we had a SharePoint site with a PowerPivot gallery. We managed to upload and open a couple of Excel workbooks, with PowerPivot embedded, within the web browser. We were also able to connect to the PowerPivot service and could see the PowerPivot data files loaded. However, when I tested the slicers I was greeted with the following error:

    The data connection uses Windows Authentication and Excel Services is unable to delegate user credentials. The following connections failed to refresh: Sandbox

    We did some research and found the following blog post: http://powerpivotgeek.com/2009/12/11/excel-services-delegation/. I implemented the changes it recommended to our Excel workbooks and tried again, this time getting the following error:

    The data connection uses None as the external data authentication method and Unattended Services has not been configured on Excel Services. The following connections failed to refresh: Sandbox

    However at the time of this error we had already configured the Excel Services with the Secure Store Service application identity that we had created. 

    The problem proved to be with the configuration of all the service applications. At the start, all the service applications had their own application pools with their own domain users.

    The solution in the end was to run all the service applications under the following app pool configuration: SharePoint Web Services System.

  • Using PowerShell to understand a replication topology

    I recently had to investigate a client's replication topology. I needed to understand the publications, the subscribers and the articles. The problems I was facing were as follows:

    • The replication topology was only defined on the production infrastructure.
    • There was no documentation about the replication topology.
    • I had no way of being able to re-create the replication topology.
    • I only had a short amount of time.

    I was able to script the replication topology from SQL Server Management Studio. This was partially helpful, as I now had a script which could be used to review the replication topology. However, it ended up being a fairly large file and it would have been time consuming to review all the information.

    So I decided to write a PowerShell script (using the SQL Server PowerShell host for the Invoke-Sqlcmd cmdlet) that would do the following steps:

    • Parse the file to find the lines containing the following stored procedures:
      • sp_addarticle
      • sp_addsubscription
      As these contain the most useful parameters for understanding the replication topology.
    • Parse each line to find the key parameters and their values.
    • Then insert the information into tables within a database.

    I chose to use PowerShell because this was a perfect fit for a scripting language: manipulating a text file and doing regular expression matching on strings. A simplified sketch of the approach is shown below.
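
    In the sketch the file path, regular expressions, target table names and connection details are assumptions, and the real script captured more of the stored procedures' parameters.

    # Simplified sketch: scan a scripted replication topology and load the key sp_addarticle /
    # sp_addsubscription parameters into a database for querying. Paths, regexes and table names are assumptions.
    $scriptFile = "C:\Replication\ReplicationTopology.sql"
    $server     = "analysisserver"
    $database   = "ReplicationReview"

    foreach ($line in Get-Content $scriptFile)
    {
        if ($line -match 'sp_addarticle')
        {
            # Pull out the key parameters, e.g. @publication = N'Pub1', @article = N'Orders'
            $publication = if ($line -match "@publication\s*=\s*N?'([^']+)'") { $Matches[1] } else { $null }
            $article     = if ($line -match "@article\s*=\s*N?'([^']+)'")     { $Matches[1] } else { $null }

            Invoke-Sqlcmd -ServerInstance $server -Database $database `
                -Query "INSERT INTO dbo.Articles (Publication, Article) VALUES (N'$publication', N'$article')"
        }
        elseif ($line -match 'sp_addsubscription')
        {
            $publication = if ($line -match "@publication\s*=\s*N?'([^']+)'") { $Matches[1] } else { $null }
            $subscriber  = if ($line -match "@subscriber\s*=\s*N?'([^']+)'")  { $Matches[1] } else { $null }

            Invoke-Sqlcmd -ServerInstance $server -Database $database `
                -Query "INSERT INTO dbo.Subscriptions (Publication, Subscriber) VALUES (N'$publication', N'$subscriber')"
        }
    }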

    This helped me see that articles were repeated across publications; some of the publications had identical articles but different subscribers.

    A copy of the PowerShell script is available here.

  • How To: Emailing the Sprint Burndown report

    The Scrum for Team System process template includes reports to help manage the progress of a project. The onus is on the team to run them - by opening them - and review them. This is the default behaviour of Team Foundation Server, which our process template is built against; however, as the Team Foundation Server platform uses SQL Server Reporting Services for its reporting software, the reports can actually be pushed out to the team via e-mail.

    I have mentioned how to do this for version 2.x (and the same steps apply to version 1.x) in the following post: How to E-mail the Sprint Burndown Chart to yourself or your team.

    For version 3.0 (For which beta 2 is now available for download from the following link: http://scrumforteamsystem.com/cs/forums/4554/ShowPost.aspx ) things are a little different.

    Firstly, if not already configured, the Reporting Services server will need to be set up to point to an SMTP server. This can be done by using the Reporting Services Configuration Manager and updating the e-mail settings within this tool. You can find this tool on your TFS report server under: Start Menu > Programs > Microsoft SQL Server 2008 > Configuration Tools > Reporting Services Configuration Manager

    [Screenshot: Reporting Services Configuration Manager]

    To create the subscription you need to use Report Manager (to gain access to this you can right-click on the reports folder within Team Explorer):

    [Screenshot: Team Explorer 2010 report site]

    Report Manager will load at the root folder for your team project's reports, with a list of reports and folders. Click on the Sprint Burndown report to view it. After the report has rendered, select the Subscriptions tab and, when the page has loaded, click on New Subscription.

    Then complete the necessary details on how and when the report should be delivered. If your team project has only one work stream then all the report parameters can be set to use the report defaults. Otherwise, set the values of the release and work stream parameters that you require; the remaining parameters can be left set to the report defaults:

    [Screenshot: Sprint Burndown report parameters]

    When there is only one work stream the report can work out the release, the work stream and then the current sprint. When there are multiple work streams, by default the report will get the current release but then default to the first work stream and work out the current sprint for that work stream. To get the sprint burndown for the other work streams, repeat the steps to create a new subscription and select the required release and work stream. After the release has finished, the subscriptions for the multiple work streams will need to be updated to the next release and work streams.

  • My Adventures in Codeplex

    In October 2009 I created two CodePlex projects which I would like to tell you about:

    SQL Server Reporting Services MSBuild Tasks (ssrsmsbuildtasks)

    A few years ago I created some tasks for MSBuild to help deploy reports for my project (details can be found here), and since then my tasks have been re-used in a few other projects within my company. However, lately we have been doing a few projects which have made use of Reporting Services integrated mode with SharePoint, which meant that my tasks were unusable as they only work with a native mode report server.

    So I have been updating them to include support for an integrated mode report server. Before adding the integrated mode support I also took some time to rewrite my original native mode tasks. The rewrite reflects an improved understanding of MSBuild, making more use of item groups and metadata for setting the report server properties of the report items being deployed.

    I placed the tasks on CodePlex as I needed a better way of sharing them other than through blog posts. Using the tasks is also helpful in getting your reporting project integrated with any continuous build of your project. Another way they can help is with automating unit testing of your reporting project, which I have talked about in a previous post.

    The SQL Server Reporting Services MSBuild Tasks are available from the following URL: http://ssrsmsbuildtasks.codeplex.com/

    Server Reporting Services Slide Shower (ssrsslideshower)

    While working on our Scrum for Team System process templates for Team Foundation Server (http://www.scrumforteamsystem.com) I created a useful tool. The tool helps to show reports that are important to a scrum team in a sort of slide show presentation; more details can be found in this previous post.

    While writing the tool I found that most of its mechanics don't depend on Team Foundation Server or our Scrum template. I also thought it would be a useful tool for people who wish to demo any reports within their Reporting Services infrastructure, so I have removed the dependencies on Team Foundation Server and our Scrum template.

    I have plans to improve the tool, within the limits of my coding knowledge, to allow report parameter values or labels to be set as it renders the reports within the slide show. I also plan to get it working with an integrated mode report server. However, my ssrsmsbuildtasks project has taken most of my time, but over the next few weeks I plan to start making my changes to this project.

    I placed the project on CodePlex because within our Scrum for Team System V3 template for Team Foundation Server 2010 (for which beta 2 is now available from this link) we have dropped the inclusion of this tool and plan to release it on CodePlex instead. This is mainly because it was a value-add tool and not needed as part of the process template. Also, the changes that were made to the template and the reports mean the reports no longer have clear defaults for their parameters, which fits into the improvement plans that I mention above.

    The SQL Server Reporting Services Slide Shower is available from the following URL: http://ssrsslideshower.codeplex.com/

  • SfTS V3 Beta 2: Helping to see the wood for the trees

    Within any application lifecycle management methodology, reports are a useful tool to help see how well, or not, the project is progressing. Within the Scrum methodology the reports which are used for tracking progress are the burndown reports. In the Scrum for Team System process templates for Team Foundation Server these reports are available to help with managing the project.

    With a manually run burndown report, when an unexpected trend in the burndown happens there is an option to manually add an annotation. This would help during a retrospective when talking about why the trend oddity occurred. Within our previous versions of the Scrum for Team System templates our burndown reports would only show the end of each day's aggregated totals. Any unexpected totals would have to be investigated to understand the underlying cause. Previously this would have involved either using Team Explorer to review the history and comments, using Excel pivot tables to query the data, or writing a report against the warehouse. This can take some time to create and analyse the result set.

    In connection with the above, for the last few months I, with the help of fellow colleague Kate Begley, have been working on writing the reports for the Scrum for Team System V3 process template (for which Beta 2 has been released; details are available here), part of which has been adding drill-through report functionality to the reports. This helps with the investigation in explaining, for example, why there was an unexpected upward trend in the sprint burndown seen in our report below:

    [Report: sprint burndown chart]

    To get drill-through details for the end of a day, click on any data point, for example within the blue circle in the image above, and the drill-through details report will be displayed, as seen below:

    [Report: burndown day view details]

    In this example the details report shows the work items broken down into the incomplete tasks which contribute to the sprint burndown line's aggregated daily totals, the tasks which have been completed up to that day, and the tasks that have been descoped within the selected sprint / team. The incomplete tasks are further grouped by their states, showing the subtotals for each state as well as the grand total of incomplete tasks, which is used to make up the burndown trend.

    To investigate the unexpected upward trend in our sprint burndown I would need to compare the totals with the totals of the previous day. To do that I would export to Excel the details report for the day concerned and the previous day's details, and compare these side by side to analyse the upward trend, as shown below:

    [Screenshot: the two details reports compared side by side in Excel]

    The above shows that the reason for the upward trend in this example was new work being added to the sprint. Please keep in mind this might not always be the case, as other outcomes could have happened:

    • New tasks could have been added and the incomplete tasks could have had their work remaining totals adjusted upward because of unexpected issues
    • Both of the currently incomplete tasks could have had their work remaining adjusted upward because of impediments

    Other reports in beta 2 which have the drill-through functionality are:

    • QA / Bug History Chart
    • Release / Product Burndown Chart by Day
    • Release / Product Cumulative Flow
    • Sprint / Sprint Burndown Chart
    • Sprint / Sprint Cumulative Flow
