
Howard van Rooijen's EMC Consulting Blog (2004 - 2010)

This blog has now moved to http://howard.vanrooijen.co.uk/blog - please update your subscriptions if you wish to receive new content.

  • Keep Calm and Carry On

    After 6 years of blogging here at Conchango / EMC Consulting I've decided to move my blog to a new home:

    http://howard.vanrooijen.co.uk/blog

    Please update your RSS reader to point to my new RSS feed:

    http://howard.vanrooijen.co.uk/blog/feed/

    If you're interested in some of the projects featured in this blog here's some info: 

    You can, of course, still follow me on Twitter at http://twitter.com/HowardvRooijen

    Thanks & I hope to see you at http://howard.vanrooijen.co.uk/blog !

    @HowardvRooijen

  • Who Can Help Me?

  • Using MEF and Castle Windsor to improve decoupling in your architecture

    For the last 6 months I’ve been leading a small team to deliver a “best of breed” ecommerce retail site, based on ASP.NET MVC, Sharp Architecture, NHibernate, Fluent NHibernate, Spark View Engine, N2CMS, Castle Windsor, xVal Framework, AutoMapper, PostSharp, Gallio / MBUnit / DevelopWithPassion.bdd, Solr and SolrNet. We delivered it in 10 x 2 week sprints and went live in time for the Halloween peak trading period. You may have been following blog posts by James Broome and Jonathan George who have been covering some of the different aspects of the solution, from BDD to Performance Testing and Optimisation. I’ve collated these posts (and some of my own on the project) into a delicious list: http://delicious.com/howardvanrooijen/fdo-casestudy for those who may be interested.

    You can now see the site for yourself at http://www.fancydressoutfitters.co.uk

    As I’ve mentioned, the fundamental architecture of the site is based on Sharp Architecture – an amazing Open Source project which aims to pull together best practices around MVC / NHibernate / Castle etc. It really is an awesome framework and I attribute a lot of the success of the project to this. We extended the core framework – contributing many of the changes back to the project (multiple db support, xVal validation support, etc). One of our main project practices was doing a weekly code review – using NDepend to give us a good view of the project and the cohesiveness of the overall architecture (we were running fast). When I was trying to familiarise myself with S#arp Architecture I spent some time looking at the “Northwind” reference project. I used NDepend to take a look at the dependencies and it left a little bit of a bad taste in my mouth:

    northwind-dependencies

    I wrote a post to the group - the general theme was that we thought there were too many dependencies across the different layers and that if the project was supposed to encompass a “best practice reference architecture” it wasn’t as loosely coupled as it should be – and this was mainly due to NHibernate initialisation requirements. No-one really agreed with me (mainly because of the law of diminishing returns and the effort required to implement). But still the problem irked me and I started pondering possible ways of improving the architecture to remove this coupling.

    I had the idea of implementing a “Registrar” pattern that would be responsible for orchestrating type registration with the container, and then maybe extending that with a “Bootstrapper” pattern that would be responsible for initialisation. I also wanted a convention over configuration approach, so that as long as developers follow the convention they don’t need to get involved in any of the plumbing. The main problem is that Castle Windsor doesn’t really have a mechanism for dynamic type discovery, and there is a fundamental issue with loading types dynamically: if an assembly is added as a project reference but none of its types are referenced directly in code, that assembly is not loaded into the AppDomain and is therefore not discoverable at runtime.

    At this point I realised that MEF might have an answer. Implementing this in MEF is trivially simple and only uses some of its basic features – but that’s exactly why MEF is so useful!

    Here are the steps for implementation:

    First, create a marker interface that you can use to identify which assemblies should take part in the auto-registration of types with Castle:

    public interface IComponentRegistrarMarker { }

    For each project, add a marker class in the Properties folder:

    public class InfrastructureRegistrarMarker : IComponentRegistrarMarker {}

    Next, define an interface that you are going to use to register your components across the different projects – this should ideally live in a *.Framework, *.Utilities or *.Core type of project that is referenced by all projects:

    public interface IComponentRegistrar
    {
       void Register(IWindsorContainer container);
    }

    Next, create a concrete implementation of the above interface so that you can register dependent components using Castle Windsor’s Auto Registration feature, and use the MEF Export attribute to state that the implementation exports the IComponentRegistrar type. The use of MEF here is a simple yet powerful solution, as it allows each assembly to auto-register its own dependencies:

    [Export(typeof(IComponentRegistrar))]
    public class InfrastructureComponentRegistrar : IComponentRegistrar
    {
       public void Register(IWindsorContainer container)
       {
          container.Register(
           AllTypes.Pick()
            .FromAssembly(Assembly.GetAssembly(typeof(InfrastructureRegistrarMarker))) 
            .WithService.FirstNonGenericCoreInterface("MefAndWindsor.Domain"));
       }
    }

    Now here is where MEF comes into its own and covers the shortcomings of both the generic .NET Framework and Castle: use MEF to inspect the current folder (i.e. bin\debug) and load only MefAndWindsor assemblies, retrieve every type that is marked as an “export” of the IComponentRegistrar interface, then use some lambda syntax to call its Register method, passing in the container:

    public static class RegistrarOrchestrator
    {
       public static void Register(IWindsorContainer container, string path)
       {
          var compositionContainer = new CompositionContainer(new DirectoryCatalog(path, "MefAndWindsor.*.dll"));
          compositionContainer
            .GetExports<IComponentRegistrar>()
            .Each(e => e.Value.Register(container));
       }
    }

    And there you go – configuring your application to use a container is reduced to 3 lines of code:

    var container = new WindsorContainer();
    ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
    RegistrarOrchestrator.Register(container, AppDomain.CurrentDomain.BaseDirectory);

    I didn’t have time to refactor the Northwind sample app – so the comparison isn’t like for like, but you should get the idea. The result is that the app is more loosely coupled – now the *.Console app really doesn’t have a dependency on the *.Infrastructure assembly:

    mefandwindsor-dependencies

    You would still need to reference the *.Infrastructure project in the *.Console project to ensure that the assembly is copied into the bin folder. To totally decouple the solution you could achieve the same effect with a post-build step or a custom MSBuild target.

    This pattern could be extended – for example, an IComponentInitialiser could be used to initialise dependent frameworks, such as NHibernate – which would allow you to have total persistence ignorance in your architecture.
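
    As a rough sketch of that idea (the IComponentInitialiser interface and InitialiserOrchestrator below are hypothetical and not part of the demo download), the same MEF discovery trick could drive initialisation:

    public interface IComponentInitialiser
    {
       void Initialise(IWindsorContainer container);
    }

    public static class InitialiserOrchestrator
    {
       public static void Initialise(IWindsorContainer container, string path)
       {
          // Discover every exported initialiser in the bin folder and run it,
          // mirroring the RegistrarOrchestrator above.
          var compositionContainer = new CompositionContainer(new DirectoryCatalog(path, "MefAndWindsor.*.dll"));

          foreach (var export in compositionContainer.GetExports<IComponentInitialiser>())
          {
             export.Value.Initialise(container);
          }
       }
    }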

    If you are interested in seeing the code – I’ve uploaded the demo to my CodePlex Project.

  • Simplify Separation of Concerns in ASP.NET MVC with AutoMapper

     

    One of the great aspects of ASP.NET MVC is how lean and clean the architecture is – there is so little background noise compared to WebForms; possibly the best illustration of this is the difference between the MVC Request Pipeline vs. WebForm Page Lifecycle. The number of hoops you are forced to jump through in WebForms, compared to MVC is quite staggering.

    One of the advantages to the noise level being low in MVC is that it is much easier to perceive other patterns (or anti-patterns) that appear in your code; this is especially useful if the patterns represent commodity that could be replaced or refactored into a pattern or framework that requires less manual work to implement or maintain.

    The first example of this pattern of commodity code to appear was within our Controllers; repetitive, hand-cranked object conversion code; the web application was based on the S#arp Architecture Framework, which we had extended to support the Spark View Engine and ViewModels and we manually mapped the Domain Entities into the ViewModels before passing to the ViewEngine for rendering. We took an action from our Sprint 2 code review – to see if there was a better way to handle this object conversion. After a little bit of research, I stumbled across Jimmy Bogard’s AutoMapper:

    AutoMapper uses a fluent configuration API to define an object-object mapping strategy. AutoMapper uses a convention-based matching algorithm to match up source to destination values. Currently, AutoMapper is geared towards model projection scenarios to flatten complex object models to DTOs and other simple objects, whose design is better suited for serialization, communication, messaging, or simply an anti-corruption layer between the domain and application layer.

    AutoMapper is a powerful little framework with a whole slew of features.
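
    As a minimal sketch of the convention-based flattening described in the quote above (the Order / OrderDto types here are purely illustrative, not taken from our solution):

    public class Customer { public string Name { get; set; } }
    public class Order { public Customer Customer { get; set; } public decimal Total { get; set; } }
    public class OrderDto { public string CustomerName { get; set; } public decimal Total { get; set; } }

    // AutoMapper's convention-based matching flattens Order.Customer.Name into OrderDto.CustomerName.
    Mapper.CreateMap<Order, OrderDto>();

    var order = new Order { Customer = new Customer { Name = "Jane" }, Total = 42m };
    var dto = Mapper.Map<Order, OrderDto>(order); // dto.CustomerName == "Jane", dto.Total == 42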

    Once we performed the refactoring it became very apparent how well AutoMapper fits into an MVC-style architecture, as it enables easy separation of concerns with regard to object conversion. It was amazing how much cleaner our code became – so much so that we modified the overall solution architecture to incorporate explicit mapping layers, both for our ViewModels and for converting external 3rd party types into our Domain Entities:

    automapper-architecture

    The general pattern of usage of AutoMapper within MVC is as follows:

    1. Map Controller input into Domain Entities
    2. Pass Domain Entities into Task Layer to "do stuff"
    3. Map output of Task Layer (Domain Entities) into ViewModel
    4. Pass ViewModel to ViewEngine (Spark)

    Simple, slick and clean.

    To formalise the Mapping Layer and make it testable we implemented a simple interface:

    public interface IMapper<TInput, TOutput> 
    {     
        TOutput MapFrom(TInput input); 
    }

    Next, we implement a custom marker interface so that we can resolve the mapper from the DI container, adopting the naming convention:

    <Input Type><Output Type>Mapper

    e.g.

    public interface IEditModelEntityMapper : IMapper<EditModel, Entity> 
    { 
    } 

    Then implement the interface, configuring the AutoMapper conversions in the constructor:

    public class EditModelEntityMapper : IEditModelEntityMapper 
    {     
        public EditModelEntityMapper()     
        {         
            Mapper.CreateMap<EditModel, Entity>()             
                    .ForMember(x => x.Property, y => y.MapFrom(z => z.Property));     
        }     
    
        public Entity MapFrom(EditModel input)     
        {
             return Mapper.Map<EditModel, Entity>(input);     
        } 
    }

    Next, we use the Mapper inside an ASP.NET MVC Controller (injecting the dependencies into the constructor):

    public class CustomController : Controller 
    {     
        private readonly IEditModelEntityMapper editModelEntityMapper;     
        private readonly IOutputViewModelMapper outputViewModelMapper;     
        private readonly ITasks tasks;     
    
        public CustomController(ITasks tasks,
                                IEditModelEntityMapper editModelEntityMapper,
                                IOutputViewModelMapper outputViewModelMapper)     
        {         
            this.tasks = tasks;
            this.editModelEntityMapper = editModelEntityMapper;
            this.outputViewModelMapper = outputViewModelMapper;     
        }    
    
        public ActionResult Index(EditModel input)
        {
            var entity = this.editModelEntityMapper.MapFrom(input);
            var output = this.tasks.DoSomething(entity);
            var viewModel = this.outputViewModelMapper.MapFrom(output);
    
            return View(viewModel);
        }
    }

    We were a little sceptical at the start of the project that AutoMapper would "cut the mustard" when it came to the performance requirements of a public facing, high load, e-commerce site because of the amount of reflection AutoMapper uses at its core, but after some cursory testing we were incredibly impressed with the performance of the solution under load.
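
    For a first impression of the per-call mapping cost, a trivial micro-benchmark along these lines can help (purely illustrative and no substitute for proper load testing; it assumes EditModel.Property is a string):

    // Illustrative micro-benchmark only – the real validation was done under load.
    var mapper = new EditModelEntityMapper();
    var input = new EditModel { Property = "value" };

    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    for (var i = 0; i < 100000; i++)
    {
        mapper.MapFrom(input);
    }
    stopwatch.Stop();

    Console.WriteLine("100,000 mappings took {0}ms", stopwatch.ElapsedMilliseconds);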

    The result of switching to AutoMapper was a huge reduction in commodity plumbing code; it also meant we had a formalised pattern for separating the concern of performing object conversion that was simple, elegant and, most importantly, testable. Another side-effect of using AutoMapper was that our Controllers became spartan, lean and testable. Following a strict naming convention also promoted code discovery and reuse: if a developer is looking to convert EntityA into ViewModelB it’s very simple to use ReSharper to look for the EntityAViewModelBMapper type.

    Work smarter, not harder.

    @HowardvRooijen

  • The Value and Benefits of ASP.NET MVC

    The ASP.NET (Webforms) proposition holds firm in the light of ASP.NET MVC – its strengths are for large corporate development teams working on internally facing LOB applications. ASP.NET MVC on the other hand has an entirely different value proposition and enables teams to work in an entirely different way. Although ASP.NET MVC is intended as a retort to the success of Ruby on Rails, it actually delivers something else – a web platform that finally follows some of the design patterns that the Java Community have been using for the past decade.

    ASP.NET Webforms is a white elephant in the web world (but that’s not to say it isn’t a powerful elephant!) – there isn’t another web platform that looks anything like it, which is not surprising as its main goal was to create an abstraction over the web that would allow developers to build web applications as simply as they build Windows applications. In 2009 this abstraction is bringing little benefit to development teams; developers creating desktop experiences are now using WPF, developers creating rich experiences for the web are now using Silverlight, and Microsoft made the right move by reusing the core technologies and programming model of WPF. The end result is that ASP.NET WebForms seems slightly outmoded and has a series of fundamental architectural decisions that add huge amounts of friction to the development process.

    ASP.NET MVC on the other hand is a breath of fresh air on the Microsoft platform. It was built by a team of people who are recent recruits to Microsoft, who have years of real-world industry experience of delivering web applications using ASP.NET WebForms, who know all of the platform’s strengths as well as its pitfalls and shortcomings, and who wanted to create an alternative that doesn’t make the same mistakes. From my perspective, delivering a retail ecommerce site using ASP.NET MVC, combined with the power and features of IIS7, has had an entirely different feeling from delivering one using ASP.NET WebForms – it feels like I’m swimming with the current rather than against it.

    Here are the four reasons why it works so well:

    Testability:

    It was almost impossible to test ASP.NET Webforms – all parts of the platform were too tightly coupled to be able to write unit tests, and from a UI Automation perspective it was nigh on impossible too, as you had no control over the HTML mark-up. Fortunately MVC has been designed from the start with testability in mind. We used BDD as our testing approach on the project and it worked very well indeed; see James Broome’s blog posts on the subject for more information. The result of using BDD is that we’ve written more tests than on any other project I’ve been on, and because of that the quality of the code we’re producing is getting higher and higher. We’re 10 sprints in (10 x 2 weeks = 20 weeks total build time) and our bug count is in the low 20s (and most of these are style / validation issues).

    Because we have absolute control over the HTML Mark-up, we can finally successfully use UI Automation Tools (in our case Selenium) to take much of the regression testing burden off our QA resource. We also use code generation to create multiple test runs using our real product data and use Selenium’s Grid feature to run these tests in Virtual Servers on various different browsers. This means that we can adequately run the project with only 1 QA which in turn reduces the overall cost of developing the solution while keeping quality high.

    Separation of Concerns:

    Whereas ASP.NET WebForms allowed developers to easily get into an architectural muddle by making it easy to write business logic in the wrong place (generally resulting in a “Big Ball of Mud” architecture), MVC goes some way to creating a “Pit of Quality” that encourages developers to design their applications well, separating presentation logic from business logic and giving the ability to remove low value, repetitive plumbing tasks. The end result is that MVC applications are far leaner than traditional ASP.NET WebForms applications. This separation also allows far easier unit testing. As James points out in his blog post, the separation of concerns doesn’t just affect the architecture and testability; it also allows the different disciplines within the team (C# Developer, CSS Developer) to separate their own concerns and work very efficiently without treading on each other’s toes, which has massive positive implications for speed of delivery.

    Extensibility:

    Each concern is not only well separated but most constituent parts of the framework are replaceable. The ViewEngine (responsible for rendering the HTML) is probably one of the best examples. On our project we’ve opted to use the Spark View Engine rather than the default WebForms one, as the Spark engine gives you much finer-grained control over your mark-up and also offers a much simpler, more elegant templating approach. The outcome of this is that our front end developers are much happier, more productive and fully empowered, as they are finally masters of their domain. The other outcome is that the quality of the HTML they produce is far higher, and there are fewer cross browser issues caused by the embedded mark-up / styles that plagued ASP.NET Server Controls in the past.

    Open Source:

    ASP.NET MVC is released under an MS-PL Open Source license – which is truly groundbreaking for Microsoft. This one brave move has had many benefits, but the greatest is fostering a real community around the platform. It has also allowed the community to “fill in the gaps” and add features and functionality that the core Microsoft development team could not add in the timeframe they had. The MVCContrib project on CodePlex is a prime example of that – the community has taken the MVC framework and added a huge amount of value. Another example is the Sharp Architecture project – its vision is to create a best practice architectural framework, leveraging the ASP.NET MVC framework with NHibernate and a whole manner of other Open Source libraries.

    The Benefit of using Open Source:

    The value of these Open Source frameworks and tools is as follows: normally when a project is planned, budget defines the amount of effort that can be allocated. That effort is generally expended in three ways: delivering commodity, delivering core functionality and delivering innovation. What actually happens is that scope enlarges, which increases core functionality and commodity and pushes innovation out of scope for the capacity available. Frameworks like Sharp Architecture allow you to reduce the commodity in your project, and the effort that would have been expended on commodity can instead be used for core functionality and innovation:

     commodity-vs-innovation

    Could it have been better?

    One major criticism of the ASP.NET MVC approach – it really feels like Microsoft has played catch-up with Ruby on Rails, rather than learning from its mistakes and leapfrogging to the next generation web platform (for example creating a framework like OpenRasta, or one based on the Presenter First design pattern, which solves many of the problems inherent in the MVC design pattern and is also geared towards enabling TDD and Agile). The same end result could have been achieved by supporting the MonoRail project to deliver an MVC web framework and then spending the effort on a vNext web platform.

    @HowardvRooijen 

  • Microsoft, Open Source and Codeplex Foundation

    Thoughts on Codeplex Foundation

    I’m quite surprised, in a nice way, by the announcement of Codeplex Foundation as for the last few months I’ve been shaping some thoughts on the same topic of how Open Source products can be utilised by real companies, to deliver more cost effective projects on the Microsoft stack. This topic has been at the forefront of my thoughts as over the last 20 weeks my team have been focused on building a Greenfield best of breed eRetail platform which is composed mainly of Open Source tools, frameworks and products on the Microsoft stack – as the aim is to deliver the most “bang for buck” feature-wise, while keeping the TCO to a minimum.

    My conclusion was that the Open Source community within the Microsoft space is fragmented, not only from a communication perspective, but also from a perception perspective – what are the guidelines for real companies that want to build systems based on Open Source components? Does the use of Open Source software mean that all software projects have to sit in the Java / PHP / *NIX space?

    A few months ago I stumbled across the wonderful http://www.designintheopen.org/ - a community of practice for design & user experience people in Open Source. I was quite inspired by the notion and realised this is exactly what the .NET eco-system is missing – a central hub where the myriad of different Open Source projects, and the people who contribute to them, can communicate and collaborate – and so I registered http://www.developintheopen.org and as a side project was going to try and create a similar community. But it seems that with Codeplex Foundation, Microsoft have beaten me to it (and created a bigger, better solution) – and that’s a really good thing. Below are the thoughts I’ve been shaping.

    Other ways Microsoft could support Open Source

    Microsoft is making some great steps in the direction of embracing how Professional Developers within the Microsoft ecosystem want to design, build, test and deploy modern web applications. Even better Microsoft is finally starting to understand that Developers alone do not deliver web applications; multi-skilled teams of Developers, Testers, Graphic and Interactive Designers do.

    Much of this step in the right direction can be directly attributed to the change of mindset and tone of voice encouraged by Scott Guthrie - Corporate Vice President in the Microsoft Developer Division. Under his helmsmanship there has been a significant sea change in mentality. In his team “Open Source” does not seem to be a dirty word; it’s not treated as Microsoft’s death knell, but is viewed in the same way as it is by the external development community – it’s a wonderful thing. Open Source can add huge amounts of value to a platform (as long as it’s released under a suitable license) and allows anyone in the community to actively contribute rather than just be a passive consumer.

    A few years ago the Microsoft ecosystem didn’t have an Open Source community, but a series of enlightened steps (the release of WiX as Microsoft’s first Open Source project broke the ice, followed by the creation of MS-PL and the release of CodePlex – a community for Open Source projects based on a web front end wrapping Team Foundation Server) has culminated in the last 12 months with the release of ASP.NET MVC under MS-PL and, for the first time ever, an Open Source project (the jQuery JavaScript library) being included out of the box with a full Microsoft product.

    Microsoft has a real opportunity to change the way it interacts with the Microsoft Open Source community and to actively support it, which in turn would add large amounts of commercial value to the Microsoft platform. What most Open Source projects severely lack is the productionising skills that Microsoft has in abundance, and Microsoft could easily contribute resources to these projects to help pull them up to the quality level of a fully supported Microsoft product. A primary example would be to donate Technical Writers to help create documentation to the standard found on MSDN. Documentation is one of the greatest barriers to entry on most Microsoft based Open Source projects. There are other key resources, such as Performance and Security specialists, who could help identify and remove security issues and performance bottlenecks from these Open Source projects, which would help them become more “enterprise ready” and much more likely to make it onto companies’ “white lists” of approved tools and frameworks.

    Another possibility would be for Microsoft to contribute either financially or via dedicated resources to help drive the core efforts of the more popular Open Source projects that help make web development leaner, more efficient and of higher quality.

    If I were to be given $1000 to donate to the most important Open Source projects that directly affect Microsoft Web Development, this is how I’d spend it:

    • N2CMS (Content Management Framework) - $200
    • Sharp Architecture (Best Practice Web Architecture – incorporates many of the above) - $150
    • NHibernate (+ NHibernate.Search + Fluent NHibernate) (data access technology) - $150
    • Castle Windsor (Dependency Injection Framework – makes code leaner, cleaner & more testable) - $100
    • xVal Framework (Validation Framework) - $50
    • Spark View Engine (View Engine for rendering HTML) - $150
    • AutoMapper (used for converting objects) - $50
    • PostSharp (inject commodity code rather than writing it by hand) - $50
    • Gallio / MBUnit (Testing Platform / Unit Testing Framework) - $50
    • Horn (Package Build Framework) - $50

    Of course, what I’d actually ask Microsoft to do is finance the above projects in terms of full time development effort – i.e. 1x key developer for 12 months, 1 x key developer for 6 months in addition to the Microsoft Resources (i.e. technical writers / performance / security). If there is too much red tape involved in letting MS Resources do this work (contracts, IP issues etc), then Microsoft could fund this via its Partner network. This could be done via a series of “Grants” and the creation of a program / awards much akin to MVP but with a specific focus on people who run, contribute and support Open Source Projects – something like a “Microsoft Community Contributor”.

    Another way in which Microsoft could help is by acting as a mediator between the various Open Source Projects, for example replicating the work done with the Common Service Locator – to act as a unifying force for the myriad of Open Source Projects that tackle the same problem domain.

    The reason that there is so much take up of LAMP (Linux, Apache, MySQL, PHP) over WISA (Windows, SQL Server, Internet Information Services, ASP.NET) has very little to do with the costs of the core components that make up either acronym, but rather the availability of applications that sit on them. Take the business problem of creating an ecommerce application for your enterprise. If you look toward the LAMP ecosystem there are *many* mature, free ecommerce systems (e.g. Magento eCommerce) and *many* mature, free Content Management Systems (Joomla, Drupal etc) that you can tie together to create a solution that fits your needs. If you look at the WISA ecosystem there is nothing comparable, because the eco-system is so spartan and immature: there are no mature, free ecommerce systems and there are no mature, free Content Management Systems (Microsoft essentially left the CMS market by retiring its CMS Server in favour of MOSS – not exactly a like for like alternative). It is very difficult to compete with the “freeconomics” of the LAMP ecosystem.

    If you re-examine the same problem from a business / consultancy perspective, delivering the solution has the following cost facets:

    Solution Facets

    As a client you can’t remove the ongoing costs of Operational Support, Hosting or Consultancy; the only place you can reduce the cost of a solution is via Development & Product Licensing costs. Thus whatever you save – a few thousand for your core OS / Web Server – is far outweighed by the tens (or hundreds) of thousands of pounds of cost for your CMS and Search solutions. In the future cloud platforms, such as Azure or EC2, may in fact reduce the Hosting and Operational Support costs, as platforms will be able to scale and shrink on demand, thus reducing upfront costs; and if cloud based database offerings such as SimpleDB, SQL Azure or Azure Table Storage are utilised, this will further minimise back-up, replication & disaster recovery costs.

    In addition to the idea of funding key Open Source projects mentioned above, I would go one step further and actively fund .NET ports of some of the more successful LAMP / Java products. Microsoft bought the search technology FAST for $1.2 billion and is now bundling that with MOSS – how does this help someone creating a Microsoft based ecommerce platform without taking a hard (and expensive and inappropriate) dependency on MOSS? If MS wanted to add value to the ecosystem, for $250k it could fund a couple of the best developers in the Microsoft Open Source world (I’m thinking of folks like Oren Eini and JP Boodhoo) to re-write Solr, the Open Source enterprise search platform, in .NET. Most OS ports are direct “ports”, i.e. the API is kept exactly the same so that the documentation doesn’t have to be re-written, but by funding a ground-up re-write you could really harness the power of the .NET platform (LINQ / WCF / WF4 / PowerShell) and even create an Azure flavour that would allow the search engine to scale vastly on that platform. Add to this comprehensive documentation via a Technical Writer and you have an incredibly important application that adds immense value to the platform eco-system.

    For Microsoft to compete against the LAMP stack it needs to concentrate on the community and Open Source ecosystem to add immense value to the WISA stack. To achieve this, Microsoft should harness the passion and commitment of the leading lights in the MS OS ecosystem and help fund them to work full time on the projects they currently work on in their spare time. Microsoft should also fund rewrites of some core Open Source applications to take advantage of the power of .NET, to give businesses greater choice and greater reason to invest in the Microsoft platform. At the very least OS projects should be treated in the same way as partners’ products – as something that enriches the platform, not something that threatens it.

    @HowardvRooijen 

  • Improve your Debugging Experience by using the DebuggerDisplayAttribute

    Visual Studio 2005 offered a number of User Experience improvements within the Debugging feature set – the big furore in the lead up to release was that VB.NET developers would get Edit & Continue support but C# Developers wouldn’t. This decision was soon reversed after Microsoft had an Obi-Wan moment (and sensed “a great disturbance in the Force, as if millions of voices suddenly cried out in terror”). The noise generated by this meme blocked out some of the other great debugging features that were added in VS 2005 – such as Debugger Visualizers and the DebuggerDisplayAttribute, which bang for buck is the easiest way to improve your debugging experience.

    To demonstrate the DebuggerDisplayAttribute – I’ll reuse the code from my previous post: we start with a normal POCO class:

    [ImplementINotifyPropertyChanged]
    [ExcelSheet(Name = "Deceased MPs")]
    public class MemberOfParliamentDataModel
    {
        [ExcelColumn(Name = "constituency")]
        public string Constituency { get; set; }
    
        [ExcelColumn(Name = "firstname")]
        public string FirstName { get; set; }
    
        [ExcelColumn(Name = "fromdate")]
        public string FromDate { get; set; }
    
        [ExcelColumn(Name = "fromwhy")]
        public string FromWhy { get; set; }
    
        [ExcelColumn(Name = "lastname")]
        public string LastName { get; set; }
    
        [ExcelColumn(Name = "party")]
        public string Party { get; set; }
    
        [ExcelColumn(Name = "todate")]
        public string ToDate { get; set; }
    
        [ExcelColumn(Name = "towhy")]
        public string ToWhy { get; set; }
    }

    and then we use the LINQ to Excel provider to perform a LINQ query (reusing the provider and the xMps query set up in that post):

    var xLabourMps = from xMp in xMps 
                     where xMp.Party == "Lab" 
                     select xMp;

    We can add a break point on the next line, where we iterate through the collection to print out the results. If you expand the debugger data tips for the “Results View” property you see a long list of LINQtoExcel.Sample.DataModel.MemberOfParliamentDataModel objects. If you expand one of those nodes then you get to see the actual values of the objects. Very powerful, but ever so slightly clunky.

    image

    Enter the DebuggerDisplayAttribute:

    To improve the debugging experience all we need to do is decorate our Data Model object with a DebuggerDisplayAttribute, and list the object’s properties you want to display within { } as follows:

    [System.Diagnostics.DebuggerDisplay("{FirstName} {LastName} - {Party} Party")]
    [ImplementINotifyPropertyChanged]
    [ExcelSheet(Name = "Deceased MPs")]
    public class MemberOfParliamentDataModel
    {
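        // ... properties as shown above ...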
    
    }

    Now when we next run the application and hit the breakpoint, we get a much more relevant data tip:

    image

    This one line of code will save you numerous mouse clicks. If you’re trying to become a better developer through a continuous improvement cycle, adopting a Lean mindset of Eliminating Waste realistically means making a large number of small adjustments to your everyday working habits. Work smarter, not harder.

    @HowardvRooijen

  • LINQ to EXCEL Provider + PostSharp = Cleaner Code

    Earlier in the year, when I was doing some research into writing my own LINQ provider, I stumbled across a great listing of LINQ providers; two entries looked very interesting: the first was LINQ to RDF (Semantic Web), the second was LINQ to Excel. The latter caught my attention as it was a very simple implementation of a LINQ provider which replicates LINQ to SQL functionality, but using Excel, rather than SQL Server, as your data source. I also thought it could be a valuable addition to my developer tool belt, as with Greenfield projects, before you have a proper data model or extract, you are much more likely to be given an Excel spreadsheet containing some sample data – thus LINQ to Excel could be used as a very crude ETL tool. I thought it would also come in handy as a data source for unit testing. So with that in mind, I decided to kick its tires.

    When I’m doing this kind of research – I like to use “real data” rather than Microsoft samples (Northwinds / AdventureWorks) as they are generally more interesting. Luckily in the UK there is a charity UK Citizens Online Democracy that runs a wonderful project called MySociety:

    mySociety has two missions. The first is to be a charitable project which builds websites that give people simple, tangible benefits in the civic and community aspects of their lives. The second is to teach the public and voluntary sectors, through demonstration, how to use the internet most efficiently to improve lives.

    One of MySociety’s projects is They Work for You - whose aim is to help bridge the growing democratic disconnect between Citizen and Government – the volunteers keep tabs on their elected MPs, and their unelected Peers, and comment on what goes on in Parliament. One of the great features of the MySociety projects and especially They Work for You is their Source Code, APIs and Raw Data Feeds. For this code sample I’m using their data feed about All Members of Parliament and in particular Members of Parliament who have died while in office (the extract goes back to 1815!).

    In Excel, the data appears as below:

    image

    To get up and running the first step is to create a class that describes the data represented in the Excel WorkSheet. I implemented it using the sample provided in the original LINQ to Excel 2.5 release:

    1. Add the  [ExcelSheet(Name = "Deceased MPs")] attribute to the Data Model class, where Name is the name of the worksheet
    2. Make the class implement the System.ComponentModel.INotifyPropertyChanged interface – this is used by the LINQ to Excel plumbing to track changes to ObjectState for persisting the object back to the Excel WorkSheet.
    3. Add the PropertyChangedEventHandler PropertyChanged public event.
    4. Implement public properties with backing fields
    5. Add the [ExcelColumn(Name = "fromdate", Storage = "fromDate")] attribute to the property, where Name is the column name and Storage is the name of the backing field. 
    6. Implement the INotifyPropertyChanged behaviour in each property setter by adding the following call  this.SendPropertyChanged("FromDate");  where "FromDate" is the name of the property.

    You can see the full source of this Data Model class below:

    [ExcelSheet(Name = "Deceased MPs")]
    public class MemberOfParliamentDataModel : System.ComponentModel.INotifyPropertyChanged
    {
    private string constituency;
    private string firstName;
    private string fromDate;
    private string fromWhy;
    private string lastName;
    private string party;
    private string toDate;
    private string toWhy;

    public event System.ComponentModel.PropertyChangedEventHandler PropertyChanged;

    [ExcelColumn(Name = "constituency", Storage = "constituency")]
    public string Constituency
    {
    get { return this.constituency; }
    set
    {
    this.constituency = value;
    this.SendPropertyChanged("Constituency");
    }
    }

    [ExcelColumn(Name = "firstname", Storage = "firstName")]
    public string FirstName
    {
    get { return this.firstName; }
    set
    {
    this.firstName = value;
    this.SendPropertyChanged("FirstName");
    }
    }

    [ExcelColumn(Name = "fromdate", Storage = "fromDate")]
    public string FromDate
    {
    get { return this.fromDate; }
    set
    {
    this.fromDate = value;
    this.SendPropertyChanged("FromDate");
    }
    }

    [ExcelColumn(Name = "fromwhy", Storage = "fromWhy")]
    public string FromWhy
    {
    get { return this.fromWhy; }
    set
    {
    this.fromWhy = value;
    this.SendPropertyChanged("FromWhy");
    }
    }

    [ExcelColumn(Name = "lastname", Storage = "lastName")]
    public string LastName
    {
    get { return this.lastName; }
    set
    {
    this.lastName = value;
    this.SendPropertyChanged("LastName");
    }
    }

    [ExcelColumn(Name = "party", Storage = "party")]
    public string Party
    {
    get { return this.party; }
    set
    {
    this.party = value;
    this.SendPropertyChanged("Party");
    }
    }

    [ExcelColumn(Name = "todate", Storage = "toDate")]
    public string ToDate
    {
    get { return this.toDate; }
    set
    {
    this.toDate = value;
    this.SendPropertyChanged("ToDate");
    }
    }

    [ExcelColumn(Name = "towhy", Storage = "toWhy")]
    public string ToWhy
    {
    get { return this.toWhy; }
    set
    {
    this.toWhy = value;
    this.SendPropertyChanged("ToWhy");
    }
    }

    protected virtual void SendPropertyChanged(string propertyName)
    {
    System.ComponentModel.PropertyChangedEventHandler handler = this.PropertyChanged;

    if (handler != null)
    {
    handler(this, new System.ComponentModel.PropertyChangedEventArgs(propertyName));
    }
    }
    }

    Next, you need to ensure that you build the application in x86 configuration mode (this is crucial if you are running in a 64bit environment), otherwise the application will not be able to resolve the OLEDB drivers.

    Once you have the Data Model class set up and the build configured, you can perform a LINQ query against the Excel document: first, create a new Excel LINQ provider pointing to the document, then perform your LINQ query against the specified WorkSheet:

    var provider = ExcelProvider.Create(Path.Combine(targetDirectory.FullName, "Documents\\deceased-members.xlsx"));

    If you wanted to retrieve a list of all Labour MPs who have died in office – you can just perform a normal LINQ query:

    var xMps = from p in provider.GetSheet<MemberOfParliamentDataModel>() select p;
    var xLabourMps = from xMp in xMps
                     where xMp.Party == "Lab"
                     select xMp;

    foreach (var mp in xLabourMps)
    {
        Console.WriteLine(
            string.Format(
                "{0} {1} - {2} - {3} - {4}",
                mp.FirstName,
                mp.LastName,
                mp.Party,
                mp.Constituency,
                mp.ToDate));
    }

    If you want to add a new record the syntax is as follows:

    var diedToday = new MemberOfParliamentDataModel
    {
        Constituency = "Somewhere",
        FirstName = "John",
        FromDate = "2000-01-01",
        FromWhy = "general_election",
        LastName = "Doe",
        Party = "Ind",
        ToDate = DateTime.Now.ToShortDateString(),
        ToWhy = "died"
    };

    provider.GetSheet<MemberOfParliamentDataModel>().InsertOnSubmit(diedToday);
    provider.SubmitChanges();

    And that’s it. Very simple, very flexible, very powerful. But the Data Model class is also very ugly, very verbose; bloated with boilerplate commodity behaviour. At this point I decided to change the tires.

    The first step was to refactor the source code; originally all the source code was in a single file, so I moved each class into its own file – instantly you get a better feel for the shape and complexity of the codebase:

    image

    Next I went through the codebase and cleaned it up a little: renamed some of the variables, added more spacing to make the flow of logic easier to follow by eye, and reordered some of the object creation so that it’s easier to differentiate dependency creation from actual business logic. Now it’s much easier to see the magic that the author wove. I also added better handling for nulls (empty cells), which were causing issues with some of my test data.

    Whenever I see code that’s “verbose, bloated with boilerplate commodity behaviour” I always think about PostSharp, the Open Source AOP framework, and how it can be used to remove commodity. At about the same time an internal discussion started about eliminating magic strings from INotifyPropertyChanged implementations, and Jonathan George rose to the challenge and implemented a PostSharp aspect called ImplementINotifyPropertyChanged to weave the INotifyPropertyChanged behaviour into the decorated object at compile time. Once he published the code I refactored my sample and was amazed at the result. The code for the aspect is included in the example at the end of the post – but be warned, it’s black voodoo magik performed in “here be dragons” land. Its simplifying effect on the code is utterly amazing: the PostSharp aspect is responsible for all the INotifyPropertyChanged concerns, which means our Data Model can be converted to use automatic properties, and we can now take advantage of a default behaviour inside LINQ to Excel that resolves the Storage field of the ExcelColumn attribute to the name of the property it decorates:

    [ImplementINotifyPropertyChanged]
    [ExcelSheet(Name = "Deceased MPs")]
    public class MemberOfParliamentDataModel
    {
        [ExcelColumn(Name = "constituency")]
        public string Constituency { get; set; }

        [ExcelColumn(Name = "firstname")]
        public string FirstName { get; set; }

        [ExcelColumn(Name = "fromdate")]
        public string FromDate { get; set; }

        [ExcelColumn(Name = "fromwhy")]
        public string FromWhy { get; set; }

        [ExcelColumn(Name = "lastname")]
        public string LastName { get; set; }

        [ExcelColumn(Name = "party")]
        public string Party { get; set; }

        [ExcelColumn(Name = "todate")]
        public string ToDate { get; set; }

        [ExcelColumn(Name = "towhy")]
        public string ToWhy { get; set; }
    }

    If you download the sample, build it and then use .NET Reflector to decompile LINQtoExcel.Sample.exe, you can examine all the extra plumbing that PostSharp has woven into the MemberOfParliamentDataModel class. AOP / PostSharp are commonly used to remove commodity code such as Logging / Caching / Exception Handling – but as Jonathan’s code (and Ralf Westphal’s Software Transactional Memory example) demonstrates, it can be used in much more sophisticated scenarios.

    LINQ to Excel should be a great addition to your developer tool belt – it’s especially useful for playing with sample data in simple ETL scenarios. When combined with PostSharp, the implementation of your data model is vastly simplified and will get you up and running even faster. Don’t work hard, work smart, and utilise the many powerful Open Source tools at your disposal.

    @HowardvRooijen

  • 2 Visual Studio Add-ins you probably weren't aware of

    The Visual Studio Add-in ecosystem is not as rich as it should be, especially compared with Eclipse, but it looks like the work that is going in to Visual Studio 2010 is going to change that. Microsoft's investment in MEF and updating core components of Visual Studio from COM to the managed world should make extensibility a much easier affair. That said, there are a few wonderful add-ins out there and here are 2 that I've randomly (and delightfully) stumbled upon:

    Markup Tamer

    On previous projects, we've adopted the following convention for mark-up (extract):

    Before (all on one line):

    image 

    After:

    image 

    Formatting

    The control is formatted so that every attribute is on a new line – this really helps when it comes to Team Foundation Server Version Control (TFSVC) and merging check-ins. By default in TFSVC, files can be checked out by more than one person at a time (unlike the default setting in Visual SourceSafe), which means that extra care has to be taken if two developers are working on the same file. When it comes to merging changes on the same file, TFSVC is a little fickle and 7 times out of 10 you will have to do the merge by hand – it’s very difficult to merge changes if all the code is on a single line, and the default merging tool has issues matching blocks of code that have changed. Spreading Server Control attributes over multiple lines therefore makes the merging process easier.

    It was a bit of a pain having to manually format the mark-up to make it merge friendly. Then I stumbled upon Markup Tamer, which can automatically apply this convention to ASP.NET, Silverlight & WPF. Thus

    From:

    image

    To:

    image

    You can download the installer from: http://www.codeplex.com/DimebrainMarkupTamer

    To execute the add-in – you can find it on the edit menu:

    image

    And you can even configure its behaviour via Tools > Options

    image

    Configuration Section Designer

    It’s a visual designer for creating ConfigurationSections, which generates the code, config xml and an XSD to give you IntelliSense, all through the wonder of T4 Templates:

    image

    Here’s an example of the Config Section the above diagram generated:

    image 

    Using the ConfigurationSection (TfsServerConfiguration) is as simple as:

    image 
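
    Roughly speaking, consuming the generated section looks something like the following sketch – the section name and the ServerUrl property are illustrative assumptions; the designer generates the actual members for you:

    // Hypothetical usage of the generated configuration section class.
    var config = (TfsServerConfiguration)System.Configuration.ConfigurationManager.GetSection("tfsServerConfiguration");
    Console.WriteLine(config.ServerUrl);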

    Configuration Section Designer is such a great tool, it’s hard to fathom why this isn’t included in Visual Studio, out of the box. It certainly removes all the associated pain of creating configuration sections and all the associated plumbing.

    You can download the installer from: http://csd.codeplex.com/

  • Cloaking your ASP.NET MVC Web Application on IIS 7

    If you are building and deploying public facing web applications, security has to be one of your key considerations: ensure that you create a security threat model of your application to highlight the flow of data in your application and the possible weak points (Microsoft have a useful tool called the Microsoft Security Assessment Tool which can help you with the planning process); ensure that your production environment has been hardened (and that you have run the various tools provided to spot any vulnerabilities in your infrastructure, such as Microsoft Baseline Security Analyzer, and tools like CAT.NET and Paros for spotting vulnerabilities in your application code); and ensure that your web application protects against Cross Site Scripting (XSS) and Cross-Site Request Forgery (CSRF/XSRF) attacks.

    If you can afford the cost, adding an Intrusion Prevention System device to your network adds benefit, but if you can’t afford such a device then a tool such as UrlScan can offer some protection by blocking potentially harmful HTTP Requests. In order to use URLScan effectively you need to put an operational feedback loop in place whereby you use a tool such as LogParser (if you want a nice UI for this command line app, give Visual LogParser a try) to examine your application’s IIS Logs for suspicious activity and add rules to UrlScan and your firewall to block such requests.

    Examining the IIS logs from a high traffic public website, or having a network monitoring solution such as Cacti, is fascinating and terrifying in equal measure; once you have removed the noise of normal human generated traffic, the sheer volume of remaining non-human traffic generated by bots and spiders is staggering. Once you’ve filtered out all the requests generated by search engines’ crawlers, there are a surprising number of other requests being made against your servers – the two worst being harvesters (screen scrapers) and bots that perform vulnerability scanning and exploitation. These bots start by fingerprinting your server and then exploit any known vulnerabilities; the HTTP RFC 2068 highlights this possibility:

    "Note: Revealing the specific software version of the server may allow the server machine to become more vulnerable to attacks against software that is known to contain security holes. Server implementers are encouraged to make this field a configurable option."

    There are two recourses to this situation: firstly, you can broadcast a fake web topology – for example, if your web platform is WISA (Windows, Internet Information Services, SQL Server, ASP.NET) you can configure your servers to return the response headers of a LAMP (Linux, Apache, MySQL, PHP) platform; secondly, you can cloak this information so it isn’t broadcast at all.

    By default a WISA platform (running ASP.NET MVC) discloses its identity, by broadcasting the following response header (using Firebug):

    image

    You can turn off the X-AspNet-Version header by applying the following configuration section to your web.config:

    <system.web>
      <httpRuntime enableVersionHeader="false"/>
    </system.web>

    which results in the X-AspNet-Version being removed:

    image

    You can then remove the X-AspNetMvc-Version header by altering your Global.asax.cs as follows:

    protected void Application_Start()
    {
        MvcHandler.DisableMvcResponseHeader = true;
    }

    which results in the X-AspNetMvc-Version being removed:

    image

    But there is no easy way to remove the Server response header via configuration. Luckily IIS7 has a managed, pluggable module infrastructure which allows you to easily extend its functionality. Below is the source for an HttpModule that removes a specified list of HTTP response headers:

    namespace Zen.Core.Web.CloakIIS
    {
        #region Using Directives

        using System;
        using System.Collections.Generic;
        using System.Web;

        #endregion

        /// <summary>
        /// Custom HTTP Module for Cloaking IIS7 Server Settings to allow anonymity
        /// </summary>
        public class CloakHttpHeaderModule : IHttpModule
        {
            /// <summary>
            /// List of Headers to remove
            /// </summary>
            private List<string> headersToCloak;

            /// <summary>
            /// Initializes a new instance of the <see cref="CloakHttpHeaderModule"/> class.
            /// </summary>
            public CloakHttpHeaderModule()
            {
                this.headersToCloak = new List<string>
                {
                    "Server",
                    "X-AspNet-Version",
                    "X-AspNetMvc-Version",
                    "X-Powered-By",
                };
            }

            /// <summary>
            /// Dispose the Custom HttpModule.
            /// </summary>
            public void Dispose()
            {
            }

            /// <summary>
            /// Handles the current request.
            /// </summary>
            /// <param name="context">
            /// The HttpApplication context.
            /// </param>
            public void Init(HttpApplication context)
            {
                context.PreSendRequestHeaders += this.OnPreSendRequestHeaders;
            }

            /// <summary>
            /// Remove all headers from the HTTP Response.
            /// </summary>
            /// <param name="sender">
            /// The object raising the event
            /// </param>
            /// <param name="e">
            /// The event data.
            /// </param>
            private void OnPreSendRequestHeaders(object sender, EventArgs e)
            {
                this.headersToCloak.ForEach(h => HttpContext.Current.Response.Headers.Remove(h));
            }
        }
    }

    Ensure that you sign the assembly, then you can install it into the GAC of your web servers and simply make the following modification to your application’s web.config (or if you want it to be globally applied, to the machine.config):

    <configuration>
      <system.webServer>
        <modules>
          <add name="CloakHttpHeaderModule"
               type="Zen.Core.Web.CloakIIS.CloakHttpHeaderModule, Zen.Core.Web.CloakIIS,
                     Version=1.0.0.0, Culture=neutral, PublicKeyToken=<YOUR TOKEN HERE>" />
        </modules>
      </system.webServer>
    </configuration>

    Now when you execute a page, you should see the following HTTP response (with the X-AspNet-Version, X-AspNetMvc-Version and Server response headers removed):

    image

    One further note – the bots also fingerprint via file extensions; if you are running ASP.NET MVC, the extensionless URLs implemented via the ASP.NET MVC routing system should help avoid this type of detection.

    @HowardvRooijen
