
Simon Evans' Blog

My blog covers the technology areas I focus on here at EMC Consulting, namely architecture using the .NET Framework, ASP.NET, WCF, WCF Data Services and Windows Azure. Follow me on twitter @simonevans

  • Implementing the Repository-Mapper-Command pattern using Entity Framework 4

    Back in August last year I blogged about a pattern I’d created for consuming data from Windows Azure table storage which I called the Repository-Mapper-Command pattern (RMC).

    What was less clear in that post was that the pattern works for many kinds of repository – not just Azure Table Storage. Indeed, the pattern is well suited to any repository accessed using LINQ (IQueryable<T>), such as Entity Framework based repositories or WCF Data Services repositories.

    This is the first in a series of three posts, in which I will show the use of the RMC pattern with a simple domain model persisted through a repository built on Entity Framework 4 (EF).

    The RMC pattern revisited

    The premise of the RMC pattern is to separate the implementation concerns for a Repository into mappers and commands, making the code more testable, maintainable and promoting reuse. A repository calls one or more mappers and commands, where:

    • a mapper translates between the external facing domain objects that the repository interface defines and an internal representation of the data as understood within the repository (for example, a repository may use WCF Data Service proxy classes or EF generated entities).
    • each command executes a single repeatable transactional unit of work against the underlying infrastructure concern.

    Refining the pattern implementation based on lessons learnt

    Since I wrote my previous post describing this pattern and giving an example of how to implement it using Windows Azure table storage, I have worked on other projects that have implemented the same RMC pattern. Working with the pattern on large scale repositories has led to a few refinements in its implementation. We found that a large number of repositories led to a proliferation of commands, mappers, interfaces and general plumbing code (like exception handlers); while the code was well factored, reusable and testable, the sheer volume of code made the repository implementation feel bloated and therefore less maintainable than was ideal.

    With these points in mind, I have made the following refactorings to the code, which apply to all implementations of the pattern:

    • Place all the commands used by a single repository into a single interface that defines all the commands (as methods) the repository can execute. For example, the PersonRepository has an IPersonCommands interface.
    • Place all commands for one kind of repository (e.g. Entity Framework) in a single class, which implements the command interfaces of each repository of that kind. For example, the Commands class could implement both the IPersonCommands and IAddressCommands interfaces. Any command methods shared between these two interfaces would only be implemented once by the Commands object.
    • Implement AutoMapper profiles to reduce mapping code bloat as per Owain’s post here.
    • Implement a CommandRunner class which provides common exception and logging plumbing to each command the runner executes, and some syntactic sugar to make the repository code more readable (particularly in the case of compensating transactions).

    The Entity Framework 4 EDM

    Having described the pattern, let's look at an example implementation in EF. Firstly, here is my sample Entity Data Model (EDM):

    [Figure: the sample Entity Data Model]

    This simple model has a few features which are representative of many application models:

    • The person to address association defines:
      • A person is associated with zero to many addresses.
      • An address can be associated with one person, or exist independently of the person.
    • The person to Email address association defines:
      • A person is associated with zero to many email addresses.
      • An email address is always associated with a person.

    The important point, commonly overlooked, is that there are always two ends to an association (as described above). The value of the EDM is that it makes the relationship explicit and clear to understand. The EDM ensures that, however we navigate the model, CRUD will always work.

    The subtle difference between these two associations is that the multiplicity on the person to address association indicates that both are aggregate root objects; they can live independently of each other, and therefore each has its own repository class: a PersonRepository and an AddressRepository. The person to email address association differs in that an email address is always owned by a person. It therefore cannot be an aggregate root; an email address can only be surfaced through the associated PersonRepository.

    Because the address can be persisted without a person through its own repository, the address entity needs the additional foreign key scalar property. This is the PersonId highlighted in yellow above. Providing this scalar property enables the address entity to be managed without constant calls to the person entity.
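    To make this concrete, here is a rough sketch of how the generated Address POCO could look with that foreign key scalar property in place (the address property names here are illustrative, not taken from the actual model):

    public partial class Address
    {
        public int Id { get; set; }
        public string Street { get; set; }   // illustrative property
        public string City { get; set; }     // illustrative property

        // The foreign key scalar property (highlighted in yellow in the EDM).
        // Nullable, because an address can exist independently of a person.
        public int? PersonId { get; set; }

        public virtual Person Person { get; set; }
    }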

    Which flavour of Entity Framework?

    In my last post I covered the design decisions behind which approach you should take to using EF. Let's apply that logic to the RMC pattern in EF.

    The EF implementation used in the RMC pattern is stateless (for use within web sites and services), because the Repository pattern provides a stateless API. In the RMC pattern, mappers are only required if there is an impedance mismatch between the domain objects and the repository implementation classes. With EF, these classes are the entities in the EDM. Mappers are needed if the entity model does not match the domain model. This is often the case, particularly when a system uses multiple infrastructure concerns (e.g. EF for some of the domain model and table storage for other parts).

    This example assumes the most complex scenario, where the EF entities do not match the domain model. Based on these characteristics, this example uses a “Model first” approach (using the EDM), with mappers mapping between the domain model and the entity model. The entity model objects are auto-generated as POCO classes using the CTP 5 T4 modelling template. This means my entity model project looks like this in Solution Explorer:

    [Figure: the entity model project in Solution Explorer]

    And a typical entity generated from the T4 template looks like this in code:

    namespace EMC.Sample.Data.Entity
    {
        using System;
        using System.Collections.Generic;

        public partial class Person
        {
            public Person()
            {
                this.Addresses = new HashSet<Address>();
                this.EmailAddresses = new HashSet<EmailAddress>();
            }

            // Primitive properties

            public int Id { get; set; }
            public string Firstname { get; set; }
            public string Surname { get; set; }

            // Navigation properties

            public virtual ICollection<Address> Addresses { get; set; }
            public virtual ICollection<EmailAddress> EmailAddresses { get; set; }
        }
    }

    Note how clean this code is – a pure auto-generated POCO with no dependencies on other libraries and no change tracking implementation.

    The Domain Model

    Here’s my domain model in the class designer:

    [Figure: the domain model in the class designer]

    While the differences in my example are trivial (because the model is simple and I’m only implementing an EF repository), the domain model differs from the EDM in that address properties are named differently in this layer. This is purely to demonstrate how the mapper will work.

    The Repository

    Before showing the code for each part of the repository, here’s how the repository project is structured in solution explorer:

    [Figure: the repository project structure in Solution Explorer]

    The Infrastructure assembly contains all the repository classes, separated by infrastructure concern (e.g. Entity Framework, WCF Data Services etc.). Notice that common classes such as the CommandRunner and RepositoryException are at the root of the namespace because they are not Entity Framework specific. The Installers folder contains all the Castle Windsor installers used for dependency injection.

    The Entity Framework folder contains all the EF specific repositories. Their associated mappers and commands are contained in the corresponding sub folders.

    The repository interfaces are defined in the Domain project. Here’s the interface for the PersonRepository:

    namespace EMC.Sample.Domain.Contracts.Repositories
    {
        using System.Collections.Generic;

        /// <summary>
        /// The repository for the <see cref="Person"/> domain object.
        /// </summary>
        public interface IPersonRepository
        {
            /// <summary>
            /// Gets a person, including all child objects and collections.
            /// </summary>
            /// <param name="id">The person id.</param>
            Person Get(int id);

            /// <summary>
            /// Persists a person (add or update), including all child objects and collections.
            /// </summary>
            /// <param name="person">The person to persist.</param>
            void Persist(Person person);

            /// <summary>
            /// Gets a list of all people.
            /// </summary>
            List<Person> All();
        }
    }

    This interface defines three repository interactions: Get, Persist and All, each working with a single Person domain object or a collection of them.

    Now let's look at the PersonRepository implementation:

    namespace EMC.Sample.Infrastructure.EntityFramework
    {
        using System.Collections.Generic;
        using EMC.Sample.Domain.Contracts.Repositories;
        using EMC.Sample.Infrastructure.EntityFramework.Commands;
        using EMC.Sample.Data.Entity;
        using EMC.Sample.Framework.Mapper;
        using Entity = EMC.Sample.Data.Entity;
        using Person = EMC.Sample.Domain.Person;

        /// <summary>
        /// The Entity Framework implementation of the person repository interface.
        /// </summary>
        public class PersonRepository : IPersonRepository
        {
            private readonly IPersonCommands commands;
            private readonly IEntityMapper mapper;
            private readonly ICommandRunner runner;

            /// <summary>
            /// Initializes a new instance of the <see cref="PersonRepository"/> class.
            /// </summary>
            /// <param name="mapper">The mapper used by methods in the repository to map between entities and domain objects.</param>
            /// <param name="runner">The runner used by the repository to execute commands.</param>
            /// <param name="commands">The commands that the repository is allowed to execute.</param>
            public PersonRepository(IEntityMapper mapper, ICommandRunner runner, IPersonCommands commands)
            {
                this.commands = commands;
                this.mapper = mapper;
                this.runner = runner;
            }

            public Person Get(int id)
            {
                using (var context = new ModelContext())
                {
                    Entity.Person personEntity = null;
                    runner.Do(() => personEntity = commands.GetPersonById(id, context));
                    return mapper.Map<Person>(personEntity);
                }
            }

            public void Persist(Person person)
            {
                using (var context = new ModelContext())
                {
                    var personEntity = mapper.Map<Entity.Person>(person);
                    var addresses = new List<Address>(personEntity.Addresses);

                    personEntity.Addresses.Clear();
                    runner.Do(() => commands.PersistPerson(personEntity, context));
                    addresses.ForEach(addressEntity => addressEntity.PersonId = personEntity.Id);

                    foreach (var addressEntity in addresses)
                    {
                        runner.ThenWithCompensation(() => commands.PersistAddress(addressEntity, context), onException =>
                        {
                            runner.Compensate(() => commands.DeletePerson(personEntity, context));
                        });
                    }

                    mapper.Map<Person>(person, personEntity);
                }
            }

            public List<Person> All()
            {
                using (var context = new ModelContext())
                {
                    var peopleEntities = new List<Entity.Person>();
                    runner.Do(() => peopleEntities = commands.GetAllPeople(context));
                    return mapper.Map<List<Person>>(peopleEntities);
                }
            }
        }
    }

    First thing to note is that the repository uses dependency injection to provide instances of the mapper, the commands and the command runner.

    Look at the simple Get method:

    public Person Get(int id)
    {
        using (var context = new ModelContext())
        {
            Entity.Person personEntity = null;
            runner.Do(() => personEntity = commands.GetPersonById(id, context));
            return mapper.Map<Person>(personEntity);
        }
    }

    The core of this method uses the runner to execute a command (GetPersonById), and then uses the mapper (an AutoMapper profile) to map the EF entity (personEntity) to the Person domain object. Note that the runner is handling exceptions from EF and logging. More on that later.

    Now let's look at the Persist method (add or update):

    public void Persist(Person person)
    {
        using (var context = new ModelContext())
        {
            var personEntity = mapper.Map<Entity.Person>(person);
            var addresses = new List<Address>(personEntity.Addresses);

            personEntity.Addresses.Clear();
            runner.Do(() => commands.PersistPerson(personEntity, context));
            addresses.ForEach(addressEntity => addressEntity.PersonId = personEntity.Id);

            foreach (var addressEntity in addresses)
            {
                runner.ThenWithCompensation(() => commands.PersistAddress(addressEntity, context), onException =>
                {
                    runner.Compensate(() => commands.DeletePerson(personEntity, context));
                });
            }

            mapper.Map<Person>(person, personEntity);
        }
    }

    The first thing the Persist method has to do is map the inbound Person domain object to the EF personEntity. Earlier in this post I described how the address entity is an aggregate root, and so has its own repository. However, the person repository's Persist method can also persist the associated addresses (along with the email addresses). In this method, the persistence of the email addresses is treated differently from the persistence of the addresses:

    • The personEntity is persisted with the associated email addresses by the PersistPerson command in an ACID transaction.
    • The addresses are persisted by reusing the same PersistAddress command the AddressRepository uses in their own ACID transactions.

    Because the addresses are persisted in their own transactions, this could leave the person entity in an inconsistent state if there is a failure in persisting an address. To counter this, the runner provides a delegate to execute compensation on exception (to delete the person entity).

    You may wonder why not just run everything in a single ACID transaction. On more complex models, executing everything in one transaction can affect scalability due to mass database locks in SQL Server. While this is a fairly trivial (and unlikely) example of compensation, it demonstrates a concept that normally causes data access code to bloat – or worse still, compensation is simply ignored.

    The Mappers

    The mappers used by the repository use AutoMapper profiles to limit the amount of code required to map between entities and domain objects. Here’s an example of the PersonMapperProfile:

    namespace EMC.Sample.Infrastructure.EntityFramework.Mappers
    {
        using AutoMapper;

        public class PersonMapperProfile : Profile
        {
            public override string ProfileName
            {
                get { return "PersonMapperProfile"; }
            }

            protected override void Configure()
            {
                Mapper.CreateMap<Data.Entity.Person, Domain.Person>()
                    .ForMember(to => to.Lastname, opt => opt.MapFrom(from => from.Surname));

                Mapper.CreateMap<Domain.Person, Data.Entity.Person>()
                    .ForMember(to => to.Surname, opt => opt.MapFrom(from => from.Lastname));
            }
        }
    }

    This profile uses AutoMapper’s convention based mapping, and tells AutoMapper what to do with the properties that fall outside the convention (Surname to Lastname).
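    The IEntityMapper abstraction injected into the repository is not shown in this post. As a rough sketch (my assumption, not the exact code), it could be a thin, injectable wrapper over the static AutoMapper API, so that mapping can be stubbed out in tests; the second overload matches the repository's call that maps generated keys back onto an existing object:

    namespace EMC.Sample.Framework.Mapper
    {
        using AutoMapper;

        // Hypothetical sketch: an injectable facade over the static AutoMapper API.
        public interface IEntityMapper
        {
            TDestination Map<TDestination>(object source);
            TDestination Map<TDestination>(TDestination destination, object source);
        }

        public class EntityMapper : IEntityMapper
        {
            public TDestination Map<TDestination>(object source)
            {
                return Mapper.Map<TDestination>(source);
            }

            public TDestination Map<TDestination>(TDestination destination, object source)
            {
                // Maps onto an existing instance, e.g. copying generated keys
                // from the persisted entity back to the domain object.
                return (TDestination)Mapper.Map(source, destination, source.GetType(), typeof(TDestination));
            }
        }
    }

    Profiles would then be registered once at start-up, for example with Mapper.Initialize(cfg => cfg.AddProfile<PersonMapperProfile>()).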

    The Commands

    Let's look at the commands that the Get and Persist repository calls use:

    namespace EMC.Sample.Infrastructure.EntityFramework.Commands
    {
        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;
        using EMC.Sample.Data.Entity;

        /// <summary>
        /// The Entity Framework implementation of all commands defined by each command interface implemented.
        /// </summary>
        public class Commands : IPersonCommands, IAddressCommands
        {
            /// <summary>
            /// Gets a person by id.
            /// </summary>
            public Func<int, ModelContext, Person> GetPersonById
            {
                get
                {
                    Func<int, ModelContext, Person> func = (personId, context) =>
                        context.People.Include("Addresses").Include("EmailAddresses").FirstOrDefault(person => person.Id == personId);

                    return func;
                }
            }

            /// <summary>
            /// Persists (inserts or updates) the person and any provided child objects in an ACID transaction.
            /// </summary>
            public Action<Person, ModelContext> PersistPerson
            {
                get
                {
                    Action<Person, ModelContext> action = (person, context) =>
                    {
                        if (person.Id == 0)
                        {
                            context.People.Add(person);
                        }
                        else
                        {
                            context.People.Attach(person);
                            context.Entry(person).State = EntityState.Modified;

                            foreach (var emailAddress in person.EmailAddresses)
                            {
                                if (emailAddress.Id == 0)
                                {
                                    context.EmailAddresses.Add(emailAddress);
                                }
                                else
                                {
                                    context.EmailAddresses.Attach(emailAddress);
                                    context.Entry(emailAddress).State = EntityState.Modified;
                                }
                            }
                        }

                        context.SaveChanges();
                    };

                    return action;
                }
            }
        }
    }

    As stated earlier, there is one single Commands class that implements multiple interfaces. These interfaces define what commands each repository can call.

    The interesting thing about this class is that each command is defined as a Func or Action property. The reason for this is so that the runner can inject exception handling and logging around each command. The repository shares object context between commands but the boundary of the transaction never flows beyond a single command. This promotes many small short transactions rather than one large transaction.
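    The command interfaces themselves are not listed in this post, but based on the repository and Commands class above, IPersonCommands would look something like this sketch, with the signatures inferred from the calls shown earlier:

    namespace EMC.Sample.Infrastructure.EntityFramework.Commands
    {
        using System;
        using System.Collections.Generic;
        using EMC.Sample.Data.Entity;

        // Inferred sketch: each command is exposed as a Func or Action property
        // so the runner can wrap its execution with exception handling and logging.
        public interface IPersonCommands
        {
            Func<int, ModelContext, Person> GetPersonById { get; }
            Func<ModelContext, List<Person>> GetAllPeople { get; }
            Action<Person, ModelContext> PersistPerson { get; }
            Action<Address, ModelContext> PersistAddress { get; }
            Action<Person, ModelContext> DeletePerson { get; }
        }
    }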

    Notice too how the persist command uses manual change tracking rather than automatic change tracking. The CTP 5 context.Entry(person).State call tells Entity Framework what the state of the entity is. This manual control removes additional reads from the application, which is important for stateless applications such as web sites and services.

    The Command Runner

    Finally, let's look at the runner code:

    namespace EMC.Sample.Infrastructure
    {
        using System;
        using EMC.Sample.Framework.Diagnostics;

        /// <summary>
        /// The implementation of the command runner interface.
        /// </summary>
        /// <remarks>The command runner uses the ILog interface to log exceptions thrown by Entity Framework.</remarks>
        public class CommandRunner : ICommandRunner
        {
            private ILog log;

            public CommandRunner(ILog log)
            {
                this.log = log;
            }

            /// <summary>
            /// Executes the specified command.
            /// </summary>
            /// <param name="command">The command.</param>
            /// <returns>An instance of the command runner.</returns>
            public ICommandRunner Do(Action command)
            {
                return this.DoWithCompensation(command, e => { });
            }

            /// <summary>
            /// Executes the specified command with compensation.
            /// </summary>
            /// <param name="command">The command.</param>
            /// <param name="compensation">The compensation.</param>
            /// <returns>An instance of the command runner.</returns>
            public ICommandRunner DoWithCompensation(Action command, Action<Exception> compensation)
            {
                try
                {
                    command();
                }
                catch (Exception ex)
                {
                    if (ex.InnerException != null)
                    {
                        log.Write(LogMessageType.RepositoryException, ex.Message, ex.InnerException.Message);
                    }
                    else
                    {
                        log.Write(LogMessageType.RepositoryException, ex.Message, string.Empty);
                    }

                    compensation(ex);
                    throw new RepositoryException(ex.Message, ex);
                }

                return this;
            }

            /// <summary>
            /// Executes the specified command.
            /// </summary>
            /// <param name="command">The command.</param>
            /// <returns>An instance of the command runner.</returns>
            public ICommandRunner Then(Action command)
            {
                return this.DoWithCompensation(command, e => { });
            }

            /// <summary>
            /// Executes the specified command with compensation.
            /// </summary>
            /// <param name="command">The command.</param>
            /// <param name="compensation">The compensation.</param>
            /// <returns>An instance of the command runner.</returns>
            public ICommandRunner ThenWithCompensation(Action command, Action<Exception> compensation)
            {
                return this.DoWithCompensation(command, compensation);
            }

            /// <summary>
            /// Executes the specified command.
            /// </summary>
            /// <param name="command">The command.</param>
            /// <returns>An instance of the command runner.</returns>
            public ICommandRunner Compensate(Action command)
            {
                return this.Do(command);
            }
        }
    }

    The majority of this class is syntactic sugar (such as the Then method). Note each method returns ICommandRunner, which provides a fluent API for executing commands.

    The meat of this class is the DoWithCompensation method, which wraps the command and compensation passed in with a common exception handling and logging block.

    Next time…

    I will be posting similar examples of using the repository-mapper-command pattern with a WCF Data Service and Azure Table Storage in the future. Stay tuned.

  • Entity Framework 4 : Implementation Options

    The .NET 4 version of the Entity Framework (EF) brought several major improvements over the first version of the ORM; these features have since been extended further through the soon-to-be-released add-on, EF Features CTP 5. Note that for the rest of this post, when I refer to EF, I mean EF4 with CTP 5 installed.

    The new features in EF provide much greater choice in how you work with the framework; version 1 of the Entity Framework (part of .NET 3.5 SP1) gave you exactly one way of working with it (database first), which grated on many people who preferred to work “object model first”.

    Working “object model first” means that you prefer to design your solution considering the object model before you consider the data model. This does not mean that you do not consider the data model at all; rather, you choose to design the object model concern first. There are several reasons why you may choose to do this, but I think the most compelling is that the object model view of the world touches far more of your application than the data model does; indeed, if you opt to work test driven, you will want to work with the object model first so you can create your tests.

    In reality, working “object model first” in EF means one of two main options:

    • Working “Model First” with the Entity Data Model (EDM) tooling in Visual Studio
    • Working “Code First”, writing POCO domain objects in code (or using the class designer) and using the convention based API to handle data mapping.

    Both of these options provide a way of automatically generating the underlying SQL data model.

    “Model First” development using the Entity Framework

    Model First development in EF means working with the Entity Data Model (EDM). The EDM is a model abstraction concerned with mapping Entities (objects) to Data (relational stores). The EDM is an XML format consisting of three core sections:

    • the conceptual model (CSDL), which models the entities,
    • the structural model (SSDL), which models the data, and
    • the mapping model (MSL), which manages the impedance mismatch between the two.

    The EDM has been a core part of EF from version one. Most people view the EDM through Visual Studio tooling, which looks like this:

    [Figure: the EDM designer in Visual Studio]

    Working model first with the EDM provides three key benefits over code first:

    • It enables developers to visualize the problem and focus on the whole model rather than individual aspects of it.
    • It provides build time validation that your entities will work for all CRUD operations no matter how you navigate the object model (via edmgen.exe).
    • It provides a way of working effectively with data architects in a domain that they are expert in.

    The complexity of the model in question will determine the value of working model first; the more complex the model, the more valuable a model first approach is.

    Code is generated from the model through the Visual Studio tooling (EdmGen.exe and T4 templates). In EF there are now three auto-generation options for implementing a model first approach:

    • Using the basic ObjectContext implementation to generate entities which inherit from the EntityObject class (like EF v1).
    • Using self tracking entities. This differs from option one in that the entities do not inherit from the EntityObject class and do not depend on the System.Data.Entity assembly (the generated ObjectContext still does), so self tracking entities can be used as currency between layers of an n-tier application. Like option one, the entities are automatically tracked by the object context.
    • Using POCO entities (CTP 5 only). This option is similar to self tracking entities except that the entities are completely ignorant of change tracking.

    Additionally, you can switch off code generation and roll your own if you prefer. In this case the POCO entity properties must match the entity properties you model in the EDM (the CSDL section of the model).

    “Code First” development using the Entity Framework

    Code first development (CTP 5 only) means that you write your entity objects as POCO classes and use a convention based approach to handling mapping to a data model. A detailed blog on how to do this can be found on the ADO.NET team blog.
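    To give a flavour of the approach, a minimal code first sketch using the CTP 5 DbContext API might look like this (the class and property names are illustrative):

    namespace EMC.Sample.CodeFirstExample
    {
        using System.Data.Entity; // CTP 5: EntityFramework.dll

        // A plain POCO entity with no EF base class or attributes.
        public class Person
        {
            public int Id { get; set; } // becomes the primary key by convention
            public string Firstname { get; set; }
            public string Surname { get; set; }
        }

        // The context discovers the mapping by convention and can
        // generate the underlying database schema automatically.
        public class PersonContext : DbContext
        {
            public DbSet<Person> People { get; set; }
        }
    }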

    Working code first has the following benefits over model first:

    • A more lightweight approach with fewer artefacts.
    • Suits developers who prefer to work with code rather than a visual abstraction.

    Which approach is best for me?

    I’ve spent a while trying to distil out of my head what the key factors are in making these decisions. I think ultimately, there are three key factors:

    • The type of application or service being built, such as a web application, service, thick client or RIA application.
    • The number of data stores or services the application will consume.
    • The complexity of the data store you are surfacing through EF.

    The last of these three points is the hardest to quantify, because complexity is subjective. The main thing that makes a model complex is the sheer number of entities and relationships. If you can't easily picture all your system's relationships in your head, then you will get value from a model.

    The diagram below illustrates the decision tree as to which approach I think best suits each option:

    [Figure: Entity Framework implementation options decision tree]

    Here are the key points from this diagram:

    • Value the EDM over code first in more complex modelling scenarios.
    • With the exception of WCF Data Services, use manual change tracking against POCO entities in stateless applications (e.g. websites and services).
    • Use self tracking entities with auto change tracking for simple thick client (stateful) applications.
    • If you only have one data store, use your entities as Domain objects in your application or service.
    • When you implement multiple stores, implement domain-to-entity model mapping within your repository where necessary. A good way to do this without introducing code bloat is using AutoMapper profiles, as described in Owain Wragg’s blog here.
  • Liberating Identity using Windows Identity Foundation

    Last night I presented to the London Connected Systems User Group on the subject of Windows Identity Foundation (WIF).

    The LCSUG community is focussed around building and integrating services. One architectural concern that has been widely ignored in service oriented architecture is identity and access control. Web based identity protocols such as WS-Trust, OAuth WRAP and WS-Federation are of great importance to service and application design because they enable identity to be freed beyond the boundaries of an application or single enterprise and onto the web.

    The software demands of end users have changed in recent years; people expect access to information anywhere, from any device, all the time. For most enterprises, this demand will mean provisioning internet facing applications and services in a public cloud, while key tier 1 or line of business applications will remain under close supervision on premise or in a private cloud. This is the reality of “hybrid cloud” scenarios.

    To deliver cloud enterprise architecture you need one key ingredient: HTTP. Identity that flows over HTTP(S) enables access control to operate from device to service and beyond in a way that is seamless and invisible to end users. This web single sign-on capability is also becoming a key differentiator in a user's experience of an application.

    The slide deck I gave last night can be viewed below:

    Over the next few weeks I will be providing detailed examples of using WIF with ASP.NET MVC, WCF and WCF Data Services.

  • Common Windows Identity Foundation WS-Federation Exceptions Explained

    Windows Identity Foundation (WIF) is a hugely important part of the .NET Framework, enabling you to architect and develop relying parties (RP) and security token servers (STS) that use claims based identity models. For all of this goodness, WIF is surprisingly undocumented on MSDN for a released product (at the time of writing at least), and when developing against it, it won't be long before you come across “ID” exceptions with little or no explanation.

    Having spent many hours of my life trawling through forums trying to understand the issues, I’ve decided to blog a list of every exception I have seen when using WIF to create a relying party ASP.NET website that uses WS-Federation to authenticate with a security token server. This is sometimes called “passive” authentication, where the browser is acting as a thin client “middle man” in the federation process between the RP and the STS.

    Note that this blog is not a guide to developing with WIF, WS-Federation or its use with ASP.NET; I’ll assume you already know how the pieces fit together. If you want some good resources on how to get developing with WIF, I suggest you look here:

    The following list describes several common Windows Identity Foundation exceptions associated with using WIF to build a relying party using WS-Federation. Each entry gives the exception ID, its error message, and what it means.

    ID1025: "Cannot find a unique certificate that matches the criteria."

    The relying party is unable to read the security token, which has been encrypted at the message level by the issuing STS. In order for the relying party to decrypt the token, it needs access to the x.509 certificate used to encrypt the token. This certificate is referenced in the relying party's WIF configuration under the <serviceCertificate> element. This exception states that there is an issue identifying a unique certificate that matches the configuration.

    ID4036: "The key needed to decrypt the encrypted security token could not be resolved from the following security key identifier."

    This exception follows on from ID1025 above. We are still trying to decrypt an encrypted security token using the information contained in the <serviceCertificate> element. This time, however, WIF has managed to uniquely identify the certificate, but it cannot access the certificate's private key to execute decryption. This may be either because the certificate is missing a private key, or because the service account the relying party website is using does not have permission to read the private key.

    ID4175: "The issuer of the security token was not recognized by the IssuerNameRegistry. To accept security tokens from this issuer, configure the IssuerNameRegistry to return a valid name for this issuer."

    Security tokens are signed by the issuer (the IP-STS). This issuer is validated by the relying party so that the RP can be sure the tokens have been issued from a trusted source. The relying party's WIF configuration contains an <issuerNameRegistry> element where the settings for the issuer's signature are stored. This exception means that the configuration contained under the issuer name registry does not match the signature of the security token.

    ID4291: "The security token 'Microsoft.IdentityModel.Tokens.SessionSecurityToken' is not scoped to the current endpoint."

    Session security tokens are scoped to work only for a single relying party, based on the realm and audience configuration in WIF. This is reflected in the path of the cookies created by the cookie handler. If you attempt to use this cookie for another RP, or a sub-application of the RP, you will see this exception.

    ID1073: "A CryptographicException occurred when attempting to decrypt the cookie using the ProtectedData API."

    The cookie the message refers to is the session security token, stored as a series of cookies named FedAuth1 to FedAuthX (based on the size of the SessionSecurityToken). The Session Authentication Module (SAM) is responsible for managing the reading and writing of the session security token between page requests, using a cookie handler to physically read and write the cookies in the request and response stream. This exception states that the encrypted session security token could not be decrypted by the SAM. The encryption method used here is DPAPI.

    ID1074: "A CryptographicException occurred when attempting to encrypt the cookie using the ProtectedData API."

    As with ID1073, the cookie the message refers to is the session security token stored as a series of cookies named FedAuth1 to FedAuthX. This exception states that the session security token could not be encrypted by the SAM. The encryption method used here is DPAPI.

    How do I fix ID1025?

    There are several ways to identify a certificate (e.g. via a certificate's thumbprint or subject name). Make sure the certificate is complete and installed correctly, and that the identifier is correct. For example:

    <certificateReference x509FindType="FindBySubjectName" findValue="mysubjectname" storeLocation="LocalMachine" storeName="My" />

    This configuration references the certificate by subject name (here, mysubjectname) in a store on the same machine. Note that the storeName "My" actually refers to certificates stored under Personal when viewed in the certificates MMC snap-in.

    Be careful when using FindByThumbprint. I have seen issues with this because the thumbprint value has to be correct hex, and configuration file encoding can corrupt it, causing much pain during configuration management.

    How do I fix ID4036?

    Firstly, ensure that your certificate is complete with all the required properties for use with WIF (see below).

    Assuming that is the case, check which user account your web site is running under in IIS by:

    1. Looking at the site's basic settings and checking which app pool the site is using.
    2. Looking at the named app pool and checking the identity by selecting advanced settings of the app pool.

    Next, open up MMC, add the certificates snap-in and manage certificates for the computer account.

    Navigate to Personal > Certificates. Select the appropriate certificate referenced by WIF, right click All Tasks > Manage Private Keys. Ensure that the identity account used by the app pool has read access to the private key.

    How do I fix ID4291?

    Check the path of the chunked cookie (using development tools in Firefox) and ensure they match your configuration as expected. Check if your application contains any sub web applications. If needed, you may need to force the cookie path by setting the <cookieHandler path="custompath" /> configuration attribute.

    How do I fix ID1073?

    Using DPAPI requires the app pool load user profile setting to be turned on for the app pool in IIS. To do this, go to the app pool > advanced settings and set Load User Profile to true.

    This will now work fine for the default ApplicationPoolIdentity account, but in production systems running a custom Windows account (on 64-bit), you will still get the same exception when the app pool recycles (I don't know why).

    I solved this by switching the token protection from DPAPI to an RSA cryptographic algorithm using the following WIF event handler in the global.asax:

    protected void OnServiceConfigurationCreated(object sender, ServiceConfigurationCreatedEventArgs e)
    {
        //
        // Use the <serviceCertificate> to protect the cookies that are
        // sent to the client.
        //
        List<CookieTransform> sessionTransforms =
            new List<CookieTransform>(new CookieTransform[] {
            new DeflateCookieTransform(),
            new RsaEncryptionCookieTransform(e.ServiceConfiguration.ServiceCertificate),
            new RsaSignatureCookieTransform(e.ServiceConfiguration.ServiceCertificate) });

        SessionSecurityTokenHandler sessionHandler = new
            SessionSecurityTokenHandler(sessionTransforms.AsReadOnly());

        e.ServiceConfiguration.SecurityTokenHandlers.AddOrReplace(sessionHandler);
    }

    This code has the added benefit of working in a web farm without all nodes in your farm sharing the same machine key in configuration, which is much nicer (and what I needed).

    How do I fix ID1074?

    I have only seen one cause of this failure. Imagine your relying party is accessed at:

    https://mysite.com/myapplication/

    This is your audience URI used on the security token issued from the STS.

    Now if your user accesses the relying party via:

    https://mysite.com/myapplication

    the lack of a closing slash at the end of the URI causes this exception, thrown by the FAM. This is surely a bug in WIF.

    The workaround for me was to switch off passive redirect in the FAM configuration:

    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="false" issuer="https://close-brothers.dev/closests/identity/issue" realm="https://close-brothers.dev/closeasset/federation" homeRealm="https://close-brothers.dev/closests/identity/issue" requireHttps="true" />
      <cookieHandler requireSsl="true" />
    </federatedAuthentication>

    and then write code to handle the redirect myself. Again, this workaround was something I think I would have needed regardless, because my site uses ASP.NET MVC and I want the routing engine to handle all requests and provide plumbing for home realm discovery.
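    For what it's worth, the manual redirect only needs a few lines. This is a rough sketch using WIF's SignInRequestMessage class (the helper name is mine; the issuer and realm come from the WIF configuration via the FAM):

    using System;
    using Microsoft.IdentityModel.Protocols.WSFederation;
    using Microsoft.IdentityModel.Web;

    public static class FederatedSignIn
    {
        // Hypothetical helper: builds the WS-Federation sign-in URL to redirect to
        // when passiveRedirectEnabled="false".
        public static string BuildSignInUrl(string returnUrl)
        {
            var fam = FederatedAuthentication.WSFederationAuthenticationModule;

            var signIn = new SignInRequestMessage(new Uri(fam.Issuer), fam.Realm)
            {
                Context = returnUrl // round-trips the originally requested URL
            };

            // In MVC: return new RedirectResult(FederatedSignIn.BuildSignInUrl(url));
            return signIn.WriteQueryString();
        }
    }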

    WIF and x.509 certificate creation in the real world

    When you develop with WIF, you are given a handy little tool called FedUtil.exe. It does a bunch of things, like changing your application configuration and generating self signed certificates under the covers.

    As useful as this is for development, when it comes to deployment on production environments, you will want to have all the certificates signed with a certificate authority, and you will want control of certificate generation using MakeCert.

    The lack of WIF documentation here is very tiring, and is from my experience the reason for half of the exceptions I mention above. With the help of two platform architects (thanks to James Dawson and Barry Feist) we figured out what our makecert commands needed to be:

    REM Root Certificate Authority
    makecert.exe -pe -n "CN=My Root CA,O=MyOrg, OU=Dev, L=London, C=GB" -ss my -sr LocalMachine -sky exchange -a sha1 -r "MyRootCA.cer"

    REM SSL Cert
    makecert.exe -pe -n "CN=ssl.dev,O=MyOrg, OU=Dev, L=London, C=GB" -ss my -sr LocalMachine -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 -in "My Root CA" -is My -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 "MyRootCA.cer"

    REM STS Cert
    makecert.exe -pe -n "CN=STS.Web,O=MyOrg, OU=Dev, L=London, C=GB" -ss my -sr LocalMachine -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -in "My Root CA" -is My -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 "MyRootCA.cer"

    REM Relying Party Cert
    makecert.exe -pe -n "CN=RelyingParty.Web,O=MyOrg, OU=Dev, L=London, C=GB" -ss my -sr LocalMachine -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.2 -in "My Root CA" -is My -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 "MyRootCA.cer"

    Here we are creating four certificates. The first is the root CA. The second is a certificate used for SSL between the relying party and the STS. The third is a certificate used by the STS to sign tokens. The final certificate is used to encrypt the security token, and to decrypt it at the relying party.

  • Consuming data from Windows Azure Table Storage using the Repository-Mapper-Command pattern

    Back in May I presented to the UKAzureNet group about how we at EMC Consulting had developed the See The Difference website using many technologies including Windows Azure.

    Part of this session included some information on how we had consumed data we were storing in Windows Azure Table Storage. In this post, I am going to describe the patterns I designed for this in much more detail.

    The challenge of consuming data from Table Storage

    The Windows Azure SDK provides an API out of the box to consume table storage data. This API is built on top of WCF Data Services (formerly ADO.NET Data Services), and thus uses OData messaging to execute queries against table storage.

    Table storage uses an implementation of OData with the following key characteristics:

    • It adds table storage specific authorization HTTP headers to each message, which map to your Windows Azure account.
    • $expand does not work, because table storage provides a simpler, schemaless entity-attribute-value model.
    • Entity group transactions only work against a single table in a single partition.
    • $orderby does not work, because table storage cannot reorder data (data is ordered on its partition and row keys only).

    These constraints provide many challenges in how you design your table storage implementations. The choice of partition keys, row keys and the granularity of each table is key to designing effective table storage. Other people have already blogged heavily on how to solve such issues.

    When you have successfully designed your table storage model, the next challenge is how to consume this model in an effective manner. Yes the Windows Azure SDK provides you with an API, but you have to figure out the following:

    • How can I make this testable using BDD specs?
    • How do I handle exceptions and diagnostics?
    • How do I make my data access code maintainable?
    • How do I encapsulate my table storage access in a way that calling code is ignorant of the implementation?
    • How do I work with transactions across entities?
    • How can I effectively implement retries and compensation for failure?

    Faced with all these challenges, I ended up architecting a Repository-Mapper-Command pattern, which I will describe below.

    An overview of the Repository-Mapper-Command pattern

    The Repository-Mapper-Command pattern is, as the name suggests, an implementation of three well known design patterns chained together:

    • Repository. Formal definition: conceptually, a Repository encapsulates the set of objects persisted in a data store and the operations performed over them, providing a more object-oriented view of the persistence layer. Repository also supports the objective of achieving a clean separation and one-way dependency between the domain and data mapping layers. Motivation for use: to ensure that calling clients are ignorant of storage implementation details; to provide an API that works using the domain layer objects as currency; to provide a testable layer that enables interactions with data to be articulated using BDD specifications.
    • Mapper. Formal definition: to map from one or more objects to one or more objects. Motivation for use: to map the contents of domain objects passed into the repository layer into the table storage entity objects required for persistence, and vice versa.
    • Command. Formal definition: encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations. Motivation for use: to enable the reuse of table storage operations by chaining commands together within a repository, and to enable the command interactions executed within a repository to be tested using BDD specifications.

    From a calling client's perspective, the repository is called to either retrieve or persist domain object(s). The repository calls mapper classes to translate between domain objects and table storage entities, and calls command objects to execute operations against the table storage context. The command objects use table storage entities as currency, not the domain objects.

    For the rest of this article, a Comment domain object will be the subject of this pattern. A comment domain object represents a single comment stored in table storage, which requires retrieval and persistence using this pattern.

    An important aside on the role of data service context and transactions

    Before describing each part of this pattern in more detail, it is worth recapping on the role of DataServiceContext in WCF Data Services (used by TableServiceContext).

    The best way to think of DataServiceContext is as a constructor of an OData message or messages. For example, if you read an entity, update it and insert another entity in a single batch operation (by calling SaveChanges with SaveChangesOptions.Batch), this executes as a single request-reply message. This message is the boundary of what can be done in a single transaction.

    An instance of a DataServiceContext object is not thread safe. The ramification of this is that each transaction (unit of work) should be executed using a new instance of the context. It also means that the contents of an ACID transaction need to be represented in the domain model using composition. For example, if I needed to transactionally insert a customer entity and the customer's address within the same transaction, the domain model must represent this as a single object graph.
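    As a minimal sketch of what this looks like against table storage (the table name and the idea of saving two comment rows are illustrative):

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Data.Services.Client;

    public static class UnitOfWorkExample
    {
        // Illustrative: persist two rows in one entity group transaction.
        public static void SaveCommentPair(CloudStorageAccount account, CommentRow first, CommentRow second)
        {
            // A fresh context per unit of work; DataServiceContext is not thread safe.
            var context = new TableServiceContext(account.TableEndpoint.ToString(), account.Credentials);

            // Both rows must be in the same table and the same partition
            // for an entity group transaction to be valid.
            context.AddObject("Comments", first);
            context.AddObject("Comments", second);

            // SaveChangesOptions.Batch sends a single OData batch request;
            // the whole batch commits or fails atomically.
            context.SaveChanges(SaveChangesOptions.Batch);
        }
    }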

    One final gotcha when using table storage entity group transactions: the built-in retry policy mechanism of the table storage client does not work with entity group transactions. I had to implement my own retry mechanism within the repository, and this was a key motivator for implementing the command pattern.
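    The hand-rolled retry was nothing sophisticated; something along these lines (the attempt count and back-off values are illustrative):

    using System;
    using System.Threading;

    public static class Retry
    {
        // Naive linear back-off retry, used because the storage client's
        // built-in RetryPolicy does not apply to entity group transactions.
        public static void Execute(Action command, int maxAttempts)
        {
            for (var attempt = 1; ; attempt++)
            {
                try
                {
                    command();
                    return;
                }
                catch (InvalidOperationException) // OData errors surface as InvalidOperationException
                {
                    if (attempt == maxAttempts)
                    {
                        throw;
                    }

                    Thread.Sleep(TimeSpan.FromMilliseconds(200 * attempt));
                }
            }
        }
    }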

    These facts are fundamental to the design of a repository that uses WCF Data Services such as table storage repositories.

    The Repository

    The repository provides an API that works using domain objects, ignorant of the data access and storage technologies used. The interfaces enable the repository to work with an IoC container such as Castle Windsor, which enables repositories to be stubbed out for testing purposes.

    The following diagram illustrates the repository classes and interfaces (click to open it in the original size):

    [Figure: the repository classes and interfaces]

    The list below describes the responsibilities of each class and interface in the repository:

    • ITableStorageRepository<T>: Defines a generic interface for all table storage repositories in the system, containing the methods:
      • Single, for returning a single domain object.
      • SelectPartition, for returning a list of domain objects from a table partition.
      • Delete, for deleting a single domain object.
      • Persist, for adding or updating a single domain object.
    • ICommentRepository: An implementation of the ITableStorageRepository<T> interface for the Comment domain object. Extensions to the standard methods listed above would be placed in this interface.
    • TableStorageRepositoryBase: Base class inherited by all repository classes, responsible for encapsulating the following functionality:
      • Creation of the ApplicationDataServiceContext, including using the correct CloudStorageAccount settings.
      • Logging exceptions from command objects into Azure diagnostics.
    • CommentRepository: The concrete implementation of the ICommentRepository interface. Inherits the TableStorageRepositoryBase class. Executes commands and uses mappers to translate between domain objects and entity rows.
    • ApplicationDataServiceContext: Provides a strongly typed wrapper over the TableServiceContext object, providing easy access to tables and default settings for tracking options.
    • TableServiceContext: The Azure SDK provided class that is a table storage specific implementation of the WCF Data Services class DataServiceContext. The TableServiceContext class provides additional plumbing to handle table storage additions to the OData messaging, such as adding authentication HTTP headers to messages.
    • CloudStorageAccount: The Azure SDK provided class that represents an Azure storage account as set up via the Azure service portal.

    The core currency of the repository API is a domain object. The repository exposes methods such as Single to retrieve a single domain object.

    A transaction cannot flow outside of a repository, or outside of a method on a repository. This is good practice regardless of the type of repository, but it is imperative given the nature of the DataServiceContext class. Hence, if a calling client calls persist against two repositories, this will occur as two separate ACID transactions. Failure must be handled using a compensating transaction to leave data in a consistent state.

    Each method on a repository must instantiate its own service context for thread safety. The creation of a service context with the configuration is handled by TableStorageRepositoryBase. The service context instantiated is of type ApplicationDataServiceContext, which provides strongly typed access (via IQueryable<T>) to each table service entity.

    A single repository method will execute one or more commands against a single instance of the service context. Hence, a transaction can flow between commands within a single repository method.

    In order for a command to be executed, the repository must map any domain objects into table storage entity objects using a mapper. Likewise the result of a read command requires a mapper to map the table storage entity back to a domain object.

    The pattern implements a Persist method, rather than separate add and update methods. This means the repository is responsible for deciding whether to add or update the domain object(s), based on whether the object has an entity key set. For example, the Persist method will execute an add when the entity key is null.
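    In code, the heart of that decision looks something like this sketch (the entity set name and the use of RowKey as the entity key are assumptions for illustration):

    public class PersistCommentRowCommand : TableStorageCommandBase, IPersistCommentRowCommand
    {
        public void Execute(ApplicationDataServiceContext context, CommentRow row)
        {
            if (row.RowKey == null)
            {
                // No entity key yet: add the new row to the table.
                context.AddObject("Comments", row);
            }
            else
            {
                // Existing row: attach with a wildcard ETag and mark as updated.
                context.AttachTo("Comments", row, "*");
                context.UpdateObject(row);
            }

            context.SaveChanges();
        }
    }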

    The repository catches TableStorageCommandException objects, and actions the exception accordingly, including logging the exception against Windows Azure diagnostics via the TableStorageRepositoryBase.LogCommandException method.

    The Mapper

    The following diagram illustrates the mapper classes and interfaces, including the related domain objects and table storage entities (click to open it in the original size):

    [Figure: the mapper classes and interfaces]

    The list below describes the responsibilities of each class and interface in the mapper:

    • IMapper<TInput, TOutput>: The generic interface for mapping from an input entity to an output entity.
    • ICommentRowToCommentMapper: Implements the IMapper<TInput, TOutput> interface, to map from a CommentRow table storage entity to a Comment domain object.
    • ICommentToCommentRowMapper: Implements the IMapper<TInput, TOutput> interface, to map from a Comment domain object to a CommentRow table storage entity.
    • CommentRowToCommentMapper: Concrete implementation of the ICommentRowToCommentMapper interface.
    • CommentToCommentRowMapper: Concrete implementation of the ICommentToCommentRowMapper interface.
    • CommentRow: A strongly typed representation of a single comment row in a table. CommentRow inherits from TableServiceEntity.
    • TableServiceEntity: The Azure SDK provided class which represents a single entity in a table.
    • Comment: The Comment domain object, passed into and out of the repository and used by all other layers of the application architecture.

    The mapper classes above are concerned with mapping between the Comment domain object and the CommentRow table service entity object. The mappers translate between the domain objects surfaced by the repository and the table storage entities used by commands.
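    As an illustration, the read-side mapper could be as simple as this sketch (the Comment and CommentRow property names are assumptions):

    public class CommentRowToCommentMapper : ICommentRowToCommentMapper
    {
        public Comment Map(CommentRow input)
        {
            // Translate storage concerns (partition and row keys) into
            // domain concerns; property names here are illustrative.
            return new Comment
            {
                Id = input.RowKey,
                ProjectId = input.PartitionKey,
                Text = input.Text,
                CreatedOn = input.Timestamp
            };
        }
    }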

    The Command

    The command encapsulates a single operation against table storage and a table service entity (such as a LINQ query). The repository will chain together commands against a single ApplicationDataServiceContext, promoting reuse and improving the maintainability of your code. The command also provides a natural place to handle OData errors thrown as InvalidOperationException objects.

    The following diagram illustrates the command classes and interfaces (click to open it in the original size):

    [Figure: the command classes and interfaces]

    The list below describes the responsibilities of each class and interface in the command pattern:

    • TableStorageCommandBase: Provides implementation of common command logic, such as common exception handling for table storage failures.
    • IGetCommentRowsByProjectIdCommand: The interface for the GetCommentRowsByProjectIdCommand class. Defines the signature for returning a collection of CommentRow table entities from table storage for a single project id.
    • IPersistCommentRowCommand: The interface for the PersistCommentRowCommand class. Defines the signature for persisting (adding or updating) a single CommentRow table entity into table storage.
    • GetCommentRowsByProjectIdCommand: Concrete implementation of the IGetCommentRowsByProjectIdCommand interface. Uses the ApplicationDataServiceContext to execute a LINQ to Data Services query, returning a collection of CommentRow objects for a specific project id.
    • PersistCommentRowCommand: Concrete implementation of the IPersistCommentRowCommand interface. Checks if the CommentRow being persisted has an identity and adds or updates the CommentRow accordingly against the ApplicationDataServiceContext.
    • TableStorageCommandException: Strongly typed exception class used to throw exceptions back to the calling repository.

    Each command represents a single unit of work executed against a shared service context. Commands do not use the domain objects; they are coupled to the table storage implementation details. Each command has an interface so that interactions can be tested in specifications.

    By convention, each command exposes an Execute method, with at least one argument for ApplicationDataServiceContext. If an exception occurs, the TableStorageCommandBase class contains common exception parsing code to translate the OData error into a strongly typed exception (TableStorageCommandException).
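    Putting those conventions together, the query command might look like this sketch (the CommentRows property on the context and the exception translation helper on the base class are assumptions):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class GetCommentRowsByProjectIdCommand : TableStorageCommandBase, IGetCommentRowsByProjectIdCommand
    {
        public List<CommentRow> Execute(ApplicationDataServiceContext context, string projectId)
        {
            try
            {
                // A partition scan: comments are partitioned by project id.
                return context.CommentRows
                    .Where(row => row.PartitionKey == projectId)
                    .ToList();
            }
            catch (InvalidOperationException ex)
            {
                // Translate the raw OData error into the strongly typed
                // exception the repository handles (hypothetical base class helper).
                throw ToCommandException(ex);
            }
        }
    }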

    Summary

    Use of the Repository-Mapper-Command pattern for Windows Azure table storage enables you to implement storage access that:

    • Is testable using BDD specs.
    • Provides diagnostics logging and exception handling.
    • Reuses LINQ operations, improving maintainability.
    • Provides an API that encapsulates all storage implementation details.
    • Works with entity group transactions.
    • Enables the implementation of retries and compensation where needed.
    • Centralizes object dependencies via IoC.


  • See the Difference – Developing a new way to give

    It is unusual for me to be able to publicly blog about project work that I have been doing, due to client confidentiality, but my last finished project, See the Difference, is different.

    See the Difference is a start-up charity site designed to provide a new way of giving, where people donate to projects run by charities that have a specific purpose. The site enables you to see the difference you have made. It launches later this month.

    From a technology perspective, it's interesting because it is built on Windows Azure web roles, Azure Table Storage and SQL Azure.

    On the 15th April, James Broome and I presented See the Difference to the UKAzureNet community, a user group focused on the Windows Azure platform here in the UK.

    [The video and slides from the presentation were embedded in the original post.]

    I will be posting more Azure related content here soon, so watch this space.

  • A few Windows Azure events coming up

    There are a few events coming up over the next couple of weeks (all in London) where I will be speaking about the Windows Azure platform with my colleague James Broome:

    Hope to see you there!

  • Common causes of why a Windows Azure web role fails to start

    Things have been very quiet on my blog of late. There are several reasons for this, but the biggest is that I have spent most of the last six months really getting to grips with the Windows Azure platform, working on a real world project that uses it. So this is the first of several blogs on the subject.

    For all of my Windows Azure blogs I am going to assume that you have at least read the core Windows Azure training resources; you should understand key concepts such as web and worker roles; table, blob and queue storage; and Azure configuration management. All of my blogs focus on using Windows Azure with .NET and C#, although many of the subjects I discuss apply to other frameworks and languages (such as Java).

    I thought I’d start with a subject that, if your first experiences with Windows Azure are like mine, will cause you to waste a lot of time: figuring out why your web role fails to start.

    What is a web role?

    Have a look at the documentation here, which describes what a web role is:

    http://msdn.microsoft.com/en-us/library/dd179341.aspx

    What are the symptoms of a web role that fails to initialize?

    When you attempt to start your application via the Azure portal, you will see it attempt to initialize. It will then stop, recycle and attempt to initialize again. This cycle continues until the service goes into a failed state.

    Why might a web role fail to initialize correctly?

    When you deploy your web role to Windows Azure, the fabric generates a VM image based on:

    · The base image of a web role, which contains a Windows Server 2008 installation with IIS and the .NET Framework 3.5 installed. This image is used for all web role instances in any application running in Windows Azure.

    · The settings you provide in the service definition and configuration settings of your Azure project.

    · The .NET assemblies and other artefacts you deploy as part of your service package.

    When you attempt to start your application, the initialization process will run some code called the role entry point. This code runs before anything else in your web site – even before events such as application startup.

    You customize the behaviour of the role entry point by inheriting from the RoleEntryPoint class, provided by the Microsoft.WindowsAzure.ServiceRuntime assembly. It is in here you place code that is specific to your Windows Azure cloud application. Typical code that is placed here includes setting up Azure diagnostics, and access to the Azure configuration system.
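
    As a minimal sketch of that customization (assuming the diagnostics API from the SDK of this era), a typical web role entry point looks something like this:

    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Start the diagnostic monitor before anything else in the site runs.
            // "DiagnosticsConnectionString" is the setting name used by the SDK
            // project templates of this era.
            DiagnosticMonitor.Start("DiagnosticsConnectionString");

            return base.OnStart();
        }
    }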

    When this initialization process fails, the role recycles and attempts to restart itself. Because the role entry point is where you place your diagnostics setup, you get no logged failures from initialization, making it very hard to understand what is wrong.

    What are the common causes of a web role failing to initialize?

    The reason that a failing web role can take up so much of your time is that most failures are only apparent when you deploy your application to Windows Azure staging (or production). This is because the dev fabric is only an approximation of the real Windows Azure environment (see http://msdn.microsoft.com/en-us/library/ee923628.aspx for a list of how the dev fabric differs from Windows Azure).

    The following checklist describes the steps you need to go through to ensure that your Windows Azure web role is going (or is at least likely!) to initialize correctly:

    1. Make sure any assemblies that are not part of the Windows Azure base image are set to copy local, by looking at the properties of the referenced assembly in Visual Studio. This includes assemblies such as System.Web.Mvc (ASP.NET MVC), which ship with products Microsoft has released since the Azure web role VM image was created. The Windows Azure web role VM includes a base install of .NET 3.5 SP1.

    2. Ensure that your Windows Azure storage endpoint connection strings are correct. By default, a Windows Azure storage endpoint (for table, blob and queue storage) uses HTTPS, not HTTP. A connection string that uses HTTP will work on the dev fabric, but the deployment will fail because the role entry point requires HTTPS (see the sketch after this list). It's amazing how much time I have wasted staring at a connection string in the service configuration without realizing it was set to use HTTP!

    3. Ensure you are deploying the correct service configuration file. On our projects, we have stored our production service configuration in blob storage, and we reference that file when we update our application. This works well, but it can be cumbersome when configuration changes are required; you can easily end up using the wrong version of the configuration, which will likely stop your application initializing.
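
    To illustrate point 2, here is a sketch of the kind of connection string that works once deployed. The account name and key are placeholders, and CloudStorageAccount comes from the SDK's StorageClient assembly:

    using Microsoft.WindowsAzure;

    // Note DefaultEndpointsProtocol=https; an http value may work in the dev
    // fabric but can stop the role initializing once deployed.
    CloudStorageAccount account = CloudStorageAccount.Parse(
        "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<your-key>");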

  • ASP.NET Resource Provider for Windows Azure Table Storage

    Background

    Since ASP.NET 2.0, the framework has provided a localization model based on localizing text into RESX files. RESX files are compiled and deployed as part of the website, and referenced through declarative mark-up on your pages, cleanly separating your localized resources from your code.

    Using RESX files to store the resources is highly performant and scalable, because resources are deployed locally with the website, meaning the resources will scale with the number of web servers you run. There are however two main downsides to this approach:

    • Changes to resource text require the resource assemblies to be redeployed to the web servers in your farm.
    • Resource changes need to be done at design time and cannot be achieved at run time.

    These constraints have led many people to store resources in SQL Server instead, enabling you to centrally manage and maintain resources at run time without any deployment headaches. This model works really well, and does not require any code changes within your website, because the .NET Framework uses a provider-based model for swapping out RESX storage for a custom provider (using the globalization configuration section).

    For all the benefits of the SQL Server resource provider approach, it does present one major drawback: SQL Server is a single instance which must be scaled up, rather than out, to cater for demand.

    Windows Azure Table storage provides a new storage option for resources. Table storage is non relational (as are resources), providing highly performant storage that scales out on demand. Therefore for resources, table storage provides all the benefits of a SQL Server provider coupled with the scale out capability of RESX files. With resources being stored in the cloud, the ASP.NET website can reside on premise or in a web role in Windows Azure.

    The Solution

    The Azure Table Storage Resource Provider source code can be found at:

    http://localization.codeplex.com

    The provider implements the standard .NET interfaces for building a resource provider. The entire set of classes used by the solution is shown in the diagram below:

    [Class diagram: TableStorageLocalization]

    The provider has been built using the Windows Azure SDK July CTP. In order for you to develop with it, you must install the SDK and have the StorageClient assembly. This assembly encapsulates data access to table storage via ADO.NET Data Services.

    Configuration

    There are two items you need to configure in order to use the provider:

    • You need to ensure your ASP.NET website config file includes the <globalization /> configuration element, as demonstrated in the sample web site in the solution.
    • You need to change your Azure configuration file to use your own Azure Table storage account (you can use the development fabric during development).

    Table Storage Structure

    The provider expects two tables to exist in your account:

    • GlobalResources, to store global resources
    • LocalResources, to store local resources

    These two tables have the same structure:

    The partition key is the culture name concatenated with the resource set name, encoded to be URI safe. For example, the culture en-GB and the (local) resource set /Pages/SampleLocalResources.aspx produce a partition key of "en-gb" followed by the URI-safe encoding of /pages/samplelocalresources.aspx. Note that the key is stored entirely in lower case: access to the partition is case sensitive, so the provider uses a lower case key at all times.

    The table row key is the resource key of the resource. For example, headerliteral.text is the resource key for the Text property of a literal control with the ID HeaderLiteral. Again, note that the provider stores the key in lower case.

    The other attributes each table requires are:

    • CultureName, which contains the name of the culture.
    • ResourceSetName, which contains the name of the resource set.
    • Data, which contains the actual localized data.

    The provider needs an invariant set of resources for each localized resource set you implement; this is used as the fallback culture when you do not have a localized resource set for the current culture.
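
    To make the structure concrete, here is a hypothetical sketch of the entity shape implied by the tables above, assuming the StorageClient sample library from the July CTP SDK mentioned earlier; the class and namespace names are assumptions for illustration:

    using Microsoft.Samples.ServiceHosting.StorageClient;

    // PartitionKey and RowKey come from the TableStorageEntity base class: the
    // partition key holds the lower-cased culture name plus the encoded resource
    // set name, and the row key holds the lower-cased resource key.
    public class ResourceRow : TableStorageEntity
    {
        public string CultureName { get; set; }      // e.g. "en-GB"
        public string ResourceSetName { get; set; }  // e.g. "/Pages/SampleLocalResources.aspx"
        public string Data { get; set; }             // the localized value itself
    }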

  • ADO.NET Data Services : Diagnosing problems with your data service

    ADO.NET Data Services is an elegant technology that feels very easy to work with. The majority of current scenarios for using ADO.NET Data Services involve exposing an Entity Data Model (EDM) implemented using the ADO.NET Entity Framework. Most of the time this just works, but it can throw you when you do face problems: how exactly do you diagnose issues in data services, either at design time or in production?

    In this blog post I will cover a common checklist to follow when things haven't gone to plan (either the service has never successfully run, or it has stopped working in production), and the techniques available to you to diagnose problems with your service.

    Common Issues

    Below is a checklist of issues that are common causes of data services not functioning properly.

    Issue 1 : No database connectivity

    It sounds obvious, but many data services appear not to be working due to an incorrect database connection string.

    Start by checking the connection string being used by the Entity Framework; it should be in the <connectionStrings /> section of your service host's web.config file. Use Server Explorer to test the connection credentials, and ensure that the CSDL / MSL / SSDL resources are set to be embedded (a property of the model in the EDMX designer).
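
    A quick way to test connectivity from code (a sketch, assuming your generated context is called MyEntities) is simply to open the underlying connection:

    using (MyEntities context = new MyEntities())
    {
        // Throws immediately if the connection string or credentials are wrong.
        context.Connection.Open();
    }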

    Issue 2 : Your EDM is using features not supported by ADO.NET Data Services

    Not everything that you can do in the Entity Data Model is supported by ADO.NET Data Services.

    The first and most common issue here is the use of EDM types that are not supported by data services. A complete list of EDM types supported by ADO.NET Data Services can be found here.

    The second issue you may hit when surfacing your EDM is using a form of inheritance not supported by data services. The Entity Framework supports table-based inheritance, where the conceptual type must be mapped to a table that represents the subclass. This will only work in the data service if the subclass contains additional properties that are contained in the underlying table.

    Issue 3 : The database user account does not have access to the required database objects

    Check that all your database objects (tables, views, etc.) can be accessed using CRUD-based SQL with the account you are using in your database connection string.

    Issue 4 : Check you have granted appropriate access to the required entities

    By default, no entities from an EDM are surfaced by ADO.NET Data Services unless you specifically grant access to them. Check your data service's InitializeService method to ensure that you have granted the appropriate access. Conversely, ensure that there is no code granting access to entities or service operations that do not exist in your service.
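
    For example, a minimal InitializeService granting access might look like this (the entity set names are illustrative):

    public static void InitializeService(IDataServiceConfiguration config)
    {
        // Nothing is exposed until you explicitly opt entity sets in.
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);

        // Granting access to an entity set or service operation that does not
        // exist will itself cause the service to fail.
        config.SetEntitySetAccessRule("Orders", EntitySetRights.All);
    }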

    Issue 5 : Invalid custom service operation or query interceptor

    Custom service operations require the WebGetAttribute to be applied to the method.

    If you have stubbed out your service operation, you must ensure that the operation returns a valid IQueryable object (don’t just return null!).
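
    As a sketch (the entity and property names are illustrative), a valid service operation has the WebGetAttribute applied and returns a real IQueryable:

    [WebGet]
    public IQueryable<Order> GetOrdersByStatus(string status)
    {
        // CurrentDataSource is the underlying context exposed by DataService<T>;
        // never return null from a stubbed operation.
        return this.CurrentDataSource.Orders.Where(o => o.Status == status);
    }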

    For more information on service operations, look at the MSDN documentation here.

    How exceptions are thrown and handled by ADO.NET Data Services

    ADO.NET Data Services is built on top of a WCF WebHttpBinding channel stack and uses the complete WCF infrastructure to expose a service endpoint. Therefore, any unhandled exception from the data service will cause WCF to use its standard fault mechanism to return a response to the client. These faults need to be diagnosed using the standard approaches to WCF faults, as described later in this post.

    Exceptions handled by a data service implementation cause an HTTP 404 error response; this is ADO.NET Data Services being a good RESTful citizen. If you see this behaviour, you know that an exception has occurred within the data services runtime.

    Diagnosing exceptions handled by the data service at runtime

    As mentioned above, ADO.NET Data Services returns a standard HTTP 404 message when the service encounters an exception of any kind. To understand what is going wrong in your service, you need to switch on verbose errors in the service's initialization. However, this should not be used by a service running in production, because it stops the service returning the standard HTTP 404 error code. The following service initialization code switches on verbose errors:

        /// <summary>
        /// Initializes the service.
        /// </summary>
        /// <param name="config">The config.</param>
        /// <remarks>This method is called only once to initialize service-wide policies.</remarks>
        public static void InitializeService(IDataServiceConfiguration config)
        {
            // other config settings go here...
            config.UseVerboseErrors = true;
        }

    Unfortunately out of the box, ADO.NET Data Services does not provide a standard configuration section to manage these settings at runtime. Therefore, I have attached (at the very bottom of this post) a custom configuration section that can be used within the InitializeService method to drive these settings through configuration.

    The configuration section will require an entry in your host’s web.config file like the sample below:

    <?xml version="1.0"?>
    <configuration>
        <configSections>
            <sectionGroup name="emc.common">
                <section name="dataService" type="EMC.Data.Services.Configuration.DataServiceSection, EMC.Data.Services.Configuration"/>
            </sectionGroup>
        </configSections>
        <emc.common>
            <dataService>
                <serviceSettings useVerboseErrors="false" maxExpandDepth="3" maxExpandCount="10" maxResultsPerCollection="1000" />
                <entitySetAccessRules>
                    <add name="*" entitySetRights="All" />
                </entitySetAccessRules>
                <serviceOperationAccessRules>
                    <add name="*" serviceOperationRights="All" />
                </serviceOperationAccessRules>
            </dataService>
        </emc.common>
    </configuration>

    The code below illustrates how to wire up the configuration section to the InitializeService method:

    using System.Configuration;
    using System.Data.Services;
    using EMC.Data.Services.Configuration;

    public class MyDataService : DataService<MyEntities>
    {
        /// <summary>
        /// Initializes the service.
        /// </summary>
        /// <param name="config">The config.</param>
        /// <remarks>This method is called only once to initialize service-wide policies.</remarks>
        public static void InitializeService(IDataServiceConfiguration config)
        {
            DataServiceSection dataServiceSection = ConfigurationManager.GetSection("emc.common/dataService") as DataServiceSection;

            if (dataServiceSection != null)
            {
                if (dataServiceSection.ServiceSettings != null)
                {
                    ServiceSettingsElement serviceSettings = dataServiceSection.ServiceSettings;

                    config.MaxExpandDepth = serviceSettings.MaxExpandDepth;
                    config.UseVerboseErrors = serviceSettings.UseVerboseErrors;
                    config.MaxExpandCount = serviceSettings.MaxExpandCount;
                    config.MaxResultsPerCollection = serviceSettings.MaxResultsPerCollection;
                }

                if (dataServiceSection.EntitySetAccessRules != null)
                {
                    foreach (EntitySetAccessRulesElement element in dataServiceSection.EntitySetAccessRules)
                    {
                        config.SetEntitySetAccessRule(element.Name, element.EntitySetRights);
                    }
                }

                if (dataServiceSection.ServiceOperationAccessRules != null)
                {
                    foreach (ServiceOperationAccessRulesElement element in dataServiceSection.ServiceOperationAccessRules)
                    {
                        config.SetServiceOperationAccessRule(element.Name, element.ServiceOperationRights);
                    }
                }
            }
        }
    }

    Diagnosing WCF faults

    If an unhandled exception is encountered by the WCF channel stack, it uses the standard fault mechanism to respond to the client. By default, the details of the exception are hidden from the client (again, this is how your service should be configured in normal running). This is how a WCF fault in your data service looks in the browser by default:

    [Screenshot: default WCF fault response shown in the browser]

    In order to see the details of the exception in the response from the service, you need to attach a service behaviour to the data service. This can be done in code by applying a ServiceBehavior attribute to the data service implementation, but that is hard coded and not what you would want for the normal running of your service.
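
    In code, that hard-coded approach would look something like the sketch below (for debugging only; the class names match the configuration sample that follows):

    using System.ServiceModel;

    // Equivalent to the configuration shown next, but baked into the code.
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class Schedule : DataService<MyEntities>
    {
        // ...
    }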

    Unlike ADO.NET Data Services, WCF does provide a standard way of configuring this behaviour at runtime through the web config file. The configuration code below illustrates how you can drive this behaviour through configuration:

    <system.serviceModel>
        <services>
          <service name="Schedule" behaviorConfiguration="DataServiceBehavior">
            <endpoint binding="webHttpBinding" contract="System.Data.Services.IRequestHandler"/>
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior name="DataServiceBehavior">
              <serviceDebug includeExceptionDetailInFaults="true"/>
            </behavior>
          </serviceBehaviors>
        </behaviors>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
    </system.serviceModel>

    ADO.NET Data Services implements a service contract called IRequestHandler. The name of the service needs to be the fully qualified name of your data service implementation.

    Using standard WCF messaging diagnostics

    ADO.NET Data Services also benefits from the excellent support that WCF has for standard trace diagnostics. The following configuration entry illustrates how to log WCF's tracing to a flat file. Again, this would only be used when you have problems with your service at runtime:

    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel.MessageLogging" switchValue="Warning, ActivityTracing">
          <listeners>
            <add name="ServiceModelTraceListener"/>
          </listeners>
        </source>
        <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing">
          <listeners>
            <add name="ServiceModelTraceListener"/>
          </listeners>
        </source>
        <source name="System.Runtime.Serialization" switchValue="Verbose, ActivityTracing">
          <listeners>
            <add name="ServiceModelTraceListener"/>
          </listeners>
        </source>
      </sources>
      <sharedListeners>
        <add initializeData="App_tracelog.svclog"
             type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
             name="ServiceModelTraceListener"
             traceOutputOptions="Timestamp"/>
      </sharedListeners>
    </system.diagnostics>
