Active Record and DDD

One of the most fatal mistakes you can make when starting with Domain-Driven Design is using Active Record types directly as domain entities. This does not only concern Castle ActiveRecord, but all frameworks that map classes and tables one-to-one, and to some extent even more flexible solutions like NHibernate.

How does it start?

This trap is usually entered by developing a data-driven application using Active Record as the ORM pattern. There is nothing wrong with the approach per se, and I actually use it a lot myself. Consider the following code, taken from the Castle ActiveRecord GettingStarted section:

[ActiveRecord]
public class Blog : ActiveRecordBase<Blog>
{
	private int id;
	private String name;
	private String author;
	private IList<Post> posts = new List<Post>();

	public Blog()
	{
	}

	public Blog(String name)
	{
		this.name = name;
	}

	[PrimaryKey]
	public int Id
	{
		get { return id; }
		set { id = value; }
	}

	[Property]
	public String Name
	{
		get { return name; }
		set { name = value; }
	}

	[Property]
	public String Author
	{
		get { return author; }
		set { author = value; }
	}

	[HasMany(typeof(Post),
		Table="Posts", ColumnKey="blogid",
		Inverse=true, Cascade=ManyRelationCascadeEnum.AllDeleteOrphan)]
	public IList<Post> Posts
	{
		get { return posts; }
		set { posts = value; }
	}
}
This is straightforward data-driven code, and there is nothing wrong with it. Note that no business logic is embedded in the class. In the simple GettingStarted example, the logic is buried in the GUI, but in a real application you would probably use Transaction Scripts to encapsulate the logic in objects.

Setting the trap

The programmer eventually reads Eric Evans' great book or hears about DDD on a mailing list. He might remember that Castle ActiveRecord does not require a base class and removes it, using ActiveRecordMediator for database access.

Now that there are only POCOs, our unwary programmer starts adding business logic to the ActiveRecord types. In doing so, he tries to encapsulate complexity within the "domain model".
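A sketch of what this tends to look like (the one-post-per-day rule and all names are invented for illustration): business logic is added directly to the former Active Record type, so one class now both stores data and enforces policy.

```csharp
using System;
using System.Collections.Generic;

public class Post
{
    public DateTime Created { get; set; }
}

public class Blog   // formerly ActiveRecordBase<Blog>, now a plain POCO
{
    public string Name { get; set; }
    public IList<Post> Posts { get; } = new List<Post>();

    // Business rule mixed into the persistent type: the class shape is
    // now dictated by the table AND by the domain logic at the same time.
    public void AddPost(Post post)
    {
        foreach (var existing in Posts)
            if (existing.Created.Date == post.Created.Date)
                throw new InvalidOperationException("Only one post per day.");
        Posts.Add(post);
    }
}
```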

But consider what has happened:

  • Most importantly, he violates the Single Responsibility Principle (SRP); the class is now responsible for multiple aspects:
    • Storing data
    • Executing business logic
  • In the first few iterations, business logic that was previously contained in one method is now scattered across multiple classes. DDD's supple design promises to mitigate that, but a design usually only becomes supple through a lot of refactoring.

The immediate result is a big step backwards in maintainability. Over the long term, DDD will yield better maintainability, but it takes a lot of work to reach that state. By then, the trap has already been set...

The trap fires

For a while, all is well. The programmer gets accustomed to the code and makes some changes. The code grows more complex and a bit unwieldy. Finally, the programmer needs a "breakthrough"; a big refactoring takes place to make the code more supple.

Now the SRP violation backfires: it is not possible to refactor the design without writing complex migration scripts for the database. Integration suddenly becomes an issue. Yet the redesign is urgently needed because of the business logic embedded in the design.

The typical outcome is that the redesign is put off until "there is more time", or, in short: "never". Meanwhile, the code base keeps growing and becomes more and more fragile.

How to recover?

The most important task in recovering from this dilemma is deciding which approach the application will use. You can choose a data-driven approach or a domain-driven approach, but not both.

Using DDD

If the complexity of the domain mandates DDD, it is necessary to do it right. This means using the ActiveRecord types as a DAO/DTO layer and building a model on top of it that contains the business logic. The model is then decoupled from the data structure. If the model is redesigned, the mapping code has to adapt, not the structure of the data storage. If the storage structure changes, the mapping code changes, not the model.
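A minimal sketch of this decoupling, with invented names (BlogRecord, BlogMapper are not part of Castle ActiveRecord): the record type mirrors the table, the domain entity carries behaviour and invariants, and a small mapper translates between the two.

```csharp
using System;

// Persistence-shaped type: what Active Record maps to the table.
public class BlogRecord
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Author { get; set; }
}

// Domain entity: enforces invariants, knows nothing about storage.
public class Blog
{
    public string Name { get; private set; }
    public string Author { get; private set; }

    public Blog(string name, string author)
    {
        if (string.IsNullOrEmpty(name))
            throw new ArgumentException("A blog needs a name.");
        Name = name;
        Author = author;
    }
}

// Mapping code: the only place that changes when either side is redesigned.
public static class BlogMapper
{
    public static Blog ToDomain(BlogRecord r) => new Blog(r.Name, r.Author);

    public static BlogRecord ToRecord(Blog b) =>
        new BlogRecord { Name = b.Name, Author = b.Author };
}
```

A redesign of `Blog` then only forces a change in `BlogMapper`, while the `BlogRecord` layout, and with it the database schema, stays untouched.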

This is also the reason why it is possible to use NHibernate directly on a domain model: the NHibernate mapping files are mapping code, written in an XML-based DSL (and soon with a fluent API).

This is another way out of the trap when using DDD: if you use Castle ActiveRecord, take the hbm files it creates when the debug switch is enabled and use NHibernate directly instead.

Using data-driven design

It is also possible to turn around completely. A domain of modest complexity that defines most of its business cases as sequential workflows and processes will benefit from a data-driven approach.

The processes defined in the business domain can be modeled using the Transaction Script pattern, and the Active Record model is used as exactly what it is: a pattern for accessing an underlying database.
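A minimal Transaction Script sketch, with all names invented: the in-memory dictionary stands in for the database, and the two marked lines are where Castle ActiveRecord calls such as `Blog.Find(blogId)` and `blog.Save()` would go.

```csharp
using System;
using System.Collections.Generic;

public class Post
{
    public string Title;
    public DateTime Created;
}

public class Blog
{
    public int Id;
    public List<Post> Posts = new List<Post>();
}

// One business case = one script: load the records, apply the
// steps of the workflow, persist the result.
public class PublishPostScript
{
    private readonly IDictionary<int, Blog> store;  // stands in for the database

    public PublishPostScript(IDictionary<int, Blog> store)
    {
        this.store = store;
    }

    public void Execute(int blogId, string title)
    {
        var blog = store[blogId];                                   // "Blog.Find(blogId)"
        blog.Posts.Add(new Post { Title = title, Created = DateTime.Now });
        store[blogId] = blog;                                       // "blog.Save()"
    }
}
```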

On the "Anemic Model"

Many of the solutions above use a model that many would dismiss as anemic. But whether a model is anemic depends on its responsibilities rather than on its lines of code.

An ActiveRecord model is therefore not anemic, because it is responsible for accessing the data store. A DAO/DTO is part of the persistence layer, and its responsibility is passing data around. It happens not to need any methods for this task, but that does not make it anemic.

After all, violating the SRP is always worse than having an anemic model.


The poor programmer who unwarily built a trap that then fired at himself was, of course, me.


Sumod said...

What a timely read!

I had just started down this trap when I found your post. I am currently switching from inherited Base classes to using the Mediator on my way to DDD :-) Thanks for the warning!

At this point my domain objects map closely to the persistent structure, so I could simply wrap the AR-mapped classes within the domain classes.

But when they diverge, what approach did you find worked best for you?

Unknown said...


I don't know if I would wrap the AR classes in the domain classes. If you insist on it, at least define DTO interfaces that the AR classes implement, and code your domain classes against those interfaces.
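That suggestion might look roughly like this (all names invented; the AR base class and mapping attributes are omitted): the AR class implements a narrow data interface, and the domain class is coded against that interface, never against the AR type itself.

```csharp
using System;

public interface IBlogData
{
    string Name { get; set; }
    string Author { get; set; }
}

// The Active Record class implements the DTO interface.
public class BlogRecord : IBlogData
{
    public string Name { get; set; }
    public string Author { get; set; }
}

// Domain class: wraps the interface, not the concrete AR type.
public class Blog
{
    private readonly IBlogData data;

    public Blog(IBlogData data)
    {
        this.data = data;
    }

    public void Rename(string newName)
    {
        if (string.IsNullOrEmpty(newName))
            throw new ArgumentException("A blog needs a name.");
        data.Name = newName;
    }
}
```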

A better approach is mapping. You could use AutoMapper (on CodePlex) as long as possible and map manually once the models diverge.

Archimedes Trajano said...

I'm actually looking at both the Active Record approach and the Domain Model + Table Module approach as specified in PoEAA.

Personally I am leaning towards the Domain Model + Table Module approach, but with a few modifications.

In the book, a Table Module represents one table. In my approach, however, it can span more than one table, as I am planning to have one "Table Module" per Domain Model.

One Domain Model would represent multiple tables (one per class). And they would technically represent one "document" that I should be able to easily move.

I would then have multiple Domain Models per document type: e.g. Person, Case, Financial.

When I look at the Active Record pattern, I find that there is too much work being done in one class. I'd rather have separation.

However, when I look at the Table Module approach, there is going to be a lot of repetition and knowledge of the underlying implementation. That may be okay for the moment.

Anonymous said...

Good post.
I've been reading Evans' book, and while I'm not doing Active Record but rather straight NHibernate, any input on the topic and how it usually plays out in applications is a great complement to the theory I am just beginning to grasp from the book.

Mike Chaliy said...

Increasing complexity of the domain logic should not mean increasing complexity of the persistence in the first place. If you are facing a situation where the persistence becomes complex, just refactor the entity into more entities. Active Record is not a solution to this problem; it just makes the problem appear later.

阿川 said...

Hey, this is an EXCELLENT article! Thank you very much! This is really impressive and insightful!

Unknown said...

Well put. I am just wondering, then: if you are building a Rails application, how would you leverage DDD principles, since by default all of its models use the ActiveRecord pattern?