2012/08/07

Playing with Ravens

Right now, I have a few ideas for pet projects I plan to work on over the next few months. More on that in another post, but most of them involve a data store other than an RDBMS.

Being on .NET, I have chosen to use RavenDB. Now, it’s not that easy for an old SQL boy like me, so I started playing with it and making some notes here.

Getting the software

Ayende said on the mailing list that the current unstable version will be ready in a few months. Realistically, I won’t finish my project before this upcoming version is out of support ;), so I can confidently go for unstable. Even more so after reading the standard reply to bug reports on the list: “Test will pass in the next build.”

Installing

Simple: Unzip the build to a directory and call start.cmd. This even works by double-clicking it, since a new window pops up by default. There will be a UAC prompt, because the server calls netsh to grant rights for http.sys, but it only requests this once.
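
For the curious: that UAC prompt is presumably a URL ACL registration, i.e. something along the lines of the following netsh call (the port and user here are assumptions; check start.cmd or the server output for the exact values):

    netsh http add urlacl url=http://+:8080/ user=Everyone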

First steps

In the beginning, there was nothing. Not even a database, so let’s create a sample database. Did I mention I love it when batteries are included?

[Screenshot: creating the sample database in the Studio]

After creating the sample data, let’s take a look at the contents:

[Screenshot: browsing the sample documents]

Ok, so we can view and inspect and update entities. Now, can we also do queries?

Querying data

Now where are the queries? It turns out (with a little bit of clicking around) that they are filed under Indexes:

[Screenshot: the Indexes page]

So let’s start with a dynamic query, which is the equivalent of ad-hoc SQL queries on an RDBMS. Raven uses the Lucene query syntax, which is documented in the Lucene tutorial. Now let’s see which albums are to my taste here:

[Screenshot: a dynamic query and its results]
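
If you want to follow along without the screenshot: a dynamic query is just a Lucene expression typed into the query box, for example Genre.Name:Rock to list all rock albums (the field name is an assumption based on the sample Album documents).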

Ok, that’s cool. We have everything we need on the page: query, results, statistics. However, the displayed result data is not what I wanted. So I hit “Show fields”, hoping to get some dialog for choosing fields. But with it activated, Raven only displays the IDs of the documents.

Well, you can change it by right-clicking the column titles:

[Screenshots: choosing the displayed columns via the column header context menu]

Ok, so we can query the data ad hoc without writing a map/reduce function.

Indexes

Back on the index page, the query I just executed is displayed as a temporary index:

[Screenshot: the temporary index on the Indexes page]

Ok, let’s edit this, so we can get the same result as before:

[Screenshot: the index editor]

Looks like C# Linq, so how about this:

[Screenshot: the edited map function]
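
For readers without the screenshot, the map function was presumably something along these lines (a sketch only; the Album property names are assumptions based on the sample data):

    from album in docs.Albums
    select new
    {
        Genre_Name = album.Genre.Name,   // the field the query filters on
        album.Title,
        Artist_Name = album.Artist.Name
    }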

That looks promising and the index can actually be queried, but the studio shows the same strange result as above:

[Screenshot: the query results, again showing only document IDs]

Eventually I will find that out… some day…

More Indexing

Now let’s go for a more demanding example. I want to know how many albums each Artist has published here. Luckily, there is already an artist index we can start with:

[Screenshot: the predefined artist index]

It’s actually quite easy, though I needed some minutes to get it right:

[Screenshot: the map and reduce functions for counting albums per artist]
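
For readers without the screenshot, the map/reduce pair has roughly this shape (a sketch, not the exact code from the Studio; the property names are assumptions based on the sample data):

    // Map: emit one counting record per album
    from album in docs.Albums
    select new { Artist = album.Artist.Name, Albums = 1 }

    // Reduce: group by artist and sum up the counts
    from result in results
    group result by result.Artist into g
    select new { Artist = g.Key, Albums = g.Sum(x => x.Albums) }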

You should keep the following in mind:

  • Don’t put the Albums count into the group
  • Use Extension Methods for aggregations
  • Everything is case sensitive!

Now we’ve nearly got it:

[Screenshot: querying the new index]

One more thing before I send the ravens back to their nest: I only want artists that have published between 10 and 20 albums in the database. The default Lucene syntax would be Albums: [10 TO 20], but…

[Screenshot: the naive range query returning unexpected results]

Lucene searches lexicographically, so we need a special, nearly undocumented syntax that I found in an example in the RavenDB docs:

[Screenshot: the working numeric range query]

Conclusion

Now I feel somewhat familiar with the Studio, so next time let’s see how we can query these values from within code.

2012/01/02

How Many Bounded Contexts Does Your Model Belong To?

Yesterday I read this blog post by Chad Myers. I agree with him – more or less – on the current state of web frameworks, but I have something to add:

More bras on the head

Chad emphasizes the …Model. Every web framework since WebForms emphasizes the …Model. There is MVC, there is MVVM and MVP. Everywhere there is a …Model. But what is the …Model? Let’s take a closer look.

What is a model?

A model is an abstraction. If I have something complex, something too complex to handle, I create a model that allows me to handle it. Looking at typical web applications, I ask myself: What is so complex here that I need an abstraction layer or even two?

To pull the data from the data store, we use a model. We put bras on our heads and chant the rhyme of “Persistence Ignorance”. Then we build another model, now called a view model, that consists of a projection of the “real model” along with some specialized collections and one or two extra properties, because passing more than one object to the view is the deadly sin of web programming.

All this to put a simple SELECT TITLE,AUTHOR FROM POSTS on a web page? You must be kidding.

Complex Applications

Ok, not all applications are tutorial-like. Let’s have a look at an application that deserves a model: imagine something more complex, like Eric Evans’ shipping sample in Domain-Driven Design. Here we have a model, and the model is used to calculate shipping costs, delivery times, and so on.

But will you use it in the company’s web application? Most likely not. To be more precise: if you do, you really need to reread Mr. Evans’ book.

So what model will you use in such an application? I guess there will be a CQRS-like denormalized data store for your application, and changes are issued either through service calls or commands. In this case you will use your models to put an even simpler SELECT * FROM SpecializedViewTable on a web page.

Something in Between

There are a few web applications that have a model that is the only bounded context and still complex enough to justify a model. If you have such a domain, you’ll be fine with the model cascade on your web application, but I have found them to be quite rare.

2011/09/22

The Big Rewrite (1)

It has been a long time since my last blog post. It has also been a long time since I last worked on the Castle project. I don’t want to talk about the reasons here, but I do have some ideas regarding ActiveRecord that I want to share.
My ideas have nothing to do with the current version of Castle ActiveRecord. Henry has done a great job pushing an RC out of the door, but AR’s codebase has grown over the years into a state that makes it hard, if not impossible, to keep up with NHibernate’s features.
In the meantime, other changes have taken place in the world of ORMs that make most of AR’s current features obsolete:
  • For NHibernate, there are two independent mapping libraries that allow working without XML. I have used confORM recently and it is a pleasure to use. Mapping with attributes is rather clumsy compared to confORM, FluentNHibernate, or NHibernate’s own mapping by code.
  • Since NHibernate 2.0 (which was a long time ago), transactions are mandatory. AR uses SessionScope, TransactionScope, and a few other scope types which aren’t widely used, making AR even more complicated than naked NHibernate.
  • NHibernate has new mapping features. Honestly, I don’t know whether someone has added them to the AR codebase. The last time I tried, the visitor model used for examining the attributes drove me nuts.
  • Entity Framework has matured and while EF 4.1 is still a bit behind NHibernate, it is simpler to use than ActiveRecord.

Goals of ActiveRecord vNext

So what are the features I envision for the next AR version? You might guess them from what I don’t like in the current version:
  • ActiveRecord API (entity.Save() etc.)
  • POCO Support
  • Testability
  • A single Unit-of-Work concept.
  • Abstracting from the underlying ORM as much as reasonable.
For now, I will only highlight each of the design goals in short. I will share more on them in separate posts.

ActiveRecord API

AR vNext will concentrate on one major goal: providing an easy-to-use API for the active record pattern. As a user, I want to call Save() or Find() from anywhere in my code without passing around session objects, DAOs, or repositories.
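
To make this concrete, here is a purely hypothetical sketch of how I imagine it being used; none of these names exist yet, they are placeholders:

    // A model class: no base class, no attributes; the empty marker interface
    // mentioned below is the only requirement (its name is made up here).
    public class Album : IActiveRecord
    {
        public virtual Guid Id { get; set; }
        public virtual string Title { get; set; }
    }

    // Somewhere in application code, with no session, DAO or repository in sight:
    var album = new Album { Title = "Nevermore" };
    album.Save();                              // e.g. an extension method on the marker interface

    var reloaded = AR.Find<Album>(album.Id);   // static entry point, name is a placeholder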

POCO Support

The model should consist of plain object classes. The current AR requires inheritance for full support of the active record API. AR vNext will not even contain a base class for models, and it will not require an attribute either. However, there is a drawback: even with the full power of C# 4.0, implementing an empty marker interface is needed to streamline the experience.

Testability

Testing without database access is a must for TDD. Even the use of in-memory databases requires initialization of the ORM, which is far more problematic than getting the data from an in-process call to SQLite.
AR vNext will contain the necessary abstractions and native support for mocking and stubbing data access. Have your code call entity.Save() and fail the test if the wrong entity was saved… all without even loading the ORM’s assemblies.

Single Unit-of-Work Concept

A unit of work is mandatory unless you only want to code demo apps. However, we don’t need separate concepts for web apps, service executables, GUIs, and CLI programs. The single UoW must include transactions and allow both implicit and explicit usage.
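
Again purely hypothetical, but the difference could look like this (all names are placeholders, not an actual API):

    // Explicit usage, e.g. in a console tool; the scope owns the transaction:
    using (var uow = UnitOfWork.Start())
    {
        new Album { Title = "Demo" }.Save();
        uow.Commit();
    }

    // Implicit usage, e.g. in a web app where an integration module starts and
    // commits the unit of work per request, so plain calls just work:
    new Album { Title = "Another Demo" }.Save();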

Reasonable Abstractions

You might have noticed that I wrote ORM in the paragraphs above and not NHibernate. While NH will be used and integrated as the default ORM in vNext, I strive to abstract it out of the core API, so that it is possible to plug in other persistence frameworks in the future, which might be EF or RavenDB.
This abstraction will not come at any price. Don’t expect to change a configuration value to move from SQL to NoSQL or something similar. The goal is to factor out ORM-specific features into separate classes, so that you can remove the assembly references and have your compiler tell you what needs to be adapted in order to use another ORM.

When can I use it?

Right now.
NuGet: Monastry.ActiveRecord-NHibernate
GIT: https://github.com/mzywitza/Monastry.ActiveRecord

2010/01/21

Working with XML in PowerShell (2)

In a previous post I wrote about examining XML with PowerShell. However, it is often not only necessary to peek into XML, but also to change it.

Updating XML with NAnt

In NAnt, it is possible to use xmlpoke to update XML in arbitrary files. The following NAnt snippet is taken from the build script of ActiveRecord’s test project and allows adjusting the database settings on the build machine.

  <target name="configure-tests">
    <property name="app.config" value="${build.dir}/${project::get-name()}.dll.config" />
    <xmlpoke
      file="${app.config}"
      xpath="/configuration/activerecord/config/add[@key='connection.connection_string']/@value"
      value="${ar.connection.connection_string.1}" 
    />
  </target>

In the snippet above I have omitted some 10 more xmlpoke tasks that all change values in the same application configuration file.

How can this be done in PowerShell? The preceding post in this series showed how to assign the output of a command to a variable, so we only have to use a command that reads the contents of a file and assign the output to the variable:

    $app_config = $project_path + "\build\net-3.5\debug\Castle.ActiveRecord.Tests.dll.config"
    [xml] $cfg = Get-Content $app_config

Now we have the XML to update available as an instance of XmlDocument, and the full power of .NET at hand to manipulate it. If we only had to traverse a path to reach the node we want to change, we could simply use the dotted-path notation. But if we have to differentiate siblings by their attribute values, the dotted path becomes unwieldy, because we would need a for-each loop. We can use XPath instead:

    $cfg.SelectSingleNode("/configuration/activerecord/config/add[@key='connection.connection_string']").value = $myDatabaseConnString
    $cfg.SelectSingleNode("/configuration/activerecord/config/add[@key='dialect']").value = $myDatabaseDialect
    # ...

Finally, we have to save the updated XML. There is one caveat here: XmlDocument doesn’t keep whitespace but normalizes it where that is deemed safe. But since we are changing generated files rather than manually edited ones, this is only a minor nuisance.

    $cfg.Save($app_config)   

The full script is shown below. As before it is quite short compared to the NAnt XML script.

properties {
    [string] $project_path = "C:\dev\castle\SVN\ActiveRecord"
    $myDatabaseConnString = "PSDBCFG"
    $myDatabaseDialect = "PSDBDIALECT"
}

task default -depends configure_tests

task configure_tests {
    $app_config = $project_path + "\build\net-3.5\debug\Castle.ActiveRecord.Tests.dll.config"
    [xml] $cfg = Get-Content $app_config
    $cfg.SelectSingleNode("/configuration/activerecord/config/add[@key='connection.connection_string']").value = $myDatabaseConnString
    $cfg.SelectSingleNode("/configuration/activerecord/config/add[@key='dialect']").value = $myDatabaseDialect
    # ...
    $cfg.Save($app_config)   
}

2010/01/20

Working with XML in PowerShell (1)

Over the last few days, I started reading about PowerShell and PSake. While looking at the NAnt scripts that I currently use, I realized that, among other things, the ability to examine and edit XML files is required.

Examining XML in NAnt

In NAnt, inspecting XML is done with the xmlpeek task. It takes a file name, an XPath expression, and a property name to fill with the result.

One of the uses of xmlpeek is to find out the current revision of a code base, for example to set the private part of the version number with it. In the Castle Project’s build script this is done this way:

  <target name="common.find-svninfo">
    <!-- For adding SVN revision to builds -->
    <property name="svn.revision" value="0" overwrite="false" />
    <!-- try to update the revision -->
    <exec
      program="svn"
      commandline='info "${project::get-base-directory()}" --xml'
      output="_revision.xml"
      failonerror="false"/>
    <xmlpeek
      file="_revision.xml"
      xpath="/info/entry/@revision"
      property="svn.revision"
      failonerror="false"/>
    <delete file="_revision.xml" failonerror="false" />
    <echo message="INFO: Using Subversion revision number: ${svn.revision}"/>
  </target>

So while this is quite a clever solution for finding out the project’s current revision, it has some drawbacks: it is nearly 20 lines of code and it uses a temporary file. But it does the job, reading the revision from the project folder.

Examining XML in PowerShell

PowerShell treats XML as a first-class type and also has some shortcuts available. If we take a look at the output of svn info, we see that it is thankfully simple:

<?xml version="1.0"?>
<info>
  <entry
     kind="dir"
     path="."
     revision="6536">
...
  </entry>
</info>

And because we can traverse XML nodes and attributes in PowerShell like object properties, it is possible to get the revision number even without an XPath expression.

$revision = $rev_xml.info.entry.revision

So now it is only necessary to fill the $rev_xml variable with the output of the svn info command. The NAnt script does this using a temporary file, but PowerShell allows assigning the standard output of a command-line program directly to a variable:

$rev_xml = svn.exe info --xml 

But PowerShell treats the output of such a program as an array of objects, in this case strings. We have to hint to PowerShell that what we want is an XmlDocument instead of an array of strings:

[xml] $rev_xml = svn.exe info --xml 

That’s all. We now only have to put this together into a PSake build script for testing and reference.

properties {
    [string] $project_path = "C:\dev\castle\SVN\ActiveRecord"
    $revision = 0
}

task default -depends get_revision
task get_revision {
    [xml] $rev_xml = svn.exe info $project_path --xml 
    $revision = $rev_xml.info.entry.revision
    "INFO: Using Subversion revision number: $revision"
}

This whole script is shorter than the snippet of NAnt script I’ve shown at the beginning. Taking only the relevant work, 5 lines of PS script replace 15 lines of NAnt XML.

The script of course ignores the fact that getting the revision number should not be a task, but rather a function that is called, for example, by a build task’s precondition. I will get back to this in another post.

2010/01/14

ActiveRecord 2.1 released

Yesterday, ActiveRecord 2.1 was released. The binaries can be downloaded from SourceForge.

Most of the new functionality in this release was contributed by Krzysztof Kozmic, a fellow Castle committer, who was a great help in getting this release out. More thanks go out to the HornGet team, whose efforts took a lot of work out of the release process. Finally, I want to thank all contributors for patches and spotting bugs.

So now what is new in that release?

Updated Dependencies

Castle.Core and Castle.DynamicProxy were updated to their newest releases (1.2 and 2.2 respectively).

NHibernate has been updated to the last stable version 2.1.4 and compiled against the new Castle libraries.

NHibernate.Linq and NHibernate.Search were compiled against NHibernate 2.1.4.

Default configuration feature

This new feature allows you to skip all of the <add/> tags in your configuration if you are only using the database’s default values. In this case your config will be reduced to:

<activerecord>
    <config db="MsSqlServer2005" csn="MyConnectionStringName"/>
</activerecord>

The default configurations are available for most databases.

Inferring PrimaryKeyType from property type

Inspired by FluentNH, ActiveRecord now infers the PrimaryKeyType from the key’s property type:

  • For Guid PrimaryKeyType.GuidComb is used
  • For string PrimaryKeyType.Assigned is used
  • All others default to PrimaryKeyType.Native as before.
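
A minimal sketch of what this means in practice (the class itself is made up; the attributes are the usual ActiveRecord ones):

    [ActiveRecord]
    public class Artist : ActiveRecordBase<Artist>
    {
        // With 2.1, the PrimaryKeyType no longer needs to be spelled out:
        // a Guid key now defaults to PrimaryKeyType.GuidComb.
        // Previously: [PrimaryKey(PrimaryKeyType.GuidComb)]
        [PrimaryKey]
        public virtual Guid Id { get; set; }

        [Property]
        public virtual string Name { get; set; }
    }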

Support for backfield and readonly properties

These property accessors are now available in ActiveRecord. Especially readonly is interesting: it allows storing values computed by the entities in the database and is therefore the complement of a computed table column.

2009/08/01

ActiveRecord 2.0

Finally it’s final. I just created the ActiveRecord 2.0 zipfile.

It can be downloaded at SourceForge. This release includes the NHLinq 1.0 release.

Have Fun!