Sunday, April 15, 2012

Seamless database content localization with NHibernate

Localization is a well-known problem we as developers usually face when building multilingual applications. It is well studied, and for most development frameworks there are ways in which you can localize your UI text or application resources. But when it comes to localizing content that is going to be stored in a database, that's a whole different story.

There is little or no support in most persistence platforms to enable this scenario out of the box, whether it is an ORM or the "default" persistence method for the technology stack of your choice. Also, when trying to solve this problem, we usually reinvent the wheel. By that, I mean there are literally hundreds of man hours lost in redoing the same thing, again and again, for each application.

The worst part is that most solutions we come up with, or at least the ones I have seen or experienced, are really intrusive given the way we "ideally" try to approach persistence nowadays. The ideal being that the way we write our models shouldn't be contaminated by the implementation detail of how they are going to be persisted. But sadly, that is not the case most of the time, and our models end up getting splattered by the constraints imposed by the persistence mechanism.

Especially with ORMs this is very noticeable, because we end up creating entities just to enable the localization process and making these entities part of our domain model almost by force. These "localization enablers" turn into a sort of dependency magnet, spreading through our models. What happens then is that if you want to take your domain objects somewhere else, you need to take the localization mapping objects with you, along with anything else they need.

If you are interested in knowing about other approaches to localization, here are a couple of links on the subject. You can start by reading an overview of all of them in this NHForge post. Here are the links to each method explained: Michal, Siim Viikman, Ayende, Alkampfer [1, 2, 3], Fabio.

Again, disclaimers are in order. The code provided here is just a proof of concept and is not production ready. Keep that in mind at all times. If you want to make it production ready, fork the github repository at the end of this post and send me a pull request. I will be more than glad to add your name to the contributors list and to this post.

Yet another solution
As I mentioned before, there are several solutions out there already. The reason I don't like them is that almost all of them, except for Ayende's, force you to create your domain model with a particular persistence trick in mind. Whether it is adding the property as a dictionary or having a special type for it, they are kind of intrusive, and I didn't like that.

So what this solution proposes is to use a not so commonly used feature of NHibernate called interceptors. Interceptors are called upon whenever NHibernate performs an operation, and they allow you to plug into the framework's pipeline so that you can transform, update, analyze, or enhance the data, or whatever else you want to do.

In this case, our interceptor is going to look up the localization message entries in the database for each of the properties in our entity, according to the current culture it is working with, and based on the results it will update the entity's values with the localized ones. So that we don't query the database every time, which would be a big performance hit, we are going to use NHibernate's second level cache.

That's enough talk! Let's get down to it!

The code
I have hosted the code at this github repository. You can download it, fork it, use it, modify it, sell it (not advised), or wear it like a hat. I am going to be using NHibernate + FluentNHibernate + Moq + MSTest + SQLite, but aside from NHibernate, the rest is not really that important.

So first, let's see what our localization persistence entities are going to look like.

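Since the full code lives in the repository, here is a minimal sketch of how they could look (the member types are my assumption; only string properties are localizable anyway):

[Serializable] // NHibernate requires composite id classes to be serializable
public class LocalizationEntryId
{
    public virtual string EntityType { get; set; }
    public virtual string EntityId { get; set; }
    public virtual string Property { get; set; }
    public virtual string Culture { get; set; }

    // Equality members are what lets NHibernate (and the cache) compare ids.
    public override bool Equals(object obj)
    {
        var other = obj as LocalizationEntryId;
        if (other == null) return false;
        return EntityType == other.EntityType
            && EntityId == other.EntityId
            && Property == other.Property
            && Culture == other.Culture;
    }

    public override int GetHashCode()
    {
        return string.Concat(EntityType, "|", EntityId, "|", Property, "|", Culture)
                     .GetHashCode();
    }
}

public class LocalizationEntry
{
    public virtual LocalizationEntryId Id { get; set; }
    public virtual string Message { get; set; }
}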

Here we have two classes: LocalizationEntry and LocalizationEntryId. One serves as a composite Id of the other for caching purposes, but in general, they are not complex classes. The Id class consists of the entity's type, the entity's id, the property to which the localization message belongs, and the culture in which this message should be displayed.

You can see I have overridden the Equals and GetHashCode methods for the composite Id. This is required by the framework in order to use the composite object as an id.

Here is the fluent mapping for these same entities.

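A sketch of that mapping (the table name is an assumption):

public class LocalizationEntryMap : ClassMap<LocalizationEntry>
{
    public LocalizationEntryMap()
    {
        Table("LocalizationEntries");

        // The LocalizationEntryId component is mapped as a composite id.
        CompositeId(x => x.Id)
            .KeyProperty(x => x.EntityType)
            .KeyProperty(x => x.EntityId)
            .KeyProperty(x => x.Property)
            .KeyProperty(x => x.Culture);

        Map(x => x.Message);

        // Make the entity eligible for the second level cache.
        Cache.ReadWrite();
    }
}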

I activated the cache on this entity so that we can take advantage of the second level cache feature that NHibernate provides us with. This way we won't query the database every time.

Last but not least, here is our proof-of-concept interceptor implementation.

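Roughly, it could look like this (a proof-of-concept sketch like the rest of the post: the session-per-lookup strategy and the names are my choices, and there is no error handling):

using System.Globalization;
using NHibernate;
using NHibernate.Type;

public class LocalizationInterceptor : EmptyInterceptor
{
    private readonly ISessionFactory sessionFactory;

    public CultureInfo Culture { get; set; }

    public LocalizationInterceptor(ISessionFactory sessionFactory, CultureInfo culture)
    {
        this.sessionFactory = sessionFactory;
        Culture = culture;
    }

    // NHibernate calls OnLoad whenever an entity's state is read from the db.
    public override bool OnLoad(object entity, object id, object[] state,
                                string[] propertyNames, IType[] types)
    {
        // Never try to localize the localization entries themselves.
        if (entity is LocalizationEntry) return false;

        var modified = false;
        using (var session = sessionFactory.OpenSession())
        {
            for (var i = 0; i < propertyNames.Length; i++)
            {
                // Only string properties can carry a localized message.
                if (!(types[i] is StringType)) continue;

                var entryId = new LocalizationEntryId
                {
                    EntityType = entity.GetType().Name,
                    EntityId = id.ToString(),
                    Property = propertyNames[i],
                    Culture = Culture.Name
                };

                // Get() goes through the second level cache before the db.
                var entry = session.Get<LocalizationEntry>(entryId);
                if (entry != null)
                {
                    state[i] = entry.Message;
                    modified = true;
                }
            }
        }

        // Returning true tells NHibernate we modified the state array.
        return modified;
    }
}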

How to use it
To get a better understanding of how to use the interceptor, we just have to take a look at the tests, since they pretty much show the use a consumer of this method would give to the API.

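The tests boil down to something like this (Product, productId, and the expected message are made-up placeholders; the real tests live in the repository):

[TestMethod]
public void Loads_localized_values_transparently()
{
    var interceptor = new LocalizationInterceptor(sessionFactory,
                                                  CultureInfo.GetCultureInfo("es-ES"));

    // Same API as always: just pass the interceptor when opening the session.
    using (var session = sessionFactory.OpenSession(interceptor))
    {
        var product = session.Get<Product>(productId);

        // The property comes back already translated for the es-ES culture.
        Assert.AreEqual("Silla de jardín", product.Description);
    }
}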

In fact, it is so transparent that you use it the same way you would use NHibernate's persistence. We just open a session and pass in the interceptor we are going to use. I normally do this at the request level, since I usually do session-per-request handling, but it will work any other way. You just need to make sure that the interceptor you are using has the right culture set, or you may end up getting the wrong results.

The only difference shows up when you are going to store the localized values. In the examples, I insert the values into the database as their own entries before I run the integration test. This may be just fine for you, but if you go to the ugly section, you can read about a way to make that process just as transparent as the load.

By the way, if you are interested in how the testing is implemented and you come from the Java side of things, you may learn more on the subject by taking a look at this post on integration tests for your database using Java.

The Good
We totally decouple the localization logic from our domain models so that we can use them as we please without carrying any baggage. This leaves the door open to lots of possibilities.

This mechanism can be replicated with Hibernate for Java and possibly other ORMs.


The Bad
I didn't implement the localization insertion part. I hope I will do it sometime, but like I said, if you want to contribute, go ahead.

As said in the disclaimer at the beginning, this is not production ready code. I didn't do any error handling or check the entity's property types, so you won't be able to localize any non-string properties.

There is no way to easily query the localized data by the localized fields. If you don't need sorting on these fields or something similar, the solution is OK. However, if you do, consider other options. I would look into indexing the localized content using Lucene or something like that, and working your search related cases from there.


The Ugly
Here are a couple of improvements that could be easily added.

Caching and Pre-Caching
As described earlier, I used a composite id so that I could take advantage of the second level cache. It would be more efficient to mix that with some pre-caching: for instance, executing a query that loads all the localization messages for the entity on the first go, and making sure the results get stored using the second level cache for queries.
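
Something along these lines, assuming the query cache is enabled in the configuration (Product and culture are placeholders):

// Warm up the cache: pull every message for this entity type and culture
// in one round trip, and let NHibernate cache the query results.
var entries = session.QueryOver<LocalizationEntry>()
    .Where(e => e.Id.EntityType == typeof(Product).Name)
    .And(e => e.Id.Culture == culture.Name)
    .Cacheable()
    .List();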

Transparent persistence of the localization values
Although I didn't need it for the purposes of this post, persistence of the localization values could be done the same way as the load, by implementing another method from the base interceptor class. Just setting the value of a property and saving the entity could then transparently save the culture-dependent message to the database.
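
For reference, one possible (untested, assumed) shape of that hook, overriding OnFlushDirty from the same base class:

public override bool OnFlushDirty(object entity, object id, object[] currentState,
                                  object[] previousState, string[] propertyNames,
                                  IType[] types)
{
    if (entity is LocalizationEntry) return false;

    using (var session = sessionFactory.OpenSession())
    {
        for (var i = 0; i < propertyNames.Length; i++)
        {
            if (!(types[i] is StringType)) continue;
            if (previousState != null && Equals(currentState[i], previousState[i]))
                continue;

            // Divert the changed value into a culture-specific entry.
            session.SaveOrUpdate(new LocalizationEntry
            {
                Id = new LocalizationEntryId
                {
                    EntityType = entity.GetType().Name,
                    EntityId = id.ToString(),
                    Property = propertyNames[i],
                    Culture = Culture.Name
                },
                Message = (string)currentState[i]
            });
        }
        session.Flush();
    }

    return false; // we didn't touch the state array; the normal update proceeds
}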

I leave fleshing that out as an exercise to whoever wants to dig a little deeper. ;)

Cherry-picking fields to be localized
In this example, I assumed I wanted to localize every property; I didn't check for entity property types. But if we wanted to avoid performance problems, we could cherry-pick which fields to localize by adding a special attribute that the interceptor would look for when deciding which properties to localize. I don't like this approach very much, because it forces the attribute onto the model, but there are other ways of doing the same thing without having to decorate our domain class directly. (Tip: Fluent NHibernate uses a similar approach.)

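For illustration, the attribute variant could look like this (LocalizableAttribute is a hypothetical name; the interceptor would then check each property for it via reflection):

using System;

[AttributeUsage(AttributeTargets.Property)]
public class LocalizableAttribute : Attribute { }

public class Product
{
    public virtual int Id { get; set; }

    // Only decorated properties would be considered by the interceptor.
    [Localizable]
    public virtual string Description { get; set; }

    // Left alone: no attribute, no localization lookup.
    public virtual string Sku { get; set; }
}

// Inside the interceptor's loop, the cherry-picking check could be:
// var prop = entity.GetType().GetProperty(propertyNames[i]);
// if (prop == null || !prop.IsDefined(typeof(LocalizableAttribute), true)) continue;
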

Selecting values using business logic
Another interesting problem is the fact that sometimes we want to localize based on specific application or culture related logic. For instance, with numerals we may want to show the pluralized version of a word: it is not the same to have 1 "message" in your inbox as to have 2 "messages". This concept is tricky and may need some thought to get right, but it doesn't really worry me, since the localization code is totally decoupled from our models, which gives us a lot of freedom to work.

Again, that's all for now! Let me know if this post was helpful, and see you soon with some more smelly code... with potatoes!

Thursday, April 12, 2012

Integration tests for your database code

I hear a lot of people talking about tests, and I have been to a couple of events where speakers have given presentations on the subject. Everyone talks about unit tests, TDD, BDD, and continuous integration. However, I don't know if you noticed, but database-related integration testing is often overlooked, omitted, or only briefly mentioned when talking about tests. It makes you wonder why, doesn't it?

Why is there such fear around the subject? Are database-related tests not needed?

Yes, they are. You need them because there are things you simply can't mock (stress and load tests, to mention some). More than that, you may not have other options, because you are dealing with legacy code and have no time ($) to do a proper refactor and unit testing. We need them because we should make sure our "whole" system works as expected.

But to be truthful, the main reason would be that database-related testing sucks. It is difficult to get right. It is slow compared to other kinds of tests, and if not implemented properly it can become a waste of time and a source of headaches.

Yet a lot of the logic we write in our software relies on certain preconditions and behaviors of the underlying persistence mechanism being correct. Most of the time we think there is no way of testing those assumptions other than actually using them: either by running the solution on the developer's local system and database, on an integration server, or something similar.

Before you start screaming and writing me off your list, I must say:


I don't couple my business logic with my persistence. What I mean is that parts of our logic rely on behaviors that we take for granted will happen the way we think. These are things like transaction management when using Spring's Transactional annotation, or cascaded persistence, etc., all of which could fail at run-time.

Sadly, unit tests can't help us in this regard. It would seem that the only way of actually testing that these behaviors are what we expect is to execute or deploy the application in a controlled environment. Or is it?

You already know what the answer is... don't you?

the answer is...

42...  :]

To explain it better, I am going to split the whole process into two scenarios or contexts and attack both in different ways.

Case one: Building an Application from Scratch
Like Uncle Bob likes to say, there is nothing like the green field: that vast meadow where you first start to build your "architecturally sound" software. There is nothing there, no mess left behind by others, no constraints. It opens up the door to a lot of different opportunities (including creating a big mess). This is our first scenario. But before we start digging in on the hows and whys, some disclaimers are in order.

I am going to assume you know what an ORM is and that you are using one, and if you are not, that you have a pretty good reason not to. Either way, I will explain what I usually do, or would do, when I find myself in each situation. That doesn't mean this is the "best" or recommended way; it just means it is my preference. If you have your own ideas on how to improve the process, or maybe a more efficient one: don't be shy and share!

Also, I am going to use Hibernate + JPA for the examples because a friend asked me to, but this easily extends to NHibernate or other ORMs like Entity Framework's code first approach. If you ask for it in the comments, I can extend this post or add new ones to include those too.

What do we want to achieve?
We want to test our persistence and query logic, usually located in the "DAO" layer of the application. Since we are good developers ;) we want these tests to be deterministic, self-verifiable, and order independent.

What do we need?
We need to setup a complete database environment, equal or similar to the one we are going to be using, and then populate it with test data, to be able to assert the behaviors in our test code. We will use Hibernate + JPA + HSQLDB + Spring 3.0.

The first thing is to configure Hibernate to recreate the database schema from the mappings we have, every time it initializes for the integration tests. I will do this by setting up a new Spring context and JPA persistence configuration, just for the tests.


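A test-only persistence unit could be declared like this (the unit name and exact property set are assumptions; the point is the create-drop mode and the in-memory connection):

<persistence-unit name="testPU" transaction-type="RESOURCE_LOCAL">
  <provider>org.hibernate.ejb.HibernatePersistence</provider>
  <properties>
    <!-- Drop and recreate the schema from the mappings on every startup. -->
    <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
    <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>

    <!-- In-memory HSQLDB: nothing to install, nothing left behind. -->
    <property name="hibernate.connection.driver_class" value="org.hsqldb.jdbcDriver"/>
    <property name="hibernate.connection.url" value="jdbc:hsqldb:mem:testdb"/>
    <property name="hibernate.connection.username" value="sa"/>
    <property name="hibernate.connection.password" value=""/>
  </properties>
</persistence-unit>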

Notice that I am also using an in-memory database provider, HSQLDB, to make things a little less complex. However, this could be any other provider; you would just have to provide the connection details needed and make sure you have the right permissions on the schema.

Since we are using Hibernate in Java, we are going to take advantage of a feature Hibernate gives us: it executes an initialization script called import.sql after the schema creation step during its own initialization. You can read about it here and here.

Another way of doing it would be to use DbUnit, which I will talk more about in the next example case.

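The script itself is just plain inserts; something like this (the table and columns belong to a hypothetical User entity):

-- src/test/resources/import.sql
INSERT INTO users (id, name, email) VALUES (1, 'John Doe', 'john@example.com');
INSERT INTO users (id, name, email) VALUES (2, 'Jane Roe', 'jane@example.com');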

So now we have our import.sql file ready to be executed at the context's initialization. Here you would place your test data as a set of insert statements. This allows us to populate the schema Hibernate creates from your model with the test data just before the context becomes accessible to the tests.

As you need more tests and the schema evolves, you will extend and update this script.

And that's all! Now you can start writing your database tests.

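A test could then look something like this (JUnit 4 with Spring's test runner; UserDao and the context file name are assumptions, and findUserById is the DAO method I mention again further down):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
public class UserDaoIntegrationTest {

    @Autowired
    private UserDao userDao;

    @Test
    public void findsTheUserInsertedByImportSql() {
        // Record with id 1 was inserted by import.sql at context startup.
        User user = userDao.findUserById(1L);
        assertEquals("John Doe", user.getName());
    }
}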

The Good
One of the good parts, and the one I particularly like the most, is that the database schema doesn't need to be stored as a script in source control. Instead, it is stored with the code, since the actual model you are using is the blueprint Hibernate uses to create the database schema. Any change in the mappings is reflected in your integration tests as soon as the context is initialized, and developers don't have to write migration scripts for the changes, which is error prone and boring.

This also means that if you change database providers, for instance from MSSQL to Oracle, Hibernate will be the one creating the schema creation script, using the dialect of your choice. This is particularly useful when you haven't yet made any decisions on what the underlying persistence support will be.

The Bad
The main problem I see with this approach is that it only works if you don't have previous data that you need to maintain. If you do, there is no way (that I know of) to use this behavior in "update" mode and also keep the "migration" done by Hibernate.

Also, if the test data gets too big, there is no way to split the script to make it more manageable using some kind of "import" directives. However, I think that from Hibernate 3.6.10 onwards you can set the scripts to load for each persistence unit using a configuration property, or through code. You can find more info about it in this Stack Overflow thread and in this Spring Forum thread.

By the way, there seems to be a "bug" with the JpaVendorAdapter I use in this type of setup, which resets the schema generation mode to "update". To resolve this issue, just set the generateDdl flag to false. You can find more info on the reasons here.

The Ugly
Integration tests are slow. This solution needs to instantiate the context and database for each set of tests, though it only initializes the context once per test suite. Although this is faster than doing it for each test, you may run into order dependency problems among tests because of the shared data if you don't write your tests properly. Use wisely.

You can still set it up so that each test executes with a clean db setup. One solution would be to regenerate the schema and repopulate the tables with data for each test using the SchemaExport class. I leave that as an exercise for the dear reader ;)

Case two: Building from a Legacy Database System
Now, take into consideration that once you release a version of this application, you are actually moving into building on top of a legacy system.

In this case you already have a database; it may be from a previous version or from a totally unrelated product. Whatever the case, I am assuming you can't lose data or, in some cases, modify the database schema. The create approach in Hibernate won't be of much help here. What do we do then?

Well, in this case we are going to need something else. Here is where DbUnit comes to the rescue.

There are other solutions, like the SQL Ant task that executes SQL scripts against the db before you run your tests, but I like DbUnit better because I can put the database initialization and finalization into my tests, per test, instead of at build time. Also, in most cases I can escape having to write the SQL insert statements, leaving DbUnit in charge of the dirty bits of generating them.

What's different about this approach?
In this case, we are going to get a little more control over what is happening in the db. We will still insert the sample data, just that this time it will be DbUnit doing it. To do that, we first need to put this information in an XML file format that DbUnit understands, so that it can dump it into the db when we tell it to.

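Here is a small sample dataset (again, a hypothetical users table):

<?xml version="1.0" encoding="UTF-8"?>
<dataset>
  <users id="1" name="John Doe" email="john@example.com"/>
  <users id="2" name="Jane Roe" email="jane@example.com"/>
</dataset>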

The format is pretty simple. All records hang from a dataset root element, and the name of each node is the name of the table where you want to insert that record. For specific column values you use the attributes of the node: the attribute name is the column, and its value is the content to put in that column.

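And a sketch of the per-test setup that dumps it into the db (how you obtain jdbcConnection, and the file path, are assumptions):

import java.io.File;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Before;

public class UserDaoDbUnitTest {

    private java.sql.Connection jdbcConnection; // obtained from your DataSource

    @Before
    public void setUpDatabase() throws Exception {
        IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(new File("src/test/resources/dataset.xml"));

        // CLEAN_INSERT clears the involved tables first, then inserts the
        // dataset rows, so every test starts from a known state.
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }
}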

The Good
We have per-test database setup. This is faster than the method we already talked about, but you still need to do a little cleanup afterwards. A way to avoid this is to execute each test inside a transaction and just roll the transaction back at the end of the test. However, this may not always be possible.
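
With Spring's test framework, the transactional variant is mostly declarative (a sketch; rollback is the default for transactional tests, a transaction manager must be configured, and the context file name is an assumption):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
@Transactional // each test runs in a transaction that is rolled back at the end
public class UserDaoRollbackTest {

    @Test
    public void changesMadeHereVanishWhenTheTestEnds() {
        // Exercise the DAO; nothing is committed to the database.
    }
}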

We also get "semi-raw", code-based access to the database for verification. In this sense you get more control over verifying what is really happening. In case you didn't notice, when I was testing the insertion in the first method I used the findUserById method, thus relying on my own code. While for unit tests this may not be a problem, as long as the code you are using has its own unit tests, when it comes to db integration tests I wouldn't recommend it. The reason is that you could fall into the trap of, for instance, thinking that your insertions are working when in fact they are just being cached by the underlying persistence mechanism. Be aware of it.

Finally, you can have both methods working side by side and use them as needed.


The Bad
While DbUnit will insert the data for you, it won't create the schema from the information we give it (not that it could). That's why we are still using the "update" mode with our persistence.

There are also some funny things about the way data gets cleared, because it relies on the order the records were inserted in. I remember this particular dog biting me some time ago.

The Ugly
DbUnit seems not to have noticed that the rest of the world has changed. By this I mean that you end up writing lots of boilerplate code for your tests that could have been avoided with some properly coded annotations. This is typical of the JUnit 3.x style of tests, where you would need to inherit from a TestCase class or something similar. Not cool. However, if you refactor your tests (like you do, right?) you will end up with little or no repetition.

This is all for now. You can see the full code at this repository on github.

Stay tuned for some more smelly code... with potatoes on the side, and leave some feedback if you want to help me improve the quality of these posts ;)

Update 14/04/2012: I renamed the github repository.

Sunday, April 8, 2012

Make it stick.

A few days ago I was watching a recorded workshop by Angel Medinilla, given at the Barcelona Lean Camp this February. He was explaining lean principles to the participants by having them practice a series of Aikido techniques and movements. But there was something in particular that he said that resonated with me.

"Metaphors are a powerful way of explaining things to people, because they stick, because they create a paradigm." I totally agree. In fact this post is a compilation of some of the metaphors and similes that I tend to use when explaining agile or explaining agile practices to someone. As usual, this is just my personal repertoire, but I would love to hear what you guys think about it or if you have any other examples that your prefer.

I plan to keep updating this post as I find new examples that I like. I hope you find them as useful as I do.

Angry Birds

I think most people know the Angry Birds game. In this game you have a goal, which is to kill the pigs sitting at different positions on the map by throwing birds at them from a slingshot. You also have lots of obstacles that may prevent you from getting to the pigs directly: for example, boxes, metal crates, glass, etc., and some other variables like gravity, speed, the effects of impacting an obstacle, and so on.



This is a good example with which to analyze and discuss the advantages of adaptive over predictive planning: transparency, risk management, etc. Another good question to ask would be: what if the pigs could move?

Crossing the river
You need to cross a river in a small boat, but there are lots of stones in the way that you can't see because they are under the water. These stones will probably delay your arrival at the other side, or even sink the boat.

You can discuss how frequent feedback cycles are like lowering the river's water level so you can see the tips of the rocks and avoid them in time, or at least minimize the damage.

Growing Roses
You are growing roses. You care for the plant, and you know how long it will take to grow and give flowers. However, you don't know exactly how many flowers it will give. You can compare the plant to a product being developed, and the flowers to the scope or features of the product.


Running a Sprint vs a Marathon
This one is kind of evident. You would compare having a sustainable pace, flow, and rhythm to running a marathon, and burning the midnight oil to running a sprint. At the heart of everything is the question: how good are you when you are tired?



It is easy to define sustainable pace but hard to apply. It doesn't mean you can't sprint once in a while. Marathon runners do it too, at the finish line or to overtake another runner. But if you need to do it every time, you have a problem.

Building a road
Suppose your client wants you to build him a road so that he can get to his beach house. Does it need to be an 8-lane highway, or will a 2-way paved road suffice? Would a dirt track in the middle of the woods be enough?

There are different levels at which to achieve the same end results or goals, and building incrementally gives you the opportunity to deliver value (an unpaved road in the middle of the woods) and then improve upon it with your customer's feedback at hand. Maybe the unpaved road is enough. Or maybe at the beginning he thought he needed a highway, but he now realizes he doesn't. This is also a good example for discussing prototyping vs building incrementally.

A car going somewhere
This example is particularly centered on the roles of a Scrum team. The whole car signifies the project or product being built. The driver is the product owner, who has the vision and knows where the car needs to go. The team is the chassis and engine, and the Scrum Master would be the oil, or the brake fluid.

That's it for now. If you want me to include something else, let me know. :)