
Sunday, April 15, 2012

Seamless database content localization with NHibernate

Localization is a well-known problem we as developers usually face when building multilingual applications. It is well studied, and for most development frameworks there are ways to localize your UI text or application resources. But when it comes to localizing content that is going to be stored in a database, that's a whole different story.

There is little or no support in most persistence platforms to enable this scenario out of the box, whether it is an ORM or the "default" persistence method for the technology stack of your choice. Also, when trying to solve this problem we usually reinvent the wheel. By that, I mean there are literally hundreds of man hours lost in redoing the same thing, again and again, for each application.

The worst part is that most solutions we come up with, or at least the ones I have seen or experienced, are really intrusive given the way we "ideally" try to approach persistence nowadays. This ideal being that the way we write our models shouldn't be contaminated by the implementation detail of how they are going to be persisted. But sadly, that is not the case most of the time, and our models end up getting splattered by the constraints imposed by the persistence mechanism.

Especially with ORMs this is very noticeable, because we end up creating entities just to enable the localization process and making these entities part of our domain model almost by force, turning these "localization enablers" into a sort of dependency magnet that spreads through our models. What happens then is that if you want to take your domain objects somewhere else, you need to take the localization mapping objects with you, along with anything else they need.

If you are interested in knowing about other approaches to localization, here are a couple of links on the subject. You can start by reading an overview of all of them in this NHForge post. Here are the links to each method explained: MichalSiim Viikman, Ayende, Alkampfer [1, 2, 3], Fabio.

Again, disclaimers are in order. The code provided here is just a proof of concept and not production ready. Keep that in mind at all times. If you want to make it production ready, fork the github repository at the end of this post and send me a pull request. I will be more than glad to add your name to the contributors list and this post.

Yet another solution
As I mentioned before, there are several solutions out there already. The reason why I don't like them is that almost all of them, except for Ayende's, force you to create your domain model with a particular persistence trick in mind. Whether it is adding the property as a dictionary or having a special type for it, they are kind of intrusive, and I didn't like that.

So what this solution proposes is to use a not so commonly used feature of NHibernate called interceptors. Interceptors are called upon whenever NHibernate performs an operation, and they allow you to plug into the framework's pipeline so that you can transform it, update it, analyze it, enhance it, or whatever you want to do.

In this case our interceptor is going to look in the database for the localization message entries of each of the properties in our entity, according to the current culture it is working with, and based on the results it will update the entity's values with the localized ones. Just so we don't query the database every time, which would be a big performance hit, we are going to be using NHibernate's second level cache.

That's enough talk! Let's get down to it!

The code
I have hosted the code at this github repository. You can download it, fork it, use it, modify it, sell it (not advised), or wear it like a hat. I am going to be using NHibernate + FluentNHibernate + Moq + MSTests + SQLite but aside from NHibernate, the rest is not really that important.

So first, let's see what our localization persistence entities are going to look like.
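Stripped down, they look something like this (a simplified sketch; the property names here are my own, and the repository code may differ slightly):

public class LocalizationEntryId
{
    // Identifies exactly one localized value: which entity, which property, which culture.
    public virtual string EntityType { get; set; }
    public virtual string EntityId { get; set; }
    public virtual string PropertyName { get; set; }
    public virtual string Culture { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as LocalizationEntryId;
        if (other == null) return false;
        return EntityType == other.EntityType
            && EntityId == other.EntityId
            && PropertyName == other.PropertyName
            && Culture == other.Culture;
    }

    public override int GetHashCode()
    {
        return string.Join("|", new[] { EntityType, EntityId, PropertyName, Culture }).GetHashCode();
    }
}

public class LocalizationEntry
{
    public virtual LocalizationEntryId Id { get; set; }
    public virtual string Message { get; set; } // the localized text itself
}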


Here we have two classes: LocalizationEntry and LocalizationEntryId. One serves as a composite Id of the other for caching purposes, but in general, they are not complex classes. The Id class consists of the entity's type, the entity's id, the property to which the localization message belongs, and the culture in which this message should be displayed.

You can see I have overridden the Equals and GetHashCode methods for the composite Id. This is required by the framework in order to use the composite object as an id.

Here is the fluent mapping for these same entities.
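Roughly something like this (a sketch using Fluent NHibernate's component-based composite id; the table name is an assumption):

public class LocalizationEntryMap : ClassMap<LocalizationEntry>
{
    public LocalizationEntryMap()
    {
        Table("LocalizationEntries");

        // The four key properties together form the composite id component.
        CompositeId(x => x.Id)
            .KeyProperty(x => x.EntityType)
            .KeyProperty(x => x.EntityId)
            .KeyProperty(x => x.PropertyName)
            .KeyProperty(x => x.Culture);

        Map(x => x.Message);

        // Second level cache enabled for this entity.
        Cache.ReadWrite();
    }
}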


I activated the cache on this entity so that we can take advantage of the second level cache feature that NHibernate provides us with. This way we won't query the database every time.

Last but not least, our proof-of-concept interceptor implementation.
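The heart of the idea is an OnLoad override on NHibernate's EmptyInterceptor. What follows is a simplified, untested sketch rather than the exact code from the repository (member names are my own):

using System.Globalization;
using NHibernate;
using NHibernate.Type;

public class LocalizationInterceptor : EmptyInterceptor
{
    private readonly ISessionFactory sessionFactory;

    public LocalizationInterceptor(ISessionFactory sessionFactory, CultureInfo culture)
    {
        this.sessionFactory = sessionFactory;
        Culture = culture;
    }

    public CultureInfo Culture { get; set; }

    // Called by NHibernate right after an entity's state has been loaded from the database.
    public override bool OnLoad(object entity, object id, object[] state,
                                string[] propertyNames, IType[] types)
    {
        // Use a separate session so we don't disturb the one being intercepted.
        using (var session = sessionFactory.OpenSession())
        {
            for (var i = 0; i < propertyNames.Length; i++)
            {
                if (!(state[i] is string)) continue; // only string properties can be localized here

                var entryId = new LocalizationEntryId
                {
                    EntityType = entity.GetType().FullName,
                    EntityId = id.ToString(),
                    PropertyName = propertyNames[i],
                    Culture = Culture.Name
                };

                // Get() checks the second level cache before hitting the database.
                var entry = session.Get<LocalizationEntry>(entryId);
                if (entry != null)
                    state[i] = entry.Message;
            }
        }

        return true; // tell NHibernate we modified the state
    }
}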


How to use it
To get a better understanding of how to use the interceptor, we just have to take a look at the tests, since they pretty much show how a consumer of this method would use the API.
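In spirit, the usage boils down to something like this (Product, its Name property, and the expected value are made-up placeholders):

var interceptor = new LocalizationInterceptor(sessionFactory, new CultureInfo("es-ES"));

using (var session = sessionFactory.OpenSession(interceptor))
{
    // The entity comes back with its string properties already translated,
    // provided matching LocalizationEntry rows exist for the "es-ES" culture.
    var product = session.Get<Product>(1);
    Assert.AreEqual("Silla", product.Name);
}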


In fact, it is so transparent that you just use it the same way you would use NHibernate's persistence. We just open a session and pass in the interceptor we are going to use. I normally do this at the request level, since I usually do session-per-request handling, but it will work any other way. You just need to make sure that the interceptor you are using has the right culture set, or you may end up getting the wrong results.

The only difference shows up when you are going to store the localized values. In the examples, I insert the localized values into the database as their own entries before running the integration test. This may be just fine for you, but if you go to The Ugly section, you can read about a way to make that process just as transparent as the load.

By the way, if you are interested in how the testing is implemented and you come from the Java side of things, you may learn more on the subject by taking a look at this post on integration tests for your database using Java.

The Good
We totally decouple the localization logic from our domain models so that we can use them as we please without carrying any baggage. This leaves the door open to lots of possibilities.

This mechanism can be replicated with Hibernate for Java and possibly other ORMs.


The Bad
I didn't implement the localization insertion part. I hope I will do it sometime, but like I said, if you want to contribute, go ahead.

As said in the disclaimer at the beginning, this is not production-ready code. I didn't do any error handling or checking of the entity's types, and you won't be able to localize any non-string properties.

There is no way to easily query the localized data by the localized fields. If you don't need sorting on these fields or something similar, the solution is OK. However, if you do, consider other options. I would look into indexing the localized content using Lucene or something like that, and working your search-related cases from there.


The Ugly
Here are a couple of improvements that could easily be added.

Caching and Pre-Caching
As described earlier, I used a composite id so that I could take advantage of the second level cache. It would be more efficient to mix that with some pre-caching: for instance, executing a query to load all the localization messages for the entity on the first go and making sure the results get stored using the second level cache for queries.
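Something like this could warm things up in one round trip (a sketch, assuming a session and the current culture at hand; Product is a placeholder as before, and the query cache also has to be enabled in the configuration for SetCacheable to have any effect):

// Load every localization entry for a given entity type and culture in one query,
// and let NHibernate keep the results in the second level query cache.
var entries = session
    .CreateQuery("from LocalizationEntry e where e.Id.EntityType = :type and e.Id.Culture = :culture")
    .SetString("type", typeof(Product).FullName)
    .SetString("culture", culture.Name)
    .SetCacheable(true)
    .List<LocalizationEntry>();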

Transparent persistence of the localization values
Although for the purpose of this post I didn't need to do it, the persistence of the localization values could also be handled the same way as the load, by implementing another method from the base interceptor class. Just setting the value of the property and saving the entity could then save the culture-dependent message to the database transparently.
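Very roughly, and completely untested, that hook could look something like this (same assumptions about member names as before):

public override bool OnFlushDirty(object entity, object id, object[] currentState,
                                  object[] previousState, string[] propertyNames, IType[] types)
{
    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        for (var i = 0; i < propertyNames.Length; i++)
        {
            var value = currentState[i] as string;
            if (value == null) continue;

            // Write the current value out as the message for the active culture.
            session.Merge(new LocalizationEntry
            {
                Id = new LocalizationEntryId
                {
                    EntityType = entity.GetType().FullName,
                    EntityId = id.ToString(),
                    PropertyName = propertyNames[i],
                    Culture = Culture.Name
                },
                Message = value
            });
        }

        tx.Commit();
    }

    return false; // the entity's own state is left untouched
}

Making it robust, so it doesn't clobber other cultures or fire when you don't want it to, is the interesting part.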

I leave that as an exercise to whomever wants to dig a little deeper. ;)

Cherry-picking fields to be localized
In this example I assumed I wanted to localize every property and didn't check the entity property types. But if we wanted to avoid performance problems, we could cherry-pick which fields to localize by adding a special attribute that the interceptor would look for when deciding which properties to localize. I don't like this approach very much because it forces the attribute onto the model, but there are other ways of doing the same thing without having to decorate our domain class directly. (Tip: Fluent NHibernate uses a similar approach.)
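A hypothetical marker attribute and the corresponding check inside the interceptor's loop could look like this:

[AttributeUsage(AttributeTargets.Property)]
public class LocalizableAttribute : Attribute { }

// Inside the interceptor, before looking up the localization entry for a property:
var property = entity.GetType().GetProperty(propertyNames[i]);
if (property == null || !property.IsDefined(typeof(LocalizableAttribute), true))
    continue; // not marked for localization, leave the value alone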


Selecting values using business logic
Another interesting problem is that sometimes we want to localize based on specific application or culture-related logic. For instance, with numerals we may want to show the pluralized version of a word: it is not the same to have 1 "message" in your inbox as to have 2 "messages". This concept is tricky and may need some thought to get right, but it doesn't really worry me, since the localization code is totally decoupled from our models, which gives us a lot of freedom to work with.

Again, that's all for now! Let me know if this post was helpful and see you soon with some more smelly code... with potatoes!

Thursday, April 12, 2012

Integration tests for your database code

I hear a lot of people talking about tests, and I have been to a couple of events where speakers have given presentations on the subject. Everyone talks about unit tests, TDD, BDD, and Continuous Integration. However, I don't know if you noticed, but database-related integration testing is often overlooked, omitted, or only briefly mentioned when talking about tests. It makes you wonder why, doesn't it?

Why is there that fear of the subject? Are database-related tests not needed?

Yes, they are. You need them because there are things you simply can't mock (stress and load tests, to mention some). More than that, you may not have other options because you are dealing with legacy code and have no time ($) to do a proper refactor and unit testing. We need them because we should make sure our "whole" system works as expected.

But to be truthful, the main reason would be that database-related testing sucks. It is difficult to get right. It is slow compared to other kinds of tests, and if not implemented properly it can become a waste of time and a source of headaches.

Yet a lot of the logic we write in our software relies on certain preconditions and behaviors of the underlying persistence mechanism being correct. Most of the time we think there is no way of testing those assumptions other than to actually exercise them, either by running the solution on the developer's local system and database, or on an integration server, or something similar.

Before you start screaming and writing me off your list, I must say:


I don't couple my business logic with my persistence. What I mean is that parts of our logic rely on behaviors that we take for granted will happen the way we think. These are things like transaction management when using Spring's Transactional annotation, or cascaded persistence, all of which could fail at run-time.

Sadly, unit tests can't help us in this regard. It would seem that the only way of actually testing that these behaviors are what we expect is to execute or deploy the application in a controlled environment. Or is it?

You already know what the answer is.... don't you?

the answer is...

42...  :]

To explain it better, I am going to split the whole process into two scenarios or contexts and attack both in different ways.

Case one: Building Application from Scratch
Like Uncle Bob likes to say, there is nothing like the green field. That vast meadow where you first start to build your "architecturally sound" software. There is nothing there, no mess left behind by others, no constraints. It opens up the door to a lot of different opportunities (including creating a big mess). This is our first scenario. But before we start digging in on the hows and whys, some disclaimers are in order.

I am going to assume you know what an ORM is and that you are using one, and that if you are not, you have a pretty good reason not to. Either way, I will explain what I usually do, or would do, when I find myself in each situation. That doesn't mean this is the "best" or recommended way; it just means it is my preference. If you have your own ideas on how to improve the process, or maybe a more efficient one: don't be shy and share!

Also, I am going to use Hibernate + JPA for the examples because a friend asked me to, but this would be easily extended to NHibernate or other ORMs like the Entity Framework code-first approach. If you ask for it in the comments, I can extend or add new posts to include those too.

What do we want to achieve?
We want to test our persistence and query logic, usually located in the "DAO" layer of the application. Since we are good developers ;) we want these tests to be deterministic, self-verifiable, and order independent.

What do we need?
We need to set up a complete database environment, equal or similar to the one we are going to be using, and then populate it with test data so we can assert the behaviors in our test code. We will use Hibernate + JPA + HSQLDB + Spring 3.0.

The first thing is to configure Hibernate to recreate the database schema, based on the mappings we have, every time it initializes for the integration tests. I will do this by initializing a new Spring context and JPA persistence configuration just for the tests.
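For reference, a minimal test persistence unit could look roughly like this (the unit name and file location are assumptions; the important bits are the hbm2ddl setting and the HSQLDB connection properties):

<!-- src/test/resources/META-INF/persistence.xml -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="integration-tests" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <properties>
      <!-- Recreate the schema from the mappings every time the context starts -->
      <property name="hibernate.hbm2ddl.auto" value="create"/>
      <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>
      <!-- In-memory HSQLDB database, thrown away when the JVM exits -->
      <property name="hibernate.connection.driver_class" value="org.hsqldb.jdbcDriver"/>
      <property name="hibernate.connection.url" value="jdbc:hsqldb:mem:testdb"/>
      <property name="hibernate.connection.username" value="sa"/>
      <property name="hibernate.connection.password" value=""/>
    </properties>
  </persistence-unit>
</persistence>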



Notice that I am also using an in-memory database provider, HSQLDB, to make things a little less complex. However, this could be any other provider. You would just have to provide any connection details needed and make sure you have the right permissions on the schema.

Since we are using Hibernate in Java, we are going to take advantage of the functionality Hibernate gives us of executing an initialization script called import.sql after the schema creation process, during its own initialization. You can read about it here and here.

Another way of doing it would be to use DbUnit, of which I will talk more in the next example case.
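A hypothetical import.sql with a couple of rows might look like this (the users table and its columns are made up, and each statement has to fit on a single line):

INSERT INTO users (id, username, email) VALUES (1, 'jdoe', 'jdoe@example.com');
INSERT INTO users (id, username, email) VALUES (2, 'asmith', 'asmith@example.com');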


So now we have our import.sql file ready to be executed at the context's initialization. Here you would place your test data as a set of insert statements. This allows us to populate the schema created by Hibernate from your model with the test data just before the context becomes accessible to the tests.

As you need more tests and the schema evolves, you will extend and update this script.

And that's all! Now you can start writing your database tests.


The Good
One of the good parts, and the one I particularly like the most, is that the database schema doesn't need to be stored as a script in source control. Instead it lives with the code, since the actual model you are using is the blueprint Hibernate uses to create the database schema. Any change in the mappings is reflected in your integration tests as soon as the context is initialized, and developers don't have to write migration scripts for the changes, which is error prone and boring.

This also means that if you change database providers, for instance from MSSQL to Oracle, Hibernate will be the one creating the schema creation script, and it will be using the dialect of your choice. This is particularly useful when you still haven't made any decisions on what the underlying persistence support will be.

The Bad
The main problem I see with this approach is that it only works if you don't have existing data that you need to maintain. If you do, there is no way (that I know of) to use this behavior in "update" mode and also keep the "migration" done by Hibernate.

Also, if the test data gets too big, there is no way to split the script into more manageable pieces using some kind of "import" directives. However, I think that in Hibernate 3.6.10 and later you can set the scripts to load for each persistence unit using a configuration property, or through code. You can find more info about it in this Stack Overflow thread and in this Spring Forum thread.

By the way, there seems to be a "bug" with the JpaVendorAdapter that I use in this type of setup, which sets the schema generation mode to "update". To resolve this issue, just set the generateDdl flag to false. You can find more info on the reasons here.

The Ugly
Integration tests are slow. This solution requires you to instantiate the context and database for each set of tests, though the context is only initialized once per test suite. Although this is faster than doing it for each test, you may run into order-dependency problems among tests because of the data if you don't write your tests properly. Use wisely.

You can still set it up so that each test executes with a clean db setup. A solution would be to regenerate the schema and repopulate the tables with data for each test using the SchemaExport class. I leave that as an exercise to the dear reader ;)

Case two: Building from a Legacy Database System
Now, take into consideration that once you release a version of this application, you are actually moving into building on top of a legacy system.

In this case you already have a database; it may be from a previous version or from a totally unrelated product. Whatever the case, I am assuming you can't lose data or, in some cases, modify the database schema. The create approach in Hibernate won't be of much help here. What do we do then?

Well, in this case we are going to need something else. Here is where DbUnit comes to the rescue.

There are other solutions, like the SQL Ant task that executes SQL scripts against the db before you run your tests, but I like DbUnit better because I can put the database initialization and finalization into my tests, per test, instead of at build time. Also, in most cases I can escape having to write the SQL insert statements by letting DbUnit take care of the dirty bits of generating them.

What's different about this approach?
In this case, we are going to get a little more control over what is happening in the db. We will still insert the sample data, just that this time it will be DbUnit doing it. To do that, we first need to put the information into an XML file format that DbUnit understands, so that it can dump it into the db when we tell it to.
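Such a dataset file could look like this (using the same made-up users table as before):

<?xml version="1.0" encoding="UTF-8"?>
<dataset>
  <users id="1" username="jdoe" email="jdoe@example.com"/>
  <users id="2" username="asmith" email="asmith@example.com"/>
</dataset>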


The format is pretty simple. All records hang from a dataset root element, and the name of each node is the name of the table where you want to insert the information. For specific column values you use the node's attributes; the value of each attribute is the content that goes into the corresponding column.


The Good
We have per-test database setup. This is faster than the method we already talked about, but you still need to do a little cleanup afterwards. A way to avoid this would be to execute each test under a transaction and, at the end of the test, just roll the transaction back. However, this may not always be possible.

We also get a "semi-raw" code based access to the database for verification. In this sense you get more control over the verification of what is really happening. In case you didn't notice when I was testing the insertion in the first method, I used the findUserById method thus relaying on my own code. While for unit tests this may not be a problem as long as the code you are using has its own unit tests, when it comes to db integration tests I wouldn't recommended it. The reason being that you could fall into the trap of, for instance, thinking that your insertions are working when in fact they could just be cached by the underlying persistence mechanism. Be aware of it.

Finally, you can have both methods working side by side and use them as needed.


The Bad
While DbUnit will insert the data for you, it won't create the schema from the information we give it (not that it could). That's why we are still using the "update" mode with our persistence.

There are also some funny things about the way data gets cleared, because it relies on the order the records were inserted in. I remember this particular dog biting me some time ago.

The Ugly
DbUnit seems like it has not noticed that the rest of the world has changed. By this I mean that you end up writing lots of boilerplate code for your tests that could have been avoided with some properly coded annotations. This is typical of the JUnit 3.x style of tests, where you would need to inherit from a TestCase class or something similar. Not cool. However, if you refactor your tests (like you do, right?) you will end up with little or no repetition.

This is all for now. You can see the full code at this repository on github.

Stay tuned for some more smelly code... with potatoes on the side soon, and leave some feedback if you want to help me improve the quality of these posts ;)

Update 14/04/2012: I renamed the github repository.

Tuesday, March 6, 2012

Give me some code52. No wait, make it code365.

Have you heard of code52?

No? Well, me neither, at least not until a couple of weeks ago when, browsing through the GitHub trends, I found that these guys had a lot of activity going on and decided to have a look. I was amazed. Not by the code, which has great quality, but by the idea. And what a great idea it is. If you didn't click on the link, don't worry, I want to share with you what's so special about it.


In a nutshell, and as publicly expressed by the current organizers, these are some of the goals for the code52 project:
  1. Introduce developers to open-source projects and help them learn the ropes
  2. Contribute to projects across a range of technologies
  3. Let contributors have a say in what they do each week
But, how? Well, Andrew Tobin, Brendan Forster and Paul Jenkins decided to try and create a brand new open source project, each week.




If you work in software development, you know how difficult completing and actively working on one project can be. Open source or not, it demands a lot of energy and dedication. You can imagine, then, that starting and attending to one each week successfully is a monumental task, even more so if you already have a day job. However, these guys are pulling it off, and that's just astonishing.


The reason why I am so excited about this idea is that it's a conscious, continuous, and well-organized effort to establish a mindset of contribution and collaboration among developers around the world. Let me explain what I mean.


While there are certainly lots of open source projects out there, most of them are born in a totally random way. Meaning that, usually, someone has an itch they need to scratch, and that's how they come to be. This, as I see it, has a couple of disadvantages.


First, there is no systematic way of giving birth to projects, and while that in itself is not bad, it definitely doesn't help growth and quality either.


Second, the fact that they get created in a disconnected way most of the time means redundancy in effort and resources. By that I mean that lots of projects with the same ambitious objectives come to life, but due to lack of synergy or momentum, they ultimately die or don't get the attention they need to become a great project; instead they just turn out to be average ones. If this effort and these resources were willingly and consciously put together, that would probably not be the case.


Third, there are lots of great ideas out there from people who are not developers or simply don't have the time to put in. Those ideas will take longer to crystallize or, probably, will never see the light of day because there was no one there to listen or pick up the ball. This is one of the best things about code52: they listen to people.


Yes, there is the Apache Software Foundation and others like it that provide an ecosystem by allowing projects into their incubation facilities. But I wouldn't call them friendly in the same way these guys are; they are more like an enabling architecture for more mature projects. Yes, they provide you with tools and so on; however, to join the Apache Labs it is required that the podlings have an established and working codebase. It is then, through the incubation process, that they are expected to grow and improve their communities.


This, for me, addresses other areas of open source that are not directly related to development itself; development is more what code52 tries to work on.


Fourth, we need more developers helping to create and evolve open source software projects. Although there are initiatives like Google's Summer of Code and others, they are product-, participant-, and community-centered. What happens to the rest of the people who want to contribute during the rest of the year?


Well, they don't know where to start. By "they" I mean me: I wouldn't know where to start. I can see people having trouble finding a project they would be motivated to contribute to, because everything is so spread out and there is no way to easily sort and find that info.


Even if they do, people like me who have never actively worked on an open source project wouldn't be too clear on how they are supposed to join the production process. Add to that the lack of meaningful and updated documentation from which a lot of open source projects suffer, and you have a recipe for alienation and disengagement.


Well... it is only reasonable to think, then, that the easiest way to get over that would be to start your own project, which in turn suffers from the problems already mentioned.


Another thing that I like about the code52 initiative is that you can join their channel at any time of the day and there will always be someone willing to bring you up to speed, provide some guidelines, or share useful insights. That is just priceless. They have taken advantage of a platform of freely available communication and collaboration tools that support each other and make the development experience more fluent.


Also, the projects are small and targeted enough that it would be easy for someone without expert knowledge to join in and contribute. Like they say, just grab one of the cards on the Trello board and start coding, or pop into the channel and provide some insight.


Although, as much as I am in love with the idea, there are some things that I can see people flagging as caveats. For instance, the technology most of the projects are being built on (C# on the Microsoft stack). I think this is more a by-product of the background the main contributors have than anything else, and the solution for it is simple: more contributors.


They have expressed several times that they would be willing to try other languages and platforms if there is enough interest, support, and contributors. So if you are a hard-core CoffeeScript, Ruby, Python or *insert language here* developer, give it a try. Join in.


Another warning is that most projects being worked on so far have a very hands-on intention: mostly tools with particular purposes. So right now I don't envision lots of molecule-simulation frameworks being born there.


Above all, code52 is a call to action and a training ground that turns what-ifs into reality. It is the door to who knows which other wonders like this one. Who knows, maybe I will get to see a code365.


Kudos to everyone behind it, supporting and helping out. I'm definitely looking forward to participating in this initiative! Are you?