Archive

Archive for the ‘patterns’ Category

The Micro Framework Approach

December 3, 2010


The demand for Grails and Groovy is clearly rising these days – at least here in Austria and Germany. Although most of my workshops have their individual adaptations (depending on the previous knowledge and programming language experience of the participants), there are parts which can be found more or less unmodified in every workshop: Groovy essentials/advanced topics and what I call micro framework examples. This article is about the idea behind micro framework examples and why I find them so useful as workshop examples.

What is a Micro Framework Example?

I strongly believe that true understanding of the patterns behind frameworks like Hibernate and Spring can't easily be conveyed in a bunch of slides. Explaining patterns is one thing, but actually seeing how they are applied is another. One approach I've found to be really useful in workshops is the use of micro framework examples. A micro framework example implements the core functionality behind a specific framework – reduced to the very fundamentals. One advantage of implementing a micro framework example together with participants is that it triggers a thinking process about what functionality is needed and how it can be implemented. Another side effect is that it gently introduces the original framework's ubiquitous language simply by using the same class and method names.

Let me give you an example. The most daunting topic for many of my clients is understanding Hibernate and its persistence patterns. One approach to create a better understanding is to implement a micro Hibernate framework example. This can be done in a simple Groovy script MicroHibernate.groovy which defines two classes and a simple test case. The first class implements the registry pattern and is called SessionFactory:

class SessionFactory {

    private def storage

    def SessionFactory(def storage)  {
        this.storage = storage
    }

    def newSession()  {
        // hands out a new persistence context backed by this factory (used by the test case below)
        return new Session(this)
    }

    def newStorageConnection()  {
        return storage
    }
}

The SessionFactory acts as the main access point for obtaining a reference to some storage connection. In the micro framework example this will simply be a Map. Dealing with SQL or even a real database would needlessly complicate the example, and we want to concentrate on the core essentials. Let's go on to the next class, which implements the persistence context pattern:


import org.apache.commons.logging.Log
import org.apache.commons.logging.LogFactory

class Session {

    static Log log = LogFactory.getLog(Session)

    private def sessionFactory

    def Session(def sessionFactory) { this.sessionFactory = sessionFactory }

    def snapshots = [:] // a Map(Domain-Class, Map(Identifier, Properties))
    def identityMap = [:] // a Map(Domain-Class, Map(Identifier, ObjectRef))
    def modifiedPersistentObjects = [:] // a Map(Domain-Class, List(Identifier))
    
    def propertyChanged(def obj)  {
        if (!modifiedPersistentObjects[obj.getClass()]) modifiedPersistentObjects[obj.getClass()] = []

        modifiedPersistentObjects[obj.getClass()] << obj.id

        log.info "propertyChanged of object: ${obj} with id ${obj.id}"
    }

    def load(Class<?> domainClassType, Long identifier)  {
        
        if (identityMap[domainClassType] && identityMap[domainClassType][identifier])  {
            return identityMap[domainClassType][identifier]
        }
        
        def conn = sessionFactory.newStorageConnection()
        def loadedObj = conn[domainClassType][identifier]
        if (!loadedObj) throw new Exception("Object of type ${domainClassType} with id ${identifier} could not be found!")
        
        if (!snapshots[domainClassType]) snapshots[domainClassType] = [:]
        if (!identityMap[domainClassType]) identityMap[domainClassType] = [:]


        def properties = loadedObj.getProperty("props")
        snapshots[domainClassType][identifier] = properties.inject([:], { m, property -> m[property] = loadedObj[property]; m })

        log.info "create snapshot of ${domainClassType} id ${identifier} with properties ${snapshots[domainClassType][identifier]}"

        identityMap[domainClassType][identifier] = loadedObj
        
        loadedObj.metaClass.getId = { -> identifier } 
        loadedObj.metaClass.setProperty = { String name, Object arg ->
            def metaProperty = delegate.metaClass.getMetaProperty(name)

            if (metaProperty)  {
                owner.propertyChanged(loadedObj)
                
                metaProperty.setProperty(delegate, arg)
            }
        }
        
        return loadedObj
    }
}

A Session object can be used to retrieve already persistent objects and to persist so-called transient objects. I like to start by implementing the load method, which loads a persistent object from the storage connection of the current session factory. Of course, this is not an example for Groovy beginners, but with a little knowledge of the MOP and some programming guidance it should not be a big deal to understand what is going on. Finally, let's define a test case which shows how both classes are actually used:

def storage = [:]

class Person {
    String name

    String toString() { name }

    static props = ['name']
}

storage[Person] = [:]
storage[Person][1 as Long] = new Person(name: 'Max Mustermann')
storage[Person][2 as Long] = new Person(name: 'Erika Mustermann')

def sessionFactory = new SessionFactory(storage)
def session = sessionFactory.newSession()

def person = session.load(Person, 1)
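
To make the first-level cache and the dirty tracking visible, the test case can be extended a bit. The following lines are a small sketch against the micro framework above (the assertions are assumptions about this example, not about Hibernate itself):

def samePerson = session.load(Person, 1)
assert person.is(samePerson) // identity map: the same instance is returned, no second storage lookup

person.name = 'Maximilian Mustermann' // routed through the MOP hook, which marks the object as modified
assert session.modifiedPersistentObjects[Person].contains(1L)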

Interestingly, even without considering SQL, DB connection handling, threading issues etc., participants already get a feeling for several Hibernate gotchas that beginners otherwise often struggle with:

  • the first level cache
  • the need for proxies or MOP modifications
  • Hibernate’s use of object snapshots
  • the IdentityMap pattern
  • repeatable read transaction isolation level
  • etc.

It is amazing how much can be explained by implementing a framework's core functionality in about 5 minutes. The Session functionality then gets extended with flush, discard and save/delete functionality. If programmers have been through the process of implementing such a micro Hibernate example, they often gain a basic and fundamental understanding of how an ORM framework could work and what the main challenging problems are. By keeping the class and method names in sync with the concrete Hibernate implementations, participants learn the framework's basic domain language.

GSamples – A Repository for Sharing Workshop Examples

The example mentioned above is available in a public GitHub repository which I called GSamples [0], a collection of Groovy and Grails workshop examples. At the time of publishing this article it contains two micro framework examples, the other one being a simple dependency injection container. In addition, GSamples holds Groovy scripts dealing with Groovy essentials and another one dealing with advanced Groovy topics like the Meta-Object Protocol and closures. Feel free to extend, distribute or use it!

[0] GSamples – https://github.com/andresteingress/gsamples

Happy Messaging with ActiveMQ and SI (Part 1)

October 14, 2010


In one of my current projects we needed to set up a communication channel between two distinct Grails applications. One of them is the master application which runs an embedded ActiveMQ message broker [0]. The second one – the slave application – provides service APIs to the master application.

Since Grails heavily relies on Spring, we decided to use Spring Integration as the messaging framework [1]. Spring Integration is a messaging framework which supports various Enterprise Application Integration patterns [2] without being bound to any specific messaging protocol. Since our project team chose ActiveMQ, we went with JMS as the underlying messaging protocol in our project.

Setting up an embedded ActiveMQ message broker

ActiveMQ is a fully JMS 1.1 compliant messaging provider which is available under the Apache Software License. It has quite a bag of features, the most important one for us being support for persistent messages. Besides running ActiveMQ as a distinct server, one can choose to run ActiveMQ as an embedded server inside the application.

Configuring an embedded ActiveMQ broker using Grails’ Beans DSL is pretty straight-forward (once you get used to the Beans DSL of course):

xmlns amq:'http://activemq.apache.org/schema/core'

def brokerName = 'myEmbeddedBroker'

amq.'broker'(brokerName: brokerName, useJmx: true, persistent: false) {
  amq.'managementContext'  {
    amq.'managementContext'(connectorPort: 2011, jmxDomainName: 'embeddedBroker')
  }

  amq.'plugins'  {
    amq.'loggingBrokerPlugin'
  }

  amq.'transportConnectors'  {
    amq.'transportConnector'(name: 'openwire', uri: 'tcp://localhost:61616')
  }
}

The code above configures an embedded broker called myEmbeddedBroker which only keeps messages in-memory (persistent: false), exposes itself as a JMX bean (useJmx: true) and configures a transport connector using OpenWire over TCP.

In order to let the master application (which holds the configuration above) connect to its embedded message broker, we need to set up a connection factory:

connectionFactoryLocal(ActiveMQConnectionFactory)  {
  brokerURL = "vm://${brokerName}"
}
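
The outbound gateway configured later on references a bean named pooledJmsConnectionFactoryLocal, whose definition is not shown in the article. A minimal sketch of how such a pooled wrapper around the plain connection factory could be declared (the bean name and the maxConnections value are assumptions):

pooledJmsConnectionFactoryLocal(org.apache.activemq.pool.PooledConnectionFactory) { bean ->
  bean.destroyMethod = 'stop'
  connectionFactory = ref('connectionFactoryLocal')
  maxConnections = 8
}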

Finally, we define two message queues: one for outgoing API requests to the slave application and one for incoming responses:

"requestQueue"(org.apache.activemq.command.ActiveMQQueue, "QUEUE.REQUEST")
"responseQueue"(org.apache.activemq.command.ActiveMQQueue, "QUEUE.RESPONSE")

Spring Integration comes into play

So far we have set up an embedded message broker which could be used for plain JMS API message exchange. In our project we decided to go with Spring Integration because it already implements several EAI patterns (e.g. router, gateway, etc.) and abstracts from the underlying messaging protocol.

A reference manual on Spring Integration can be found at [3], but let me give you a short introduction. Spring Integration (SI) is a messaging framework which implements patterns found in the book Enterprise Application Integration Patterns [4]. That is, SI is all about messages and message exchange. To exchange a message from point A to point B there needs to be a channel between A and B. Besides messages, channels are the second most important domain entity in SI.

Channels are injected into your application components just like any other Spring bean. The basic MessageChannel interface is pretty rudimentary:

public interface MessageChannel {

	boolean send(Message<?> message);
	boolean send(Message<?> message, long timeout);
}

The use case in our project was to automatically create a message and send it to some preconfigured channel whenever the programmer chooses to call a service API method:


def someApi

def doSomething()  {   
   someApi.executeRemotely('first param', 'second param') // this should trigger message creation and sending/receiving
}

A call to executeRemotely should automatically create a message object from the input parameters and send it to some sort of API request channel.

Luckily, SI provides the concept of gateways which solve that particular problem. At runtime, a gateway is a proxy object for a particular interface which, on a method call, creates a message object and sends it via some preconfigured channel. Like channels, gateways are Spring beans and can therefore be configured via the Beans DSL:

xmlns integration:'http://www.springframework.org/schema/integration'
xmlns jms:'http://www.springframework.org/schema/integration/jms'

integration.'channel'(id: 'apiChannelRequest')
integration.'channel'(id: 'apiChannelResponse')

integration.'gateway'(id: 'someApi', 'service-interface': org.ast.SomeApi.class.getName(), 'default-request-channel': 'apiChannelRequest', 'default-reply-channel': 'apiChannelResponse')  {

  integration.'method'(name: 'executeRemotely')  {
    integration.'header'(name: 'API_METHOD_NAME', value: 'executeRemotely')
  }
}

As you can see from the configuration snippet above, the gateway has a request/reply channel configured since gateways are synchronous (in SI 2.0 there is asynchronous gateway support) and bidirectional. The SomeApi interface uses SI annotations for further message configuration:

import org.springframework.integration.annotation.Header

interface SomeApi {
    Boolean executeRemotely(final @Header("HEADER_NAME") String param1, final String param2)
}

From the gateway's view, the interface above means: whenever executeRemotely is called, put param1 into a message header with the name HEADER_NAME and put the second parameter into the message's payload. Maybe you noticed the API_METHOD_NAME entry in the gateway configuration above – that is a message header too. We needed to manually inject a unique method identification token (in our case the method name alone was enough) in order to call the correct method on the slave application side.

Configuring JMS messaging

So far we've set up an environment with an embedded ActiveMQ message broker and two ActiveMQ message queues. Now we need to configure the link between the SI channels configured in the last section and those JMS queues. Since gateways are bidirectional, SI needs to store some reply channel information whenever an API request is instantiated. This is done automatically by the gateway implementation. If we were running inside an SI-only environment we wouldn't need to care about this fact. In our case, we chose to use gateways to communicate between a master and a slave application which, in production, are deployed on separate server instances.

In SI, a JMS outbound gateway can be used for those JMS request/reply scenarios. It is the glue between the SI channels and our ActiveMQ JMS queues:

jms.'outbound-gateway'(id: "jmsGateway", 'connection-factory': 'pooledJmsConnectionFactoryLocal', 'request-destination': "requestQueue", 'request-channel': "apiChannelRequest", 'reply-destination': "responseQueue",'reply-channel': 'apiChannelResponse')

In the slave application, there needs to be an inverse configuration using a JMS inbound gateway:

jms.'inbound-gateway'(id: 'jmsInbound', 'connection-factory': 'pooledJmsConnectionFactoryRemote', 'request-destination-name': 'QUEUE.REQUEST', 'request-channel': 'incomingRequest', 'reply-destination-name': 'QUEUE.RESPONSE')

The configuration snippet inside the slave application simply routes incoming messages to the incomingRequest channel. Notice that no reply channel has been specified, in order to keep the reply channel that the master application added to the message.

In the next part of this article series we'll have a closer look at the slave application and how it is configured to invoke methods on Grails service beans.

[0] ActiveMQ Message Broker
[1] Spring Integration
[2] Enterprise Application Integration
[3] Spring Integration – Reference Manual
[4] Amazon: Enterprise Integration Patterns

GroovyMag October 2010 Issue is Out!

GroovyMag 2010/10 features an article of mine about GORM and persistence context patterns. Check it out – really worth the 4.99 😉

Content:

  • Hibernate, GORM and the Persistence Context by Andre Steingress
  • Getting Started with Gaelyk by Peter Bell
  • Lean Groovy Part VII by Hamlet D’Arcy
  • Groovy Under the Hood – Closure Class by Kirsten Schwank
  • … and much more
Categories: hibernate, Intro, patterns

Towards Null-Safe Groovy

August 6, 2010

Currently I am implementing some experimental features, some of which will hopefully make it into GContracts' next major release.

One of the issues I've been playing with is support for null-safe, aka void-safe, code. The main idea is to let GContracts provide compile-time checks for null-safe code parts. Restricting this to 'code parts' only is the first restriction we have to introduce. We all know null is evil, but: null is not the new goto.

There are use cases where assigning null to a variable might still be appropriate. Think of a doubly-linked list data structure, where null could be needed to mark the first node's previous and the last node's next reference. However, the use of null more often leads to errors whenever a method call is executed on a variable which has been assigned null.

In order to provide compile-time checking of instance variables, local variables and parameters, I added a new annotation to GContracts' annotation package: @NullSafe:

import java.lang.annotation.*;

@Retention(RetentionPolicy.SOURCE)
@Target({ElementType.FIELD, ElementType.LOCAL_VARIABLE, ElementType.PARAMETER})
public @interface NullSafe {}

In addition, I added the NullSafeASTTransformation which is supposed to do the compile-time checking. To do that, we need a set of rules which enforce null-safety for variables annotated with @NullSafe:

Local Variables, Parameters

Whenever a local variable or a parameter is marked as null-safe, only assignments to

  • constants (except null)
  • other null-safe variables

are allowed. The AST transformation done so far checks for local variables marked with @NullSafe and takes care of valid assignments. Whenever an assignment does not conform to the rules stated above, a compile-time error is thrown. E.g. the following source code snippet fails during compilation:

package tests

import org.gcontracts.annotations.*

class A {

  def some_operation(@NullSafe def param1)  {
     def bla = "test"
     param1 = bla
  }
}

BUG! exception in phase 'semantic analysis' in source unit 'script1281039992077748355994.groovy' param1 must be assigned to NullSafe variables!
	at org.gcontracts.ast.visitor.NullSafeVisitor.visitBinaryExpression(NullSafeVisitor.java:62)

The same applies for local variables:

package tests

import org.gcontracts.annotations.*

class A {
  
  def some_operation()  {
     @NullSafe def bla = "test"
     def otherBla = "test2"

     bla = otherBla
  }
}


BUG! exception in phase 'semantic analysis' in source unit 'script12810425282531878012710.groovy' bla must be assigned to NullSafe variables!
	at org.gcontracts.ast.visitor.NullSafeVisitor.visitBinaryExpression(NullSafeVisitor.java:31)

Notice that the last code snippet won't compile according to the rules defined above, as assigning a non-null-safe variable to a null-safe variable is not valid. If we want the code to pass compile-time checks, we simply have to annotate otherBla with @NullSafe.

Instance Variables, Fields

Instance variables need to be assigned a non-null value by the time construction of the corresponding object has finished, so that the object complies with the explicit class invariant. Within any method, an assignment to null triggers a compile-time failure:

package tests

import org.gcontracts.annotations.*

class A {

  @NullSafe String myProp

  def A()  {
    myProp = "test"
  }

  def some_op() { myProp = null }
}

BUG! exception in phase 'semantic analysis' in source unit 'script12810407257201736681757.groovy' myProp must be assigned to NullSafe variables!
	at org.gcontracts.ast.visitor.NullSafeVisitor.visitBinaryExpression(NullSafeVisitor.java:59)

Assignments from Non-Void Method Calls

In fact, this is only half the story. Whenever a variable is assigned the return value of a method, we have no simple way of checking at compile-time whether that method returns null. A possible solution for keeping such code null-safe is to specify variable default values via annotation closures (by the way: this is the part I have not implemented yet, so don't be confused by the annotation class above):

package tests

import org.gcontracts.annotations.*

class A {

  def some_op() {
      @NullSafe({ "empty" }) def i = "aloha"

      i = some_function()
      // at this point, i will be "empty" not null!
  }

  def some_function() { return null }
}

The default value is assigned only when an assignment to the null-safe variable would result in a null reference.

Annotation closures would provide great flexibility to define arbitrary complex initialization code:


class Address {
    
    @NullSafe({ new City(name: 'dummy') }) City city
    // ... 
}

An alternative approach would be to annotate methods as @NullSafe, which would prevent methods not marked with @NullSafe from being called in null-safe assignments. A sketch of what that could look like is shown below.
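
Such a method-level variant is purely hypothetical and not implemented in GContracts; the following sketch only illustrates the idea (City and CityLookup are made-up names):

class City { String name }

class CityLookup {

  // hypothetical: a @NullSafe method promises to never return null
  @NullSafe
  City findCity(String name) {
     return new City(name: name ?: 'dummy')
  }
}

// only assignments from constants, other null-safe variables or @NullSafe methods would be allowed
@NullSafe def city = new CityLookup().findCity('Linz')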

What’s next?

That's it for my experimental @NullSafe feature. I know this is a naive first approach, but at least it's a starting point and I would really appreciate your feedback on this topic, either in this article's comment section or on the Groovy-Dev mailing list [1].

It would be very important to know whether this approach works or whether it is not applicable in real-world projects from your point of view. One issue is that, although GContracts uses AST transformations, it is not an integral part of the programming language, which could result in e.g. meta- or reflection-calls that make null-safe variables null. Just imagine Hibernate injecting null values into @NullSafe instance variables – undermining null-safe variables immediately. In fact, this is the point I am not completely clear about – does it make sense to introduce such a feature at the library/AST transformation level?

[0] GContracts – Project Home Page
[1] Groovy-Dev mailing list @ nabble

Categories: groovy, patterns

Domain Patterns in Enterprise Projects


If I had to categorize the projects I've been in, I would differentiate between data manipulation and enterprise projects.

Data manipulation projects mainly consist of a set of forms needed to alter data which is stored in some persistent store (most of the time a relational database). In these projects, there is not much domain logic to be found. There might be some validation logic and some jobs running in the background, but that's mostly it in terms of domain logic.

Enterprise projects are all about domain logic and integration of external systems. Most of the time a project starts with implementing all the views and domain classes which are considered to be simple, that is, not much domain knowledge has to be available in order to implement those views. But as time goes by, domain logic creeps in, and classes which seemed pretty small and granular in data manipulation style grow into oversized monsters.

Web frameworks like Grails make it easy to implement data manipulation style applications: if project members have a clear picture how to model the domain classes, they write the domain classes and all the view and controller stuff comes automatically with it.

If your project is moving towards being an enterprise project, there are some patterns that have helped my projects deal with the additional domain logic and complexity, and I want to share them in this article.

The Birth of Objects

The birth of an object is a special event: it is the time the object’s data starts to exist in memory. In order to create an object, one of the object’s constructors is used. A constructor is a special creation procedure which encapsulates object initialization and which ensures that at the end of the constructor call an object is in a state that satisfies the class invariant.

Okay, that is rather academic you might think, which is true for classes which resemble active records [0] – but not for real domain classes found in larger projects. If you are writing an enterprise project with Grails, you have to know that Grails per se is not meant to be used in projects like that, but rather in data manipulation projects.

Let’s assume one of our domain classes is a Customer domain class. If we create a Customer class with

grails create-domain-class org.ast.domain.entity.Customer

we’ll get a class that looks like

class Customer {
	
	

	static constraints = {
		
	}
}

At this point we could add all the properties that resemble a customer and we could think we’re done. But what would happen?

We would build nothing more than a class that resembles the active record pattern – at runtime, instances of that class are mainly treated like database records and all the domain logic is put into Grails service classes. But service classes are not meant to be the place where central business logic happens, because we would distribute business logic over multiple service classes, with each service holding several procedures that modify our active records – such a design screams for a better object-oriented design enforcing information hiding, inheritance, dynamic binding and polymorphism.

Let us come back to our Customer domain class and add some properties to it:

class Customer {
	
	String name
	Status status
	Account account

	static constraints = {
		name(blank: false)
		status(nullable: false)
		account(nullable: false)
	}
}

Clients of that class could now create customer objects in arbitrary ways:

def c = new Customer()
def c = new Customer(name: 'Max')
def c = new Customer(status: null)
def c = new Customer(status: Status.NEW, account: null)
// ...

If we are serious about object-orientation and the role of creation procedures, we should not support ways of creating objects that leave them in an invalid state after construction, e.g. creating a customer without a status. In order to support a more object-oriented design we could modify the Customer class like this:

@Invariant({ name?.size() > 0 && status && account })
class Customer {
	
	String name
	Status status
	Account account
	
	def Customer(String name, Account account)  {
		this.name = name
		this.status = Status.NEW
		this.account = account
	}

	void setName(final String other) { name = other }
	void setStatus(final Status other)  { status = other }
	void setAccount(final Account other) { account = other } 


	// Grails Specific 
	private def Customer() {}

	static constraints = {
		name(blank: false)
		status(nullable: false)
		account(nullable: false)
	}
}

First of all, we’ve redeclared setter methods to be only visible in package scope. This comes in handy if you modularize your domain logic in separate packages and want e.g. services to change the state of customer objects.

In addition, a custom constructor was added which is a way to create a Customer object being in a valid state after the constructor call. As it turns out, more complex classes would be candidates for factory methods [1]

@Invariant({ name?.size() > 0 && status && account })
class Customer {
	
	// ...
	
	def static Customer create(final String name, final Account account)  {
		return new Customer(name, account)
	}

	// ...
}

or the Builder pattern [2].

Well-defined creation procedures/constructors/factory methods are great, because there is a predefined way of creating a fully initialized, valid object. If we had to write an integration test for the Customer class and wanted some customer test instances, we would need to know what it takes to create a "valid" customer object – it might be simple in this case, but with more complex classes programmers could simply give up writing test cases based on the simple fact that it is not obvious to them how to create instances of some type.

Domain Logic comes in…

In the previous section, the Customer class was modified to be created with a custom constructor. Let us assume that the following business requirement comes in:

After 30 days, if the customer is in state REGISTERED, the application needs to send an e-mail to the customer, telling that the test-period ends in 5 days.

It is pretty obvious that we need a job that checks for customer accounts existing for >= 30 days. In addition, the application needs a way to integrate with an external SMTP server, which is a rather technological issue we don't care about now (although I did in the last blog post [3]). What's interesting at this point is where to put the additional domain logic.

A first naive approach would be to create a service class CustomerService which implements the business requirement:

class CustomerService {	
	
	static transactional = true
	
	def void sentTestPeriodEndsMail(final Customer customer)  {
		
		if (customer.state != State.REGISTERED) return
		if (customer.created + 30 > new Date())  return // 30 days have not passed yet
		
		customer.state = State.REGISTERED_WARNED
		customer.save()
		
		sendMailMessage(customer)
	}
	
	// ...
}

The code above implements exactly the business requirement, which is a good thing. But what if more domain logic creeps in? Let's say there is a more privileged type of customer, FriendOfMineCustomer, whose instances should never receive those mails. We would have to change our service class method to something like:

class CustomerService {	
	
	static transactional = true
	
	def void sentTestPeriodEndsMail(final Customer customer)  {
		
		if (customer instanceof FriendOfMineCustomer) return // <-- more domain logic comes in...
		if (customer.state != State.REGISTERED) return
		if (customer.created + 30 > new Date())  return // 30 days have not passed yet
		
		customer.state = State.REGISTERED_WARNED
		customer.save()
		
		sendMailMessage(customer)
	}
	
	// ...
}

Now we start to feel that this is obviously not a good design. Just assume we need to check whether a customer object is a candidate for receiving reminder messages in another service – we would need to duplicate the code above in that place. What is the reason we slipped into this design? The reason is that we treated customers as active records and created a service procedure that implemented the business requirement with a purely procedural design.

A better approach would be to put the additional domain logic in the Customer class itself:

@Invariant({ name?.size() > 0 && status && account })
class Customer {
	
	def boolean isTestPeriodEmailCandidate()  {
		return state == State.REGISTERED && created + 30 <= new Date()
	}

	// ...
}

Then we could simply override the method in FriendOfMineCustomer:

class FriendOfMineCustomer extends Customer {
	
	def boolean isTestPeriodEmailCandidate()  {
		return false
	}

	// ...
}

and the CustomerService class would simply be the integration glue between the core domain class and the messaging component:

class CustomerService {	
	
	static transactional = true
	
	def void sentTestPeriodEndsMail(final Customer customer)  {
		
		if (!customer.isTestPeriodEmailCandidate()) return
		
		customer.state = State.REGISTERED_WARNED
		customer.save()
		
		sendMailMessage(customer)
	}
	
	// ...
}

As you can see, code for the service class does not feel that complicated anymore. Changing the customer’s state should be externalized to another domain class like CustomerWorkflow:

class CustomerWorkflow {	
	
	State state
	Customer customer
	
	// ...
}

This would simplify the service class’ method code to:

class CustomerService {	
	
	static transactional = true
	
	def void sentTestPeriodEndsMail(final Customer customer)  {
		
		if (!customer.isTestPeriodEmailCandidate()) return
		
		customer.workflow.mailMessageSent()
		
		sendMailMessage(customer)
	}
	
	// ...
}
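
For completeness, the mailMessageSent method called above could encapsulate the state transition inside CustomerWorkflow itself. This is only a sketch of the idea, reusing the REGISTERED_WARNED state from the earlier service code:

class CustomerWorkflow {

	State state
	Customer customer

	void mailMessageSent() {
		// the state transition now lives next to the workflow data instead of in the service
		state = State.REGISTERED_WARNED
	}
}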

Keep in mind that there is no single best way on how to treat domain logic in enterprise applications, but one should at least know some basic patterns found in [4] or [5].

Entities vs. Value Objects

Another thing which is important for larger Grails projects is to recognize the difference between entities and simple value objects.

In Grails, each generated domain class is an entity – it can be uniquely identified. Value objects, on the other hand, do not have an identity; they are immutable and interchangeable. Value object classes could be enumerations, but also separate GORM domain classes. GORM supports embedding domain classes within other domain classes. The decision whether a certain class is a value object class or an entity class can change from domain to domain. An address might be a value object in one context, and an entity in another.

Defining value objects with domain classes is usually done with the static embedded property:

class Customer {	
	
	// ...
	Address homeAddress
	
	static embedded = ['homeAddress']
}

class Address  {
	
	String street
	// ...
}

Embedding classes has the advantage that developers don’t have to care about cascading CRUD operations, since data is directly embedded in the parent table at the database level. Therefore, value object classes simplify domain class relationships.

The Role of Repositories

A repository is responsible for loading persistent objects from arbitrary data stores. In Grails, this functionality is mostly encapsulated with GORM generated methods, available as static methods in each domain class:

def customers = Customer.findAllByState(State.NEW, [max: 10])
// ...

To be honest, outsourcing data access logic into a separate repository class does not seem reasonable with GORM. At least one use case would justify separate repository classes, though: complex data graph deletion.

If the domain model has complex relationships, the best strategy is to define aggregate objects. An aggregate object is the root object of a graph of associated objects. Clients access objects in the aggregate's graph only through accessor methods on the aggregate object. It is not possible to modify objects in the data graph directly without using aggregate methods.

This pattern simplifies relationships within the domain model and therefore reduces complexity to handle the object graphs and relationships between them. Overall, modification/deletion operations are hidden and encapsulated by the root entity of the aggregate.
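
As a small sketch of the idea (the Order/OrderLine/Product classes are illustrative and not taken from this article):

class Order {                            // aggregate root
	static hasMany = [lines: OrderLine]  // associated objects live inside the aggregate

	// clients never touch 'lines' directly, they go through aggregate methods
	void addLine(Product product, int quantity) {
		addToLines(new OrderLine(product: product, quantity: quantity))
	}

	void removeLine(OrderLine line) {
		removeFromLines(line)            // deletion is controlled by the aggregate root
		line.delete()
	}
}

class OrderLine {
	static belongsTo = [order: Order]
	Product product
	int quantity
}

class Product {
	String name
}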

Conclusion

This article does not claim to offer a complete overview of best practices in Grails enterprise projects, but reflects best practices that I have found useful in the projects I have participated in.

I would be happy to discuss other views or other patterns in the comments section of this article.

[0] Active Record Pattern – http://martinfowler.com/eaaCatalog/activeRecord.html
[1] Factory Method Pattern – http://en.wikipedia.org/wiki/Factory_method_pattern
[2] Builder Pattern – http://en.wikipedia.org/wiki/Builder_pattern
[3] Integration and Grails – https://andresteingress.wordpress.com/2010/05/11/integration-and-grails/
[4] Domain Driven Design – Eric Evans
[5] Patterns of Enterprise Application Architecture – Martin Fowler

Categories: basic, grails, patterns

Getting the Persistence Context Picture (Part III)

April 20, 2010


Part 3 of this series deals with more advanced topics, requiring knowledge about persistence patterns and Hibernate APIs.

  • [0] Getting the Persistence Context Picture (Part I)
  • [1] Getting the Persistence Context Picture (Part II)

Conversational State Management

One advanced use case when using persistence frameworks is realization of conversations.

A conversation spans multiple user interactions and, most of the time, realizes a well-defined process. The best way to think of a conversation is to think of some kind of wizard, e.g. a newsletter registration wizard.

A Newsletter Registration Conversation

A newsletter registration wizard typically spans multiple user interactions, whereas each interaction needs user input and further validation to move on:

  1. a user needs to provide basic data, e.g. firstname, lastname, birthdate, etc.
  2. a user needs to register for several newsletter categories
  3. a user gets a summary and needs to confirm that information

Each user interaction is part of the overall newsletter registration process. Technically speaking, if the user aborts the process at some point, or an unrecoverable error occurs, this must have no consequences on the underlying persistent data structures. E.g. if a user registered for a newsletter (step 2) and then abandons the registration by closing the browser window and the HTTP session times out, the registration and the newly created newsletter user need to be rolled back.

A first naive approach to realizing conversations is to use a single database transaction. Modern applications hardly use that approach because it's error-prone and not justifiable in terms of performance. In order to really get a grasp of the problems we would face, let us take a look at some basics of database transactions.

A Small Intro to Database Transactions

Whenever a database transaction gets started, all data modification is tracked by the database. For example, in the case of MySQL (InnoDB) databases, pages (think of a special data structure) are modified in a buffer pool and modifications are tracked in a redo log which is kept in sync with the disk. When a transaction is committed, the dirty pages are flushed out to the filesystem; if the transaction is rolled back instead, the dirty pages are removed from the pool and the changes are undone.

It depends on the current transaction isolation level whether the current transaction has access to changes made by transactions executed in parallel (more details on MySQL transactions can be found at [2]). MySQL's default isolation level is "repeatable read": all reads within the same transaction return the same results – even if another transaction might have changed the data in the meantime. InnoDB (a transactional MySQL storage engine, integrated into the MySQL server) achieves this behavior by creating a snapshot when the first query is executed.

The other isolation levels (conforming to the SQL-92 standard) are: "read uncommitted" > "read committed" > "repeatable read" > "serializable". The order represents the amount of locking that is necessary to realize the respective isolation level.

A Naive Approach

Single DB Transaction

Back to conversational state management: as mentioned above, a naive approach would be to use a single database transaction for a single conversation. This approach apparently has many problems:

  • if data is modified and DML statements are generated, locks are usually created, preventing other transactions from changing that data.
  • databases are designed to keep transactions as short as possible; a transaction is seen as an atomic unit, not a long-living session, and long-running transactions are typically discarded by the database management system.
  • especially in web applications, it is hard for an application to detect conversation aborts – when the user closes the browser window in the middle of a transaction, or kills the browser process, there is hardly a chance for the application to detect that circumstance.
  • a transaction is typically linked to a database connection, and the number of database connections available to an application is typically limited.

As you can see, spanning a conversation with a database transaction is not an option. But a pattern already known from the previous articles comes to rescue: the persistence context.

Extended Persistence Context Pattern

As we’ve already seen in the second part of this series [1] Grails uses a so-called Session-per-Request pattern.

Session per Request Pattern

Whenever a controller's method is called, a new Hibernate session spans the method call and, with the flush mode set to manual, the view rendering. When the view rendering is done, the session is closed. Of course, this pattern is not an option when implementing conversations, since changes in a controller's method call are committed on the method's return. One could bypass Grails' standard behavior using detached objects, but let me tell you: life only gets more complicated when detaching modified objects – especially in advanced domain models.
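
Just to illustrate that detached-object alternative: re-associating a detached domain instance in a later request could look roughly like this (a sketch using GORM's isAttached/attach methods; keeping the userInstance in the HTTP session is an assumption):

def user = session.userInstance      // 'session' is the HTTP session here, holding the detached instance
if (!user.isAttached()) {
    user.attach()                    // re-associate the instance with the current Hibernate session
}
user.save(flush: true)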

What we will need to implement a conversation is a mechanism that spans the persistence context over several user requests, that pattern is called: the extended persistence context pattern.

Extended Persistence Context

An extended persistence context reuses the persistence context for all interactions within the same conversation. In Hibernate speak: we need to find a way to (re)use a single org.hibernate.Session instance for conversation lifetime.

Fortunately, there is a Grails plugin which serves that purpose perfectly: the web flow plugin.

Conversational Management with Web Flows

The Grails web flow plugin is based on Spring Web Flow [3]. Spring Web Flow uses XML configuration data to specify web flows:

<flow xmlns="http://www.springframework.org/schema/webflow"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/webflow
  http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd">

  <view-state id="enterBasicUserData">
    <transition on="submit" to="registerForNewsletters" />
  </view-state>
	
  <view-state id="registerForNewsletters">
    <transition on="submit" to="showSummary" />
    <!-- ... -->
  </view-state>

  <view-state id="showSummary">
    <transition on="save" to="newsletterConfirmed" />
    <transition on="cancel" to="newsletterCanceled" />
  </view-state>  
	
  <end-state id="newsletterConfirmed" >
    <output name="newsletterId" value="newsletter.id"/>
  </end-state>

  <end-state id="newsletterCanceled" />		
</flow>	

Grails uses its own web flow DSL, implemented in org.codehaus.groovy.grails.webflow.engine.builder.FlowBuilder. This approach has the advantage of being tightly integrated into the Grails controller concept:

// ...
def newsletterRegistrationFlow = {
  step1 {
    on("save")   {
      def User userInstance = new User(params)
      flow.userInstance = userInstance

      if (!userInstance.validate()) {
        log.error "User could not be saved: ${userInstance.errors}"
            
        return error()
      }
    }.to "step2"
  }

  step2 {
    on("save")  {
      def categoryIds = params.list('newsletter.id')*.toLong()
      // ...
      def User userInstance = flow.userInstance
      newsletterService.registerUserForNewsletterCategories(userInstance, categoryIds)
      // ...
   }.to "step3"
  }

  step3()  {
     on("save")  {
        def userInstance = flow.userInstance
        // ...
        userInstance.save()
     }
  }
}

In this case, the closure property newsletterRegistrationFlow is placed in a dedicated controller class and is automatically recognized by the web flow plugin. The plugin is responsible for instantiating a Grails web flow builder object which needs a closure as one of its input parameters.

Leaving the DSL aside, the best thing about web flows is that they realize the extended persistence context, aka flow managed persistence context (FMPC). The HibernateFlowExecutionListener is the place where the Hibernate session is created and then reused over multiple user interactions. It implements the FlowExecutionListener interface.

The flow execution listener provides callbacks for various states in the lifecycle of a conversation. Grails HibernateFlowExecutionListener uses these callbacks to implement the extended persistence context pattern. On conversation start, it creates a new Hibernate session:

public void sessionStarting(RequestContext context, FlowSession session, MutableAttributeMap input) {
	// ...
	Session hibernateSession = createSession(context);
	session.getScope().put(PERSISTENCE_CONTEXT_ATTRIBUTE, hibernateSession);
	bind(hibernateSession);
	// ...
}

Whenever the session is paused, in between separate user requests, it is disconnected from the current database connection:

public void paused(RequestContext context) {
	if (isPersistenceContext(context.getActiveFlow())) {
		Session session = getHibernateSession(context.getFlowExecutionContext().getActiveSession());
		unbind(session);
		session.disconnect();
	}
}

Whenever resuming the current web flow, the session is connected with the database connection again. Whenever a web flow has completed its last step, the session is resumed and all changes are flushed in a single transaction:

public void sessionEnding(RequestContext context, FlowSession session, String outcome, MutableAttributeMap output) {
	// ...
	final Session hibernateSession = getHibernateSession(session);
	// ...
	transactionTemplate.execute(new TransactionCallbackWithoutResult() {
		protected void doInTransactionWithoutResult(TransactionStatus status) {
			sessionFactory.getCurrentSession();
		}
	});

	unbind(hibernateSession);
	hibernateSession.close();
	// ...
}

A call to sessionFactory.getCurrentSession() causes the current session to be connected with the transaction and, at the end of the transaction template, committing all changes within that transaction. All changes which have been tracked in-memory so far, are by then synchronized with the database state.

The price to be paid for conversations is higher memory consumption. In order to estimate the associated cost, we need to take a closer look at how Hibernate realizes loading and caching of entities. Beyond implementing conversations, memory consumption is especially important in Hibernate-based batch jobs.

Using Hibernate in Batch Jobs

The most important thing when working with Hibernate is to remember: the persistence context references all persistent entities loaded, but entities don’t know anything about it. As long as the persistence context is alive it does not discard references automatically.

This is particularly important in batch jobs. When executing queries with large result sets you have to clear the Hibernate session manually, otherwise the program will eventually run out of memory:

for (def item : newsletters)  {
  // process item...
  if (++counter % 50 == 0)  {
    session.flush()
    session.clear()
  }
  // ...
}

Session provides a clear method that detaches all persistent objects being tracked by this session instance. If you only want to remove selected persistent objects from a particular session, use evict on those specific instances.
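
For example, evicting a single instance might look like this (a small sketch; the Newsletter class and the identifier are just assumptions):

def newsletter = session.get(Newsletter, 42L)
// ... process the newsletter ...
session.evict(newsletter)   // remove just this instance from the persistence context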

In this context, it might be worth to take a look at Hibernate’s StatefulPersistenceContext class. This is the piece of code that actually implements the persistence context pattern. As you can see in the following code snippet, invoking clear removes all references to all tracked objects:

public void clear() {
	// ...
	entitiesByKey.clear();
	entitiesByUniqueKey.clear();
	entityEntries.clear();
	entitySnapshotsByKey.clear();
	collectionsByKey.clear();
	collectionEntries.clear();
	// ...
}

Another thing to notice when executing large result sets and keeping persistence contexts in memory is that Hibernate uses state snapshots to recognize modifications on persistent objects (remember how InnoDB realizes repeatable-read transaction isolation;-)).

Whenever a persistent object is loaded, Hibernate creates a snapshot of the current state and keeps that snapshot in internal data-structures:

// ..
EntityEntry entry = getSession().getPersistenceContext().getEntry( instance );
if ( entry == null ) {
	throw new AssertionFailure( "possible nonthreadsafe access to session" );
}
		
if ( entry.getStatus()==Status.MANAGED || persister.isVersionPropertyGenerated() ) {

	TypeFactory.deepCopy(
			state,
			persister.getPropertyTypes(),
			persister.getPropertyCheckability(),
			state,
			session
	);
// ...

If you don't want Hibernate to create snapshot objects, you have to use read-only queries or objects. Marking a query as read-only is as easy as calling its setReadOnly(true) method. In read-only mode, no snapshots are created and modified persistent objects are not marked as dirty.

Newsletter.withSession { org.hibernate.classic.Session session ->

    def query = session.createQuery("from Newsletter").setReadOnly(true)
    def newsletters = query.list()

    for (def item : newsletters)  {
        // ...
    }
}

If your batch accesses the persistence context with read access only, there is another way to optimize DB access: using a stateless session. SessionFactory has an openStatelessSession method that creates a fully stateless session, without caching, modification tracking etc. In Grails, obtaining a stateless session is nothing more than injecting the current sessionFactory bean and calling openStatelessSession on it:

def Session statelessSession = sessionFactory.openStatelessSession()
statelessSession.beginTransaction()

// ...

statelessSession.getTransaction().commit()
statelessSession.close()

In combination with stateless session objects, it is worth mentioning that if you want to modify data there is an interface to do that even when working with stateless sessions:

public void doWork(Work work) throws HibernateException;

Where interface Work has a single method declaration:

public interface Work {
	/**
	 * Execute the discrete work encapsulated by this work instance using the supplied connection.
	 *
	 * @param connection The connection on which to perform the work.
	 * @throws SQLException Thrown during execution of the underlying JDBC interaction.
	 * @throws HibernateException Generally indicates a wrapped SQLException.
	 */
	public void execute(Connection connection) throws SQLException;
}

As you can see, execute gets a reference to the current Connection which, in the case of JDBC connections, can be used to formulate raw SQL queries.
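
A minimal usage sketch, coercing a Groovy closure to Hibernate's Work interface (the SQL statement and table name are made up for illustration):

import org.hibernate.jdbc.Work

session.doWork({ java.sql.Connection connection ->
    def stmt = connection.prepareStatement("update newsletter set read_count = read_count + 1")
    stmt.executeUpdate()
    stmt.close()
} as Work)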

If your batch is processing large chunks of data, paging might be interesting too. Again, this can be done by setting the appropriate properties of Hibernate’s Query class.

// ...
def Query query = session.createQuery("from Newsletter")
query.setFirstResult(0)
query.setMaxResults(50)
query.setReadOnly(true)
query.setFlushMode(FlushMode.MANUAL)
// ...

The code snippet above explicitly sets the flush mode to “manual”, since flushing does not make sense in this context (all retrieved objects are readonly).

A similar API can be found in the Criteria class, which is supported in Grails by its own Criteria Builder DSL [6].

Conclusion

As you can see, there are various options for using Hibernate even for batch processing of large data sets. Programmers are not restricted to using predefined methodologies, although understanding the fundamental patterns is a crucial point. Adjusting Hibernate's behavior and generated SQL is a matter of knowing the right extension points.

I hope you had a good time reading this article series. I know a lot of things have been left unsaid, but if you are missing something or want to gain more insight into a particular topic related to Hibernate, GORM, Grails etc., just drop a comment – I'll try to take it up in one of the following blog posts.

[0] Getting the Persistence Context Picture (Part I)
[1] Getting the Persistence Context Picture (Part II)
[2] MySQL InnoDB Transactions
[3] Spring Web Flow Project
[4] Hibernate – Project Home Page
[5] Hibernate Documentation – Chapter: Improving Performance
[6] Criteria Builder DSL

Categories: basic, grails, hibernate, patterns

Getting the Persistence Context Picture (Part II)

April 8, 2010


The first article of this series [0] took a look at the basic patterns found in today's persistence frameworks. In this article we will have a look at how Hibernate's APIs relate to those patterns and how Hibernate is utilized in Grails.

A Closer Look at Hibernate’s Persistence Context APIs

All data creation and modification has to be done in a persistence context. A persistence context is a concrete software element that maintains a list of all object modifications and, in a nutshell, synchronizes them with the current database state at the end of the persistence context's lifetime or business transaction.

When developing with a modern web framework – as Grails is – it is most likely you don’t even have to care about opening a persistence context or closing it, or even know about how this could be done.

But as application complexity grows, you have to know Hibernate's persistence context APIs and understand how they are integrated within the web application framework of your choice. Let us take a look at the most important APIs and how they correspond to the persistence patterns.

The Registry Pattern or org.hibernate.SessionFactory

The SessionFactory class implements the registry pattern. A registry is used by the infrastructure-layer to obtain a reference to the current persistence context, or to create a new one if not found in the current context.

Usually, as noted in SessionFactory's Java documentation, a session-factory refers to a single persistence provider. Most of the time, applications need just a single session-factory. Indeed, if an application wanted to work across multiple databases, it would have to maintain multiple SessionFactory instances, one for each database.

Imagine a session-factory to be a configuration hub – it is the place where all configuration settings are read and used for constructing persistence contexts.

In a Grails application, the application’s session-factory can be easily obtained by declaring a property of type org.hibernate.SessionFactory:


class SomeService {

    SessionFactory sessionFactory

    void myServiceMethod()  {
        def session = sessionFactory.getCurrentSession()
        // ...
    }
}

The Grails application's session-factory is then injected by dependency injection, since every Grails service class is a Spring-managed component. Other components include controllers, domain classes and custom beans (either in beans.groovy, beans.xml, other bean definition XMLs or annotated Groovy/Java classes).

A Grails application’s session-factory is set-up by the HibernatePluginSupport class, which is a utility class used by Grails hibernate plugin. When taking a look at the source code you’ll find out that the code uses Grails Spring builder DSL to declare a ConfigurableLocalSessionFactoryBean. This type of bean is usually used in Spring applications to create a Hibernate session-factory instance during application bootstrap and to keep track of it during the entire life-time of the application-context.

//...
sessionFactory(ConfigurableLocalSessionFactoryBean) {
    dataSource = dataSource
    // ...
    hibernateProperties = hibernateProperties
    lobHandler = lobHandlerDetector
}
//...

By the way, if we had to create a session-factory within a plain Groovy application, it wouldn't be much harder:


import org.hibernate.cfg.Configuration

def configuration = new Configuration()
    .setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLInnoDBDialect")
    .setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/test")
    .setProperty("hibernate.order_updates", "true")
    // ...

def sessionFactory = configuration.buildSessionFactory()

The Configuration class can be used to configure a session-factory programatically, other options would be to use a properties-file or an XML file (named hibernate.properties or hibernate.cfg.xml). If using any of the file-based configurations, take care your configuration file can be loaded by the current class-loader, therefore put it in the class-path’s root directory.

Grails' configuration of Hibernate's session-factory is pretty much hidden from application developers. In order to provide a custom hibernate.cfg.xml, just put it in the grails-app/conf/hibernate folder.

The Persistence-Context Pattern or org.hibernate.Session

The Session builds the heart of Hibernate: it resembles a persistence context. Thus, whenever the application needs to access object-mapping functionality in either form, it needs to work with an instance of type Session.

The Session interface provides a bunch of methods letting the application infrastructure interact with the persistence context:

        // ...
	public Query createQuery(String queryString) throws HibernateException;

	public SQLQuery createSQLQuery(String queryString) throws HibernateException;

	public Query createFilter(Object collection, String queryString) throws HibernateException;

	public Query getNamedQuery(String queryName) throws HibernateException;

	public void clear();

	public Object get(Class clazz, Serializable id) throws HibernateException;

	public void setReadOnly(Object entity, boolean readOnly);

	public void doWork(Work work) throws HibernateException;

	Connection disconnect() throws HibernateException;

	void reconnect() throws HibernateException;

	void reconnect(Connection connection) throws HibernateException;
        // ...

Whenever e.g. a query is created by one of the querying methods, all objects which are retrieved are automatically linked to the session. For each attached (Hibernate term for “linked”) object, the session holds a reference and meta-data about it. Whenever a transient Groovy object is saved, it gets automatically attached to the current session. Notice that this is a unidirectional relationship: the session knows everything whereas the attached object instances don’t know anything about being linked to a session.

Lazy and Eager Loading

In regard to attaching objects to the current session, you need to know the concepts of lazy and eager loading of object relationships.

Whenever a persistent class A references another persistent class B, A is said to have a relationship with B. Object relationships are mapped either with foreign keys or relationship tables in the underlying database schema. By default, Hibernate uses a lazy loading approach: whenever a root object with relationships to other objects is loaded, the related objects are not loaded along with it. The alternative is eager loading, where the related objects are loaded together with the root object.

[Figure: Lazy vs. Eager Loading]

Lazy loading does not hurt as long as objects are attached to a persistence context. However, once the persistence context is closed, there is no way to navigate over a lazily loaded relationship: whenever application code accesses such a relationship on a detached object, it runs into a lazy loading exception.
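In GORM the fetching strategy can be tuned per association in the mapping block. A sketch, with Author and Book as assumed domain classes:

class Author {
    String name

    static hasMany = [books: Book]

    static mapping = {
        books lazy: false   // fetch the books association eagerly instead of the default lazy fetching
    }
}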

Obtaining a Session

By default, a session instance is obtained from a session-factory instance:


def session = sessionFactory.openSession()
def tx = session.beginTransaction()

// ... work with the session

tx.commit()
session.close()

As is the case with the code sample above, most of the time application code works in a transactional context, i.e. the current method is executed within a single transaction. Therefore, it is a common idiom to begin a transaction right after opening a session, although this is not enforced by Hibernate’s API. If we did not want to use transaction boundaries, we could simply omit the call to beginTransaction:


def session = sessionFactory.openSession()

// ... work with the session

session.close()

You need to be careful in this scenario. When Hibernate obtains a JDBC connection, it automatically turns autocommit mode off by calling jdbcConnection.setAutoCommit(false), which is the JDBC way of telling the database to start a new transaction. However, how the database driver reacts to a pending transaction that is never explicitly committed or rolled back is not specified, so the application runs into undefined behavior.
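You can observe this on the underlying connection via doWork, one of the methods in the interface excerpt above. A minimal sketch, assuming an open sessionFactory:

import org.hibernate.jdbc.Work
import java.sql.Connection

def session = sessionFactory.openSession()
session.doWork({ Connection conn ->
    // per the behavior described above, Hibernate has switched autocommit off
    assert !conn.autoCommit
} as Work)
session.close()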

General Session-Handling in Grails

As manual session handling can get tricky, web frameworks like Grails hide most of these problems. Grails’ Object Relational Mapping (GORM) layer is a thin layer on top of Hibernate 3. Grails uses this layer to enrich domain classes with a lot of DAO-like functionality: for example, so-called dynamic finders are added to each domain class, which most of the time completely replace the need for hand-written data access objects (DAOs). Handling of Hibernate sessions is largely hidden by GORM, which internally builds on Spring’s Hibernate integration.
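A few dynamic finders as a sketch, assuming a User domain class with login and age properties:

def user  = User.findByLogin('jdoe')                 // single result or null
def users = User.findAllByAgeGreaterThan(18)         // list of matching instances
def count = User.countByLogin('jdoe')                // number of matches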

Whenever a GORM query is executed, Grails internally creates a HibernateTemplate. A HibernateTemplate is a neat way to get well-defined access to a Hibernate session: it completely hides obtaining the session-factory and retrieving a session. Clients only need to implement callback methods, which are then called when the template is executed. Let’s take a look at how such templates are used when executing a dynamic finder method like findBy.


class SomeController {
    def myAction()  {
        User user = User.findByLogin(params.id)
        // ...
    }
}

When invoking the static findBy dynamic finder method, the following code is executed:


// ...
return super.getHibernateTemplate().execute( new HibernateCallback() {
    public Object doInHibernate(Session session) throws HibernateException, SQLException {
        Criteria crit = getCriteria(session, additionalCriteria, clazz);
        // ... do some criteria building

        final List list = crit.list();
        if (!list.isEmpty()) {
            return GrailsHibernateUtil.unwrapIfProxy(list.get(0));
        }
        return null;
    }
});

As can be seen, Grails internally does nothing more than create a Spring HibernateTemplate; in its doInHibernate callback it builds a plain Hibernate Criteria object which is used to specify the object query. Spring hides the code for finding the current session and setting its properties according to the current program context; GORM adds the functionality on top and does all the Groovy meta-programming (adding static methods etc. to the domain class’s meta class).
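Stripped of the template plumbing, the generated callback roughly boils down to a plain Criteria query. A sketch, assuming a mapped User class and an open session:

import org.hibernate.criterion.Restrictions

def crit = session.createCriteria(User)
crit.add(Restrictions.eq('login', 'jdoe'))   // the dynamic finder's property/value pair
crit.maxResults = 1                          // findBy only returns the first match
def user = crit.uniqueResult()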

The same is true for saving domain objects using GORM’s save method:

protected Object performSave(final Object target, final boolean flush) {
    HibernateTemplate ht = getHibernateTemplate();
    return ht.execute(new HibernateCallback() {
        public Object doInHibernate(Session session) throws HibernateException, SQLException {
            session.saveOrUpdate(target);
            if (flush) {
                // ...
                getHibernateTemplate().flush();
                // ...
            }
            return target;
        }
    });
}

Session Flushing

Since a session spans a business transaction (remember: not the same as a technical transaction), it might be kept open by the infrastructure layer over several user-interaction requests. The application needs to ensure that at some point the session’s state is synchronized with the database, which is called flushing the session, and that the session is eventually closed.

As we have already seen in the Grails source code above, flushing is mainly handled by the web framework, but programmers should know the points where it actually happens:

  • whenever a Transaction gets committed
  • before a query is executed
  • if session.flush() is called explicitly

Be aware that flushing is a costly operation in a persistence context, as Hibernate needs to synchronize the current object model in memory with the database. Programmers can change the default behavior described above by setting a different flush mode on the current session:

import org.hibernate.FlushMode

def session = sessionFactory.openSession()
session.setFlushMode(FlushMode.NEVER)

FlushMode.NEVER means that automatic session flushing is deactivated; only explicit calls to session.flush() trigger it.
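A small sketch of what that means in practice, assuming a mapped User class and the session from above:

def user = session.get(User, 1L)
user.login = 'new-login'          // the change is only held in memory for now

// with FlushMode.NEVER neither queries nor a transaction commit flush the session;
// the modification reaches the database only after an explicit flush
session.flush()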

In Grails, session flushing is done after each controller call, due to Spring’s OpenSessionInView interception mechanism. In order to still be able to access lazy-loaded properties in GSP pages, the session is not closed immediately after the controller action returns but only after response rendering is done. Therefore, the interceptor sets the session’s flush mode to FlushMode.NEVER once the controller action has completed, to avoid DB modifications caused by GSP page code.

Another place where sessions get flushed is at the end of each service method call (as long as the service is marked as transactional or is annotated with @Transactional):

// declarative variant: the static 'transactional' property marks the whole service as transactional
class SomeService {
    static transactional = true

    def someMethod()  {
       // ... transactional code
    }
}

// annotation-based variant: individual methods are annotated with @Transactional
class SomeService {

    @Transactional
    def someMethod()  {
       // ... transactional code
    }
}

When writing integration tests for Grails classes, you need to keep these flush points in mind. To make things even more complicated, there is one additional thing that is different in integration tests: each integration test method runs in its own transaction, which is rolled back at the end by the Grails testing classes. If you are testing a controller’s save method, for example, chances are you won’t find an SQL INSERT or UPDATE statement in the database logs. This is the intended behavior, but it causes confusion when bugs related to persistence issues need to be reproduced by test-cases.

Speaking of transactions in integration tests: there is a way to deactivate the transactional behavior there:

class SomeTest extends GrailsTestCase {
    static transactional = false

    @Test
    void testWithoutTransactionBoundary()  {
       // ... transactional code
    }
}

Summary

In this article we took a look at how Grails and GORM handle Hibernate’s basic APIs: the SessionFactory and the Session. The next article in this series will deal with more advanced features: GORM in batch jobs and conversational state management.

[0] Getting the Persistence Context Picture (Part I)
[1] Hibernate Project
[2] Spring Hibernate Integration
[3] Grails GORM Documentation

Categories: basic, grails, hibernate, patterns