Tuesday 16 February 2016

Questions I would like to be able to ask my application…

More about the reasoning behind this post later…

  1. Show me the use cases that the application supports, grouped in some logical or physical way;
  2. Given that the application has been in existence for [n] [unit of time], show me the application’s use cases and whether their elapsed execution time today, in a standardised environment, has increased, decreased or stayed the same compared with their execution [n] [unit(s) of time] ago;
  3. If an index is added to [Table], which use cases will have a longer execution time, which a shorter execution time, and which will be unaffected?

Monday 30 November 2015

Using NHibernate TableGenerator with Fluent Mapping

I’ve been using one of NHibernate’s ‘Enhanced’ Id generators for a while:
<id name="Id">
    <generator class="NHibernate.Id.Enhanced.TableGenerator">
        <param name="prefer_entity_table_as_segment_value">true</param>
        <param name="table_name">Keys</param>
        <param name="value_column_name">NextKey</param>
        <param name="segment_column_name">Type</param>
        <param name="optimizer">pooled-lo</param>
        <param name="increment_size">6</param>
    </generator>
</id>
I spent a little time figuring out how to get the equivalent working with a ‘Fluent’ mapping:
this.Id(f => f.Id)
    .GeneratedBy
    .Custom<NHibernate.Id.Enhanced.TableGenerator>(
        p =>
            {
                p.AddParam("prefer_entity_table_as_segment_value", "true");
                p.AddParam("table_name", "Keys");
                p.AddParam("value_column_name", "NextKey");
                p.AddParam("segment_column_name", "Entity");
                p.AddParam("optimizer", "pooled-lo");
                p.AddParam("increment_size", "8");
            });
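For context, the fluent snippet lives inside a Fluent NHibernate ClassMap. Here's a minimal sketch, assuming a hypothetical Customer entity with Id and Name properties:
// Sketch: a hypothetical Customer entity mapped with Fluent NHibernate.
// With this configuration the generator allocates ids in blocks of
// increment_size from the Keys table, using the entity's table name as the
// segment value in the Entity column and NextKey as the value column.
public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Table("Customer");

        Id(f => f.Id)
            .GeneratedBy
            .Custom<NHibernate.Id.Enhanced.TableGenerator>(
                p =>
                    {
                        p.AddParam("prefer_entity_table_as_segment_value", "true");
                        p.AddParam("table_name", "Keys");
                        p.AddParam("value_column_name", "NextKey");
                        p.AddParam("segment_column_name", "Entity");
                        p.AddParam("optimizer", "pooled-lo");
                        p.AddParam("increment_size", "8");
                    });

        Map(f => f.Name);
    }
}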
When I first used this generator, it was undocumented. The documentation has now been updated. These generators are an interesting alternative to HiLo.

Thursday 19 November 2015

In-line Decoration

Often, where a method has many non-functional concerns, it’s easy for the core intent to be lost in noise. Take the following, extremely contrived, example:
public static void NoisyMain(string[] args)
{
    Console.WriteLine("Logging started...");

    try
    {
        using (var transaction = new Transaction())
        {
            transaction.Begin();

            try
            {
                Console.WriteLine("Body");                        

                transaction.Commit();
            }
            catch (Exception)
            {
                transaction.Rollback();

                throw;
            }    
        }
    }
    catch (Exception)
    {
        Console.WriteLine("There was a problem...");

        throw;
    }

    Console.WriteLine("Logging completed.");
}
It’s only the Console.WriteLine("Body") line that has any application function, yet it’s swamped by code. While the surrounding code is essential, it arguably reduces the maintainability of this method: the logging and transaction-management code must be negotiated before arriving at the method’s essence.

A technique I’ve started to use to improve the signal-to-noise ratio is in-line decoration. It takes the concept of the Decorator design pattern and implements it using .Net delegates. This style of implementation allows a fluent interface to be used that supports chaining and, I think, improves readability:
public static void EssentialMain(string[] args)
{
    Decorate.With.Logging(() =>
        Decorate.With.CommittedTransaction(
            tx =>
                {
                    Console.WriteLine("Body");
                }));
}
The root of the implementation is the Decorate class:
public class Decorate
{
    public static Decorate With
    {
        get
        {
            return new Decorate();
        }
    }
}
It’s designed to provide a foundation for extension methods that will support the various decorating aspects:
public static class LoggingDecorator
{
    public static void Logging(
        this Decorate baseDecorator, 
        Action thisAction)
    {
        Console.WriteLine("Logging started...");

        try
        {
            thisAction();
        }
        catch (Exception)
        {
            Console.WriteLine("There was a problem...");

            throw;
        }

        Console.WriteLine("Logging completed.");
    }
}

public static class TransactionDecorator
{
    public static void CommittedTransaction(
        this Decorate baseDecorator, 
        Action<object> thisAction)
    {
        Console.WriteLine("Transaction started...");

        var currentTransaction = new object();

        try
        {
            thisAction(currentTransaction);

            Console.WriteLine("Transaction committed.");
        }
        catch (Exception)
        {
            Console.WriteLine(
                "Something went wrong. Transaction rolled back.");

            throw;
        }
    }
}
Each decorating aspect can be represented in its own class. This allows the Decorate class to become a point of extensibility where future aspects can be added without modifying any existing code. Once the extension methods are written and the appropriate namespace is imported, the new decorating methods are available.
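To illustrate that extension point, here's a minimal sketch of an additional aspect, a hypothetical timing decorator written in the same style (the Timed name and its behaviour are my own assumptions, not part of the original code):
public static class TimingDecorator
{
    // Hypothetical aspect: times the wrapped action with a Stopwatch.
    public static void Timed(
        this Decorate baseDecorator,
        Action thisAction)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();

        try
        {
            thisAction();
        }
        finally
        {
            stopwatch.Stop();

            Console.WriteLine(
                "Elapsed: {0} ms", stopwatch.ElapsedMilliseconds);
        }
    }
}
It chains with the existing aspects in exactly the same way, for example Decorate.With.Logging(() => Decorate.With.Timed(() => Console.WriteLine("Body"))).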

It's worth noting that while this technique can certainly reduce the noise in methods like these, it has its downsides. It can make debugging more difficult, as the pathway through the code is less clear. It is also vulnerable to the very problem it tries to solve: chain too many decorators and the essence of the code's intent becomes just as hard to decipher as in the original version.

Thursday 18 June 2015

Contextual NHibernate Sessions

When designing classes that will use an NHibernate session, the typical approach would be to inject the session via the class's constructor:
private readonly ISession _session;

public MyClass(ISession session)
{
    _session = session;
}
This works well when the object's lifetime is short: the object is created, it does something and then it's disposed. However, this approach doesn't work so well when an object's lifetime might be lengthy or unpredictable. For example, in a Windows Forms application, a navigation presenter that depends on an NHibernate session might live for the lifetime of the application. If the session is injected into this presenter at construction, the session also lives for the lifetime of the application. Furthermore, the presenter cannot share a session with other components for the duration of a use case/business transaction (the ideal scope for a session in a smart client application), because its session may have been injected long before those use cases begin.

To address this problem, NHibernate supports the concept of contextual sessions. This feature works by creating a session and then 'binding' it to the SessionFactory. Classes that want to use a session within a shared context access the session via the SessionFactory's GetCurrentSession() method, rather than opening a new session via the OpenSession() method.

To use contextual sessions, the functionality must be enabled via a config parameter:
 <property name="hibernate.current_session_context_class">call</property>  
I'm using the 'call' context in this case; the documentation has more detail on this and the other contexts available. With this setting in place, the static CurrentSessionContext class can be used to define the context of a session and bind it to the SessionFactory:
 var session = sessionFactory.OpenSession();  
 CurrentSessionContext.Bind(session);  
Now any objects that want to use the session within the defined context can access it via the SessionFactory, rather than opening their own:
 var session = sessionFactory.GetCurrentSession();  
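Putting the pieces together, here's a minimal sketch of scoping a session to a use case/business transaction; the UseCaseSessionScope class is illustrative rather than an existing NHibernate type:
// Sketch: one session per use case, shared via the 'call' session context.
public class UseCaseSessionScope : IDisposable
{
    private readonly ISessionFactory _sessionFactory;
    private readonly ISession _session;

    public UseCaseSessionScope(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;

        // Open a session at the start of the use case and make it current.
        _session = _sessionFactory.OpenSession();
        CurrentSessionContext.Bind(_session);
    }

    public void Dispose()
    {
        // Unbind and dispose the session when the use case completes.
        CurrentSessionContext.Unbind(_sessionFactory);
        _session.Dispose();
    }
}
Any component that runs while the scope is alive can call sessionFactory.GetCurrentSession() and will share the same session.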

Tuesday 28 April 2015

Is a test that tests intent a valid test?

Is it valid to test intent rather than function?
container.Register(
    Classes
        .FromThisAssembly()
        .BasedOn<Controller>()
        .WithServiceBase()
        .WithServiceSelf()
        .LifestylePerWebRequest()
);
This code registers MVC controllers with a Castle Windsor container with the following intent:
  1. That all controllers can be resolved using their base type (Controller);
  2. That a specific controller can be resolved by name (CustomerController);
  3. That each resolved component's lifetime will be linked to the lifetime of the web request.
In terms of testing, creating automated tests for the first two aspects is straightforward. However, testing the function of the third is harder.

The IDependencyResolver interface used by MVC makes no provision for releasing components, so the PerWebRequest lifestyle is specified to avoid Windsor holding onto our components for too long and leaking memory. To the best of my current knowledge, creating an automated test that simulates Windsor’s functionality here is not easy, if it's possible at all. It is possible to override the lifestyle in the test to avoid this exception:

System.InvalidOperationException: HttpContext.Current is null. PerWebRequestLifestyle can only be used in ASP.Net
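For illustration, one way to apply such an override is to rewrite each component's lifestyle as its model is created, using the kernel's ComponentModelCreated event; this is a sketch of the idea rather than necessarily the approach taken here:
// Sketch: force a transient lifestyle for every component in the test,
// so PerWebRequest components can be resolved outside ASP.NET.
_container.Kernel.ComponentModelCreated +=
    model => model.LifestyleType = LifestyleType.Transient;

ContainerFactory.Install(_container);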

However, this doesn’t really help: although the test then exercises Windsor, it verifies neither the intent of the code nor the Windsor functionality that intent relies on.

I ended up writing this test:
[Test]
[Category("Integration")]
public void Install_ConfiguredInstaller_ShouldRegisterControllersWithPerWebRequestLifestyle()
{
    ContainerFactory.Install(_container);

    var controllerHandlers = _container.Kernel.GetAssignableHandlers(typeof(Controller));

    var misRegisteredComponents =
        controllerHandlers.Where(h => !h.ComponentModel.LifestyleType.Equals(LifestyleType.PerWebRequest));

    Assert.That(misRegisteredComponents.Any(), Is.False);
}
While it doesn’t test Windsor’s PerWebRequest lifestyle functionality, it stands as a descriptor of the code’s intent and will flag an unwitting code change. Therefore a test with value, IMHO. However, considering that it doesn’t actually prove the Windsor functionality, is it a valid test?

Thursday 12 February 2015

Testing with NHibernate and an in-memory SQLite database

Since reading this post, I've been using SQLite to test how my code uses NHibernate. This works perfectly for most situations. However, there are situations where you need multiple sessions. With the config outlined in Ayende's post, opening another session doesn't fail, but you get a connection to a different instance of the SQLite database, so it fails once you try to do something meaningful.

I've just found a way around this by using this connection string:
 FullUri=file:memory.db?mode=memory&cache=shared

This creates an in-memory database that's shared amongst all connections with the same connection string, provided at least one connection is kept open. To the best of my knowledge, the following config setting is still necessary:
 <property name="connection.release_mode">on_close</property>  
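A minimal sketch of how this might be wired up in a test fixture, assuming System.Data.SQLite plus the NHibernate.Cfg, NHibernate.Dialect, NHibernate.Driver and NHibernate.Tool.hbm2ddl namespaces (the mapping setup is elided):
// Sketch: a shared in-memory SQLite database for NHibernate tests.
private const string ConnectionString =
    "FullUri=file:memory.db?mode=memory&cache=shared";

private SQLiteConnection _keepAliveConnection;
private ISessionFactory _sessionFactory;

[SetUp]
public void SetUp()
{
    // Keep one connection open so the shared in-memory database survives.
    _keepAliveConnection = new SQLiteConnection(ConnectionString);
    _keepAliveConnection.Open();

    var configuration = new Configuration()
        .SetProperty(NHibernate.Cfg.Environment.Dialect,
            typeof(SQLiteDialect).AssemblyQualifiedName)
        .SetProperty(NHibernate.Cfg.Environment.ConnectionDriver,
            typeof(SQLite20Driver).AssemblyQualifiedName)
        .SetProperty(NHibernate.Cfg.Environment.ConnectionString, ConnectionString)
        .SetProperty(NHibernate.Cfg.Environment.ReleaseConnections, "on_close");

    // Add mappings here, e.g. configuration.AddAssembly(...).

    // Create the schema; the shared cache means it lands in the same
    // in-memory database that later sessions will see.
    new SchemaExport(configuration).Create(false, true);

    _sessionFactory = configuration.BuildSessionFactory();
}

[TearDown]
public void TearDown()
{
    _sessionFactory.Dispose();
    _keepAliveConnection.Dispose();
}
Multiple sessions opened from this factory now see the same data, as long as the keep-alive connection stays open.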

Tuesday 27 January 2015

Precise Assertions

It could be argued that one of the key roles of automated tests is to protect the existing level of code quality, preventing its decline by verifying that changes to the code comply with a set of known constraints. This protection really earns its money when working on a large and complicated codebase. Changes can be made, relatively safe in the knowledge that an unwitting mistake or omission will be flagged during development by a failing test. The reliability of this protection hinges on the quality of the tests and what they verify by assertion.

A problem I’ve recently uncovered is where a test’s assertion is not precise enough to provide adequate protection. This example is contrived but it makes the point. If we take a presenter that adds items to a list on a view, we might see something like:
foreach (var item in items)
{
    View.AddListItem(item.Name);
}
This functionality was verified (using NSubstitute) by:
fakeView.ReceivedWithAnyArgs().AddListItem(null);
This is a reasonable test. It verifies that items are added to the view using their name. However, if the use case contains acceptance criteria stating that the item must be added to the list with its name displayed, we're unable to verify this explicitly. It could be argued that the test doesn't need to be any more precise. The view logic, in this case, is so simple that a failure would be easily discovered during manual testing. I would argue that this confidence is misplaced…

Initially our presenter just adds list items. However, we add functionality to update list items. We add tests to cover the update functionality and copy the assertion regarding the addition of the list item:
[Test]
public void Update_ValidChanges_ShouldChangeListItem()
{
    // Arrange
    // Act
    // Assert
    fakeView.ReceivedWithAnyArgs().AddListItem(null);
}
There are now tests covering both the addition and the modification of items in the list, but both only verify that items are added to the list.

A new requirement emerges: the list items should display the name followed by its id in brackets: Item 1 (345). The presenter's add code is modified to format the name correctly. The test covering the addition doesn't change, as the perception is that the existing assertion is good enough. However, for whatever reason, the update code is missed. Code changes completed, the unit tests are run and everything is green. However, the update feature now has a defect: when an item is updated, the name shown in the list is not correct; it doesn't feature the id in brackets.

Ultimately, the defect is introduced because the developer did not fully understand or assess the impact of the change. However, more detailed test assertions could've done more to protect the quality of the code. While it's true that manual testing easily finds this defect, that process is not free. If the assertion regarding the addition of the list item had contained more detail about what was being added, the update tests would've failed, alerting the developer that they had unwittingly missed something:
fakeView.Received().AddListItem(Arg.Is<string>(n => n.Equals(expectedName)));
This can be taken a step further by centralising the assertion so that the attributes of a valid name are maintained in one place:
private bool ListItemNameIsCorrect(string name)
{
   ...
}
   ...
   Assert.That(ListItemNameIsCorrect(testName), Is.True);
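The same helper can also drive the NSubstitute check itself; a sketch, assuming the fakeView substitute from the earlier tests:
fakeView.Received().AddListItem(Arg.Is<string>(n => ListItemNameIsCorrect(n)));
Both the received-call check and any direct assertions now share one definition of a correct name.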
Now it's a lot easier to understand the scope of a change to this part of the code.

In conclusion, ensuring that test assertions have the right precision and, where appropriate, are aligned with acceptance criteria can improve the quality of the tests, reinforcing their role as protectors of code quality and reducing the need to rely on manual testing.