Tuesday, August 7, 2007

Writing Effective Tests for Components with a Large Number of Dependencies

A while back, Jonathan Ariel posted the following question to the Rhino.Mocks mailing list:

Hi,
I was wondering, how do you handle tests with a large set of expectations? Suppose you have a component with a lot of dependencies. For each test case you need different expectations; some are repeated, some are not. How do you avoid ending up with a big test suite with a lot of test cases, each with a lot of expectations? Any ideas?

Thanks!

Jonathan

It can indeed happen that you will end up with lots of dependencies where each dependency is perhaps only lightly used but still requires some effort to mock. This is sometimes a sign that the component under test is doing too much. On the other hand, sometimes you just can't avoid having "manager" services that tie together lots of specialized behavior. Good luck if your component under test is a stateful Observer or Mediator and its transitions depend crucially on the behavior of its dependencies.

Let's assume I can't improve the design to make the problem go away. When that happens, I apply a few different strategies to make my mocks more manageable.

1. First, I factor out the basic usage pattern for mock objects. I use a MockRepository so often I might as well make sure I've always got one if I need it.

/// <summary>
/// Base unit test.
/// All unit tests that require certain common facilities like
/// Mock Objects inherit from this class.
/// </summary>
[TestFixture]
public abstract class BaseUnitTest
{
    private MockRepository mocks;

    /// <summary>
    /// Gets the mock object repository.
    /// </summary>
    public MockRepository Mocks
    {
        get
        {
            if (mocks == null)
                mocks = new MockRepository();
            return mocks;
        }
    }

    [SetUp]
    public virtual void SetUp()
    {
    }

    [TearDown]
    public virtual void TearDown()
    {
        if (mocks != null)
        {
            try
            {
                mocks.ReplayAll();
                mocks.VerifyAll();
            }
            finally
            {
                mocks = null;
            }
        }
    }
}

2. Create mocks for each dependency in the test SetUp. Even if the tests use different expectations, they quite often have the same dependencies. My tests can be made more manageable simply by extracting the common initialization concerns.

[TestFixture]
public class MyTest : BaseUnitTest
{
    private ComponentUnderTest cut;

    private IDependency mockDependency;
    private IBananaInventory mockInventory;
    private IOrderDao mockOrderDao;
    private IOrderEmailer mockOrderEmailer;

    public override void SetUp()
    {
        base.SetUp();

        mockDependency = Mocks.CreateMock<IDependency>();
        mockInventory = Mocks.CreateMock<IBananaInventory>();
        mockOrderDao = Mocks.CreateMock<IOrderDao>();
        mockOrderEmailer = Mocks.CreateMock<IOrderEmailer>();

        cut = new ComponentUnderTest(mockDependency, mockInventory, 
            mockOrderDao, mockOrderEmailer);
    }
}

3. Create helper functions for setting up any complex expectations. For example, I might have a data-broker service that I want to mock out, and I can't get around the fact that it needs to return some elaborately initialized data object. So I'll write a helper that sets up the expectations to return an adequately populated data object. From the perspective of the test, the 15 lines it took to initialize the data object have been replaced by a single call to the helper.

    [Test]
    public void GetCustomerNameObtainsNameFromDataProvider()
    {
        ExpectFetchCustomer(123, "Jim", "ACME Widgets");
        Mocks.ReplayAll();

        string name = cut.GetCustomerName(123);
        Assert.AreEqual("Jim", name);
    }

    private void ExpectFetchCustomer(int id, string name, string companyName)
    {
        Customer customer = new Customer();
        customer.Id = id;
        customer.Name = name;
        customer.CompanyName = companyName;
        // set 10 other properties I don't care about right now but
        // are needed to satisfy various internal invariants...

        Expect.Call(mockDependency.FetchCustomer(id)).Return(customer);
    }

4. As a refinement, I'll use RowTests (in MbUnit) to capture different input data for tests that very closely follow the same pattern. That works really well if I have a test with complicated setup and only the final expected state varies.

     [RowTest]
     [Row(OrderStatus.Placed, 1, Description="Order for tomorrow")]
     [Row(OrderStatus.Error, -7, Description="Order for last week")]
     [Row(OrderStatus.Error, 365, Description="Order for next year")]
     public void ValidateOrderShipDate(OrderStatus expectedOrderStatus, int daysFromNow)
     {
         // (NOTE: It's a bad idea to have validation code that uses the current time.
         //        It's often better to write it in a functional style and pass in
         //        a reference time as a parameter, as long as you trust the caller.)
         Order order = CreateOrder("Bananas", 48);
         order.ShipDate = DateTime.UtcNow.AddDays(daysFromNow);

         // Do 101 zany things to set up the mocks just so we can place the order...
         // Maybe the component also sends email notifications, maybe it writes to
         // an audit table in the Db.  It could fire off all sorts of actions that
         // are tricky to stub out.
         // EmailOrderStatus and WriteAuditRecord return void, so they are
         // recorded by calling them directly while the mocks are in record mode.
         mockOrderEmailer.EmailOrderStatus(expectedOrderStatus, "Bananas");
         mockOrderDao.WriteAuditRecord("Bananas", 48);
         Expect.Call(mockInventory.WillWeHaveBananas(order.ShipDate)).Return(true);
         
         Mocks.ReplayAll();
         OrderStatus orderStatus = cut.PlaceOrder(order);
         Assert.AreEqual(expectedOrderStatus, orderStatus);
     }

5. Where tests really get gnarly is when I'm working with a stateful reactive object. I just cannot avoid putting it through its entire lifecycle to test it thoroughly. The trick is to make it easy to push the object through its lifecycle by scripting each state change and sequencing them as needed. This strategy is often applied to system testing concerns (where the combinatorial explosion of states is most felt) but it works well in these scenarios too.

It may seem that such situations should never occur in good code. That's not the case. For example, to test how a job scheduling service responds to cancelation of a job just after that job has executed, I can't help but actually script it to submit the job, schedule it for execution, run it, and record the final results. Only then can I actually try canceling it. This assumes I can't just manufacture a mock job that looks like it has already run. There are a lot of dependencies involved here. At the very least I need a stubbed-out job, but I'll probably need to mock out Db transactions and possibly some factory objects.

The banana ordering service here has similar problems.

    [Test]
    public void TryCancelingTheOrderAfterItHasShipped()
    {
        Mocks.ReplayAll();

        Order order = CreateOrder("yummy ones", true);
        Assert.AreEqual(OrderStatus.Placed, order.Status);

        ShipOrder(order);
        Assert.AreEqual(OrderStatus.Shipped, order.Status);

        Assert.IsFalse(cut.CancelOrder(order), "Order should not cancel after shipping");
        Assert.AreEqual(OrderStatus.Shipped, order.Status);
    }

    private Order CreateOrder(string bananaSpecies,
        bool yesWeHaveBananasToday)
    {
        Mocks.BackToRecord(mockInventory);
        Expect.Call(mockInventory.DoWeHaveAnyBananas(bananaSpecies))
            .Return(yesWeHaveBananasToday);
        Mocks.Replay(mockInventory);
        return cut.CreateOrderForBananas(bananaSpecies);
    }

    private void ShipOrder(Order order)
    {
        // somehow force the order to ship now...
    }

6. Finally, if some tests clearly follow a different pattern of interactions from others then I'll just factor them out into separate test fixture classes. This lets me put more common code into the SetUp and TearDown. Separation of concerns applies to testing too!
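The split can be sketched like this. The interface, fake, and fixture names below are hypothetical stand-ins, and a hand-rolled fake takes the place of Rhino.Mocks so the sketch is self-contained; the point is only that each fixture owns one interaction pattern while the shared wiring lives in a common base SetUp.

```csharp
using System;

public interface IBananaInventory
{
    bool DoWeHaveAnyBananas(string species);
}

// Hand-rolled fake standing in for a Rhino.Mocks mock.
public class FakeInventory : IBananaInventory
{
    public bool HaveBananas;

    public bool DoWeHaveAnyBananas(string species)
    {
        return HaveBananas;
    }
}

public abstract class BananaOrderTestBase
{
    public FakeInventory Inventory;

    // Common initialization happens once, in the base SetUp.
    public virtual void SetUp()
    {
        Inventory = new FakeInventory();
    }
}

// Tests that exercise the happy path all start with stock on hand...
public class PlaceOrderTests : BananaOrderTestBase
{
    public override void SetUp()
    {
        base.SetUp();
        Inventory.HaveBananas = true;
    }
}

// ...while the out-of-stock interactions get their own fixture and SetUp.
public class OutOfStockTests : BananaOrderTestBase
{
    public override void SetUp()
    {
        base.SetUp();
        Inventory.HaveBananas = false;
    }
}

public class Program
{
    public static void Main()
    {
        PlaceOrderTests happy = new PlaceOrderTests();
        happy.SetUp();
        Console.WriteLine(happy.Inventory.DoWeHaveAnyBananas("Cavendish")); // True

        OutOfStockTests empty = new OutOfStockTests();
        empty.SetUp();
        Console.WriteLine(empty.Inventory.DoWeHaveAnyBananas("Cavendish")); // False
    }
}
```

Each fixture's SetUp refines the base, so a test never repeats wiring it shares with its siblings.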

Edit: Fixed some broken markup in the code.

1 comment:

Jonas said...

Thanks! Very interesting!