Tuesday, October 23, 2007

But the xUnit Book Says...

The xUnit Book says a lot of things.  But it's not the last word on automated testing, not by a long stretch.

Steven Micham posted this comment in relation to my blog post about using dependency injection in a test framework:

I've also just skimmed through the xUnit Test Design Patterns book and the criticism of NUnit, and therefore of MbUnit, with [TestFixtureSetUp] is exactly that the shared state is a 'bad thing' for constructing isolated test environments on a per-test basis.

Since this injection is basically shared state, how do you answer that?

You've got a good point.  So how do I answer that?

All that glitters is not unit testing.

Well, [TestFixtureSetUp] is obviously not a great foundation for building isolated test environments on a per-test basis.  We have [SetUp] for that.  However, there are cases when fixture-level isolation is desirable.

Here are a few:

  • The tests use resources that are expensive to initialize or to dispose.  For example, the tests might need to start / stop a partially stubbed out service with a high initialization overhead.
  • The tests involve a common computation that results in a lot of state that we want to verify systematically.  For example, the tests all consume the same input data that is produced by compilation.  Each test can then examine this data in a different way, so you get small, focused assertions across independent portions of the state instead of a single massive test with dozens of unrelated assertions inside.
  • Or, when you're not writing unit tests...
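The second case above can be sketched as an NUnit-style fixture.  This is a hypothetical example: the Compiler class, CompilationResult type, and file name are invented for illustration, not part of any real API.

```csharp
using NUnit.Framework;

[TestFixture]
public class CompilationResultTests
{
    // Shared fixture-level state, produced once by an expensive computation.
    private CompilationResult result;

    [TestFixtureSetUp]
    public void CompileOnce()
    {
        // The expensive step runs once for the whole fixture, not per test.
        result = Compiler.Compile("SampleProgram.cs");
    }

    // Each test examines one independent portion of the shared state,
    // so a failure pinpoints exactly which aspect regressed.
    [Test]
    public void ProducesNoErrors()
    {
        Assert.AreEqual(0, result.Errors.Count);
    }

    [Test]
    public void EmitsExpectedTypes()
    {
        Assert.IsTrue(result.Types.Contains("SampleProgram"));
    }

    [TestFixtureTearDown]
    public void CleanUp()
    {
        result.Dispose();
    }
}
```

The tests never mutate the shared result, which is what keeps this safe: the state is shared, but it's read-only from each test's point of view.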

Smells like integration testing.

Mmm... the musty odor of integration tests.  Reminds me of mothballs, old sweaters and damp wool.

Every test you write integrates some parts of the system.  The more you integrate into the set of components under test, the more expensive your test will be, and the more difficult it will be to write.  Eventually you find yourself writing system tests (functional regression tests) and you wonder where all of the colour in the world went.

There's a bit of a change of scenery as you leave the comfortable realm of pure unit testing with mocks, stubs, and hard lines.  Integration testing is a vast desert: raw, wild, dangerous.  You can get hurt out there until you learn to adapt.

It's remarkably easy to encounter situations writing unit tests where you cannot exercise as much control over the environment as you'd like.  For example, the component under test might do some heavy lifting with the filesystem.  Often you can refactor your way out of these situations by stubbing out behavior or by moving the heavyweight concerns elsewhere, where they are easier to test.  However, there is a point of diminishing returns.

So at some point you'll look at your suite of mocks and in desperation say: "I guess I'll need to create a real file on disk and make sure it gets deleted after the test finishes."  Ok.  So you do that and put the necessary code in [SetUp] and [TearDown].  And then you need a few more files...  Oh no, you've crossed over into integration testing.
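That [SetUp] / [TearDown] moment looks something like this.  Again, a sketch: the importer under test is hypothetical, but the temp-file pattern is the real point.

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class FileImportTests
{
    private string path;

    [SetUp]
    public void CreateTestFile()
    {
        // A real temporary file on disk: we've crossed into integration territory.
        path = Path.GetTempFileName();
        File.WriteAllText(path, "alpha,beta,gamma");
    }

    [TearDown]
    public void DeleteTestFile()
    {
        // Clean up even when the test fails, so tests don't leak state
        // into each other through the filesystem.
        if (File.Exists(path))
            File.Delete(path);
    }

    [Test]
    public void ImporterReadsAllFields()
    {
        // CsvImporter is invented for illustration.
        string[] fields = CsvImporter.Read(path);
        Assert.AreEqual(3, fields.Length);
    }
}
```

Per-test creation and deletion keeps the tests isolated from each other, but every file you add makes the suite slower and more environment-dependent; that's the slope this post is describing.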

Maybe you can refactor to improve the design.  Go ahead and try it!  Or... maybe not.  Crud.

Now what?

If you're going to write integration tests, do them for real!

It's a fact: integration tests are qualitatively different from unit tests.  The most dangerous thing you can do is to look at your tests and think: "these are just slightly ugly unit tests."  They're not really unit tests anymore.  If you try to treat them that way you'll just end up with "really ugly integration tests", which is really bad.

So what do you do?

Well, first you want to be using a testing framework that degrades nicely when you reach the limits of white box unit testing.  As you write integration tests you're going to end up breaking a lot of unit testing rules.  You need a tool that can keep up with this transition.  Otherwise, you'll need a completely different tool.

In practice, MbUnit and NUnit both degrade fairly well for integration testing.  That's where being able to selectively violate test-level isolation by introducing shared fixture-level concerns is really handy.  Using a completely different tool is probably not worth the pain until you start doing something very different, like functional regression testing.

But... to help keep things straight, be sure to keep your unit tests and integration tests isolated.  Spend extra effort documenting those integration tests.  Intent revealing test method names might not be enough.  You'll need real documentation: possibly even paragraphs!

And that's ok.  We're not really doing unit testing anymore.  Even if we used a unit testing framework to implement the tests.  We crossed over (and it didn't hurt much).

I do care what the xUnit Book says.

Yes, I do care.  But the xUnit book is about unit testing.

The extra features MbUnit provides that other frameworks like xUnit.Net do not are there precisely to give programmers the freedom to selectively break the unit testing rules without breaking the tool.

MbUnit is intended to be used for more than just by-the-book unit testing.  So we offer support and guidance for those activities too.  We just don't expect the tool to enforce our recommendations to the exclusion of all else.

There's other good stuff out there.  Trust your instincts.

1 comment:

Pierre Phaneuf said...

By the way, Avery was talking about such things recently:

http://alumnit.ca/~apenwarr/log/?m=200709#22