Thursday, February 11, 2010

Accessing Non-Public Members Using Mirrors

MbUnit v3.2 introduces a new feature called Mirrors for accessing non-public members.
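
For a sense of why this is handy, here is the kind of raw System.Reflection boilerplate that reading a private field normally requires; it’s exactly the sort of ceremony a helper like Mirrors is meant to tidy up.  The Widget class and its _count field below are made up purely for illustration.

    using System;
    using System.Reflection;

    public class Widget
    {
        private int _count = 42;  // non-public state we'd like to check in a test
    }

    public static class Example
    {
        public static void Main()
        {
            var widget = new Widget();

            // Look up the private instance field by name and read its value.
            FieldInfo field = typeof(Widget).GetField(
                "_count", BindingFlags.NonPublic | BindingFlags.Instance);
            int count = (int)field.GetValue(widget);

            Console.WriteLine(count);  // prints 42
        }
    }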

Please tell me what you think.  :-)

More on the wiki: Mirrors

Setting Up a Distributed Build – Part 1

Earlier today I wrote about the fact that I need to rebuild the Gallio build server.  Then I went downstairs to get ready to leave for work and I promptly twisted my ankle.

So… now that I’m lying in bed with an ice pack strapped to my ankle, this seems like as good a time as any to start planning.

Requirements

Gallio currently ships with plugins for several 3rd party tools.  It supports the .Net CLR 2.0 and 4.0 on 32-bit and 64-bit Windows platforms and aims to eventually support Mono on Windows and on Linux.  All told, there are quite a few configurations to be tested.

Here’s what we need:

  • Distributed compilation: components are compiled on platforms that provide the required dependencies.
  • Distributed testing: components are tested on platforms that reflect a significant subset of the supported configurations.
  • Distributed packaging: groups of components need to be assembled into packages on platforms that support the required packaging tools.
  • Fully automatic continuous integration.

Here are some nice things to have:

  • Centralized control of builds and maintenance procedures.
  • Centralized reporting of test results, code coverage, performance metrics and defects.
  • Centralized publication of all artifacts such as binaries and documentation.
  • Hermetic build environment.

Observation:

Most components are binary-portable and should only be compiled once, but they may need to be tested more than once depending on how the configuration of the environment affects them.

Hermetic Build Environment

Hermetic means “sealed.”

A hermetic build environment is one which encapsulates all of its dependencies so that the result of the build is entirely reproducible.

Think for a moment about the dependencies of your own build process.

  • Source code.
  • 3rd party libraries and tools.
  • SDKs, compilers, frameworks, header files.
  • Test tools, coverage tools, build automation tools, documentation tools, linting tools.
  • Scripting languages, command-line tools, scripts.
  • Configuration files, registry entries.
  • Custom patches, tweaks.
  • Operating System revision and patch level.
  • Crazy glue.
  • Position of the mouse pointer on the screen.  (Please don’t depend on this!)
  • Time of day.  (Ditto.)

The more dependencies you control, the more reproducible the build will be.

The idea of establishing a hermetic build environment is to ensure that the dependencies are tightly controlled.  If I change line 101 of Foo.cs to fix a bug, I need to know that the next build will be identical to the first except that it will contain my specific bug fix because that is the only change in the source tree.

Let’s suppose I upgraded the compiler in place.  Now it’s possible that, due to some changes in the compiler, the same source code that I compiled yesterday might produce different output today.  That’s really bad.  It means that I could introduce new bugs into otherwise unchanged versions of my code just by recompiling them!

One solution is to indicate in the source code exactly which version of the compiler should be used.  That way if I recompile yesterday’s code, then I will use exactly the same version of the compiler that was originally specified yesterday.  The only way to introduce a change would be to commit a change in the source tree indicating that a new compiler version should be used instead.

The configuration of a hermetic build environment is well-known and consists of fully specified dependencies.  The only way to change the output of the build is by changing something in the source.
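
As a concrete sketch of that idea: the expected tool versions live in a small manifest checked into the source tree, and the build fails fast when the environment doesn’t match.  The “compiler.version” file name and the version-probing helper below are invented for illustration; the point is just that the pinned version travels with the source.

    using System;
    using System.IO;

    public static class ToolVersionCheck
    {
        public static void Main()
        {
            // "compiler.version" is a hypothetical one-line file checked into
            // the source tree alongside the code it describes.
            string pinned = File.ReadAllText("compiler.version").Trim();
            string actual = GetInstalledCompilerVersion();

            if (pinned != actual)
            {
                throw new InvalidOperationException(String.Format(
                    "Build requires compiler {0} but found {1}.", pinned, actual));
            }
        }

        private static string GetInstalledCompilerVersion()
        {
            // A real implementation might shell out to the compiler and parse
            // its version banner; stubbed here to keep the sketch self-contained.
            return "3.5.30729.1";
        }
    }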

Crucially, the hermetic build environment should not be permanently altered by the build process itself.  For example, running tests shouldn’t leave extra temporary files floating around that remain for the next build to trip over.  The environment should be pristine!

How do we implement something like this?

In Linux, we might run the whole build inside a very carefully constructed chroot directory, assuming that we don’t care about the kernel version or that we control it via other means.

In Windows, it’s a lot harder because important configuration information could in principle reside anywhere in the file system or system registry.  For example, what really happens when we install a new version of .Net?  Hmm…  A more practical concern for Gallio is what happens when we upgrade to a new ReSharper beta, for example.

The case of the ReSharper dependency is interesting.  To compile Gallio’s ReSharper extensions, it would be sufficient to have all of the ReSharper libraries checked into the source tree.  However, to test those extensions we actually need a working installation of ReSharper, which itself requires Visual Studio, which itself requires many things.  Checking in libraries isn’t enough.  We’re going to need a virtual machine snapshot.

Gather-Scatter-Gather-…

A build process can be described as a sequence of gathering and scattering operations.

Here’s a fairly straightforward example:

  • Task 1
    • Gather: Check out all of the sources for the core components from the source tree for a specific revision.
    • Scatter: Compile.
    • Gather: Copy the compiled binaries to a common directory.
  • Task 2
    • Gather: Grab the compiled binaries from the first task.
    • Scatter: Compile extensions with complex environmental dependencies.
    • Gather: Copy the compiled extensions to a common directory.  Generate the documentation and the installers.
  • Task 3
    • Gather: Grab the installer from the second task.
    • Scatter: Install and test extensions with complex environmental dependencies.
    • Gather: Copy all test results into a test database.
  • Task 4
    • Gather: Grab the installer and documentation from the second task.
    • Scatter: Publish the build artifacts and documentation to the web.

In general, tasks gather stuff from the source tree or from the output of other tasks, scatter a bunch of derived artifacts around, then eventually gather them together again into a form that can be consumed by subsequent tasks.

We can represent the dependencies as a linear sequence of steps (as outlined above), but we would be better off taking advantage of the inherent parallelism of certain stages.  For example, there’s no reason we couldn’t be compiling and testing the ReSharper 4.5 extension at the same time as the ReSharper 5.0 extension, as long as both processes ran independently (in different virtual machines).  Instead, we should model the dependencies as a graph of parallel and sequential tasks, as in the sketch below.
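
Here’s a minimal sketch of that idea: each task declares its dependencies, and every “wave” of tasks whose dependencies are satisfied runs in parallel.  The task names are loosely based on the example above, and Run is just a stand-in for real work.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;

    public static class TaskGraphDemo
    {
        public static void Main()
        {
            // Map each task to the tasks it depends on.
            var dependencies = new Dictionary<string, string[]>
            {
                { "CompileCore", new string[0] },
                { "CompileReSharper45Extension", new[] { "CompileCore" } },
                { "CompileReSharper50Extension", new[] { "CompileCore" } },
                { "Package", new[] { "CompileReSharper45Extension",
                                     "CompileReSharper50Extension" } },
            };

            var completed = new HashSet<string>();
            while (completed.Count < dependencies.Count)
            {
                // Every task whose dependencies are all satisfied can run now.
                var ready = dependencies.Keys
                    .Where(t => !completed.Contains(t) &&
                                dependencies[t].All(completed.Contains))
                    .ToList();

                // Independent tasks run in parallel (think: separate VMs).
                Parallel.ForEach(ready, Run);
                foreach (var task in ready)
                    completed.Add(task);
            }
        }

        private static void Run(string task)
        {
            Console.WriteLine("Running " + task);
        }
    }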

To make this work, we will need a continuous integration tool that coordinates task dependencies elegantly.

Outlining a Solution

Here’s what the solution will look like for us.

Virtual Hermetic Build Environment

Because we need to build and test many components on Windows, the most robust and perhaps simplest way to establish a hermetic build environment will be to use a suite of virtual machines.

To that end, the virtual machine host will maintain a collection of virtual machine snapshots.  Each snapshot will have a unique name, like “Win7 x64, VS 2010, R# 5.0.1161.9 (Snapshot 1.0)”.

There are many virtualization products that we could use, but for our purposes we will use VirtualBox.  VirtualBox is free and performs well, although I have found it to be somewhat unstable at times.  Even so, it’s perfect for what we need.

We’re going to have a lot of snapshots.  Every time we tweak one of the virtual machines, we will create a new snapshot and then check in a change to the source tree to use the new snapshot for builds of subsequent revisions.  Fortunately, VirtualBox supports differencing disk images and snapshot trees, so we only need to store the differences between snapshots.  What’s more, we can easily reset a virtual machine to a previous snapshot state just by throwing away the latest differencing disk image, which is very fast.

Over time, we may choose to throw away some snapshots once they are no longer relevant for practical purposes, since there will come a time when we will never need to rebuild most old revisions.  In the meantime, gigabytes are cheap and plentiful; time is not!

How many virtual machines will there be?  I’m not sure… there may be as many as a dozen.

  1. Windows Server 2003, continuous integration server, build agents and other infrastructure.
  2. Windows 7 x86, .Net Framework 2.0 & 4.0, Mono, NCover.
  3. Windows 7 x64, .Net Framework 2.0 & 4.0, Mono, NCover.
  4. Windows 7 x86, ReSharper 4.5, Visual Studio 2008, NCover.
  5. Windows 7 x86, ReSharper 5.0, Visual Studio 2010, NCover.
  6. Windows 7 x86, TypeMock, NCover.
  7. Windows 7 x86, AutoCAD 2010, NCover.
  8. Windows 7 x86, TeamCity, NCover.  (for testing TeamCity extensions)
  9. Windows 7 x86, CCNet, NCover.  (for testing CCNet extensions)
  10. Ubuntu, Mono, MonoCov.
  11. … more… ?

Continuous Integration

The build process will be broken down into a graph of tasks.  Each task will specify dependencies on other tasks whose output it consumes.  We’ll use a continuous integration build manager to run the tasks and report progress.

There are many different continuous integration tools that we could use, but for our purposes we will use TeamCity.  TeamCity does a very good job of managing dependencies and the professional edition is available for free and does everything we need.

Interestingly, because we will be using virtual machine snapshots as part of our hermetic build environment, we will not use TeamCity’s built-in support for cross-platform builds.

JetBrains’ recommended approach for cross-platform builds is to run multiple TeamCity build agents on multiple machines with different configurations.  However, for that to work, each machine has to persist the state of its build agent.  The problem is that we will always wipe our virtual machines back to their snapshot state after each build.  So if the agents ran inside the virtual machines, they would lose their state changes completely between builds, which would probably make TeamCity somewhat unhappy.

Another problem with running separate build agents inside each virtual machine is that we would have to keep all of the virtual machines running all of the time.  Otherwise, the TeamCity primary server would not be able to connect to the build agents.  Running so many virtual machines (dozens?) would be a prohibitively expensive waste of resources.  Moreover, we would need to keep a lot of build agents running, and it costs money to buy licenses for more than 3.

Our solution will be to run TeamCity build agents only on the virtual machine host (or in a virtual machine with access to the host).  Each build task will be responsible for starting up the appropriate virtual machine, invoking a remote command inside it, gathering the results from it, and then shutting it down (discarding any changes to its state).

All we need is one TeamCity build agent!  That one build agent can manage as many virtual machines as we like.
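
Here is a rough sketch of what one such build task might do, shelling out to VBoxManage from the agent.  The VM name is hypothetical, and the step that actually runs the build inside the guest is elided because the remote-execution mechanism (a small service in the guest, psexec, ssh, etc.) is still an open question.

    using System;
    using System.Diagnostics;

    public static class VmBuildStep
    {
        public static void Main()
        {
            const string vm = "Win7 x86, R# 5.0";  // hypothetical VM name

            // Roll the VM back to its pristine snapshot, discarding old state.
            VBoxManage("snapshot \"" + vm + "\" restorecurrent");

            // Boot the VM without a UI.
            VBoxManage("startvm \"" + vm + "\" --type headless");

            // ... wait for the guest to come up, invoke the build/test command
            // inside it, and copy the results out (mechanism elided) ...

            // Hard power-off; the next restorecurrent wipes any changes.
            VBoxManage("controlvm \"" + vm + "\" poweroff");
        }

        private static void VBoxManage(string arguments)
        {
            using (Process process = Process.Start("VBoxManage", arguments))
            {
                process.WaitForExit();
                if (process.ExitCode != 0)
                    throw new InvalidOperationException(
                        "VBoxManage failed: " + arguments);
            }
        }
    }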

In practical terms, we’ll probably run 3 TeamCity build agents.  That way we can run 3 tasks in parallel to improve performance while still fitting within the license constraints of the free TeamCity Professional Edition.

Odds and Ends

We’re also going to need a few other things.

Err, right… I guess we need to build Archimedes too…

Gallio Build Server Down for Maintenance

The build server has been in an unhappy state for quite some time (registry corruption due to frequent crashes of the VirtualBox VM).  Now it finally seems to have gone over the edge.

As a result I’ll be rebuilding the build server sometime in the next week.  I think I'll set the new one up a bit differently if I have time.

Long Term Ideas

1. Split up the build based on component dependencies.

Gallio ships with plugins for ReSharper 3.1, 4.0, 4.1, 4.5 and 5.0.  The trick here is that in order to build and test those plugins, all 5 versions of ReSharper need to be installed.  Ouch!  (Ok, we will probably be dropping 3.1, 4.0 and 4.1 soon…)

Of course Gallio also ships with plugins for NCover, TypeMock, AutoCAD, TestDriven.Net, Visual Studio 2008, Visual Studio 2010, etc…

Imagine all of those components installed on the same VM.  Yeah… it’s a little scary.  Fortunately, most of them do not conflict with one another, so it mostly works out fine.

Ideally, we should just build a bunch of VMs with different groups of dependencies and run the builds independently on each.  That way we can add new dependencies at will without worrying quite so much about how they interact (or about the 12 hours it would take to reinstall everything if a VM blows up).

2. Multiple Platforms

We should really be testing on at least the following platforms:

  • Windows 7 64-bit
  • Windows 7 32-bit
  • Mono on Windows
  • Mono on Linux

Basically that means we’ll need a whole bunch of VMs for various configurations.  Tricky.

3. Add White Box Tests.

Right now there are a bunch of things we don’t test rigorously in an automated way.  Here are a few of them:

  • The installer.
  • The Visual Studio templates.
  • The Visual Studio add-in, besides checking MSTest integration.
  • The ReSharper plugin, besides integration tests for the reflection API.
  • The UIs, besides unit tests for the Models and Presenters.

I spend a lot of time before each release manually testing this stuff.  It just doesn’t scale.

All we really need are a few tests to give us sufficient confidence that the system isn’t broken in really boneheaded ways.  For example, Visual Studio shouldn’t pop up a dialog complaining that the Gallio add-in encountered an error during initialization.  (Usually that means the installer is broken somehow.)
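
A smoke test along these lines would catch that particular failure.  This is just a hedged sketch: it assumes the installer registers the add-in under a registry key like the one below, but the exact path and key name are invented here for illustration.

    using Microsoft.Win32;
    using MbUnit.Framework;

    [TestFixture]
    public class InstallerSmokeTests
    {
        [Test]
        public void GallioAddInIsRegisteredWithVisualStudio()
        {
            // Hypothetical registration key; the real path depends on the
            // VS version and on how the installer registers the add-in.
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
                @"SOFTWARE\Microsoft\VisualStudio\9.0\AddIns\Gallio"))
            {
                Assert.IsNotNull(key,
                    "The Gallio add-in does not appear to be registered.");
            }
        }
    }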

4. Faster Release Cycles

All it should take for a release is for us to spend a couple of minutes writing release notes, adding some more documentation to the wiki, and pushing a big red button to promote the current nightly build to stable.

Paradoxically, the first step in this process is that unstable builds need to be more visible.

What I mean is that we should maintain consistently up-to-date release notes and documentation for our pre-release builds too.  We should get more people using those pre-release builds and perform more sanity checking with automation so we’re not chasing last-minute bugs.

When it comes time to release, all we should have to do is post a new stable build label to the website and spam the newsgroups.  :-)

Duh.

Saturday, February 6, 2010

Infrastructure and Documentation

With my new job at Google, I’ve found it harder and harder to spend as much time as I would like on Gallio and MbUnit.  It’s hard to find a good solid block of time to work on stuff without too many other distractions, so I’ve been spending my time on little infrastructure projects.

The theory is that if I can build a big enough lever… then I can pretend there really are 32 hours in a day.  :-)

New Stuff

1. Auto-Publish to Web

Most of the Gallio and MbUnit web site content, including documentation resources, is now published to the web automatically by our Continuous Integration build server.

In other words, any committer can change the website just by checking in some code and waiting 3-5 minutes for the web site to be built and the changes to propagate.

This kind of setup is really cool.  Seriously, seriously cool.  So cool that I know Oren has blogged about it.

Given that my time resources are dwindling (and will dwindle even more once I start working on Android stuff), automating the whole pipeline is golden.  I can practically pretend that it’s real-time: hit commit and move on to another task.

2. New Wiki!

Let me be blunt.  The Gallio documentation is woefully incomplete.

The main problem is that I expected the writing of the Gallio Book to progress more rapidly than it has, so I didn’t spend much time developing alternative documentation resources.

Of course, there have been many offers to help write content for the book, but little has materialized so far.  That’s just how Open Source is sometimes: lots of people want to help, but there is always a barrier to contribution.  In this case, I’ve lowered the barrier a bit by automating the book pipeline from commit through to publication on the web, but writing high-quality documentation still takes time and preparation (especially in book form).

So…

Say hello to quick & dirty documentation by the masses for the masses.

The new Gallio Wiki is here!

It’s a little empty...  Please help fill it up.  :-)

Also, if you are interested, the old MbUnit v2 Wiki is still available via the Internet Archive on the Gallio website.  We should probably try to port some of this content over, or something.

3. Old MbUnit v2 Documentation Is Back

A couple of years ago, Ben Hall invested quite a lot of time getting Sandcastle and DocProject to play nicely together and assembling the documentation.

Unfortunately when we moved the MbUnit.com website a couple of years ago, we lost some content.  Specifically, we lost the documentation site.

Oh… we still had the code to generate the documentation site, of course, but the specific versions of Sandcastle and DocProject that we used were no longer available and the new versions were not compatible with the old code.

Great.  Bit rot.  For two years, I was sitting on a pile of documentation that I couldn’t compile.

Last night I decided to bite the bullet and try again.  It was a total nightmare!  The whole ordeal took me maybe 12 hours.   I spent a good 8 hours trying to upgrade to a newer version of DocProject and getting nowhere fast.

Edit: I should point out that I don't blame my problems on DocProject or Sandcastle themselves. The real problem is that I didn't really understand how it all worked so I wasted a lot of time trying stupid things that failed.

Today I finally gave up and ported the essential parts of the documentation to the latest version of Sandcastle and the Sandcastle Help File Builder.  It’s a little worse for wear but it’s still readable.

Anyways, now the old MbUnit v2 documentation is online again here: MbUnit v2 Documentation.

Maybe we can port some of that content to the wiki or book and update it for MbUnit v3…

Edit: Posted new link to old MbUnit v2 wiki content.