Monday, May 10, 2010

Gallio v3.2 Beta

A couple of weeks ago I posted that the new beta with R# 5.0 and VSTS 2010 support would be available.  Unfortunately I forgot to go back and post an update with a link.

Here’s the link:

Interim release notes:

Happy testing!

Wednesday, April 14, 2010

Gallio v3.2 Beta Later this Week

This is just a short note to say that we will be releasing Gallio v3.2 Beta later this week with support for Visual Studio 2010 RTM and ReSharper 5.0.

Tuesday, March 23, 2010

Sneaky Equality Bugs

While trying to figure out why running a single test with Gallio causes ReSharper 5.0 to throw up the test failure in a message box (which it never did before), I found this sneaky little bug.

public abstract class RemoteTask : IEquatable<RemoteTask>
{
    public string RunnerId { get; set; }

    public override bool Equals(object obj)
    {
        return Equals(obj as RemoteTask);
    }

    public bool Equals(RemoteTask other)
    {
        return other != null && RunnerId == other.RunnerId;
    }

    public override int GetHashCode()
    {
        return RunnerId.GetHashCode();
    }
}

Can you spot the bug?

Magic Triad: Dictionary<TKey, TValue>, EqualityComparer<T> and IEquatable<T>

Whenever we construct a Dictionary<TKey, TValue> without providing an explicit IEqualityComparer<TKey> in the constructor (which is most of the time), the .Net framework automatically provides one for us.  Namely, it uses EqualityComparer<TKey>.Default.

EqualityComparer<T> is one of those really useful generic classes that is really easy to overlook.  Its job is to choose an appropriate implementation of IEqualityComparer<T> for any given type T.  Here are the cases it considers.

  • If T is type byte, then use a ByteEqualityComparer that does the obvious byte comparison.
  • If T is assignable to type IEquatable<T>, then use a GenericEqualityComparer<T> that will call IEquatable<T>.Equals method to determine equality.
  • If T is nullable and its value type TV is assignable to type IEquatable<TV>, then use a NullableEqualityComparer<T> that performs a null-check then calls IEquatable<TV>.Equals(TV) method to determine equality of non-null values.
  • If T is an enum whose underlying type is int, then use an EnumEqualityComparer<T> that compares enum values as integers.
  • Otherwise, use an ObjectEqualityComparer<T> that performs a null-check then compares objects using Object.Equals.

All of this magic is designed to do one thing: minimize the number of boxing conversions and casts required to compare objects.
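To see the selection in action, here is a minimal sketch (the CaseInsensitiveKey type is mine, not from Gallio or ReSharper): because the struct implements IEquatable<T>, EqualityComparer<T>.Default picks the strongly-typed path and calls Equals(CaseInsensitiveKey) directly, with no boxing.

```csharp
using System;
using System.Collections.Generic;

// A value type implementing IEquatable<T>.  EqualityComparer<T>.Default
// will route comparisons through the strongly-typed Equals below.
struct CaseInsensitiveKey : IEquatable<CaseInsensitiveKey>
{
    private readonly string value;

    public CaseInsensitiveKey(string value)
    {
        this.value = value;
    }

    public bool Equals(CaseInsensitiveKey other)
    {
        return StringComparer.OrdinalIgnoreCase.Equals(value, other.value);
    }

    public override bool Equals(object obj)
    {
        return obj is CaseInsensitiveKey && Equals((CaseInsensitiveKey)obj);
    }

    public override int GetHashCode()
    {
        return StringComparer.OrdinalIgnoreCase.GetHashCode(value);
    }
}

class Program
{
    static void Main()
    {
        IEqualityComparer<CaseInsensitiveKey> comparer =
            EqualityComparer<CaseInsensitiveKey>.Default;

        // Routed through IEquatable<CaseInsensitiveKey>.Equals: prints True.
        Console.WriteLine(comparer.Equals(
            new CaseInsensitiveKey("abc"), new CaseInsensitiveKey("ABC")));
    }
}
```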

The Bug

Did you notice that RemoteTask is abstract?  That tells you that it is intended to be subclassed.

To ensure correct equality and hash code semantics, a subclass must override at least the following two methods:

  • bool IEquatable<RemoteTask>.Equals(RemoteTask)
  • int Object.GetHashCode()

(Note: It is not necessary to override object.Equals(object) because all it does is delegate to IEquatable<RemoteTask>.Equals(RemoteTask).  Due to covariance of the equality relation, this code will work fine for all subclasses as long as they override IEquatable<RemoteTask>.Equals(RemoteTask) with their extra logic.)

Unfortunately RemoteTask’s implementation of IEquatable<RemoteTask>.Equals(RemoteTask) is not virtual!  Consequently subclasses cannot override it.  No matter what they do, they are stuck with an implementation that only considers the RunnerId field when determining equality.

As a matter of fact, all subclasses of RemoteTask override object.Equals(object) but not IEquatable<RemoteTask>.Equals(RemoteTask). That’s really bad because the methods will probably return different results!  Any code that uses IEquatable<RemoteTask> to compare RemoteTask instances will be broken.
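Here is a stripped-down reproduction of the problem (class names are hypothetical, not the actual ReSharper types).  The subclass adds a Name field to its object.Equals(object) override, but callers going through IEquatable<Task> still hit the non-virtual base method:

```csharp
using System;

// The base class's IEquatable<Task>.Equals(Task) is NOT virtual, so the
// subclass can only override object.Equals(object) -- the two methods
// can therefore disagree.
class Task : IEquatable<Task>
{
    public string RunnerId;

    public Task(string runnerId)
    {
        RunnerId = runnerId;
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as Task);
    }

    public bool Equals(Task other)  // not virtual!
    {
        return other != null && RunnerId == other.RunnerId;
    }

    public override int GetHashCode()
    {
        return RunnerId.GetHashCode();
    }
}

class NamedTask : Task
{
    public string Name;

    public NamedTask(string runnerId, string name) : base(runnerId)
    {
        Name = name;
    }

    // The subclass can only override object.Equals(object)...
    public override bool Equals(object obj)
    {
        NamedTask other = obj as NamedTask;
        return other != null && base.Equals(obj) && Name == other.Name;
    }

    public override int GetHashCode()
    {
        return base.GetHashCode() ^ Name.GetHashCode();
    }
}

class Program
{
    static void Main()
    {
        NamedTask a = new NamedTask("r1", "alpha");
        NamedTask b = new NamedTask("r1", "beta");

        Console.WriteLine(a.Equals((object)b));              // False: names differ
        Console.WriteLine(((IEquatable<Task>)a).Equals(b));  // True: base method only sees RunnerId
    }
}
```

The two calls disagree on the same pair of objects, which is exactly the broken state described above.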

This bug has been in ReSharper for a few releases now but it only just reared its ugly head due to an internal refactoring.

Previous Usage of RemoteTask.Equals (Hides the Bug)

Previously, ReSharper performed a lookup of tasks using linear search, something like this:

private List<KeyValuePair<RemoteTask, UnitTestElement>> tasks;

public UnitTestElement GetTestElement(RemoteTask task)
{
    foreach (KeyValuePair<RemoteTask, UnitTestElement> candidate in tasks)
    {
        if (object.Equals(task, candidate.Key))
            return candidate.Value;
    }
    return null;
}

Notice that this code calls the object.Equals(object, object) static helper function which performs a null check then determines equality using object.Equals(object). In this case, subclasses that just override object.Equals(object) will be fine.

Refactored Usage of RemoteTask.Equals (Triggers the Bug)

For efficiency, ReSharper now performs a lookup of tasks using a hash table, something like this:

private Dictionary<RemoteTask, UnitTestElement> tasks;

public UnitTestElement GetTestElement(RemoteTask task)
{
    UnitTestElement element;
    tasks.TryGetValue(task, out element);
    return element;
}

What’s new here is that under the hood, the Dictionary<RemoteTask, UnitTestElement> uses EqualityComparer<RemoteTask>.Default.  That in turn compares values using IEquatable<RemoteTask>.Equals(RemoteTask).

Oh yeah, that’s the method that subclasses can’t override…

The net result is that the hash table considers all values to be equal as long as they have the same RunnerId, regardless of their type or whatever other properties they may consider in their implementation of object.Equals(object).

For the most part, this is only a problem when there are hash code collisions.  If the hash codes are always distinct then it won’t matter to the hash table that the equality comparer it is using returns true more often than it should…

In other words, the fact that this code appears to work at all today is a matter of pure luck given that hash code collisions are fairly rare (for well designed hash functions, at least).


If you implement IEquatable<T> in a class that is intended to be subclassed, make sure to make it virtual!  Also make sure the subclasses override it as needed!
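For what it’s worth, here is a sketch of the fixed pattern (class names hypothetical, not the actual ReSharper code): the IEquatable<T> implementation is virtual, so the subclass can extend the equality contract and every caller sees consistent results.

```csharp
using System;

// Fixed sketch: the IEquatable<T> implementation is virtual so that
// subclasses can extend the equality contract.
abstract class RemoteTask : IEquatable<RemoteTask>
{
    public string RunnerId;

    protected RemoteTask(string runnerId)
    {
        RunnerId = runnerId;
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as RemoteTask);
    }

    public virtual bool Equals(RemoteTask other)  // virtual: subclasses may override
    {
        return other != null && RunnerId == other.RunnerId;
    }

    public override int GetHashCode()
    {
        return RunnerId.GetHashCode();
    }
}

class UnitTestTask : RemoteTask
{
    public string TestId;

    public UnitTestTask(string runnerId, string testId) : base(runnerId)
    {
        TestId = testId;
    }

    public override bool Equals(RemoteTask other)
    {
        UnitTestTask task = other as UnitTestTask;
        return task != null && base.Equals(other) && TestId == task.TestId;
    }

    public override int GetHashCode()
    {
        return base.GetHashCode() ^ TestId.GetHashCode();
    }
}

class Program
{
    static void Main()
    {
        IEquatable<RemoteTask> a = new UnitTestTask("r1", "t1");
        // Dispatches virtually to UnitTestTask.Equals: prints False.
        Console.WriteLine(a.Equals(new UnitTestTask("r1", "t2")));
    }
}
```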

Thursday, February 11, 2010

Accessing Non-Public Members Using Mirrors

MbUnit v3.2 introduces a new feature called Mirrors for accessing non-public members.

Please tell me what you think.  :-)

More on the wiki: Mirrors


Setting Up a Distributed Build – Part 1

Earlier today I wrote about the fact that I need to rebuild the Gallio build server.  Then I went downstairs to get ready to leave for work and I promptly twisted my ankle.

So… now, lying in bed with an ice pack strapped to my ankle, this seems like as good a time as any to start planning.


Gallio currently ships with plugins for several 3rd party tools.  It supports the .Net CLR 2.0 and 4.0 on 32bit and 64bit Windows platforms and aims to eventually support Mono on Windows and on Linux.  All told there are quite a few configurations to be tested.

Here’s what we need:

  • Distributed compilation: components are compiled on platforms that provide the required dependencies.
  • Distributed testing: components are tested on platforms that reflect a significant subset of the supported configurations.
  • Distributed packaging: groups of components need to be assembled into packages on platforms that support the required packaging tools.
  • Fully automatic continuous integration.

Here are some nice things to have:

  • Centralized control of builds and maintenance procedures.
  • Centralized reporting of test results, code coverage, performance metrics and defects.
  • Centralized publication of all artifacts such as binaries and documentation.
  • Hermetic build environment.


Most components are binary portable and should only be compiled once but may need to be tested more than once depending on how the configuration of the environment affects the component.

Hermetic Build Environment

Hermetic means “sealed.”

A hermetic build environment is one which encapsulates all of its dependencies so that the result of the build is entirely reproducible.

Think for a moment about the dependencies of your own build process.

  • Source code.
  • 3rd party libraries and tools.
  • SDKs, compilers, frameworks, header files.
  • Test tools, coverage tools, build automation tools, documentation tools, linting tools.
  • Scripting languages, command-line tools, scripts.
  • Configuration files, registry entries.
  • Custom patches, tweaks.
  • Operating System revision and patch level.
  • Crazy glue.
  • Position of the mouse pointer on the screen.  (Please don’t depend on this!)
  • Time of day.  (Ditto.)

The more dependencies you control, the more reproducible the build will be.

The idea of establishing a hermetic build environment is to ensure that the dependencies are tightly controlled.  If I change line 101 of Foo.cs to fix a bug, I need to know that the next build will be identical to the first except that it will contain my specific bug fix because that is the only change in the source tree.

Let’s suppose I upgraded the compiler in place.  Now it’s possible that due to some changes in the compiler, the same source code that I compiled yesterday might produce different output today.  That’s really bad.  It means that I could introduce new bugs into otherwise unchanged versions of my code just by recompiling it!

One solution is to indicate in the source code exactly which version of the compiler should be used.  That way if I recompile yesterday’s code, then I will use exactly the same version of the compiler that was originally specified yesterday.  The only way to introduce a change would be to commit a change in the source tree indicating that a new compiler version should be used instead.

The configuration of a hermetic build environment is well-known and consists of fully specified dependencies.  The only way to change the output of the build is by changing something in the source.

Crucially, the hermetic build environment should not be permanently altered by the build process itself.  For example, running tests shouldn’t leave extra temporary files floating around that remain for the next build to trip over.  The environment should be pristine!

How do we implement something like this?

In Linux, we might run the whole build inside of a very carefully constructed chroot directory assuming that we don’t care about the kernel version or that we control it via other means.

In Windows, it’s a lot harder because important configuration information could in principle reside anywhere in the file system or system registry.  For example, what really happens when we install a new version of .Net?  Hmm…  A more practical concern for Gallio is what happens when we upgrade to a new ReSharper beta, for example.

The case of the ReSharper dependency is interesting.  To compile Gallio’s ReSharper extensions, it would be sufficient to have all of the ReSharper libraries checked into the source tree.  However, to test those extensions we actually need a working installation of ReSharper, which itself requires Visual Studio, which itself requires many things.  Checking in libraries isn’t enough.  We’re going to need a virtual machine snapshot.


A build process can be described as a sequence of gathering and scattering operations.

Here’s a fairly straightforward example:

  • Task 1
    • Gather: Check out all of the sources for the core components from the source tree for a specific revision.
    • Scatter: Compile.
    • Gather: Copy the compiled binaries to a common directory.
  • Task 2
    • Gather: Grab the compiled binaries from the first task.
    • Scatter: Compile extensions with complex environmental dependencies.
    • Gather: Copy the compiled extensions to a common directory.  Generate the documentation and the installers.
  • Task 3
    • Gather: Grab the installer from the second task.
    • Scatter: Install and test extensions with complex environmental dependencies.
    • Gather: Copy all test results into a test database.
  • Task 4
    • Gather: Grab the installer and documentation from the second task.
    • Scatter: Publish the build artifacts and documentation to the web.

In general, tasks gather stuff from the source tree or from the output of other tasks, scatter a bunch of derived artifacts around, then eventually gather them together again into a form that can be consumed by subsequent tasks.

We can represent the dependencies as a linear sequence of steps (as outlined above), but we would be better off taking advantage of the inherent parallelism of certain stages.  For example, there’s no reason we couldn’t be compiling and testing the ReSharper 4.5 extension at the same time as the ReSharper 5.0 extension as long as both processes ran independently (in different virtual machines).  Instead we should model dependencies as a graph of parallel or sequential tasks.

To make this work, we will need a continuous integration tool that coordinates task dependencies elegantly.

Outlining a Solution

Here’s what the solution will look like for us.

Virtual Hermetic Build Environment

Because we need to build and test many components on Windows, the most robust and perhaps simplest way to establish a hermetic build environment will be to use a suite of virtual machines.

To that end, the virtual machine host will maintain a collection of virtual machine snapshots.  Each snapshot will have a unique name, like “Win7 x64, VS 2010, R# 5.0.1161.9 (Snapshot 1.0)”.

There are many virtualization products that we could use, but for our purposes we will use VirtualBox.  VirtualBox is free and performs well although I have found it to be somewhat unstable at times.  Even so, it’s perfect for what we need.

We’re going to have a lot of snapshots.  Every time we tweak one of the virtual machines, we will create a new snapshot and then check in a change to the source tree to use the new snapshot for builds of subsequent revisions.  Fortunately VirtualBox supports differencing disk images and snapshot trees.  As a result, we only need to store the differences between snapshots.  What’s more, we can easily reset the virtual machine to a previous snapshot state just by throwing away the latest differencing disk image which is very fast.

Over time, we may choose to throw away some snapshots when they are no longer relevant for practical purposes since there will come a time when we will never need to rebuild most old revisions.  In the meantime, gigabytes are cheap and plentiful, time is not!

How many virtual machines will there be?  I’m not sure… there may be as many as a dozen.

  1. Windows Server 2003, continuous integration server, build agents and other infrastructure.
  2. Windows 7 x86, .Net Framework 2.0 & 4.0, Mono, NCover.
  3. Windows 7 x64, .Net Framework 2.0 & 4.0, Mono, NCover.
  4. Windows 7 x86, ReSharper 4.5, Visual Studio 2008, NCover.
  5. Windows 7 x86, ReSharper 5.0, Visual Studio 2010, NCover.
  6. Windows 7 x86, TypeMock, NCover.
  7. Windows 7 x86, AutoCAD 2010, NCover.
  8. Windows 7 x86, TeamCity, NCover.  (for testing TeamCity extensions)
  9. Windows 7 x86, CCNet, NCover.  (for testing CCNet extensions)
  10. Ubuntu, Mono, MonoCov.
  11. … more… ?

Continuous Integration

The build process will be broken down into a graph of tasks.  Each task will specify dependencies on other tasks whose output it consumes.  We’ll use a continuous integration build manager to run the tasks and report progress.

There are many different continuous integration tools that we could use, but for our purposes we will use TeamCity.  TeamCity does a very good job of managing dependencies and the professional edition is available for free and does everything we need.

Interestingly, because we will be using virtual machine snapshots as part of our hermetic build environment, we will not use TeamCity’s built-in support for cross-platform builds.

JetBrain’s recommended approach for cross-platform builds is to run multiple TeamCity build agents on multiple machines with different configurations.  However for that to work, the machine has to persist the state of the build agent.  The problem is that we will always wipe our virtual machines back to their snapshot state after each build.  So if the agents ran inside the virtual machine they would lose their state changes completely between builds which would probably make TeamCity somewhat unhappy.

Another problem with running separate build agents inside each virtual machine is that we would have to keep all of the virtual machines running all of the time.  Otherwise, the TeamCity primary server would not be able to connect to the build agents.  Running so many virtual machines (dozens?) would be a prohibitively expensive waste of resources.  Moreover, we would need to keep a lot of build agents running and it costs money to buy licenses for more than 3.

Our solution will be to run TeamCity build agents only on the virtual machine host (or in a virtual machine with access to the host).  Each build task will be responsible for starting up the appropriate virtual machine, invoking a remote command inside the virtual machine, gathering results from inside the virtual machine, and then shutting down the virtual machine (discarding changes to its state).
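A rough sketch of what one such build task might do, driven by VBoxManage from the host (the VM and snapshot names are placeholders, and the guest-control syntax varies across VirtualBox versions):

```shell
# Hypothetical build task (names are placeholders; verify the exact
# VBoxManage guest-control syntax against your VirtualBox version).
VM="Win7 x86, VS 2010, R# 5.0"
SNAPSHOT="Snapshot 1.0"

# Roll the VM back to its pristine snapshot state, then boot it headless.
VBoxManage snapshot "$VM" restore "$SNAPSHOT"
VBoxManage startvm "$VM" --type headless

# Run the build step inside the guest and gather the results.
VBoxManage guestcontrol "$VM" run --username builder --password secret \
    --exe "C:\\build\\run-tests.cmd"

# Discard all changes: power off, then restore the snapshot again.
VBoxManage controlvm "$VM" poweroff
VBoxManage snapshot "$VM" restore "$SNAPSHOT"
```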

All we need is one TeamCity build agent!  That one build agent can manage as many virtual machines as we like.

In practical terms, we’ll probably run 3 TeamCity build agents.  That way we can run 3 tasks in parallel to improve performance while still fitting within the license constraints of the free TeamCity Professional Edition.

Odds and Ends

We’re also going to need a few other things.

Err, right… I guess we need to build Archimedes too…

Gallio Build Server Down for Maintenance

The build server has been in an unhappy state for quite some time now (registry corruption due to frequent crashes of the VirtualBox VM).  Now it finally seems to have gone over the edge.

As a result I’ll be rebuilding the build server sometime in the next week.  I think I'll set the new one up a bit differently if I have time.

Long Term Ideas

1. Split up the build based on component dependencies.

Gallio ships with plugins for ReSharper 3.1, 4.0, 4.1, 4.5 and 5.0.  The trick here is that in order to build and test those plugins, all 5 versions of ReSharper need to be installed.  Ouch!  (Ok, we will probably be dropping 3.1, 4.0 and 4.1 soon…)

Of course Gallio also ships with plugins for NCover, TypeMock, AutoCAD, TestDriven.Net, Visual Studio 2008, Visual Studio 2010, etc…

Imagine all of those components installed on the same VM.  Yeah… it’s a little scary.  Fortunately most of them do not conflict with the others, so it mostly works out fine.

Ideally we should just build a bunch of VMs with different groups of dependencies and build them all independently.  That way we can add new dependencies at will without worrying quite so much about how they interact (or worry about the 12 hours it will take to reinstall everything if the VM blows up).

2. Multiple Platforms

We should really be testing on at least the following platforms:

  • Windows 7 64bit
  • Windows 7 32bit
  • Mono on Windows
  • Mono on Linux

Basically that means we’ll need a whole bunch of VMs for various configurations.  Tricky.

3. Add White Box Tests.

Right now there are a bunch of things we don’t test rigorously in an automated way.  Here are a few of them:

  • The installer.
  • The Visual Studio templates.
  • The Visual Studio add-in, besides checking MSTest integration.
  • The ReSharper plugin, besides integration tests for the reflection API.
  • The UIs, besides unit tests for the Models and Presenters.

I spend a lot of time before each release manually testing this stuff.  It just doesn’t scale.

All we really need are a few tests to get sufficient confidence that the system isn’t broken in really boneheaded ways.  For example, Visual Studio shouldn’t pop up a dialog complaining about the Gallio add-in encountering an error during initialization.  (Usually that means the installer is broken somehow.)

4. Faster Release Cycles

All it should take for a release is for us to spend a couple of minutes writing release notes, adding some more documentation to the wiki, and pushing a big red button to promote the current nightly build to stable.

Paradoxically, the first step in this process is that unstable builds need to be more visible.

What I mean is that we should maintain consistently up to date release notes and documentation for our pre-release builds too.  We should get more people using those pre-release builds and perform more sanity-checking with automation so we’re not chasing last-minute bugs.

When it comes time to release, all we should have to do is post a new stable build label to the website and spam the newsgroups.  :-)



Saturday, February 6, 2010

Infrastructure and Documentation

With my new job at Google, I’ve found it harder and harder to spend as much time as I would like on Gallio and MbUnit.  It’s hard to find a good solid block of time to work on stuff without too many other distractions so I’ve been spending my time on little infrastructure projects.

The theory is that if I can build a big enough lever… then I can pretend there really are 32 hours in a day.  :-)

New Stuff

1. Auto-Publish to Web

Most of the Gallio and MbUnit web site content including documentation resources are now published to the web automatically by our Continuous Integration build server.

In other words, any committer can change the website just by checking in some code, and waiting 3-5 minutes for the web site to be built and the changes to propagate.

This kind of setup is really cool.  Seriously, seriously cool.  So cool that I know Oren has blogged about it.

Given that my time resources are dwindling (and will dwindle even more once I start working on Android stuff), automating the whole pipeline is golden.  I can practically pretend that it’s real-time.  Hit commit and move on to another task.

2. New Wiki!

Let me be blunt.  The Gallio documentation is woefully incomplete.

The main problem is that I expected the writing of the Gallio Book to progress more rapidly than it has, so I didn’t spend much time developing alternative documentation resources.

Of course, there have been many offers to help write content for the book but little has materialized so far.  That’s just how Open Source is sometimes.  Lots of people want to help but there is always a barrier to contribution.  In this case I’ve lowered the barrier a bit by automating the book pipeline from commit through to publication on the web, but writing high quality documentation still takes time and preparation (especially in book form).


Say hello to quick & dirty documentation by the masses for the masses.

The new Gallio Wiki is here!

It’s a little empty...  Please help fill it up.  :-)

Also, if you are interested, the old MbUnit v2 Wiki is still available on the Internet Archive on the Gallio website.  We should probably try to port some of this content over, or something.

3. Old MbUnit v2 Documentation Is Back

A couple of years ago, Ben Hall invested quite a lot of time getting Sandcastle and DocProject to play nicely together and assembling the documentation.

Unfortunately when we moved the website a couple of years ago, we lost some content.  Specifically, we lost the documentation site.

Oh… we still had the code to generate the documentation site, of course, but the specific versions of Sandcastle and DocProject that we used were no longer available and the new versions were not compatible with the old code.

Great.  Bit rot.  For two years, I was sitting on a pile of documentation that I couldn’t compile.

Last night I decided to bite the bullet and try again.  It was a total nightmare!  The whole ordeal took me maybe 12 hours.   I spent a good 8 hours trying to upgrade to a newer version of DocProject and getting nowhere fast.

Edit: I should point out that I don't blame my problems on DocProject or Sandcastle themselves. The real problem is that I didn't really understand how it all worked so I wasted a lot of time trying stupid things that failed.

Today I finally gave up and ported the essential parts of the documentation to the latest version of Sandcastle and the Sandcastle Help File Builder.  It’s a little worse for wear but it’s still readable.

Anyways, now the old MbUnit v2 documentation is online again here: MbUnit v2 Documentation.

Maybe we can port some of that content to the wiki or book and update it for MbUnit v3…


Edit: Posted new link to old MbUnit v2 wiki content.

Wednesday, November 18, 2009

Announcing Gallio v3.1 Update 2

Today we are releasing Gallio v3.1 Update 2.  This release fixes several problems on x64 platforms and includes support for Visual Studio 2010 and .Net Framework 4.0 Beta 2.

Download here:

Documentation here:

Earlier release notes: v3.1 update 1, v3.1, all versions


  • Improved startup performance by fixing a problem with pre-generated XmlSerializers.
  • Added support for Visual Studio 2010 and .Net Framework 4.0 Beta 2.
  • Fixed an AccessViolationException in Icarus.  Special thanks to Kent Hansen for identifying the root cause and proposing a fix!
  • Fixed installer bugs on x64 which caused some components to not be installed.
  • Fixed bugs running MSTest tests on x64.
  • Fixed a problem that caused long-running tests to be aborted prematurely by Visual Studio.

Thursday, November 12, 2009

XmlSerializers, ModuleVersionId, ILMerge, and You

I solved a minor mystery today.

Gallio internally uses the .Net XmlSerializer class to load plug-in metadata and save reports.  You probably know all about XmlSerializer already, but maybe you don’t know just how badly using it can impact application start-up performance.

About XmlSerializer Code Generation

The heart of the issue is that XmlSerializer uses code generation to perform serialization and deserialization.  Specifically, XmlSerializer generates some fresh C# code, compiles it out of process with csc.exe, then loads the resulting assembly.  This work takes time: seconds, which feels like forever to a user.  The generated assemblies are not cached across runs so there is a significant performance penalty on every launch.

There is a way to avoid this cost: pre-generate a serializers assembly and distribute it with the application.

To pre-generate a serializers assembly, you are supposed to use SGen.exe a bit like this:

SGen.exe /assembly:MyAssembly /type:MyRootXmlType

This command will emit MyAssembly.XmlSerializers.dll.  All you need to do then is copy it next to MyAssembly.dll and you’re done.  The cost of code generation is gone from every launch.

Two problems:

  1. SGen only lets you specify one root type.  If you use multiple Xml document types in your assembly then you will need to write a custom tool like this: Gallio.BuildTools.SGen.
  2. Make absolutely sure that you keep MyAssembly.dll and MyAssembly.XmlSerializers.dll in sync.  If you change MyAssembly.dll in any way then you must regenerate the serializers assembly.  This is easy to set up once in your build scripts and forget about.


So let’s say you do all of this work and you run your program and you still see csc.exe starting up while your program runs.

To find out why, add the following to your application’s config file.  (eg. MyApp.exe.config)


    <configuration>
      <system.diagnostics>
        <switches>
          <add name="XmlSerialization.PregenEventLog" value="1" />
        </switches>
      </system.diagnostics>
    </configuration>


Then run your program again and look at the Windows Event Log.  There should be some information in there to help you out.  (Another thing to try is fuslogvw.exe.)

Here’s the message that I saw:

Pre-generated serializer ‘Gallio.XmlSerializers’ has expired. You need to re-generate serializer for ‘Gallio.Runtime.Extensibility.Schema.Cache’.

This message is telling me that I violated rule #2 above: the serializers assembly must always be kept in sync with its parent assembly!

Why?  I’ve got fancy build scripts…


XmlSerializer verifies that the serializer assembly is in sync with its parent assembly by comparing the module version id of the parent assembly with an id that it previously cached when it generated the serializer assembly.

Let’s check whether the assemblies are in sync using Reflector.

Gallio.dll is the parent assembly; it has a module version id of “fbe34432-817a-46dd-832f-3e5bc679ecff”.


Gallio.XmlSerializers.dll is the serializers assembly; it has a special assembly-level attribute called XmlSerializerVersion that was emitted during code generation.  It expects the parent assembly id to be “3c420916-3f3c-45cb-a79c-9a4bbc81014a”.

Pedantic note: An assembly can have multiple modules, and each module has its own id.  So the XmlSerializerVersion attribute actually contains a comma-delimited list of all module version ids of the parent assembly sorted in increasing order.
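You don’t even need Reflector for this check; a quick sketch that reads the module version id with reflection (the value changes every time the module is re-emitted, including by ILMerge):

```csharp
using System;
using System.Reflection;

// Print the module version id (MVID) of the assembly containing this type.
// Compare this against the ids listed in the XmlSerializerVersion attribute
// of the serializers assembly to see whether they are in sync.
class Program
{
    static void Main()
    {
        Module module = typeof(Program).Assembly.ManifestModule;
        Guid mvid = module.ModuleVersionId;
        Console.WriteLine(mvid.ToString());
    }
}
```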


Ok, so XmlSerializer thinks the pre-generated serializers assembly is out of sync because the module id is out of sync.  That seems bananas because my build scripts always regenerate Gallio.XmlSerializers.dll whenever Gallio.dll is recompiled.


Well, my build scripts do one more thing: they run ILMerge on Gallio.dll to internalize certain dependencies that might otherwise get in the way of running unit tests.  (Bad things would happen if your unit testing tool exposed a dependency on one version of Mono.Cecil.dll whereas your tests depended on a different version.)

When ILMerge internalizes dependencies, it has to regenerate the assembly.  As a result, the new freshly merged Gallio.dll gets a brand new ModuleVersionId.  Unfortunately we pre-generated Gallio.XmlSerializers.dll before running ILMerge so it is now out of sync.

All we need to do is regenerate Gallio.XmlSerializers.dll after the ILMerge.  Problem solved.
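In build-script terms, the fix is just an ordering constraint.  A hedged sketch (tool paths and flags abbreviated, file names are placeholders): the serializers assembly must be generated from the final Gallio.dll, i.e. only after ILMerge has re-emitted it with a new ModuleVersionId.

```shell
# 1. Compile the assembly.
csc.exe /target:library /out:Gallio.dll /recurse:*.cs

# 2. Merge: ILMerge re-emits the assembly, giving it a NEW ModuleVersionId.
ILMerge.exe /internalize /out:merged\Gallio.dll Gallio.dll Mono.Cecil.dll

# 3. Pre-generate the serializers assembly LAST, from the merged output.
sgen.exe /assembly:merged\Gallio.dll /force
```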

Monday, October 26, 2009

Announcing Gallio and MbUnit v3.1 Update 1

Today we are releasing Gallio and MbUnit v3.1 Update 1.  This is mainly a bug fix release with a few new little goodies.

Download here:

Documentation here:

Earlier release notes: v3.1, all versions

Reminder: The Visual Studio Test Tools integration requires Visual Studio 2008 SP1 or Visual Studio 2010 to work.  If you still don't see tests in the Test View then ensure that your project has the necessary ProjectTypeGuids and that the Gallio add-in is enabled in the Visual Studio Add-In Manager.


  • [New!] Added samples for web testing with WatiN.  In particular demonstrates the creation of a custom [Browser] attribute to support running tests with multiple browsers.
  • [New!] Added samples for GUI testing with White.
  • Improved basic MbUnit samples a little to make them easier to understand.
  • [New!] Added a Sources.txt file to the installation to help users find out where to check out the original source code for the release.
  • Fixed a bug with Hint Directories being ignored by the test runners.
  • Upgraded MVC templates for ASP.Net MVC 2.
  • Fixed bug with transitive dependencies not being considered by [DependsOn] attribute when deciding whether to skip a test.
  • Improved the error message displayed when attempting to use a [StaticTestFactory] in ReSharper which is not supported due to technical limitations.
  • Fixed bug with use of [Factory] attribute on a fixture type or property when the type was not explicitly specified.
  • Improved error reporting and workarounds for cases where the assertions or test context are being used incorrectly to help a test author diagnose the problem.
  • [New!] Added support for using the Delete key to delete items from the Project Explorer.
  • [New!] Pending and Ignored tests are now unchecked by default.
  • Added filtering to the test report view to only include reports that match the current report name format.
  • Fixed some threading issues.
  • Fixed some test filter issues.
  • Fixed test counts.
  • Fixed window positioning issues.
  • Fixed display of test categories.
  • [New!] Added support for viewing embedded text, html and video attachments in Icarus and Visual Studio.
  • Fixed rendering of embedded plain text attachments to preserve formatting.
  • Fixed UTF-8 character encoding issue with html reports.
  • Fixed attachment links and code navigation links in reports on 64-bit platforms and in Internet Explorer Protected Mode.
xUnit.Net Adapter:
  • Upgraded to v1.5 RTM.
AutoCAD Integration:
  • Improved detection of AutoCAD window.
ReSharper Integration:
  • Added support for certain reflection operations which were needed by NUnit to support derived test fixtures in ReSharper.
  • Fixed a bug that caused duplicate NUnit tests to appear in ReSharper.
NAnt Integration:
  • Changed how the severity of certain test results are reported so that NAnt does not incorrectly consider a test run to have encountered a non-fatal error when in fact a warning should have been logged.
TeamCity Integration:
  • Fixed a bug in the test event processing that indirectly caused TeamCity test result reporting to stall when an empty test assembly was encountered.