Monday, August 27, 2007

GUI Bloopers

Today a tester asked me for some help testing a new feature in one of our web applications. I expected him to ask me for help conjuring up some WatiN magic. Nope.

The UI in question used to look like this:

It was recently changed to:

Okay, let's ignore the fact that "Delete Ad" is hidden under a label that says "Update Ad". This is broken in all kinds of ways, but I want to focus on just one: this UI uses a combo box to perform irreversible actions. The potential for user confusion and error is staggering. A combo box should never trigger a high-latency side effect on selection, much less an irreversible one!

I've opened a spec defect to address this issue. The new code is still in the testing environment, so hopefully it'll get fixed before any of our users actually see it.

I should heed explosions.

Two days ago while sitting at the computer, I heard an intense electrical bang. It reminded me vaguely of a capacitor or voltage regulator exploding (both of which I've experienced before). Naturally, I stopped everything I was doing and went out in search of the problem.

No smells. No smoke. The computers and all nearby electrical appliances seemed to be running smoothly without interruption. But later that night my computer spontaneously rebooted. On restart it became apparent that the video wouldn't work except in VGA mode. Uh oh.

At the time, I had a pending software update on the go (iTunes), so I started looking for a software problem with a newly installed system service. Ahh, bingo: the event log says it's having trouble loading the GearASPI driver, and there are reports of that causing system startup problems. OK, so I manually uninstall it. Hmm... still doesn't work except in VGA mode. As expected, reinstalling the video driver accomplished nothing.

Looks like a hardware problem. (Gee, I wonder what that bang was about earlier?) So I put a new video card on order for testing purposes. The current card, an eVGA GeForce 6600, had been giving me problems anyways. Meanwhile, since I could still remote desktop into the machine, I left it alone. Bad move.

This morning the machine locked up again, so I finally opened it up and took a peek. After removing the video card I noticed it had 3 blown electrolytic capacitors on the board. Yikes! Fortunately nothing else was damaged.

Morals: 1. Trust your first instincts. 2. Open up the damn machine as soon as you suspect it's a hardware problem. 3. Don't dismiss loud unexplained noises!

Thursday, August 9, 2007

The Hidden Long-Term Costs of Doing it Right

Ayende wrote a good post about keeping quick hacks out of your programs: No Broken Windows. Go read that and come back.

Keeping your code beautiful has a lot of value. For one, it reduces the likelihood that someone will later decide it is a steaming pile of crap and try to rewrite it, discarding all of your diligent planning, testing, and bug fixing in the process. But I only agree with Ayende conditionally.

My beef is that the absolute cost differential Ayende cites in this case is too small to be meaningful. The difference between a 1-hour task and a 4-hour task is 3 hours of sleep later in the week; quite simply, it is within the noise margin of software task estimation. What's less obvious is that doing the 4-hour task can have ripple-down effects that are orders of magnitude larger. Choosing the 4-hour task may commit you to an endless cycle of choosing 4-hour tasks.

Let's say you have to render a simple tabular report for your customer. Since you only have one report to display, you decide to do it all in application logic. It's beautiful. So that you don't repeat yourself you reuse existing code for validating input, getting the data and calculating summaries. Moreover, you render the report using the same UI infrastructure and conventions used everywhere else so you can deeply integrate the report with the rest of the UI. The result looks great and feels very polished.

There's another choice though. You can bolt on SQL Server Reporting Services or Crystal Reports to generate the report. Then you can easily offer report downloads in CSV, XLS, XML and PDF, which you believe users will appreciate enough to be worth the extra 15 minutes. The downside is that no matter how you do it, the reporting feature will probably be a wart in the code and look like a wart to the user. However, let's suppose it's faster to do it this way because everything is already at hand.

So what do you do? Tough call. Here's what actually happened to me on two different projects.

  • On one project, I decided to keep everything clean and beautiful. It looked great and shiny and only took me 4 hours to implement a report that got sent via email. And then I got the request to provide a similar report in the UI with download support. Crud. Oh, and there's this wish list of 5 other reports too. The cost per report was tremendous: each report was specially tailored for usability, and there was no support for downloads. It was more polished but no one really cared! (I am still backlogged on feature requests!)
  • On another project, I bolted on SQL Server Reporting Services. Oh the pain! We had to munge configuration files to link in custom assemblies and it was a real nightmare getting the automated deployment to work. We wrote an abstraction layer to avoid coupling the application too tightly to Reporting Services but of course it's leaky. The reports look awful next to the rest of the site and we can't enhance them with AJAX or anything. Every now and then the house of cards comes a-tumbling down. But you know what? We now have 40 or so different reports available for viewing or download. (And the list keeps growing.)

So beware. The choices you make in the name of beauty and purity can lead you down avenues with non-trivial long-term costs. Just be sure that the rewards of Doing It Right justify the risks. Sometimes Doing It "Okay" is better, especially if you aren't developing shrink-wrap software. (But don't ever Do It Wrong!)

Tuesday, August 7, 2007

Writing Effective Tests for Components with a Large Number of Dependencies

A while back, Jonathan Ariel posted the following question to the Rhino.Mocks mailing list:

I was wondering, how do you handle tests with a large set of expectations? Suppose that you have some component with a lot of dependencies. For each test case you need different expectations; some are repeated, some are not. How do you avoid having a big test suite with a lot of test cases, each with a lot of expectations? Any ideas?

It can indeed happen that you will end up with lots of dependencies where each dependency is perhaps only lightly used but still requires some effort to mock. This is sometimes a sign that the component under test is doing too much. On the other hand, sometimes you just can't avoid having "manager" services that tie together lots of specialized behavior. Good luck if your component under test is a stateful Observer or Mediator and its transitions depend crucially on the behavior of its dependencies.

Let's assume I can't improve the design to make the problem go away. When this happens, I apply a few different strategies to make my mocks more manageable.

1. First, I factor out the basic usage pattern for mock objects. I use a MockRepository so often I might as well make sure I've always got one if I need it.

/// <summary>
/// Base unit test.
/// All unit tests that require certain common facilities like
/// Mock Objects inherit from this class.
/// </summary>
public abstract class BaseUnitTest
{
    private MockRepository mocks;

    /// <summary>
    /// Gets the mock object repository, creating it on first use.
    /// </summary>
    public MockRepository Mocks
    {
        get
        {
            if (mocks == null)
                mocks = new MockRepository();
            return mocks;
        }
    }

    [SetUp]
    public virtual void SetUp()
    {
    }

    [TearDown]
    public virtual void TearDown()
    {
        if (mocks != null)
            mocks = null;
    }
}
2. Create mocks for each dependency in the test SetUp. Even if the tests use different expectations, they quite often have the same dependencies. My tests can be made more manageable simply by extracting the common initialization concerns.

[TestFixture]
public class MyTest : BaseUnitTest
{
    private ComponentUnderTest cut;

    private IDependency mockDependency;
    private IBananaInventory mockInventory;
    private IOrderDao mockOrderDao;
    private IOrderEmailer mockOrderEmailer;

    [SetUp]
    public override void SetUp()
    {
        base.SetUp();

        mockDependency = Mocks.CreateMock<IDependency>();
        mockInventory = Mocks.CreateMock<IBananaInventory>();
        mockOrderDao = Mocks.CreateMock<IOrderDao>();
        mockOrderEmailer = Mocks.CreateMock<IOrderEmailer>();

        cut = new ComponentUnderTest(mockDependency, mockInventory,
            mockOrderDao, mockOrderEmailer);
    }
}

3. Create helper functions for setting up complex expectations if any. For example, it can happen that I have a data-broker service that I want to mock out. I can't get around the fact that it needs to return some elaborately initialized data object. So I'll make a helper that sets up the expectations to return an adequately populated data object. From the perspective of the test, the 15 lines it took to initialize the data object have been replaced by a single one to call the helper.

    [Test]
    public void GetCustomerNameObtainsNameFromDataProvider()
    {
        ExpectFetchCustomer(123, "Jim", "ACME Widgets");
        Mocks.ReplayAll();

        string name = cut.GetCustomerName(123);
        Assert.AreEqual("Jim", name);

        Mocks.VerifyAll();
    }

    private void ExpectFetchCustomer(int id, string name, string companyName)
    {
        Customer customer = new Customer();
        customer.Id = id;
        customer.Name = name;
        customer.CompanyName = companyName;
        // set 10 other properties I don't care about right now but
        // are needed to satisfy various internal invariants...

        // FetchCustomer here stands in for whatever the data provider exposes
        Expect.Call(mockDependency.FetchCustomer(id)).Return(customer);
    }

4. As a refinement, I'll use RowTests (in MbUnit) to capture different input data for tests that closely follow the same pattern. That works really well when I have a test with complicated setup where only the final expected state varies.

     [RowTest]
     [Row(OrderStatus.Placed, 1, Description="Order for tomorrow")]
     [Row(OrderStatus.Error, -7, Description="Order for last week")]
     [Row(OrderStatus.Error, 365, Description="Order for next year")]
     public void ValidateOrderShipDate(OrderStatus expectedOrderStatus, int daysFromNow)
     {
         // (NOTE: Bad idea to have validation code that uses the current time.
         //        It's often better to write it in a functional style and pass in
         //        a reference time as a parameter, as long as you trust the caller.)
         Order order = CreateOrder("Bananas", 48);
         order.ShipDate = DateTime.UtcNow.AddDays(daysFromNow);

         // Do 101 zany things to set up the mocks just so we can place the order...
         // Maybe the component also sends email notifications, maybe it writes to
         // an audit table in the Db.  It could fire off all sorts of actions that
         // are tricky to stub out.
         Expect.Call(mockOrderEmailer.EmailOrderStatus(   // stand-in for the real notification call
             expectedOrderStatus, "Bananas"));
         Expect.Call(mockOrderDao.WriteAuditRecord("Bananas", 48));
         Mocks.ReplayAll();

         OrderStatus orderStatus = cut.PlaceOrder(order);
         Assert.AreEqual(expectedOrderStatus, orderStatus);
     }

5. Where tests really get gnarly is when I'm working with a stateful reactive object. I just cannot avoid putting it through its entire lifecycle to test it thoroughly. The trick is to make it easy to push the object through its lifecycle by scripting each state change and sequencing them as needed. This strategy is often applied to system testing concerns (where the combinatorial explosion of states is most felt) but it works well in these scenarios too.

It may seem that such situations should never occur in good code. That's not the case. For example, to test how a job scheduling service responds to cancellation of a job just after that job has executed, I can't help but actually script it: submit the job, schedule it for execution, run it, and record the final results. Only then can I actually try canceling it. This assumes I can't just manufacture a mock job that looks like it already ran. There are a lot of dependencies involved here. At the least I need a stubbed out job, but I'll probably need to mock out Db transactions and possibly some factory objects.

The banana ordering service here has similar problems.

    [Test]
    public void TryCancelingTheOrderAfterItHasShipped()
    {
        Order order = CreateOrder("yummy ones", true);
        Assert.AreEqual(OrderStatus.Placed, order.Status);

        ShipOrder(order);
        Assert.AreEqual(OrderStatus.Shipped, order.Status);

        Assert.IsFalse(cut.CancelOrder(order), "Was order canceled");
        Assert.AreEqual(OrderStatus.Shipped, order.Status);
    }

    private Order CreateOrder(string bananaSpecies,
        bool yesWeHaveBananasToday)
    {
        // use yesWeHaveBananasToday to set up the inventory mocks accordingly...
        return cut.CreateOrderForBananas(bananaSpecies);
    }

    private void ShipOrder(Order order)
    {
        // somehow force the order to ship now...
    }

6. Finally, if some tests clearly follow a different pattern of interactions from others then I'll just factor them out into separate test fixture classes. This lets me put more common code into the SetUp and TearDown. Separation of concerns applies to testing too!
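To sketch what that split can look like: the fixture names, dependencies, and constructor arguments below are hypothetical, but both fixtures inherit the MockRepository plumbing from the BaseUnitTest class in strategy 1.

```csharp
// Placement tests and cancellation tests interact with the mocks in very
// different patterns, so each gets its own fixture with its own SetUp.
[TestFixture]
public class OrderPlacementTests : BaseUnitTest
{
    private ComponentUnderTest cut;
    private IBananaInventory mockInventory;

    [SetUp]
    public override void SetUp()
    {
        base.SetUp();
        mockInventory = Mocks.CreateMock<IBananaInventory>();
        cut = new ComponentUnderTest(mockInventory /* ...other mocks... */);
        // expectations shared by every placement test go here
    }

    // [Test] methods that only cover placing orders...
}

[TestFixture]
public class OrderCancellationTests : BaseUnitTest
{
    private ComponentUnderTest cut;
    private IOrderDao mockOrderDao;

    [SetUp]
    public override void SetUp()
    {
        base.SetUp();
        mockOrderDao = Mocks.CreateMock<IOrderDao>();
        cut = new ComponentUnderTest(mockOrderDao /* ...other mocks... */);
        // lifecycle-scripting helpers for the cancellation tests live in this class
    }

    // [Test] methods that only cover canceling orders...
}
```

Each fixture's SetUp stays small because it only has to serve one interaction pattern.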

Edit: Fixed some broken markup in the code.

Saturday, August 4, 2007


I'm pulling this discussion over from some comments on Ayende's blog because I think it deserves more thought.

So I said:

However... it's been my observation that architecture patterns are much more likely to be embraced when:

1. They can be explained in a few sentences without resorting to jargon like separation of concerns, single responsibility principle, decoupling, unit testing, and Smalltalk. ;-)

2. The average developer has a multitude of examples to learn from.

3. In addition to being highly recommended, the architecture is perceived as "the right way" or "the natural way" to use a given platform. This essentially means getting first mover advantage and uniform acceptance when a new platform is released. Consequently most of the ability to dictate these things is in the hands of the platform vendor.

To which Steve replied:

1. Those principles you speak of as jargon are the core concepts of what patterns are about. I feel the reverse of your statement: understanding those concepts should come FIRST before even attempting to understand a pattern. I think you have the cart before the horse.

To me the average developer should be trained similar to how electricians are - they study under master electricians and learn the trade. These concepts such as #1 would be taught first.
(Note: We're in agreement insofar as various MVC styles are concerned.)

I agree it would be better if these concepts were understood first... but unfortunately that's not the case today. In truth, I don't expect it ever to be the case. Not everyone was born to be a master electrician: most people just run extension cords around the house. I imagine after a few years on the job many electricians won't remember much more from their training than how to run wires from source to switch to plug and a few rules of thumb for estimating capacity for large appliances. Much of this is codified in building codes anyways. So if there is a shortage of discipline and deep understanding of core principles, what can you do?

  • Improve the quality of education. (When? How? For whom? With what resources?)
  • Raise the minimum threshold of acceptability for admission into the professional trade. (Who's the guildmaster for Software Engineering?)
  • Make foolproof tools that provide on the job training. (Hotwire the keyboard to 220V AC, that'll teach 'em to ignore compiler warnings! Ha Ha Ha!)
  • Provide better examples and just hope nothing really bad happens.

I fear that the latter is really the best solution. Somehow best practices must be demonstrated in actual practice so that practicing practitioners can practice these practices. *ducks*

But really... Screwed up software design doesn't usually deliver clear, direct and timely feedback. Unless you've got a very strong design team and peer review everything, most problems only show up in hindsight. That's fine for throwaway code and most internal tools but it can be costly when you plan to maintain and extend a large product over decades.

Ensuring that all new programmers receive a firm grounding in software design principles (and sufficient practice to keep their skills fresh) is probably prohibitive. Therefore I believe the most effective approach is simply to flood the networks with coherent examples, standards and documentation that don't necessarily require -- but promote -- deep understanding to adapt and adopt.

Wednesday, August 1, 2007

Attention Recruiters

Attention Recruiters: You have 30 seconds to impress me with your proposal in my voicemail box. Please use them wisely. Here are a few tips.

  • At least make sure to clearly enunciate the name of your company. You must say that name a hundred times a day. At least this once please make sure it is intelligible! Unfortunately 80% of the calls I've received thus far have failed at this simple task.
  • I will Google you. I don't care what your phone number is. I want your web site and company name (see above). I want to know what cool projects you're working on. I want to know what your work environment is like. Tell me something interesting besides the generic company spiel. Entice me to call back.
  • Don't call me about "an exciting opportunity in [my] field." Where did you get my phone number? I don't want to hear from you unless you know at least something about who I am. Prove to me that you are not just randomly dialing your way through the company directory. If someone recommended me please tell me who did. I have a blog and I actively participate in several open source communities. You should be able to learn something about me.
  • If you can do this, I might just call you back...

    Edit: I've been receiving unsolicited calls from recruiters on my office voicemail. So far only one has managed to pique my interest. Even so, the recruiter said: "We're a company of 300 people but we still feel like a startup." What? You mean chaotic and disorganized but too large for me to get a valuable equity stake? C'mon...