Wednesday, June 20, 2007

Silliness.

This came up in a chat with a friend today. Sure, code is sometimes hacky, but it generally doesn't outright declare itself horrendous...

L: thing about [the party] is that it is hrrendously crowded
L: er... horrendously... or however it is spelled
L: brb.... gotta let puppeh out
Me: ;) Not a word I used that often.
Me: Doesn't show up in Intent-Revealing Code very often.
Me: I mean when did I ever want to say EnsureWidgetWibblesHorrendouslyDuringTesting(Widget w, WibbleFunction wf)

On the other hand, sometimes The Daily WTF makes me wonder whether code should be plastered with disclaimers and apologies just in case...

Tuesday, June 19, 2007

Grammar.

I was working on a reply to Ayende's post on Working software over comprehensive documentation and I realized that I just don't care!

Okay, that's not true. Fact is, I've seen "literate" software arguments tossed around topsy-turvy in many forums over many years. It's a bit like TABs vs. SPACEs or Vi vs. EMACS. These arguments generally fail to acknowledge the simple fact that people work differently (and there's no accounting for poor taste and bad habits). Just kidding.

Instead of debating whether internal and external documentation deserve equal attention in the code, here are some of my observations on what constitutes effective documentation.

  • It defines all of your nouns.
    What do objects represent?
  • It defines all of your verbs.
    What do services do? How do objects behave?
  • It defines all of your pronouns.
    How are objects externally identified?
  • It defines all of your modifiers.
    How do configuration settings and option parameters affect services and objects?
  • It defines all of your articles.
    What is the plurality of your components? Are they singletons?
  • It defines all of your possessives.
    How are components aggregated?
  • It defines all of your prepositions.
    What are the associations among components?
  • It defines all of your phrases.
    How are useful modules constructed from the components provided?
  • It defines all of your conjunctions.
    How are modules combined?
  • It defines all of your sentences.
    How are complete self-contained applications assembled from the modules provided?
  • It defines all of your grammatical rules and exceptions.
    What are the contracts to be satisfied by any given component? What are the constraints?
  • It succinctly expresses clear, coherent and cohesive ideas.

Saturday, June 16, 2007

More on 4GLs in UI Design

[Ayende]
The classic issue that we faced with 4GL is that they are really good for what they are supposed to do, and really bad for general purpose one. I with you on extensible frameworks, certainly, but I think that this is not an applicable method to develop most applications. Unless you build to extend, there is a high cost of it.

What if it just becomes HOW you build applications? Extension is not the only (or even the best) motive.

For example, using an Inversion of Control container like Castle Windsor is great if you want to build an extensible application because you can easily wire in new components and all of the dependency injection is taken care of for you. However, it also solves a lot of problems in closed applications. In fact, I imagine it is most often used in the latter context.

I believe most applications waste a lot of time reimplementing common UI concerns and design patterns that have already been addressed by others. This happens because UI implementors are working with Forms and Controls and plain old controller classes all of the time. The higher level patterns don't emerge unless a significant investment in infrastructure occurs. Once you have the right infrastructure it imposes no additional burden. You have simply learned how to work differently.

I contend that you work better and you produce a higher quality product.

For example: How many times do we need to keep hacking together global undo/redo support, menu managers, background job managers, heterogeneous views, and progress monitors? It's ludicrously wasteful.

User Interface Implementation Concerns (and 4GLs)

Ayende and I are having an interesting little discussion about various User Interface implementation concerns. I figured it would be good to copy it over here for further discussion. (I need to start using blog trackbacks instead of always replying directly in comments.)

In response to my Information in Software post, he says:
[Ayende]
It is worth point out that most organization can't agree on what something as fundamental as the Customer within the organization. This is because different parts of the organization are responsible for different aspects of the customer, and they have radically different needs.

As Jeff points out, software that is open & extensible usually carry a price tag of six figures as well as a hefty customization fee. That is just the nature of the beast, because being a generalist costs, because the business doesn't care if you you can handle fifty different ideas of customers, they want your to fit their idea of customer, do it well, and fit with the different view of a customer within the organization. That doesn't come easily.

We certainly agree on this point. I don't expect common base-line models to appear. However, in applications like Eclipse, I have seen how having a common meta-model and a model-based application framework works wonders. In Eclipse, the underlying model is usually adapted to a common meta-model that is then used for presentation. Likewise, the model objects themselves are adapted in various ways to make them usable in different contexts. So I gave this example.

[Jeff]
You'll see UI frameworks where this has been done. JFace, used by Eclipse, defines IContentProvider interfaces to adapt models of various kinds for presentation purposes. So there are IListContentProviders and ITreeContentProviders.

Likewise, the model objects themselves are often adaptable to other formats via interfaces like IAdaptable. Thus a Java Package object can be adapted to a Directory Resource and manipulated in any of the ways a directory might be manipulated. Adapters are often contributed by external plugins to add new interpretations to existing objects so that they can be used in a variety of different contexts.
These are very powerful approaches indeed! Imagine what would happen if all applications were built like this?

Ayende then points out the limitations of this approach and is concerned about the ultimate cost.

[Ayende]
In the UI, it is possible to do so because you have a limited set of things that you can display, lists and tree and objects are just about it. Likewise for things that _can_ be adapted. Hierarchy to directory is very natural thing, but it can't work for a graph, for instance.

> Imagine what would happen if all applications were built like this?

I do, imagine the cost for this. That is the problem.

But I'm not so sure. I've seen it work well before... At least when you can get everyone to play by the same rules.

[Jeff]
It's been my experience writing plugins for Eclipse that with a well-designed core set of abstractions the costs are much diminished. In fact, the cost of integrating a few extra actions and views into a platform like Eclipse appears to be much less than it would be to build all of the action and view management code you would need on your own.

Witness the difficulty of building an add-in for Visual Studio versus Eclipse. Visual Studio is much more difficult to extend because it provides comparatively primitive services. Extending views with new actions requires individually hooking into the menus and toolbars for those views. There's no way to simply contribute an action on the basis of the underlying model object that is being presented and selected by the user.
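To make that contrast concrete, here's roughly what a declarative contribution looks like in Eclipse: an action registered against a model type via the org.eclipse.ui.popupMenus extension point, so it appears wherever such an object is selected. (The ids and class names below are invented for illustration.)

```xml
<extension point="org.eclipse.ui.popupMenus">
   <!-- Contribute an action to every view that presents an IFile,
        including views that merely *adapt* their objects to IFile. -->
   <objectContribution
         id="example.fileContribution"
         objectClass="org.eclipse.core.resources.IFile"
         adaptable="true">
      <action
            id="example.frobAction"
            label="Frob File"
            class="example.FrobFileAction"/>
   </objectContribution>
</extension>
```

No menu or toolbar is ever hooked by hand; the platform routes the action to wherever a matching object is selected.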

At the risk of suggesting we all go framework-happy, I am curious whether the overall quality of software applications would improve if they were built atop a better platform rather than each being built pretty much from the ground up.

For example, what would be the benefits of using common abstractions and DSLs to specify and implement common participants in a UI such as the models, views, actions, undo/redo mementos, background jobs, etc...
In other words, should we be writing applications using a 4GL?

Does this make sense?
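To make the content-provider idea from the JFace discussion above concrete, here is a minimal sketch in Java. The interface and class names are my own simplifications, not the actual JFace API: the point is just that a generic viewer renders any model through an adapter.

```java
import java.util.List;

// Simplified stand-in for JFace's content-provider idea (illustrative names):
// the viewer never touches the model directly; an adapter mediates.
interface ListContentProvider<M> {
    List<String> getElements(M input);
}

// Some domain model the viewer knows nothing about.
class Playlist {
    private final List<String> tracks;
    Playlist(List<String> tracks) { this.tracks = tracks; }
    List<String> getTracks() { return tracks; }
}

// Adapts a Playlist for presentation in a generic list viewer.
class PlaylistContentProvider implements ListContentProvider<Playlist> {
    public List<String> getElements(Playlist input) {
        return input.getTracks();
    }
}

public class AdapterDemo {
    // A generic "viewer": it only knows about the provider abstraction.
    static <M> int render(ListContentProvider<M> provider, M input) {
        int count = 0;
        for (String element : provider.getElements(input)) {
            System.out.println("* " + element);
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        Playlist playlist = new Playlist(List.of("Intro", "Outro"));
        render(new PlaylistContentProvider(), playlist);
    }
}
```

The viewer never learns what a Playlist is; teaching it to display a file system or a query result is just a matter of writing another provider.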

Edit: The Blogger Preview window sucks. It displays extra line breaks where there are none even though I disabled automatic insertion of <br/> tags. I can't trust it.

Friday, June 15, 2007

Information in Software

Warning: I've been reading Edward R. Tufte's books and following infosthetics lately. I may be a little Info-Nuts right now.

Information Ocean

How many different ways can you think of for navigating, selecting and manipulating a rich ocean of information? Odds are you're now wondering how the data is structured. Is it a number, a list, a table, a hierarchy, a graph, a set, a formula, a diagram, a photograph, a description, a definition, a summary, an essay, a book? Who's the information for? What does it mean? How is it used? Or maybe you're just one of those types who dreams about flying through cyberspace...

The presentation of information must take into account its content, its form, its context, its provenance, its audience, its purpose. A hundred decisions must be made before information is rendered concretely for consumption and for interaction. This process is extremely tedious, fickle and costly so it had better yield an effective product or the effort is wasted!

Back to Software

Each and every software application is a tool for manipulating information. Consequently each one embodies countless assumptions, decisions and tradeoffs concomitant with the management and production of that information in that context. Typically a team of Architects, Domain Experts, Product Managers, Engineers, Clients, and Users have worked very hard to define the logical data model that represents the information content of the application, describe the stories to be captured, lay out the User Interface, specify the actions to be provided, and develop a strategy to persuade and cajole stakeholders to support and use the project by pandering to their information needs. Software design is all about information!

But It Goes Wrong...

And lo' the Engineer said, "Let there be a Tri-State Tree on thy file system chooser for it hath Tree-nature." And the User saw it and righteously complained, "But I want to backup 6 files scattered across my hard disk and network file shares. This Tree is too deep for me to find them or to verify my selections when I am quit with them! Why can I not put my files in a List?" And the Engineer replied, "Because that would take an extra 6 weeks. Wait for version 2."

This dialogue bothers me. I have had this very complaint about numerous applications including file managers, backup tools, music players, and IDEs. I was reminded of it by Rezlaj's comments on Ayende's recent Tree post (though it sounds like Ayende's use of a Tree is perfectly justified here).

It's easy to complain about the lack of foresight on the part of the designers of these applications. Just the same, often I've run into problems where I wanted to manipulate information in a different manner than was originally intended. All too often my actions send me out into a "corner case" off the beaten path.

Essential and Accidental Difficulties?

Why does a designer have to decide exactly which kind of controls to use in an application anyways? Why must the precise layout and behavior of each presentation element be predetermined? Why cannot the information managed by the application be manipulated and presented more dynamically? Oh right. It's hard to do otherwise!

Is it essentially hard? I can think of plenty of architectural choices that make information access easier or harder. Separating the application's presentation and model tiers makes things easier whereas tightly coupling them makes things harder. Leveraging a framework for managing docked views, editors and menus makes things easier whereas rigidly laying out all UI components makes things harder. So at least some of the difficulty is accidental.

I believe the essential problems for software are the same as those that occur in other information-rich contexts. However, I also believe software has an advantage. Software supports richer, more dynamic interactions than any other form of media. Moreover, software can be enriched at any time by the contributions of an information-savvy community.

On the other hand, it often happens that software must satisfy competing informational objectives. Google Books can only display a limited amount of its contents in one sitting to discourage copyright abuse. Therefore its User Interface must actively restrict its features when it comes to providing access to the underlying information. This is not an especially productive activity, but it's the nature of the business.

An Information Bazaar

There is no technical reason preventing software applications from adopting common standards in the representation of their information. There is no technical reason that external visualizations could not be supported most anywhere. After all, why should software always build information engines from scratch? Would software interoperability improve if we could just agree on common meta-classes for data structures?

Software is Over-Specified

Software is over-specified! When viewed as an embodiment of a pure information system, it is quite irrelevant which precise control is used to interact with some piece of information. The software is performing some abstract function of displaying, summarizing, selecting, highlighting, demonstrating, navigating, and visualizing its information.

Could software offer a richer user experience if less effort were devoted to tediously stuffing a Tri-State Tree in a place where a List or a Search Query or a Shopping Cart really wanted to be? They would all achieve the same purpose after all. Why does the choice need to be fixed in place? Can the underlying framework support multiple options (including 3rd party contributions) and let the application provide hints as to which one to use by default? Can the framework be enhanced with a theory of information design much like an expert system so it can make "intelligent" recommendations about how to structure the information?

There are problems.

  • Would any time actually be saved or would implementation complexity simply get out of control?
  • Would the users benefit in any way from the added control or would they be frightened and confused by it?
  • How would the artist's intent be reconciled with the application's dynamic presentation?
  • How would the designers ensure a consistent and high quality user experience when so many unknowns may be left up to the framework to decide?
  • How would the information be represented in such a way as to be consumable by any number of generic views?
  • YAGNI?

Ziggurat

In any case, it bothers me profoundly that software is so vertical. There is too little common ground. Each application contains a wealth of information, but that information remains steadfastly inaccessible. Those very few applications that are open and extensible are expensive to produce and don't always meet expectations. Mashups are cute examples of software interoperability, but they don't address the essential problems.

We can design APIs but thus far we have had much difficulty designing information...

Wednesday, June 13, 2007

All together now: FlexUnit, Cassini, WatiN and MbUnit

Last week I decided to work on being a little bit more sophisticated about testing my ActionScript 3 code. I was extracting a bunch of stuff out of my Odin project to add to the Castle.FlexBridge client-side components (including the Inversion of Control container I mentioned earlier).

Having some automated tests is always better than having none! So I tried out FlexUnit. It's a minimalistic port of JUnit for ActionScript with a simple Flex-based test runner. Works like a charm. See Adobe's article for more information.

So then the tricky thing was to make sure I could run the tests all of the time. Unfortunately, there isn't a nice command you can easily integrate into your build tool chain to run your FlexUnit tests. No problem! I'll just write an MbUnit test that starts Cassini to host the SWF file, fires up a browser with WatiN, runs the tests and reports the results. Easy as pie!

And when I'm done the results end up in my MbUnit report. It's not very sophisticated but it works!

FlexBridgeClientSideTestSuite.SetUp.RunFlexBridgeUnitTests.TearDown 
 5906.401ms 224.96 Kb, 1 
Console Output
1) [PASS] testEmptyClass (castle.flexbridge.tests.reflection::ReflectionUtilsTest)
2) [PASS] testKitchenSinkClass (castle.flexbridge.tests.reflection::ReflectionUtilsTest)
3) [PASS] testKitchenSinkInterface (castle.flexbridge.tests.reflection::ReflectionUtilsTest)
4) [PASS] testEmptyInterface (castle.flexbridge.tests.reflection::ReflectionUtilsTest)
5) [PASS] testDefaultComponentModelBuilder (castle.flexbridge.tests.kernel::DefaultKernelTest)
6) [PASS] testSingletonWithConstructorDependencies (castle.flexbridge.tests.kernel::DefaultKernelTest)
// --- snip --- //

If you want to see how this all gets plumbed together, check out the Castle FlexBridge source. See here for more details: Castle.FlexBridge.

Services vs Frameworks.

Oren posted about seeking the right dependency level in an application's data tier. Go read it.

As I see it, what he's observing reflects the essential difference between a service and a framework. A service, like a Data Access Object, can usually be wrapped or replaced easily because it doesn't change how you structure your application. A service just sits on the side-lines waiting to be called into action. A framework, like NHibernate, is very different. A framework wants to be a layer that sits underneath your application and empowers it to do wondrous things.

[I'm sure someone can come up with a crazy sports analogy here...]

Rule of thumb: Don't ever try to put a framework in a box unless portability really is your primary concern (rather than some favourite faery tale). If you find you need to do that then quite probably you need to revisit your assumptions or find/build a little service on the side instead.
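A tiny sketch of the distinction (all names invented): a service is something your code calls, so it's trivial to wrap or swap; a framework calls your code, so your application is structured around it.

```java
// A *service* sits on the sidelines: your code calls it when needed,
// so wrapping or replacing it is easy.
interface CustomerDao {                     // service: the callee
    String findName(int id);
}

// A *framework* inverts control: it owns the flow of execution and calls
// *you* back, so your code is shaped around it and resists being boxed in.
abstract class CrudFramework {              // framework: the caller
    final String process(int id) {
        // Framework-owned lifecycle: load, then transform (elided), etc.
        String name = load(id);
        return "processed:" + name;
    }
    protected abstract String load(int id); // your code plugs in here
}

public class ServiceVsFramework {
    public static void main(String[] args) {
        CustomerDao dao = id -> "customer-" + id;      // swapping is trivial
        System.out.println(dao.findName(7));

        CrudFramework app = new CrudFramework() {      // you extend the framework
            protected String load(int id) { return "customer-" + id; }
        };
        System.out.println(app.process(7));
    }
}
```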

Friday, June 8, 2007

Threading in UIs.

Today's Problem

Andrew Stopford asked me a question today about the role of Threading in UI design. He pointed out that this topic is not very often discussed in treatises on UI design patterns such as MVC. I concur. Here are some thoughts of mine.

Threading Is Just An Engine

Firstly, I don't believe we should get too caught up with threading. Threading is just the engine used to maintain state and interleave execution of sequential processes. The problem with program design isn't usually that we have lots of threads running concurrently, it's that we have multiple Actors running around doing stuff.

Actors Run Around Doing Stuff

Let's say Alice and Betty both head down the hall to buy coffee from the vending machine. Alice gets there first. She whips out her changepurse and starts tediously counting out nickels and dimes. Betty arrives later but she's smarter. She has a dollar coin tucked behind her ear for just such coffee emergencies. Several things can happen here:

  • Betty might queue up behind Alice and be stuck waiting for a while to gain access to the vending machine.
  • Betty might barge in front of Alice and buy her coffee first.
  • Betty might wait for Alice but just as Alice finishes Colin rushes in between her and the machine and starts counting out his change. Next it's Dennis, then Elmo and Francine.
  • Betty might wait for Alice but suddenly Medusa shows up and petrifies Alice right in front of the vending machine. So much for coffee this millennium.
  • Betty might wait for Alice only to discover that the vending machine is broken or out of coffee when she gets her turn.
  • Alice and Betty may pay for their coffee using the honor system in a nearby deposit box. They both drop in $1 then queue up to get coffee from the machine. After Alice gets her cup, Betty discovers that there's no more coffee. Unfortunately, she cannot void her transaction because the deposit box is locked.
  • The coffee machine might be equipped with a special extra serving station for just such occasions so Alice and Betty both get their coffee at the same time!
  • etc...

The real issue here is that multiple independent actors must share common resources through at least part of the transaction. Moreover, their actions affect the state of these resources and must be reconciled with the effects of other actors.

Zen and The Art of Model-View-Controller Maintenance

So what happens to our hypothetical UI when we've got multiple actors running around doing stuff? The View is observing the Model and dutifully informing the user about what's happening. The Controller is updating the Model in response to user actions. Background threads are playing music, downloading files, crunching numbers and updating the Model as they progress. The Model needs to somehow coordinate all of the concurrent observers and actors. The View needs to be able to handle updates to the Model that are triggered by multiple actors (possibly in different threads). The Controller needs to be able to carry out transactions exactly as they were intended the instant the user clicked on some button, despite everything else going on.

Wow! This is hard stuff! We need to step back, take a deep breath and meditate on what's going on here. The Actors in the system need to perform consistent transactions upon the Model. The Observers in the system need to perceive a consistent Model state at all times. Sooner or later everything hits the Model. Ah!

Designing Models for Concurrent Use

Everything depends on the Model. It follows that the design of the Model will dictate how the system behaves. Here are a few different approaches.

Note: These are all names and patterns I have just made up.

Option 1: Shared Model

The simplest way to make a Model safe for concurrent access is to add locks. Actors lock portions of the Model for the duration of a transaction. Observers lock portions of the Model while they read state and update themselves. The implementation probably involves reader/writer locks managed by the Model.

  • Pro: Existing models can easily be retrofitted for shared use by judicious application of locks.
  • Pro: Does not require any fancy or complicated framework support.
  • Con: Single point of failure.
  • Con: Priority inversion is rampant. An unimportant background task can trump an important foreground task or cause the application to become unresponsive.
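A minimal sketch of the Shared Model in Java (illustrative names), using a reader/writer lock so many Observers can read concurrently while Actors write exclusively:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A lock-guarded Shared Model for a toy music player. Actors take the
// write lock for the duration of a transaction; Observers take the read
// lock while they copy out state to update themselves.
class SharedPlayerModel {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private String track = "none";
    private boolean playing = false;

    // Actor: the whole transaction happens under the write lock.
    void play(String newTrack) {
        lock.writeLock().lock();
        try {
            track = newTrack;
            playing = true;
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Observer: reads a consistent view of the state under the read lock.
    String describe() {
        lock.readLock().lock();
        try {
            return playing ? "playing " + track : "stopped";
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

Note how easily an existing model retrofits this way, and also how nothing here stops a slow background Actor from starving the UI thread at the write lock.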

Option 2: Transacted Model

Another solution is to negotiate for access to a Shared Model on behalf of delimited transactions. These transactions are often called Jobs. Each Actor in the system may start up any number of Jobs. The Jobs have associated Scheduling Rules that determine whether they can safely be run concurrently or if they must be sequenced because they require exclusive access to common resources. Ordinarily, the user is unaware of these Jobs except perhaps for the progress monitors or busy indicators that may be floating around. However, when the user attempts to initiate a Job that cannot be run immediately, the UI enters a modal state to inform the user that the Job has been blocked from executing because it conflicts with other Jobs already running. Often the user can then choose to cancel the Job, put it in the background or wait for it to complete. So there is still just a single Shared Model, but transactions are reified explicitly and can be controlled.

  • Remark: This is how Eclipse manages background jobs.
  • Pro: Provides more control than the plain Shared Model.
  • Pro: Jobs always run through to completion unless they are cancelled or encounter an error.
  • Pro: Jobs can be prioritized to avoid priority inversion. Unimportant background jobs can even be pre-empted, deferred or canceled to preserve responsiveness.
  • Con: This requires a lot more sophistication because each transaction must be declared as a job.
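Here's a toy sketch of reified Jobs with Scheduling Rules, loosely in the spirit of Eclipse's job manager (all names are invented, and the work a Job performs is elided). A real manager would queue the blocked Job or prompt the user rather than simply refuse:

```java
import java.util.ArrayList;
import java.util.List;

// A rule decides whether two Jobs may run concurrently.
interface SchedulingRule {
    boolean conflictsWith(SchedulingRule other);
}

// Grants exclusive access to one named resource.
class ResourceRule implements SchedulingRule {
    final String resource;
    ResourceRule(String resource) { this.resource = resource; }
    public boolean conflictsWith(SchedulingRule other) {
        return other instanceof ResourceRule
            && ((ResourceRule) other).resource.equals(resource);
    }
}

// A reified transaction; the actual work it performs is elided here.
class Job {
    final SchedulingRule rule;
    Job(SchedulingRule rule) { this.rule = rule; }
}

class JobManager {
    private final List<Job> running = new ArrayList<>();

    // Returns false if the Job conflicts with one already running
    // (i.e. the Job is blocked and the UI could go modal here).
    synchronized boolean tryStart(Job job) {
        for (Job active : running) {
            if (active.rule.conflictsWith(job.rule)) return false;
        }
        running.add(job);
        return true;
    }

    synchronized void finish(Job job) { running.remove(job); }
}
```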

Option 3: Transacted Model With Snapshots

This is a refinement of the previous model. We still represent transactions explicitly as jobs. However, instead of having one common Shared Model, we provide each transaction with a Snapshot. The Snapshot captures the state of the Shared Model at the time the transaction began. The transaction periodically checkpoints or commits the changes made to its Snapshot back to the Shared Model. If a conflict occurs, the transaction may be rolled back and possibly retried. This pattern trades implementation complexity for reduced locking in the common case.

  • Pro: Read-only transactions initiated by Views cannot block read-write transactions initiated by Actors.
  • Pro: Views can spend more time updating their state from a Snapshot and still be guaranteed to observe a fully consistent state.
  • Con: Much more complex implementation (unless you're doing functional programming).
  • Con: It's not always clear what to do if a conflict occurs during transaction commit.
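A bare-bones sketch of the snapshot scheme using optimistic version checks (illustrative names): each transaction works against a Snapshot, and commit fails if the Shared Model has moved on in the meantime:

```java
// A Shared Model that hands out versioned Snapshots. Transactions read and
// mutate their Snapshot freely; commit succeeds only if nobody else
// committed first, otherwise the caller must roll back or retry.
class VersionedModel {
    private String state = "initial";
    private long version = 0;

    synchronized Snapshot snapshot() {
        return new Snapshot(state, version);
    }

    // Returns false on conflict: the snapshot is stale.
    synchronized boolean commit(Snapshot s, String newState) {
        if (s.version != version) return false;
        state = newState;
        version++;
        return true;
    }

    synchronized String read() { return state; }

    static class Snapshot {
        final String state;   // consistent view for Observers to render from
        final long version;
        Snapshot(String state, long version) {
            this.state = state;
            this.version = version;
        }
    }
}
```

Views hold no locks while rendering from their Snapshot, which is exactly the read-doesn't-block-write property claimed above.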

Option 4: Intentional Model

One idea is to add transitional states to the system. For example, when the Controller of the music player sends a request to the music playing loop to Stop, the Shared Model may enter a "stopping" state. Eventually the Shared Model will transition to the "stopped" state when the music playing loop actually does stop.

This technique works by enabling all transactions to be performed atomically by reflecting their intentions upon the Shared Model. The Shared Model enters a transitional state until the intentions are carried out. This eliminates some of the contention that might normally occur with a standard Shared Model because it only needs to be locked long enough to record the intentions.

  • Pro: In practice this works very well when the number of transitional states is small.
  • Pro: Locking overhead can be dramatically reduced.
  • Con: This is not really sophisticated enough to handle some cases well.
  • Con: All Actors and Observers now need to handle more states.
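For the music player example, the Intentional Model might look like this minimal sketch (states and names invented): the Controller records the "stopping" intention atomically, and the playback loop completes the transition later:

```java
// An Intentional Model for a toy music player. The lock is only held long
// enough to record or acknowledge an intention, never for the whole
// stop operation.
class PlayerModel {
    enum State { PLAYING, STOPPING, STOPPED }

    private State state = State.PLAYING;

    // Controller thread: records the intention and returns immediately.
    synchronized boolean requestStop() {
        if (state != State.PLAYING) return false;
        state = State.STOPPING;
        return true;
    }

    // Background playback loop: acknowledges when it has really stopped.
    synchronized void confirmStopped() {
        if (state == State.STOPPING) state = State.STOPPED;
    }

    synchronized State state() { return state; }
}
```

Observers now have a third state to render ("Stopping...") which is exactly the extra burden listed in the cons.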

Option 5: Replicated Model

Another trick is to replicate the state across all Actors. For example, the Controller of a music playing application may maintain state about whether the play and pause actions are enabled. The music playing loop running in the background has its own state about whether the music is actually playing or not. The Controller and the music playing loop send messages to each other asynchronously to update their copy of the Replicated Model. Periodically they may rendezvous synchronously as well.

  • Remark: This is the keystone of Communicating Sequential Processes (CSP) architecture.
  • Pro: This approach is easiest to understand. Each component in the system encapsulates its own model state that it keeps up to date through interactions with other components.
  • Con: Some models cannot be replicated easily.
  • Con: Because there is no authoritative Shared Model, race conditions during asynchronous updates are particularly problematic.
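A small CSP-flavoured sketch (illustrative names): the Controller and the playback loop each keep their own "playing" replica and reconcile them purely by exchanging messages over channels:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Replicated Model in CSP style: no shared state, only message channels.
public class ReplicatedDemo {
    // Sends "play" to the player loop, waits for its acknowledgement, and
    // reconciles the controller's replica with the player's reply.
    static boolean roundTrip() throws InterruptedException {
        BlockingQueue<String> toPlayer = new ArrayBlockingQueue<>(8);
        BlockingQueue<String> toController = new ArrayBlockingQueue<>(8);

        Thread playerLoop = new Thread(() -> {
            boolean playing = false;                   // the player's replica
            try {
                while (true) {
                    String msg = toPlayer.take();
                    if (msg.equals("quit")) return;
                    playing = msg.equals("play");      // update local copy...
                    toController.put("ack:" + playing); // ...and notify peer
                }
            } catch (InterruptedException ignored) { }
        });
        playerLoop.start();

        toPlayer.put("play");
        boolean playingReplica = toController.take().equals("ack:true");
        toPlayer.put("quit");
        playerLoop.join();
        return playingReplica;                         // controller's replica
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("controller sees playing=" + roundTrip());
    }
}
```

The synchronous take() here is the "rendezvous"; drop it and update the replica lazily instead, and you meet the race conditions listed in the cons.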

How to Choose

It depends. What responsiveness guarantees do you want to provide to the user? How important is it that the progress of competing Actors be reified and controlled? How much contention do you expect and how badly will it impact the desired user experience? Do you have adequate framework support? How much sophistication can you manage in your UI layer?

Thursday, June 7, 2007

Rich Client UI with Inversion of Control and Dependency Injection. Oh my!

Background

I've been working on an extensible system monitoring application called Odin. It has a Rich Client UI written with Adobe Flex 2.0.1 and a bunch of ASP.Net Web Services on the back-end. One of the challenges has been to unify the conventions used across the front-end and the back-end.

I want it to feel like I'm writing one application split across two tiers rather than two completely different applications that happen to share a web service protocol.

Plugin Architecture

Since the monitoring application is extensible both on the client and on the server, it needs some kind of plugin model that spans both worlds. Plugins are really just a collection of late-bound components. This is a perfect situation to leverage an Inversion of Control container to do all of the heavy lifting.

IoC and DI on the Server

On the server side, I use the amazing Castle Windsor container. Initialization proceeds in four phases.

  • First, I create a WindsorContainer using configuration information sourced from the application's Web.config or *.exe.config file. This configuration registers all of the foundational components I need to assemble the application such as a logger and plugin resolver.
  • Second, the registered IPluginResolver component goes out to discover all plugins. Currently the DefaultPluginResolver just looks for *.plugin files. These files are basically just Windsor configuration files with a few extra extensions. The plugin resolver then constructs an IPluginDescriptor object for each plugin it found.
  • Third, I create another WindsorContainer as a child of the first one. The nested container will receive all of the components that are registered by the plugins as they are loaded. This is the container that will be used to resolve components for the rest of the application's lifetime.
  • Fourth, the IPluginLoader finally runs to load all fully-satisfied plugins that are enabled for use with the application. It ensures that the plugin's assemblies can be resolved and then imports the plugin's component configuration into Windsor. It also merges the plugin's additional configuration sections into the master IConfigurationManager.

Using Castle Windsor for inversion of control and dependency injection I get to write very natural code like the following. Dependency injection will take care of providing the constructor parameters and filling in optional configuration.

/// <summary>
/// The default issue tracking integration consults a list of <see cref="IIssueTracker" />
/// services to obtain information about an issue.  Caches issue tracking data for a short
/// period to improve performance.
/// </summary>
public class DefaultIssueTrackingIntegration : IIssueTrackingIntegration
{
    private ICache cache;
    private ILogger logger;
    // --- snip --- //

    public DefaultIssueTrackingIntegration(ICache cache, ILogger logger)
    {
        this.cache = cache;
        this.logger = logger;
    }
    // --- snip --- //
}

The nifty thing is that a plugin can install a new issue tracking extension just by registering a component in its *.plugin file. Like this:

<component id="Core.IssueTracking.Jira"
           service="Odin.Core.Integration.IssueTracking.IIssueTracker, Odin.Core"
           type="Odin.Plugins.Jira.Core.JiraIssueTracker, Odin.Plugins.Jira">
  <parameters>
    <WebAppRootUrl>#{JIRA_WebAppRootUrl}</WebAppRootUrl>
    <UserName>#{JIRA_UserName}</UserName>
    <Password>#{JIRA_Password}</Password>
  </parameters>
</component>

IoC and DI on the Client

This is where it gets interesting. The client UI written with Adobe Flex is pluggable too! I could invent a whole new way to do this or I could just use an Inversion of Control container like I do on the server. The latter has a nice sound to it. After all, Inversion of Control is useful for way more than just assembling plugins: it's the cornerstone of my system architecture.

As of the time of this writing, I am not aware of any published IoC containers for ActionScript 3. I have found posts to that effect, so others have certainly been doing this.

When I started working on Odin's client side, I wrote a miniature IoC container. I could register components by specifying the service type and the component type, and later resolve singleton instances of those components. It was very, very simple.

Components.registerComponent(IPluginLoader, DefaultPluginLoader);
// --- snip --- //
var pluginLoader:IPluginLoader = IPluginLoader(Components.resolve(IPluginLoader));
var progressMonitor:IProgressMonitor = new ProgressMonitorDialog().progressMonitor;
pluginLoader.loadPlugins(progressMonitor);

This worked great! But soon I found I needed to add component keys so I could distinguish between two components that implement the same service. Then I needed to add support for components with a Transient lifestyle (instead of always Singleton) and for External Instances. But one thing was still missing: Dependency Injection! That's right, I had a static reference to the IoC container and I just queried it every time I needed a component that it managed.
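For the curious, the pre-DI version of such a container might look something like the following. This is a hypothetical TypeScript sketch, not the actual Odin code: `Components`, `register`, `registerInstance`, and `resolve` are assumed names, but the features match what I describe above: string component keys, Singleton vs. Transient lifestyles, and externally supplied instances.

```typescript
// Hypothetical sketch of a miniature IoC container with component keys
// and lifestyles, but no dependency injection yet: callers query the
// static container directly for everything.
type Ctor<T> = new () => T;

enum Lifestyle { Singleton, Transient }

interface Registration {
  ctor?: Ctor<object>;
  lifestyle: Lifestyle;
  instance?: object; // cached singleton or external instance
}

class Components {
  private static registrations = new Map<string, Registration>();

  // Register a component under a key so two implementations of the same
  // service can coexist (e.g. "IssueTracking.Jira" vs "IssueTracking.Trac").
  static register(key: string, ctor: Ctor<object>,
                  lifestyle: Lifestyle = Lifestyle.Singleton): void {
    this.registrations.set(key, { ctor, lifestyle });
  }

  // Register an externally created instance that the container does not own.
  static registerInstance(key: string, instance: object): void {
    this.registrations.set(key, { lifestyle: Lifestyle.Singleton, instance });
  }

  static resolve(key: string): object {
    const reg = this.registrations.get(key);
    if (reg === undefined) {
      throw new Error(`No component registered for key: ${key}`);
    }
    if (reg.instance !== undefined) {
      return reg.instance; // external instance or cached singleton
    }
    const instance = new (reg.ctor as Ctor<object>)();
    if (reg.lifestyle === Lifestyle.Singleton) {
      reg.instance = instance; // cache singletons; transients are rebuilt
    }
    return instance;
  }
}
```

The notable gap is exactly the one I hit: because nothing inspects constructors, every component has to reach back into `Components` for its own dependencies.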

So last night I added support for Constructor Dependency Injection. As a bonus, I refactored the whole thing and added it to my Castle FlexBridge project. You'll be hearing more about that one soon. The short story is that Castle FlexBridge is an Open Source AMF3 remoting gateway for .Net. It also includes client-side components to simplify application development. Now it includes my Castle-inspired Inversion of Control container kernel.

Now I can write things like this on the client side too! The FlexBridge IoC kernel will take care of satisfying all of the constructor parameter dependencies.

/**
 * The default plugin loader gathers plugin SWF Urls from the
 * pluginUrls argument of FlashVars as a semicolon-delimited string of Urls.
 */
public class DefaultPluginLoader implements IPluginLoader
{
    private var _configurationManager:IConfigurationManager;
    private var _kernel:IKernel;
    // --- snip --- //

    public function DefaultPluginLoader(kernel:IKernel, configurationManager:IConfigurationManager)
    {
        _kernel = kernel;
        _configurationManager = configurationManager;
    }
    // --- snip --- //
}

Huzzah!

Wednesday, June 6, 2007

Laziness should not be observable

This is a story about the Flash Player. All told, it's not a bad little browser plugin: it performs well and provides a sane display model. Unfortunately, I've occasionally run up against issues with optimizations apparently performed by the Flash Player.

The most serious problems I've had have involved reflection. There are a few gotchas here. One is that the Flex compiler optimizes out unreferenced types so you need to be a bit careful to ensure all of your types actually get included in the SWF. If they don't get compiled in, you'll be surprised when getDefinitionByName() throws an exception instead of giving you the type you expected. Oops.
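The usual workaround is to keep an explicit reference to each class somewhere so it survives the compile. Here is a hypothetical TypeScript analogy of that trick (the class names and the `getDefinitionByName` stand-in are assumptions for illustration): string-based type lookup only works for types that something actually references.

```typescript
// Hypothetical analogy of the AS3 force-include workaround: a lookup by
// string name (like Flash's getDefinitionByName) can only find classes
// that were actually referenced and therefore compiled in.
class JiraIssueTracker {}
class TracIssueTracker {}

// Referencing the classes here is what keeps them in the output;
// without some reference like this, the name lookup has nothing to find.
const includedTypes: Array<new () => object> = [
  JiraIssueTracker,
  TracIssueTracker,
];

const typesByName = new Map<string, new () => object>();
for (const c of includedTypes) {
  typesByName.set(c.name, c);
}

function getDefinitionByName(name: string): new () => object {
  const ctor = typesByName.get(name);
  if (ctor === undefined) {
    // Mirrors the surprise exception you get from the real thing.
    throw new ReferenceError(`Type not compiled in: ${name}`);
  }
  return ctor;
}
```

It's ugly but effective: the force-include list exists solely to defeat the dead-code elimination.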

However, it's not always that simple. Today I was enhancing a little inversion of control framework of mine to handle constructor dependency injection. That's when I ran into a problem: describeType() returns incomplete type information for constructor parameters unless an instance of the class has already been created! That's not good, because I need to know what the parameters are before I create an instance!

Small world, though. It seems someone else doing dependency injection in Flex encountered the same problem. He came up with the same workaround as I did, but I don't like it...

All for the sake of some lazy initialization the Flash Player is probably doing deep down. Lazy initialization should never disrupt the correctness of an application!

Tuesday, June 5, 2007

Introducing Castle.Components.Scheduler.

On several projects, I have found need of a robust in-process job scheduler for .Net with support for clustering and persistence. These last two requirements have always been a bit of a problem. What do you do when your service has been distributed across multiple machines for load-balancing and redundancy? What do you do when your service shuts down and some of its scheduled processing may or may not have been performed but you need to ensure it doesn't accidentally run a second time?

I encountered these requirements yet again a few weeks ago. Figuring this problem ought to have been solved a hundred times over by the community, I went hunting for existing solutions and eventually found Quartz.Net. Unfortunately, it does not yet support clustering or persistence like its progenitor Quartz from the Java world. Even more discouraging, after an hour or two spent reading the documentation and perusing the source, I realized that I simply was not going to be able to contribute the necessary features and adapt the framework to my needs in the time I had available. So with reluctance I launched my own scheduler project.

The scheduler provides all the basic operations and supports clustering and persistence just like I needed. It's designed as a collection of services that can be easily snapped together using an Inversion of Control container like Castle Windsor or with just a few lines of code as part of your application's initialization process. The scheduler is fairly basic but it does what I need. I have a high degree of confidence in the implementation thanks to a comprehensive suite of unit tests.

A few people have already proposed contributions to add management facilities, a pre-cooked standalone scheduler agent, job queuing, job history, and an alternative ActiveRecord-based job store. Any other ideas?

More details over here: Castle.Components.Scheduler.

Language specifications. (Gratuitous C++ bashing?)

Strange late night observation:

  • I learned TI BASIC from the reference manual included with my TI-99/4A.
  • I learned LOGO from the help files.
  • I learned TMS 9900 assembler from the Editor/Assembler reference manual.
  • I learned Amiga E from the help files, samples and some experimentation.
  • I learned Motorola 68k assembler from various random documents I could get my hands on because I couldn't obtain a copy of the chip reference manual.
  • I learned C from the Dice C reference manual (and filled in the gaps later).
  • I learned x86 assembler from the chip reference manual.
  • I learned C++ from the language specification and by reading the STL code (filled in the gaps with Stroustrup's C++ book and Plauger's STL book).
  • I learned Java from the language specification.
  • I learned Scheme because it was obvious.
  • I learned Tcl from the reference documentation.
  • I learned Self interactively in the runtime environment.
  • I learned C# from the language specification.
  • I learned Python from the language specification.
  • I learned Ruby from the online tutorial. (What language specification? Grrr!)
  • etc...

This list is by no means exhaustive. I've skipped all sorts of things, particularly declarative, scripting and domain specific languages. Some of them I'd rather forget in favour of INTERCAL.

Anyways, I wrote up this list because I noticed something interesting about my learning style. With few exceptions I've tended towards authoritative source material. I've found myself unaccountably frustrated (or bored) when I needed to rely upon anecdotal source material such as tutorials or reverse-engineering. I have never learned a language by taking a course or by rote memorization of examples. I am very selective about the books I purchase. (They are almost always structured as reference manuals or multi-part formal specifications. My most common complaint is the lack of phone-directory style indexing in the upper corners of the pages.)

However, C++ stands out as the language I probably spent the most time ever learning. I used to bury my head in the language specification for hours at a time grappling with subtle ambiguities. It took me a couple of years to grok it in fullness, by which time I was reading draft proposals for C++ extensions. It was entertaining to see just how tortuous each new extension made the language by tossing in half a dozen new rules here or there. When I used C++ every day, I knew all sorts of ways to use and abuse each feature. The funny thing is, I don't feel that same level of confidence with the screwball corner cases of any of the dozen languages I use regularly today. Perhaps that just means they're better! (When was the last time you needed something like a 'typename' keyword in your favourite language?)

What were your experiences?