NHibernate Starter Kit

by Ben Hart 21. October 2008 17:28

It took me a while to get into NHibernate. I knew it was an awesome tool with an active community, but it always seemed simpler to just roll my own before taking the time to learn a proper ORM. I think part of the problem was accessibility: coming from a Microsoft development world, one gets used to running installers and adding templates to try new products out.

Despite the wealth of tutorials, screencasts, a pretty thorough reference, easily available source code, and a vibrant community, many people still have the impression that NHibernate is hard. It seems that those four extra steps to get started (download the latest binary package, extract and add references to NHibernate, add "nhibernate.cfg.xml", and add entity mappings) are enough to deter people, myself included for quite some time.
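For anyone put off by step three, the "nhibernate.cfg.xml" itself is tiny. A minimal example for SQL Server looks something like the following (the connection string and database name are placeholders; the property names come straight from the NHibernate reference):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
    <property name="connection.connection_string">Server=.\SQLEXPRESS;Database=NHStarter;Integrated Security=SSPI;</property>
    <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
  </session-factory>
</hibernate-configuration>
```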

But why go through this?

Having broken through the initial resistance, I'm not turning back. It's a great ORM, the first that's entered my world that gets me close to modelling my domain how I'd like it, without much compromise. The ability to throw together some C# classes, test these until I'm happy with their shape and behaviour, and then have a database generated which supports their persistence has liberated me. Most ORMs in the .NET space place too much emphasis on the database, and the temptation to use the tools and wizards has me back in SQL Server Management Studio far more often than I'd like. While I'm not a zealot (at least I try to mention the alternatives when I encourage NHibernate), I do feel more people need this liberation from the database. As a great man once said,

I have a dream that one day the majority of .NET developers will emancipate themselves from the binds of an inherently cumbersome environment.

I have a dream that one day these developers will join me in dropping the notion that "business logic classes" are simply filters between the user interface and the database.

I have a dream that one day even the most hardened of stored procedure advocates will concede that the ability to refactor with ease and unit test in isolation is compelling enough a reason to forever rid themselves of TSQL and its ilk.

Today I took a small step towards that dream.

Nah, I'm just lazy.

I often want to create a quick NHibernate-based application, generally for a proof of concept, and I'm tired of hunting down those DLLs on my bloated drive. I'm also tired of copying and pasting the config file, and the plumbing I need to generate a schema.

So I thought I'd save myself some time and create a Visual Studio template that does this for me. In fairness it was a little more of an adventure than I was bargaining for, but after about 20 proofs of concept I should break even. It was interesting to get to know Visual Studio templates, and some of the limitations thereof, so no time was really wasted.

When installed (I created a .vsi, which is just a zip in disguise, so feel free to unzip it and butcher it), a new project template becomes available under Visual C#, "NHibernate Starter Kit".


Adding this project creates a solution with three projects: Domain, SchemaGenerator, and Tests. Domain contains a simple base class for entities (by no means required for NHibernate), and an example entity and corresponding mapping.

SchemaGenerator is a console application that uses these mappings to generate a database schema and insert some test data. Rather than create the database from scratch, it assumes that the database as defined in the App.config has already been created.
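The plumbing inside SchemaGenerator amounts to very little. As a sketch (the class and assembly names here are illustrative, and the exact SchemaExport.Execute overload varies slightly between NHibernate versions):

```csharp
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

class Program
{
    static void Main()
    {
        // Reads the hibernate configuration (including the connection string
        // for the database you've already created).
        var cfg = new Configuration();
        cfg.Configure();

        // Picks up the embedded *.hbm.xml mappings from the Domain assembly.
        cfg.AddAssembly("Domain");

        // script: echo the generated DDL to the console,
        // export: run it against the configured database,
        // justDrop: false, so tables are created rather than only dropped.
        new SchemaExport(cfg).Execute(true, true, false);
    }
}
```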

Tests has an example integration test: creating a new entity, saving to the database, and retrieving it.

I've scattered a few comments around the code for luck.

One of the limitations of a multi-project template is that you can't include files that aren't in a project. Typically we'd all have a lib or dependencies folder that contains common 3rd party libraries, and that was my initial intention. Rather than go the whole hog to get this (which involved implementing IWizard in a class library, signing, GAC'ing, jumping around in frustration), I've cheated by placing them in the bin folder, included in the project. Not ideal, but works fine. (Incidentally, the template includes the latest released version of NHibernate, 2.0, and NUnit 2.4.8. I'm actually not sure whether redistributing these is a problem, so drop me a line if I need to take them down.)

In theory, you should simply be able to install the template, create a database (simply an empty database), match this to the connection string, run the schema generator and marvel at how that table is created for you. You might even want to add a few properties to MyEntity, knock out the corresponding mapping, run that schema generator, and marvel some more at how the table changes without that pesky designer. Run the NUnit test for luck too.

But that's theory, at this stage all I know is that it works on my machine (Visual Studio 2008, SQL Server 2005 Express). Let me know if it works on yours.

I intend to enhance the kit over time, adding more to the sample entity as a reference for mappings, demonstrating common patterns, and so on.

In the meantime, download the kit, and free yourself from the database.

Update: I've since updated the kit to include a few more mappings, and sorted out some issues with the persistent entity base class. Please see the updated version here.


NHibernateStarterKit.vsi (737.61 kb)

 


NHibernate

Quitting the game

by Ben Hart 18. October 2008 08:25

I subscribe to Tim Ferriss's blog for a fresh perspective and a different read to my usual development-related blogs. He writes on a variety of topics, summarised, I suppose, as living the best life you can with the short time you have allotted. He's the author of The 4-Hour Workweek, and has often had me questioning the value of the traditional work environment, and the games we play therein.

His most recent post (in fairness a cross-post, I didn't bother finding the source) is great. In summary, it quotes hedge fund manager Andrew Lahde's farewell letter, in which he calls it quits after reaping the rewards of the current financial 'crisis' (an 866% return in one year).

I will no longer manage money for other people or institutions. I have enough of my own wealth to manage. Some people, who think they have arrived at a reasonable estimate of my net worth, might be surprised that I would call it quits with such a small war chest. That is fine; I am content with my rewards. Moreover, I will let others try to amass nine, ten or eleven figure net worths. Meanwhile, their lives suck. Appointments back to back, booked solid for the next three months, they look forward to their two week vacation in January during which they will likely be glued to their Blackberries or other such devices. What is the point? They will all be forgotten in fifty years anyway. Steve Balmer, Steven Cohen, and Larry Ellison will all be forgotten. I do not understand the legacy thing. Nearly everyone will be forgotten. Give up on leaving your mark. Throw the Blackberry away and enjoy life.

Sage advice.


Life

Mocking ASP.NET MVC HtmlHelper extension methods using Moq

by Ben Hart 17. October 2008 06:43

I'm in the process of upgrading our ASP.NET MVC Preview 5 app to Beta. Been quite painless so far, but hit a snag with an unmentioned change to the signature of the ViewContext class.

We've followed many by monkeypatching the HtmlHelper class further, extending it to a variety of uses. Obviously we need to test these extensions, so need a reference to an HtmlHelper instance.  We used to have the following helper method to get an HtmlHelper object:

public HtmlHelper CreateHtmlHelper(ViewDataDictionary viewData)
{
    var sw = new StringWriter();
    var rd = new RouteData();
    var tc = new TestController();
    var td = new TempDataDictionary();
    var tv = new TestView();
    var req = new HttpRequest("", "http://localhost/", "");
    var res = new HttpResponse(sw);
    var hc = new HttpContext(req, res);
    var hcw = new HttpContextWrapper(hc);
    var rc = new RequestContext(hcw, rd);
    var cc = new ControllerContext(rc, tc);
    var vc = new ViewContext(cc, "View", viewData, td);
 
    return new HtmlHelper(vc, tv);
}

I'd been aware of this method (had stumbled across it when certain tests seemed to be taking longer than they should), thought it looked pretty dodgy, ignored it, and added it to the growing list of technical debt. "Well, it isn't broken..." I've joked with the teammate responsible about adding it to The Daily WTF, but we've both agreed we've both seen worse.

The beta of ASP.NET MVC has changed the ViewContext constructor to now require an IView and not a view name string, which fortunately broke the above, allowing me to reclaim some debt. We're using Moq, which allowed the following, much simpler, method.

public static HtmlHelper CreateHtmlHelper(ViewDataDictionary viewData)
{
    var mockViewContext = new Mock<ViewContext>(new Mock<HttpContextBase>().Object, 
                                                    new RouteData(), 
                                                    new Mock<ControllerBase>().Object, 
                                                    new Mock<IView>().Object, 
                                                    viewData,
                                                    new TempDataDictionary());
 
    var mockViewDataContainer = new Mock<IViewDataContainer>();
    mockViewDataContainer.Expect(v => v.ViewData).Returns(viewData);
 
    return new HtmlHelper(mockViewContext.Object, mockViewDataContainer.Object);
}

The HtmlHelper requires a ViewContext and an IViewDataContainer. Mocking the ViewContext is clearly the most work, but not that difficult. In certain circumstances the HtmlHelper needs to get the ViewDataDictionary off the IViewDataContainer, so the mock of the container above returns the one we fill with test data, which is passed into the method. This might not be necessary, depending on your situation.

If your helper extensions need more from the HttpContext (such as the Request), obviously you'll need to set those expectations too; ours currently don't. Ben Hall has a similar implementation using RhinoMocks (which would need to be changed to cater for the IView requirement), which sets expectations allowing the Resolve method to be used.
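To make the helper's use concrete, here's a hypothetical extension method and test (the extension and its name are made up for illustration; the test assumes the CreateHtmlHelper method above is in scope):

```csharp
// Hypothetical extension: wraps a value from ViewData in a <strong> tag.
public static class StrongExtensions
{
    public static string Strong(this HtmlHelper helper, string key)
    {
        return string.Format("<strong>{0}</strong>", helper.ViewData[key]);
    }
}

[Test]
public void Strong_wraps_view_data_value()
{
    var viewData = new ViewDataDictionary { { "name", "Ben" } };
    var helper = CreateHtmlHelper(viewData);

    // HtmlHelper.ViewData delegates to the mocked IViewDataContainer,
    // which returns the dictionary we passed in above.
    Assert.That(helper.Strong("name"), Is.EqualTo("<strong>Ben</strong>"));
}
```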

Update: I've since realised that one doesn't really 'Mock' an extension method (or any method for that matter), so don't call me out on the title!



ASP.NET MVC | TDD

It's a whole new world in here

by Ben Hart 15. October 2008 15:52

I'm a late adopter of the whole blogging thing. At times I feel like a late developer at school, finding excuses to be last showering after phys ed. Thankfully I'm a developer who's been focused on web technologies for a number of years now, making the technicalities a lot more accessible. But still, hosting a site publicly has introduced a new world to me.

Obviously I want what I write to be read. There are many things I can do by myself that have more obvious immediate reward. I've decided to devote these chunks of time for various reasons, lowest on the list is a misplaced geek narcissism. While I'm in this for the long term, knowing people are reading makes it immediately worthwhile. But getting people to read at all is the challenge.

This has given me a baptism of fire into the world of analytics, submission, optimisation and all things that offend my modesty and original intent. I was vaguely aware that there were professional bloggers out there (Jeff Atwood, for one, but doubtless others I read), but I must confess sites like ProBlogger have been an eye opener.

I've recently added Google AdSense, but that'll likely end shortly. After two days and a few hundred impressions I'm sitting on a balance of $0.00, so when I hit infinity I'll still get nothing back. I think Justin Etheridge summed it up best, "You're annoying your readers, and you're not making any money". The AdSense experiment has given me insight into the sheer volumes that the pros must need to sustain themselves, though, and a better understanding of the origin of spam.

Besides, I don't want to turn this into a profession. I enjoy my job as a developer, my learning and experience that I blog about can only ever stem from this. I feel I'd have sold out if I construct posts targeting keywords, with appropriate density, of course. I'm not prepared to use Google keyword tools to find the long tail, and produce content that will draw them in. The thought of paid reviews and listings is bizarre, as is reciprocal (paid?) linking.

I've recently sent my blog out to numerous free submission sites, though. I poured myself a single malt, rubbed some anti-inflammatory into the wrists for the carpal tunnel, googled for "Blog Submission", and hit them hard. I've never typed my own name so many times. I felt like my parents must have when Christmas card mass mailing was still vogue. I'll likely draw a visitor here or there, though, so that'll be nice. Some are actually really decent, and I plan to spend some time browsing their directories.

I've had the links to dzone and DotNetKicks for a while, knowing that they're both great sites that could draw traffic, but mainly since BlogEngine had them here by default. Only recently I noticed that other bloggers submitted their own content (I had assumed that was faux pas, so avoided it). I followed their lead and submitted some of my own posts. Not everything, mind you, just the few I think are the more valuable. My ever-watched Google analytics has demonstrated the numbers of readers this had drawn, so I'll likely continue. To those of you who've followed the links, welcome, and please let me know if this is, indeed, reprehensible.

This is, after all, a whole new world to me...



Blogging

Dependency Inversion for Dummies: Factories and Service Locators

by Ben Hart 12. October 2008 14:09

In my previous post on dependency inversion I elaborated on constructor injection, injecting the objects a class depends on into that class's constructor. We saw that this injection made the dependency more explicit, as well as going some way towards revealing the purpose of the class with the dependency. I had stated that explicitly declaring these dependencies makes construction of the object more complex, with the risk that you end up repeating much code every time you construct a class. I took some hints from the Hollywood TV Series Scriptwriter's Bible by leaving off with a cliffhanger (my way of encouraging Kiefer to roll out the next season of 24 on schedule)... "join me next time when we look at some creational patterns that solve this". And I know you've been hanging.

Creational Patterns

A cursory glance over sites that list design patterns shows that they're generally grouped into categories. These likely hark back to the original groupings in the GOF book: creational, structural and behavioural. The book defines creational patterns as those that:

"...abstract the instantiation process. They help us make a system independent of how its objects are created, composed, and represented."

The book illustrates five such patterns: Abstract Factory, Builder, Factory Method, Prototype and Singleton, some competing, many quite tailored to specific scenarios. Common to all, though, is the centralisation of object creation. None of these patterns really fits our quite simple case, but quoting the GOF gives me some geek cred, so deal with it.

First a few modifications to our code

Remember that in the last post our unremarkable TagService did little more than filter tags from a data store to be unique. It had a dependency on a concrete data store, which we effectively eliminated. We were stuck in a situation where we knew we needed an instance of a class; we just didn't want to build it in our code. While I was waxing lyrical about programming to an interface and not an implementation, more astute readers would have picked up the most obvious violation of this, within the console class itself. The first step of today's exercise is thus extracting an ITagService interface.

using System;
using System.Collections.Generic;
using System.Linq;
 
class Program
{
    static void Main(string[] args)
    {
        ITagService tagService = new UniqueTagService(new TagDatabase());
        foreach (string tag in tagService.GetUniqueTags())
        {
            Console.WriteLine(tag);
        }
        Console.ReadLine();
    }
}
 
public interface ITagService
{
    IList<string> GetUniqueTags();
    int TagCount();
}
 
public class UniqueTagService : ITagService
{
    private ITagData _db;
 
    public UniqueTagService(ITagData database)
    {
        _db = database;
    }
 
    public IList<string> GetUniqueTags()
    {
        return new List<string>(_db.GetTags().Distinct());
    }
 
    public int TagCount()
    {
        return _db.GetTags().Length;
    }
}
 
public interface ITagData
{
    string[] GetTags();
}
 
public class TagDatabase : ITagData
{
    private string[] _tags = new[] { "Word1", "Word2", "Word3", "Word1" };
 
    public string[] GetTags()
    {
        return _tags;
    }
}

Another Interface?

Better believe it. We don't want our client code (the console app) to depend directly on a concrete tag service. The interface defines the functionality we require, gluing the concrete service with the console app just creates a further dependency, one we'll regret when the inevitable changes come through.

Notice, also, that the tag service has been renamed. Its function is to draw tags from a database, and filter them to be unique. Having classes and members that are aptly named is the single best convention we can adopt, and saves having to insist on developers commenting their code (which is about as effective as herding cats).

The dependency is still there

Exactly. The console class requires the tag service, and still constructs it when required. If we continued on this path, we'd soon have a number of locations that cement that same dependency. A number of stinky dependencies. Yuck.

Simple Factory

Probably the simplest means we could improve this is through a factory. Not the abstract factory the GOF defined (which would allow us to construct and return different tag services according to need), but a plain old simple factory (to coin an acronym, POSF), a class whose purpose is to build an object.

class Program
{
    static void Main(string[] args)
    {
        ITagService tagService = TagServiceFactory.BuildTagService();
        foreach (string tag in tagService.GetUniqueTags())
        {
            Console.WriteLine(tag);
        }
        Console.ReadLine();
    }    
}
 
public static class TagServiceFactory
{
    public static ITagService BuildTagService()
    {
        return new UniqueTagService(new TagDatabase());
    }
}

In the above code we've simply created a static class with a method that constructs and returns a tag service when requested. The main advantage here is that we've encapsulated the creation of the tag service, and have centralised it. Other code that might require the service can request one, without knowing all the gritty details of the concrete class, and required dependencies. Should we need to change the implementation of ITagService (for example, change the data store), we need only change it in one location.

Service Locator

The above implementation is pretty simple, but has already greatly improved our code. A potential downside would be an explosion of factories, one for each requested type. A simpler approach would be a centralised service locator, a one-stop shop from which we can request a type and receive an object. DI containers do just this, but let's take a look at a pretty naive implementation of our own.

public static class ServiceLocator
{
    public static T GetInstance<T>()
    {
        return (T) GetInstance(typeof(T));
    }
    public static object GetInstance(Type type)
    {
        return GetInstance(type.Name);
    }
    public static object GetInstance(string name)
    {
        switch (name)
        {
            case "ITagService":
                return new UniqueTagService(new TagDatabase());
        }
        throw new ArgumentOutOfRangeException("name", string.Format("{0} is not a registered type.", name));
    }
}

While not so pretty, this simple class allows us a centralised and flexible means to register and construct services. A real service locator would likely construct the classes dynamically using reflection, and hopefully do some autowiring of dependencies, with some caching for good measure. This one serves to illustrate the concept, though.
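To illustrate the reflection idea, a registry-based version (still a sketch, not production code; the class name is mine) might map service types to implementation types once at startup, and construct them on request, resolving constructor parameters recursively as a very naive form of auto-wiring:

```csharp
using System;
using System.Collections.Generic;

public static class RegisteringServiceLocator
{
    private static readonly Dictionary<Type, Type> _registrations = new Dictionary<Type, Type>();

    // Called once at application startup.
    public static void Register<TService, TImplementation>() where TImplementation : TService
    {
        _registrations[typeof(TService)] = typeof(TImplementation);
    }

    public static TService GetInstance<TService>()
    {
        return (TService)GetInstance(typeof(TService));
    }

    private static object GetInstance(Type service)
    {
        Type implementation;
        if (!_registrations.TryGetValue(service, out implementation))
            throw new ArgumentOutOfRangeException("service",
                string.Format("{0} is not a registered type.", service.Name));

        // Naive auto-wiring: satisfy the first constructor's parameters
        // by resolving each of them from the registry in turn.
        var ctor = implementation.GetConstructors()[0];
        var parameters = ctor.GetParameters();
        var args = new object[parameters.Length];
        for (int i = 0; i < parameters.Length; i++)
        {
            args[i] = GetInstance(parameters[i].ParameterType);
        }
        return ctor.Invoke(args);
    }
}
```

With ITagService mapped to UniqueTagService and ITagData to TagDatabase, the console app's call becomes RegisteringServiceLocator.GetInstance<ITagService>(), and the locator wires the TagDatabase in by itself.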

Client code could choose which method they'd like to call, but having the generic saves some casting. Our console app uses the service locator with this line of code:

ITagService tagService = ServiceLocator.GetInstance<ITagService>();

Of course you'd be crazy to reinvent the wheel, and there are ample DI/IOC containers out there that do just this for us. I personally use StructureMap, but have always meant to investigate the competition.

Next week on Dependency Inversion for Dummies: We establish a shortlist of DI containers, compare how types are registered, define auto-wiring, learn about instance caching, and generally have the time of our life.


C# | Design Patterns

NHibernate objects dirty on load with nothing set in between

by Ben Hart 9. October 2008 14:35

Or "a tale of a day spent debugging".

We've advanced quite far along in quite a big application using NHibernate. I keep an eye on the statements NHibernate generates, making sure that nothing too untoward is happening. I was inspecting one test (not so great to begin with, one that got everything from a database, just to retrieve the first one), and I noticed that every single item being loaded was being updated on flush.

Hmmmm. Doesn't seem right, my honed QA skills told me. Let's take a look. Doesn't look like anything's set. Let's put some breakpoints in the constructors, and take it from there. Hmmm, this is a mighty big object, loads of children, must be one of them setting something on construction. Strange, nothing there. And so my day continued.

To illustrate, let me introduce you to my friend, Enummy. He's quite simple, only really has an id and an emotion.

public class Enummy
{
    public virtual int Id { get; set; }
    public virtual Emotions Emotion { get; set; }
}
 
public enum Emotions
{
    Happy,
    Sad,
    Frustated,
    Annoyed
}

Naturally we should test that he has the range of emotions, so we had whapped out the following tests.

[Test]
public void Enummy_should_be_happy_before_and_after_save()
{
    int id;
    ISession session = _factory.OpenSession();
    var enummy = new Enummy {Emotion = Emotions.Happy};
    session.Save(enummy);
    session.Flush();
    id = enummy.Id;
 
    session = _factory.OpenSession();
    var loaded = session.Get<Enummy>(id);
    Assert.That(loaded.Emotion, Is.EqualTo(Emotions.Happy));
}
 
[Test]
public void Enummy_should_sometimes_be_sad()
{
    int id;
    ISession session = _factory.OpenSession();
    var enummy = new Enummy {Emotion = Emotions.Sad};
    session.Save(enummy);
    session.Flush();
    id = enummy.Id;
 
    session = _factory.OpenSession();
    var loaded = session.Get<Enummy>(id);
    Assert.That(loaded.Emotion, Is.EqualTo(Emotions.Sad));
}

So far so good. That's the complete round trip, right there, all the way to the database. A proper integration test against SQL Server. We mapped Enummy with the following mapping:

<hibernate-mapping default-cascade="none" xmlns="urn:nhibernate-mapping-2.2">
  <class name="Domain.Enummy, Domain" table="Enummy">
    <id name="Id" type="System.Int32" column="Id" unsaved-value="0">
      <generator class="hilo" />
    </id>
    <property name="Emotion" type="System.Int32" />
  </class>
</hibernate-mapping>

Tests pass, so all seems good. Except, of course, for the dirty session (one test you're not likely to write).

[Test]
public void Enummy_should_not_dirty_the_session_when_he_loads()
{
    int id;
    ISession session = _factory.OpenSession();
    var enummy = new Enummy { Emotion = Emotions.Frustated };
    session.Save(enummy);
    session.Flush();
    id = enummy.Id;
 
    session = _factory.OpenSession();
    var loaded = session.Get<Enummy>(id);
    Assert.IsFalse(session.IsDirty());
}

Enummy, little guy, I realise you're frustrated, but must you dirty yourself and the session so? You even do it when you're happy! Assert.That(debugger.Emotion, Is.EqualTo(Emotions.Annoyed)) :(

The problem is with that mapping. It's an easy mistake to make, especially if you don't realise that NHibernate has no issues with .NET enums, and presume that an integer (or something even smaller) would be appropriate. Enums should be mapped like so, with the full type of the enum specified.

<hibernate-mapping default-cascade="none" xmlns="urn:nhibernate-mapping-2.2">
  <class name="Domain.Enummy, Domain" table="Enummy">
    <id name="Id" type="System.Int32" column="Id" unsaved-value="0">
      <generator class="hilo" />
    </id>
    <property name="Emotion" type="Domain.Emotions, Domain" />
  </class>
</hibernate-mapping>

The cast from Int32 to an enum (albeit implicit at times) is enough to dirty the object. To reiterate (in the interests of good SEO), an NHibernate entity and session will be marked dirty even if nothing is explicitly set if there is a cast from one type to another in the mapping. The values might be equal when one is cast to the type of the other. Until one is cast, though, they are unequal, and as such the object is considered to be dirty. While really hard to debug, I suppose this is the correct behaviour. And once you know about it, debugger.Emotion = Emotions.Happy.

Another item on the code review checklist, watch for enums and other casts in NHibernate mappings, and add an integration test that the object is not dirty straight after a load (for that next newbie to NH who doesn't realise this).

Update: After having struggled through this all on my own, I was relieved to find that I wasn't the only one. I wish I had stumbled across metro-dev extraordinaire Justice Gray's post explaining the issue before. By his reckoning I'm in the 0.0000000000000001%. I like to think I'm even more special.



C# | NHibernate | TDD

On selecting tools

by Ben Hart 6. October 2008 15:28

I started writing the follow-up to my dependency inversion post this evening, but have decided to delay that to a non-Monday day.

Last week I made some quick and unconsidered remarks on the SADeveloper.NET forums regarding TFS. Being a Subversion fan I responded to someone's comment concerning VSS, and agreed with another poster that svn would be a good choice. Someone had previously suggested TFS, so I made an aside concerning that not really being worth the cost.

Another poster replied that TFS is worth every cent, with a fairly comprehensive list of why. Having not had my 8 hours (and being a fan of more than just svn) I quickly shot off a number of alternatives to TFS functionality, and added my reasons for disliking TFS. The list of alternatives was nothing new, nor were my reasons. Unfortunately, neither was my attitude.

Reading my comment it seemed all too typical of just the people I thoroughly dislike. Too often I've seen threads in forums hijacked by anti-Microsoft trolls, waiting for their in to attack all things not 'nix. In recent years this has got more interesting because of the Apple fans, taking to heart the oftentimes misinformation spread by a particularly adept marketing department. I can't stand slashdot, for example, mainly since any reasonable discourse concerning facts and experience seems completely lacking. I realised that I must have seemed like a typical anti on sadeveloper, shooting down TFS because it was made by MS, not OSS, not hip, or whatever other irrational reason raises the passion levels. A moderator effectively said as much, stating that I was jumping on the bandwagon, and encouraging me to check my facts (which was, in fairness, the correct response).

The thing is, I'm not one of these people. I've always had a concern around Microsoft's at times confusing 'policy' towards OSS, but recently I've become really excited by their obvious shift: the new openness, the sharing of source code, the rolling in of an OSS library. I appreciate much of what MS has achieved, and punt much of their software. I acknowledge that MS effectively brought computing to the masses, and that (no matter what Apple might have you believe) they were the first to provide a platform on which most devices easily work together. I can't imagine not using MS Office. I choose to use Windows as my operating system, even though I could now be a hipster with a Mac and still use my familiar apps. I love .NET and C#, which I truly believe to be the best tools for most jobs today. I like Visual Studio, and appreciate the extensibility that has allowed companies like JetBrains to take it even further. In general, I think SQL Server is a great database; in particular I'm really excited by the new BI features in 2008.

Each of these opinions is formed on my experience, though. I've tried OpenOffice, had various flavours of Linux on my machine, and have listened and watched while my hipper friends waxed lyrical about their Macs. I've tried to keep up with other languages, played with SharpDevelop and Eclipse, and know Firebird and MySQL better than MSSQL.

I'm a firm believer that any investment must be done according to a cost-benefit analysis. Investment isn't only of money, it's often your time. When faced with a few days of data migration, weigh up your cost per hour against that of buying SQL Compare. When sitting down to write that next batch of CRUD statements in your DAL, consider whether you're doing a better job for less than any of the many libraries out there. This should be extended to everything, even the time you spend on the weekend. I thus don't wash my own car, my time being worth more to me per hour than I'd spend getting a professional to do it (with the added benefit of job creation, and, at the correct place, water saving). This isn't about scrimping, it's about using limited resources to achieve maximum benefit.

Every investment you make should be subjected to this: the examination of what benefits it offers, and the consideration of how much you consider those benefits to be worth. If the value of the benefits outweighs the cost, though, you're only half way there. If you're spending someone else's money (as an employee, contractor or consultant) you have a responsibility to consider if those benefits could be realised for lower cost using another means.

To summarise, until a year ago, I really wanted to use TFS and corresponding tools. I had seen the benefits as advertised. I had sat through the sessions, and applauded loudly. To me, the key benefit was the integration of a number of tools. This seamless integration could allow an insight into the greater development process, and save me and my team the many hours per year we spend coordinating and configuring what are currently different systems.

I've since used TFS on a large project, and while I'd never claim to be close to an expert, I got to know it quite intimately. I realised that the integration comes at a cost, and that the benefits of many of the individual tools I've come to rely on are not as evident in TFS. I've taken a step back, and put a cost to the benefit of integration, estimating the effort we spend yearly on ensuring it. I've looked at the hardware TFS needs to keep running. I've looked at the price tag of the server and clients, from numerous angles.

And I stand by my comment. Currently, the benefit of TFS is, to me, and much to my disappointment, outweighed by its cost.

Software Development

Long-held assumptions

by Ben Hart 1. October 2008 12:49

I had an interesting experience today that made me return to first principles. In summary, someone questioned whether closing a browser before a request was completed by the server would interrupt the processing, and result in an inconsistent state.

My first reaction was, "Hell no! That request has been counted, you can't retract it." I 'knew' this. Everything I understood about the web and development therefore was based on this 'knowledge'. I quickly replied as such, and continued about my business. Said person replied, saying this didn't make sense, and asking whether I was sure.

I quickly drafted a response along the lines of, "Of course I'm sure. Try it yourself." Before I hit post, a niggling doubt set in. I removed the "Of course I'm sure" part, giving me the option to recover gracefully if I was actually wrong. At this point I was around 80:20 sure, but still backing myself.

Perhaps it was my "Introduction to Epistemology" that got me, but that 20 percent uncertainty kept growing. History is full of examples of first principles being reformed, new 'knowledge' gained, old 'knowledge' thrown away. I intentionally put the quotes around 'knowledge', my Philosophy 101 reminding me that there is very little knowledge, but lots of belief. It occurred to me that I had observed this behaviour (requests continuing regardless) before, but had never really verified it. Perhaps newer browsers were more chatty. Maybe 'keep-alive' had been enhanced and was now honoured by IIS. Maybe I'd been listening to the wrong people all these years. Maybe I was wrong!

When my certainty ratio hit 50:50, I had only one option left. Luckily this was an easy one to test: a simple post, a few threads put to sleep for a while, and I was away. Of course the server continues to process the request. I was happy to be right, but a part of me wishes I'd been wrong.
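My test was an ASP.NET page with some sleeping threads, but the principle is easy to reproduce without IIS handy. Here's a minimal, language-agnostic sketch in Python (all names hypothetical, raw sockets standing in for the browser and web server): the "server" carries on with its work even though the "browser" hangs up mid-request.

```python
import socket
import threading
import time

# Tracks whether the server finished its "work" even though the
# client hung up mid-request.
finished = threading.Event()

def serve_once(server_sock):
    conn, _ = server_sock.accept()
    conn.recv(1024)      # read the request
    time.sleep(0.5)      # simulate slow server-side processing
    finished.set()       # the work completes regardless of the client
    try:
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\ndone")
    except OSError:
        pass             # the client is gone; the send fails, the work did not
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.0\r\n\r\n")
client.close()           # "close the browser" before the reply arrives

finished.wait(timeout=2)
print(finished.is_set())  # → True: the server carried on regardless
server.close()
```

The only thing the early disconnect can break is the attempt to write the response back; everything the server did before that point has already happened, which is exactly what my test showed.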

For such a young discipline, software development is startlingly filled with dogma. I cringe when the stored procedures debate comes up, having long nipped that fallacy in the bud. Don't talk to me about strongly typed datasets giving you a domain model. You've lost me if you expect me to build software that looks like those UML models you just dropped on my desk. Hell, I'll go through the process of detailed estimation if it makes you feel better, but with what I know now, I can't promise the estimates are even close. It goes on.

We need to question our assumptions more often. We need to be prepared to reform first principles, and be happy to throw away our theorems. In short, we need to test our beliefs, and turn them into knowledge.

Software Development

Powered by BlogEngine.NET 1.4.5.0
Theme by Mads Kristensen

About me...

I'm a passionate .NET developer, with C# my language of choice. I've been at it for a number of years now, and enjoy that I'll never shake the feeling I'm just starting out.

I love software, and I love building it even more. I love knowing that my work facilitates others', and that one line of code at a time, we're increasing our capability.
