Layers are not tiers

by Ben Hart 28. September 2008 15:49

I've been spending more time at the asp.net forums recently. Aside from a bit of shameless self-promotion, I do genuinely enjoy helping other people out. I'm fairly opinionated, and I'm distinctly aware that many of my opinions offend what others consider to be rules, so it's turning out to be an interesting exercise in diplomacy too.

One recurring question people seem to have relates to layered architectures. The questions typically ask about n-tier architecture, or mention data access tiers and the like. What's common is that the terms tier and layer are used by many interchangeably. This is incorrect.

What's a tier?

When I was at uni, we spent some time learning about the evolution of system architectures. The course started with mainframes, went into the development of the client/server model, discussed 2-tier applications, and the evolution to n-tier. We learnt about things like CORBA, distributed processing, and a lot more stuff I quite quickly forgot. At the time, though, I had a clear idea of what a tier was.

Mainframe

It was clear that the original mainframes hosted the complete application, and that interaction with the system was through interfaces served directly from the host.

Client/Server (2-Tier)

Client/server made a lot of sense too, and those were the systems I began to develop. We had rich (well, we thought so) user interfaces that connected directly to a centralised database. It was clear to me that this was 2-tiered: the database was hosted on one machine, and the system that needed the data on another. Processing was indeed distributed, even if a little heavy on the client. Having all the power of client machines let us do some amazing things never dreamed of on the mainframe.

I never really got n-tier systems back then, mainly since I never developed any. I understood academically, though, that n-tier systems introduced another tier between the client (desktop app) and the server (database), to distribute the load even further.

A 3-Tier System Architecture

In this middle tier data could be aggregated before being sent to the client, for one, resulting in less network traffic. Business processes could be orchestrated from this middle tier too, and databases could be locked down to prevent all these clients from accessing them directly. The clients themselves could be trimmed down, since most direct data access had been rolled into the middle tier. The clients were more about presenting the data, and acted as the interface between the user and the middle tier. Having this middle tier results in a 3-tier architecture. Further tiers could be introduced too, and rather than specifying the number of tiers each time, it's much simpler to just call the architecture n-tier.

An n-Tier System Architecture

The key aspect, though, is that the system as a whole is distributed across many computers, to centralise related functionality, and distribute processing.

When did it all go wrong?

I think it became confusing when people started calling the middle tier(s) the business logic tier. Despite often being accurate, this is not necessarily the case. If I had an n-tiered application, it is quite feasible that all business logic remained in the client, and that my middle tier only simplified data access and prevented all the clients on the network from interacting directly with the database. This might have violated some original vision of what the middle tier should be, but it is feasible nonetheless.

If we had never started calling this middle tier business logic, we likely would have avoided all the confusion when we try to extol the virtues of a layered application architecture.

Layers

When most of us started developing data-driven applications, we generally added some controls to a form, or perhaps emitted some html from a script, and used this to display data we retrieved from a database. We generally lumped connecting to the database with retrieving the data and displaying it all within the same script or class, and were likely quite pleased with the results.

Over time we learned that mixing all this code together became quite a headache. If we needed to connect to a different database we needed to change connection information in numerous places. When we changed a column name in our database we needed to change SQL statements all over the place. When our user informed us that a customer must have at least one address, we had to add checks into numerous 'controllers'. Our developers with a better eye for design (but completely ignorant of databases) kept accidentally breaking our data access when they tweaked the appearance of the UI. It became clear that code that delivered such different functionality should be separated, and that code that is similar should be grouped, and never repeated.

So we introduced layers into our applications. Data access follows distinct patterns, and relies on the same configuration, so having a data access layer was an immediate quick win. When we needed to change database connection information it was obvious to jump into the data access layer. All SQL statements resided there too, so changing column names became much easier.

Business rules tend to change as often as our database, so centralising these was obvious too. When the rule was introduced that customers must have an address, changing this in one place meant that all dependent user input was validated, eliminating the possibility that we forgot about a seldom-used interface in the admin section, for example.

This left our UI to concern itself only with processing user input and displaying the results. Designers can work on this without interfering with any other application logic. We can have multiple UI technologies (Windows and web) running the same business logic and data access; they just need to call the now encapsulated functionality.

These are the typical layers we see in an application, often called data access, business logic and user interface, separated as such because the separation is obvious. While I've moved away from these layers per se, they're still widely used, and are what most people refer to when discussing layers (and, misguidedly, tiers).

A Traditional Layered Architecture

The diagram above represents a system distributed across three tiers, with a layered application architecture evident in the web tier. The separation of layers in the web tier is logical; they need not run on a different machine or in a different process to exhibit it. Typically we'd divide them into different assemblies, but not necessarily. If these related concepts were grouped into classes found in the App_Code folder of an ASP.NET web site, I'd still have to concede that the site is layered along these lines.
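To make the distinction concrete, here's a minimal sketch of these three layers as plain classes. All the names (CustomerRepository, CustomerService, CustomerPage) and the hard-coded data are hypothetical, purely for illustration:

```csharp
using System;

// Data access layer: knows about connections and queries, nothing else.
public class CustomerRepository
{
    public string GetCustomerName(int customerId)
    {
        // Imagine a real database call here; hard-coded for the sketch.
        return "Acme Ltd";
    }
}

// Business logic layer: enforces the rules, delegates persistence downwards.
public class CustomerService
{
    private readonly CustomerRepository _repository = new CustomerRepository();

    public string GetDisplayName(int customerId)
    {
        string name = _repository.GetCustomerName(customerId);
        if (string.IsNullOrEmpty(name))
        {
            throw new Exception("A customer must have a name.");
        }
        return name;
    }
}

// User interface layer: presentation only - it never touches the database.
public class CustomerPage
{
    public void Render()
    {
        var service = new CustomerService();
        Console.WriteLine(service.GetDisplayName(1)); // prints "Acme Ltd"
    }
}
```

All three classes could happily live in one assembly on one machine: the separation is logical, with no extra tier in sight.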

So why is a layer not a tier?

Simply because a layer is about the logical separation of related functionality. The separation allows a much higher level of reuse within an application, and ensures greater maintainability. If we've separated out these layers into assemblies, we can likely plug these into other projects, and reuse them without any further development. If you haven't layered your application into related functionality, you'd better close your browser and start right now.

A tier, on the other hand, is all about physical distribution. In .NET we'd be making extensive use of serialisation and remoting (or XML web services), all of which are quite hard to maintain. We might get a lot of reuse, but this is at the cost of much greater complexity, and lower overall performance. Distributing an application across physical boundaries (across machines and/or processes) should not be a trivial decision to make.

And, in a nutshell, that's why I think we need to clear up the semantics of tiers and layers. One cannot be argued against; the other needs to be argued for. The sooner we're all talking about the same thing when discussing them, the better.

Update: Added a few diagrams, and elaborated on them based on suggestions from Jim Tollan.


.NET | Application Architecture | Design Patterns

Dependency Inversion for Dummies: Constructor Injection

by Ben Hart 24. September 2008 12:06

Dependency inversion is not really a design pattern. It is more of a principle, one that if followed allows the creation of much more maintainable, testable and elegant code. It reduces coupling, and can increase cohesion.

It falls under a broader principle, Inversion of Control (IoC), and is widely used. I'm not going to go into great detail, mainly since that's going to reduce my average page-read time from 00:00:05 to 00:00:01, and I'm starting to find the tracking stats from Google a little depressing.

I read about Unity. That's Dependency Inversion.

Largely. Unity is Microsoft's first official entrant in the IoC container space. These are tools that simplify Dependency Inversion, and there are many others, most a lot more mature than Unity. These tools typically do much more than simple dependency injection, too, and they're worth looking into.

This post is not about tools, though, but rather the underlying principle that they support. I do reserve the right to jump on the tool comparison bandwagon at some point, though, and I will illustrate their utility in follow up posts.

What's a dependency?

Dependencies are everywhere in our code. Whenever one class relies on another class for functionality, a dependency forms between them - they become coupled. This becomes problematic when we wish to change the class that another depends on: anything beyond the simplest of changes tends to break everything else. Our code becomes brittle.

The following sample shows a simple console app that outputs tags. It uses a service that retrieves tags from a database, and returns only unique ones in a list. A fabricated scenario, but illustrative.

class Program
{
    static void Main(string[] args)
    {
        var tagService = new TagService();
        foreach (string tag in tagService.GetUniqueTags())
        {
            Console.WriteLine(tag);
        }
        Console.ReadLine();
    }
}
 
public class TagService
{
    public IList<string> GetUniqueTags()
    {
        ITagData db = new TagDatabase();
        return new List<string>(db.GetTags().Distinct());
    }
 
    public int TagCount()
    {
        ITagData db = new TagDatabase();
        return db.GetTags().Length;
    }
}
 
public interface ITagData
{
    string[] GetTags();
}
 
public class TagDatabase : ITagData
{
    private string[] _tags = new[] { "Word1", "Word2", "Word3", "Word1"};
 
    public string[] GetTags()
    {
        return _tags;
    }
}

In this code, TagService depends on a TagDatabase, evident in its creating one when we call GetUniqueTags. (Note that we're aware that it is better to program against an interface than an implementation, hence ITagData.) We also quickly realise that TagCount creates the tag database too, our first concern, so let's sort that out.

class Program
{
    static void Main(string[] args)
    {
        var tagService = new TagService();
        foreach (string tag in tagService.GetUniqueTags())
        {
            Console.WriteLine(tag);
        }
        Console.ReadLine();
    }
}
 
public class TagService
{
    private ITagData _db;
 
    public TagService()
    {
        _db = new TagDatabase();
    }
 
    public IList<string> GetUniqueTags()
    {
        return new List<string>(_db.GetTags().Distinct());
    }
 
    public int TagCount()
    {
        return _db.GetTags().Length;
    }
}
 
public interface ITagData
{
    string[] GetTags();
}
 
public class TagDatabase : ITagData
{
    private string[] _tags = new[] { "Word1", "Word2", "Word3", "Word1" };
 
    public string[] GetTags()
    {
        return _tags;
    }
}

All that we've done at this stage is construct the database within the constructor of the service, allowing us to reuse it across methods.

Looks like a good layered application to me

We often see these patterns in a layered architecture, one that has UI depending on a Business Logic Layer (BLL), which in turn depends on a Data Access Layer (DAL). Often code in a form will create a BLL class, which in turn creates a DAL class for data. We'll often claim that by doing so we've separated our concerns, and can quite easily swap out a different database layer if we so desire. Given the above pattern, it's as simple as going through every BLL class, and changing the constructor to create the new DAL class. Each and every class? That's not so simple...

It also creates ignorance amongst the users of our BLL classes that what they're actually doing is hitting the database. They just new up an object, blissfully unaware that they're actually newing up a whole chain of them. They might not even have the database set up on their machine, and get a fright when they first run the application.

Tell me what you want, what you really, really want

Apart from an anthem of an era most of us would like to forget, this turns out to be a good strategy in life. If we are open about our requirements, people can think about whether they are able to provide them. If they don't have what we want, they know they need to get it. If they can't get it, well, they better find someone else.

Ok, you've lost me with the spice girls wisdom. What does this have to do with dependency inversion?

Well, as I mentioned, we need to be open about our requirements.

I'll tell you what I want, what I really, really want

The easiest means to invert dependencies is to declare them in your constructor. Rather than allow the user of your class to whimsically new it up, force them to provide the dependencies. This is known as Constructor Injection: we inject the objects a class depends on through its constructor. I've modified the sample to illustrate.

class Program
{
    static void Main(string[] args)
    {
        var tagService = new TagService(new TagDatabase());
        foreach (string tag in tagService.GetUniqueTags())
        {
            Console.WriteLine(tag);
        }
        Console.ReadLine();
    }
}
 
public class TagService
{
    private ITagData _db;
 
    public TagService(ITagData database)
    {
        _db = database;
    }
 
    public IList<string> GetUniqueTags()
    {
        return new List<string>(_db.GetTags().Distinct());
    }
 
    public int TagCount()
    {
        return _db.GetTags().Length;
    }
}

Notice that the console app now creates the service, providing the newed-up database, and that I can't create the service unless I've got a database to give it. I've decoupled the two classes, with the only dependency being the one that matters: the interface.

I can now use the service with any class that implements the ITagData interface, giving me great flexibility to unit test it, implement ITagData classes that retrieve tags from any source, and so on.

The service is immediately a lot more reusable, and its purpose has become a lot clearer. Rather than being a class that creates a database, retrieves all tags and finds those that are unique, it's now a class that takes a given database, and returns those tags that are unique. That subtle difference goes a long way towards providing better cohesion, and a code base that's built to last.
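To make that testability concrete, here's a sketch of a unit test against the TagService above, using a hand-rolled in-memory fake. FakeTagData is a hypothetical name, and any test framework's assertions would do in place of Debug.Assert:

```csharp
using System.Collections.Generic;
using System.Diagnostics;

// A hypothetical in-memory fake: no database, no connection strings.
public class FakeTagData : ITagData
{
    public string[] GetTags()
    {
        return new[] { "Word1", "Word1", "Word2" };
    }
}

public class TagServiceTests
{
    public static void UniqueTagsAreDistinct()
    {
        // Inject the fake where a TagDatabase would normally go.
        var service = new TagService(new FakeTagData());
        IList<string> unique = service.GetUniqueTags();

        Debug.Assert(unique.Count == 2);       // "Word1", "Word2"
        Debug.Assert(service.TagCount() == 3); // raw tags, duplicates included
    }
}
```

The test never touches a database, and runs in milliseconds - exactly the flexibility the interface-only dependency buys us.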

Where to from here?

The above sample relies on the console app to create both classes, with one being passed into the other. In this simple example this is easy, but in real applications the number of classes that are created quickly spirals. Further, when we need to use the service in a number of places we'll end up creating these classes again and again, resulting in some equally brittle code.

Join me next time when we look at some creational patterns that solve this, and bring in some of the tools I mentioned above.


C# | Design Patterns

Design Patterns 101: State Pattern

by Ben Hart 20. September 2008 14:07

Let's get back to basics.

I've learnt over the years that there are common problems encountered when developing software. Often one finds the same recurring challenge in code. Something seems familiar, and you're sure it's not the music blaring from your neighbour's earphones. You've a growing feeling that you've seen this before, a recognition that you're repeating yourself, a developer's déjà vu. You've just found a pattern.

Design Pattern? Is that like the paisley wallpaper in my bathroom?

No, but sort of. Design patterns are formalised solutions to common software problems. While we like to think that we're special, the truth is that most software problems have already been solved. You're actually one of millions hacking away at your IDE. One of millions who's had to step back, and say, "hey, what's the best way to do this?" The best ways have a habit of recurring, and eventually become distilled as a named pattern.

There is no better way for a developer to spend their time than familiarising themselves with design patterns. If you were sitting with a car designer who was pondering how best he could get his funky new Italian sports car rolling on the tarmac, you'd be quite quick to quip, "Dude, it's called a wheel, connected to an axle. Where did you go to car design school?" Software design patterns are to your software as wheels, axles and doors are to a car designer. You customise them to your need, weave them together while building your masterpiece. You shouldn't dream of reinventing them.

State

The problem that the state pattern solves is everywhere. Look into your code and find those switches and nested ifs. Look at them, and see if you repeat the same switch or nested ifs elsewhere. Yes? Really? Man, you better get yourself some state.

State is useful when we find ourselves manipulating an object differently based on the object's state. Imagine we had to deal with orders in our system. An order is for a customer, and it can be dispatched to that customer. We cannot change the customer of an order after it has been dispatched. Our class might look something like this.

public class Order
{
    public string Customer { get; set; }
    public bool IsDispatched { get; set; }
 
    public void ChangeCustomer(string customer)
    {
        if (IsDispatched)
        {
            throw new Exception("Order has already been dispatched");
        }
        Customer = customer;
    }
 
    public void Dispatch()
    {
        if (IsDispatched)
        {
            throw new Exception("Order has already been dispatched");
        }
        IsDispatched = true;
    }
}

We've only implemented the simplest of features, and already we're repeating ourselves. I hate repeating myself. Look at those checks for IsDispatched. They're ugly, and fragile.

If we puzzle over the above class, we're quick to notice that an order actually has two states; let's call them "New" and "Dispatched". A "New" order behaves differently to a "Dispatched" order.

A battle of two approaches

Some might say at this point that what we actually have is a hierarchy, and instinctively jack out the base class Order, with two sub-classes New and Dispatched. For better or worse, an object can't be two types at once, and transitioning it from New to Dispatched is going to be more painful than gender reassignment surgery, and probably less effective.

Besides, there's a saying that we should favour composition over inheritance, and I think it to be true. Inheritance paints us into a corner; it constrains our code. As soon as sub-classes depend on their base's behaviour, we can never easily modify the base. We might save a few lines of code here and there, but we pay for those later.

What we need to do here is introduce a class that represents the Order's state.

public class Order
{
    public Order()
    {
        State = new New(this);
    }
 
    public IOrderState State { get; set; }
    public string Customer { get; set; }
 
    public bool IsDispatched
    {
        get { return State.IsDispatched; }
    }
 
    public void ChangeCustomer(string customer)
    {
        State.ChangeCustomer(customer);
    }
 
    public void Dispatch()
    {
        State.Dispatch();
    }
}
 
public interface IOrderState
{
    bool IsDispatched { get; }
    void ChangeCustomer(string customer);
    void Dispatch();
}
 
public class New : IOrderState
{
    private readonly Order _order;
 
    public New(Order order)
    {
        _order = order;
    }
 
    public bool IsDispatched
    {
        get { return false; }
    }
 
    public void ChangeCustomer(string customer)
    {
        _order.Customer = customer;
    }
 
    public void Dispatch()
    {
        _order.State = new Dispatched();
    }
}
 
public class Dispatched : IOrderState
{
    public bool IsDispatched
    {
        get { return true; }
    }
 
    public void ChangeCustomer(string customer)
    {
        throw new Exception("Order has already been dispatched");
    }
 
    public void Dispatch()
    {
        throw new Exception("Order has already been dispatched");
    }
}

Note, firstly, that the order does a lot less now. It defers much of its decision making to its state. Rather than trying to change its own customer, it asks its state to. It doesn't dispatch itself, the state takes care of that.

Secondly, note that we have a lot more classes. This is often a good thing. A class should have a single responsibility - it should define, exhibit and encapsulate one concept. When something goes wrong, we know exactly where to look. If we're curious, we can take in a whole concept just by reading one class.

In this code we have a simple interface that defines what each state must be able to do. We can open New and understand what a New order should do. I can change this behaviour easily, without worrying about what might happen to my other states, or to Order. Similarly for Dispatched.

And, most usefully, should I need to introduce a new state, I just need to create one class, and define the transitions between it and the other states. My order remains unchanged.
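To see the transitions at work, here's a quick sketch of the classes above in use:

```csharp
using System;

class Program
{
    static void Main(string[] args)
    {
        var order = new Order();
        order.ChangeCustomer("Acme Ltd"); // allowed: the state is New

        order.Dispatch(); // New hands the order over to the Dispatched state

        try
        {
            order.ChangeCustomer("Initech"); // the Dispatched state refuses
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message); // prints "Order has already been dispatched"
        }
    }
}
```

Note that Order itself contains no dispatch checks at all; each state decides what it will and won't allow.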

Wrapping up

Note that this is not necessarily the best example, and you might be right to point out that such simple requirements have been over-engineered. State really shows its mettle when you have fairly complex states, and many of them (but it is extremely useful even when you don't!). Consider that Order would most likely have "New", "Approved", "AwaitingDispatch", "Dispatched" and "Delivered" states. Complex workflows like these are crazy to do without State.

Other good examples are adjusting calculations based on state (if the account balance is below 0, charge interest; above 500, pay more interest), changing rewards based on importance (gold customers get double points), and so on. Good examples are everywhere. Unfortunately, I didn't think of them when I started writing this post.


Design Patterns | C#

BenHartOnline? Dude...

by Ben Hart 19. September 2008 11:12

BenHartOnline is my new domain. Not my first choice, I should point out.

Firstly I thought I'd like benhart.com, the most obvious choice. This is my blog, my thoughts, it seemed most logical to just use my name. Unfortunately that was taken (and, frustratingly, parked). Maybe I'll contact him someday.

Next I thought I'd go for something quirky. A clever play on words. Something curt, fitting, representative of who I am. Sadly, it seems I'm not the most creative guy. My creativity is expressed in the patterns and logic of code. Everything I thought of was taken. Time and time again.

Yes, I could have gone for the not-dot-com's. I know full well, though, that the first extension people try is a .com. "No, mom, somequirkydomainname.za.net, not .com".

And then I tried again on a different domain registrar, checking whether my name was available, just in case Dave decided he no longer wanted it. A little more sophisticated than my usual WebAfrica, it gave me a few suggestions, and top of the list was BenHartOnline.

It seemed fitting that my online identity was chosen by an algorithm, so I took it.

And here we are.


Blogging

Agile estimation and tendering

by Ben Hart 18. September 2008 15:41

Lately I've been involved in a number of interesting conversations regarding agile development, particularly in how it affects estimation and project management.

This is less relevant to internal IT departments, or those developing products. This concerns the world of bespoke software, custom-developed line-of-business applications, where obtaining work is often less about what you'll actually deliver than about the promises you make up front.

An agile refresher

Agile by definition does not constrain scope. It avoids big design up front (BDUF). It works with the customer to keep development effort focused by priority. Scope becomes fluid: the constant development of the next most important feature. Change is welcomed, as it only reveals a misunderstanding of what the customer really needed. It is a paradigm shift, an attempt to reverse the trend of failed software projects.

At agile's heart is the rejection of the notion that software can be constrained. It acknowledges that users can't completely define their requirements in a workshop. It concedes that system analysts cannot specify how to build software through UML. It recognises that real requirements gathering begins when developers start building, and start to question what is and is not possible. It embraces the fact that a user will change their mind when they interact with what they asked for. More than anything, it's all about ensuring that the customer gets what they need.

You want an open chequebook?

This of course conflicts with many accepted business practices. Software costs money, and is a significant investment. Decision makers need to know what is required and why. Companies need budget approval. Procurers need a means to evaluate different tenders. Boards need to know how much the software will cost. Up front!

This creates a quandary. Do we concede that we can't tell a company how much it will cost ("yes, we want an open chequebook")? Do we go through the motions of tendering, completing documents that we know are worth little more than the paper they're printed on (but cost more in man-hours than the printer and network that printed them)? Do we estimate what the other tenders will be, and undercut them, dealing with the fallout later knowing that it's easier to get budget extension than initial approval?

NO, none of the above

We educate. We engage with our customer, reminding them of the failures of our industry, and the collective learning of the last decade or so. We ask them what success they've had on software projects recently, and whether what they received matched their needs, on time, within budget. We explain that Agile is about continuous delivery, and that we'd rather take a small piece now and prove ourselves before we've both entered into a contract that will likely fail.

With this small piece we prove our capability. We build a trusting relationship. We bring the customer into our team, involving them in the daily challenges of building software. We surprise them by smiling when we hear that they forgot about the workflow of an entity, rather than demanding RFCs or other contractual documents. We delight them when we show them that this workflow has been included the next week, and is ready to be used. We impress them with the high quality of our work, and share a laugh over the joys of automated tests.

We demonstrate delivery momentum, and together understand how much software can be delivered in any given period. We share openly what challenges the future holds, and how demonstrated momentum will fluctuate with complexity. We agree to the renewed now-six-month contract, and our development output doesn't skip a beat.

We build on our success, and change the appalling track record of our industry, one less failed project at a time.

But what if they don't buy in?

We leave our contact details, and walk away.


Agile

Running unit tests from the keyboard with ReSharper

by Ben Hart 16. September 2008 15:01

I’m a huge ReSharper fan. I consider it an essential tool. Without it you’re simply wasting your own time (or that of your employer, depending on how you see such things). Apart from extending Visual Studio with much better snippets, essential refactoring, static code analysis, intelligent code completion and much, much more, it allows you to spend so much more time without grabbing the mouse. Not just party-trick stuff; functionality that empowers you, tasks that could take hours, done in seconds.

Uhhh, I know all about ReSharper, just not how to run Unit Tests from the keyboard!

Of course never leaving the keyboard means using it for more than typing. This is typically achieved either through context menus (often displayed with the little-used properties key, which I now notice is absent from my laptop), or, better, shortcut keys. Visual Studio is jam-packed with shortcuts, and the time spent learning them will pay off tenfold.

What ReSharper really gets, though, is context. Try refactoring any piece of code with Ctrl+Shift+R, and the options appropriate to the member are filtered. Suggestions for code completion have insight into their surroundings, suggesting variable names I hadn't thought of yet, but actually wanted.

Action menu

Light-bulbs appear, allowing the convenient Alt+Enter, a shortcut to actions that you might wish to perform.

A further bonus of ReSharper is the Unit Test Runner. Many argue that it’s not the best (it probably isn’t), but it suits my needs, and is included in an already required tool. One feature I really like (that I haven’t noticed elsewhere) is the console output, “remembered” by test. Using NHibernate, for one, I can easily examine the SQL being output, and peruse these statements conveniently long after the suite has run.

Resharper Console Out

Yet despite the R# devs clearly having spent much time in the keyboard ninja dojo, it isn't immediately obvious how to simply run the tests you're working on from the keyboard. I put effort into keeping my tests focused and quick. I don’t want to have to stop coding and grab the mouse to see how my tests are going. ReSharper gets context, it really does. How did they forget this?

It's simpler than I realised. Embarrassingly obvious.

Visual Studio, of course, has the ability to bind shortcuts to actions (Tools - Options - Environment - Keyboard). Filtering these by “Resharper.UnitTest_” shows the applicable commands.

Resharper Keybindings

"ReSharper.UnitTest_ContextDebug", "ReSharper.UnitTest_ContextProfile" and "ReSharper.UnitTest_ContextRun" debug, profile or run tests contextually. If the caret is within a test, only that test will be run; if it's within a fixture (but not an individual test), all tests in the fixture will be run.

ReSharper allows tests to be grouped into sessions, arbitrary collections of tests you build up. "ReSharper.UnitTest_RunCurrentSession" will run all tests in the currently selected session (topmost in the window), and "ReSharper.UnitTest_RunSolution" will, well, run all the tests in the solution.

The hardest part is finding combinations of keys that aren't already assigned.


Thanks to Duncan for the heads up on this one.


ReSharper | TDD

First post

by Ben Hart 16. September 2008 04:34

I've finally bitten the bullet and decided to start blogging. To give credit, I guess two posts pushed me over the edge.

Lately I've been relying more and more on others' blogs for learning, so I hope in some way this will become a way to pay it forward. I will use it to record my learning, and as a repository for future reference. I hope it to be a means to document my thoughts, express my ideas, and, ideally, share a few tips and tricks along the way.

I settled on BlogEngine.NET, after some initial tinkering with SubText. This choice was mainly due to BlogEngine's simplicity of setup and its meeting my (very short) list of requirements with minimal effort. (In fairness, it is feature rich, I just don't yet know exactly what my requirements are.) I haven't settled yet (I'm still maintaining my private SubText instance for comparison), and will likely post my findings in due course.

I've included a little more about me. Most likely my future posts will revolve around these topics, but let's see where this heads.


Blogging

Powered by BlogEngine.NET 1.4.5.0
Theme by Mads Kristensen

About me...

I'm a passionate .NET developer, with C# my language of choice. I've been at it for a number of years now, and enjoy that I'll never shake the feeling I'm just starting out.

I love software, and I love building it even more. I love knowing that my work facilitates others', and that one line of code at a time, we're increasing our capability.

More...


