Wednesday, January 21, 2009

Essential Project Practices: Communication and close customer collaboration

I'm a firm believer in close customer collaboration, a high degree of communication, and continual status updates as the best way to manage almost any project. I would call it one of the major "make or break" characteristics of a project.


I remember being asked during an interview with my current employer what I believed were the biggest reasons projects fail. Communication was the big thing that came to mind. Projects fail for a myriad of reasons, but not least because of a lack of communication between the involved parties. I thought I'd try to get into a few of those aspects here.


Working on communication at all times is important. The various roles in a project or organization have different responsibilities here.

 


Upper management and project champion
First of all, any project, and especially those related to organizational changes, needs upper management support. If the project needs to communicate, integrate or in other ways depend on other parts of the organization, it is hard to do so without support from the top. The unofficial networks built by social interaction within an organization are one way it can still work. But management support is extremely important, so you had better find out how to get it. Even if it is not your direct responsibility, pull some strings to find out if something can be done about it.


Another role of extreme importance is the project champion. A project champion is a person highly dedicated to seeing the project through from inception to success. Not entirely on topic, but this is the person who will help make sure that the needs of the project are taken care of throughout the organization. Communication is tool number one here.


Project manager -> developers
The project manager and the development group also need to communicate. I use project manager here to mean several possible positions: project manager in the traditional meaning, product owners in Scrum tongue, or even in part Scrum masters. The two sides have different needs for communication. The project manager has the primary responsibility of delivering the project on time, within budget and according to specification. There are many ways of handling this.

 

Organizational requirements and processes are the primary one, but trust is no less important.
Organizational requirements and processes are something you have to live with in some form. If you are able to build trust in what you deliver outside of the project scope as well, you'll most likely have more control over what you need to deliver. The major reason behind many of the demands and project artifacts found today is a lack of trust. That lack of trust is compensated for by putting in a myriad of artifacts, processes and control points. Even though that is often a false sense of security, I generally agree with the approach: if you don't have trust, the next best thing is to monitor the progress.

 

So how do you build trust? First and foremost you need to deliver (hopefully on time, within budget, and according to specification :) ). Communication is tool number two. When problems arise they need to be communicated up the ladder as needed. If there is trust that problems, once discovered, will be taken through the right channels, you have a good basis for solving any problem when it arises. To build trust, you need to communicate continuously. People are different, but little and often usually beats much and seldom. Manager Tools (a brilliant podcast series) has an episode describing how humans relate to trust and relationships.

 

Project -> Business expert and users
An essential step in delivering a product that fulfills needs is close collaboration with the business experts and users. It cannot be a single-iteration waterfall thing; it needs to be a continuous collaborative effort. You do something, then you verify what is done. You wonder about something, you contact the business expert or user. And how much easier is it to get both attention and clear answers from someone in the same room than over phone or email?

 

Developer -> Developer
Developers are certainly not excluded from the communication effort. Developers need to continuously communicate within the group to ensure everyone has a clear understanding of the problem at hand, to build project ownership, to share project knowledge, and to notify and get help for problems. And if you are a developer, what better way is there to increase your skill-set than to communicate with other developers? And try to be aware of groupthink; that is certainly not good to have too much of.

 

In conclusion
Writing about something like this is impossible to do well without references and planning ahead, so I'll readily admit that the above text is heavily flawed. However, the concept is very important, and that's why I wanted to share a few disconnected thoughts on it.

 

If you find the topic interesting it's not hard to find sources talking about it. It is slightly harder to find interesting sources.
The podcast mentioned above covers interesting topics, but it is mostly related to managerial behaviours.
If teams and communication interest you, you'll find Peopleware: Productive Projects and Teams by Tom DeMarco and Timothy Lister great.
If other aspects of communication seems interesting, I'll be happy to recommend books on the subject.

Tuesday, January 20, 2009

NNUG: Tonight's meeting

Jimmy Nilsson and Tore Vestues were the speakers at tonight's NNUG meeting.


Jimmy Nilsson is best known for having written "Applying Domain-Driven Design and Patterns", one of the few good books on DDD, as well as for having a practical focus on, for instance, TDD. Jimmy was nice enough to speak for us tonight after having held a LEAP course today. I hadn't heard him live before, but if you can say anything about Jimmy, besides that he is very skilled, it is that he is a really nice guy to listen to. Good stuff :) For those that missed it, his speech "En ny era för dataåtkomst?" (A new era of data access?) took on the history and general approaches of data access, as well as more general info about TDD. Please remind me that we must have a topic taking a closer look at the data access options soon.


Tore Vestues gave a speech about Code Quality. (In the interest of full disclosure: we are colleagues.) The speech was well worth the time, and the topic should be at the absolute top of your list of priorities.

Monday, January 19, 2009

Interesting Old Stuff: Management tutorial

I browsed through an old mailbox I used long ago today, and stumbled upon an old weekly tutorial from a project management class I attended. That week's tutorial was about methodologies, continuous feedback and client demonstrations. I know I would have a lot to say about that now :)

 

An interesting read, in a somewhat sentimental way, for me. I'll post it here just for fun.

 

You are the IT project manager of an organisation developing innovative ‘leading-edge’ warehousing solutions. Although a number of individual software components have yet to be completed your clients are insisting on regular updates of your progress, backed up by demonstrations of the latest versions of the individual software components. Their insistence is being reinforced by the client’s threat to withhold staged payments that are required to be made as part of the contract terms.

 

How might this type of client behaviour affect the type of development methodology you employ?

With any project we believe it is important to supply the client with regular updates and demonstrations of the latest versions of the individual software components. When dealing with a very insistent client, as in this case, we believe that there is a need to use a methodology that incorporates user/client involvement throughout the project.

The development methodology type that is most suited for these requirements is one based on prototyping. By using such a methodology the project will focus on and deliver working prototypes, and show early efforts of integrating the different parts of the project. An example of a methodology that could be suitable for this type of project is Extreme Programming (XP). XP is a spiral-type methodology with short iterations and close customer contact. It has a strong focus on integrating early and creating working prototypes. Since the customer is updated on the progress of the project at short intervals and is able to change direction in response to problems or new requirements, they will be able to do real-time planning with the project.

Generally we believe that it is important to know and understand any development methodology, and not use any aspect of it without considering the consequences. Often you need to use parts of different methodologies, or change parts of the one you are using, as it does not necessarily work perfectly in your organisation.

 

Why do you believe the client might be so insistent on receiving regular ‘proof of progress’ updates?

There are many possible reasons why a client would insist on receiving regular "proof of progress" updates. We believe that such an insistent client's main reason for behaving like that is the risk of project failure. It is well known that many IT projects fail to complete at all, complete far over time or budget, or just aren't used as planned in the organisation.

During a project there are many problems that occur. A client wants to see that everything is going as planned, that the project milestones are met, and that the contracted agreements are fulfilled.

If a project develops away from the project specifications and the project plan agreed upon with the client, then the client can demand corrections according to the contract or even bail out. The earlier this is discovered, the better it is for the client.


You discuss the issue with your own executive management and they make a suggestion that you merely ‘fake’ the demonstrations without making any genuine attempt to provide a genuine product. What are some of the implications here?

If you were to “fake” the demonstration of a product we believe that there can be both positive and negative implications to this.

Firstly, it might be positive for the project to do this if you are having a slow start to the project but you are still very confident that you will be able to finish the section in question on time.

Another reason might be if you are up to date on the coding, but you might still have problems showing visual proof of progress. By faking the demonstration you can avoid problems you might get by not producing visual results. In other words you will not be picking the low hanging fruits first just to have something to demonstrate.

On the other hand there are some negative implications too. You risk giving the client false hopes about the probability of project success. If the client detects that the demonstration was faked, you will have to take the consequences. These can range from the client losing trust in you, through loss of reputation and loss of the contract, to possible lawsuits.

All of these negative implications are risks that can lead to financial consequences. Therefore it's important that you consider the implications thoroughly before deciding to fake such a presentation.

Interesting Blog Posts: NHibernate and Entity Framework Battle it Out in the Real World

If an OR-mapper is what you need, here's a short and interesting real life example of how a showdown between EF and NH went.

 

Note: Look, I am saying OR-mapper here, so don't get me started with the "they are impossible to compare since EF is supposed to be a persistence-, object-, view-, query-, dataservices"-discussion. That is so 2008.

 

Sunday, January 18, 2009

Interesting Software Thoughts: 8 Things We Hate About IT

It's interesting to step a bit back from the daily technical crunch and see how the world of IT is viewed from the outside at times.


A management guru has a blog post titled the same as the header.


The points listed are the following, as well as a few thoughts I formed while reading them:

1. IT Limits Managers' Authority.
2. They're Missing Adult Supervision: IT needs close supervision. Short feedback loops through iterations that produce results are the way to get it. Milestones and a kill switch are another requirement.
3. They're Financial Extortionists: It's scary to think about all the money wasted on the appalling track record of IT in general. But there's undoubtedly a shared responsibility there, both on the IT and general business side.
4. Their Projects Never End: Setting deadlines, having milestones and defining Done is essential in any task.
5. The Help Desk is Helpless: Software has bugs. Especially since it is often a conscious decision to release something when it's "good enough". I don't envy the help desk personnel in IT in general.
6. They Let Outsourcers Run Amok: I strongly doubt IT is the main driver in outsourcing!
7. IT is Stocked with Out-of-Date Geeks: With the rate IT is changing, this is a most truthful and dangerous statement. It takes a lot of effort to stay up to date on the technologies and practices of IT. The number of years a person has been in IT doesn't necessarily add up to the skill level of the person.
8. IT Never Has Good News: Too seldom, that's for sure!


So what do we need to succeed with IT projects?

  • We need close customer collaboration.
  • We need projects with short feedback loops.
  • We need sane requirements engineering.
  • We need to know what Done means.
  • We need the business side to be active in the entire process to deliver what they need.
  • We need quality in the development force. Such a simple task :)
  • We need companies that dare to change. Running IT the old fashioned waterfall-way didn't work very well, did it? Time to try something new.
  • We need to be careful with new technologies. If it's new and you don't run a spike, then shame on you.
  • We need developers with a vested interest in the project's success, not just their own. The project must have priority over individuals. Heroic individuals are not the best approach for a project.
  • We need to build trust. Now that's a big one!
  • We need a hell of a lot more than I have listed here, but these just came off the top of my head.

One thing is for sure. The challenge is not only on our side of the table. But we should help educate the business people we work with every day on how we can more readily succeed.

Saturday, January 17, 2009

Interesting .NET Bugs: Object Initializers and using-statements

Ayende posted about this bug, but it's a hard-to-spot one, so I felt it was worth passing on.


If you use object initializers in the declaration of a using statement, it is natural to expect the Dispose method to be called if an exception occurs. After all, that is the point of the pattern.




However, object initializers are compiled in an unexpected way here: the object is constructed and its properties are assigned via a temporary variable before the using block's try/finally is entered. If one of the property setters in the initializer throws, Dispose is never called.



Beware.


(Both images stolen from Ayende. No harm intended.)

Friday, January 16, 2009

Interesting Software Resources: Domain Driven Design Quickly

If you haven't had time to get into any literature about Domain Driven Design yet, InfoQ has created a free online short version of the Domain Driven Design bible which is well worth the time.

 

Order the book now, and read the resource until you get it. This is ESSENTIAL.

 

Why it is needed, an excerpt:

Is it possible to create complex banking software without good domain knowledge? No way. Never. Who knows banking? The software architect? No. He just uses the bank to keep his money safe and available when he needs them. The software analyst? Not really. He knows to analyze a given topic, when he is given all the necessary ingredients. The developer? Forget it. Who then? The bankers, of course. The banking system is very well understood by the people inside, by their specialists. They know all the details, all the catches, all the possible issues, all the rules. This is where we should always start: the domain.

When we begin a software project, we should focus on the domain it is operating in. The entire purpose of the software is to enhance a specific domain. To be able to do that, the software has to fit harmoniously with the domain it has been created for. Otherwise it will introduce strain into the domain, provoking malfunction, damage, and even wreak chaos.

Thursday, January 15, 2009

Essential Project Practices: Throwing more programmers at a project

Frederick Brooks Jr. made an interesting and, in time, very well known statement when he coined Brooks's Law in 1975:

"adding manpower to a late software project makes it later"

He himself said it was an "outrageous oversimplification", but it is an observation that has held value throughout the years.


I think its primary value lies in the simplicity of the statement. It's a law that can be used to easily argue against common sense, which says that doubling the number of developers should double productivity. Even more, it's a statement that easily catches your interest in finding out why it was stated. One thing is to say that adding developers to a project won't double productivity; quite another is to say that it won't increase productivity at all; and it sounds downright absurd to say that it will make the project later.

The Law is based on a few main points:

  • New programmers will have to learn about the project.
  • Communication overhead will increase.

So what can you do to avoid fulfilling Brooks's Law? It's not like it's gravity we're talking about here.
There are many factors involved:

  • How skilled are the new developers?
  • How well do they know the domain?
  • How many are they?
  • At what point do they come into the project, early or late?
  • How well do they fit into the culture?
  • Are they team players that can accept the current process, or will they fight it?
  • How well can the work be segregated?
  • What practices are in place to ensure the quality of the work produced? Automated testing, Continuous Integration, close customer collaboration, code reviews and lots more can help ensure that things don't get out of hand.
  • ...

Pretty obvious, but it doesn't hurt to have a law for it.

Wednesday, January 14, 2009

Essential Software Practices: Favor composition over inheritance

This is an old and important principle that people seem to forget at times. So time to refresh your memory of this one as well.


In OO design you have two common ways of dealing with duplication and reuse: composition and inheritance.


Composition: assembling objects. "Has a" relationships. Black-box reuse (no internals visible).
Inheritance: defined in terms of the parent's implementation. "Is a" relationships. (Don't forget Liskov.) White-box reuse (access to parent internals).


As stated in the definition above, you use inheritance for "is a" relationships (that are substitutable). 


There are several problems with using inheritance for anything else:

  • You're not supposed to use inheritance for code reuse.
  • You can (sort of) break encapsulation since the sub-class is so tightly coupled to the parent's implementation.
  • Inheritance is set in stone at runtime, so no dynamic fun.
  • A subclass can easily make assumptions about the context in which the method it overrides is called.

 

Composition has strengths that inheritance lacks:

  • Any object with the same type can be replaced at runtime. This is important for many things, for instance testing - as we want to test in isolation.
  • You can "just plug in behavior" with composition. A small object can help decide how a bigger object works.
  • It is the better approach when you want to reuse functionality.

 

It is also a misconception that composition does not involve inheritance at all. It is typically done through implementing small interfaces rather than inheriting from a big class.

If you want to have a look at what Grady Booch defines as the "prototypical example of the flexibility of composition over inheritance", take a look at the strategy pattern.
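As a sketch of the idea (in Python rather than .NET, with hypothetical Order/DiscountStrategy names of my own), the strategy pattern lets a small object plug in behavior for a bigger one, replaceable at runtime:

```python
from abc import ABC, abstractmethod

class DiscountStrategy(ABC):
    """The small pluggable object deciding part of how Order works."""
    @abstractmethod
    def apply(self, amount):
        ...

class NoDiscount(DiscountStrategy):
    def apply(self, amount):
        return amount

class PercentageDiscount(DiscountStrategy):
    def __init__(self, percent):
        self.percent = percent

    def apply(self, amount):
        return amount * (1 - self.percent / 100.0)

class Order:
    """Composes a strategy ("has a") instead of spawning an Order
    subclass for every pricing variation ("is a")."""
    def __init__(self, amount, discount):
        self.amount = amount
        self.discount = discount  # replaceable at runtime

    def total(self):
        return self.discount.apply(self.amount)

order = Order(100.0, PercentageDiscount(10))
print(order.total())  # 90.0
order.discount = NoDiscount()  # swap behavior; no new subclass needed
print(order.total())  # 100.0
```

Note how a test could hand Order a stub strategy just as easily, which is exactly the runtime substitutability the bullets above describe.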

To sum up a short text: This is really a no-brainer. Think for a couple of seconds before choosing an approach, and just use your gut feeling.

Tuesday, January 13, 2009

Essential Software Concepts: Static methods

Static methods are common functionality in many languages. Whereas instance methods operate on instantiated objects, static methods belong to the type itself.


The big question is - when is it appropriate to use static methods? 

 

First, the naive positives. These are the positives that often come up, but which aren't really true.

  • Makes it more obvious which methods work on internal state. If it doesn't have state, it should be static.
  • Great for Factory Methods (See real negative).

 

And then the real pros and cons:

Real positive

  • No need to instantiate an object. This can be good in the case of small and discrete methods. Some utility classes are good examples, for instance Math.
  • Can be good for simple and effective, fire and forget functions.

Real negative

  • Is hard to refactor away from.
  • Doesn't work with inheritance, polymorphism and interfaces.
  • Makes code hard to test (some test frameworks can work around this). For those that can't, it's impossible to mock the behavior, since there is no natural way of substituting the implementation at runtime.
  • Static Factory Methods make callers bound to the concrete implementation. Impossible to replace during test. Handling object graph wiring this way makes it even worse.
  • Simple static methods can evolve into larger (still, possibly, untestable) beasts, which are a hassle to refactor away from. Do they really give enough advantage to be worth risking it?
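To make the factory point concrete, here is a sketch (Python, with hypothetical mailer/notifier names of my own) contrasting a static factory with passing the dependency in:

```python
class SmtpMailer:
    def send(self, to, body):
        raise RuntimeError("would talk to a real SMTP server")

class MailerFactory:
    @staticmethod
    def create():
        # Static Factory Method: every caller is bound to SmtpMailer.
        return SmtpMailer()

class HardToTestNotifier:
    def notify(self, user):
        # The concrete mailer is baked in; a test cannot substitute it.
        MailerFactory.create().send(user, "hello")

class TestableNotifier:
    def __init__(self, mailer):
        self.mailer = mailer  # dependency passed in, swappable at runtime

    def notify(self, user):
        self.mailer.send(user, "hello")

class FakeMailer:
    """A stub standing in for the real mailer during tests."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

fake = FakeMailer()
TestableNotifier(fake).notify("alice")
print(fake.sent)  # [('alice', 'hello')]
```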

 

What you potentially can read from a static method

  • Unless global state exists, the static method can only work on the input parameters. It is often more natural to move the method to one of these. Remember to keep logic as close to where it belongs as possible.

Just a note: "untestable" in the points above isn't necessarily true. I'm talking about the possibility of switching implementations with stubs/mocks to isolate what we are testing. It depends on what dependencies it has, of course.

 

There is use for static methods. Just be sure that it outweighs the potential negative issues!

Windows 7 useful stuff (link)

Tim Sneath, director of the Windows and Silverlight technical evangelism team, has an interesting post about 30 Windows 7 "secrets". It certainly served as a good reminder to myself that I need to get the Windows 7 beta. The fact that they have spent much more time preparing 7 for power users is great. I despise using things I can't customize.


note to self: download windows 7 beta shortly

Monday, January 12, 2009

Essential Software Principles: Don't Repeat Yourself (DRY)

Don't Repeat Yourself, also called the Single Point of Truth, is about representing any and every piece of information in a single representation.


It is usually used only for code, but was meant to have a broader meaning. According to the authors of The Pragmatic Programmer it should include "database schemas, test plans, the build system, even documentation". They further refer to various techniques for achieving this, like code generation, automatic build systems and scripting languages.


It is an important principle to focus on. Whatever it is you do, if it's described in two places one is going to get out of sync with the other. Other issues follow as well: duplication makes changes harder, reduces clarity and is perfect for creating inconsistencies in logic.


The Pragmatic Programmer defines four reasons for duplication:

  • Imposed duplication: You feel you have no other choice
  • Inadvertent duplication: You don't realize you are duplicating
  • Impatient duplication: Duplication seems like the simplest option
  • Interdeveloper duplication: Multiple people duplicate


All are common causes of duplication. Being aware of each and working to avoid them is essential.


The prime example of imposed duplication is documentation vs code. Often this is something you cannot avoid, and it can also bring value: different views of a system can be terribly helpful in simplifying and understanding issues. But as we all know, these things quickly get out of sync. And the effort of keeping documentation up to date is seldom a fun one.


Often when you have an example of imposed duplication, other ways exist to solve it. Code is often duplicated into documentation because it is not available externally in a concise form. Perhaps you could get away with the BDD way of doing it, having running pieces of business rules. That would in many ways remove the duplication.


I don't have much concrete to say about it concerning our prime artifact: the code. Except one thing. You don't duplicate unless you have a really, really good reason for it. Duplication leads to anger, anger leads to hate... hate leads to suffering.
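A minimal code-level sketch of the principle (hypothetical username rule, my own example): keep one representation of a business rule and let every caller share it, instead of copies that drift apart.

```python
# Before: the rule for a valid username is represented twice, and the
# two copies have already drifted apart (one forgot the isalnum check).
def register_dup(username):
    if len(username) < 3 or not username.isalnum():
        raise ValueError("invalid username")
    return f"registered {username}"

def rename_dup(username):
    if len(username) < 3:  # inconsistent: missing the isalnum check
        raise ValueError("invalid username")
    return f"renamed to {username}"

# After: a single representation of the rule, shared by every caller.
def validate_username(username):
    if len(username) < 3 or not username.isalnum():
        raise ValueError("invalid username")

def register(username):
    validate_username(username)
    return f"registered {username}"

def rename(username):
    validate_username(username)
    return f"renamed to {username}"
```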

Sunday, January 11, 2009

Essential Testing Practices: Arrange, Act, Assert (AAA)

Unit testing is an important practice for developing a system. There are many things to it, and many reasons behind it, things I'll probably get into at a later time. One of the aspects of HOW tests are written is the structure of the tests.


Since unit testing was introduced, there have been many ways of specifying unit tests. The usefulness of mocking frameworks has become more and more acknowledged, but the structure of tests has often suffered under immature frameworks (and languages!).


The structure of unit tests matters because it makes them:
- easier to understand as examples of how the code under test is used
- easier to change
- easier to fix
- easier to write. You should be able to write them in the natural way you think about them.


The Arrange, Act, Assert unit testing structure pattern is an old way of solving this. The concept is utterly simple to understand:


First you arrange the state of the test. Then you act by executing the code that is tested. At last you assert by checking that what you expect to happen actually did.
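As a sketch (in Python's unittest rather than a .NET framework, with a hypothetical ShoppingCart class of my own), the three steps look like this:

```python
import unittest

class ShoppingCart:
    """Hypothetical code under test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class ShoppingCartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        # Arrange: set up the state the test needs
        cart = ShoppingCart()
        cart.add("book", 20.0)
        cart.add("pen", 2.5)

        # Act: execute the code under test
        total = cart.total()

        # Assert: check that what we expected to happen actually did
        self.assertEqual(total, 22.5)

if __name__ == "__main__":
    unittest.main()
```

The blank line between each phase is a small habit that makes the structure visible at a glance.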


Note: both Typemock and RhinoMocks support the AAA syntax. That means you no longer need to use the record/replay mode, and shouldn't.

Saturday, January 10, 2009

Interesting Software Practices: Code in non-English

Which language to write code and comments in can sometimes come up as a question in countries where English isn't the native language.


There's both good and bad things with doing this, and I thought I'd list a few thoughts about it.

The bad things about writing code and comments in your own language is:

  • People that don't speak the language can't easily work with the system. This affects both professionals moving in from other countries and outsourcing scenarios.
  • There will always be mix of languages involved. Any external libraries/frameworks/components will use a different language.
  • Any characters not present in English will have to be converted somehow. If there's no direct match, this can lead to varying translations of the same term.

The good things about using your own language is:
  • You might have an easier time with your ubiquitous language. Speaking in a mixture of languages or trying to translate could lead to misunderstandings and loss of important distinctions.
  • You might use less time finding the correct words.

 

Did I forget anything obvious? What do you think?

Friday, January 9, 2009

Interesting Software Books: Dreaming in Code

Dreaming in Code is a book best described by its author:


"Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software ... I spent three years following the work of the Chandler developers as they scaled programming peaks and slogged through software swamps. In Dreaming in Code I tell their stories."


This is not a book for the general developer to learn about software development from, nor is it supposed to be; I'd recommend it to a developer as an interesting story only. The main reason I'm mentioning it here is that I think it could have value for an outsider wanting to learn about software development through an interesting story. It does so partly by telling a story that shows an example of a software process (granted, quite different from many non-open-source projects), and partly by getting into important information and characteristics about our industry.


Here's one of the less informative but quite funny jokes about our profession (I'm copying it since he said himself it can be found in a myriad of places):


A Software Engineer, a Hardware Engineer, and a Departmental Manager were on their way to a meeting in Switzerland. They were driving down a steep mountain road when suddenly the brakes on their car failed. The car careened almost out of control down the road, bouncing off the crash barriers, until it miraculously ground to a halt scraping along the mountainside. The car's occupants, shaken but unhurt, now had a problem: they were stuck halfway down a mountain in a car with no brakes. What were they to do?
    "I know", said the Departmental Manager. "Let's have a meeting, propose a Vision, formulate a Mission Statement, define some Goals, and by a process of Continuous Improvement find a solution to the Critical Problems, and we can be on our way."
    "No, no", said the Hardware Engineer. "That will take far too long, and, besides, that method has never worked before. I've got my Swiss Army knife with me, and in no time at all I can strip down the car's braking system, isolate the fault, fix it, and we can be on our way."
    "Well", said the Software Engineer, "before we do anything, I think we should push the car back up the road and see if it happens again."

Thursday, January 8, 2009

Essential Software Refactorings: Compose Method

Joshua Kerievsky introduced a refactoring in his (great) book Refactoring to Patterns, which involves a practice I believe strongly in. The refactoring is called Compose Method. (Spoiler: It is really all about using short and descriptive methods.)


The technique is to transform complex logic into a number of self-explanatory methods. The whole point is to increase understandability, and by that also maintainability.


Here's a simple example, taken from the Industrial Logic catalog page for this refactoring. You have a complex method; to increase clarity, you refactor the individual parts into smaller methods, making the logic easier to understand (I don't fully agree with the coding standard used, though):
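The original catalog image isn't reproduced here, but the same idea can be sketched in Python with a hypothetical "add to a bounded container" method of my own:

```python
CAPACITY = 10

# Before: one method mixing validation, capacity checking and the
# actual work at different levels of detail.
def add_item_long(items, value):
    if value is None or value <= 0:
        raise ValueError("value must be positive")
    if len(items) >= CAPACITY:
        raise OverflowError("container is full")
    items.append(value)

# After Compose Method: the public method reads as a few
# self-explanatory steps at a single level of detail.
def add_item(items, value):
    ensure_valid(value)
    ensure_capacity(items)
    items.append(value)

def ensure_valid(value):
    if value is None or value <= 0:
        raise ValueError("value must be positive")

def ensure_capacity(items):
    if len(items) >= CAPACITY:
        raise OverflowError("container is full")
```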



The main advantage with this approach is that core logic will be more readable and maintainable. A disadvantage is that classes will be very fragmented, which can be annoying. It can also affect debugging, since logic is so decentralized.


In terms of length of methods, Joshua defines composed methods as usually being about five lines long, rarely over ten.

Long methods have many undesirable properties, like being harder to understand, harder to refactor and containing duplicated logic.

Wednesday, January 7, 2009

Essential Software Concepts: Liskov Substitution Principle (LSP)

The original definition of the principle is:

"Let q(x) be a property provable about objects x of type T. Then q(y) should be true for objects y of type S where S is a subtype of T."

This is all about substitution. Any place where a base class is used in the code, you should be able to use a subclass as well.


A few implications come from this:
- You shouldn't do runtime checks to find out which subtype a certain object is. Code that does so is not automatically extendable and must be modified each time a new subclass is added.
- You shouldn't do less in a derived class. That means for instance that you shouldn't override a certain method and just leave it empty. That's not always a violation of LSP, but it can be.
- You shouldn't hide the implementation of a base class method (by using new). If you have a method that takes an object of the base class type and calls such a method, only the base class method will be called.
- Even if it is often said that an IS-A relationship means inheritance is the right approach, that's not really true. Substitutability must also be taken into account.
- You can't simply look at the implementation of the model in isolation and decide if it is correct or not. You need to take into account the natural assumptions users will have about it. KISS still applies though.
- Not following this can bring out some hard to find bugs if someone else believes the implementation follows LSP.
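To make the substitution rule concrete, here's the classic rectangle/square illustration, sketched by me in Java (not an example from the original principle):

```java
// A classic LSP violation: a square IS-A rectangle mathematically,
// but Square is not substitutable for Rectangle in code.
class Rectangle {
    protected int width, height;
    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    // Keeping the sides equal breaks assumptions callers make about Rectangle.
    @Override void setWidth(int w)  { width = w; height = w; }
    @Override void setHeight(int h) { width = h; height = h; }
}

class Client {
    // Code written against Rectangle naturally expects area == 5 * 4 here.
    static int resizeAndMeasure(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.area(); // 20 for a Rectangle, 16 for a Square
    }
}
```

Client.resizeAndMeasure() returns 20 for a Rectangle but 16 for a Square, so Square silently breaks the contract, exactly the kind of hard-to-find bug mentioned above.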

Tuesday, January 6, 2009

Essential Software Concepts: The Open/Closed Principle (OCP)

Another essential software concept is what is known as the Open/Closed Principle. It's again a simple concept that is important to follow. The definition is as follows:

"Software entities (classes, modules, functions, etc.) should be open for extension but closed for modification."

The whole point is that in places where you expect to extend the functionality, you should work to make sure that you don't need to change existing code to do so. The code should be designed in a way that enables you to extend the behavior, for instance by adding new classes that automatically work with the current model. This is usually solved through abstractions and polymorphism, and a number of patterns can help here. A common mantra for code that is expected to change follows from this: code to abstractions, not implementations.


What the principle tries to achieve is to make sure that changes have as little effect as possible on the existing implementation. If you need to modify existing code, you have to identify the various parts needing change, and you run the risk of introducing breaking changes. Extending the behavior instead can often be a lot easier and can enable a more flexible solution.
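A minimal sketch of what this looks like in practice (my own invented example, in Java): adding a new kind of shape means adding a class, never editing the calculator.

```java
import java.util.List;

// Open for extension: new shapes plug in by implementing this abstraction.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class AreaCalculator {
    // Closed for modification: works unchanged for any future Shape.
    static double totalArea(List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}
```

Had totalArea() instead switched on concrete types, every new shape would force a change (and a retest) of existing code.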


Does this mean that every piece of code should be written using abstractions to allow extensions more easily? Certainly not! Abstractions can add unnecessary code, can make the parts that are supposed to change less obvious and can add to the complexity of the solution. Unless you know that something is expected to change, an old childhood rule is nice to follow: Fool me once, shame on you; fool me twice, shame on me. Don't do the extra work up front unless you know it is necessary. But once you need it, don't hesitate to put it in. Uncle Bob has a wise quote about this:

"Resisting premature abstraction is as important as abstraction itself".

Monday, January 5, 2009

Essential Software Concepts: The Single Responsibility Principle (SRP)

The Single Responsibility Principle states that a class should have only one responsibility, also phrased as having only one reason to change. (Defined by Uncle Bob in Agile Principles, Patterns, and Practices in C#. A book well worth reading.)


In other words, it should only be responsible for one thing. Do only one thing. Handle only one area. Potato, Potato. 


The principle is really simple in concept, yet it can be difficult to follow at times. It is an important principle to follow because it will make your classes more robust to change. Coupled responsibilities can lead to code that breaks more easily, changes that are more expensive to implement, tests that are harder to make concrete, less readable code and more code to step through when debugging.
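A small hypothetical sketch in Java of pulling two coupled responsibilities apart (all names here are invented for the example):

```java
// A Report that both formats and saves itself has two reasons to change.
// Splitting the responsibilities gives each class exactly one.
class Report {
    private final String title;
    private final String body;
    Report(String title, String body) { this.title = title; this.body = body; }
    String title() { return title; }
    String body() { return body; }
}

// Responsibility 1: presentation. Changes only when the output format changes.
class ReportFormatter {
    String format(Report r) {
        return "# " + r.title() + "\n" + r.body();
    }
}

// Responsibility 2: persistence. Changes only when the storage mechanism changes.
interface ReportStore {
    void save(Report r);
}
```

Now a change to the file format can't accidentally break persistence, and each piece can be tested on its own.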


That doesn't mean you _always_ need to follow it. The cost of separating things can be higher than the cost of holding them together. If you expect no change, then stay with that. Possibly separate the concepts by creating separate interfaces. That way the implementation stays coupled, but at least the rest of the system is not coupled to the mess.

Sunday, January 4, 2009

Essential Management Practices: Don't micro manage

Do you like to micro manage, or are you being micro managed?


That has really got to stop. It is one of the most annoying practices I know about, and one which makes no sense in our industry.


Software development is a form of knowledge work. Developers (obviously) use their heads as their tools. If there is one thing knowledge workers despise, it is being told at a low level exactly what to do.


In essence this just tells someone that you don't trust their work or judgment. Not exactly a good way to get motivated employees who feel ownership of the project.


I'm not advocating that you shouldn't follow up on work. Absolutely not. In fact, code reviews should be a part of any development organization. I think that is quite different though: verifying and sharing a solution is not the same as closely managing how to solve an issue.

Saturday, January 3, 2009

Essential Software Concepts: Side-Effect-Free Functions

I'll start off with a section about the naming and understanding of the concepts involved.


Side-Effect-Free Functions is a name that will mean different things to different people. The explanation in plain English is "a function without unintentional consequences". If you use side effects as defined in computer science (Wikipedia), a side effect means any state change in the system, and the explanation changes to "a function without state changes". Eric Evans, in his Domain-Driven Design (DDD) book, further narrows the side-effect concept to mean "any change in the state of the system that will affect future operations". So we have a common expression which is supposed to be understandable, but you need to be aware that people can mean three different things when they talk about it.


Another concept we need to define is Functions. Function is another term with different meanings in different settings. In some languages, functions and methods mean different things, so you have two separate concepts. In .NET you only have methods, but in my experience functions and methods are used as synonyms in the .NET realm. In DDD, functions are explained as operations that return results with no observable side effects, while methods that modify observable state are called commands. I think that's a distinction that is lost on most developers, so I'll focus on the concept rather than the terminology.


(Just a stupid observation: If Functions as defined in DDD means "operations that return results with no observable side effects", then coining an expression called "Side-Effect-Free Functions" does in fact not make any sense, since you can't have side effects in DDD Functions.)


There is a reason why we would want to separate operations that change state from operations that only query for information. Queries for information are in essence safe to use at any given moment, whereas state-changing operations, even if perceived as safe, can be very hard to be certain about in a complex system. By separating them completely, you'll have an easier job handling that complexity.


I'm not so sure that this should be applied in every situation. But what I believe is an absolute demand is that you should be able to trust that a given method/function/operation does exactly what it says it does. Trust is the big thing here. If you have a method called GetBills() that in fact calls a GenerateBills() method internally, we have a big trust issue! Let's not focus on the fact that these don't belong together; you should at least have followed the concepts behind Intention-Revealing Interfaces and named it GenerateAndGetBills() or some such.
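Sticking with the billing example, a sketch in Java of what the command/query split might look like (the surrounding class is my invention; only the two method names come from the example above):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class BillingService {
    private final List<String> bills = new ArrayList<>();

    // Command: changes state and returns nothing. Callers know it has effects.
    void generateBills(List<String> customers) {
        for (String customer : customers) {
            bills.add("Bill for " + customer);
        }
    }

    // Query: side-effect free. Safe to call any number of times,
    // in any order, without changing future results.
    List<String> getBills() {
        return Collections.unmodifiableList(bills);
    }
}
```

With this split, nobody reading a call to getBills() has to wonder whether it quietly generated anything.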


I believe the biggest issue here is the same one I came to in Intention-Revealing Interfaces: if you can't be sure that the parts of the system you are working with behave the way you think they should, you'll spend all your focus making sure they do, and not on the problem at hand.


DDD also has a conclusion worth reading: "Place as much of the logic of the program as possible into functions, operations that return results with no observable side effect. Strictly segregate commands into very simple operations that do not return domain information."

 

Update

Have a look at the link provided in Morten's comment below for more info on commands and queries. Thanks :)

Friday, January 2, 2009

Essential Software Concepts: Intention-Revealing Interfaces

Donald Knuth famously said "Software is hard". And that is an absolute truth. But that doesn't mean we shouldn't always strive to make things as simple as possible. One easy-to-understand concept that can help with that is Intention-Revealing Interfaces.


The interface comprises all the publicly visible parts of a component. Together they should combine into an Intention-Revealing Interface: an interface where the intent and usage pattern are clear.


One of the largest costs of software development is understanding and changing code. That certainly involves the maintenance phase of the project, but living in the increasingly agile world we do, it most certainly is a continuous factor throughout the initial development of the system. If the code you need to work with is poorly described, you'll have to keep digging into code all the time, trying to get a full understanding of how everything works before you dare to make changes. Unfortunately, since our brain can only hold _so_ much information at a time, this severely limits our ability to work efficiently. Instead of spending our focus on the problem at hand, it is all wasted on the surroundings. Like Eric Evans said, "If a developer must consider the implementation of a component in order to use it, the value of encapsulation is lost".


So work continuously to be as clear as possible in how you name your classes, methods, properties, etc. That means describing what they do, and not how they do it.
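A small invented example in Java of the difference between an implementation-focused name and an intention-revealing one:

```java
import java.util.List;
import java.util.stream.Collectors;

// An implementation-focused signature forces readers into the body:
//   List<Customer> getList(int t);   // what is t? which list?
// An intention-revealing name describes what, not how:
class CustomerRepository {
    private final List<Customer> customers;
    CustomerRepository(List<Customer> customers) { this.customers = customers; }

    // The name states the intent; callers never need to read the body.
    List<Customer> customersInactiveLongerThanDays(int days) {
        return customers.stream()
                .filter(c -> c.daysSinceLastOrder() > days)
                .collect(Collectors.toList());
    }
}

record Customer(String name, int daysSinceLastOrder) {}
```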


And one last thing: don't be afraid of long method names. Use as long a name as you need to describe what a particular part does. If you need many words to describe it, chances are you should start thinking about a refactoring.

Thursday, January 1, 2009

Interesting General Knowledge - Pareto's principle

I don't remember when I first heard of Pareto's principle, but I've always been fond of it. It was originally used to state that 80 percent of Italy's land was owned by only 20 percent of the population, but it has since been used as a rule of thumb for a myriad of other cases.


I think it fits very well in the software domain. The exact numbers are not the important part; it serves rather as a guiding principle for how unevenly time and effort tend to be distributed.

 

It is not hard to think of a few examples that could follow the principle:

  • Creating demo-ware software will take 20% of the time; creating production-grade software will take 80%.
  • Creating 80% of a system will take 20% of the time.
  • 20% of a system's functionality will be used 80% of the time.
  • Resolving 80% of the defects will take 20% of the time.
  • Once you have done the 80% normal cases, fitting in the last 20% exceptional cases will take 80% of the time.
  • 20% of a system's developers create 80% of the working functionality (hopefully not :) )


I know these are crazy numbers. And I know this is certainly not true in a lot of cases. But still, you'll benefit if you keep Pareto's principle in mind.