Phil Haack described some of his unit testing pains in the post Tell Me Your Unit Testing Pains. Here I would like to share some experience of my own.
I was doing desktop applications, CAD software like AutoCAD, so some of my experience may differ from that of people who do web applications. Unlike Phil Haack, I didn't mess around with ASP.NET, so I had no issues mocking that framework. Instead, we had to develop our own framework on top of the .NET Framework. In retrospect I wish there had been an existing one to build on, because at least an existing framework would have been designed by many people who knew software development issues well. Some of my unit testing pains, or should I say, a majority of them, had to do with the improper design of our own framework.
I wasn't even thinking of unit testing when I first started my programming career. No one in my company practiced it, and even the most enlightened among us had only a vague idea about it. We thought that unit testing was limited to methods or classes that were easy to test or had well-defined input and output. Static methods were obviously good candidates. But object-oriented classes whose instances all interact with one another? No, those were not for unit testing.
And so I blithely developed my code, pages and pages of it. In the course of development I cut corners: copy-pasting similar logic, modifying method parameters without much thought, lumping different sets of data into a single class for convenience's sake, optimizing prematurely, and practicing (or committing?) numerous other software anti-patterns.
So in the end, at the maintenance stage, I looked at my code in awe. There was simply no way I could modify it anymore without making subtle breaking changes. The code was highly coupled and barely cohesive: Exhibit A of how not to develop software.
And I started to think about unit testing. But how do you test spaghetti code that wasn't written with proper object-oriented techniques and testability in mind?
My old code exhibited the following characteristics:
- Free use of Singletons. In essence, you couldn't know what a class depended on unless you took a magnifying glass and looked closely at what was inside it.
- No separation of concerns. Since we were not doing database work, we didn't use a relational database such as SQL Server; we stored the application data in binary files. Guess what: the data access logic was tightly intertwined with the application logic, and the application logic was sprinkled throughout the GUI code, right in the Form classes and the event handler methods. Oh dear, try testing code that pops up a dialog box out of nowhere and asks you to click OK. There was simply no way to get an automated unit test.
- Complicated dependencies between methods and classes. To test a method properly I needed to set up a lot of input objects that were difficult to construct. To make things worse, sometimes the whole purpose of an input object was to supply a single value or property. When you need to write thousands of lines of setup code just to test one short method, you lose all interest in unit testing. There was simply no hope of testing each method in isolation.
- Not knowing what to test! Yes, this was one of the hurdles I faced. As mentioned above, I was doing CAD software, and the output could only be verified by looking at the screen, counting the objects, checking the colors, and so on. All the screen output was fuzzy: if it looked good, it was correct; if not, wrong. How do you translate such requirements into precise logic and conditions?
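The singleton and coupling problems above have a common cure: declare dependencies explicitly so a test can substitute a stub. The following is a minimal sketch in Java (the post's code was C#/.NET, which this approximates); the names `DrawingStore`, `ShapeSource`, and `LayerService` are hypothetical illustrations, not from the original application.

```java
import java.util.ArrayList;
import java.util.List;

// Before: the dependency on DrawingStore is invisible from outside the class.
class DrawingStore {
    static final DrawingStore INSTANCE = new DrawingStore();
    private final List<String> shapes = new ArrayList<>();
    List<String> loadShapes() { return shapes; } // would read a binary file in the real app
}

class LayerServiceBefore {
    int countShapes() {
        // Hidden singleton dependency: a test cannot replace the store.
        return DrawingStore.INSTANCE.loadShapes().size();
    }
}

// After: the dependency is declared in the constructor, so a test can pass a stub.
interface ShapeSource {
    List<String> loadShapes();
}

class LayerService {
    private final ShapeSource source;
    LayerService(ShapeSource source) { this.source = source; }
    int countShapes() { return source.loadShapes().size(); }
}

public class SingletonRefactorDemo {
    public static void main(String[] args) {
        // In a test, substitute an in-memory stub for the binary-file store.
        ShapeSource stub = () -> List.of("line", "circle", "arc");
        LayerService service = new LayerService(stub);
        System.out.println(service.countShapes()); // prints 3
    }
}
```

With the interface in place, the file-reading code lives behind `ShapeSource` and the GUI layer no longer needs to be involved at all, which also addresses the separation-of-concerns point.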
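As for the "what to test" problem, one way out is to stop asserting on pixels and instead expose the scene as plain data, turning "it looks good" into precise, checkable questions: how many lines, how many red objects. This sketch assumes a hypothetical `Scene`/`Shape` model; nothing here is from the original codebase.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A shape reduced to the attributes the eyeball test actually checked.
record Shape(String kind, String color) {}

class Scene {
    private final List<Shape> shapes;
    Scene(List<Shape> shapes) { this.shapes = shapes; }

    // "Count the objects on screen" becomes an exact query on the model.
    long count(String kind) {
        return shapes.stream().filter(s -> s.kind().equals(kind)).count();
    }

    // "Check the colors" becomes a histogram a test can compare against.
    Map<String, Long> colorHistogram() {
        return shapes.stream().collect(
            Collectors.groupingBy(Shape::color, Collectors.counting()));
    }
}

public class SceneAssertionDemo {
    public static void main(String[] args) {
        Scene scene = new Scene(List.of(
            new Shape("line", "red"),
            new Shape("line", "blue"),
            new Shape("circle", "red")));
        System.out.println(scene.count("line"));               // prints 2
        System.out.println(scene.colorHistogram().get("red")); // prints 2
    }
}
```

The rendering code then becomes a thin layer over `Scene`, and the fuzzy visual check shrinks to a small, rarely-run manual test.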
Life has been better for me since then.
Follow-up post: