Thursday, April 10, 2008

More Unit Testing Woes

Last night I talked to a former colleague of mine from the company that makes a really big enterprise Java application! Huge application! Ginormongulously huge! So big that it makes most IDEs, even our beloved IDEA, crawl if you're not careful. So big that they have a specialized team to compile the application each week and provide it as a weekly build to the people who actually write the code.
Developers pick up that build and work off of it as a base instead of compiling all 75,000+ classes--that's how big it was when I left. It's probably much bigger.
My friend was griping about an issue that he and his fellow developers are having with the people who build their HUGE Java application. The issue revolves around the unit tests.
This company is pretty forward thinking: they have a lot of JUnit tests, which they call 'Unit Tests'.
The developers run the Unit Tests locally before they commit changes to the source repository. When the integration engineers pick up the changes for a weekly build they compile them and then run the same Unit Tests.
Sometimes those Unit Tests fail in the integration environment because of differences between the integration and development environments.
The integration environment compiles everything in a vacuum, while the developers build on top of the prepackaged weekly build. Each group runs the tests within its own environment.
Having an integration validation test fail is a big deal. There are hundreds of developers who depend on the weekly builds, and breakage issues have historically snowballed into person-years of wasted time--that can happen quickly when you have over 365 developers.
In the developers' environments they have a set of properties and classes that they can safely assume will always be present.
My understanding of a Unit Test is that it should be completely independent of environmental issues. You're testing the unit in isolation to make sure that you are not violating its intended functionality.
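As a minimal sketch of what I mean, here is a test that exercises a unit with no environmental dependencies at all. The class and method names are hypothetical, not from my friend's application; the point is just that nothing here reads a property file, hits a server, or assumes anything about the machine it runs on:

```java
// Hypothetical example of a test that runs identically everywhere,
// because the unit under test is pure logic with no environment access.
public class TaxCalculatorTest {

    // The "unit" under test: no files, no properties, no network.
    static class TaxCalculator {
        double applyTax(double amount, double rate) {
            return amount + amount * rate;
        }
    }

    public static void main(String[] args) {
        TaxCalculator calc = new TaxCalculator();
        double result = calc.applyTax(100.0, 0.05);

        // Compare with a tolerance rather than exact equality,
        // since we're working with doubles.
        if (Math.abs(result - 105.0) > 1e-9) {
            throw new AssertionError("expected 105.0, got " + result);
        }
        System.out.println("ok");
    }
}
```

A test like this passes the same way on a developer's machine as it does inside the integration team's vacuum, because there is nothing environmental for it to disagree about.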
If you want to test how units work together, and you should, create other tests for that purpose and call them something different, like 'Integration Tests'.
Back to the issue: neither side is willing to budge. The developers believe that they cannot write high-quality tests that run only inside a vacuum, and the integration people believe that unit tests should run in a vacuum.
I agree with the integration people on this issue to an extent. Unit tests should run regardless of environmental issues. They're meant to test each unit.
I also agree with the developers that testing just the units of their code is not responsible.
In this case I would recommend refactoring their process to accommodate a tiered testing strategy. Keep the Unit Tests that truly are Unit Tests separate, so that only those are run by the integration team inside the vacuum. Also have deeper integration and use-case tests that can make use of environmental resources and are more fragile. Keep those tests separate, run them in a suitable environment, and run them as often as resources permit.
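One way to wire up that separation in a build like theirs would be two Ant targets driving JUnit with different filesets, so the integration team's vacuum only ever invokes the first one. This is just a sketch of the idea; the target names, paths, and the `*IntegrationTest` naming convention are my own assumptions, not anything this team actually uses:

```xml
<!-- Hypothetical Ant targets splitting the two tiers of tests. -->

<!-- Tier 1: isolated Unit Tests. Safe to run anywhere, including
     the integration team's vacuum. Fails the build on any breakage. -->
<target name="unit-test" depends="compile-tests">
  <junit haltonfailure="true" fork="true">
    <classpath refid="test.classpath"/>
    <batchtest todir="reports">
      <fileset dir="test"
               includes="**/*Test.java"
               excludes="**/*IntegrationTest.java"/>
    </batchtest>
  </junit>
</target>

<!-- Tier 2: environment-dependent tests. Run only where the
     required properties and services exist, as often as resources permit. -->
<target name="integration-test" depends="compile-tests">
  <junit haltonfailure="true" fork="true">
    <classpath refid="test.classpath"/>
    <batchtest todir="reports">
      <fileset dir="test" includes="**/*IntegrationTest.java"/>
    </batchtest>
  </junit>
</target>
```

With a split like this, a red integration-test run points at an environmental difference, and a red unit-test run points at the code, which is exactly the distinction the two groups are arguing about.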
This is the reason that I don't care for the term 'Unit Test'. It carries different meanings for different people. Some use 'Unit Testing' interchangeably with testing in general, manual testing, or automated testing. The term's meaning has been soiled.
I think it would be easiest to abandon the term 'Unit Test' and to call this specific type of test an Isolation Test. I think that the word isolation forms a clearer picture in people's minds that the purpose of that test is to test something in isolation.
