Feb 06, 2016

"Do you think it is a good idea?"

Well, I've tried TDD several times, both on my own and as part of a kata at our local Python Users' Group. I find it a frustrating technique, but I can't deny that many people find it useful.

I've watched a handful of video walkthroughs of TDD, and gotten frustrated at the lack of good testing or development in them. One example is Robert Martin's primes kata. His solution works, but realistically only for small numbers. He never specified an input domain, and during development he never tried to identify the ambiguities between the API 'promise' that the function takes an int32, the expectation most people would have that factoring 2,147,483,647 (2^31-1) is fast, and the reality that his code would take a very long time on that input.
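For concreteness, the solution that kata tends to converge on is straight trial division, something like the sketch below (my rough reconstruction, not Martin's exact code):

    def prime_factors(n):
        """Return the prime factors of n by trial division."""
        factors = []
        candidate = 2
        while n > 1:
            while n % candidate == 0:
                factors.append(candidate)
                n //= candidate
            candidate += 1
        return factors

    print(prime_factors(12))    # [2, 2, 3] -- fine for the kata's tiny test cases

    # But 2**31 - 1 is itself prime, so the outer loop has to count all the
    # way up to 2,147,483,647 before it can return:
    # prime_factors(2**31 - 1)  # [2147483647], after a very long wait

All the tests pass, and nothing in the process ever forces the performance question to be asked.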

You write "I think after-the-fact tests added after you already have decent coverage".

I love code coverage as a way to identify missing tests. For me that's usually in error handling.
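As a made-up example of the kind of gap coverage turns up for me (hypothetical names):

    def load_settings(path):
        try:
            with open(path) as f:
                return f.read()
        except OSError:
            return ""    # a coverage report flags this line as never executed
                         # until some test passes in an unreadable path

    # Something like
    #   coverage run -m pytest
    #   coverage report -m
    # lists the missed lines, which for me usually land in branches like that one.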

Many of the TDD writings, going back to Kent Beck, say that TDD will naturally lead to high code coverage. I think that's wrong. I also think many of the TDD writings ignore the problems that can come up during refactoring.

I think Martin's TDD style works because the problems he uses are "simple" in an algorithmic sense, which is why he can talk about a 'Transformation Priority Premise'. Martin says "Refactorings are simple operations that change the structure of code without changing its behavior", but as I wrote before, 'substitute algorithm' is not a simple operation.
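To make the 'substitute algorithm' point concrete: bounding the trial division above at sqrt(n) keeps every black-box test green, but it isn't a chain of small mechanical transformations; you discard the old loop and reason about the new algorithm as a whole (same hypothetical sketch as before):

    def prime_factors(n):
        """Same contract as before, but trial division stops at sqrt(n)."""
        factors = []
        candidate = 2
        while candidate * candidate <= n:
            while n % candidate == 0:
                factors.append(candidate)
                n //= candidate
            candidate += 1
        if n > 1:                # whatever remains above sqrt(n) is prime
            factors.append(n)
        return factors

    # Same answers on the kata's test cases, but factoring 2**31 - 1 now takes
    # roughly 46,000 iterations instead of about 2 billion. The tests can't tell
    # the two versions apart; the real work happened outside them.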

Then again, I think most businesses only need simple algorithms, even if the overall goal is quite complex. My complaint is about where TDD isn't enough.

So if you also include coverage and after-the-fact tests, that would be far beyond what I've seen so far in TDD walkthroughs.

That said, I don't think many people will watch a code walkthrough, so don't expect much fame or uptake from your series.

That said, yes, I think it's a good idea.

"but it does force you to design it in a unit-testable fashion"

You might want to read http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Wa... . There's a follow-up as well. Coplien argues that most testing time should be spent on integration testing, and I agree.

I think unit tests implicitly enforce an API on the code, even when there's no need for an API. It's a special-purpose internal API designed for testing. Because these are internal functions, you should have the freedom to refactor them, but the unit-test burden keeps you from doing so. (In one TDD kata we did, we got working code, then compared it to the other attendees'. Our code was several times larger because we had tests for multiple code layers, while they only had one layer. We could have removed the layers, but that would have meant rewriting most of the unit tests, and I think that inhibited our design sense.)
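A made-up illustration of what I mean by an internal, test-only API (hypothetical names, shrunk way down from the kata):

    import unittest

    def _parse_frames(text):                  # internal layer 1
        return [int(c) for c in text.split()]

    def _score_frames(frames):                # internal layer 2
        return sum(frames)

    def score_game(text):                     # the only thing callers need
        return _score_frames(_parse_frames(text))

    class TestInternalLayers(unittest.TestCase):
        # These tests treat _parse_frames and _score_frames as if they were
        # public. Collapsing the layers into score_game() would mean
        # rewriting them, even though no caller could tell the difference.
        def test_parse(self):
            self.assertEqual(_parse_frames("1 2 3"), [1, 2, 3])

        def test_score(self):
            self.assertEqual(_score_frames([1, 2, 3]), 6)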

In my own code, most of my tests are based on a unit-test framework, but they almost always go through the public API (for the library code) or a call to main() (for the command-line code). I feel this gives me more freedom to refactor without worrying about the extra work of changing the unit tests.
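Roughly what that looks like, again with hypothetical names: the tests only know about the public entry points, so everything behind them stays fair game for refactoring.

    import sys
    import unittest

    def score_game(text):                     # public library API
        return sum(int(c) for c in text.split())

    def main(argv=None):                      # command-line entry point
        args = sys.argv[1:] if argv is None else argv
        print(score_game(args[0]))
        return 0

    class TestViaPublicSurface(unittest.TestCase):
        def test_library_api(self):
            self.assertEqual(score_game("1 2 3"), 6)

        def test_command_line(self):
            # Drive the command-line code through main() with an argv list
            # rather than reaching into whatever helpers main() happens to use.
            self.assertEqual(main(["1 2 3"]), 0)

    if __name__ == "__main__":
        unittest.main()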

Anyway, Coplien describes it much better than I do, and my comments just now don't even apply to a "sort()" function, since that would be a public API.