Hacker News

Thanks. I wouldn't call TDD folks "weirdos." I 100% support the goals of TDD, and I think that it's a great discipline.

But as I explained in [2], I usually (though not always) write unit tests after the fact. This is something that seems to get TDD folks all hot and bothered.

My development testing is usually done with test harnesses. You can call it a "spike," or whatever.

Actually, as I write this, I am taking a break from some fairly significant refactoring of a backend server that I wrote a couple of years ago[0]. It's a layered system, with each layer having a standalone product lifecycle and integrated tests.

I have a plan for the feature that I'm adding, but not a full project timeline. I have already encountered a couple of places where I deviated from my plan, and I'm barely getting started.

At this layer, the tests are more like test harnesses than complete unit tests. By the time I get to BASALT (the top layer), the tests are pretty much complete unit tests, examining and reporting on results. At this level, my tests basically output runtime data below an explanation of what we want to see in that data, which means I need to spend quality time reviewing the output. By the time I get to BASALT, I can just scan the reports, looking for red and green; which is good, because I run thousands of tests by then. At this point, I'm running fewer than a hundred tests.

So I guess all my tests are "spike" tests.

[0] https://riftvalleysoftware.com/work/open-source-projects/#ba...



I did a poor job of explaining myself. By "spike" I mean exploratory coding without test-driving. Sometimes you don't know enough about the problem or solution space to test-drive effectively. That means hacking around for a while to get your bearings.

Test harnesses are really a different thing, and I think they're always a good idea. I like to test "outside-in"; I think it's good at preventing low-level assumptions from upsetting the top level. But again, it depends on the code and context.
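A minimal sketch of what outside-in testing can look like in Python (the names here are hypothetical, not from any project mentioned above): exercise the top-level entry point first, stubbing out the lower layer, so the high-level contract is pinned down before the details exist.

```python
from unittest import mock

# Hypothetical top-level function: formats a greeting using a
# lower-level fetcher that may not be implemented yet.
def greet(user_id, fetch_name):
    return f"Hello, {fetch_name(user_id)}!"

# Outside-in test: substitute a stub for the lower layer, so a bad
# low-level assumption can't leak into the top-level design.
fake_fetch = mock.Mock(return_value="Ada")
assert greet(42, fake_fetch) == "Hello, Ada!"
fake_fetch.assert_called_once_with(42)
```

Once the top-level test passes against the stub, the real lower layer can be written to the same interface the stub established.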

As an aside, small studies of test-before vs test-after show that in terms of bug yield, there's no major difference. Over the long term I think they diverge. That means that the magic of TDD isn't that it causes you to write better code than writing tests afterwards. It's that it forces you to write tests at all. TDD is eating your vegetables first.
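The "vegetables first" cycle in miniature (a generic sketch, not anyone's actual code from this thread): write a failing test, then just enough implementation to make it pass.

```python
# Step 1 (red): the test is written first, against a function that
# does not exist yet; running it at this point would raise NameError.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the minimal implementation that makes the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3: run the test, then refactor with the test as a safety net.
test_slugify()
```

Whether the test comes before or after the code, the point above stands: the discipline guarantees the test gets written at all.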


> TDD is eating your vegetables first.

I like that philosophy.

I spent many years at an insanely quality-driven company. I wanted to strangle the QA folks on many occasions, but they trained me to not accept crap.

According to a lot of people, that disqualifies me from working at startups.

I'll have to let the folks at the startup I'm working with know that. They'll need to shop around for a slob that will work for free.



