Testing Questions

Adrian Howard adrianh at quietstars.com
Mon May 14 15:02:10 BST 2007

On 14 May 2007, at 13:02, Pete Sergeant wrote:

> Some assorted questions about testing:
> * Do you aim for 100% code coverage? Pro: every time I manage it, I
> tend to zap a bunch of edge-case bugs; Con: tends to lead to fragile
> tests that know far too much about how the thing you're testing is
> implemented

I'm usually fairly happy with anything above 90% (statement, branch &
condition coverage).

Test coverage is good - and 100% code coverage is nice to have.  
However, I often find I get more bang for my buck by spending testing  
time elsewhere rather than forcing everything to 100% all of the time.

I'm pretty much in agreement with <http://www.testing.com/writings/ 

If I have nothing better to do I'll spend time upping the percentage.  
I usually have something better to do.
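For reference, a common command-line workflow for Devel::Cover (a sketch, assuming Devel::Cover is installed; the `cover` script ships with it) looks like:

```shell
# Wipe old coverage data, run the suite with Devel::Cover loaded,
# then print the statement/branch/condition summary.
cover -delete
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t/
cover
```

`cover -report html` then writes a browsable report under cover_db/.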

> * To what degree do you try and separate programmer-generated unit
> tests, and tests generated by a test engineer?

I don't.

> Do you make any
> distinction between regression tests and acceptance tests?

Nope :-)

The distinctions I tend to make are between
* Fast and Slow tests
* Technology-Facing vs Business-Facing tests (this classification  
comes from Brian Marick - see <http://www.testing.com/cgi-bin/blog/ 

The former so we can make useful test suites that run in under ten
minutes.

The latter so we can track test changes related to spec changes (the  
customer facing tests).
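One low-tech way to get the fast/slow split is to partition the suite by directory and pick at run time. A toy sketch (the t/fast and t/slow layout and the file names are invented for illustration):

```shell
# Hypothetical layout: quick unit tests under t/fast/,
# slow integration tests under t/slow/
mkdir -p t/fast t/slow
touch t/fast/parser.t t/slow/database.t

# Day-to-day: only the fast directory
find t/fast -name '*.t'

# Before committing: everything
find t -name '*.t'
```

With prove, the quick loop is just `prove -lr t/fast` and the full run `prove -lr t`.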

> * At what point are you actually writing your tests? Before you start
> writing code? During the time you write code? Once the code is  
> written?
> Something else?

Mostly before. After if I'm highlighting a bug, or feel the need to  
touch a corner case explicitly.

> * Does anyone else add extra targets to their make files? I tend to  
> have
> a 'test_coverage' target that runs the tests using Devel::Cover.  
> Anyone
> do anything else that's interesting here?

I use Module::Build, so I get ./Build testcover by default. However I  
also have some custom targets for building and testing to different  
platforms (staging, live, etc.), producing pretty HTML output, etc.
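For the curious, custom targets in Module::Build are just methods named ACTION_* on a subclass. A minimal Build.PL sketch (the `teststaging` action name and the APP_ENV variable are made up for illustration):

```perl
# Build.PL
use strict;
use warnings;
use Module::Build;

my $class = Module::Build->subclass(
    class => 'My::Builder',
    code  => <<'EOC',
        # Hypothetical action: run the suite against staging config
        sub ACTION_teststaging {
            my $self = shift;
            local $ENV{APP_ENV} = 'staging';    # made-up env var
            $self->ACTION_test;
        }
EOC
);

$class->new(
    module_name => 'My::App',
    license     => 'perl',
)->create_build_script;
```

After that, `./Build teststaging` works alongside the built-in `./Build test` and `./Build testcover`.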

> * I tend to use t::lib::FooBar 't/lib/FooBar.pm' to put testing
> libraries in. Anyone have a better suggestion? How do you share these
> between codebases?

That's what I do. If I feel the need to share between code bases I'll  
wrap it up into a local Test::* module.
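As a sketch of that layout (the module name and helper function here are hypothetical), a shared helper lives under t/lib/ and is named for its path:

```perl
# t/lib/FooBar.pm -- shared test helper
package t::lib::FooBar;
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = qw(make_fixture);

# Build some canned test data for the scripts that need it
sub make_fixture { return { name => 'test', id => 42 } }

1;
```

Run from the distribution root (with `.` on @INC, e.g. via `prove -l .` or `-I.`), a test script then just says `use t::lib::FooBar qw(make_fixture);`.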

> * Do you use subdirectories in t/ for different classes of test? We  
> have
> a project with 60 test files with 1400 tests in it, but getting  
> this to
> work involved some Makefile.PL trickery. Any thoughts?

Yes. Since I use Module::Build and prove/runtests I don't have to go  
to too much effort to make this work.

> * I tend to name tests with preceding numbers to make them easy to
> tab-complete to: t/foobar/030_logging_020_file_based.t - anyone  
> have an
> improvement on this?

For organisation I just use subdirectories.

I don't care about test ordering. I like my tests to be able to run  
in any order.

I have some hacks that mean during development the running order of  
tests is
	1) test scripts that failed in the last run
	2) everything else in most-recently-changed order
which I find is pretty much ideal 90% of the time.
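The "most-recently-changed first" half of that ordering falls straight out of `ls -t`. A toy demonstration (the demo_t/ directory and file names are invented):

```shell
# Two test scripts with different modification times
mkdir -p demo_t
touch demo_t/older.t
sleep 1
touch demo_t/newer.t

# ls -t sorts newest-first, which is the order to feed the harness
ls -t demo_t/*.t
```

The failed-first half needs a record of the previous run, e.g. a file of failing script names consulted before the mtime sort.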

I really need to tidy those hacks into something for CPAN :-)

> * Is anyone auto-generating tests for their APIs? Documentation? If  
> so,
> how?

Depends what you mean by auto-generate.

I don't generate *.t files. I do use things like Test::Pod::Coverage,  
Test::Perl::Critic, Test::Spelling, etc. that run a bunch of tests on  
all code without me having to think about it.
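The usual pattern for those author-test modules is a tiny .t file that skips cleanly when the module isn't installed; e.g. the conventional pod-coverage.t boilerplate:

```perl
# t/pod-coverage.t
use strict;
use warnings;
use Test::More;

# Skip rather than fail on machines without the author-test module
eval "use Test::Pod::Coverage 1.04";
plan skip_all => "Test::Pod::Coverage 1.04 required" if $@;

# One pod-coverage test per module in the distribution
all_pod_coverage_ok();
```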

I also have a bunch of application-specific versions of this, and am  
quite happy to add code to the application to allow enough  
introspection to make test writing easy.

For example the web app framework I've hacked together knows about  
all the possible pages that it can generate - so one of the test  
scripts I have gets this list and throws them all through  
Test::HTML::W3C. Très handy.


