Testing Questions

Ovid publiustemp-londonpm at yahoo.com
Mon May 14 14:03:03 BST 2007


--- Pete Sergeant <pete at clueball.com> wrote:

> Some assorted questions about testing:
> 
> * Do you aim for 100% code coverage?

No.  That's often a waste of money.  If you're testing a Web site,
for example, grep your access logs and it becomes very obvious where
your testing time should go (note that I'm not only advocating
skipping the really hard-to-test stuff; I'm also advocating skipping
the easy-to-test stuff if you can tell that your time/money is better
spent elsewhere).
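
For instance, here's a throwaway sketch of the sort of log grepping I
mean (a common-log-format access log is assumed, and the script name
is hypothetical):

  #!/usr/bin/perl
  # hit_count.pl -- rank request paths by hit count so it's obvious
  # where the testing effort should go.  Assumes common log format.
  use strict;
  use warnings;

  my %hits;
  while (<>) {
      my $path = (split ' ')[6];    # the request path is field 7
      $hits{$path}++ if defined $path;
  }
  my @top = sort { $hits{$b} <=> $hits{$a} } keys %hits;
  splice @top, 20 if @top > 20;     # keep the twenty busiest paths
  printf "%6d  %s\n", $hits{$_}, $_ for @top;

Run it as "perl hit_count.pl access.log" and spend your testing time
on the paths at the top of the list first.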

> * To what degree do you try and separate programmer-generated unit
> tests, and tests generated by a test engineer?

Define "test engineer".  You might find 13 people giving 17
definitions.  Can't answer it until then.

> Do you make any
> distinction between regression tests and acceptance tests?

Personally, no, but your mileage may vary.  I often make a distinction
between white and black box testing.  I usually drop the white box
testing after I am very comfortable with the level of black box
testing.
 
> * At what point are you actually writing your tests? Before you start
> writing code? During the time you write code? Once the code is
> written?
> Something else?

I write tests while I write code.  Sometimes it's before a feature,
sometimes it's after.  A lot depends upon how well you understand what
you're building.  If you're doing a quick exploratory spike, spending a
lot of time writing tests only to find the spike unworkable can be a
waste of time.  Don't get bound by dogma.

> * Does anyone else add extra targets to their make files? I tend to
> have
> a 'test_coverage' target that runs the tests using Devel::Cover.
> Anyone
> do anything else that's interesting here?

I don't.  I have a cron job that automatically rebuilds a
code-coverage web site, but for Kineticode we often had different
targets to specify whether or not we'd build a database for the
tests.
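
For what it's worth, if you use Module::Build rather than make, you
get a "testcover" action for free, which runs the whole suite under
Devel::Cover.  A minimal Build.PL sketch (the FooBar distribution
name is hypothetical):

  # Build.PL -- "./Build testcover" runs the tests under Devel::Cover
  use strict;
  use warnings;
  use Module::Build;

  Module::Build->new(
      module_name => 'FooBar',
      license     => 'perl',
  )->create_build_script;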

> * I tend to use t::lib::FooBar (t/lib/FooBar.pm) to put testing
> libraries in. Anyone have a better suggestion? How do you share these
> between codebases?

Use a different top-level namespace.  If I have a lib/Customer.pm
file, the resulting test class goes in something like
t/tests/Tests/Customer.pm (admittedly an awkward path).  This allows
the directory structure under t/tests/Tests/ to be similar, if not
identical, to lib/.  That makes certain tools easy to create (such as
automatically toggling back and forth between the code and the tests
while developing), and it also makes it very easy for new developers
to find the correct tests.  All other testing support modules go in
t/lib/.
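
Here's a minimal sketch of what one of those test classes might look
like.  It assumes Test::Class, and the Customer methods are
hypothetical:

  # t/tests/Tests/Customer.pm -- mirrors lib/Customer.pm
  package Tests::Customer;

  use strict;
  use warnings;
  use base 'Test::Class';
  use Test::More;
  use Customer;

  sub constructor : Test(2) {
      my $customer = Customer->new;
      isa_ok $customer, 'Customer';
      can_ok $customer, qw(name email);
  }

  1;

A small driver test (t/run.t, say) can then add t/tests/ to @INC and
call Test::Class->runtests to run every loaded test class.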

> * I tend to name tests with preceding numbers to make them easy to
> tab-complete to: t/foobar/030_logging_020_file_based.t - anyone have
> an improvement on this?

I do that all the time.  I use a variation of a test renumbering
program I wrote to manage them:

  http://use.perl.org/~Ovid/journal/27667

> * Is anyone auto-generating tests for their APIs?

Nope.  Perl's introspective capabilities suck :(

> Documentation?

I highly recommend Test::Pod and Test::Pod::Coverage.  The former
ensures that your POD parses correctly and the latter (tries to)
ensure that your POD is complete.  Some object to the latter because
of the assumptions it makes about documentation, but it's really nice
to have the documentation format standardized and enforced.
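
The boilerplate for both is tiny; this is essentially the synopsis
from each module's documentation:

  # t/pod.t -- checks that the POD parses
  use strict;
  use warnings;
  use Test::More;
  eval "use Test::Pod 1.14";
  plan skip_all => "Test::Pod 1.14 required for testing POD" if $@;
  all_pod_files_ok();

  # t/pod-coverage.t -- checks that every public sub is documented
  use strict;
  use warnings;
  use Test::More;
  eval "use Test::Pod::Coverage 1.04";
  plan skip_all => "Test::Pod::Coverage 1.04 required" if $@;
  all_pod_coverage_ok();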

Cheers,
Ovid

--

Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/

