tom at eborcom.com
Mon May 14 14:58:35 BST 2007
On Mon, May 14, 2007 at 06:03:03AM -0700, Ovid wrote:
> --- Pete Sergeant <pete at clueball.com> wrote:
> > * Do you aim for 100% code coverage?
> No. That's often a waste of money.
Even if your tests cover all your code, they don't necessarily cover
all your inputs or the behaviour expected for each input, assuming
those expectations were ever defined in the first place.
So even if I spent the effort to cover all my code, I still wouldn't
want to spend the effort covering every conceivable input.
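To make the distinction concrete, here is a minimal sketch (the
`absolute` function is hypothetical, not from the thread): a single test
executes every line, yet the function is wrong for an input that test
never exercises.

```python
# Hypothetical function with a bug: it never negates negative input.
def absolute(n):
    return n

# This one assertion covers 100% of the lines of absolute()...
assert absolute(3) == 3

# ...but the covered code still mishandles an uncovered input:
# absolute(-3) returns -3, not 3. Line coverage is complete;
# input coverage is not.
```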
> If you're testing a Web site, for example, grep your access logs and
> it becomes very obvious where placing your testing time should go
Also consider logging the data passed to your code. For Web
applications, you can get query string parameters from the access log;
you can configure your application to log POST data separately.
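A minimal sketch of the access-log idea, assuming Common Log Format
lines (the sample lines and `request_params` helper here are
hypothetical): tally which paths and parameter combinations actually
occur, so testing effort follows real traffic.

```python
import re
from collections import Counter
from urllib.parse import urlsplit, parse_qsl

# Hypothetical access-log lines in Common Log Format; the path and
# query string sit inside the quoted "GET /path?query HTTP/1.1" field.
LOG_LINES = [
    '1.2.3.4 - - [14/May/2007:14:58:35 +0100] "GET /search?q=perl&page=2 HTTP/1.1" 200 512',
    '1.2.3.4 - - [14/May/2007:14:59:01 +0100] "GET /search?q=testing HTTP/1.1" 200 128',
]

def request_params(line):
    """Extract (path, list of parameter names) from one log line."""
    m = re.search(r'"(?:GET|POST) (\S+) HTTP/', line)
    if not m:
        return None
    url = urlsplit(m.group(1))
    return url.path, [k for k, _ in parse_qsl(url.query)]

# Count each (path, parameter-set) combination seen in the log.
tally = Counter()
for line in LOG_LINES:
    parsed = request_params(line)
    if parsed:
        path, params = parsed
        tally[(path, tuple(sorted(params)))] += 1

# The most common combinations are the ones worth testing first.
for (path, params), count in tally.most_common():
    print(count, path, params)
```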
You can then test common success modes, common failure modes and
interesting boundary conditions.
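Those three classes of input lend themselves to a table-driven test.
A minimal sketch, where `parse_page` is a hypothetical parameter
validator (not anything from the thread) and the cases stand in for
values harvested from the logs:

```python
def parse_page(raw):
    """Hypothetical validator: return a page number >= 1,
    falling back to 1 on missing or malformed input."""
    try:
        page = int(raw)
    except (TypeError, ValueError):
        return 1
    return page if page >= 1 else 1

# Common success modes, common failure modes, and boundary
# conditions, as the posting suggests.
CASES = [
    ("2", 2),        # common success
    ("banana", 1),   # common failure: non-numeric input
    (None, 1),       # common failure: parameter absent
    ("0", 1),        # boundary: just below the minimum
    ("1", 1),        # boundary: the minimum itself
]

for raw, expected in CASES:
    assert parse_page(raw) == expected, (raw, expected)
print("all cases pass")
```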
I value code coverage, but I tend to treat it as a tool for finding
code I don't currently test but could test easily. If Module A has 90%
coverage and Module B has 50%, that neither means you should focus
your testing effort on Module B, nor that Module A is more rigorously
tested.
More information about the london.pm mailing list