Perl Is Not Java: Quick and Dirty Unit Testing in Perl

10 November, 2013


At work I recently needed to edit a file-manipulation script written in Perl.  The script had no unit testing.  It had no documentation.  It had all of one subroutine; the rest was just straight script in loops and if blocks.  It worked perfectly.  However, it was time to implement additional functionality.  Left with a raw script and only a basic working knowledge of Perl, I embarked on a quest to make the script a little more robust through unit testing.

Most of my coding experience is in Java, SQL, and proprietary IBM™ tools.  I also dabble in a fair amount of PHP outside of work.  Not to say I've never used anything else--just that I haven't done a lot of it.  Since the Perl script in question is actually a fairly important component in our software, I really didn't want to screw it up.  Of course, that means test-driven development, right?  I didn't have a lot of time to spend, so I wanted to use tools that were fairly familiar.  I also wanted anyone on the project to be able to quickly and easily pick up the script and not feel like they need to learn a whole new language to muddle through.  So reuse of existing tools and only adding very basic components was important.

Also, since this was a work project, the code is confidential.  If I'm being vague, it's because I can't tell you the secrets!  Otherwise I'd probably give better examples.


Running Perl in Windows

The first thing I needed was a way to actually run a Perl script in Windows.  What I found was a distribution called Strawberry Perl.  It worked with no fuss.  I have no comment on whether it is the best or whatever.



The best IDE I found was the EPIC plugin for Eclipse.  This is with heavy bias because we use Eclipse for Java development, and remember: familiarity and not expanding the tool set is important in this case.  Here is a link on how to install it:

Side-note:  Padre looks kinda cool.


Unit Test Package

There are quite a number of unit testing tools for Perl.  Actually, they are quite good.  They output in TAP (Test Anything Protocol).  The most like JUnit I could find was Test::Unit.  Here's a fairly decent example of how to use it:

Unfortunately, Test::Unit was not suitable for my purposes because it kind of assumed that my Perl code was object-oriented and neatly organized into modules.  Also, it didn't seem to be part of the basic install of Strawberry.  Keeping to the simple principle, Test::Unit was out...for now.

I finally landed on Test::More.  It was fairly simple to use and part of the packages installed with Strawberry.
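For anyone coming from JUnit, a minimal Test::More script looks something like this (a generic sketch, not code from the actual project):

```perl
use strict;
use warnings;
use Test::More tests => 3;   # declare the planned number of tests up front

# ok() is the most basic assertion: pass if the expression is true
ok(1 + 1 == 2, "addition works");

# is() compares got vs. expected and prints both values on failure
is(lc("PERL"), "perl", "lc() lowercases");

# like() matches against a regex
like("Strawberry Perl", qr/Perl/, "looks like Perl");
```

The output is plain TAP: a "1..3" plan line followed by one "ok N - description" line per assertion, which is what the harness tools consume.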


Organizing the Project

I spent way too much time trying to figure out how a project should be organized.  In the end I completely ignored what others were explaining because it was over-complicated for how simple this project was.  This is what I ended up with:

Project Root
|
|-scripts
|   |-(the script under test)
|
|-testData
|   |-testFile1.txt
|   |-expectedOutput1.txt
|
|-tests
|   |-(the test script)
|   |-testResults.txt

One thing to note here is that my test is a simple script.  I didn't need any special .t file to make it a test case.  To run the test script, I just right-clicked on the file and went to Run As>Perl Local.  I didn't need any configuration settings, and it was nearly as easy to run as a JUnit test.


The Test Cases

Admittedly, I ended up kind of running some scenarios rather than truly writing unit tests.  But it worked fine if you think of the script as a whole as the unit....  The approach was to run the script on various files and compare the output against expected results.  The test files then become the test cases.  It's maybe more like automated quality assurance, but since the script really only did some basic file reading and writing, writing proper unit tests for it would have been ugly.

To run the test case, I needed to run the script from my test, which required input arguments.  As it happens, that is not terribly difficult.

system("perl ../scripts/ \"$inputFileName\" \"$outputFileName\"");
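One gotcha worth mentioning: system() returns the child's exit status, so the test script can also catch the case where the script under test crashes before the file comparison ever runs.  A sketch (theScript.pl is a placeholder name, not the real script):

```perl
# Hypothetical script name -- substitute the real one.
# Passing a list avoids having to shell-quote the file names by hand.
my $status = system("perl", "../scripts/theScript.pl",
                    $inputFileName, $outputFileName);

# system() returns 0 on success; $? holds the raw wait status.
die "script under test failed (exit status $?)" if $status != 0;
```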

I found the module File::Compare useful for doing the comparison of expected result to actual result.  My tests looked something like this:

is(compare($expectedOutputFileName, $outputFileName), 0, "Test 1");


The Test Output

What I really wanted was a summarized output with red and green colors.  Who doesn't, right?  I didn't get it, though. 

There is a module Test::Harness that may have been useful here, but I was too lazy to look into it completely; it looked overly complicated, and I didn't have that much time.
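For what it's worth, the red/green summary I wanted is roughly what TAP::Harness (which ships with modern Perls) provides.  A hedged sketch of driving it from a small runner script; the test file name here is this project's layout, so adjust as needed:

```perl
use TAP::Harness;

# 'color' turns on red/green output in a capable terminal;
# 'verbosity' controls how much per-test detail is printed.
my $harness = TAP::Harness->new({
    color     => 1,
    verbosity => 0,
});

# Takes a list of test scripts to run and aggregates the TAP they emit.
$harness->runtests('testScript.pl');
```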

I did, however, manage to output the test cases to a flat file, which I found to be slightly better than scrolling through the console output with all the logging and such.  I put this up at the beginning of the test script:

my $builder = Test::More->builder;
$builder->output('testResults.txt');

One problem with this approach is that you don't see anything but pass or fail in this file.  So the details about the failures are still in the console output, as is the summary information about how many tests were run and how many failed.
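That said, the underlying Test::Builder object also exposes a separate diagnostics stream, failure_output, which can be pointed at the same file so that failure details land next to the pass/fail lines instead of on the console.  A sketch:

```perl
my $builder = Test::More->builder;

# Pass/fail ("ok"/"not ok") lines go here...
$builder->output('testResults.txt');

# ...and so do the failure diagnostics, instead of the console.
$builder->failure_output('testResults.txt');
```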


Putting It All Together

So here is the final test script, in its "entirety" (edited so that it doesn't give away so many trade secrets):


use File::Compare;
use Test::More tests => 3;

@testFileNameArray = (
    "TestCase1.txt",
    "TestCase2.txt",
    "TestCase3.txt"
);

foreach $testFileName (@testFileNameArray) {
    runScenario("../testData/" . $testFileName);
}

sub runScenario {
    my ($inputFileName) = @_;
    my $expectedResultFileName = replace($inputFileName, "[.]txt", ".out");
    system("perl ../scripts/ \"$inputFileName\" \"$inputFileName.tmp\"" );
    is( compare( $expectedResultFileName, $inputFileName . ".tmp" ), 0, $inputFileName );
    unlink( $inputFileName . ".tmp" );
}

sub replace {
    my ( $string, $textToFind, $replacementText ) = @_;
    $string =~ s/$textToFind/$replacementText/ig;
    return $string;
}

To add a test case, one would just add a new file name into the array and bump up the count in the "use Test::More tests => 3;" line.

Here's an example of the output result file on success:

ok 1 - ../testData/TestCase1.txt
ok 2 - ../testData/TestCase2.txt
ok 3 - ../testData/TestCase3.txt

And here's where the second test failed:

ok 1 - ../testData/TestCase1.txt
not ok 2 - ../testData/TestCase2.txt
ok 3 - ../testData/TestCase3.txt



So that's it.  I'm sure there is a better way, but this got me by for now.  I'm still dissatisfied with the tooling for testing plain, non-modular scripts like this one, though.  Maybe that would be a good project....
