CPAN Testers is only made possible with the support of our sponsors.
For more information on sponsoring, please visit the CPAN Testers website.

News & Views

If you haven't come across QuestHub, you might want to take a look at the Perl Realm. It's essentially a TODO list of tasks and projects, which can earn you points if you like to game these sorts of things. There are several stencils to help get you started, and in particular there are now several relating to CPAN Testers. If you're new to the Perl or CPAN Testers community, and want to contribute in some way, the stencils are an ideal starting point. Some are technical, others are more social, but all are to the benefit of Perl. Neil Bowers pointed me to the site in the first place, thanks to his post Make me do some work, via Questhub!, which in turn drew inspiration from a previous CPAN Testers Summary post.

Last month the CPAN::Changes Kwalitee Service saw 40% of CPAN meet the CPAN::Changes::Spec. In the past month, and thanks to Neil and his stencils on QuestHub, that figure has risen to 41%, and is continuing to improve. While the specification is by no means necessary to upload your distribution to CPAN, it does help to have some form of standard format for Changes files. As a consequence, Neil Bowers submitted a ticket to include a suggestion in the Perl documentation, which has now been approved and patched.
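For anyone unfamiliar with the spec, a conforming Changes file is very simple: an optional preamble line, then for each release a line with the version and a W3CDTF-style date, with the change entries indented beneath. A minimal sketch (the distribution name and entries here are purely illustrative):

    Revision history for Foo-Bar

    0.02  2013-08-04
     - Fixed a test failure on Win32.
     - Minor documentation tweaks.

    0.01  2013-07-01
     - Initial release.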

Reini Urban posted Smoking CPAN in one line on blogs.perl. Although we would recommend more conventional ways of smoking CPAN, this one-liner certainly does the job. Reini had a specific test he was looking at, but the outcome has been to raise tickets on several distributions, many of which have already been fixed. Testing for specific bugs is not usually what CPAN Testers is about, but in this instance it proved a very productive effort.
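Reini's post has his exact command, but to give a flavour of the idea, something along these lines will run the test suites of distributions newer than those you have installed, without installing them. This is only a sketch, assuming cpanm and cpan-outdated are available, and that a reporter such as App::cpanminus::reporter is configured to submit the resulting reports:

    cpan-outdated -p | xargs cpanm --test-only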

After reading several posts and emails over the past few years, and particularly one in the past few months, I felt a post was needed to provide a bit of clarity. There are many ways to get involved with CPAN Testers, aside from being a tester. If you wish to help improve the tools and websites, feel free to take a look at the repos and see how you can help. Let us know your ideas on the mailing list, and by all means ask for guidance and advice there too. CPAN Testers is continually evolving, but it does take time.

The Admin site is coming along. The original framework has been in place for some time, as it was originally designed before we had CPAN Testers 2.0 in place, but I've had to redesign some of the work with accessing and indexing tester emails. You can follow my progress on QuestHub, as I complete various aspects of coding and testing. A more public repo will also appear soon.

On the mailing list last month, Serguei Trouchelle highlighted some unusual behavior with a tester using CPANPLUS. Chris picked this up, and it seems some of the instructions on the Wiki for testing CPANPLUS might not be as complete as they need to be. Hopefully we can clean these up soon, so new testers can ensure their test environments are working correctly. A few people emailed me to say the CPAN Testers Reports site was down. Every so often several backend processes are all running at once, and with a few extra Apache processes on top, the disk I/O can be a bit traumatic. As such, the websites are usually the first things to suffer. I'm looking at moving some of the websites around soon, to hopefully reduce the impact.

Olivier Mengué posted that he thought the YAML files retrieved from the Reports site were invalid. We use YAML::XS on the server, and the real file it reads from is actually a JSON file. The problem seems to be with the YAML module, which requires a final newline character at the end of the file. The results from YAML::XS are sent straight out in the HTTP response, are not written to disk, and don't include a final newline character. I'm not aware of this being a requirement in the YAML spec, so I would recommend people switch to YAML::XS or YAML::Syck if they wish to read the YAML files from the Reports site. Or you could just request the JSON file directly.
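Fetching and decoding the JSON in Perl is only a few lines. A minimal sketch follows; the URL pattern and field names here are from memory, so inspect an actual response before relying on them:

    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use JSON qw(decode_json);

    # fetch the report summaries for a distribution (illustrative path)
    my $json = get('http://www.cpantesters.org/distro/T/Test-Simple.json')
        or die "unable to fetch report data\n";

    # decoding as JSON sidesteps the trailing-newline issue entirely
    my $reports = decode_json($json);

    for my $report (@$reports) {
        # field names assumed, not confirmed against the live data
        printf "%s %s on %s\n",
            $report->{status}, $report->{version}, $report->{platform};
    }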

This week is YAPC::Europe 2013 in Kiev. Sadly I'm not there, but I hope everyone who is there is making the most of the event. Four talks specifically about testing are being presented, so please check them out if you can.

We're now up to 33 million reports submitted, which is great. But it does mean some spring cleaning is still ongoing to keep the disk space down. The databases combined now take up around 610GB of storage, and the websites on the main server (including CPAN and BACKPAN) take up another 150GB. I'll be looking at upgrading our disk allocation soon, but it never hurts to do a bit of spring cleaning. The compression of reports is still underway, and over 16 million of them are now compressed. More updates for you next month.

Posted by Barbie
on 4th August 2013

While there are many who really appreciate the work of CPAN Testers and value the feedback it gives, it would seem there are still several people who are less than complimentary. One recently posted about what they see as wrong with the project, while continually making incorrect and misguided references. What follows is my attempt to explain and clarify many of the frequently mistaken assumptions about CPAN Testers.

1) CPAN Testers != CPANTS

First on the agenda is the all too frequent mistaken assumption that the CPAN Testers and CPANTS projects are one and the same. They are not. The two projects are very different, but both are run for the benefit of the Perl community. CPANTS is the CPAN Testing Service, currently run by Charsbar, and provides a static analysis of the code and package files within each distribution uploaded to CPAN. It provides a very valuable service to Authors, and can help to highlight areas of a distribution that can be improved. CPANTS does not run any test suite in the distribution.

CPAN Testers is very definitely aimed at both Authors and Users, and is very much focused on the test suite associated with the distribution. Users can use the CPAN Testers project to see why a distribution might be good (or not) to use within their own projects. Well tested distributions are often well supported by the Author or associated project team.

2) CPAN Testers != CPAN Ratings

CPAN Testers does not rate any distribution. It only provides information about the test suite provided with the distribution, to help authors improve their distributions, and others to see whether they will have problems using the distribution. Any perception of a rating of the distribution is misguided.

Which is better, a distribution with a test suite that only tests that the enclosed module loads, or one with a comprehensive test suite that occasionally highlights edge cases? Don't treat the number of FAILs or PASSes as any sign of how bad or good a distribution is. They may highlight problems with a particular release version, but these are only signs of kwalitee, and should not be used to rate a module or distribution. The reports themselves may help Users make an informed choice, as they can review the individual reports and see whether they would be applicable to them or their user base.

On the Statistics site, I do highlight distributions with no tests or high counts of FAIL reports. These lists are intended for interested parties to help fix distributions and/or provide test reports, helping to improve the distributions. However, none of the lists rate these distributions, or say they are not worth using, only that the test suite might not be as robust as the author thinks.

3) Development Release != Production Release

If you're basing your whole decision on whether to use a distribution on whether there is a FAIL report against a development release of Perl (5.19 is exactly that), then you're going to get the rug pulled from under you. Then again, if you're using a development version of Perl in a production environment, a FAIL report is the least of your worries.

Testing CPAN against the latest development version of Perl is extremely useful for p5p and the Author. If the Author is aware of potential issues that may affect a distribution working with a future production version of Perl, they can hopefully fix those before that production version is released.

The default report listings in CPAN Testers exclude the development Perl releases. If you change the preferences in the left hand panel, you can see these reports, but the regular User is not going to see them.

4) Volunteers != Employees

All the people involved in CPAN Testers are volunteers. Not just the testers, but the toolchain developers, website developers and sysadmins. We do it because we want to provide the best service we can to the Perl community. For the most part the infrastructure has been paid for by the developers themselves, although now we do have the CPAN Testers Fund, graciously managed by the Enlightened Perl Organisation, which allows individuals and companies to donate to help keep CPAN Testers up and running.

None of us get paid to work on CPAN Testers, so please don't expect that your great idea will be the focus of our attention until we get it done. There are several sub-projects already ongoing, but being volunteers, our time working on the code is subject to our availability. If you wish to contribute to the project, you are free to do so. We have a Development site that lists many of the CPAN and GitHub links to get at the code. If you fork a GitHub repository, contributing back is simply a matter of a pull request.
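For those new to GitHub, the workflow amounts to something like this (the repository name below is hypothetical; pick a real one from the Development site):

    # after forking the repository via the GitHub web interface:
    git clone git@github.com:yourname/cpan-testers-example.git
    cd cpan-testers-example
    git checkout -b my-fix
    # ...make and commit your changes...
    git push origin my-fix
    # then open a pull request against the upstream repository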

5) Invalid Reports

Unfortunately, we do see some smokers that have badly configured setups. These are usually picked up quite quickly, and the tester is alerted and advised how to fix the configuration. Normally an Author or Tester will post to the CPAN Testers mailing list, and make the Admins and the Tester aware of the problem. In most cases the Tester responds quickly, and the smoker is fixed, but on occasion we do have Testers that cannot be contacted, and the smoker continues to submit bogus reports.

We do have a mechanism in place for run-away smokers, but it has only seriously been used on one occasion, when an automated smoker broke while the tester was on holiday. In these cases we ignore all the reports sent until the smoker is fixed. Although we can backtrack and ignore reports previously sent, it isn't always easy to do manually. This is where the Admin site project aims to make marking bogus reports easier for both Authors and Testers.

I have been working on the Admin site recently, which has been too long coming. Although a large portion of the site is complete, there is still work to do. The site will allow Authors to select by Date, Distribution and Tester, enabling them to selectively mark reports that they believe to be invalid. The Tester will then be required to verify that these reports are indeed invalid. The reason for this two-stage process is to prevent Authors abusing the system and deleting all the FAIL reports for any of their distributions. The Tester, on the other hand, can mark and delete reports before being asked, if they are already aware of a broken smoker having submitted invalid reports. Admins will also get involved if necessary, but the hope is that Authors and Testers can better manage these invalid reports themselves.

In the meantime, if you see a badly configured smoker, inform the Tester and/or the CPAN Testers mailing list. If it is bad enough, we can flag the smoker as a run-away. Posting that we never do anything, or are not interested in fixing the process, does everyone involved a disservice, including yourself. If you don't post to the mailing list, it is unlikely that we will see a request to remove invalid reports.

6) Rating Testers == No Testers

Rating a smoker or tester is pretty meaningless, and likely to mislead others who might otherwise find their reports useful. How do you take back a bad rating? How do testers appeal against a rating that may be from an author with a personal vendetta? How many current testers or future testers would you dissuade from ever contributing, even if they only got one bad rating? CPAN Testers should be encouraging diverse testing, not providing reasons not to get involved.

A recent post demanded that it was "a matter of basic fairness" that we allow Authors to rate Testers. Singling out your favourite, or least favourite, Tester is not productive. Just because one tester generates a lot of FAIL reports doesn't mean that those reports are not instructive. If we were to allow Authors to exclude or include specific Testers, we would be opening the gates to Authors who wish to accept only PASS reports. In that case, there would be no point to CPAN Testers at all.

There are Testers that are not responsive for various reasons. It was once a goal of Strawberry Perl to enable CPAN Testers reporting by default, such that the User could post anonymously, without being expected to respond to requests for more detailed information, in the event that the User lacked the knowledge or time to help. ActiveState have also considered contributing their reports to CPAN Testers, which would be a great addition to the wealth of information, but as their build systems run automatically, getting a detailed response for a specific report wouldn't be possible. Rating these scenarios gives the wrong impression of the usefulness of both the Tester and the smoker.

There are already too many negative barriers for CPAN Testers, I'm not willing to support another.

7) What is Duplication?

The argument of duplication crops up every so often. However, what judgement do you base that on? Just the OS, and maybe the version of Perl? If you only consider the metadata we store for each report, then you're missing the many differences there can be between testing environments. What about the C libraries installed, the file system, other Perl modules, disk space, internet/firewall/port connections, user permissions, memory, CPU type, etc.? Considering all that and more, are we really duplicating effort?

Taking a recent case (sadly I can't find the link right now), one tester was generating FAIL reports while others had no problem. It turned out he had an unusual filesystem setup that didn't play well with the distribution he was testing. If we had rejected his reports simply because the OS/Perl combination had already been tested, we would have a very poor picture of what could be tested, and would likely have missed his unique environment. There are so many potential differences between smokers that it is unlikely we are duplicating effort anywhere near as much as you might think.

8) The Alternatives?

In that recent post it was suggested there were "alternatives to the CPAN testing system". Sadly the poster elected not to mention or link to them, so I have no idea what these alternatives might be, or whether they are indeed alternatives.

In all the time I've been looking out for other systems, there have been Cheesecake for Python (which is more like CPANTS) and Firebrigade for Ruby (which seems to have died now), but I've not seen anything else that works in a similar way to CPAN Testers. There is Travis CI, and in the past Adam Kennedy worked on PITA, and even had some dedicated Windows machines at one point, to help Authors test their code on other environments. However, CPAN Testers still tests on a much wider variety of platforms and environments, and more importantly is much more public about its results. I'd be interested to hear about others, if they do exist, and potentially learn from them if they do anything better or have features that CPAN Testers doesn't have.

Conclusion

You are free to ignore CPAN Testers, but there are thousands inside and outside the Perl community that are very glad it exists, and pleased to know that it is what it is. If you have suggestions for improvements, there are several resources available to enable discussion. There is always the mailing list, but there are also various bug tracking resources (RT, GitHub Issues) available. We do like to hear of project ideas, and even if we don't have the time to implement them, you are welcome to work on your idea and show us the results. If appropriate, we'll integrate it with the respective sub-project.

CPAN Testers isn't opposed to evolving, it just takes time :)
