

News & Views

Posted by Barbie
on 19th September 2015

One of the first offers of help for getting the CPAN Testers server fixed was from a guy called Doug Bell. Doug impressed me a lot with that first email, and was the only person who I felt really understood what I was asking for in my post about wanting a successor. Not that others didn't grasp it, but the fact that Doug was the only person to send me a CV proved he understood that this wasn't just a small project, and that I didn't want to hand over the keys to someone who wasn't going to make a commitment to the project. Doug's credentials impressed me, and his ideas and help since being invited to get involved have only proved he isn't in this just for the short haul.

Many may know Doug as PREACTION, both on CPAN and IRC. Doug is the current leader of Chicago Perl Mongers, which also amused me, particularly once you know that I'm based in Birmingham (UK), that Chicago is our twin city in America, and that I'm the leader of Birmingham Perl Mongers. Cue Twilight Zone theme tune! Doug has worked in web development and systems development, and understands SysAdmin responsibilities. An ideal person to look after CPAN Testers, as you really do need a bit of everything. He also, perhaps most importantly, understands scale. CPAN Testers is a huge project, not just in terms of the number of reports (nearly 60 million), but in terms of the whole ecosystem driven across the project.

I really needed a successor, as my home life and work life have been taking priority in recent years, and I don't want CPAN Testers to suffer because of that. I believe Doug is ideal to take over the project, and I hope you can give Doug the help and support that most have given me over the past 10 years or so. I'm not walking away completely, and will still be around to offer advice and help where needed, but Doug will be the one to drive the project forward from now on. He has some great ideas to help grow and improve the project, and I for one am really looking forward to seeing them come to fruition.

Doug and I have yet to meet, but hopefully the QA Hackathon 2016 will be the first time that I, along with all the other QA enthusiasts, will be able to sit down and discuss/design/plan his ideas for the future.

Welcome Doug, and thank you.

Posted by Barbie
on 4th August 2013

While there are many who really appreciate the work of CPAN Testers and value the feedback it gives, it would seem there are still several people who are less than complimentary. One recently posted about what they see as wrong with the project, while continually making incorrect and misguided references. What follows is my attempt to explain and clarify several facts about CPAN Testers that are often mistaken.

1) CPAN Testers != CPANTS

First on the agenda is the all too frequent mistaken assumption that the CPAN Testers and CPANTS projects are one and the same. They are not. The two projects are very different, but both are run for the benefit of the Perl community. CPANTS is the CPAN Testing Service, currently run by Charsbar, and provides a static analysis of the code and package files within each distribution uploaded to CPAN. It provides a very valuable service to Authors, and can help to highlight areas of a distribution that can be improved. CPANTS does not run any test suite in the distribution.

CPAN Testers is very definitely aimed at both Authors and Users, and is very much focused on the test suite associated with the distribution. Users can use the CPAN Testers project to see why a distribution might (or might not) be a good choice for their own projects. Well tested distributions are often well supported by the Author or associated project team.

2) CPAN Testers != CPAN Ratings

CPAN Testers does not rate any distribution. It only provides information about the test suite provided with the distribution, to help authors improve their distributions, and others to see whether they will have problems using the distribution. Any perception of a rating of the distribution is misguided.

Which is better: a distribution with a test suite that only checks the enclosed module loads, or one with a comprehensive test suite that occasionally highlights edge cases? Don't treat the number of FAILs or PASSes as any sign of how bad or good a distribution is. They may highlight issues for a particular release version, but these are only signs of kwalitee, and should not be used to rate a module or distribution. The reports themselves may help Users make an informed choice, as they can review the individual reports and see whether they would be applicable to them or their user base.
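To illustrate the first kind, here is a sketch of a load-only test script, the sort of thing some distributions ship as their entire suite (the module name is hypothetical). It proves the module compiles and nothing more, which is why a clean run of PASS reports by itself says little about how thoroughly a distribution is exercised.

```perl
# t/00-load.t -- a load-only test; the module name is made up for illustration.
use strict;
use warnings;
use Test::More tests => 1;

# Passes as long as the module compiles; no behaviour is exercised.
use_ok('Some::Module');

# A more comprehensive suite would also check behaviour, e.g.:
# is( Some::Module->new->munge(42), 'expected', 'munge handles 42' );
```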

On the Statistics site, I do highlight distributions with no tests or high counts of FAIL reports. These lists are intended for interested parties to help fix distributions and/or provide test reports, helping to improve the distributions. However, none of the lists rate these distributions, or say they are not worth using, only that the test suite might not be as robust as the author thinks.

3) Development Release != Production Release

If you're banking your whole decision on whether to use a distribution on whether there is a FAIL report against a development release of Perl (5.19 is exactly that), then you're going to get the rug pulled from under you. Then again, if you're using a development version of Perl in a production environment, a FAIL report is the least of your worries.

Testing CPAN against the latest development version of Perl is extremely useful for p5p and the Author. If the Author is aware of potential issues that may affect a distribution under a future production version of Perl, they can hopefully fix those before the production version is released.

The default report listings in CPAN Testers exclude the development Perl releases. If you change the preferences in the left hand panel you can see these reports, but the regular User is not going to see them.

4) Volunteers != Employees

All the people involved in CPAN Testers are volunteers. Not just the testers, but the toolchain developers, website developers and sysadmins. We do it because we want to provide the best service we can to the Perl community. For the most part the infrastructure has been paid for by the developers themselves, although we now have the CPAN Testers Fund, graciously managed by the Enlightened Perl Organisation, which allows individuals and companies to donate to help keep CPAN Testers up and running.

None of us get paid to work on CPAN Testers, so please don't expect your great idea to become the sole focus of our attention until it gets done. There are several sub-projects already ongoing, but being volunteers, our time working on the code is subject to our availability. If you wish to contribute to the project, you are free to do so. We have a Development site that lists many of the CPAN and GitHub links to get at the code. If you fork a GitHub repository, contributing back is simply a matter of a pull request.

5) Invalid Reports

Unfortunately, we do see some smokers that have badly configured setups. These are usually picked up quite quickly, and the tester is alerted and advised how to fix the configuration. Normally an Author or Tester will post to the CPAN Testers mailing list, and make the Admins and the Tester aware of the problem. In most cases the Tester responds quickly, and the smoker is fixed, but on occasion we do have Testers that cannot be contacted, and the smoker continues to submit bogus reports.

We do have a mechanism in place for runaway smokers, but it has only seriously been used on one occasion, when an automated smoker broke while the tester was on holiday. In these cases we ignore all the reports sent until the smoker is fixed. Although we can backtrack and ignore reports previously sent, it isn't always easy to do manually. This is where the Admin site project aims to make marking bogus reports easier for both Authors and Testers.

I have been working on the Admin site recently, which has been too long coming. Although a large portion of the site is complete, there is still work to do. The site will allow Authors to select by Date, Distribution and Tester, to enable them to selectively mark reports that they believe to be invalid. The Tester will then be required to verify that these reports are indeed invalid. The reason for this two-stage process is to prevent Authors abusing the system and deleting all the FAIL reports for any of their distributions. The Tester, on the other hand, can mark and delete reports before being asked, if they are already aware of a broken smoker having submitted invalid reports. Admins will also get involved if necessary, but the hope is that Authors and Testers can better manage these invalid reports themselves.

In the meantime, if you see a badly configured smoker, inform the Tester and/or the CPAN Testers mailing list. If it is bad enough, we can flag the smoker as a runaway. Posting that we never do anything, or are not interested in fixing the process, does everyone involved a disservice, including yourself. If you don't post to the mailing list, it is unlikely that we will see a request to remove invalid reports.

6) Rating Testers == No Testers

Rating a smoker or tester is pretty meaningless, and likely to mislead others who might otherwise find their reports useful. How do you take back a bad rating? How do testers appeal against a rating that may be from an author with a personal vendetta? How many current testers or future testers would you dissuade from ever contributing, even if they only got one bad rating? CPAN Testers should be encouraging diverse testing, not providing reasons not to get involved.

A recent post demanded that it was "a matter of basic fairness" that we allow Authors to rate Testers. Singling out your favourite, or least favourite, Tester is not productive. Just because one tester generates a lot of FAIL reports doesn't mean that those reports are not instructive. If we were to allow Authors to exclude or include specific Testers, we would be opening the gates to Authors who wish to accept only PASS reports. In that case, there would be no point to CPAN Testers at all.

There are Testers that are not responsive, for various reasons. It was once a goal of Strawberry Perl to enable CPAN Testers reporting by default, such that the User could post anonymously, without being expected to respond to requests for more detailed information, in the event that they didn't have sufficient knowledge or time to help. ActiveState have also considered contributing their reports to CPAN Testers, which would be a great addition to the wealth of information, but as their build systems run automatically, getting a detailed response for a specific report wouldn't be possible. Rating these scenarios gives the wrong impression of the usefulness of both the Tester and the smoker.

There are already too many negative barriers for CPAN Testers, I'm not willing to support another.

7) What is Duplication?

The argument of duplication crops up every so often. However, what judgement do you base that on? Just the OS, and maybe the version of Perl? If you only consider the metadata we store for the report, then you're missing a lot of the differences that can exist between testing environments. What about the C libraries installed, the file system, other Perl modules, disk space, internet/firewall/port connections, user permissions, memory, CPU type, and so on? Considering all that and more, are we really duplicating effort?

Taking a recent case (sadly I can't find the link right now), one tester was generating FAIL reports while others had no problem. It turned out he had an unusual filesystem setup that didn't play well with the distribution he was testing. If we'd rejected his reports simply because the OS/Perl combination had already been tested, we'd have a very poor picture of what could be tested, and would likely have missed his unique environment. There are so many potential differences between smokers that it is unlikely we are duplicating effort anywhere near as much as you might think.

8) The Alternatives?

In that recent post it was suggested there were "alternatives to the CPAN testing system." Sadly the poster elected not to mention or link to them, so I have no idea what these alternatives might be, or whether they are indeed alternatives.

In all the time I've been looking out for other systems, there have been Cheesecake for Python (which is more like CPANTS) and Firebrigade for Ruby (which seems to have died now), but I've not seen anything else that works like CPAN Testers. There is Travis CI, and in the past Adam Kennedy worked on PITA, and even had some dedicated Windows machines at one point to help Authors test their code on other environments. However, CPAN Testers still tests on a much wider variety of platforms and environments, and more importantly is much more public about its results. I'd be interested to hear about others, if they do exist, and potentially learn from them if they do anything better or have features that CPAN Testers doesn't have.

Conclusion

You are free to ignore CPAN Testers, but there are thousands inside and outside the Perl community who are very glad it exists, and pleased to know that it is what it is. If you have suggestions for improvements, there are several resources available to enable discussion. There is always the mailing list, but there are also various bug tracking resources (RT, GitHub Issues) available. We do like to hear of project ideas, and even if we don't have the time to implement them, you are welcome to work on your idea and show us the results. If appropriate, we'll integrate it with the respective sub-project.

CPAN Testers isn't opposed to evolving, it just takes time :)

Posted by Barbie
on 18th April 2013

In the final moments of the QA Hackathon at the weekend, this report came through the CPAN Testers system, marking some notable progress towards the future of CPAN Testers.

The report itself might not look too different from any other report submitted in recent times, except that there are two significant differences if you know what to look for. The first is a little more visible: the report was created by cpanminus and submitted by cpanminus-reporter.

Breno G. de Oliveira (GARU) started work on a CPAN Testers client for cpanminus last year, asked lots of questions during the 2012 QA Hackathon in Paris, and completed the work this weekend at the 2013 QA Hackathon in Lancaster. So for those who have switched to using cpanminus, you can now run the cpanm-reporter script to process the build log from cpanminus and submit reports.

However, while this is a great step forward, the not so obvious change is probably even more significant. This report marks the first ever report to be submitted using the all new CPAN::Testers::Common::Client. This takes a lot of the core analysis of the report and environment out of the CPAN smoker clients and brings it all together, so that any current or new smoker can make use of it. It means that, as smoker clients are upgraded, we will start to get more consistency between reports, and all the useful information needed can be seen regardless of whether the report was generated via cpan, cpanplus or cpanminus.

But this still isn't the full story. There is an even bigger significance to the use of CPAN::Testers::Common::Client: it submits the report broken down into its component parts as metadata. This means that although a plain text report is still available, the individual items of data, such as prerequisites and versions, are now submitted separately and stored as facts within the Metabase.
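To give a rough idea of what that decomposition means, here is a purely illustrative Perl sketch. The keys and layout below are invented for illustration and are not the actual Metabase fact classes; the point is simply that the report stops being one opaque blob of text and becomes separately addressable pieces of data.

```perl
# Illustrative only: a report broken into component parts rather than one
# opaque text blob. These keys do not reflect the real Metabase fact names.
my $raw_test_output = "...full test output as plain text...";

my $report = {
    grade         => 'fail',
    dist          => 'Some-Distribution-1.23',   # hypothetical distribution
    perl_version  => '5.16.2',
    osname        => 'linux',
    prerequisites => {
        'Test::More' => { have => '0.98', need => '0.88' },
    },
    test_output   => $raw_test_output,           # plain text report still kept
};
```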

This last part is significant because in the future it will allow us to do more interesting analysis on the reports, and hopefully provide more information to authors to understand why some tests fail and others pass.

Many thanks to Garu for finally completing all the work he has been doing over the last year, and congratulations for being the first to submit a report with the new client engine.

At the QA Hackathon in Lancaster, Garu and I were discussing which distribution to test. We looked at the distributions with no reports and spotted Task-BeLike-SARTAK. Garu thought he would try to be like SARTAK, but unfortunately we hit a problem with a prerequisite, which meant a report couldn't be created, so we ended up picking IO-Prompt-Tiny. Thankfully this went through without a hitch, and the report arrived on the reports website a couple of hours later.

So March ended on quite a high, following the 2012 QA Hackathon. With so many key people in one room, it was impressive to see how much got done. You can read reports from myself (parts 1 & 2), David Golden, Ricardo Signes, Miyagawa, Paul Johnson, Ovid and Dominique Dumont, and there were several tweets too, during and after the event, and the wiki also has a Results page. There was a significant number of uploads to PAUSE during and after the event too. And CPAN Testers has benefited hugely from the event.

Arguably the two most significant developments during the event were thanks to Breno G. de Oliveira, who not only added support for CPAN Testers within cpanminus, but also began the work on the long desired CPAN::Testers::Common::Client. While working on the former, Breno noticed that there were a lot of common reporting tasks (and differences) performed within CPAN::Reporter and CPANPLUS::YACSmoke. As he wanted to replicate this within his client for cpanminus, he asked whether it would make more sense to wrap this into a separate library, an idea David and I fully encouraged. Breno set about setting up a GitHub repository and has been doing some fantastic work bringing all the reporting together. You can follow his efforts on GitHub, as garu, and hopefully we shall start to see this distribution on CPAN and in smoker clients soon.

While that may have been the most significant output for CPAN Testers, other parts of the toolchain also made some giant leaps. Having Andreas, David, Ricardo, Schwern and Miyagawa all together meant the kinks in David's idea of changing the way we look at indexing CPAN could be ironed out in minutes rather than weeks. David's idea is to decouple the indexing system from the repository. This will allow other repositories, such as the many DarkPAN repositories out there, to use the same system, and enable toolchain installers to point to different repositories as needed. Nick Perez is now working on CPAN::Common::Index, which will form the basis for the new system. You can read the details in David's write-up of the event. Hopefully this will be a major step forward in enabling CPAN Testers to be used for independent repositories, which has been a request for many years.

In other news, we finally announced the sponsorship website and CPAN Testers Fund. Over the past 10 years CPAN Testers has been funded largely by the developers. In the last 5 years hosting, network and bandwidth costs have been increasing, with the developers and Birmingham.pm being the principal donors. While this is great, and we really do appreciate it, the bigger CPAN Testers becomes, the more support we need. As such we are hoping to encourage sponsorship from businesses and corporations, especially if they use Perl. If you would like to know more, please get in touch. Many thanks to the Enlightened Perl Organisation for managing the CPAN Testers Fund for us.

On the mailing list there was a discussion about the reports not showing the differences in tests where strings look identical, but where one may contain control characters. Personally I feel trying to fix this in the browser is too late. As we treat the test report as a complete piece of text, we cannot currently isolate the test output to know what control characters to highlight. It could end up confusing the viewer. Andreas also thought that the author should include more appropriate tests in their test suite, and suggested the use of Test::LongString, Test::Differences or Test::HexDifferences.
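As a sketch of the kind of test being suggested, here is a minimal example using Test::LongString (the strings are made up): is_string() reports where the two strings first differ and escapes non-printable characters in its diagnostics, so a stray carriage return that a plain is() failure would leave invisible becomes easy to spot.

```perl
# A minimal sketch; the strings are invented. They look identical when
# printed, but one hides a carriage return before the newline.
use strict;
use warnings;
use Test::More tests => 1;
use Test::LongString;

my $got      = "line one\r\nline two\n";
my $expected = "line one\nline two\n";

# On failure, is_string() shows where the strings begin to differ and
# escapes the control characters, making the stray "\r" visible.
is_string( $got, $expected, 'output matches, control characters included' );
```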

As we've mentioned a few times, the SQLite database download is no longer as reliable as it once was. Andreas believes that we may have an unusual text pattern that is causing the problem, but whatever it is, it's not something we know how to solve. As the database file is high maintenance, I would like to abandon it sooner rather than later. If you currently consume the SQLite database, please take a look at the new CPAN::Testers::WWW::Reports::Query::Reports release. This is now the preferred way to request records from the master cpanstats database. For the summaries, you now also have CPAN::Testers::WWW::Reports::Query::AJAX. If you have any problems or suggestions for improvements, please let me know.

On a final note, please be aware that CPAN Testers will be down for maintenance on Thursday 12th April. We'll try to get everything back online as soon as possible. Thanks to Bytemark Hosting for helping us with the disk upgrade.

January ended up being quite a productive month, with several issues with websites getting sorted finally.

A few people noticed that the leaderboards weren't producing the right numbers on the Statistics site. Because it was January, a rather deceptive bug came to light regarding the calculation of previous months. Thankfully I found and fixed the bug before the end of January; for the remaining months of the year the bug doesn't surface :)

Next up was the Preferences site. After several weeks trying to get the SSL certificate set up correctly, I left it to work on the Statistics site bug, only to return to it a couple of weeks later to discover it all working! No idea what the problem was, but I suspect something was caching the wrong settings somewhere. Any road up, if you've been wanting to change any preference settings for emails, you can now log in and update these for yourself again.

I also finally managed to release the codebase that runs the Reports site. The first release isn't the latest code that is running live now, but I will be backporting the live code into the repository over the coming weeks. This means that all the code to run the full process for yourself, from testing to displaying reports, is now available online. And all Open Source. It also means that two of the old codebases (CPAN-WWW-Testers & CPAN-WWW-Testers-Generator) can now be archived. The RT queues for both will be reviewed, and tickets will either be transferred to the newer releases or closed. In future you will need to post any issues for the sites to their respective queues.

It seems we had a few people praising, or at least putting CPAN Testers in a good light, last month. Alberto Simões said "Thank you, CPAN Testers", Buddy Burden gave us "A Tale of CPAN Testers" and Joseph Walton wrote about New releases and old perls in their blogs. Many thanks to those guys, and to everyone who tweets or talks about how CPAN Testers has helped you. It is also great to know that people are making use of all the sites in the family, to help them write more robust code or get a better idea of which distributions will work for them.

And continuing the thank yous, I'd also like to thank Shadowcat, The Perl Foundation and the Sponsors for enabling myself, Ricardo Signes and David Golden to all attend the forthcoming 2012 QA Hackathon in Paris. Also in attendance will be Slaven Rezić and Andreas König, so CPAN Testers is likely to be featured quite heavily during the event. You can see our specific aims on the Attendees wiki page. We'll feature some of the updates in a future post and I'm sure many of the attendees will be blogging and tweeting during and after the event.

Lastly, expect to see a milestone update this month. It's a big one!