
News & Views

Posted by Barbie
on 18th April 2013

In the final moments of the QA Hackathon at the weekend, this report came through the CPAN Testers system, marking some notable progress towards the future of CPAN Testers.

The report itself might not look too different from any other report submitted in recent times, except there are two significant differences if you know what to look for. The first is a little more visible: the report was created by cpanminus and submitted by cpanminus-reporter.

Breno G. de Oliveira (GARU) started work on a CPAN Testers client for cpanminus last year, asked lots of questions during the 2012 QA Hackathon in Paris, and completed the work this weekend at the 2013 QA Hackathon in Lancaster. So for those who have switched to using cpanminus, you can now run the cpanm-reporter script to process the build log from cpanminus and submit reports.
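
If you want to give it a go, the basic workflow looks something like the following. Treat this as a rough sketch: the --setup option and the config file location are from memory, so check the App::cpanminus::reporter documentation for the exact details.

    # install the new reporter client from CPAN
    cpanm App::cpanminus::reporter

    # configure your CPAN Testers credentials
    # (assumed to write the usual ~/.cpanreporter/config.ini)
    cpanm-reporter --setup

    # install or test distributions as normal with cpanminus
    cpanm IO::Prompt::Tiny

    # then process the cpanminus build log and submit the reports
    cpanm-reporter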

However, while this is a great step forward, the not-so-obvious change is probably even more significant. This report marks the first ever report to be submitted using the all new CPAN::Testers::Common::Client. This takes a lot of the core analysis of the report and environment out of the CPAN smoker clients and brings it all together, so that any current or new smoker can make use of it. This will mean that as smoker clients are upgraded we will start to get more consistency between reports, and all the useful information needed can be seen regardless of whether the report was tested via cpan, cpanplus or cpanminus.

But this still isn't the full story. There is an even bigger significance to the use of CPAN::Testers::Common::Client, and that is it submits the report broken down into its component parts as metadata. This now means that although a plain text report is available, the individual items of data, such as prerequisites and versions, are now submitted separately and stored as facts within the Metabase.

This last part is significant because in the future it will allow us to do more interesting analysis on the reports, and hopefully provide more information to authors to understand why some tests fail and others pass.

Many thanks to Garu for finally completing all the work he has been doing over the last year, and congratulations for being the first to submit a report with the new client engine.

At the QA Hackathon in Lancaster, Garu and I were discussing which distribution to test. We looked at the distributions with no reports and spotted Task-BeLike-SARTAK. Garu thought he would be like SARTAK, but unfortunately we hit a problem with a prerequisite, which meant no report could be created, so we ended up picking IO-Prompt-Tiny instead. Thankfully this went through without a hitch and arrived on the reports website a couple of hours later.

Posted by Barbie
on 18th April 2013

Congratulations to Nigel Horne for submitting the 30 millionth CPAN Testers Report. The report itself was a PASS for DBI.

There have been a couple of milestones over the past few days, which goes to prove that CPAN Testers is going from strength to strength. More news soon.

This coming weekend, 12th-14th April, will be the 6th annual QA Hackathon. Once again it returns to the UK, and will be hosted by the folks at NorthWestEngland.pm in Lancaster. There will be several CPAN Testers in attendance, and hopefully several CPAN Testers related projects will receive some attention. However, the event is much bigger than CPAN Testers, and there are plenty of projects involving testing, CPAN, the installation toolchain and others. If you're not attending, but would still like to help over the weekend, join the #perlqa IRC channel and let people know you're available. With so many projects being worked on, additional coding help from remote hands is often very welcome.

March ended up being a slightly quieter month on the mailing list. Bo Johansson answered his own beginners' questions, which I aim to put on the wiki at some point, unless anyone else gets there first. Chris Marshall asked if we were able to speed up the report feedback for PDL. Unfortunately, this is something that is dependent on a number of things, not least the speed of processing all the reports coming in. There were some issues last month, which have been (or are being) fixed, but the general speed of processing could do with improving. Disk I/O is now the bottleneck for most operations, so I'm looking at ways we can improve that. However, I would like to emphasise that the current processes run automatically with very little manual intervention. Fast tracking reports for a particular distribution is not something the system is designed to do. If the process is running too slow for you, then by all means have a look at the code and suggest ways to improve it further. Alceu Rodrigues de Freitas Junior asked about distroprefs, which adds to Bo Johansson's beginners' questions; this was answered by Steffen Schwigon, and again I'll be looking to add it to the wiki.

We're fast approaching the 30 million mark for the number of reports submitted, but I suspect we'll have to wait until after the QA Hackathon before it's reached.

We have some news in the pipeline for CPAN Testers, and once we've had a chance to discuss the details at the QA Hackathon, we'll let you folks know about it all. If you can't make the QA Hackathon, remember you can always join the IRC channel, or even spend the weekend writing code. More next month.
