
News & Views

Posted by Barbie
on 12th October 2015

Last month CPAN Testers managed to clock up 60 million reports.

Congratulations to Chris Williams (BINGOS) for submitting the 60 millionth CPAN Testers Report. The report itself was a PASS for BenchmarkAnything-Storage-Frontend-Lib and was submitted on 20th September 2015.

Back in March 2014 I predicted the 50 millionth report would arrive in February 2015. I wasn't too far off, as Chris Williams (again) submitted a PASS report for Map-Tube on 9th January 2015. As such I predict we'll see the 70 millionth in June 2016. Time will tell.

Happy CPAN Day. In case you weren't aware, last month Neil Bowers took on promoting the first CPAN Day, a day to celebrate the birth of CPAN. Although the concept of CPAN started before 16th August 1995, that was the day the first true upload occurred. In the run-up, Neil spent several days writing blog posts to help identify ways to improve your distributions. While the module code within might be great, it is just as important to write clear documentation and make the most of your package files, particularly the META files, so that PAUSE, MetaCPAN and others can link to all the right places.

The day itself was a success in terms of numbers. Prior to the event, working with Neil, I created a new CPAN Uploads page on the Statistics site, which helped everyone monitor the day's events in almost real time. It has proved so successful that the page has continued. With significant thanks to INGY, we blew away the previous record of 150 releases in a single day with 775. We also had 107 different authors submitting releases, with 41 brand new distributions finding their way on to CPAN. It was quite an achievement, and one that might take a while to beat, but as Neil noted after the event, perhaps next year we should think about something other than numbers. I'm not sure what that might be, but hopefully we'll think of something suitable to celebrate 20 years of CPAN!

During CPAN Day, unsurprisingly, there were quite a number of CPAN Testers reports: 37,764 in one day, which although 10k higher than the average, wasn't the highest we've seen in a day (that was 47,929 back in January). One aspect I wanted to see was how many different OSes were being tested. The subject also cropped up on the mailing list, thanks to Dean Hamstead, as to which OSes are undertested and which we would really love to see more reports for. Of the testing that has been done over the last few months, the following popular OSes could use a bit more testing. So if you have a spare box and are looking to get involved with CPAN Testers, one of the following would be gratefully appreciated:

  • Solaris
  • Irix
  • Windows (Cygwin)
  • NetBSD
  • FreeBSD
  • OpenBSD

Solaris is an often requested platform, but as most earlier versions are only supported on SPARC hardware, it is a tricky one to test with. However, later versions do support x86 systems. Irix, although no longer supported by Silicon Graphics, is still used, so if you're able to set up a test environment with Irix, there are several authors who would be very grateful. The other four are a little more popular and easier to install in a test environment, but they are just as much in need of testing as other higher profile platforms.

The thread started by Dean also saw BooK raise the point of whether the CPAN Testers Fund could be used to rent boxes. Unfortunately, the Fund itself isn't substantial enough to do this, as funding the existing main servers is already quite expensive. If, as Mark Keating has suggested on many occasions, several major companies using Perl were to contribute a regular, even small, amount to the Fund each month, we could think about renting some servers or VMs to regularly test otherwise less tested systems. We already have a gittip/gratipay account, and there has been the suggestion of creating an Amazon Smile account too. These are all great ways for individuals to contribute, but realistically, for us to grow, we do need more major help from bigger companies. If your company can help financially in any way, please suggest the CPAN Testers Fund to them.

Gabor Szabo asked why the Perl special variables list also includes a reference to the MSWin32 variables. The reason is simple: it's how David Golden wrote it :) Longer term this will be a little easier to manage, and will hopefully be a little clearer, once Garu implements the common reporting client. Consolidating the reports with consistent data about the platform, perl and installed modules is a goal, so that test reports can be better analysed and differences between installers, as well as other environment criteria, can be seen more easily.

On the London.pm mailing list, Tom Hukins took the time to explain the differences between CPANTS and CPAN Testers. These two projects are often thought of as the same thing, and I'm not sure how to make it any clearer that they are very different projects. Paul Johnson's Devel Coverage Reports beta site is gaining fans, and I hope that it is not confused with CPANTS and CPAN Testers either. It too is a very different project, one that complements the other two but still has a different goal.

I shall be attending various HackDay, Workshop and Perl Monger tech meet events in the coming months, and hope to promote CPAN Testers in some fashion. If you're planning talks involving testing at future events, please get in touch and I'll promote them here too. Happy testing.

Posted by Barbie
on 4th July 2014

I was trying to keep track of what distributions I released for my monthly challenge quest, and it made me think about when I released the first version of each distribution I have on CPAN. This then got me thinking about the first versions everyone had on CPAN. Seeing as I was already adding to the statistics, I thought I might as well add some more.

And so now you can see all the first version releases to CPAN. It also helps to keep track of all those first versions that happen each month, in case you wanted to check out what new cool stuff your favourite author was releasing to CPAN. So far this month we've had 21 new distributions (at the time of writing), and we're only 3 days into the month (the stats run in the early hours of the morning for the preceding days).

However, that then got me thinking about the popularity of version numbers, and in particular what version numbers authors used for their first release of a distribution to CPAN. Unsurprisingly 0.01 is the most popular, but it turns out there is quite a variety before we get down to single figures. Going that little bit further and looking at all the version numbers used, it was interesting to see there is a reasonably uniform trend, certainly for the top 14 positions.
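
For anyone curious how such a tally might be produced, here is a minimal sketch. The %first_release hash below is hypothetical sample data standing in for the real uploads database that the statistics are generated from:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sample data: distribution name => version string of its
    # first CPAN release. The real figures come from the uploads database.
    my %first_release = (
        'Some-Dist'    => '0.01',
        'Another-Dist' => '0.01',
        'Third-Dist'   => '1.00',
    );

    # Tally how often each version number is used for a first release.
    my %count;
    $count{$_}++ for values %first_release;

    # Print the versions from most to least popular.
    for my $version ( sort { $count{$b} <=> $count{$a} } keys %count ) {
        printf "%-12s %d\n", $version, $count{$version};
    }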

If you have any ideas for more CPAN statistics, please post them on GitHub or RT.

Posted by Barbie
on 3rd July 2014

Back at the 2014 QA Hackathon in March, Neil Bowers asked about the frequency of testers submitting reports. We looked at a few different stats, and he blogged about it in How soon do CPAN Testers start testing your releases?. However, he also gave me the idea to look at some other statistics to do with when authors release their distributions.

This idea is now available as the CPAN Submission Rates pages and graphs on the CPAN Testers Statistics site. When I have a bit more free time, I'm going to look at providing the preceding few ranges, so that for day of the week, I'll include the previous 4 weeks individually. This will hopefully show the fluctuations in each period a little more clearly. However, as a first step it does show that as authors we are pretty much consistent with our releases over the lifetime of CPAN.

The one graph that does show a noticeable difference is the hour of release. I had expected this to be reasonably even, but the fact that there is a dip around 05:00 UTC and two peaks around 15:00 UTC and 21:00 UTC did surprise me. I'm guessing the first peak equates to European submissions and the second to American submissions. However, we have plenty of authors in Australia, Japan, China and Korea, so I am surprised that there isn't a similar peak somewhere between 03:00 UTC and 05:00 UTC. It might be interesting to filter the results using the wealth of Acme::CPANAuthors::* modules to figure out what periods each group favours. If anyone fancies patching CPAN::Testers::WWW::Statistics, or even setting up their own statistics page for that, please let me know.
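
As a rough illustration of that idea, here is a minimal sketch using Acme::CPANAuthors to pick out one author group. The @uploads array is hypothetical sample data; in practice the (PAUSE ID, release epoch) pairs would come from the uploads database that the statistics code already reads:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Acme::CPANAuthors;

    # Hypothetical upload records: [ PAUSE ID, epoch of release ].
    my @uploads = (
        [ 'ISHIGAKI', 1404345600 ],
        [ 'BARBIE',   1404392400 ],
    );

    # Members of one Acme::CPANAuthors group, e.g. the Japanese authors.
    my $authors = Acme::CPANAuthors->new('Japanese');
    my %member  = map { $_ => 1 } $authors->id;

    # Bucket that group's releases by hour of day (UTC).
    my %hour;
    for my $upload (@uploads) {
        my ( $id, $epoch ) = @$upload;
        next unless $member{$id};
        my $h = ( gmtime $epoch )[2];
        $hour{$h}++;
    }

    printf "%02d:00 UTC  %d\n", $_, $hour{$_} for sort { $a <=> $b } keys %hour;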

If you have any more ideas for these sorts of statistical analysis, please post them to GitHub or RT.

So the 2014 QA Hackathon has drawn to a close, but it is far from the end of the work, particularly for CPAN Testers. You can read several blog posts detailing many aspects of the QA and testing community work done during the hackathon, as well as several aspects of the toolchain, including PAUSE (which saw a lot of collaboration). It does get said often, and it bears repeating, that the QA Hackathons are a valuable part of the Perl community, and help to drive many projects. Without them it is likely that key elements of the infrastructure we have come to rely on (PAUSE, CPAN, BACKPAN, MetaCPAN, CPAN Testers, CPANTS) would be a long way from being the resourceful, stable and continually developing components we have come to expect. Next year's hackathon will be in Berlin, and I for one am very much looking forward to it.

Aside from the work on the database for CPAN Testers during the hackathon, I did get to make a few releases, but several elements I started during the hackathon were only completed during the following weeks. One of these, CPAN-Testers-WWW-Reports-Query-Report, now enables you to retrieve the Metabase Fact object, JSON or hash representation of a specific report. For those looking at analysing the similarities and differences between reports, this may make things a little easier, particularly when we start introducing more element facts into the report fact. Currently this only works for reports stored in the Metabase, so those early reports are not currently retrievable ... yet.
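
As a rough sketch only, fetching a single report by its ID might look something like the following. The constructor arguments and method name here are my assumptions rather than a quote from the distribution, so check the module's POD for the exact interface:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Data::Dumper;
    use CPAN::Testers::WWW::Reports::Query::Report;

    # NOTE: the call below is an assumption; consult the module's POD for
    # the real constructor options and query methods.
    my $query  = CPAN::Testers::WWW::Reports::Query::Report->new();
    my $report = $query->report( report => 40000000 );   # hash representation

    print Dumper($report) if $report;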

I discussed a new command line tool to submit reports with H.Merijn "Tux" Brand, who is keen to run reports in standalone instances. I see this working similarly to Garu's cpanm-reporter, and with the common client that Garu has been working on, this could be a nice addition to the submission options. Liz Mattijsen, Tobias Leich (who easily has the coolest gravatar on CPAN) and I talked about how Perl6 distributions could be incorporated into CPAN Testers. There are some subtle differences, but there are also many common factors. It was interesting to read the Rakudo blog post about Perl 6 and CPAN, as overcoming some of the hurdles potentially facing Perl6 developers is likely to help make CPAN a better place for all of us. The currently proposed solution is a similar approach to how different namespaces are stored in the Metabase. For the time being though, Perl6 distributions are excluded from CPAN Testers, but once we have a Perl6 smoker there is no reason not to include them. I'm not sure how soon that will be, but watch this space.

Andreas König and I once again looked at the way the reports are stored in the Metabase. Andreas had already highlighted that the updated date, which is meant to be the date a report entered the Metabase, was in actual fact the created date (the date on the tester's platform). Along with David Golden, we looked at the code used by the Metabase, but failed to find anything wrong with it. It's hopefully something we can take more time over in the future; however, the next priority for the Metabase is getting it moved onto MongoDB and away from SimpleDB. In the meantime, the Generation code, due to a lack of time, has been running using the 2010 version of the Metabase/AWS interface. During the hackathon and the following weeks, I finally upgraded it to use the revamped version released in 2012. Although it is still troublesome to find all the reports, the search interface has been much improved, and now we have a much more reliable feed from the Metabase. This is also in part due to the rewrite of the internals of the Generation code to be more pro-active in finding unusual gaps between reports.

I spoke with Neil Bowers during the hackathon too. Neil had suggested some ideas that I'd wanted to include in the CPAN Testers Statistics site. We discussed others during the hackathon, and although I have several notes to work from, it will be a little while yet before I can put aside some time to implement them. Neil has no end of ideas to help improve CPAN, and I hope he'll be a good sounding board for ideas to incorporate into the site in the future. On the first day of the hackathon he posted about how quickly CPAN Testers test a distribution after it has been uploaded to PAUSE. He was surprised to see some reports posted almost instantly. This is largely thanks to Andreas' work to bring the Tier 1 mirrors up to date within 10 seconds of a distribution being successfully uploaded to PAUSE. People such as Chris and Andreas use their own T1 mirrors to feed into their smokers, so as a consequence the reports can appear on the Reports site within one hour of the distribution hitting PAUSE!

I had intended to launch the CPAN Testers Admin site during the hackathon, but didn't get the chance to prepare a launch statement. I did wonder how a launch on 1st April would go down, but elected to wait a little longer. So expect some news on that front very soon.

Just before the hackathon we reached 40 million report submissions. An impressive number by any standards, and one that makes a compelling argument that Perl & CPAN are still being actively maintained and developed.

John Scoles made a post related to CPAN Testers, highlighting how CPAN Testers showed why a particular test needed a little more thought. Thanks to the wide variety of perls and platforms tested by the CPAN Testers, it provided John with some intriguing failures, and made for an informative look at how hashes called in scalar context are handled differently between versions of Perl and platforms. Prior to the QA Hackathon, Neil Bowers posted about his new idea for An author's CPAN dashboard, which he continued to develop during the hackathon too. We even discussed, together with Olaf Alders, whether this was something that should feature in MetaCPAN. For now it won't, but hopefully it can develop further, and we'll see what people make of it once more APIs are fed into the dashboard.
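
For anyone who hasn't run into the scalar-context behaviour, here is a generic illustration of the kind of difference involved (not John's actual test, just a small example):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %h = ( a => 1, b => 2, c => 3 );

    # Before Perl 5.26, a non-empty hash in scalar context returned bucket
    # usage information such as "3/8", which could vary between builds and
    # platforms; from 5.26 onwards it returns the number of keys (here, 3).
    # A test that compares this value literally will therefore behave
    # differently across the perls and platforms that CPAN Testers cover.
    print scalar(%h), "\n";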

Another month wrapped up, and another QA Hackathon over. Lots to do and plenty of fresh ideas coming. Happy testing.