

News & Views

Below is a post to the CPAN Testers mailing list by Mark Overmeer. Mark asked me earlier in the week about promoting under-tested OSes, which is something I've been meaning to do for a while. Although I had planned to write a blog post of my own, Mark's post to the list covers it all so well that, with his permission, I am posting it in full. If you run an OS that is under-tested on CPAN, please consider running a smoke tester to help both Mark and CPAN Testers. Over to you Mark ....

 

Dear testers,

CPAN Testers is a unique infrastructure.  I (ab)use it to collect platform-specific information about POSIX compliance, which I cannot find anywhere else.  I need more uncommon platforms to run my tests...  Please cpantest my POSIX::1003 module.

* Let me explain.
The POSIX.pm module offers a few extra OS calls which are not in core (why is getpwnam() still built-in?).  But the POSIX standards are much larger.  See where the 1200+ functions of POSIX* are: Overview - Functions

POSIX::1003 extends POSIX.pm, first by breaking its exported symbols into groups with some documentation.  Most importantly, the module autodetects constants and adds more functions.  There are many more constants than the core modules define.
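As a quick illustration of that grouping (a sketch added for this post, not part of Mark's original mail), importing from one of the groups might look like the following; the ::Errno group name and the exported constant are assumptions based on the description above, so check the POSIX::1003 documentation for the exact interface.

    # Minimal sketch, assuming POSIX::1003::Errno is one of the export
    # groups and that detected constants such as EAGAIN are exported as
    # constant functions.
    use strict;
    use warnings;
    use POSIX::1003::Errno qw(EAGAIN);

    print "EAGAIN on this platform = ", EAGAIN(), "\n";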

* Approach.
I have manually grabbed constant names from the manual pages of many operating systems; there are collections of manual pages which are helpful.  I have found close to 1700 constants this way.  During installation of the module, XS builds tables with the names which are found.  Those names are also added to the module's own manual pages.  Example: Errno - Constants

I add new functions very fast, but I need to see which platforms support which extensions.  I also need to spot compilation problems on platforms which I do not have access to.

Worth noting: I go to great pains *not* to hide platform differences! People may want to produce abstractions which do hide differences, but to do that correctly, we need the pure native interface first. For instance, I offer a pure getuid() and geteuid() and getreuid() and getresuid() (if the platform offers them), without any attempt to unify them.

* New project.
OK, to support my own development, I was looking for the reverse mapping: we do have a "platform to constants" map via the manual pages, but it is not clear when constants and functions got added to some OS. Now I use cpantesters to get the reverse mapping!  When tests are run on cpantesters, they dump the discovered tables.  This will benefit Perl's connection to the OS, but the data is useful for any OS programmer.
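Going back to the point above about not hiding platform differences, here is a small added sketch (not Mark's code) using only core POSIX.pm: the common uid calls are available everywhere, but the platform-specific extensions are exactly what core Perl does not expose.

    # Sketch with core POSIX.pm only: real and effective uids are always
    # reachable, but extensions such as getresuid() are not in core
    # POSIX.pm, which is the gap POSIX::1003 tries to expose natively on
    # the platforms that offer them.
    use strict;
    use warnings;
    use POSIX ();

    printf "real uid: %d  effective uid: %d\n",
        POSIX::getuid(), POSIX::geteuid();

    # Perl also exposes these as $< and $>, but there is no core route to
    # getresuid(); a pure, per-platform interface has to come first.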

Have a look at posix.cpan6.net to see what I do with the information collected via cpantesters.  It's a new project, so the output is far from optimal.  When I get more cpantest results, I will (have to) be smarter with the browsing.

* Please help
Please run tests for POSIX::1003 -- especially when you run something other than Mac, Linux, FreeBSD or Windows.  At the least, I would like to get results from aix, hpux, solaris, and openbsd.  Thanks in advance!

Happy CPAN Day! In case you weren't aware, last month Neil Bowers took on the promotion of the first CPAN Day, a day to celebrate the birth of CPAN. Although the concept of CPAN started before 16th August 1995, that was the day the first true upload occurred. In the run-up, Neil spent several days writing blog posts to help identify ways to improve your distributions. While the module code within might be great, it is just as important to write clear documentation and make the most of your package files, particularly the META files, so that PAUSE, MetaCPAN and others can link to all the right places.

The day itself was a success in terms of numbers. Prior to the event, working with Neil, I created a new CPAN Uploads page on the Statistics site, which helped everyone monitor the day's events in almost real time. It proved so successful that the page has continued. With significant thanks to INGY, we blew away the previous record of 150 releases in a single day with 775. We also had 107 different authors submitting releases, with 41 brand new distributions finding their way onto CPAN. It was quite an achievement, and one that might take a while to beat, but as Neil noted after the event, perhaps next year we should think about something other than numbers. I'm not sure what that might be, but hopefully we'll think of something suitable to celebrate 20 years of CPAN!

During CPAN Day, unsurprisingly, there were quite a number of CPAN Testers reports: 37,764 in one day, which, although 10k higher than the average, wasn't the highest we've seen in a day (that was 47,929 back in January). One aspect I wanted to look at was how many different OSes were being tested. The subject also cropped up on the mailing list, thanks to Dean Hamstead, asking which OSes are under-tested and which we would really love to see more reports for. Of the testing that has been done over the last few months, the following popular OSes could use a bit more attention. So if you have a spare box and are looking to get involved with CPAN Testers, testing on one of the following would be very much appreciated:

  • Solaris
  • Irix
  • Windows (Cygwin)
  • NetBSD
  • FreeBSD
  • OpenBSD

Solaris is an often-requested platform, but as most earlier versions are only supported on SPARC hardware, it is a tricky one to test with. However, later versions do support x86 systems. Irix, although no longer supported by Silicon Graphics, is still in use, so if you're able to set up a test environment with Irix, there are several authors who would be very grateful. The other four are a little more popular and easier to install in a test environment, but they are just as much in need of testing as the higher-profile platforms.
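If you do have a spare box running one of these platforms, a minimal way to turn it into a smoker, assuming CPAN.pm and CPAN::Reporter are already installed and configured to send reports, is CPAN::Reporter::Smoker; this is a sketch of the simplest setup rather than a full smoking recipe.

    # Minimal smoker sketch. Assumes CPAN.pm and CPAN::Reporter are already
    # configured on this machine (email address and report transport).
    use CPAN::Reporter::Smoker;

    start();   # walk recent CPAN uploads, testing each and sending reports

Leave that running and reports for your platform will start flowing in.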

The thread started by Dean also raised a point from BooK about whether the CPAN Testers Fund could be used to rent boxes. Unfortunately, the Fund itself isn't substantial enough to do this, as funding the existing main servers is already quite expensive. If, as Mark Keating has suggested on many occasions, several major companies using Perl were to contribute a regular, even small, amount to the Fund each month, we could think about renting some servers or VMs to regularly test otherwise less-tested systems. We already have a gittip/gratipay account, and there has been the suggestion of creating an Amazon Smile account too. These are all great ways for individuals to contribute, but realistically, for us to grow, we need more substantial help from bigger companies. If your company can help financially in any way, please suggest the CPAN Testers Fund to them.

Gabor Szabo asked why the Perl special variables list also includes a reference to the MSWin32 variables. The reason is simple: it's how David Golden wrote it :) Longer term this will be a little easier to manage, and hopefully a little clearer, once Garu implements the common reporting client. Consolidating the reports with consistent data about the platform, perl and installed modules is a goal, so that test reports can be better analysed, and so that differences between installers, as well as other environment criteria, can be seen more easily.

On the London.pm mailing list, Tom Hukins took the time to explain the differences between CPANTS and CPAN Testers. These two projects are frequently thought of as the same thing, and I'm not sure how to make it any clearer that they are very different projects. Paul Johnson's Devel Coverage Reports beta site is gaining fans, and I hope that it too is not confused with CPANTS or CPAN Testers, as again it's a very different project: one that complements the other two, but with a different goal.

I shall be attending various hackday, workshop and Perl Mongers tech meet events in the coming months, and hope to promote CPAN Testers in some fashion. If you're planning talks involving testing at future events, please get in touch and I'll promote them here too. Happy testing.

Posted by Barbie
on 4th July 2014

I was trying to keep track of which distributions I released for my monthly challenge quest, and it made me think about when I released the first version of each distribution I have on CPAN. That then got me thinking about the first versions everyone has had on CPAN. Seeing as I was already adding to the statistics, I thought I might as well add some more.

And so now you can see all the first version releases to CPAN. It also helps to keep track of all those first versions that happen each month, in case you want to check out what new cool stuff your favourite author is releasing to CPAN. So far this month we've had 21 new distributions (at the time of writing), and we're only 3 days into the month (the stats run in the early hours of the morning for the preceding days).

However, that then got me thinking about the popularity of version numbers, and in particular which version numbers authors used for their first release of a distribution to CPAN. Unsurprisingly, 0.01 is the most popular, but it turns out there is quite a variety before we get down to single figures. Going a little further and looking at all the version numbers used, it was interesting to see a reasonably uniform trend, certainly for the top 14 positions.

If you have any ideas for more CPAN statistics, please post them on GitHub or RT.

Posted by Barbie
on 3rd July 2014

Back at the 2014 QA Hackathon in March, Neil Bowers asked about the frequency with which testers submit reports. We looked at a few different stats, and he blogged about it in "How soon do CPAN Testers start testing your releases?". However, he also gave me the idea to look at some other statistics to do with when authors release their distributions.

This idea is now available as the CPAN Submission Rates pages and graphs on the CPAN Testers Statistics site. When I have a bit more free time, I'm going to look at providing the preceding few ranges, so that for day of the week, for example, I'll include each of the previous 4 weeks individually. This will hopefully show the fluctuations in each period a little more clearly. However, as a first step it does show that, as authors, we have been pretty consistent with our releases over the lifetime of CPAN.

The one graph that does show a noticeable difference is the hour of release. I had expected this to be reasonably even, but the dip around 05:00 UTC and the two peaks around 15:00 UTC and 21:00 UTC did surprise me. I'm guessing the first peak equates to European submissions and the second to American submissions. However, we have plenty of authors in Australia, Japan, China and Korea, so I am surprised that there isn't a similar peak somewhere between 03:00 UTC and 05:00 UTC. It might be interesting to filter the results using the wealth of Acme::CPANAuthors::* modules to figure out which periods each group favours, as sketched below. If anyone fancies patching CPAN::Testers::WWW::Statistics, or even setting up their own statistics page for that, please let me know.
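As a rough starting point for that kind of filtering (a sketch only, not existing CPAN Testers code), one could build a PAUSE-id to region map from a few Acme::CPANAuthors groups and join it against the upload times; the group names below are just examples, and the exact Acme::CPANAuthors methods should be checked against its documentation.

    # Sketch: map PAUSE ids to a regional Acme::CPANAuthors group. The
    # group names are examples; groups that are not installed are skipped.
    use strict;
    use warnings;
    use Acme::CPANAuthors;

    my %region_of;
    for my $group (qw(Japanese Chinese French)) {
        my $authors = eval { Acme::CPANAuthors->new($group) } or next;
        $region_of{$_} //= $group for $authors->id;
    }

    printf "%-9s %s\n", $_, $region_of{$_} for sort keys %region_of;

Joining that map against each author's release timestamps would then give per-region hour-of-day histograms.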

If you have any more ideas for these sorts of statistical analyses, please post them to GitHub or RT.