Planet Perl is an aggregation of Perl blogs from around the world. It is an often interesting, occasionally amusing and usually Perl-related view of a small part of the Perl community. Posts are filtered on perl related keywords. The list of contributors changes periodically. You may also enjoy Planet Parrot or Planet Perl Six for more focus on their respective topics.
Planet Perl provides its aggregated feeds in Atom, RSS 2.0, and RSS 1.0, and its blogroll in FOAF and OPML.
There is life on other planets. A heck of a lot, considering the community of Planet sites forming. It's the Big Bang all over again!
This site is powered by Python (via planetplanet) and maintained by Robert and Ask.

Planet Perl is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. Individual blog posts and source feeds are the property of their respective authors, and licensed independently.
A few housekeeping issues.
1. I may be a day behind on this weekend's competition update. I've completed the Mojo half, but not the Dancer half. I'd rather post the results at the same time, so I'm delaying by one day.
2. After an email from Andreas to all CPAN authors to clean up our author directories, I've deleted about 1000 files. Unfortunately, that accidentally included all production versions of Class::Adapter, breaking Padre and a number of other things. A new 1.07 has been uploaded and this situation should be resolved shortly.
Hate the Perl debugger? Want it to do more? I do. For example, I hate it when I see this:
DB<1> x $before
0  HASH(0x100e37fd8)
   'foo' => ARRAY(0x100c5f1d8)
      0  1
      1  2
      2  4
   'guess' => CODE(0x100dc0d78)
      -> &main::__ANON__[run.pl:13] in run.pl:10-13
   'this' => 'that'
   'uno' => HASH(0x100db9000)
      'this' => 'that'
   'what?' => HASH(0x10088dfc0)
      'this' => 'them'
So I've extended it to allow you to type xx $var:
DB<2> xx $after
{
  foo => [
    1,
    2,
    4
  ],
  guess => sub {
    my $x = shift @_;
    return $x + 1;
  },
  this => "that",
  uno => {
    this => "that",
  },
  "what?" => {
    this => "them"
  }
}
That's the same data structure, but it's much easier to read.
It's built on top of Marcel Grünauer's DB::Pluggable, but I can't release it yet because we've agreed that some of my work should be pushed back into the pluggable layer. Later, I'll make it more extensible, including changing serialisation options.
There are several ways to find a module's version. I know two.
You can use the module in a one liner and print the module's $VERSION variable:
perl -MSVG -le'print $SVG::VERSION'
This gets annoying when the module name gets long:
perl -MWWW::Mechanize -le'print $WWW::Mechanize::VERSION'
Or the insane:
perl -MPOE::Component::WWW::Pastebin::Bot::Pastebot::Create -le'print $POE::Component::WWW::Pastebin::Bot::Pastebot::Create::VERSION'
Another method I've seen people use is to actually cause an error. Try to load the module in a high version that doesn't exist. The error will show what version it is:
perl -MWWW::Mechanize\ 9999
perl -MPOE::Component::WWW::Pastebin::Bot::Pastebot::Create\ 9999
This is pretty good, since it's short to write and you most likely won't find many versions above 9999 (except, perhaps, File::Slurp - last version 9999.13). However, this is a bit confusing to newbies: causing an error on purpose simply to find the version.
Moreover, since it causes a compilation error, you can't easily check multiple versions.
I like this new way better:
$ module-version WWW::Mechanize
1.60
$ module-version --full WWW::Mechanize SVG Moose
WWW::Mechanize 1.60
Warning: module 'SVG' does not seem to be installed.
Moose 0.99
And so on.
If you like it too, you can install Module::Version and have it. There are a few more options there, like input from files and suppressing warnings (--quiet).
A few folks have talked about adding tags to Test::Class. This would allow us to do things like load 'customer' fixtures if something is tagged 'customer', or only run tests tagged 'model'.
Here's what I have working now:
#!/usr/bin/env perl
use Modern::Perl;

{
    package Foo;
    use Test::Class::Most;

    INIT { Test::Class->runtests }

    sub setup : Tests(setup => 2) {
        my $test = shift;
        ok $test->has_tag('customer'), 'We have a customer tag';
        ok !$test->has_tag('foobar'), 'We do not have a foobar tag';
    }

    sub foo : Tests Tags(customer items) {
        my $test = shift;
        ok $test->has_tag('customer'), 'We have a customer tag';
        ok !$test->has_tag('foobar'), 'We do not have a foobar tag';
    }
}
(Note: you don't really want tests in setup, but this is just an example)
Internally it's a bit hackish (due to attribute order and the fact that attribute arguments aren't Perl), but it works. Does this look like a reasonable implementation?
It's not clear to me if overridden tests should inherit parent tags or replace them. The latter is easy (that's what I have now). The former involves walking up the inheritance tree. Implementing both involves figuring out a clean syntax and this is what stumps me (and it would get nasty with MI).
Update: We have a solution. You can both replace and add tags. Tags(...) is a simple assertion of the tags for a method. AddTags(...) checks that we have a parent (unless it's the test class itself) and that the parent can do the method we're in. We then add the tags to the parent's tags (if any).
The only failure case I see here is if the parent had tags you depend on and someone comes in later and deletes them. Hopefully, they should be running the tests and see this.
The Perl QA Hackathon in Vienna started just a few hours ago. Most of the people actually arrived yesterday, and we met in the nearby Centimeter, where we ate and drank quite well, as you can see.
Already at the breakfast table I got some really good ideas from the other people for implementing a WYSIWYG-like feature for Padre.
At the Hackathon itself, people first introduced themselves and said a few words about the projects they are planning to work on. Personally, I have two unrelated issues. I'd like to play with the new Archos 5 internet tablet that runs Android, see how I can put Perl on it, and see how I can use that Perl for testing other applications. I started to collect some information and links on Perl on Android.
The other project is about a tool to allow writing tests by non-programmers. I'd like to move this forward and get help from the other people here both in the concept and in implementing some of the features. I explained the idea to Ovid who directed me to this video about literate functional testing.
Arrived in Vienna last night and walked across the city center to get to the hotel. It's my second time in Vienna and I love how beautiful it is.
Today I'll be giving some love to Test::Class, Test::Class::Most and (hopefully) TAP::Parser. I've finished the Test::Class work. I had the following output in a test run:
[17:32:26] t/001-test-classes.t .............................. 7983/? #
expected 0 test(s) in setup, 1 completed
[17:32:26] t/001-test-classes.t .............................. 7987/? #
expected 0 test(s) in setup, 1 completed
[17:32:26] t/001-test-classes.t .............................. 7989/? #
expected 0 test(s) in setup, 1 completed
It takes our tests almost an hour to get there, so debugging that is annoying. I've submitted a patch which will add the class name to that warning.
Side note: don't put tests in your test control methods (startup, setup, teardown and shutdown). If something fails in those, your test class probably shouldn't even run. Make 'em assertions instead.
I also plan to add "attributes" to Test::Class::Most so you can do this:
use Test::Class::Most
    parent     => 'My::Test::Class',
    attributes => [qw/customer items/];

sub setup : Tests(setup) {
    my $test   = shift;
    my $schema = $test->test_schema;
    $test->customer( $schema->resultset('Customer')->find({ name => 'bob' }) );
    $test->items( $schema->resultset('Items') );
}

sub some_tests : Tests {
    my $test     = shift;
    my $customer = $test->customer;
    ...
}
Basically, I keep seeing test classes where people need data shared across methods and they'll do something like this:
use base 'Some::Test::Class';

my $customer_id = 7;

sub some_tests : Tests {
    my $test     = shift;
    my $customer = some_func();
    is $customer->id, $customer_id, '... ack!'; # don't do this!
    ...
}
Aside from some very annoying timing issues between initialisation and assignment, that's not very OO. How do you inherit $customer_id? You don't. How do you encapsulate that variable? You don't. Adding attributes isn't perfect, but it helps a lot here.
Update: Test::Class::Most 0.05 with attributes is now on the CPAN.
The Perl 6 design team met by phone on 07 April 2010. Larry, Allison, Patrick, Jerry, Will, and chromatic attended.
Larry:
- WHICH may not be a mundane value type
- ObjAt to avoid type name collisions
- tr/// and carps about malformed ranges
- foreach, !!op, $!{}, EOF, and missing punctuation after blocks

Allison:
c:
Allison:
Patrick:
Jerry:
c:
Larry:
c:
Larry:
Allison:
Jerry:
The new and shiny oe1.orf.at is finally online!
As you might expect it's crafted using the finest ingredients of Modern Perl: Catalyst, DBIx::Class, Moose, HTML::FormHandler, KinoSearch. Relaunching the site was a nice project, even though there were some setbacks:
I was forced to switch from Postgres to MySQL (using - the horrors - MyISAM), so I couldn't use any real database features like transactions and referential integrity; the launch date was postponed a few times, so I couldn't help organising the QA Hackathon as much as I wanted (in fact I also can't attend all days, because I want to spend some time with my family before leaving for Berlin / Iceland).
Anyway, after fixing some last post-deployment glitches everything seems to work now. Yay!
March has been a very busy time. Although we weren't able to meet the 1st March deadline, the switch to the HTTP submission process has started. Currently it's still considered Beta, but initial problems appear to have been worked out, and the Metabase is receiving reports thick and fast. So much so that some testers started to ramp up their smoker bots again, forgetting that some were still submitting SMTP reports. You can read David Golden's report of his beta test update.
Last month also saw a switch away from the NNTP ID for reports. This caused a little confusion in places, but it is necessary to support the two feeds for reports before ultimately switching completely to the Metabase, at which point the NNTP ID will no longer be used. As a consequence, some of the tools used with the CPAN Testers website may not have appeared correct. Currently the IDs used to display reports are specific to the cpanstats database, and no longer match the NNTP ID. The GUID used by the Metabase can be used instead, and this can be generated from an NNTP ID using David Golden's CPAN-Testers-Common-Utils distribution. If you spot any instances where there appears to be a mismatch, please let me know, and I'll investigate.
Which brings us to the next phase of CT2.0 development. Now the Metabase is up and running, the cpanstats database needs to use a feed from it to include the HTTP submitted reports. Tests have performed well and all appears to be working as intended. The next step is to integrate the reports into the live websites. The majority of work has been completed, but unfortunately due to unforeseen personal circumstances, the final integration work has been delayed a little. However, we do expect the Metabase reports to be live within the cpanstats database this month.
Normally at the end of each report I provide counts for the number of testers for the month. Due to the move to CT2.0, this figure would currently show only half of the picture, so we'll be holding off this month. Hopefully we can provide better numbers next month.
Keep watching for updates to CT2.0, both here and on David Golden's blog.
Cross-posted from the CPAN Testers Blog.
Update: I am informed that while I may imply a timeline which sees the pragma modules mentioned below taking action before Moose, time-wise Moose acted first and the pragma modules came later.
Last weekend in my first round comparison between Mojo and Dancer, I noted that neither project used strict or warnings.
At the time, I suspected this was done for clarity. After all, it can get annoying when use strict and use warnings get in the way of having a nice, clean synopsis.
It was to my great surprise that I discovered both web frameworks had decided that use strict and use warnings were good enough for everybody, and would silently turn both of them on.
This makes them the third group of modules to decide how I should write my code.
First are your $Adjective::Perl style modules.
These I can live with; playing with pragmas seems quite reasonable when the module is a style pragma itself. By saying "use perl5i" I'm explicitly buying into their view of what code should be written and formatted like.
Then Moose decided that they would turn on strict and warnings as well.
This makes me a bit uncomfortable, since I use Moose for its object model. I don't really want it imposing its views on how I should write the rest of my code.
I can hear you already saying "But wait! You can turn it off with no warnings, but you shouldn't do that because it's best practice (and Best Practice) to always have them both on, and anyway it's only enabled until you say no Moose;".
Or is it? That alone is an interesting question.
Are Moose's views on strictness and warnings able to escape their scope, and will they be imposed on me even when I tell Moose to go away with no Moose;?
Or if they do go away, does that mean I've accidentally been running a whole bunch of code without strict and warnings on by mistake?
But I digress, now where was I... oh right!
<rant>
I appreciate you are trying to be nice and save me two lines, but dammit I'm not paying you (metaphorically) for that, and now I have to THINK instead because the LACK of an option to your code can be meaningful. It's worse than meaningful whitespace, it's a meaningful unknown. And I can trivially automate the production of those "use strict;" or "use strict;\nuse warnings;" (as you prefer) lines in pretty much any hackable editor written in Perl. Automating the thinking you have to do when there ISN'T something in the code is much harder, or impossible.
This kind of thing with Exporter is one of the (four) provably impossible problems that prevent Perl from being parsable. Gee thanks!
From a perception point of view it's the same kind of situation when a Media Company announces they are going to buy a Mining Company. Why? Because the Mining Company has a lot of cash but little revenue, and the Media Company has a lot of revenue but little cash, so they'd "Go well together".
Before I say any more, you should already be a bit suspicious. And it's probably no surprise when you find out that the part-owner boss of the Media Company is also a part-owner of the Mining Company.
But that kind of thing is an obvious form of Conflict Of Interest. Humans are almost universally tuned to spot that kind of thing and see it as a negative.
It's a much trickier situation when the conflict is between Doing Your Job and things like Trying To Be Nice, or things like Clearly You Probably Meant It, So I'll Just Silently Correct That For You. There's a variety of memes in this situation, different mixed perceptions based on your own personal morality.
But that doesn't remove the technical issue that you've conflated two entirely different functions into one module.
So now with Moose if I don't want warnings, but I do want strict, I'm not sure if I need to do this...
use Moose;
no warnings;
...
no Moose;
or this...
use Moose;
# Let's say I'm nice and allow all my Moose definition code
# to follow their warnings policy, because I like their rigorous approach.
no Moose;
use strict;
use warnings;
Neither of these things is particularly pretty, but I'm stuck with the situation because there's a conflict of interest. The Moose authors chose to impose their views in an area outside of their scope, because it's convenient for them and saves them a couple of lines, And Besides Everyone Should Do It That Way.
But as long as it's only pragma modules that change pragmas (plus Moose) it's just an idiosyncrasy, one of those weird little things modules do sometimes.
Except now we have another problem, because now It's A Trend. Everyone famous is doing it. Clearly it's The Right Thing.
So now we see web frameworks doing it. Mojo does it. Dancer does it.
Clearly it's the right thing to do, because chromatic and Schwern and Damian and the Moose cabal are doing it so it must be awesome.
At this point, you're probably preparing a snarky comment about how I'm just curmudgeonly and nit-picking. How I should only turn off warnings when I know a warning is going to happen inside a SCOPE: { no warnings; ... } block, and how we need to set a good example for the less experienced people, and how YOU always want to see the warnings in production, and how warnings create bug reports so you fix more bugs, and so on.
But what about practical issues? What about situations where you need to do big things, complicated things, large scale things in one or many of the dimensions of width, or throughput, or reliability or complexity or code size.
Over the last 10 years, in which 90% of my paid work has been on websites, I can recall three situations in which I found truly important warnings on a production web server that I didn't find in any testing, and that would have led me to a fix I otherwise would have overlooked.
I've found tons of exceptions on production sure, but not that many warnings that mattered.
However, in that same 10 years, I've seen the opposite situation 6-8 times.
I've seen sysadmins blank out a config variable the wrong way, resulting in an undef where there should have been a value, which is checked in an "eq" comparison 20 times per page, each of which produced 2-3k of log file.
Or worse, I've seen this in a foreach ( ... ) { if ( undef eq 'string' ) { ... } } which is operating on several hundred or thousand entries.
Half the time, this happened because someone in the same office, at the same time you were at work, touched something they shouldn't have and uncovered it.
And when you see the load graph on the box spike the next day, you investigate and find it compressing 20gig of log files, all of which contain the same identical warning printed 40 or 50 million times.
But if you aren't so lucky, it happens on a weekend, or at night, or you haven't set something up right in Nagios, and now you've filled your machine's entire /var partition over the weekend, which prevented 2 or 3 other services that need /var too from working, which brought down the service, or the server.
I've seen 10-machine clusters running at high volume overflow every log partition in the cluster at a rate of a gig per host per minute, because a telco outage at night caused a minor backend service to fail, which made a single status string that wasn't checked for defined-ness go undef, and that status hash value was checked in the hot loop.
I've seen horrible UDP network syslog storms, and boxes dead so fast the Nagios Poll -> Human Alert -> Getting Online lag of 15 minutes wasn't enough to catch it and prevent it.
All of it because in a codebase of 50,000 or 100,000 lines, you only have to miss ONE thing in the wrong place to produce a warning. And nobody's perfect.
Now, by all means, I encourage development with warnings on. And I absolutely think warnings should always be on in your test suite, with Test::NoWarnings enabled wherever possible for good measure, and a BEGIN { $^W = 1 } in the test header to really make sure warnings are aggressive.
If there's a configuration option for it, I'll even leave them on through User Acceptance Testing and Fuzz Testing and Penetration Testing and Load Testing and anything else that isn't Production.
In production I don't want to know about mistakes.
Well, that's not entirely true...
I want to know about mistakes in production, but what I want more is that production absolutely positively NEVER goes down. There's no debugging convenience in the world that should result in even the slight risk of turning into a Denial of Service.
The Spice MUST Flow.
If I go down in production, I want it to be for a reason that has never happened before. Ideally involving a three- or four-factor failure.
If a plane crashes into a telco NOC, triggering a complete network outage on my side of the city, and we switch to the disaster recovery site but the power ripples from the plane crash caused a transformer to blow, and the generator fails after 20 minutes because of a critical heat event due to a bird nest in the radiator catching on fire, because the maintenance man was on a 2-month paternity leave and the stand-in techie doing his job wasn't legally qualified to be on the roof with the electrical gear, THAT I can live with.
If an East European mafia takes a shining to me as a blackmail target, and initiates a 50gig/sec botnet distributed denial of service attack, and we haven't set up the DDOS-protection contract because the financial crisis caused our budget to be cut this year, well that I can live with too.
If I can't afford any of that fancy stuff, but the Facebook utility my shoe-string budget startup created turns out to be too popular and despite my best efforts to keep it blazingly fast to prevent this kind of thing the success overloads my server someone just dropped the default Ubuntu on because we needed it up quickly, hey I'm happy to have that kind of problem.
Compared to these kinds of reasons for going down, having a volunteer website administrator who lives in Europe fiddle a setting they shouldn't have while I was asleep, and having the 500gig hard disk overflow with the same identical message repeated over and over 5 billion times really doesn't cut it.
And this same kind of thing seems to happen over and over again about once every 14 months, and only half the time am I lucky enough to do something about it in time.
When my object model is forcing warnings on me, and my web framework is forcing warnings on me, what am I supposed to do?
package Foo;
use MyWebFramework;
no warnings;
use Moose;
no warnings;
use Some::Random::Module;
no warnings; # Can I really be sure they don't enable warnings?
...
Am I supposed to repeat this in every single class?
What about when I want warnings ON, now what?
Unlike exceptions, it's way way harder to catch and manage warnings, to force them always on and force them always off when you are in different environments.
The only way I know of to reliably distinguish between maximum noise and diagnostics and explosions in dev/test/uat, and no noise at all in production (except properly managed exceptions), is to have the code NOT use warnings, and then force it on from the top down in the right environments.
We've seen similar things before, stuff that starts out simple and obvious but just causes pain.
The @EXPORT array was, I'm sure, just a fine idea when it was added. It lets you import whole swathes of functions into your program without that annoying typing.
Of course, since now ANYBODY can fuck with it, if you are trying to write robust code you need to do stupid annoying things like this to avoid accidentally polluting your code.
use Carp ();
use Cwd ();
use File::Spec ();
use File::HomeDir ();
use List::Util ();
use Scalar::Util ();
use Getopt::Long ();
use YAML::Tiny ();
use DBI ();
use DBD::SQLite ();
Why do I need to do that stupid braces shit? Because the alternative is that I have to audit every single dependency to make sure it doesn't export by default, and THEN I also have to trust/hope they don't start exporting by default in the future.
Loading modules the safe and scalable way means doing MORE work than the unsafe and unscalable way.
DBI gets it right. The default way of using DBI that is documented everywhere is superficially more verbose and anal retentive than I need for simple things.
But as the code gets bigger, the code keeps working just as well and just as safely. I would hypothesise that this diligence on the part of DBI and Tim Bunce has in a single stroke kept Perl web applications industry-wide almost entirely free of SQL injection attacks.
The savings in terms of just the admin workload and security spending and security-forced upgrades done on overtime on the first Tuesday of every month have probably justified Tim's entire career.
Has default-import really given us such a large benefit that it overcomes all the times people have to type () and resolved clashing imports of corrupted OO APIs? Is the time saved not having to type ':ALL' really worth all that?
I say no.
And I say that this growing nascent fad to screw around with my pragmas when your module isn't actually a pragma itself needs to be nipped in the bud before it gets worse.
</rant>
While this is perhaps a controversial position (and so it won't be factored into the scoring as part of the competition) I have to say I was greatly impressed that the Dancer guys have offered to implement some kind of configuration option so I can explicitly disable their Dancer-imposed warnings in production (which at least mitigates the worst Real World problem, while retaining the magic pragma behaviour).
Most aspects of the first round between Dancer and Mojo are covered in Alexis' post - a recommended read. However - with your approval, or not - I'd like to add another side of it, our overall developer understanding of the contest.
While it seems fun to "win" something, what we the developers (and I'm assuming it's pretty much the same for the Mojo people) liked most about the competition was that we'd get a complete understanding of the end-user learning experience.
Here are a few things we understood (and most corrected by now):
One really important lesson is this: if a user did something wrong - and they're not complete idiots and have a relatively fine understanding of Perl - it's probably our fault.
Since we put Dancer up on the community development dancefloor, we've accepted quite a few documentation patches. People who email the mailing list or come to the IRC channel and ask for clarifications are - once the issue has been clarified - kindly asked to contribute what they understood back to the documentation, to help others who end up in the same situation. This overlaps the lesson mst tried to teach in a post a while back.
Winning the competition would be a nice achievement but compared to the intellectual might of our adversaries and their immense experience in this field (along with a longer running project), we don't expect to necessarily win the gold here. However, our gold is measured by improving our project: understanding what the user wants, what they understand, what they try to do and where they've failed.
And that is a gold medal which can be shared by more than one party. :)
These links are collected from the Perlbuzz Twitter feed. If you have suggestions for news bits, please mail me at andy@perlbuzz.com.
The next meeting of the Rehovot Perl Mongers will take place on 13 April 2010 in our usual place in the Weizmann Institute. This time Tamir Lousky will talk about the SVG Perl module and how to use Perl to produce really nice Scalable Vector Graphics (the opensource alternative to Flash / Illustrator graphics).
As usual we meet at 18:00 but the actual talk will only start at 18:30.
People who have not been to any of our meetings are also very welcome. If you think about coming but don't know anyone else in the group, feel free to get in touch with me even before the meeting so I can help introduce you to the other participants.
Regarding the content: I'm not sure I'll have time to learn about SVG before the meeting, but a quick search led me to a very old article on Creating Scalable Vector Graphics with Perl and to the SVG Perl tutorial written by the author of the module. You might have some time to try it out even before the meeting.
For further details please see the web site of the Rehovot Perl Mongers and feel free to join our mailing lists.
I know why "date" and "time" are in there, but I suspect it's not common for most folks.
~ $ history | awk {'print $2'} | sort | uniq -c | sort -k1 -rn | head
255 git
174 fg
108 vim
79 prove
58 ack
56 cd
51 ls
42 time
41 rm
31 date
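The pipeline itself is easy to sanity-check on a fabricated history (the commands below are made up; real history output has a leading event number, which is why the original uses $2):

```shell
# Simulate "history" output (event number, then command) and count commands.
printf '%s\n' '1 git status' '2 git log' '3 vim notes' '4 fg' '5 git push' \
  | awk '{print $2}' | sort | uniq -c | sort -k1 -rn | head
```

The most frequent command ("git", 3 uses here) sorts to the top.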
The Perl 6 design team met by phone on 31 March 2010. Larry, Allison, Patrick, Will, and chromatic attended.
Larry:
- :{} keyless adverbial syntax
- :({})
- ..map:{...}
- Junction type again because I couldn't get people to stop capitalizing it
- Each type that autothreads lists like junctions, but is serial and lazy, and is used for its values in list context, not boolean context
- <&foo> regex assertion form to explicitly call a routine, just like <.foo> always calls a method
- <foo> assertion now prefers to call a lexical function if visible, or calls as a method in current grammar if not
- gimme5 now sets the correct xact on || alternations
- LazyMap now always passes through the first result regardless of its associated commit transaction state
- :{} if(...) {...}), or a statement control like 'given' where one isn't expected ($x = given {...})
- :has :foo<>) to list some better options
- Nil list
- :! not followed by an identifier, or pairs with duplicate arguments
- suppose in place of custom try blocks in diagnosing such things as two terms in a row, or unexpected infixes
- suppose to soften the warning about backtick-less embedded comments by not complaining if the supposed comment eats the whole line anyway
- -->) in the check for redundant 'of' types
- Actions.pm from viv so that it can be used by other STD-based AST builders

Allison:
- compact_pool() function

Patrick:
Will:
c:
The CT 2.0 beta test has been running for about three weeks. About 160,000 reports have been submitted by beta testers at an average rate of a little over 5 per minute. It’s also seen some sustained spikes over 150 reports per minute. You can see the log graphics below.
Reports have been received from most major operating systems and versions of Perl.
So far, I’ve fixed several issues relating to character encoding, removed a number of dependencies and fixed a few other oddball bugs. Chris Williams developed a new “zero dependency” report proxy, Test::Reporter::Transport::Socket and metabase-relayd, to allow CT2.0 testing without any Metabase prerequisites on the target perl.
What to look forward to:
That’s all for now. Thank you, again, to all the beta testers for their efforts, and thank you to the Perl NOC administrators for their forbearance and support.
I've just launched a 'white papers' section on Perl.org:
http://www.perl.org/about/whitepapers/
I've set the following as a rough brief for these documents:
Target audience(s):
Goals:
Key points:
Limitations:
If you have corrections or are interested in writing one of the outstanding topics do let me know.
I recently asked Adam Kennedy why he went back to blogging on use.perl.org. He replied:
I stopped posting to blogs.perl.org until I can migrate everything from use.perl over to it.
Anyone want to work on a migration script? I'd do it myself were it not for the Veure project I'm on. You just have to look at the gists in this post or the image on this post to see why the communication is so much richer here. Of course, there are plenty of other reasons, but clearly blogs.perl.org is a great, modern platform for the Perl community and I'd love to see more people promoting it.
Lets Get Ready To Ruuuumblllleeee*cough*splutter* ahem. Sorry about that.
Welcome to the Mojo vs Dancer Top 100 Competition.
Over the next month or so I'll be building a replacement for my prototype CPAN Top 100 website simultaneously using the Dancer and Mojolicious micro-web frameworks.
The Competition Rules
While I do have a fair bit of experience with Perl coding, I will be trying wherever possible to behave in a naive and newbie-ish fashion when it comes to community, support and tools.
I hope that during this process we can get some idea of the experience of a typical end user, who won't know the right people to talk to or the right places to go for help.
One round will occur each weekend. I shall address one area each round, progressing until I have a working application (hopefully two).
While each weekend you will be subjected to my newbie code, during the rest of the week I will be inviting the development teams of both web frameworks to "improve" my code. They do so, however, at risk of LOSING existing points, should they try too hard to show off and create something I don't understand.
The week gap also gives plenty of time for each team to respond to my comments, to deny problems, to clarify mistakes, and to fix, upgrade and do new releases.
For each issue I stumble across on my journey, I shall appoint only one winner. Each week may address more than one issue.
However, while I'll be recording the scores issue by issue, ultimate victory will be based entirely on my subjective personal preference for the one I think will be quickest and easiest for me to maintain.
If you'd like to follow along at home you can checkout the code for each project at the following locations.
Mojolicious - http://svn.ali.as/cpan/trunk/Top100-Mojo
Dancer - http://svn.ali.as/cpan/trunk/Top100-Dancer
Mojo - Getting to Hello World
I have some history with Mojo, being present at (and in a small way contributing to) its birth when Sebastian Riedel left the Catalyst project.
I've even attempted to build a Mojo application before, but was told at the time they weren't quite ready for actual users yet.
Their website is clean and efficient, but practically unchanged since I looked at it the first time.
It's also somewhat broken or at least unmaintained. The "Blog" link just points back to the same page, the "Reference" link points at what looks like a search.cpan.org failed search, and the "Development" link just throws me out to Github (which doesn't seem to really help much if I wanted to write a plugin or something other than hacking the core).
The "Book" link points to another search.cpan error page, and the most recent version at time of writing is 0.999924 which seems weird and makes me wonder how well they run the project.
Although the website doesn't fill me with confidence, the installation process is an entirely different story. One of Mojo's core design principles is to be pure perl and have zero non-core dependencies.
Installation via the CPAN client is fast, simple, and effortless. And I have full confidence that (if I needed to) I could just drop the contents of lib into my project and upload it to shared hosting somewhere.
I hear some rumors that to achieve this they've done rewrites of some very common APIs that work slightly differently, but I won't be looking into this right now. It will be a matter for another week.
To create my initial Hello World I've taken the most obvious approach and just cut-and-paste the code off the front page of the Mojolicious website, then stripped out the param-handling stuff, and modified the rest to something obvious looking. I've also added in strict and warnings, which the sample doesn't have.
Before attempting to run it, I have the following.
#!/usr/bin/perl
use strict;
use warnings;
use Mojolicious::Lite;
get '/' => 'index';
shagadelic;
__DATA__
@@ index.html.ep
<html>
<body>
Hello World!
</body>
</html>
Looking at this code, it seems that everything is template based. This should be a good thing in general, as I'm a heavy Dreamweaver user and don't much like generating pages from raw code.
So far, it seems fairly simple. My main problem is that I have no idea what the hell "shagadelic" does, although I suspect it's some kind of way of saying "done". Whatever it is for, it annoys me enormously and dates the framework to (I assume) the release date of one of the Austin Powers movies. I get the feeling it is going to make Mojo feel more and more dated over time.
And they don't use strict or warnings, which seems a bit iffy.
When I run this helloworld.pl script, I get a handy little block of quite informative help text for my application for free.
C:\cpan\trunk\Top100-Mojo>perl helloworld.pl
usage: helloworld.pl COMMAND [OPTIONS]
Tip: CGI, FastCGI and PSGI environments can be automatically detected very
often and work without commands.
These commands are currently available:
generate Generate files and directories from templates.
inflate Inflate embedded files to real files.
routes Show available routes.
cgi Start application with CGI backend.
daemon Start application with HTTP 1.1 backend.
daemon_prefork Start application with preforking HTTP 1.1 backend.
fastcgi Start application with FastCGI backend.
get Get file from URL.
psgi Start application with PSGI backend.
test Run unit tests.
version Show versions of installed modules.
See 'helloworld.pl help COMMAND' for more information on a specific command.
Running the obvious "perl helloworld.pl daemon" like it says on the website and connecting to http://localhost:3000/ I get a simple working "Hello World!" response first time.
So far so good then, except for the rather dead website. And no need to try any of the support channels yet either.
Dancer - Getting to Hello World
The Dancer website seems quite a bit more enticing than the Mojo website, at least superficially. There's evidence of more attention to some of the visual details, with more design elegance and things like source code syntax highlighting.
Clicking through the links, however, it's clear information is still a bit thin on the ground. And the "latest release" version on the download page is behind the version on CPAN, but not by much.
The website generally has more of a "new and undeveloped" feel to it, compared to Mojo's "mild neglect" feel.
One nice thing about the website is that they've dropped a Hello World example directly on the front page for me to copy and paste.
After some small tweaks for my personal take on Perl "correctness" and legibility (the Dancer guys also don't use strict or warnings...) I have the following.
#!/usr/bin/perl
use strict;
use warnings;
use Dancer;
get '/' => sub { return <<'END_PAGE' };
<html>
<body>
Hello World!
</body>
</html>
END_PAGE
dance;
The Dancer example is smaller and simpler than the Mojo example, and doesn't make template use compulsory. Again, I can't stand this use of non-descriptive functions to end these programs. But at least "dance" is cleaner, is an actual verb, and is a bit less tragic than "shagadelic".
Instead, tragedy strikes for Dancer when I try to install it.
Because it doesn't install. Or at least, it doesn't install on Windows. Or perhaps it's just my Vista machine.
A redirect test is failing with this...
t/03_route_handler/11_redirect.t ............. 1/?
# Failed test 'location is set to http://localhost/'
# at t/03_route_handler/11_redirect.t line 36.
# got: '//localhost/'
# expected: 'http://localhost/'
# Failed test 'location is set to /login?failed=1'
# at t/03_route_handler/11_redirect.t line 44.
# got: '//localhost/login?failed=1'
# expected: 'http://localhost/login?failed=1'
# Looks like you failed 2 tests of 9.
t/03_route_handler/11_redirect.t ............. Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/9 subtests
As a non-expert, that looks pretty serious. Maybe I'd force the install if it weren't something as essential as redirecting that fails. But this is a pretty ordinary feature, it's not working, and forcing in general scares me.
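For what it's worth, a Location header of "//localhost/" instead of "http://localhost/" is the classic shape of a URL built from an empty scheme. Here is a minimal hedged sketch of that failure mode in plain Perl; it is purely hypothetical, not the actual HTTP::Server::Simple code, and the variable names are invented:

```perl
use strict;
use warnings;

# Hypothetical illustration only -- NOT the real HTTP::Server::Simple
# internals. If the "http" part of a redirect URL is derived from a
# value that ends up empty on some platforms, the Location header
# degrades from absolute to protocol-relative.
my %server_info = ();    # imagine the scheme was never filled in here
my $scheme_prefix = $server_info{scheme} ? "$server_info{scheme}:" : '';
my $location      = $scheme_prefix . '//localhost/';
print "$location\n";     # prints "//localhost/", like the failing test
```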
The one saving grace is at least it failed quickly. While not keeping dependencies to zero, they've done a fairly decent job of keeping the dependency list down to a minimum.
But the most damning factor here is not that it failed once, but what I found when I followed up by taking a look at their CPAN Testers results. These show failure rates all over the place, up and down, with some big regressions.
This kind of pattern usually suggests that Dancer is seriously lacking in its QA procedures, or has a complete disregard for platforms and Perl versions other than newer Perls on the operating systems the developers personally use. This makes Dancer a risky choice for me to bet on, because it means it could all go wrong down the line unexpectedly.
So at this point, I'm going to stop with Dancer for this week, having failed to get Hello World working. We'll see if the Dancer guys can address this before next weekend.
Week 1 Results
Best Website - Dancer
This isn't a massive victory, but Mojo's many broken links hurt it, while Dancer shows at least some desire to be pretty, which I take as a hint that it might be easier for me to make my own website pretty too.
Best Installation - Mojo
Zero dependencies and a fast installation that Just Worked contrast enormously with Dancer's failed installation, and its unreliability over time (according to CPAN Testers at least).
If left to my own devices, I would probably already have committed to Mojo at this point, although reluctantly given Dancer's more desirable prettiness.
Best Hello World - Dancer
This one was quite close. Mojo suffered a bit because it forced its templating syntax onto me during Hello World, while Dancer suffered a bit because I had to resort to a heredoc in the Hello World.
In the end I'm awarding this to Dancer because of the pain in my brain that the function name "shagadelic" causes. It might have been cool for the first day or two, but long term I just know this is going to become an eyesore in my code.
Overall Leader after Week 1 - Mojo
Despite Dancer beating Mojo two to one on the individual factors, when the time came to do what I needed to do, Mojo installed quickly, gave me some help in the right place, and ran my Hello World without error or argument.
And Dancer did none of these things.
Clearly, there's some QA work for the Dancer guys to do before next week, and the Mojo guys should probably dust some of the cobwebs off their website at the same time.
Next week, the competition will continue with database and ORM integration.
Until then, hopefully the respective teams will be blogging their responses and hopefully dealing with any issues raised.
Testing is, in some sense, a mess. Part of the problem is similar to the dynamic/static schism: nobody seems to agree on what those terms mean. Case in point: what's "integration testing"? Here's the definition from the excellent Code Complete 2:
Integration testing is the combined execution of two or more classes, packages, components, or subsystems that have been created by multiple programmers or programming teams. This kind of testing typically starts as soon as there are two classes to test and continues until the entire system is complete.
Read that carefully. Look for a flaw.
So, let's say that you're working for a company and you're the only programmer. By the aforementioned definition, you're not capable of doing integration testing because there are no multiple programmers or teams. In fact, the "component testing" definition has a similar flaw. Lone programmers are not capable of it if you accept the "Code Complete" definition.
Mind you, I don't want to use this as an excuse to rip into an otherwise excellent book. There are many areas of testing which are, um, not terribly well defined. BDD (Behavior Driven Development), for example, is often described in rather curious terms to the point where some people admit that they just don't understand it. I've never done it, but I've proposed similar things before and I think I understand it (I think it would be a nice way to kill off FIT testing).
I would love to see a standardised description of different testing techniques along with plenty of hard data (I've never seen this for FIT or BDD) to suggest which give more bang for the buck. For example, are there areas where QA is less useful than others? Having a single repository of this information, along with references, would be great.
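To make the definitional quibble concrete, here is a hedged sketch in plain Perl of a lone programmer doing both kinds of testing against two components they wrote themselves (Tokenizer and Counter are invented names for illustration):

```perl
use strict;
use warnings;

# Two tiny "components", both written by the same lone programmer.
package Tokenizer;
sub split_words { my ($text) = @_; return [ split ' ', $text ] }

package Counter;
sub count { my ($words) = @_; return scalar @{$words} }

package main;

# Unit test: exercise one component in isolation.
my $words = Tokenizer::split_words('one two three');
die "unit test failed" unless @{$words} == 3;

# "Integration" test: exercise the two components together.
# Perfectly possible without multiple programmers or teams.
die "integration test failed"
    unless Counter::count(Tokenizer::split_words('a b c d')) == 4;

print "all tests passed\n";
```

By the Code Complete definition this second check wouldn't count as integration testing at all, which is exactly the flaw in the wording.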
I have completed a migration of the Module::Build repository from Subversion to git.
The new repository is available publicly on github.com:
I have done my best to clean up the merge/branch/tag history, but have not bothered to clean up empty commits left over from the original cvs2svn conversion. If anyone sees any glaring errors please let me know.
I hope this migration makes it easy for people to contribute to Module-Build. I was pleasantly surprised to find that within 12 hours of it being published on github, it was already being watched by several people.
At the very least, it will make working on feature branches much easier to manage, which will make it easier to experiment with new ideas without affecting the main line of development.
If anyone would like to start applying patches from the Module::Build bug queue, or creating patches for other open tickets, that would be a huge help.
I will be “closing out” the old Subversion repository with a pointer to the new location shortly.
The CPAN Leaderboard shows CPAN authors ranked by the number of modules — but use this with care, as it doesn't say anything about the quality of an author's distributions.
If CPAN was an arcade game, its high score screen might look a bit like this:
The training courses for this summer's YAPC in Pisa have been announced, and my course on Modern Perl has been chosen. It's a one-day course on August 2nd (just before the conference). It costs €180. You'll be able to book once the payments system on the conference web site goes live.
Here's the description of the course from the YAPC site:
This course introduces the major building blocks of modern Perl. We'll be looking at a number of CPAN modules that can make your Perl programming life far more productive.
The major tools that we will cover will be:
- Template Toolkit
- DBIx::Class
- Moose
- Catalyst
- Plack
We'll also look at some other modules including autodie, DateTime and TryCatch.
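As a small taste, autodie (shipped with the core distribution since 5.10.1) turns the usual `open ... or die` boilerplate into exceptions. A hedged sketch, using a deliberately bogus file path:

```perl
use strict;
use warnings;
use autodie;    # built-ins like open() now throw on failure

# With autodie there is no need to write "open ... or die ...":
# a failed open raises an exception that eval can trap.
my $opened = eval {
    open my $fh, '<', '/no/such/file/for/this/example';
    1;
};
if (!$opened) {
    print "open failed as expected: $@";
}
```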
There are several other good courses running both before and after the conference. I'm sure there'll be something that you'll find interesting.
N.B: This is not an April Fool's joke!
The Perl 6 design team met by phone on 24 March 2010. Larry, Allison, Patrick, Jerry, and chromatic attended.
Larry:
- === and eqv
- parcel parameter syntax
- R metaoperator does not change associativity
- trusts traits do not extend to child classes, and moritz++ specced it
- radcalc to make :16<.BABEFACE> easier
- allow ** as part of radix literals
- :: doesn't correctly suppress relexing of multi tokens
Patrick:
Allison:
Jerry:
c:
The Perl 6 design team met by phone on 17 March 2010. Larry, Allison, and chromatic attended.
Larry:
- *, including assignment
- 1..*
- Z to go with X metaop; documented that X and Z desugar to higher-order methods, crosswith and zipwith
- zip/cross dwimmily with non-identical ops; possibly creating a real use case for surreal precedence
- viv again
- gimme5 and viv --p6 so it exactly reproduces STD.pm again
- viv --p5 toward replacing gimme5
- gimme5 understands $ hard reference
- .meth I" is a two-terms-in-a-row error
- y/// or tr/a-z/A-Z/ syntax probably indicates p5-think, not missing method parens
- q-like sublanguage for tr/// string parsing
- MONKEY_TYPING constraint on augment and supersede declarators
- Z metaoperator
- %*RX structure at parse time
- $?FOO variables to parser's $*FOO dynamic variables
Allison:
c:
Allison:
set_want c:
Adam Kennedy has declared a contest between Dancer and Mojolicious. Seems to me like a great idea. We'll both get a chance to learn from each other, show our strengths and try to work on our exposed weaknesses.
One major issue about Mojolicious is that they decided to do everything in core (at least from perl 5.10.1), without any additional dependencies that aren't in the Perl core (while we at Dancer try to keep dependencies to a minimum). This has many disadvantages (reinventing the wheel, handling the same issues over and over, needing a lot of knowledge to implement everything) and a lot of advantages (easier deployment, easier packaging for vendors, more control). This post is about one of those advantages.
Right now, Dancer cannot participate in the challenge. The reason is that Adam uses Windows, and Dancer doesn't install on Windows. It isn't that Dancer is not Windows-compatible; it's just that it uses HTTP::Server::Simple::PSGI, which uses HTTP::Server::Simple, which fails a test on Windows. Thus, Dancer cannot be installed on Windows (at least with StrawberryPerl, which is what Adam - and many others - use).
Mojolicious does not need to be concerned about this, since it doesn't depend on HTTP::Server::Simple. That's a pretty big advantage. I can honestly say I'm not smart enough to be able to write everything the way the Mojolicious people did. Kudos to them!
Instead, we had to make HTTP::Server::Simple work to be able to participate in the Kennedy Top100 V2 Challenge(tm?). HTTP::Server::Simple is written by Jesse Vincent, and I'll say this: considering how busy I assume he must be, he's very responsive. We discussed the need to fix it, but neither of us could fathom the exact problem with the test script that failed. The code itself seemed fine.
The entire Dancer team decided to try and fix the test. This was very encouraging since it showed how we all cared about the Windows users (which none of us were, really) being able to work with Dancer. (I can't say we don't want to win the challenge too :)
I decided to play with it over the weekend and act as if I had any chance of fixing it. After playing with it a bit, emailing Jesse a few times with several observations I'd made, and playing with it some more, I was actually able to fix it! Finally, Dancer can be installed successfully on Windows (StrawberryPerl).
I sent a patch to Jesse, and hopefully the next version of HTTP::Server::Simple will be out soon so we can step up to the plate in the challenge.
Thanks go to Jesse (for being such a nice guy and actually answering my many emails in such a short time) and to Adam (for starting the challenge, an idea I really like).
May the better framework win, may we all learn from this experience and may the code be with us!