
Planet Perl - an aggregation of Perl blogs
The Wayback Machine - http://web.archive.org/web/20091028132821/http://planet.perl.org:80/

Planet Perl

October 28, 2009

Dave Cross: Learn Perl in London

I'm astonished to realise that I hadn't already mentioned the three days of Perl training that I'm going to be running in central London next month. So please excuse a quick plug.

The training runs from the 24th to the 26th of November, and there are three separate one-day courses covering beginner, intermediate and advanced topics. If you're wondering what subjects will be covered on each of the days, there's a handy list on the web site. The courses are designed to be standalone, but they also work well in combination if you want to do two (or even all three) of them.

The courses will be held at the Imperial Hotel in Russell Square. Each day's course costs £339.25 (inc VAT). That includes lunch.

The classes will be run lecture style, but there will be plenty of time in the breaks for you to try stuff out if you want to bring a laptop with you.

If you want to improve your Perl (or you want your colleagues to improve their Perl) then please think about signing up.

by Dave Cross at October 28, 2009 12:26 UTC

Adam Kennedy: DBD::SQLite 1.26_06 - Something close to "release candidate"

http://svn.ali.as/cpan/releases/DBD-SQLite-1.26_06.tar.gz

I'm happy to report that we now believe all major bugs are (well, might be, anyway) resolved in trunk.

So this dev release can be considered something close to a release candidate, and I encourage you to download and test it.

Of particular note are any problems you might encounter caused by having foreign keys turned on by default...

by Alias at October 28, 2009 12:06 UTC

October 27, 2009

Curtis Jewell: Funny thing seen (on the p5p list this time, rather than IRC)...

"I believe, but have not Googled..."

(personally, is that really faith? But then, we are talking Perl here!)

October 27, 2009 15:21 UTC

Curtis Poe: Criticizing Academia -- Oops

I've been known to criticize universities for churning out students who don't have the basic skills needed in industry. Now I need to step back and rethink that. Joel Spolsky has a particularly scathing blog post about Computer Science education, and at first blush, I was tempted to agree with him. Then I read Joel Spolsky - Snake-Oil Salesman, immediately thought back to my own experiences with academia, and had to revisit my thinking.

About a decade ago, I was doing some work with the Alaska Department of Education (side note: if you want to see an example of "dysfunctional", study Alaska politics -- and that's not an oblique reference to Palin). The Department was thinking about creating a Web site that allowed instructors to share lesson plans. Naturally, I learned quite a bit about what was involved. While the people in the Department were dedicated professionals, they were trying to build cathedrals while handcuffed.

Case in point: Alaska was spending a lot of money on education and getting poor results, so the legislature passed a law offering early retirement to the best paid teachers. Many of them took this offer, but grades plummeted. Turns out the best paid teachers were often the best teachers. Who knew?

It's awfully tough to figure out how to maximize return on investment with education. "Pay for performance" schemes are often outlined, but usually by people who have no idea how to measure performance in academia. You can't simply pay for higher grades -- and if you can't see the problem with that, please stop voting :)

Another popular "pay for performance" idea is standardized tests. Give all students the same test and see how they do. Give the best pay to the teachers (or school districts) whose students do the best on this test. One teacher in Oregon lamented to me that she teaches Russian immigrant students. They can't do as well on these tests -- English isn't their first language -- and thus the teachers who take these particularly difficult assignments are looking at less pay for more work. Hmm ...

Another teacher, a friend of mine from Texas, is upset because so much of her time is now spent on "teaching the test". She complains that she struggles to teach her students new skills or critical thinking because she has to spend all of her time figuring out what those tests will ask and prepping the students for them. Creative teaching? Forget it. If she doesn't teach the test, her students will do worse on it, and this threatens her job because it threatens her school system's budget. With a lower budget, fewer teachers can be hired. You know who the school system would have to let go.

The "Snake Oil" rebuttal to Joel seemed spot on and from my experience with academia, had the ring of truth (though, of course, I can offer no evidence). Academia is hard. You can't just teach students a narrow set of skills. You have to teach them a broad set because you don't know what will be relevant tomorrow. You don't know what will be relevant to the student. The student won't know what's relevant to the student (which is why we teach algebra to high school students who hate it). It's easy to criticize something we're not intimately familiar with. I should remember that more.

by Ovid at October 27, 2009 13:41 UTC

Curtis Poe: Switching and resizing windows in vim

I've gotten really tired of manually switching and resizing windows in vim. You can create more than one window in vim with :split or :vsplit, but navigating between them is often an annoying combination of "control-w, then the direction key towards the window you want to go to". Of course, if you hit the wrong direction key, you don't move anywhere and you silently curse because you were hitting "h" instead of "j". The windows also take up large chunks of real estate, so they're tough to read, but you don't want to keep only one window open and switch buffers all the time. Basically, window navigation can be a pain (even with BufExplorer and other plugins).

What some vim users aren't aware of is that you can type "control-w, control-w" and it will simply cycle you to the next window. You may have to do this a few times if you have several windows open, but since you don't have to remember direction keys, it's quick and easy. Thus, I wrote this mapping (it does something similar to what other developers on our team do, but fits my work style a touch better):

noremap <Leader>w <C-W><C-W>:res<cr>

In other words, it automatically switches to the next window and resizes it. You can still see the other windows, but they're out of the way.

by Ovid at October 27, 2009 10:52 UTC

chromatic: Perl 6 Design Minutes for 14 October 2009

The Perl 6 design team met by phone on 14 October 2009. Larry, Allison, Patrick, and chromatic attended.

Patrick:

  • have a question
  • I'm not happy with the :lang attribute in regexes
  • I'll explore in my own direction
  • Larry can complain about that when he gets back

Larry:

  • I wouldn't expect anything less

Patrick:

  • worked more on nqp_rx
  • PAST- and POST-based
  • going exceedingly well
  • I have almost every feature that PGE provides
  • what's left is minor
  • either we don't need them or they're easy to add when necessary
  • the new engine passes about 3/4 of PGE's test suite
  • the rest is minor
  • specifying characters by number or Unicode name, for example, \x and \c
  • today's task is to make the new engine self-hosting
  • the regex parser should be able to parse itself
  • that shouldn't be hard to do at all
  • that'll also speed things up quite a bit
  • it's currently two or three times slower than PGE when running the tests
  • that's because the regex parser is slower
  • the code that does the matches is much faster
  • that's a nice tradeoff; we generally parse regexes once and run them a lot
  • adding features to the new engine has been incredibly easy
  • many of them tend to be five or six lines at most
  • it's a bunch of short code
  • I'll continue working on that for the next week
  • I expect to write the new version of NQP in the next couple of days

Allison:

  • working on the pcc_reapply branch of Parrot
  • lots of fixes there
  • went down from 82 failing tests on the corevm target to 18 failing tests on the full Parrot test suite
  • PGE now compiles and runs just fine on the branch
  • still working on the last few failures
  • started doing language testing as well
  • Cardinal and another language pass on the branch with no problems
  • have CallSignature optimizations ready to merge in
  • waiting for one feature from chromatic on that
  • also, quantum computation is a lot of fun

c:

  • refactored CallSignature PMC to optimize it
  • one last change to make
  • where did the 12,000 passing Rakudo tests come from?

Patrick:

  • mostly trig, transcendental, and math functions
  • they're automatically generated for Int, Num, and Complex
  • it's a nice jump on the graph
  • we need to put out a nice message about what happened
  • I'll talk more about the percentage of the suite passing than the number of passing tests
  • the argument "You can add thousands of passing tests and not make progress" makes some sense
  • that's why I'll talk about percentage of the suite from now on
  • I'll report for Jonathan too
  • he's redoing parameter binding in a branch
  • currently, that's fairly slow in Rakudo
  • after we call a subroutine, we perform the type checks and throw exceptions if necessary
  • we do binding from incoming parameters to lexicals there too
  • that all required a lot of method calls
  • every subroutine call in Rakudo caused at least four or five Parrot method calls
  • he's redone parameter binding
  • primarily in C
  • he's created a set of dynops to make it much quicker
  • Rakudo's Subroutine PMC type can do the type checking before invocation now
  • he's seeing a significant performance improvement there
  • the pcc_reapply branch in Parrot will also simplify what he's doing there
  • he expects to merge before the release next week
  • it's been outstanding work
  • do we have an ETA for the branch landing?

Allison:

  • very shortly after 1.7
  • within minutes or hours

Patrick:

  • that fits our planning well
  • Parrot releases 1.7 next Tuesday
  • Jonathan will probably merge his branch in then
  • his branch will use Parrot's existing calling conventions
  • if someone tries to build Rakudo today or Jonathan's branch against pcc_reapply, Rakudo will likely fail
  • we're fine with that
  • don't consider it a blocker for Parrot
  • we'll adapt to whatever you do
  • I expect that Rakudo's trunk will run against the 1.7 release until 1.8
  • we have two major projects
  • one is converting our argument handling to Parrot's new PCC
  • the other is completely replacing Rakudo's grammar to use the new NQP
  • almost all of our work will take place in branches, not trunk
  • we'll merge back to trunk when we have everything working

by chromatic at October 27, 2009 02:05 UTC

Adam Kennedy: Strawberry Professional, Padre, and Ashton's Law

I love polls and surveys, because they tend to clarify almost any situation enormously. Data has a way of stripping away doubt and uncertainty.

Strawberry Perl exists in the form it does largely because of Michael Schwern's Perl Community survey. His discovery that 10% of respondents worked primarily on Windows, but 40% of respondents worked on Windows "At least once a year" essentially defined the target market, and brought about the current Strawberry mission statement.

"A 100% Open Source Perl for your Windows computer that is exactly the same as Perl everywhere else"

With the about-to-be-released Strawberry October 2009 release, I think we've finally met that challenge. The entire core crypto module set now builds properly, finally giving Windows support for https:// URLs and Module::Signature support (a long-time bugbear). Thanks to hard work by kmx, we can now also include C libraries in a manner that is protected from PATH collisions.

As far as the 40% of people that use Windows once a year goes, Strawberry Perl is now basically "done".

So it's time to turn our attention to the remaining 10% that use Windows every day, and particularly the newbie subset of this group. These people now make up the majority of people that we see arriving in the #win32 channel.

As a general rule, they arrive in the #win32 support channel having successfully downloaded and installed the headline Strawberry release from the front page. But because it is installed Unix-style as just the language, they have no idea what to do next.

There's a notable expectation amongst maybe a third of arrivals that the language comes with an IDE built-in, and questions like "How do I run a program" are common.

Looking at the recent Perl IDE and Editor Poll results, you can see the result of this lack of assistance.

Looking at the lower-percentage results, you can see that a significant number of people are using generic programmers text editors for Perl work, when superior options are now available.

While Padre is not yet a fully capable alternative to EPIC and Komodo, and will likely not be competing with vi(*) or Emacs any time soon, anyone currently using Ultraedit or Notepad++ (the two biggest Windows-specific editors) could easily switch to Padre and see productivity improvements (particularly in the case of Notepad++).

Taking over these groups would triple Padre's market share even among the Padre-biased respondents, give us a well-identified target market to aim at, and mean thousands more potential Padre contributors.

We know from Ashton's Law ("Just make it fucking easy to install") that the ease of procurement of software is twice as important to market share as the quality of the software.

So as good as Padre is becoming in its own right, we can amplify the adoption rate enormously by simply making Padre so easy to procure that users don't even need to think about it.

Given the clear and significant benefits to both the Strawberry and Padre projects that would come from working more closely together, and the increased maturity and reliability we are seeing in both codebases, I think the time has finally arrived to start serious work on the long-awaited "Chocolate Perl" concept.

With regards to the name itself, a number of people have mentioned that it would be a bad idea to go with this as the real name of the product, due to brand confusion. And I agree.

So while we will keep the name Chocolate as a codename in the short term, the more likely product name will be something like "Strawberry Perl Professional" (in keeping with the principle of least surprise) and will become the primary download link on the front page of the website.

The current Strawberry Perl product will probably be renamed to something like "Strawberry Perl Lite" or "Strawberry Perl Express" and will potentially be moved off the front page to prevent confusion and limit its users to people who understand specifically what they need.

Looking at my currently installed Strawberry Perl, which contains a reasonable sample of the additions that will be included in Chocolate, I would say we can expect a download size of around 100meg and an installation footprint of around 500meg (not including data cached by the CPAN client), making Perl truly "Enterprise Grade" software :)

On top of a bundled copy of Padre, the list of included modules will be long and extensive, and will include four main categories.

1. A set of all significant (working) Win32:: modules, as well as modules for Excel integration and so on.

2. The most popular module sets for specific common project types (BioPerl, Catalyst + plugins, POE + plugins, PDL, SDL, Imager/GD, WWW::Mechanize, and so on)

3. All of the CPAN Top 100 that install on Windows.

4. A set chosen from the most-downloaded modules off http://cpan.strawberryperl.com/

Given how long this installer will take to build, the availability of a true production-grade Perl 5.10.1, and the need to preserve CSJewell's sanity, I would anticipate that Chocolate will only be released as a Perl 5.10 variant.

Anyone savvy enough to know they need 5.8.9 should be reasonably capable of using the current lighter Strawberry product and installing the additional required modules themselves.

Since most of the pieces needed for Chocolate now install quite well (on the October release) I imagine we can produce a fairly decent beta by Christmas, with the first production version arriving as part of the January 2010 Strawberry release.

by Alias at October 27, 2009 00:21 UTC

Paul Fenwick: Perl 5.11.1

I've been behind in my blogging; time seems to fly when one is having fun, and I've been having a pretty good time recently. Most of it's involved working with people and science, rather than technology. After I finish my taxes (not yet overdue), this may change.

In the meantime, I can't go without mentioning that Perl 5.11.1 has been released. This isn't a stable version of Perl, but it's a point release on the way toward 5.12.0. I'm quite excited about 5.12.0 for many reasons I'll go into later, but they all involve modernisation of the language.

Of note in 5.11.1 (and hence 5.12.0) is that deprecation warnings are turned on by default. This isn't scary; it means that if you've got old code that's going to break in the future, then Perl will start warning you about that well in advance.

Of other note is a minor point, and that's the ability to include version numbers in package declarations. One can now write package Foo::Bar 1.23, rather than having to do cumbersome things with the $VERSION package variable.
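To make the difference concrete, here is a minimal sketch (Foo::Bar is a hypothetical module name):

```perl
# Old style: declare the package, then set the version via the
# $VERSION package variable that the toolchain goes looking for.
package Foo::Bar;
our $VERSION = '1.23';

# New style (Perl 5.11.1 and later): the version lives right in
# the package declaration itself.
package Foo::Bar 1.23;
```

Either way, callers can still query it the usual way, with Foo::Bar->VERSION.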

Posted: 27th October 2009.


October 27, 2009 00:00 UTC

October 26, 2009

Gabor Szabo: Perl Editor and IDE Poll results

A few days ago I announced a quick poll to find out what editor or IDE people are using when writing Perl code. The poll is now closed. There were 3,234 answers, but as multiple answers were allowed, the number of respondents is lower.

The data of the poll can be found under Perl IDE and Editor Poll, October 2009 on the Perl IDE web site, along with the raw CSV files. The data does not allow deep analysis, so here are just a few quick observations.

The traditional Unix editors (vi/vim/gvim and emacs) got almost 50% of the answers. Taking into account that the number of people who answered is probably considerably lower than the number of answers, this means that well over 50% of the respondents use one of those editors. For example, I marked both Padre and vim, and I know I was not alone in this.

On the other hand I assume - without any proof - that vim and emacs use is much higher in the core Perl community than among people writing Perl in companies without a connection to the community. As quite likely more "community people" answered the poll than "non-community people" this skews the data in favor of vim/emacs.

I was surprised to see Padre getting 101 votes. I did not think so many people already use Padre.

Padre got more or less the same number of "votes" as TextMate and Komodo Edit, which probably means that for all the other editors only a small fraction of the users voted, while for Padre every one of its users voted. I don't think that 3% of the people writing Perl are using Padre. I don't think 3% have even heard of it.

Anyway, some more thought is required on how to understand this data.

by Gabor Szabo at October 26, 2009 23:40 UTC

Curtis Poe: Syntax Highlighting in Pod::Parser::Groffmom

I decided that I really needed syntax highlighting in Pod::Parser::Groffmom. I have an example at Testing with Test::Class. Note that the Perl examples are now colored. It's not what everyone would like, but it works.

To handle syntax highlighting, you just do this:

=for highlight Perl

  sub add {
      my ( $self, $data ) = @_;
      my $add = $self->in_list_mode ? 'add_to_list' : 'add_to_mom';
      $self->$add($data);
  }

=end highlight

This turns on syntax highlighting. Allowable highlight types are those supported by Syntax::Highlight::Engine::Kate. We default to Perl, so the above can be written as:

=for highlight

  sub add {
      my ( $self, $data ) = @_;
      my $add = $self->in_list_mode ? 'add_to_list' : 'add_to_mom';
      $self->$add($data);
  }

=end highlight

Syntax highlighting is experimental and a bit flaky. Some lines after comments are highlighted as comments. Also, POD inside verbatim (indented) blocks highlights incorrectly. Common_Lisp is allegedly supported by Syntax::Highlight::Engine::Kate, but I was getting weird stack errors when I tried to highlight it.

Also, don't use angle brackets with quote operators like "q" or "qq". The highlighter gets confused. I've not filed any bug reports as I've no idea if the errors are mine or the syntax highlighting module.

by Ovid at October 26, 2009 23:20 UTC

Curtis Poe: Using subtests

In writing Pod::Parser::Groffmom, I decided to start using the new subtest feature in Test::More. Since I added that, I may as well eat my own dog food.

Why would you want subtests? As test suites grow in size, you often see stuff like this:

{
    diag "Checking customer";
    ok my $customer = Customer->new({
      given_name  => 'John',
      family_name => 'Public',
    }), 'Creating a new customer should succeed';
    isa_ok $customer, 'Customer';

    can_ok $customer, 'given_name';
    is $customer->given_name, 'John',
      'given_name() should return the correct value';
    # ... and so on
}

Programming like this is sometimes useful when we want to:

  1. Locally override a subroutine, method or variable.
  2. Create new variables without them leaking to a file scope.
  3. Group a bunch of tests.

Points 1 and 2 are obvious, but what about 3? A sure sign of a desire for grouping comes when you see test output like this (from my Sub::Information tests):

ok 5 - Sub::Information->can('name')
ok 6 - ... and its helper module should not be loaded before it is needed
ok 7 - ... and it should return the original name of the subroutine
ok 8 - ... and its helper module should be loaded after it is needed

That's an example of writing tests in a narrative style (something chromatic taught me) so that the output can be (somewhat) human-readable. It's also a case of me writing a test in such a way that I have four assertions logically grouped together to test the behavior of a single feature (think "xUnit"). By grouping tests this way and by encapsulating our scope, we can often more easily refactor our tests in a way that makes sense. So I decided to use subtests. Here's what it looks like:

#!/usr/bin/env perl

use strict;
use warnings;

use Test::Most tests => 5;
use Pod::Parser::Groffmom;

my $parser;

subtest 'constructor' => sub {
    plan tests => 2;
    can_ok 'Pod::Parser::Groffmom', 'new';
    $parser = Pod::Parser::Groffmom->new;
    isa_ok $parser, 'Pod::Parser::Groffmom', '... and the object it returns';
};

subtest 'trim' => sub {
    plan tests => 2;
    can_ok $parser, '_trim';
    my $text = <<' END';

this is
text

    END
    is $parser->_trim($text), "this is\n text",
      '... and it should remove leading and trailing whitespace';
};

subtest 'escape' => sub {
    plan tests => 2;
    can_ok $parser, '_escape';
    is $parser->_escape('Curtis "Ovid" Poe'), 'Curtis \\[dq]Ovid\\[dq] Poe',
      '... and it should properly escape our data';
};

subtest 'interior sequences' => sub {
    plan tests => 6;
    can_ok $parser, 'interior_sequence';

    is $parser->interior_sequence( 'I', 'italics' ),
      '\\f[I]italics\\f[P]', '... and it should render italics correctly';
    is $parser->interior_sequence( 'B', 'bold' ),
      '\\f[B]bold\\f[P]', '... and it should render bold correctly';
    is $parser->interior_sequence( 'C', 'code' ),
      '\\f[C]code\\f[P]', '... and it should render code correctly';
    my $result;
    warning_like { $result = $parser->interior_sequence( '?', 'unknown' ) }
    qr/^Unknown sequence \Q(?<unknown>)\E/,
      'Unknown sequences should warn correctly';
    is $result, 'unknown', '... but still return the sequence interior';
};

subtest 'textblock' => sub {
    plan tests => 2;
    my $text = <<' END';
This is some text with
  an embedded C<code> block.
    END
    my $expected = <<' END';
This is some text with
  an embedded \f[C]code\f[P] block.

    END
    can_ok $parser, 'textblock';
    eq_or_diff $parser->textblock( $text, 2, 3 ), $expected,
      '... and it should parse textblocks correctly';
};

(Note that the top-level plan only lists five tests because each subtest counts as a single test.)

And the output:

$ prove -lv t/internals.t
t/internals.t ..
1..5
    1..2
    ok 1 - Pod::Parser::Groffmom->can('new')
    ok 2 - ... and the object it returns isa Pod::Parser::Groffmom
ok 1 - constructor
    1..2
    ok 1 - Pod::Parser::Groffmom->can('_trim')
    ok 2 - ... and it should remove leading and trailing whitespace
ok 2 - trim
    1..2
    ok 1 - Pod::Parser::Groffmom->can('_escape')
    ok 2 - ... and it should properly escape our data
ok 3 - escape
    1..6
    ok 1 - Pod::Parser::Groffmom->can('interior_sequence')
    ok 2 - ... and it should render italics correctly
    ok 3 - ... and it should render bold correctly
    ok 4 - ... and it should render code correctly
    ok 5 - Unknown sequences should warn correctly
    ok 6 - ... but still return the sequence interior
ok 4 - interior sequences
    1..2
    ok 1 - Pod::Parser::Groffmom->can('textblock')
    ok 2 - ... and it should parse textblocks correctly
ok 5 - textblock
ok
All tests successful.
Files=1, Tests=5,  2 wallclock secs ( 0.03 usr  0.01 sys +  0.29 cusr  0.07 csys =  0.40 CPU)
Result: PASS

That quickly revealed an annoyance. Using subtests surprised me even though I created them! Specifically, having to specify a plan for every subtest is frustrating, because I don't know the number of tests before I've written them. Thus, I have to use no_plan for each subtest and then switch it afterwards.

I think a better strategy is clear: if no plan is included in a subtest, an implicit done_testing should be assumed. Thus, you could write subtests without specifying a plan but still have a bit of safety. I think I know how to implement this, and it would make test authors' lives simpler.
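In the meantime, a workaround that already works is to skip the per-subtest plan and end each subtest with an explicit done_testing() (available since Test::More 0.88). A minimal sketch; the subtest name and assertions here are made up for illustration:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More tests => 1;

# No "plan tests => N" inside the subtest: the trailing done_testing()
# validates however many assertions actually ran.
subtest 'string helpers' => sub {
    is uc('ovid'),      'OVID', 'uc() upper-cases correctly';
    is ucfirst('ovid'), 'Ovid', 'ucfirst() capitalizes correctly';
    done_testing();
};
```

You still get a safety net: if the subtest dies before reaching done_testing(), the subtest (and hence the file) fails.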

by Ovid at October 26, 2009 13:02 UTC

Curtis Poe: Why Do We Believe Random Assertions?

So I wrote a blog post about anecdote-driven development and I confess that not everyone was swayed by my opinion. Seems that lots of people like to introduce themselves as "hi! I'm captain of the USS Make Shit Up" and then start tossing out "facts" (I can be one of these people when I'm not careful). Here's an antidote:

What we actually know about software development, and why we believe it's true, by Professor Greg Wilson of the University of Toronto.

One bit I appreciated, from another presentation of his, is the debunking of the "best programmers are X times more productive than the worst" myth. He cites "28 times" as a commonly used figure, but I usually hear "10". However, he lists the studies for this information and they usually find the best programmers are only 5 times more productive than the worst and this is consistent with other fields.

I love the idea that we can use evidence rather than guesswork, but I doubt this idea will prove popular any time soon.

by Ovid at October 26, 2009 07:34 UTC

October 25, 2009

Curtis Poe: Pod to MOM format

Having struggled repeatedly with LaTeX, I gave up. That's why github now hosts Pod::Parser::Groffmom. This is an undocumented module (I just started hacking on it) which produces lovely MOM output suitable for groff -mom. That will transform it into PostScript for viewing with gv, Preview.app, or anything else which can read PostScript. You can see a sample at Slideshare: I used this module to transform my Logic Programming In Perl article to a lovely PDF. There's virtually no control over output (patches and docs welcome :), but it works for me. It's only been tested on Mac OS X Snow Leopard, but I imagine that any system with a modern 'groff' should be able to read the output.

Now I suppose I should write some docs.

by Ovid at October 25, 2009 12:21 UTC

October 24, 2009

chromatic: Perl 6 Design Minutes for 07 October 2009

The Perl 6 design team met by phone on 07 October 2009. Larry, Allison, Patrick, and chromatic attended.

Patrick:

  • did more work on the regex engine
  • coming along exceedingly well
  • new engine written primarily in NQP
  • started in a new GitHub repo
  • the end goal is to come up with an implementation with built-in support for regexes and grammars
  • current NQP supports neither
  • NQP will be the component that handles regex support
  • it'll be much more maintainable written in NQP
  • pleasantly surprised at how much easier many of the constructs are to write in this form than in PIR
  • have a stage one prototype for protoregexes
  • necessary to implement the grammar for parsing Perl 6 regexes
  • have most of the base classes for Cursors and Match objects
  • doesn't exactly match Larry's STD
  • but I expect STD to match after Larry sees how I do it
  • how's that for hubris?
  • continuing to work on that
  • would like to have a self-hosted NQP by this time next week
  • mostly figuring out design issues, bootstrapping, and where the pieces go
  • had my breakthrough on that on Monday
  • now it's just however much time I can sit in front of the keyboard

Allison:

  • worked on the PCC branch this week
  • very productive
  • the hackathon was a huge motivation for me
  • most of the work I did was in the days before the hackathon
  • I knew that if I could clear away known issues, I could make other people more productive
  • wasn't coding as much as I was helping other people code during the hackathon
  • both worked out well
  • definitely like the hackathon idea
  • working with Tene on finishing return handling code
  • that's the last piece
  • potential blockers are class time
  • still working on getting a good space for work at University between classes
  • too much time and trouble to go home and back

Larry:

  • went to the LLVM conference with diakopter
  • very informative
  • forwarded chromatic's questions to Chris Lattner
  • some good preliminary information
  • worked on the semantics of warning exceptions
  • classified them as control exceptions
  • added a statement prefix quietly to trap warning exceptions
  • differentiated the use of warn to emit warnings to talk to standard error
  • the new note function now emits a warning to standard error
  • it's more informative
  • suppressed in a different way from trapping an exception
  • an outcome of going to the LLVM conference:
  • both the people trying to implement Python and Ruby backends had some serious carpage about the overly dynamic nature of both language definitions
  • thought about how we can avoid that in Perl 6
  • using the same approach we defined for finalization, the default is now that after CHECK time, any routine may be aggressively inlined
  • unless declared ahead of time as prepessimal
  • still coming up with good terms
  • you have to predeclare in any of several ways that you're interested in wrapping a function to avoid aggressive inlining
  • it's okay for the system to refuse to wrap a routine in that case
  • does not close the door on partial pessimization
  • if you're willing to inline the check for a wrap, you can get some optimization out of that
  • the Ruby and Python people are trying to do that
  • you can only get so far with that
  • we should be able to make aggressive optimizations, unless the system requests otherwise
  • need to distinguish routine type from hard or soft inlinable routines
  • have SoftRoutine and HardRoutine types now
  • dehuffmanized methods on ranges to return whether min and max are exclusive
  • added i as a constant, so you don't have to say 1i
  • due to scoping, it shouldn't matter if you declare an i routine
  • yours will take precedence
  • refined range definitions
  • even if there's no succ function, you get a range -- but can't do iteration
  • wrote a long rationalization for why short-circuit and should always turn a False into a Nil in list context
  • pseudo-spec, apocrypha-spec or something
  • Cursor's string enum parsing works better
  • recognizes pair notation with strings
  • moving files down from the perl6/ directory into lib/ for hygiene
  • too much stuff for one directory
  • refactored the symbol table code
  • none of it uses hard references anymore
  • various symbol tables stored in a global hash
  • refer to each other via symbolic links through that hash
  • much easier to debug symbol tables
  • it doesn't chase links all over the place to make information you don't want
  • stash objects attributes all start with ! pseudo-sigil to hide them from normal Perl 6 code
  • hacked in the quietly statement prefix
  • finally made internal import work correctly
  • the Setting now defines the Num package inline and imports constants from it
  • reorganized Cursor methods into appropriate sections
  • hope to move some out to separate files
  • the longest token JITter probably doesn't belong there
  • the loop construct now requires space between parenthesis and curly for consistency with other constructs
  • also because I'm mean
  • viv raises the middle argument of a ternary operator to the same level as the left and right arguments
  • STD symbol table construction no longer adds @_ or %_ unless it's seen them in the block
  • Matthew found a terrible bug
  • a block at the start of a file with the same identifier as a lexical scope was very confusing
  • lexical scopes with signatures now have a signature variable to tell you that signature
  • my ($a, $b, $c) used to be considered a signature, added to the current pad
  • that was bogus
  • it doesn't do that now
  • generates missing signatures from placeholders or implicit $_ semantics when reaching the end of a block
  • fixed AST bugs in regexes, now returns atoms
  • repeat/while and repeat/until now require whitespace
  • lots of cleanups in STD
  • getting rid of useless file-scoped contextuals
  • chopped out {*} stubs, as neither STD nor viv uses them
  • refactored Perl 5 into a separate file
  • loaded on demand
  • unless you use Perl 5 regexes, you don't get the Perl 5 grammar
  • started hacking a Perl 5 grammar which parses something that's not really Perl 5 yet
  • no way to invoke the resulting code, either

c:

  • fixed some Parrot bugs
  • cleaning up some other things in Parrot

Allison:

  • regarding development speed of PGE, NQP, etc and the Parrot core
  • if we have a good, clean module strategy, does it make sense to host them externally?

Patrick:

  • I'm already planning that NQP should be a separate module
  • I expect that NQP will be the primary interface into PCT

Allison:

  • which makes PCT an external module

Patrick:

  • right
  • we tell people to use PCT if they want to write languages in Parrot
  • it'll be a fairly efficient compiler down to PIR
  • even more than it is now
  • when it comes time to write a PCT book, it'll be a book about using NQP to write compilers
  • could be a standalone project
  • if other people want to write other toolkits, that's fine

Allison:

  • bytecode generation API should be core to Parrot
  • other parts belong with PCT, if they're integral parts of PCT
  • you may have synchronization problems if there are some parts in core and some aren't

Patrick:

  • I have that same issue with the grammar engine
  • we can recouple those components if it makes sense later

by chromatic at October 24, 2009 01:11 UTC

Yuval Kogman: Authenticated Encryption

One thing that makes me cringe is when people randomly invent their own cryptographic protocols. There's a Google Tech Talk by Nate Lawson where he explains some surprising approaches to attacking a cryptographic algorithm. It illustrates why rolling your own is probably a bad idea ;-)

Perhaps the most NIH cryptographic protocol I've seen is digitally signing as well as encrypting a message, in order to store tamper resistant data without revealing its contents. This is often done for storing sensitive data in cookies.

Obviously such a protocol can be built using HMACs and ciphers, but high level tools are already available, ones that have already been designed and analyzed by people who actually know what they're doing: authenticated encryption modes of operation.
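For illustration only, here is roughly what the hand-rolled version of the MAC half tends to look like: tamper-evidence without encryption, so the payload stays readable. The seal/open_sealed names are made up, and Digest::SHA has shipped with the Perl core since 5.9.3.

```perl
#!/usr/bin/perl
# Sketch of the MAC half of such a protocol: tamper-evidence only,
# no encryption. Digest::SHA is a core module.
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);

my $key = 'a long random secret key';    # never a hard-coded literal in real code

# Attach a MAC before the payload goes out, e.g. into a cookie.
sub seal {
    my ($payload) = @_;
    return hmac_sha256_hex($payload, $key) . ':' . $payload;
}

# Recompute and compare the MAC before trusting anything in the payload.
sub open_sealed {
    my ($sealed) = @_;
    my ($mac, $payload) = split /:/, $sealed, 2;
    return unless defined $payload;
    return $payload if $mac eq hmac_sha256_hex($payload, $key);
    return;    # tampered or corrupt
}

my $cookie = seal('user=42');
print defined open_sealed($cookie) ? "ok\n" : "tampered\n";

$cookie =~ s/user=42/user=1/;    # an attacker edits the payload
print defined open_sealed($cookie) ? "ok\n" : "tampered\n";
```

Even this tiny sketch has a weakness the ready-made modes take care of for you: the plain eq comparison is not constant-time, which is exactly the kind of subtlety Nate Lawson's talk covers.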

WTF is a cipher mode?

Block ciphers are sort of like hash functions: they take a block of data and scramble it.

Simply encrypting your data blocks one by one is not a good way of securing it though. Wikipedia has a striking example:

Even though every pixel is encrypted, the data as a whole still reveals a lot.
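The same effect can be shown numerically. In this toy sketch, hmac_sha256 stands in for a block cipher (it is not one; this is purely to show the pattern leak): identical plaintext blocks encrypt to identical ciphertext blocks, unless something like a per-block counter is mixed in.

```perl
#!/usr/bin/perl
# Toy demonstration of why per-block ("ECB-style") encryption leaks
# structure. hmac_sha256 is a stand-in for a real block cipher here.
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256);

my $key    = 'fixed demo key';
my @blocks = ('blue pixel block', 'blue pixel block', 'red pixel block!');

# ECB-style: each block is scrambled independently...
my @ecb = map { hmac_sha256($_, $key) } @blocks;

# ...so identical plaintext blocks yield identical ciphertext blocks,
# and the image's shape survives encryption.
print "ECB leaks repetition: ", ($ecb[0] eq $ecb[1] ? "yes" : "no"), "\n";

# CTR-style: mix a per-block counter in, and the repetition vanishes.
my $i = 0;
my @ctr = map { hmac_sha256(($i++) . $_, $key) } @blocks;
print "CTR leaks repetition: ", ($ctr[0] eq $ctr[1] ? "yes" : "no"), "\n";
```

A real CTR mode XORs the plaintext with a keystream derived from the counter; the point here is only that per-block independence is what lets the penguin show through.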

Suffice it to say that modes of operation are a wrapper that takes a low-level scrambling function, the block cipher, and provides a less error-prone tool, one that is more difficult to misuse.

On the CPAN

Crypt::CBC and Crypt::Ctr are implementations of some of the more classic cipher modes. But this post is ranting about people not using authenticated modes.

Crypt::GCM and Crypt::EAX implement two different AEAD modes of operation using any block cipher.

These are carefully designed and analyzed algorithms, and the CPAN implementations make use of the tests from the articles describing the algorithms, so it sure beats rolling your own.

Secondly, Crypt::Util provides a convenience layer that builds on these tools (and many others), so perhaps Crypt::Util already handles what you want.

To tamper protect a simple data structure you can do something like this:

my $cu = Crypt::Util->new( key => ... );

my $ciphertext = $cu->tamper_proof_data( data => { ... }, encrypt => 1 );

Crypt::Util will use Storable to encode the data into a string, and then use an authenticated encryption mode to produce the ciphertext.

To decrypt, simply do:

my $data = $cu->thaw_tamper_proof( string => $ciphertext );

Crypt::Util will decrypt and validate the ciphertext, and only once it's sure the data can be trusted will it start unpacking it, using Storable to deserialize the message if appropriate. All allocations based on untrusted data are limited to 64KiB.

Don't sue me

I'm not saying that the CPAN code is guaranteed to be safe. I'm saying this is a better idea than rolling your own. If your application is sensitive you have no excuse not to open up the code and audit it.

by nothingmuch (nothingmuch@woobling.org) at October 24, 2009 01:21 UTC

October 23, 2009

Curtis Poe: POD to PDF?

I was working with Pod::PseudoPod to create rich, structured documents with the idea that I could create nicely formatted PDFs. So far, the only reasonable way (no, plain text and HTML are not "reasonable") to accomplish this seems to be to emit DocBook XML from the PseudoPod and then convert that to LaTeX or PDF format. However, trying to find a tool which does that on Mac OS X Snow Leopard has left me at a dead end (CPAN libraries which claim to help are falling down badly).

So, what do you use to address this situation? Software which can actually install on OS X would be a bonus. Software which doesn't require me to read through volumes of documentation to figure out that one little setting would also be a bonus. I don't mind a little work, but so far, most software I've found has been miserable (e.g., openjade segfaults and I've no idea why).

by Ovid at October 23, 2009 21:45 UTC

Gabor Szabo: Supplying examples with CPAN modules

One of the optional metrics used by CPANTS is whether the module has examples. I think it checks if there is an eg/ directory, but actually having an eg/ directory with examples does not give the users much.

Most people install modules using CPAN.pm or the native package management tool. Neither of those installs the examples. Very few people download and unpack the source, so most will never see the examples.

OK, I admit. I always have trouble locating examples even when the documentation of the module tells me to look at them.

So what other options are there?

POD in the sample script

If there is POD in the example files then search.cpan.org will display them, but of course it will display the POD and not the code.

Generate a module and install it

In Win32::GuiTest there is a make_eg.pl script that is executed when the distribution is generated. It takes all the files from the eg/ directory and creates a module called Win32::GuiTest::Examples, putting the examples in the POD section. This means the actual examples can be easily read on search.cpan.org, and once the module is installed they can be easily found by typing

  perldoc Win32::GuiTest::Examples

Install the examples

In Padre and in Padre::Plugin::Parrot we instruct CPAN.pm to install the examples in the share directory. Actually we have a share/ directory and the examples are within that directory in share/examples. Then in Makefile.PL we have instructions to install the share directory. (In the case of Module::Install it is a call to install_share.)
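A Module::Install-based Makefile.PL for that layout might look something like this (the My-Module distribution name is hypothetical; install_share picks up the top-level share/ directory):

```perl
# Makefile.PL for a hypothetical My-Module distribution that ships
# examples in share/examples/. Needs the Module::Install toolchain.
use strict;
use inc::Module::Install;

name     'My-Module';
all_from 'lib/My/Module.pm';

# File::ShareDir is how the installed module locates its share dir.
requires 'File::ShareDir' => 0;

# Install the contents of share/ (including share/examples) so that
# File::ShareDir::dist_dir('My-Module') can find them later.
install_share;

WriteAll;
```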

Once the module is installed the examples are also installed and they can be found with the following call:

  $dir = File::Spec->catdir(
		File::ShareDir::dist_dir('Padre-Plugin-Parrot'),
		'examples');

That is

  File::ShareDir::dist_dir('Padre-Plugin-Parrot')

returns the path to the share directory of the package.

In Padre we can then create an Open Example menu item that opens the regular Open File dialog, except positioned in the directory of the examples supplied by Padre. Similarly, the Parrot plugin has (or I think will have in the next release) a menu item for the same thing.

Of course it is not restricted to people using Padre as anyone could use the above code.

Run examples embedded in modules

There are a number of modules on CPAN with some way to run the examples embedded in them. For example, the SDL::Tutorial package of SDL_Perl (you know, the newly and rightly hyped module that will help you burn all your free time by writing games in Perl) includes simple instructions on how to run the sample script:

  perl -MSDL::Tutorial -e 1


Autoextract examples

There are a number of modules on CPAN with some form of auto-extracting examples, though I just can't find them right now. The idea there is that the sample script is embedded in a module, and running

  perl -MModule::Name -e1

will create a file example.pl that holds the example script. Actually I think the SDL examples were meant to do this as well, but if I remember correctly they don't extract anything, just run the example.
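The self-extracting idea can be sketched in a few lines. My::Examples, the file name, and the messages are all made up, and a real module would pick the target path more carefully:

```perl
# My/Examples.pm -- hypothetical module illustrating the pattern:
# loading it with `perl -MMy::Examples -e1` writes example.pl to the
# current directory.
package My::Examples;
use strict;
use warnings;

# The sample script we ship, held in the module itself.
my $example = <<'END_EXAMPLE';
#!/usr/bin/perl
use strict;
use warnings;
print "Hello from the extracted example!\n";
END_EXAMPLE

# import() runs on `use My::Examples` (and thus on -MMy::Examples),
# so merely loading the module extracts the sample script.
sub import {
    my $file = 'example.pl';
    return if -e $file;    # don't clobber an existing copy
    open my $fh, '>', $file or die "Can't write $file: $!";
    print {$fh} $example;
    close $fh or die "Can't close $file: $!";
    warn "Extracted sample script to $file\n";
}

1;
```

With the module somewhere in @INC, perl -MMy::Examples -e1 drops example.pl into the current directory, ready to read or run.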

Conclusion

As we can see, there are several ways to include examples in a CPAN distribution, with varying levels of ease of use. Actually we could even combine all the solutions and make sure people have several easy ways to see running examples of the modules on CPAN.

These could also help a lot with module evaluation and learning.

by Gabor Szabo at October 23, 2009 18:16 UTC

Adam Kennedy: Dear Lazyweb: Alternatives to dprofpp -T ?

When it still worked reliably, my heaviest use of Devel::DProf wasn't for the profiling. What I used it the most for was tracing.

The dprofpp -T option would generate a space-indented dump of every single function call made in the program, including all the BEGIN stuff as the program compiled.

It was particularly useful for crashing or failing programs, because you could just start at the end of the trace and then watch the explosion backwards in slow motion, moving back to find the place where it all started to go wrong.

Unfortunately the new king of profilers, Devel::NYTProf, can't replicate this neat trick (yet?).

In the mean time, does anyone have a recommendation of where I can go to get the same information? I can't find anything obvious...
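For what it's worth, the debugger's sub-call hook makes a crude version of dprofpp -T about twenty lines of work. This is a hypothetical sketch (the module name is made up, and I believe Devel::CallTrace on CPAN does something along these lines):

```perl
# Devel/TraceCalls0.pm -- a hypothetical, minimal call tracer in the
# spirit of `dprofpp -T`. Save it somewhere in @INC and run:
#
#   perl -d:TraceCalls0 yourprogram.pl 2>trace.out
#
# Under -d, perl routes every subroutine call through &DB::sub.
package Devel::TraceCalls0;

package DB;
use strict;
use warnings;

our $depth = 0;

sub DB { }    # the per-line hook; must exist under -d, unused here

sub sub {
    no strict 'refs';    # $DB::sub usually holds a sub *name*, not a ref

    # One line per call, indented by call depth, like dprofpp -T.
    print STDERR '  ' x $depth, $DB::sub, "\n";
    local $depth = $depth + 1;

    # Re-dispatch to the real sub; @_ still holds the original
    # arguments, and we preserve the caller's context.
    if    (wantarray)         { return &$DB::sub }
    elsif (defined wantarray) { return scalar &$DB::sub }
    else                      { &$DB::sub; return }
}

1;
```

Reading the trace bottom-up gives the same slow-motion-explosion view described above, BEGIN-time calls included.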

by Alias at October 23, 2009 00:26 UTC

October 22, 2009

Curtis Poe: Perl Training Australia

My fiancée was rather curious about what I do for a living, so she started researching Perl training companies. Part of this is with the idea that, long-term, we might consider launching a Perl consultancy firm (initially focused on training). Given that her background is in law (she has a Masters in French law), project management, communications and political consulting, it's fair to say that she has plenty of business background but no technical background. As a result, she was looking at various sites with the eye of someone who might hire them to do Perl training. She found one, and only one, that she would consider. That was Perl Training Australia. Why?

  • The site made clear the competitive advantage of Perl
  • It was easy to find the information she wanted
  • It was easy to read the information
  • It was pleasant to read
  • Perl looked "sexy" (her words) via the presentation and explanation
  • The Web site actually looked professional (but only compared to other Perl training sites)
  • They listed the companies they've worked for, making it clear that Perl is ubiquitous
  • There were biographies which made it clear that their people were competent

Make of that what you will, but if you comment, don't insult my fiancée :) Remember, she's looking at this from the point of view of someone considering Perl for their company. And just to be clear, she's looked at many, many Web sites of companies offering Perl training (and if you know Perl, you know some of those names).

by Ovid at October 22, 2009 20:42 UTC

brian d foy: I'm in Dublin on Sunday

I kept meaning to post this, and I can't believe it's already this late in October.

Jonas Nielsen and I are going to be in Dublin, Ireland on Sunday because we're both running the marathon on Monday. If any local Perl Mongers want to get together for a drink on Sunday early evening, let me know. I'm staying about a mile from the marathon start and otherwise have no idea of the geography.

It might be my only time to see Jonas. Once he leaves the start line, I won't see him for hours. :)

by brian_d_foy at October 22, 2009 20:13 UTC

use.perl: Testing Needed: Strawberry October 2009 BioPerl

(This is a copy of an email to the BioPerl mailing list)

Dear BioPeople

Since we added a "Live Support Chat" link to the frontpage of the Strawberry Perl website ( http://strawberryperl.com/ ) we've noticed that one of the main types of visitors that we see in the IRC channel ( #win32 on irc.perl.org ) is biologists trying to install various things.

As a result, for the October 2009 release of Strawberry Perl, we've prioritised adding support for CPAN-installation of BioPerl.

Changes include a major improvement to crypto support which provides OpenSSL and support for https:// URLs; we now ship Postgres and MySQL DBI drivers in the default installation, and we have added Berkeley DB support (which previously prevented us meeting the dependencies for the BioPerl distribution).

I'm happy to report that we now believe we are in a position to officially support the installation of BioPerl on Strawberry Perl.

Prior to the official release next week, we would appreciate testing from the BioPerl community.

Release candidate installers for the October release are available at the following URLs:

http://strawberryperl.com/download/strawberryperl-5.10.1.0.msi

http://strawberryperl.com/download/strawberryperl-5.8.9.3.msi

Once installed, run Start -> Program Files -> Strawberry Perl -> CPAN Client.

From the CPAN client command line, run "install BioPerl" and select the default options.

Once installed, you should be able to do anything you do normally with a BioPerl install.

Assuming all goes well in this release, for the next release January 2010 we plan to also produce a "Strawberry Perl Professional" distribution that will bundle BioPerl as part of the default installation, as well as a Perl IDE and other useful packages.

Success or failure of the release candidate can be reported either here [the BioPerl list] or to #win32 on irc.perl.org.

Adam K

Read more of this story at use Perl.

by Alias (posted by brian_d_foy) at October 22, 2009 14:15 UTC

brian d foy: I'm a Perl::Critic committer.

Elliot forced me to accept a commit bit to the Perl Critic repo.

My first policy will be *::YouArentAllowedToProgramAnymore, which deletes your source if it finds that you fail any policy, you have any use of 'no critic', you run Perl::Critic more than once on the same file, or if it's Friday afternoon. If you have your stuff in source control, you should be safe. That's the fence you have to jump over to get back in, though.

Elliot already shot down my policy suggestions for *::NotEnoughVowels, *::PassiveVoiceInString, *::MisconjugatedVerb, *::StupidVariableName, *::UsesWindows, *::DependsOnModules, *::YouEditedThisInEmacs, *::Magic8Ball, *::YourNameIsPudge, *::YourModuleWebsiteIsUgly, and *::YourPerlIsSoLastMonth.

by brian_d_foy at October 22, 2009 14:07 UTC

David Golden: Hammering away in the Perl toolchain smithy

What have I been up to lately? I’ve been tooling (toiling?) away in the Perl toolchain smithy, fixing up modules on CPAN and patching blead.

CPAN module work:

  • Released Module::Build 0.35_03. Among other things, this is much quieter by default, with less junk spewed onto the terminal.
  • Released ExtUtils::CBuilder 0.26_04. This splits Windows compiler packages into separate module files, fixes several MSVC bugs and has a couple minor fixes to support mingw64.
  • (Earlier in October) Released ExtUtils::ParseXS 2.21. This fixes major breakages on older Perls.
  • For CPAN.pm, fixing up various failing tests on Win32 and making auto-configuration quieter and friendlier. These are now available in CPAN 1.94_52, now on CPAN.

Work on the Perl core (and related modules):

  • Added is_deprecated() to Module::CoreList to identify modules that are marked as such in 5.11.X.
  • Revised auto-generation of Module::CoreList to populate the list of deprecated modules when Module::CoreList is updated before a Perl release.
  • I also added support for this to CPAN.pm to “do the right thing” when a deprecated module is found in a prerequisite list. These are still in my repository and I hope will soon be merged in the CPAN.pm master repository and eventually into the blead branch of the Perl source
  • Other minor tweaks and Todo list additions

This has been keeping me too busy to work on finishing inc bundling for Module::Build and a number of personal projects, but I hope to get back to them soon.

by dagolden at October 22, 2009 04:15 UTC

Adam Kennedy: Testing Needed: Strawberry October 2009 BioPerl support

(This is a copy of an email to the BioPerl mailing list)

Dear BioPeople

Since we added a "Live Support Chat" link to the frontpage of the Strawberry Perl website ( http://strawberryperl.com/ ) we've noticed that one of the main types of visitors that we see in the IRC channel ( #win32 on irc.perl.org ) is biologists trying to install various
things.

As a result, for the October 2009 release of Strawberry Perl, we've prioritised adding support for CPAN-installation of BioPerl.

Changes include a major improvement to crypto support which provides OpenSSL and support for https:// URLs; we now ship Postgres and MySQL DBI drivers in the default installation, and we have added Berkeley DB support (which previously prevented us meeting the dependencies for the BioPerl distribution).

I'm happy to report that we now believe we are in a position to officially support the installation of BioPerl on Strawberry Perl.

Prior to the official release next week, we would appreciate testing from the BioPerl community.

Release candidate installers for the October release are available at the following URLs

http://strawberryperl.com/download/strawberry-perl-5.10.1.0.msi

http://strawberryperl.com/download/strawberry-perl-5.8.9.3.msi

Once installed, run Start -> Program Files -> Strawberry Perl -> CPAN Client

From the CPAN client command line, run "install BioPerl" and select
the default options.

Once installed, you should be able to do anything you do normally with a BioPerl install.

Assuming all goes well in this release, for the next release January 2010 we plan to also produce a "Strawberry Perl Professional" distribution that will bundle BioPerl as part of the default installation, as well as a Perl IDE and other useful packages.

Success or failure of the release candidate can be reported either here [the BioPerl list] or to #win32 on irc.perl.org.

Adam K

by Alias at October 22, 2009 00:25 UTC

October 21, 2009

Curtis Jewell: Third release candidate for 5.10.1.0 is out...

chromatic: Perl 6 Design Minutes for 30 September 2009

The Perl 6 design team met by phone on 30 September 2009. Larry, Allison, Patrick, Jerry, Nicholas, and chromatic attended.

Allison:

  • went to Linux Con
  • spoke at Bay Piggies
  • talked to the Unladen Swallow people
  • refreshing the PCC branch for Parrot
  • looking forward to a hackathon this weekend
  • playing with Git, trying to get a handle on how people use branches
  • I think we can comfortably use Git with SVN as the backing store if we fine-tune our processes a little bit

Patrick:

  • working on updates to PGE
  • going very well
  • hacking in the beginnings of protoregexes today
  • none of the code does anything, but I have the basic pieces in place
  • will have the glue in soon
  • entirely PAST- and POST-based
  • PGE generated PIR
  • now you build up your regexes using PAST nodes
  • the compiler emits POST notes
  • that can go to PIR or eventually PBC or another output format

c:

  • even Perl 5, if someone were sufficiently crazy

Patrick:

  • that's a possibility at some point
  • get correct lexical and block handling for free
  • patching that into the old PGE would take a bit of work
  • had a local hackathon here on Saturday
  • we updated complex number handling and something else
  • also marked a lot of tickets in RT as LHF -- low-hanging fruit
  • won't require heroics of information or compiler knowledge
  • easy to find a bunch of tickets that novices can work on
  • plan to continue working on the regex engine
  • hope to pass the PGE test suite in a couple of days
  • once I can do that, I'll start switching NQP to the new engine
  • should speed up NQP
  • NQP will support regexes and grammars
  • then NQP can be the main interface for all of the compiler toolkit writers
  • write and compile a program in NQP and you end up with a compiler

c:

  • this will benefit from optimization stages for PAST and POST

Patrick:

  • this will probably be one of the first stages where an optimization stage shows up
  • we can optimize a regex tree at the PAST step
  • seems likely
  • haven't come up with a name for the new engine
  • calling it PAST-regex, the regex portion of NQP

Larry:

  • going to the LLVM conference on Friday with Matthew Wilson
  • managed to get a pass from the right people
  • if you have any questions, please send them to him or me
  • decided to require parens on method calls with quoted names
  • helps detect accidental use of dot as concatenation
  • Patrick noticed that I used temp and let in regular expressions
  • documented that you can do that
  • tired of the defines infix; now we have a statement control import

Patrick:

  • that one didn't last long!

Larry:

  • it didn't work well
  • the notion of an infix with BEGIN semantics was problematic

Patrick:

  • I like it much better

Larry:

  • you can import an inline module
  • still trying to implement that
  • redid the import code
  • works for external modules, but not internal yet
  • did a major overhaul of types
  • split more of the abstract roles such as Numeric and Integral
  • added Real and Stringy
  • decided we were trying to do too much with pairs and mappings
  • broke those into immutable and mutable types
  • trying to document type relationships better now, especially in the numerics
  • replacing defines with import in the code
  • Carl noticed the lack of report of nonexistent variables used in the default of a parameter
  • easy to fix by localizing the appropriate flag
  • packages and namespaces have much more informative ID fields, including lexical scopes
  • contain file and line position now
  • we can track lexical scopes even if they're not linked into other scopes directly
  • a debugger can access other lexical scopes
  • redeclaration messages on symbol conflicts now leaves off "lexical" on lexical collisions
  • package conflicts now include the package name with the symbol
  • much more readable
  • infix operators now can perform various checks during reduction in the operator precedence parser
  • the .= rule, for example, is an infix that used to check that it was a Perl 5 version
  • it didn't have enough information at that point
  • now you can set a callback for the reduction point
  • it has enough information to do an obsolescence check there now
  • deleted the obsolete form of method with an adverbial
  • have to decide what to do with prefix operators that leave out prefix
  • STD recognizes stubs with ... and doesn't complain about redefinition
  • fairly intelligent approach
  • if the top-level operator is ... or a friend, it's a stub
  • not just textual analysis
  • it even allows statement modifiers
  • removed the grammar terms for pi, e, and i
  • they're constants in CORE::
  • upgrading viv in various ways as Matthew discovers places where it doesn't propagate information into the AST correctly
  • rad numbers now parse
  • regexs now produce more correct ASTs

c:

  • fixed some bugs
  • will fix more bugs
  • working on longer term planning for Parrot

Jerry:

  • will the PGE changes speed up Rakudo more than the Parrot optimizations?

Patrick:

  • I don't know
  • it'll be hard to measure
  • I get the advantage of all of the Parrot speedups
  • the new system is a lot friendlier in terms of object creation
  • it doesn't create as many objects as PGE does
  • I've optimized down most of the backtracking state
  • kept as a single ResizableIntegerArray
  • other objects created to keep track of the state of the match
  • backtracking is basically clipping an array of size 10 to size 5, for example
  • don't have to keep track of nearly as much information
  • Cursors and Match objects are much more immutable
  • much smarter about creating them
  • creates them only when there's a mutating change
  • should have a lot less GC pressure
  • less jumping around
  • smarter about backtracking points
  • should be able to do a basic benchmark on the PGE tests in the old system and the new system
  • that'll tell us quite a bit
  • protoregexes should give us a lot of speedup
  • we get to avoid a lot of false trails by pruning a lot of trees that Rakudo's parser goes down
  • can detect potential rules and jump right to them
  • can avoid checking rules that can't possibly match
  • if you're parsing an expression right now, we check for all of the types of statements, and then parse it as an expression
  • if you do that for every statement, it takes a while
  • lots of GCables created for those checks too
  • we'll know in a couple of weeks

c:

  • reducing GC pressure always helps

Patrick:

  • the old version was faithful to the Perl 6 method
  • every rule is a method with a slurpy hash
  • the new version doesn't do that
  • it gets the slurpy hash options out of the way at the beginning
  • the rest can ignore it
  • at least for now, it's not making empty slurpy hashes on every rule

Jerry:

  • you talked about a cloning system outside of PGE before

Patrick:

  • it'd be nice if Parrot had a way to access something as a hash and have it build up the hash only when you read them
  • regardless of speed improvements, we need protoregexes and contextuals to make spec progress
  • we'll go this way even if there's no speed improvement
  • pointed out a problem with having methods on a Match object with the same name as rules
  • does having Cursors not behave like Match objects present a problem?

Larry:

  • inheritance fixes one direction of the problem, but not the other
  • what a user views as a Match object might contain the Cursor used to produce it
  • might start from there
  • probably needs to be a separate type from the user's point of view

Patrick:

  • regexes still return Cursors?
  • it's easy to get to a Match object from there?

Larry:

  • the method that implements a regex?
  • that returns a Cursor
  • what the user uses as what she thinks of as a normal regular expression, she gets back a Match object

Patrick:

  • you can't use the methods on Match in a grammar
  • the only problematic one was pos
  • there's a method conflict there

Larry:

  • we might make the builtins uppercase

Patrick:

  • I'm making them private with exclamation points for now
  • I know you can't access them from subclasses
  • if and when we decide to make them public, we can do that

Larry:

  • maybe we can make a way of disambiguating them
  • if there's a conflict in the rules...
  • that part works pretty well
  • the pos comes from the base class

Patrick:

  • if I do Cursor.pos...

Larry:

  • you can say Cursor::.pos
  • we can work with it in that direction
  • try to disambiguate to the outside one, if it's something internal

Patrick:

  • we could delegate the subscripting operator to the Match object
  • haven't needed that yet
  • STD doesn't do that
  • it treats the Cursor as a hash in a few places
  • could easily go through the Match to do that

Larry:

  • STD cheats all over the place

Patrick:

  • Perl 6 is being built by a bunch of cheaters!

Larry:

  • something there represents an object attribute of some point
  • most cheats are to make it work correctly on Perl 5

Nicholas:

  • the Saturday before a Parrot release used to be a Bug Day
  • has that fallen by the wayside?
  • is that a good or a bad thing?

Jerry:

  • yes

Allison:

  • it's expanded into the week before a release

Jerry:

  • we stopped advertising it in our release announcements
  • we've talked about having a dedicated hackathon day
  • the Saturday before a release probably isn't the right day for it

Allison:

  • we're doing a PCC hackathon this weekend
  • the amount of time before the next release is a good idea
  • we can merge some changes into trunk before that release

Nicholas:

  • that distinction seems quite sensible

Allison:

  • at one point, we had a bugfixing hackathon the weekend before the release and a hacking hackathon the weekend after

c:

  • we're also trying to increase the bus number for features and branches

Jerry:

  • the bugday was nice as a time to introduce new contributors to Parrot
  • now we have other ways to get them excited about using Parrot
  • downloadable and installable packages, for example

by chromatic at October 21, 2009 23:11 UTC

Perl Buzz: What editor/IDE do you use for Perl development?

Gabor Szabo is running a survey about Perl development:

I have set up a simple five-second poll to find out what editor(s) or IDE(s) people use for Perl development. I'd appreciate very much if you clicked on the link and answered the question. You can mark up to 3 answers.

Please also forward this mail within the company you work at, and to people at your previous company, so we can get a large and diverse set of responses.

The poll will be closed within a week or after we reach 1000 voters, whichever comes first.

by Andy Lester at October 21, 2009 22:36 UTC

Gabor Szabo: Which editor(s) or IDE(s) are you using for Perl development?

Let's pretend you never heard me talking about Perl editors and IDEs. Would you please spend 5 seconds to answer the poll Which editor(s) or IDE(s) are you using for Perl development? I will close the poll in 10 days or after 1000 responses. Whichever comes first. Act now!

List based on Perl Development Tools table on PerlMonks.

by Gabor Szabo at October 21, 2009 21:16 UTC

Gabor Szabo: Perl Mongers in Amsterdam

After yesterday's visit to the Budapest Perl Mongers, today I went to see the Perl Mongers in Amsterdam.

by Gabor Szabo at October 21, 2009 20:51 UTC

Perl Buzz: Perlbuzz news roundup for 2009-10-21

These links are collected from the Perlbuzz Twitter feed. If you have suggestions for news bits, please mail me at andy@perlbuzz.com.

by Andy Lester at October 21, 2009 16:16 UTC

use.perl: Perl 5.11.1

Milo had been caught red-handed in the act of plundering his countrymen, and, as a result, his stock had never been higher. He proved good as his word when a rawboned major from Minnesota curled his lip in rebellious disavowal and demanded his share of the syndicate Milo kept saying everybody owned. Milo met the challenge by writing the words "A Share" on the nearest scrap of paper and handing it away with a virtuous disdain that won the envy and admiration of almost everyone who knew him. His glory was at a peak, and Colonel Cathcart, who knew and admired his war record, was astonished by the deferential humility with which Milo presented himself at Group Headquarters and made his fantastic appeal for more hazardous assignment. - Joseph Heller, Catch-22

It gives me great pleasure to announce the release of Perl 5.11.1. This is the second DEVELOPMENT release in the 5.11.x series leading to a stable release of Perl 5.12.0. You can find a list of high-profile changes in this release in the file "perl5111delta.pod" inside the distribution. You can (or will shortly be able to) download the 5.11.1 release from: http://search.cpan.org/~jesse/perl-5.11.1/

Read more of this story at use Perl.

by jesse at October 21, 2009 14:56 UTC

use.perl: Madrid Perl Mongers Social Meeting

salva writes "The Madrid Perl Mongers are having a social meeting on Wednesday October 21 from 19:30 (localtime) at El Rincon Guay, Embajadores 62, Lavapies, Madrid. Everybody is invited! Just come and enjoy some "cañas y pinchos" with us while we talk about Perl!"

Read more of this story at use Perl.

by jesse at October 21, 2009 14:55 UTC

Curtis Jewell: Time for a second release candidate...

RC2 includes Win32::ErrorLog (in order for one of the Geo:: modules to work), and the 5.10.1.0 RC2 includes CPANPLUS 0.89_02, and 5.8.9.3 RC2 includes CPAN 1.94_52.

The dev versions each fix a bug whose fix we want to include in the October 2009 versions.

I'll be building a third release candidate within the next few days for the 5.10.1.0 versions, because I forgot to make sure it included CPAN 1.94_52 (which has a bug fix that specifically applies to using a minicpan within Strawberry.)

The README file generation has also been corrected.

No other changes have been made.

At any rate, they're at the same place. Test thoroughly.

October 21, 2009 06:13 UTC

October 20, 2009

Marcel Grünauer: HTML stack trace from the Perl debugger

Tatsuhiko Miyagawa released Devel::StackTrace::AsHTML and blogged about it.

I thought this would make a neat Perl debugger command, so I wrote DB::Pluggable::StackTraceAsHTML. It is a plugin to DB::Pluggable. It adds the Th command to the debugger, which displays a stack trace in HTML format, with lexical variables. It then opens the page in the default browser.

Here is an example of how to use it:

$ perl -d test.pl

Loading DB routines from perl5db.pl version 1.3
Editor support available.

Enter h or `h h' for help, or `man perldebug' for more help.

main::(test.pl:14): my $n = 12;
  DB<1> r
main::fib(test.pl:12):      return fib($i - 1) + fib($i - 2);
  DB<1> Th

The result would look something like this (screenshot omitted): an HTML page showing the stack trace, with each frame's lexical variables.

To enable the plugin, just add it to your ~/.perldb, like so:

use DB::Pluggable;
use YAML;

$DB::PluginHandler = DB::Pluggable->new(config => Load <<EOYAML);
global:
  log:
    level: error

plugins:
  - module: BreakOnTestNumber
  - module: StackTraceAsHTML
EOYAML

$DB::PluginHandler->run;

By the way, to be minimally invasive to the existing Perl debugger, the command is defined using the debugger's aliasing mechanism. Normally you define an alias as a regular expression that will change the command the user enters to a known command, but here we circumvent that and call our command handler directly. The following method is from DB::Pluggable::Plugin:

sub make_command {
    my ($self, $cmd_name, $code) = @_;
    no strict 'refs';
    my $sub_name = "DB::cmd_$cmd_name";
    *{$sub_name} = $code;
    $DB::alias{$cmd_name} = "/./; &$sub_name;";
}

To define a new foo command in a plugin, you then use:

package DB::Pluggable::StackTraceAsHTML;
use strict;
use warnings;
use base qw(DB::Pluggable::Plugin);

sub register {
    my ($self, $context) = @_;
    $self->make_command(
        foo => sub {
            # ...
        }
    );
}

October 20, 2009 22:09 UTC

Gabor Szabo: Perl Mongers: A world tour on the back of a virtual camel

I opened a separate blog to write about the Perl Mongers in general and about the specific groups. I started with a few words about the Perl Mongers and an entry about the Perl Mongers in Budapest, Hungary.

by Gabor Szabo at October 20, 2009 21:54 UTC

chromatic: Perl 6 Design Minutes for 23 September 2009

The Perl 6 design team met by phone on 23 September 2009. Larry, Patrick, Jerry, and Nicholas attended.

Larry:

  • thinking about longest token matching, and how to do it with a real DFA
  • profiling standard grammar, to see where the current bottlenecks are
  • been in IRC design discussions with Patrick and others
  • helping diakopter with his JavaScript backend
  • originally inclined to write off David Green's suggestion about ranges and non ranges, but the more I thought about it, and how series operators worked, the more I liked a variation of his idea
  • that's what I spec'd
  • Range objects are now primarily objects that reflect an interval, with a min and a max; no from, to, or mutability
  • you can use them to make an iterator, but you only get an iterator by ones (or a, b, c, d)
  • series operator now extended to handle the notion of steps and limits
  • since it can do that now, and since I always thought that :by was ugly, it's now deprecated
  • simplified the matching of alpha ranges. Used to do fancy footwork to ensure that the range would never exceed the length of the right hand argument, but now it just goes and produces an infinite lazy list if you didn't produce a valid end point that can be compared
  • with simplification of ranges, looked at the interaction of ranges and subscripts in Synopsis 9
  • if a range overlaps the beginning or end of an array, it throws away the part that doesn't apply
  • doesn't matter if you say 0..* or 0..*-1 (or 0..Inf), they all work out the same
  • looked at some spec'd feature that others came up with, negative subscripts causing unshift behaviour, and I thought that that would be too error prone, so I took that out
  • spec'd a way to declare modular subscripts, so that they wrap around at both ends, just the way APL subscripts work, as it happens
  • actual hacking, most was for diakopter, to fix up things that were missing
  • it's now easy to pull arguments out of operators from the AST
  • previously you needed to know where in the linear sequences it was
  • for example, infix [operators ?] you had to know that it was first and third, and that was kind of bogus
  • the string nibbler had a bogus AST node, so I fixed that up
  • STD didn't do bracketing quotes right, so I fixed that
  • it did not correctly create nodes from the operator precedence parser in the AST so I fixed that
  • put in an optimisation that string nibbler, if it gets a null string between two things that are interpolating, throws it away, because there's no point in concatenating a null string
  • improving STD error messages
  • it used not to complain if you had a symbol in a block and referred to an outer lexical, but you redefined that. It's spec'd to be an error, but it didn't catch it. Now it does. It's the longest error message of anything (three lines)
  • STD was giving a bad error message if you left out the space before an infix < because it would misinterpret it as a string subscript
  • now give a good error message on that, if it looks like it's not intended to be a string subscript
  • there was a way to backtrack out of the radix syntax, such as a :3 without a trinary number after
  • it would give a strange error message; fixed that
  • there was other stuff, but it's all in the noise

Patrick:

  • much of week and weekend was for my other job, so didn't get onto Perl 6, Parrot or Rakudo until Monday
  • looking at refactoring the grammar engine to do protoregexes, and to get closer to how the standard grammar is doing it; reading it, Cursor, and gimme5, to do similar things for implementing the grammar engine
  • it's going well
  • I expect actual code by this time next week

Jerry:

  • released Rakudo #21 "Seattle", named for Seattle Perl User Group
  • one of the questions I got was "is this in Debian?"
  • I believe nobody is packaging it for Debian yet, but I wondered if there has been any push towards Debian

Patrick:

  • there are RPM packages already. I know people were working on Debian packages, but I don't know where it ended up.

Jerry:

  • release went very well, very smooth
  • anyone can do it, which I think I have proven
  • I'd not been building Rakudo, let alone writing source for it, and yet it only took a couple of hours, most of which was waiting for the spec test results (multiple times), so kudos for the release process

Patrick:

  • it works great for Parrot, so I just stole it and adapted it for Rakudo

Jerry:

  • trying to keep up with what's in the mailing list and the IRC channels, and occasionally give a semi-enlightened comment about a design issue
  • no blockers for me
  • why a proper DFA, Larry?

Larry:

  • hopefully to run faster
  • profiling suggests it would help some, in the cursor implementation, but there's a lot of overhead distributed in a lot of other places
  • want to avoid repeatedly running a lot of patterns, by using a real DFA or a parallel NFA, instead of faking it with a trie structure for constant strings and, for non-constants, regular Perl 5 patterns sorted into longest-to-shortest order
  • not sure how much it will get, but I'd like to have the "correct" algorithm there.

Patrick:

  • I plan to start with the "Cheating" algorithm, or various cheats, instead of going directly to a DFA

Larry:

  • for the profiling, I get a lot of overhead from running an interpreter on top of an interpreter -- it's always going to be slow.

Patrick:

  • curious how performance changes in the new engine, with the redesign
  • I suspect it will improve, but I don't know by how much
  • I don't have a really fast regex engine underneath

Larry:

  • like just delegating character classes off to Perl 5

Patrick:

  • I've thought of creating one

Larry:

  • better to use Blizkost

Patrick:

  • considered implementing one using standard traditional syntax but haven't quite got there yet

Jerry:

  • Patrick, you've been talking about more near-term goals and your Hague grant

Patrick:

  • updated the Roadmap in August
  • it's due for another update
  • I have a Hague grant due to reimburse the costs for Lisbon, but for that I promised to have a much more detailed plan for Rakudo Star
  • the roadmap shows that the critical components all depend on two things
  • one is the grammar engine
  • one is calling conventions
  • almost half of them block on the grammar engine
  • every hour spent on the grammar engine is related to Rakudo Star
  • if it doesn't get done, we're going to miss our deadline

Jerry:

  • you're in the early stages of PGE
  • do you see any of it as parallelizable?

Patrick:

  • should be parallelisable; Carl was planning to follow it closely
  • looking at code that Cursor is using, can write many of the parts in NQP (not PIR), in particular, the operator precedence parser
  • will rewrite STD in NQP to provide that to all the higher level languages
  • the code changes slightly from STD, as NQP only does binding, not assignment
  • some function calls will become methods, because NQP prefers methods
  • when it's done, STD may be able to adopt this
  • Carl's been looking at the existing code: "oh, it makes sense"
  • just in time for me to change it all around

Jerry:

  • progress from last time you created a grammar engine

Patrick:

  • there's a good chance it may not be called PGE
  • to avoid deprecation issues, it might be easier to leave the old one alone
  • the new one is NQP, as NQP has regexes
  • it's the same language for the parser and the action methods
  • the underlying engine gets a new name, as it's pretty different

by chromatic at October 20, 2009 21:06 UTC

Curtis Poe: Downgrading Moose

One problem with Bundles (and Task::, from what I can see) is that they don't let you specify a particular set of module versions which play well together.

Quite often, I find that I'm trying to downgrade Moose when I need to run an older version of our work code. But that means I also have a few other modules which need to be downgraded at the same time and this becomes painful.

It would be nice to have monolithic "Everything you need for this Moose version" modules, bundling a known-good set of modules together. This would allow a developer to do something like "cpan 'D/DR/DROLSKY/Everything-Moose-0.83.tar.gz'" and get everything they need for that version of Moose installed.

The problem seems to be that CPAN doesn't discriminate between different authorities releasing the same module. I can upload a version of Moose, but it's labeled an "unauthorized release" if I do. Authorities might help. For example, Class::MOP has the following code:

our $AUTHORITY = 'cpan:STEVAN';

If CPAN had recognized this (it doesn't yet, does it?), I could theoretically bundle Moose 0.83 with Class::MOP 0.88 (and other dependencies) and have this in each module:

our $AUTHORITY = 'cpan:OVID';

And create a distribution which is easy to upgrade/downgrade with full dependencies packaged with it. In fact, we could have several people release identical versions of the same module and the "default" authority would be the owners of that namespace, but you could then do this:

cpan Moose --authority=cpan:SOMECPANID

So if you want an unauthorized release, you must specifically ask for it. Otherwise, the authority is assumed to be cpan:DROLSKY.

Would this work?

by Ovid at October 20, 2009 13:20 UTC

Curtis Poe: The Perl Foundation Marketing Committee

As mentioned previously, I've still been working on Perl marketing. Now that I'm on the TPF Board, I've been setting things up there. I'm happy to announce that TPF has approved The Perl Foundation Marketing Committee.

We've been doing more than just creating the committee. Dan Magnuszewski, the committee chair, has also been laying a lot of the groundwork for the various things we need to do. Please respond on the blog rather than here. It would be nice to have feedback in one place.

by Ovid at October 20, 2009 08:47 UTC

chromatic: Perl 6 Design Minutes for 16 September 2009

The Perl 6 design team met by phone on 16 September 2009. Larry, Patrick, Jerry, Nicholas, and chromatic attended.

Larry:

  • mostly worked on my open source ecology talk
  • backlogging after the return from family trips
  • pushing a slideshow of Guy Steele about the implementation of easily parallelized data structures
  • talking over that with Daniel Ruoso
  • helping Matthew Wilson bootstrap his JS emitter for STD
  • not much spec hacking, except a conjecture about the parallel semantics

Jerry:

  • released Parrot 1.6.0
  • went very smoothly
  • really like how boring it was
  • due to release Rakudo 21 tomorrow

Patrick:

  • mostly a lot of thinking this week
  • protoregexes in PGE, for example
  • reran the spectest progress file since the last release
  • Moritz and others greatly improved the calculation of the test suite size
  • previously it looked as if the suite were shrinking, due to plan lines
  • the curve is more accurate since August
  • Rakudo now passes over 15,000 tests -- almost 15,500
  • substantial increase since August
  • most of those tests are of operators on Nums, Rats, and Ints
  • lots of transcendental functions
  • I claim very little credit other than getting the process started
  • turned on HLL mapping for converting Parrot Floats into Rakudo Nums
  • in some places, Parrot Float internals leaked out
  • passed more tests, didn't seem to take a speed hit in the spec tests
  • that wasn't true with Int and String before
  • haven't seen direct reports of slowdowns from anyone else
  • will be able to focus all of my energy on Rakudo and NQP over the weekend

Nicholas:

  • Leon pointed me to a link about Python 3's uptake a year later
  • only about 1% of registered Python packages support Python 3
  • they originally had a 2to3 script to help migrate forward
  • now they also have a 3to2 script to encourage people to write in the new version
  • interesting to see how one version is more important than the other

Larry:

  • we have a couple of those already
  • it's what kp6 was
  • it's what STD with gimme5 does
  • what I'm trying to bootstrap on viv is
  • the problem is that Perl 5 engine isn't actually capable of supporting the Perl 6 semantics on the bare metal, as it were
  • you either do emulation on top of Perl 5
  • or use cheaters on the innards of Perl 5 to sneak the required semantics in
  • both of those have their downsides
  • code that uses one approach is not interoperable with code that uses a different approach
  • neither of them are strictly interoperable with standard Perl 5 code
  • STD does regexp matching, but it doesn't use P5 regexps to do it

Nicholas:

  • it writes out Perl 5 code to do it longhand?

Larry:

  • yes
  • if you've seen Damian's recent talk on getting the P5 regexp engine to store trees...
  • he had to go through contortions to make that work
  • it's not extensible in the way that P6 regexps are
  • it's only part of the way there
  • it probably has other bugs
  • STD has that emulation layer on top
  • getting that to smoothly stitch Perl 6 code back to Perl 5 as if it were Perl 5....
  • it may be possible at the subroutine boundary, but that's about it

Nicholas:

  • it might have all of the fun of using cfront, then eventually the C compiler chokes

Larry:

  • any time you do multiple passes, you set yourself up for various dislocations
  • most of them painful
  • even so, STD is written in a subset of Perl 6 amenable to backtranslation
  • using that subset of Perl 6 as a better Perl 5 will take you only so far and no further

c:

  • fixed bugs
  • improved performance
  • 1.6 should be faster than 1.4, measurably

Patrick:

  • I have trouble measuring that
  • but my gut agrees
  • we keep doing things in Rakudo that should slow things down
  • but it doesn't get slower
  • I just can't measure it
  • the number of tests per second stays relatively flat
  • Jerry, did you try a practice run of the release?

Jerry:

  • hope to do that tonight
  • started reading the release manager guide

Patrick:

  • it's straightforward

Jerry:

  • it's in English, which helps

Patrick:

  • it's similar to the Parrot steps
  • do a practice release in your own GitHub account
  • the steps are the same as if you were doing a real release

by chromatic at October 20, 2009 01:56 UTC

October 19, 2009

Dave Cross: Speaking in Milton Keynes

Last Thursday I went to visit the nice people at Milton Keynes Perl Mongers. I think I've spoken at one of the technical meetings every year since they started holding them in 2006. I always enjoy speaking to MK.pm. They're a small and friendly group. And they always make me feel really welcome.

This time I tried something a bit different. I had a few talks prepared that I'd given earlier this year, but on their mailing list I asked them to suggest what they wanted me to talk about. After a bit of discussion they came up with a few interesting suggestions and I agreed to present two of them. And, interestingly they came up with two talks that I would never have considered writing.

The talks seemed to go down pretty well and the slides are now available on Slideshare. They probably won't work quite so well without me waffling on in front of them, but you might find them interesting.

  • Maintaining CPAN Modules - the tools and techniques that I use to maintain my small selection of CPAN modules
  • Perl Training - Some experiences, anecdotes and vague conclusions drawn from the eight years that I've been running Perl training courses

I found it an interesting experience writing talks that I hadn't planned to write. It's one that I hope to repeat in the future. Perhaps conferences should consider changing the way that Calls for Papers work. Maybe they should add a checkbox which means "I don't care what I talk about - please give me a title."

by Dave Cross at October 19, 2009 08:11 UTC

Leon Brocard: Games

A few weeks ago I was up in the hills above Geneva reminiscing with my sister about all the things we used to enjoy when we were smaller. When I was younger I used to really enjoy programming computer games, first on my 48K Spectrum and then later on in STOS BASIC and then 68000 assembly language on my Atari ST.

I haven't programmed a game in a very long time. However, I'm an avid gamer, playing games while travelling on my DS and at home on my Xbox 360. I almost enjoy reading Edge magazine more than I like playing games.

At YAPC::Europe in Lisbon, Domm pointed out that the Perl SDL project (which wraps the Simple DirectMedia Layer) was languishing and that we should all program games in Perl.

A few months later I got around to playing with SDL and made a simple breakout clone which I styled after Batty on the Spectrum, but with gravity. It was fairly easy to program, but there was a lot to grasp. The Perl libraries are a mix between a Perl interface to SDL and a Perlish interface to SDL, with limited documentation, tests and examples.

Of course this is where I join the #sdl IRC channel on irc.perl.org and start discussing with the other hackers (kthakore, garu, nothingmuch). We decide on a major redesign to split the project into two sections: the main code will just wrap SDL and then there will be another layer which makes it easier to use. I've started writing a bunch of XS on the redesign branch of the repository while trying to keep Bouncy (my game) still working. There is a bunch of work still to do but we've made a good start. This is what Bouncy looks like at the moment:

[YouTube video]

The physics are pretty fun and it runs pretty fast (1800 frames/second). I'm taking a little break as I'm off to Taipei...

by acme at October 19, 2009 07:34 UTC

Ricardo Signes: pod::elemental approaches first major resting point

After numerous jerks and stops, Pod-Elemental is about as useful as it has to be for work on Pod::Weaver to really build up some steam.

It's well past my bed time, here, but I wanted to do a quick run through of what it can now do.

First, I have it read in the very basic Pod events from a document and convert them into elements. This is exercising only the most basic dialect of Pod. If I load in this document and then dump out its structure (using Pod::Elemental's as_debug_string code) I get this:

Document
  =pod
  |
  (Generic Text)
  |
  =begin
  |
  (Generic Text)
  |
  =image
  |
  =end
  |
  =head1
  |
  =head2
  |
  =method
  |
  (Generic Text)
  |
  =over
  |
  =item
  |
  =back
  |
  =head2
  |
  (Generic Text)
  |
  =head3
  |
  =over
  |
  =item
  |
  =back
  |
  =head1
  |
  (Generic Text)
  |
  =begin
  |
  (Generic Text)
  |
  (Generic Text)
  |
  =end
  |
  =method
  |
  (Generic Text)
  |
  =cut

All those pipes are "Blank" events. Everything else is either a text paragraph or a command. There's nothing else structural. We feed that document to the Pod5 translator, which eliminates the need for blanks, understands the context of various text types, and deals with =begin/=end regions. It takes runs of several text elements separated by blanks and turns them into single text elements.

Document
  (Pod5 Ordinary)
  =begin :dialect
    (Pod5 Ordinary)
    =image
  =head1
  =head2
  =method
  (Pod5 Ordinary)
  =over
  =item
  =back
  =head2
  (Pod5 Ordinary)
  =head3
  =over
  =item
  =back
  =head1
  (Pod5 Ordinary)
  =begin comments
    (Pod5 Data)
  =method
  (Pod5 Ordinary)

So, already this is more readable. That goes for dealing with the structure, too, because we've eliminated all the boring Blank elements. Now we'll feed this to a Nester transformer, which can be set up to nest the document into subsections however we like. This is useful because Pod has no really clearly defined notion of hierarchy apart from regions (and lists, which I have not handled and probably don't need to).

Document
  (Pod5 Ordinary)
  =begin :dialect
    (Pod5 Ordinary)
    =image
  =head1
    =head2
  =method
    (Pod5 Ordinary)
    =over
    =item
    =back
    =head2
    (Pod5 Ordinary)
    =head3
    =over
    =item
    =back
  =head1
    (Pod5 Ordinary)
  =begin comments
    (Pod5 Data)
  =method
    (Pod5 Ordinary)

Now we've got a document with clear sections, but we've got these =method events scattered around at the top level, so we feed the whole document to a Gatherer transformer, which will find all the =method elements and gather them under a container that we specify. (Here we used a =head1 METHODS command.)

Document
  (Pod5 Ordinary)
  =begin :dialect
    (Pod5 Ordinary)
    =image
  =head1
    =head2
  =head1
    =method
      (Pod5 Ordinary)
      =over
      =item
      =back
      =head2
      (Pod5 Ordinary)
      =head3
      =over
      =item
      =back
    =method
      (Pod5 Ordinary)
  =head1
    (Pod5 Ordinary)
  =begin comments
    (Pod5 Data)

That still leaves us with =method elements, so we update the command on all the immediate descendants of the newly-Gathered node and end up with a pretty reasonable looking Pod5-compliant document tree:

Document
  (Pod5 Ordinary)
  =begin :dialect
    (Pod5 Ordinary)
    =image
  =head1
    =head2
  =head1
    =head2
      (Pod5 Ordinary)
      =over
      =item
      =back
      =head2
      (Pod5 Ordinary)
      =head3
      =over
      =item
      =back
    =head2
      (Pod5 Ordinary)
  =head1
    (Pod5 Ordinary)
  =begin comments
    (Pod5 Data)

It doesn't round-trip, but that's the point. We've taken a simple not-quite-Pod5 document and turned it into a Pod5 document. We've also got it into a state where further manipulation is quite simple, because we've created a tree structured nested just the way we want for our uses.

I think the next steps will be further tests for a while. I need to deal with parsing =for events a bit more, then I'll consider making =over groups easier to handle.

At this point, I believe I could replace PodPurler's code with a Pod::Elemental recipe. I might even do that. The real goal, now, is to start implementing Pod::Weaver itself. I think the way forward is clear!

by rjbs at October 19, 2009 04:58 UTC

October 18, 2009

Curtis Poe: CPAN Updates

I've updated a few things on CPAN today. The most important is probably Test::Differences. No new features, but it's a production release (after a year!).

I've also updated Test::Aggregate to a production release. It now supports nested TAP via Test::Aggregate::Nested. The latter is ALPHA code, but it gets around the __END__ and __DATA__ limitations of Test::Aggregate. When Test::Harness supports richer parsing of nested TAP, it should be fully production ready (crosses fingers). Having dinner with Andy Armstrong tonight, so maybe we can talk about this.

Also, by request, I've made Test::Kit ready for production. In the process, I also fixed a bug where it wouldn't handle Test::Most due to namespace clashes.

I wanted to update MooseX::Role::Strict by adding MooseX::Role::Warnings, but due to behavioral changes in Moose, I need to postpone that for a bit.

I've also updated Class::Trait, but only to mark it as deprecated in favor of Moose::Role.

Update: Just uploaded a new Perl6::Caller to eliminate spurious test failures.

by Ovid at October 18, 2009 12:50 UTC

Paul Fenwick: Teaching Perl in Sydney

I've just spent the week teaching Perl in Sydney. It was good. Actually, it was really good. My class were close in ability, asked intelligent questions, thought through problems, asked for assistance when needed, quizzed me about advanced topics during the breaks, and generally showed themselves to be awesome. It felt just like the good ol' days.

Posted: 18th October 2009.

October 18, 2009 00:00 UTC

October 17, 2009

Perl NOC Log: All is well again

We got most things up pretty quickly and I've beaten our console server into submission again so it's ready next time we need it (going to the data center sucks).  If there are any of our services still missing, please let us know.

by Ask Bjørn Hansen at October 17, 2009 20:10 UTC

Perl NOC Log: Fuel Pump Fail

The building where the perl.org datacenter is hosted was performing safety tests today that involved running on generator power for a few hours.  No problems were expected, as our UPS would have (and did) covered the transition.  And then a fuel pump failed in one of the generators, requiring it to be shut down, resulting in us losing power.

Several machines didn't come back up.  So if your favorite perl.org service isn't available or isn't working right today, that's why. 

Ask is on his way to the datacenter to get things sorted.  We'll update this blog as we have more information.

by Robert S at October 17, 2009 18:13 UTC

(Last updated: October 28, 2009 13:21 GMT)