 
Planet Perl is an aggregation of Perl blogs from around the world. It is an often interesting, occasionally amusing and usually Perl-related view of a small part of the Perl community. Posts are filtered on Perl-related keywords. The list of contributors changes periodically. You may also enjoy Planet Parrot or Planet Perl Six for more focus on their respective topics.
Planet Perl provides its aggregated feeds in Atom, RSS 2.0, and RSS 1.0, and its blogroll in FOAF and OPML.
There is life on other planets. A heck of a lot, considering the community of Planet sites forming. It's the Big Bang all over again!
This site is powered by Python (via planetplanet) and maintained by Robert and Ask.

Planet Perl is licensed under
      a Creative
      Commons Attribution-Noncommercial-Share Alike 3.0 United States
      License.  Individual blog posts and source feeds are the
      property of their respective authors, and licensed
      independently.
http://svn.ali.as/cpan/releases/DBD-SQLite-1.26_06.tar.gz
I'm happy to report that we now believe all major bugs are resolved in trunk (well, they might be, anyway).
So this dev release can be considered something close to a release candidate, and I encourage you to download and test it.
Of particular note are any problems you might encounter caused by having foreign keys turned on by default...
I've been known to criticize universities for churning out students who don't have basic skills needed in industry. Now I need to step back and rethink that. Joel Spolsky has a particularly scathing blog post about Computer Science education and at first blush, I was tempted to agree with him. Then I read Joel Spolsky - Snake-Oil Salesman and immediately thought back to my own experiences with academia and have to revisit my thinking.
About a decade ago, I was doing some work with the Alaska Department of Education (side note: if you want to see an example of "dysfunctional", study Alaska politics -- and that's not an oblique reference to Palin). The Department was thinking about creating a Web site that allowed instructors to share lesson plans. Naturally, I learned quite a bit about what was involved. While the people in the Department were dedicated professionals, they were trying to build cathedrals while handcuffed.
Case in point: Alaska was spending a lot of money on education and getting poor results, so the legislature passed a law offering early retirement to the best paid teachers. Many of them took this offer, but grades plummeted. Turns out the best paid teachers were often the best teachers. Who knew?
It's awfully tough to figure out how to maximize return on investment with education. "Pay for performance" schemes are often outlined, but usually by people who have no idea how to measure performance in academia. You can't simply pay for higher grades -- and if you can't see the problem with that, please stop voting :)
Another popular "pay for performance" idea is standardized tests. Give all students the same test and see how they do. Give the best pay to the teachers (or school districts) whose students do the best on this test. One teacher in Oregon lamented to me that she teaches Russian immigrant students. They can't do as well on these tests -- English isn't their first language -- and thus the teachers who take these particularly difficult assignments are looking at less pay for more work. Hmm ...
Another teacher, a friend of mine from Texas, is upset because so much of her time is now spent on "teaching the test". She complains that she struggles to teach her students new skills or critical thinking because she has to spend all of her time figuring out what those tests will ask and prepping the students for those tests. Creative teaching? Forget it. If she doesn't teach the test, her students will do worse on it, and this threatens her job because it threatens her school system's budget. With a lower budget, fewer teachers can be hired. You know who the school system would have to let go.
The "Snake Oil" rebuttal to Joel seemed spot on and from my experience with academia, had the ring of truth (though, of course, I can offer no evidence). Academia is hard. You can't just teach students a narrow set of skills. You have to teach them a broad set because you don't know what will be relevant tomorrow. You don't know what will be relevant to the student. The student won't know what's relevant to the student (which is why we teach algebra to high school students who hate it). It's easy to criticize something we're not intimately familiar with. I should remember that more.
I've gotten really tired of manually switching and resizing windows in vim. You can create more than one window in vim with :split or :vsplit, but navigating between them is often an annoying combination of "control w, direction key towards the window you want to go to". Of course, if you hit the wrong direction key, you don't move anywhere and you silently curse because you were hitting "h" instead of "j". But then the windows often take up large chunks of real estate, so they're tough to read, but you don't want to keep only one window open and switch buffers all the time, so basically, window navigation can be a pain (even with BufExplorer and other plugins).
What some vim users aren't aware of is that you can type "control W, control W" and it will simply cycle you to the next window. You have to do this a few times if you have more than one window open, but since you don't have to move your fingers and remember navigation keys, it's quick and easy. Thus, I wrote this mapping (this does something similar to what other developers on our team do, but fits my work style a touch better):
noremap <Leader>w <C-W><C-W>:res<cr>
In other words, it automatically switches to the next window and resizes it. You can still see the other windows, but they're out of the way.
The Perl 6 design team met by phone on 14 October 2009. Larry, Allison, Patrick, and chromatic attended.
Patrick:
:lang attribute in regexes
Larry:
Patrick:
\x and \c
Allison:
chromatic:
Patrick:
Int, Num, and Complex
Allison:
Patrick:
I love polls and surveys, because they tend to clarify almost any situation enormously. Data has a way of stripping away doubt and uncertainty.
Strawberry Perl exists in the form it does largely because of Michael Schwern's Perl Community survey. His discovery that 10% of respondents worked primarily on Windows, but 40% of respondents worked on Windows "At least once a year" essentially defined the target market, and brought about the current Strawberry mission statement.
"A 100% Open Source Perl for your Windows computer that is exactly the same as Perl everywhere else"
With the about-to-be-released Strawberry October 2009 release, I think we've finally met that challenge. The entire core crypto module set now builds properly, finally giving Windows support for https:// URLs and Module::Signature support (a long-time bugbear). Thanks to hard work by kmx, we can now also include C libraries in a manner that is protected from PATH collisions.
As far as the 40% of people that use Windows once a year goes, Strawberry Perl is now basically "done".
So it's time to turn our attention to the remaining 10% that use Windows every day, and particularly the newbie subset of this group. These people now make up the majority of people that we see arriving in the #win32 channel.
As a general rule, they arrive in the #win32 support channel having successfully downloaded and installed the headline Strawberry release from the front page. But because it is installed Unix-style as just the language, they have no idea what to do next.
There's a notable expectation amongst maybe a third of arrivals that the language comes with an IDE built-in, and questions like "How do I run a program" are common.
Looking at the recent Perl IDE and Editor Poll results, you can see the result of this lack of assistance.
Looking at the lower-percentage results, you can see that a significant number of people are using generic programmers text editors for Perl work, when superior options are now available.
While Padre is not yet a fully capable alternative to EPIC and Komodo, and will likely not be competing with vi(*) or Emacs any time soon, anyone currently using Ultraedit or Notepad++ (the two biggest Windows-specific editors) could easily switch to Padre and see productivity improvements (particularly in the case of Notepad++).
Taking over these groups would triple Padre's market share even among the Padre-biased respondents, it would give us a well-identified target market we can aim at, and it would mean thousands more potential Padre contributors.
We know from Ashton's Law ("Just make it fucking easy to install") that the ease of procurement of software is twice as important to market share as the quality of the software.
So as good as Padre is becoming in its own right, we can amplify the adoption rate enormously by simply making Padre so easy to procure that users don't even need to think about it.
Given the clear and significant benefits to both Strawberry and Padre that would come from working more closely together, and the increased maturity and reliability we are seeing in both codebases, I think the time has finally arrived to start serious work on the long-awaited "Chocolate Perl" concept.
With regards to the name itself, a number of people have mentioned that it would be a bad idea to go with this as the real name of the product, due to brand confusion. And I agree.
So while we will keep the name Chocolate as a codename in the short term, the more likely product name will be something like "Strawberry Perl Professional" (in keeping with the principle of least surprise) and will become the primary download link on the front page of the website.
The current Strawberry Perl product will probably be renamed to something like "Strawberry Perl Lite" or "Strawberry Perl Express" and will potentially be moved off the front page to prevent confusion and limit its users to people who understand specifically what they need.
Looking at my currently installed Strawberry Perl, which contains a reasonable sample of the additions that will be included in Chocolate, I would say we can expect a download size of around 100 MB and an installation footprint of around 500 MB (not including data cached by the CPAN client), making Perl truly "Enterprise Grade" software :)
On top of a bundled copy of Padre, the list of included modules will be long and extensive, and will include four main categories.
1. A set of all significant (working) Win32:: modules, as well as modules for Excel integration and so on.
2. The most popular module sets for specific common project types (BioPerl, Catalyst + plugins, POE + plugins, PDL, SDL, Imager/GD, WWW::Mechanize, and so on)
3. All of the CPAN Top 100 that install on Windows.
4. A set chosen from the most-downloaded modules off http://cpan.strawberryperl.com/
Given how long this installer will take to build, the availability of a true production-grade Perl 5.10.1, and the need to preserve CSJewell's sanity, I would anticipate that Chocolate will only be released as a Perl 5.10 variant.
Anyone savvy enough to know they need 5.8.9 should be reasonably capable of using the current lighter Strawberry product and installing the additional required modules themselves.
Since most of the pieces needed for Chocolate now install quite well (on the October release) I imagine we can produce a fairly decent beta by Christmas, with the first production version arriving as part of the January 2010 Strawberry release.
Perl 5.11.1
I've been behind in my blogging; time seems to fly when one is having fun, and I've been having a pretty good time recently.  Most of it's involved working with people and science, rather than technology.  After I finish my taxes (not yet overdue), this may change.
In the meantime, I can't go without mentioning that Perl 5.11.1 has been released. This isn't a stable version of Perl, but it's a point release on the way toward 5.12.0. I'm quite excited about 5.12.0 for many reasons I'll go into later, but they all involve modernisation of the language.
Of note in 5.11.1 (and hence 5.12.0) is that deprecation warnings are turned on by default. This isn't scary; it means that if you've got old code that's going to break in the future, then Perl will start warning you about that well in advance.
Of other note is a minor point, and that's the ability to include version numbers in package declarations. One can now write package Foo::Bar 1.23, rather than having to do cumbersome things with the $VERSION package variable.
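A minimal sketch of the new declaration syntax (the package name Foo::Bar and its 1.23 version are just illustrative); perl sets the package's $VERSION for you, with no separate assignment:

```perl
use strict;
use warnings;

# New syntax (requires a perl new enough to support it, i.e. the
# 5.11.x development series or later): the version rides along with
# the package declaration.
package Foo::Bar 1.23;

sub greet { "hello from " . __PACKAGE__ }

package main;

# $Foo::Bar::VERSION was set by the declaration above.
print "Foo::Bar version: $Foo::Bar::VERSION\n";
print Foo::Bar::greet(), "\n";
```

Compare that with the old idiom of `our $VERSION = 1.23;` inside the package body, which this feature replaces.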
Posted: 27th October 2009.
Tags: perl perl511 perl5111 perl512
          A few days ago I announced a quick poll to find out what editor or IDE people are using when writing Perl code. The poll is now closed. There were 3,234 answers but as multiple answers were allowed the number of people is lower.
The data of the poll can be found under Perl IDE and Editor Poll, October 2009 on the Perl IDE web site, along with the raw CSV files. It does not allow deep analysis, so here are just a few quick observations.
The traditional Unix editors (vi/vim/gvim and emacs) got almost 50% of the answers. Taking into account that the number of people who answered is probably considerably lower than the number of answers, this means that well over 50% of the responding people use one of those editors. For example, I marked Padre and vim, and I know I was not alone in this.
On the other hand I assume - without any proof - that vim and emacs use is much higher in the core Perl community than among people writing Perl in companies without a connection to the community. As quite likely more "community people" answered the poll than "non-community people" this skews the data in favor of vim/emacs.
I was surprised to see Padre getting 101 votes. I did not think so many people already use Padre.
Padre got more or less the same number of "votes" as TextMate and Komodo Edit which probably means that for all the other editors only a small fraction of the users voted while for Padre every one of its users voted. I don't think that 3% of the people writing Perl are using Padre. I don't think 3% even heard of it.
Anyway, some more thought is required on how to understand this data.
I decided that I really needed syntax highlighting in Pod::Parser::Groffmom. I have an example at Testing with Test::Class. Note that the Perl examples are now colored. It's not what everyone would like, but it works.
To handle syntax highlighting, you just do this:
=for highlight Perl
sub add {
    my ( $self, $data ) = @_;
    my $add = $self->in_list_mode ? 'add_to_list' : 'add_to_mom';
    $self->$add($data);
}
=end highlight
This turns on syntax highlighting. Allowable highlight types are the types allowed for Syntax::Highlight::Engine::Kate. We default to Perl, so the above can be written as:
=for highlight
sub add {
    my ( $self, $data ) = @_;
    my $add = $self->in_list_mode ? 'add_to_list' : 'add_to_mom';
    $self->$add($data);
}
=end highlight
Syntax highlighting is experimental and a bit flaky. Some lines after comments are highlighted as comments. Also, verbatim (indented) POD highlights incorrectly. Common_Lisp is allegedly supported by Syntax::Highlight::Engine::Kate, but I was getting weird stack errors when I tried to highlight it.
Also, don't use angle brackets with quote operators like "q" or "qq". The highlighter gets confused. I've not filed any bug reports as I've no idea if the errors are mine or the syntax highlighting module.
In writing Pod::Parser::Groffmom, I decided to start using the new subtest feature in Test::More. Since I added that, I may as well eat my own dog food.
Why would you want subtests? As test suites grow in size, you often see stuff like this:
{
    diag "Checking customer";
    ok my $customer = Customer->new({
        given_name  => 'John',
        family_name => 'Public',
    }), 'Creating a new customer should succeed';
    isa_ok $customer, 'Customer';
    can_ok $customer, 'given_name';
    is $customer->given_name, 'John',
        'given_name() should return the correct value';
    # ... and so on
}
Programming like this is sometimes useful when we want to:
Points 1 and 2 are obvious, but what about 3? A sure sign of a desire for grouping comes when you see test output like this (from my Sub::Information tests):
ok 5 - Sub::Information->can('name')
ok 6 - ... and its helper module should not be loaded before it is needed
ok 7 - ... and it should return the original name of the subroutine
ok 8 - ... and its helper module should be loaded after it is needed
That's an example of writing tests in a narrative style (something chromatic taught me) so that the output can be (somewhat) human-readable. It's also a case of me writing a test in such a way that I have four assertions logically grouped together to test the behavior of a single feature (think "xUnit"). By grouping tests this way and by encapsulating our scope, we can often more easily refactor our tests in a way that makes sense. So I decided to use subtests. Here's what it looks like:
#!/usr/bin/env perl
use strict;
use warnings;
use Test::Most tests => 5;
use Pod::Parser::Groffmom;
my $parser;
subtest 'constructor' => sub {
plan tests => 2;
can_ok 'Pod::Parser::Groffmom', 'new';
$parser = Pod::Parser::Groffmom->new;
isa_ok $parser, 'Pod::Parser::Groffmom', '... and the object it returns';
};
subtest 'trim' => sub {
plan tests => 2;
can_ok $parser, '_trim';
my $text = <<' END';
this is
text
END
is $parser->_trim($text), "this is\n text",
'... and it should remove leading and trailing whitespace';
};
subtest 'escape' => sub {
plan tests => 2;
can_ok $parser, '_escape';
is $parser->_escape('Curtis "Ovid" Poe'), 'Curtis \\[dq]Ovid\\[dq] Poe',
'... and it should properly escape our data';
};
subtest 'interior sequences' => sub {
plan tests => 6;
can_ok $parser, 'interior_sequence';
is $parser->interior_sequence( 'I', 'italics' ),
'\\f[I]italics\\f[P]', '... and it should render italics correctly';
is $parser->interior_sequence( 'B', 'bold' ),
'\\f[B]bold\\f[P]', '... and it should render bold correctly';
is $parser->interior_sequence( 'C', 'code' ),
'\\f[C]code\\f[P]', '... and it should render code correctly';
my $result;
warning_like { $result = $parser->interior_sequence( '?', 'unknown' ) }
qr/^Unknown sequence \Q(?<unknown>)\E/,
'Unknown sequences should warn correctly';
is $result, 'unknown', '... but still return the sequence interior';
};
subtest 'textblock' => sub {
plan tests => 2;
my $text = <<' END';
This is some text with
an embedded C<code> block.
END
my $expected = <<' END';
This is some text with
an embedded \f[C]code\f[P] block.
END
can_ok $parser, 'textblock';
eq_or_diff $parser->textblock( $text, 2, 3 ), $expected,
'... and it should parse textblocks correctly';
};
(Note that the top-level plan only lists five tests because each subtest is counted as one test.)
And the output:
$ prove -lv t/internals.t
t/internals.t ..
1..5
1..2
ok 1 - Pod::Parser::Groffmom->can('new')
ok 2 - ... and the object it returns isa Pod::Parser::Groffmom
ok 1 - constructor
1..2
ok 1 - Pod::Parser::Groffmom->can('_trim')
ok 2 - ... and it should remove leading and trailing whitespace
ok 2 - trim
1..2
ok 1 - Pod::Parser::Groffmom->can('_escape')
ok 2 - ... and it should properly escape our data
ok 3 - escape
1..6
ok 1 - Pod::Parser::Groffmom->can('interior_sequence')
ok 2 - ... and it should render italics correctly
ok 3 - ... and it should render bold correctly
ok 4 - ... and it should render code correctly
ok 5 - Unknown sequences should warn correctly
ok 6 - ... but still return the sequence interior
ok 4 - interior sequences
1..2
ok 1 - Pod::Parser::Groffmom->can('textblock')
ok 2 - ... and it should parse textblocks correctly
ok 5 - textblock
ok
All tests successful.
Files=1, Tests=5, 2 wallclock secs ( 0.03 usr 0.01 sys + 0.29 cusr 0.07 csys = 0.40 CPU)
Result: PASS
That quickly showed an annoyance. Using subtests surprised me even though I created them! Specifically, having to specify a plan for every subtest is frustrating, but I don't know the number of tests before I've written them. Thus, I have to use no_plan for each subtest and then switch it afterwards.
I think a better strategy is clear: if no plan is included in a subtest, an implicit done_testing should be assumed. Thus, you can write subtests without specifying a plan but still have a bit of safety. I think I know how to implement this, and it would make test authors' lives simpler.
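As a sketch of what that could look like (the subtest body here is invented for illustration), a plan-less subtest would simply be closed off implicitly; this is in fact the behavior that later versions of Test::More adopted:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

# No plan inside the subtest: with an implicit done_testing, adding
# another assertion never forces you to update a count.
subtest 'trim' => sub {
    my $text = "  hi  ";
    $text =~ s/^\s+|\s+$//g;
    is $text, 'hi', 'leading and trailing whitespace removed';
    ok length $text, '... and some text remains';
};

done_testing;
```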
So I wrote a blog post about anecdote-driven development and I confess that not everyone was swayed by my opinion. Seems that lots of people like to introduce themselves as "hi! I'm captain of the USS Make Shit Up" and then start tossing out "facts" (I can be one of these people when I'm not careful). Here's an antidote:
What we actually know about software development, and why we believe it's true, by Professor Greg Wilson of the University of Toronto.
One bit I appreciated, from another presentation of his, is the debunking of the "best programmers are X times more productive than the worst" myth. He cites "28 times" as a commonly used figure, but I usually hear "10". However, he lists the studies for this information and they usually find the best programmers are only 5 times more productive than the worst and this is consistent with other fields.
I love the idea that we can use evidence rather than guesswork, but I doubt this idea will prove popular any time soon.
Having struggled repeatedly with LaTeX, I gave up. That's why github now hosts Pod::Parser::Groffmom. This is an undocumented module (I just started hacking on it) which produces lovely MOM output suitable for groff -mom. That will transform it into PostScript for viewing with gv, Preview.app or anything else which can read PostScript. You can see a sample at Slideshare: I used this module to transform my Logic Programming In Perl article to a lovely PDF. There's virtually no control over output (patches and docs welcome :), but it works for me. It's only been tested on Mac OS X Snow Leopard, but I imagine that any system with a modern 'groff' should be able to read the output.
Now I suppose I should write some docs.
The Perl 6 design team met by phone on 07 October 2009. Larry, Allison, Patrick, and chromatic attended.
Patrick:
Cursors and Match objects
Allison:
Larry:
quietly to trap warning exceptions
warn to emit warnings to talk to standard error
note function now emits a warning to standard error
CHECK time, any routine may be aggressively inlined
SoftRoutine and HardRoutine types now
min and max are exclusive
i as a constant, so you don't have to say 1i
i routine
succ function, you get a range -- but can't do iteration
and should always turn a False into a Nil in list context
Cursor's string enum parsing works better
! pseudo-sigil to hide them from normal Perl 6 code
quietly statement prefix
import work correctly
Setting now defines the Num package inline and imports constants from it
Cursor methods into appropriate sections
loop construct now requires space between parenthesis and curly for consistency with other constructs
viv raises the middle argument of a ternary operator to the same level as the left and right arguments
@_ or %_ unless it's seen them in the block
my ($a, $b, $c) used to be considered a signature, added to the current pad
$_ semantics when reaching the end of a block
repeat/while and repeat/until now require whitespace
{*} stubs, as neither STD nor viv uses them
chromatic:
Allison:
Patrick:
Allison:
Patrick:
Allison:
Patrick:
One thing that makes me cringe is when people randomly invent their own cryptographic protocols. There's a Google Tech Talk by Nate Lawson where he explains some surprising approaches to attacking a cryptographic algorithm. It illustrates why rolling your own is probably a bad idea ;-)
Perhaps the most NIH cryptographic protocol I've seen is digitally signing as well as encrypting a message, in order to store tamper resistant data without revealing its contents. This is often done for storing sensitive data in cookies.
Obviously such a protocol can be built using HMACs and ciphers, but high level tools are already available, ones that have already been designed and analyzed by people who actually know what they're doing: authenticated encryption modes of operation.
Block ciphers are sort of like hash functions: they take a block of data and scramble it.
Simply encrypting your data blocks one by one is not a good way of securing it though. Wikipedia has a striking example:
[Image from Wikipedia: a bitmap encrypted block by block in ECB mode, its outline still clearly visible]
Even though every pixel is encrypted, the data as a whole still reveals a lot.
Suffice it to say that modes of operation are a wrapper that takes a low-level scrambling function, the block cipher, and provides a less error-prone tool, one that is more difficult to misuse.
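To see concretely why the naive block-by-block (ECB-style) approach leaks structure, here is a toy sketch. The keyed hash stands in for a block cipher and is emphatically not real cryptography; the point is only that identical plaintext blocks produce identical ciphertext blocks:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);   # core module, used as a stand-in "cipher"

my $key       = 'sekrit';
my $plaintext = 'AAAAAAAABBBBBBBBAAAAAAAA';   # blocks 1 and 3 are identical

# "Encrypt" each 8-byte block independently, ECB style.
my @blocks = $plaintext =~ /(.{8})/gs;
my @cipher = map { md5_hex( $key . $_ ) } @blocks;

# Identical plaintext blocks map to identical ciphertext blocks,
# which is exactly the pattern leak visible in the Wikipedia image.
print "block 1 == block 3? ", ( $cipher[0] eq $cipher[2] ? "yes" : "no" ), "\n";
```

A chaining or counter mode (and, better still, an authenticated mode) breaks this correspondence by mixing position-dependent state into every block.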
Crypt::CBC and Crypt::Ctr are implementations of some of the more classic cipher modes. But this post is ranting about people not using authenticated modes.
Crypt::GCM and Crypt::EAX implement two different AEAD modes of operation using any block cipher.
These are carefully designed and analyzed algorithms, and the CPAN implementations make use of the tests from the articles describing the algorithms, so it sure beats rolling your own.
Secondly, Crypt::Util provides a convenience layer that builds on these tools (and many others), so perhaps Crypt::Util already handles what you want.
To tamper protect a simple data structure you can do something like this:
my $cu = Crypt::Util->new( key => ... );
my $ciphertext = $cu->tamper_proof_data( data => { ... }, encrypt => 1 );
Crypt::Util will use Storable to encode the data into a string, and then use an authenticated encryption mode to produce the ciphertext.
To decrypt, simply do:
my $data = $cu->thaw_tamper_proof( string => $ciphertext );
Crypt::Util will decrypt and validate the ciphertext, and only once it's sure that the data can be trusted will it start unpacking it, using Storable to deserialize the message if appropriate. All allocations based on untrusted data are limited to 64KiB.
I'm not saying that the CPAN code is guaranteed to be safe. I'm saying this is a better idea than rolling your own. If your application is sensitive you have no excuse not to open up the code and audit it.
I was working with Pod::PseudoPod to create rich, structured documents with the idea that I could create nicely formatted PDFs. So far, the only reasonable way (no, plain text and HTML are not "reasonable") to accomplish this seems to be to emit DocBook XML from the PseudoPod and then convert that to LaTeX or PDF format. However, trying to find a tool which does that on Mac OS X Snow Leopard has left me at a dead end (CPAN libraries which claim to help are falling down badly).
So, what do you use to address this situation? Software which can actually install on OS X would be a bonus. Software which doesn't require me to read through volumes of documentation to figure out that one little setting would also be a bonus. I don't mind a little work, but so far, most software I've found has been miserable (e.g., openjade segfaults and I've no idea why).
One of the optional metrics used by CPANTS is whether the module has examples. I think it checks if there is an eg/ directory, but merely having an eg/ directory with examples does not give the users much.
Most people install modules using CPAN.pm or their native package management tool. Neither of those installs the examples. Very few download the source and unzip it, so they won't see the examples.
OK, I admit. I always have trouble locating examples even when the documentation of the module tells me to look at them.
So what other options are there?
If there is POD in the example files then search.cpan.org will display them, but of course it will display the POD and not the code.
In Win32::GuiTest there is a make_eg.pl script that is executed when the distribution is generated. It takes all the files from the eg/ directory and creates a module called Win32::GuiTest::Examples, putting the examples in the POD section. This means the actual examples can be easily read on search.cpan.org, and once the module is installed they can be easily found by typing
perldoc Win32::GuiTest::Examples
In Padre and in Padre::Plugin::Parrot we instruct CPAN.pm to install the examples in the share directory. Actually we have a share/ directory and the examples are within that directory in share/examples. Then in Makefile.PL we have instructions to install the share directory. (In the case of Module::Install it is a call to install_share.)
Once the module is installed the examples are also installed and they can be found with the following call:
  $dir = File::Spec->catdir(
		File::ShareDir::dist_dir('Padre-Plugin-Parrot'),
		'examples');
That is,
  File::ShareDir::dist_dir('Padre-Plugin-Parrot')
returns the path to the share directory of the package.
In Padre we can then create a menu item, Open Example, that will open the regular Open File window, except positioned in the directory of the examples supplied by Padre. Similarly, the Parrot Plugin has (or I think will have in the next release) a menu item for the same thing.
Of course it is not restricted to people using Padre as anyone could use the above code.
There are a number of modules on CPAN with some way to run the examples embedded in modules. For example, if you look at the SDL::Tutorial package of SDL_Perl (you know, the newly and rightly hyped module that will help you burn all your free time by writing games in Perl), you will find simple instructions on how to run the sample script:
perl -MSDL::Tutorial -e 1
There are a number of modules on CPAN with some form of auto-extracting examples, but I just can't find them. The idea there is that the sample script is embedded in a module and running
perl -MModule::Name -e1
will create a file example.pl that holds the example script. Actually I think the SDL examples were meant to do this as well, but if I remember correctly they don't extract anything, they just run the example.
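A sketch of how such an auto-extracting module could work (My::Module::Examples and the example contents are invented names for illustration): loading the module with -M calls its import, which writes the embedded script to disk instead of running anything.

```perl
use strict;
use warnings;

package My::Module::Examples;

# import is called automatically by `use` / `perl -M`; here it
# extracts the bundled example to the current directory.
sub import {
    open my $fh, '>', 'example.pl'
        or die "Can't write example.pl: $!";
    print {$fh} <<'END_EXAMPLE';
#!/usr/bin/env perl
print "Hello from the extracted example!\n";
END_EXAMPLE
    close $fh;
}

package main;

# Simulate `perl -MMy::Module::Examples -e1`:
My::Module::Examples->import;
```

After running, an example.pl file sits in the current directory, ready to be read or executed.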
As we can see there are several ways to include examples in a CPAN distribution, with varying levels of ease of use. Actually we could even combine all the solutions and make sure people have several easy ways to see running examples of the modules on CPAN.
These could also help a lot with module evaluation and learning.
When it still worked reliably, my heaviest usage of Devel::DProf wasn't for the profiling. What I used it for the most was tracing.
The dprofpp -T option would generate a space-indented dump of every single function call made in the program, including all the BEGIN stuff as the program compiled.
It was particularly useful for crashing or failing programs, because you could just start at the end of the trace and then watch the explosion backwards in slow motion, moving back to find the place where it all started to go wrong.
Unfortunately the new king of profilers, Devel::NYTProf, can't replicate this neat trick (yet?).
In the mean time, does anyone have a recommendation of where I can go to get the same information? I can't find anything obvious...
My fiancée was rather curious about what I do for a living, so she started researching Perl training companies. Part of this is with the idea that, long-term, we might consider launching a Perl consultancy firm (initially focused on training). Given that her background is legal (she has a Masters in French law), project management, communications and political consulting, it's fair to say that she has plenty of business background but no technical background. As a result, she was looking at various sites with the eye of someone who might hire them to do Perl training. She found one, and only one, that she would consider. That was Perl Training Australia. Why?
Make of that what you will, but if you comment, don't insult my fiancée :) Remember, she's looking at this from the point of view of someone considering Perl for their company. And just to be clear, she's looked at many, many Web sites of companies offering Perl training (and if you know Perl, you know some of those names).
I kept meaning to post this, and I can't believe it's already this late in October.
Jonas Nielsen and I are going to be in Dublin, Ireland on Sunday because we're both running the marathon on Monday. If any local Perl Mongers want to get together for a drink on Sunday early evening, let me know. I'm staying about a mile from the marathon start and otherwise have no idea of the geography.
It might be my only time to see Jonas. Once he leaves the start line, I won't see him for hours. :)
Read more of this story at use Perl.
Elliot forced me to accept a commit bit to the Perl Critic repo.
My first policy will be *::YouArentAllowedToProgramAnymore, which deletes your source if it finds that you fail any policy, you have any use of 'no critic', you run Perl::Critic more than once on the same file, or if it's Friday afternoon. If you have your stuff in source control, you should be safe. That's the fence you have to jump over to get back in, though.
Elliot already shot down my policy suggestions for *::NotEnoughVowels, *::PassiveVoiceInString, *::MisconjugatedVerb, *::StupidVariableName, *::UsesWindows, *::DependsOnModules, *::YouEditedThisInEmacs, *::Magic8Ball, *::YourNameIsPudge, *::YourModuleWebsiteIsUgly, and *::YourPerlIsSoLastMonth.
What have I been up to lately? I’ve been tooling (toiling?) away in the Perl toolchain smithy, fixing up modules on CPAN and patching blead.
CPAN module work:
Work on the Perl core (and related modules):
This has been keeping me too busy to work on finishing inc bundling for Module::Build and a number of personal projects, but I hope to get back to them soon.
(This is a copy of an email to the BioPerl mailing list)
Dear BioPeople
Since we added a "Live Support Chat" link to the front page of the Strawberry Perl website ( http://strawberryperl.com/ ), we've noticed that one of the main types of visitor we see in the IRC channel ( #win32 on irc.perl.org ) is biologists trying to install various things.
As a result, for the October 2009 release of Strawberry Perl, we've prioritised adding support for CPAN-installation of BioPerl.
Changes include a major improvement to crypto support (providing OpenSSL and support for https:// URLs), shipping the Postgres and MySQL DBI drivers in the default installation, and adding Berkeley DB support (the lack of which previously prevented us from meeting the dependencies of the BioPerl distribution).
I'm happy to report that we now believe we are in a position to officially support the installation of BioPerl on Strawberry Perl.
Prior to the official release next week, we would appreciate testing from the BioPerl community.
Release candidate installers for the October release are available at the following URLs:
http://strawberryperl.com/download/strawberry-perl-5.10.1.0.msi
http://strawberryperl.com/download/strawberry-perl-5.8.9.3.msi
Once installed, run Start -> Program Files -> Strawberry Perl -> CPAN Client
From the CPAN client command line, run "install BioPerl" and select the default options.
Once installed, you should be able to do anything you do normally with a BioPerl install.
Assuming all goes well in this release, for the next release in January 2010 we plan to also produce a "Strawberry Perl Professional" distribution that will bundle BioPerl as part of the default installation, as well as a Perl IDE and other useful packages.
Success or failure of the release candidate can be reported either here [the BioPerl list] or to #win32 on irc.perl.org.
Adam K
The Perl 6 design team met by phone on 30 September 2009. Larry, Allison, Patrick, Jerry, Nicholas, and chromatic attended.
[The notes for this meeting were mangled in aggregation; only fragments survive. Recoverable topics include: temp and let in regular expressions, and a statement_control import; BEGIN semantics being problematic; the Numeric, Integral, Real, and Stringy roles; prefix ... as a stub that doesn't complain about redefinition; pi, e, and i in CORE::; fixes to viv in various places where it didn't propagate information into the AST correctly; ResizableIntegerArray; making Cursors and Match objects more immutable; whether a Match object should contain the Cursor used to produce it; where pos lives (Cursor.pos comes from the base class); and the observation that STD cheats all over the place, treating a Cursor as a hash in a few places.]
Gabor Szabo is running a survey about Perl development:
I have set up a simple five-second poll to find out what editor(s) or IDE(s) people use for Perl development. I'd very much appreciate it if you clicked on the link and answered the question. You can mark up to 3 answers.
Please also forward this mail within the company where you work, and to people at your previous company, so we can get a large and diverse set of responses.
The poll will close within a week, or once we reach 1,000 voters, whichever comes first.
Let's pretend you never heard me talking about Perl editors and IDEs. Would you please spend 5 seconds to answer the poll Which editor(s) or IDE(s) are you using for Perl development? I will close the poll in 10 days or after 1,000 responses, whichever comes first. Act now!
List based on Perl Development Tools table on PerlMonks.
After yesterday's visit to the Budapest Perl Mongers, today I went to see the Perl Mongers in Amsterdam.
These links are collected from the Perlbuzz Twitter feed. If you have suggestions for news bits, please mail me at andy@perlbuzz.com.
Tatsuhiko Miyagawa released Devel::StackTrace::AsHTML and blogged about it.
I thought this would make a neat Perl debugger command, so I wrote DB::Pluggable::StackTraceAsHTML. It is a plugin for DB::Pluggable. It adds the Th command to the debugger, which displays a stack trace in HTML format, with lexical variables. It then opens the page in the default browser.
Here is an example of how to use it:
$ perl -d test.pl

Loading DB routines from perl5db.pl version 1.3
Editor support available.

Enter h or `h h' for help, or `man perldebug' for more help.

main::(test.pl:14):     my $n = 12;
  DB<1> r
main::fib(test.pl:12):          return fib($i - 1) + fib($i - 2);
  DB<1> Th
The result would look something like this: [screenshot of the HTML stack trace with lexical variables]
To enable the plugin, just add it to your ~/.perldb, like so:
use DB::Pluggable;
use YAML;

$DB::PluginHandler = DB::Pluggable->new(config => Load <<EOYAML);
global:
  log:
    level: error

plugins:
  - module: BreakOnTestNumber
  - module: StackTraceAsHTML
EOYAML

$DB::PluginHandler->run;
By the way, to be minimally invasive to the existing Perl debugger, the command is defined using the debugger's aliasing mechanism. Normally you define an alias as a regular expression that will change the command the user enters to a known command, but here we circumvent that and call our command handler directly. The following method is from DB::Pluggable::Plugin:
sub make_command {
    my ($self, $cmd_name, $code) = @_;
    no strict 'refs';
    my $sub_name = "DB::cmd_$cmd_name";
    *{$sub_name} = $code;
    $DB::alias{$cmd_name} = "/./; &$sub_name;";
}
To define a new foo command in a plugin, you then use:
package DB::Pluggable::StackTraceAsHTML;

use strict;
use warnings;
use base qw(DB::Pluggable::Plugin);

sub register {
    my ($self, $context) = @_;

    $self->make_command(
        foo => sub {
            # ...
        }
    );
}
I opened a separate blog to write about the Perl Mongers in general and about the specific groups. I started with a few words about the Perl Mongers and an entry about the Perl Mongers in Budapest, Hungary.
The Perl 6 design team met by phone on 23 September 2009. Larry, Patrick, Jerry, and Nicholas attended.
[These meeting notes were also mangled in aggregation; only fragments survive. Recoverable topics include: :by was ugly and is now deprecated; 0..*, 0..*-1, and 0..Inf all work out the same; :3 without a trinary number after it; and using cursor and gimme5 to do similar things for implementing the grammar engine.]
One problem with Bundles (and Task::, from what I can see) is that they don't let you specify a particular set of module versions which play well together.
Quite often, I find that I'm trying to downgrade Moose when I need to run an older version of our work code. But that means I also have a few other modules which need to be downgraded at the same time and this becomes painful.
It would be nice to have monolithic "Everything you need for this Moose version" modules, bundling a known-good set of modules together. This would allow a developer to do something like "cpan 'D/DR/DROLSKY/Everything-Moose-0.83.tar.gz'" and get everything they need for that version of Moose installed.
The problem seems to be that CPAN doesn't let you discriminate between different authorities releasing the same module. I can upload a version of Moose, but it's labeled an "unauthorized release" if I do. Authorities might help. For example, Class::MOP has the following code:
our $AUTHORITY = 'cpan:STEVAN';
If CPAN had recognized this (it doesn't yet, does it?), I could theoretically bundle Moose 0.83 with Class::MOP 0.88 (and other dependencies) and have this in each module:
our $AUTHORITY = 'cpan:OVID';
And create a distribution which is easy to upgrade/downgrade with full dependencies packaged with it. In fact, we could have several people release identical versions of the same module and the "default" authority would be the owners of that namespace, but you could then do this:
cpan Moose --authority=cpan:SOMECPANID
So if you want an unauthorized release, you must specifically ask for it. Otherwise, the authority is assumed to be cpan:DROLSKY.
Would this work?
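The selection rule being proposed could be sketched in plain Perl. To be clear, nothing below is a real CPAN client API; the pick_release function and the release data are made up purely to illustrate the idea of defaulting to the namespace owner's authority unless another one is requested:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative sketch only: how a client might filter candidate
# releases of a module by the $AUTHORITY the user asked for,
# defaulting to the namespace owner. The data here is invented.
my @releases = (
    { module => 'Moose', version => '0.83', authority => 'cpan:DROLSKY' },
    { module => 'Moose', version => '0.83', authority => 'cpan:OVID'    },
);

sub pick_release {
    my ($module, $want_authority) = @_;
    $want_authority ||= 'cpan:DROLSKY';    # default: namespace owner
    my ($match) = grep {
        $_->{module} eq $module && $_->{authority} eq $want_authority
    } @releases;
    return $match;
}

print pick_release('Moose')->{authority}, "\n";                 # cpan:DROLSKY
print pick_release('Moose', 'cpan:OVID')->{authority}, "\n";    # cpan:OVID
```

The point is just that the default path never hands you an unauthorized release; you have to ask for one explicitly, exactly as the post suggests.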
As mentioned previously, I've still been working on Perl marketing. Now that I'm on the TPF Board, I've been setting things up there. I'm happy to announce that TPF has approved The Perl Foundation Marketing Committee.
We've been doing more than just creating the committee. Dan Magnuszewski, the committee chair, has also been laying a lot of the groundwork for the various things we need to do. Please respond on the blog rather than here. It would be nice to have feedback in one place.
The Perl 6 design team met by phone on 16 September 2009. Larry, Patrick, Jerry, Nicholas, and chromatic attended.
[These meeting notes were likewise mangled in aggregation; only fragments survive, mentioning plan lines and the kp6, gimme5, and viv toolchain.]
A few weeks ago I was up in the hills above Geneva, reminiscing with my sister about all the things we used to enjoy when we were smaller. When I was younger I really enjoyed programming computer games, first on my 48K Spectrum, then later in STOS BASIC, and then in 68000 assembly language on my Atari ST.
I haven't programmed a game in a very long time. However, I'm an avid gamer, playing games while travelling on my DS and at home on my Xbox 360. I almost enjoy reading Edge magazine more than I like playing games.
At YAPC::Europe in Lisbon, Domm pointed out that the Perl SDL project (which wraps the Simple DirectMedia Layer) was languishing and that we should all program games in Perl.
A few months later I got around to playing with SDL and made a simple breakout clone which I styled after Batty on the Spectrum, but with gravity. It was fairly easy to program, but there was a lot to grasp. The Perl libraries are a mix between a Perl interface to SDL and a Perlish interface to SDL, with limited documentation, tests and examples.
Of course this is where I join the #sdl IRC channel on irc.perl.org and start discussing with the other hackers (kthakore, garu, nothingmuch). We decide on a major redesign to split the project into two sections: the main code will just wrap SDL and then there will be another layer which makes it easier to use. I've started writing a bunch of XS on the redesign branch of the repository while trying to keep Bouncy (my game) still working. There is a bunch of work still to do but we've made a good start. This is what Bouncy looks like at the moment:
[YouTube video]
The physics are pretty fun and it runs pretty fast (1800 frames/second). I'm taking a little break as I'm off to Taipei...
After numerous jerks and stops, Pod-Elemental is about as useful as it has to be for work on Pod::Weaver to really build up some steam.
It's well past my bed time, here, but I wanted to do a quick run through of what it can now do.
First, I have it read in the very basic Pod events from a document and convert them into elements. This exercises only the most basic dialect of Pod. If I load in this document and then dump out its structure (using Pod::Elemental's as_debug_string code), I get this:
Document
  =pod
  |
  (Generic Text)
  |
  =begin
  |
  (Generic Text)
  |
  =image
  |
  =end
  |
  =head1
  |
  =head2
  |
  =method
  |
  (Generic Text)
  |
  =over
  |
  =item
  |
  =back
  |
  =head2
  |
  (Generic Text)
  |
  =head3
  |
  =over
  |
  =item
  |
  =back
  |
  =head1
  |
  (Generic Text)
  |
  =begin
  |
  (Generic Text)
  |
  (Generic Text)
  |
  =end
  |
  =method
  |
  (Generic Text)
  |
  =cut
All those pipes are "Blank" events. Everything else is either a text paragraph or a command. There's nothing else structural. We feed that document to the Pod5 translator, which eliminates the need for blanks, understands the context of various text types, and deals with =begin/=end regions. It takes runs of several text elements separated by blanks and turns them into single text elements.
Document
  (Pod5 Ordinary)
  =begin :dialect
    (Pod5 Ordinary)
    =image
  =head1
  =head2
  =method
  (Pod5 Ordinary)
  =over
  =item
  =back
  =head2
  (Pod5 Ordinary)
  =head3
  =over
  =item
  =back
  =head1
  (Pod5 Ordinary)
  =begin comments
    (Pod5 Data)
  =method
  (Pod5 Ordinary)
So, already this is more readable. That goes for dealing with the structure, too, because we've eliminated all the boring Blank elements. Now we'll feed this to a Nester transformer, which can be set up to nest the document into subsections however we like. This is useful because Pod has no really clearly defined notion of hierarchy apart from regions (and lists, which I have not handled and probably don't need to).
Document
  (Pod5 Ordinary)
  =begin :dialect
    (Pod5 Ordinary)
    =image
  =head1
    =head2
  =method
    (Pod5 Ordinary)
    =over
    =item
    =back
    =head2
    (Pod5 Ordinary)
    =head3
    =over
    =item
    =back
  =head1
    (Pod5 Ordinary)
  =begin comments
    (Pod5 Data)
  =method
    (Pod5 Ordinary)
Now we've got a document with clear sections, but we've got these =method events scattered around at the top level, so we feed the whole document to a Gatherer transformer, which will find all the =method elements and gather them under a container that we specify. (Here we used a =head1 METHODS command.)
Document
  (Pod5 Ordinary)
  =begin :dialect
    (Pod5 Ordinary)
    =image
  =head1
    =head2
  =head1
    =method
      (Pod5 Ordinary)
      =over
      =item
      =back
      =head2
      (Pod5 Ordinary)
      =head3
      =over
      =item
      =back
    =method
      (Pod5 Ordinary)
  =head1
    (Pod5 Ordinary)
  =begin comments
    (Pod5 Data)
That still leaves us with =method elements, so we update the command on all the immediate descendants of the newly-gathered node and end up with a pretty reasonable looking Pod5-compliant document tree:
Document
  (Pod5 Ordinary)
  =begin :dialect
    (Pod5 Ordinary)
    =image
  =head1
    =head2
  =head1
    =head2
      (Pod5 Ordinary)
      =over
      =item
      =back
      =head2
      (Pod5 Ordinary)
      =head3
      =over
      =item
      =back
    =head2
      (Pod5 Ordinary)
  =head1
    (Pod5 Ordinary)
  =begin comments
    (Pod5 Data)
It doesn't round-trip, but that's the point. We've taken a simple not-quite-Pod5 document and turned it into a Pod5 document. We've also got it into a state where further manipulation is quite simple, because we've created a tree structure nested just the way we want for our uses.
I think the next steps will be further tests for a while. I need to deal with parsing =for events a bit more, then I'll consider making =over groups easier to handle.
At this point, I believe I could replace PodPurler's code with a Pod::Elemental recipe. I might even do that. The real goal, now, is to start implementing Pod::Weaver itself. I think the way forward is clear!
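The "gather" step described above is easy to illustrate with a toy tree transformation. To be clear, this is NOT the real Pod::Elemental API, just the shape of the idea in plain hashes, with names (gather_methods, children) invented for the example:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy illustration of gathering: pull every =method node out of a
# flat list of top-level nodes and park them all under a single
# "=head1 METHODS" container node. Plain hashes, not Pod::Elemental.
sub gather_methods {
    my (@nodes) = @_;
    my (@kept, @methods);
    for my $node (@nodes) {
        if ( $node->{command} && $node->{command} eq 'method' ) {
            push @methods, $node;
        }
        else {
            push @kept, $node;
        }
    }
    push @kept,
        { command => 'head1', content => 'METHODS', children => \@methods }
        if @methods;
    return @kept;
}

my @doc = (
    { command => 'head1',  content => 'NAME' },
    { command => 'method', content => 'new' },
    { command => 'head1',  content => 'DESCRIPTION' },
    { command => 'method', content => 'run' },
);

my @gathered = gather_methods(@doc);
print join( ', ', map { "=$_->{command} $_->{content}" } @gathered ), "\n";
# =head1 NAME, =head1 DESCRIPTION, =head1 METHODS
print scalar @{ $gathered[-1]{children} }, "\n";    # 2
```

The follow-up step in the post (rewriting the gathered =method commands to =head2) would then just be a map over the container's children.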
I've updated a few things on CPAN today. The most important is probably Test::Differences. No new features, but it's a production release (after a year!).
I've also updated Test::Aggregate to a production release. It now supports nested TAP via Test::Aggregate::Nested. The latter is ALPHA code, but it gets around the __END__ and __DATA__ limitations of Test::Aggregate. When Test::Harness supports richer parsing of nested TAP, it should be fully production ready (crosses fingers). Having dinner with Andy Armstrong tonight, so maybe we can talk about this.
Also, by request, I've made Test::Kit ready for production. In the process, I also fixed a bug where it wouldn't handle Test::Most due to namespace clashes.
I wanted to update MooseX::Role::Strict by adding MooseX::Role::Warnings, but due to behavioral changes in Moose, I need to postpone that for a bit.
I've also updated Class::Trait, but only to mark it as deprecated in favor of Moose::Role.
Update: Just uploaded a new Perl6::Caller to eliminate spurious test failures.
Teaching Perl in Sydney
I've just spent the week teaching Perl in Sydney.  It was good.  Actually, it was really good.  My class were close in ability, asked intelligent questions, thought through problems, asked for assistance when needed, quizzed me about advanced topics during the breaks, and generally showed themselves to be awesome.  It felt just like the good ol' days.
Posted: 18th October 2009.
The building where the perl.org datacenter is hosted was performing safety tests today that involved running on generator power for a few hours. No problems were expected, as our UPS would cover (and did cover) the transition. But then a fuel pump failed in one of the generators, requiring it to be shut down and causing us to lose power.
Several machines didn't come back up. So if your favorite perl.org service isn't available or isn't working right today, that's why.
Ask is on his way to the datacenter to get things sorted. We'll update this blog as we have more information.