Planet Perl - an aggregation of Perl blogs
The Wayback Machine - http://web.archive.org/web/20090227125600/http://planet.perl.org:80/

Planet Perl

February 27, 2009

Curtis Poe: Throwing Away All of My Trigger Work

Unless someone on our team thinks of an incredibly creative solution, all of my work with triggers will need to be thrown away. I've used them previously, but only in our test database. Unfortunately, if you read the "create trigger" documentation carefully, it mentions a very interesting caveat:

In MySQL 5, prior to version 5.1.6, creating triggers requires SUPER privileges (annoying, but I can live with that), and so does executing them.

We're on 5.0.45. Naturally, our production code is not going to run with SUPER privileges. Even if we were so foolish as to think this was a good idea (and yeah, go ahead and run Apache as root, will ya?), we share this database server with other teams who would strenuously object to our running as SUPER. Plus, upgrading to 5.1.6 means negotiating with all of those other teams. I don't think that's going to happen.

I have a lot of unpleasant work ahead of me ripping out triggers and reimplementing them in our DBIx::Class code :(

Aside from the fact that we're on an older version of MySQL, how on earth could the MySQL developers have thought that requiring SUPER privileges to run triggers was a good idea?

by Ovid at February 27, 2009 12:10 UTC

February 26, 2009

Leon Brocard: What is Moose and why is it the future?

Last week London.pm organised a technical meeting themed around the question "What is Moose and why is it the future?". I was in Taipei on the day, but Peter Edwards took charge, and the speakers Ash Berlin, Tomas Doran, Mike Whitaker and Piers Cawley introduced us to Moose, showed how to use it effectively and explained advanced techniques. I think everyone was convinced to at least try it out. The slides are available on the London.pm website.

by acme at February 26, 2009 17:18 UTC

Curtis Poe: Why Visualization Rocks

You've probably read about Class::Sniff, software I've "written in anger" to deal with some terrible class composition issues. Because I already had a graph available, I decided to use it for visualization of class hierarchies. I know some people sneer at visualization ("hey, just let me read the code!"), but really, they're wrong. Reading code gives you fine-grained knowledge that a diagram cannot. Seeing the diagram instantly gives you knowledge about the code that could otherwise take ages of slogging through the code to acquire.

jplindstrom started using Class::Sniff to refactor our code base. Here's a small part of it before using Class::Sniff. Here's the same bit after using Class::Sniff. It still has issues, but it is far, far better than it was. Heck, which would you want to work on?

Our test suite reveals that this refactoring has introduced one bug -- amusingly in the code with the tiny amount of multiple inheritance left. Seems we have an exception being thrown twice, but we're having trouble tracking it down.

by Ovid at February 26, 2009 15:56 UTC

Perl Buzz: Perlbuzz news roundup for 2009-02-25

As promised, I'm going to start posting the quickie news tweets that I post to the Perlbuzz twitter feed here in the main Perlbuzz blog. These are links I found interesting and newsworthy, but didn't have any commentary or other story to go with them.

Here are the last twenty.

by Andy Lester at February 26, 2009 06:18 UTC

chromatic: Perl 6 Design Minutes for 18 February 2009

The Perl 6 design team met by phone on 18 February 2009. Larry, Patrick, Allison, Jerry, Nicholas, and chromatic attended.

Larry:

  • fixed STD so that if you added A::B, it added A as a subpackage
  • so as not to complain about a missing package declaration
  • can now have $! as a parameter
  • mostly working over a lot of the error messages to be friendly to Perl 5 programmers
  • if you use do/while or do/until it now complains
  • if you use if or one of the keywords as if it's a function, it tells you what the problem is without saying "I don't recognize that function name"
  • people coming from a Haskell background think they can write 1.. to get an indefinite range
  • it now tells them what they should write instead
  • fixed a bug which reported runaway strings which start and stop on the same line
  • not really a runaway
  • now it reports that only if the string crosses a newline
  • in the spec space
  • to accompany the ability to use a bare sigil in declarations as an anonymous name, now you can use a bare :: to signify an anonymous package name or type
  • allows us to have a package named is without ambiguity
  • blew away the Main package; combined it with the GLOBAL package
  • everything in the main program comes up starting in the GLOBAL namespace
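
Larry's note about `1..` above: Haskell's `[1..]` is a lazy infinite list, and the Perl 6 spelling is `1..*`. As a cross-language illustration (not from the minutes), a lazy indefinite range can be sketched in Python:

```python
from itertools import count, islice

def indefinite_range(start=1):
    """A lazy, endless range -- roughly what Perl 6 writes as 1..* and
    Haskell as [1..]. Nothing is computed until a consumer asks for it."""
    return count(start)

# Consumers take only as much as they need; the range itself never ends.
first_five = list(islice(indefinite_range(), 5))
```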

Patrick:

  • oh, I'm happy!

Larry:

  • thought the distinction served no useful purpose
  • continued doing spec work combining context variables with globals
  • at least in terms of twigils
  • realized that filehandles belong to the PROCESS namespace, not the GLOBAL namespace
  • continuing on the vein of .Whatever on most normal operators builds a closure of one argument
  • made *.method do the same thing
  • even in places which are not syntactically special, we can use *.prime in a grep for example
  • equivalent to putting .prime in the curlies
  • now we have a fairly general mechanism of writing closures of a single argument
  • actually reads pretty well if you don't want the curlies
  • we could go as far as to undo the special syntax on whens and ~~ so you have to say *.foo to call a method
  • still thinking about that
  • would regularize the syntax slightly
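
The `*.method` form Larry describes builds a closure of one argument. A rough Python analogue (illustrative only, not the Perl 6 implementation) is `operator.methodcaller`, which likewise turns a method name into a one-argument closure usable in a grep-style filter:

```python
from operator import methodcaller

# *.upper in a map position: a closure that calls .upper() on its argument.
words = ["perl", "moose", "parrot"]
shouted = list(map(methodcaller("upper"), words))

# *.isupper in a grep position -- equivalent to putting .isupper in the curlies.
only_upper = list(filter(methodcaller("isupper"), ["ABC", "abc", "XYZ"]))
```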

Patrick:

  • had a little emergency over the weekend, but things are fine
  • improving Rakudo's build process to make it easier to build
  • someone who wants to build it can get a copy of the repo and pass a special option to Rakudo's Configure.pl
  • that'll download a copy of Parrot from the right revision and build it for you
  • people who want to play with Rakudo don't have to play with Parrot dependencies
  • Jonathan and I have most of the guts ready to start writing Setting code
  • we can start to write methods in Perl 6 instead of in PIR
  • we'll gradually migrate methods which make sense to rewrite in Perl 6
  • moving those over into the Setting code
  • that'll make it easier for people to hack on
  • some things will remain in PIR, but we don't know what those are yet
  • most of my plans for this week is doing more cleanups in the build process
  • rewriting Rakudo's t/harness to remove dependencies on Parrot files
  • have that mostly done
  • writing some articles discussing Rakudo's new home
  • how to get it, how to build it, and updating websites

Allison:

  • working on Parrot's install process
  • making it so that Patrick doesn't have to build and install a whole copy of Parrot to build Rakudo

Patrick:

  • we'll need that for some time

Allison:

  • trying to make it so that people who build and install packages have an easier time
  • you won't have to depend on a Parrot build directory
  • you can run against an installed Parrot

Patrick:

  • I would like to get rid of the build tree dependency

Allison:

  • the patch I have gets rid of most of the build tree dependencies
  • there are a few header files in weird locations
  • they don't get included in PMCs and dynops
  • I've fixed all of the build tools
  • all that's left is C-level stuff
  • should probably send you the patch to experiment with
  • lots of stuff preparing for the release
  • seems to be encouraging that we've been getting more novice questions
  • seems to be an influx of interest here

c:

  • talking to Richard Blackwell about YAPC
  • Parrot and Perl 6 hackathon needs
  • working on TODO/SKIP test review for Parrot
  • also pondering bug triaging guidelines for Parrot
  • having some discussions about release policies and deprecations (though mostly Perl 5)
  • setting expectations early seems to help a lot

Nicholas:

  • what projects did you look at, or did you look at things afresh?

c:

  • Subversion's really early milestone release process was a real inspiration
  • the Linux kernel's backwards compatibility policy was also
  • not much besides that

Jerry:

  • Ubuntu's long term support is pretty nice

c:

  • I'm a little less thrilled there
  • even six months time is a long span between releases

Allison:

  • that's kind of a corporate support guideline

Patrick:

  • I like knowing that Hardy will be supported for a few years

c:

  • that's a risk for them
  • how long are they going to support KDE 3 when upstream doesn't?

Nicholas:

  • RHEL and other enterprise distributions support Perl 5.8 until 2011 or so
  • but we haven't heard anything from downstream there
  • and we only support critical security fixes there

c:

  • I put in explicit language about vendors promising long-term security fixes
  • that's your vendor's problem, not the problem of volunteers

Nicholas:

  • in Perl 5, you used to do package; with no namespace
  • removed somewhere in 5.8.x
  • was such a thing considered for Perl 6?
  • you basically banned all unqualified variables

Larry:

  • it was a hack in Perl 5
  • I want to make new mistakes

Patrick:

  • I expect to do Rakudo's next release next Wednesday
  • that's a target anyway
  • I'll be out of town this weekend
  • we'll start its regular release clock with that release
  • we'll do monthly releases
  • that exact date will vary by a few days here or there until we get the release process down

Jerry:

  • will you have a similar release structure as Parrot
  • multiple release managers?

Patrick:

  • yes
  • I don't want to be the single release manager
  • I'll probably do the first couple
  • I want at least one other person to do some by May or June
  • ideally a team
  • the Rakudo release needs to happen very quickly after the Parrot release
  • we won't be able to do this this month
  • what do we need to think about for POD in Perl 6?
  • what we have now is "DRAFT DRAFT DRAFT"

Allison:

  • can you say "We'll target this version and update if necessary?"

Patrick:

  • that's exactly what I've done so far

Larry:

  • I agree

Patrick:

  • I expect to generate or write a fair bit of POD

Larry:

  • it's amenable to mechanical translation
  • someone should write a standard parser for POD

Allison:

  • Parrot might as well stick with the standard

c:

  • are you suggesting a Parrot-based POD6 parser?

Allison:

  • if you develop a POD6 parser in Perl 6, could we compile it back down to PIR and use it?

Patrick:

  • we'd more likely target NQP
  • Rakudo expects the Perl 6 runtime available
  • it'd be nice to say that that's just a PBC, but it's also PMCs and dynops

Jerry:

  • makes me wonder how small we can make a Parrot distribution that only contains NQP as a language

c:

  • that's the Parrot 2 proposal

Jerry:

  • I meant ripping out all ops and PMCs you don't need
  • maybe no IMCC

c:

  • it's not PBC compatible though

Allison:

  • he's talking about a version of Parrot built on as few basic opcodes as possible

c:

  • no PMCs or ops written in C

Nicholas:

  • I thought the computer scientists proved that you only need one operation to make things work, sort of a compare and jump

Jerry:

  • then you have Lisp

c:

  • but you need infinite storage space

Jerry:

  • disk space is cheap, and getting cheaper

c:

  • but not infinite
  • you can climb that asymptote all you want, but it keeps getting steeper

Patrick:

  • the ultimate slippery slope argument
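
Nicholas's "one operation" aside refers to one-instruction set computers; `subleq` (subtract and branch if the result is less than or equal to zero) is the usual example. A minimal interpreter sketch (illustrative, not part of the minutes; the sample program layout is invented):

```python
def run_subleq(mem, pc=0):
    """Execute subleq code: mem[b] -= mem[a], then branch to c if the
    result is <= 0, else fall through. A branch outside memory halts."""
    while 0 <= pc < len(mem):
        a, b, c = mem[pc:pc + 3]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Adds mem[9] into mem[10] using the zero cell mem[11] as scratch:
#   Z -= A;  B -= Z (i.e. B += A);  Z -= Z;  halt (branch to -1)
program = [9, 11, 3,  11, 10, 6,  11, 11, -1,  3, 4, 0]
result = run_subleq(program)
```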

by chromatic at February 26, 2009 01:05 UTC

February 25, 2009

Ricardo Signes: all hail vincent pit

I didn't really know who this Vincent Pit character was until a couple years ago when Variable::Magic showed up. It was pretty cool, and then I'd see him saying crazy cool things on the p5p IRC channel. At some point I said, "Hey, Vincent. I bet you're just the guy to convince to make it possible to localize something in a higher scope."

He smiled, nodded, and said, "Uh huh." Then, as soon as I'd forgotten mentioning it to him, he sent me a link to a tarball. Now it's powering some really cool dists on the CPAN!

A few months later, I said, "You really kicked ass with Scope::Upper. You know what else would be cool?" Again, some silence, followed by a really sweet implementation.

Vincent, you confound me! On one hand, I want to learn much more about the guts of perl5 so I can do such awesome things. On the other hand, you keep doing them as soon as I have conceived of them, so I have no need.

I think I'll just go with the flow and learn nothing, so I can spend my minimal remaining free time stomping necromorphs.

by rjbs at February 25, 2009 22:16 UTC

Dave Cross: MySQL Stupidity

Been a while since I reported on MySQL's stupidity, but I came across a fine new example yesterday.

Create a table with a varchar column.

create table foo (foo varchar(10));

Insert a data value which is two numbers separated by a pipe character (don't ask, just accept that this was the data format I found in my table).

insert into foo values ('111|1');

Now let's try to select some data.

mysql> select * from foo where foo = '111|1';
+-------+
| foo   |
+-------+
| 111|1 |
+-------+
1 row in set (0.00 sec)

Ok. That makes sense. That's expected behaviour.

mysql> select * from foo where foo = '111';
Empty set (0.00 sec)

That also makes sense, of course. The string isn't '111', so it doesn't match.

mysql> select * from foo where foo = 111;
+-------+
| foo   |
+-------+
| 111|1 |
+-------+
1 row in set (0.00 sec)

Huh! I mean "What!?!".

There are at least two fundamentally stupid things going on here.

Firstly, MySQL is allowing me to match a string column against a number. When a user tries to match a value of one type against a column of another type, the only sensible action is to throw an error. The user is trying to do something completely wrong. Tell them that. Don't try and work something out.

Secondly, if you insist on trying to convert datatypes in order to force a match, then convert the user's data into the database column's datatype, not the other way round. The database column is a string. Convert the number to a string and try to match that string against the database (that would have returned no data). Instead MySQL is trying to convert the database value into a number to match the user's input. It looks like it's using something like Perl's string to number conversion so the string "111|1" is converted to the number 111 and therefore matches the user's input.
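
The coercion described above amounts to a rule: take the longest numeric prefix of the string and silently discard the rest. A Python model of that rule (`leading_number` is a hypothetical helper, for illustration only):

```python
import re

def leading_number(s):
    """Mimic Perl-style (and MySQL-style) string-to-number coercion:
    parse the longest numeric prefix; anything non-numeric yields 0."""
    m = re.match(r"\s*[+-]?(\d+(\.\d*)?|\.\d+)", s)
    return float(m.group()) if m else 0.0

# '111|1' coerces to 111, which is why `where foo = 111` unexpectedly matches.
assert leading_number("111|1") == 111
assert leading_number("abc") == 0
```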

So you can actually get MySQL to match data which doesn't match at all. I wasted two hours on this yesterday.

I found this yesterday on a MySQL 4.x server. I've just tried it on a 5.0.67 server and the same bug is still there.

Oh, and setting the SQL mode to "traditional" doesn't seem to fix it either.

by davorg at February 25, 2009 21:56 UTC

Curtis Poe: Writing Legal Contracts as Code

The following is a legal contract from Genoa, written in the 14th century:

...Geri, [son] of the late Ser Lapo of Florence, Simone Guascone, [9 more Names listed], each of them [liable] for the amount written below, have acknowledged and in truth have declared to me, notary undersigned, as a public official [holding] a public office, making the stipulation and receiving in the name and stead of Federico Vivaldi, citizen of Genoa, that they have bought, have had, and have received from him a certain amount of goods of the said Frederico...And for these goods and in consideration of their price each of them has promised to give and to pay to said Frederico or to his accredited messenger: [from] the said Geri, 150 gold florins, the said Simone, 50 florins, [100 florins each from the other Names] within the next five months from now. Otherwise they have promised to give and to pay to the said Frederico the penalty of the double of it and of the entire amount to which and to the extent of which [this agreement] is violated or is not observed as above, together with restitution of all losses, unrealized profits, and expenses which might be incurred because of it in court or outside -- the aforesaid remaining as settled, and under hypothecation and pledge of their goods and [the goods] of any one of them, existing and future.

[The above is binding] with the exception and special reservation that if the amount of goods, property, and merchandise which was loaded or is to be loaded by Frederico Imperiale or by another in his behalf for the account of the said Frederico Vivaldi in Aigues-Mortes -- to be transported to Ayassoluk and Rhodes or to either of these two localities in a certain ship...and which departed from Aigues-Mortes or is about to depart in order to sail to aforesaid regions -- is brought and unloaded in the said localities of Ayasoluk and Rhodes or in either of them, in safety, then and in such a case the present intrument is cancelled, void, and of no value and pro rata. And be it understood that such a risk begins when the said ship departs and sets sail from Aigues-Mortes, and it remains and lasts, while the captain goes, stays [in port], sails, loads and unloads, from the said locality of Aigues-Mortres up to the said localities of Ayassoluk and Rhodes, in whatever manner and way he wishes, until said amount of goods, property, and merchandise has been brought and unloaded in Ayassoluk and Rhodes or in either of these two localities in safety, and pro rata. Let the present instrument also be cancelled if the said Frederico refrains from asking payments of the aforesaid amounts of money for the space of one year after the time or the time limit has elapsed for asking or obtaining their payment.... Done as above, September 15th, around nones. [1393 A.D.]

And written in the contract language:

insureGoods(goodsPremium="a certain amount of goods",
        principal="100 fl + 50 fl + 7*100 fl",
        penalty=2*principal,
        t1="5 months from now",
        t2="1 year after [legal] time limit has expired",
        goodsInsured="that amount of goods, property,
            and merchandise which was loaded")

by Ovid at February 25, 2009 17:20 UTC

Justin Mason: Ubuntu to bundle Eucalyptus

Introducing Karmic Koala, Ubuntu 9.10:

What if you want to build an EC2-style cloud of your own? Of all the trees in the wood, a Koala’s favourite leaf is Eucalyptus. The Eucalyptus project, from UCSB, enables you to create an EC2-style cloud in your own data center, on your own hardware. It’s no coincidence that Eucalyptus has just been uploaded to universe and will be part of Jaunty - during the Karmic cycle we expect to make those clouds dance, with dynamically growing and shrinking resource allocations depending on your needs.

A savvy Koala knows that the best way to conserve energy is to go to sleep, and these days even servers can suspend and resume, so imagine if we could make it possible to build a cloud computing facility that drops its energy use virtually to zero by napping in the midday heat, and waking up when there’s work to be done. No need to drink at the energy fountain when there’s nothing going on. If we get all of this right, our Koala will help take the edge off the bear market.

AWESOME — exactly where the Linux server needs to go. Eucalyptus is the future of server farms. Really looking forward to this…

by Justin at February 25, 2009 10:45 UTC

Tim Bunce: NYTProf screencast from the 2008 London Perl Workshop


I’ve uploaded the screencast of my NYTProf talk at the London Perl Workshop in November 2008.

It’s based on a not-quite-2.08 version and includes some coverage of an early draft of the ‘timings per rolled-up package name’ feature I discussed previously.

It also shows how and why anonymous subroutines, defined at the same line of different executions of the same string eval, get ‘merged’.

The demos use perlcritic and Moose code. It also includes a nice demonstration showing NYTProf highlighting a performance problem with File::Spec::Unix when called using Path::Class::Dir objects.

It’s 36 minutes long, including a good Q&A session at the end (wherein a market rate for performance improvements is established). Enjoy.

Posted in perl Tagged: nytprof, performance, perl

by TimBunce at February 25, 2009 03:09 UTC

February 24, 2009

Marcus Ramberg: Painless rollouts with FCGI::Engine

Our software site iusethis is built on the MVC framework Catalyst. We currently run it using the Russian web server Nginx and standalone FastCGI servers. I am using the Moose-based FCGI::Engine distribution by Stevan Little to start the servers. This module makes it really easy to manage your applications. You just create a YML config file like this:

---
- name: "iusethis-osx.server"
  nproc: 4
  scriptname: "/www/iusethis-osx/script/iusethis_fastcgi.pl"
  pidfile: "/var/run/iusethis-osx.pid"
  socket: "/var/run/iusethis-osx.sock"

with an entry for each server you want to run. (Note that the paths have been changed to protect the innocent.) Then you just create a simple perl script (see the FCGI::Engine::Manager SYNOPSIS for a sample), and you can easily start, stop and check the status for each application individually, or for every application in your config. If you have a System V-style init, you can just stick the script in /etc/init.d/ and it will behave just like any of your other startup scripts.

There is one annoying detail. Each time you roll out code, you have to restart your fastcgi processes. Since Catalyst takes some time to initialize, the application is down, and end users get 500 Internal Server Error responses, unless you have a load balancer in front and take the node out of the cluster before upgrading. It does not have to work like this. Since the fcgi workers use a non-exclusive lock on the socket, you can start a new set of processes before you kill the old ones. This way, no requests are lost.

I really wanted this feature, so I have spent some time today hacking on FCGI::Engine. Stevan accepted my patches and released version 0.06, which supports this restart mode via a new ‘graceful’ method added to FCGI::Engine::Manager, plus some bug fixes. Nginx already supports on-the-fly upgrades, which means there is no need for us to drop a user connection when rolling out new code again.

by marcus at February 24, 2009 21:44 UTC

Curtis Poe: Today's Database Nightmare

Years ago, the MySQL folks had an essay online explaining why foreign key constraints belong in your code and not in your database. I can't find a copy of that essay any more, but it's an excellent symptom of so many things which have plagued this database.

One problem many people face is that you cannot have both an auto-updating "created" timestamp and a "modified" timestamp in the same table definition (thanks, MySQL!). Solutions:

  1. Setting their values with triggers
  2. Setting their values in your code
  3. Setting their values via a stored procedure

All of those are generally awful solutions, but we've opted for number 2, setting them in our code, because MySQL triggers have proven so problematic. That has led to some interesting problems.

We've had tables where somehow these values have either not been set or not been updated. How did this happen? Who knows? The problem seems to have gone away, but it's potentially there.

We now have the annoying problem where another client has read-only (thank goodness for small favors) access to our database. We have several tables with "titles" (brands, series, episodes, etc.) and we need to remove those and put them into a central "title" table. Unfortunately, this other client is maintained by another team who cannot update their client due to a major release coming out.

The classic strategy for this is to keep both sets of titles and merely have the "title" table populated by triggers on the other tables (see the Move Column database refactoring). This is a temporary solution, but it's introduced some very subtle issues. Initially we had triggers just like this (with corresponding UPDATE and DELETE triggers):

CREATE TRIGGER tr_brand_insert AFTER INSERT ON brand
FOR EACH ROW BEGIN
    INSERT INTO title (entity_pid, title_type_id, value, created, modified)
    VALUES(NEW.pid, 1, NEW.title, now(), now());
END;

Our tests failed. We were getting annoying errors about trying to set "NOT NULL" columns to a NULL value. MySQL thoughtfully reported the SQL which triggered the trigger, but not the trigger itself. I found myself staring at SQL which did not have the column which MySQL reported caused the problem (thanks MySQL!). Fortunately, since I'm working on this now, it was quickly obvious what was happening. (This is what led me to cursing earlier today).

Resolving that meant I had to rewrite the above trigger. The problem is that it's possible to have an empty string for a title (long story). So the above trigger became this:

CREATE TRIGGER tr_${table}_insert AFTER INSERT ON $table
FOR EACH ROW BEGIN
    IF COALESCE(NEW.title, '') != '' THEN
        INSERT INTO title (entity_pid, title_type_id, value, created, modified)
        VALUES(NEW.pid, 1, NEW.title, now(), now());
    END IF;
END;

As soon as you start inserting conditional logic into your database, you're probably introducing bugs. Sure enough, it happened when we tried to update a title by setting it to a non-null value. Our update trigger would try to update a non-existent row. That's when one of my colleagues pointed out an elegant solution. MySQL allows INSERT ... ON DUPLICATE KEY UPDATE.... This is very handy. Try to INSERT a record and if it violates a unique constraint, it will turn the INSERT into an UPDATE.

Except that doesn't work. Remember the created/modified restriction that MySQL imposes? Because we handled this in our code, when we INSERT a record, we set both the "created" and "modified" columns. If that INSERT gets switched to an UPDATE, then we'll also wind up updating our "created" column. Grr ...
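
The clobbering can be shown with a toy model: treat the table as a dict and the upsert as "insert, or update every supplied column on key collision". Assuming, as in the post, that the application sets both timestamps on INSERT (all names here are illustrative):

```python
def upsert(table, key, row):
    """INSERT ... ON DUPLICATE KEY UPDATE, modelled on a dict: on a key
    collision the INSERT becomes an UPDATE of every supplied column --
    including 'created', which the application had to set itself."""
    if key in table:
        table[key].update(row)   # the original 'created' is overwritten
    else:
        table[key] = dict(row)

titles = {}
upsert(titles, 1, {"value": "Brand A", "created": "day-1", "modified": "day-1"})
upsert(titles, 1, {"value": "Brand A2", "created": "day-2", "modified": "day-2"})
# titles[1]["created"] is now "day-2": the creation time has been lost.
```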

As a result, that nice, simple UPDATE trigger (there are actually 10 variations of this) has become this:

CREATE TRIGGER tr_brand_update AFTER UPDATE ON brand
FOR EACH ROW BEGIN
    IF COALESCE(NEW.title, '') != '' THEN
        IF EXISTS(SELECT 1 FROM title WHERE entity_pid=OLD.pid) THEN
            UPDATE title SET value=NEW.title
            WHERE entity_pid=OLD.pid AND title_type_id=1 AND value=OLD.title;
        ELSE
            INSERT INTO title (entity_pid, title_type_id, value, created, modified)
            VALUES(NEW.pid, 1, NEW.title, now(), now());
        END IF;
    END IF;
END;

So far, the tests are passing, but a combination of MySQL limitations and issues in our code have taken a relatively simple problem and made it a real headache.

Update: And if you can spot the three bugs I missed, you're a better MySQL programmer than I am :)

by Ovid at February 24, 2009 16:07 UTC

Marcus Ramberg: More reasons to love Twirc

15:18  mattgemmell> Wives and girlfriends around the world rejoice at
Safari 4's Top Sites feature. #privatebrowsingtotherescue
15:20  marcus> favorite mattgemmell
15:20  @tweeter> Which tweet?
15:20  @tweeter> [1] Wives and girlfriends around the world rejoice at Safari ...
15:20  @tweeter> [2] @mattfarrugia Which podcast?
15:20  @tweeter> [3] @Zyote They were just in-case-of-no-beans backups, and I ...
15:20  marcus> 1
15:20 -tweeter:&twitter- favorite added

For those who missed my previous post about Twirc, you can get it through CPAN or Github.

by marcus at February 24, 2009 14:26 UTC

Curtis Poe: Not Using Profanity Is Harder Than I Thought

On February 10th, 2009, I decided to stop using profanity. It turns out to be very, very hard for me. Today, I was happy that I finally made it to day four without using profanity.

I'm back to day zero. Naturally, it was MySQL which pushed me over the edge.

by Ovid at February 24, 2009 12:29 UTC

Curtis Poe: Vim: Folding POD Out Of the Way

Once again, I failed to grok the vim docs, but I finally got this working. If you have an editor which implements folding, you probably love having annoying things folded out of the way. I find POD annoying. Almost all of my modules are written with inline POD because I believe it's very important for the documentation to be next to the code, but if you've worked on a codebase for any length of time, that POD quickly becomes annoying. The following will fold your POD out of the way without folding other bits:

let perl_fold=1
let perl_nofold_packages=1
let perl_nofold_subs=1

I expect that's rather self-explanatory. See :help fold for more folding options. (Quick tip: 'zd' on the fold line will unfold the POD.)

To learn more about vim's syntax options for Perl: :help ft-perl-syntax.

Folding in vim will usually show the first line of text folded (you can override this). Because the first line of text in our POD is usually a signature, we get this:

+--  8 lines: =head2 resultset_with_request_params($c, $rs) : $rs------------

In short, every function comes with a quick synopsis like this. Very, very handy.

by Ovid at February 24, 2009 10:19 UTC

Gabor Szabo: No good Perl for Win32?


I was trying to test the development version of Padre on Windows. I updated from SVN and tried it on the currently installed ActivePerl:

  C:\gabor\padre\Padre>perl Makefile.PL
  Found Wx.pm     0.90
  The required 'nmake' executable not found, fetching it...
  Fetching 'Nmake15.exe' from download.microsoft.com... Can't locate object method "new" via package "URI" at C:/Perl/lib/HTTP/Request.pm line 80.

Not good.

But no problem, I remembered Adam Kennedy had just released the first stable version of Strawberry Perl on a Stick. Strangely, the main site only had links to the release candidates and not to the real release.

No problem, I know he announced it on use.perl

Now that's bad. According to the comment there, it cannot build XS modules.

So I can use neither ActivePerl nor the released version of Strawberry on a Stick.

Reverting to the previous Strawberry RC that I still have somewhere on my computer...

All tests passing.

by Szabgab at February 24, 2009 10:08 UTC

chromatic: Perl 6 Design Minutes for 11 February 2009

The Perl 6 design team met by phone on 11 February 2009. Larry, Jerry, Patrick, Nicholas, and chromatic attended.

Larry:

  • hacking symbol table support into the STD parser
  • so that I can tell when things have and haven't been defined
  • largely complete
  • started moving the lexical pads out to a separate file
  • they have more information in them
  • the Setting file, formerly the Prelude, is now parameterized so that other Settings can be set for the lexical settings
  • lots of work on various error messages associated with declaration, non-declaration, and redeclaration errors
  • just checked in a unification of the + and * twigils
  • threatened that for a long time
  • prototyped that in the STD parser
  • it seemed to work
  • hacked that in
  • now we only have the * twigil
  • it essentially does the contextual variables
  • it looks in global and process symbol tables if it doesn't find them locally
  • it looks in the environment only after failing to find it in global and process
  • it's very easy to hide that, or substitute a vetted environment table somewhere in your dynamic scope
  • cleaned up the definition of how environment is passed on to the subprocess
  • dealing with the twigil changes in the Spec
  • cleaning up the pseudopackage names somewhat
  • regularizing the setting and file scope and current compilation unit and their relationships

Patrick:

  • went to Frozen Perl this weekend
  • gave one major presentation on Perl 6
  • a couple of lightning talks on PCT
  • had a hackathon on Sunday
  • that went well
  • there's a lot of enthusiasm for Rakudo and Perl 6
  • Andy Lester focused on that for his keynote
  • lots of people talking about it outside of that
  • lots of people starting to look at Perl 6 again
  • "We're getting to an implementation"
  • "These features are nice"
  • "I'm looking forward to using this in my business"
  • trying to update the instructions to download and build Rakudo from its new repository
  • coming along more slowly than I'd like
  • noticed at the hackathon that invoking Rakudo with the Parrot command line parrot perl6.pbc is confusing for folks
  • Parrot's not in a common location they can get to it from
  • we'll try to focus more on the executable form
  • people can understand that more
  • noticed that pbc_to_exe took an inordinately long time to do its work
  • hacked on that, and the new version is a lot faster
  • learned a lot about IO in Parrot
  • tweaking the build scripts for Rakudo
  • plan to do more of the same
  • will write up more documentation, READMEs, guides, blog posts, etc

Jerry:

  • interested in the work on the Setting
  • might work with one of the S19 options I've toyed with
  • -E to include an environment variable
  • can be generalized into a one-liner to add a setting
  • had some minor updates to S19
  • don't have the split/comb update quite right yet
  • been fighting with Parrot's Windows compatibility through changes in the config system
  • discovering some things this week about static and shared library linking
  • working toward a solution there
  • started creating an Ubuntu VM for distribution to interested hackers
  • has all sorts of Perl 6-related projects
  • Pugs, Rakudo, Parrot, November, Apache, mod_parrot
  • VMWare image that should fit on a 16 GB thumb drive
  • will make that online
  • should make it easier for people who want to hack on Rakudo to do so
  • provided we can set up tools to manage and update these distributions

c:

  • closed some bugs
  • checked in the support, deprecation, and release policy
  • brainstorming ideas on how to make a VM with less C code

Nicholas:

  • does a VM really take 16 GB?

Jerry:

  • I don't think it will
  • that's how big I have it now
  • it can probably be smaller
  • I'm not familiar enough with a minimal install of Ubuntu with enough space for GCC and the build tools

Nicholas:

  • it wouldn't fit on a CD
  • it might fit on a DVD
  • assumed that burning a DVD is a more disposable way of giving it away
  • they're effectively free

Jerry:

  • I'll consider that
  • hadn't thought terribly about distribution
  • I can resize it smaller at any time

Patrick:

  • just did an Ubuntu install yesterday
  • GCC, Git, Subversion... no Parrot yet
  • it's a fresh install
  • looks like 3.5 GB
  • I can generally work in 6 GB
  • that'd fit on a dual-layer DVD

c:

  • maybe look at a live DVD
  • add to that

Patrick:

  • Ubuntu lets you boot off a USB drive
  • you keep your storage with you

Jerry:

  • want to give Jesse credit for Prophet and SD, which syncs with RT, Hiveminder, and Trac
  • offline access to bug queues and TODO lists
  • it's nice to travel to a venue and then sync so everyone sees how productive you are when you're not connected
  • going to put that in the Perl 6 dev VM

Patrick:

  • I want that!
  • especially with all of the traveling I'm doing

by chromatic at February 24, 2009 01:22 UTC

February 23, 2009


Marcel Grünauer: Perl benchmarks: Nested data structures

The conclusion first: Nested arrays and hashes in Perl are slow. They get slower the more levels you nest. For most applications this won't be a problem, but if you cache values in a hash, you might want to use as few levels of nesting as possible.

First, let's benchmark reading from and writing to nested array elements. We use different arrays for six different levels of nesting, and all element indices are constants.

use Benchmark qw(:all);

our (@array1, @array2, @array3, @array4, @array5, @array6);

cmpthese(timethese(10_000_000, {
    L1w => sub { $array1[1] = 1 },
    L2w => sub { $array2[1][2] = 1 },
    L3w => sub { $array3[1][2][3] = 1 },
    L4w => sub { $array4[1][2][3][4] = 1 },
    L5w => sub { $array5[1][2][3][4][5] = 1 },
    L6w => sub { $array6[1][2][3][4][5][6] = 1 },
    L1r => sub { $array1[1] },
    L2r => sub { $array2[1][2] },
    L3r => sub { $array3[1][2][3] },
    L4r => sub { $array4[1][2][3][4] },
    L5r => sub { $array5[1][2][3][4][5] },
    L6r => sub { $array6[1][2][3][4][5][6] },
}));

The results:

Benchmark: timing 10000000 iterations of L1r, L1w, L2r, L2w, L3r, L3w, L4r, L4w, L5r, L5w, L6r, L6w...
       L1r:  0 wallclock secs ( 0.20 usr + -0.00 sys =  0.20 CPU) @ 50000000.00/s (n=10000000)
            (warning: too few iterations for a reliable count)
       L1w:  1 wallclock secs ( 0.72 usr +  0.01 sys =  0.73 CPU) @ 13698630.14/s (n=10000000)
       L2r:  1 wallclock secs ( 1.19 usr +  0.01 sys =  1.20 CPU) @ 8333333.33/s (n=10000000)
       L2w:  1 wallclock secs ( 2.01 usr +  0.02 sys =  2.03 CPU) @ 4926108.37/s (n=10000000)
       L3r:  2 wallclock secs ( 1.82 usr +  0.02 sys =  1.84 CPU) @ 5434782.61/s (n=10000000)
       L3w:  3 wallclock secs ( 2.67 usr +  0.03 sys =  2.70 CPU) @ 3703703.70/s (n=10000000)
       L4r:  3 wallclock secs ( 2.63 usr +  0.01 sys =  2.64 CPU) @ 3787878.79/s (n=10000000)
       L4w:  4 wallclock secs ( 3.54 usr +  0.02 sys =  3.56 CPU) @ 2808988.76/s (n=10000000)
       L5r:  3 wallclock secs ( 3.04 usr +  0.00 sys =  3.04 CPU) @ 3289473.68/s (n=10000000)
       L5w:  4 wallclock secs ( 4.13 usr +  0.04 sys =  4.17 CPU) @ 2398081.53/s (n=10000000)
       L6r:  4 wallclock secs ( 3.76 usr +  0.01 sys =  3.77 CPU) @ 2652519.89/s (n=10000000)
       L6w:  5 wallclock secs ( 4.46 usr +  0.02 sys =  4.48 CPU) @ 2232142.86/s (n=10000000)
          Rate   L6w   L5w   L6r   L4w   L5r   L3w   L4r  L2w  L3r  L2r  L1w  L1r
L6w  2232143/s    --   -7%  -16%  -21%  -32%  -40%  -41% -55% -59% -73% -84% -96%
L5w  2398082/s    7%    --  -10%  -15%  -27%  -35%  -37% -51% -56% -71% -82% -95%
L6r  2652520/s   19%   11%    --   -6%  -19%  -28%  -30% -46% -51% -68% -81% -95%
L4w  2808989/s   26%   17%    6%    --  -15%  -24%  -26% -43% -48% -66% -79% -94%
L5r  3289474/s   47%   37%   24%   17%    --  -11%  -13% -33% -39% -61% -76% -93%
L3w  3703704/s   66%   54%   40%   32%   13%    --   -2% -25% -32% -56% -73% -93%
L4r  3787879/s   70%   58%   43%   35%   15%    2%    -- -23% -30% -55% -72% -92%
L2w  4926108/s  121%  105%   86%   75%   50%   33%   30%   --  -9% -41% -64% -90%
L3r  5434783/s  143%  127%  105%   93%   65%   47%   43%  10%   -- -35% -60% -89%
L2r  8333333/s  273%  248%  214%  197%  153%  125%  120%  69%  53%   -- -39% -83%
L1w 13698630/s  514%  471%  416%  388%  316%  270%  262% 178% 152%  64%   -- -73%
L1r 50000000/s 2140% 1985% 1785% 1680% 1420% 1250% 1220% 915% 820% 500% 265%   --

Next we do the same for hashes, again with constant hash keys:

use Benchmark qw(:all);

our (%hash1, %hash2, %hash3, %hash4, %hash5, %hash6);

cmpthese(timethese(10_000_000, {
    L1w => sub { $hash1{L1} = 1 },
    L2w => sub { $hash2{L1}{L2} = 1 },
    L3w => sub { $hash3{L1}{L2}{L3} = 1 },
    L4w => sub { $hash4{L1}{L2}{L3}{L4} = 1 },
    L5w => sub { $hash5{L1}{L2}{L3}{L4}{L5} = 1 },
    L6w => sub { $hash6{L1}{L2}{L3}{L4}{L5}{L6} = 1 },
    L1r => sub { $hash1{L1} },
    L2r => sub { $hash2{L1}{L2} },
    L3r => sub { $hash3{L1}{L2}{L3} },
    L4r => sub { $hash4{L1}{L2}{L3}{L4} },
    L5r => sub { $hash5{L1}{L2}{L3}{L4}{L5} },
    L6r => sub { $hash6{L1}{L2}{L3}{L4}{L5}{L6} },
}));

The results:

Benchmark: timing 10000000 iterations of L1r, L1w, L2r, L2w, L3r, L3w, L4r, L4w, L5r, L5w, L6r, L6w...
       L1r:  0 wallclock secs ( 0.63 usr + -0.00 sys =  0.63 CPU) @ 15873015.87/s (n=10000000)
       L1w:  0 wallclock secs ( 1.53 usr +  0.01 sys =  1.54 CPU) @ 6493506.49/s (n=10000000)
       L2r:  2 wallclock secs ( 1.58 usr + -0.00 sys =  1.58 CPU) @ 6329113.92/s (n=10000000)
       L2w:  2 wallclock secs ( 2.35 usr +  0.02 sys =  2.37 CPU) @ 4219409.28/s (n=10000000)
       L3r:  3 wallclock secs ( 2.58 usr +  0.01 sys =  2.59 CPU) @ 3861003.86/s (n=10000000)
       L3w:  4 wallclock secs ( 3.67 usr +  0.02 sys =  3.69 CPU) @ 2710027.10/s (n=10000000)
       L4r:  4 wallclock secs ( 3.07 usr +  0.02 sys =  3.09 CPU) @ 3236245.95/s (n=10000000)
       L4w:  5 wallclock secs ( 4.69 usr +  0.01 sys =  4.70 CPU) @ 2127659.57/s (n=10000000)
       L5r:  5 wallclock secs ( 4.67 usr +  0.02 sys =  4.69 CPU) @ 2132196.16/s (n=10000000)
       L5w:  5 wallclock secs ( 5.20 usr +  0.02 sys =  5.22 CPU) @ 1915708.81/s (n=10000000)
       L6r:  6 wallclock secs ( 5.77 usr +  0.01 sys =  5.78 CPU) @ 1730103.81/s (n=10000000)
       L6w:  7 wallclock secs ( 6.35 usr +  0.01 sys =  6.36 CPU) @ 1572327.04/s (n=10000000)
          Rate  L6w  L6r  L5w  L4w  L5r  L3w  L4r  L3r  L2w  L2r  L1w  L1r
L6w  1572327/s   --  -9% -18% -26% -26% -42% -51% -59% -63% -75% -76% -90%
L6r  1730104/s  10%   -- -10% -19% -19% -36% -47% -55% -59% -73% -73% -89%
L5w  1915709/s  22%  11%   -- -10% -10% -29% -41% -50% -55% -70% -70% -88%
L4w  2127660/s  35%  23%  11%   --  -0% -21% -34% -45% -50% -66% -67% -87%
L5r  2132196/s  36%  23%  11%   0%   -- -21% -34% -45% -49% -66% -67% -87%
L3w  2710027/s  72%  57%  41%  27%  27%   -- -16% -30% -36% -57% -58% -83%
L4r  3236246/s 106%  87%  69%  52%  52%  19%   -- -16% -23% -49% -50% -80%
L3r  3861004/s 146% 123% 102%  81%  81%  42%  19%   --  -8% -39% -41% -76%
L2w  4219409/s 168% 144% 120%  98%  98%  56%  30%   9%   -- -33% -35% -73%
L2r  6329114/s 303% 266% 230% 197% 197% 134%  96%  64%  50%   --  -3% -60%
L1w  6493506/s 313% 275% 239% 205% 205% 140% 101%  68%  54%   3%   -- -59%
L1r 15873016/s 910% 817% 729% 646% 644% 486% 390% 311% 276% 151% 144%   --

I hadn't expected the differences to be so dramatic...
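One common mitigation, following the post's own advice to use as few levels of nesting as possible, is to collapse the levels into a single hash keyed on a joined string. This variant of the benchmark is my own sketch, not from the original post (the key format is an assumption):

```perl
use Benchmark qw(:all);

our (%nested, %flat);

# Hypothetical extension of the benchmark above: a flat hash with one
# joined key versus three levels of nesting.
cmpthese(timethese(10_000_000, {
    nested_w => sub { $nested{L1}{L2}{L3} = 1 },
    flat_w   => sub { $flat{'L1,L2,L3'} = 1 },
    nested_r => sub { $nested{L1}{L2}{L3} },
    flat_r   => sub { $flat{'L1,L2,L3'} },
}));
```

Judging from the L1 and L3 rows above, the flat variants should land near the L1 numbers, at the cost of no longer being able to iterate over one level of keys at a time.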


February 23, 2009 16:38 UTC


Adam Kennedy: Strawberry Perl 5.10.0.4-1 Portable Released

http://strawberry-perl.googlecode.com/files/strawberry-perl-5.10.0.4-1-portable.zip

I'm happy to announce the first stable release of Strawberry Perl Portable.

This release is based on an updated version of Strawberry Perl 5.10.0.4 (it contains some slightly newer versions of the bundled modules compared to the original January release).

This first-generation Portable distribution is only available in .zip format; you will need to unpack it yourself onto your flash drive, camera, iPod, mobile phone, or random other storage device.

My hope is that after this release, we can start to move towards becoming compatible with http://portableapps.com/.

by Alias at February 23, 2009 02:02 UTC

February 22, 2009


Curtis Poe: What a phenomenally bad web site error

I won't name the site because I don't want crackers looking into this, but here's a fascinating error message I received when trying to create an account.

alter table users_new add column 39b6c79e6a45fa57dfefca048bb1d0ce' in 'where clause' varchar(255) null
Unknown column '39b6c79e6a45fa57dfefca048bb1d0ce' in 'where clause'

Obviously, I did not write this software :)
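For contrast, the usual way in Perl to keep user input out of SQL is DBI's bind placeholders. A minimal sketch (the table and column names here are made up for illustration, not taken from the site above):

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite database, purely for demonstration.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 } );

$dbh->do('CREATE TABLE users_new (id INTEGER, token VARCHAR(255))');

# The ? placeholder sends the value to the database as data, so it can
# never be parsed as part of the SQL statement itself.
my $sth = $dbh->prepare('SELECT id FROM users_new WHERE token = ?');
$sth->execute('39b6c79e6a45fa57dfefca048bb1d0ce');
```

Had the site above bound its input this way, the hash would have stayed a value instead of ending up inside an `alter table` statement.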

by Ovid at February 22, 2009 22:46 UTC


Curtis Poe: MT and LiveJournal OpenID

Mainly for Google, but you might experience this.

I was going to post a comment to Modern Perl Books, but didn't. Seems that when I try to use my LiveJournal OpenID, I get an "unclosed token" error.

As it turns out, it's an MT error. If you have "too many" friends, LJ's FOAF (Friend Of A Friend) response is too large for MT to handle. The workaround is to create a "FOAF-knows" friends group and only add yourself to it.

by Ovid at February 22, 2009 22:35 UTC


Curtis Poe: Making Test::Class More Like xUnit

In xUnit style tests, this is an entire test:

sub first_name : Tests(tests => 3) {
    my $test   = shift;
    my $person = $test->class->new;
    can_ok $person, 'first_name';
    ok !defined $person->first_name,
      '... and first_name should start out undefined';
    $person->first_name('John');
    is $person->first_name, 'John', '... and setting its value should succeed';
}

In the TAP world, we would look at this as three tests, but xUnit says we have three asserts validating one feature, thus one test. TAP-based tests have a long way to go before they work well for xUnit users, but there's one thing we can do. Suppose you have a test with 30 asserts and the fourth assert fails. Many xUnit programmers argue that once an assert fails, the rest of the information in the test is unreliable, so the test should be halted. Regardless of whether you agree with this (I hate the fact that, for example, JUnit requires the test method to stop), you can get this behavior with Test::Class. Just use Test::Most instead of Test::More and put this in your test base class:

BEGIN { $ENV{DIE_ON_FAIL} = 1 }

Because each test method in Test::Class is wrapped in an eval, that test method will stop running and the tests will resume with the next test method.
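Putting that together, a minimal base class might look like this (the package name is made up; the BEGIN block follows the post's advice so the variable is set before Test::Most is compiled):

```perl
package My::Test::Class;

# Must be set at compile time, before Test::Most is loaded, so that
# the first failing assertion in a test method throws an exception.
BEGIN { $ENV{DIE_ON_FAIL} = 1 }

use Test::Most;
use parent 'Test::Class';

1;
```

Test classes then inherit from My::Test::Class, and a failed assertion aborts only the current test method, not the whole run.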

by Ovid at February 22, 2009 15:09 UTC


Curtis Poe: How Does One Credit A Photo of Unknown Origin?

You know, every once in a while I post a photo in my personal journal and I want to credit it. But sometimes those photos show up on "aggregation" sites where people find "cool" photos and repost them, but without attribution. These photos are not original content to these sites, so I don't want to drive traffic to sites which don't credit the original creators of those works. What's the "right" thing to do there?

(Yes, I'm also guilty of not crediting creators. Trying to stop that.)

by Ovid at February 22, 2009 10:16 UTC


Perl NOC Log: DNS Troubles

We had some DNS issues on Saturday morning, the 21st of February. This may have resulted in some emails destined for perl.org addresses being bounced. For details and resolution, please see here and here.

by Robert S at February 22, 2009 08:18 UTC

February 21, 2009


Ask Bjørn Hansen: perl.org "suspended" by directi.com / resellerclub

Earlier this morning the registrar for perl.org - DirectI/ResellerClub - decided to suspend the domain. They got one(!) report that it was “involved in phishing activities”, which sounds like a spammer sent a mail pretending to be from perl.org.

This is the mail I got a few hours ago:

We received a complaint about your domain name perl.org being involved in phishing activities. Using domain names for any such activity, is strictly against Registrar PublicDomainRegistry.com’s AUP.

On account of the breach of the PublicDomainRegistry.com DomainRegistrant Agreement (available within your Control Panel at Help -> Legal Agreements) we have Suspended this domain name.

Very clever. In particular, doing it on a Saturday morning! Also note the exquisite details that allow us to respond (that’s sarcasm; there obviously was no detail). It’s especially insane because if phishing actually were happening we’d want to stop it, but they give us no help for that. Lots of DNS resolvers will have the domain cached for a day or two, so just turning off the domain wouldn’t protect people. Did I mention incompetence?

They actually pulled this stunt with the xrl.us domain (the short domain for the metamark service) some time ago. That time they also didn’t communicate anything or seem to care much about the disruption they caused. Foolishly, I thought they’d still be able to manage the other domains.

I’ve recommended DirectI in the past, but obviously no more. They have very good pricing and a decent web interface, but clearly they are useless for anything more important than parked “to be used later” domains. If you want to turn off one of their customers, just open a free email account and send some abuse complaints. It sounds like you just need to include your target’s email address in the abuse complaint. Works out well if your competitors are using them for their business!

I’ve opened a ticket with them which is the only sort of contact they allow. Being the weekend now I don’t know when they’ll respond, much less fix it. Anyone have a contact at DirectI / Resellerclub?

Also - anyone have tips for how to transfer a bunch of domains from them to OpenSRS as automatically as possible?

Update - it’s back now - they say they suspended it by mistake while suspending other domains. Unbelievable.

by Ask Bjørn Hansen at February 21, 2009 19:28 UTC


Ask Bjørn Hansen: perl.org domain back - suspended "by mistake"

They unsuspended perl.org now (~11:30PST). Apparently it was a mistake (no kidding). This is what they wrote:

Firstly, I would like to inform you that the domain name perl.org has been unsuspended.

We had received phishing complaints on several hundred domain names belonging to a particular network. Since we need to act on these complaints immediately, the domain name perl.org was accidentally suspended as well. We have now verified that this site does not contain any phishing material and have thus unsuspended the domain name.

I understand the consequences faced by you and you clients/people using the site due to this suspension. I sincerely apologize for this on behalf of ResellerClub. Be assured that this was a one-off case and we have made sure such a thing is not repeated.

Apologies once again.

This is even worse than if they had been overzealous with an abuse complaint actually on perl.org. Excuse me while I go look for my jaw on the floor.

by Ask Bjørn Hansen at February 21, 2009 19:27 UTC


Marcus Ramberg: Perl.org DNS broken

 @ask_> Looks like DirectI/resellerclub (the domain registrar) decided
to  suspend perl.org!  I've no idea why.  They're not a communicative
bunch.  Also, nice touch doing it on a Saturday.

For now, if you need access to search.cpan.org, you can reach it directly by ip: http://207.115.101.144/ Also, note that you can access the catalyst framework home page through http://www.catalystframework.org/. You can read more about it from Ask’s blog.

*update* perl.org is back, serenity has been restored to the world. Hopefully they will also move to a less insane provider soon.

by marcus at February 21, 2009 18:38 UTC

February 20, 2009


Curtis Poe: Known Ruby Bug?

I suppose I should ask this in a Ruby forum, but since I'm so used to slinging other languages here ...

To find the Nth root of a number is simple: raise the number to the reciprocal of N. For example, to find the cube root of 8:

$ perl -le 'print 8 ** (1/3)'
2

But you can't quite do that in Ruby:

$ ruby -e 'puts 8 ** (1/3)'
1

But this is a "feature", not a bug (*cough*) because the 1/3 is considered integer math and evaluates to 0, leaving you with 8 to the 0th power. Anything raised to the power of 0 results in 1. So far so good.

So to force floating point math, use a floating point number:

$ ruby -e 'puts 8 ** (1/3.0)'
2

And all is good. Except ...

Let's take the square root of 1:

$ ruby -e 'puts 1 ** (1/2.0)'
1.0

Now let's take the square root of -1:

$ ruby -e 'puts -1 ** (1/2.0)'
-1.0

Huh? The square root of -1 is imaginary (or i, if you want to be specific). What's going on here?

Yes, I know about Math.sqrt, which at least thoughtfully throws an exception rather than giving an incorrect value:

$ ruby -e 'puts Math.sqrt(-1)'
-e:1:in `sqrt': Numerical argument out of domain - sqrt (Errno::EDOM)
    from -e:1
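A plausible explanation (my reading, not confirmed in the post): in Ruby, as in Perl, the ** operator binds more tightly than unary minus, so the expression never computes a root of a negative number at all:

```ruby
# -1 ** (1/2.0) parses as -(1 ** (1/2.0)): the base is 1, the result of
# the power is 1.0, and the leading minus is applied afterwards.
puts(-1 ** (1/2.0))    # prints -1.0

# Parenthesizing makes the base really be -1; the result is then no
# longer a plain -1.0 (NaN or a Complex, depending on Ruby version).
puts((-1) ** (1/2.0))
```

Perl behaves the same way: perl -le 'print -1 ** (1/2.0)' also prints -1, because perlop gives ** higher precedence than unary minus.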

by Ovid at February 20, 2009 22:08 UTC


Curtis Poe: How to construct a list in Perl, Ruby, Smalltalk, etc ...

Just read a nice blog post about constructing a list from another list. It demonstrates how to do this in several languages. Naturally, the punchline is Java :)

by Ovid at February 20, 2009 16:54 UTC


Kirrily Robert: O, Canada!

Barack Obama and I both arrived safely in Canada yesterday for our respective trips. He was here to meet with Prime Minister Stephen Harper and hold a press conference, but I’m here in Vancouver, BC for Northern Voice, a blogging and social media conference to which I was invited by Freebase contributor Jim Pick.

I visited my favourite sculpture at the airport:

Haida Gwaii 1

And I’m staying at the YWCA, the worst thing about which is that it’s hard to get your arms from the W position to the C while dancing in the shower.

On Sunday, we’ll be holding a Freebase meetup at the Irish Heather pub. You should come.

by Skud at February 20, 2009 15:58 UTC


Adam Kennedy: Strawberry Perl Portable Beta 2 passes testing

After some experimentation today with the latest Strawberry Portable Beta 2, it would appear that it now passes all required functions, and all previously noted bugs are now solved.

I was able to install it to a drive, start the CPAN client, install Padre from scratch with all dependencies, and all of the .bat files in the /perl/bin directory correctly launch the related Perl scripts.

With nothing left to do, we should see the final stable release in the next couple of days in a special mid-cycle Strawberry release.

by Alias at February 20, 2009 15:14 UTC


Curtis Poe: Moose!

We've started to use Moose pretty heavily at work, but until today, I've not had the chance to write any Moose classes myself. Today, I wrote my first two Moose classes:

package PIPs::QueryParams::Param;

use Moose;

has name => (is => 'ro', isa => 'Str', required => 1);

has method =>
        (is => 'ro', isa => 'Str', lazy => 1, default => sub { $_[0]->name });
has required   => (is => 'ro', isa => 'Bool', default => 0);
has repeatable => (is => 'ro', isa => 'Bool', default => 0);
has datatype   => (is => 'ro', isa => 'Str',  default => 'xsd:string');
has rel        => (is => 'ro', isa => 'Str',  default => '');

__PACKAGE__->meta->make_immutable;
no Moose;

1;

And ...

package PIPs::QueryParams;

use Moose;
use Carp ();
use PIPs::QueryParams::Param;

has params => (
    is  => 'ro',
    isa => 'ArrayRef[PIPs::QueryParams::Param]',
);

sub BUILDARGS {
    my ($class, %args) = @_;
    my @params;
    my $controller           = $args{controller};
    my $allowed_query_params = $controller->allowed_query_params;
    foreach my $name ( sort keys  %$allowed_query_params) {
        my $args = $allowed_query_params->{$name};
        $args->{name} = $name;
        push @params => PIPs::QueryParams::Param->new(%$args);
    }
    $class->SUPER::BUILDARGS(
        params => \@params,
    );
}

__PACKAGE__->meta->make_immutable;
no Moose;

1;

And I can now just do this:

my $params = PIPs::QueryParams->new( controller => $controller );
my $list   = $params->params;
say $_->name foreach @$list;

I'm sure those can be written better, but those were soooo much easier to write than typical Perl classes. Ultimately, this is going to be used to auto-generate documentation for our REST API.

And note, because of the annoying sort of guy that I am (and preferring immutable objects), everything is read-only.

Update: Fixed the code per autarch's suggestion below. His comment now won't make much sense, but I didn't want an example of bad Moose code here.

by Ovid at February 20, 2009 15:04 UTC


Gabor Szabo: Moaning Goat Meter

Recently I looked for a graphical tool to monitor the memory and CPU usage of some of the applications I am running, and found mgm. It is not exactly what I was looking for, but it's a nice GUI tool to show various parameters of your system.

When it launches it shows a horizontal bar with various titles written on the vertical side, making them unreadable. I could not find a way to change this by clicking on the GUI, so I thought I'd look for the documentation.

Typing man mgm points me at /usr/share/doc/mgm/html/docs.html which is not on my system and to http://www.xiph.org/mgm/ which returns 404 Not Found.

On the positive side, it revealed that the code is written in Perl/Tk, but after seeing the other two pieces of information turn out to be invalid I had to check it. Indeed, /usr/bin/mgm is a 600-line Perl script using Tk.

It also uses a bunch of other modules located in /usr/share/mgm/. The code, written in 2001, is not bad at all. Well, maybe except for this line:

  $0="moaning-goat-meter"; # OK, this is evil.

and a slight lack of spaces.

Unfortunately the packages it uses are not real modules and are loaded via require.

Anyway, my point is not to criticize the code. I'd be proud if I had written such code back in 2001.

The problem was that I could not find any indication of the source of the code. A short search of my hard disk found /usr/share/doc/mgm/README.Debian. That too points to the nonexistent URL above.

I had to turn to Google, and even there I got many links pointing to the wrong, presumably old URL. Finally, though, I found it on the Linux Mafia website, which points back to the Xiph website, so there is obviously some relationship.

There are many nice things in that code for system administrators. I wonder whether it would be worth taking it from there and CPAN-ifying the code.

Oh, and by the way, I can use mgm with the following options to put it in vertical mode:

   mgm -bars horizontal -stack vertical

by Szabgab at February 20, 2009 14:34 UTC

February 19, 2009


Marcel Grünauer: The Grand Perspective on CPAN

I've created a tree map for the minicpan mirror using GrandPerspective on Mac OS X. To quote from GrandPerspective:

GrandPerspective is a small utility application for Mac OS X that graphically
shows the disk usage within a file system. It can help you to manage your disk,
as you can easily spot which files and folders take up the most space. It uses
a so called tree map for visualisation. Each file is shown as a rectangle with
an area proportional to the file's size. Files in the same folder appear
together, but their placement is otherwise arbitrary.

Within the application, you can mouse over the rectangles to see which files and folders they represent, but here I've just annotated a few interesting blocks. Somehow it feels like a city layout...

minicpan-grandperspective


February 19, 2009 17:50 UTC


Gabor Szabo: Experimental Perl 6 training / workshop in Frankfurt

There is less than a week until the German Perl Workshop and my Test Automation using Perl Training in Frankfurt, Germany, and I am planning what to do on the weekend between them.

The plan, as discussed on the Frankfurt.pm and Darmstadt.pm mailing lists and also mentioned on the German Perl Board, is to spend the weekend learning Perl 6. Not Rakudo or Parrot hacking, but learning to program in Perl 6.

I have been writing my Perl 6 training material, so I'd be glad to go over those slides. This means we'll learn stuff like:

  • Setting up Parrot and Rakudo to be able to run Perl 6
  • Setting up Padre with the Perl 6 plugin so we have a partial IDE for Perl 6
  • Then we go over the basic use of Perl 6 and how it differs from Perl 5.
    • Scalars
    • Basic I/O
    • Dealing with Files (I/O)
    • Control Structures (loops, conditionals)
    • Chained comparison
    • Lists and Arrays
    • Hashes
    • Subroutines, Multi dispatch subroutines, signatures
    • Junctions
  • The more advanced stuff still does not have slides, but I might be able to write a few basic ones by then; here are the subjects I'd like to cover:
    • Regexes, Grammars and Rules
    • Classes
    • Meta operators

There are some exercises already prepared that the students can solve, and I hope I'll be able to help them. A large part of the time will be spent experimenting with Perl 6 programming.

If you attend the workshop, at the end of the two days you will be able to write applications in Perl 6.

While in most cases you'll feel that you can still code faster in Perl 5 than in Perl 6 - after all, you have been using it for many years - there will already be cases where you prefer to write stuff in Perl 6, as that will save you time and headache!

This training or workshop is on a cost-sharing basis. We might have some small expenses (e.g. renting a room in the Voltair Club) that we'll share.

Other than that it has no cost.

In order to get a feeling for who wants to join us, please add your name to the GPW wiki

Announcements in German

Perl Nachrichten
Renée Bäcker

by Szabgab at February 19, 2009 08:23 UTC


Tim Bunce: NYTProf 2.08 - better, faster, more cuddly


I’ve just released NYTProf 2.08 to CPAN, some three and a half months after 2.07.

If you’ve been profiling large applications then the first difference you’ll notice is that the new version generates reports much faster.

NYTProf 2.08 timings.png

The next thing you may notice is that statement timings are now nicely formatted with units. Gisle Aas contributed the formatting code for 2.07 but I had to do some refactoring to get it working for the statement timings.

Another nice refinement is that hovering over a time will show a tool-tip with the time expressed as a percentage of the overall runtime.

Almost all the tables are now sortable. I used jQuery and the tablesorter plugin for that. I’ve not added any fancy buttons, just click on a table heading to sort by that column. You’ll see a little black arrow to show the column is sorted. (You can hold the shift key down to add second and third columns to the sort order.)

A profiler isn’t much use if it’s not accurate. NYTProf now has tests for correct handling of times for string evals within string evals. In fact the handling of string evals got a big overhaul for this version as part of ongoing improvements in the underlying data model. I’m working towards being able to show annotated performance reports for the contents of string evals. It’s not there yet, but definitely getting closer.

A related feature is the new savesrc=1 option. When enabled, with a recent version of perl, the source code for each source file is written into the profile data file. That makes the profile self-contained and, significantly, means that accurate reports can be generated even after the original source files have been modified.

Another new option is optimize=0. You can use it to disable the perl optimizer. That can be worth doing if the statement timings, or counts, for some chunk of code seem odd and you suspect that the perl optimizer has rewritten it.

The final new feature noted in the NYTProf 2.08 Changes file is that it’s now possible to generate multiple profile data files from a single application. Since v2.0 you could call DB::disable_profile() and DB::enable_profile() to control profiling at runtime. Now you can pass an optional filename to enable_profile to make it close the previous profile and open a new one. I imagine this would be most useful in long running applications where you’d leave profiling disabled (using the start=none option) and then call enable_profile and disable_profile around some specific code in specific situations - like certain requests to a mod_perl app.
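As a sketch of that last use case (the job loop, filenames, and do_expensive_work are hypothetical; enable_profile and disable_profile are the calls described above), a long-running script started with profiling off could capture one phase per profile file:

```perl
# Run under NYTProf with profiling initially disabled:
#   NYTPROF=start=none perl -d:NYTProf app.pl
use strict;
use warnings;

for my $job (1 .. 3) {
    if ( $job == 2 ) {
        # A distinct filename per capture, so profiles from different
        # jobs (or worker processes) don't overwrite each other.
        DB::enable_profile("nytprof.job$job.out");
        do_expensive_work();
        DB::disable_profile();
    }
    else {
        do_expensive_work();    # runs unprofiled
    }
}

sub do_expensive_work {
    my $total = 0;
    $total += $_ for 1 .. 100_000;
    return $total;
}
```

The same pattern maps onto a mod_perl handler: match the interesting request, call enable_profile with a fresh filename, handle the request, then call disable_profile.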

There’s one more new feature that I’ve just realised I’d forgotten to add to the Changes file before the release: Timings per rolled-up package name. What’s that? Well, it’s probably easiest to show you…

These images are taken from a profile of perlcritic. Each shows the time spent exclusively in subroutines belonging to a certain package and any packages below it. Hovering over a time gives the percentage, so I can see that the 57.3s spent in the 36 PPI packages accounted for 42% of the runtime.

NYTProf 2.08 pkg1.png

This gives you a quick overview for large (wide) codebases that would be hard to get in any other way.

Tables are generated for up to five levels of package name hierarchy, so you can drill down to finer levels of detail.

NYTProf 2.08 pkg2.png

 

NYTProf 2.08 pkg3.png

I can visualize a much better UI for this data than the series of tables nytprofhtml currently produces, but my limited free time and jQuery skills prevent me doing more. Patches welcome, naturally.

Enjoy!

p.s. I’ve a screencast from my NYTProf talk at the London Perl Workshop in November I hope to (finally) upload soon. It includes a demo of the package roll-up timing tables.

Posted in perl Tagged: nytprof, performance

by TimBunce at February 19, 2009 06:20 UTC

v
^
x

Adam KennedyFile::Find::Rule::Perl now supports META.yml no_index

http://svn.ali.as/cpan/releases/File-Find-Rule-Perl-1.05.tar.gz

After more than the usual amount of mucking around with the look of the API, I've finally managed to get support for META.yml's no_index entry implemented in my "doin' Perl stuff" File::Find::Rule plugin, File::Find::Rule::Perl.

I've managed to get it to be very very easy to use in the simple cases, while being flexible enough for all the various things people might use it for.

The four different usages are demonstrated in the following sample code, taken from the module docs.

# Provide the rules directly
$rule->no_index(
        directory => [ 'inc', 't', 'examples', 'lib/ignore' ],
        file => [ 'Foo.pm', 'lib/Foo.pm' ],
);

# Provide a META.yml to use
$rule->no_index( 'META.yml' );

# Provide a dist root directory to look for a META.yml in
$rule->no_index( 'My-Distribution' );

# Automatically pick up a META.yml from the target directory
$rule->no_index->in( 'My-Distribution' );

As per the META.yml spec (once the convention is codified), exclusions are expressed as root-relative, multi-part Unix paths.

In this case, the "root-relative" part is always implemented relative to the directories provided to the ->in method, so this method isn't going to be useful for scanning an expanded minicpan or any other tree with multiple nested META.yml files.

But, when combined with File::Find::Rule::VCS, it should be trivial now to write your own tools for analysing Perl code in your modules in the same way PAUSE does, with something like this.

my @perl = File::Find::Rule->ignore_svn->no_index->perl_file->in( 'My-Distribution' );

Now I just need to go through and update all my stuff to use it.

by Alias at February 19, 2009 01:03 UTC

February 18, 2009

v
^
x

chromaticPerl 6 Design Minutes for 04 February 2009

The Perl 6 design team met by phone on 04 February 2009. Larry, Jerry, Will, Patrick, Nicholas, and chromatic attended.

Allison:

  • successful migration of the Subversion repository to svn.parrot.org
  • took up chromatic's ticket challenge and closed a handful a day, applying patches, fixing bugs, clearing out some old TODOs that don't fit with current architecture
  • started, completed, and merged in the second string refactor branch, a large-scale function name cleanup
  • a few fixes on the POD parser, mostly handed it off to kj
  • updated the Parrot Ubuntu packages on the PPA with instructions for using it
  • speaking about Parrot at IBM tomorrow

Jerry:

  • enjoyed the lively discussion with mtnviewmark, timtoady, and others about metaops
  • really like where that ended up
  • Jonathan should be making some rakudo commits shortly to clean up the cross metaop spectest failures
  • thanks to Larry's comments, i'm polishing S19
  • haven't had much uninterrupted time lately, so progress has been slower than I like
  • keep thinking that's going to clear up soon... but that hasn't been the case for weeks now

Larry:

  • completely revamped STD.pm's metatoken parsing
  • treating generated metaoperators as longest tokens did not scale
  • chasing the implications of that through
  • LTM occurs on the metaoperator itself separately from the base operator
  • (most of the base operators are infixes)
  • what it modifies gets parsed as a separate token
  • has implications on the order you check things
  • != isn't a separate token
  • has to be parsed with the combination of the ! metaoperator
  • cuts the number of tokens way down
  • the lexer runs a fair bit faster
  • invented a new notation for disambiguating infix operators
  • they may now all be put in [], the same form as reduce operators
  • internal to a metasequence (need a new name for that) sometimes need to identify which operator to apply in which order
  • &[] now refers to the infix function itself
  • makes it easy to pass binary operators into various functional programming primitives
  • reduce operator still stays out in front with square brackets
  • just generalized the notation
  • made more comments on S19
  • moving away from the notion of a Prelude
  • provisionally now called a Setting
  • it implies things before and after the text of a program
  • the -n and -p loops put a scope around the scope of your file
  • invented several new pseudo-package name prefixes to refer to the Setting
  • PERL is the outermost setting as well as LANG for the sublanguage the file is actually parsed in (such as from -n or -p)
  • just changed the Prelude to the Setting option in S19
  • chopped out the old, non-trie lexer since we're not using it
  • thought it might clean up the looks of the LTM for anyone interested in implementing his own
  • not that mine is a parallel matcher yet....
  • still something I'd like to do
  • maybe I won't have to
  • working with Mark on revisions to the new metaops (old minus metaops from last week)
  • many binary operators could use a metaop which reverses their arguments
  • would have the same effect on comparison operators as reversing the sense of the test
  • mutated into a R operator
  • and the X operator lost its second X to work the same
  • started a trend: adding more metaoperators with a prefix capital letter is a good approach
  • might end up with Z for a zip with operation
  • working on moving some of the preludey/settings stuff out of STD.pm into a separate file
  • working over the various symbol tables to do lexical scoping more sanely
  • this is prep work for parsing a real Setting file and dumping out the symbols such that they can be slurped in for the user's file compilation

Will:

  • minor station keeping on Parrot
  • no major progress on cleaning up deprecated stuff
  • hope to release a copy of This Week in Parrot
  • trying to resurrect that
  • lots of questions in IRC about why things are happening
  • we don't have a good way to answer that
  • will try to post short articles on front page of parrot.org
  • hope to do so every Sunday until at least 1.0
  • then hope to hand it off to someone else

Patrick:

  • moved Rakudo out of the Parrot repository
  • its official location is github
  • the old Rakudo stuff is still in the Parrot repository
  • it'll be gone tomorrow
  • decided on Git because there weren't many pros for sticking with Subversion
  • getting everyone up to speed on using Git isn't super easy
  • haven't run into any major blockers yet either
  • usually just reorienting ourselves to different commands
  • hacked up a new Configure.pl for Rakudo out of the Parrot tree
  • making it work with an installed Parrot
  • Parrot isn't quite there yet
  • if someone doesn't have Parrot but downloads Rakudo, what's the best way to get them an appropriate revision of Parrot to run on?
  • looking at options for that
  • very pleased to see Larry's changes to metaops and other pieces
  • pleased that Prelude is now Setting
  • equally pleased that I haven't worked on any of these pieces yet....
  • hooray for delayed binding

c:

  • Alan Kay is smiling somewhere

Patrick:

  • late binding wins again!
  • laziness is paying off
  • Rakudo is close to having its own Setting written in Perl 6
  • part of the repository
  • probably won't happen by Frozen Perl this weekend
  • have some documentation to write for that
  • should have it by the next Rakudo release, sometime in February

c:

  • finishing the draft Parrot support policy
  • pretty aggressive, but what we discussed at PDS
  • will be in the repository today

Nicholas:

  • what's the schedule on Rakudo releases?

Patrick:

  • I expect that they'll occur timed with Parrot releases
  • but not simultaneous
  • Parrot releases on the third Tuesday
  • Rakudo will release the weekend after that
  • this'll give us time to tie it to that specific release
  • we'll continue the monthly release cycle
  • for most of 2009, I expect many people won't want to play with the released version of Rakudo
  • they'll want to play with the head version
  • all of the cool features and bugfixes
  • it'll track Parrot, but not necessarily Parrot's head
  • it'll make sure you have a sufficient version of Parrot, but not necessarily the latest one
  • January has shown that when Rakudo and Parrot are separated, changes to Parrot trunk can easily break Rakudo
  • we update a file in the repository now
  • we'll update that whenever something happens in Parrot that we need to update it with
  • that version will always include at least a Parrot monthly release

Nicholas:

  • Parrot's starting to get to the point of Perl 5
  • changes can break CPAN modules

c:

  • we need to be more aggressive about adding those tests to the core test suite

by chromatic at February 18, 2009 21:43 UTC

v
^
x

brian d foyIs there a module that lobotomizes subroutines?

Is there already a module that will turn a list of subroutines into no-ops? I'm doing this to disable features in production code, not testing, and only because other work-arounds are intractable. It's monkeypatching to give a class a lobotomy.

I know about Curtis's Sub::Override, so I might just write a wrapper around that for an interface that looks something like:

use Sub::Nerf;
 
nerf( $return_value, @list_of_subs );  # all subs just return $return_value
 
unnerf( @list_of_subs ); # back to where we started
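For what it's worth, the imagined interface can be sketched in a few lines of core Perl using typeglob assignment (which is essentially what Sub::Override does under the hood). The Sub::Nerf name and the nerf/unnerf interface are hypothetical, taken straight from the post; `My::Class::dangerous` is just a demonstration sub:

```perl
#!/usr/bin/perl
# A minimal sketch of the imagined Sub::Nerf interface, via direct
# typeglob assignment. Everything here is illustrative, not a real module.
use strict;
use warnings;

my %saved;    # fully-qualified sub name => original code ref

sub nerf {
    my ( $return_value, @subs ) = @_;
    no strict 'refs';
    no warnings 'redefine';
    for my $name (@subs) {
        $saved{$name} ||= \&{$name};        # remember the original once
        *{$name} = sub { $return_value };   # lobotomize: always return the value
    }
}

sub unnerf {
    my @subs = @_;
    no strict 'refs';
    no warnings 'redefine';
    for my $name (@subs) {
        *{$name} = delete $saved{$name} if $saved{$name};
    }
}

# Demonstration
sub My::Class::dangerous { die "should not run in production" }

nerf( 42, 'My::Class::dangerous' );
print My::Class::dangerous(), "\n";    # prints 42, no exception

unnerf('My::Class::dangerous');
eval { My::Class::dangerous() };
print $@ ? "restored\n" : "still nerfed\n";    # prints "restored"
```

A real module would also want to handle subs that don't exist yet and nerfing the same sub twice, but the typeglob trick is the heart of it.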

by brian_d_foy at February 18, 2009 20:59 UTC

v
^
x

Marcel Gr?nauerany::feature - Backwards-compatible handling of new syntactic features

I've released any::feature. The development repo is on github.

The problem

Perl 5.10 introduces new syntactic features which you can activate and deactivate with the feature module. You want to use the say feature in a program that's supposed to run under both Perl 5.8 and 5.10. So your program looks like this:

use feature 'say';
say 'Hello, world!';

But this only works in Perl 5.10, because there is no feature module in Perl 5.8. So you write

use Perl6::Say;
say 'Hello, world!';

This works, but it's strange to force Perl 5.10 users to install Perl6::Say when the say feature is included in Perl 5.10.

The solution

Use any::feature!

WARNING: This is just a proof-of-concept.

any::feature can be used like Perl 5.10's feature and will try to "do the right thing", regardless of whether you use Perl 5.8 or Perl 5.10.

At the moment, this is just a proof-of-concept and only handles the say feature. If things work out, I plan to extend it with other Perl 5.10 features.

The following programs should work and exhibit the same behaviour both in Perl 5.8 and Perl 5.10.

This program will work:

use any::feature 'say';
say 'Hello, world!';

This program will fail at compile-time:

use any::feature 'say';
say 'Hello, world!';

no any::feature 'say';
say 'Oops';

The features are lexically scoped, which is how they work in Perl 5.10:

{
    use any::feature 'say';
    say 'foo';
}
say 'bar';     # dies at compile-time

Like this post? - Digg Me! | Add to del.icio.us! | reddit this! | Reply on Twitter

February 18, 2009 14:04 UTC

v
^
x

Dave CrossYak Shaving

For a few months I've been playing with conky - which is a nice system for writing stuff onto a Linux desktop. I was introduced to it by a series of LifeHacker posts last year.

Just last week, they featured a really nice set-up which I wanted to go some way towards recreating. The post included a link to the programs that were used to create the desktop, so it was easy to work out what was going on.

Most of the data was pulled from web feeds and converted to flat text. That's a nice approach as once you've got that working, there's no limit to the data you can use.

I was slightly disappointed, however, to see that the code included in the article had three separate scripts (one for each source used) and that they were all bash scripts which used curl to grab the feeds and a load of sed and grep to extract the relevant parts. What this really needed was a generic approach.

So I reached for the Template Toolkit. And I reached for Template::Plugin::XML::RSS. And then I stopped myself. Not all web feeds are RSS these days (that's why we've largely stopped calling them RSS feeds) so XML::RSS wouldn't always be the right tool. What I really needed was XML::Feed - which handles both RSS and Atom and treats them both in the same way.

But there wasn't a Template::Plugin::XML::Feed. I say "wasn't" rather than "isn't" as there is one now - I uploaded it last night.

I didn't get much time to play with conky. But I've now got all of the tools I need. In particular, I can create simple programs like this to access web feeds.

#!/usr/bin/perl

use strict;
use warnings;

use Template;
use URI;

my $t = Template->new;
my $uri = URI->new('http://search.twitter.com/search.atom?q=@davorg');
$t->process(\*DATA , { uri => $uri })
  or die $t->error;

__END__
[% USE tweets = XML.Feed(uri);
   USE autoformat(right => 80);
   FOREACH tweet IN tweets.entries -%]
[% tweet.author %]:
[% tweet.title | autoformat -%]
[% LAST IF loop.count == 5 -%]
[% END -%]

Of course, I need to remove the hard-coded URI and put the template into a separate file. That's tonight's first little project.

by davorg at February 18, 2009 13:23 UTC

v
^
x

Gabor SzaboTwitter

A comment to my previous post made me now join Twitter as well. I wonder if I'll use it more than I use Identi.ca

by Szabgab at February 18, 2009 11:52 UTC

v
^
x

Gabor SzaboPrices

Yesterday I went to buy a disk-on-key. First I checked the Office Depot web-site, where I found a SanDisk Cruzer Micro U3 8GB for 120 NIS (~30 USD).

I said OK, I don't have much time to fool around with these things, and even though it might be a bit more expensive I'll buy that.

As I also had to go to the bank, I did that first. Since I was already there, I went into the local shop selling cameras and camera memory. I thought I'd ask there, just so I could say I had checked multiple sources.

No problem, the guy was just placing the newly arrived disk-on-keys on the shelves. Same brand, same capacity. 160 NIS.

Great, so now I know the 120 NIS is a good deal.

Then as I went back to my car I remembered another little computer shop at the lower floor. I know it is expensive but I figured if I get another such high price I'll really feel well with the 120 NIS.

The same device was 90 NIS there.

That makes 70 NIS (~ 18 USD) difference between the two floors. About 26 stairs.


by Szabgab at February 18, 2009 09:27 UTC

v
^
x

Gabor SzaboMore Padre blogs

In Padre blogs I already listed several people; since then, Claudio has also mentioned Padre, an IDE for Perl, in his blog post about Perl's new wave.

by Szabgab at February 18, 2009 09:07 UTC

February 17, 2009

v
^
x

Mark Jason DominusSecond-largest cities

A while back I was in the local coffee shop and mentioned that my wife had been born in Rochester, New York. "Ah," said the server. "The third-largest city in New York." Really? I would not have guessed that. (She was right, by the way.) As a native of the first-largest city in New York, the one they named the state after, I have spent very little time thinking about the lesser cities of New York. I have, of course, heard that there are some. But I have hardly any idea what they are called, or where they are.

It appears that the second-largest city in New York state is some place called (get this) "Buffalo". Okay, whatever. But that got me wondering if New York was the state with the greatest disparity between its largest and second-largest city. Since I had the census data lying around from a related project (and a good thing too, since the Census Bureau website moved the file) I decided to find out.

The answer is no. New York state has only one major city, since its next-largest settlement is Buffalo, with 1.1 million people. (Estimated, as of 2006.) But the second-largest city in Illinois is Peoria, which is actually the punchline of jokes. (Not merely because of its small size; compare Dubuque, Iowa, a joke, with Davenport, Iowa, not a joke.) The population of Peoria is around 370,000, less than one twenty-fifth that of Chicago.

But if you want to count weird exceptions, Rhode Island has everyone else beat. You cannot compare the sizes of the largest and second-largest cities in Rhode Island at all. Rhode Island is so small that it has only one city. Seriously. No, stop laughing! Rhode Island is no laughing matter.

The Articles of Confederation required unanimous consent to amend, and Rhode Island kept screwing everyone else up, by withholding consent, so the rest of the states had to junk the Articles in favor of the current United States Constitution. Rhode Island refused to ratify the new Constitution, insisting to the very end that the other states had no right to secede from the Confederation, until well after all of the other twelve had done it, and they finally realized that the future of their teeny one-state Confederation as an enclave of the United States of America was rather iffy. Even then, their vote to join the United States went 34–32.

But I digress.

Actually, for many years I have said that you can impress a Rhode Islander by asking where they live, and then—regardless of what they say—remarking "Oh, that's near Providence, isn't it?" They are always pleased. "Yes, that's right!" The census data proves that this is guaranteed to work. (Unless they live in Providence, of course.)

Here's a joke for mathematicians. Q: What is Rhode Island? A: The topological closure of Providence.

Okay, I am finally done ragging on Rhode Island.

Here is the complete data, ordered by size disparity. I wasn't sure whether to put Rhode Island at the top or the bottom, so I listed it twice, just like in the Senate.


State | Largest city and its population | Second-largest city and its population | Quotient
Rhode Island Providence-New Bedford-Fall River 1,612,989
Illinois Chicago-Naperville-Joliet 9,505,748 Peoria 370,194 25.68
New York New York-Northern New Jersey-Long Island 18,818,536 Buffalo-Niagara Falls 1,137,520 16.54
Minnesota Minneapolis-St. Paul-Bloomington 3,175,041 Duluth 274,244 11.58
Maryland Baltimore-Towson 2,658,405 Hagerstown-Martinsburg 257,619 10.32
Georgia Atlanta-Sandy Springs-Marietta 5,138,223 Augusta-Richmond County 523,249 9.82
Washington Seattle-Tacoma-Bellevue 3,263,497 Spokane 446,706 7.31
Michigan Detroit-Warren-Livonia 4,468,966 Grand Rapids-Wyoming 774,084 5.77
Massachusetts Boston-Cambridge-Quincy 4,455,217 Worcester 784,992 5.68
Oregon Portland-Vancouver-Beaverton 2,137,565 Salem 384,600 5.56
Hawaii Honolulu 909,863 Hilo 171,191 5.31
Nevada Las Vegas-Paradise 1,777,539 Reno-Sparks 400,560 4.44
Idaho Boise City-Nampa 567,640 Coeur d'Alene 131,507 4.32
Arizona Phoenix-Mesa-Scottsdale 4,039,182 Tucson 946,362 4.27
New Mexico Albuquerque 816,811 Las Cruces 193,888 4.21
Alaska Anchorage 359,180 Fairbanks 86,754 4.14
Indiana Indianapolis-Carmel 1,666,032 Fort Wayne 408,071 4.08
Colorado Denver-Aurora 2,408,750 Colorado Springs 599,127 4.02
Maine Portland-South Portland-Biddeford 513,667 Bangor 147,180 3.49
Vermont Burlington-South Burlington 206,007 Rutland 63,641 3.24
California Los Angeles-Long Beach-Santa Ana 12,950,129 San Francisco-Oakland-Fremont 4,180,027 3.10
Nebraska Omaha-Council Bluffs 822,549 Lincoln 283,970 2.90
Kentucky Louisville-Jefferson County 1,222,216 Lexington-Fayette 436,684 2.80
Wisconsin Milwaukee-Waukesha-West Allis 1,509,981 Madison 543,022 2.78
Alabama Birmingham-Hoover 1,100,019 Mobile 404,157 2.72
Kansas Wichita 592,126 Topeka 228,894 2.59
Pennsylvania Philadelphia-Camden-Wilmington 5,826,742 Pittsburgh 2,370,776 2.46
New Hampshire Manchester-Nashua 402,789 Lebanon 172,429 2.34
Mississippi Jackson 529,456 Gulfport-Biloxi 227,904 2.32
Utah Salt Lake City 1,067,722 Ogden-Clearfield 497,640 2.15
Florida Miami-Fort Lauderdale-Miami Beach 5,463,857 Tampa-St. Petersburg-Clearwater 2,697,731 2.03
North Dakota Fargo 187,001 Bismarck 101,138 1.85
South Dakota Sioux Falls 212,911 Rapid City 118,763 1.79
North Carolina Charlotte-Gastonia-Concord 1,583,016 Raleigh-Cary 994,551 1.59
Arkansas Little Rock-North Little Rock 652,834 Fayetteville-Springdale-Rogers 420,876 1.55
Montana Billings 148,116 Missoula 101,417 1.46
Missouri St. Louis 2,796,368 Kansas City 1,967,405 1.42
Iowa Des Moines-West Des Moines 534,230 Davenport-Moline-Rock Island 377,291 1.42
Virginia Virginia Beach-Norfolk-Newport News 1,649,457 Richmond 1,194,008 1.38
New Jersey Trenton-Ewing 367,605 Atlantic City 271,620 1.35
Louisiana New Orleans-Metairie-Kenner 1,024,678 Baton Rouge 766,514 1.34
Connecticut Hartford-West Hartford-East Hartford 1,188,841 Bridgeport-Stamford-Norwalk 900,440 1.32
Oklahoma Oklahoma City 1,172,339 Tulsa 897,752 1.31
Delaware Seaford 180,288 Dover 147,601 1.22
Wyoming Cheyenne 85,384 Casper 70,401 1.21
South Carolina Columbia 703,771 Charleston-North Charleston 603,178 1.17
Tennessee Nashville-Davidson--Murfreesboro 1,455,097 Memphis 1,274,704 1.14
Texas Dallas-Fort Worth-Arlington 6,003,967 Houston-Sugar Land-Baytown 5,539,949 1.08
West Virginia Charleston 305,526 Huntington-Ashland 285,475 1.07
Ohio Cleveland-Elyria-Mentor 2,114,155 Cincinnati-Middletown 2,104,218 1.00
Rhode Island Providence-New Bedford-Fall River 1,612,989

Some of this data is rather odd because of the way the census bureau aggregates cities. For example, the largest city in New Jersey is Newark. But Newark is counted as part of the New York City metropolitan area, so doesn't count separately. If it did, New Jersey's quotient would be 5.86 instead of 1.35. I should probably rerun the data without the aggregation. But you get oddities that way also.

I also made a scatter plot. The x-axis is the population of the largest city, and the y-axis is the population of the second-largest city. Both axes are log-scale:

Nothing weird jumps out here. I probably should have plotted population against quotient. The data and programs are online if you would like to mess around with them.
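The quotient column is straightforward to reproduce from the populations in the table. A quick sketch (not the author's actual program) using the New York and Illinois figures:

```perl
#!/usr/bin/perl
# Reproduce the quotient column from the table above for two states,
# using the 2006 metro-area estimates quoted in the post.
use strict;
use warnings;

my %population = (
    'New York' => [ 18_818_536, 1_137_520 ],    # NYC metro, Buffalo
    'Illinois' => [ 9_505_748,  370_194 ],      # Chicago metro, Peoria
);

for my $state ( sort keys %population ) {
    my ( $largest, $second ) = @{ $population{$state} };
    printf "%-8s %5.2f\n", $state, $largest / $second;
}
# Illinois 25.68
# New York 16.54
```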


I gratefully acknowledge the gift of Tim McKenzie. Thank you!

February 17, 2009 20:41 UTC

v
^
x

Dave RolskyMoose Book? Well, Sort Of ...

Occasionally, someone will pop up and say that Moose really needs a book. "X looks good, but it needs a book before I can learn it" is a common meme among programmers.

This is crazy, of course. Demanding a book before you learn something means you'll never learn a lot of things. Book publishing is a risky proposition for many topics, and with the surfeit of good documentation on the net, it's getting harder and harder to justify a book for any given topic. Even for books that aren't failures, writing a book is not a good way for an author to make money.

I put a ridiculous amount of time into the Mason book, and my estimate is that I made $20 per hour (maybe less). Of course, having a book looks great on my resume, but the direct payoff is low. At this point in my career, it'd be hard to justify the effort required to produce another niche book, even assuming there was a publisher.

But the real point of this entry is to highlight just how much free documentation Moose has. A commenter on a previous post mentioned that he or she had created PDF output of the entire Moose manual. There are two versions at the link with different formatting, the shorter of which is about 58 pages. This is just the manual, not the cookbook. I imagine if the cookbook got the same treatment, we'd easily have 100+ pages of documentation. That doesn't include the API docs; this is all stuff that might go in a book (concepts, recommendations, examples, recipes).

So please stop asking for a Moose book, and just read the one that already exists!

by Dave Rolsky at February 17, 2009 19:21 UTC

v
^
x

Dave RolskyAm I a modern dinosaur?

♪ I'm a dinosaur, somebody is digging my bones ♪ - King Crimson

I just read a great article/talk by Charles Petzold that was recently (re-)posted on reddit. He talks about how Windows IDEs have evolved, and how they influence the way one programs.

Reading it, I was struck by just how ancient the way I program is. I use Emacs, a distinctly non-visual editor. When I work on GUIs, I do it using HTML and CSS, which means I edit the GUI not using a GUI, but through a text interface. Then I have a separate application (my browser) in which I view the GUI.

My GUI development process involves tweaking the GUI definition (HTML & CSS), then reloading in the browser, and back and forth. All the actual GUI behavior is separate yet again, divided between client-side code (Javascript) and server-side (a Catalyst app, these days).

This is truly an archaic way to program. Not only is my editor archaic, the very thing I'm developing is a thin client talking to a fat server. Of course, with Javascript, I can make my client a bit fatter, but this is still very much like a terminal talking to a mainframe.

But I like programming this way well enough, and it seems to produce reasonable results. There are plenty of good reasons for deploying and using thin clients, and Emacs is an immensely powerful tool. HTML and CSS suck big time, but my limited experience with "real" GUI programming suggests that's just as painful, if not more so.

But still, I feel like I'm old and grizzled before my time. Surely I should be using a powerful new-fangled editor with powerful new-fangled libraries. Bah, kids today and their GUIs!

by Dave Rolsky at February 17, 2009 16:56 UTC

v
^
x

Curtis PoeHow To Not Teach Programming (Haskell)

Note: heavy use of HTML unicode character codes here. If some of the following is garbled, I trust you'll understand.

So I have the book Programming in Haskell. So far the book seems to be fairly straightforward, except for one little hitch. Here's the start of the second paragraph of the inside front cover:

This introduction is ideal for beginners: it requires no previous programming experience and all concepts are explained from first principles with the aid of carefully chosen examples. Each chapter contains a series of exercises ...

Exercises? For beginners? Well, do beginners to programming want to read about programming or to program? Almost everyone I've ever met wants to dive in and make things which print "Hello, World" and move on from there. We want to program, so a Haskell book for beginners with exercises sounds perfect for me.

So since this is for beginners, some might be surprised to see this:

(⨁) = λx → (λy → x ⨁ y)
(x⨁) = λy → x ⨁ y
(⨁y) = λx → x ⨁ y

But the text builds up to this at that point and the above is understandable (the author's really good about this), but still, it's the sort of thing which makes many a student put down a book. However, that was merely a formalized definition of something and wasn't too bad. Here's what's too bad:

> [ x ↑ 2 | x ← [1..5]]

The above is a list comprehension in Haskell. It will generate a list of the square of the numbers one through five. Note that this is not the mathematical notation of a comprehension. Prior to the above horror, the author clearly gives the mathematical notation:

{ x² | x ∈ {1..5} }

So the author can't use the excuse that he was showing mathematical notation. However, the keen observer might notice a small difference between them:

> [ x ↑ 2 | x ← [1..5]]
{ x² | x ∈ {1..5} }

Hmm, what's that leading angle bracket on the Haskell version?

THIS WAS MEANT TO BE TYPED AT A HASKELL PROMPT!!!

Yes, that's right. Here's that full snippet from the book:

> [ x ↑ 2 | x ← [1..5]]
[1,4,9,16,25]

The author gives code that the beginning student cannot type in! There is a "symbol table" in Appendix B which has a mapping of those funky characters to what the poor student should type in, but it's not even mentioned at the beginning (edit: it turns out that this is mentioned in a single sentence after the Hugs compiler introduction, after the student discovers they can't type things in; I managed to miss it). New to programming? You get to guess this. I'm not making this up. Even the book's intro to the Hugs compiler has this:

> 2 ↑ 3
8

What on earth could convince someone writing a programming book for beginning programmers that having them type in examples they can't type in would be a good idea? The rest of the book seems fine, but this is just, wow. I'm at a loss for words. And I just gave you the easy ones to figure out. You think the beginner is going to know to type \ for λ or /= for ≠?

Update: I forgot to mention that in the "Hugs" intro, there's this little gem:

> 7 ‘div‘ 2
3

If you don't know Haskell, what do you think you should type there? It's not explained in the introduction, the notation is not in the appendix and I eventually had to google for it. I had to play around with the HTML to replicate that as closely as I could.

Annoyingly, this is on page 12, introducing Hugs by showing various arithmetic expressions. The above is "integer division". Regular floating point division is not introduced until page 36 (and then as an afterthought) and coincidentally, that's also the page where the ‘div‘ syntax is finally explained.
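For anyone left guessing, here are the ASCII forms the book's notation corresponds to. This is standard Haskell (not from the book itself) and should behave the same in Hugs or GHCi:

```haskell
-- The typeset symbols map to plain ASCII as follows:
--   ↑ is ^,  ← is <-,  λ is \,  ≠ is /=,  and ‘div‘ is `div` (backticks)
squares :: [Integer]
squares = [ x ^ 2 | x <- [1..5] ]   -- [1,4,9,16,25]

half :: Integer
half = 7 `div` 2                    -- 3, integer division

notEqual :: Bool
notEqual = 1 /= 2                   -- True
```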

by Ovid at February 17, 2009 14:54 UTC

v
^
x

Marcus RambergNew-wave Perl

I was positively surprised by the response of the Perl community. It wasn't the typical "our programs run fast" (to ruby fanboys*) or "space as syntax wtf?" (to python fanboys). Instead, it seemed that the community took notice of the criticisms and made it pretty clear that waiting for Perl 6 was not an option. Today, Perl 6 is doing fine (you can write code in Perl 6) and so is Perl 5.

by marcus at February 17, 2009 09:21 UTC

v
^
x

chromaticOn Alternatives

In Should We Include a Better Alternative to File-Find in the perl5 Core?, Shlomi suggests that just as CPANPLUS and Module::Build entered 5.10 as alternatives to CPAN and MakeMaker, it's worth considering adding an alternative to File::Find to the core.

I understand the desire, but that kind of thinking digs the hole deeper.

The Perl 6 core will contain only those libraries absolutely necessary to download, configure, build, test, and install other libraries. (That trick seems to work for free operating systems.)

In a better world, Perl 5 would do the same and shed a few megabytes, as well as all of the crazy baggage required to make dual-lived modules work. Creating Bundle::BloatMeHarder is trivial.

The one corner case I can't quite figure out is how such a scheme would increase the likelihood of dropping support for ancient versions of Perl, such as everything older than Perl 5.8.8.

by chromatic at February 17, 2009 07:47 UTC

v
^
x

Perl BuzzMac OS X Security Update 2009-001 might break your Perl

I got errors on my Mac today complaining about IO.pm. I had just installed Parrot on the way to building up a Rakudo to work on, so I figured something in the still-not-ironed-out Parrot install had caused the problem. It looked like this:

% perl -MIO
IO object version 1.22 does not match bootstrap parameter 1.23
at /System/Library/Perl/5.8.8/....

I just figured I'd reinstall the module. I tried to update IO.pm, but the CPAN shell uses IO, and so it barfed. I had to install it by downloading a tarball (gasp!) and doing it manually. And then everything was fine.

And then my old colleague Ed Silva IMs me asking if I knew anything about it. I had to confess I was surprised to see he had the same problem.

And now Miyagawa, bless his heart, lays it all out for you in this blog post about the Mac OS X security update. Ooopsie!

Thanks to Miyagawa for explaining the problem.

by Andy Lester at February 17, 2009 04:54 UTC

v
^
x

Leon BrocardPerl 5.005_05 RC1

Hi there porters,

Perl 5.005 is a great version of Perl and still used by many people. There have been a few distribution updates and tool changes which mean that it no longer compiles cleanly everywhere, so I believe it's time for another maintenance release of perl5.005, which will lead to perl5.005_05.

Please compile and make test this release candidate on as many platforms as possible:

 http://acme.s3.amazonaws.com/perl/perl5.005_05-RC1.tar.gz

This is a release candidate. I'm interested in compilation fixes. I'm not interested in fixing warnings.

It is hosted on the Perl 5 Git repository:

 http://perl5.git.perl.org/perl.git/log/refs/heads/maint-5.005

Which means you can see the Changes file here:

 http://perl5.git.perl.org/perl.git/blob_plain/ba3f100:/Changes

See perlrepository.pod if you want to play with Git:

 http://perl5.git.perl.org/perl.git/blob/HEAD:/pod/perlrepository.pod

Thanks! Léon

by acme at February 17, 2009 04:04 UTC

February 16, 2009

Curtis PoeInertia Driven Programming

I had so much fun recalling this incident in a reply to Schwern, I thought it deserved a top-level post.

Years ago I worked at a company where we made revenue projections for a particular industry. Industry executives could log on and, when they saw what they were trying to project, enter a "weighting" factor that would adjust the numbers. For the sake of argument, we'll say that number defaulted to 12, but based on the executive's knowledge of what they were offering, they would adjust it up or down.

After six months of hard labor, one programmer had revamped the system and the weighting factor defaulted to "15". Our customers were livid. They accused us of "cooking the books". Even though our new numbers were more accurate, somehow the executives thought we were cheating and demanded that we change the default back to 12. Our better revenue projections, ironically, became a PR disaster.

I and another programmer were called into a meeting with a vice president and he asked us to change the number. The other programmer went to the whiteboard and started sketching. He explained how the resources worked, what weights were assigned to them, how revenue was multiplied by those weights and how six months of intensive regression analysis and revamping of our statistical model, blah, blah, blah and circles and arrows and a paragraph on the back of each one.

Fifteen minutes later, the programmer finished. The vice president looked at him and said "Yeah, now can you change the f***ing number to 12?"

Inertia is a terrible thing :)

by Ovid at February 16, 2009 21:46 UTC

chromaticFacts versus Reputation

What language community has a greater reputation for being obsessed with TDD?

Obie: NO, Giles Bowkett (discussing Ruby)

Perl has had a canonical test suite in the core repository and shipped those tests with every source distribution since 1987. As of at least Ruby 1.8.7 (May 2008), this was not true of Ruby.

The third edition of Programming Perl, released in 2000, had a short (less than a page) discussion of Perl's testing modules and the CPAN testing system. The first edition of Programming Ruby, released in 2001, has no mention of testing that I can find.

Test::Builder, written in 2002, allows the existence of hundreds of Perl testing modules which can all work together in the same program to improve testing and the reusability, abstraction, and clarity of test programs. To my knowledge, Ruby has nothing like this.
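To make the interoperability point concrete, here is a minimal example of my own (not from chromatic's post): Test::More's functions and a raw Test::Builder object both write to the same singleton, so they share one plan and one TAP stream.

```perl
#!/usr/bin/perl
# Illustration of the shared-Test::Builder design: assertions from
# different sources all count against the same plan.
use strict;
use warnings;
use Test::More tests => 2;
use Test::Builder;

ok 1 + 1 == 2, 'Test::More assertion';

# Test::Builder->new returns the singleton that Test::More also uses,
# so this counts as test #2 of the same plan.
my $tb = Test::Builder->new;
$tb->ok( 'perl' =~ /erl/, 'Test::Builder assertion' );
```

Any module built on Test::Builder (Test::Exception, Test::Deep, and hundreds more) composes the same way.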

The CPAN Testers project is nearly 11 years old (May 1998). Every distribution uploaded to the CPAN receives test reports for myriad combinations of operating system and version. The Perl CPAN culture encourages a testing culture, and has done so since before anyone in the US had even heard of Ruby. (Ruby has nothing quite like the CPAN.)

The Perl 6 specification includes a comprehensive test suite; every implementation of Perl 6 must pass this test suite to earn the label of "A conforming Perl 6 implementation". While the Rubinius project has done great work trying to create something similar for Ruby -- and while projects such as JRuby and IronRuby use this suite -- I'm not aware that the MRI developers have the same degree of interest in a comprehensive test suite.

This, then, is one of my persistent gripes with certain vocal members of the Ruby community. While it may be technically correct (the best kind of correct) that the Ruby community has the greatest reputation for testing of any language community, the facts as I see them seem to contradict such a strong assertion.

by chromatic at February 16, 2009 20:42 UTC

Marcus RambergLatest OSX Security upgrade can break your Perl

Apple has shipped a somewhat untested security update, it seems. After installing it, my Catalyst apps stopped working in development. Trying to reinstall some modules through the CPAN shell brought further misery. Luckily, Miyagawa had already posted a solution to the CPAN problem.

In addition, it seems they have installed a broken Scalar::Util, so after following his advice and installing IO manually to get an unbroken CPAN shell, I suggest running

cpan> force install Scalar::Util

to get a working weaken() again. At least this fixes my stuff, but they might have broken other XS modules too. I'll keep you posted if I discover anything else.
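A quick sanity check (my own sketch, not from Marcus's post) that the reinstalled Scalar::Util has a working XS weaken() — the pure-Perl fallback croaks as soon as weaken() is called:

```perl
#!/usr/bin/perl
# Verify that Scalar::Util's XS weaken() works: weaken a copy of a
# reference, drop the last strong reference, and check the copy clears.
use strict;
use warnings;
use Scalar::Util qw(weaken);

my $obj  = {};
my $copy = $obj;
weaken($copy);     # dies here if only the pure-Perl build is installed
undef $obj;        # last strong reference gone...
print defined $copy ? "weaken broken\n" : "weaken ok\n";
```

If this prints "weaken ok", the force install did its job.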

by marcus at February 16, 2009 18:46 UTC

Gabor SzaboMethods and Messages: Randal Schwartz on Smalltalk

I mentioned in my use.perl.org journal entry that I'd like to write about and link to every other member of the Perl community. Oh that's starting to sound bad. Are those people special just because they use Perl?

Anyway, I had just seen the Methods and Messages blog in which Randal Schwartz writes about Smalltalk

I met Randal during YAPC::NA::2008 in Chicago when we went to Due, which is apparently the second of the Unos, a famous local Italian restaurant. He talked a lot about Smalltalk, err.

So anyway, while he does not write a lot about Smalltalk, it might be worth checking out.

But beware as that blog leads to places such as Squeakland, the home of Etoys which can be a huge drain on your time.

by Szabgab at February 16, 2009 18:24 UTC

Justin MasonPlenty of money for Dublin’s bikes

So it seems that JC Decaux have been complaining about the costs of running the Velib scheme in Paris:

Since the scheme’s launch, nearly all the original bicycles have been replaced at a cost of 400 euros each.

Of course, this won't be a problem in Dublin. Going by Newstalk's estimates of how much the advertising space provided to JC Decaux for free, in exchange for the (as yet nonexistent) 450 bikes, would have cost, each bike comes at a public cost of 111,000 Euros. That should cover a lot of "velib extreme".

(OK, that may be overestimating it. The Irish Times puts the figure at a more sober EUR 1m per year; that works out as over EUR 2,000 per bike per year. Still, that should cover a few broken bikes.)

A quick reminder:

                       Paris                                Dublin
Bikes                  20,000                               450 promised
Billboards             ~1,600                               ~120 installed
Bikes per billboard    ~12.5                                ~3.8
Range                  10km (15e to 19e arrondissement)     4km (Mater Hospital to Grand Canal)

And, of course, there’s no sign of the bikes here yet… assuming they ever arrive. Heck of a job, Dublin City Council.

BTW, here’s the rate card for advertising on the “Metropole” ad platforms, if you’re curious, via the charmingly-titled Go Ask Me Bollix.

by Justin at February 16, 2009 17:53 UTC

Marcel GrünauerBenchmarking immutable Mouse

Dann has added benchmarks for immutable Mouse to App::Benchmark::Accessors; I've also updated the benchmarks page.

February 16, 2009 13:14 UTC

Curtis PoeExplaining explain()

On the Perl-QA list, Gabor Szabo pointed out a problem with the testing toolchain. Test::Most is advertised as a drop-in replacement for Test::More, which it was.

The problem is that Test::More has also added my explain function. explain is very handy in tests because it lets you do this:

explain $some_data;

It's like diag(), but only outputs when the tests are run in verbose mode. Thus, if you do make test, you won't see a bunch of irrelevant information dumped to the terminal. Further, it automatically calls Data::Dumper::Dumper() on references so programmers no longer curse when they see this:

# ARRAY(0x80631d0)

It's a very handy tool in testing, but Schwern was concerned that it was doing too much. What if you want to automatically expand the references, even when you're not in verbose mode? There's a great argument for this:

isa_ok $some_object, "Some::Class" or diag explain $some_object;

If there's a failure, you might want to see it right away. So in Test::More, explain just formats the data and you have to call one of the following:

diag explain $some_data;   # always appears
note explain $some_data;   # only appears in verbose mode

That covers everything. I felt that a simple explain $some_data was the common case and I wanted to optimize for that. Schwern argued that it was trying to do too much and wanted a separation of concerns. That's a very reasonable argument, but what Schwern and I both missed was what Gabor pointed out: it breaks the Test::Simple to Test::More to Test::Most upgrade path. The functions behave differently, and Test::Most is no longer a drop-in replacement.
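Put together, the two covered idioms look like this in a runnable test file (my own minimal example, using the explain() that ships with Test::More 0.82 and later):

```perl
#!/usr/bin/perl
# explain() only formats (calling Data::Dumper on references); you
# choose where the output goes with diag() or note().
use strict;
use warnings;
use Test::More tests => 1;

my $data = { answer => 42 };

pass 'a test so the plan is satisfied';

diag explain $data;   # formatted dump, always shown
note explain $data;   # formatted dump, only shown under prove -v
```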

So let's take this to its logical conclusion (with apologies to Schwern for this fun :)

diag() formats your data and outputs it to the diagnostic filehandle (typically STDERR). But wait! It's doing too much and we should separate the formatting from the displaying. Since we can't just print it (we need to make sure it goes to the correct handle), we do this:

output diag explain $some_data;

Oh, that fails. Because output() doesn't know if it's outputting TAP or diagnostics, we need this:

output diagnostics diag explain $some_data;

Shove a few colons in there and we can call this a DSL!

However, what we really want is respect. We want this from the Java programmers. So with some Acme::Dot love, we can do this:

Tests.output.diagnostics.diag.explain($some_data);

Now those Java programmers are going to take us seriously!

I want explain $some_data; to just work.

by Ovid at February 16, 2009 12:48 UTC

Curtis PoeBug in Perl Causes a Small Class::Sniff Issue

Jesse Vincent sent a bug report about Class::Sniff detecting non-existent packages.

Seems Jesse has a lot of code like this:

eval "require RT::Ticket_Overlay";
if ($@ && $@ !~ qr{^Can't locate RT/Ticket_Overlay.pm}) {
    die $@;
};

Well, that seems quite reasonable. Except for this:

#!/usr/bin/perl -l

use strict;
use warnings;

print $::{'Foo::'} || 'Not found';
eval "use Foo";
print $::{'Foo::'} || 'Not found';
eval "require Foo";
print $::{'Foo::'} || 'Not found';
__END__
Not found
Not found
*main::Foo::

That's right. Attempting to require a non-existent module via a string eval creates a symbol-table entry. Aristotle told me he was astonished that no one had caught this before. Frankly, I just think that not enough people are trying to do introspection in Perl.

This one will be tricky to work around. I thought "if the module doesn't actually exist, can I check to see if @ISA is there?" It gets automatically created for every package, but since the module representing that package doesn't exist, maybe it won't? No such luck:

print defined *NoSuchModule::ISA{ARRAY} ? 'Yes' : 'No';
print defined *NoSuchModule::xxx{ARRAY} ? 'Yes' : 'No';

That always prints "Yes" and then "No". @ISA is always created for every package if you try to access it. Darn.

I thought I could check for the module's existence in %INC, but inlined packages don't show up there, either (unless the author explicitly puts them there).

The only thing I can think of is this curious line:

print scalar keys %Foo::;

If you do that with a non-existent package which nonetheless has a symbol table entry, it still has no keys in its symbol table. However, if you do that with a module which exists but failed to load, you will probably have a few symbol table entries. This still doesn't quite solve the problem.

So how do I detect if a module in a symbol table failed to load? I'm not sure if I can. If I simply check to see if there are any keys in the symbol table, that should be enough, right? If someone evals "require $badmodule" and that require fails due to compilation errors, they'll exit or die, right? (too optimistic, I know)
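The "count the stash keys" heuristic sketched above looks like this in practice (looks_loaded() is my own name for it, not part of Class::Sniff's API):

```perl
#!/usr/bin/perl
# Distinguish a phantom stash entry (created by a failed string-eval'd
# require) from a package that actually loaded: the phantom's symbol
# table is empty, a loaded package's is not.
use strict;
use warnings;

sub looks_loaded {
    my $package = shift;
    no strict 'refs';
    return scalar keys %{"${package}::"};
}

# A failed string-eval'd require may leave a stash entry behind, but
# there are no symbols inside it...
eval "require No::Such::Module";
print looks_loaded('No::Such::Module') ? "real\n" : "phantom\n";

# ...while a package that really loaded (strict, via "use strict"
# above) has import(), bits(), and friends in its stash.
print looks_loaded('strict') ? "real\n" : "phantom\n";
```

This prints "phantom" then "real" — though, as noted, a module that exists but died during compilation may still leave a partially populated stash, so the heuristic is not airtight.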

Update: Alias, I've not forgotten about your request for ambiguous method detection. I've just not added it yet :)

by Ovid at February 16, 2009 10:16 UTC

Gabor SzaboWhat is Modern Perl?

Some time ago chromatic wrote about Modern Perl. I thought it was a neat idea but did not follow up. Based on the first post I thought it was mostly about the use of high-powered advanced features, but now, reading some of the comments, I understand that Modern Perl is at least in part about helping beginners.

That's much more interesting to me, as I feel I understand that part better. So today I looked at some of the discussion on use.perl.org and on the Modern Perl Books web site.

As far as I understand, there is a CPAN module called Modern::Perl that enables strict, warnings, and the features of Perl 5.10.

I agree with the former two, but unfortunately I still have an issue with 5.10. That's probably the main difference between Modern Perl and plain Perl for beginners. The former can demand 5.10 on both the development and production machines; the latter has to live with whatever is installed on corporate machines.

Luckily, in the past two years I have hardly seen any perl older than 5.6.0 at my clients, but many are still stuck with 5.6 and nearly none of them have made the change to 5.10. I can only assume that it is similar in other parts of the world. That means if I want to give them good service, I have to teach them something that will work on 5.6, or at least on 5.8.
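To make the constraint concrete, here is a tiny example of my own (not Gabor's) showing the kind of 5.10 feature that has to be avoided when the target is 5.6 or 5.8:

```perl
#!/usr/bin/perl
# The 5.10 defined-or operator versus the 5.6-safe spelling of the
# same idea.
use strict;
use warnings;

my $input;   # imagine this came from a config file or user

# Perl 5.10 and later only:
# my $value = $input // 'default';

# Works on 5.6 and 5.8 as well:
my $value = defined $input ? $input : 'default';
print "$value\n";   # prints "default"
```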

That still allows plenty of improvements in the direction of Modern Perl over most of the code out there, but it is a different type of beast.

by Szabgab at February 16, 2009 08:19 UTC

Adam KennedyInteresting discoveries while working on Top 100

As I try to stretch Algorithm::Dependency further in the Top 100 codebase, one of the more interesting discoveries I've found is that you can generalise just about all lists into one expression.

Alg::Dep works by taking a list of one or more selected vertices, and then walking the (extremely lightweight) dependency graph to find all the dependants. Doing that for each vertex as a single-element list, and ranking by the number of elements in the result, gives you the Heavy 100 list.

If you invert the graph, you get the Volatile 100 list.

The Debian list is just a crude filter on the Volatile 100 list, and not really of any interest algorithmically.

What DOES get interesting though, is when you try to do metrics that are dependent not only on the number of dependants, but also on the quality of them.

For example, let's imagine a notional "Above the Law" list. This would be a ranking of distributions which don't meet all the conditions for legal clarity (in Kwalitee this would be the expression "has_humanreadable_license && has_meta_license").

While you COULD derive the metric based on the raw number of downstream dependants, what would be more interesting would be the number of OTHERWISE LEGAL modules that the parent is obstructing.

So a module with 50 dependants all written by the same author with no license scores 0, because fixing that module doesn't fix the legality of any of the dependants.

A module that has 10 or 20 modules below it that are all legal scores higher, because fixing this module clears up the legal ambiguity for many more modules below it.

Similar calculations are interesting for things like minimum Perl version as well, because it lets us identify which modules are standing in the way of back-compatibility the most.

And for author sets, we might also want to filter the dependants by "not the same author" to factor out author-specific clusters.

The neat thing is you can use the same tool to solve all these, and it turns out to be MapReduce.

Once you've walked the dependencies, you get a simple identifier set. You can then map the set by an arbitrary expression (either "not author XXX" or "minimum version of Perl is lower than Y.ZZ") and then do an addition-reduction to get the dependant count.

You can do similar things by applying a vertex metric for, say, the total number of significant PPI tokens in a distribution.

This would let us refine the Volatile and Heavy lists by factoring in the actual volume of code to the calculation, and help us further prioritise the dependencies of big things like bioperl ahead of the dependencies of minor conveniences.

That ALL of these problems can generalise to Dependencies + Map + Reduce makes not only the code abstraction simpler, but also means there's some great opportunities here for parallelism, caching (at the map operation level) and other interesting strategies.
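A toy sketch of that Dependencies + Map + Reduce shape (the distribution names and author table below are made up, and the dependency walk itself — Algorithm::Dependency's job — is elided):

```perl
#!/usr/bin/perl
# Score a distribution by mapping its dependant set through an
# arbitrary predicate and addition-reducing the survivors.
use strict;
use warnings;

# Identifier sets produced by walking the graph from each distribution.
my %dependants = (
    'Dist-A' => [ qw(Dist-B Dist-C Dist-D) ],
    'Dist-B' => [ qw(Dist-C) ],
);

my %author = ( 'Dist-B' => 'ADAMK', 'Dist-C' => 'FOO', 'Dist-D' => 'BAR' );

sub score {
    my ( $dist, $predicate ) = @_;
    # Map each dependant through the predicate, then reduce to a count.
    return scalar grep { $predicate->($_) } @{ $dependants{$dist} || [] };
}

# "Not the same author" filter, to factor out author-specific clusters.
my $not_adamk = sub { ( $author{ $_[0] } || '' ) ne 'ADAMK' };

print score( 'Dist-A', $not_adamk ), "\n";   # 2: Dist-B is filtered out
```

Swapping in a different predicate ("minimum Perl lower than Y.ZZ", "otherwise legal") changes the list without touching the walk or the reduction.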

by Alias at February 16, 2009 05:13 UTC

Ricardo Signesgiving up on on-line todo lists?

I gave up index cards for OmniFocus about a year ago. OmniFocus was pretty easy to deal with, and it meant I didn't need to carry around index cards and a pen. Not too much later, I gave up OmniFocus for Hiveminder. Hiveminder was available from anywhere on the network, and its IM-based interface was fantastic.

Eventually I stopped using Hiveminder because I couldn't use it for things to do "later today": it didn't support any finer resolution of promised-by than calendar dates. That made it more useful for longer-term plans. This made me think I'd use it for bug tracking or managing simple projects like some of my free software, but it doesn't support dependencies or publicly readable groups.

I put stuff into RTM, thinking that its support for timed events would help, but although it's blazing fast, its UI is kind of mediocre, and it lacks a lot of the features that make Hiveminder so good. (Seriously, though, it is really fast.)

I've put some of my free software plans into LiquidPlanner, but they don't let me make my plans publicly visible, so that's no good.

I looked at some other ways to track bugs, but they all leave me unexcited.

I could manage reminders for specific times with iCal. It's not really exciting, but it would work, and I don't need them to be particularly searchable or structured. I just need to be able to set them up quickly.

That leaves managing random ideas I have, features I want to add to things, and things I want to do eventually. Until I find something I can use to make these lists public, I might as well use the thing that's the most convenient for me. It's probably going to end up being LiquidPlanner or index cards.

I wonder how many people use productivity software that they didn't design and feel like it's really, really great. Maybe I'm just too obsessive or weird in what I want. Oh well.

by rjbs at February 16, 2009 04:05 UTC

Marcel GrünauerTwitter avatar blackout

If you're on Twitter, would you change your avatar to black to show support for the fight against a "three accusations and you're offline" law in New Zealand?

Via gnat (Nathan Torkington).

February 16, 2009 01:19 UTC

February 15, 2009

Marcus RambergInstall modules via the CPAN shell from github

Nice hack by Miyagawa. I’d prefer if Perl continued to have a distributed delivery mechanism tho :-)

by marcus at February 15, 2009 22:27 UTC

Marcel GrünauerDissecting the Moose Part 5 - Accessor Generator Benchmarks Updated

The results of the accessor generator benchmarks generated by App::Benchmark::Accessors now have a permanent page. I will update the page from time to time as new versions of the accessor generators are released.

February 15, 2009 21:12 UTC

Marcus RambergPerl community not generating any internal buzz?

Chromatic points to a trend in the Perl community of not giving any link love to our own. Somehow, it feels like I've heard this before. I agree this is a negative trend, but I find it amusing that it comes from chromatic: the only posters I can remember him linking to in his journal are people spreading FUD about Perl, in his mission to refute their statements. Somehow it seems to me that linking to these people and giving them attention only makes them stronger…

by marcus at February 15, 2009 19:18 UTC

chromaticAnother Odd Perl Community Feature

As I write on Modern Perl Books, I've noticed an interesting trend in the access and referral statistics: very few links come from other Perl weblogs, journals, or related sites.

I know plenty of people talk and write about Perl, but I wonder if some of the trouble this community has with getting search engine and buzz results is due to this lack of interlinking. Compare the apparent web presence and mindshare of certain other language communities with a greater propensity toward chitchat. It seems that Gabor has it right when he asks people to link to his site instead of commenting there.

by chromatic at February 15, 2009 18:13 UTC

Perl.org sites : books | dev | history | jobs | learn | lists | use   
When you need perl, think perl.org  
(Last updated: February 27, 2009 12:41 GMT)