PerlMonks - The Monastery Gates

New Questions
Running a single instance of a Catalyst app across multiple subdomains with single sign-on
on Dec 23, 2009 at 10:34
1 direct reply by Akoya

    Esteemed monks,

    I am designing a Catalyst application (my first), which I hope to run as a single instance, with single sign-on, across multiple subdomains.

    I have found the Catalyst::Action::SubDomain module, which looks very promising. The documentation is fairly sparse, and does not address any domain/server configuration required in order to run a single instance as I propose.

    My questions are:

    1. In order to have a single Catalyst app instance running across multiple subdomains, must those subdomains be mirrored domains (i.e., using Apache's ServerAlias directive)?
    2. In order to have single sign-on between domains, must I use a wildcard domain in the session cookie? (See the config sketch after this list.)
    3. Most of all, is Catalyst::Action::Subdomain the best choice for determining within my app which domain the request is going to, so that the app can be customized for each domain?
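
    For question 2, a minimal config sketch (my own assumption, not from the post): Catalyst::Plugin::Session::State::Cookie has a cookie_domain option, and a leading-dot value lets browsers send the same session cookie to every subdomain. The application name, plugin list and domain below are made up.

        package MyApp;
        use strict;
        use warnings;

        use parent qw(Catalyst);
        use Catalyst qw(
            ConfigLoader
            Session
            Session::Store::File
            Session::State::Cookie
        );

        __PACKAGE__->config(
            name    => 'MyApp',
            session => {
                # leading dot: the cookie is sent to every *.example.com
                # subdomain, so one login covers all of them
                cookie_domain => '.example.com',
            },
        );

        __PACKAGE__->setup();

        1;

    For question 1, the usual arrangement on the Apache side is a single virtual host whose ServerAlias covers the subdomains (e.g. ServerAlias *.example.com), all handing requests to the same application instance.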

    Thank you for your time and consideration.

    There are 10 types of people in this world...those who grok binary, and those who don't.
Tool used by Perl developers to design the flow of Perl Projects
on Dec 23, 2009 at 08:02
4 direct replies by paragkalra

    Hello All,

    Up till now I have been coding fairly simple Perl scripts.

    Lately I have started designing some complex scripts. I have always felt that if you put the logic and flow of the script down on paper first, the coding part is simplified to a great extent. Up till now I have mainly used pen and paper for it. :)

    I was just wondering if there is any graphical tool (open source, of course :)) specific to Perl for designing the flow of Perl projects before we actually start coding them.

    Using it I should be able to design something similar to flowcharts and should be able to share it with others to get it reviewed.

    So I just wanted to know which tools Perl developers generally use to design the flow of their Perl projects.

    TIA

    Parag

IF condition that doesn't work
on Dec 23, 2009 at 06:00
3 direct replies by Sombrerero_loco
    Hi there. I'm trying to write a script that opens files, reads them, and walks through some if conditions, doing different things in each branch. The script opens a file and reads all of its info into a hash; then it looks for other files, uncompresses them (hacked out for testing), reads each line from file A, runs it through the if conditions to substitute the value in $vl_read, and writes it to another file; then it recompresses (also hacked out) and deletes the original files (hacked out as well). My problem is the regex check: I have to test whether a line named hwUserField7 exists, and if it doesn't, the script must use the data stored in hwUserField3 to look up the hash I read from the file, and then write the value referenced by that key in the hash as a new line. I can't work out how to express that existence check as the flag inside the if condition. I know your wisdom will clarify this; also, if you have better code, I'll use it. :) Thanks, Perl Masters
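
    A minimal sketch of just the existence check (the file names, the "hwUserField3 = value" layout and the %lookup contents are guesses on my part, not taken from the post):

        #!/usr/bin/perl
        # Pass file A through unchanged, remember whether an hwUserField7
        # line exists, and if it doesn't, append one looked up via hwUserField3.
        use strict;
        use warnings;

        # %lookup stands in for the hash read earlier from the reference file:
        # hwUserField3 value => line to write
        my %lookup = ( 'some-id' => 'replacement value' );

        open my $in,  '<', 'fileA.txt' or die "fileA.txt: $!";
        open my $out, '>', 'fileB.txt' or die "fileB.txt: $!";

        my $seen_field7 = 0;
        my $field3;

        while ( my $line = <$in> ) {
            $seen_field7 = 1  if $line =~ /hwUserField7/;
            $field3      = $1 if $line =~ /hwUserField3\s*=\s*(\S+)/;
            print {$out} $line;
        }

        # No hwUserField7 line was seen: add one from the hwUserField3 lookup.
        if ( !$seen_field7 and defined $field3 and exists $lookup{$field3} ) {
            print {$out} "hwUserField7 = $lookup{$field3}\n";
        }

        close $out or die $!;
        close $in;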

Alternative to bytes::length()
on Dec 22, 2009 at 20:04
2 direct replies by creamygoodness

    Greets,

    I have often seen people badmouth the bytes pragma, but there's one thing I use it for: cheaply identifying empty strings with bytes::length() when the strings may be carrying the SVf_UTF8 flag. The length() function can be inefficient for such strings, because it must traverse the entire buffer counting characters:

    marvin@smokey:~/perltest $ perl compare_length_efficiency.pl
            Rate  utf8 bytes
    utf8  4.35/s    --  -98%
    bytes  185/s 4154%    --
    marvin@smokey:~/perltest $

    Is there an efficient alternative to bytes::length() for this use case elsewhere in core?
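
    For reference, a rough reconstruction of what compare_length_efficiency.pl might look like (an assumption on my part; the original script isn't shown). It builds a long UTF-8-flagged string and compares length() against bytes::length() with Benchmark:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use bytes ();                      # load bytes::length without enabling the pragma
        use Benchmark qw(cmpthese);

        my $str = "\x{263A}" x 1_000_000;  # long string carrying the SVf_UTF8 flag

        cmpthese( -3, {
            utf8  => sub { my $n = length($str) },
            bytes => sub { my $n = bytes::length($str) },
        } );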

Help with Hash or Arrays
on Dec 22, 2009 at 17:30
3 direct replies by Anonymous Monk
    Hi,
    I wish to do something like this -
    use strict;
    use warnings;

    my %people;
    my @sentences = ("here i am","i am me");
    $people{me}[0] = 0;
    $people{me}[1] = "1985";
    $people{me}[2] = @sentences;

    @sentences = ("there she is");
    $people{friend}[0] = 1;
    $people{friend}[1] = "1984";
    $people{friend}[2] = @sentences;

    foreach my $person ( %people) {
        my $rowId = $person[0];
        my $dob = $person[1];
        addDOBToDatabase($rowId, $dob);
        foreach my $sentence ( %sentences) {
            addSentenceToDatabase($rowId, $sentence);
        }
    }
    but it's not working for me, since I don't fully understand the concepts involved. What am I doing wrong? Thanks!
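
    A corrected sketch of what the code above appears to be aiming for (addDOBToDatabase and addSentenceToDatabase are assumed to exist elsewhere, as in the original): store a reference to each list rather than assigning an array in scalar context, and loop over values %people instead of the flattened hash.

        use strict;
        use warnings;

        my %people;
        $people{me}     = [ 0, "1985", [ "here i am", "i am me" ] ];
        $people{friend} = [ 1, "1984", [ "there she is" ] ];

        foreach my $person ( values %people ) {
            my ( $rowId, $dob, $sentences ) = @$person;
            addDOBToDatabase( $rowId, $dob );
            foreach my $sentence (@$sentences) {
                addSentenceToDatabase( $rowId, $sentence );
            }
        }
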
RFC on the naming of a CPAN module
on Dec 22, 2009 at 14:25
2 direct replies by stevieb

    Hi everyone,

    I've now been registered on PAUSE, and am contemplating the naming of what I've been working on.

    One of the projects is essentially an integrated Internet Service Provider management system. It contains 9 modules: a full client record and ledger management system, a sales transaction system, and a client accounts/plans system, all of which back onto a database (SQLite & MySQL tested). It also has a web GUI front-end that overlays it all.

    That project has additional features if other modules I have written are installed.

    The primary external module is a RADIUS database management system. Its purpose is to allow easy management of all FreeRADIUS database tables. Particularly, it aggregates all of the accounting information, archives data, produces accounting reports for billing etc.

    Currently, my naming is as such:

    ISP::

    • ::Core
    • ::User
    • ::Transac
    • ::Ledger
    • ::GUI::Accounting
    • etc

    ISP::RADIUS

    ISP::Email

    Given that all of these projects/modules are directly related to the operation of an ISP (with more in the works), I'm looking for feedback on whether this would warrant a new top-level namespace (ISP) on the CPAN, and on how people generally feel about new top-level namespaces being requested.

    Any and all feedback appreciated. Cheers!

    Steve

Sorting an array containing version numbers
on Dec 22, 2009 at 13:14
8 direct replies by appu_rajiv
    I want to sort an array whose elements are release versions in a.b.c format.
    e.g. my @versions = (2.4.74, 3.2.5, 1.14.56, 1.45.2, 3.14.75)
    There are many ways to achieve it but I need a very optimized way.
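
    A minimal sketch of one common approach (not benchmarked): a Schwartzian transform so each version string is split only once, followed by a three-level numeric comparison. Note the versions are quoted here; a bare 2.4.74 in source code is a v-string, not a normal string or number.

        use strict;
        use warnings;

        my @versions = qw(2.4.74 3.2.5 1.14.56 1.45.2 3.14.75);

        my @sorted =
            map  { $_->[0] }
            sort { $a->[1] <=> $b->[1] || $a->[2] <=> $b->[2] || $a->[3] <=> $b->[3] }
            map  { [ $_, split /\./ ] } @versions;

        print "@sorted\n";   # 1.14.56 1.45.2 2.4.74 3.2.5 3.14.75
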
PDF::API2 and barcodes
on Dec 22, 2009 at 11:27
3 direct replies by stanner56
    I am creating a material label that is required by my customer. I have successfully created the label format using PDF::API2 but now I need to add some barcodes and there I am stuck. Can/would someone offer some guidance?
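
    A hedged sketch only (PDF::API2 provides barcode form XObject constructors such as xo_3of9 and xo_code128, placed with formimage, but the option names below are from memory and should be checked against the installed documentation; the code value and coordinates are made up):

        use strict;
        use warnings;
        use PDF::API2;

        my $pdf  = PDF::API2->new();
        my $page = $pdf->page();
        my $font = $pdf->corefont('Helvetica');

        # Code 39 barcode form object
        my $bar = $pdf->xo_3of9(
            -code => 'ABC-12345',   # data to encode
            -zone => 25,            # bar height
            -font => $font,
            -fnsz => 8,             # size of the human-readable text
        );

        my $gfx = $page->gfx();
        $gfx->formimage( $bar, 50, 700, 1 );   # place at (50, 700), scale 1

        $pdf->saveas('label.pdf');
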
Why would a Perl script stop with no informatives?
on Dec 21, 2009 at 17:30
3 direct replies by Anonymous Monk

    I'm writing a web spider, using LWP::RobotUA, and running it on WinXP (ActivePerl). When I retrieve certain webpages, the anchor parsing (which uses HTML::TokeParser) just dies, in the middle of printing out a progress report, if I have tracing turned on, or later, if not. I have handled all the sigs, and the (single, common) handler is never invoked (though I have tried to manually invoke a few sigs, and they seem to work fine). Failure seems to be 100% repeatable, so far, stopping at the exact same char of printout in each case.

    I would suspect that memory allocation failure is at the root of this, but have no idea how to confirm that suspicion... the failing pages (5 so far, completely unrelated to each other) all seem fairly long - over 80k. However, most other, much longer (200k+) pages parse quite successfully.

    Is there any known way to trap a memory allocation failure?

    Is there any known way to trap any other normally silent failure? (and what might fall into this category?)
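
    In case it helps narrow things down, a minimal sketch of the usual pure-Perl hooks (parse_anchors is a hypothetical stand-in for the HTML::TokeParser code). A genuine C-level allocation failure or crash won't reach these handlers, but an ordinary die or warn will, and unbuffering STDOUT rules out the progress report merely being cut off mid-line by buffering:

        use strict;
        use warnings;

        $| = 1;    # unbuffer STDOUT so partial progress lines aren't a buffering artifact

        # Leave a trace even for warnings/dies that something upstream swallows.
        $SIG{__WARN__} = sub { print STDERR "WARN: $_[0]" };
        $SIG{__DIE__}  = sub { print STDERR "DIE:  $_[0]" };

        sub parse_page {
            my ($html) = @_;
            my $ok = eval {
                parse_anchors($html);    # hypothetical: the HTML::TokeParser work
                1;
            };
            print STDERR "parse failed: $@" unless $ok;
            return $ok;
        }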

    Thanks for any ideas or pointers. I am a noob to Perl, but have >40 years of experience programming... so I probably have some incorrect assumptions that are blinding me.

    Dick Martin Shorter

Localisation and Locale::Maketext
on Dec 21, 2009 at 13:14
1 direct reply by Tanktalus

    As my first assignment in my new job ramps up, one task I've taken on is to resolve L10N issues. From the looks of things, I think Locale::Maketext has addressed 90%+ of the issues (though if it had packaged base classes that help with double-byte, or other locale, issues, that'd have been great - the authors seemed to know about some of them, but left handling them as exercises for the users). One substantial piece left unresolved is the creation of the base language class (usually English). They provide an _AUTO option, which looked useful until I found out that it doesn't write the updates back to the English module, which makes it harder to create the modules for the other languages.

    Also missing is the "delta" - that is, merging those keys, but with empty values, into the other languages' modules.

    The first thing popping into my head would be to use something like PPI to look for the maketext calls and extract the first parameter and then use that to update all the language files (as both key and value for English, and just key with empty value for other languages). Manually updating English is already annoying, so that'd actually be the second thing that popped into my head, and it doesn't actually resolve the other-languages issue to send to the translators.
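
    A rough sketch of that idea (it assumes every maketext call takes a literal quoted string as its first argument; interpolated or computed keys would need more work):

        use strict;
        use warnings;
        use PPI;

        my %keys;
        for my $file (@ARGV) {
            my $doc = PPI::Document->new($file) or next;

            # every bareword 'maketext' (typically a method call)
            my $words = $doc->find( sub {
                $_[1]->isa('PPI::Token::Word') && $_[1]->content eq 'maketext'
            } ) || [];

            for my $word (@$words) {
                my $list = $word->snext_sibling or next;          # the (...) argument list
                next unless $list->isa('PPI::Structure::List');
                my $str = $list->find_first('PPI::Token::Quote') or next;
                $keys{ $str->string }++;
            }
        }

        print "$_\n" for sort keys %keys;   # feed these into the language modules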

    I'm wondering if anyone else has dealt with this already, and what approach you took? Is "use utf8;" required at the top of these files?

Hints Towards Writing a Module with New Operators
on Dec 19, 2009 at 17:30
4 direct replies by swampyankee

    I'm looking for some hints on how to write a module which adds new (and largely superfluous) operators to Perl, specifically to replicate Fortran's .EQV. and .NEQV. logical operators. I won't argue that these are useful additions to Perl (I've programmed Fortran for close to forty years and never used either), but it does seem like a fun little project.
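
    For the semantics (as opposed to the syntax), a minimal sketch: Perl's built-in low-precedence xor already behaves like .NEQV., and its negation is .EQV.; the interesting part of the project is purely the infix spelling, which would need something like a source filter.

        use strict;
        use warnings;

        sub NEQV { ( $_[0] xor $_[1] ) ? 1 : 0 }    # true when the truth values differ
        sub EQV  { NEQV( $_[0], $_[1] ) ? 0 : 1 }   # true when the truth values agree

        print EQV(  1, "yes" ), "\n";   # 1 -- both true
        print EQV(  0, ""    ), "\n";   # 1 -- both false
        print NEQV( 1, 0     ), "\n";   # 1 -- one true, one false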


    Information about American English usage here and here. Floating point issues? Please read this before posting. — emc

sorting very large text files
on Dec 18, 2009 at 20:09
9 direct replies by rnaeye

    Hi! I have very large text files (10GB to 15GB). Each one contains about 40 million lines and 7 fields (see example below).

    File looks like this:

    99_999_852_F3 chr9  97768833 97768867 ATTTTCTTCAATTACATTTCCAATGCTATCCCAAA + 35
    99_999_852_F3 chr9  97885645 97885679 ATTTTCTTCAaTTACATTTCCAATGCTATCCCAAA + 35
    99_99_994_F3  chr10 47028821 47028855 AGACAAAAAGGCCATCAACAGATCAGTAAAGGATC + 35
    ...

    I need to sort the files based on field-1 (ASCII sorting). I am using the Unix  sort -k1 command. Although it works fine, it takes a very long time, 30 min to 1 hour. I also tried the following Perl script:

    #!/usr/bin/perl
    use strict;
    use warnings;

    open (INFILE, "inputfile.txt") or die $!;
    open (OUTFILE, '>', "sorted.txt") or die $!;

    foreach (sort <INFILE>){
        print OUTFILE $_;
    }

    close(OUTFILE);
    close(INFILE);
    exit;

    However, this script puts the entire file into memory and the sorting process becomes too slow. I was wondering if someone could suggest a Perl script that will do the sorting faster than the Unix  sort -k1 command, and will not use too much memory. Thanks.
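
    One hedged sketch using Sort::External from CPAN (not core): it spills sorted runs to temporary files and merges them, so memory stays bounded. The 64MB threshold is arbitrary, and the option name should be checked against the module's documentation.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Sort::External;

        open my $in,  '<', 'inputfile.txt' or die $!;
        open my $out, '>', 'sorted.txt'    or die $!;

        my $sortex = Sort::External->new( mem_threshold => 64 * 1024 * 1024 );

        # Field 1 starts each line, so a plain lexical sort of whole lines
        # is close to what sort -k1 gives.
        while ( my $line = <$in> ) {
            $sortex->feed($line);
        }
        $sortex->finish;

        while ( defined( my $line = $sortex->fetch ) ) {
            print {$out} $line;
        }
        close $out or die $!;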

New Meditations
Persistent Fault Tolerant SQL (MySQL + DBI) connections!
on Dec 23, 2009 at 05:57
2 direct replies by expresspotato
    Finally... After running into various issues with DBI and the SQL process simply cutting out, I decided to write up a solution. When an SQL server loses an authorization packet or goes down for a reboot, the results can be disastrous. User or system entries half added or half retrieved can leave the database integrity questionable. Commits really *really* help with this problem, allowing semi-posted data to be discarded if the insert(s) or update(s) aren't successful. But what about "one up" simple queries where a commit really isn't practical? You should always use commits when possible, but again, what about preserving the SQL call? Welcome to Persistent Fault Tolerant SQL! Making SQL easier requires your own design, with subroutines to delete, update, insert or return rows as you'd like. I've done the work for you.

    sql.pl Preamble

    Using this code is really simple and ensures that in the event of a query or SQL server failure, it is persistently retried until it is successful. You may want to add a timeout for such a thing, or a number of retries, but I really see no point, as this was designed for Apache::DBI and back-end thread processing nodes where a blocking lock is practical. When a user cancels a page load, the resulting SQL connection would terminate anyway. This type of solution is also extremely practical for background ajax requests, where returning an error simply wouldn't be visible or recognized.
    This script is designed to be separated from the main program and then accessed through a do("sql.pl"); call. This allows the subs defined within to be used elsewhere in the program. It is also designed to run in combination with Apache::DBI, which is why there are no disconnects.

    sql.pl Non-Returning Subroutines

    &del_sql    Calls &sql
    &mod_sql    Calls &sql
    &put_sql    Calls &sql

    sql.pl Returning Subroutines

    &row_sql    Return a single ROW from a table
    &get_sql    Return a single COLUMN from a table
    &hash_sql   Return an ARRAY of HASHES for ALL RESULTS.
    Use sql.pl as you wish; it's now yours.

    sql.pl Accessing Subroutines

    my @results = &row_sql(qq~ select * from table where ...; ~);
    my @results = &get_sql(qq~ select id from table where ...; ~);
    my $rows    = &hash_sql(qq~ select * from table; ~);

    foreach my $row (@$rows){
        print "$row->{id}, $row->{somecolumn}\n";
    }

    Rolling your own persistent-ness

    The important thing when rolling your own persistent connections is simple. It's constructed with:
    - until (the connect succeeds)
    - - (while retrying) sleep for a second and possibly warn
    - until (the eval of the prepare/execute succeeds)
    - - (while retrying) sleep for a second and possibly warn
    until ( $dbh = DBI->connect_cached( $connectionInfo, $user, $passwd, { PrintError => 0 } ) ) {
        if ($sqlw) { print "Retrying SQL Connect ($DBI::errstr) ..."; }
        sleep(1);
    }

    until (
        eval {
            $sth = $dbh->prepare($insert);
            if ( $sth->execute() ) {
                $sth->finish;
            }
        }
    ) {
        if ($sqlw) { print "Retrying SQL Insert ($DBI::errstr)..."; }
        sleep(1);
        $retry_count++;
    }

    sql.pl :: Source Code

Installation instructions for Wx using ActivePerl
on Dec 21, 2009 at 21:14
0 direct replies by ikegami

    Wx is a pain to install on Windows using ActivePerl 5.10.1 and a MS compiler. This post details one way of installing it.

    It crashes Visual Studio 6's compiler, so start by upgrading your compiler:

    Now we need the library. If you let Alien::wxWidgets download and build it, it won't embed the manifest files into the DLL files, causing DLL errors and malfunctions when you try to use some aspects of Wx.

    • Download wxWidgets-2.8.10
    • Install it in C:\Progs\wxWidgets-2.8.10 (Adjust at will)

    It might be a good idea to upgrade your Perl tool chain. You want at least:

    I had the following installed:

    Then, open up a console and execute the following commands in turn. As is, the listing is not suitable to be copied and pasted in its entirety or put in a batch file.

    Note that the process takes a while. The longest step, building "msw", takes an hour or more on my aging machine.

    "C:\Progs\Microsoft Visual Studio 9.0\VC\vcvarsall" C: cd \Progs\wxWidgets-2.8.10 cd build\msw nmake -f makefile.vc SHARED=1 BUILD=release cd ..\.. cd contrib\build\stc nmake -f makefile.vc SHARED=1 BUILD=release cd ..\..\.. cd lib\vc_dll for %q in (*.manifest) do ( mt -nologo -manifest %q -outputresource:%~nq;2 del %q ) cd ..\.. set WXDIR=C:\Progs\wxWidgets-2.8.10 set WXWIN=C:\Progs\wxWidgets-2.8.10 C: cd \ md stager cd stager lwp-request http://search.cpan.org/CPAN/authors/id/M/MB/MBARBON/Alien- +wxWidgets-0.47.tar.gz > Alien-wxWidgets-0.47.tar.gz perl -MArchive::Tar -e"Archive::Tar->new->read($ARGV[0],1,{extract=>1} +)" Alien-wxWidgets-0.47.tar.gz cd Alien-wxWidgets-0.47 perl -i.bak -pe"s/config => \{ cc => \$cc, ld => \K(?=\$cc })/\$cc eq +'cl' ? 'link' : /" lib\Alien\wxWidgets\Utility.pm perl Makefile.PL INSTALLDIRS=site -> Should ask "Do you want to fetch and build wxWidgets from source +s?". Use default "no". nmake nmake test nmake install cd .. rd /s/q Alien-wxWidgets-0.47 del Alien-wxWidgets-0.47.tar.gz lwp-request http://search.cpan.org/CPAN/authors/id/M/MB/MBARBON/Wx-0.9 +4.tar.gz > Wx-0.94.tar.gz perl -MArchive::Tar -e"Archive::Tar->new->read($ARGV[0],1,{extract=>1} +)" Wx-0.94.tar.gz cd Wx-0.94 perl -i.bak -pe"s/^sub \K(?=dynamic_lib \{$)/DELETED_/" build\Wx\build +\MakeMaker\Win32_MSVC.pm perl -i.bak -pe"s/new wxCursor\( name, \K(?=type, hsx, hsy \))/(long)/ +" xs\Cursor.xs perl Makefile.PL INSTALLDIRS=site --extra-cflags="/FIstdlib.h" nmake nmake test nmake install cd .. rd /s/q Wx-0.94 del Wx-0.94.tar.gz cd .. rd stager

    Finally, you may uninstall wxWidgets-2.8.10 (via Start | Programs) since Alien::wxWidgets made a copy of the files Wx needs into Perl's lib. The uninstaller leaves the built files behind, so delete the C:\Progs\wxWidgets-2.8.10 directory as a follow up. Uninstalling wxWidgets-2.8.10 will free up 500MB.


    I recommend that you get Wx::Demo.

    "C:\Progs\Microsoft Visual Studio 9.0\VC\vcvarsall" cpan Wx::Demo

    You can launch the demo using

    wperl C:\Progs\perl5101\site\bin\wxperl_demo.pl

    It warns that it's "skipping module 'Wx::DemoModules::wxHVScrolledWindow'". I think it's a problem with the demo, not a problem with Wx.

    Update: Many small tweaks to the text parts. Last one on Dec 22, 2009 at 10:21 EST

Meditating on Perl, Python and the Semantic Web
on Dec 19, 2009 at 00:01
4 direct replies by ack

    I have been working on a project with the National Labs over the past several months. A central element of that work has been an effort to bring much of the information and computational power of the Labs into a Service Oriented Architecture paradigm, and the Semantic Web constructs are at the heart of that part of the work.

    Not being that familiar with the Semantic Web constructs, I embarked on a concerted effort to teach myself about them. So I have read through several books. But, for me, there is no substitute for doing some actual programming on a topic to really do a "deep dive" into the material.

    So I latched onto Segaran, Evans and Taylor's "Programming the Semantic Web" (published by O'Reilly). I like O'Reilly's books in general, and learned a lot from their similar book "Programming Collective Intelligence."

    I am a Perl programmer, through and through, and have been laboring to convert the Python constructs that seem so prevalent in these SOA, Semantic Web, and Web Data Mining "tutorials".

    Out of this last few months' efforts, two questions have emerged for me.

    First, why is Python so prevalent in this genre of discourse? What makes Python so preferred in the world of Collective Intelligence, Semantic Web, etc.?

    Second, why is Perl so conspicuously NOT involved in those domains? I have searched extensively in CPAN and we only have a tiny handful of modules. Similarly, there are virtually no books on the topics that use Perl or are Perl-oriented.

    Perl seems like an absolutely natural programming language for these types of applications. I have taught myself enough Python that I don't have to struggle too hard when learning from the books. That progress learning Python, however, has taught me that I'm hard pressed to see why Python would be any better than Perl. Superficially, it seems that Python may be a little more compact (with constructs like its list comprehensions); but my experience so far is that it is no easier to understand or read than obfuscated Perl, which can be just as compact...and just as obscure. I take exception to the assertion in all of those books that "Python is used because it is so naturally understandable"; I don't see that at all.

    So I'm curious (knowing that quite a few of you know Python much better than I do, and all of you here in the Monastery know Perl, of course) if anyone can explain the predilection for Python for those types of applications...and the dearth of Perl in them.

    Just curious.

    ack Albuquerque, NM
A fix for merge keys in YAML::XS, YAML::Syck
on Dec 18, 2009 at 16:52
1 direct reply by bv

    Ok, so not a fix, but a workaround. The problem is fairly well described in this blog post and this bug report for YAML::XS. The merge keys functionality of YAML is not supported by either YAML::XS or YAML::Syck, so this script:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use YAML::XS;
    use Data::Dumper;

    my $data;
    { local $/; $data = <DATA> };
    print Dumper Load $data;

    __DATA__
    ---
    key1: &id01
      subkey_a: asdf
      subkey_b: qwer
    key2:
      <<: *id01
      subkey_a: foo
      subkey_c: bar
    ...

    produces this output:

    UPDATE: Found a couple easy ways to avoid recursion. See my reply below.
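
    For comparison, one possible post-Load() workaround (my own sketch, not necessarily either of the ways referred to in the update): since the unsupported << arrives as a literal hash key whose value is the aliased mapping, a recursive walk can splice it into its parent. Cyclic data would need an extra "seen" check, and merge values that are lists of mappings aren't handled here.

        # Splice literal '<<' keys into their parent hash after Load().
        # Used as: my $tree = expand_merge_keys( Load($data) );
        sub expand_merge_keys {
            my ($node) = @_;
            if ( ref $node eq 'HASH' ) {
                if ( my $merge = delete $node->{'<<'} ) {
                    # keys already present in the child win over merged ones
                    %$node = ( %$merge, %$node );
                }
                expand_merge_keys($_) for values %$node;
            }
            elsif ( ref $node eq 'ARRAY' ) {
                expand_merge_keys($_) for @$node;
            }
            return $node;
        }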


    @_=qw; Just another Perl hacker,; ;$_=q=print "@_"= and eval;
New Cool Uses for Perl
Decode character encodings, warn on user mistake
on Dec 21, 2009 at 17:35
2 direct replies by ambrus

    This program converts a text file from a character encoding to another, but tries to detect if the user accidentally specifies the wrong character encoding, such as iso-8859-2 instead of utf-8.

    This has a real world motivation. I'm writing a program that works by mostly copying its input to the output but annotates some parts of it. The program must accept input in multiple character encodings (iso-8859-2, utf-8, cp1250), and similarly must be able to emit the output in multiple encodings. The user can choose the input and output encodings with command-line switches. If, however, the user inputs a utf-8 file but specifies 8859-2 as the input and output encodings, the program may appear to work and output a utf-8 file. The program won't be able to understand those records that have non-ascii characters, but this might not be obvious from the output. For this reason, I added some code to detect these kinds of errors. This program here is a standalone program copying only the relevant code.
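
    A minimal sketch of the detection idea only (not ambrus's actual program; the command-line interface and warning text are invented): if the input is declared as something other than utf-8 but its high-bit bytes also decode cleanly as utf-8, the declared encoding is suspicious, so warn before converting.

        #!/usr/bin/perl
        # usage: recode.pl FROM TO FILE   e.g.  recode.pl iso-8859-2 utf-8 input.txt
        use strict;
        use warnings;
        use Encode qw(encode decode FB_CROAK);

        my ( $from, $to, $file ) = @ARGV;
        open my $fh, '<:raw', $file or die "$file: $!";
        my $octets = do { local $/; <$fh> };

        if ( lc($from) !~ /^utf-?8$/ && $octets =~ /[\x80-\xFF]/ ) {
            my $copy = $octets;    # decode() with a CHECK flag may modify its argument
            if ( eval { decode( 'UTF-8', $copy, FB_CROAK ); 1 } ) {
                warn "input also decodes cleanly as utf-8; is '$from' really the input encoding?\n";
            }
        }

        print encode( $to, decode( $from, $octets ) );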

Laziness through CPAN: Screen-scrape to RSS with 3 Modules
on Dec 20, 2009 at 21:44
0 direct replies by crashtest

    I'm not sure this is a cool use for Perl, but once again, I am astounded by how easy the easy things really are. The script below is one of those where you're almost surprised when you're done writing it. "That's it?", you ask yourself. Yes, that's it.

    Here's the background: there's a certain trail race that I'd like to run, but there are always more applicants than slots, so the organizers have resorted to a lottery system to pick entrants this year. Unfortunately, I didn't get in, but I am in the top 25 on the wait list.

    The lottery winners have until midnight tonight to pay their entry fee - otherwise the wait-listed people move into their slots. On the lottery page, it is clearly indicated who has, and who hasn't, paid their entry fee yet. Now I could obsessively sit at my computer, refresh the page every five minutes and count the "Not Paid" entrants... or I could be obsessive and lazy, and enlist Perl for help.

    With just three use directives, I'm in business:

    use LWP::UserAgent;
    use HTML::TableParser;
    use XML::RSS;
    And now in 50 non-optimized lines, I can easily write a script that screen-scrapes the web page (using LWP::UserAgent), counts the people who've paid and those who haven't (via HTML::TableParser), then prints a simple RSS file (with XML::RSS) to a web-accessible spot that I've now added to my News Reader application (Google Reader).
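
    For the record, a stripped-down sketch of the same pipeline (hypothetical URL and output path, and a naive string count standing in for HTML::TableParser's per-row callbacks):

        use strict;
        use warnings;
        use LWP::UserAgent;
        use XML::RSS;

        my $ua   = LWP::UserAgent->new;
        my $resp = $ua->get('http://example.org/lottery-results.html');
        die $resp->status_line unless $resp->is_success;

        # count occurrences of "Not Paid" in the page
        my $unpaid = () = $resp->decoded_content =~ /Not Paid/g;

        my $rss = XML::RSS->new( version => '2.0' );
        $rss->channel(
            title => 'Race lottery watch',
            link  => 'http://example.org/lottery-results.html',
        );
        $rss->add_item(
            title => "$unpaid entrants still unpaid as of " . localtime,
            link  => 'http://example.org/lottery-results.html',
        );
        $rss->save('/var/www/html/lottery.rss');   # web-accessible spot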

    The script is scheduled via cron. Since I can check my news reader on my phone, I am free to walk around, eat dinner etc. while tracking something I have absolutely no control over. Perfect!

    I've done something like this before, in order to track the waiver wire in a fantasy league. But I am struck by how easy this really was, and totally worthwhile even though I can put this script in the trash after midnight.

    I've also thought that this basic process - scrape -> parse -> post - can be implemented in thousands of ways using many other tools and technologies. Have other monks done similar things in the past? How would you have approached my problem?

New Perl Poetry
secret of the universe haiku
on Dec 18, 2009 at 11:41
2 direct replies by zentara
New Obfuscated Code
Functional Composition
on Dec 17, 2009 at 15:41
2 direct replies by billh

    This isn't really obfuscation, it's even vaguely useful, but see if you can figure out what Pipes.pm must be doing, given that this prints o-l-l-e-H

    use strict;
    use warnings;
    use Pipes;

    print "hello"
        | fn { ucfirst($_[0]) }
        | fn { [ split('', $_[0]) ] }
        | fn { [ reverse(@{$_[0]}) ] }
        | fn { join('-', @{$_[0]}) }
        | fn { $_[0] . "\n" };
    Bill H
    perl -e 'print sub { "Hello @{[shift->()]}!\n" }->(sub{"World"})'
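
    For anyone checking their guess, here is one plausible way such a Pipes.pm could be built (my own sketch, not necessarily billh's module): fn blesses its block, and an overloaded | applies the block to whatever sits on the other side, so each stage feeds the next and print only ever sees the final string.

        package Pipes;
        use strict;
        use warnings;
        use Exporter 'import';
        our @EXPORT = ('fn');

        # fn { ... } returns a blessed coderef
        sub fn (&) { bless $_[0], __PACKAGE__ }

        # "value | fn" (or "fn | value") applies the code to the plain value
        use overload '|' => sub {
            my ( $self, $other, $swapped ) = @_;
            return $self->($other);
        };

        1;

    Saved as Pipes.pm next to the snippet above, this should make it print o-l-l-e-H.
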
New Monk Discussion
Special place for RTFM posts
on Dec 22, 2009 at 04:42
4 direct replies by SilasTheMonk
    Saw this post today Re: configuring language for CGI and my heart sank. What is the point of starting up your web browser just to shout RTFM (well, okay, he left out the "F") at someone? And the guy cannot even be bothered to log in to perlmonks. How about a special consideration category for RTFM posts, where the post is not actually reaped but negative votes count double?