Despite some media and blog coverage predicting the demise of trust in Wikipedia entries, I believe the system is working as it should - there is an ebb and flow that maintains community balance. People who exploit the system are discovered by tools, built by other community members (or interested parties), that shine a light on those activities.
If you look at these events over time, the reaction is - I believe and hope - a bit more pragmatic. In the case of Wikipedia, these recent revelations are a necessary part of the continuing maturation of an open platform, one that still needs additional tools to make interactions visible over time - even when they were thought to be "anonymous" at the time. I don't believe you have to impose significant controls, appoint "official" editors, or force registration if you can attach supporting information to the content that allows people to make an informed decision.
The UCSC tool mentioned in the article below adds some interesting capabilities that augment what WikiScanner delivers. The cat-and-mouse game will certainly continue - some Wikipedia topics are emotionally charged or otherwise sensitive - but hopefully the community will continue to respond.
Wikipedia Trust Coloring
paleshadows writes "Researchers at UCSC developed a tool that measures the trustworthiness of each Wikipedia page. Roughly speaking, the algorithm analyzes the entire 7-year user-editing-history and utilizes the longevity of the content to learn which contributors are the most reliable: If your contribution lasts, you gain 'reputation,' whereas if it's edited out, your reputation falls. The trustworthiness of a newly inserted text is a function of the reputation of all its authors, a heuristic that turned out to be successful in identifying poor content. The interested reader can take a look at this demonstration (random page with white/orange background marking trusted/untrusted text, respectively; note "random page" link at the left for more demo pages), this presentation (pdf), and this paper (pdf)."
Slashdot | Algorithm Rates Trustworthiness of Wikipedia Pages
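To make the heuristic concrete, here is a minimal Python sketch of the longevity-based reputation idea. To be clear, the function names, the update constants, and the choice of taking the minimum author reputation are my own illustrative assumptions - the actual UCSC algorithm, described in the linked paper, is considerably more refined.

```python
# Minimal sketch of longevity-based reputation (illustrative only;
# the real UCSC algorithm is more sophisticated).
from collections import defaultdict

# Every contributor starts with a small baseline reputation.
reputation = defaultdict(lambda: 1.0)

def update_reputation(author, words_kept, words_removed,
                      gain=0.1, penalty=0.2):
    """Reward text that survives later revisions; penalize text edited out."""
    reputation[author] += gain * words_kept
    reputation[author] = max(0.0, reputation[author] - penalty * words_removed)

def trust_of_text(authors):
    """Trust of a passage as a function of its authors' reputations.
    Here the minimum, so one low-reputation contributor flags the text."""
    return min(reputation[a] for a in authors)

# Example: a contributor whose text survives vs. one who is mostly reverted.
update_reputation("alice", words_kept=500, words_removed=10)
update_reputation("bob", words_kept=5, words_removed=200)
print(trust_of_text(["alice", "bob"]))  # low score -> orange background
```

The appeal of a scheme like this is that it requires no registration or editorial hierarchy: the coloring simply surfaces supporting information and lets readers judge for themselves.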
WikiScanner
WikiScanner is a relatively new site that tracks edits made to Wikipedia.
The purpose of the service is to reveal who is behind edits - in particular, edits made by self-interested corporations hoping to promote and protect their brand identities. Created by Caltech student Virgil Griffith, WikiScanner searches the entirety of Wikipedia's XML-based edit records and cross-references them with public IP and domain-ownership information to identify who is behind the edits made on the online encyclopedia. WikiScanner offers several ways to search, including by organization name, exact Wikipedia URL, or IP address.
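To illustrate the mechanism, here is a hedged Python sketch of the core cross-referencing step: mapping the IP address recorded for an anonymous edit to the organization that owns that address range. The sample data, ranges, and function names are my own assumptions for illustration; WikiScanner itself operates over Wikipedia's full XML edit dumps and real allocation records.

```python
# Sketch of WikiScanner's core idea: attribute anonymous edits to
# organizations via IP ownership. All data below is illustrative.
import ipaddress

# Simplified stand-in for public IP-to-organization allocation records.
ORG_RANGES = [
    ("198.51.100.0/24", "Example Corp"),
    ("203.0.113.0/24", "Example University"),
]

def org_for_ip(ip):
    """Return the organization whose allocated range contains this IP."""
    addr = ipaddress.ip_address(ip)
    for cidr, org in ORG_RANGES:
        if addr in ipaddress.ip_network(cidr):
            return org
    return "unknown"

# Anonymous Wikipedia edits are logged under the editor's IP address.
anonymous_edits = [
    {"page": "Example Corp", "ip": "198.51.100.42"},
    {"page": "Some Politician", "ip": "203.0.113.7"},
]

for edit in anonymous_edits:
    print(edit["page"], "edited from", org_for_ip(edit["ip"]))
```

The design point is that no private data is needed: anonymous edits already expose an IP address, and IP-to-organization allocations are public, so joining the two is enough to surface conflicts of interest.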
In what could be considered a sociology experiment, Griffith found that a good portion of the edits to company entries are made by the companies themselves. This isn't surprising at all - it's something that has been speculated upon and tested on a smaller scale. The team behind Wikipedia is aware of the problem and has been working to address it: Wikipedia's policies have changed since its inception, and the user-generated system has improved as a result. There is also a new edit-marking system currently being tested on Wikia for possible future use on Wikipedia, which would make it even easier to track changes made to entries.

