This paper (first reference) is the result of a class project I was part of almost two years ago for CSCI 5417 Information Retrieval Systems. It builds on a class project I did in CSCI 5832 Natural Language Processing, which I presented at Wikimania '07. The project ran very late; we didn't send in the final paper until the day before New Year's. This technical report was never really announced that I recall, so I thought it would be interesting to look briefly at the results. The goal of this paper was to break articles down into surface features and latent features and then use those to study the rating system being used, predict article quality, and rank results in a search engine. We used the [[random forests]] classifier, which allowed us to analyze the contribution of each feature to performance by looking directly at the weights that were assigned. While the surface analysis was performed on the whole English Wikipedia, the latent analysis was performed on the Simple English Wikipedia (it is more expensive to compute).
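As a rough illustration of this kind of analysis, here is a minimal sketch in Python, not the paper's actual pipeline; the scikit-learn library and the feature/rating arrays are stand-ins I am assuming for illustration:

# Minimal sketch: fit a random forest to per-article feature vectors and
# read off each feature's learned contribution to classification.
from sklearn.ensemble import RandomForestClassifier

def rank_features(X, y, feature_names):
    """X: per-article surface-feature vectors; y: WET quality ratings."""
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X, y)
    # feature_importances_ plays the role of the per-feature weights
    # discussed above.
    return sorted(zip(feature_names, clf.feature_importances_),
                  key=lambda pair: pair[1], reverse=True)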
= Surface features =

* Readability measures are the single best predictor of quality that I have found, where quality is defined by the Wikipedia Editorial Team (WET). The [[Automated Readability Index]], [[Gunning Fog Index]] and [[Flesch-Kincaid Grade Level]] were the strongest predictors, followed by length of article HTML, number of paragraphs, [[Flesch Reading Ease]], [[SMOG Grading]], number of internal links, [[Laesbarhedsindex Readability Formula]], number of words and number of references. Weakly predictive were number of "to be"s, number of sentences, [[Coleman-Liau Index]], number of templates, PageRank, number of external links and number of relative links. Not predictive (overall; see the end of section 2 for the per-rating score breakdown): number of h2s or h3s, number of conjunctions, number of images*, average word length, number of h4s, number of prepositions, number of pronouns, number of interlanguage links, average syllables per word, number of nominalizations, article age (based on page id), proportion of questions and average sentence length. (A sketch of one readability measure appears after this list.)
:* Number of images was actually by far the single strongest predictor of any class, but only for Featured articles. Because it was so good at picking out Featured articles, and somewhat good at picking out A and G articles, the classifier was confused in so many cases that the overall contribution of this feature to classification performance is zero.
:* Number of external links is strongly predictive of Featured articles.
:* The B class is highly distinctive. It has a strong "signature," with high predictive value assigned to many features. The Featured class is also very distinctive. F, B and S (Start/Stub) contain the most information.
:* A is the least distinct class, not being very different from F or G.
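To make the readability features concrete, here is a minimal sketch of one measure named above, the [[Automated Readability Index]]; the crude tokenization is my assumption, as the report's preprocessing is not specified here:

import re

def automated_readability_index(text):
    """ARI = 4.71*(characters/words) + 0.5*(words/sentences) - 21.43."""
    words = re.findall(r"[A-Za-z]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    characters = sum(len(w) for w in words)
    return (4.71 * characters / len(words)
            + 0.5 * len(words) / len(sentences)
            - 21.43)

Higher scores correspond to text that requires more years of schooling to read.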
= Latent features =

The algorithm used for latent analysis is [[Latent Dirichlet Allocation]], an analysis of the occurrence of words in every document with respect to the link structure of the encyclopedia ("concepts"). This part of the analysis was done by CS PhD student Praful
Mangalath. An example of what can be done with the result of this analysis: you provide a word (a search query) such as "hippie". You can then look at the weight of every article for the word hippie. You can pick the article with the largest weight, and then look at its link network. You can pick out the articles that this article links to, and/or which link to this article, that are also weighted strongly for the word hippie, while also contributing maximally to this article's "hippieness" (a sketch of this procedure appears after the result lists below). We tried this query in our system (LDA), Google (site:en.wikipedia.org hippie), and the Simple
English Wikipedia's Lucene search engine. The breakdown of articles occurring in the top ten search results for this word for those engines is:
* LDA only: [[Acid rock]], [[Aldeburgh Festival]], [[Anne Murray]], [[Carl Radle]], [[Harry Nilsson]], [[Jack Kerouac]], [[Phil Spector]], [[Plastic Ono Band]], [[Rock and Roll]], [[Salvador Allende]], [[Smothers brothers]], [[Stanley Kubrick]]
* Google only: [[Glam Rock]], [[South Park]]
* Simple only: [[African Americans]], [[Charles Manson]], [[Counterculture]], [[Drug use]], [[Flower Power]], [[Nuclear weapons]], [[Phish]], [[Sexual liberation]], [[Summer of Love]]
* LDA & Google & Simple: [[Hippie]], [[Human Be-in]], [[Students for a democratic society]], [[Woodstock festival]]
* LDA & Google: [[Psychedelic Pop]]
* Google & Simple: [[Lysergic acid diethylamide]], [[Summer of Love]]

(See the paper for the articles produced for the keywords philosophy and economics.)
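As promised above, a minimal sketch of the query procedure; the weights and links mappings are hypothetical stand-ins for the model's actual output, not the system's real data structures:

def query(word, weights, links, top_n=10):
    """weights[article][word]: LDA-derived weight of word in article;
    links[article]: articles linked to and/or from article."""
    # Pick the article weighted most strongly for the query word.
    top = max(weights, key=lambda a: weights[a].get(word, 0.0))
    # Rank its link network by weight for the same word, i.e. by how much
    # each neighbour contributes to the top article's "hippieness".
    neighbours = sorted(links[top],
                        key=lambda a: weights.get(a, {}).get(word, 0.0),
                        reverse=True)
    return [top] + neighbours[:top_n - 1]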
= Discussion / Conclusion =

* The results of the latent analysis are open to interpretation. But what is interesting is that the LDA features predict the WET ratings of quality just as well as the surface-level features do. Both feature sets (surface and latent) pull out almost all of the information that the rating system bears.
* The rating system devised by the WET is not
distinctive. You can best tell the difference between Featured, A and Good articles, grouped together, vs. B articles. Featured, A and Good articles are also quite distinctive (Figure 1). Note that in this study we didn't look at Starts and Stubs, but in an earlier paper we did.
:* This is interesting when compared to this recent entry on the YouTube blog, "Five Stars Dominate Ratings": http://youtube-global.blogspot.com/2009/09/five-stars-dominate-ratings.html… I think a sane, well-researched (with actual subjects) rating system is well within the purview of the Usability Initiative. Helping people find and create good content is what Wikipedia is all about. Having a solid rating system allows you to reorganize the user interface, the Wikipedia namespace and the main namespace around good content and bad content as needed. If you don't have a solid, information-bearing rating system, you don't know what good content really is (really bad content is easy to spot).
:* My Wikimania talk was all about gathering data from people about articles and using that to train machines to automatically pick out good content. You ask people questions along dimensions that make sense to people, and give the machine access to other surface features (such as a statistical measure of readability, or length) and latent features (such as can be derived from document word occurrence and encyclopedia link structure). I referenced page 262 of Zen and the Art of Motorcycle Maintenance to give an example of the kind of qualitative features I would ask people about. It really depends on what features end up bearing information, to be tested in "the lab". Each word is an example dimension of quality: we have "*unity, vividness, authority, economy, sensitivity, clarity, emphasis, flow, suspense, brilliance, precision, proportion, depth and so on.*" You then use surface and latent features to predict these values for all articles. You can also say that when a person rates an article as high on the x scale, they also mean that it has this much of these surface and these latent features.
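A minimal sketch of that last step, assuming the feature vectors and the human ratings along one dimension (say, clarity) have already been collected; the choice of a random forest regressor here is mine, not the talk's:

from sklearn.ensemble import RandomForestRegressor

def fit_dimension(features, ratings):
    """features: surface+latent vectors per article;
    ratings: human scores along one dimension such as clarity."""
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(features, ratings)
    return model  # model.predict(new_features) estimates the dimension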
= References =
- DeHoust, C., Mangalath, P., & Mingus, B. (2008). *Improving search in Wikipedia through quality and concept discovery*. Technical Report. PDF<http://grey.colorado.edu/mediawiki/sites/mingus/images/6/68/DeHoustMangalat…>
- Rassbach, L., Mingus, B., & Blackford, T. (2007). *Exploring the feasibility of automatically rating online article quality*. Technical Report. PDF<http://grey.colorado.edu/mediawiki/sites/mingus/images/d/d3/RassbachPincock…>
Hoi,
I have asked and received permission to forward to you all this most
excellent bit of news.
The LINGUIST List is a most excellent resource for people interested in the field of linguistics. As I mentioned some time ago, they have had a funding drive, and in that drive they asked for a certain amount of money within a given number of days; having met that goal, they would then run a project on Wikipedia to learn what needs doing to get better coverage for the field of linguistics. What you will read in this mail is that the whole community of linguists is being asked to cooperate. I am really thrilled, as it will also get more linguists interested in what we do. My hope is that a fraction of them will be interested in the languages that they care for and help those projects become more relevant. As a member of the "language prevention committee", I would love to get more knowledgeable people involved in our smaller projects. If it means that we get more requests for new projects, we will really feel embarrassed with all the new projects we will have to approve because of the quality of the Incubator content and the quality of the linguistic arguments why we should approve yet another language :)
NB: Is this not a really clever way of raising money? Give us this much in this time frame and we will then do this as a bonus...
Thanks,
GerardM
---------- Forwarded message ----------
From: LINGUIST Network <linguist(a)linguistlist.org>
Date: Jun 18, 2007 6:53 PM
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
To: LINGUIST(a)listserv.linguistlist.org
LINGUIST List: Vol-18-1831. Mon Jun 18 2007. ISSN: 1068-4875.
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
Moderators: Anthony Aristar, Eastern Michigan U <aristar(a)linguistlist.org>
Helen Aristar-Dry, Eastern Michigan U <hdry(a)linguistlist.org>
Reviews: Laura Welcher, Rosetta Project
<reviews(a)linguistlist.org>
Homepage: http://linguistlist.org/
The LINGUIST List is funded by Eastern Michigan University,
and donations from subscribers and publishers.
Editor for this issue: Ann Sawyer <sawyer(a)linguistlist.org>
================================================================
To post to LINGUIST, use our convenient web form at
http://linguistlist.org/LL/posttolinguist.html
===========================Directory==============================
1)
Date: 18-Jun-2007
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
-------------------------Message 1 ----------------------------------
Date: Mon, 18 Jun 2007 12:49:35
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
Dear subscribers,
As you may recall, one of our Fund Drive 2007 campaigns was called the
"Wikipedia Update Vote." We asked our viewers to consider earmarking their
donations to organize an update project on linguistics entries in the
English-language Wikipedia. You can find more background information on this
at:
http://linguistlist.org/donation/fund-drive2007/wikipedia/index.cfm.
The speed with which we met our goal, thanks to the interest and generosity of our readers, was a sure sign that the linguistics community was enthusiastic about the idea. Now that summer is upon us, and some of you may have a bit more leisure time, we are hoping that you will be able to help us get started on the Wikipedia project. The LINGUIST List's role in this project is a purely organizational one. We will:
*Help, with your input, to identify major gaps in the Wikipedia materials or pages that need improvement;
*Compile a list of linguistics pages that Wikipedia editors have identified as "in need of attention from an expert on the subject" or "does not cite any references or sources," etc.;
*Send out periodical calls for volunteer contributors on specific topics or articles;
*Provide simple instructions on how to upload your entries into Wikipedia;
*Keep track of our project Wikipedians;
*Keep track of revisions and new entries;
*Work with the Wikimedia Foundation to publicize the linguistics community's efforts.
We hope you are as enthusiastic about this effort as we are. Just to help us all get started looking at Wikipedia more critically, and to easily identify an area needing improvement, we suggest that you take a look at the List of Linguists page at:
http://en.wikipedia.org/wiki/List_of_linguists.
Many people are not listed there; others need to have more facts and information added. If you would like to participate in this exciting update effort, please respond by sending an email to LINGUIST Editor Hannah Morales at hannah(a)linguistlist.org, suggesting what your role might be or which linguistics entries you feel should be updated or added. Some linguists who saw our campaign on the Internet have already written us with specific suggestions, which we will share with you soon.
This update project will take major time and effort on all our parts. The end result will be a much richer internet resource of information on the breadth and depth of the field of linguistics. Our efforts should also stimulate prospective students to consider studying linguistics and educate a wider public on what we do. Please consider participating.
Sincerely,
Hannah Morales
Editor, Wikipedia Update Project
Linguistic Field(s): Not Applicable
-----------------------------------------------------------
LINGUIST List: Vol-18-1831
There are an increasing number of organisations which have indicated that their output is Creative Commons by default; however, there are not as many that have a public IP policy which clearly allows staff to publish "their" work.
i.e. we have moved on from the IP policy being the stick used to prevent openness; "work for hire" and the "publish process" are the next frontier.
A few staff at University of Canberra (UC) have written an IP policy
proposal which clearly gives staff ownership of their work, and
requires CC licensing if their staff use organisational infrastructure
to create their work.
http://en.wikiversity.org/wiki/University_of_Canberra/Proposed_policy_on_in…
Otago Polytechnic adopted an IP policy like that in 2007.
http://wikieducator.org/Otago_Polytechnic/Intellectual_property
Are there other examples, within or outside academia, where the organisation empowers its staff by providing a policy which clarifies when the "work for hire" principle is enforced in this murky world of online collaboration?
Does the WMF have an intellectual property policy for works created by
WMF employees?
Employees edit and upload using free licenses under their own name,
but does the copyright belong to the employee or to the WMF?
Is anyone in our community going to:
Global Congress on Intellectual Property and the Public Interest
Washington College of Law
American University, Washington, DC
August 25-27, 2011
http://infojustice.org/public-events/global-congress
--
John Vandenberg
Hi Scott,
It was raised with the NPG, as promised, at the time. They assured me they
would investigate immediately on the pages indicated as well as make a
random sampling of other pages on their site to see if it had happened
elsewhere.
I'll re-raise this with them today and get back to you when I have any news.
Sincerely,
-Liam
wittylama.com/blog
Peace, love & metadata
On 12 June 2011 22:53, Scott MacDonald <doc.wikipedia(a)ntlworld.com> wrote:
> Back in March this year, I pointed out on Wikien-l that the UK's National
> Portrait Gallery was reusing Wikipedia content (and in particular my work)
> without any attribution (and indeed was claiming copyright).
>
> http://lists.wikimedia.org/pipermail/wikien-l/2011-March/108731.html
>
> This got some attention at the time, and coverage in the en.wp Signpost.
>
> http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2011-03-21/News_and_notes
>
> The matter ended when it was indicated that WMF people in the GLAM project
> would raise it with the NPG as a matter of urgency.
>
>
> However, I note that the NPG continues to use copyrighted material without
> attribution and with a false copyright claim.
>
> http://www.npg.org.uk/collections/search/person.php?search=sa&LinkID=mp07767
>
>
>
>
> _______________________________________________
> foundation-l mailing list
> foundation-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
>
This week, the Wikimedia Foundation Board of Trustees unanimously
passed a resolution addressing the issue of controversial content on
the projects. The Board also unanimously passed a resolution
addressing images of identifiable, living people on the projects. The
resolutions are posted at:
http://wikimediafoundation.org/wiki/Resolution:Controversial_content
http://wikimediafoundation.org/wiki/Resolution:Images_of_identifiable_people
These topics have been the subject of active debate on the Projects,
and particularly on Commons, for a long time. Last June, following
extensive community debate, the Wikimedia Foundation Board requested
the Executive Director undertake a study of the issue of controversial
content on the projects, acknowledging the difficulty of the issue
(http://wikimediafoundation.org/wiki/Resolution:Commissioning_Recommendation…).
Robert and Dory Harris were commissioned to do this study, which they
did on meta in consultation with the community, publishing
recommendations in September 2010. Their report is available at:
http://meta.wikimedia.org/wiki/2010_Wikimedia_Study_of_Controversial_Content
At its October 2010 meeting, the Board was presented with this report.
The Board discussed the recommendations in depth, and developed a
working group to act on them. The working group's report was presented
at the Board's next in-person meeting, in March 2011; and these
resolutions were subsequently drafted and voted on. The working group
report has also been posted on meta, at:
http://meta.wikimedia.org/wiki/Controversial_content/Board_report
Note that the controversial content resolution uses the term
"curation." We are using this term to refer to all aspects of managing
images and other content on our projects, including recruiting and
acquiring contributions and uploading, categorizing, placement of
images in articles and other pages (including gallery pages and the
main page), featuring or highlighting, flagging for improvement, and
deletion and removal. All of our projects are curated in line with
broad editorial principles; this is an essential feature that
distinguishes our projects from indiscriminate or general-purpose
repositories.
Not all of the Harris recommendations are addressed in this
resolution. In particular:
* At this time, we refer the recommendation to create a WikiJunior
project to the editing community; the Board would like to see
demonstrated community support before creating such a project.
and
* In agreement with the Harris report, we do not recommend that
changes be made to current editing and/or filtering regimes
surrounding text in Wikimedia projects; we feel editorial mechanisms
regarding text are working well.
Finally, we urge that the community, the Foundation and the Wikimedia
movement continue to discuss the appropriate scope of Commons for
fulfilling Wikimedia's mission; this is a difficult and important
question.
Thank you to everyone who has worked on this issue, and special thanks
to Robert and Dory Harris for their hard work.
-- Phoebe Ayers, on behalf of the Board working group and the Board
Hi,
I'm taking part in an images discussion workshop with a number of
academics tomorrow and could do with a statement about the WMF's long-term commitment to supporting Wikimedia Commons (and other projects) in terms of the public availability of media. Is there an official published policy I can point to that includes, say, a 10-year or 100-year commitment?
If it exists, this would be a key factor for researchers choosing
where to share their images with the public.
Thanks,
Fae
--
http://enwp.org/user_talk:fae
Guide to email tags: http://j.mp/faetags
Hoi,
There is a request for a Wikipedia in Ancient Greek. This request has so far been denied. A lot of words have been used about it. Many people maintain their positions and do not, for whatever reason, consider the arguments of others.
In my opinion there are a few roadblocks:
   - Ancient Greek is an ancient language; the policy does not allow for it.
   - Text in Ancient Greek written today about contemporary subjects requires the reconstruction of Ancient Greek:
      - it requires the use of existing words for concepts that did not exist at the time when the language was alive
      - neologisms will be needed to describe things that did not exist at the time when the language was alive
      - modern texts will not represent the language as it used to be
   - Constructed and, by inference, reconstructed languages are effectively not permitted.
We can change the policy if there are sufficient arguments and when we agree on a need.
When a text is written in reconstructed Ancient Greek, and when it is clearly stated that it is NOT the Ancient Greek of bygone days, it can be obvious that it is a great tool for learning the skills to read and write Ancient Greek, but that it is in itself not Ancient Greek. Ancient Greek as a language is ancient. I have had a word with people who are involved in the working group that deals with the ISO-639, and I have had a word with someone from SIL, and it is clear that a proposal for a code for "Ancient Greek reconstructed" will be considered for the ISO-639-3. For the ISO-639-6 a code is likely to be given, because a clear use for this code can be given. We can apply for a code, and as it has a use bigger than Wikipedia alone, it clearly has merit.
With modern texts clearly labelled as distinct from the original language, it will be obvious that the innovations a writer needs for his writing are legitimate.
This leaves the fact that constructed and reconstructed languages are not permitted because of the notion that mother-tongue users are required. In my opinion, this has always been only a gesture to those people who are dead set against any and all constructed languages. In the policies there is something vague: "*it must have a reasonable degree of recognition as determined by discussion (this requirement is being discussed by the language subcommittee <http://meta.wikimedia.org/wiki/Language_subcommittee>).*" It is vague because even though the policy talks about a discussion, it is killed off immediately by stating "The proposal has a sufficient number of living native speakers to form a viable community and audience." In my opinion, this discussion of criteria for the acceptance of constructed or reconstructed languages has not happened. Proposals for objective criteria have been ignored.
In essence, to be clear about it:
   - We can get a code for reconstructed languages.
   - We need to change the policy to allow for reconstructed and constructed languages.
We need to do both in order to move forward.
The proposal for objective criteria for constructed and reconstructed languages is, in a nutshell:
   - The language must have an ISO-639-3 code
   - We need full WMF localisation from the start
   - The language must be sufficiently expressive for writing a modern encyclopaedia
   - The Incubator project must have sufficiently large articles that demonstrate both the language and its ability to write about a wide range of topics
   - A sufficiently large group of editors must be part of the Incubator project
Thanks,
GerardM
Routledge has kindly offered three months' free online access to Feminist Economics, a peer-reviewed academic journal, for up to 15 Wikimedians. The sign-up sheet is at
http://en.wikipedia.org/wiki/Wikipedia:Feminist_Economics and will
open at 22:00 UTC on Monday, August 29.
Please pass the word to anyone you think might be interested.
Sarah
http://chronicle.com/article/Academic-Publisher-Steps-Up/128031/
People are exchanging and selling access to the databases to get the
damn science.
This is why we need to keep pushing the free content and open access
message. You cannot do science in a system with these effects.
- d.