Wiktionary


Web address: wiktionary.org
Slogan: The Free Dictionary
Commercial? No
Type of site: Online dictionary
Registration: Optional
Available language(s): Multilingual (over 170)
Owner: Wikimedia Foundation
Created by: Jimmy Wales and the Wikimedia community
Launched: December 12, 2002
Alexa rank: 561 (November 2013)[1]
Current status: Active

Wiktionary (a blend of the words wiki and dictionary) is a multilingual, web-based project to create a free-content dictionary of all words in all languages. It is available in 158 languages and in Simple English. Like its sister project World Heritage Encyclopedia, Wiktionary is run by the Wikimedia Foundation and is written collaboratively by volunteers, dubbed "Wiktionarians". Its wiki software allows almost anyone with access to the website to create and edit entries.

Because Wiktionary is not limited by print space considerations, most of its language editions provide definitions and translations of words from many languages, and some editions offer additional information typically found in thesauri and lexicons. The English Wiktionary includes a Wikisaurus (thesaurus) of synonyms of various words.

Wiktionary data are frequently used in various natural language processing tasks.

History and development

Wiktionary was brought online on December 12, 2002, following a proposal by Daniel Alston and an idea by [ref]. It now features well over 5 million entries across its 272 language editions. The largest of the language editions is the English Wiktionary, with over 3.5 million entries, followed by the Malagasy Wiktionary with over 2.5 million entries and the French Wiktionary with over 2.4 million. Nineteen language editions now contain over 100,000 entries each.


Most of the entries and many of the definitions at the project's largest language editions were created by bots that found creative ways to generate entries or (rarely) automatically imported thousands of entries from previously published dictionaries. Seven of the 18 bots registered at the English Wiktionary created 163,000 of the entries there.[2]

Another of these bots, "ThirdPersBot," was responsible for the addition of a number of third-person conjugations that would not have received their own entries in standard dictionaries; for instance, it defined "smoulders" as the "third-person singular simple present form of smoulder." Of the 648,970 definitions the English Wiktionary provides for 501,171 English words, 217,850 are "form of" definitions of this kind.[3] This means its coverage of English is slightly smaller than that of major monolingual print dictionaries. The Oxford English Dictionary, for instance, has 615,000 headwords, while Merriam-Webster's Third New International Dictionary of the English Language, Unabridged has 475,000 entries (with many additional embedded headwords). Detailed statistics show how many entries of each kind exist.

The English Wiktionary does not rely on bots to the extent that some other editions do. The French and Vietnamese Wiktionaries, for example, imported large sections of the Free Vietnamese Dictionary Project (FVDP), which provides free-content bilingual dictionaries to and from Vietnamese. These imported entries make up virtually all of the Vietnamese edition's contents. Almost all of the Malagasy Wiktionary's entries were copied by bot from other Wiktionaries. Like the English edition, the French Wiktionary has imported approximately 20,000 entries from the Unihan database of Chinese, Japanese, and Korean characters. The French Wiktionary grew rapidly in 2006 thanks in large part to bots copying many entries from old, freely licensed dictionaries, such as the eighth edition of the Dictionnaire de l'Académie française (1935, around 35,000 words), and to bots adding words from other editions with French translations. The Russian edition grew by nearly 80,000 entries as "LXbot" added boilerplate entries (with headings, but without definitions) for words in English and German.[4]

Logos

Wiktionary has historically lacked a uniform logo across its numerous language editions. Some editions use logos that depict a dictionary entry for the term "Wiktionary", based on the English Wiktionary's logo, which was designed by Brion Vibber, a MediaWiki developer. Because a purely textual logo must vary considerably from language to language, a four-phase contest to adopt a uniform logo was held at the Wikimedia Meta-Wiki from September to October 2006. Some communities adopted the winning entry by "Smurrayinchester", a 3×3 grid of wooden tiles, each bearing a character from a different writing system. However, the poll did not see as much participation from the Wiktionary community as some community members had hoped, and a number of the larger wikis ultimately kept their textual logos.

In April 2009, the issue was resurrected with a new contest. This time, a depiction by "AAEngelman" of an open hardbound dictionary won a head-to-head vote against the 2006 logo, but the process to refine and adopt the new logo then stalled. In the following years, some wikis replaced their textual logos with one of the two newer logos. In 2012, 55 wikis that had been using the English Wiktionary logo received localized versions of the 2006 design by "Smurrayinchester". As of 25 January 2013, 136 wikis, representing 51% of Wiktionary's entries, use the 2006 design by "Smurrayinchester", 31 wikis (48%) use a textual logo, and three wikis (2%) use the 2009 design by "AAEngelman".

Accuracy

To ensure accuracy, the English Wiktionary has a policy requiring that terms be attested. Terms in major languages such as English and Chinese must be verified by:

  1. clearly widespread use,
  2. use in a well-known work, or
  3. use in permanently recorded media, conveying meaning, in at least three independent instances spanning at least a year.

For smaller languages such as Creek and extinct languages such as Latin, one use in a permanently recorded medium or one mention in a reference work is sufficient verification.
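
The attestation rules above amount to a small decision procedure. As a rough illustration only, the following Python sketch encodes them using hypothetical data structures invented for this article (a Citation record with a date, a use/mention flag and an independence flag); it is not part of any actual Wiktionary tooling.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical citation record invented for this sketch.
    @dataclass
    class Citation:
        source: str        # where the term appeared
        when: date         # publication date
        is_use: bool       # True if the term is used (conveys meaning), not merely mentioned
        independent: bool  # True if the source is independent of the other citations

    def is_attested(citations, widely_used=False, in_well_known_work=False,
                    limited_documentation=False):
        """Rough encoding of the attestation criteria described above."""
        # Criterion 1: clearly widespread use.
        if widely_used:
            return True
        # Criterion 2: use in a well-known work.
        if in_well_known_work:
            return True
        # Relaxed rule for smaller or extinct languages (e.g. Creek, Latin):
        # one use or mention in a permanently recorded medium suffices.
        if limited_documentation:
            return len(citations) >= 1
        # Criterion 3: at least three independent uses spanning at least a year.
        uses = [c for c in citations if c.is_use and c.independent]
        if len(uses) < 3:
            return False
        dates = [c.when for c in uses]
        return (max(dates) - min(dates)).days >= 365

    # Example: three independent uses spread over more than a year satisfy criterion 3.
    cites = [Citation("blog A", date(2011, 1, 5), True, True),
             Citation("newspaper B", date(2011, 8, 2), True, True),
             Citation("novel C", date(2012, 3, 9), True, True)]
    assert is_attested(cites)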

Critical reception

Critical reception of Wiktionary has been mixed. In 2006, Jill Lepore wrote in the article "Noah’s Ark" for The New Yorker:

There’s no show of hands at Wiktionary. There’s not even an editorial staff. "Be your own lexicographer!" might be Wiktionary’s motto. Who needs experts? Why pay good money for a dictionary written by lexicographers when we could cobble one together ourselves?

Wiktionary isn’t so much republican or democratic as Maoist. And it’s only as good as the copyright-expired books from which it pilfers.

Keir Graff’s review for Booklist was less critical:

Is there a place for Wiktionary? Undoubtedly. The industry and enthusiasm of its many creators are proof that there’s a market. And it’s wonderful to have another strong source to use when searching the odd terms that pop up in today’s fast-changing world and the online environment. But as with so many Web sources (including this column), it’s best used by sophisticated users in conjunction with more reputable sources.

References in other publications are fleeting and form part of larger discussions of World Heritage Encyclopedia, not progressing beyond a definition, although David Brooks in The Nashua Telegraph described it as "wild and woolly". One of the impediments to independent coverage of Wiktionary is the continuing confusion that it is merely an extension of World Heritage Encyclopedia. In 2005, PC Magazine rated Wiktionary as one of the Internet's "Top 101 Web Sites",[5] although little information was given about the site.

A measurement of the correctness of the inflections for a subset of the Polish words in the English Wiktionary showed that this grammatical data is very stable: only 131 of the 4,748 Polish words examined (about 2.8%) have had their inflection data corrected.[6]

Wiktionary data in natural language processing

Wiktionary contains semi-structured data.[7] Its lexicographic data must be converted to a machine-readable format before it can be used in natural language processing tasks.[8][9][10]

Mining Wiktionary data is a complex task. The main difficulties are:[11] (1) the constant and frequent changes to data and schema, (2) the heterogeneity of the language editions' schemas, and (3) the human-centric nature of a wiki.

There are several parsers for different Wiktionary language editions:[12]

  • DBpedia Wiktionary:[13] a subproject of DBpedia; the data are extracted from the English, French, German and Russian Wiktionaries and include language, part of speech, definitions, semantic relations and translations. A declarative description of the page schema,[14] regular expressions[15] and a finite-state transducer[16] are used to extract the information (a minimal regex-based sketch follows this list).
  • JWKTL (Java Wiktionary Library):[17] provides access to English and German Wiktionary dumps via a Java API.[18] The data include language, part of speech, definitions, quotations, semantic relations, etymologies and translations. JWKTL is available for non-commercial use.
  • wikokit:[19] a parser of the English and Russian Wiktionaries.[20] The parsed data include language, part of speech, definitions, quotations,[21] semantic relations[22] and translations. It is multi-licensed open-source software.
  • Etymological entries have been parsed in the Etymological WordNet project.[23]
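
To give a flavour of the regex-based extraction mentioned for DBpedia Wiktionary above, the Python sketch below pulls language headers, part-of-speech headers and numbered definition lines out of a short English Wiktionary-style wikitext fragment. It was written for this article as a simplified illustration and is not code from any of the parsers listed; real parsers work from full database dumps and handle far more markup.

    import re

    # Simplified English Wiktionary-style wikitext for one entry (illustrative only).
    wikitext = ("==English==\n"
                "===Noun===\n"
                "# A [[book]] of words listed with their meanings.\n"
                "# {{lb|en|computing}} A [[data structure]] mapping keys to values.\n"
                "===Verb===\n"
                "# {{lb|en|transitive}} To look up in a dictionary.\n")

    # ==Language== and ===Part of speech=== section headers, and "# definition" lines.
    LANG_RE = re.compile(r"^==([^=].*?)==\s*$", re.MULTILINE)
    POS_RE = re.compile(r"^===([^=].*?)===\s*$", re.MULTILINE)
    DEF_RE = re.compile(r"^#\s*(?![*:])(.+)$", re.MULTILINE)

    def strip_markup(text):
        """Crudely drop {{templates}} and unwrap [[wiki links]] in a definition line."""
        text = re.sub(r"\{\{[^}]*\}\}", "", text)
        text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)
        return text.strip()

    def parse_entry(text):
        """Return {language: {part_of_speech: [definitions]}} for one entry's wikitext."""
        result = {}
        langs = list(LANG_RE.finditer(text))
        for i, lm in enumerate(langs):
            end = langs[i + 1].start() if i + 1 < len(langs) else len(text)
            section = text[lm.end():end]
            poses = list(POS_RE.finditer(section))
            by_pos = {}
            for j, pm in enumerate(poses):
                pend = poses[j + 1].start() if j + 1 < len(poses) else len(section)
                body = section[pm.end():pend]
                by_pos[pm.group(1)] = [strip_markup(d) for d in DEF_RE.findall(body)]
            result[lm.group(1)] = by_pos
        return result

    print(parse_entry(wikitext))
    # {'English': {'Noun': ['A book of words listed with their meanings.',
    #                       'A data structure mapping keys to values.'],
    #              'Verb': ['To look up in a dictionary.']}}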

Various natural language processing tasks have been solved with the help of Wiktionary data:[24]

  • Rule-based machine translation between Dutch and Afrikaans; data from the English Wiktionary, the Dutch Wiktionary and World Heritage Encyclopedia were used with the Apertium machine translation platform.[25]
  • Construction of a machine-readable dictionary by the NULEX parser, which integrates open linguistic resources: the English Wiktionary, WordNet, and VerbNet.[26] NULEX scrapes the English Wiktionary for tense information (verbs) and for plural forms and parts of speech (nouns).
  • Speech recognition and synthesis, where Wiktionary was used to automatically create pronunciation dictionaries.[27] Word-pronunciation pairs were retrieved from six language editions (Czech, English, French, Spanish, Polish, and German); pronunciations are given in the International Phonetic Alphabet. The ASR system based on the English Wiktionary has the highest word error rate, with roughly every third phoneme needing to be changed.[28] A minimal extraction sketch follows this list.
  • Ontology engineering[29] and semantic network construction.
  • Ontology matching.[30]
  • Text simplification. Medero & Ostendorf[31] assessed vocabulary difficulty (reading-level detection) with the help of Wiktionary data, investigating properties of words extracted from Wiktionary entries (definition length and the counts of parts of speech, senses, and translations). They expected that (1) very common words would be more likely to have multiple parts of speech, (2) common words would be more likely to have multiple senses, and (3) common words would be more likely to have been translated into multiple languages. These features proved useful in distinguishing word types that appear in Simple English World Heritage Encyclopedia articles from words that appear only in the comparable Standard English articles.
  • Part-of-speech tagging. Li et al. (2012)[32] built multilingual POS taggers for eight resource-poor languages on the basis of the English Wiktionary and Hidden Markov Models.
  • Sentiment analysis.[33]
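
As an illustration of the pronunciation-dictionary use case above, the sketch below pulls IPA transcriptions out of English Wiktionary-style {{IPA|...}} templates to build a small word-to-pronunciation lexicon. It is a toy example written for this article; real pipelines work from full dumps and handle many more pronunciation template variants, so treat the template layout and helper names here as assumptions.

    import re

    # Toy pronunciation sections for a few entries (illustrative only).
    entries = {
        "dictionary": "===Pronunciation===\n* {{IPA|en|/ˈdɪkʃəˌnɛɹi/|/ˈdɪkʃənɹi/}}\n",
        "wiki": "===Pronunciation===\n* {{IPA|en|/ˈwɪki/}}\n",
    }

    # {{IPA|<language code>|/transcription/|/transcription/...}}
    IPA_TEMPLATE_RE = re.compile(r"\{\{IPA\|[a-z-]+\|([^}]*)\}\}")

    def pronunciations(wikitext):
        """Return the IPA transcriptions found in an entry's pronunciation section."""
        prons = []
        for match in IPA_TEMPLATE_RE.finditer(wikitext):
            # Remaining template arguments are |-separated; keep only /.../ transcriptions
            # and skip anything else (e.g. named parameters).
            for arg in match.group(1).split("|"):
                arg = arg.strip()
                if arg.startswith("/") and arg.endswith("/"):
                    prons.append(arg.strip("/"))
        return prons

    # Word -> pronunciations mapping, usable as a pronunciation lexicon for ASR/TTS.
    lexicon = {word: pronunciations(text) for word, text in entries.items()}
    print(lexicon)
    # {'dictionary': ['ˈdɪkʃəˌnɛɹi', 'ˈdɪkʃənɹi'], 'wiki': ['ˈwɪki']}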

References

  • Krizhanovsky, Andrew (2010). "Transformation of Wiktionary entry structure into tables and relations in a relational database schema". arXiv preprint.
  • Krizhanovsky, Andrew (2010). "The comparison of Wiktionary thesauri transformed into the machine-readable format". arXiv preprint.

External links

  • List of all Wiktionary editions
  • Wiktionary front page
  • Wiktionary's Multilingual Statistics
  • Wiktionary's page on Meta (including a list of all existing Wiktionaries)
  • Pages about Wiktionary in Meta
  • Meta:Main Page – OmegaWiki