==Approaches and methods==
There are two main approaches to WSD – deep approaches and shallow approaches.

Deep approaches presume access to a comprehensive body of [[Commonsense knowledge bases|world knowledge]]. These approaches are generally not considered to be very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format outside very limited domains.{{sfn|Lenat|Guha|1989|pp=}} There is, however, a long tradition in [[computational linguistics]] of trying such approaches in terms of coded knowledge, and in some cases it is hard to say clearly whether the knowledge involved is linguistic or world knowledge. The first attempt was that by [[Margaret Masterman]] and her colleagues at the Cambridge Language Research Unit in England in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus, whose numbered "heads" served as indicators of topics, and looked for repetitions in text using a set intersection algorithm. It was not very successful,{{sfn|Wilks|Slator|Guthrie|1996|pp=}} but had strong relationships to later work, especially Yarowsky's machine learning optimisation of a thesaurus method in the 1990s.

Shallow approaches do not try to understand the text, but instead consider the surrounding words. The rules they apply can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, because of the computer's limited world knowledge.

There are four conventional approaches to WSD:
* [[Machine-readable dictionary|Dictionary]]- and knowledge-based methods: These rely primarily on dictionaries, thesauri, and lexical [[knowledge base]]s, without using any corpus evidence.
* [[Semi-supervised learning|Semi-supervised or minimally supervised methods]]: These make use of a secondary source of knowledge such as a small annotated corpus as seed data in a bootstrapping process, or a word-aligned bilingual corpus.
* [[Supervised learning|Supervised methods]]: These make use of sense-annotated corpora to train from.
* [[Unsupervised learning|Unsupervised methods]]: These eschew external information (almost) completely and work directly from raw unannotated corpora. These methods are also known under the name of [[word sense discrimination]].

Almost all these approaches work by defining a window of ''n'' content words around each word to be disambiguated in the corpus, and statistically analyzing those ''n'' surrounding words. Two shallow approaches used to train and then disambiguate are [[Naive Bayes classifier|Naïve Bayes classifiers]] and [[decision tree]]s. In recent research, [[Kernel methods|kernel-based methods]] such as [[support vector machine]]s have shown superior performance in [[supervised learning]]. Graph-based approaches have also gained much attention from the research community, and currently achieve performance close to the state of the art.
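As a rough illustration of this shallow, window-based setup, the sketch below trains a Naïve Bayes classifier on bag-of-words context windows. It is a minimal example only: the sense-tagged sentences, the window size ''n'' = 3, and the use of the scikit-learn library are assumptions made for illustration, not part of any reference implementation.

<syntaxhighlight lang="python">
# Minimal sketch of shallow, window-based WSD with a Naive Bayes
# classifier. The toy training data and window size are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def context_window(tokens, i, n=3):
    """Return the n words on each side of position i as one string."""
    return " ".join(tokens[max(0, i - n):i] + tokens[i + 1:i + 1 + n])

# Sense-tagged occurrences of the ambiguous word "bass":
# (sentence, position of "bass", sense label)
train = [
    ("he plays bass in a jazz band", 2, "bass/music"),
    ("the bass line anchors the song", 1, "bass/music"),
    ("we caught a bass in the lake", 3, "bass/fish"),
    ("grilled bass with lemon for dinner", 1, "bass/fish"),
]
texts = [context_window(s.split(), i) for s, i, _ in train]
senses = [sense for _, _, sense in train]

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(texts), senses)

test = context_window("we caught a huge bass today".split(), 4)
print(classifier.predict(vectorizer.transform([test]))[0])  # likely "bass/fish"
</syntaxhighlight>

A [[decision tree]] or a kernel-based classifier could be substituted for the Naïve Bayes step without changing the rest of the pipeline.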
===Dictionary- and knowledge-based methods===
The [[Lesk algorithm]]{{sfn|Lesk|1986|pp=24–26}} is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words ''evergreen'' and ''tree'' (at least in one dictionary).

A similar approach<ref>{{Cite book|last1=Diamantini|first1=C.|last2=Mircoli|first2=A.|last3=Potena|first3=D.|last4=Storti|first4=E.|title=2015 International Conference on Collaboration Technologies and Systems (CTS) |chapter=Semantic disambiguation in a social information discovery system |s2cid=13260353|date=2015-06-01|pages=326–333|doi=10.1109/CTS.2015.7210442|isbn=978-1-4673-7647-1}}</ref> searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among the definitions of every semantic variant of each word in the previous definitions, and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word.
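The overlap computation at the heart of the (simplified) Lesk method can be sketched in a few lines. The toy glosses below are invented for this example; a real system would draw them from a machine-readable dictionary such as WordNet.

<syntaxhighlight lang="python">
# Minimal sketch of Lesk-style disambiguation: choose the pair of
# senses whose dictionary definitions share the most words.
# The toy glosses are invented for illustration.
GLOSSES = {
    "pine/tree": "an evergreen tree with needle-shaped leaves",
    "pine/yearn": "to suffer a lingering loss or to yearn for something",
    "cone/shape": "a solid body narrowing to a point from a circular base",
    "cone/fruit": "the fruit of an evergreen tree bearing woody scales",
}

def overlap(gloss_a, gloss_b):
    """Number of distinct words shared by two definitions."""
    return len(set(gloss_a.split()) & set(gloss_b.split()))

def lesk_pair(word1, word2):
    """Return the pair of senses with the greatest definition overlap."""
    senses1 = [s for s in GLOSSES if s.startswith(word1 + "/")]
    senses2 = [s for s in GLOSSES if s.startswith(word2 + "/")]
    return max(
        ((s1, s2) for s1 in senses1 for s2 in senses2),
        key=lambda pair: overlap(GLOSSES[pair[0]], GLOSSES[pair[1]]),
    )

print(lesk_pair("pine", "cone"))  # ('pine/tree', 'cone/fruit'): shares "evergreen tree"
</syntaxhighlight>

Off-the-shelf variants exist; for instance, the NLTK library ships a WordNet-based implementation as <code>nltk.wsd.lesk</code>.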
An alternative to the use of the definitions is to consider general word-sense [[relatedness]] and to compute the [[semantic similarity]] of each pair of word senses based on a given lexical knowledge base such as [[WordNet]]. [[Graph (discrete mathematics)|Graph-based]] methods reminiscent of [[spreading activation]] research of the early days of AI research have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods{{sfn|Navigli|Velardi|2005|pp=1063–1074}} or even to outperform them on specific domains.{{sfn|Navigli|Litkowski|Hargraves|2007|pp=30–35}}{{sfn|Agirre|Lopez de Lacalle|Soroa|2009|pp=1501–1506}} Recently, it has been reported that simple [[Connectivity (graph theory)|graph connectivity measures]], such as [[Degree (graph theory)|degree]], perform state-of-the-art WSD in the presence of a sufficiently rich lexical knowledge base.{{sfn|Navigli|Lapata|2010|pp=678–692}} Also, automatically transferring [[knowledge]] in the form of [[semantic relation]]s from Wikipedia to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting.{{sfn|Ponzetto|Navigli|2010|pp=1522–1531}}

Selectional preferences (or selectional restrictions) are also useful: for example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., as a fish rather than a musical instrument).

===Supervised methods===
[[Supervised learning|Supervised]] methods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence, [[common sense]] and [[reasoning]] are deemed unnecessary). Virtually every machine learning algorithm has been applied to WSD, including associated techniques such as [[feature selection]], parameter optimization, and [[ensemble learning]]. [[Support Vector Machines]] and [[memory-based learning]] have been shown to be the most successful approaches to date, probably because they can cope with the high dimensionality of the feature space. However, these supervised methods are subject to a new knowledge acquisition bottleneck, since they rely on substantial amounts of manually sense-tagged corpora for training, which are laborious and expensive to create.
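In outline, such a supervised system is an ordinary text classifier over context features. The sketch below pairs a linear SVM with a small grid search as the parameter-optimization step; the <code>contexts</code> and <code>senses</code> variables stand for a sense-tagged corpus that is assumed rather than constructed here.

<syntaxhighlight lang="python">
# Sketch of supervised WSD as text classification with a linear SVM,
# including simple parameter optimization via cross-validated grid search.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
search = GridSearchCV(
    pipeline,
    {"linearsvc__C": [0.1, 1.0, 10.0]},  # regularization strengths to try
    cv=5,
)
# search.fit(contexts, senses)         # train on the sense-tagged corpus
# predictions = search.predict(new_contexts)
</syntaxhighlight>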
===Semi-supervised methods===
Because of the lack of training data, many word sense disambiguation algorithms use [[semi-supervised learning]], which allows both labeled and unlabeled data. The [[Yarowsky algorithm]] was an early example of such an algorithm.{{sfn|Yarowsky|1995|pp=189–196}} It uses the "one sense per collocation" and the "one sense per discourse" properties of human languages for word sense disambiguation: empirically, words tend to exhibit only one sense in a given discourse and in a given collocation.<ref>{{cite book |last1=Mitkov |first1=Ruslan |title=The Oxford Handbook of Computational Linguistics |date=2004 |publisher=OUP |isbn=978-0-19-927634-9 |page=257 |chapter-url=https://books.google.com/books?id=yl6AnaKtVAkC&pg=PA257 |language=en |chapter=13.5.3 Two claims about senses |access-date=2022-02-22 |archive-date=2022-02-22 |archive-url=https://web.archive.org/web/20220222110649/https://books.google.com/books?id=yl6AnaKtVAkC&pg=PA257 |url-status=live }}</ref>

The [[bootstrapping]] approach starts from a small amount of seed data for each word: either manually tagged training examples or a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initial [[Classifier (mathematics)|classifier]], using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on a successively larger training corpus, until the whole corpus is consumed or a given maximum number of iterations is reached.
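A minimal sketch of this bootstrapping loop is given below; the classifier-training callable, the confidence threshold, and the data format are illustrative assumptions rather than details of the original algorithm.

<syntaxhighlight lang="python">
# Sketch of bootstrapped (semi-supervised) WSD: grow a sense-tagged
# training set from seed examples, keeping only confident predictions.
def bootstrap(untagged, seeds, train_classifier, threshold=0.9, max_iter=10):
    labeled = list(seeds)                 # (context, sense) seed examples
    untagged = list(untagged)
    for _ in range(max_iter):
        # train_classifier may use any supervised method; it returns a
        # callable mapping a context to a (sense, confidence) pair
        classify = train_classifier(labeled)
        confident, rest = [], []
        for context in untagged:
            sense, confidence = classify(context)
            (confident if confidence >= threshold else rest).append((context, sense))
        if not confident:                 # nothing new was learned: stop
            break
        labeled.extend(confident)         # enlarge the training set
        untagged = [context for context, _ in rest]
    return labeled
</syntaxhighlight>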
Other semi-supervised techniques use large quantities of untagged corpora to provide [[co-occurrence]] information that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains. Also, an ambiguous word in one language is often translated into different words in a second language, depending on the sense of the word. Word-aligned [[bilingual]] corpora have been used to infer cross-lingual sense distinctions, a kind of semi-supervised system.{{citation needed|reason=where? by whom?|date=August 2022}}

===Unsupervised methods===
{{Main|Word sense induction}}
[[Unsupervised learning]] is the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text by [[cluster analysis|clustering]] word occurrences using some [[Similarity measure|measure of similarity]] of context,{{sfn|Schütze|1998|pp=97–123}} a task referred to as [[word sense induction]] or discrimination. Then, new occurrences of the word can be classified into the closest induced clusters/senses. Performance has been lower than for the other methods described above, but comparisons are difficult, since the induced senses must be mapped to a known dictionary of word senses. If a [[Map (mathematics)|mapping]] to a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. Alternatively, word sense induction methods can be tested and compared within an application. For instance, it has been shown that word sense induction improves Web search result clustering by increasing the quality of result clusters and the degree of diversification of result lists.{{sfn|Navigli|Crisafulli|2010|pp=}}{{sfn|Di Marco|Navigli|2013|pp=}} It is hoped that unsupervised learning will overcome the knowledge acquisition bottleneck, because such methods are not dependent on manual effort.
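As a minimal sketch of this idea, the following clusters the contexts of the ambiguous word "bank" into two induced senses; the toy contexts, the choice of ''k''-means, and the number of clusters are assumptions made for illustration.

<syntaxhighlight lang="python">
# Sketch of word sense induction: cluster the contexts in which an
# ambiguous word occurs; each cluster approximates one induced sense.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

contexts = [
    "deposited money at the bank",        # financial sense
    "the bank lent money to firms",       # financial sense
    "the river overflowed its bank",      # riverside sense
    "sat fishing on the river bank",      # riverside sense
]
X = TfidfVectorizer(stop_words="english").fit_transform(contexts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0, 0, 1, 1]: one induced cluster per sense
</syntaxhighlight>

New occurrences can then be assigned to the nearest cluster centroid, which is the classification step described above.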
Representing words considering their context through fixed-size dense vectors ([[word embedding]]s) has become one of the most fundamental building blocks in many NLP systems.<ref name=":0">{{cite arXiv|last1=Mikolov|first1=Tomas|last2=Chen|first2=Kai|last3=Corrado|first3=Greg|last4=Dean|first4=Jeffrey|date=2013-01-16|title=Efficient Estimation of Word Representations in Vector Space|eprint=1301.3781|class=cs.CL}}</ref><ref>{{Cite book|last1=Pennington|first1=Jeffrey|last2=Socher|first2=Richard|last3=Manning|first3=Christopher|title=Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) |chapter=Glove: Global Vectors for Word Representation |date=2014|pages=1532–1543|location=Stroudsburg, PA, USA|publisher=Association for Computational Linguistics|doi=10.3115/v1/d14-1162|s2cid=1957433|doi-access=free}}</ref><ref>{{Cite journal|last1=Bojanowski|first1=Piotr|last2=Grave|first2=Edouard|last3=Joulin|first3=Armand|last4=Mikolov|first4=Tomas|date=December 2017|title=Enriching Word Vectors with Subword Information|journal=Transactions of the Association for Computational Linguistics|volume=5|pages=135–146|doi=10.1162/tacl_a_00051|issn=2307-387X|doi-access=free|arxiv=1607.04606}}</ref> Even though most traditional word-embedding techniques conflate words with multiple meanings into a single vector representation, they can still be used to improve WSD.<ref>{{Cite journal|last1=Iacobacci|first1=Ignacio|last2=Pilehvar|first2=Mohammad Taher|last3=Navigli|first3=Roberto|date=2016|title=Embeddings for Word Sense Disambiguation: An Evaluation Study|url=http://aclweb.org/anthology/P16-1085|journal=Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)|location=Berlin, Germany|publisher=Association for Computational Linguistics|pages=897–907|doi=10.18653/v1/P16-1085|doi-access=free|access-date=2019-10-28|archive-date=2019-10-28|archive-url=https://web.archive.org/web/20191028134505/https://www.aclweb.org/anthology/P16-1085/|url-status=live|hdl=11573/936571|hdl-access=free}}</ref> A simple approach to employing pre-computed word embeddings to represent word senses is to compute the centroids of sense clusters.<ref>{{Cite book |last1=Bhingardive |first1=Sudha |last2=Singh |first2=Dhirendra |last3=V |first3=Rudramurthy |last4=Redkar |first4=Hanumant |last5=Bhattacharyya |first5=Pushpak |title=Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies |chapter=Unsupervised Most Frequent Sense Detection using Word Embeddings |year=2015 |chapter-url=https://aclanthology.org/N15-1132 |location=Denver, Colorado |publisher=Association for Computational Linguistics |pages=1238–1243 |doi=10.3115/v1/N15-1132 |s2cid=10778029 |access-date=2023-01-21 |archive-date=2023-01-21 |archive-url=https://web.archive.org/web/20230121132514/https://aclanthology.org/N15-1132/ |url-status=live }}</ref><ref>{{Cite journal |last1=Butnaru |first1=Andrei |last2=Ionescu |first2=Radu Tudor |last3=Hristea |first3=Florentina |year=2017 |title=ShotgunWSD: An unsupervised algorithm for global word sense disambiguation inspired by DNA sequencing |url=https://aclanthology.org/E17-1086/ |journal=Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics |language=en-us |pages=916–926 |arxiv=1707.08084 |access-date=2023-01-21 |archive-date=2023-01-21 |archive-url=https://web.archive.org/web/20230121132136/https://aclanthology.org/E17-1086/ |url-status=live }}</ref>

In addition to word-embedding techniques, lexical databases (e.g., [[WordNet]], [[Open Mind Common Sense|ConceptNet]], [[BabelNet]]) can also assist unsupervised systems in mapping words and their senses as dictionaries. Some techniques that combine lexical databases and word embeddings are AutoExtend<ref>{{Cite conference |last1=Rothe |first1=Sascha |last2=Schütze |first2=Hinrich |date=2015 |title=Volume 1: Long Papers |conference=Association for Computational Linguistics and the International Joint Conference on Natural Language Processing |location=Stroudsburg, Pennsylvania, USA |publisher=Association for Computational Linguistics |pages=1793–1803 |arxiv=1507.01127 |bibcode=2015arXiv150701127R |doi=10.3115/v1/p15-1173 |chapter=AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes |journal=Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing |s2cid=15687295}}</ref><ref name=":1">{{Cite journal|last1=Rothe|first1=Sascha|last2=Schütze|first2=Hinrich|date=September 2017|title=AutoExtend: Combining Word Embeddings with Semantic Resources|journal=Computational Linguistics|volume=43|issue=3|pages=593–617|doi=10.1162/coli_a_00294|issn=0891-2017|doi-access=free}}</ref> and Most Suitable Sense Annotation (MSSA).<ref name=":2">{{Cite journal|last1=Ruas|first1=Terry|last2=Grosky|first2=William|last3=Aizawa|first3=Akiko|date=December 2019|title=Multi-sense embeddings through a word sense disambiguation process|journal=Expert Systems with Applications|volume=136|pages=288–303|doi=10.1016/j.eswa.2019.06.026 |arxiv=2101.08700|hdl=2027.42/145475|s2cid=52225306|hdl-access=free}}</ref> AutoExtend<ref name=":1" /> is a method that decouples an object's input representation into its properties, such as words and their word senses. It uses a graph structure to map word (e.g., text) and non-word (e.g., [[synsets]] in [[WordNet]]) objects as nodes, and the relationships between nodes as edges. The relations (edges) in AutoExtend can express either addition or similarity between nodes: the former captures the intuition behind the offset calculus,<ref name=":0" /> while the latter defines the similarity between two nodes.

In MSSA,<ref name=":2" /> an unsupervised disambiguation system uses the similarity between word senses in a fixed context window to select the most suitable word sense using a pre-trained word-embedding model and [[WordNet]]. For each context window, MSSA calculates the centroid of each word sense definition by averaging the word vectors of its words in WordNet's [[Gloss (annotation)|glosses]] (i.e., a short defining gloss and one or more usage examples) using a pre-trained word-embedding model. These centroids are later used to select the word sense with the highest similarity of a target word to its immediately adjacent neighbors (i.e., predecessor and successor words). After all words are annotated and disambiguated, they can be used as a training corpus in any standard word-embedding technique. In its improved version, MSSA can make use of word sense embeddings to repeat its disambiguation process iteratively.
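A hedged sketch of this gloss-centroid idea follows. The <code>embedding</code> lookup (a word-to-vector mapping from any pre-trained model) and the gloss strings are assumed inputs, and the code illustrates only the centroid-and-similarity step, not the full MSSA system.

<syntaxhighlight lang="python">
# Sketch of gloss-centroid sense selection: represent each sense by the
# average embedding of its gloss words, then pick the sense whose
# centroid is most similar to the centroid of the context words.
import numpy as np

def centroid(words, embedding):
    """Average the vectors of the words found in the embedding model."""
    vectors = [embedding[w] for w in words if w in embedding]
    return np.mean(vectors, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def choose_sense(context_words, sense_glosses, embedding):
    """sense_glosses maps a sense identifier to its gloss string."""
    context_vec = centroid(context_words, embedding)
    return max(
        sense_glosses,
        key=lambda s: cosine(context_vec,
                             centroid(sense_glosses[s].split(), embedding)),
    )
</syntaxhighlight>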
===Other approaches===
Other approaches vary in their methods:
* Domain-driven disambiguation;{{sfn|Gliozzo|Magnini|Strapparava|2004|pp=380–387}}{{sfn|Buitelaar|Magnini|Strapparava|Vossen|2006|pp=275–298}}
* Identification of dominant word senses;{{sfn|McCarthy|Koeling|Weeds|Carroll|2007|pp=553–590}}{{sfn|Mohammad|Hirst|2006|pp=121–128}}{{sfn|Lapata|Keller|2007|pp=348–355}}
* WSD using cross-lingual evidence;{{sfn|Ide|Erjavec|Tufis|2002|pp=54–60}}{{sfn|Chan|Ng|2005|pp=1037–1042}}
* WSD in [[John Ball (cognitive scientist)|John Ball's]] language-independent NLU, combining Patom Theory and Role and Reference Grammar (RRG);
* [[Type inference]] in [[constraint-based grammar]]s.<ref name="Shieber1992">{{cite book |last=Shieber |first=Stuart M. |url=https://books.google.com/books?id=QcYl_ylrHmcC |title=Constraint-based Grammar Formalisms: Parsing and Type Inference for Natural and Computer Languages |publisher=MIT Press |year=1992 |isbn=978-0-262-19324-5 |location=Massachusetts |language=en-us |access-date=2018-12-23 |archive-url=https://web.archive.org/web/20230715100054/https://books.google.com/books?id=QcYl_ylrHmcC |archive-date=2023-07-15 |url-status=live}}</ref>

===Other languages===
* '''[[Hindi]]''': The lack of [[lexical resource]]s in Hindi has hindered the performance of supervised models of WSD, while unsupervised models suffer due to extensive morphology. A possible solution to this problem is the design of a WSD model by means of [[parallel corpora]].<ref>Bhattacharya, Indrajit, Lise Getoor, and Yoshua Bengio. [http://www.umiacs.umd.edu/~getoor/Publications/acl04.pdf Unsupervised sense disambiguation using bilingual probabilistic models] {{Webarchive|url=https://web.archive.org/web/20160109171700/http://www.umiacs.umd.edu/~getoor/Publications/acl04.pdf |date=2016-01-09 }}. Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2004.</ref><ref>Diab, Mona, and Philip Resnik. [http://www.aclweb.org/anthology/P02-1033 An unsupervised method for word sense tagging using parallel corpora] {{Webarchive|url=https://web.archive.org/web/20160304120639/http://www.aclweb.org/anthology/P02-1033 |date=2016-03-04 }}. Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2002.</ref> The creation of the [http://www.cfilt.iitb.ac.in/wordnet/webhwn/ Hindi WordNet] has paved the way for several supervised methods that have been shown to produce higher accuracy in disambiguating nouns.<ref>Manish Sinha, Mahesh Kumar, Prabhakar Pande, Laxmi Kashyap, and Pushpak Bhattacharyya. [http://www.cfilt.iitb.ac.in/wordnet/webhwn/papers/HindiWSD.pdf Hindi word sense disambiguation] {{Webarchive|url=https://web.archive.org/web/20160304230158/http://www.cfilt.iitb.ac.in/wordnet/webhwn/papers/HindiWSD.pdf |date=2016-03-04 }}. In International Symposium on Machine Translation, Natural Language Processing and Translation Support Systems, Delhi, India, 2004.</ref>

===Local impediments and summary===
The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem. [[Unsupervised learning|Unsupervised methods]] rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases. [[Supervised learning|Supervised methods]] depend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far{{when|date=February 2019}} be met only for a handful of words for testing purposes, as in the [[Senseval]] exercises.

One of the most promising trends in WSD research is using the largest [[Corpus linguistics|corpus]] ever accessible, the [[World Wide Web]], to acquire lexical information automatically.{{sfn|Kilgarrif|Grefenstette|2003|pp=333–347}} WSD has traditionally been understood as an intermediate language engineering technology which could improve applications such as [[information retrieval]] (IR). In this case, however, the reverse is also true: [[web search engine]]s implement simple and robust IR techniques that can successfully mine the Web for information to use in WSD. The historic lack of training data has spurred the development of new algorithms and techniques, as described in [[Automatic acquisition of sense-tagged corpora]].