A Bag of Words
From Algolit
Latest revision as of 12:10, 30 October 2017
Type: Algoliterary exploration
Technique: Frequency counts
Developed by: Python, nltk, Algolit
This interactive installation guides you through the different steps in the process of a bag-of-words model.
The bag-of-words model is a classification model which reads a text as a collection of words. While processing a text, the model discards word order, punctuation and possibly conjugations. The model transforms the text into a list of the unique words used in the text together with their counts, or quite literally a bag of words.
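A minimal sketch of this transformation in Python, using only the standard library (the tokenisation rule and variable names here are illustrative, not the installation's own code):

```python
import re
from collections import Counter

def bag_of_words(text):
    # Lowercase the text and keep only word characters,
    # discarding punctuation and word order.
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

bag = bag_of_words("The bag holds words; the bag discards order.")
# bag["the"] == 2 and bag["bag"] == 2; punctuation and order are gone
```

The result is a multiset: the same text in any word order produces the same bag.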
This model is often used to understand the subject of a text by recognizing its most frequent or important words, or to measure the similarity of texts by comparing their bags of words.
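One common way to compare two bags of words (not necessarily the one used in this installation) is cosine similarity over their word counts; a sketch, with illustrative names:

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(bag_a, bag_b):
    # Dot product over the shared words, normalised by vector lengths,
    # so identical bags score 1.0 and disjoint bags score 0.0.
    shared = set(bag_a) & set(bag_b)
    dot = sum(bag_a[w] * bag_b[w] for w in shared)
    norm_a = math.sqrt(sum(c * c for c in bag_a.values()))
    norm_b = math.sqrt(sum(c * c for c in bag_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

a = bag_of_words("the cat sat on the mat")
b = bag_of_words("the dog sat on the log")
print(cosine_similarity(a, b))  # high, since the texts share most words
```

Because the bags ignore word order, two texts count as similar whenever they use the same vocabulary, regardless of how it is arranged.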
To understand the importance of less common but significant words, often related to the topic of the text, the function TF-IDF (Term Frequency-Inverse Document Frequency) can be used, where the frequency of a word in a single text is weighted against how common the word is across the whole collection, so that words appearing in every text score low while distinctive words score high.
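In the standard formulation, TF-IDF multiplies a word's frequency in one text by the logarithm of the inverse of its document frequency; a sketch of that formulation, using only the standard library (names illustrative):

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

def tf_idf(word, bag, bags):
    # Term frequency: share of this text's words that are `word`.
    tf = bag[word] / sum(bag.values())
    # Inverse document frequency: the rarer the word is across the
    # collection, the higher its weight.
    df = sum(1 for b in bags if word in b)
    idf = math.log(len(bags) / df) if df else 0.0
    return tf * idf

docs = ["the cat sat", "the dog ran", "the cat and the dog"]
bags = [bag_of_words(d) for d in docs]
# "the" appears in every text, so its idf is log(1) = 0 and its
# tf-idf is 0; "cat" appears in only two texts, so it scores higher.
```

This is how a word like "the", frequent everywhere, is pushed down, while a topical word that is frequent in one text but rare elsewhere rises to the top.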