WikiHarass
Type: Dataset
Number of words: 1,039,789
Unique words: 64,136
Source: English Wikipedia
Developed by: Wikimedia Foundation
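The corpus statistics above could be reproduced with a short script along these lines. This is a minimal sketch: the filename and the whitespace tokenisation are assumptions, since the exact preprocessing behind the published figures is not documented on this page.

 # Count total and unique words in the corpus file (filename assumed).
 from collections import Counter

 with open("wikiharass.txt", encoding="utf-8") as f:
     words = f.read().lower().split()  # naive whitespace tokenisation

 counts = Counter(words)
 print("Number of words:", len(words))   # total tokens
 print("Unique words:", len(counts))     # vocabulary size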
The Detox dataset is a project by the Wikimedia Foundation and Perspective API (Jigsaw) to train a neural network that detects the level of toxicity of a comment.
The original dataset consists of:
- A corpus of all 95 million user and article talk diffs made between 2001 and 2015, scored by the personal attack model.
- A human-annotated dataset of 1 million crowd-sourced annotations that cover 100k talk page diffs (with 10 judgements per diff, aggregated as sketched below).
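A minimal sketch of how the 10 judgements per diff could be reduced to a single label by majority vote. The file and column names ("attack_annotations.tsv", "rev_id", "attack") are assumptions about the Figshare release and may differ in the actual download.

 # Collapse the per-annotator judgements into one label per talk-page diff.
 import pandas as pd

 annotations = pd.read_csv("attack_annotations.tsv", sep="\t")

 # Majority vote: a diff counts as an attack if more than half
 # of its annotators labelled it as one.
 labels = annotations.groupby("rev_id")["attack"].mean() > 0.5
 print(labels.value_counts())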
For Algolit, a smaller section of the Detox dataset was used, taken from Jigsaw's GitHub, which contains both constructive and vandalistic edits.