A One Hot Vector
one-hot-vectors
"Meaning is this illusive thing that were trying to capture" (Richard Socher in CS224D Lecture 2 - 31st Mar 2016 (Youtube))
[0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0]
A one-hot vector has one position for each word in the vocabulary; the single 1 marks the place of one particular word and all the other positions are 0. In this kind of vector representation none of the words are similar to each other: every word is just a 1 in its own position.
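A minimal sketch in Python of how such a vector can be built, assuming a small example vocabulary (the words and names below are only an illustration, they are not part of the original notes):

```python
import numpy as np

# A small example vocabulary (hypothetical, for illustration only).
vocabulary = ["algorithm", "bank", "meaning", "vector", "word"]

def one_hot(word, vocabulary):
    """Return a vector of zeros with a single 1 at the word's position."""
    vector = np.zeros(len(vocabulary), dtype=int)
    vector[vocabulary.index(word)] = 1
    return vector

bank = one_hot("bank", vocabulary)        # [0 1 0 0 0]
meaning = one_hot("meaning", vocabulary)  # [0 0 1 0 0]

# The dot product of two different one-hot vectors is always 0:
# in this representation no word is similar to any other word.
print(bank, meaning, np.dot(bank, meaning))
```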
Note that each word is represented only once in the vector. Words with multiple meanings, like "bank", are therefore more difficult to represent. There is research into using multiple vectors for one word, so that an ambiguous word does not end up in the middle of its meanings (but you already get far by using one dense vector per word).
From: http://pad.constantvzw.org/public_pad/neural_networks_3