Jack Dermody


Sequence to Sequence with LSTM

One way to produce output whose length differs from the input is with a sequence to sequence (STS) recurrent neural network. In this example, our dictionary size (the number of possible characters) is 10, and each generated sequence is of length 5. In a recurrent autoencoder the input and output sequence lengths are necessarily the same; here, however, we are reusing only the encoder's ability to find the relevant discriminative features of the input as it compresses the input sequence into a single embedding.
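To make the idea concrete, here is a minimal NumPy sketch of the encoder/decoder split described above. It is not the author's implementation: the hidden size, the all-zeros start token, and the random (untrained) weights are all illustrative assumptions. The encoder folds a sequence of any length into one fixed-size embedding (its final LSTM state), and the decoder then unrolls from that embedding for a different number of steps, which is what lets the output length vary against the input.

```python
import numpy as np

rng = np.random.default_rng(0)

DICT_SIZE = 10   # number of possible characters (from the text)
SEQ_LEN = 5      # generated sequence length (from the text)
HIDDEN = 16      # embedding size (illustrative assumption)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: four gates computed from input x and hidden h."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)       # update cell state
    h = o * np.tanh(c)               # compute new hidden state
    return h, c

def init_params(in_size, hidden):
    """Random, untrained weights for one LSTM cell (4 stacked gates)."""
    return (rng.normal(0, 0.1, (4 * hidden, in_size)),
            rng.normal(0, 0.1, (4 * hidden, hidden)),
            np.zeros(4 * hidden))

enc = init_params(DICT_SIZE, HIDDEN)     # encoder LSTM
dec = init_params(DICT_SIZE, HIDDEN)     # decoder LSTM
W_out = rng.normal(0, 0.1, (DICT_SIZE, HIDDEN))  # hidden -> character scores

def encode(seq):
    """Fold an input sequence of any length into one fixed-size state."""
    h = c = np.zeros(HIDDEN)
    for ch in seq:
        x = np.eye(DICT_SIZE)[ch]        # one-hot character
        h, c = lstm_step(x, h, c, *enc)
    return h, c                          # final state = the embedding

def decode(h, c, steps=SEQ_LEN):
    """Unroll the decoder from the embedding for a chosen number of steps."""
    out = []
    x = np.zeros(DICT_SIZE)              # start token (assumed all-zeros)
    for _ in range(steps):
        h, c = lstm_step(x, h, c, *dec)
        ch = int(np.argmax(W_out @ h))
        out.append(ch)
        x = np.eye(DICT_SIZE)[ch]        # feed prediction back as next input
    return out

# input length 7, output length 5: the two lengths need not match
emb = encode([3, 1, 4, 1, 5, 9, 2])
generated = decode(*emb)
```

With trained weights the decoder's outputs would reproduce or transform the input; here the point is only the shape of the computation: variable-length input in, fixed embedding in the middle, fixed-length (5-step) output out.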