1. Basic Embedding Model
1-1. NNLM (Neural Network Language Model) - Predict Next Word
A Neural Probabilistic Language Model (2003)
NNLM_Tensor.ipynb, NNLM_Torch.ipynb
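As a quick illustration of what these notebooks implement, here is a minimal PyTorch sketch of the NNLM forward pass: concatenate the embeddings of the previous words, pass them through a tanh hidden layer, and score the whole vocabulary. All sizes are toy values, the paper's direct input-to-output connections are omitted, and this is not the notebook code.

```python
import torch
import torch.nn as nn

class NNLM(nn.Module):
    def __init__(self, vocab_size=10, n_prev=2, emb_dim=4, hidden=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # word embedding table C
        self.hid = nn.Linear(n_prev * emb_dim, hidden) # hidden layer H
        self.out = nn.Linear(hidden, vocab_size)       # output layer U

    def forward(self, x):                      # x: (batch, n_prev) word indices
        e = self.emb(x).flatten(1)             # concatenated context embeddings
        return self.out(torch.tanh(self.hid(e)))  # logits over the vocabulary

model = NNLM()
logits = model(torch.tensor([[1, 2]]))  # predict the word after tokens 1 and 2
print(logits.argmax(-1))
```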
1-2. Word2Vec (Skip-gram) - Embed Words and Show Graph
Distributed Representations of Words and Phrases and their Compositionality (2013)
Word2Vec_Tensor(NCE_loss).ipynb, Word2Vec_Tensor(Softmax).ipynb, Word2Vec_Torch(Softmax).ipynb
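One skip-gram training step in the softmax variant, sketched with toy sizes (the NCE-loss notebook replaces the full softmax with noise-contrastive estimation): the center word's embedding is scored against every word, and the loss pulls the observed context word's score up.

```python
import torch
import torch.nn as nn

vocab_size, emb_dim = 8, 2
W_in = nn.Embedding(vocab_size, emb_dim)            # center-word embeddings
W_out = nn.Linear(emb_dim, vocab_size, bias=False)  # output (context) matrix

center = torch.tensor([3])       # index of the center word
context = torch.tensor([5])      # index of an observed context word
logits = W_out(W_in(center))     # score every word as a candidate context
loss = nn.functional.cross_entropy(logits, context)
loss.backward()                  # an SGD step would update both matrices
```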
1-3. FastText (Application Level) - Sentence Classification
Bag of Tricks for Efficient Text Classification (2016)
FastText.ipynb
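The classifier in the paper is essentially an averaged bag of word (and subword n-gram) embeddings followed by a linear layer. A minimal sketch with toy sizes, subword hashing omitted:

```python
import torch
import torch.nn as nn

class FastTextClassifier(nn.Module):
    def __init__(self, vocab_size=100, emb_dim=10, n_classes=2):
        super().__init__()
        # EmbeddingBag averages the embeddings of each sentence's tokens
        self.emb = nn.EmbeddingBag(vocab_size, emb_dim, mode="mean")
        self.fc = nn.Linear(emb_dim, n_classes)

    def forward(self, token_ids, offsets):
        return self.fc(self.emb(token_ids, offsets))

model = FastTextClassifier()
tokens = torch.tensor([4, 7, 9, 1, 2])  # two sentences packed into one tensor
offsets = torch.tensor([0, 3])          # sentence boundaries: [0:3] and [3:5]
print(model(tokens, offsets).shape)     # (2, n_classes)
```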
2. CNN (Convolutional Neural Network)
2-1. TextCNN - Binary Sentiment Classification
Convolutional Neural Networks for Sentence Classification (2014)
TextCNN_Tensor.ipynb, TextCNN_Torch.ipynb
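The core of TextCNN, sketched with toy dimensions: convolve filters of several widths over the embedded sentence, max-pool each feature map over time, and classify from the concatenated features.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=50, emb_dim=8, n_filters=4,
                 sizes=(2, 3), n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(nn.Conv1d(emb_dim, n_filters, k) for k in sizes)
        self.fc = nn.Linear(n_filters * len(sizes), n_classes)

    def forward(self, x):                        # x: (batch, seq_len)
        e = self.emb(x).transpose(1, 2)          # (batch, emb_dim, seq_len)
        pooled = [c(e).relu().max(dim=2).values  # max-over-time pooling
                  for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

print(TextCNN()(torch.randint(0, 50, (3, 7))).shape)  # (3, n_classes)
```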
2-2. DCNN (Dynamic Convolutional Neural Network)
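No notebook is listed for this entry. The operation that distinguishes DCNN from a plain TextCNN is (dynamic) k-max pooling, which keeps the k largest activations of each feature map in their original order; a standalone sketch:

```python
import torch

def kmax_pooling(x, k, dim=-1):
    # take the indices of the k largest values, then re-sort those indices
    # so the selected activations keep their original sequence order
    idx = x.topk(k, dim=dim).indices.sort(dim=dim).values
    return x.gather(dim, idx)

x = torch.tensor([[1.0, 5.0, 2.0, 4.0, 3.0]])
print(kmax_pooling(x, k=3))  # tensor([[5., 4., 3.]]) -- order preserved
```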
3. RNN (Recurrent Neural Network)
3-1. TextRNN - Predict Next Step
Finding Structure in Time (1990)
TextRNN_Tensor.ipynb, TextRNN_Torch.ipynb
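Next-step prediction with a vanilla RNN, in a minimal sketch with toy sizes: the final hidden state of the sequence is projected back onto the vocabulary.

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden = 10, 4, 6
emb = nn.Embedding(vocab_size, emb_dim)
rnn = nn.RNN(emb_dim, hidden, batch_first=True)
out = nn.Linear(hidden, vocab_size)

x = torch.tensor([[1, 4, 2]])  # a three-step input sequence
_, h_n = rnn(emb(x))           # h_n: final hidden state, (1, batch, hidden)
logits = out(h_n.squeeze(0))   # predict the next token from that state
print(logits.argmax(-1))
```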
3-2. TextLSTM - Autocomplete
Long Short-Term Memory (1997)
TextLSTM_Tensor.ipynb, TextLSTM_Torch.ipynb
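The autocomplete task, sketched at the character level: an LSTM reads all but the last letter of a word and predicts the final letter. The word and alphabet here are toy examples, and the model is untrained; this is not the notebook code.

```python
import torch
import torch.nn as nn

chars = "abcdefghijklmnopqrstuvwxyz"
idx = {c: i for i, c in enumerate(chars)}

emb = nn.Embedding(26, 8)
lstm = nn.LSTM(8, 16, batch_first=True)
out = nn.Linear(16, 26)

word = "love"                                    # predict 'e' from "lov"
x = torch.tensor([[idx[c] for c in word[:-1]]])
_, (h_n, _) = lstm(emb(x))                       # final hidden state
pred = out(h_n.squeeze(0)).argmax(-1)
print(chars[pred.item()])                        # untrained, so likely wrong
```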
3-3. Bi-LSTM - Predict Next Word in Long Sentence
Bi_LSTM_Tensor.ipynb, Bi_LSTM_Torch.ipynb
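A minimal bidirectional-LSTM sketch with toy sizes: the sentence is read in both directions, and the concatenated outputs at the last time step are projected onto the vocabulary.

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden = 20, 6, 8
emb = nn.Embedding(vocab_size, emb_dim)
bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
out = nn.Linear(2 * hidden, vocab_size)  # forward + backward states concatenated

x = torch.randint(0, vocab_size, (1, 5))  # a five-token sentence
o, _ = bilstm(emb(x))                     # o: (1, 5, 2 * hidden)
logits = out(o[:, -1])                    # predict the next word
print(logits.shape)                       # (1, vocab_size)
```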
4. Attention Mechanism
4-1. Seq2Seq - Change Word
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation (2014)
Seq2Seq_Tensor.ipynb, Seq2Seq_Torch.ipynb
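The encoder-decoder idea in a minimal GRU sketch with toy sizes: the encoder compresses the source into its final hidden state, which initializes the decoder (teacher forcing on the shifted target at train time).

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden = 12, 5, 7
emb = nn.Embedding(vocab_size, emb_dim)
encoder = nn.GRU(emb_dim, hidden, batch_first=True)
decoder = nn.GRU(emb_dim, hidden, batch_first=True)
out = nn.Linear(hidden, vocab_size)

src = torch.tensor([[2, 4, 6]])         # source tokens
tgt_in = torch.tensor([[0, 8, 9]])      # target shifted right (0 = start symbol)
_, ctx = encoder(emb(src))              # context vector summarizing the source
dec_out, _ = decoder(emb(tgt_in), ctx)  # decode conditioned on that context
print(out(dec_out).shape)               # (1, 3, vocab_size) next-token logits
```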
4-2. Seq2Seq with Attention - Translate
Neural Machine Translation by Jointly Learning to Align and Translate (2014)
Seq2Seq(Attention)_Tensor.ipynb, Seq2Seq(Attention)_Torch.ipynb
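What attention adds, in a sketch with toy sizes: each decoder state is scored against every encoder output, and the softmaxed scores mix the encoder states into a per-step context vector. Dot-product scoring is used here for brevity; the paper uses an additive MLP score.

```python
import torch
import torch.nn as nn

emb_dim, hidden, vocab_size = 5, 7, 12
emb = nn.Embedding(vocab_size, emb_dim)
encoder = nn.GRU(emb_dim, hidden, batch_first=True)
decoder = nn.GRU(emb_dim, hidden, batch_first=True)
out = nn.Linear(2 * hidden, vocab_size)

src, tgt_in = torch.tensor([[2, 4, 6]]), torch.tensor([[0, 8]])
enc_out, h = encoder(emb(src))              # enc_out: (1, src_len, hidden)
dec_out, _ = decoder(emb(tgt_in), h)        # dec_out: (1, tgt_len, hidden)
scores = dec_out @ enc_out.transpose(1, 2)  # (1, tgt_len, src_len)
weights = scores.softmax(dim=-1)            # attention distribution
context = weights @ enc_out                 # weighted sum of encoder states
logits = out(torch.cat([dec_out, context], dim=-1))
print(logits.shape)                         # (1, tgt_len, vocab_size)
```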
4-3. Bi-LSTM with Attention - Binary Sentiment Classification
Bi_LSTM(Attention)_Tensor.ipynb, Bi_LSTM(Attention)_Torch.ipynb
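For classification, attention pools the Bi-LSTM outputs into one sentence vector instead of taking the last time step. A minimal sketch with toy sizes:

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden, n_classes = 30, 6, 8, 2
emb = nn.Embedding(vocab_size, emb_dim)
bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
attn = nn.Linear(2 * hidden, 1)  # scores each time step
fc = nn.Linear(2 * hidden, n_classes)

x = torch.randint(0, vocab_size, (1, 6))
o, _ = bilstm(emb(x))            # (1, 6, 2 * hidden)
w = attn(o).softmax(dim=1)       # (1, 6, 1) attention weights over time
sent = (w * o).sum(dim=1)        # attention-weighted sentence vector
print(fc(sent))                  # class logits
```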
5. Models Based on Transformer
5-1. The Transformer - Translate
Attention Is All You Need (2017)
Transformer_Torch.ipynb, Transformer(Greedy_decoder)_Torch.ipynb
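The notebooks build the Transformer from scratch; as a compact stand-in, this sketch drives PyTorch's built-in nn.Transformer with a causal target mask. Sizes are toy values and positional encoding is omitted.

```python
import torch
import torch.nn as nn

d_model, vocab_size = 16, 12
src_emb = nn.Embedding(vocab_size, d_model)
tgt_emb = nn.Embedding(vocab_size, d_model)
model = nn.Transformer(d_model=d_model, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       dim_feedforward=32, batch_first=True)
out = nn.Linear(d_model, vocab_size)

src = torch.randint(0, vocab_size, (1, 5))
tgt = torch.randint(0, vocab_size, (1, 4))
mask = model.generate_square_subsequent_mask(4)  # decoder can't peek ahead
h = model(src_emb(src), tgt_emb(tgt), tgt_mask=mask)
print(out(h).shape)                              # (1, 4, vocab_size)
```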
5-2. BERT - Classify Next Sentence & Predict Masked Tokens
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (2018)
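BERT pretraining couples two heads on one encoder: a masked-token head over every position and a next-sentence head over the [CLS] position. A toy sketch with a small TransformerEncoder standing in for BERT; segment and position embeddings are omitted, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 20, 16
emb = nn.Embedding(vocab_size, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=32,
                               batch_first=True), num_layers=2)
mlm_head = nn.Linear(d_model, vocab_size)  # predict masked tokens
nsp_head = nn.Linear(d_model, 2)           # IsNext / NotNext

tokens = torch.randint(0, vocab_size, (1, 8))  # [CLS] sent_a [SEP] sent_b ...
h = encoder(emb(tokens))                       # (1, 8, d_model)
print(mlm_head(h).shape, nsp_head(h[:, 0]).shape)  # (1, 8, vocab) and (1, 2)
```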