"A Neural Conversational Model" implemented in TensorFlow: a deep-learning chatbot
This project tries to reproduce the results of the paper A Neural Conversational Model (a.k.a. the Google chatbot). It uses a recurrent neural network (the seq2seq model) for sentence prediction, and is developed in Python with TensorFlow.
The loading part of the program is based on the Torch project neuralconvo from macournoyer.
DeepQA currently supports the following conversation corpora:
- Cornell Movie Dialogs corpus (default). Already included when cloning the repository.
- OpenSubtitles (thanks to Eschnou). A much bigger corpus (but also noisier). To use it, follow those instructions and use the flag --corpus opensubs.
- Supreme Court Conversation Data (thanks to julien-c). Available using --corpus scotus. See the instructions for installation.
- Ubuntu Dialogue Corpus (thanks to julien-c). Available using --corpus ubuntu. See the instructions for installation.
- Your own data (thanks to julien-c), using a simple custom conversation format (see here for more info).
To speed up training, it is also possible to use pre-trained word embeddings (thanks to Eschnou). More info here.
Installation
The program requires the following dependencies (easy to install with pip: pip3 install -r requirements.txt):
- python 3.5
- tensorflow (tested with v1.0)
- numpy
- CUDA (for using GPU)
- nltk (natural language toolkit, to tokenize the sentences)
- tqdm (for the nice progression bars)
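Before training, it can help to check that the pip-installed dependencies are actually importable. This is a small sanity-check sketch, not part of the repository (CUDA availability is not covered here, since it comes through TensorFlow's GPU build):

```python
import importlib.util

# The pip-installable dependencies listed above.
required = ["tensorflow", "numpy", "nltk", "tqdm"]

# find_spec returns None when a module cannot be located.
missing = [name for name in required if importlib.util.find_spec(name) is None]

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All Python dependencies found.")
```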
You might also need to download additional data for nltk to work:
python3 -m nltk.downloader punkt
The Cornell dataset is already included. For the other datasets, see the readme files in their respective folders (inside data/).
The web interface requires some additional packages:
- django (tested with 1.10)
- channels
- Redis (see here)
- asgi_redis (at least 1.0)
Docker installation is also supported; see the more detailed tutorial here.
Running
Chatbot
To train the model, simply run main.py. Once trained, you can test the results with main.py --test
(results are generated in save/model/samples_predictions.txt) or with main.py --test interactive (more fun).
Here are some flags which could be useful. For more help and options, use python main.py -h:
- --modelTag <name>: allows giving a name to the current model, to differentiate between models when testing/training.
- --keepAll: use this flag when training if, when testing, you want to see the predictions at different steps (it can be interesting to see the program change its name and age as the training progresses). Warning: it can quickly take a lot of storage space if you don't increase the --saveEvery option.
- --filterVocab 20 or --vocabularySize 30000: limit the vocabulary size to optimize performance and memory usage. Words used fewer than 20 times are replaced by the <unknown> token, and a maximum vocabulary size is enforced.
- --verbose: when testing, print the sentences as they are computed.
- --playDataset: show some dialogue samples from the dataset (can be used in conjunction with --createDataset if that is the only action you want to perform).
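For example, the flags above combine into a single invocation. The model tag myModel here is just an illustrative name, not one the project defines:

```shell
# Train a tagged model with a filtered, capped vocabulary...
python3 main.py --modelTag myModel --filterVocab 20 --vocabularySize 30000
# ...then chat with that same model once training is done.
python3 main.py --modelTag myModel --test interactive
```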
To visualize the computational graph and the cost with TensorBoard, just run tensorboard --logdir save/.
By default, the network architecture is a standard encoder/decoder with two LSTM layers (hidden size 256) and a vocabulary embedding size of 32. The network is trained with ADAM. The maximum sentence length is set to 10 words, but it can be increased.
Results
Of course, the network is not going to be very good at chatting:
Q: Hi
A: Hi.
Q: What is your name ?
A: Laura.
Q: What does that mean ?
A: I dunno.
Q: How old are you ?
A: thirty-five.
Q: Will Google hire me ?
A: No.
Q: Tell me the alphabet
A: Fuck you.
Q: That's not nice
A: Yeah.
Here are some cases where it fails to answer correctly:
Q: Who is Laura ?
A: My brother.
Q: Say 'goodbye'
A: Alright.
Q: What is cooking ?
A: A channel.
Q: Can you say no ?
A: No.
Q: Two plus two
A: Manny...
Pre-trained model
Project screenshot:
chatbot_miniature.png
Live test screenshot:
Screenshot from 2017-09-05 14-47-52.png
Step-by-step tutorial:
1. Download the project:
https://github.com/Conchylicultor/DeepQA
2. Download the trained model:
https://drive.google.com/file/d/0Bw-phsNSkq23OXRFTkNqN0JGUU0/view
(If the link does not open, I will upload the model to Baidu Pan tonight and share it at http://www.tensorflownews.com/)
3. Unzip it into the project's save directory,
as shown:
Screenshot from 2017-09-05 14-52-13.png
4. Copy the file save/model-pretrainedv2/dataset-cornell-old-lenght10-filter0-vocabSize0.pkl into data/samples/,
as shown:
Screenshot from 2017-09-05 14-55-00.png
5. Run the following command in the project directory:
python3 main.py --modelTag pretrainedv2 --test interactive
Once the program has loaded the pre-trained model, it looks like this:
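Steps 1-5 above can be sketched as one shell session. The archive name model-pretrainedv2.zip is an assumption; substitute whatever file you actually downloaded:

```shell
# 1. Download the project
git clone https://github.com/Conchylicultor/DeepQA
cd DeepQA

# 2-3. Unzip the downloaded pre-trained model into save/
#      (the archive name is hypothetical)
unzip ~/Downloads/model-pretrainedv2.zip -d save/

# 4. Copy the dataset file into data/samples/
cp save/model-pretrainedv2/dataset-cornell-old-lenght10-filter0-vocabSize0.pkl data/samples/

# 5. Chat with the pre-trained model
python3 main.py --modelTag pretrainedv2 --test interactive
```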
Screenshot from 2017-09-05 14-57-14.png
Chatbot resource collection
Projects, corpora, papers, and tutorials:
https://github.com/fendouai/Awesome-Chatbot
More tutorials:
http://www.tensorflownews.com/
DeepQA
https://github.com/Conchylicultor/DeepQA
Note: to make the project easier to understand, the description above translates part of the project's readme, mainly covering how to run the project with pre-processed data.