DL / LSTM: Character-level prediction on the "Alice's Adventures in Wonderland" novel dataset with an LSTM (deepened layers, Keras-based)
Contents

Character-level prediction on the "Alice's Adventures in Wonderland" dataset with an LSTM (deepened layers, Keras-based)
Design approach
Output
Core code
Character-level prediction on the "Alice's Adventures in Wonderland" dataset with an LSTM (deepened layers, Keras-based)

Design approach

Dataset download: https://download.csdn.net/download/qq_41185868/13767751
輸出結(jié)果
Using TensorFlow backend.
F:\Program Files\Python\Python36\lib\site-packages\tensorflow\python\framework\dtypes.py:523-532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. (raised for _np_qint8, _np_quint8, _np_qint16, _np_quint16, _np_qint32 and np_resource)
[nltk_data] Error loading punkt: <urlopen error [Errno 11004] getaddrinfo failed>
raw_text[:10] : alice's ad
Total Characters: 144413
chars ['\n', ' ', '!', '"', "'", '(', ')', '*', ',', '-', '.', '0', '3', ':', ';', '?', '[', ']', '_', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
Total Vocab: 45
sentences 1625 ["alice's adventures in wonderland\n\nlewis carroll\n\nthe millennium fulcrum edition 3.0\n\nchapter i. down the rabbit-hole\n\nalice was beginning to get very tired of sitting by her sister on the\nbank, and of having nothing to do: once or twice she had peeped into the\nbook her sister was reading, but it had no pictures or conversations in\nit, 'and what is the use of a book,' thought alice 'without pictures or\nconversations?'", 'so she was considering in her own mind (as well as she could, for the\nhot day made her feel very sleepy and stupid), whether the pleasure\nof making a daisy-chain would be worth the trouble of getting up and\npicking the daisies, when suddenly a white rabbit with pink eyes ran\nclose by her.', "there was nothing so very remarkable in that; nor did alice think it so\nvery much out of the way to hear the rabbit say to itself, 'oh dear!", 'oh dear!', "i shall be late!'"]
lengths (1625,) [420 289 140 ... 636 553 7]
CharMapInt_dict 45 {'\n': 0, ' ': 1, '!': 2, '"': 3, "'": 4, '(': 5, ')': 6, '*': 7, ',': 8, '-': 9, '.': 10, '0': 11, '3': 12, ':': 13, ';': 14, '?': 15, '[': 16, ']': 17, '_': 18, 'a': 19, 'b': 20, 'c': 21, 'd': 22, 'e': 23, 'f': 24, 'g': 25, 'h': 26, 'i': 27, 'j': 28, 'k': 29, 'l': 30, 'm': 31, 'n': 32, 'o': 33, 'p': 34, 'q': 35, 'r': 36, 's': 37, 't': 38, 'u': 39, 'v': 40, 'w': 41, 'x': 42, 'y': 43, 'z': 44}
IntMapChar_dict 45 {0: '\n', 1: ' ', 2: '!', 3: '"', 4: "'", 5: '(', 6: ')', 7: '*', 8: ',', 9: '-', 10: '.', 11: '0', 12: '3', 13: ':', 14: ';', 15: '?', 16: '[', 17: ']', 18: '_', 19: 'a', 20: 'b', 21: 'c', 22: 'd', 23: 'e', 24: 'f', 25: 'g', 26: 'h', 27: 'i', 28: 'j', 29: 'k', 30: 'l', 31: 'm', 32: 'n', 33: 'o', 34: 'p', 35: 'q', 36: 'r', 37: 's', 38: 't', 39: 'u', 40: 'v', 41: 'w', 42: 'x', 43: 'y', 44: 'z'}
dataX: 144313 100 [[19, 30, 27, 21, 23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32], ...] (the remaining four printed windows are the same window shifted one character at a time)
dataY: 144313 [1, 38, 26, 23, 1]
Total patterns: 144313
X_train.shape (144313, 100, 1)
Y_train.shape (144313, 45)
Init data, after read_out, chars: 144313 alice's adventures in wonderlandlewis carrolltge millennium fulcrum edition 3.0cgapter i. down
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 256)               264192
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 45)                11565
=================================================================
Total params: 275,757
Trainable params: 275,757
Non-trainable params: 0
_________________________________________________________________
LSTM_Model None
F:\File_Jupyter\实用代码\NeuralNetwork(神经网络)\CharacterLanguageLSTM.py:135: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. LSTM_Model.fit(X_train[:train_index], Y_train[:train_index], nb_epoch=10, batch_size=64, callbacks=callbacks_list)
2020-12-23 23:42:07.919094: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Epoch 1/10   1000/1000 [==============================] - 7s 7ms/step - loss: 3.3925
Epoch 00001: loss improved from inf to 3.39249, saving model to hdf5/weights-improvement-01-3.3925.hdf5
Epoch 2/10   1000/1000 [==============================] - 5s 5ms/step - loss: 3.0371
Epoch 00002: loss improved from 3.39249 to 3.03705, saving model to hdf5/weights-improvement-02-3.0371.hdf5
Epoch 3/10   1000/1000 [==============================] - 6s 6ms/step - loss: 3.0225
Epoch 00003: loss improved from 3.03705 to 3.02249, saving model to hdf5/weights-improvement-03-3.0225.hdf5
Epoch 4/10   1000/1000 [==============================] - 6s 6ms/step - loss: 3.0352
Epoch 00004: loss did not improve from 3.02249
Epoch 5/10   1000/1000 [==============================] - 5s 5ms/step - loss: 3.0120
Epoch 00005: loss improved from 3.02249 to 3.01205, saving model to hdf5/weights-improvement-05-3.0120.hdf5
Epoch 6/10   1000/1000 [==============================] - 5s 5ms/step - loss: 3.0070
Epoch 00006: loss improved from 3.01205 to 3.00701, saving model to hdf5/weights-improvement-06-3.0070.hdf5
Epoch 7/10   1000/1000 [==============================] - 6s 6ms/step - loss: 2.9903
Epoch 00007: loss improved from 3.00701 to 2.99027, saving model to hdf5/weights-improvement-07-2.9903.hdf5
Epoch 8/10   1000/1000 [==============================] - 6s 6ms/step - loss: 3.0064
Epoch 00008: loss did not improve from 2.99027
Epoch 9/10   1000/1000 [==============================] - 5s 5ms/step - loss: 2.9944
Epoch 00009: loss did not improve from 2.99027
Epoch 10/10  1000/1000 [==============================] - 5s 5ms/step - loss: 2.9963
Epoch 00010: loss did not improve from 2.99027
LSTM_Pre_word.shape: (3, 45)
after LSTM read_out, chars: 3 ["\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", "\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", '\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n "\n\n!!\n\n \n !\n\n! \' \n\n\n\n\n']
LSTM_Model, Seed: " ent down its head to hide a smile: some of the other birds tittered audibly.'what i was going to s " 199 100
Generated Sequence: Done.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_2 (LSTM)                (None, 100, 256)          264192
_________________________________________________________________
dropout_2 (Dropout)          (None, 100, 256)          0
_________________________________________________________________
lstm_3 (LSTM)                (None, 64)                82176
_________________________________________________________________
dropout_3 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 45)                2925
=================================================================
Total params: 349,293
Trainable params: 349,293
Non-trainable params: 0
_________________________________________________________________
DeepLSTM_Model None
F:\File_Jupyter\实用代码\NeuralNetwork(神经网络)\CharacterLanguageLSTM.py:246: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`. DeepLSTM_Model.fit(X_train[:train_index], Y_train[:train_index], nb_epoch=2, batch_size=256, callbacks=callbacks_list)
Epoch 1/2    1000/1000 [==============================] - 10s 10ms/step - loss: 3.7883
Epoch 00001: loss improved from inf to 3.78827, saving model to hdf5/weights-improvement-01-3.7883.hdf5
Epoch 2/2    1000/1000 [==============================] - 8s 8ms/step - loss: 3.6151
Epoch 00002: loss improved from 3.78827 to 3.61512, saving model to hdf5/weights-improvement-02-3.6151.hdf5
DeepLSTM_Pre_word.shape: (3, 45)
after DeepLSTM read_out, chars: 3 ["\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", "\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", '\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n "\n\n!!\n\n \n !\n\n! \' \n\n\n\n\n']
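The Seed / Generated Sequence lines near the bottom of the run come from a sampling loop: pick a random training window as the seed, predict the next character, append it, and slide the window forward. The exact loop is not shown in the article's code excerpt, so the following is a hedged sketch reusing the dataX / IntMapChar_dict / chars_len names from the preparation step above; the 200-step generation length is an assumption:

import numpy as np

# Start from a random 100-character training window and generate greedily
start = np.random.randint(0, len(dataX) - 1)
pattern = list(dataX[start])
print('Seed: "', ''.join(IntMapChar_dict[v] for v in pattern), '"')

generated = []
for _ in range(200):                                      # generation length is an assumption
    x = np.reshape(pattern, (1, len(pattern), 1)) / float(chars_len)
    prediction = LSTM_Model.predict(x, verbose=0)         # shape (1, 45), softmax over the vocab
    index = int(np.argmax(prediction))                    # greedy: pick the most likely character
    generated.append(IntMapChar_dict[index])
    pattern.append(index)                                 # slide the window one step forward
    pattern = pattern[1:]
print(''.join(generated))

Note that with only the first 1000 patterns and a loss still near 3.0, the model has essentially learned the marginal character distribution, which is why the read_out strings above are mostly newlines and punctuation.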
Core code
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Embedding

# Single-layer LSTM: 256 units over (seq_length, 1) inputs, softmax over the 45-char vocabulary
LSTM_Model = Sequential()
LSTM_Model.add(LSTM(256, input_shape=(X_train.shape[1], X_train.shape[2])))
LSTM_Model.add(Dropout(0.2))
LSTM_Model.add(Dense(Y_train.shape[1], activation='softmax'))
LSTM_Model.compile(loss='categorical_crossentropy', optimizer='adam')
print('LSTM_Model \n', LSTM_Model.summary())

# Variant with a learned character embedding instead of raw scaled integers
embedding_vector_length = 32
LSTMWithE_Model = Sequential()
LSTMWithE_Model.add(Embedding(chars_len, embedding_vector_length, input_length=seq_length))
LSTMWithE_Model.add(LSTM(256))
LSTMWithE_Model.add(Dropout(0.2))
LSTMWithE_Model.add(Dense(Y_train.shape[1], activation='softmax'))
LSTMWithE_Model.compile(loss='categorical_crossentropy', optimizer='adam')
print(LSTMWithE_Model.summary())

# Deepened (stacked) LSTM: the first layer returns full sequences so a second LSTM can consume them
DeepLSTM_Model = Sequential()
DeepLSTM_Model.add(LSTM(256, input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True))
DeepLSTM_Model.add(Dropout(0.2))
DeepLSTM_Model.add(LSTM(64))
DeepLSTM_Model.add(Dropout(0.2))
DeepLSTM_Model.add(Dense(Y_train.shape[1], activation='softmax'))
DeepLSTM_Model.compile(loss='categorical_crossentropy', optimizer='adam')
print('DeepLSTM_Model \n', DeepLSTM_Model.summary())
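The UserWarnings in the log reveal how the models were fit: on a slice of the training set, with a ModelCheckpoint callback writing hdf5/weights-improvement-<epoch>-<loss>.hdf5 files, and using the deprecated nb_epoch argument. A minimal sketch of that training call is below; train_index = 1000 is inferred from the 1000-pattern runs in the log, and passing epochs instead of nb_epoch silences the warning:

from keras.callbacks import ModelCheckpoint

# Save a checkpoint whenever the training loss improves; the filename pattern
# matches the hdf5/weights-improvement-<epoch>-<loss>.hdf5 files in the log
filepath = "hdf5/weights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1,
                             save_best_only=True, mode='min')
callbacks_list = [checkpoint]

# The logged run trains on only the first 1000 of the 144313 patterns
train_index = 1000
LSTM_Model.fit(X_train[:train_index], Y_train[:train_index],
               epochs=10, batch_size=64, callbacks=callbacks_list)

The deep model is trained the same way with epochs=2 and batch_size=256, as the second UserWarning in the log shows.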
Summary

The above covers character-level prediction on the "Alice's Adventures in Wonderland" dataset with single-layer and deepened LSTMs in Keras; hopefully it helps you solve the problems you run into.