Hi, I am using your basic LSTM architecture to recreate the chatbot, but with GloVe embeddings.
During training, my training accuracy gets stuck at a very low value (0.1969) and makes no progress. I am attaching my code below. Can you tell me what can be done to improve the training?
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense, LSTM
from keras.optimizers import Adam
# model.reset_states()
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(LSTM(units=100, return_sequences=True, kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal", activation='sigmoid'))
model.add(LSTM(units=100, return_sequences=True, kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal", activation='sigmoid'))
model.add(LSTM(units=100, return_sequences=True, kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal", activation='sigmoid'))
model.add(LSTM(units=100, return_sequences=True, kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal", activation='sigmoid'))
model.summary()
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
model.compile(loss='cosine_proximity', optimizer='adam', metrics=['accuracy'])
# model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train,
          epochs=500,
          batch_size=32,
          validation_data=(x_val, y_val))
Epoch 498/500
60/60 [==============================] - 0s 3ms/step - loss: -0.1303 - acc: 0.1969 - val_loss: -0.1785 - val_acc: 0.2909
Epoch 499/500
60/60 [==============================] - 0s 3ms/step - loss: -0.1303 - acc: 0.1969 - val_loss: -0.1785 - val_acc: 0.2909
Epoch 500/500
60/60 [==============================] - 0s 3ms/step - loss: -0.1303 - acc: 0.1969 - val_loss: -0.1785 - val_acc: 0.2909
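For reference, Keras's cosine_proximity loss is the negative mean cosine similarity between targets and predictions, so it is bounded in [-1, 0]: a loss plateauing at -0.13 means the predicted vectors are barely aligned with the targets, and the accuracy metric is not very meaningful with this loss. A minimal NumPy sketch of the computation (my own reimplementation for illustration, not the Keras source):

```python
import numpy as np

def cosine_proximity(y_true, y_pred, eps=1e-12):
    # Negative mean cosine similarity, as in Keras's cosine_proximity loss:
    # L2-normalise each vector, then take the negated mean dot product.
    y_true = y_true / np.maximum(np.linalg.norm(y_true, axis=-1, keepdims=True), eps)
    y_pred = y_pred / np.maximum(np.linalg.norm(y_pred, axis=-1, keepdims=True), eps)
    return -np.mean(np.sum(y_true * y_pred, axis=-1))

a = np.array([[1.0, 0.0], [0.0, 2.0]])
# Perfectly aligned predictions (any positive scaling) give the minimum loss, -1.0.
print(cosine_proximity(a, 3.0 * a))
# Orthogonal predictions give a loss of 0, the worst attainable value here.
print(cosine_proximity(a, a[:, ::-1]))
```

So a value of -0.13 is much closer to the worst case (0) than to the best case (-1), consistent with the model having learned very little.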
Further training (on the same conversation data set) does not improve accuracy.
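One thing I would check first: the code passes activation='sigmoid' to every LSTM, replacing the default tanh cell activation. The sigmoid derivative peaks at 0.25, so stacking four sigmoid-activated layers can shrink gradients quickly and stall training, which could explain the plateau. A quick NumPy illustration of the two derivatives:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.linspace(-5.0, 5.0, 1001)

# The sigmoid derivative is at most 0.25 (attained at x = 0), so each
# stacked sigmoid activation multiplies backpropagated gradients by <= 0.25.
print(sigmoid_grad(x).max())

# tanh's derivative peaks at 1.0, which is one reason it is the default
# LSTM cell activation in Keras.
print((1.0 - np.tanh(x) ** 2).max())
```

Dropping the activation argument (to fall back to tanh) would be an easy experiment; the sigmoid gates inside the LSTM are unaffected by this argument.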