I am completely new to LSTMs. Are there any tips for optimizing my autoencoder for the task of reconstructing sequences of length 300? The bottleneck layer should have 10-15 neurons.

from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

timesteps = 300  # length of the input sequences

model = Sequential()
model.add(LSTM(128, activation='relu', input_shape=(timesteps, 1), return_sequences=True))
model.add(LSTM(64, activation='relu', return_sequences=False))
model.add(RepeatVector(timesteps))
model.add(LSTM(64, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mae')

The code was copied from: https://towardsdatascience.com/step-by-step-understanding-lstm-autoencoder-layers-ffab055b6352

At the moment the result is just a sequence of NaNs: [nan, nan, nan, ..., nan, nan]

The sequence looks similar to the figure below: [figure not included]
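For context, one common cause of an all-NaN output with 'relu' activations in LSTM layers is unscaled input data, whose activations can explode during training. A minimal sketch (assuming the raw data is a 1-D NumPy array; the function name `scale_to_unit` is hypothetical) of scaling a sequence to [0, 1] and reshaping it to the (samples, timesteps, features) layout Keras LSTM layers expect:

```python
import numpy as np

def scale_to_unit(seq):
    """Min-max scale a 1-D sequence to [0, 1] to help avoid exploding activations."""
    seq = np.asarray(seq, dtype=np.float64)
    lo, hi = seq.min(), seq.max()
    return (seq - lo) / (hi - lo)

# toy sequence standing in for the real len=300 data
seq = np.array([3.0, 7.0, 5.0, 11.0])
scaled = scale_to_unit(seq)

# reshape to (samples, timesteps, features) before calling model.fit
x = scaled.reshape(1, -1, 1)
```

Whether this resolves the NaNs depends on the data; gradient clipping or switching to the default 'tanh' activation are other commonly suggested remedies.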