
Python CNN LSTM (ValueError: strides should be of length 1, 1 or 3 but was 2)


拉風的咖菲貓 2022-12-14 20:48:09
I have been trying to train a ConvLSTM model on the MNIST dataset to broaden my knowledge of model development. I can't get past the error quoted in the title. Any help or hints are appreciated! I know the default value for strides is (1, 1), but I'm not sure how it ends up being 2.

import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, CuDNNLSTM, TimeDistributed, Reshape
from keras.utils import to_categorical
from keras.layers.convolutional import Conv2D, Conv3D
from keras.layers.pooling import MaxPooling2D, MaxPool3D
from keras.layers.core import Flatten

def prep_pixels(train, test):
    # convert from integers to floats
    train_norm = train.astype('float32')
    test_norm = test.astype('float32')
    # normalize to range 0-1
    train_norm = train_norm / 255.0
    test_norm = test_norm / 255.0
    # return normalized images
    return train_norm, test_norm

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape((x_train.shape[0], 28, 28, 1))
x_test = x_test.reshape((x_test.shape[0], 28, 28, 1))

y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

x_train, x_test = prep_pixels(x_train, x_test)

model = Sequential()
model.add(TimeDistributed(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1))))
model.add(TimeDistributed((MaxPooling2D((2, 2)))))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(32, activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))

opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))

Error

model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
strides = _get_sequence(strides, n, channel_index, "strides")
ValueError: strides should be of length 1, 1 or 3 but was 2

1 Answer

湖上湖

Has contributed 2003 experience points · earned 2+ upvotes

It looks like you haven't created a windowed dataset for your ConvLSTM yet, so you probably want to do that before calling model.fit:


d_train = tf.keras.preprocessing.sequence.TimeseriesGenerator(x_train, y_train, length=5, batch_size=64) # window size = 5

d_test = tf.keras.preprocessing.sequence.TimeseriesGenerator(x_test, y_test, length=5)

model.fit(d_train, epochs=1, validation_data=d_test)
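As a quick sanity check, you can inspect one batch from the generator (a minimal sketch; the shapes shown assume the length=5 and batch_size=64 settings above):

x_batch, y_batch = d_train[0]
print(x_batch.shape)  # (64, 5, 28, 28, 1): (batch, time, height, width, channels)
print(y_batch.shape)  # (64, 10): one-hot labels for the step after each window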

To stay consistent with your loss function, you will also need to disable returning sequences (or add another layer afterwards that does not return sequences):


model.add(tf.keras.layers.LSTM(32, activation='relu', return_sequences=False))
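Putting the pieces together, a minimal sketch of the adjusted model might look like this (reusing the imports and the d_train/d_test generators from above, and assuming the window length of 5; note that input_shape, now including the time dimension, is passed to the first TimeDistributed wrapper rather than to the inner Conv2D):

model = Sequential()
# each timestep is a 28x28x1 image; TimeDistributed applies the conv to every step
model.add(TimeDistributed(Conv2D(32, (3, 3), activation='relu'), input_shape=(5, 28, 28, 1)))
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(Flatten()))
# single vector per window, so the Dense output matches the (batch, 10) labels
model.add(LSTM(32, activation='relu', return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))

opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(d_train, epochs=1, validation_data=d_test)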


Answered 2022-12-14