I call `model.fit()` several times, each call training one block of layers while the other layers stay frozen.

Code:

```python
# Imports assumed (e.g. the qubvel "efficientnet" package for efn);
# input_tensor, loss_function, batch_size, top_epoch, verbosity,
# validation_split, features and labels are defined elsewhere.
import efficientnet.tfkeras as efn
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# create the base pre-trained model
base_model = efn.EfficientNetB0(input_tensor=input_tensor, weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)

# add fully-connected layers with dropout
x = Dense(x.shape[1], activation='relu', name='first_dense')(x)
x = Dropout(0.5)(x)
x = Dense(x.shape[1], activation='relu', name='output')(x)
x = Dropout(0.5)(x)
no_classes = 10
predictions = Dense(no_classes, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# first: train only the top layers (which were randomly initialized),
# i.e. freeze all convolutional layers
for layer in base_model.layers:
    layer.trainable = False

# FIRST COMPILE
model.compile(optimizer='Adam', loss=loss_function, metrics=['accuracy'])

# FIRST FIT
model.fit(features[train], labels[train],
          batch_size=batch_size,
          epochs=top_epoch,
          verbose=verbosity,
          validation_split=validation_split)

# generate generalization metrics
scores = model.evaluate(features[test], labels[test], verbose=1)
print(scores)

# let all layers be trainable
for layer in model.layers:
    layer.trainable = True

from tensorflow.keras.optimizers import SGD
```

Strangely, in the second fit, the accuracy of the first epoch is much lower than the accuracy of the last epoch of the first fit.

Results:

```
Epoch 40/40
6286/6286 [==============================] - 14s 2ms/sample - loss: 0.2370 - accuracy: 0.9211 - val_loss: 1.3579 - val_accuracy: 0.6762
874/874 [==============================] - 2s 2ms/sample - loss: 0.4122 - accuracy: 0.8764
Train on 6286 samples, validate on 1572 samples
Epoch 1/40
6286/6286 [==============================] - 60s 9ms/sample - loss: 5.9343 - accuracy: 0.5655 - val_loss: 2.4981 - val_accuracy: 0.5115
```

I suspect that the second fit does not pick up the weights from the first fit.
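A side note on the suspicion above: in Keras, calling `model.compile()` again does not reset the layer weights, only the optimizer state and the compiled training function. This can be checked directly with a minimal sketch (a toy two-layer model standing in for the EfficientNet above; the model and data here are hypothetical, chosen only to make the check self-contained):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

# Toy stand-in model (NOT the EfficientNet from the question).
inputs = tf.keras.Input(shape=(3,))
h = Dense(4, activation='relu')(inputs)
outputs = Dense(2, activation='softmax')(h)
model = tf.keras.Model(inputs, outputs)

# First compile + fit, as in phase one.
model.compile(optimizer='Adam', loss='categorical_crossentropy')
x = np.random.rand(16, 3).astype('float32')
y = tf.keras.utils.to_categorical(np.random.randint(2, size=16), 2)
model.fit(x, y, epochs=1, verbose=0)

# Snapshot the trained weights, then re-compile with a fresh
# optimizer, as in the second phase after unfreezing.
w_before = [w.copy() for w in model.get_weights()]
model.compile(optimizer=SGD(learning_rate=1e-4, momentum=0.9),
              loss='categorical_crossentropy')
w_after = model.get_weights()

# compile() preserves the weights exactly.
preserved = all(np.array_equal(a, b) for a, b in zip(w_before, w_after))
print("weights preserved across compile():", preserved)
```

So the accuracy drop at epoch 1 of the second fit is not caused by the weights being discarded; the re-compile only replaces the optimizer state.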