1 Answer

You can use Flatten() or GlobalAveragePooling1D before the output layer. A complete example:
import numpy
import tensorflow as tf

# NOTE: this shadows the built-in name `list`; see the remark below the output.
list = numpy.array([[[1., 2., 4.], [5., 6., 8.]], [[5., 6., 0.], [7., 2., 4.]]])

tpl = 3  # features per timestep
nl = 2   # timesteps per sample

model = tf.keras.Sequential([
    tf.keras.layers.Dense(nl * tpl, input_shape=(nl, tpl), activation='relu'),
    tf.keras.layers.Dense(64, activation='sigmoid'),
    # Collapse the timestep axis so the output layer produces one vector per sample.
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation='softmax')
])

model(list)
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[0.41599566, 0.58400434],
[0.41397247, 0.58602756]], dtype=float32)>
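The model above uses GlobalAveragePooling1D. A variant using Flatten() instead, the other option mentioned at the start, could look like the sketch below (flat_model is just an illustrative name; the data and constants are the ones defined above):

# Flatten keeps every timestep's features, so the final Dense layer sees
# nl * 64 inputs instead of the 64 averaged ones.
flat_model = tf.keras.Sequential([
    tf.keras.layers.Dense(nl * tpl, input_shape=(nl, tpl), activation='relu'),
    tf.keras.layers.Dense(64, activation='sigmoid'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation='softmax')
])
flat_model(list)  # also returns a (2, 2) tensor of per-class probabilities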
As the output above shows, you won't get plain 0s and 1s; you get a probability for each class. You should also avoid shadowing the built-in name list. For reference, model.summary() for this model gives:
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_12 (Dense)             (None, 2, 6)              24
_________________________________________________________________
dense_13 (Dense)             (None, 2, 64)             448
_________________________________________________________________
global_average_pooling1d (Gl (None, 64)                0
_________________________________________________________________
dense_14 (Dense)             (None, 2)                 130
=================================================================
Total params: 602
Trainable params: 602
Non-trainable params: 0
_________________________________________________________________
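If you do need hard 0/1 class labels rather than probabilities, a common approach is to take the argmax over the last axis of the softmax output. A minimal sketch using the model above:

probs = model(list)                 # shape (2, 2): one probability per class, per sample
labels = tf.argmax(probs, axis=-1)  # shape (2,): index of the most likely class
print(labels.numpy())               # [1 1] for the probabilities shown above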