I am trying to use a basic LSTM in TensorFlow and I get the following error:
TypeError: 'Tensor' object is not iterable.
The offending line is:
rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, x, sequence_length=seqlen,
                                             initial_state=init_state)
I am using version 1.0.1 on Windows 7. My inputs and labels have the following shapes:
x_shape = (50, 40, 18), y_shape = (50, 40)
where:
> batch size = 50
> sequence length = 40
> input vector length at each step = 18
I am building my graph as follows:
def build_graph(learn_rate, seq_len, state_size=32, batch_size=5):
    # use a fixed sequence length
    seqlen = tf.constant(seq_len, shape=[batch_size], dtype=tf.int32)
    # Placeholders
    x = tf.placeholder(tf.float32, [batch_size, None, 18])
    y = tf.placeholder(tf.float32, [batch_size, None])
    keep_prob = tf.constant(1.0)
    # RNN
    cell = tf.contrib.rnn.LSTMCell(state_size)
    init_state = tf.get_variable('init_state', [1, state_size],
                                 initializer=tf.constant_initializer(0.0))
    init_state = tf.tile(init_state, [batch_size, 1])
    rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, x, sequence_length=seqlen,
                                                 initial_state=init_state)
    # Add dropout, as the model otherwise quickly overfits
    rnn_outputs = tf.nn.dropout(rnn_outputs, keep_prob)
    # Prediction layer
    with tf.variable_scope('prediction'):
        W = tf.get_variable('W', [state_size, num_classes])
        b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
        preds = tf.tanh(tf.matmul(rnn_outputs, W) + b)
    # MSE
    loss = tf.square(tf.subtract(y, preds))
    # loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits, y))
    train_step = tf.train.AdamOptimizer(learn_rate).minimize(loss)
Can anyone tell me what I am missing?
Best answer: The sequence length should be iterable, e.g. a list or a tensor, not a scalar. In your specific case, you need to replace the sequence length of 40 with a list of the lengths of each input. For instance, if your first sequence has 10 steps, the second 13, and the third 18, you would pass in [10, 13, 18]. This lets TensorFlow's dynamic RNN know how many steps to unroll (I believe it uses a while loop internally).
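For illustration, here is a minimal sketch of passing per-example lengths to dynamic_rnn (the lengths, sizes, and variable names are made up for the example, not taken from your code):

import numpy as np
import tensorflow as tf

batch_size, max_steps, input_dim, state_size = 3, 18, 18, 32

# Hypothetical per-example lengths for three padded sequences
lengths = [10, 13, 18]

x = tf.placeholder(tf.float32, [batch_size, None, input_dim])
seqlen = tf.placeholder(tf.int32, [batch_size])  # one length per sequence

cell = tf.contrib.rnn.LSTMCell(state_size)
# With sequence_length set, steps beyond each example's length are not computed
rnn_outputs, final_state = tf.nn.dynamic_rnn(
    cell, x, sequence_length=seqlen, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Dummy padded batch: all sequences padded to max_steps
    x_batch = np.zeros((batch_size, max_steps, input_dim), dtype=np.float32)
    out = sess.run(rnn_outputs, feed_dict={x: x_batch, seqlen: lengths})
    print(out.shape)  # (3, 18, 32)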