I have:

import torch

input_sliced = torch.rand(180, 161)
output_sliced = torch.rand(180,)

batched_inputs = torch.Tensor()
batched_outputs = torch.Tensor()

print('input_sliced.size', input_sliced.size())
print('output_sliced.size', output_sliced.size())

batched_inputs = torch.cat((batched_inputs, input_sliced))
batched_outputs = torch.cat((batched_outputs, output_sliced))

print('batched_inputs.size', batched_inputs.size())
print('batched_outputs.size', batched_outputs.size())

This outputs:

input_sliced.size torch.Size([180, 161])
output_sliced.size torch.Size([180])
batched_inputs.size torch.Size([180, 161])
batched_outputs.size torch.Size([180])

I need to append these into a batch, but it isn't working. What am I doing wrong?

batched torch.cat
1 Answer

Helenr
Assuming you are doing this in a loop, I'd say it is better to do it like this:
import torch
batch_input, batch_output = [], []
for i in range(10):  # assuming batch_size=10
    batch_input.append(torch.rand(180, 161))
    batch_output.append(torch.rand(180,))
batch_input = torch.stack(batch_input)
batch_output = torch.stack(batch_output)
print(batch_input.shape) # output: torch.Size([10, 180, 161])
print(batch_output.shape) # output: torch.Size([10, 180])
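(Why the original cat version failed: torch.cat joins tensors along an existing dimension, so concatenating a (180, 161) slice onto an empty tensor just gives back a (180, 161) tensor, while torch.stack is what creates the new batch dimension. If you do want to stay with cat, a minimal sketch: give each slice an explicit leading batch dimension with unsqueeze first.)

import torch

# each slice becomes (1, 180, 161), so cat along dim 0 accumulates a batch
slices = [torch.rand(180, 161).unsqueeze(0) for _ in range(10)]
batched = torch.cat(slices, dim=0)
print(batched.shape)  # output: torch.Size([10, 180, 161])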
If you know the resulting shape a priori, you can preallocate the final batch_* Tensor and simply assign each sample to its corresponding position in the batch. This is more memory-efficient.
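A minimal sketch of that preallocation idea, assuming the batch size (10 here) and per-sample shapes are known up front (the concrete numbers are illustrative):

import torch

batch_size = 10  # assumed known a priori
batch_input = torch.empty(batch_size, 180, 161)
batch_output = torch.empty(batch_size, 180)
for i in range(batch_size):
    # write each sample directly into its slot; no intermediate list or extra copies
    batch_input[i] = torch.rand(180, 161)
    batch_output[i] = torch.rand(180)
print(batch_input.shape)   # output: torch.Size([10, 180, 161])
print(batch_output.shape)  # output: torch.Size([10, 180])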