
stack expects each tensor to be equal size, but got [531, 674, 3] at entry 0 and [380, 885, 3] at entry 1 #63

Open
dididichufale opened this issue Apr 6, 2022 · 2 comments

@dididichufale
I can't figure out how to solve this. Errors like this are usually caused by the data processing pipeline, but I can't find where it goes wrong. What can I do?
D:\annaconda3\envs\aimbot_env\python.exe J:/stereo-transformer-main/stereo-transformer/main.py
number of params in backbone: 1,050,800
number of params in transformer: 797,440
number of params in tokenizer: 503,728
number of params in regression: 161,843
0%| | 0/11195 [00:00<?, ?it/s]Start training
Epoch: 0
0%| | 0/11195 [00:03<?, ?it/s]
Traceback (most recent call last):
File "J:/stereo-transformer-main/stereo-transformer/main.py", line 263, in <module>
main(args_)
File "J:/stereo-transformer-main/stereo-transformer/main.py", line 234, in main
args.clip_max_norm, amp)
File "J:\stereo-transformer-main\stereo-transformer\utilities\train.py", line 30, in train_one_epoch
for idx, data in enumerate(tbar):
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\tqdm\std.py", line 1180, in __iter__
for obj in iterable:
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\utils\data\dataloader.py", line 521, in __next__
data = self._next_data()
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\utils\data\dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\utils\data\dataloader.py", line 1229, in _process_data
data.reraise()
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\_utils.py", line 434, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\utils\data\_utils\worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\utils\data\_utils\fetch.py", line 52, in fetch
return self.collate_fn(data)
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 74, in default_collate
return {key: default_collate([d[key] for d in batch]) for key in elem}
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 74, in <dictcomp>
return {key: default_collate([d[key] for d in batch]) for key in elem}
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 64, in default_collate
return default_collate([torch.as_tensor(b) for b in batch])
File "D:\annaconda3\envs\aimbot_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 56, in default_collate
return torch.stack(batch, 0, out=out)
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [531, 674, 3] at entry 0 and [380, 885, 3] at entry 1
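The last frame shows the root cause: `default_collate` calls `torch.stack` on the samples in the batch, and `torch.stack` requires every tensor to have exactly the same shape. Two images of different sizes therefore cannot be batched by the default collate function. A minimal reproduction (the tensor sizes are taken from the error message above; the rest is illustrative):

```python
import torch

# Two "images" with the mismatched sizes reported in the traceback.
a = torch.zeros(531, 674, 3)
b = torch.zeros(380, 885, 3)

try:
    # This is what default_collate does once the batch size exceeds 1.
    torch.stack([a, b], dim=0)
except RuntimeError as e:
    print(e)  # message contains "stack expects each tensor to be equal size"
```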

@Guzaiwang

Yes, I have met the same problem.
It is caused by the random crop augmentation in the data loading process: each sample ends up with a different size, so the default collate function cannot stack them.
If you want to use a batch size greater than 1, do the random crop in the forward_pass function rather than in the dataset's __getitem__.
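An alternative to moving the augmentation is to keep the crop in the loading path but make it produce a fixed size, via a custom `collate_fn` passed to the `DataLoader`. This is a minimal sketch, assuming each sample is a dict of [H, W, 3] tensors (e.g. 'left'/'right' views); `crop_collate` and the 256×512 crop size are hypothetical, not from this repo:

```python
import random
import torch

CROP_H, CROP_W = 256, 512  # hypothetical target size; must fit the smallest image

def crop_collate(batch):
    """Random-crop every sample to one fixed size, then stack like default_collate."""
    cropped = []
    for sample in batch:  # sample: dict of [H, W, 3] tensors
        h, w = next(iter(sample.values())).shape[:2]
        top = random.randint(0, h - CROP_H)    # same offsets for all keys in a sample
        left = random.randint(0, w - CROP_W)
        cropped.append({k: v[top:top + CROP_H, left:left + CROP_W]
                        for k, v in sample.items()})
    # All tensors now share a shape, so stacking succeeds for any batch size.
    return {k: torch.stack([s[k] for s in cropped], dim=0) for k in cropped[0]}
```

Usage would be `DataLoader(dataset, batch_size=4, collate_fn=crop_collate)`; the trade-off is that the crop offsets are drawn in the main/worker loading path instead of inside the model's forward pass.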

@Guzaiwang

When you set the batch size greater than 1, another problem may occur.
This is because the original code assumes a batch size of 1.
You can see line 109 in the transform.py file.

After these changes, the code runs and trains correctly.
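Code written for batch size 1 typically indexes the batch dimension away (e.g. `x[0]`) or hard-codes a leading dimension of 1; making it batch-aware usually means operating on the whole tensor and letting broadcasting handle the batch dimension. A generic, hypothetical illustration of the pattern (not the actual transform.py code):

```python
import torch

def scale_assuming_batch1(x, factor):
    # Written for batch size 1: drops the batch dim and restores it afterwards.
    # For N > 1 this silently discards items 1..N-1.
    img = x[0]
    return (img * factor).unsqueeze(0)

def scale_batched(x, factor):
    # Batch-aware: operate on the whole tensor; broadcasting handles any N.
    return x * factor
```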
