The error encountered:
It suddenly appeared like this while the program was running:
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [283,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [283,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [283,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [283,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [283,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [283,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [283,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [283,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[train epoch 0] total loss: 12.206; view1 loss: 3.442; view2 loss: 1.780; cross loss: 6.983: 6%|████████▋ | 152/2412 [04:11<1:02:21, 1.66s/it]
Traceback (most recent call last):
File "pretrain_cgip.py", line 233, in <module>
main(args)
File "pretrain_cgip.py", line 203, in main
train_dict = train_one_epoch(
File "/mnt/d/Chorm_Download/CGIP-master/CGIP-master/model/train/dual_model_utils.py", line 44, in train_one_epoch
X_v2_a1, _ = branch2(view2_aug1) # the space of view 2: aug 1
File "/home/mapengsen/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/d/Chorm_Download/CGIP-master/CGIP-master/model/deepergcn.py", line 174, in forward
h_graph = self.pool(h, batch)
File "/home/mapengsen/anaconda3/envs/ldm/lib/python3.8/site-packages/torch_geometric/nn/glob/glob.py", line 51, in global_mean_pool
size = int(batch.max().item() + 1) if size is None else size
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
From this kind of error you generally cannot tell directly where things actually went wrong. The cause is the asynchronous execution between the CPU and the GPU:
CUDA programs execute asynchronously by default, roughly because CPU and GPU memory are separate: to get data back from the GPU, the host has to issue explicit operations and wait for them to complete. When the program runs correctly, asynchronous and synchronous execution produce equivalent results.
But when an error is involved, e.g. an assertion fires inside CUDA code, the CPU only notices it at some later API call, so the reported error message and location can look very strange. Therefore, the first thing to do when debugging a CUDA error is to set
CUDA_LAUNCH_BLOCKING=1
and run the program again.
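Besides exporting it in the shell, the variable can also be set from inside the script itself, as long as this happens before the CUDA context is created (a minimal sketch; setting it after the first CUDA call has no effect):

```python
import os

# Must be set before the first CUDA call creates the context --
# in practice, put this at the very top of the entry script,
# before any model or tensor is moved to the GPU.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```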
Solution:
You need to set the environment variable so that the program runs in debug (i.e. synchronous) mode and reports the real error. Note that many posts online write CUDA_LAUNCH_BLOCKIN=1 python XXX.py; with the trailing G missing, the variable name is wrong and the setting silently does nothing.
The correct way:
export CUDA_LAUNCH_BLOCKING=1
python XXX.py
Once you can see the real error, fix that underlying problem.
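In this particular trace, the assertion `srcIndex < srcSelectDimSize` means an index fed into an embedding / index_select lookup is outside the size of the indexed dimension (here, most likely a node or edge feature index larger than the corresponding embedding table in the GNN branch). A quick way to pinpoint it is to run the offending batch on the CPU, where the same lookup raises a clear Python exception at the exact call site, or to check the index range explicitly. A minimal sketch (the table size 5 and the out-of-range index 7 are made up for illustration):

```python
import torch

emb = torch.nn.Embedding(num_embeddings=5, embedding_dim=4)  # valid indices: 0..4
idx = torch.tensor([1, 3, 7])                                # 7 is out of range

# Explicit range check -- cheap, and works the same on CPU or GPU tensors.
bad = idx[(idx < 0) | (idx >= emb.num_embeddings)]
print("out-of-range indices:", bad)

# On CPU the bad lookup raises a clear IndexError at the exact call site,
# instead of the delayed device-side assert seen on CUDA.
caught = False
try:
    emb(idx)
except IndexError:
    caught = True
print("lookup failed on CPU:", caught)
```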
https://zhuanlan.zhihu.com/p/667225351
From: https://blog.csdn.net/weixin_43135178/article/details/142812114