1. Error 1: DataLoader worker (pid(s) 8500, 7876, 30128, 1760) exited unexpectedly
2. Error 2: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 2.00 GiB total capacity; 829.83 MiB already allocated; 457.25 MiB free; 846.00 MiB reserved in total by PyTorch) (malloc at ..\c10\cuda\CUDACachingAllocator.cpp:289) (no backtrace available)
3. Fix: setting num_workers = 0 inside load_data_fashion_mnist made it run; in other words, change the data-loading step before training. (Since I didn't know how to change num_workers from outside that function, I copied the function over and edited it.)
import torchvision
from torch.utils import data
from torchvision import transforms

batch_size = 128

def load_data_fashion_mnist_temp(batch_size, resize=None):
    trans = [transforms.ToTensor()]
    if resize:
        trans.insert(0, transforms.Resize(resize))
    trans = transforms.Compose(trans)
    mnist_train = torchvision.datasets.FashionMNIST(
        root="../data", train=True, transform=trans, download=True)
    mnist_test = torchvision.datasets.FashionMNIST(
        root="../data", train=False, transform=trans, download=True)
    # num_workers=0 keeps data loading in the main process
    return (data.DataLoader(mnist_train, batch_size, shuffle=True,
                            num_workers=0),
            data.DataLoader(mnist_test, batch_size, shuffle=False,
                            num_workers=0))

train_iter, test_iter = load_data_fashion_mnist_temp(batch_size, resize=224)
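If you'd rather not copy the whole function, a sketch of the same idea with num_workers exposed as a parameter (make_loaders is a hypothetical name; synthetic tensors stand in for Fashion-MNIST so nothing is downloaded). Passing num_workers=0 addresses error 1, and a smaller batch_size is the usual first lever against the CUDA out-of-memory in error 2:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_loaders(train_ds, test_ds, batch_size, num_workers=0):
    # num_workers=0 loads batches in the main process, avoiding the
    # "DataLoader worker exited unexpectedly" crash (common on Windows
    # / in Jupyter); shrinking batch_size reduces GPU memory per step.
    return (DataLoader(train_ds, batch_size, shuffle=True,
                       num_workers=num_workers),
            DataLoader(test_ds, batch_size, shuffle=False,
                       num_workers=num_workers))

# Synthetic data with Fashion-MNIST-like shapes (1x28x28, 10 classes):
train_ds = TensorDataset(torch.randn(256, 1, 28, 28),
                         torch.randint(0, 10, (256,)))
test_ds = TensorDataset(torch.randn(64, 1, 28, 28),
                        torch.randint(0, 10, (64,)))

train_iter, test_iter = make_loaders(train_ds, test_ds, batch_size=64)
X, y = next(iter(train_iter))
print(X.shape)  # torch.Size([64, 1, 28, 28])
```

The real loaders above can be built the same way by threading num_workers through to both DataLoader calls instead of hard-coding 0.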
Aside: wow, it took 40 minutes to run, but the results really are better than LeNet's.
Tags: jupyter,train,batch,P24,notebook,trans,data,mnist,size From: https://www.cnblogs.com/lq007/p/16928482.html