Define the parameters
- dataroot - the path to the root of the dataset folder. We will talk more about the dataset in the next section.
- workers - the number of worker threads for loading the data with the DataLoader.
- batch_size - the batch size used in training. The DCGAN paper uses a batch size of 128.
- image_size - the spatial size of the images used for training. This implementation defaults to 64x64. If another size is desired, the structures of D and G must be changed. See here for more details.
- nc - number of color channels in the input images. For color images this is 3.
- nz - length of latent vector.
- ngf - relates to the depth of feature maps carried through the generator.
- ndf - sets the depth of feature maps propagated through the discriminator.
- num_epochs - number of training epochs to run. Training for longer will probably lead to better results but will also take much longer.
- lr - learning rate for training. As described in the DCGAN paper, this number should be 0.0002.
- beta1 - beta1 hyperparameter for Adam optimizers. As described in the paper, this number should be 0.5.
- ngpu - number of GPUs available. If this is 0, code will run in CPU mode. If this number is greater than 0 it will run on that number of GPUs.
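The code below assumes the standard imports from the PyTorch DCGAN tutorial; they are not shown in the original notes, so they are listed here for completeness:

# Imports assumed by the code below (not shown in the original notes)
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt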
# Root directory for dataset
dataroot = "data/celeba"
# Number of worker threads for loading the data with the DataLoader
workers = 2
# Batch size during training
batch_size = 128
# Spatial size of training images (64x64)
image_size = 64
# Number of color channels in the input images (3 for color images)
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparameter for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
# We can use an image folder dataset the way we have it set up.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),  # crop from the center of the image
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # normalize to [-1, 1]
                           ]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         shuffle=True, num_workers=workers)
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
# Generator Code
class Generator(nn.Module):
    def __init__(self, ngpu):
        super(Generator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. ``(ngf*8) x 4 x 4``
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. ``(ngf*4) x 8 x 8``
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. ``(ngf*2) x 16 x 16``
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. ``(ngf) x 32 x 32``
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. ``(nc) x 64 x 64``
        )

    def forward(self, input):
        return self.main(input)
This code defines the network structure of the Generator.
The generator takes a random noise vector Z as input and, through a series of transposed convolution layers (ConvTranspose2d) and batch normalization layers (BatchNorm2d), outputs a synthetic sample that resembles the real data.
Concretely, the generator consists of the following layers:
First layer: the input noise Z goes through a transposed convolution (nn.ConvTranspose2d) producing ngf * 8 feature maps of size 4x4, followed by batch normalization (nn.BatchNorm2d) and a ReLU activation.
Second layer: the output of the first layer goes through a transposed convolution producing ngf * 4 feature maps of size 8x8, followed by batch normalization and ReLU.
Third layer: the output of the second layer goes through a transposed convolution producing ngf * 2 feature maps of size 16x16, followed by batch normalization and ReLU.
Fourth layer: the output of the third layer goes through a transposed convolution producing ngf feature maps of size 32x32, followed by batch normalization and ReLU.
Final layer: the output of the fourth layer goes through a transposed convolution producing nc feature maps of size 64x64, followed by a Tanh activation that maps pixel values into the range -1 to 1, yielding the final generated sample.
By varying the input noise Z, this generator produces different synthetic samples; the architecture and its parameters can be tuned to optimize generation quality for a specific application.
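As a quick sanity check of the shapes listed above, a random latent batch can be pushed through an untrained generator; this is a sketch added here, assuming the parameters and Generator class defined above:

# Sanity check: an untrained generator maps latent vectors to 3 x 64 x 64 images
netG_check = Generator(ngpu=0)     # CPU-only copy, just for shape inspection
z = torch.randn(16, nz, 1, 1)      # batch of 16 latent vectors
with torch.no_grad():
    out = netG_check(z)
print(out.shape)                   # torch.Size([16, 3, 64, 64])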
nn.ConvTranspose2d()
The arguments passed to nn.ConvTranspose2d() in the first layer mean the following:
nz: number of input channels, i.e. the dimensionality of the random noise vector Z.
ngf * 8: number of output channels, i.e. the number of feature maps produced by the generator's first transposed convolution.
4: kernel size; a square kernel with side length 4.
1: stride, the step size the kernel moves in each spatial dimension.
0: padding, the number of zero-valued pixels added around the spatial dimensions of the input before the operation; 0 means no padding.
bias=False: whether to use a bias term; False means the transposed convolution has no learnable bias parameters.
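For reference, the spatial output size of a transposed convolution (with dilation=1 and output_padding=0) is H_out = (H_in - 1) * stride - 2 * padding + kernel_size. A small sketch, assuming the parameter values above, confirms that the first layer turns a 1x1 latent "image" into a 4x4 feature map:

# Output-size check for the generator's first transposed convolution (a sketch)
h_in, kernel_size, stride, padding = 1, 4, 1, 0
h_out = (h_in - 1) * stride - 2 * padding + kernel_size
print(h_out)  # 4

layer = nn.ConvTranspose2d(nz, ngf * 8, kernel_size, stride, padding, bias=False)
print(layer(torch.randn(1, nz, 1, 1)).shape)  # torch.Size([1, 512, 4, 4])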
nn.BatchNorm2d(ngf * 8)
This is PyTorch's two-dimensional batch normalization layer (BatchNorm2d). Here ngf is the base number of feature maps in the generator's first layer, and ngf * 8 is the number of channels of this layer's input. BatchNorm2d computes the mean and variance of each channel over the mini-batch and normalizes the channel accordingly, which makes the data distribution more stable and speeds up convergence. It is typically placed right after a convolutional or fully connected layer and before the activation function.
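The netG.apply(weights_init) call below relies on a weights_init helper that these notes do not reproduce. A minimal sketch, following the convention the comment below refers to (conv weights drawn from N(0, 0.02^2), BatchNorm scales from N(1, 0.02^2) with zero bias):

# Custom weight initialization applied to netG and netD (sketch, matching the
# ``mean=0``, ``stdev=0.02`` convention mentioned in the comments below)
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)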
netG = Generator(ngpu).to(device)
# Handle multi-GPU if desired
if (device.type == 'cuda') and (ngpu > 1):
    netG = nn.DataParallel(netG, list(range(ngpu)))
# Apply the ``weights_init`` function to randomly initialize all weights
# to ``mean=0``, ``stdev=0.02``.
netG.apply(weights_init)
# Print the model
print(netG)
Generator(
  (main): Sequential(
    (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): ReLU(inplace=True)
    (6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (8): ReLU(inplace=True)
    (9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (11): ReLU(inplace=True)
    (12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (13): Tanh()
  )
)
In PyTorch, two common padding values for convolution-style layers can be worked out with floor division; a quick numerical check follows this list.
- Transposed convolution: with an even kernel_size and stride=2, setting padding = kernel_size // 2 - 1 gives exact 2x upsampling (e.g. 4x4 becomes 8x8), which is why the layers above use kernel_size=4, stride=2, padding=1.
- Ordinary convolution: with an odd kernel_size and stride=1, setting padding = kernel_size // 2 keeps the output feature map the same size as the input.
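A quick numerical check of both rules (a sketch added here, not from the original post):

# 2x upsampling: even kernel, stride 2, padding = kernel_size // 2 - 1
up = nn.ConvTranspose2d(8, 8, kernel_size=4, stride=2, padding=4 // 2 - 1)
print(up(torch.randn(1, 8, 4, 4)).shape)      # torch.Size([1, 8, 8, 8])

# Same-size output: odd kernel, stride 1, padding = kernel_size // 2
same = nn.Conv2d(8, 8, kernel_size=3, stride=1, padding=3 // 2)
print(same(torch.randn(1, 8, 16, 16)).shape)  # torch.Size([1, 8, 16, 16])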
Discriminator
class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is ``(nc) x 64 x 64``
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. ``(ndf) x 32 x 32``
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. ``(ndf*2) x 16 x 16``
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. ``(ndf*4) x 8 x 8``
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. ``(ndf*8) x 4 x 4``
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, input):
        return self.main(input)
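The printed model below implies that the discriminator is instantiated, optionally wrapped in DataParallel, and initialized just like the generator; that code is missing from these notes, so here is a sketch mirroring the netG block above:

netD = Discriminator(ngpu).to(device)
# Handle multi-GPU if desired
if (device.type == 'cuda') and (ngpu > 1):
    netD = nn.DataParallel(netD, list(range(ngpu)))
# Apply weights_init to randomly initialize all weights to mean=0, stdev=0.02
netD.apply(weights_init)
# Print the model
print(netD)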
Discriminator(
  (main): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): LeakyReLU(negative_slope=0.2, inplace=True)
    (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (4): LeakyReLU(negative_slope=0.2, inplace=True)
    (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): LeakyReLU(negative_slope=0.2, inplace=True)
    (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (10): LeakyReLU(negative_slope=0.2, inplace=True)
    (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (12): Sigmoid()
  )
)
$$\ell(x,y)=L=\{l_1,\dots,l_N\}^\top,\quad l_n=-\left[y_n\cdot\log x_n+(1-y_n)\cdot\log(1-x_n)\right]$$
# BCELoss
# Initialize the ``BCELoss`` function
criterion = nn.BCELoss()
# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)
# Establish convention for real and fake labels during training
real_label = 1.
fake_label = 0.
# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))
Loss functions
- Cross entropy: nn.CrossEntropyLoss
$$H(P,Q)=-\sum_{i=1}^{N} P(x_i)\log Q(x_i)$$
- Binary cross entropy: nn.BCELoss (checked numerically below)
$$l_n=-w_n\left[y_n\cdot\log x_n+(1-y_n)\cdot\log(1-x_n)\right]$$
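As a small sketch (not from the original post), the nn.BCELoss formula above can be verified by computing the per-element terms by hand (with the default weights w_n = 1 and mean reduction):

# Compare nn.BCELoss with the formula above
x = torch.tensor([0.9, 0.2, 0.7])   # "discriminator outputs" in (0, 1)
y = torch.tensor([1.0, 0.0, 1.0])   # target labels
manual = -(y * torch.log(x) + (1 - y) * torch.log(1 - x)).mean()
print(manual, nn.BCELoss()(x, y))   # the two values match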
# Training Loop

# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0

print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):

        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ###########################
        ## Train with all-real batch
        netD.zero_grad()
        # Format batch
        real_cpu = data[0].to(device)
        b_size = real_cpu.size(0)
        label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
        # Forward pass real batch through D
        output = netD(real_cpu).view(-1)
        # Calculate loss on all-real batch
        errD_real = criterion(output, label)
        # Calculate gradients for D in backward pass
        errD_real.backward()
        D_x = output.mean().item()

        ## Train with all-fake batch
        # Generate batch of latent vectors
        noise = torch.randn(b_size, nz, 1, 1, device=device)
        # Generate fake image batch with G
        fake = netG(noise)
        label.fill_(fake_label)
        # Classify all fake batch with D
        output = netD(fake.detach()).view(-1)
        # Calculate D's loss on the all-fake batch
        errD_fake = criterion(output, label)
        # Calculate the gradients for this batch, accumulated (summed) with previous gradients
        errD_fake.backward()
        D_G_z1 = output.mean().item()
        # Compute error of D as sum over the fake and the real batches
        errD = errD_real + errD_fake
        # Update D
        optimizerD.step()

        ############################
        # (2) Update G network: maximize log(D(G(z)))
        ###########################
        netG.zero_grad()
        label.fill_(real_label)  # fake labels are real for generator cost
        # Since we just updated D, perform another forward pass of all-fake batch through D
        output = netD(fake).view(-1)
        # Calculate G's loss based on this output
        errG = criterion(output, label)
        # Calculate gradients for G
        errG.backward()
        D_G_z2 = output.mean().item()
        # Update G
        optimizerG.step()

        # Output training stats
        if i % 50 == 0:
            print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
                  % (epoch, num_epochs, i, len(dataloader),
                     errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))

        # Save Losses for plotting later
        G_losses.append(errG.item())
        D_losses.append(errD.item())

        # Check how the generator is doing by saving G's output on fixed_noise
        if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
            with torch.no_grad():
                fake = netG(fixed_noise).detach().cpu()
            img_list.append(vutils.make_grid(fake, padding=2, normalize=True))

        iters += 1
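The G_losses, D_losses, and img_list collected above are saved "for plotting later"; a minimal sketch of how they might be visualized (not part of the original notes):

# Plot generator and discriminator loss per iteration (sketch)
plt.figure(figsize=(10, 5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses, label="G")
plt.plot(D_losses, label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()

# Show the last saved grid of images generated from fixed_noise
plt.figure(figsize=(8, 8))
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1], (1, 2, 0)))
plt.show()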
From: https://www.cnblogs.com/jinwan/p/17494916.html