
PyTorch Tutorial 06 - Training Pipeli...

2023-02-15 12:05 Author: Mr-南乔

The tutorial's Python code is as follows:


# 1) Design model(input, output size, forward pass)

# 2) Construct loss and optimizer

# 3) Training loop

# - forward pass: compute prediction

# - backward pass: gradients

# - update weights

import torch

import torch.nn as nn # import the neural network module


# f = w * x (no bias term here)


# f = 2 * x

X = torch.tensor([[1],[2],[3],[4]],dtype=torch.float32)

Y = torch.tensor([[2],[4],[6],[8]],dtype=torch.float32)


X_test = torch.tensor([5],dtype=torch.float32)


n_samples, n_features = X.shape

print(n_samples, n_features)


input_size = n_features

output_size = n_features


#model = nn.Linear(input_size, output_size)


class LinearRegression(nn.Module):

    def __init__(self, input_dim, output_dim):
        super(LinearRegression, self).__init__()
        # define layers
        self.lin = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        return self.lin(x)


model = LinearRegression(input_size, output_size)


print(f'Prediction before training: f(5) = {model(X_test).item():.3f}')


# Training

# learning rate
learning_rate = 0.01


# number of iterations
n_iters = 100


# loss = mean squared error
loss = nn.MSELoss()


# optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) # SGD = stochastic gradient descent; the parameter list holds the parameters it will optimize


for epoch in range(n_iters):
    # prediction = forward pass
    y_pred = model(X)

    # loss (convention: loss(input, target))
    l = loss(y_pred, Y)

    # gradients = backward pass
    l.backward() # dl/dw

    # update weights: weight = weight - learning_rate * dw
    optimizer.step()

    # zero gradients
    optimizer.zero_grad()

    # print every 10 steps
    if epoch % 10 == 0:
        [w, b] = model.parameters()
        print(f'epoch {epoch+1}: w = {w[0][0].item():.3f}, loss = {l:.8f}')


print(f'Prediction after training: f(5) = {model(X_test).item():.3f}')
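As the commented-out line `#model = nn.Linear(input_size, output_size)` hints, the custom `LinearRegression` class above is just a thin wrapper: the same pipeline can be sketched with the built-in `nn.Linear` directly. A minimal sketch (same hyperparameters as above; the manual seed is an addition for reproducibility, not part of the original code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # added for reproducibility (not in the original)

X = torch.tensor([[1], [2], [3], [4]], dtype=torch.float32)
Y = torch.tensor([[2], [4], [6], [8]], dtype=torch.float32)

model = nn.Linear(1, 1)  # built-in layer instead of the custom class
loss = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    y_pred = model(X)       # forward pass
    l = loss(y_pred, Y)     # compute MSE loss
    l.backward()            # backward pass: accumulate gradients
    optimizer.step()        # update weights
    optimizer.zero_grad()   # reset gradients for the next iteration

print(f'f(5) = {model(torch.tensor([[5.0]])).item():.3f}')
```

After 100 epochs the prediction for f(5) should be close to 10, matching the custom-class version.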
