
Deep Learning Day 05: Implementing Linear Regression with PyTorch

Posted: 2021-07-13 05:36:30


This post covers: the code template, the four main steps, some key points in the code, and the other optimizers PyTorch provides.

Code Template

import torch

# Step 1: Prepare dataset
# x and y are matrices with 3 rows and 1 column: 3 samples in total, each with 1 feature
x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

# Step 2: Design model using class (inherit from nn.Module)
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # (1, 1) are the feature dimensions of the input x and the output y;
        # in this dataset both are 1-dimensional.
        # The learnable parameters of this linear layer are w and b,
        # accessible as linear.weight and linear.bias.
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()

# Step 3: Construct loss and optimizer (using the PyTorch API)
# criterion = torch.nn.MSELoss(size_average=False)  # older, deprecated spelling of the same option
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # model.parameters() collects the learnable parameters (w and b) for the optimizer

# Step 4: Training cycle (forward, backward, update)
for epoch in range(100):
    y_pred = model(x_data)            # forward: predict
    loss = criterion(y_pred, y_data)  # forward: loss
    print(epoch, loss.item())

    optimizer.zero_grad()  # the gradients computed by .backward() are accumulated, so zero them before each backward pass
    loss.backward()        # backward: autograd computes the gradients automatically
    optimizer.step()       # update the parameters, i.e. the values of w and b

# Print the final trained weight and bias, then test the model
print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

x_test = torch.tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
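A side note on the loss in step 3: reduction='sum' adds up the squared errors over the whole batch, while the default 'mean' averages them (size_average=False is the older spelling of 'sum'). A minimal sketch of the difference, with made-up predictions and targets:

import torch

pred = torch.tensor([[2.5], [4.5], [6.5]])
target = torch.tensor([[2.0], [4.0], [6.0]])

# each sample contributes (0.5)^2 = 0.25 of squared error
print(torch.nn.MSELoss(reduction='sum')(pred, target).item())   # 0.75
print(torch.nn.MSELoss(reduction='mean')(pred, target).item())  # 0.25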

Running the script prints the loss at every epoch, followed by the learned w and b and the model's prediction y_pred for x = 4.0.

The Four Main Steps

Note: when preparing the data, X and Y must be matrices (here 3 × 1 tensors, one row per sample).
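If the raw data starts out as a plain Python list or a 1-D tensor, it can be reshaped into the required N × 1 matrix before training; a minimal sketch (the variable names here are only illustrative):

import torch

xs = [1.0, 2.0, 3.0]
x_flat = torch.tensor(xs)          # shape (3,): a vector, not acceptable as model input here
x_data = x_flat.view(-1, 1)        # shape (3, 1): 3 samples, 1 feature each
print(x_flat.shape, x_data.shape)  # torch.Size([3]) torch.Size([3, 1])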

With PyTorch there is no need to compute derivatives by hand; the focus shifts to how the computational graph is constructed.
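To make that concrete, here is a tiny illustrative example (not from the original post) of autograd building the graph and computing a gradient for us:

import torch

w = torch.tensor([1.0], requires_grad=True)  # a leaf node in the computational graph
x, y = 2.0, 4.0

y_hat = w * x              # the forward pass builds the graph
loss = (y_hat - y) ** 2    # scalar loss
loss.backward()            # autograd walks the graph backwards

print(w.grad)  # d(loss)/dw = 2 * (w*x - y) * x = 2 * (2 - 4) * 2, i.e. tensor([-8.])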

Key Points in the Code

1. In PyTorch, the computational graph is built in mini-batch fashion, so X and Y are 3 × 1 tensors.
2. Our model class should inherit from nn.Module, which is the base class for all neural network modules.
3. The member methods __init__() and forward() have to be implemented.
4. The class nn.Linear contains two member tensors: weight and bias.
5. nn.Linear implements the magic method __call__(), which lets an instance of the class be called just like a function; normally forward() is then invoked. Pythonic!
6. torch.nn.MSELoss also inherits from nn.Module.
7. NOTICE: the gradients computed by .backward() are accumulated, so remember to set them to zero before each backward pass.
8. Because __call__ is implemented, calling model(x_data) automatically invokes model.forward(x_data).
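The warning about accumulated gradients (point 7) is easy to verify directly; a small sketch with a made-up scalar example:

import torch

w = torch.tensor([1.0], requires_grad=True)

# first backward pass: d/dw (2w)^2 = 8w = 8
((w * 2.0) ** 2).backward()
print(w.grad)  # tensor([8.])

# second backward pass WITHOUT zeroing: the new gradient is added on top of the old one
((w * 2.0) ** 2).backward()
print(w.grad)  # tensor([16.])

# zeroing first restores the expected fresh gradient
w.grad.zero_()
((w * 2.0) ** 2).backward()
print(w.grad)  # tensor([8.])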

Other Optimizers

• torch.optim.Adagrad

• torch.optim.Adam

• torch.optim.Adamax

• torch.optim.ASGD

• torch.optim.LBFGS

• torch.optim.RMSprop

• torch.optim.Rprop

• torch.optim.SGD
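Any of these can be dropped into step 3 of the template in place of SGD; the rest of the training loop stays the same. A sketch using Adam (the learning rate here is just an illustrative choice, and model, criterion, x_data and y_data are reused from the template above):

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(100):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()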
