PyTorch study notes

I. Some simple operations

1. Converting between numpy and torch data formats

import torch
import numpy as np
np_data = np.arange(6).reshape(2,3)
torch_data = torch.from_numpy(np_data)  # numpy -> torch
torch2array = torch_data.numpy()  # torch -> numpy
print(
      '\nnumpy',np_data,
      '\ntorch',torch_data,
      '\ntorch2array',torch2array
      
      )

Output:

numpy [[0 1 2]
 [3 4 5]] 
torch 
 0  1  2
 3  4  5
[torch.LongTensor of size 2x3]
 
torch2array [[0 1 2]
 [3 4 5]]
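One detail worth knowing (a sketch, assuming a reasonably recent PyTorch version): `torch.from_numpy` does not copy the data, it shares memory with the source array, so an in-place change on either side is visible on the other.

```python
import numpy as np
import torch

# np.arange gives an integer array; from_numpy keeps the same dtype and memory
np_data = np.arange(6).reshape(2, 3)
torch_data = torch.from_numpy(np_data)

# Modify the numpy array in place; the tensor sees the change
np_data[0, 0] = 100
print(torch_data[0, 0].item())  # 100
```

If you need an independent copy instead, `torch.tensor(np_data)` copies the data.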

2. Absolute value

import torch
import numpy as np
#abs
data = [-1,-2,1,2]
tensor = torch.FloatTensor(data) # 32-bit float
print(
      '\nabs',
      '\nnumpy',np.abs(data),
      '\ntorch',torch.abs(tensor), 
      )

Output:

abs 
numpy [1 2 1 2] 
torch 
 1
 2
 1
 2
[torch.FloatTensor of size 4]
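The same pattern carries over to other element-wise math functions; a small sketch using `sin` and `mean` (both standard numpy/torch calls):

```python
import numpy as np
import torch

data = [-1, -2, 1, 2]
tensor = torch.FloatTensor(data)

print(np.sin(data))        # numpy version
print(torch.sin(tensor))   # torch version, same values
print(np.mean(data))       # 0.0
print(torch.mean(tensor))  # mean of the tensor, also 0
```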

3. Matrix multiplication

import torch
import numpy as np
#matmul
data = [[1,2],[3,4]]
tensor = torch.FloatTensor(data) # 32-bit float
print(
      '\nmatmul',
      '\nnumpy',np.matmul(data,data),
      '\ntorch',torch.mm(tensor,tensor), 
      )

Output:

matmul 
numpy [[ 7 10]
 [15 22]] 
torch 
  7  10
 15  22
[torch.FloatTensor of size 2x2]
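Note that `torch.mm` is true matrix multiplication, while `*` (like `np.multiply`) is element-wise; a quick sketch of the difference:

```python
import torch

data = [[1, 2], [3, 4]]
tensor = torch.FloatTensor(data)

print(torch.mm(tensor, tensor))  # matrix product: [[7, 10], [15, 22]]
print(tensor * tensor)           # element-wise:   [[1, 4], [9, 16]]
```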

II. Variable

import torch
from torch.autograd import Variable
import numpy as np
#Variable and autograd
data = [[1,2],[3,4]]
tensor = torch.FloatTensor(data) # 32-bit float
variable = Variable(tensor,requires_grad=True) # requires_grad controls whether gradients are computed
t_out = torch.mean(tensor*tensor)
v_out = torch.mean(variable*variable)
print(t_out)
print(v_out)
v_out.backward()
#v_out = 1/4 * sum(variable*variable)
#d(v_out)/d(variable) = 1/4 * 2 * variable = variable/2
print(variable.grad)

Output:

7.5
Variable containing:
 7.5000
[torch.FloatTensor of size 1]

Variable containing:
 0.5000  1.0000
 1.5000  2.0000
[torch.FloatTensor of size 2x2]

Printing variable.grad here outputs the gradient of v_out with respect to variable.
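One gotcha worth noting (a sketch; `retain_graph` is a standard autograd argument): gradients accumulate into `.grad` across `backward()` calls, so a second backward doubles the stored gradient unless you zero it in between.

```python
import torch
from torch.autograd import Variable

variable = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)
v_out = torch.mean(variable * variable)

v_out.backward(retain_graph=True)  # first call: grad = variable / 2
v_out.backward()                   # second call accumulates: grad = variable
print(variable.grad)
```

This is why training loops call something like `optimizer.zero_grad()` before each backward pass.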

If you want to see the data held inside the variable, and convert it to numpy form:

print(variable.data)
print(variable.data.numpy())

 
Output:

 1  2
 3  4
[torch.FloatTensor of size 2x2]

[[ 1.  2.]
 [ 3.  4.]]
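For reference (this applies to PyTorch 0.4 and later, newer than the version these notes were written against): Variable has been merged into Tensor, so a plain tensor with `requires_grad=True` plays the same role, and `.detach()` is the recommended way to get a numpy view of a graph-tracking tensor.

```python
import torch

x = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
out = torch.mean(x * x)
out.backward()

print(x.grad)              # x / 2, same result as the Variable example
print(x.detach().numpy())  # detach() drops the graph before converting to numpy
```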

 

    Original author: pytorch
    Original source: https://www.cnblogs.com/lindaxin/p/7873371.html
    This article is reposted from the web to share knowledge; if it infringes any rights, please contact the blogger to have it removed.