PyTorch 1.0.0 Study Notes
For learning PyTorch, see: Welcome to PyTorch Tutorials
What is PyTorch?
A quick start with PyTorch!
Tensors
from __future__ import print_function
import torch
Create an uninitialized \(5\times 3\) matrix:
x = torch.empty(5, 3)
print(x)
tensor([[0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 1.9730e-42, 0.0000e+00],
[0.0000e+00, 7.3909e+22, 0.0000e+00]])
Create a randomly initialized \(5\times 3\) matrix:
x = torch.rand(5, 3)
print(x)
tensor([[0.2496, 0.8405, 0.7555],
[0.9820, 0.9988, 0.5419],
[0.6570, 0.4990, 0.4165],
[0.6985, 0.9972, 0.4234],
[0.0096, 0.6374, 0.8520]])
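As a side note (not in the original notes), torch.rand draws uniform samples from [0, 1), while torch.randn, which appears later via randn_like, draws from a standard normal distribution:
print(torch.rand(2, 2))   # uniform samples in [0, 1)
print(torch.randn(2, 2))  # standard-normal samples, can be negative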
Create a \(5\times 3\) all-zero matrix with dtype long:
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
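Other constructors follow the same pattern; a few common ones, shown here as a supplementary sketch:
print(torch.ones(2, 3))        # all-ones matrix
print(torch.eye(3))            # 3x3 identity matrix
print(torch.arange(0, 10, 2))  # tensor([0, 2, 4, 6, 8])
print(torch.full((2, 2), 7.0)) # 2x2 matrix filled with 7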
Create a tensor directly from a Python list:
x = torch.tensor([5.5, 3]) # list
print(x)
tensor([5.5000, 3.0000])
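Note that torch.tensor infers the dtype from the data: an all-integer list gives a long tensor, while any float makes it float32; dtype= overrides the inference. A quick check:
print(torch.tensor([5, 3]).dtype)                # torch.int64
print(torch.tensor([5.5, 3]).dtype)              # torch.float32
print(torch.tensor([5, 3], dtype=torch.float64)) # tensor([5., 3.], dtype=torch.float64)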
Create a tensor directly from a NumPy array:
import numpy as np
a = np.array([2, 3, 5], dtype='B')
x = torch.tensor(a) # numpy
print(x)
x.numel() # number of elements in the tensor
tensor([2, 3, 5], dtype=torch.uint8)
3
Query the shape of a tensor with .size():
x = torch.rand(5, 3)
size = x.size()
print(size)
h, w = size # torch.Size is in fact a tuple, so it supports unpacking
h, w
torch.Size([5, 3])
(5, 3)
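As an aside, x.shape is an alias for x.size(), so the two are interchangeable:
print(x.shape)     # torch.Size([5, 3]), same as x.size()
print(x.shape[0])  # 5
print(x.dim())     # 2, the number of dimensions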
Operations
Most tensor operations work just like their NumPy counterparts (see the quick sketch below); the rest of this section only covers a few PyTorch-specific idioms:
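For example, NumPy-style indexing, slicing, and elementwise arithmetic work as expected (a quick sketch using the x defined above):
print(x[:, 1])    # slice out the second column
print(x[0])       # first row
print(x * 2 + 1)  # elementwise arithmetic with scalars, as in NumPy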
x = x.new_ones(5, 3, dtype=torch.double) # new_* methods take in sizes
print(x)
x = torch.randn_like(x, dtype=torch.float) # override dtype!
print(x)
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
tensor([[-0.9367, -0.1121, 1.9103],
[ 0.2284, 0.3823, 1.0877],
[-0.2797, 0.7217, -0.7032],
[ 0.9047, 1.7789, 0.4215],
[-1.0368, -0.2644, -0.7948]])
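More generally, the *_like factory functions (a supplementary sketch, not in the original notes) create a new tensor with the same shape as their argument, optionally overriding dtype or device:
print(torch.zeros_like(x))                    # same shape as x, filled with zeros
print(torch.ones_like(x, dtype=torch.int64))  # shape taken from x, dtype overridden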
result = torch.empty(5, 3) # create an uninitialized tensor to hold the result
y = torch.rand(5, 3)
torch.add(x, y, out=result) # compute x + y and write the result into result
print(result)
tensor([[-0.0202, 0.6110, 2.8150],
[ 1.0288, 1.2454, 1.7464],
[-0.1786, 0.8212, -0.2493],
[ 1.5294, 2.2713, 0.8383],
[-0.9292, 0.5749, -0.1146]])
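The same addition can also be written with operator or method syntax; the out= form above merely avoids allocating a new result tensor:
print(x + y)            # operator syntax, allocates a new tensor
print(torch.add(x, y))  # equivalent functional form
print(y.add(x))         # method form, also returns a new tensor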
Any operation that mutates a tensor in place is post-fixed with an _ and modifies the original tensor:
x = torch.tensor([7])
y = torch.tensor([2])
print(y, y.add(x))
print(y, y.add_(x))
y
tensor([2]) tensor([9])
tensor([9]) tensor([9])
tensor([9])
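Other in-place operations follow the same underscore convention, e.g. copy_ and zero_ (a small sketch on fresh tensors):
u = torch.rand(2, 2)
v = torch.rand(2, 2)
u.copy_(v)  # copy v's values into u, in place
u.zero_()   # set every element of u to 0, in place
print(u)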
x = torch.tensor(7)
x.item() # convert to a Python number
7
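.item() only works for tensors with exactly one element; for larger tensors, .tolist() converts to a nested Python list (a quick illustration):
t = torch.tensor([[1, 2], [3, 4]])
print(t.tolist())  # [[1, 2], [3, 4]]
# t.item() would raise an error here, since t holds more than one element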
Reshape a tensor: view()
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
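Note that view returns a tensor sharing the same underlying data as the original, so modifying one modifies the other (a quick check, with fresh variable names):
x2 = torch.zeros(4, 4)
y2 = x2.view(16)
y2[0] = 1.0
print(x2[0, 0])  # tensor(1.) -- x2 and y2 share the same storage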
NumPy Bridge
Converting a Tensor to a NumPy array:
a = torch.ones(5)
print(a)
tensor([1., 1., 1., 1., 1.])
Call .numpy() to convert the tensor to a NumPy array:
b = a.numpy()
print(b)
[1. 1. 1. 1. 1.]
The in-place _ semantics still apply here: the tensor and the NumPy array share the same memory, so updating one in place also changes the other:
a.add_(1)
print(a)
print(b)
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
Converting a NumPy array to a Tensor:
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
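By contrast, torch.tensor(a) (used earlier) copies the data, so it does not follow later changes to the NumPy array; a quick comparison with hypothetical variable names:
a2 = np.ones(3)
shared = torch.from_numpy(a2)  # shares memory with a2
copied = torch.tensor(a2)      # makes a copy of the data
a2 += 1
print(shared)  # tensor([2., 2., 2.], dtype=torch.float64) -- follows the change
print(copied)  # tensor([1., 1., 1.], dtype=torch.float64) -- unaffected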
CUDA
Using the .to method, tensors can be moved onto any device:
# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!
tensor(8, device='cuda:0')
tensor(8., dtype=torch.float64)
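A common device-agnostic pattern (a sketch, not from the original notes) is to choose the device once and reuse it everywhere:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.rand(5, 3).to(device)  # runs unchanged on CPU-only and GPU machines
print(x.device)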
For more, see my GitHub: https://github.com/XNoteW/Studying/tree/master/PyTorch_beginner