pytorch view、expand、transpose、permute、reshape、repeat、repeat_interleave

Non-contiguous operations

There are a few operations on Tensors in PyTorch that do not change the contents of a tensor, but change the way the data is organized. These operations include:

narrow(), view(), expand(), transpose(), and permute()

This is where the concept of contiguity comes in. For example, after y = x.transpose(0, 1), x is contiguous but y is not, because y's memory layout differs from that of a tensor of the same shape created from scratch. Note that the word "contiguous" is a bit misleading: it is not that the tensor's content is spread out over disconnected blocks of memory. The bytes are still allocated in one block of memory; only the order of the elements is different!

When you call contiguous(), it actually makes a copy of the tensor such that the order of its elements in memory is the same as if it had been created from scratch with the same data.
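
A minimal sketch of checking this at runtime with is_contiguous() (assuming the transpose example discussed below):

import torch

x = torch.arange(12).reshape(3, 4)     # freshly created -> contiguous
y = x.transpose(0, 1)                  # same storage, permuted strides

print(x.is_contiguous())               # True
print(y.is_contiguous())               # False
print(y.contiguous().is_contiguous())  # True: contiguous() made an ordered copy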

transpose()

permute() and transpose() are similar. transpose() can only swap two dimensions, whereas permute() can rearrange all of the dimensions at once. For example:

x = torch.rand(16, 32, 3)
y = x.transpose(0, 2)   # swap dims 0 and 2 -> shape (3, 32, 16)

z = x.permute(2, 1, 0)  # reorder all dims at once -> shape (3, 32, 16)
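
Here the two calls happen to produce the same result, which we can verify (a quick check, not part of the original snippet):

print(y.shape, z.shape)    # torch.Size([3, 32, 16]) for both
print(torch.equal(y, z))   # True: permute(2, 1, 0) swaps dims 0 and 2, like transpose(0, 2)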

permute

Returns a view of the original tensor input with its dimensions permuted.

>>> x = torch.randn(2, 3, 5)
>>> x.size()
torch.Size([2, 3, 5])
>>> torch.permute(x, (2, 0, 1)).size()
torch.Size([5, 2, 3])
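
Because permute() returns a view, the result shares storage with x and is generally no longer contiguous; a quick check on top of the docs example:

>>> y = torch.permute(x, (2, 0, 1))
>>> y.data_ptr() == x.data_ptr()
True
>>> y.is_contiguous()
False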

expand

Expanding a tensor does not allocate new memory; it only creates a new view in which a dimension of size one is expanded to a larger size by setting its stride to 0. More than one element of an expanded tensor may therefore refer to a single memory location. As a result, in-place operations (especially vectorized ones) may result in incorrect behavior. If you need to write to the tensors, please clone them first.

>>> x = torch.tensor([[1], [2], [3]])
>>> x.size()
torch.Size([3, 1])
>>> x.expand(3, 4)
tensor([[ 1,  1,  1,  1],
        [ 2,  2,  2,  2],
        [ 3,  3,  3,  3]])
>>> x.expand(-1, 4)   # -1 means not changing the size of that dimension
tensor([[ 1,  1,  1,  1],
        [ 2,  2,  2,  2],
        [ 3,  3,  3,  3]])
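
To make the warning concrete: the expanded view has stride 0 along the new columns, so all of them alias the same element; clone() gives an independent copy that is safe to write (a sketch beyond the docs example):

>>> y = x.expand(3, 4)
>>> y.stride()
(1, 0)
>>> y_safe = y.clone()     # independent copy
>>> y_safe[0, 0] = 100     # does not touch x
>>> x
tensor([[1],
        [2],
        [3]])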

Difference Between view() and reshape()

1/ view(): Does NOT make a copy of the original tensor. It only changes the dimensional interpretation (the strides) over the original data. In other words, it shares the same chunk of data with the original tensor, so it ONLY works on contiguous data.

2/ reshape(): Returns a view when possible (i.e., when the data is contiguous). If not (i.e., when the data is not contiguous), it copies the data into a contiguous chunk; being a copy, it takes up extra memory, and changes in the new tensor do not affect the values in the original tensor.

With contiguous data, reshape() returns a view.

When the data is contiguous

x = torch.arange(1,13)
x
>> tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12])

reshape() returns a view with the new dimensions:

y = x.reshape(4,3)
y
>>
tensor([[ 1,  2,  3],
        [ 4,  5,  6],
        [ 7,  8,  9],
        [10, 11, 12]])

How do we know it's a view? Because changing an element in the new tensor y affects the corresponding value in x, and vice versa:

y[0,0] = 100
y
>>
tensor([[100,   2,   3],
        [  4,   5,   6],
        [  7,   8,   9],
        [ 10,  11,  12]])
print(x)
>>
tensor([100,   2,   3,   4,   5,   6,   7,   8,   9,  10,  11,  12])
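
Another way to confirm the sharing (an extra check beyond the original example) is to compare the storage pointers:

print(x.data_ptr() == y.data_ptr())   # True: both start at the same element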

Next, let’s see how reshape() works on non-contiguous data.

# After transpose(), the data is non-contiguous
x = torch.arange(1,13).view(6,2).transpose(0,1)
x
>>
tensor([[ 1,  3,  5,  7,  9, 11],
        [ 2,  4,  6,  8, 10, 12]])
# reshape() works fine on non-contiguous data
y = x.reshape(4,3)
y
>>
tensor([[ 1,  3,  5],
        [ 7,  9, 11],
        [ 2,  4,  6],
        [ 8, 10, 12]])
# Change an element in y
y[0,0] = 100
y
>>
tensor([[100,   3,   5],
        [  7,   9,  11],
        [  2,   4,   6],
        [  8,  10,  12]])
# Check the original tensor: nothing changed, so y is a copy
x
>>
tensor([[ 1,  3,  5,  7,  9, 11],
        [ 2,  4,  6,  8, 10, 12]])
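
This time the same pointer check shows that reshape() made a copy:

print(x.data_ptr() == y.data_ptr())   # False: separate storage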

Finally, let’s see if view() can work on non-contiguous data.
No, it can’t!

# After transpose(), the data is non-contiguous
x = torch.arange(1,13).view(6,2).transpose(0,1)
x
>>
tensor([[ 1,  3,  5,  7,  9, 11],
        [ 2,  4,  6,  8, 10, 12]])
# Try to use view on the non-contiguous data
y = x.view(4,3)
y
>>
-------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
----> 1 y = x.view(4,3)
      2 y

RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
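
As the error message says, reshape() is the drop-in fix; equivalently, you can materialize a contiguous copy first and then call view() on it (a sketch):

y = x.contiguous().view(4, 3)   # same result as x.reshape(4, 3) here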

Contiguous operations

reshape() returns a view whenever it can; when it cannot, it makes a copy:

>>> a = torch.arange(4.)
>>> torch.reshape(a, (2, 2))
tensor([[ 0.,  1.],
        [ 2.,  3.]])
>>> b = torch.tensor([[0, 1], [2, 3]])
>>> torch.reshape(b, (-1,))
tensor([ 0,  1,  2,  3])

repeat() clones the data into newly allocated memory, whereas expand() just updates the strides in place without copying:

import torch
a = torch.arange(10).reshape(2,5)
# b = a.expand(4,5)  # this would fail: expand() can only enlarge size-1 dimensions, so use repeat() here
b = a.repeat(2,2)
print('b={}'.format(b))
'''
b=tensor([[0, 1, 2, 3, 4, 0, 1, 2, 3, 4],
        [5, 6, 7, 8, 9, 5, 6, 7, 8, 9],
        [0, 1, 2, 3, 4, 0, 1, 2, 3, 4],
        [5, 6, 7, 8, 9, 5, 6, 7, 8, 9]])
'''
c = torch.arange(3).reshape(1,3)
print('c={} c.stride()={}'.format(c, c.stride()))
d = c.expand(2,3)
print('d={} d.stride()={}'.format(d, d.stride()))
'''
c=tensor([[0, 1, 2]]) c.stride()=(3, 1), i.e. step 3 along dim=0 and step 1 along dim=1
d=tensor([[0, 1, 2],
        [0, 1, 2]]) d.stride()=(0, 1), i.e. step 0 along dim=0 and step 1 along dim=1
'''
# Writing into the expanded view also changes c, since they share storage
d[0][0] = 5
print('c={} d={}'.format(c, d))
'''
c=tensor([[5, 1, 2]]) d=tensor([[5, 1, 2],
        [5, 1, 2]])
'''
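
If you do need a copy-free expansion of a non-singleton tensor along a new dimension, insert a size-1 dimension first and then expand it (a sketch, not from the original post):

e = a.unsqueeze(0).expand(3, 2, 5)   # shape (3, 2, 5), still no copy
print(e.stride())                    # (0, 5, 1): stride 0 on the new dim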

repeat_interleave() places the copies of each element next to one another, whereas repeat() tiles the tensor as a whole. That is why repeat_interleave() usually takes a dim argument, while repeat() repeats over several dimensions in one call.

This is different from torch.Tensor.repeat() but similar to numpy.repeat.

>>> x = torch.tensor([1, 2, 3])
>>> x.repeat_interleave(2)
tensor([1, 1, 2, 2, 3, 3])
>>> y = torch.tensor([[1, 2], [3, 4]])
>>> torch.repeat_interleave(y, 2)
tensor([1, 1, 2, 2, 3, 3, 4, 4])
>>> torch.repeat_interleave(y, 3, dim=1)
tensor([[1, 1, 1, 2, 2, 2],
        [3, 3, 3, 4, 4, 4]])
# Repeat the first row once and the second row twice
>>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0)
tensor([[1, 2],
        [3, 4],
        [3, 4]])
>>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0, output_size=3)
tensor([[1, 2],
        [3, 4],
        [3, 4]])
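
Contrasting the two on the same 1-D tensor makes the difference plain:

>>> x = torch.tensor([1, 2, 3])
>>> x.repeat_interleave(2)   # each element repeated in place
tensor([1, 1, 2, 2, 3, 3])
>>> x.repeat(2)              # the whole tensor tiled
tensor([1, 2, 3, 1, 2, 3])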

  1. https://stackoverflow.com/questions/48915810/what-does-contiguous-do-in-pytorch
  2. https://medium.com/analytics-vidhya/pytorch-contiguous-vs-non-contiguous-tensor-view-understanding-view-reshape-73e10cdfa0dd
