python – TensorFlow vs NumPy math functions

Are there any real differences between the math functions implemented by NumPy and TensorFlow? For example, the exponential function, or the maximum function?

The only difference I've noticed is that TensorFlow expects tensors as input instead of NumPy arrays.
Is that the only difference, and is there no difference at all in the values the functions return?

Best answer: As already mentioned, there is a performance difference. TensorFlow's advantage is that it can run on either the CPU or the GPU, so if you have a CUDA-capable GPU, chances are TensorFlow will be much faster. You can find several benchmarks online for different comparisons, often also including other packages such as Numba or Theano.
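
As a rough illustration of how such a comparison could be timed, here is a minimal sketch (not one of the benchmarks mentioned above); it assumes the same TensorFlow 1.x Session API as the snippet further down and only times a single element-wise op, so graph setup, device placement and data transfer are deliberately left out:

# Minimal, non-rigorous timing sketch for one element-wise op (assumes TF 1.x)
import timeit

import numpy as np
import tensorflow as tf

x = np.random.rand(1000000).astype(np.float32)
x_tf = tf.constant(x)
exp_tf = tf.exp(x_tf)

with tf.Session() as sess:
    sess.run(exp_tf)  # warm-up run so one-time setup is not timed
    t_np = timeit.timeit(lambda: np.exp(x), number=100)
    t_tf = timeit.timeit(lambda: sess.run(exp_tf), number=100)

print("NumPy:      %.4f s" % t_np)
print("TensorFlow: %.4f s" % t_tf)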

However, I think you are really asking whether NumPy and TensorFlow operations are exactly equivalent. The answer is basically yes, in the sense that the operations mean the same thing. But since they are completely independent libraries with different implementations of everything, you will find small differences in the results. Take this code, for example (TensorFlow 1.2.0, NumPy 1.13.1):

# Force TensorFlow to run on CPU only
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import numpy as np
import tensorflow as tf

# float32 NumPy array
a = np.arange(100, dtype=np.float32)
# The same array with the same dtype in TensorFlow
a_tf = tf.constant(a, dtype=tf.float32)
# Square root with NumPy
sqrt = np.sqrt(a)
# Square root with TensorFlow
with tf.Session() as sess:
    sqrt_tf = sess.run(tf.sqrt(a_tf))

You could expect to get pretty much the same output from both; I mean, a square root doesn't sound like a terribly complicated operation. However, printing these arrays on my computer I get:

print(sqrt)
>>> array([ 0.        ,  1.        ,  1.41421354,  1.73205078,  2.        ,
            2.23606801,  2.44948983,  2.64575124,  2.82842708,  3.        ,
            3.1622777 ,  3.31662488,  3.46410155,  3.60555124,  3.7416575 ,
            3.87298346,  4.        ,  4.12310553,  4.2426405 ,  4.35889912,
            4.47213602,  4.5825758 ,  4.69041586,  4.79583168,  4.89897966,
            5.        ,  5.09901953,  5.19615221,  5.29150248,  5.38516474,
            5.47722578,  5.56776428,  5.65685415,  5.74456263,  5.83095169,
            5.91608   ,  6.        ,  6.08276272,  6.16441393,  6.24499798,
            6.3245554 ,  6.40312433,  6.48074055,  6.55743837,  6.63324976,
            6.70820379,  6.78233004,  6.85565472,  6.92820311,  7.        ,
            7.07106781,  7.14142847,  7.21110249,  7.28010988,  7.34846926,
            7.41619825,  7.48331499,  7.54983425,  7.6157732 ,  7.68114567,
            7.74596691,  7.81024981,  7.8740077 ,  7.93725395,  8.        ,
            8.06225777,  8.1240387 ,  8.18535233,  8.24621105,  8.30662346,
            8.36660004,  8.42614937,  8.48528099,  8.54400349,  8.60232544,
            8.66025448,  8.71779823,  8.77496433,  8.83176041,  8.88819408,
            8.94427204,  9.        ,  9.05538559,  9.11043358,  9.1651516 ,
            9.21954441,  9.2736187 ,  9.32737923,  9.38083172,  9.43398094,
            9.48683262,  9.53939247,  9.59166336,  9.64365101,  9.69536018,
            9.7467947 ,  9.79795933,  9.84885788,  9.89949512,  9.94987392], dtype=float32)

print(sqrt_tf)
>>> array([ 0.        ,  0.99999994,  1.41421342,  1.73205078,  1.99999988,
            2.23606801,  2.44948959,  2.64575124,  2.82842684,  2.99999976,
            3.1622777 ,  3.31662488,  3.46410155,  3.60555077,  3.74165726,
            3.87298322,  3.99999976,  4.12310553,  4.2426405 ,  4.35889864,
            4.47213602,  4.58257532,  4.69041538,  4.79583073,  4.89897919,
            5.        ,  5.09901857,  5.19615221,  5.29150248,  5.38516474,
            5.47722483,  5.56776428,  5.65685368,  5.74456215,  5.83095121,
            5.91607952,  5.99999952,  6.08276224,  6.16441393,  6.24499846,
            6.3245554 ,  6.40312433,  6.48074055,  6.5574379 ,  6.63324976,
            6.70820427,  6.78233004,  6.85565472,  6.92820311,  6.99999952,
            7.07106733,  7.14142799,  7.21110153,  7.28010893,  7.34846973,
            7.41619825,  7.48331451,  7.54983425,  7.61577368,  7.68114567,
            7.74596643,  7.81025028,  7.8740077 ,  7.93725395,  7.99999952,
            8.06225681,  8.12403774,  8.18535233,  8.24621105,  8.30662346,
            8.36660004,  8.42614937,  8.48528099,  8.54400253,  8.60232449,
            8.66025352,  8.71779728,  8.77496433,  8.83176041,  8.88819408,
            8.94427204,  8.99999905,  9.05538464,  9.11043262,  9.16515064,
            9.21954441,  9.27361774,  9.32737923,  9.38083076,  9.43398094,
            9.48683357,  9.53939152,  9.59166145,  9.64365005,  9.69535923,
            9.7467947 ,  9.79795837,  9.84885788,  9.89949417,  9.94987392], dtype=float32)

So, okay, they are similar, but there are obvious differences. TensorFlow couldn't even get the square roots of 1, 4 or 9 exactly right, for example. And you would probably get yet another result if you ran it on a GPU (because the GPU kernels differ from the CPU kernels, and because of the dependence on the CUDA routines implemented by NVIDIA, yet another player in this field).
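
If you want to put a number on the gap, a short check like the one below (a sketch that reuses the sqrt and sqrt_tf arrays from the snippet above) does the job; for the values printed above, the differences are on the order of 1e-6, which is well within the default tolerances of np.allclose:

# Quantify the discrepancy between the NumPy and TensorFlow results above
diff = np.abs(sqrt - sqrt_tf)
print("max abs difference:", diff.max())
print("max rel difference:", (diff[1:] / sqrt[1:]).max())  # skip the zero entry
print("allclose (default tolerances):", np.allclose(sqrt, sqrt_tf))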

My impression (although I may be wrong) is that TensorFlow is more willing to sacrifice a bit of precision in exchange for performance (which would make sense considering its typical use cases). I have even seen some more complicated operations produce (very slightly) different results just from running them twice on the same hardware, probably because an unspecified ordering in aggregation and averaging operations leads to different rounding errors (I typically work in float32, so I guess that is a factor too).
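
One way to check how much of this comes down to float32 rounding (an assumption on my part, not something verified above) is to repeat the comparison in float64; the expectation is that the differences shrink considerably, although bitwise agreement is still not guaranteed:

# Repeat the sqrt comparison in float64 (expectation: smaller differences; not guaranteed)
a64 = np.arange(100, dtype=np.float64)
sqrt64 = np.sqrt(a64)
with tf.Session() as sess:
    sqrt64_tf = sess.run(tf.sqrt(tf.constant(a64, dtype=tf.float64)))
print("max abs difference (float64):", np.abs(sqrt64 - sqrt64_tf).max())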
