How to check which GPU Keras or TensorFlow is using, and how to view GPU info with PyTorch

 

Check which GPUs Keras can see

from keras import backend as K
# List the GPUs visible to the Keras TensorFlow backend (TF 1.x, private API)
K.tensorflow_backend._get_available_gpus()

output:

['/job:localhost/replica:0/task:0/device:GPU:0']
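
Note that _get_available_gpus() is a private helper of standalone Keras on the TF 1.x backend. If you are on TensorFlow 2.x / tf.keras (an assumption, not the setup the original post used), the public equivalent is roughly:

import tensorflow as tf

# TF 2.x public API: list the physical GPUs TensorFlow can see
print(tf.config.list_physical_devices('GPU'))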

Check more detailed device information

from tensorflow.python.client import device_lib
import tensorflow as tf

# List all local devices (CPU, GPU, XLA) and check whether this TF build has CUDA support
print(device_lib.list_local_devices())
print(tf.test.is_built_with_cuda())

output:

 
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 9443078288448539683
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 14589028880023685106
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 12944586764183584921
physical_device_desc: "device: XLA_GPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 8365150044
locality {
  bus_id: 1
  links {
  }
}
incarnation: 8725535454902618392
physical_device_desc: "device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0"
]
True
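
If you only need the GPU entries from that list, the device protos printed above (name, device_type, memory_limit, physical_device_desc) can be filtered directly; a small sketch using the same device_lib call:

from tensorflow.python.client import device_lib

# Keep only GPU devices and print the fields shown in the output above
gpus = [d for d in device_lib.list_local_devices() if d.device_type == 'GPU']
for d in gpus:
    print(d.name, d.memory_limit, d.physical_device_desc)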

Check the GPU currently in use

import tensorflow as tf

print(tf.__version__)
# tf.test.gpu_device_name() returns '' when no GPU is available
if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("Please install GPU version of TF")
 
 

output: 

1.13.1
Default GPU Device: /device:GPU:0

If you are using PyTorch, see:
How to view GPU info with PyTorch
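
For quick reference, a minimal sketch of the standard torch.cuda checks (assuming a CUDA-enabled PyTorch build is installed):

import torch

# Whether PyTorch can see a CUDA-capable GPU at all
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # Number of visible GPUs, the current device index, and its name
    print(torch.cuda.device_count())
    print(torch.cuda.current_device())
    print(torch.cuda.get_device_name(0))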

 

    Original author: tensorflow
    Original link: https://www.cnblogs.com/mashuai-191/p/11028448.html
    This article was reposted from the web for knowledge-sharing purposes only; if it infringes any rights, please contact the blogger to have it removed.