TensorFlow's Three Storage Formats, Part 2 (pb & SavedModel)

2. GraphDef (*.pb)

This file format contains the serialized data of a protobuf object. It holds the computation graph, from which the details of all operators can be recovered, and it also contains tensors. There are two kinds of pb files:

1) Files containing all variables, where every variable has been converted to a tf.constant and frozen into a single file together with the graph;

2) Files that do not contain variable values, from which only the computation graph can be restored; the trained weights and parameters must be restored from a ckpt file.

The following code shows the process of saving a pb file:

import tensorflow as tf
import os
from tensorflow.python.framework import graph_util

pb_file_path = os.getcwd()

with tf.Session(graph=tf.Graph()) as sess:
    x = tf.placeholder(tf.int32, name='x')
    y = tf.placeholder(tf.int32, name='y')
    b = tf.Variable(1, name='b')
    xy = tf.multiply(x, y)
    # The output op needs an explicit name attribute
    op = tf.add(xy, b, name='op_to_store')

    sess.run(tf.global_variables_initializer())

    # convert_variables_to_constants takes output_node_names as a list; several names are allowed
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['op_to_store'])

    # Test the op
    feed_dict = {x: 10, y: 3}
    print(sess.run(op, feed_dict))

    # Write the serialized PB file (os.path.join avoids a missing path separator)
    with tf.gfile.FastGFile(os.path.join(pb_file_path, 'model.pb'), mode='wb') as f:
        f.write(constant_graph.SerializeToString())

    # Output:
    # INFO:tensorflow:Froze 1 variables.
    # Converted 1 variables to const ops.
    # 31

The following code shows how to rebuild a computation graph from a *.pb file:

model_path = 'XXX.pb'
with tf.gfile.GFile(model_path, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='XXX')
graph = tf.get_default_graph()
input_images = graph.get_tensor_by_name('XXX/image_tensor:0')
output_num_boxes = graph.get_tensor_by_name('XXX/num_boxes:0')
output_scores = graph.get_tensor_by_name('XXX/scores:0')
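
With the graph rebuilt, inference runs against the restored tensors. A minimal sketch, assuming the tensor names above and a model that takes a uint8 image batch (the dummy input here is purely illustrative):

import numpy as np

with tf.Session(graph=graph) as sess:
    # Hypothetical input: one random 640x480 RGB image with a batch dimension
    dummy_image = np.random.randint(0, 255, size=(1, 480, 640, 3), dtype=np.uint8)
    num_boxes, scores = sess.run([output_num_boxes, output_scores],
                                 feed_dict={input_images: dummy_image})
    print(num_boxes, scores)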

Some TensorFlow example programs use *.pb files as pretrained models. This differs slightly from the GraphDef format above: it is a frozen GraphDef file, called the FrozenGraphDef format for short. Files in this format contain no Variable nodes. Converting all Variable nodes in a GraphDef to constants (whose values are taken from a checkpoint) yields the FrozenGraphDef format. For reference code, see tensorflow/python/tools/freeze_graph.py
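
freeze_graph.py can also be invoked from the command line. A rough sketch (the graph, checkpoint, and output file names here are placeholders for your own files):

python -m tensorflow.python.tools.freeze_graph \
  --input_graph=graph.pbtxt \
  --input_checkpoint=model.ckpt \
  --output_node_names=op_to_store \
  --output_graph=frozen_model.pb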

*.pb is a binary file. protobuf actually also supports a text format (*.pbtxt), but when weights are included, the text format takes up a great deal of disk space, so it is rarely used.
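
For the text format, tf.train.write_graph can dump the current graph as *.pbtxt. A minimal sketch inside a session (reusing the sess from the save example above):

# as_text=True writes *.pbtxt; as_text=False writes the binary *.pb
tf.train.write_graph(sess.graph_def, './export', 'graph.pbtxt', as_text=True)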

3. SavedModel

This is the model format used with TensorFlow Serving. It combines a GraphDef and a CheckPoint, plus a SignatureDef that marks the model's input and output parameters. Both the GraphDef and the CheckPoint objects can be extracted from a SavedModel.

The model's directory structure on disk looks like this:

└── 1
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index

That is, a pb file plus a variables directory (an .index file and .data shards). Note: if the model was converted from a pb file, the variables directory will be empty, because all parameters inside the pb file are already tf.constant, so nothing is stored under variables.
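
Loading a SavedModel back into a session goes through tf.saved_model.loader. A minimal sketch, assuming export_dir points at one version directory such as ./1 above:

import tensorflow as tf

export_dir = './1'  # hypothetical path to one model version
with tf.Session(graph=tf.Graph()) as sess:
    # The tags must match those used at export time (SERVING here)
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    graph = sess.graph  # tensors can now be fetched by name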

3.1 Model export:

  • tf.train.Saver()

Used to save and restore Variables. It makes it very convenient to save the current model's variables or load previously trained ones. A minimal example (a dummy variable is added here so that Saver has something to track):

v = tf.Variable(0, name='v')  # Saver needs at least one variable to track
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Save the variables to disk.
    saver.save(sess, "/tmp/test.ckpt")
    # Restore variables from disk.
    saver.restore(sess, "/tmp/test.ckpt")
  • tf.contrib.session_bundle.exporter.Exporter

The basic usage of Exporter is:

1) Pass in a Saver instance;

2) Call init to define the model's graph and its inputs/outputs;

3) Use the Exporter to export the model.

(The example below uses tf.saved_model.builder.SavedModelBuilder, the newer API that serves the same purpose as Exporter.)

import tensorflow as tf
import tensorflow.contrib.slim as slim

def build_and_saved_wdl():
  # 1) Load the pb file and build the graph
  model_path = './model.pb'
  checkpoint_path = './checkpoints'  # example: directory holding the trained ckpt files
  export_path = './export/1'         # example: output directory for the SavedModel (one version)

  with tf.gfile.GFile(model_path, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

  graph = tf.Graph()
  with graph.as_default():
    sess = tf.Session()
    tf.import_graph_def(graph_def, name='import')
    # Recover the tensors we need by name
    input_images = graph.get_tensor_by_name('import/image_tensor:0')
    output_result = graph.get_tensor_by_name('import/num_boxes:0')

    # 2) Restore the trained weights from the checkpoint
    variables_to_restore = slim.get_variables_to_restore()
    saver = tf.train.Saver(variables_to_restore)
    saver.restore(sess, tf.train.latest_checkpoint(checkpoint_path))

    print('Exporting trained model to', export_path)
    # 3) Build a SavedModelBuilder and specify the model output path
    builder = tf.saved_model.builder.SavedModelBuilder(export_path)
    # Declare the model's inputs and outputs
    tensor_info_input = tf.saved_model.utils.build_tensor_info(input_images)
    tensor_info_output = tf.saved_model.utils.build_tensor_info(output_result)
    # Define the signature
    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'images': tensor_info_input},
            outputs={'result': tensor_info_output},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={'predict_images': prediction_signature})
    builder.save()
    print('Done exporting!')
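
After export, the signatures can be sanity-checked with the saved_model_cli tool that ships with TensorFlow (./export/1 is the hypothetical export directory from above):

saved_model_cli show --dir ./export/1 --all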

4. Extension: deploying a TensorFlow Serving service

1) Pull the docker image that comes with TensorFlow Serving. With it we can create a docker container on our server that has TensorFlow Serving installed; the container can be thought of as a virtual machine. Note that the pulled image is not placed in the current directory but in docker's default storage path.

$ docker pull tensorflow/serving

2) Create a container instance and set up the communication port:

# -p: set up the communication port
# -v: map the exported tensorflow saved model path into the container
# -e MODEL_NAME: declare the model name, used as the key for later inference requests
# stdout/stderr are redirected to log files to make debugging easier
sudo docker run -t --rm -p 8500:8500 \
  -v "/home/zhangdaqu/serving/models/serving_model:/models/FaceBoxes" \
  -e MODEL_NAME=FaceBoxes \
  tensorflow/serving 1>out1.txt 2>out2.txt &

--mount: indicates a mount is to be performed (the key=value alternative to -v; see the sketch after this list)

source: the model directory to run and deploy, i.e. the source of the mount; this is the model directory on the host machine

target: the target position of the mount, i.e. the directory inside the docker container it is mounted to

-t: allocates a pseudo-terminal for the container

-p: specifies the host-to-container port mapping

docker run: starts the container and launches the model service (how the model service inside the container gets started at the same time is not entirely clear to me)

Putting it together: the example model in the source directory is mounted into the target directory of the docker container, and the service is started.
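
For reference, a sketch of the same command written with --mount instead of -v (same host path and model name as above):

sudo docker run -t --rm -p 8500:8500 \
  --mount type=bind,source=/home/zhangdaqu/serving/models/serving_model,target=/models/FaceBoxes \
  -e MODEL_NAME=FaceBoxes \
  tensorflow/serving 1>out1.txt 2>out2.txt &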

3) Write the client-side request code (how to run inference):

import time

import cv2
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc


def do_inference(hostport):
  """Sends a PredictRequest to the PredictionService.

  Args:
    hostport: Host:port address of the PredictionService.
  """
  # These keys must match the SignatureDef the model was exported with
  signature_key = 'predict_images'
  input_key = 'input'
  output_key_1 = 'output_num_boxes'
  output_key_2 = 'output_scores'
  image_path = './face2.jpg'
  image_array = cv2.imread(image_path)
  image_array = cv2.cvtColor(image_array, cv2.COLOR_BGR2RGB)

  h, w, _ = image_array.shape

  # Add a batch dimension: (h, w, 3) -> (1, h, w, 3)
  image_array = np.expand_dims(image_array, 0)

  start_time = time.time()
  channel = grpc.insecure_channel(hostport)
  stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
  request = predict_pb2.PredictRequest()
  request.model_spec.name = 'FaceBoxes'  # must match MODEL_NAME given to the container
  request.model_spec.signature_name = signature_key
  request.inputs[input_key].CopyFrom(
      tf.contrib.util.make_tensor_proto(image_array, shape=[1, h, w, 3]))
  response = stub.Predict(request, 10.0)  # 10-second timeout
  results = {}
  for key in response.outputs:
    tensor_proto = response.outputs[key]
    nd_array = tf.contrib.util.make_ndarray(tensor_proto)
    results[key] = nd_array
    print(key, nd_array)
  print("cost %ss to predict: " % (time.time() - start_time))
  print(results[output_key_1])
  print(results[output_key_2])
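
To call it, pass the host:port that was mapped when the container was started (8500 in step 2):

if __name__ == '__main__':
  do_inference('localhost:8500')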

4) Run the client Python script on the server where TensorFlow Serving is deployed. Before running it, check that the communication port we configured earlier is not already occupied; a quick check is sketched below.
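
A rough sketch of the port check (which tool is available depends on the machine):

$ netstat -tlnp | grep 8500    # or: lsof -i:8500

With the port free, running the script produces: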

owenliu@cv-1:~/serving/tensorflow_serving_models$ python test_serving_model.py
/home/cbgcv/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
cost 6.24359655380249s to predict:
Face numbers:
4
Scores:
[0.9880967  0.98482877 0.98189765 0.9761871 ]

    Original author: OwenLiuzZ
    Original article: https://zhuanlan.zhihu.com/p/60069860
    This article is reposted from the web for knowledge sharing only; if there is any infringement, please contact the blogger for removal.