Structuring Your TensorFlow Models

Defining your models in TensorFlow can easily result in one huge wall of code. How do you structure your code in a readable and reusable way? For the impatient among you, here is the link to a working example gist. You might also want to take a look at my new post on fast prototyping in TensorFlow, which builds on the idea described here.


Defining the Compute Graph

It’s sensible to start with one class per model. What is the interface of that class? Usually, your model connects to some input data and target placeholders and provides operations for training, evaluation, and inference.


class Model:

    def __init__(self, data, target):
        data_size = int(data.get_shape()[1])
        target_size = int(target.get_shape()[1])
        weight = tf.Variable(tf.truncated_normal([data_size, target_size]))
        bias = tf.Variable(tf.constant(0.1, shape=[target_size]))
        incoming = tf.matmul(data, weight) + bias
        self._prediction = tf.nn.softmax(incoming)
        cross_entropy = -tf.reduce_sum(target * tf.log(self._prediction))
        self._optimize = tf.train.RMSPropOptimizer(0.03).minimize(cross_entropy)
        mistakes = tf.not_equal(
            tf.argmax(target, 1), tf.argmax(self._prediction, 1))
        self._error = tf.reduce_mean(tf.cast(mistakes, tf.float32))

    @property
    def prediction(self):
        return self._prediction

    @property
    def optimize(self):
        return self._optimize

    @property
    def error(self):
        return self._error

This is basically how models are defined in the TensorFlow codebase. However, there are some problems with it. Most notably, the whole graph is defined in a single function, the constructor. This is neither particularly readable nor reusable.


Using Properties

Just splitting the code into functions doesn’t work, since every time the functions are called, the graph would be extended by new code. Therefore, we have to ensure that the operations are added to the graph only when the function is called for the first time. This is basically lazy-loading.


class Model:

    def __init__(self, data, target):
        self.data = data
        self.target = target
        self._prediction = None
        self._optimize = None
        self._error = None

    @property
    def prediction(self):
        if self._prediction is None:
            data_size = int(self.data.get_shape()[1])
            target_size = int(self.target.get_shape()[1])
            weight = tf.Variable(tf.truncated_normal([data_size, target_size]))
            bias = tf.Variable(tf.constant(0.1, shape=[target_size]))
            incoming = tf.matmul(self.data, weight) + bias
            self._prediction = tf.nn.softmax(incoming)
        return self._prediction

    @property
    def optimize(self):
        if self._optimize is None:
            cross_entropy = -tf.reduce_sum(self.target * tf.log(self.prediction))
            optimizer = tf.train.RMSPropOptimizer(0.03)
            self._optimize = optimizer.minimize(cross_entropy)
        return self._optimize

    @property
    def error(self):
        if self._error is None:
            mistakes = tf.not_equal(
                tf.argmax(self.target, 1), tf.argmax(self.prediction, 1))
            self._error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
        return self._error

This is much better than the first example. Your code now is structured into functions that you can focus on individually. However, the code is still a bit bloated due to the lazy-loading logic. Let’s see how we can improve on that.


Lazy Property Decorator

Python is a quite flexible language. So let me show you how to strip out the redundant code from the last example. We will use a decorator that behaves like @property but only evaluates the function once. It stores the result in a member named after the decorated function (prepended with a prefix) and returns this value on any subsequent calls. If you haven’t used custom decorators yet, you might also want to take a look at this guide.


import functools

def lazy_property(function):
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def decorator(self):
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return decorator
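A quick sanity check, outside TensorFlow, shows the caching behavior. The Counter class and its call counter here are only for illustration, not part of the model code:

```python
import functools

def lazy_property(function):
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def decorator(self):
        # Compute and store the value on first access only.
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return decorator

class Counter:
    calls = 0  # counts how often the property body actually runs

    @lazy_property
    def value(self):
        Counter.calls += 1
        return 42

c = Counter()
print(c.value, c.value, Counter.calls)  # → 42 42 1
```

Although the property is accessed twice, the decorated function body runs only once; the second access hits the cached attribute.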

Using this decorator, our example simplifies to the code below.


class Model:

    def __init__(self, data, target):
        self.data = data
        self.target = target
        self.prediction
        self.optimize
        self.error

    @lazy_property
    def prediction(self):
        data_size = int(self.data.get_shape()[1])
        target_size = int(self.target.get_shape()[1])
        weight = tf.Variable(tf.truncated_normal([data_size, target_size]))
        bias = tf.Variable(tf.constant(0.1, shape=[target_size]))
        incoming = tf.matmul(self.data, weight) + bias
        return tf.nn.softmax(incoming)

    @lazy_property
    def optimize(self):
        cross_entropy = -tf.reduce_sum(self.target * tf.log(self.prediction))
        optimizer = tf.train.RMSPropOptimizer(0.03)
        return optimizer.minimize(cross_entropy)

    @lazy_property
    def error(self):
        mistakes = tf.not_equal(
            tf.argmax(self.target, 1), tf.argmax(self.prediction, 1))
        return tf.reduce_mean(tf.cast(mistakes, tf.float32))

Note that we access the properties in the constructor. This way, the full graph is ensured to be defined by the time we run tf.initialize_variables().

Organizing the Graph with Scopes

We now have a clean way to define models in code, but the resulting computation graphs are still crowded. If you visualized the graph, it would contain a lot of interconnected small nodes. The solution is to wrap the content of each function in a with tf.name_scope('name') or with tf.variable_scope('name') block. Nodes would then be grouped together in the graph. We can adjust our previous decorator to do that automatically:


import functools

def define_scope(function):
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def decorator(self):
        if not hasattr(self, attribute):
            with tf.variable_scope(function.__name__):
                setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return decorator

I gave the decorator a new name since it has functionality specific to TensorFlow in addition to the lazy caching. Other than that, the model looks identical to the previous one.


We could go even further and enable the @define_scope decorator to forward arguments to tf.variable_scope(), for example to define a default initializer for the scope. If you are interested in this, check out the full example I put together.

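One way this could look is a small helper that lets the decorator be applied with or without arguments. The sketch below keeps the tf.variable_scope call as a comment so it runs without TensorFlow; the doublewrap name, the scope keyword, and the string return values are illustrative assumptions, not the exact code from the full example:

```python
import functools

def doublewrap(decorator):
    # Lets a decorator be used as @name, @name(), or @name(key=value).
    @functools.wraps(decorator)
    def wrapper(*args, **kwargs):
        if len(args) == 1 and not kwargs and callable(args[0]):
            return decorator(args[0])          # used bare: @name
        return lambda function: decorator(function, *args, **kwargs)
    return wrapper

@doublewrap
def define_scope(function, scope=None, **scope_kwargs):
    attribute = '_cache_' + function.__name__
    name = scope or function.__name__  # scope name, overridable per property

    @property
    @functools.wraps(function)
    def decorator(self):
        if not hasattr(self, attribute):
            # In the real model this would wrap the body, e.g.:
            #   with tf.variable_scope(name, **scope_kwargs):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return decorator

class Model:
    @define_scope                    # bare form
    def prediction(self):
        return 'prediction-op'

    @define_scope(scope='training')  # parameterized form
    def optimize(self):
        return 'optimize-op'

m = Model()
print(m.prediction, m.optimize)  # → prediction-op optimize-op
```

Both decoration forms resolve to the same cached property; the parameterized form simply forwards its keyword arguments on to the scope.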

We can now define models in a structured and compact way that results in organized computation graphs. This works well for me. If you have any suggestions or questions, feel free to use the comment section.


Updated 2018-06-26: Added a link to my post on prototyping in TensorFlow, which introduces an improved version of the decorator idea introduced here.


Reposted from: Structuring Your TensorFlow Models

    Original author: ohyes
    Original article: https://zhuanlan.zhihu.com/p/42196141
    This article is reposted from the web to share knowledge; if there is any infringement, please contact the blogger for removal.