Running the TensorFlow MNIST Example (Handwritten Digit Recognition with Softmax)

You first need a working build environment with TensorFlow compiled and installed correctly; see http://www.cnblogs.com/dyufei/p/8027517.html for the environment setup.

I. Running MNIST

1) Download the training data

Download all four archives from http://yann.lecun.com/exdb/mnist/, create a directory named MNIST_data under the directory where the code below will run, and put the four archives inside:

train-images-idx3-ubyte.gz: training set images (9912422 bytes)
train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)

You can also skip the manual download, as long as the machine running TensorFlow can reach the download site; if that causes problems, see [Issue 1)] for the fix.
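If you do download the archives by hand, a quick check like the following confirms they are in the right place before training starts (a minimal sketch; the MNIST_data directory and file names are the ones listed above):

import os

# Directory expected by mnist.py, created next to the script.
data_dir = 'MNIST_data'
expected = [
    'train-images-idx3-ubyte.gz',
    'train-labels-idx1-ubyte.gz',
    't10k-images-idx3-ubyte.gz',
    't10k-labels-idx1-ubyte.gz',
]

missing = [f for f in expected if not os.path.isfile(os.path.join(data_dir, f))]
if missing:
    print('Missing files (read_data_sets will try to download them): %s' % missing)
else:
    print('All four MNIST archives are in place.')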

2) The MNIST code

A: The older version (from the official tutorial)

https://tensorflow.google.cn/get_started/mnist/beginners
Chinese version: http://www.tensorfly.cn/tfdoc/tutorials/mnist_beginners.html

The full code, mnist.py:

import input_data
import tensorflow as tf

FLAGS = None

# Load MNIST (downloads into MNIST_data/ if the files are not already there).
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# Model: a single softmax (multinomial logistic regression) layer.
x = tf.placeholder("float", [None, 784])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

y = tf.nn.softmax(tf.matmul(x, w) + b)

# Loss: cross-entropy between the predictions y and the one-hot labels y_.
y_ = tf.placeholder("float", [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

init = tf.initialize_all_variables()  # deprecated in later releases; use tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Train with mini-batches of 100 examples.
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Evaluate on the test set.
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

input_data.py

# input_data.py is only a thin compatibility wrapper: it re-exports read_data_sets
# from tensorflow.contrib.learn, which downloads and parses the MNIST files.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import gzip
import os
import tempfile

import numpy
from six.moves import urllib
from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
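As a quick sanity check of what read_data_sets returns, the following sketch prints the dataset shapes (illustrative only; the 55000/5000/10000 train/validation/test split is this loader's default):

from input_data import read_data_sets

mnist = read_data_sets('MNIST_data', one_hot=True)

# Images are flattened 28x28 = 784-dimensional float vectors,
# labels are one-hot 10-dimensional vectors.
print(mnist.train.images.shape)       # (55000, 784)
print(mnist.train.labels.shape)       # (55000, 10)
print(mnist.validation.images.shape)  # (5000, 784)
print(mnist.test.images.shape)        # (10000, 784)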

Run:

python mnist.py
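The script prints a single number, the test-set accuracy, which for this plain softmax model is typically around 0.91-0.92 (the official tutorial reports about 92%). To make the two key operations in mnist.py concrete, here is a small NumPy sketch (illustrative only, not part of the tutorial code) of what tf.nn.softmax and the cross-entropy loss compute for one example:

import numpy as np

# Raw scores (logits) for one image over the 10 digit classes.
logits = np.array([2.0, 1.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

# softmax: exponentiate and normalize so the outputs form a probability distribution.
probs = np.exp(logits) / np.sum(np.exp(logits))
print(probs.sum())          # 1.0

# One-hot label saying the true digit is class 0.
label = np.zeros(10)
label[0] = 1.0

# Cross-entropy: -sum(y_ * log(y)); only the log-probability of the true class contributes.
cross_entropy = -np.sum(label * np.log(probs))
print(cross_entropy)        # equals -log(probs[0])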

B: The newer version, mnist_softmax.py

The input_data.py file is the same as above; only mnist_softmax.py differs.
In the TensorFlow source tree it is located at:

tensorflow/tensorflow/examples/tutorials/mnist/mnist_softmax.py

Full code:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import os
import sys

# Suppress TensorFlow's informational C++ log output (keep warnings and errors).
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

FLAGS = None


def main(_):
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

  # Create the model
  x = tf.placeholder(tf.float32, [None, 784])
  W = tf.Variable(tf.zeros([784, 10]))
  b = tf.Variable(tf.zeros([10]))
  y = tf.matmul(x, W) + b

  # Define loss and optimizer
  y_ = tf.placeholder(tf.float32, [None, 10])

  # The raw formulation of cross-entropy,
  #
  #   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),
  #                                 reduction_indices=[1]))
  #
  # can be numerically unstable.
  #
  # So here we use tf.nn.softmax_cross_entropy_with_logits on the raw
  # outputs of 'y', and then average across the batch.
  cross_entropy = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
  train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

  sess = tf.InteractiveSession()
  tf.global_variables_initializer().run()
  # Train
  for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

  # Test trained model
  correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                      y_: mnist.test.labels}))

if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--data_dir', type=str, default='/tmp/tensorflow/mnist/input_data',
                      help='Directory for storing input data')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
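The comment in the code above notes that the naive cross-entropy formula is numerically unstable. A small NumPy sketch (illustrative only) shows the failure mode and the log-sum-exp fix, which is essentially what softmax_cross_entropy_with_logits does internally:

import numpy as np

# Logits with one very large score: the naive formula overflows.
logits = np.array([1000.0, 0.0])
label = np.array([0.0, 1.0])   # true class is the second one

# Naive: softmax then log. exp(1000) overflows to inf, so the loss becomes nan.
naive_probs = np.exp(logits) / np.sum(np.exp(logits))
naive_loss = -np.sum(label * np.log(naive_probs))
print(naive_loss)   # nan (plus overflow/invalid-value warnings)

# Stable: compute log-softmax with the log-sum-exp trick (shift by the max logit).
shifted = logits - np.max(logits)
log_probs = shifted - np.log(np.sum(np.exp(shifted)))
stable_loss = -np.sum(label * log_probs)
print(stable_loss)  # 1000.0, the correct finite value of -log(softmax(logits)[1])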

mnist_softmax.py uses a different default data path (/tmp/tensorflow/mnist/input_data), so copy the training data over first:

mkdir -p /tmp/tensorflow/mnist/input_data
cp MNIST_data/*.gz /tmp/tensorflow/mnist/input_data/

Run:

python mnist_softmax.py
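Alternatively, since the script defines a --data_dir flag (see the argparse block above), you can skip the copy and point it at the existing directory:

python mnist_softmax.py --data_dir ./MNIST_data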

Original article: https://www.cnblogs.com/dyufei/p/8041288.html