【Tool】Keras in Practice I: Image Classification with a Convolutional Neural Network

In this post we use Keras for an image classification task: cats vs. dogs. We use only 5,000 images from the dataset, 4,500 for training and 500 for validation. We design a simple neural network in Keras with convolution, pooling, and fully connected layers, read the images straight from disk with flow_from_directory, augment the training data with ImageDataGenerator, and train for 100 epochs.
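flow_from_directory infers the class labels from subdirectory names, so the code below assumes a directory layout along these lines (the cats/dogs folder names are illustrative, not confirmed by the original post):

data/
    train/
        cats/    # cat training images
        dogs/    # dog training images
    val/
        cats/
        dogs/
    test/        # unlabeled images, used below only to fit normalization statistics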

import os
import numpy as np
from keras.models import Sequential
from keras import layers
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
# NOTE: scipy.misc.imread/imresize require SciPy < 1.2 (they were removed in
# later releases) plus Pillow; see the alternative loading loop below.
from scipy.misc import imread, imresize
import matplotlib.pyplot as plt

imgs = []
img_shape = (150, 150)
# Read 1,000 sample images so the generators can estimate
# the featurewise mean/std used for normalization.
files = os.listdir('data/test')
for img_file in files[:1000]:
    img = imread('data/test/' + img_file).astype('float32')
    img = imresize(img, img_shape)
    imgs.append(img)

imgs = np.array(imgs)
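
On newer SciPy versions where imread and imresize are gone, an equivalent loading loop can use Keras' own image utilities instead (a minimal sketch, assuming the same data layout):

from keras.preprocessing.image import load_img, img_to_array

imgs = []
for img_file in files[:1000]:
    # load_img resizes while loading and returns a PIL image
    img = load_img('data/test/' + img_file, target_size=img_shape)
    imgs.append(img_to_array(img))
imgs = np.array(imgs)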
# Training generator: featurewise normalization plus augmentation.
train_gen = ImageDataGenerator(
    # rescale=1./255,
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)
# Validation generator: normalization only, no augmentation.
val_gen = ImageDataGenerator(
    # rescale=1./255,
    featurewise_center=True,
    featurewise_std_normalization=True)

# Compute the normalization statistics from the sampled images.
train_gen.fit(imgs)
val_gen.fit(imgs)

# 4,500 training images
train_iter = train_gen.flow_from_directory('data/train', class_mode='binary',
                                           target_size=img_shape, batch_size=16)
# 500 validation images
val_iter = val_gen.flow_from_directory('data/val', class_mode='binary',
                                       target_size=img_shape, batch_size=16)
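
flow_from_directory assigns label indices alphabetically by folder name, so it is worth printing the mapping once to confirm which class is 0 and which is 1 (the cats/dogs names are again assumptions about the folder layout):

print(train_iter.class_indices)  # e.g. {'cats': 0, 'dogs': 1}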

'''
# Image generator sanity check: inspect one augmented batch.
for x_batch, y_batch in train_iter:
    print(x_batch.shape)
    print(y_batch.shape)
    plt.imshow(x_batch[0])
    plt.show()
    break
'''

# Define a naive model using the Sequential API.
model = Sequential()
model.add(layers.Conv2D(32, (3,3), activation='relu', input_shape=(150,150,3)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64, (3,3), activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(128, (3,3), activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(128, (3,3), activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
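
For reference, with 'valid' convolutions and stride-2 pooling, the feature-map shapes that model.summary() should report work out as:

# (150,150,3) -> conv -> (148,148,32) -> pool -> (74,74,32)
#             -> conv -> (72,72,64)   -> pool -> (36,36,64)
#             -> conv -> (34,34,128)  -> pool -> (17,17,128)
#             -> conv -> (15,15,128)  -> pool -> (7,7,128)
#             -> flatten -> 6272 -> dense -> 512 -> sigmoid -> 1
# ~3.45M trainable parameters, most of them in the 6272x512 Dense layer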
# Compile: binary cross-entropy loss with an RMSprop optimizer.
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4),
        metrics=['acc'])
history = model.fit_generator(
        generator=train_iter,
        steps_per_epoch=282,
        epochs=100,
        validation_data=val_iter,
        validation_steps=32
        )
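
A 100-epoch run takes a while, so it is worth saving the trained weights before plotting; the filename here is just an example:

model.save('cats_dogs_naive.h5')  # reload later with keras.models.load_model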
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1,101)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.legend()
plt.show()
# Output
Epoch 1/100
63/63 [==============================] - 34s 534ms/step - loss: 0.6882 - acc: 0.5595 - val_loss: 0.6527 - val_acc: 0.6260
Epoch 2/100
63/63 [==============================] - 29s 466ms/step - loss: 0.6607 - acc: 0.6014 - val_loss: 0.6368 - val_acc: 0.6420
Epoch 3/100
63/63 [==============================] - 29s 460ms/step - loss: 0.6455 - acc: 0.6285 - val_loss: 0.6316 - val_acc: 0.6590
Epoch 4/100
63/63 [==============================] - 29s 465ms/step - loss: 0.6398 - acc: 0.6245 - val_loss: 0.6268 - val_acc: 0.6530
Epoch 5/100
63/63 [==============================] - 29s 462ms/step - loss: 0.6247 - acc: 0.6545 - val_loss: 0.5982 - val_acc: 0.7010
Epoch 6/100
63/63 [==============================] - 29s 465ms/step - loss: 0.6078 - acc: 0.6605 - val_loss: 0.5800 - val_acc: 0.7050
Epoch 7/100
63/63 [==============================] - 29s 461ms/step - loss: 0.6056 - acc: 0.6771 - val_loss: 0.5941 - val_acc: 0.6770
Epoch 8/100
63/63 [==============================] - 30s 472ms/step - loss: 0.5882 - acc: 0.6873 - val_loss: 0.5826 - val_acc: 0.6980
Epoch 9/100
63/63 [==============================] - 29s 464ms/step - loss: 0.5923 - acc: 0.6880 - val_loss: 0.5811 - val_acc: 0.6760
Epoch 10/100
63/63 [==============================] - 29s 463ms/step - loss: 0.5763 - acc: 0.6897 - val_loss: 0.5709 - val_acc: 0.7090
Epoch 11/100
63/63 [==============================] - 29s 459ms/step - loss: 0.5704 - acc: 0.7011 - val_loss: 0.5816 - val_acc: 0.6810
Epoch 12/100
63/63 [==============================] - 29s 460ms/step - loss: 0.5655 - acc: 0.7113 - val_loss: 0.5731 - val_acc: 0.6820
Epoch 13/100
63/63 [==============================] - 29s 457ms/step - loss: 0.5554 - acc: 0.7217 - val_loss: 0.5660 - val_acc: 0.7160
Epoch 14/100
63/63 [==============================] - 29s 455ms/step - loss: 0.5528 - acc: 0.7200 - val_loss: 0.5603 - val_acc: 0.7250
Epoch 15/100
63/63 [==============================] - 29s 457ms/step - loss: 0.5565 - acc: 0.7120 - val_loss: 0.5338 - val_acc: 0.7290
Epoch 16/100
63/63 [==============================] - 29s 454ms/step - loss: 0.5499 - acc: 0.7242 - val_loss: 0.5557 - val_acc: 0.7210
Epoch 17/100
63/63 [==============================] - 29s 460ms/step - loss: 0.5476 - acc: 0.7170 - val_loss: 0.5509 - val_acc: 0.7220
Epoch 18/100
63/63 [==============================] - 29s 454ms/step - loss: 0.5394 - acc: 0.7349 - val_loss: 0.5148 - val_acc: 0.7470
Epoch 19/100
63/63 [==============================] - 29s 468ms/step - loss: 0.5315 - acc: 0.7346 - val_loss: 0.5314 - val_acc: 0.7360
Epoch 20/100
63/63 [==============================] - 29s 453ms/step - loss: 0.5297 - acc: 0.7324 - val_loss: 0.5808 - val_acc: 0.7060
Epoch 21/100
63/63 [==============================] - 29s 455ms/step - loss: 0.5280 - acc: 0.7391 - val_loss: 0.5517 - val_acc: 0.7360
Epoch 22/100
63/63 [==============================] - 29s 453ms/step - loss: 0.5176 - acc: 0.7403 - val_loss: 0.5282 - val_acc: 0.7460
Epoch 23/100
63/63 [==============================] - 29s 454ms/step - loss: 0.5170 - acc: 0.7537 - val_loss: 0.5204 - val_acc: 0.7330
Epoch 24/100
63/63 [==============================] - 28s 451ms/step - loss: 0.5199 - acc: 0.7495 - val_loss: 0.5123 - val_acc: 0.7400
Epoch 25/100
63/63 [==============================] - 28s 452ms/step - loss: 0.5136 - acc: 0.7493 - val_loss: 0.5004 - val_acc: 0.7570
Epoch 26/100
63/63 [==============================] - 29s 454ms/step - loss: 0.5016 - acc: 0.7508 - val_loss: 0.5127 - val_acc: 0.7610
Epoch 27/100
63/63 [==============================] - 28s 452ms/step - loss: 0.4972 - acc: 0.7550 - val_loss: 0.4997 - val_acc: 0.7530
Epoch 28/100
63/63 [==============================] - 28s 452ms/step - loss: 0.4928 - acc: 0.7572 - val_loss: 0.4996 - val_acc: 0.7610
Epoch 29/100
63/63 [==============================] - 28s 450ms/step - loss: 0.5042 - acc: 0.7602 - val_loss: 0.5062 - val_acc: 0.7650
Epoch 30/100
63/63 [==============================] - 29s 464ms/step - loss: 0.4904 - acc: 0.7619 - val_loss: 0.4968 - val_acc: 0.7450
Epoch 31/100
63/63 [==============================] - 29s 453ms/step - loss: 0.4670 - acc: 0.7825 - val_loss: 0.5002 - val_acc: 0.7510
Epoch 32/100
63/63 [==============================] - 29s 454ms/step - loss: 0.4915 - acc: 0.7587 - val_loss: 0.4600 - val_acc: 0.7770
Epoch 33/100
63/63 [==============================] - 28s 451ms/step - loss: 0.4697 - acc: 0.7770 - val_loss: 0.4898 - val_acc: 0.7620
Epoch 34/100
63/63 [==============================] - 28s 451ms/step - loss: 0.4845 - acc: 0.7659 - val_loss: 0.4882 - val_acc: 0.7780
Epoch 35/100
63/63 [==============================] - 28s 451ms/step - loss: 0.4804 - acc: 0.7728 - val_loss: 0.4721 - val_acc: 0.7740
Epoch 36/100
63/63 [==============================] - 28s 451ms/step - loss: 0.4814 - acc: 0.7691 - val_loss: 0.4861 - val_acc: 0.7840
Epoch 37/100
63/63 [==============================] - 29s 453ms/step - loss: 0.4674 - acc: 0.7731 - val_loss: 0.4750 - val_acc: 0.7920
Epoch 38/100
63/63 [==============================] - 29s 453ms/step - loss: 0.4761 - acc: 0.7684 - val_loss: 0.4800 - val_acc: 0.7780
Epoch 39/100
63/63 [==============================] - 28s 448ms/step - loss: 0.4630 - acc: 0.7842 - val_loss: 0.5113 - val_acc: 0.7520
Epoch 40/100
63/63 [==============================] - 28s 452ms/step - loss: 0.4565 - acc: 0.7783 - val_loss: 0.6173 - val_acc: 0.6990
Epoch 41/100
63/63 [==============================] - 29s 463ms/step - loss: 0.4553 - acc: 0.7855 - val_loss: 0.5212 - val_acc: 0.7420
Epoch 42/100
63/63 [==============================] - 28s 452ms/step - loss: 0.4589 - acc: 0.7830 - val_loss: 0.4551 - val_acc: 0.7720
Epoch 43/100
63/63 [==============================] - 29s 455ms/step - loss: 0.4534 - acc: 0.7887 - val_loss: 0.4752 - val_acc: 0.7780
Epoch 44/100
63/63 [==============================] - 28s 452ms/step - loss: 0.4515 - acc: 0.7860 - val_loss: 0.4573 - val_acc: 0.7960
Epoch 45/100
63/63 [==============================] - 28s 447ms/step - loss: 0.4477 - acc: 0.7897 - val_loss: 0.4505 - val_acc: 0.7880
Epoch 46/100
63/63 [==============================] - 28s 450ms/step - loss: 0.4427 - acc: 0.7892 - val_loss: 0.4595 - val_acc: 0.7790
Epoch 47/100
63/63 [==============================] - 28s 446ms/step - loss: 0.4370 - acc: 0.7951 - val_loss: 0.4760 - val_acc: 0.7610
Epoch 48/100
63/63 [==============================] - 29s 453ms/step - loss: 0.4414 - acc: 0.7974 - val_loss: 0.4677 - val_acc: 0.7790
Epoch 49/100
63/63 [==============================] - 28s 450ms/step - loss: 0.4381 - acc: 0.7994 - val_loss: 0.5648 - val_acc: 0.7290
Epoch 50/100
63/63 [==============================] - 28s 449ms/step - loss: 0.4360 - acc: 0.7986 - val_loss: 0.4509 - val_acc: 0.7750
Epoch 51/100
63/63 [==============================] - 28s 448ms/step - loss: 0.4281 - acc: 0.8011 - val_loss: 0.4583 - val_acc: 0.7810
Epoch 52/100
63/63 [==============================] - 29s 467ms/step - loss: 0.4253 - acc: 0.7989 - val_loss: 0.5347 - val_acc: 0.7470
Epoch 53/100
63/63 [==============================] - 28s 448ms/step - loss: 0.4241 - acc: 0.8063 - val_loss: 0.4243 - val_acc: 0.7990
Epoch 54/100
63/63 [==============================] - 28s 450ms/step - loss: 0.4381 - acc: 0.7912 - val_loss: 0.4856 - val_acc: 0.7700
Epoch 55/100
63/63 [==============================] - 28s 450ms/step - loss: 0.4082 - acc: 0.8098 - val_loss: 0.4296 - val_acc: 0.7910
Epoch 56/100
63/63 [==============================] - 28s 448ms/step - loss: 0.4202 - acc: 0.7996 - val_loss: 0.4401 - val_acc: 0.7900
Epoch 57/100
63/63 [==============================] - 28s 447ms/step - loss: 0.4153 - acc: 0.8046 - val_loss: 0.4345 - val_acc: 0.7950
Epoch 58/100
63/63 [==============================] - 28s 444ms/step - loss: 0.4080 - acc: 0.8120 - val_loss: 0.6320 - val_acc: 0.7040
Epoch 59/100
63/63 [==============================] - 28s 450ms/step - loss: 0.4155 - acc: 0.8095 - val_loss: 0.4201 - val_acc: 0.8090
Epoch 60/100
63/63 [==============================] - 28s 444ms/step - loss: 0.3953 - acc: 0.8207 - val_loss: 0.4368 - val_acc: 0.7950
Epoch 61/100
63/63 [==============================] - 28s 450ms/step - loss: 0.4050 - acc: 0.8095 - val_loss: 0.4339 - val_acc: 0.8060
Epoch 62/100
63/63 [==============================] - 28s 444ms/step - loss: 0.4055 - acc: 0.8063 - val_loss: 0.4352 - val_acc: 0.7880
Epoch 63/100
63/63 [==============================] - 29s 463ms/step - loss: 0.4061 - acc: 0.8190 - val_loss: 0.4509 - val_acc: 0.7900
Epoch 64/100
63/63 [==============================] - 28s 449ms/step - loss: 0.3895 - acc: 0.8257 - val_loss: 0.4309 - val_acc: 0.8070
Epoch 65/100
63/63 [==============================] - 28s 444ms/step - loss: 0.4039 - acc: 0.8056 - val_loss: 0.4215 - val_acc: 0.8110
Epoch 66/100
63/63 [==============================] - 28s 450ms/step - loss: 0.3932 - acc: 0.8244 - val_loss: 0.4495 - val_acc: 0.7840
Epoch 67/100
63/63 [==============================] - 28s 442ms/step - loss: 0.3803 - acc: 0.8311 - val_loss: 0.4361 - val_acc: 0.8120
Epoch 68/100
63/63 [==============================] - 28s 450ms/step - loss: 0.3823 - acc: 0.8219 - val_loss: 0.4137 - val_acc: 0.8130
Epoch 69/100
63/63 [==============================] - 28s 447ms/step - loss: 0.3840 - acc: 0.8279 - val_loss: 0.4047 - val_acc: 0.8190
Epoch 70/100
63/63 [==============================] - 28s 452ms/step - loss: 0.3736 - acc: 0.8368 - val_loss: 0.4626 - val_acc: 0.8040
Epoch 71/100
63/63 [==============================] - 28s 449ms/step - loss: 0.3823 - acc: 0.8229 - val_loss: 0.3858 - val_acc: 0.8380
Epoch 72/100
63/63 [==============================] - 28s 450ms/step - loss: 0.3731 - acc: 0.8284 - val_loss: 0.4022 - val_acc: 0.8040
Epoch 73/100
63/63 [==============================] - 28s 451ms/step - loss: 0.3821 - acc: 0.8274 - val_loss: 0.4187 - val_acc: 0.8260
Epoch 74/100
63/63 [==============================] - 29s 462ms/step - loss: 0.3640 - acc: 0.8323 - val_loss: 0.4059 - val_acc: 0.8140
Epoch 75/100
63/63 [==============================] - 28s 449ms/step - loss: 0.3677 - acc: 0.8378 - val_loss: 0.4120 - val_acc: 0.8140
Epoch 76/100
63/63 [==============================] - 28s 445ms/step - loss: 0.3657 - acc: 0.8375 - val_loss: 0.4363 - val_acc: 0.8000
Epoch 77/100
63/63 [==============================] - 28s 448ms/step - loss: 0.3727 - acc: 0.8348 - val_loss: 0.3741 - val_acc: 0.8270
Epoch 78/100
63/63 [==============================] - 28s 447ms/step - loss: 0.3547 - acc: 0.8457 - val_loss: 0.4289 - val_acc: 0.7920
Epoch 79/100
63/63 [==============================] - 28s 447ms/step - loss: 0.3650 - acc: 0.8370 - val_loss: 0.4033 - val_acc: 0.8250
Epoch 80/100
63/63 [==============================] - 28s 445ms/step - loss: 0.3619 - acc: 0.8393 - val_loss: 0.4161 - val_acc: 0.8080
Epoch 81/100
63/63 [==============================] - 28s 445ms/step - loss: 0.3557 - acc: 0.8408 - val_loss: 0.3922 - val_acc: 0.8320
Epoch 82/100
63/63 [==============================] - 28s 446ms/step - loss: 0.3419 - acc: 0.8472 - val_loss: 0.5187 - val_acc: 0.7460
Epoch 83/100
63/63 [==============================] - 28s 444ms/step - loss: 0.3508 - acc: 0.8452 - val_loss: 0.5072 - val_acc: 0.7670
Epoch 84/100
63/63 [==============================] - 28s 448ms/step - loss: 0.3561 - acc: 0.8366 - val_loss: 0.3847 - val_acc: 0.8270
Epoch 85/100
63/63 [==============================] - 29s 459ms/step - loss: 0.3483 - acc: 0.8430 - val_loss: 0.4139 - val_acc: 0.8170
Epoch 86/100
63/63 [==============================] - 28s 443ms/step - loss: 0.3522 - acc: 0.8415 - val_loss: 0.4807 - val_acc: 0.7770
Epoch 87/100
63/63 [==============================] - 28s 447ms/step - loss: 0.3481 - acc: 0.8457 - val_loss: 0.4108 - val_acc: 0.8120
Epoch 88/100
63/63 [==============================] - 28s 441ms/step - loss: 0.3296 - acc: 0.8564 - val_loss: 0.4092 - val_acc: 0.8070
Epoch 89/100
63/63 [==============================] - 28s 447ms/step - loss: 0.3449 - acc: 0.8480 - val_loss: 0.5122 - val_acc: 0.7660
Epoch 90/100
63/63 [==============================] - 28s 443ms/step - loss: 0.3458 - acc: 0.8490 - val_loss: 0.4397 - val_acc: 0.8010
Epoch 91/100
63/63 [==============================] - 28s 441ms/step - loss: 0.3439 - acc: 0.8465 - val_loss: 0.4055 - val_acc: 0.8210
Epoch 92/100
63/63 [==============================] - 28s 449ms/step - loss: 0.3303 - acc: 0.8534 - val_loss: 0.4587 - val_acc: 0.8030
Epoch 93/100
63/63 [==============================] - 28s 441ms/step - loss: 0.3329 - acc: 0.8509 - val_loss: 0.5493 - val_acc: 0.7520
Epoch 94/100
63/63 [==============================] - 28s 438ms/step - loss: 0.3407 - acc: 0.8467 - val_loss: 0.3835 - val_acc: 0.8300
Epoch 95/100
63/63 [==============================] - 28s 441ms/step - loss: 0.3400 - acc: 0.8499 - val_loss: 0.3963 - val_acc: 0.8210
Epoch 96/100
63/63 [==============================] - 29s 461ms/step - loss: 0.3195 - acc: 0.8591 - val_loss: 0.4187 - val_acc: 0.8120
Epoch 97/100
63/63 [==============================] - 28s 439ms/step - loss: 0.3239 - acc: 0.8537 - val_loss: 0.5002 - val_acc: 0.7880
Epoch 98/100
63/63 [==============================] - 28s 444ms/step - loss: 0.3248 - acc: 0.8564 - val_loss: 0.3788 - val_acc: 0.8140
Epoch 99/100
63/63 [==============================] - 28s 438ms/step - loss: 0.3369 - acc: 0.8465 - val_loss: 0.3815 - val_acc: 0.8270
Epoch 100/100
63/63 [==============================] - 28s 444ms/step - loss: 0.3302 - acc: 0.8477 - val_loss: 0.3717 - val_acc: 0.8330

As we can see, this naive model reaches roughly 80% validation accuracy.

[Figure: training vs. validation accuracy over 100 epochs]

[Figure: training vs. validation loss over 100 epochs]

However, the gap between the training and validation accuracy/loss curves also shows that the model starts overfitting after about 40 epochs. There are several ways to fight overfitting, such as data augmentation, adding dropout layers, or directly fine-tuning a model someone else has already trained. The third option is transfer learning: in the next post we will look at how to transfer from a pretrained model to reduce overfitting.
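
As a quick taste of the dropout option, here is a minimal sketch of how the classifier head above could be modified, inserting a Dropout layer between Flatten and the first Dense layer (the 0.5 rate is a common default, not tuned here):

model.add(layers.Flatten())
model.add(layers.Dropout(0.5))  # randomly zero half the activations during training
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))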

Original author: ItchyHacker
Original article: https://www.jianshu.com/p/dcb10f9a5b05