Keras Functional API

In earlier posts I covered binary and multi-class classification with keras, using the sequential model from the keras framework:

model <- keras_model_sequential()

# define and compile the model
model %>% 
  layer_dense(units = 64, activation = 'relu', input_shape = c(20)) %>% 
  layer_dropout(rate = 0.5) %>% 
  layer_dense(units = 64, activation = 'relu') %>% 
  layer_dropout(rate = 0.5) %>% 
  layer_dense(units = 1, activation = 'sigmoid') %>% 
  compile(
    loss = 'binary_crossentropy',
    optimizer = 'rmsprop',
    metrics = c('accuracy')
  )

# train 
model %>% fit(x_train, y_train, epochs = 20, batch_size = 128)

# evaluate
score <- model %>% evaluate(x_test, y_test, batch_size = 128)

Those examples were all built on this model object:

model <- keras_model_sequential()

A deep learning model built this way is a simple stack of layers, one after another.

But a deep network can have multiple inputs and multiple outputs, and such a structure cannot be defined with the sequential model. For these cases keras provides a more flexible way to define the network: the functional API.
Here is a simple example.
First, define an input:

inputs <- layer_input(shape = c(100))

Next, define the output:

prediction <- inputs %>%
  layer_dense(units = 30, activation = "relu") %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 30, activation = "relu") %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 1, activation = "sigmoid")
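As an aside, a tensor produced this way can also feed more than one branch, which is exactly what a sequential stack cannot express. A minimal illustrative sketch (the branch and output names here are my own, not from the original post):

```r
library(keras)

# a separate input tensor for this aside, so it does not clash with `inputs` above
branch_in <- layer_input(shape = c(100))

# one shared trunk...
shared <- branch_in %>% layer_dense(units = 30, activation = "relu")

# ...feeding two independent output heads
out_a <- shared %>% layer_dense(units = 1, activation = "sigmoid", name = "out_a")
out_b <- shared %>% layer_dense(units = 1, activation = "sigmoid", name = "out_b")

# one input, two outputs -- impossible with keras_model_sequential()
branched <- keras_model(inputs = branch_in, outputs = c(out_a, out_b))
```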

Create and compile the model:

x_train <-
  matrix(runif(100000, min = 0, max = 2), nrow = 1000, ncol = 100)
y_train <- matrix(sample(
  x = 0:1,
  size = 1000,
  replace = T
))

x_test <-
  matrix(runif(100000, min = 0, max = 2), nrow = 1000, ncol = 100)
y_test <- matrix(sample(
  x = 0:1,
  size = 1000,
  replace = T
))
# that's the (random) data
# create the model
model <- keras_model(inputs = inputs, outputs = prediction)
# compile the model
model %>% compile(
  loss = 'binary_crossentropy',
  optimizer = 'rmsprop',
  metrics = c('accuracy')
)
# train the model
model %>% fit(x_train, y_train, batch_size = 80, epochs = 40)


Evaluate the model (both features and labels are random here, so accuracy near the chance level of 0.5 is expected):

model %>% evaluate(x_test,y_test)
1000/1000 [==============================] - 1s 531us/step
$loss
[1] 0.7711746

$acc
[1] 0.473
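Predictions for new samples can then be obtained with predict(); `x_new` below is a hypothetical batch of five samples in the same shape as the training data:

```r
x_new <- matrix(runif(500, min = 0, max = 2), nrow = 5, ncol = 100)

# returns a 5 x 1 matrix of sigmoid probabilities, one per sample
probs <- model %>% predict(x_new)
```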

For comparison, the same model defined with the sequential API:

x_train <-
  matrix(runif(100000, min = 0, max = 2), nrow = 1000, ncol = 100)
y_train <- matrix(sample(
  x = 0:1,
  size = 1000,
  replace = T
))

x_test <-
  matrix(runif(100000, min = 0, max = 2), nrow = 1000, ncol = 100)
y_test <- matrix(sample(
  x = 0:1,
  size = 1000,
  replace = T
))

model <- keras_model_sequential()

model %>% layer_dense(units = 30,
                      activation = "relu",
                      input_shape = c(100)) %>%
  layer_dropout(rate = 0.4) %>% layer_dense(units = 30, activation = "relu") %>%
  layer_dropout(rate = 0.4) %>% layer_dense(units = 1, activation = "sigmoid")

model %>% compile(
  loss = 'binary_crossentropy',
  optimizer = 'rmsprop',
  metrics = c('accuracy')
)


model %>% fit(x_train, y_train, epochs = 30, batch_size = 70, validation_split = 0.2)

model %>% evaluate(x_test,y_test)

See the difference? The only real change is that the functional version names its input and output tensors explicitly and passes them to keras_model().

Next we do something the sequential model cannot: train a model with the structure shown in the figure below.

(figure: model architecture with two inputs and two outputs)

It has two inputs and, correspondingly, two outputs.

1. Define the main input:
library(keras)

main_input <- layer_input(shape = c(100), dtype = 'int32', name = 'main_input')

lstm_out <- main_input %>% 
  layer_embedding(input_dim = 10000, output_dim = 512, input_length = 100) %>% 
  layer_lstm(units = 32)
2. Define the auxiliary output:
auxiliary_output <- lstm_out %>% 
  layer_dense(units = 1, activation = 'sigmoid', name = 'aux_output')

3. Define the auxiliary input and the main output:

auxiliary_input <- layer_input(shape = c(5), name = 'aux_input')

main_output <- layer_concatenate(c(lstm_out, auxiliary_input)) %>%  
  layer_dense(units = 64, activation = 'relu') %>% 
  layer_dense(units = 64, activation = 'relu') %>% 
  layer_dense(units = 64, activation = 'relu') %>% 
  layer_dense(units = 1, activation = 'sigmoid', name = 'main_output')

Note that each output layer is given a name.

4. Define the model:
model <- keras_model(
  inputs = c(main_input, auxiliary_input), 
  outputs = c(main_output, auxiliary_output)
)

Take a look at the model structure:

 summary(model)
____________________________________________________________________________________________
Layer (type)                  Output Shape        Param #    Connected to                   
============================================================================================
main_input (InputLayer)       (None, 100)         0                                         
____________________________________________________________________________________________
embedding_2 (Embedding)       (None, 100, 512)    5120000    main_input[0][0]               
____________________________________________________________________________________________
lstm_4 (LSTM)                 (None, 32)          69760      embedding_2[0][0]              
____________________________________________________________________________________________
aux_input (InputLayer)        (None, 5)           0                                         
____________________________________________________________________________________________
concatenate_2 (Concatenate)   (None, 37)          0          lstm_4[0][0]                   
                                                             aux_input[0][0]                
____________________________________________________________________________________________
dense_137 (Dense)             (None, 64)          2432       concatenate_2[0][0]            
____________________________________________________________________________________________
dense_138 (Dense)             (None, 64)          4160       dense_137[0][0]                
____________________________________________________________________________________________
dense_139 (Dense)             (None, 64)          4160       dense_138[0][0]                
____________________________________________________________________________________________
main_output (Dense)           (None, 1)           65         dense_139[0][0]                
____________________________________________________________________________________________
aux_output (Dense)            (None, 1)           33         lstm_4[0][0]                   
============================================================================================
Total params: 5,200,610
Trainable params: 5,200,610
Non-trainable params: 0
_____________________________

That is 5,200,610 parameters.

5. Compile the model:
model %>% compile(
  optimizer = 'rmsprop',
  loss = 'binary_crossentropy',
  loss_weights = c(1.0, 0.2)
)
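Because the output layers are named, the loss and loss weights can equivalently be given as named lists, which is less error-prone than relying on output order (an alternative form of the compile call above, assuming the layer names already defined):

```r
model %>% compile(
  optimizer = 'rmsprop',
  # names must match the `name` arguments of the output layers
  loss = list(main_output = 'binary_crossentropy',
              aux_output  = 'binary_crossentropy'),
  loss_weights = list(main_output = 1.0, aux_output = 0.2)
)
```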

6. Train the model:

model %>% fit(
  x = list(headline_data, additional_data),
  y = list(labels, labels),
  epochs = 50,
  batch_size = 32
)

Note that I have not actually generated the data here; headline_data, additional_data, and labels are placeholders.
The inputs and outputs can also be matched by name instead of by position:

model %>% fit(
  x = list(main_input = headline_data, aux_input = additional_data),
  y = list(main_output = labels, aux_output = labels),
  epochs = 50,
  batch_size = 32
)
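If you want to actually run the fit calls above, placeholder data of the right shapes could be generated like this (purely random dummy data, an assumption on my part to match the layer shapes defined earlier):

```r
n <- 1000

# main input: 100 integer word indices per sample in [0, 9999],
# matching the embedding layer's input_dim and input_length
headline_data <- matrix(sample(0:9999, n * 100, replace = TRUE), nrow = n, ncol = 100)

# auxiliary input: 5 numeric features per sample, matching shape = c(5)
additional_data <- matrix(runif(n * 5), nrow = n, ncol = 5)

# binary labels, fed to both outputs in this example
labels <- matrix(sample(0:1, n, replace = TRUE))
```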
Original author: Liam_ml
Original post: https://www.jianshu.com/p/90132fdb9a4a
This article is reposted from the web to share knowledge; if there is any infringement, please contact the blog owner for removal.