apache-spark – Comparing Spark Streaming joins of Kafka topics

We need to implement a join of Kafka topics that handles late or unmatched data, meaning that data which arrives late in the stream or finds no join partner is not dropped/lost, but is marked as timed out,

and the result of the join is produced to an output Kafka topic (with a timeout flag if a timeout occurred).

(Spark 2.1.1 in a standalone deployment, Kafka 10)

Kafka topics: X, Y, ... The result topic looks like this:

{
    "keyJoinFiled": 123456,
    "xTopicData": {},
    "yTopicData": {},
    "isTimeOutFlag": true
}

I found three solutions; 1 and 2 are from the official Spark Streaming documentation but do not fit our requirement (data that misses the join because it arrives late in "business time" is dropped/lost), yet I wrote them up for comparison.

From what we have seen, there are not many examples of joining Kafka topics with stateful operations, so I'm adding some code here for review:

1) According to the Spark Streaming documentation,

https://spark.apache.org/docs/2.1.1/streaming-programming-guide.html:   
 val stream1: DStream[(String, String)] = ...
 val stream2: DStream[(String, String)] = ...
 val joinedStream = stream1.join(stream2)

This joins data from the two streams within the same batch duration, but data that arrives late in "business time" or misses the join is dropped/lost.
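For completeness, here is a minimal runnable sketch of option 1 against two Kafka topics; it assumes the kafkaParams map defined in the full code further below, and the topic names and batch interval are illustrative:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val ssc = new StreamingContext(new SparkConf().setAppName("batchJoin"), Seconds(30))

//key both streams by the Kafka record key (assumed here to carry the join key)
val left = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Set("xTopic"), kafkaParams))
  .map(r => (r.key(), r.value()))
val right = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Set("yTopic"), kafkaParams))
  .map(r => (r.key(), r.value()))

//the join only matches keys that appear in BOTH streams within the SAME batch;
//a record whose partner arrives in a later batch never joins and is lost
left.join(right).print()

ssc.start()
ssc.awaitTermination()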

2) Window joins:

val leftWindowDF = kafkaStreamLeft.window(Minutes(input_parameter_time))
val rightWindowDF = kafkaStreamRight.window(Minutes(input_parameter_time))
leftWindowDF.join(rightWindowDF).foreachRDD...

2.1) In our case we need to use tumbling windows aligned with the
     Spark Streaming batch interval.
 2.2) A large amount of data has to be kept in memory/on disk, e.g. for a
     30-60 minute window.
 2.3) And again, data that arrives late / falls outside the window / misses
     the join is dropped/lost.
     *Since Spark 2.3.1 Structured Streaming supports stream-to-stream
      joins, but we hit a bug where the HDFS state store is not cleaned up,
      so the job dies with OOM every few hours; this is resolved in 2.4,
      https://issues.apache.org/jira/browse/SPARK-23682
      (use RocksDB or a CustomStateStoreProvider HDFS state store).
      A rough sketch of that API follows below.
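For comparison, a rough sketch of the Spark 2.3+ Structured Streaming stream-stream join mentioned above; the column names, JSON paths and the 30-minute thresholds are illustrative assumptions:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder.appName("streamStreamJoin").getOrCreate()
import spark.implicits._

//parse each topic's JSON value into the join key and an event-time column
val xDf = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", brokers)
  .option("subscribe", "xTopic")
  .load()
  .selectExpr("CAST(value AS STRING) AS json")
  .select(get_json_object($"json", "$.sessionId").as("xSessionId"),
    get_json_object($"json", "$.sessionCreationDate").cast("timestamp").as("xTime"))
  .withWatermark("xTime", "30 minutes")

val yDf = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", brokers)
  .option("subscribe", "yTopic")
  .load()
  .selectExpr("CAST(value AS STRING) AS json")
  .select(get_json_object($"json", "$.sessionId").as("ySessionId"),
    get_json_object($"json", "$.clientTimestamp").cast("timestamp").as("yTime"))
  .withWatermark("yTime", "30 minutes")

//the watermark plus the time-range condition lets Spark purge old state; an
//inner join still drops unmatched rows, so the timeout-flag requirement would
//need an outer join (also 2.3+) or custom state
val joined = xDf.join(yDf,
  expr("xSessionId = ySessionId AND yTime BETWEEN xTime AND xTime + interval 30 minutes"))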

3) Use the stateful operation mapWithState to join the Kafka topic DStreams
   with tumbling windows and a 30-minute timeout for late data;
   every record produced to the output topic contains the joined message from
   all topics if the join occurred, or only part of the topic data if no join
   occurred within 30 minutes (marked with an is_time_out flag).

3.1) Create 1..n DStreams per topic, converted to key/value records with the
     join key as the key and using a tumbling window, and wrapped in a
     catch-all schema (Unioned).
 3.2) Union all the streams.
 3.3) Run mapWithState on the unioned stream with a function that actually
      performs the join / marks the timeout.

A good example of a stateful join from Databricks (Spark 2.2.0):
https://www.youtube.com/watch?time_continue=1858&v=JAb4FIheP28

Adding the sample code, which is running/tested.

 val kafkaParams = Map[String, Object](
    "bootstrap.servers" -> brokers,
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> groupId,
    "session.timeout.ms" -> "30000"
  )

  //Kafka xTopic DStream
  val kafkaStreamLeft = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](leftTopic.split(",").toSet, kafkaParams)
  ).map(record => {
    val msg: xTopic = gson.fromJson(record.value(), classOf[xTopic])
    Unioned(Some(msg), None, if (msg.sessionId != null) msg.sessionId.toString else "")
  }).window(Minutes(leftWindow), Minutes(leftWindow))

  //Kafka yTopic DStream
  val kafkaStreamRight = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](rightTopic.split(",").toSet, kafkaParams)
  ).map(record => {
    val msg: yTopic = gson.fromJson(record.value(), classOf[yTopic])
    Unioned(None, Some(msg), if (msg.sessionId != null) msg.sessionId.toString else "")
  }).window(Minutes(rightWindow), Minutes(rightWindow))

  //convert the streams to (key, value) pairs and filter out empty session ids.
  val unionStream = kafkaStreamLeft.union(kafkaStreamRight).map(record =>(record.sessionId,record))
    .filter(record => !record._1.toString.isEmpty)
  val stateSpec = StateSpec.function(stateUpdateF).timeout(Minutes(timeout.toInt))

  unionStream.mapWithState(stateSpec).foreachRDD(rdd => {
    try{
      if(!rdd.isEmpty()) rdd.foreachPartition(partition =>{
        val props = new util.HashMap[String, Object]()
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers)
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")

        val producer = new KafkaProducer[String, String](props)
        //send the result JSON to Kafka
        partition.foreach(record => {
          //skip nulls and the Unit ("()")/None results that the state function returns when it only updates state
          if(record != null && !"".equals(record) && !"()".equals(record.toString) && !"None".equals(record.toString)){
            producer.send(new ProducerRecord[String, String](outTopic, null, gson.toJson(record)))
          }
        })
        producer.close()
      })
    }catch {
      case e: Exception =>
        logger.error(s"error joining topics ${leftTopic} ${rightTopic} to output topic ${outTopic}", e)
    }})

//mapWithState function, called for each key occurrence with the new items in newItemValues and the state item if it exists.

def stateUpdateF = (keySessionId: String, newItemValues: Option[Unioned], state: State[Unioned]) => {
    val currentState = state.getOption().getOrElse(Unioned(None, None, keySessionId))

    //merge the new item with what is already in state for this key
    val newVal: Unioned = newItemValues match {
      case Some(newItemValue) =>
        if (newItemValue.yTopic.isDefined)
          Unioned(if (newItemValue.xTopic.isDefined) newItemValue.xTopic else currentState.xTopic, newItemValue.yTopic, keySessionId)
        else if (newItemValue.xTopic.isDefined)
          Unioned(newItemValue.xTopic, if (currentState.yTopic.isDefined) currentState.yTopic else newItemValue.yTopic, keySessionId)
        else newItemValue
      case _ => currentState //None means a timeout => use the current state
    }

    val processTs = LocalDateTime.now()
    val processDate = dtf.format(processTs)
    if (newVal.xTopic.isDefined && newVal.yTopic.isDefined) { //join happened: remove the key from state and emit the joined record
      state.remove()
      JoinState(newVal.sessionId, newVal.xTopic, newVal.yTopic, false, processTs.toInstant(ZoneOffset.UTC).toEpochMilli, processDate)
    } else if (state.isTimingOut()) { //timed out: do not remove the state manually, it is removed automatically; emit the partial record flagged as timed out
      JoinState(newVal.sessionId, newVal.xTopic, newVal.yTopic, true, processTs.toInstant(ZoneOffset.UTC).toEpochMilli, processDate)
    } else {
      //no join yet: keep the partial record in state; this branch returns Unit ("()"), which is filtered out before producing to Kafka
      state.update(newVal)
    }
  }

  //case classes for the Kafka topic data (x, y topics); the join is on the sessionId field.
  case class xTopic(sessionId:String,param1:String,param2:String,sessionCreationDate:String)
  case class yTopic(sessionId:Long,clientTimestamp:String)
  //catch-all schema: object that contains both Kafka input topic fields and the key value for the join.
  case class Unioned(xTopic:Option[xTopic],yTopic:Option[yTopic],sessionId:String)
  //class for the output result of the stateful join function.
  case class JoinState(sessionId:String, xTopic:Option[xTopic],yTopic:Option[yTopic],isTimeOut:Boolean,processTs:Long,processDate:String)

I'd be happy to get some review.
Sorry for the long post.

Best answer: I think this use case is solved by the Sessionization API?:

StructuredSessionization.scala

Stateful Operations in Structured Streaming

Or am I missing something?
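For reference, a minimal sketch of what that mapGroupsWithState-based sessionization could look like for this use case (assuming Spark 2.2+; Event and SessionOut are hypothetical stand-ins for the question's Unioned/JoinState):

import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

case class Event(sessionId: String, xData: Option[String], yData: Option[String])
case class SessionOut(sessionId: String, xData: Option[String], yData: Option[String], isTimeOut: Boolean)

def updateState(sessionId: String,
                events: Iterator[Event],
                state: GroupState[Event]): SessionOut = {
  if (state.hasTimedOut) { //no partner arrived in time: emit the partial record flagged as timed out
    val s = state.get
    state.remove()
    SessionOut(sessionId, s.xData, s.yData, isTimeOut = true)
  } else {
    //merge all new events for this key with the existing state
    val merged = events.foldLeft(state.getOption.getOrElse(Event(sessionId, None, None))) {
      (acc, e) => Event(sessionId, e.xData.orElse(acc.xData), e.yData.orElse(acc.yData))
    }
    if (merged.xData.isDefined && merged.yData.isDefined) { //both sides present: the join happened
      state.remove()
      SessionOut(sessionId, merged.xData, merged.yData, isTimeOut = false)
    } else { //still waiting for the other side
      state.update(merged)
      state.setTimeoutDuration("30 minutes")
      SessionOut(sessionId, None, None, isTimeOut = false) //incomplete marker, filter out downstream
    }
  }
}

//usage, given a Dataset[Event] parsed from both Kafka topics (requires spark.implicits._):
//events.groupByKey(_.sessionId)
//  .mapGroupsWithState(GroupStateTimeout.ProcessingTimeTimeout)(updateState)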
