apache-spark – Spark Streaming: Exception thrown while writing record: BatchAllocationEvent

I use the following code to shut down the Spark StreamingContext.

Essentially, a thread watches a boolean flag and then calls StreamingContext.stop(true, true).

Everything seems to process, and all of my data appears to have been collected. However, I get the exception below during shutdown.

Can I ignore it? It looks like there is a potential for data loss.

18/03/07 11:46:40 WARN ReceivedBlockTracker: Exception thrown while
writing record: BatchAllocationEvent(1520452000000
ms,AllocatedBlocks(Map(0 -> ArrayBuffer()))) to the WriteAheadLog.
java.lang.IllegalStateException: close() was called on
BatchedWriteAheadLog before write request with time 1520452000001
could be fulfilled.
at org.apache.spark.streaming.util.BatchedWriteAheadLog.write(BatchedWriteAheadLog.scala:86)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.writeToLog(ReceivedBlockTracker.scala:234)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.allocateBlocksToBatch(ReceivedBlockTracker.scala:118)
at org.apache.spark.streaming.scheduler.ReceiverTracker.allocateBlocksToBatch(ReceiverTracker.scala:213)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:248)

The thread:

// Shared shutdown flag; @volatile so the monitoring thread sees updates made from other threads.
@volatile var stopScc = false

private def stopSccThread(): Unit = {
  val thread = new Thread {
    override def run(): Unit = {
      var continueRun = true
      while (continueRun) {
        logger.debug("Checking status")
        if (stopScc) {
          // Stop the SparkContext too, and wait for received data to be processed.
          getSparkStreamingContext(fieldVariables).stop(true, true)
          logger.info("Called Stop on Streaming Context")
          continueRun = false
        }
        Thread.sleep(50)
      }
    }
  }
  thread.start()
}

@throws(classOf[IKodaMLException])
def startStream(ip: String, port: Int): Unit = {
  try {
    val ssc = getSparkStreamingContext(fieldVariables)
    ssc.checkpoint("./ikoda/cp")

    val lines = ssc.socketTextStream(ip, port, StorageLevel.MEMORY_AND_DISK_SER)
    lines.print()

    // Pass each line through unchanged, but flip the shutdown flag when the end-of-stream marker arrives.
    val lmap = lines.map { l =>
      if (l.contains("IKODA_END_STREAM")) {
        stopScc = true
      }
      l
    }

    lmap.foreachRDD { r =>
      if (r.count() > 0) {
        logger.info(s"RECEIVED: ${r.toString()} first: ${r.first().toString}")
        r.saveAsTextFile("./ikoda/test/test")
      } else {
        logger.info("Empty RDD. No data received")
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
  catch {
    case e: Exception =>
      logger.error(e.getMessage, e)
      throw new IKodaMLException(e.getMessage, e)
  }
}

Best answer: I had the same problem, and calling close() instead of stop() fixed it.
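
A minimal sketch of that change, applied to the relevant part of the monitoring thread above. Whether close() is available on StreamingContext depends on your Spark version, so treat the call as the answerer's suggestion rather than a verified API:

if (stopScc) {
  // Per the answer: call close() instead of stop(true, true).
  // close() availability depends on the Spark version; stop(stopSparkContext, stopGracefully) remains the documented call.
  getSparkStreamingContext(fieldVariables).close()
  logger.info("Called close() on Streaming Context")
  continueRun = false
}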
