Why a Spark program's output contains parentheses

A simple Spark word-count program looks like this:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

/** @author lming_08 */
object WordCount {
  def main(args: Array[String]): Unit = {
    if (args.length < 2) {
      println("Usage: WordCount <inputfile> <outputfile>")
      return
    }
    val Array(inputFile, outputFile) = args
    val sc = new SparkContext(new SparkConf())

    val inputData = sc.textFile(inputFile)
    inputData.flatMap(_.split("\\s+"))  // split each line on whitespace
      .map((_, 1))                      // pair each word with a count of 1
      .reduceByKey(_ + _)               // sum the counts per word
      .saveAsTextFile(outputFile)
  }
}
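The parentheses come from how saveAsTextFile serializes elements: it writes each element's toString, one per line, and Scala's Tuple2.toString renders a pair as (key,value). A minimal plain-Scala sketch of that behavior (no Spark needed):

```scala
// Sketch: Scala's Tuple2.toString is what produces the parentheses
// that saveAsTextFile ends up writing to the output file.
object TupleToStringDemo {
  def main(args: Array[String]): Unit = {
    val pair = ("Hat", 1)
    // Tuple2.toString renders as "(Hat,1)" -- exactly the unwanted format
    println(pair.toString)
  }
}
```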

Contents of the input file input.dat:
OS:Red Hat Enterprise Linux Server release 6.4 (Santiago)
IntelliJ IDEA 13.1.3
Contents of the output file:
(release,1)
(OS:Red,1)
(13.1.3,1)
(Hat,1)
(6.4,1)
(IntelliJ,1)
(IDEA,1)
(Linux,1)
(Server,1)
(Enterprise,1)
((Santiago),1)

Clearly, the result contains unwanted parentheses: each output line is just the toString of a (word, count) tuple. To get rid of them, format each pair into a tab-separated string before saving:

val inputData = sc.textFile(inputFile)
inputData.flatMap(_.split("\\s+")).map((_, 1)).reduceByKey(_ + _)
  .map(pair => {
    val word = pair._1
    val cnt = pair._2
    word + "\t" + cnt    // format the pair ourselves instead of relying on toString
  })
  .saveAsTextFile(outputFile)

This time the result is as expected:
release 1
OS:Red 1
13.1.3 1
Hat 1
6.4 1
IntelliJ 1
IDEA 1
Linux 1
Server 1
Enterprise 1
(Santiago) 1
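An alternative to accessing _1 and _2 is a pattern match inside map, which names both fields directly. A plain-Scala sketch of the same formatting step (an RDD's map accepts the same partial function):

```scala
// Sketch: format (word, count) pairs as tab-separated lines by
// destructuring each tuple with a pattern match instead of _1/_2.
object FormatPairsDemo {
  def main(args: Array[String]): Unit = {
    val counts = List(("Hat", 1), ("Linux", 1))
    val lines = counts.map { case (word, cnt) => word + "\t" + cnt }
    lines.foreach(println)
  }
}
```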

    Original author: lming_08
    Original URL: https://blog.csdn.net/lming_08/article/details/51661344