Spark--Quick Start

Spark comes with detailed official documentation.

Spark provides well-developed APIs for Scala, Python, Java, and R.

To start the Scala shell:

./bin/spark-shell

To start the Python shell:

./bin/pyspark

Launch pyspark; if the Spark version banner and the >>> prompt appear, the shell has started successfully:

[hadoop@localhost Desktop]$ pyspark
Python 3.5.2 |Anaconda 4.1.1 (64-bit)| (default, Jul  2 2016, 17:53:06) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/10/18 06:16:18 INFO spark.SparkContext: Running Spark version 1.6.2
16/10/18 06:16:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/18 06:16:19 WARN util.Utils: Your hostname, localhost.localdomain resolves to a loopback address: 127.0.0.1; using 192.168.163.129 instead (on interface eth0)
16/10/18 06:16:19 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/10/18 06:16:19 INFO spark.SecurityManager: Changing view acls to: hadoop
16/10/18 06:16:19 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/10/18 06:16:19 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/10/18 06:16:20 INFO util.Utils: Successfully started service 'sparkDriver' on port 55502.
16/10/18 06:16:21 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/10/18 06:16:21 INFO Remoting: Starting remoting
16/10/18 06:16:21 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.163.129:33962]
16/10/18 06:16:21 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 33962.
16/10/18 06:16:21 INFO spark.SparkEnv: Registering MapOutputTracker
16/10/18 06:16:21 INFO spark.SparkEnv: Registering BlockManagerMaster
16/10/18 06:16:21 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-db7f7a2d-17be-4b7d-92ea-df8621a1d4be
16/10/18 06:16:21 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB
16/10/18 06:16:21 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/10/18 06:16:22 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/10/18 06:16:22 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/10/18 06:16:22 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/10/18 06:16:22 INFO ui.SparkUI: Started SparkUI at http://192.168.163.129:4040
16/10/18 06:16:22 INFO executor.Executor: Starting executor ID driver on host localhost
16/10/18 06:16:22 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43486.
16/10/18 06:16:22 INFO netty.NettyBlockTransferService: Server created on 43486
16/10/18 06:16:22 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/10/18 06:16:22 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:43486 with 517.4 MB RAM, BlockManagerId(driver, localhost, 43486)
16/10/18 06:16:22 INFO storage.BlockManagerMaster: Registered BlockManager
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Python version 3.5.2 (default, Jul  2 2016 17:53:06)
SparkContext available as sc, HiveContext available as sqlContext.
>>> 

Note the line "SparkContext available as sc, HiveContext available as sqlContext.": the shell has already created a SparkContext (sc) and a HiveContext (sqlContext) for you.
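
Inside the shell you can use sc and sqlContext directly. In a standalone script you would have to create them yourself; a minimal sketch against the Spark 1.6 API (the app name and master URL are only placeholders):

from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

# Roughly what the pyspark shell sets up for you (names/master are placeholders)
conf = SparkConf().setAppName("quickstart").setMaster("local[*]")
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)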

>>> textFile = sc.textFile("file:///opt/spark-1.6.2-bin-hadoop2.6/README.md")
>>> textFile.count()

Note that the Spark shell resolves paths against HDFS by default, so use the "file://" scheme to read a local file; otherwise you will get an error like the one below, telling you the file does not exist on HDFS.

py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/opt/spark-1.6.2-bin-hadoop2.6/README.md
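
A minimal sketch contrasting the two path forms (the HDFS path below is only an assumed example; adjust it to wherever you actually uploaded the file):

# Local filesystem: note the explicit file:// scheme
local_rdd = sc.textFile("file:///opt/spark-1.6.2-bin-hadoop2.6/README.md")

# No scheme: resolved against the default filesystem (HDFS here),
# e.g. hdfs://localhost:9000/user/hadoop/README.md (assumed path, upload the file first)
hdfs_rdd = sc.textFile("/user/hadoop/README.md")

print(local_rdd.count())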

Running a Spark example program produces a lot of log output; you can use "2>/dev/null" to discard the messages written to stderr:

[hadoop@localhost spark-1.6.2-bin-hadoop2.6]$ run-example SparkPi 2>/dev/null
## Result
Pi is roughly 3.13928

Alternatively, redirect stderr to stdout with "2>&1" and pipe the combined output through grep to pick out the result:

[hadoop@localhost spark-1.6.2-bin-hadoop2.6]$ run-example SparkPi 2>&1 | grep "Pi is "
## Result
Pi is roughly 3.1439
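
The same Pi estimate can also be reproduced directly inside the pyspark shell; a minimal Monte Carlo sketch using the shell's sc (the sample count is arbitrary):

import random

NUM_SAMPLES = 100000

def inside(_):
    # Draw a random point in the unit square and test whether it lies inside the unit circle
    x, y = random.random(), random.random()
    return x * x + y * y < 1

count = sc.parallelize(range(NUM_SAMPLES)).filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))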

RDD (Resilient Distributed Dataset) is Spark's core abstraction. RDDs support two kinds of operations: actions, which compute a result and return it to the driver, and transformations, which derive a new RDD from an existing one. Transformations are lazy: calling one immediately returns a new RDD object (in effect a handle to the derived dataset), but nothing is actually computed until an action is called on it.
For a summary of RDD operations, see: http://blog.csdn.net/eric_sunah/article/details/51037837
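
A short sketch of the difference, reusing the textFile RDD from above: the filter transformation only builds a new RDD, and nothing runs until an action asks for a result.

# Transformation: returns a new RDD immediately, no data is read yet
linesWithSpark = textFile.filter(lambda line: "Spark" in line)

# Actions: trigger the computation and return values to the driver
print(linesWithSpark.count())   # number of lines containing "Spark"
print(linesWithSpark.first())   # the first such line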

    Original author: 野生大头鱼
    Original post: https://www.jianshu.com/p/cd9c7d601aa7