Let's look at an interesting configuration in the submit script offline.sh:
spark2-submit \
--class $1 \
--master yarn \
--deploy-mode cluster \
--driver-memory 4g \
--driver-cores 2 \
--executor-memory 6g \
--executor-cores 3 \
--num-executors 12 \
--conf spark.yarn.submit.waitAppCompletion=false \
--files /etc/hbase/conf/hbase-site.xml \
/tmp/xxxx.jar
The name spark.yarn.submit.waitAppCompletion says it all: it controls whether the submitting process waits for the application to finish. With the default value of true, the client keeps running after submission and prints application reports to the terminal until the job completes. Set to false, spark-submit returns as soon as the application is accepted, so you can go do something else instead of sitting there watching the output. For convenience, false is the better choice, because you can always check the logs and results later with yarn logs -applicationId application_1534402443030_0345.
./offline.sh wmstat.Completion 12 7
The client exits right after submission while the job keeps running in the background, so the printed log is short:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/jars/phoenix-4.14.0-cdh5.13.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.13.2-1.cdh5.13.2.p0.3/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/09/04 17:07:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm154
18/09/04 17:07:24 INFO yarn.Client: Requesting a new application from cluster with 6 NodeManagers
18/09/04 17:07:24 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (41121 MB per container)
18/09/04 17:07:24 INFO yarn.Client: Will allocate AM container, with 4505 MB memory including 409 MB overhead
18/09/04 17:07:24 INFO yarn.Client: Setting up container launch context for our AM
18/09/04 17:07:24 INFO yarn.Client: Setting up the launch environment for our AM container
18/09/04 17:07:24 INFO yarn.Client: Preparing resources for our AM container
18/09/04 17:07:24 INFO yarn.Client: Uploading resource file:/root/mileage_anxiety/wmanxiety-1.0-SNAPSHOT.jar -> hdfs://WMBigdata0:8020/user/root/.sparkStaging/application_1534402443030_0345/wmanxiety-1.0-SNAPSHOT.jar
18/09/04 17:07:25 INFO yarn.Client: Uploading resource file:/etc/hbase/conf/hbase-site.xml -> hdfs://WMBigdata0:8020/user/root/.sparkStaging/application_1534402443030_0345/hbase-site.xml
18/09/04 17:07:25 INFO yarn.Client: Uploading resource file:/tmp/spark-13281d42-2c64-41a0-9c93-e6f9d2377c83/__spark_conf__6338418821142772330.zip -> hdfs://WMBigdata0:8020/user/root/.sparkStaging/application_1534402443030_0345/__spark_conf__.zip
18/09/04 17:07:25 INFO spark.SecurityManager: Changing view acls to: root
18/09/04 17:07:25 INFO spark.SecurityManager: Changing modify acls to: root
18/09/04 17:07:25 INFO spark.SecurityManager: Changing view acls groups to:
18/09/04 17:07:25 INFO spark.SecurityManager: Changing modify acls groups to:
18/09/04 17:07:25 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
18/09/04 17:07:26 INFO yarn.Client: Submitting application application_1534402443030_0345 to ResourceManager
18/09/04 17:07:26 INFO impl.YarnClientImpl: Submitted application application_1534402443030_0345
18/09/04 17:07:26 INFO yarn.Client: Application report for application_1534402443030_0345 (state: ACCEPTED)
18/09/04 17:07:26 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.users.root
start time: 1536052046594
final status: UNDEFINED
tracking URL: http://WMBigdata3:8778/proxy/application_1534402443030_0345/
user: root
18/09/04 17:07:26 INFO util.ShutdownHookManager: Shutdown hook called
18/09/04 17:07:26 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-bf031fa2-c9b4-435a-abdb-27b607214253
18/09/04 17:07:26 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-13281d42-2c64-41a0-9c93-e6f9d2377c83
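Since the client exits immediately, a handy pattern is to capture the application ID from the submission output so you can query the job later. A minimal sketch, assuming the `Submitted application ...` line format shown in the log above (the variable names here are illustrative):

```shell
# Parse the YARN application ID out of the spark-submit client output.
# The sample line below is copied from the submission log above.
log_line='18/09/04 17:07:26 INFO impl.YarnClientImpl: Submitted application application_1534402443030_0345'

# Application IDs have the form application_<clusterTimestamp>_<sequence>.
app_id=$(echo "$log_line" | grep -oE 'application_[0-9]+_[0-9]+')
echo "$app_id"   # application_1534402443030_0345

# Later, check on the job (requires a live YARN cluster, shown for illustration):
#   yarn application -status "$app_id"
#   yarn logs -applicationId "$app_id"
```

In practice you would pipe the real spark2-submit output through the same grep instead of using a hard-coded sample line.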