Related configuration
Configuration | Default Value | Meaning |
---|---|---|
spark.ui.port | 4040 | Every SparkContext launches a Web UI, on port 4040 by default; if several SparkContexts run on the same host, they bind to successive ports starting from 4040 (4041, 4042, and so on). |
spark.port.maxRetries | 16 | Number of retries Spark allows when binding ports for its various internal network services; the Web UI port search reads this setting as well. |
spark.ui.enabled | true | Whether to start the Web UI at all. |
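For completeness, here is a minimal sketch of setting these knobs programmatically when the context is created; the app name and the values are placeholders, and the same keys can equally be passed to spark-submit via `--conf`:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: "my-scheduled-job" and the values below are placeholders.
val conf = new SparkConf()
  .setAppName("my-scheduled-job")
  .set("spark.ui.port", "4040")        // first port the Web UI will try
  .set("spark.port.maxRetries", "16")  // how many further consecutive ports to try
  .set("spark.ui.enabled", "true")     // whether to start the live UI at all

val sc = new SparkContext(conf)
```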
Problem description
Schedulers such as Azkaban or Zeppelin that submit and run Spark jobs will spawn N SparkSubmit processes on a single scheduling node. By default every SparkContext starts probing from port 4040, so once there are more concurrent processes than the retry budget covers, or the [4040, 4056] port range is already occupied by other processes, our Spark job is simply dead on arrival. The detailed error message looks like this:
```
07-08-2017 02:13:07 CST INFO - 17/08/07 02:13:07 ERROR ui.SparkUI: Failed to bind SparkUI
07-08-2017 02:13:07 CST INFO - java.net.BindException: Address already in use: Service 'SparkUI' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'SparkUI' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
07-08-2017 02:13:07 CST INFO - at sun.nio.ch.Net.bind0(Native Method)
07-08-2017 02:13:07 CST INFO - at sun.nio.ch.Net.bind(Net.java:463)
07-08-2017 02:13:07 CST INFO - at sun.nio.ch.Net.bind(Net.java:455)
07-08-2017 02:13:07 CST INFO - at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
07-08-2017 02:13:07 CST INFO - at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
07-08-2017 02:13:07 CST INFO - at org.spark_project.jetty.server.ServerConnector.open(ServerConnector.java:321)
07-08-2017 02:13:07 CST INFO - at org.spark_project.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
07-08-2017 02:13:07 CST INFO - at org.spark_project.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
07-08-2017 02:13:07 CST INFO - at org.spark_project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
07-08-2017 02:13:07 CST INFO - at org.spark_project.jetty.server.Server.doStart(Server.java:366)
07-08-2017 02:13:07 CST INFO - at org.spark_project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:298)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:308)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:308)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:2071)
07-08-2017 02:13:07 CST INFO - at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:2062)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:308)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.ui.WebUI.bind(WebUI.scala:139)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:451)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:451)
07-08-2017 02:13:07 CST INFO - at scala.Option.foreach(Option.scala:257)
07-08-2017 02:13:07 CST INFO - at org.apache.spark.SparkContext.<init>(SparkContext.scala:451)
```
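The stack trace shows the failure bubbling up from `Utils.startServiceOnPort`, which simply walks consecutive ports until the retry budget is spent. The snippet below is an illustration of that behaviour only, not Spark's actual code:

```scala
import java.net.ServerSocket
import scala.util.{Success, Try}

// Illustration only: a simplified stand-in for the retry loop in the stack trace,
// not the real Utils.startServiceOnPort.
def firstBindablePort(startPort: Int, maxRetries: Int): Option[Int] =
  (0 to maxRetries).view
    .map(offset => startPort + offset)                          // 4040, 4041, 4042, ...
    .map(port => Try { new ServerSocket(port).close(); port })  // try to bind, then release
    .collectFirst { case Success(port) => port }                // first port that binds wins

// With the defaults (start at 4040, 16 retries) only ports 4040..4056 are ever
// tried; if all of them are taken, the SparkUI cannot start and the job dies.
println(firstBindablePort(4040, 16))
```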
Spark is friendly enough to tell you to go bump up `spark.port.maxRetries`, but when a nightly scheduled job dies with nothing more than this hint, whoever gets paged is going to be cursing. And even after raising the parameter, every trip through the driver log turns up a pile of bind-failure stack traces, which hurts job execution efficiency and makes the log painful to read.
Solutions
- For nightly scheduled jobs, just set `spark.ui.enabled=false` and switch the live UI off entirely; in theory nobody has to crawl out of bed in the middle of the night, and if something needs digging into later the History Server is still there.
- For daytime scheduled jobs, set `spark.ui.port=0`; 0 means a random available port is picked, which neatly avoids failing the job over a port conflict (see the sketch after this list for one way to wire both options up).
- Increase `spark.port.maxRetries`. Heh, no thanks.
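Below is a rough sketch of combining the first two options in one job. The `NIGHTLY` environment variable and the app name are made up for illustration; use whatever signal your scheduler actually provides. With `spark.ui.port=0` the chosen port is random, so `sc.uiWebUrl` (available since Spark 2.0) is handy for logging where the UI ended up:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: the NIGHTLY env var and the app name are illustrative placeholders.
val nightly = sys.env.get("NIGHTLY").contains("true")

val conf = new SparkConf().setAppName("scheduled-job")
if (nightly) {
  conf.set("spark.ui.enabled", "false") // no live UI at night; check the History Server later
} else {
  conf.set("spark.ui.port", "0")        // 0 = pick a random free port, so the bind never fails
}

val sc = new SparkContext(conf)

// With a random port, log where the live UI actually ended up.
sc.uiWebUrl.foreach(url => println(s"Spark UI available at $url"))
```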