apache-spark – How to connect JMX remotely to Spark workers on Dataproc

I can connect to the driver by adding the following:

spark.driver.extraJavaOptions=-Dcom.sun.management.jmxremote \
                              -Dcom.sun.management.jmxremote.port=9178 \
                              -Dcom.sun.management.jmxremote.authenticate=false \
                              -Dcom.sun.management.jmxremote.ssl=false
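
For reference, here is a minimal sketch of one way to pass that same driver property at submit time with a plain spark-submit; the application class com.example.MyJob and my-job.jar are placeholders, not part of the original setup:

spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9178 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false" \
  --class com.example.MyJob \
  my-job.jar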

But…

spark.executor.extraJavaOptions=-Dcom.sun.management.jmxremote \
                                -Dcom.sun.management.jmxremote.port=9178 \
                                -Dcom.sun.management.jmxremote.authenticate=false \
                                -Dcom.sun.management.jmxremote.ssl=false

…only produces a pile of errors in the driver output…

Container id: container_1501548048292_0024_01_000003
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
    at org.apache.hadoop.util.Shell.run(Shell.java:869)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1

…and eventually crashes.

The workers show no errors; they simply exit with:

[org.apache.spark.util.ShutdownHookManager] - Shutdown hook called

This is Spark v2.2.0, and the cluster is a simple 1m-2w setup. Without the executor options, my jobs run without issue.

Best answer: As Rick Mortiz pointed out, the problem was a port conflict for the executor JMX agents: with a fixed port, every executor JVM launched on the same node tries to bind the same port.

Setting:

-Dcom.sun.management.jmxremote.port=0

assigns a random port and makes the errors go away. To find out which port an executor ended up with, run:

netstat -alp | grep LISTEN.*<executor-pid>/java

which lists the ports that process currently has open.
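
Putting the accepted fix together, a hedged sketch of the full executor-side setting, plus one way to attach JConsole once netstat has revealed the port. The worker hostname and port below are placeholders, and on Dataproc you will typically need to go through an SSH tunnel since worker ports are not exposed publicly:

spark.executor.extraJavaOptions=-Dcom.sun.management.jmxremote \
                                -Dcom.sun.management.jmxremote.port=0 \
                                -Dcom.sun.management.jmxremote.authenticate=false \
                                -Dcom.sun.management.jmxremote.ssl=false

# once the port is known, connect with e.g.
jconsole <worker-host>:<port>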
