apache-spark – Spark worker dies after running for a while

I am running a Spark Streaming job.

My cluster configuration:

Spark version - 1.6.1
Spark node config:
cores - 4
memory - 6.8 G (out of 8G)
number of nodes - 3

For my job, I allocate 6 GB of memory per node and 3 cores in total.
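
For reference, this is roughly how such an allocation could be expressed in the job itself (a minimal Scala sketch; the app name and batch interval are made up, and the same values could equally be passed as spark-submit flags):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Illustrative resource settings matching the description above.
    val conf = new SparkConf()
      .setAppName("streaming-job")           // hypothetical app name
      .set("spark.executor.memory", "6g")    // executor heap per node
      .set("spark.cores.max", "3")           // total cores across the cluster (standalone mode)

    val ssc = new StreamingContext(conf, Seconds(10))
    // input streams, transformations and ssc.start() would follow here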

After the job has been running for about an hour, I see the following error in the worker logs:

    Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f53b496a000, 262144, 0) failed; error='Cannot allocate memory' (errno=12)
    #
    # There is insufficient memory for the Java Runtime Environment to continue.
    # Native memory allocation (mmap) failed to map 262144 bytes for committing reserved memory.
    # An error report file with more information is saved as:
    # /usr/local/spark/sbin/hs_err_pid1622.log

However, I do not see any errors in my work-dir/app-id/stderr.

What are the generally recommended Xm* (JVM heap) settings for running a Spark worker?

How can I debug this issue further?

PS: I started my worker and master with the default settings.

Update:

I can see my executors being repeatedly added and removed because of the error "Cannot allocate memory".

Logs:

    16/06/24 12:53:47 INFO MemoryStore: Block broadcast_53 stored as values in memory (estimated size 14.3 KB, free 440.8 MB)
    16/06/24 12:53:47 INFO BlockManager: Found block rdd_145_1 locally
    16/06/24 12:53:47 INFO BlockManager: Found block rdd_145_0 locally
    Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f3440743000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)

Best answer: I ran into the same situation. I found the cause in the official documentation, which says:

In general, Spark can run well with anywhere from 8 GB to hundreds of gigabytes of memory per machine. In all cases, we recommend allocating only at most 75% of the memory for Spark; leave the rest for the operating system and buffer cache.

Your machine has 8 GB of memory and 6 GB of it is given to the worker node. So if the operating system uses more than 2 GB, there is not enough memory left for the worker node and the worker gets lost.
*Just check how much memory your OS actually uses, and allocate whatever remains to the worker node.*
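
In practice that usually means requesting less executor memory for the application (or lowering SPARK_WORKER_MEMORY in conf/spark-env.sh). A minimal sketch, assuming the OS plus buffer cache is observed (e.g. with `free -m`) to need roughly 3 GB of the 8 GB machine:

    import org.apache.spark.SparkConf

    // Illustrative only: the 5g figure assumes ~3 GB is left free for the OS,
    // buffer cache and JVM overhead on an 8 GB machine.
    val conf = new SparkConf()
      .setAppName("streaming-job")           // hypothetical app name
      .set("spark.executor.memory", "5g")    // down from 6g, leaving headroom for the OS
      .set("spark.cores.max", "3")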
