bigdata_Hadoop cluster configuration_memory allocation

Memory management really matters on a Hadoop cluster; get it wrong and jobs will keep throwing OutOfMemoryError (memory overflow).

Taking the recommended configuration published by Hortonworks as a sample, here is a common memory allocation scheme for the components of a Hadoop cluster. Apply the settings through Ambari, or edit the configuration files on the back end and keep them in sync.

【Sample】

The final calculation is to determine the amount of RAM per container:

RAM-per-Container = maximum of (MIN_CONTAINER_SIZE, (Total Available RAM) / Containers)
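For illustration, here is a minimal Python sketch of this calculation. It assumes the number of containers per node has already been derived as in the earlier step of the Hortonworks guide, Containers = min(2 * CORES, 1.8 * DISKS, Total Available RAM / MIN_CONTAINER_SIZE); the core count, disk count, and available-RAM figures are made-up inputs chosen to line up with the 54 GB / 2048 MB example further down, not measurements from a real node.

```python
# Illustrative inputs only (assumed values, chosen to match the 54 GB example below).
CORES = 16                 # physical cores per node
DISKS = 16                 # data disks per node
TOTAL_RAM_MB = 55296       # RAM left for YARN after OS/HBase reservations (~54 GB)
MIN_CONTAINER_SIZE = 2048  # MB; the HDP guide picks this from the node's total RAM

# Containers per node, per the earlier step of the Hortonworks guide.
containers = int(min(2 * CORES, 1.8 * DISKS, TOTAL_RAM_MB / MIN_CONTAINER_SIZE))

# RAM-per-Container = maximum of (MIN_CONTAINER_SIZE, Total Available RAM / Containers)
ram_per_container = max(MIN_CONTAINER_SIZE, TOTAL_RAM_MB // containers)

print(containers, ram_per_container)   # 27 containers of 2048 MB each
```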

With these calculations, the YARN and MapReduce configurations can be set:

Configuration File      Configuration Setting                     Value Calculation
yarn-site.xml           yarn.nodemanager.resource.memory-mb       = Containers * RAM-per-Container
yarn-site.xml           yarn.scheduler.minimum-allocation-mb      = RAM-per-Container
yarn-site.xml           yarn.scheduler.maximum-allocation-mb      = Containers * RAM-per-Container
mapred-site.xml         mapreduce.map.memory.mb                   = RAM-per-Container
mapred-site.xml         mapreduce.reduce.memory.mb                = 2 * RAM-per-Container
mapred-site.xml         mapreduce.map.java.opts                   = 0.8 * RAM-per-Container
mapred-site.xml         mapreduce.reduce.java.opts                = 0.8 * 2 * RAM-per-Container
yarn-site.xml (check)   yarn.app.mapreduce.am.resource.mb         = 2 * RAM-per-Container
yarn-site.xml (check)   yarn.app.mapreduce.am.command-opts        = 0.8 * 2 * RAM-per-Container

Note: After installation, both yarn-site.xml and mapred-site.xml are located in the /etc/hadoop/conf folder.
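To make the mapping concrete, the sketch below (a hypothetical helper, not part of any Hadoop tooling) turns a container count and RAM-per-Container into the settings from the table, with the JVM heap (-Xmx) set to 80% of the container size as in the 0.8 rows:

```python
def yarn_mr_memory_settings(containers: int, ram_per_container_mb: int) -> dict:
    """Derive the YARN/MapReduce memory settings per the table above."""
    heap = lambda mb: f"-Xmx{int(0.8 * mb)}m"   # JVM heap = 80% of the container
    return {
        # yarn-site.xml
        "yarn.nodemanager.resource.memory-mb": containers * ram_per_container_mb,
        "yarn.scheduler.minimum-allocation-mb": ram_per_container_mb,
        "yarn.scheduler.maximum-allocation-mb": containers * ram_per_container_mb,
        # mapred-site.xml
        "mapreduce.map.memory.mb": ram_per_container_mb,
        "mapreduce.reduce.memory.mb": 2 * ram_per_container_mb,
        "mapreduce.map.java.opts": heap(ram_per_container_mb),
        "mapreduce.reduce.java.opts": heap(2 * ram_per_container_mb),
        # ApplicationMaster (check in yarn-site.xml / mapred-site.xml)
        "yarn.app.mapreduce.am.resource.mb": 2 * ram_per_container_mb,
        "yarn.app.mapreduce.am.command-opts": heap(2 * ram_per_container_mb),
    }

for name, value in yarn_mr_memory_settings(containers=27, ram_per_container_mb=2048).items():
    print(f"{name} = {value}")
```

Called with containers=27 and ram_per_container_mb=2048 it yields the same numbers as the concrete table below, except for yarn.app.mapreduce.am.resource.mb, which that table sets to 2048 rather than the 2 * RAM-per-Container the formula suggests.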

 

The concrete values used here (RAM-per-Container = 2048 MB, roughly 54 GB of RAM handed to YARN per node):

Configuration File      Configuration Setting                     Value
yarn-site.xml           yarn.nodemanager.resource.memory-mb       = Containers * RAM-per-Container (54 GB)
yarn-site.xml           yarn.scheduler.minimum-allocation-mb      = 2048
yarn-site.xml           yarn.scheduler.maximum-allocation-mb      = Containers * RAM-per-Container (54 GB)
mapred-site.xml         mapreduce.map.memory.mb                   = 2048
mapred-site.xml         mapreduce.reduce.memory.mb                = 4096
mapred-site.xml         mapreduce.map.java.opts                   = 1638
mapred-site.xml         mapreduce.reduce.java.opts                = 3276
yarn-site.xml (check)   yarn.app.mapreduce.am.resource.mb         = 2048
yarn-site.xml (check)   yarn.app.mapreduce.am.command-opts        = 3276
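A quick arithmetic check of those values against the formulas above, assuming RAM-per-Container = 2048 MB and 27 containers per node (27 * 2048 MB is the 54 GB figure):

```python
ram_per_container = 2048   # MB, per the table above
containers = 27            # assumed; 27 * 2048 MB = 54 GB

assert int(0.8 * ram_per_container) == 1638        # mapreduce.map.java.opts
assert int(0.8 * 2 * ram_per_container) == 3276    # mapreduce.reduce.java.opts, am.command-opts
assert containers * ram_per_container == 54 * 1024 # yarn.nodemanager.resource.memory-mb
```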

 

【Example】

The rightmost column of the table in the referenced post is an allocation scheme for an 8 GB VM: it reserves 1-2 GB of memory for the operating system, gives 4 GB to YARN/MapReduce (which also covers Hive), and keeps the remaining 2-3 GB aside for HBase in case HBase is used. Reference: http://blog.csdn.net/bluishglc/article/details/42436321
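A rough sketch of that split in numbers (the exact reservations are a judgment call; these figures simply restate the 1-2 GB / 4 GB / 2-3 GB ranges from the post):

```python
# Illustrative split of an 8 GB VM, in MB.
total_mb = 8 * 1024
os_reserved_mb = 2048      # 1-2 GB for the operating system
hbase_reserved_mb = 2048   # 2-3 GB kept aside in case HBase is deployed
yarn_mb = total_mb - os_reserved_mb - hbase_reserved_mb  # 4 GB for YARN/MapReduce (and Hive)

print(f"yarn.nodemanager.resource.memory-mb = {yarn_mb}")  # 4096
```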

【Notes】

Also, when connecting through Thrift, watch out for configuration overrides applied when your client loads, such as <set mapreduce.map.java.opts=-Xmx1024m;> issued before Hive submits the job; an override like this can make jobs fail in the same out-of-memory way.

For example, select count(*) from test; runs fine from the Hive CLI, but the same query fails when submitted from our own platform. Carefully compare the configuration on both sides, for example by searching for the keywords memory and opts.
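One way to do that comparison is to dump the effective configuration on both sides (for example with Hive's set -v) into text files and diff only the memory-related keys. The helper below is a hypothetical sketch under that assumption, not an existing tool; the file names in the usage line are placeholders.

```python
import re
import sys

def load_settings(path):
    """Parse a key=value dump (e.g. the output of Hive's 'set -v') into a dict."""
    settings = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, _, value = line.strip().partition("=")
                settings[key] = value
    return settings

def diff_memory_settings(file_a, file_b):
    """Print keys containing 'memory' or 'opts' whose values differ between the dumps."""
    a, b = load_settings(file_a), load_settings(file_b)
    pattern = re.compile(r"memory|opts", re.IGNORECASE)
    for key in sorted(set(a) | set(b)):
        if pattern.search(key) and a.get(key) != b.get(key):
            print(f"{key}\n  cli     : {a.get(key)}\n  platform: {b.get(key)}")

if __name__ == "__main__":
    # Usage (placeholder file names): python diff_conf.py hive_cli_set_v.txt platform_set_v.txt
    diff_memory_settings(sys.argv[1], sys.argv[2])
```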

 

Reference: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_installing_manually_book/content/rpm-chap1-11.html

    Original author: MapReduce
    Original post: https://www.cnblogs.com/cphmvp/p/6055353.html
    This article is reposted from the web for knowledge sharing only; if it infringes any rights, please contact the blogger to have it removed.