Java – Docker container – JVM memory spikes – Arena Chunk memory space

I am observing large, discrete spikes in JVM memory during performance testing of a
Java web application running in an ECS / EC2 / Docker / CentOS 7 / Tomcat / OpenJDK 8 environment.

The performance test is straightforward: continuous concurrent requests against an AWS Application Load Balancer sitting in front of a pair of Docker containers running on EC2 hosts managed by Elastic Container Service. The concurrency level is typically 30 simultaneous load-test client connections/threads. Within a few minutes, one of the Docker containers is usually affected.

The memory spike appears to be in non-heap memory. Specifically, it seems to involve the Arena Chunk memory space. When comparing the memory footprint against a JVM that did not experience a spike, the Thread and Arena Chunk memory spaces stand out.

Below is a comparison of the VM internal memory as reported by the jcmd utility.

Note the absurd figure for Arena Chunk memory and the relatively high figure for Thread memory.
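For reference, Native Memory Tracking has to be enabled at JVM startup before jcmd can produce the reports below. A minimal sketch of the commands involved (the PIDs 393 and 391 in the output come from the Java processes inside the containers; the CATALINA_OPTS hook is an assumption about how the flag was passed to Tomcat):

```shell
# Enable Native Memory Tracking when the JVM starts (it adds some
# runtime overhead, so it is usually reserved for diagnostics):
export CATALINA_OPTS="$CATALINA_OPTS -XX:NativeMemoryTracking=summary"

# Inside the container, take a summary snapshot of native memory:
jcmd <pid> VM.native_memory summary

# Optionally record a baseline first, then diff against it after load:
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff
```

The `summary.diff` form is handy here because it isolates exactly which NMT category (Thread, Arena Chunk, etc.) grew during the spike.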

The concurrency level of the test can create immediate demand for threads in Tomcat's request thread pool. However, the spike does not always occur during the first wave of requests.

Have you seen anything similar? Do you know what is causing the spike?

Docker Stats

Container with the memory spike:

Mon Oct  9 00:31:45 UTC 2017
89440337e936        27.36%              530 MiB / 2.93 GiB      17.67%              15.6 MB / 24.1 MB   122 MB / 2.13 MB    0
Mon Oct  9 00:31:48 UTC 2017
89440337e936        114.13%             2.059 GiB / 2.93 GiB    70.29%              16.3 MB / 25.1 MB   122 MB / 2.13 MB    0

Normal container:

Mon Oct  9 00:53:41 UTC 2017
725c23df2562        0.08%               533.4 MiB / 2.93 GiB   17.78%              5 MB / 8.15 MB      122 MB / 29.3 MB    0
Mon Oct  9 00:53:44 UTC 2017
725c23df2562        0.07%               533.4 MiB / 2.93 GiB   17.78%              5 MB / 8.15 MB      122 MB / 29.3 MB    0
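The samples above were presumably collected with a timestamped polling loop along these lines (a sketch, not the original script; the three-second interval and the column set are inferred from the output format):

```shell
# Poll container stats every 3 seconds with a UTC timestamp.
# Columns: ID, CPU %, mem usage / limit, mem %, net I/O, block I/O, PIDs.
while true; do
  date -u
  docker stats --no-stream --format \
    "table {{.ID}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}\t{{.NetIO}}\t{{.BlockIO}}\t{{.PIDs}}"
  sleep 3
done
```

Note the 114% CPU and the jump from ~530 MiB to ~2 GiB within three seconds on the affected container, against a flat profile on the normal one.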

VM Memory

JVM with the memory spike:

# jcmd 393 VM.native_memory summary
393:

Native Memory Tracking:

Total: reserved=1974870KB, committed=713022KB
-                 Java Heap (reserved=524288KB, committed=524288KB)
                            (mmap: reserved=524288KB, committed=524288KB) 

-                     Class (reserved=1096982KB, committed=53466KB)
                            (classes #8938)
                            (malloc=1302KB #14768) 
                            (mmap: reserved=1095680KB, committed=52164KB) 

-                    Thread (reserved=8423906KB, committed=8423906KB)
                            (thread #35)
                            (stack: reserved=34952KB, committed=34952KB)
                            (malloc=114KB #175) 
                            (arena=8388840KB #68)

-                      Code (reserved=255923KB, committed=37591KB)
                            (malloc=6323KB #8486) 
                            (mmap: reserved=249600KB, committed=31268KB) 

-                        GC (reserved=6321KB, committed=6321KB)
                            (malloc=4601KB #311) 
                            (mmap: reserved=1720KB, committed=1720KB) 

-                  Compiler (reserved=223KB, committed=223KB)
                            (malloc=93KB #276) 
                            (arena=131KB #3)

-                  Internal (reserved=2178KB, committed=2178KB)
                            (malloc=2146KB #11517) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=13183KB, committed=13183KB)
                            (malloc=9244KB #85774) 
                            (arena=3940KB #1)

-    Native Memory Tracking (reserved=1908KB, committed=1908KB)
                            (malloc=8KB #95) 
                            (tracking overhead=1900KB)

-               Arena Chunk (reserved=18014398501093554KB, committed=18014398501093554KB)
                            (malloc=18014398501093554KB) 

-                   Unknown (reserved=38388KB, committed=38388KB)
                            (mmap: reserved=38388KB, committed=38388KB) 

Normal JVM:

# jcmd 391 VM.native_memory summary
391:

Native Memory Tracking:

Total: reserved=1974001KB, committed=710797KB
-                 Java Heap (reserved=524288KB, committed=524288KB)
                            (mmap: reserved=524288KB, committed=524288KB) 

-                     Class (reserved=1096918KB, committed=53738KB)
                            (classes #9005)
                            (malloc=1238KB #13654) 
                            (mmap: reserved=1095680KB, committed=52500KB) 

-                    Thread (reserved=35234KB, committed=35234KB)
                            (thread #35)
                            (stack: reserved=34952KB, committed=34952KB)
                            (malloc=114KB #175) 
                            (arena=168KB #68)

-                      Code (reserved=255261KB, committed=35237KB)
                            (malloc=5661KB #8190) 
                            (mmap: reserved=249600KB, committed=29576KB) 

-                        GC (reserved=6321KB, committed=6321KB)
                            (malloc=4601KB #319) 
                            (mmap: reserved=1720KB, committed=1720KB) 

-                  Compiler (reserved=226KB, committed=226KB)
                            (malloc=96KB #317) 
                            (arena=131KB #3)

-                  Internal (reserved=2136KB, committed=2136KB)
                            (malloc=2104KB #11715) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=13160KB, committed=13160KB)
                            (malloc=9221KB #85798) 
                            (arena=3940KB #1)

-    Native Memory Tracking (reserved=1890KB, committed=1890KB)
                            (malloc=8KB #95) 
                            (tracking overhead=1882KB)

-               Arena Chunk (reserved=178KB, committed=178KB)
                            (malloc=178KB) 

-                   Unknown (reserved=38388KB, committed=38388KB)
                            (mmap: reserved=38388KB, committed=38388KB) 

Best answer

A glibc/malloc option appeared to solve this: MALLOC_PER_THREAD=0. However, I decided to use a debian/openjdk Docker base image instead of the CentOS one, which also resolved the problem.
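For anyone who stays on the glibc-based CentOS image, the tunable can be exported into the JVM's environment before Tomcat starts, e.g. in the container entrypoint. A sketch, assuming a stock Tomcat launch; MALLOC_ARENA_MAX is the more widely documented glibc tunable for the same per-thread-arena behavior, shown here as an alternative:

```shell
# Entrypoint sketch: set glibc malloc tunables for the Java process.
# MALLOC_PER_THREAD=0 is the setting reported to work in this case;
# MALLOC_ARENA_MAX caps the number of malloc arenas and is the
# better-documented knob for limiting per-thread arena growth.
export MALLOC_PER_THREAD=0
# export MALLOC_ARENA_MAX=4   # alternative glibc tunable

exec catalina.sh run
```

The underlying issue is that glibc's malloc can create a separate arena per thread, and the JVM's native allocations under thread churn can then balloon native (non-heap) memory, which is consistent with the Thread and Arena Chunk figures above.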
