TigerGraph Docker container always shows CPU utilization between 150% and 600%

I installed Docker on an AWS t2.xlarge EC2 instance (4 vCPUs); the TigerGraph image was pulled from Docker Hub.

TigerGraph running in a Docker container shows CPU utilization between 150% and 600% according to `sudo docker stats`:

> CONTAINER ID   NAME        CPU %     MEM USAGE / LIMIT     MEM %     NET I/O   BLOCK I/O         PIDS
> b900e236ec89   tg          333.92%   4.777GiB / 15.61GiB   30.60%    0B / 0B   1.6GB / 54.3GB    827

It’s an empty database. Here is what the `top` CLI shows in the container:

Is this normal? If not, how should I handle it?

Update: now it’s 130%:

> CONTAINER ID   NAME        CPU %     MEM USAGE / LIMIT     MEM %     NET I/O   BLOCK I/O         PIDS
> b900e236ec89   tg          130.20%   4.805GiB / 15.61GiB   30.78%    0B / 0B   1.6GB / 56.5GB    826


When you see CPU utilization percentages exceeding 100% in `docker stats`, it’s because Docker reports CPU usage summed across all CPU cores. 100% corresponds to one fully utilized core, so on a multi-core system the total can exceed 100%.

For example, with 4 logical CPUs the theoretical maximum is 400%, since each core contributes up to 100%. A reading of 150% therefore means the container is consuming the equivalent of one and a half fully busy cores. A reading of 600% only makes sense if more than 4 logical CPUs are visible (e.g. hyper-threading exposing 8 logical CPUs would raise the ceiling to 800%); on a t2.xlarge, which exposes exactly 4 vCPUs, a momentary spike above 400% is more likely a sampling artifact in `docker stats` than real sustained load.

Docker calculates CPU usage by adding up the usage across all cores. This gives an accurate picture of the total CPU resources a container is consuming on a multi-core system. If you need per-core figures, use a tool that breaks them out, such as `htop` or `top` (press `1` to toggle the per-CPU view), rather than Docker’s aggregated statistic.
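As a quick sanity check, you can normalize the `docker stats` figure by the number of logical CPUs to see what share of the whole machine the container is using. This is a minimal sketch using only standard tools; the container name `tg` is taken from the output above, and `normalize_cpu` is a hypothetical helper name:

```shell
# Normalize a docker-stats CPU% string by the number of logical CPUs.
# Example: normalize_cpu "333.92%" 4  ->  83.5 (% of total machine capacity)
normalize_cpu() {
  raw="${1%\%}"   # strip the trailing "%"
  awk -v r="$raw" -v c="$2" 'BEGIN { printf "%.1f\n", r / c }'
}

# Against a live container (requires Docker and the "tg" container running):
#   cores=$(nproc)
#   pct=$(sudo docker stats --no-stream --format '{{.CPUPerc}}' tg)
#   echo "tg uses $(normalize_cpu "$pct" "$cores")% of total CPU capacity"
```

On the 4-vCPU instance above, the 333.92% reading normalizes to about 83.5% of total machine capacity, which makes the magnitude easier to reason about.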

Many thanks @Jon_Herke,

But my real question is: why does an empty, idle TigerGraph instance use so much CPU?