Is there a minimum memory size requirement for TigerGraph?

We’re currently using TigerGraph on a single-node setup.
Our test graph data is about 100 GB, and TigerGraph seems to have compressed it down to around 45 GB.

The system has 32 GB of memory, and there is enough swap and disk space available.
However, when we run a simple query, it runs indefinitely and never completes, and the kernel log shows the OOM killer terminating the GPE process (tg_dbs_gped):

[Wed Oct 29 10:32:40 2025] java invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
[Wed Oct 29 10:32:40 2025]  oom_kill_process+0x118/0x280
[Wed Oct 29 10:32:40 2025] [  pid  ]   uid  tgid total_vm      rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
[Wed Oct 29 10:32:40 2025] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=docker-0227d74dd7b8382eab321f03e75e4660ec51c1457717eb15f8aacdc9dad2a8f2.scope,mems_allowed=0-1,oom_memcg=/system.slice/docker-0227d74dd7b8382eab321f03e75e4660ec51c1457717eb15f8aacdc9dad2a8f2.scope,task_memcg=/system.slice/docker-0227d74dd7b8382eab321f03e75e4660ec51c1457717eb15f8aacdc9dad2a8f2.scope,task=tg_dbs_gped,pid=4014373,uid=1000
[Wed Oct 29 10:32:40 2025] Memory cgroup out of memory: Killed process 4014373 (tg_dbs_gped) total-vm:63234956kB, anon-rss:37606476kB, file-rss:70272kB, shmem-rss:0kB, UID:1000 pgtables:97548kB oom_score_adj:0
[Wed Oct 29 10:32:44 2025] oom_reaper: reaped process 4014373 (tg_dbs_gped), now anon-rss:7620kB, file-rss:0kB, shmem-rss:0kB
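
For anyone debugging the same symptom, these messages come from the kernel log; a minimal check with standard Linux tooling (options may vary by distribution) looks like this:

    # Print kernel messages with human-readable timestamps,
    # filtered for OOM-killer activity (needs dmesg permission).
    sudo dmesg -T | grep -i oom

    # Watch memory and swap headroom every 2 seconds while the query runs.
    free -h -s 2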

After increasing the memory to 64 GB, the same query ran successfully without any issue.

I’ve noticed that TigerGraph can use disk storage as well, but it doesn’t seem to be helping in this case.

Is there any configuration or setting we might be missing?
Or does TigerGraph require that the available memory be at least as large as the compressed graph size?

Thank you!

Hi Bumsung,

We also have hardware requirements documented here: Hardware and Software Requirements :: TigerGraph DB

For general performance, we would recommend:

  • Adhering to the hardware recommendations linked above.
  • For most query workloads, the amount of RAM should be at least 2x the size of the graph data (ideally the size reported by gstatusgraph; if that is not available, we use the “raw data” size instead). A rough capacity check is sketched after this list.
  • While your current system does work with 45 GB of graph data on 64 GB of RAM, adding more memory may still yield faster performance.
  • TigerGraph is an in-memory database, so it performs best when all the data fits in memory. Otherwise, it will need to offload some data to disk: Memory management :: TigerGraph DB
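
As a rough way to sanity-check the 2x guideline, here is a sketch, assuming a single-node installation where the TigerGraph tools are on the tigergraph user’s PATH:

    # Report the per-graph data size as TigerGraph measures it.
    gstatusgraph

    # Compare that against total and available physical memory.
    free -h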

It’s also possible that the query itself consumes a lot of memory (e.g. selecting all vertices of a specific type…).
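
If you suspect a particular query, one way to observe its footprint is to watch the GPE process (tg_dbs_gped, the daemon named in your OOM log) while the query runs; a sketch using standard tools:

    # Sample the GPE daemon’s resident (RSS) and virtual (VSZ) memory once per second.
    watch -n 1 "ps -C tg_dbs_gped -o pid,rss,vsz,cmd"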

Best,
Supawish Limprasert (Jim)
Solution Engineer, TigerGraph
