We’re currently using TigerGraph on a single-node setup.
Our test graph is about 100 GB of raw data, which TigerGraph appears to have compressed down to roughly 45 GB.
The machine has 32 GB of RAM, with plenty of swap and free disk space available.
However, when we run a simple query, it runs indefinitely and never completes.
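For context, the query itself is nothing exotic; we trigger it through the RESTPP endpoint roughly like the sketch below. The graph name, query name, and the default port 9000 are placeholders for illustration, and authentication is omitted, so this is not our exact setup:

```
# Minimal sketch of how we invoke the query. "MyGraph" and "simple_count"
# are placeholder names, 9000 is the default RESTPP port, and token
# handling is omitted for brevity.
import requests

RESTPP = "http://localhost:9000"
GRAPH = "MyGraph"          # placeholder graph name
QUERY = "simple_count"     # placeholder name of the installed query

# Installed GSQL queries are exposed under /query/<graph>/<query> on RESTPP.
resp = requests.get(f"{RESTPP}/query/{GRAPH}/{QUERY}", timeout=600)
resp.raise_for_status()
print(resp.json())
```

While the query is hanging, dmesg on the host shows the GPE process (tg_dbs_gped) being killed by the kernel's OOM killer inside the container's memory cgroup: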
[Wed Oct 29 10:32:40 2025] java invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
[Wed Oct 29 10:32:40 2025] oom_kill_process+0x118/0x280
[Wed Oct 29 10:32:40 2025] [ pid ] uid tgid total_vm rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
[Wed Oct 29 10:32:40 2025] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=docker-0227d74dd7b8382eab321f03e75e4660ec51c1457717eb15f8aacdc9dad2a8f2.scope,mems_allowed=0-1,oom_memcg=/system.slice/docker-0227d74dd7b8382eab321f03e75e4660ec51c1457717eb15f8aacdc9dad2a8f2.scope,task_memcg=/system.slice/docker-0227d74dd7b8382eab321f03e75e4660ec51c1457717eb15f8aacdc9dad2a8f2.scope,task=tg_dbs_gped,pid=4014373,uid=1000
[Wed Oct 29 10:32:40 2025] Memory cgroup out of memory: Killed process 4014373 (tg_dbs_gped) total-vm:63234956kB, anon-rss:37606476kB, file-rss:70272kB, shmem-rss:0kB, UID:1000 pgtables:97548kB oom_score_adj:0
[Wed Oct 29 10:32:44 2025] oom_reaper: reaped process 4014373 (tg_dbs_gped), now anon-rss:7620kB, file-rss:0kB, shmem-rss:0kB
The killed tg_dbs_gped process had grown to roughly 36 GB of anonymous RSS, which is already more than the 32 GB of physical RAM in the box. After increasing the memory to 64 GB, the same query ran successfully without any issues.
I’ve noticed that TigerGraph can use disk storage as well, but it doesn’t seem to be helping in this case.
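One thing we have not managed to verify is whether the engine ever pushes anything to swap or disk while the query runs. This is the kind of check we have in mind: a small ad-hoc script of our own (not a TigerGraph utility) that polls the resident and swapped memory of the GPE process, using the process name tg_dbs_gped from the OOM log above:

```
# Ad-hoc monitor (not a TigerGraph utility): poll resident and swapped
# memory of the GPE process to see whether the working set stays in RAM
# or gets pushed to swap. "tg_dbs_gped" is the name from the OOM log.
import time
from pathlib import Path

def find_pids(name):
    """Return PIDs whose /proc/<pid>/comm matches the given process name."""
    pids = []
    for entry in Path("/proc").iterdir():
        if entry.name.isdigit():
            try:
                if (entry / "comm").read_text().strip() == name:
                    pids.append(int(entry.name))
            except OSError:
                pass  # process disappeared while we were scanning
    return pids

def rss_and_swap_kb(pid):
    """Read VmRSS and VmSwap (in kB) from /proc/<pid>/status."""
    values = {"VmRSS": 0, "VmSwap": 0}
    for line in Path(f"/proc/{pid}/status").read_text().splitlines():
        key, _, rest = line.partition(":")
        if key in values:
            values[key] = int(rest.split()[0])  # field looks like "12345 kB"
    return values

if __name__ == "__main__":
    while True:
        for pid in find_pids("tg_dbs_gped"):
            v = rss_and_swap_kb(pid)
            print(f"pid={pid} rss={v['VmRSS'] // 1024} MB "
                  f"swap={v['VmSwap'] // 1024} MB")
        time.sleep(5)
```

That would at least tell us whether any of the data is being swapped out, or whether everything stays resident until the cgroup limit is hit.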
Is there any configuration or setting we might be missing?
Or does TigerGraph require that the available memory be at least as large as the compressed graph size?
Thank you!