Could Someone Give me Advice on Optimizing Graph Queries for Large Datasets?

Hi @samzy ,

I have a couple of questions:

  • May I ask which TigerGraph version you are currently using? E.g. 3.10.1 or 3.9.3?
  • What are the rough runtimes of the queries you mentioned? E.g. 3 minutes, 30 minutes…?
  • What is your cluster setup? What are its partition number and replica number?
  • Are you currently using TG Cloud? TG On-prem? Or TG in Docker?

Here are some optimization options you could look into:

Measuring the runtime of each portion of your GSQL query:

// Wrap the portion you want to profile between two timestamp() calls
INT curr_part_start_time = timestamp();
some_vertex_set = SELECT s FROM ... ;   // the portion being measured
INT curr_part_end_time = timestamp();
INT curr_part_time_ms = curr_part_end_time - curr_part_start_time;
PRINT curr_part_time_ms;
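
For context, below is a minimal sketch of how that timing pattern can sit inside a full installed query. The query name (timed_neighbors), graph name (MyGraph), and the Person/KNOWS schema are placeholders for illustration only; this variant uses the built-in now() and datetime_diff() functions, which report elapsed time at second granularity rather than milliseconds:

CREATE QUERY timed_neighbors(VERTEX<Person> p) FOR GRAPH MyGraph {
  // Placeholder schema: Person vertices connected by undirected KNOWS edges
  DATETIME part_start;
  DATETIME part_end;

  seed = {p};

  part_start = now();
  neighbors = SELECT t
              FROM seed:s -(KNOWS:e)- Person:t;
  part_end = now();

  // datetime_diff(later, earlier) returns the difference in seconds
  PRINT datetime_diff(part_end, part_start) AS neighbor_select_seconds;
  PRINT neighbors;
}

Once the query is installed, each run prints the elapsed seconds for that SELECT alongside the result set, so you can see which portion dominates the total runtime.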

I hope this helps to some degree!

Best,
Supawish Limprasert (Jim)
Solution Engineer, TigerGraph