Data ingestion memory problem

Hello,

I have a problem with data ingestion in TigerGraph. I am using Impala as the data source, and Impala is connected to TigerGraph through JDBC.
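
For reference, the ingestion path looks roughly like the sketch below. This is a simplified Java sketch, not the exact production job: the host, graph name, loading-job name, and the readRowsFromImpala() helper are placeholders, and the URL/properties assume the TigerGraph JDBC driver (tigergraph/ecosys), so adjust to your own setup.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.Properties;

public class TigerGraphBatchLoad {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("username", "tigergraph");        // placeholder credentials
        props.put("password", "tigergraph");
        props.put("graph", "TransactionGraph");     // placeholder graph name

        // Assumed TigerGraph JDBC URL format; points at one node's REST++ port
        String url = "jdbc:tg:http://tg-node-1:14240";

        try (Connection con = DriverManager.getConnection(url, props)) {
            // Placeholder loading-job name; each row is handed to the job as one CSV line
            String sql = "INSERT INTO job load_transactions(line) VALUES(?)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                int batchSize = 10000;               // flush in bounded batches
                int count = 0;
                for (String line : readRowsFromImpala()) {
                    ps.setString(1, line);
                    ps.addBatch();
                    if (++count % batchSize == 0) {
                        ps.executeBatch();           // send the batch instead of one huge request
                    }
                }
                ps.executeBatch();                   // flush any remaining rows
            }
        }
    }

    // Placeholder for the Impala side: in the real pipeline this streams rows out of Impala over JDBC
    private static List<String> readRowsFromImpala() {
        return List.of();
    }
}
```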

The specs of the TigerGraph cluster are as follows:
Environment size: 2 TB
Total nodes: 16
Node Config:
RAM: 128 GB
CPU: 12 Cores
Disk: 640 GB

The problem is that I can only ingest up to about 2 months of transaction data, whereas, given the specification, I think it should be able to hold up to 1 year of transaction data. The data volume is around 130 million rows per 3 days. The GSQL side has already been distributed. The RAM might simply not be enough, but I am looking for other potential problems that I might not be aware of.
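
For rough scale, based on the numbers above: 130 million rows per 3 days is about 43 million rows per day, so 2 months (~60 days) is roughly 2.6 billion rows, while a full year would be roughly 15.8 billion rows, i.e. about 6x what currently fits. Total cluster RAM is 16 x 128 GB = 2 TB (matching the environment size above) and total disk is 16 x 640 GB = ~10 TB.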

I suspect the problem might be on the Impala side, but I am keen to know whether others have had a similar experience.

Given the information above, I would like to know what the potential root cause might be and what possible solutions there are. Or maybe the TigerGraph specs themselves are what need to be upgraded.