Job creation error

We are getting the below error when creating the job:

 Encountered an error when getting info from DICT!!
 Please check the status of ZK and DICT.
 Failed to create loading jobs: [load_patient_CSVs].
 For more info, please check log at node 'm1': /home/tigergraph/tigergraph/log/gsql/ERROR.20220220-151540.669

Below is what I see in the server logs

E@20220222 02:13:33.545 tigergraph|localhost:43940|00000001390 (AdminServiceClient.java:677) Download catalog from dict fails with code: 310
E@20220222 02:13:33.546 tigergraph|localhost:43940|00000001390 (Catalog.java:4819) Cannot get the generated code, please check if ZK or DICT is down.
E@20220222 02:13:33.546 tigergraph|localhost:43940|00000001390 (MetadataUpdateOperation.java:151) Failed executeInMemory for CreateLoadJobOperation

Any help troubleshooting the issue is much appreciated.

  1. Open tg connection

    # pyTigerGraph provides the tg module; hostName/userName/password are defined earlier in the notebook
    import pyTigerGraph as tg
    conn = tg.TigerGraphConnection(
      host=hostName,
      username=userName,
      password=password)
    
  2. Make sure connection is good

    # print any current schema so we can verify that we are connected
    res = conn.gsql('LS')
    print(res)
    

  3. Define the loading job

    job_load_patient_CSVs = '''
    CREATE LOADING JOB load_patient_CSVs FOR GRAPH PatientCentral {
    DEFINE FILENAME patient_CSVs_path="/coldstart/upload/pending_vertices/patients/";
    LOAD patient_CSVs_path TO VERTEX Patient VALUES($0, $1, $2) USING HEADER="false";
    }'''
    res = conn.gsql(job_load_patient_CSVs)
    print(res)

Not sure why DICT was down in the first place.

Hi Ram,

a couple things to check:

  • The error suggests the ZK and DICT services are not running. Can you ssh into the cluster as tigergraph, check the status with ‘gadmin status’, and restart with ‘gadmin start all’?
  • It looks like you are running the create-job script from Python. Can you try running just the CREATE LOADING JOB command from gsql? That takes the Python/REST translation out of the picture and makes it easier to debug.
  • Your FILENAME variable references a folder, and in a cluster setup I think you need to specify which node(s) the file is resident on; the syntax is “ALL:/data/” (see the sketch after the doc link below).

Check the docs on DEFINE FILENAME here: https://docs.tigergraph.com/gsql-ref/current/ddl-and-loading/creating-a-loading-job#_define_filename
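
For example, with the path from your job, the corrected job would look something like this (a sketch only, assuming the CSVs sit at the same path on every node), pasted straight into the gsql shell:

    USE GRAPH PatientCentral
    CREATE LOADING JOB load_patient_CSVs FOR GRAPH PatientCentral {
      # "ALL:" makes every node read the path locally; a node prefix such as "m1:" targets a single machine
      DEFINE FILENAME patient_CSVs_path = "ALL:/coldstart/upload/pending_vertices/patients/";
      LOAD patient_CSVs_path TO VERTEX Patient VALUES($0, $1, $2) USING HEADER="false";
    }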

Thanks @Robert_Hardaway for the quick reply.

  • Running ‘gadmin start all’ fixed the issue
  • I was able to create and run the job from the notebook (rough sketch below)
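
For anyone landing here later, roughly what the notebook cell ended up as — just a sketch, assuming the graph name and CSV path from my original post, and issuing RUN LOADING JOB through conn.gsql():

    import pyTigerGraph as tg

    # hostName / userName / password as in step 1 of the original post
    conn = tg.TigerGraphConnection(host=hostName, username=userName, password=password)

    # re-create the job with the node-prefixed path suggested above
    print(conn.gsql('''
    CREATE LOADING JOB load_patient_CSVs FOR GRAPH PatientCentral {
    DEFINE FILENAME patient_CSVs_path="ALL:/coldstart/upload/pending_vertices/patients/";
    LOAD patient_CSVs_path TO VERTEX Patient VALUES($0, $1, $2) USING HEADER="false";
    }'''))

    # run the job against the files already sitting on each node
    print(conn.gsql('USE GRAPH PatientCentral\nRUN LOADING JOB load_patient_CSVs'))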

I’m still not sure why it got into this state, though!

Is using ‘ALL’ recommended here? I see the loading attempt was made from every TigerGraph node where the REST endpoint is available.

If you use ‘all’, it’s going to start all inactive services. Sometimes they are chained together (nginx, restpp, …), so you don’t want to start them one by one.
I would always use ‘gadmin start all -y’.