GPE keeps crashing after loading some data into it

Below is the log output generated from startup until the crash.

Using signal 34 as cpu profiling switch
MessageQueue|ZMQContext|Initialized
GPE Server: Version
TigerGraph version: 3.7.0
product          release_3.7.0_09-15-2022         2ce031f1f8b26f47b652f781638fd9fd90f5faca  2022-09-05 08:06:11 -0700
cqrs             release_3.7.0_09-15-2022         b5dd722402e8be4628b2b5007c2f19531c00796c  2022-09-13 21:31:54 -0700
third_party      release_3.7.0_09-15-2022         fc8268f0daad62ccbb0f13fff42d6d56b2ed40fe  2022-08-18 23:30:17 -0700
er               release_3.7.0_09-15-2022         21625c29bc7fe7514952bf1cbc267199d2b0b6a0  2022-09-06 17:07:03 -0700
gle              release_3.7.0_09-15-2022         6d28f79a8f568336ed3b2d03a8c301985ab48afc  2022-09-15 03:20:57 -0700
algos            release_3.7.0_09-15-2022         d7209fdba5eb6f17bedb65183a65e541e9efa13b  2022-07-21 14:27:29 -0700
document         release_3.7.0_09-15-2022         63eb65dfb67c1c5af1eaad076eae93c34a949867  2022-07-29 17:06:28 -0700
glive            release_3.7.0_09-15-2022         1e857a778b3af8ba9e9a046c2feb41ccf0fa8135  2022-08-18 05:42:44 -0700
gus              release_3.7.0_09-15-2022         df4ba83b6b0e49d44afb17941ed24e415d447b27  2022-09-08 13:51:54 -0700
tools            release_3.7.0_09-15-2022         40c3c2fc23571269bc06fdd75d54ec1cda61f949  2022-09-15 11:26:54 -0700
engine           release_3.7.0_09-15-2022         0d9b2e2246de73bde020539cc06068a9975cb94d  2022-09-15 04:34:21 -0700
loader           release_3.7.0_09-15-2022         72b4e938fee28c4a7989d71b4dd79798ff0c67e5  2022-07-16 00:18:09 -0700

WARNING: Logging before InitGoogleLogging() is written to STDERR
I0408 15:50:27.396034 1147191 config.cpp:242] Queue infos:
I0408 15:50:27.396114 1147191 config.cpp:244] queue name:GSE_journal_0 topic:Topic: GSE_journal_0
 source: GSE
 target: GSE

I0408 15:50:27.396121 1147191 config.cpp:244] queue name:GSE_journal_1 topic:Topic: GSE_journal_1
 source: GSE
 target: GSE

I0408 15:50:27.396123 1147191 config.cpp:244] queue name:GSE_journal_10 topic:Topic: GSE_journal_10
 source: GSE
 target: GSE

I0408 15:50:27.396127 1147191 config.cpp:244] queue name:GSE_journal_11 topic:Topic: GSE_journal_11
 source: GSE
 target: GSE

I0408 15:50:27.396131 1147191 config.cpp:244] queue name:GSE_journal_12 topic:Topic: GSE_journal_12
 source: GSE
 target: GSE

I0408 15:50:27.396135 1147191 config.cpp:244] queue name:GSE_journal_13 topic:Topic: GSE_journal_13
 source: GSE
 target: GSE

I0408 15:50:27.396139 1147191 config.cpp:244] queue name:GSE_journal_14 topic:Topic: GSE_journal_14
 source: GSE
 target: GSE

I0408 15:50:27.396142 1147191 config.cpp:244] queue name:GSE_journal_15 topic:Topic: GSE_journal_15
 source: GSE
 target: GSE

I0408 15:50:27.396147 1147191 config.cpp:244] queue name:GSE_journal_16 topic:Topic: GSE_journal_16
 source: GSE
 target: GSE

I0408 15:50:27.396149 1147191 config.cpp:244] queue name:GSE_journal_17 topic:Topic: GSE_journal_17
 source: GSE
 target: GSE

I0408 15:50:27.396154 1147191 config.cpp:244] queue name:GSE_journal_18 topic:Topic: GSE_journal_18
 source: GSE
 target: GSE

I0408 15:50:27.396158 1147191 config.cpp:244] queue name:GSE_journal_19 topic:Topic: GSE_journal_19
 source: GSE
 target: GSE

I0408 15:50:27.396162 1147191 config.cpp:244] queue name:GSE_journal_2 topic:Topic: GSE_journal_2
 source: GSE
 target: GSE

I0408 15:50:27.396165 1147191 config.cpp:244] queue name:GSE_journal_20 topic:Topic: GSE_journal_20
 source: GSE
 target: GSE

I0408 15:50:27.396168 1147191 config.cpp:244] queue name:GSE_journal_21 topic:Topic: GSE_journal_21
 source: GSE
 target: GSE

I0408 15:50:27.396173 1147191 config.cpp:244] queue name:GSE_journal_22 topic:Topic: GSE_journal_22
 source: GSE
 target: GSE

I0408 15:50:27.396178 1147191 config.cpp:244] queue name:GSE_journal_23 topic:Topic: GSE_journal_23
 source: GSE
 target: GSE

I0408 15:50:27.396181 1147191 config.cpp:244] queue name:GSE_journal_24 topic:Topic: GSE_journal_24
 source: GSE
 target: GSE

I0408 15:50:27.396185 1147191 config.cpp:244] queue name:GSE_journal_25 topic:Topic: GSE_journal_25
 source: GSE
 target: GSE

I0408 15:50:27.396188 1147191 config.cpp:244] queue name:GSE_journal_26 topic:Topic: GSE_journal_26
 source: GSE
 target: GSE

I0408 15:50:27.396203 1147191 config.cpp:244] queue name:GSE_journal_27 topic:Topic: GSE_journal_27
 source: GSE
 target: GSE

I0408 15:50:27.396206 1147191 config.cpp:244] queue name:GSE_journal_28 topic:Topic: GSE_journal_28
 source: GSE
 target: GSE

I0408 15:50:27.396210 1147191 config.cpp:244] queue name:GSE_journal_29 topic:Topic: GSE_journal_29
 source: GSE
 target: GSE

I0408 15:50:27.396214 1147191 config.cpp:244] queue name:GSE_journal_3 topic:Topic: GSE_journal_3
 source: GSE
 target: GSE

I0408 15:50:27.396217 1147191 config.cpp:244] queue name:GSE_journal_30 topic:Topic: GSE_journal_30
 source: GSE
 target: GSE

I0408 15:50:27.396220 1147191 config.cpp:244] queue name:GSE_journal_31 topic:Topic: GSE_journal_31
 source: GSE
 target: GSE

I0408 15:50:27.396225 1147191 config.cpp:244] queue name:GSE_journal_4 topic:Topic: GSE_journal_4
 source: GSE
 target: GSE

I0408 15:50:27.396229 1147191 config.cpp:244] queue name:GSE_journal_5 topic:Topic: GSE_journal_5
 source: GSE
 target: GSE

I0408 15:50:27.396232 1147191 config.cpp:244] queue name:GSE_journal_6 topic:Topic: GSE_journal_6
 source: GSE
 target: GSE

I0408 15:50:27.396235 1147191 config.cpp:244] queue name:GSE_journal_7 topic:Topic: GSE_journal_7
 source: GSE
 target: GSE

I0408 15:50:27.396239 1147191 config.cpp:244] queue name:GSE_journal_8 topic:Topic: GSE_journal_8
 source: GSE
 target: GSE

I0408 15:50:27.396242 1147191 config.cpp:244] queue name:GSE_journal_9 topic:Topic: GSE_journal_9
 source: GSE
 target: GSE

I0408 15:50:27.396246 1147191 config.cpp:244] queue name:delta_queue topic:Topic: deltaQ
 source: GPE
 target: GPE

I0408 15:50:27.396250 1147191 config.cpp:244] queue name:get_request_queue topic:Topic: get_requestQ
 source: GPE
 target: GPE
GPE_1_1: tcp://127.0.0.1:7502

 source: RESTPP
 target: GPE
GPE_1_1: tcp://127.0.0.1:7502

 source: RESTPP-LOADER
 target: GPE
GPE_1_1: tcp://127.0.0.1:7502

 source: KAFKA-LOADER
 target: GPE
GPE_1_1: tcp://127.0.0.1:7502

I0408 15:50:27.396255 1147191 config.cpp:244] queue name:gse_delta_queue topic:Topic: deltaQ
 source: GSE
 target: GPE

I0408 15:50:27.396258 1147191 config.cpp:244] queue name:id_request_queue_query topic:Topic: id_requesQ_QUERY
 source: GPE
 target: GSE
GSE_1_1: tcp://127.0.0.1:6500

 source: RESTPP
 target: GSE
GSE_1_1: tcp://127.0.0.1:6500

 source: RESTPP-LOADER
 target: GSE
GSE_1_1: tcp://127.0.0.1:6500

 source: KAFKA-LOADER
 target: GSE
GSE_1_1: tcp://127.0.0.1:6500

I0408 15:50:27.396262 1147191 config.cpp:244] queue name:id_response_queue_query topic:Topic: id_responseQ_QUERY
 source: GSE
 target: GPE
GPE_1_1: tcp://127.0.0.1:7500

 source: GSE
 target: RESTPP
RESTPP_1_1: tcp://127.0.0.1:5500

 source: GSE
 target: RESTPP-LOADER
RESTPP-LOADER_1_1: tcp://127.0.0.1:8501

 source: GSE
 target: KAFKA-LOADER
KAFKA-LOADER_1_1: tcp://127.0.0.1:9501

I0408 15:50:27.396266 1147191 config.cpp:244] queue name:response_queue topic:Topic: responseQ
 source: GPE
 target: GPE
GPE_1_1: tcp://127.0.0.1:7501

 source: GPE
 target: RESTPP
RESTPP_1_1: tcp://127.0.0.1:5400

 source: GPE
 target: RESTPP-LOADER
RESTPP-LOADER_1_1: tcp://127.0.0.1:8401

 source: GPE
 target: KAFKA-LOADER
KAFKA-LOADER_1_1: tcp://127.0.0.1:9401

Partition: 1
Replica: 1
TotalGPEReplica: 1
TotalGPEPartition: 1
TotalGSEPartition: 1
OldInstanceName: GPE_1_1
InstanceName: GPE_1#1
PartitionDir: /data_ssd/tigergraph/data/gstore/0/part/
GlobalGPEDir: 
ExecutableDir: /data_ssd/tigergraph/app/3.7.0/bin//
LogDir: /data_ssd/tigergraph/log/gpe
LogLevl: BRIEF
LogMaxSizeMb: 100
LogRotationNum: 100
LogFileMaxDurationDay: 90
RetentionSizeGB: 40
NumberOfHashBucketInBit: 5
ZkAddress : 127.0.0.1:19999
MyIp: 127.0.0.1
RebuildThreadNum: 3
TopologyMemoryLimit: 0
VertexMemoryLimit: -1
EdgeMemoryLimit: -1
DiskConf: 
- path: /data_ssd/tigergraph/tmp/gpe/disks
  readthreads: 1
  writethreads: 1
  compressmethod: 
QueueInfo: 
 id_requestQueue: 
 id_responseQueue2GPE: 
 id_responseQueue2REST: 
 number_rest_servers: 2106483144
 request_queue: 
 response_queue: 
 post_queue: 
 delta_queue: 
 time out: 32518
 Server_Queues: []
 Client_Queues:[]

KafkaClient: 
  queue.buffering.max.ms: 1
  queue.buffering.max.messages: 64
  batch.num.messages: 64
  fetch.wait.max.ms: 10
  queued.min.messages: 100000
  fetch.error.backoff.ms: 6
  compression.codec: none
  message.max.bytes: 10485760
  request.required.acks: 1

LeaderElectionTTLSec_: 30


15:50:27.396538 gconfig.cpp:29] Engine_GConfig|ProcMaxGB set to 0
15:50:27.396545 gconfig.cpp:29] Engine_GConfig|ProcAlertGB set to 0
15:50:27.396547 gconfig.cpp:29] Engine_GConfig|KafkaAlertGB set to 0
15:50:27.396548 gconfig.cpp:29] Engine_GConfig|SysMinFreePct set to 10
15:50:27.396550 gconfig.cpp:29] Engine_GConfig|SysMinFreeInGb set to 50
15:50:27.396552 gconfig.cpp:29] Engine_GConfig|SysAlertFreePct set to 30
15:50:27.396553 gconfig.cpp:29] Engine_GConfig|CSIdle set to 200
15:50:27.396555 gconfig.cpp:29] Engine_GConfig|CSCritical set to 1000
15:50:27.396556 gconfig.cpp:29] Engine_GConfig|CPUIdle set to 95
15:50:27.396559 gconfig.cpp:29] Engine_GConfig|CPUCritical set to 98
15:50:27.396561 gconfig.cpp:29] Engine_GConfig|DiskAlertPct set to 10
15:50:27.396563 gconfig.cpp:29] Engine_GConfig|DiskCriticalPct set to 5
Number of Processors: 64
I0408 15:50:27.396746 1147191 gdict.cpp:305] Dictionary initialize start
2023-04-08 15:50:27,396:1147191(0x7f06858d3240):ZOO_INFO@log_env@1102: Client environment:zookeeper.version=zookeeper C client 3.5.8
2023-04-08 15:50:27,396:1147191(0x7f06858d3240):ZOO_INFO@log_env@1106: Client environment:host.name=destill-nvme-tigergraph-02
2023-04-08 15:50:27,396:1147191(0x7f06858d3240):ZOO_INFO@log_env@1113: Client environment:os.name=Linux
2023-04-08 15:50:27,396:1147191(0x7f06858d3240):ZOO_INFO@log_env@1114: Client environment:os.arch=4.18.0-448.el8.x86_64
2023-04-08 15:50:27,396:1147191(0x7f06858d3240):ZOO_INFO@log_env@1115: Client environment:os.version=#1 SMP Wed Jan 18 15:02:46 UTC 2023
2023-04-08 15:50:27,396:1147191(0x7f06858d3240):ZOO_INFO@log_env@1123: Client environment:user.name=root
2023-04-08 15:50:27,396:1147191(0x7f06858d3240):ZOO_INFO@log_env@1131: Client environment:user.home=/home/tigergraph
2023-04-08 15:50:27,396:1147191(0x7f06858d3240):ZOO_INFO@log_env@1143: Client environment:user.dir=/data_ssd/tigergraph/app/3.7.0/bin
2023-04-08 15:50:27,396:1147191(0x7f06858d3240):ZOO_INFO@zookeeper_init_internal@1177: Initiating client connection, host=127.0.0.1:19999 sessionTimeout=120000 watcher=0x7f0682e7bd80 sessionId=0 sessionPasswd=<null> context=0x7f067d8dadc0 flags=0
2023-04-08 15:50:27,397:1147191(0x7f064a7ff700):ZOO_INFO@check_events@2473: initiated connection to server [127.0.0.1:19999]
2023-04-08 15:50:27,398:1147191(0x7f064a7ff700):ZOO_INFO@check_events@2521: session establishment complete on server [127.0.0.1:19999], sessionId=0x100062864610068, negotiated timeout=120000 
I0408 15:50:27.399288 1147221 zookeeper_context.cpp:206] Root Watcher SESSION_EVENT state = CONNECTED_STATE for path: NA
I0408 15:50:27.399335 1147221 zookeeper_context.cpp:77] ZooKeeper Connection is setup. Session id: 100062864610068, previous client id:0
I0408 15:50:27.399344 1147221 zookeeper_watcher.cpp:288] Zk Session connected, notifying watchers
I0408 15:50:27.399351 1147221 zookeeper_watcher.cpp:295]   --> Number of watchers: 0
I0408 15:50:27.399356 1147221 zookeeper_watcher.cpp:296]   --> Callback time used(us): 6
I0408 15:50:27.447835 1147191 gdict.cpp:978] CLIENT: resolved server address: 127.0.0.1:17797, old server was:
I0408 15:50:27.452178 1147191 gdict.cpp:951] Connected to 127.0.0.1:17797. ChannelState:2
I0408 15:50:27.452283 1147191 gdict.cpp:352] Dictionary initialize succeed, took 56 milliseconds
0408 15:50:27.462 D kafka/producer.go:90] KafkaProducer EventOutputProducer: Start sarama producer with addrs: [127.0.0.1:30002]
0408 15:50:27.462 D kafka/producer.go:90] KafkaProducer EventInputProducer: Start sarama producer with addrs: [127.0.0.1:30002]
0408 15:50:27.462 D kafka/consumer.go:135] Start to create sarama partition consumer
0408 15:50:27.462 D kafka/consumer.go:135] Start to create sarama partition consumer
0408 15:50:27.462 D event/event_producer.go:53] To Produce event EventMeta:{Targets:{ServiceName:"IFM"}  EventId:"5d24a8a3c5064e249a41ab401fa4acac"  SpanId:"ServiceStatusSelfReport"  TimestampNS:1680961827461028417  Source:{ServiceName:"GPE"  Replica:1  Partition:1}}  Status:{ServiceDescriptor:{ServiceName:"GPE"  Replica:1  Partition:1}  ServiceStatus:Warmup} with max retry times 2147483647
0408 15:50:27.462 I kafka/consumer.go:150] KafkaConsumer EventOutputConsumer, start offset: 10313, oldest: 0, newest: 10327
0408 15:50:27.462 I kafka/consumer.go:150] KafkaConsumer EventInputConsumer, start offset: 11378, oldest: 0, newest: 11378
0408 15:50:27.463 I kafka/consumer.go:175] KafkaConsumer EventOutputConsumer: the real offset used to start kafka consumer is [10313]
0408 15:50:27.463 I kafka/consumer.go:175] KafkaConsumer EventInputConsumer: the real offset used to start kafka consumer is [11378]
0408 15:50:27.463 D kafka/producer.go:156] KafkaProducer EventOutputProducer of topic EventOutputQueue, produced msg &{EventOutputQueue <nil> {"EventMeta":{"Targets":[{"ServiceName":"IFM","Replica":0,"Partition":0}],"EventId":"5d24a8a3c5064e249a41ab401fa4acac","SpanId":"ServiceStatusSelfReport","TimestampNS":"1680961827461028417","Source":{"ServiceName":"GPE","Replica":1,"Partition":1}},"Command":null,"Status":{"ServiceDescriptor":{"ServiceName":"GPE","Replica":1,"Partition":1},"ProcessState":"StateUnchanged","ServiceStatus":"Warmup","CommandStatus":null},"Metrics":null} [] <nil> 10327 0 0001-01-01 00:00:00 +0000 UTC 0 0 0xc000241200 0 0 false}, err: <nil>

15:50:27.397062 gconfig.cpp:29] Engine_GConfig|SystemWatchSeconds set to 5
15:50:27.397085 gconfig.cpp:29] Engine_GConfig|AlertSystemWatchSeconds set to 1
15:50:27.397171 gsystem.cpp:710] System_GSystem|GSystemWatcher|Health|ProcMaxGB|0|ProcAlertGB|0|CurrentGB|0|SysMinFreePct|9|SysAlertFreePct|30|FreePct|96
15:50:27.397298 gsystem.cpp:666] System_GSystem|GSystemWatcher|Idle|involuntary context switches|0|user time|0.02%|system time|0%|system CPU|25%|iowait|0%
15:50:27.452407 gcleanup.cpp:279] System_GCleanUp|Added thread|#0
15:50:27.452446 gcleanup.cpp:279] System_GCleanUp|Added thread|#1
15:50:27.452481 gcleanup.cpp:279] System_GCleanUp|Added thread|#2
15:50:27.452513 gcleanup.cpp:279] System_GCleanUp|Added thread|#3
15:50:27.452549 gcleanup.cpp:124] System_GCleanUp|Max:16|Interval:30|4 threads started.
ENTERPRISE_EDITION
0408 15:50:27.463 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"db21cbb7448e4fceaa3d761710037064"  SpanId:"UnknownServiceStop"  TimestampNS:1680961797325012068  Source:{ServiceName:"EXE"  Partition:1}
Log folder at /data_ssd/tigergraph/log/gpe
0408 15:50:27.464 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"24dfb198a6a44db0b6e394cc813509f9"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961821622057992  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.464 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"9cbbee371e744a64a7506f138ecc8ae0"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961821623939497  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.464 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"37ddbf8000e44f219ac69ce5193a7ea5"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961821625772960  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.464 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"c3d688f55a044793aaddeeec0de9128b"  SpanId:"[invoker]@1680960220976957485:service-reset"  TimestampNS:1680961827296887783  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.465 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"015a987a635c4df5aaba5c5412f9fb0e"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961827299015616  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.465 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"c632238b63bf43c8af33556392fcb698"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961827301804159  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.465 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"8a3af1590e454258936486a291290788"  SpanId:"[invoker]@1680961821589453909:service-start"  TimestampNS:1680961827314466433  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.465 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"7e5593e49b9f4acaba096f829a488dd1"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961827317330563  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.465 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"ff8d0072f94340ee933be988e43dcef6"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961827320337338  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.466 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"411cc7058d9842939cbdc5cda532d52b"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961827323327582  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.466 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"9c8b6b1eff474a678efd9bba0a139272"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961827325961125  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.466 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"697d149b18784c6e87014835232dc2f5"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961827327452374  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.466 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"79f2224a4dff4c749e11bf72d57df8f4"  SpanId:"[invoker]@1680959968512243650:service-start"  TimestampNS:1680961827328905981  Source:{ServiceName:"EXE"  Partition:1}
0408 15:50:27.466 D event/event_consumer.go:201] Event will be dropped due to failing filter. EventMeta: Targets:{ServiceName:"IFM"}  EventId:"5d24a8a3c5064e249a41ab401fa4acac"  SpanId:"ServiceStatusSelfReport"  TimestampNS:1680961827461028417  Source:{ServiceName:"GPE"  Replica:1  Partition:1}
jemalloc stats - current allocated: 2774192 active: 3260416 metadata: 12098504 resident: 15044608 mapped: 38928384
jemalloc stats - current allocated: 36256472 active: 38793216 metadata: 15884768 resident: 59305984 mapped: 132427776
CompareTo GraphConfig:
Graph schema version 2:
Graph schema segSizeInBits_ 20
Incoming schema details: 
EdgeType typeid: 0, typename: send_nft, has subtype: 0, from vertex account to vertex nft_transfer, isdirected: 1, reversetypeid: 1, 
EdgeType typeid: 1, typename: reverse_send_nft, has subtype: 0, from vertex nft_transfer to vertex account, isdirected: 1, reversetypeid: 0, 
EdgeType typeid: 2, typename: receive_nft, has subtype: 0, from vertex nft_transfer to vertex account, isdirected: 1, reversetypeid: 3, 
EdgeType typeid: 3, typename: reverse_receive_nft, has subtype: 0, from vertex account to vertex nft_transfer, isdirected: 1, reversetypeid: 2, 
VertexType typeid: 0, typename: account, has subtype: 0, stattype: 2, iconname: , attribute( type: 4, name: account_address, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), 
VertexType typeid: 1, typename: nft_transfer, has subtype: 0, stattype: 2, iconname: , attribute( type: 4, name: trade_primary_id, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 4, name: txn_hash, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: true), attribute( type: 9, name: block_number, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: true), attribute( type: 4, name: from_address, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 4, name: to_address, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 10, name: buyer_pay_amt, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 10, name: seller_receive_amt, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 4, name: token_address, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 4, name: token_id, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 9, name: token_num, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 9, name: log_index, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 9, name: trade_type, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 9, name: trade_time, fixedsize_: 0, enumerator: 0, datetime: 1, has_index: false), attribute( type: 9, name: trade_category, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 10, name: gas_cost, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 9, name: is_scalp, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), attribute( type: 9, name: matched_sold_num, fixedsize_: 0, enumerator: 0, datetime: 0, has_index: false), 
jemalloc stats - current allocated: 2974924136 active: 2978451456 metadata: 24644608 resident: 3008741376 mapped: 3094179840
F0408 15:50:28.934101 1148243 edgeblockreader.cpp:413] Check failed: tmpptr <= edgeinfo_current_end_ 8407721,22
*** Check failure stack trace: ***
    @     0x7f0683f32102  google::LogMessage::Fail()
    @     0x7f0683f32061  google::LogMessage::SendToLog()
    @     0x7f0683f31a29  google::LogMessage::Flush()
    @     0x7f0683f34cc8  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f068267cdc5  topology4::EdgeBlockReader::NextEdgeType()
    @     0x7f06826f39e6  topology4::DeltaRebuilder::WriteSegmentFiles()
    @     0x7f06826f9965  topology4::DeltaRebuilder::RunOneSegment()
    @     0x7f0682cb051f  gutil::GThreadPool::pool_main()
    @     0x7f0683f217c5  thread_proxy
    @     0x7f0680fc31ca  start_thread
    @     0x7f067f5eae73  __GI___clone
E0408 15:50:28.956012 1148243 glogging.cpp:139] ============ Call stacktrace. signal(Aborted) ============
 0# FailureSignalHandler at /home/graphsql/product/src/engine/utility/gutil/glogging.cpp:142
 1# 0x00007F0680FCDCF0 in /data_ssd/tigergraph/app/3.7.0/.syspre/usr/lib_ld1/libpthread.so.0
 2# 0x00007F067F5FFACF in /data_ssd/tigergraph/app/3.7.0/.syspre/usr/lib_ld1/libc.so.6
 3# 0x00007F067F5D2EA5 in /data_ssd/tigergraph/app/3.7.0/.syspre/usr/lib_ld1/libc.so.6
 4# 0x00007F0683F38C93 in /data_ssd/tigergraph/app/3.7.0/bin/libtigergraph.so
 5# 0x00007F0683F32102 in /data_ssd/tigergraph/app/3.7.0/bin/libtigergraph.so
 6# 0x00007F0683F32061 in /data_ssd/tigergraph/app/3.7.0/bin/libtigergraph.so
 7# 0x00007F0683F31A29 in /data_ssd/tigergraph/app/3.7.0/bin/libtigergraph.so
 8# 0x00007F0683F34CC8 in /data_ssd/tigergraph/app/3.7.0/bin/libtigergraph.so
 9# topology4::EdgeBlockReader::NextEdgeType(unsigned int&, unsigned int&) at /home/graphsql/product/src/engine/core/topology/topology4/edgeblockreader.cpp:199
10# topology4::DeltaRebuilder::WriteSegmentFiles(unsigned int, topology4::SegmentMeta&, topology4::QueryState*, gutil::GTimer&, topology4::SegmentCheckSum*, gutil::BitQueue*&) at /home/graphsql/product/src/engine/core/topology/topology4/deltarebuilder.cpp:1300
11# topology4::DeltaRebuilder::RunOneSegment(topology4::QueryState*, unsigned long, bool, bool) at /home/graphsql/product/src/engine/core/topology/topology4/deltarebuilder.cpp:1548
12# gutil::GThreadPool::pool_main(unsigned char) at /home/graphsql/product/src/engine/utility/gutil/gthreadpool.cpp:115
13# 0x00007F0683F217C5 in /data_ssd/tigergraph/app/3.7.0/bin/libtigergraph.so
14# 0x00007F0680FC31CA in /data_ssd/tigergraph/app/3.7.0/.syspre/usr/lib_ld1/libpthread.so.0
15# 0x00007F067F5EAE73 in /data_ssd/tigergraph/app/3.7.0/.syspre/usr/lib_ld1/libc.so.6

============ End of stacktrace ============
*** Aborted at 1680961829 (unix time) try "date -d @1680961829" if you are using GNU date ***
PC: @     0x7f067f5ffacf __GI_raise
*** SIGABRT (@0x3e900118137) received by PID 1147191 (TID 0x7f05307ff700) from PID 1147191; stack trace: ***
    @     0x7f0682c9ee9a gutil::(anonymous namespace)::FailureSignalHandler()
    @     0x7f0680fcdcf0 (unknown)
    @     0x7f067f5ffacf __GI_raise
    @     0x7f067f5d2ea5 __GI_abort
    @     0x7f0683f38c93 google::DumpStackTraceAndExit()
    @     0x7f0683f32102 google::LogMessage::Fail()
    @     0x7f0683f32061 google::LogMessage::SendToLog()
    @     0x7f0683f31a29 google::LogMessage::Flush()
    @     0x7f0683f34cc8 google::LogMessageFatal::~LogMessageFatal()
    @     0x7f068267cdc5 topology4::EdgeBlockReader::NextEdgeType()
    @     0x7f06826f39e6 topology4::DeltaRebuilder::WriteSegmentFiles()
    @     0x7f06826f9965 topology4::DeltaRebuilder::RunOneSegment()
    @     0x7f0682cb051f gutil::GThreadPool::pool_main()
    @     0x7f0683f217c5 thread_proxy
    @     0x7f0680fc31ca start_thread
    @     0x7f067f5eae73 __GI___clone
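
For context on the two stack traces above (my reading of how glog behaves generally, not of TigerGraph internals): the line beginning with "F0408" is a glog fatal CHECK failure. A failed CHECK logs the stringified condition plus a streamed message, prints the "Check failure stack trace", and then calls abort(), which is why a SIGABRT trace immediately follows. In other words, the SIGABRT is not a separate memory fault; it is the deliberate termination triggered by the failed consistency check at edgeblockreader.cpp:413. A minimal standalone C++ sketch (hypothetical values, real glog API) that reproduces the same log pattern:

    #include <cstddef>
    #include <glog/logging.h>

    int main(int argc, char* argv[]) {
      google::InitGoogleLogging(argv[0]);
      // Hypothetical stand-ins for the reader's cursor and block end.
      std::size_t tmpptr = 100;               // read offset, past the end
      std::size_t edgeinfo_current_end = 64;  // end of the current block
      // A failed CHECK prints "F... Check failed: <condition> <message>",
      // then "*** Check failure stack trace: ***", then abort()s --
      // the exact sequence seen in the GPE log above.
      CHECK(tmpptr <= edgeinfo_current_end)
          << tmpptr << "," << edgeinfo_current_end;
      return 0;
    }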

One point worth mentioning: the server I am testing on has 512 GB of memory installed. Has TigerGraph been tested on servers with this much memory?

Different components crash under different OS distributions.
Here is what happened under Ubuntu 20.04: EXE and CTRL always panic - #8 by bgdsh
The crash log above was generated under CentOS Stream 8.

To highlight the crash reason:

F0408 15:50:37.373958 1149454 edgeblockreader.cpp:413] Check failed: tmpptr <= edgeinfo_current_end_ 8407721,22
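
If I read the check correctly (the names tmpptr and edgeinfo_current_end_ come from the log line; everything else in the sketch below is my assumption), it is a bounds guard in the edge-block reader: a cursor walking a serialized edge block must never move past the block's end, so the stored topology data appears to be shorter than the rebuild expects, i.e. corrupted or inconsistent with the schema. A hypothetical sketch of such a guard:

    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Hypothetical reader: struct and method names are made up; only the
    // invariant mirrors the one that failed at edgeblockreader.cpp:413.
    struct EdgeBlockReaderSketch {
      const std::uint8_t* block_;         // serialized edge block
      std::size_t tmpptr;                 // current read offset
      std::size_t edgeinfo_current_end_;  // size of the block

      std::uint32_t ReadNextField() {
        // The cursor must never pass the end of the current edge block.
        assert(tmpptr + sizeof(std::uint32_t) <= edgeinfo_current_end_);
        std::uint32_t value;
        std::memcpy(&value, block_ + tmpptr, sizeof(value));
        tmpptr += sizeof(value);
        return value;
      }
    };

    int main() {
      const std::uint8_t block[8] = {};
      EdgeBlockReaderSketch r{block, 0, sizeof(block)};
      r.ReadNextField();  // ok
      r.ReadNextField();  // ok -- cursor reaches the end exactly
      r.ReadNextField();  // guard fires, analogous to the CHECK above
    }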

I have run gadmin reset GPE GSE many times; GPE goes down as soon as I try to load new data into it.

@bgdsh As long as the hardware is stable (RAM, motherboard, SSD, etc.), running TigerGraph on a 512 GB system shouldn't result in any performance loss. More information on benchmarks with different servers: Graph Database Benchmarks and Performance Comparison | TigerGraph

@bgdsh Can you go through this troubleshooting guide to see if any of its steps identifies the issue? Troubleshooting Guide :: TigerGraph Server. If not, we can try some additional steps.