@markmegerian
Thanks for all your support. I understand your concern that constantly updating relationships and deleting edges is not good practice, but for the sake of the demo I need to persist the changes so I can convey my message and show the approach solving every scenario; in production we would not do this constantly.
I am actually doing some demos for my client, and I want to propose TigerGraph to them for our next project or product.
Right now I am seeing some weird behavior in my TG Cloud account. I have one GSQL query, part of which works perfectly:
CREATE QUERY segment_calculate(INT xParams, INT yParams) FOR GRAPH supply_chain {
  /* Write query logic here */
  SumAccum<DOUBLE> @@total_margin;
  SumAccum<DOUBLE> @d_margin_penetration;
  ListAccum<DOUBLE> @@margin_penetration;
  SumAccum<INT> @lesser_then_margin;
  SumAccum<DOUBLE> @yAxis;
  SetAccum<VERTEX<customer>> @product_customers;
  SetAccum<VERTEX<customer>> @@total_customers;
  ListAccum<DOUBLE> @@customer_penetration;
  SumAccum<INT> @lesser_then;
  SumAccum<DOUBLE> @xAxis;
  SumAccum<STRING> @customer_group;
  SumAccum<STRING> @margin_group;
  SumAccum<STRING> @segment_group;

  start = {Product.*};
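  # For each product, collect the customers who purchased it; also collect all customers and each product's customer penetration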
  products_purchased = SELECT s FROM start:s -(customer_product:e)- customer:t
                       ACCUM s.@product_customers += t, @@total_customers += t
                       POST-ACCUM(s) @@customer_penetration += s.d_customer_penetration;
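  # Total margin across all purchased products, then each product's share of it (margin penetration)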
  products_purchased = SELECT s FROM products_purchased:s
                       ACCUM @@total_margin += s.d_margin
                       POST-ACCUM(s) s.d_margin_penetration = s.d_margin / @@total_margin,
                                     s.@d_margin_penetration = s.d_margin / @@total_margin,
                                     @@margin_penetration += s.d_margin / @@total_margin;
  PRINT @@total_margin;
  # Calculate Percentile
  products_purchased = SELECT s FROM products_purchased:s
                       ACCUM
                         FOREACH attr IN @@customer_penetration DO
                           IF s.d_customer_penetration > attr THEN s.@lesser_then += 1 END
                         END,
                         FOREACH attr IN @@margin_penetration DO
                           IF s.@d_margin_penetration > attr THEN s.@lesser_then_margin += 1 END
                         END
                       POST-ACCUM(s) s.@xAxis = s.@lesser_then * 1.0 / @@customer_penetration.size() * 100,
                                     s.@yAxis = s.@lesser_then_margin * 1.0 / @@margin_penetration.size() * 100;
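  # Delete any existing popularity_index, margin_index, and product_segment edges so they can be re-created below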
  z = SELECT t FROM products_purchased:s -(popularity_index:pi)- Popularity_Index:t ACCUM DELETE(pi);
  z = SELECT t FROM products_purchased:s -(margin_index:mi)- Margin_Index:t ACCUM DELETE(mi);
  z = SELECT t FROM products_purchased:s -(product_segment:ps)- Segment:t ACCUM DELETE(ps);
**Here I used a divide-and-conquer strategy:** if I run only the GSQL above and remove the part of the query below, it executes successfully in about 1 second. But when I run the code below together with the code above, it keeps loading and eventually fails with:

**The query didn't finish because it exceeded the query timeout threshold (60 seconds). Please check GSE log for license expiration and RESTPP/GPE log with request id (131081.RESTPP_1_1.1646799783472.N) for details. Try increase RESTPP.Factory.DefaultQueryTimeoutSec or add header GSQL-TIMEOUT to override default system timeout.**

This happens even though there are not that many products in the database: 9,871 products in total, and only 110 of them have complete enough data to traverse these edges. What is wrong with the code below? Why does it take so long even though the data volume is small, and why, after two or three failures, does it sometimes complete in 1 second?
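  # Classify each product as High/Low on the popularity (x) and margin (y) axes and persist the result as index edges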
  products_purchased = SELECT s FROM products_purchased:s
                       ACCUM
                         CASE WHEN s.@xAxis > xParams THEN
                           s.@customer_group += "High",
                           INSERT INTO popularity_index (FROM, TO) VALUES (s.id, "High")
                         END,
                         CASE WHEN s.@xAxis < xParams THEN
                           s.@customer_group += "Low",
                           INSERT INTO popularity_index (FROM, TO) VALUES (s.id, "Low")
                         END,
                         CASE WHEN s.@yAxis > yParams THEN
                           s.@margin_group += "High",
                           INSERT INTO margin_index (FROM, TO) VALUES (s.id, "High")
                         END,
                         CASE WHEN s.@yAxis < yParams THEN
                           s.@margin_group += "Low",
                           INSERT INTO margin_index (FROM, TO) VALUES (s.id, "Low")
                         END;
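  # Combine the two classifications into a segment label and persist it as a product_segment edge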
  result = SELECT s FROM products_purchased:s
           ACCUM
             CASE WHEN s.@margin_group == "High" AND s.@customer_group == "High" THEN
               s.@segment_group += "High Margin Popular Products",
               INSERT INTO product_segment (FROM, TO) VALUES (s.id, "High Margin Popular Products")
             END,
             CASE WHEN s.@margin_group == "High" AND s.@customer_group == "Low" THEN
               s.@segment_group += "High Margin Low Products",
               INSERT INTO product_segment (FROM, TO) VALUES (s.id, "High Margin Low Products")
             END,
             CASE WHEN s.@margin_group == "Low" AND s.@customer_group == "High" THEN
               s.@segment_group += "Low Margin Popular Products",
               INSERT INTO product_segment (FROM, TO) VALUES (s.id, "Low Margin Popular Products")
             END,
             CASE WHEN s.@margin_group == "Low" AND s.@customer_group == "Low" THEN
               s.@segment_group += "Need Improvement",
               INSERT INTO product_segment (FROM, TO) VALUES (s.id, "Need Improvement")
             END;
  PRINT result;
}
We want to persist the result as a team so that we can view it in the Explore section. When I remove all of the edge INSERTs from the GSQL code, it works perfectly and in good time, but it seems that creating edges this frequently is not easily possible here. If that is the issue, what will happen when I build a LOADING JOB and insert millions of rows into the graph?
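For reference, this is roughly what I have in mind for the bulk-load case. It is only a sketch based on my assumptions about the input file (the job name, the file variable, the column names, and the separator are all placeholders, and I have not run it yet):

CREATE LOADING JOB load_product_segments FOR GRAPH supply_chain {
  # Assumed CSV layout with a header row: product_id, segment_name (placeholder names)
  DEFINE FILENAME segment_file;
  LOAD segment_file
    TO EDGE product_segment VALUES ($"product_id", $"segment_name")
    USING header="true", separator=",";
}

I would then run it with RUN LOADING JOB load_product_segments USING segment_file pointed at the exported file. Would creating the edges this way, instead of INSERT INTO inside the query, behave better at that volume?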