@markmegerian
Hi, I have a situation; can you please suggest the best way to handle it?
If I receive some JSON from one of my transactional graph's installed queries via pyTigerGraph, e.g. tg.runInstalledQuery("statusustomerlastweek", {"given_date": "2022-27-03", "look_back": "12"})
Response:
[{"customerA": "Alpha", "segment": "Opportunity", "chrn": "Low", …},
{"customerA": "Alpha", "segment": "Precious", "chrn": "High", …}
…
]
and I want to dump it into my new graph (Analysis) on the same instance, how can I do it? Do I need to run a LOAD job? But a loading job takes its data from CSVs, and then we need to map the columns by position ($0, $1, …) accordingly.
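As a side note, a loading job may not be necessary for JSON at all: TigerGraph's REST++ endpoint POST /graph/{GraphName} accepts upsert payloads directly, and pyTigerGraph wraps it with helpers like upsertVertices. A minimal sketch of reshaping the response rows above into the (id, attributes) pairs those helpers expect — assuming "customerA" holds the vertex ID and the target vertex type is "customer":

```python
# Sketch only: reshape installed-query rows into (vertex_id, attributes) pairs.
# "customerA" as the ID field and the "customer" vertex type are assumptions.
rows = [
    {"customerA": "Alpha", "segment": "Opportunity", "chrn": "Low"},
    {"customerA": "Alpha", "segment": "Precious", "chrn": "High"},
]

vertices = [(r["customerA"], {"segment": r["segment"], "chrn": r["chrn"]}) for r in rows]

# With a live connection you could then upsert without any loading job:
# conn.graphname = "Analysis"
# conn.upsertVertices("customer", vertices)
print(vertices[0])
```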
@markmegerian
@Jon_Herke @Szilard_Barany @Dan_Barkus @Elliot_Martin
I found this one
{
"vertices": {
"<vertex_type>": {
"<vertex_id>": {
"<attribute>": {
"value": <value>,
"op": <opcode>
}
}
}
},
"edges": {
"<source_vertex_type>": {
"<source_vertex_id>": {
"<edge_type>": {
"<target_vertex_type>": {
"<target_vertex_id>": {
"<attribute>": {
"value": <value>,
"op": <opcode>
}
}
}
}
}
}
}
}
for dumping the data into the new graph (Analysis) on the same instance. My current GSQL query on the transactional graph is:
SetAccum<EDGE> @@edgeSet;
seed = {ANY};
results = SELECT t
FROM seed:s -(:e)->:t
WHERE s.cv_segment_id == "Precious"
ACCUM @@edgeSet += e;
PRINT @@edgeSet, results;
which is giving the below output
[
  {
    "@@edgeSet": [
      {
        "attributes": {
          "date_id": "1970-01-01 00:00:00",
          "week_id": 0
        },
        "directed": false,
        "e_type": "customer_cvsegment",
        "from_id": "Precious",
        "from_type": "cv_segment",
        "to_id": "101",
        "to_type": "customer"
      },
      {
        "attributes": {
          "date_id": "1970-01-01 00:00:00",
          "week_id": 0
        },
        "directed": false,
        "e_type": "customer_cvsegment",
        "from_id": "Precious",
        "from_type": "cv_segment",
        "to_id": "100",
        "to_type": "customer"
      }
    ],
    "results": [
      {
        "attributes": {
          "age": 0,
          "customer_id": "101",
          "customer_name": "Bob",
          "dob": "1970-01-01 00:00:00",
          "email": "",
          "gender": "",
          "opt_status": false,
          "primary_status": true
        },
        "v_id": "101",
        "v_type": "customer"
      },
      {
        "attributes": {
          "age": 0,
          "customer_id": "100",
          "customer_name": "Alice",
          "dob": "1970-01-01 00:00:00",
          "email": "",
          "gender": "",
          "opt_status": false,
          "primary_status": true
        },
        "v_id": "100",
        "v_type": "customer"
      }
    ]
  }
]
How can I convert this output into the syntax above, so that all the data I want can be processed into my new graph as well?
Please help me out.
If it were up to me, the easiest thing would be to write a little bit of Python code to act as the intermediate logic. I can see that you want the output of the first query to precisely match the JSON needed to store the new results, but I haven't done that myself. You may have to take the query results and modify the JSON so that the attributes and syntax match.
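For what it's worth, that intermediate logic could look roughly like this — a sketch only, assuming the query output has exactly the shape printed above (top-level "@@edgeSet" and "results" keys) and that the target graph defines the same vertex and edge types:

```python
import json

def to_upsert_payload(query_output):
    """Convert the PRINT output of the GSQL query into the
    vertices/edges upsert format accepted by POST /graph/{GraphName}.
    The optional "op" field is omitted, so values simply overwrite."""
    payload = {"vertices": {}, "edges": {}}
    for block in query_output:
        # Vertices: v_type -> v_id -> {attribute: {"value": ...}}
        for v in block.get("results", []):
            attrs = {k: {"value": val} for k, val in v["attributes"].items()}
            payload["vertices"].setdefault(v["v_type"], {})[v["v_id"]] = attrs
        # Edges: from_type -> from_id -> e_type -> to_type -> to_id -> attrs
        for e in block.get("@@edgeSet", []):
            attrs = {k: {"value": val} for k, val in e["attributes"].items()}
            target = (payload["edges"]
                      .setdefault(e["from_type"], {})
                      .setdefault(e["from_id"], {})
                      .setdefault(e["e_type"], {})
                      .setdefault(e["to_type"], {}))
            target[e["to_id"]] = attrs
    return payload

# One vertex and one edge taken from the output above:
sample = [{
    "@@edgeSet": [{
        "attributes": {"date_id": "1970-01-01 00:00:00", "week_id": 0},
        "directed": False, "e_type": "customer_cvsegment",
        "from_id": "Precious", "from_type": "cv_segment",
        "to_id": "101", "to_type": "customer",
    }],
    "results": [{
        "attributes": {"customer_id": "101", "customer_name": "Bob"},
        "v_id": "101", "v_type": "customer",
    }],
}]
print(json.dumps(to_upsert_payload(sample), indent=2))
```

The resulting payload can then be sent to the analysis graph, e.g. via a plain HTTP POST to /graph/Analysis (graph name assumed) or pyTigerGraph's generic REST helpers.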
There may be other ways to consider. For instance, all of this is only needed because you have moved everything into a new graph. If you had kept it in the same graph, even as a new set of disconnected vertices, you could have done all the logic (get and put) within a single query, and that would be much simpler.
Furthermore, if you only want to work with the analysis data, you could define a subgraph which includes only those vertices and edges, which would achieve much of the same goal as putting it all in a separate graph.
@markmegerian
My transactional schema is my permanent schema.
Actually, the problem is that I have a transactional graph and a new analysis graph, and I am using both together in my backend. The transactional graph stores the transactional information, and you already advised me not to create a separate schema just for analysis purposes:
https://dev.tigergraph.com/forum/t/graph-studio-quest/2136/4
You suggested I could create separate edges for that instead. But when I created separate edges for the analysis, I found it a little complicated for a person from a non-technical background, so I created a separate graph. From now on I run a GSQL query on the transactional graph for my model reference (a "model" here is just the name of an analysis), and I want to store the customers affected by that analysis result into the analysis graph, linked to the corresponding model; the model vertex now lives in my analysis graph. If I write Python code for this, it will loop over the complete JSON in O(n) to make it storable in the analysis graph,
and it is clearly mentioned that there is a limit on the size of JSON POST data, which will create memory issues.
So is creating loading jobs behind the scenes, one for every analysis result transferred to the analysis graph, a good approach?
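Regarding the POST size limit: the usual workaround is to split the upsert into fixed-size batches rather than generating loading jobs. A minimal sketch (the batch size of 500 is an arbitrary assumption; tune it to your server's payload limit):

```python
def batched(items, batch_size):
    """Yield successive slices so each upsert request stays under the
    server's JSON POST payload limit."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# With a live connection, each chunk would become one small upsert request:
# for chunk in batched(all_vertices, 500):
#     conn.upsertVertices("customer", chunk)
print(list(batched(list(range(5)), 2)))
```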
@markmegerian
Does a subgraph solve my problem? I don't know much about subgraphs; from your last reply I gathered that the concept might be suitable in my case.
Can you guide me more on this?
Hopefully @Jon_Herke @Dan_Barkus and other experts on the TigerGraph team can help out with some examples. I find it to be a useful way to have a “virtual” view of a larger graph schema, but with only a subset of the vertices and edges visible. This can be appropriate when you only want to give access to some (but not all) of the graph schema.
@pkr2 I agree with @markmegerian. The first thing I would do is get a better understanding of how MultiGraph works in TigerGraph: Multigraph Overview :: Docs. Let me know if that helps.
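As an illustration of MultiGraph, a setup might look roughly like this — schematic GSQL only, with the attribute lists reconstructed as assumptions from the query output earlier in the thread:

```gsql
// Types defined at the global level are shared across graphs.
CREATE VERTEX customer (PRIMARY_ID customer_id STRING, customer_name STRING,
                        age INT, dob DATETIME, email STRING, gender STRING,
                        opt_status BOOL, primary_status BOOL)
CREATE VERTEX cv_segment (PRIMARY_ID cv_segment_id STRING)
CREATE UNDIRECTED EDGE customer_cvsegment (FROM cv_segment, TO customer,
                                           date_id DATETIME, week_id INT)

// Each graph exposes only the types it lists, but the underlying
// global vertex data is shared between the two graphs.
CREATE GRAPH Transactional (customer, cv_segment, customer_cvsegment)
CREATE GRAPH Analysis (customer, cv_segment, customer_cvsegment)
```

With this layout, an analysis query can write to the shared vertices without any cross-graph copying.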
Yes, this one works! We can share the data of global vertices and edges between multiple graphs.