I have a long-running query that takes more than an hour. While the GPE log shows the query completing successfully, the Java client times out after 60 minutes, giving the following error:
Error forwarding request to GSQL leader!
IOException: Premature end of chunk coded message body: closing chunk expected
Is there a way to extend the amount of time that the client will wait? I’ve already tried to change these settings, but it doesn’t help:
gadmin config set RESTPP.Factory.DefaultQueryTimeoutSec 36000
gadmin config set GUI.HTTPRequest.TimeoutSec 36000
For long-running queries, I would recommend using the -async flag. That way you don’t have to worry about any client-side GSQL timeouts at all. You can find details here in the docs:
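A rough sketch of what that can look like, assuming GSQL’s `RUN QUERY -async` syntax and the RESTPP `/query_status` endpoint from the 3.x docs (the graph name, query name, host, and request id here are all placeholders, not from this thread):

```shell
# Detach the query: the client gets a request id back immediately instead of
# holding the HTTP connection open for the full run. (Placeholder names.)
java -jar gsql_client.jar -g MyGraph 'run query -async myQuery()'

# Later, check on it via RESTPP (endpoint assumed from the 3.x docs):
curl -s "http://localhost:9000/query_status?graph_name=MyGraph&requestid=<request_id>"
```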
There are a few different timeouts that might affect your GSQL client. I actually haven’t previously encountered the GSQL_CLIENT_IDLE_TIMEOUT_SEC env var that Jon shared. For modifying the query timeout itself, we also support setting a GSQL-TIMEOUT header and setting the query timeout in the gsql shell. Both of these are described here:
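For the header route, the value is in milliseconds (the same unit as set query_timeout). A hedged one-liner against a live cluster — host, port, graph, and query names are placeholders:

```shell
# Per-request override: GSQL-TIMEOUT is in milliseconds (36000000 ms = 10 h).
curl -s -H "GSQL-TIMEOUT: 36000000" \
  "http://localhost:9000/query/MyGraph/myQuery"
```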
The Nginx configuration template also contains a keepalive timeout that will, I believe, affect the GSQL client for a long running query. You can find details on modifying the Nginx configuration here:
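If it is the Nginx hop cutting the connection, the relevant directives would look roughly like this in the template. This is a sketch only — the location path and which directives TigerGraph’s template actually exposes are assumptions; the timeout values just mirror the 36000-second gadmin settings above:

```nginx
# Proxy timeouts that can sever a long-lived request (36000 s = 10 h).
location /gsqlserver/ {
    proxy_read_timeout 36000s;
    proxy_send_timeout 36000s;
    keepalive_timeout  36000s;
}
```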
GSQL_CLIENT_IDLE_TIMEOUT_SEC had no effect on the Java client.
I also tried the “set query_timeout” technique, not by typing it into an interactive gsql session but by adding it to my command line (edited down for conciseness):
java -jar gsql_client.jar 'set query_timeout=36000000 run query myQuery()'
I can see that setting it to a small value, such as 2 seconds, takes effect by timing out early, but with a value above 60 minutes, I’m still getting the original error at 60 minutes.
I would agree that -async would be a more logical approach to a long-running query, as HTTP wasn’t really meant to suspend for such lengths, but that would involve a fair amount of extra work to monitor the backgrounded query from my shell script. I’m hoping to avoid having to do that.
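For what it’s worth, the monitoring loop need not be much code. A hedged sketch, assuming the `GSQL-ASYNC` request header and the `/query_status` / `/query_result` RESTPP endpoints from the 3.x docs — the host, graph name, query name, and JSON field names are all assumptions, not from this thread:

```shell
HOST="http://localhost:9000"   # placeholder cluster address
GRAPH="MyGraph"                # placeholder graph name

# Submit without holding the connection open; the JSON response should
# carry a request id for later polling.
submit_async() {
  curl -s -H "GSQL-ASYNC: true" "$HOST/query/$GRAPH/$1"
}

# Pull the request id out of the JSON response (field name assumed).
extract_request_id() {
  sed -n 's/.*"request_id" *: *"\([^"]*\)".*/\1/p'
}

# Poll once a minute until the status is no longer "running",
# then fetch the results.
wait_and_fetch() {
  reqid="$1"
  while curl -s "$HOST/query_status?graph_name=$GRAPH&requestid=$reqid" \
        | grep -q '"status" *: *"running"'; do
    sleep 60
  done
  curl -s "$HOST/query_result?requestid=$reqid"
}

# Usage against a live cluster (placeholders throughout):
#   reqid=$(submit_async 'myQuery()' | extract_request_id)
#   wait_and_fetch "$reqid"
```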