How to load 900 billion vertices in Spark for joining with a parquet source

Hi dear support,

we need to join two sources in Spark (a parquet file and TigerGraph vertex data) to produce an enriched parquet file. To do this, we need to create a DataFrame for the TigerGraph data and somehow fetch all vertices into it.
As far as I know, TigerGraph has a 10K limit on the number of vertices per fetch, so I assume we need some kind of pagination mechanism, or are there other options?
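
To illustrate the kind of pagination I have in mind, here is a minimal sketch in plain Scala (no Spark or TigerGraph involved; `fetchPage`, `pageSize`, and `backend` are hypothetical stand-ins for a single capped read, the per-request limit, and the vertex store):

```scala
// Minimal pagination sketch: keep requesting fixed-size pages from a capped
// source until an empty page signals the end, then concatenate the results.
object PaginationSketch {
  val pageSize = 3                                // stand-in for the real per-request cap (e.g. 10K)
  val backend: Vector[Int] = (1 to 10).toVector   // stand-in for the vertex data

  // Hypothetical single-page fetch: returns at most `pageSize` items starting at `offset`.
  def fetchPage(offset: Int): Vector[Int] =
    backend.slice(offset, offset + pageSize)

  // Advance the offset by `pageSize` each round; stop on the first empty page.
  def fetchAll(): Vector[Int] =
    Iterator.from(0, pageSize)
      .map(fetchPage)
      .takeWhile(_.nonEmpty)
      .flatten
      .toVector

  def main(args: Array[String]): Unit =
    println(fetchAll())
}
```

If the TigerGraph JDBC driver happens to honour Spark's standard partitioned-read options (`partitionColumn`, `lowerBound`, `upperBound`, `numPartitions`), a parallel partitioned read might be a better fit than manual paging, but I have not verified that it does.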

Here is the code I have implemented so far:

      test("Test TG DF read") {
        val jdbcDF1 = spark.read.format("jdbc").options(
          Map(
            "driver" -> "com.tigergraph.jdbc.Driver",
            "url" -> "jdbc:tg:http://127.0.0.1:14240",
            "username" -> "tigergraph",
            "password" -> "tigergraph",
            "graph" -> "dict",           // graph name
            "dbtable" -> "vertex users", // read the "users" vertex type
            "limit" -> "1",              // cap on the number of vertices returned
            "debug" -> "1")).load()
        jdbcDF1.show()
      }

Many thanks,
Evgeny