Network transmission costs alone would make the second way a loser.
Large OFFSETs are pretty inefficient because the backend generates and
discards the rows internally ... but at least it never converts them to
external form or ships them to the client. Rows beyond the LIMIT are
not generated at all.
Some of this would depend on the query, too, I suspect, since an ORDER
BY would require the entire result set to be determined and sorted
before the limit/offset could take place. Regardless, it's better than
filtering on the Java/client side, since it avoids shipping the whole
result set from the database backend to the client.
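
For comparison, here's roughly what I mean by the LIMIT/OFFSET approach
from JDBC (an untested sketch; the "items" table and its columns are
just made up for illustration):

import java.sql.*;

public class LimitOffsetPage {
    // Fetch one page of results using LIMIT/OFFSET.
    static void fetchPage(Connection conn, int pageSize, int pageNumber)
            throws SQLException {
        // Table/column names are hypothetical.
        String sql = "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, pageSize);               // rows actually shipped to the client
            ps.setInt(2, pageNumber * pageSize);  // rows the backend generates and discards
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}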
But how would that compare to using a cursor/fetch query? It seems like
the JDBC library will automatically use a cursor if you specify some
params on the PreparedStatement, though the details escape me. I think
it's related to setFetchSize() and/or setMaxRows(). Of course, those
are not guaranteed to do anything special either, and you'd still need
to retrieve and discard the initial rows unless you can adjust your WHERE
condition to find the "next set".
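
Something like this is what I had in mind for the fetch-size route
(again untested, with the same invented "items" table; if I recall
correctly the Postgres driver only uses a cursor under the covers when
autocommit is off and the statement is forward-only, but that's worth
checking against the driver docs):

import java.sql.*;

public class CursorFetch {
    // Stream a large result set in batches instead of pulling it all at once.
    static void streamAll(Connection conn) throws SQLException {
        conn.setAutoCommit(false);  // cursor only lives inside a transaction
        String sql = "SELECT id, name FROM items ORDER BY id";  // hypothetical table
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setFetchSize(50);    // hint: fetch 50 rows at a time
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process each row as it arrives
                }
            }
        }
        conn.commit();
    }
}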
If you have an ORDER BY on a unique field, for example, you could use
that field to query the next set by remembering the last value from your
previous query (or selecting one more row than you need, so you have the
exact value that would come next) and specifying it in the WHERE clause.
Even this could be an issue if updates changed which rows fall into each set.
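
A rough sketch of that "remember the last value" idea, once more with
made-up table/column names:

import java.sql.*;

public class KeysetPage {
    // Fetch the next page using the last unique id from the previous page,
    // so the backend never generates and discards the skipped rows.
    static long fetchNextPage(Connection conn, long lastId, int pageSize)
            throws SQLException {
        String sql = "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, lastId);
            ps.setInt(2, pageSize);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    lastId = rs.getLong("id");   // remember where this page ended
                    // process rs.getString("name")
                }
            }
        }
        return lastId;  // pass back in as lastId for the following page
    }
}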
LIMIT/OFFSET are not part of the SQL standard either, should that matter
for DB portability. I believe MySQL supports them, but it seems like
Oracle didn't (at least at one time).
David