Hi,
I'm using Django's ORM to access Postgres 12. My "MyModel" table has a JSONB column called 'snapshot'. In Python terms, each row's 'snapshot' looks like this:
======================
snapshot = {
    'pay_definition': {
        '1234': {..., 'name': 'foo', ...},
        '99': {..., 'name': 'bar', ...},
    }
}
======================
I'd like to find all unique values of 'name' in all rows of MyModel. I have this working using native JSON functions from the ORM like this:
=====================
from django.db.models import F, Func

class PayDef(Func):
    function = 'to_jsonb'
    template = "%(function)s(row_to_json(jsonb_each(%(expressions)s->'pay_definition'))->'value'->'name')"

MyModel.objects.annotate(paydef=PayDef(F('snapshot'))).order_by().distinct('paydef').values_list('paydef', flat=True)
=====================
So, skipping the ordering/distinct/ORM parts, the core looks like this:
to_jsonb(row_to_json(jsonb_each('snapshot'->'pay_definition'))->'value'->'name')
My question is whether this is the best way to solve this problem. Reading from the inside out, I think my current logic works like this (there is a standalone SQL sketch after the list):
- Pass in the 'snapshot'.
- Since 'snapshot' is a JSON field, "->'pay_definition'" traverses this key.
- To skip the unknown numeric keys, "jsonb_each()" turns each key/value pair into an inner row like ['1234', {...}].
- To get to the value column of the inner row, "row_to_json()->'value'".
- To get the name field's value, "->'name'".
- A final call to "to_jsonb" in the PayDef class. This bit is clearly Django-specific.
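For reference, here is what the annotation boils down to as a standalone query one could try in psql. This is only a sketch: "mymodel" is a placeholder for whatever table name Django actually generates for MyModel, and "snapshot" is the JSONB column as above.
=====================
-- Sketch only: "mymodel" stands in for the real Django-generated table name.
SELECT DISTINCT
       to_jsonb(row_to_json(jsonb_each(snapshot->'pay_definition'))->'value'->'name') AS paydef
FROM mymodel;
=====================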
To provide context on what "better" might be:
- Snapshot JSONs might easily be 20MB in size.
- Each 'pay_definition' is probably only about 1kB in size, and there might be 50 of them in a snapshot.
- There might be 1000 MyModel instances in a given query.
- I'm using PostgreSQL 12.
Thanks, Shaheed
P.S. I posted a Django-centric version of this to the relevant mailing list but got no replies; nevertheless, apologies for the cross post.