In article <4866.1606544057@localhost> you write:
-=-=-=-=-=-
>It's an interesting idea.
>
>There are quite a number of other serialization formats out there including:
>bincode, msgpack, protobuf, and of course, IETF's CBOR RFC 7049. Also JSON.
>For "opendata" we've wound up with CSV, which I really dislike.
>{So funny that the UK's data problem with COVID rates was due to some
>data flow that went CSV->XLS->database, fixed when they went
>CSV->XLSX->database, when there was never a reason to use XLS* at all.
>So perhaps this argues your case}

The sqlite format is extremely complex. It bundles together a schema,
modified B-trees with overflow and free list pages to store keyed data,
and some multiple-access locking flags that the description says aren't
used any more. (Sqlite originally didn't support multiple access, but
that was a long time ago.)

It looks to me like the vast majority of sqlite applications use the C
library from sqlite.org, and it's not clear that there is another full
implementation. That raises the question of whether the real definition
is the spec or the code.

I think sqlite is fine for what it does, but it doesn't make sense as a
serialization format for the Internet. It is simultaneously too
complicated, with all of the SQL schema features that support updating
the database in place, and too semantically limited, as jck pointed out.

There are a lot of widely used data formats that the IETF and other SDOs
haven't blessed, like FITS, a popular format for multidimensional
numeric data whose definition is maintained by the astronomers who
originally designed it. I think sqlite is like that: fine for what it
does, perfectly reasonable to transmit as files over the Internet, but
not appropriate for IETF standardization.

R's,
John
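
PS: To give a sense of how much machinery is in play before you ever
reach any actual data, here is a rough sketch (my own, untested; the
field offsets come from the file format description on sqlite.org) that
decodes just the fixed 100-byte header at the front of every sqlite
file. Even the header carries freelist bookkeeping, rollback-vs-WAL
versioning, and a text encoding choice:

    import struct

    def read_sqlite_header(path):
        # Parse the fixed 100-byte header of an SQLite 3 database file.
        with open(path, "rb") as f:
            hdr = f.read(100)
        if hdr[:16] != b"SQLite format 3\x00":
            raise ValueError("not an SQLite 3 file")
        page_size = struct.unpack(">H", hdr[16:18])[0]
        if page_size == 1:   # the value 1 encodes a 65536-byte page
            page_size = 65536
        return {
            "page_size":      page_size,
            "write_version":  hdr[18],  # 1 = rollback journal, 2 = WAL
            "read_version":   hdr[19],
            "db_pages":       struct.unpack(">I", hdr[28:32])[0],
            "freelist_trunk": struct.unpack(">I", hdr[32:36])[0],
            "freelist_pages": struct.unpack(">I", hdr[36:40])[0],
            "text_encoding":  struct.unpack(">I", hdr[56:60])[0],  # 1=UTF-8, 2/3=UTF-16
        }

And after the header you still need the B-tree page decoder, varint
cell parsing, overflow chains, and the sqlite_master schema table
before you can read a single row.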