Hi Snir,
Regarding compression and another topic raised recently, endianness, I wonder whether encoding with LEB128 has been discussed?
LEB128 encodes any value below 128 as one byte, any value below 16384 as two bytes, and so on (e.g. 300 encodes as the two bytes 0xAC 0x02). It is size- and endianness-independent, which means we could extend a field from uint16 to uint32 without breaking protocol compatibility. It is much less computationally expensive than full compression. I think it would apply very well to a lot of our metadata, but it would be less effective than regular compression algorithms on the image data proper. So it addresses another part of the problem.
I have no real feeling yet for how important that other part is in the streams, but my gut tells me "probably not much" (i.e. I expect much more image data than metadata). Still, it might be worth trying. But it's probably not just a capability: if we want to encode most fields that way, it means a protocol bump. Not sure it's worth it.
The code to encode and decode is really short. Here is an example for writing: https://github.com/c3d/XL-programming-language/blob/master/xlr/serializer.cpp#L219. Here is an example for reading: https://github.com/c3d/XL-programming-language/blob/master/xlr/serializer.cpp#L406.
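For reference, the whole idea fits in a few lines. Here is a minimal unsigned-LEB128 sketch in C (not the exact code behind the links above): each byte carries 7 bits of the value, with the high bit set while more bytes follow.

    #include <stdint.h>
    #include <stddef.h>

    /* write value as unsigned LEB128, return the number of bytes written */
    static size_t leb128_write(uint8_t *out, uint64_t value)
    {
        size_t n = 0;
        do {
            uint8_t byte = value & 0x7F;
            value >>= 7;
            if (value)
                byte |= 0x80;   /* continuation bit: more bytes follow */
            out[n++] = byte;
        } while (value);
        return n;
    }

    /* read an unsigned LEB128 value, return the number of bytes consumed */
    static size_t leb128_read(const uint8_t *in, uint64_t *value)
    {
        uint64_t result = 0;
        unsigned shift = 0;
        size_t n = 0;
        uint8_t byte;
        do {
            byte = in[n++];
            result |= (uint64_t)(byte & 0x7F) << shift;
            shift += 7;
        } while (byte & 0x80);
        *value = result;
        return n;
    }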
Christophe
Maybe helpful, however Snir is working more on payload compression than on structured data, so the gain would not be great (he uses few fields). Still, some statistics could be gathered. Our "mini" header is a 16-bit type and a 32-bit length. For instance you could add some statistics recording:
- number of messages
- total message payload
- total LEB128-encoded header size (the non-encoded size is number of messages * 6)
If you also want to try compressing (or gathering statistics on) fields, you could add some code to the code generated by the Python scripts in spice-common.
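A rough sketch of what such instrumentation could look like; the names are invented for illustration and this is not actual SPICE code:

    #include <stdint.h>
    #include <stddef.h>

    /* bytes needed to encode value as unsigned LEB128 (0 still takes 1 byte) */
    static inline size_t leb128_size(uint64_t value)
    {
        size_t n = 0;
        do { n++; value >>= 7; } while (value);
        return n;
    }

    static uint64_t stat_num_messages;
    static uint64_t stat_payload_bytes;
    static uint64_t stat_leb128_header_bytes;

    /* call once per outgoing message with its mini-header fields */
    static void record_message(uint16_t type, uint32_t length)
    {
        stat_num_messages++;
        stat_payload_bytes += length;
        stat_leb128_header_bytes += leb128_size(type) + leb128_size(length);
        /* fixed-size baseline to compare against: stat_num_messages * 6 */
    }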
I instrumented the code a bit to understand how this LEB128 could affect fields (not the header). The algorithm removes about 50-60% of the field data, however this affects the total bandwidth utilization by only about 0.2-0.4%, mainly due to the display channel. Maybe for other channels we could see a better gain.
Thanks for testing this. I'm not too surprised by the results given the nature of the data.
Christophe
On 2 Mar 2017, at 17:53, Snir Sheriber <ssheribe@xxxxxxxxxx> wrote:
This series of patches allows compression of messages over selected spice channels.
A few notes:
*Currently LZ4 stream compression and regular compression are used for small and large messages respectively (see the sketch after these notes). Packets are sent in a common message type that was added, and it reuses the previous compressed-message structure.
*The stream compression & decompression dictionary is based on previous messages; therefore messages that are compressed/decompressed in stream mode are saved in a continuous pre-allocated buffer, and a pointer is used to track the current position.
*Currently all channels are allowed to be compressed; we might want to avoid compression for some channels (entirely, or just in specific cases). ***Please note that usbredir compression has not been reverted yet, so meanwhile I added 2 small patches to disable its compression caps.
*Multiple clients: basically it should work with more than one client, although adding compression/decompression to only some of the clients could theoretically make things worse (good for compression performance testing, though).
-If someone has encountered issues with the combination of compression and other spice features, please let me know.
-Suggestions and comments are welcome, especially for improving the message sending :)
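To illustrate the first two notes above, here is a rough sketch of the scheme against the public LZ4 API; the names, thresholds and sizes are hypothetical, not the actual patch code:

    #include <string.h>
    #include <lz4.h>

    #define RING_SIZE   (64 * 1024)  /* hypothetical; the decoder must mirror it */
    #define STREAM_MAX  4096         /* hypothetical cutoff between small and large */

    static char ring[RING_SIZE];     /* continuous pre-allocated history buffer */
    static size_t ring_pos;          /* current position in the ring */
    static LZ4_stream_t *lz4_stream;

    static int compress_msg(const char *msg, int len, char *dst, int dst_cap)
    {
        if (len > STREAM_MAX)
            /* large message: regular one-shot compression, no shared history */
            return LZ4_compress_default(msg, dst, len, dst_cap);

        if (!lz4_stream)
            lz4_stream = LZ4_createStream();

        /* small message: copy into the ring so later messages can use it as
           dictionary; wrap when the message no longer fits at the current pos */
        if (ring_pos + (size_t)len > RING_SIZE)
            ring_pos = 0;
        memcpy(ring + ring_pos, msg, len);

        int n = LZ4_compress_fast_continue(lz4_stream, ring + ring_pos,
                                           dst, len, dst_cap, 1);
        ring_pos += len;
        return n;                    /* compressed size, or 0 on failure */
    }

The decoder would keep an identical ring and use LZ4_decompress_safe_continue, and the common message type mentioned above is what tells it which of the two modes was used for a given packet.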
Snir.
Spice components: server, client, spice-common, spice-protocol
_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/spice-devel