On Thu, Dec 12, 2024 at 5:37 PM Friedrich Weber <f.weber@xxxxxxxxxxx> wrote:
>
> Hi Ilya,
>
> some of our Proxmox VE users also report they need to enable rxbounce to
> avoid their Windows VMs triggering these errors, see e.g. [1]. With
> rxbounce, everything seems to work smoothly, so thanks for adding this
> option. :)
>
> We're currently checking how our stack could handle this more
> gracefully. From my understanding of the rxbounce option, it seems like
> always passing it when mapping a volume (i.e., even if the VM disks
> belong to Linux VMs that aren't affected by this issue) might not be a
> good idea performance-wise.

Hi Friedrich,

Yup, enabling rxbounce causes the read data to be double-buffered.
However, I don't have any numbers on hand. The overhead may turn out to
be negligible in your case.

> Another option is to only pass `rxbounce` when mapping volumes that are
> known to be Windows VM disks. This seems like the most sensible option,

*nod*

> but still, I wanted to ask: I see that when rxbounce was originally
> introduced [2], the possibility of automatically switching over to
> "rxbounce mode" when needed was also discussed. Do you think this is a
> direction that krbd might take sometime in the future?

Very unlikely. Trying to enable rxbounce behind the scenes after
observing some CRC errors bumps into a number of questions: what should
the threshold be, in terms of both the number of errors and the number
of affected OSD sessions? Should rxbounce be enabled globally or just
for those OSD sessions? Should it be disabled again after some time
passes? And so on. Fundamentally, krbd can't distinguish between "legit"
CRC errors and something that just needs to be worked around. There is
also a concern that enabling rxbounce automatically could mask bugs or
behaviors similar to what the Windows kernel does with its dummy page,
which we (developers) would like to know about ;)

Thanks,

Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
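
For reference, a minimal sketch of the per-volume approach discussed in
the thread, i.e. passing rxbounce only when the mapped image is known to
back a Windows guest. This is an illustration, not Proxmox VE or Ceph
code: the pool/image names and the windows_guest flag are hypothetical,
and it assumes the rbd CLI is installed and the running kernel's rbd
module understands the rxbounce map option.

    #!/usr/bin/env python3
    # Illustrative sketch: map an RBD image via krbd, passing the rxbounce
    # map option only for volumes known to back Windows VM disks.
    import subprocess

    def map_rbd_image(pool: str, image: str, windows_guest: bool) -> str:
        """Map pool/image via krbd and return the /dev/rbdX device path."""
        cmd = ["rbd", "map", f"{pool}/{image}"]
        if windows_guest:
            # Windows guests can change the destination buffer while a read
            # is still in flight (the "dummy page" behaviour mentioned above),
            # which trips CRC checks; rxbounce double-buffers the read data
            # to work around that. Linux guests keep the default behaviour.
            cmd += ["--options", "rxbounce"]
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        # Hypothetical pool/image names, for illustration only.
        dev = map_rbd_image("vm-pool", "vm-101-disk-0", windows_guest=True)
        print(f"mapped at {dev}")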