Hi Ilya,

On 13/12/2024 16:25, Ilya Dryomov wrote:
> [...]
>> We're currently checking how our stack could handle this more
>> gracefully. From my understanding of the rxbounce option, it seems
>> like always passing it when mapping a volume (i.e., even if the VM
>> disks belong to Linux VMs that aren't affected by this issue) might
>> not be a good idea performance-wise.
>
> Hi Friedrich,
>
> Yup, enabling rxbounce causes the read data to be double-buffered.
> However, I don't have any numbers on hand. The overhead may turn out
> to be negligible in your case.

I see. I also ran some very quick benchmarks and didn't see much
impact, but that's not quite enough to be confident.

> [...]
>> but still, I wanted to ask: I see that when rxbounce was originally
>> introduced [2], the possibility of automatically switching over to
>> "rxbounce mode" when needed was also discussed. Do you think this is
>> a direction that krbd might take sometime in the future?
>
> Very unlikely. Trying to enable rxbounce behind the scenes after
> observing some CRC errors bumps into questions like: what should the
> threshold be, in terms of both the number of errors and the number of
> OSD sessions that are affected? Should rxbounce be enabled globally
> or just for those OSD sessions? Should it be disabled after some time
> passes? Etc. Fundamentally, krbd can't distinguish between "legit"
> CRC errors and something that just needs to be worked around. There
> is a concern that enabling rxbounce automatically could mask bugs or
> behaviors similar to what the Windows kernel does with its dummy
> page, which we (developers) would like to know about ;)

That makes sense, thank you for elaborating!

Best wishes,
Friedrich
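
P.S. In case it's useful for anyone finding this thread later, here is
a minimal sketch of how rxbounce can be enabled selectively, i.e. only
for the images backing the affected (Windows) guests. The pool and
image names below are made up; as far as I can tell, the "-o rxbounce"
map option and the rbd_default_map_options setting are the relevant
knobs, but please double-check against your Ceph version:

    # Map a single image with the rxbounce option (hypothetical names):
    rbd device map -o rxbounce mypool/windows-vm-disk

    # Alternatively, make it the default for every map done by this
    # client by setting it in ceph.conf:
    #   [client]
    #   rbd_default_map_options = rxbounce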
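
And for completeness, the kind of userspace stopgap we were toying
with instead of kernel-side auto-switching. This is a rough sketch
only: it assumes the CRC failures surface in the kernel log as libceph
"bad crc" messages, and it deliberately leaves open exactly the
threshold/scope questions raised above:

    # Rough sketch: watch the kernel log for libceph CRC errors.
    dmesg --follow | grep --line-buffered -i 'libceph.*bad crc' |
    while read -r line; do
        echo "libceph CRC error seen: $line"
        # A real tool would count errors per OSD session and, past
        # some threshold, unmap/remap the affected images with
        # "-o rxbounce" (exactly the open questions from above).
    done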