On Tue, Jan 5, 2021 at 9:01 AM Wissem MIMOUNA <wissem.mimouna@xxxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> Thank you for your feedback. It seems the error is related to the fstrim run once a week (the default).

Do you have object-map enabled? If not, the FS will gladly send huge discard extents which, if you have a large volume, could result in hundreds of thousands of ops to the cluster. That's a great way to hang IO.

> Do you have more information about the NBD/XFS memory pressure issues?

See [1].

> Thanks
>
> -----Original Message-----
> From: Jason Dillaman <jdillama@xxxxxxxxxx>
> Sent: Tuesday, January 5, 2021 2:42 PM
> To: Wissem MIMOUNA <wissem.mimouna@xxxxxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxx
> Subject: Re: Timeout ceph rbd-nbd mounted image
>
> You can try using the "--timeout X" option for "rbd-nbd" to increase the timeout. Some kernels treat the default as infinity, but there were some >=4.9 kernels that switched behavior and started defaulting to 30 seconds. There are also known issues with attempting to place XFS file systems on top of NBD due to memory pressure issues.
>
> On Tue, Jan 5, 2021 at 4:36 AM Wissem MIMOUNA <wissem.mimouna@xxxxxxxxxxxxxxxx> wrote:
> >
> > Hello,
> >
> > Looking for information about a timeout which occurs once a week for a Ceph RBD image mounted on a machine using rbd-nbd (Linux Ubuntu machine).
> > The error found in 'dmesg' is below:
> > [798016.401469] block nbd0: Connection timed out
> > [798016.401506] block nbd0: shutting down sockets
> >
> > Many Thanks
>
> --
> Jason

[1] https://tracker.ceph.com/issues/40822

--
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
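
For reference, a minimal sketch of the checks discussed in the thread, assuming a hypothetical image "rbd/myimage" mapped on /dev/nbd0 (names not from the thread; substitute your own pool, image, and device, and verify option names against your Ceph release):

    # Check whether the object-map feature is enabled on the image
    # (it requires exclusive-lock, which is on by default for newer images)
    rbd info rbd/myimage | grep features

    # Enable object-map (and fast-diff) if missing, then rebuild the map
    # so existing data is accounted for
    rbd feature enable rbd/myimage object-map fast-diff
    rbd object-map rebuild rbd/myimage

    # Remap with a longer NBD timeout, per the "--timeout X" suggestion above
    rbd-nbd unmap /dev/nbd0
    rbd-nbd map --timeout 120 rbd/myimage

    # Confirm when the weekly fstrim is scheduled (weekly is the Ubuntu default)
    systemctl list-timers fstrim.timer

With object-map enabled, the discards issued by the weekly fstrim can be resolved against the image's allocation map instead of being blindly fanned out across the whole volume, which is what makes the difference Jason describes for large images.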