Hi Alex,

Maybe this one [1], which leads to OSD/MON asserts. Have a look at Laura's post here [2] for more information. Updating clients to Reef+ (not sure which kernel added the upmap read feature) or removing any pg_upmap_primaries entries may help in your situation.

Regards,
Frédéric.

[1] https://tracker.ceph.com/issues/61948
[2] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/GUQCIRZRMGQ3JOXS2PYZL7EPO3ZMYV6R/

----- On 27 Sep 24, at 10:30, Alex from North service.plant@xxxxx wrote:

> Hello everybody,
> I found an interesting thing: for some reason ALL the monitors crash when I try to rbd map on the client host.
>
> Here is my pool:
>
> root@ceph1:~# ceph osd pool ls
> iotest
>
> Here is my RBD image in this pool:
>
> root@ceph1:~# rbd ls -p iotest
> test1
>
> These are the client credentials used to connect to this pool:
>
> [client.iotest]
>         key = AQASVfZm5bPGLBAAXyPWqJvNMBsXsJQcFrSAhg==
>         caps mgr = "profile rbd pool=iotest"
>         caps mon = "profile rbd"
>         caps osd = "profile rbd pool=iotest"
>
> This is the rbdmap file on the client host:
>
> root@node-stat:/etc/ceph# cat rbdmap
> # RbdDevice        Parameters
> #poolname/imagename    id=client,keyring=/etc/ceph/ceph.client.keyring
> iotest/test1    id=iotest,keyring=/etc/ceph/ceph.client.iotest.keyring
>
> So, the moment I press Enter on the command rbd map iotest/test1 --id iotest, ALL the mons go down.
> I put the log on Pastebin as it is quite long:
> https://pastebin.com/iCr8pY1r
>
> All hints are appreciated. Thanks in advance.
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
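P.S. A minimal sketch of how to check for and clear pg_upmap_primaries entries, assuming a Reef+ cluster with the read balancer in use (the PG ID below is a placeholder; list your own entries first and verify each one before removing it):

```shell
# List any pg_upmap_primary mappings currently set in the OSD map
ceph osd dump | grep pg_upmap_primary

# Remove the mapping for one PG (replace 2.1 with a PG ID from the output above)
ceph osd rm-pg-upmap-primary 2.1
```

Removing the mappings only undoes the read-balancer optimization; data placement is untouched, so this is safe to try while diagnosing the crashes.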