On Tue, Sep 03, 2019 at 05:42:22PM +0200, Marco Gaiarin wrote:
> Mandi! Alwin Antreich
>   In chel di` si favelave...
>
> > > I'm not a ceph expert, but solution iii) seems decent to me, with a
> > > little overhead (a readlink and a stat for every OSD start).
> > However you like it. But note that in Ceph Nautilus the udev rules
> > aren't shipped anymore.
>
> Ok, I'll make a note of it.
>
> > > But still I don't understand why, if I have:
> > > and:
> > > (so, the journal partition group-owned by 'disk' and the 'ceph' user
> > > in group 'disk'), I still get permission errors.
> > > Does the ceph-osd process reset group membership at runtime?
> > In Luminous udev is handling all of that, see 95-ceph-osd.rules.
>
> No, sorry, evidently I'm not explaining myself correctly.
>
> I've added the 'ceph' user to group 'disk':
>
> > root@capitanmarvel:~# LANG=C id ceph
> > uid=64045(ceph) gid=64045(ceph) groups=64045(ceph),6(disk)
>
> and the journal devices are group-owned by 'disk' with read and write
> permission for the group (660):
>
> > brw-rw---- 1 root disk 8, 6 ago 28 14:38 /dev/sda6
>
> So, because user 'ceph' is in group 'disk', and group 'disk' has read
> and write permission on the device, I should be able to read and write
> to the device. But that is not the case.
>
> So it seems to me that the 'ceph-osd' process ''resets'' group
> membership and ignores the 'disk' group.
>
> Note that if I 'su' to ceph, I can read the disks:

The ceph-osd process runs with user and group 'ceph', and this is why it
wants the group on the disk to be 'ceph' as well. You can see it with:

    ps xa -o user,group,command | grep ceph-osd

-- 8< --
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@1.service
           └─7643 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
-- 8< --

You would need to change the service template and alter the setgroup to
'disk', or keep the device group-owned by 'ceph', to make it work.

I still recommend going with an extra SSD and sparing yourself the hassle.
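The behaviour Marco observes can be sketched with a small simulation of the classic Unix permission check. This is not Ceph code, just an illustration; the uid/gid values (64045 for 'ceph', 6 for 'disk') are taken from the `id` and `ls` output quoted above, and the empty supplementary-group list models a process started with an explicit `--setuser`/`--setgroup` that never calls initgroups():

```python
import stat

def can_rw(mode, file_uid, file_gid, uid, gid, groups):
    """Simplified Unix check: may a process with (uid, gid, groups)
    read and write a file with the given mode and ownership?"""
    if uid == file_uid:
        return mode & stat.S_IRUSR != 0 and mode & stat.S_IWUSR != 0
    if file_gid == gid or file_gid in groups:
        return mode & stat.S_IRGRP != 0 and mode & stat.S_IWGRP != 0
    return mode & stat.S_IROTH != 0 and mode & stat.S_IWOTH != 0

# /dev/sda6 is brw-rw---- root:disk, i.e. mode 0o660, owner uid 0, gid 6.
# ceph-osd after '--setuser ceph --setgroup ceph' runs as uid/gid 64045
# with no supplementary groups, so membership in 'disk' never applies:
print(can_rw(0o660, 0, 6, 64045, 64045, []))   # False: permission denied
# A login shell ('su' to ceph) picks up the groups from /etc/group:
print(can_rw(0o660, 0, 6, 64045, 64045, [6]))  # True: group 'disk' matches
```

That is why `su` to ceph can read the disks while the daemon cannot: the login path initializes the supplementary group list from /etc/group, while the daemon only ends up with the single group named by setgroup.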
--
Cheers,
Alwin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx