Hi Stefan,

Quoting Stefan Priebe - Profihost AG:
> Hello, bcache didn't support partitions in the past, so a lot of our OSDs have their data directly on /dev/bcache[0-9]. That means I can't give them the required partition type GUID 4fbd7e29-9d25-41b8-afd0-062c0ceff05d, and therefore activation via udev and ceph-disk does not work. Has anybody already fixed this or hacked something together?
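For reference, the ceph-disk udev rules key off exactly that GPT partition type GUID, which is why a whole-device bcache OSD can't be auto-activated. On a device that does have a GPT partition table, the GUID can be set with sgdisk; this is only a sketch for illustration (device and partition names are made up), and it needs root and real hardware:

```shell
# Tag partition 1 of /dev/sdX (hypothetical device) with the Ceph OSD
# data partition type GUID that the ceph-disk udev rules match on.
sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdX

# Re-read the partition table and re-trigger udev so ceph-disk
# activation runs for the newly tagged partition.
partprobe /dev/sdX
udevadm trigger --sysname-match=sdX1
```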
we had this running for FileStore OSDs for quite some time (on Luminous and before), but have recently moved to Bluestore, omitting bcache and instead putting block.db on partitions of the SSD devices (or rather, on partitions of an MD-RAID1 built from two Toshiba PX02SMF020 SSDs).
We simply mounted the OSD file systems by label at boot time via fstab entries, and had the OSDs started via systemd. In case this matters: for historic reasons, the actual mount point wasn't under /var/lib/ceph/osd but in a different directory, with corresponding symlinks set up under /var/lib/ceph/osd/.
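A minimal sketch of that setup; the label, mount point, OSD id, and device names here are all invented for illustration, not taken from the original poster's cluster:

```shell
# Label the OSD file system so it can be mounted independently of the
# device name (XFS shown here; tune2fs -L would do the same for ext4).
# /dev/bcache0 is an example device.
xfs_admin -L osd.12 /dev/bcache0

# /etc/fstab entry mounting by label into a non-standard directory:
#   LABEL=osd.12  /srv/ceph/osd.12  xfs  defaults,noatime  0 0

# Symlink the path Ceph expects to the actual mount point, so the
# ceph-osd@12 systemd unit finds its data directory.
ln -s /srv/ceph/osd.12 /var/lib/ceph/osd/ceph-12
```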
How many OSDs do you run per bcache SSD caching device? Even at just 4:1 we ran into I/O bottlenecks (using the above MD-RAID1 as the caching device), hence the move to Bluestore. The same hardware now provides a much more responsive storage subsystem, though that may of course be very specific to our workload and setup.
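For completeness, a Bluestore layout like the one described above (data on the spinning disk, block.db on an SSD/MD-RAID1 partition) can be created with ceph-volume; the device names below are hypothetical examples, and this obviously requires a running cluster and root:

```shell
# Create a Bluestore OSD with its data on the HDD and block.db on a
# partition of the MD-RAID1-backed SSD pair. Device names are examples.
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/md0p1
```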
Regards,
Jens

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com