Re: udev rule or script to auto add bcache devices?

Hi Stefan,

Quoting Stefan Priebe - Profihost AG:
Hello,

bcache didn't support partitions in the past, so a lot of our OSDs
have their data directly on:
/dev/bcache[0-9]

But that means I can't give them the needed partition type of
4fbd7e29-9d25-41b8-afd0-062c0ceff05d, which means that activation
with udev and ceph-disk does not work.

Has anybody already fixed this or hacked something together?
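
For what it's worth, one way around the missing partition type GUID might be a custom udev rule that keys on the bcache kernel name and hands the whole device to ceph-disk. This is only an untested sketch: the rule file name is made up, and it assumes "ceph-disk activate" is happy being pointed at the bare /dev/bcacheN device rather than a partition:

  # /etc/udev/rules.d/99-bcache-osd.rules (name is just an example)
  # When a bcache device node shows up, try to activate it as an OSD;
  # %k expands to the kernel name, e.g. bcache0.
  ACTION=="add", SUBSYSTEM=="block", KERNEL=="bcache[0-9]*", RUN+="/usr/sbin/ceph-disk activate /dev/%k"

Since long-running work in RUN+= is frowned upon, having the rule pull in a small systemd unit that does the actual activation would probably be the cleaner variant.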

As for our setup: we had this running for filestore OSDs for quite some time (on Luminous and before), but have recently moved to Bluestore, omitting bcache and instead putting block.db on partitions of the SSD devices (or rather on partitions of an MD-RAID1 made out of two Toshiba PX02SMF020).
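
For reference, creating such an OSD with ceph-volume boils down to something along these lines (device names are placeholders; the data device is the spinner, block.db sits on a partition of the SSD MD-RAID1):

  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/md0p3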

We simply mounted the OSD file systems via label at boot time through fstab entries and had the OSDs started via systemd. In case this matters: for historic reasons, the actual mount point wasn't under /var/lib/ceph/osd, but a different directory, with corresponding symlinks set up under /var/lib/ceph/osd/.
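
To make that concrete, per OSD it was roughly the following (label, mount point, OSD id and the XFS file system are made up for illustration, and the symlink assumes the default cluster name "ceph"):

  # /etc/fstab
  LABEL=osd-12  /srv/ceph/osd-12  xfs  defaults,noatime  0 0

  # make the default path resolve, then start the OSD via systemd
  ln -s /srv/ceph/osd-12 /var/lib/ceph/osd/ceph-12
  systemctl start ceph-osd@12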

How many OSDs do you run per bcache SSD caching device? Even at just 4:1 we ran into I/O bottlenecks (using the above MD-RAID1 as the caching device), hence the move to Bluestore. The same hardware now provides a much more responsive storage subsystem, though that may of course be very specific to our workload and setup.

Regards
Jens

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


