Hello all,
I would like to continue a thread that dates back to last May (sorry if this is not good practice). Thanks, David, for your useful tips in that thread.

On my side, I created my OSDs with ceph-deploy (instead of ceph-volume) [1], but the context is exactly the same as the one described in that thread (HDD drives for the OSDs and WAL/DB partitions on an NVMe device).

The problem I run into is that the script that fixes the block.db symlinks to point at the partition UUIDs works very well on the live system, but it does not survive a reboot of the OSD node. If I restart the server, the block.db symbolic links come back pointing at the /dev/nvme... device names. The problem gets worse when a node has two NVMe devices, because in that case the paths to the block.db partitions can end up swapped, and obviously the OSDs do not start.
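To be concrete, the per-OSD fix I apply looks roughly like this (a simplified sketch: OSD id 2 and the paths are only examples, and it assumes the usual /var/lib/ceph/osd/ceph-$id layout created by ceph-deploy):

    systemctl stop ceph-osd@2

    OSD_DIR=/var/lib/ceph/osd/ceph-2

    # resolve the current /dev/nvme... target of block.db and look up its partition UUID
    DB_DEV=$(readlink -f "$OSD_DIR/block.db")
    DB_UUID=$(blkid -s PARTUUID -o value "$DB_DEV")

    # repoint the symlink to the stable by-partuuid path
    ln -sf "/dev/disk/by-partuuid/$DB_UUID" "$OSD_DIR/block.db"

    systemctl start ceph-osd@2

This works fine until the node is rebooted, at which point the symlinks are back to the /dev/nvme... names.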
As I am not yet in production, I could probably recreate all my OSDs and force the path to the block.db partitions by UUID, but I would like to know whether there is a way to "freeze" the configuration of the block.db paths by their UUID a posteriori. Or maybe (though this is more of a system administration question) there is a way on Linux to force an NVMe disk to always come up under a fixed device name? (Note that my NVMe partitions do not carry a filesystem.)
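By a fixed device name I mean something along the lines of a udev rule that gives each NVMe device a stable alias based on its serial number, for example (the rule below is only a sketch and the serial number is made up):

    # /etc/udev/rules.d/99-nvme-aliases.rules
    KERNEL=="nvme?n?", SUBSYSTEM=="block", ATTRS{serial}=="S1234567890", SYMLINK+="nvme-db0"

but I am not sure whether udev can actually pin the kernel /dev/nvmeXnY names themselves or only add aliases like this one.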
Thanks for your help,
Hervé

[1] From the admin node:
ceph-deploy osd create --debug --bluestore --data $hdd --block-db $db $osdnode

On 11/05/2018 at 18:46, David Turner wrote: