Yea, I saw the drive group documentation. It might just be baby-related lack of sleep to blame but, to me, it wasn't clear whether I could achieve what I wanted. I can set the criteria for which drives to use for data, but can I 'pair' each data drive up with matching VG/LVs? I'm trying to get to:

Data: /dev/sdc
WAL:  vgname/wal-sdc
DB:   vgname/db-sdc

It might be that this is an incorrect / stupid way to set things up, but ISTR the docs stating that pointing at an entire LV was best so you don't have to worry about specifying sizes, etc.

W

> On 11 Jun 2020, at 13:10, Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> assuming you're running Octopus the deployment guide [1] explains it quite well.
> To specify rocksDB/WAL devices you have to make use of "drive_groups" [2].
>
> Regards,
> Eugen
>
>
> [1] https://docs.ceph.com/docs/octopus/cephadm/install/#deploy-osds
> [2] https://docs.ceph.com/docs/octopus/cephadm/drivegroups/#drivegroups
>
>
> Zitat von Will Payne <will@xxxxxxxxxxxxxxxx>:
>
>> Hi,
>>
>> Total newbie question - I'm new to Ceph and am setting up a small test cluster. I've set up five nodes and can see the available drives, but I'm unsure exactly how to add an OSD and specify the locations for the WAL+DB.
>>
>> Maybe my Google-fu is weak, but the only guides I can find refer to ceph-deploy which, as far as I can see, is deprecated. Guides that talk about using cephadm / ceph only mention adding a drive, not specifying the WAL+DB locations.
>>
>> I want to add HDDs as OSDs and put the WAL and DB onto separate LVs on an SSD. How?
>>
>> Will
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
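[Editor's note: for readers following the thread, a minimal sketch of the two approaches discussed above, assuming an Octopus-era cephadm cluster. The service_id, host_pattern, file name, and the vgname/wal-sdc and db-sdc LV names are taken from or invented for the example, not prescribed by the thread; check the linked drive-group docs for the exact fields supported by your release.]

```sh
# Sketch 1: a drive-group style OSD service spec (the approach Eugen points to).
# Rather than pairing devices one-by-one, the spec filters devices by attribute
# and lets cephadm/ceph-volume carve the DB/WAL LVs on the matching SSDs itself.
cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: hdd-data_ssd-db-wal    # hypothetical name
placement:
  host_pattern: '*'                # apply on all hosts; narrow as needed
data_devices:
  rotational: 1                    # HDDs become the data devices
db_devices:
  rotational: 0                    # SSDs host the RocksDB DB
wal_devices:
  rotational: 0                    # SSDs host the WAL
EOF
ceph orch apply osd -i osd_spec.yml

# Sketch 2: the explicit per-LV pairing Will describes, done with ceph-volume
# directly on the host (outside the drive-group workflow), using pre-created LVs.
ceph-volume lvm prepare --bluestore \
  --data /dev/sdc \
  --block.wal vgname/wal-sdc \
  --block.db vgname/db-sdc
```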