Re: Shared WAL/DB device partition for multiple OSDs?

Hi Jaroslaw,

I tried that (using /dev/nvme0n1), but no luck:

    [ceph_deploy.osd][ERROR ] Failed to execute command:
    /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore
    --data /dev/sdb --block.wal /dev/nvme0n1

When I run "/usr/sbin/ceph-volume ..." on the storage node, it fails
with:

    --> blkid could not detect a PARTUUID for device: /dev/nvme0n1

There is an LVM PV on /dev/nvme0n1p1 (for the node OS); could
that be a problem?
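
As a quick sanity check, I suppose I could compare the whole disk
against one of its partitions (just a sketch, and I'm only guessing
that the missing PARTUUID on the bare device is what ceph-volume is
complaining about):

    # a GPT partition carries a PARTUUID, the whole disk does not,
    # so the first command should print nothing
    blkid -s PARTUUID -o value /dev/nvme0n1
    blkid -s PARTUUID -o value /dev/nvme0n1p1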

I'd be glad for any advice. If all else fails, I should be fine
if I create a 10 GB DB partition for each OSD manually, right?
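
Roughly what I have in mind for that fallback, in case it matters
(untested sketch; the 10G size and the partition number p2 are just
placeholders):

    # carve one small DB partition per OSD out of the NVMe
    sgdisk --new=0:0:+10G /dev/nvme0n1
    partprobe /dev/nvme0n1

    # then point each OSD at its own partition explicitly
    ceph-volume --cluster ceph lvm create --bluestore \
        --data /dev/sdb --block.db /dev/nvme0n1p2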


Cheers,

Oliver


On 11.05.2018 15:40, Jaroslaw Owsiewski wrote:
> Hi,
>
>
> ceph-deploy is smart enough:
>
> ceph-deploy --overwrite-conf osd prepare --bluestore --block-db /dev/nvme0n1 --block-wal /dev/nvme0n1 hostname:/dev/sd{b..m}
>
> Working example.
>
> $ lsblk
> NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda            8:0    0 278.9G  0 disk
> └─sda1         8:1    0 278.9G  0 part /
> sdb            8:16   0   9.1T  0 disk
> ├─sdb1         8:17   0   100M  0 part /var/lib/ceph/osd/ceph-204
> └─sdb2         8:18   0   9.1T  0 part
> sdc            8:32   0   9.1T  0 disk
> ├─sdc1         8:33   0   100M  0 part /var/lib/ceph/osd/ceph-205
> └─sdc2         8:34   0   9.1T  0 part
> sdd            8:48   0   9.1T  0 disk
> ├─sdd1         8:49   0   100M  0 part /var/lib/ceph/osd/ceph-206
> └─sdd2         8:50   0   9.1T  0 part
> sde            8:64   0   9.1T  0 disk
> ├─sde1         8:65   0   100M  0 part /var/lib/ceph/osd/ceph-207
> └─sde2         8:66   0   9.1T  0 part
> sdf            8:80   0   9.1T  0 disk
> ├─sdf1         8:81   0   100M  0 part /var/lib/ceph/osd/ceph-208
> └─sdf2         8:82   0   9.1T  0 part
> sdg            8:96   0   9.1T  0 disk
> ├─sdg1         8:97   0   100M  0 part /var/lib/ceph/osd/ceph-209
> └─sdg2         8:98   0   9.1T  0 part
> sdh            8:112  0   9.1T  0 disk
> ├─sdh1         8:113  0   100M  0 part /var/lib/ceph/osd/ceph-210
> └─sdh2         8:114  0   9.1T  0 part
> sdi            8:128  0   9.1T  0 disk
> ├─sdi1         8:129  0   100M  0 part /var/lib/ceph/osd/ceph-211
> └─sdi2         8:130  0   9.1T  0 part
> sdj            8:144  0   9.1T  0 disk
> ├─sdj1         8:145  0   100M  0 part /var/lib/ceph/osd/ceph-212
> └─sdj2         8:146  0   9.1T  0 part
> sdk            8:160  0   9.1T  0 disk
> ├─sdk1         8:161  0   100M  0 part /var/lib/ceph/osd/ceph-213
> └─sdk2         8:162  0   9.1T  0 part
> sdl            8:176  0   9.1T  0 disk
> ├─sdl1         8:177  0   100M  0 part /var/lib/ceph/osd/ceph-214
> └─sdl2         8:178  0   9.1T  0 part
> sdm            8:192  0   9.1T  0 disk
> ├─sdm1         8:193  0   100M  0 part /var/lib/ceph/osd/ceph-215
> └─sdm2         8:194  0   9.1T  0 part
> nvme0n1      259:0    0 349.3G  0 disk
> ├─nvme0n1p1  259:2    0     1G  0 part
> ├─nvme0n1p2  259:4    0   576M  0 part
> ├─nvme0n1p3  259:1    0     1G  0 part
> ├─nvme0n1p4  259:3    0   576M  0 part
> ├─nvme0n1p5  259:5    0     1G  0 part
> ├─nvme0n1p6  259:6    0   576M  0 part
> ├─nvme0n1p7  259:7    0     1G  0 part
> ├─nvme0n1p8  259:8    0   576M  0 part
> ├─nvme0n1p9  259:9    0     1G  0 part
> ├─nvme0n1p10 259:10   0   576M  0 part
> ├─nvme0n1p11 259:11   0     1G  0 part
> ├─nvme0n1p12 259:12   0   576M  0 part
> ├─nvme0n1p13 259:13   0     1G  0 part
> ├─nvme0n1p14 259:14   0   576M  0 part
> ├─nvme0n1p15 259:15   0     1G  0 part
> ├─nvme0n1p16 259:16   0   576M  0 part
> ├─nvme0n1p17 259:17   0     1G  0 part
> ├─nvme0n1p18 259:18   0   576M  0 part
> ├─nvme0n1p19 259:19   0     1G  0 part
> ├─nvme0n1p20 259:20   0   576M  0 part
> ├─nvme0n1p21 259:21   0     1G  0 part
> ├─nvme0n1p22 259:22   0   576M  0 part
> ├─nvme0n1p23 259:23   0     1G  0 part
> └─nvme0n1p24 259:24   0   576M  0 part
>
> Regards
>
> --
> Jarek
>
> 2018-05-11 15:35 GMT+02:00 Oliver Schulz <oliver.schulz@xxxxxxxxxxxxxx>:
>
>     Dear Ceph Experts,
>
>     I'm trying to set up some new OSD storage nodes, now with
>     bluestore (our existing nodes still use filestore). I'm
>     a bit unclear on how to specify WAL/DB devices: Can
>     several OSDs share one WAL/DB partition? So, can I do
>
>          ceph-deploy osd create --bluestore --osd-db=/dev/nvme0n1p2 --data=/dev/sdb HOSTNAME
>
>          ceph-deploy osd create --bluestore --osd-db=/dev/nvme0n1p2 --data=/dev/sdc HOSTNAME
>
>          ...
>
>     Or do I need to use osd-db=/dev/nvme0n1p2 for data=/dev/sdb,
>     osd-db=/dev/nvme0n1p3 for data=/dev/sdc, and so on?
>
>     And just to make sure - if I specify "--osd-db", I don't need
>     to set "--osd-wal" as well, since the WAL will end up on the
>     DB partition automatically, correct?
>
>
>     Thanks for any hints,
>
>     Oliver
>     _______________________________________________
>     ceph-users mailing list
>     ceph-users@xxxxxxxxxxxxxx
>     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



