Unhelpful behaviour of ceph-volume lvm batch with >1 NVME card for block.db

Hi,

We currently deploy our filestore OSDs with ceph-disk (via
ceph-ansible), and I was looking at using ceph-volume as we migrate to
bluestore.

Our servers have 60 OSDs and 2 NVMe cards; each OSD is made up of a
single HDD and an NVMe partition for the journal.

If, however, I do:
ceph-volume lvm batch /dev/sda /dev/sdb [...] /dev/nvme0n1 /dev/nvme1n1
then I get (inter alia):

Solid State VG:
  Targets:   block.db                  Total size: 1.82 TB
  Total LVs: 2                         Size per LV: 931.51 GB

  Devices:   /dev/nvme0n1, /dev/nvme1n1

i.e. ceph-volume is going to make a single VG containing both NVMe
devices, and split that up into LVs to use for block.db.
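
In plain LVM terms that plan amounts to roughly the following (the
VG/LV names here are invented for illustration, not what ceph-volume
actually uses):

  # one VG spanning both NVMe cards
  vgcreate ceph-dbs /dev/nvme0n1 /dev/nvme1n1
  # block.db LVs carved out of the shared VG; losing either PV
  # degrades the whole VG, and with it every OSD's block.db
  lvcreate -l 50%VG -n db-0 ceph-dbs
  lvcreate -l 50%VG -n db-1 ceph-dbs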

It seems to me that this is straightforwardly the wrong answer: the
failure of either NVMe will now take out *every* OSD on the host,
whereas the obvious alternative (one VG per NVMe, each divided into
LVs) would give just as good performance, but you'd only lose half the
OSDs if an NVMe card failed.
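
One way to get that layout with the tooling as it stands, I think,
would be to split the batch call so each invocation only sees one NVMe
card (the device split below is just illustrative):

  # first half of the HDDs get their block.db VG on nvme0n1
  ceph-volume lvm batch /dev/sda /dev/sdb [...] /dev/nvme0n1
  # second half get theirs on nvme1n1
  ceph-volume lvm batch /dev/sdc /dev/sdd [...] /dev/nvme1n1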

Am I missing something obvious here?

I appreciate I /could/ do it all myself, but even using ceph-ansible
that's going to be very tiresome...
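
For reference, the fully manual equivalent would be something along
these lines per OSD (VG/LV names and the db size are invented for the
example):

  # one VG per NVMe card, created by hand
  vgcreate nvme0-dbs /dev/nvme0n1
  # one block.db LV per OSD on that card (size purely illustrative)
  lvcreate -L 31G -n osd-0-db nvme0-dbs
  # create the OSD against a whole HDD plus its db LV
  ceph-volume lvm create --bluestore --data /dev/sda --block.db nvme0-dbs/osd-0-db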

Regards,

Matthew


-- 
 The Wellcome Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 