Re: ceph bluestore

On Tue, 5 Apr 2022 at 11:26, Ali Akil <ali-akil@xxxxxx> wrote:
> Hello everybody,
> I have two questions regarding BlueStore. I am struggling to understand
> the documentation :/
>
> I am planning to deploy 3 Ceph nodes with 10x HDDs for OSD data and a
> RAID 0 of 2x SSDs for block.db, with replication on host level.
>
> First question:
> Is it possible to deploy block.db on a RAID 0 partition? And do I need
> to back up the SSDs for block.db, or will the data be replicated on the
> other nodes?

RAID 0 on the block.db device means that if one of the SSDs dies, *all* of
your OSDs on that host are lost. I would have each SSD be block.db for half
of the HDDs, so that no single failure causes the whole host to be lost.
You would still lose half the OSDs, but the other half will keep working if
one of the SSDs dies.
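
As a rough sketch only (the device names below are placeholders, and
"lvm batch" with --db-devices is just one way to do the split; check it
against your Ceph release):

    # first 5 HDDs get their block.db on the first SSD
    ceph-volume lvm batch --bluestore \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
        --db-devices /dev/sdk
    # the other 5 HDDs get their block.db on the second SSD
    ceph-volume lvm batch --bluestore \
        /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj \
        --db-devices /dev/sdl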

> Second question:
> Under the BLOCK and BLOCK.DB section, the documentation states that I
> must create volume groups and logical volumes if I want to locate
> block.db on another disk. It does not state the reason behind that,
> though. So why is it not possible to just assign block.db to the disk
> with e.g. --block.db /dev/sdb, without creating a logical volume?

You can use --block.db /dev/sdb, but then the whole of sdb will be used
for that one single OSD you are creating. In order to split one device as
block.db for several OSDs, you have to partition it (logical volumes work
the same way) and give each OSD its own piece for block.db.
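
If you want to follow the documentation's VG/LV route instead of raw
partitions, a minimal sketch (the VG/LV names and the 60G size are made
up; size the LVs for your own setup and make one per OSD on that SSD):

    pvcreate /dev/sdk
    vgcreate ceph-db-0 /dev/sdk
    lvcreate -L 60G -n db-1 ceph-db-0
    lvcreate -L 60G -n db-2 ceph-db-0
    # ...and so on, one LV per OSD that should use this SSD
    ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db-0/db-1
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db-0/db-2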

> Also, what is the role of `ceph-volume lvm prepare` if one is supposed
> to create these logical volumes manually?

The "create" pass is actually two tasks, "prepare" and "activate". If you
only want to do the first half, "prepare" is there so you can "activate"
the OSDs later (i.e. set up systemd autostart and so on).

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


