Re: Shared WAL/DB device partition for multiple OSDs?

Hi,

Thanks for the advice! I'm a bit confused now, though. ;-)
I thought the DB and WAL were supposed to go on raw block
devices, not on file systems?


Cheers,

Oliver


On 11.05.2018 16:01, João Paulo Sacchetto Ribeiro Bastos wrote:
Hello Oliver,

As far as I know, you can use the same DB device for about 4 or 5 OSDs; you just need to keep an eye on the free space. I'm also building a bluestore cluster, and our DB and WAL will live on the same SSD of about 480 GB, serving four OSD HDDs of 4 TB each. As for the sizes, it's just a gut feeling so far, because I haven't found any clear rule yet on how to measure the requirements.
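For illustration, one possible way to split a 480 GB SSD like that into one DB partition per OSD with sgdisk (the device name and the ~110 GB partition size are assumptions, not a tested recommendation):

$ sgdisk -n 1:0:+110G -c 1:osd-db-sdb /dev/nvme0n1   # DB partition for the first data disk
$ sgdisk -n 2:0:+110G -c 2:osd-db-sdc /dev/nvme0n1   # DB partition for the second data disk
$ sgdisk -n 3:0:+110G -c 3:osd-db-sdd /dev/nvme0n1
$ sgdisk -n 4:0:+110G -c 4:osd-db-sde /dev/nvme0n1
$ partprobe /dev/nvme0n1                             # make the kernel re-read the partition table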

* The only concern that took me some time to figure out: if you use ceph-deploy, you should create an XFS partition first, because otherwise it will simply fail with a RuntimeError that gives no hint about what is going on (a minimal example of that step follows below).
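A minimal sketch of that preparation step, assuming the partition in question is /dev/nvme0n1p1:

$ mkfs.xfs -f /dev/nvme0n1p1   # assumed target partition; -f forces overwrite of an existing filesystem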

So, answering your question, you could do something like:
$ ceph-deploy osd create --bluestore --data=/dev/sdb --block-db /dev/nvme0n1p1 $HOSTNAME
$ ceph-deploy osd create --bluestore --data=/dev/sdc --block-db /dev/nvme0n1p1 $HOSTNAME

On Fri, May 11, 2018 at 10:35 AM Oliver Schulz <oliver.schulz@xxxxxxxxxxxxxx> wrote:

    Dear Ceph Experts,

    I'm trying to set up some new OSD storage nodes, now with
    bluestore (our existing nodes still use filestore). I'm
    a bit unclear on how to specify WAL/DB devices: Can
    several OSDs share one WAL/DB partition? So, can I do

          ceph-deploy osd create --bluestore --osd-db=/dev/nvme0n1p2 --data=/dev/sdb HOSTNAME

          ceph-deploy osd create --bluestore --osd-db=/dev/nvme0n1p2 --data=/dev/sdc HOSTNAME

          ...

    Or do I need to use osd-db=/dev/nvme0n1p2 for data=/dev/sdb,
    osd-db=/dev/nvme0n1p3 for data=/dev/sdc, and so on?

    And just to make sure - if I specify "--osd-db", I don't need
    to set "--osd-wal" as well, since the WAL will end up on the
    DB partition automatically, correct?


    Thanks for any hints,

    Oliver

--

João Paulo Sacchetto Ribeiro Bastos
+55 31 99279-7092




