Re: Shared WAL/DB device partition for multiple OSDs?

Actually, if you look at https://ceph.com/community/new-luminous-bluestore/ you will see that the DB/WAL can live on an XFS partition, while the data itself goes on a raw block device.
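
By the way, you can see that split directly on a deployed BlueStore OSD. Assuming the default paths and OSD id 0, something like:

$ ls -l /var/lib/ceph/osd/ceph-0/
# 'block' is a symlink to the raw data device;
# 'block.db' (if present) points at the DB device or partition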

Also, I gave you the wrong option in my last mail: where I said --osd-db, it should be --block-db.
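
In other words, it should look something like this (the data device is just a placeholder, adjust it to your disk layout):

$ ceph-deploy osd create --bluestore --data=<data device> --block-db /dev/nvme0n1p1 $HOSTNAME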

On Fri, May 11, 2018 at 11:51 AM Oliver Schulz <oliver.schulz@xxxxxxxxxxxxxx> wrote:
Hi,

thanks for the advice! I'm a bit confused now, though. ;-)
I thought DB and WAL were supposed to go on raw block
devices, not file systems?


Cheers,

Oliver


On 11.05.2018 16:01, João Paulo Sacchetto Ribeiro Bastos wrote:
> Hello Oliver,
>
> As far as I know, you can use the same DB device for about 4 or 5
> OSDs; you just need to keep an eye on the free space. I'm also building
> a BlueStore cluster, and our DB and WAL will share a single ~480 GB SSD
> serving four 4 TB OSD HDDs. As for the sizes, that's just a gut feeling,
> because I haven't yet found any clear rule on how to estimate the
> requirements.
>
> * The one gotcha that took me some time to figure out is that you
> should create an XFS partition when using ceph-deploy, because if you
> don't, it will simply fail with a RuntimeError that gives no hint about
> what's actually going on.
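>
> For example, to carve out and format such a partition on the NVMe
> (partition number and the 30G size are just an illustration):
>
> $ sgdisk -n 1:0:+30G /dev/nvme0n1
> $ mkfs.xfs /dev/nvme0n1p1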
>
> So, answering your question, you could do something like:
> $ ceph-deploy osd create --bluestore --data=<first data device> --block-db
> /dev/nvme0n1p1 $HOSTNAME
> $ ceph-deploy osd create --bluestore --data=<second data device> --block-db
> /dev/nvme0n1p1 $HOSTNAME
>
> On Fri, May 11, 2018 at 10:35 AM Oliver Schulz
> <oliver.schulz@xxxxxxxxxxxxxx> wrote:
>
>     Dear Ceph Experts,
>
>     I'm trying to set up some new OSD storage nodes, now with
>     bluestore (our existing nodes still use filestore). I'm
>     a bit unclear on how to specify WAL/DB devices: Can
>     several OSDs share one WAL/DB partition? So, can I do
>
>           ceph-deploy osd create --bluestore --osd-db=/dev/nvme0n1p2
>     --data=<first data device> HOSTNAME
>
>           ceph-deploy osd create --bluestore --osd-db=/dev/nvme0n1p2
>     --data=<second data device> HOSTNAME
>
>           ...
>
>     Or do I need to use osd-db=/dev/nvme0n1p2 for one data device,
>     osd-db=/dev/nvme0n1p3 for the next, and so on?
>
>     And just to make sure - if I specify "--osd-db", I don't need
>     to set "--osd-wal" as well, since the WAL will end up on the
>     DB partition automatically, correct?
>
>
>     Thanks for any hints,
>
>     Oliver
>
> --
>
> João Paulo Sacchetto Ribeiro Bastos
> +55 31 99279-7092
>
--

João Paulo Bastos
DevOps Engineer at Mav Tecnologia
Belo Horizonte - Brazil
+55 31 99279-7092

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
