Re: Best osd scenario + ansible config?

Wed, 4 Sep 2019 10:32:56 +0200
Yoann Moulin <yoann.moulin@xxxxxxx> ==> ceph-users@xxxxxxx :
> Hello,
> 
> > Tue, 3 Sep 2019 11:28:20 +0200
> > Yoann Moulin <yoann.moulin@xxxxxxx> ==> ceph-users@xxxxxxx :  
> >> Is it better to put all WALs on one SSD and all DBs on the other one? Or to put the WAL and DB of the first 5 OSDs on the first SSD and those of
> >> the other 5 on the second one?  
> > 
> > I don't know if this has a relevant impact on the latency/speed of the Ceph system, but we use LVM on top of a SW RAID 1 over two SSDs and put the WAL & DB on that RAID 1.  
> 
> What is the recommended size for the WAL and DB in my case?
> 
> I have :
> 
> 10x 6TB Disk OSDs (data)
>  2x 480G SSD
> 
> Best,

I'm still unsure about the size of the block.db and the WAL.
This seems to be relevant:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/035086.html

But it is also said that the pure WAL needs just 1 GB of space.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-August/036509.html

So the conclusion would be to use 2*X (DB) + 1 GB (WAL) if you put both on the same partition/LV,
with X being one of 3 GB, 30 GB or 300 GB.
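
For what it's worth, here is a minimal Python sketch of that arithmetic for your hardware
(10 OSDs, 2x 480 GB SSDs, so 5 DB/WAL LVs per SSD). The 3/30/300 GB levels and the
2*X + 1 GB rule come from the threads above; the variable names and the even split per
SSD are just my assumptions for illustration:

# Rough DB+WAL sizing sketch -- assumptions, not official guidance.
SSD_SIZE_GB = 480            # capacity of each SSD
OSDS_PER_SSD = 5             # 10 OSDs spread evenly over 2 SSDs
WAL_GB = 1                   # per the August thread, ~1 GB is enough for the WAL
DB_LEVELS_GB = [3, 30, 300]  # useful RocksDB level sizes from the May thread

per_osd_budget = SSD_SIZE_GB / OSDS_PER_SSD   # 96 GB of SSD per OSD

for level in DB_LEVELS_GB:
    needed = 2 * level + WAL_GB               # 2*X (DB + compaction headroom) + 1 GB (WAL)
    fits = "fits" if needed <= per_osd_budget else "does not fit"
    print(f"DB level {level} GB -> LV of {needed} GB per OSD ({fits} in {per_osd_budget:.0f} GB)")

With 480 GB SSDs that leaves room for the 30 GB level (61 GB per LV, out of ~96 GB per OSD)
but not for 300 GB, so 30 GB is probably the level to aim for here.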

You have 10 OSDs. That means you should have 10 partitions/LVs for DBs & WALs.

This is something that should be cleared up in the docs!

Lars
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
