Re: NVMe disk - size

On 11/15/19 3:19 PM, Kristof Coucke wrote:
> Hi all,
> 
>  
> 
> We’ve configured a Ceph cluster with 10 nodes, each having 13 large
> disks (14TB) and 2 NVMe disks (1.6TB).
> 
> The idea was to use the NVMe as “fast device”…
> 
> The recommendations I’ve read in the online documentation state that
> the db block device should be around 4%~5% of the slow device. So, the
> block.db should be somewhere between 600GB and 700GB as a best practice.
> 
> However… I was thinking of reserving only 200GB per OSD as fast device…
> which is 1/3 of the recommendation…
> 
>  
The 4% rule is way too much. Usually 10GB per 1TB of storage is
sufficient, so 1%. You should be safe with 200GB per OSD.
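
As a quick sanity check with the sizes from your mail:

  13 OSDs x 14TB   -> ~140GB of DB per OSD at the 1% rule
  2 x 1.6TB NVMe   =  3.2TB of fast storage / 13 OSDs ≈ ~246GB per OSD

So 200GB per OSD fits on the two NVMes and still leaves some headroom.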

> 
> I’ve tested it in the lab, and it works fine even with very small
> devices (the spillover does its job).
> 
> Though, before taking the system into production, I would like to
> verify that no issues arise.
> 
>  
> 
>   * Is it recommended to still use it as a block.db, or is it
>     recommended to only use it as a WAL device?

Use it as DB.

>   * Should I just split the NVMe in three and only configure 3 OSDs to
>     use the system? (This would mean that the performance would be
>     degraded to the speed of the slowest device…)
> 

I would split them: 6 OSDs on one NVMe and 7 on the other. I normally
put LVM on top of each NVMe and create 2 LVs per OSD:

- WAL: 1GB
- DB: xx GB
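
Roughly, that looks like this with plain LVM and ceph-volume (device
names and sizes below are just examples, adjust them to your layout):

  # one VG per NVMe
  pvcreate /dev/nvme0n1
  vgcreate ceph-db-0 /dev/nvme0n1

  # per OSD: a small WAL LV and a DB LV
  lvcreate -L 1G   -n osd-0-wal ceph-db-0
  lvcreate -L 200G -n osd-0-db  ceph-db-0

  # create the OSD and point BlueStore at the fast LVs
  ceph-volume lvm create --bluestore \
      --data /dev/sda \
      --block.db ceph-db-0/osd-0-db \
      --block.wal ceph-db-0/osd-0-wal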

>  
> 
> The initial cluster is +1PB and we’re planning to expand it again with
> 1PB in the near future to migrate our data.
> 
> We’ll only use the system through the RGW (no CephFS, nor block device),
> and we’ll store “a lot” of small files on it… (millions of files a day)
> 
>  
> 
> The reason I’m asking is that I’ve been able to break the test
> system (long story), causing OSDs to fail as they ran out of space…
> Expanding the disks (the block DB device as well as the main block
> device) failed with the ceph-bluestore-tool…
> 
>  
> 
> Thanks for your answer!
> 
>  
> 
> Kristof
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



