Re: [Bluestore] Some of my osd's uses BlueFS slow storage for db - why?

> -----Original Message-----
> From: Konstantin Shalygin <k0ste@xxxxxxxx>
> Sent: 22 February 2019 14:23
> To: Nick Fisk <nick@xxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  [Bluestore] Some of my osd's uses BlueFS slow storage for db - why?
> 
> Bluestore/RocksDB will only put the next level of the DB on flash if the whole level will fit.
> These sizes are roughly 3GB, 30GB, 300GB. Anything in between those sizes is pointless: only ~3GB of SSD will ever be used out of a
> 28GB partition. Likewise, a 240GB partition is also pointless, as only ~30GB will be used.
> 
> I'm currently running 30GB partitions on my cluster with a mix of 6, 8 and 10TB disks. The 10TB disks are about 75% full and use around 14GB of DB;
> this is mainly on 3x replica RBD (4MB objects).
> 
> Nick
> 
> Can you explain more? Do you mean that I should increase my 28GB to 30GB and that will do the trick?
> How large is your db_slow? Should we monitor it? Do you monitor it, and how?

Yes, I was in a similar situation initially: I had deployed my OSDs with 25GB DB partitions, and after 3GB of DB was used, everything else was going into the slow DB on disk. From memory, 29GB was just enough to make the DB fit on flash, but 30GB is a safe round figure to aim for. With a 30GB DB partition, for most RBD-type workloads all of the DB data should reside on flash, even for fairly large disks running erasure coding.
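
For background on where the ~3GB/30GB/300GB figures come from: RocksDB grows in levels, each roughly 10x the size of the previous one, and (to a first approximation) a level only stays on the fast device if it fits there in full. The short Python sketch below is just that rule of thumb, assuming the usual max_bytes_for_level_base of 256MB and a multiplier of 10; real clusters can override both via bluestore_rocksdb_options, and the actual placement logic in BlueFS is more involved, so treat the numbers as approximate.

GB = 10 ** 9            # partition sizes in this thread are quoted in decimal GB
MiB = 1024 ** 2

level_base = 256 * MiB  # max_bytes_for_level_base (assumed RocksDB default)
multiplier = 10         # max_bytes_for_level_multiplier (assumed RocksDB default)

def levels_total(n):
    # Bytes needed to hold RocksDB levels L1..Ln in full.
    return sum(level_base * multiplier ** i for i in range(n))

def flash_usable(partition_bytes, max_levels=6):
    # Largest whole set of levels that fits on the DB partition;
    # anything beyond that spills onto the slow (HDD) device.
    fit = 0
    for n in range(1, max_levels + 1):
        if levels_total(n) <= partition_bytes:
            fit = levels_total(n)
        else:
            break
    return fit

for gb in (25, 28, 30, 240, 300):
    usable = flash_usable(gb * GB) / GB
    print("%3d GB DB partition -> roughly %.0f GB of DB usable before spillover" % (gb, usable))

With those assumptions, 25GB and 28GB partitions both come out at roughly 3GB usable, while 30GB just fits the third level, which lines up with 29GB being just barely enough in my case.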

Nick
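
P.S. On the question of how to keep an eye on db_slow: the BlueFS perf counters on each OSD's admin socket report how much of the DB lives on the fast device and how much has spilled onto the slow one. Something like the sketch below should work when run on the OSD host; the counter names come from the "bluefs" section of "ceph daemon osd.<id> perf dump" and may differ slightly between releases, so treat it as a starting point rather than gospel.

import json
import subprocess
import sys

def bluefs_usage(osd_id):
    # Query the OSD's admin socket for its perf counters and pull out the
    # BlueFS byte counters (fast DB device vs. slow/spillover device).
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.%s" % osd_id, "perf", "dump"])
    bluefs = json.loads(out)["bluefs"]
    gib = 1024 ** 3
    return (bluefs["db_used_bytes"] / gib,
            bluefs["db_total_bytes"] / gib,
            bluefs["slow_used_bytes"] / gib)

if __name__ == "__main__":
    db_used, db_total, slow_used = bluefs_usage(sys.argv[1])
    print("DB used:   %.1f / %.1f GiB" % (db_used, db_total))
    print("Slow used: %.1f GiB (anything above zero means spillover)" % slow_used)

Run it once per OSD id on the host (the script name is whatever you save it as, e.g. "python3 check_bluefs.py 12"); if slow_used is anything other than zero, that OSD's DB has spilled over.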

> 
> 
> k

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


