Re: bluestore write iops calculation

On Mon, Aug 5, 2019 at 6:35 PM <vitalif@xxxxxxxxxx> wrote:
> Hi Team,
> @vitalif@xxxxxxxxxx, thank you for the information. Could you please
> clarify the queries below as well?
>
> 1. The average object size we use will be 256KB to 512KB; will these
> writes go through the deferred write queue?

With the default settings, no (bluestore_prefer_deferred_size_hdd =
32KB)
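
For reference, the threshold can be checked and, if you really want, raised
like this; the value below is only an illustration, and keep in mind that
deferring large writes also roughly doubles the amount of data written, so
it's rarely a win:

  # current value on one OSD (run on the OSD host)
  ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd

  # raise it cluster-wide, e.g. to 128K (centralized config on Mimic/Nautilus;
  # on older releases put it in ceph.conf under [osd] and restart the OSDs)
  ceph config set osd bluestore_prefer_deferred_size_hdd 131072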

  Are you sure that 256-512KB operations aren't counted as multiple 
operations in your disk stats?

  I don't think they are being counted as multiple operations.
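
  A quick way to sanity-check that is the average request size reported by
  iostat (the device name below is just an example, and the column is
  avgrq-sz or wareq-sz depending on the sysstat version):

  iostat -x sdb 1

  If the average write size stays close to the object size, the writes reach
  the disk as single requests; if it sits near the device's max_sectors_kb
  limit, each 256-512KB write is being split into several requests.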

> 2. Could you share the link to the existing rocksdb ticket about it
> doing 2 writes + syncs?

My PR is here: https://github.com/ceph/ceph/pull/26909; you can find the
issue tracker links inside it.

> 3. Is there any configuration by which we can reduce/optimize the IOPS?

As already said, part of your I/O may be caused by metadata (rocksdb)
reads if it doesn't fit into RAM. You can try to add more RAM in that
case... :)

 I can add RAM. Is there a way to increase rocksdb caching? Can I increase
bluestore_cache_size_hdd to a higher value to cache more of rocksdb?
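
For reference, the knobs involved look roughly like this in ceph.conf
(option names as in Luminous/Mimic/Nautilus, check your release; the values
are placeholders, not recommendations):

  [osd]
  # with BlueStore cache autotuning (the default on recent releases), the
  # caches, including the rocksdb block cache, grow towards this per-OSD
  # memory budget
  osd_memory_target = 8589934592

  # if autotuning is disabled, this fixed cache size applies to HDD OSDs;
  # bluestore_cache_meta_ratio / bluestore_cache_kv_ratio control how much
  # of it goes to metadata/rocksdb
  bluestore_cache_size_hdd = 4294967296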

You can also try to add SSDs for metadata (block.db/block.wal).
 We have planned to add some SSDs. How many OSDs' rocksdb can we put on each
SSD? And I guess that if one SSD fails, all of the related OSDs have to be
re-created.
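
 In case it is useful, this is roughly how we would create each OSD with its
DB on a shared SSD using ceph-volume (device names are placeholders; each
HDD gets its own partition/LV on the SSD, and putting the DBs of about 4-5
HDD OSDs on one SSD is our assumption, not a hard rule):

  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2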

Is there something else?... I don't think so.

--
Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
