Re: BlueStore write amplification

Hi,

You can refer to the thread "Odd WAL traffic for BlueStore" on the ceph-devel list for your questions.
This traffic is mostly observed on the WAL partition of BlueStore, which is used by RocksDB. That thread should give more insight into your questions.

Varada
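To confirm where the extra writes land, the per-partition write counters can be compared, e.g. from /proc/diskstats. Below is a minimal sketch (not from the thread) that parses diskstats-format text and reports bytes written per device; it assumes the standard Linux layout where the 10th field is sectors written and a sector is 512 bytes. The device names and numbers in the sample are hypothetical.

```python
# Minimal sketch, assuming the Linux /proc/diskstats layout: field index 9
# (0-based, after splitting) is sectors written; a sector is 512 bytes.
SECTOR_BYTES = 512

def bytes_written(diskstats_text):
    """Map device name -> bytes written, parsed from /proc/diskstats text."""
    written = {}
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) < 10:
            continue
        name = fields[2]
        sectors_written = int(fields[9])  # 7th stat after the device name
        written[name] = sectors_written * SECTOR_BYTES
    return written

# Fabricated sample lines (major minor name reads ... sectors-written ...):
sample = """\
   8        0 sdb  100 0 2000 50 3000 0 819200 400 0 300 450
   8        1 sdb1  10 0  200  5 2800 0 737280 350 0 250 355
"""
stats = bytes_written(sample)
print(stats["sdb1"], "bytes written to sdb1")
```

Sampling this before and after a fio run, per partition (block, db, wal), shows which partition absorbs the extra traffic.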

On Tuesday 23 August 2016 12:09 PM, Zhiyuan Wang wrote:

Hi

I have tested BlueStore on an SSD, and I found that the write bandwidth reported by fio is about 40 MB/s, while the write bandwidth reported by iostat for the SSD is about 400 MB/s, nearly ten times higher.

Could someone help explain this?

Thanks a lot.
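For reference, the ratio being asked about is the write amplification factor: bytes hitting the device divided by bytes the client wrote. A minimal sketch, using the 40 MB/s and 400 MB/s figures from the message above:

```python
# Write amplification = device-level write bandwidth / client-level write
# bandwidth. The 40 and 400 MB/s figures are the ones reported above.
def write_amplification(client_mb_s, device_mb_s):
    return device_mb_s / client_mb_s

wa = write_amplification(client_mb_s=40, device_mb_s=400)
print(f"write amplification = {wa:.1f}x")
```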

 

Below is my configuration file:

[global]

        fsid = 31e77e3c-447c-4745-a91a-58bda80a868c

        enable experimental unrecoverable data corrupting features = bluestore rocksdb

        osd objectstore = bluestore

 

        bluestore default buffered read = true

        bluestore_min_alloc_size=4096

        osd pool default size = 1

 

        osd pg bits = 8

        osd pgp bits = 8

        auth supported = none

        log to syslog = false

        filestore xattr use omap = true

        auth cluster required = none

        auth service required = none

        auth client required = none

 

        public network = 192.168.200.233/24

        cluster network = 192.168.100.233/24

 

        mon initial members = node3

        mon host = 192.168.200.233

        mon data = ""

         

        filestore merge threshold = 40

        filestore split multiple = 8

        osd op threads = 8

            

        debug_bluefs = "0/0"

        debug_bluestore = "0/0"

        debug_bdev = "0/0"

        debug_lockdep = "0/0"

        debug_context = "0/0" 

        debug_crush = "0/0"

        debug_mds = "0/0"

        debug_mds_balancer = "0/0"

        debug_mds_locker = "0/0"

        debug_mds_log = "0/0"

        debug_mds_log_expire = "0/0"

        debug_mds_migrator = "0/0"

        debug_buffer = "0/0"

        debug_timer = "0/0"

        debug_filer = "0/0"

        debug_objecter = "0/0"

        debug_rados = "0/0"

        debug_rbd = "0/0"

        debug_journaler = "0/0"

        debug_objectcacher = "0/0"

        debug_client = "0/0"

        debug_osd = "0/0"

        debug_optracker = "0/0"

        debug_objclass = "0/0"

        debug_filestore = "0/0"

        debug_journal = "0/0"

        debug_ms = "0/0"

        debug_mon = "0/0"

        debug_monc = "0/0"

        debug_paxos = "0/0"

        debug_tp = "0/0"

        debug_auth = "0/0"

        debug_finisher = "0/0"

        debug_heartbeatmap = "0/0"

        debug_perfcounter = "0/0"

        debug_rgw = "0/0"

        debug_hadoop = "0/0"

        debug_asok = "0/0"

        debug_throttle = "0/0"

 

[osd.0]

        host = node3

        osd data = ""

        bluestore block path = /dev/disk/by-partlabel/osd-device-0-block

        bluestore block db path = /dev/disk/by-partlabel/osd-device-0-db

        bluestore block wal path = /dev/disk/by-partlabel/osd-device-0-wal

 

[osd.1]

        host = node3

        osd data = ""

        bluestore block path = /dev/disk/by-partlabel/osd-device-1-block

        bluestore block db path = /dev/disk/by-partlabel/osd-device-1-db

        bluestore block wal path = /dev/disk/by-partlabel/osd-device-1-wal

[osd.2]

        host = node3

        osd data = ""

        bluestore block path = /dev/disk/by-partlabel/osd-device-2-block

        bluestore block db path = /dev/disk/by-partlabel/osd-device-2-db

        bluestore block wal path = /dev/disk/by-partlabel/osd-device-2-wal

 

 

[osd.3]

        host = node3

        osd data = ""

        bluestore block path = /dev/disk/by-partlabel/osd-device-3-block

        bluestore block db path = /dev/disk/by-partlabel/osd-device-3-db

        bluestore block wal path = /dev/disk/by-partlabel/osd-device-3-wal
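One config setting above that interacts with write amplification is bluestore_min_alloc_size=4096: each client write is rounded up to a multiple of the allocation unit on the data device. A simplified sketch of that rounding (it deliberately ignores the RocksDB/WAL overhead, which the reply above identifies as the main source of the extra traffic):

```python
# Simplified model: bytes consumed on the data device for one write,
# rounded up to bluestore_min_alloc_size (4096 in the config above).
# This ignores RocksDB/WAL traffic, the dominant overhead per the reply.
MIN_ALLOC = 4096

def allocated_bytes(write_size):
    """Device bytes consumed by one write, rounded up to MIN_ALLOC."""
    return -(-write_size // MIN_ALLOC) * MIN_ALLOC  # ceiling division

for size in (512, 4096, 5000):
    print(size, "->", allocated_bytes(size))
```

So a 512-byte write still consumes a full 4 KB allocation unit; with small random writes this alone can contribute a measurable amplification factor on top of the WAL traffic.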

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
