Re: using cache-tier with writeback mode, rados bench result degrades


 



Hi Robert,

Please do whatever is needed to get it pulled into Hammer.

Nick

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Robert LeBlanc
> Sent: 11 January 2016 20:48
> To: Nick Fisk <nick@xxxxxxxxxx>
> Cc: Ceph-User <ceph-users@xxxxxxxx>
> Subject: Re:  using cache-tier with writeback mode, rados bench result degrades
> 
> https://github.com/ceph/ceph/pull/7024
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> 
> 
> On Mon, Jan 11, 2016 at 1:47 PM, Robert LeBlanc  wrote:
> >
> > Currently set as DNM. :( I guess the author has not updated the PR as
> > requested. If needed, I can probably submit a new PR as we would
> > really like to see this in the next Hammer release. I just need to
> > know if I need to get involved. I don't want to take credit for Nick's
> > work, so I've been waiting.
> >
> > Thanks
> > ----------------
> > Robert LeBlanc
> > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> >
> >
> > On Mon, Jan 11, 2016 at 4:29 AM, Nick Fisk  wrote:
> >> Looks like it has been done
> >>
> >>
> >> https://github.com/zhouyuan/ceph/commit/f352b8b908e8788d053cbe15fa3632b226a6758d
> >>
> >>
> >>> -----Original Message-----
> >>> From: Robert LeBlanc [mailto:robert@xxxxxxxxxxxxx]
> >>> Sent: 08 January 2016 18:23
> >>> To: Nick Fisk
> >>> Cc: Wade Holler ; hnuzhoulin ; Ceph-User
> >>> Subject: Re:  using cache-tier with writeback mode, rados bench result degrades
> >>>
> >>> Are you backporting that to Hammer? We'd love it.
> >>> ----------------
> >>> Robert LeBlanc
> >>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> >>>
> >>>
> >>> On Fri, Jan 8, 2016 at 9:28 AM, Nick Fisk  wrote:
> >>> > There was/is a bug in Infernalis and older where objects will always
> >>> > get promoted on the 2nd read/write regardless of what you set the
> >>> > min_recency_promote settings to. This can have a dramatic effect on
> >>> > performance. I wonder if this is what you are experiencing?
> >>> >
> >>> > This has been fixed in Jewel https://github.com/ceph/ceph/pull/6702 .
> >>> >
> >>> > You can compile with the changes above to see if it helps, or I have a
> >>> > .deb for Infernalis where this is fixed, if that's easier.
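> >>> >
> >>> > For reference, the recency threshold the fix makes effective is just a
> >>> > pool option. A minimal sketch, assuming the cache pool name "hotstorage"
> >>> > from the setup further down this thread and a purely illustrative value:
> >>> >
> >>> > # only promote an object on read once it appears in 2 recent hit sets
> >>> > ceph osd pool set hotstorage min_read_recency_for_promote 2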
> >>> >
> >>> > Nick
> >>> >
> >>> >> -----Original Message-----
> >>> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
> >>> >> Behalf Of Wade Holler
> >>> >> Sent: 08 January 2016 16:14
> >>> >> To: hnuzhoulin ; ceph-devel@xxxxxxxxxxxxxxx
> >>> >> Cc: ceph-users@xxxxxxxx
> >>> >> Subject: Re:  using cache-tier with writeback mode, rados bench result degrades
> >>> >>
> >>> >> My experience is that performance degrades dramatically when dirty
> >>> >> objects are flushed.
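> >>> >>
> >>> >> A rough way to check whether that is what's happening (pool name taken
> >>> >> from the message below, purely as an example) is to watch the cache
> >>> >> pool's dirty object count while the benchmark runs and see if the
> >>> >> slowdowns line up with flushes:
> >>> >>
> >>> >> # "ceph df detail" reports a DIRTY column for cache pools
> >>> >> watch -n 5 'ceph df detail | grep hotstorage'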
> >>> >>
> >>> >> Best Regards,
> >>> >> Wade
> >>> >>
> >>> >>
> >>> >> On Fri, Jan 8, 2016 at 11:08 AM hnuzhoulin  wrote:
> >>> >> Hi guys,
> >>> >> Recently I have been testing cache-tier in writeback mode, but I found
> >>> >> something strange: performance measured with rados bench degrades. Is
> >>> >> that expected? If so, how can it be explained? Here is some info about
> >>> >> my test:
> >>> >>
> >>> >> Storage nodes: 4 machines, each with two INTEL SSDSC2BB120G4 SSDs (one
> >>> >> for the system, the other used as an OSD) and four SATA disks as OSDs.
> >>> >>
> >>> >> Before using the cache-tier:
> >>> >> root@ceph1:~# rados bench -p coldstorage 300 write --no-cleanup
> >>> >> ----------------------------------------------------------------
> >>> >> Total time run:         301.236355
> >>> >> Total writes made:      6041
> >>> >> Write size:             4194304
> >>> >> Bandwidth (MB/sec):     80.216
> >>> >>
> >>> >> Stddev Bandwidth:       10.5358
> >>> >> Max bandwidth (MB/sec): 104
> >>> >> Min bandwidth (MB/sec): 0
> >>> >> Average Latency:        0.797838
> >>> >> Stddev Latency:         0.619098
> >>> >> Max latency:            4.89823
> >>> >> Min latency:            0.158543
> >>> >>
> >>> >> root@ceph1:/root/cluster# rados bench -p coldstorage  300 seq
> >>> >> Total time run:        133.563980
> >>> >> Total reads made:     6041
> >>> >> Read size:            4194304
> >>> >> Bandwidth (MB/sec):    180.917
> >>> >>
> >>> >> Average Latency:       0.353559
> >>> >> Max latency:           1.83356
> >>> >> Min latency:           0.027878
> >>> >>
> >>> >> After configuring the cache-tier:
> >>> >> root@ubuntu:~/benchmarkcollect/Monitor# ceph osd tier add coldstorage hotstorage
> >>> >> pool 'hotstorage' is now (or already was) a tier of 'coldstorage'
> >>> >>
> >>> >> root@ubuntu:~/benchmarkcollect/Monitor# ceph osd tier cache-mode hotstorage writeback
> >>> >> set cache-mode for pool 'hotstorage' to writeback
> >>> >>
> >>> >> root@ubuntu:~/benchmarkcollect/Monitor# ceph osd tier set-overlay coldstorage hotstorage
> >>> >> overlay for 'coldstorage' is now (or already was) 'hotstorage'
> >>> >>
> >>> >> root@ubuntu:~# ceph osd dump | grep storage
> >>> >> pool 6 'coldstorage' replicated size 3 min_size 1 crush_ruleset 0
> >>> >> object_hash rjenkins pg_num 512 pgp_num 512 last_change 216 lfor 216
> >>> >> flags hashpspool tiers 7 read_tier 7 write_tier 7 stripe_width 0
> >>> >> pool 7 'hotstorage' replicated size 3 min_size 1 crush_ruleset 1
> >>> >> object_hash rjenkins pg_num 128 pgp_num 128 last_change 228 flags
> >>> >> hashpspool,incomplete_clones tier_of 6 cache_mode writeback
> >>> >> target_bytes 100000000000 hit_set bloom{false_positive_probability:
> >>> >> 0.05, target_size: 0, seed: 0} 3600s x6 stripe_width 0
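> >>> >>
> >>> >> The flush/evict thresholds are not shown in the dump above; as an
> >>> >> illustrative sketch only (values are examples, not what I have set),
> >>> >> they could be set explicitly like this:
> >>> >>
> >>> >> # start flushing dirty objects at 40% of the cache target size,
> >>> >> # start evicting clean objects at 80%
> >>> >> ceph osd pool set hotstorage cache_target_dirty_ratio 0.4
> >>> >> ceph osd pool set hotstorage cache_target_full_ratio 0.8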
> >>> >> -------------------------------------------------------------
> >>> >> rados bench -p coldstorage 300 write --no-cleanup
> >>> >> Total time run:         302.207573
> >>> >> Total writes made:      4315
> >>> >> Write size:             4194304
> >>> >> Bandwidth (MB/sec):     57.113
> >>> >>
> >>> >> Stddev Bandwidth: 23.9375
> >>> >> Max bandwidth (MB/sec): 104
> >>> >> Min bandwidth (MB/sec): 0
> >>> >> Average Latency: 1.1204
> >>> >> Stddev Latency: 0.717092
> >>> >> Max latency: 6.97288
> >>> >> Min latency: 0.158371
> >>> >>
> >>> >> root@ubuntu:/# rados bench -p coldstorage 300 seq
> >>> >> Total time run:        153.869741
> >>> >> Total reads made:      4315
> >>> >> Read size:             4194304
> >>> >> Bandwidth (MB/sec):    112.173
> >>> >>
> >>> >> Average Latency: 0.570487
> >>> >> Max latency: 1.75137
> >>> >> Min latency: 0.039635
> >>> >>
> >>> >>
> >>> >> ceph.conf:
> >>> >> --------------------------------------------
> >>> >> [global]
> >>> >> fsid = 4ec1eb64-226c-4d90-8c5c-b6b6644be831
> >>> >> mon_initial_members = ceph2, ceph3, ceph4
> >>> >> mon_host = 10.**.**.241,10.**.**.242,10.**.**.243
> >>> >> auth_cluster_required = cephx
> >>> >> auth_service_required = cephx
> >>> >> auth_client_required = cephx
> >>> >> filestore_xattr_use_omap = true
> >>> >> osd_pool_default_size = 3
> >>> >> osd_pool_default_min_size = 1
> >>> >> auth_supported = cephx
> >>> >> osd_journal_size = 10240
> >>> >> osd_mkfs_type = xfs
> >>> >> osd crush update on start = false
> >>> >>
> >>> >> [client]
> >>> >> rbd_cache = true
> >>> >> rbd_cache_writethrough_until_flush = false
> >>> >> rbd_cache_size = 33554432
> >>> >> rbd_cache_max_dirty = 25165824
> >>> >> rbd_cache_target_dirty = 16777216
> >>> >> rbd_cache_max_dirty_age = 1
> >>> >> rbd_cache_block_writes_upfront = false
> >>> >>
> >>> >> [osd]
> >>> >> filestore_omap_header_cache_size = 40000
> >>> >> filestore_fd_cache_size = 40000
> >>> >> filestore_fiemap = true
> >>> >> client_readahead_min = 2097152
> >>> >> client_readahead_max_bytes = 0
> >>> >> client_readahead_max_periods = 4
> >>> >> filestore_journal_writeahead = false
> >>> >> filestore_max_sync_interval = 10
> >>> >> filestore_queue_max_ops = 500
> >>> >> filestore_queue_max_bytes = 1048576000
> >>> >> filestore_queue_committing_max_ops = 5000
> >>> >> filestore_queue_committing_max_bytes = 1048576000
> >>> >> keyvaluestore_queue_max_ops = 500
> >>> >> keyvaluestore_queue_max_bytes = 1048576000
> >>> >> journal_queue_max_ops = 30000
> >>> >> journal_queue_max_bytes = 3355443200
> >>> >> osd_op_threads = 20
> >>> >> osd_disk_threads = 8
> >>> >> filestore_op_threads = 4
> >>> >> osd_mount_options_xfs = rw,noatime,nobarrier,inode64,logbsize=256k,delaylog
> >>> >>
> >>> >> [mon]
> >>> >> mon_osd_allow_primary_affinity=true
> >>> >>
> >>> >> --
> >>> >> Sent with Opera's e-mail client: http://www.opera.com/mail/
> >>> >
> >>>
> >>
> >
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



