Re: ceph Performance random write is more than sequential


 



Yes. So far I have tried both options, and in both cases I get better sequential performance than random (as explained by Somnath). But the performance numbers (IOPS, MB/s) are far lower than with the default option. I can understand that, as Ceph is dealing with roughly 1000 times more objects than with the default option. Keeping this in mind, I am running performance tests for random writes only and leaving out the sequential tests. I am still not sure how the reports available on the internet from Intel and Mellanox show good numbers for sequential writes; maybe they have enabled caching.

http://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf
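For reference, the "1000 times more objects" above is consistent with shrinking the RBD object size from the 4 MB default down to something like 4 KB; a quick sketch of the arithmetic (the 100 GiB image size and both object sizes here are illustrative assumptions, not values taken from the cluster in this thread):

```python
# Number of RADOS objects backing an RBD image = image size / object size,
# so shrinking the object size multiplies the object count proportionally.

def rbd_object_count(image_bytes: int, object_bytes: int) -> int:
    # Round up: a partially filled final object is still an object.
    return -(-image_bytes // object_bytes)

GiB = 1024 ** 3
image = 100 * GiB  # hypothetical 100 GiB RBD image

default_objects = rbd_object_count(image, 4 * 1024 * 1024)  # 4 MB default
small_objects = rbd_object_count(image, 4 * 1024)           # 4 KB objects

print(default_objects)                   # 25600
print(small_objects)                     # 26214400
print(small_objects // default_objects)  # 1024, i.e. ~1000x more objects
```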

Thanks
sumit

On Thu, Feb 5, 2015 at 2:09 PM, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:
Hi,

>> What I saw is that after enabling the RBD cache it works as expected, meaning sequential writes get better MB/s than random writes. Can somebody explain this behaviour?

This is because rbd_cache merges adjacent small IOs into bigger IOs, which only helps with sequential workloads.

You'll send fewer but bigger IOs to Ceph, so less CPU overhead, etc.
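The merging effect can be illustrated with a toy model (this is not the actual librbd implementation, just a sketch of why only sequential workloads benefit): writes whose byte ranges touch get merged into one larger IO.

```python
# Toy model of write coalescing: merge writes whose ranges touch or overlap.
# Sequential writes collapse into one big IO; scattered writes mostly don't.

def coalesce(writes):
    """writes: list of (offset, length); returns merged (offset, length) list."""
    merged = []
    for off, length in sorted(writes):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            # Extend the previous range to cover this write.
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, max(prev_off + prev_len, off + length) - prev_off)
        else:
            merged.append((off, length))
    return merged

seq = [(i * 4096, 4096) for i in range(8)]       # 8 sequential 4 KB writes
rnd = [(i * 1_000_000, 4096) for i in range(8)]  # 8 scattered 4 KB writes

print(len(coalesce(seq)))  # 1 -> a single 32 KB IO reaches Ceph
print(len(coalesce(rnd)))  # 8 -> still 8 small IOs
```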


----- Original message -----
From: "Sumit Gaur" <sumitkgaur@xxxxxxxxx>
To: "Florent MONTHEL" <fmonthel@xxxxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Monday, 2 February 2015 03:54:36
Subject: Re: ceph Performance random write is more than sequential

Hi All,
What I saw is that after enabling the RBD cache it works as expected, meaning sequential writes get better MB/s than random writes. Can somebody explain this behaviour? Is the RBD cache setting a must for a Ceph cluster to behave normally?
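For anyone reproducing this: the RBD cache is controlled from the [client] section of ceph.conf. The option names below are the standard ones; the sizes are illustrative assumptions, not recommendations for this cluster:

```ini
[client]
rbd cache = true
; Writeback cache sizing (example values, tune for your workload):
rbd cache size = 33554432              ; 32 MB total cache
rbd cache max dirty = 25165824         ; 24 MB dirty data before writes block
rbd cache target dirty = 16777216      ; start flushing at 16 MB dirty
; Stay in writethrough until the guest issues its first flush (safety):
rbd cache writethrough until flush = true
```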

Thanks
sumit

On Mon, Feb 2, 2015 at 9:59 AM, Sumit Gaur <sumitkgaur@xxxxxxxxx> wrote:



Hi Florent,
Cache tiering: no.

** Our architecture:

vdbench/FIO inside VM <--> RBD without cache <-> Ceph Cluster (6 OSDs + 3 Mons)
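For the record, the VM-side workload can be expressed as a pair of fio jobs like the following (a sketch only; the block size, file size, and device path are assumptions, not the exact jobs used in this test):

```ini
; fio job file: sequential vs random write against the RBD-backed disk
[global]
ioengine=libaio
direct=1
bs=4k
size=1g
filename=/dev/vdb     ; hypothetical RBD-backed device inside the VM
runtime=60
time_based

[seq-write]
rw=write
stonewall             ; finish this job before starting the next

[rand-write]
rw=randwrite
stonewall
```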


Thanks
sumit

[root@ceph-mon01 ~]# ceph -s
cluster 47b3b559-f93c-4259-a6fb-97b00d87c55a
health HEALTH_WARN clock skew detected on mon.ceph-mon02, mon.ceph-mon03
monmap e1: 3 mons at {ceph-mon01=192.168.10.19:6789/0,ceph-mon02=192.168.10.20:6789/0,ceph-mon03=192.168.10.21:6789/0}, election epoch 14, quorum 0,1,2 ceph-mon01,ceph-mon02,ceph-mon03
osdmap e603: 36 osds: 36 up, 36 in
pgmap v40812: 5120 pgs, 2 pools, 179 GB data, 569 kobjects
522 GB used, 9349 GB / 9872 GB avail
5120 active+clean


On Mon, Feb 2, 2015 at 12:21 AM, Florent MONTHEL <fmonthel@xxxxxxxxxxxxx> wrote:

Hi Sumit

Do you have cache pool tiering activated?
Any feedback on your architecture?
Thanks

Sent from my iPad

> On 1 Feb 2015, at 15:50, Sumit Gaur <sumitkgaur@xxxxxxxxx> wrote:
>
> Hi
> I have installed a 6-node Ceph cluster, and to my surprise, when I ran rados bench I saw that random writes have better performance numbers than sequential writes. This is the opposite of normal disk behaviour. Can somebody let me know if I am missing any Ceph architecture point here?
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com








