Re: performance degradation issue


Yes, sure, I did check each OSD and compared them.

 

Now the replication size is 2. I also changed the size to 1, but that doesn't seem to improve things much. :(
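For reference, this is how I changed and checked the pool size (a minimal sketch, assuming the pool is named data as in my earlier mail):

ceph osd pool set data size 1    # drop the data pool to a single replica
ceph osd pool get data size      # confirm the current replication size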

 

Can you help me check my config file again? I don't know what's wrong in there.

 

From: Gregory Farnum [mailto:greg@xxxxxxxxxxx]
Sent: Thursday, May 23, 2013 12:14 PM
To: Khanh. Nguyen Dang Quoc
Cc: Gregory Farnum; ceph-users@xxxxxxxxxxxxxx
Subject: Re: performance degradation issue

 

Yeah, you need to check your disks individually and see how they compare. Sounds like the second one is slower. And you're also getting a bit slower going to 2x replication.
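A quick way to check is a direct write onto each OSD's data disk, outside of Ceph (a rough sketch; /var/lib/ceph/osd/ceph-N is the default data path and may differ on your hosts):

dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=1M count=1024 oflag=direct   # on the host running osd.0
dd if=/dev/zero of=/var/lib/ceph/osd/ceph-1/ddtest bs=1M count=1024 oflag=direct   # on the host running osd.1
rm -f /var/lib/ceph/osd/ceph-0/ddtest /var/lib/ceph/osd/ceph-1/ddtest              # remove the test files afterwards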

-Greg

On Wednesday, May 22, 2013, Khanh. Nguyen Dang Quoc wrote:

I ran the test following the steps below:

 

+ Create an image with size 100 GB in pool data.

+ After that, map that image on one server.

+ Run mkfs.xfs /dev/rbd0, then mount /dev/rbd0 /mnt.

+ Run the write benchmark on that mount point with dd (the full command sequence is sketched below):

 

dd if=/dev/zero of=/mnt/good2 bs=1M count=10000 oflag=direct
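Put together, the sequence looks roughly like this (a sketch; test-img is a placeholder, not the actual image name I used; rbd sizes are given in MB):

rbd create test-img --pool data --size 102400   # 100 GB image in pool data
rbd map test-img --pool data                    # shows up as /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt
dd if=/dev/zero of=/mnt/good2 bs=1M count=10000 oflag=direct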

 

 

From: Gregory Farnum [mailto:greg@xxxxxxxxxxx]
Sent: Thursday, May 23, 2013 11:47 AM
To: Khanh. Nguyen Dang Quoc
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: performance degradation issue

 

"rados bench write", you mean? Or something else?

 

Have you checked the disk performance of each OSD outside of Ceph? In moving from one to two OSDs your performance isn't actually going to go up, because you're replicating all the data. It ought to stay flat rather than dropping, but my guess is your second disk is slow.
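As a rough back-of-envelope (assuming each disk can sustain roughly the ~190 MB/s seen in your single-OSD test, and that journals and the network are not the bottleneck):

2 OSDs x ~190 MB/s raw write  = ~380 MB/s aggregate disk bandwidth
/ 2 copies per client write   = ~190 MB/s client-visible write bandwidth

So the client write speed should stay roughly flat around the single-disk figure rather than dropping to ~90 MB/s, which is why a slow second disk is the first suspect.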

On Wednesday, May 22, 2013, Khanh. Nguyen Dang Quoc wrote:

Hi Greg,

 

It's the write benchmark.

 

Regards

Khanh

 

From: Gregory Farnum [mailto:greg@xxxxxxxxxxx]
Sent: Thursday, May 23, 2013 10:56 AM
To: Khanh. Nguyen Dang Quoc
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: performance degradation issue

 

What's the benchmark?

-Greg

On Wednesday, May 22, 2013, Khanh. Nguyen Dang Quoc wrote:

Dear all,

 

I'm now facing an issue with the Ceph block device: performance degradation.

 

ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)

ceph status

 

health HEALTH_OK

monmap e1: 2 mons at {a=49.213.67.204:6789/0,b=49.213.67.203:6789/0}, election epoch 20, quorum 0,1 a,b

osdmap e53: 2 osds: 2 up, 2 in

pgmap v535: 576 pgs: 576 active+clean; 11086 MB data, 22350 MB used, 4437 GB / 4459 GB avail

mdsmap e29: 1/1/1 up {0=a=up:active}, 1 up:standby

 

When I benchmark with one OSD, I get an I/O speed of about 190 MB/s.

 

But when I add another OSD, with replication size = 2, the write performance degrades to about 90 MB/s.

 

As I understand it, write performance should increase as more OSDs are added, but I'm not seeing that. :(

Can anyone help me check what's wrong in the config file or anywhere else?

 

Regards,

Khanh Nguyen



--
Software Engineer #42 @ http://inktank.com | http://ceph.com




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
