Re: Yet another performance tuning for CephFS

I have 3 pools.

 

0 rbd, 1 cephfs_data, 2 cephfs_metadata

 

cephfs_data has a pg_num of 1024; the total PG count across all pools is 2113.

 

POOL_NAME       USED   OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD    WR_OPS WR

cephfs_data      4000M    1000      0   2000                  0       0        0      2     0  27443 44472M

cephfs_metadata 11505k      24      0     48                  0       0        0     38 8456k   7384 14719k

rbd                  0       0      0      0                  0       0        0      0     0      0      0

 

total_objects    1024

total_used       30575M

total_avail      55857G

total_space      55887G
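For what it's worth, the replication factor can be read straight off the pool stats above: COPIES divided by OBJECTS. A throwaway shell sketch using the numbers from the cephfs_data row:

```shell
# Infer the pool's replication factor from the stats above:
# 2000 copies of 1000 objects => pool size 2 (2x replication).
objects=1000
copies=2000
echo "cephfs_data replication: $((copies / objects))x"
```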

From: David Turner [mailto:drakonstein@xxxxxxxxx]
Sent: Tuesday, July 18, 2017 2:31 AM
To: Gencer Genç <gencer@xxxxxxxxxxxxx>; Patrick Donnelly <pdonnell@xxxxxxxxxx>
Cc: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Yet another performance tuning for CephFS

 

What are your pool settings? That can affect your read/write speeds as much as anything in the ceph.conf file.
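For reference, the settings David is asking about can be pulled with the ceph CLI. A hedged sketch (pool names taken from the stats earlier in the thread; the block only runs the queries when a `ceph` binary is actually on the PATH):

```shell
# Query the per-pool settings that most affect read/write throughput.
# Requires a reachable cluster; prints the commands instead when ceph is absent.
pools="cephfs_data cephfs_metadata"
for pool in $pools; do
    if command -v ceph >/dev/null 2>&1; then
        ceph osd pool get "$pool" size     # replica count
        ceph osd pool get "$pool" pg_num   # placement-group count
    else
        echo "would run: ceph osd pool get $pool size / pg_num"
    fi
done
```

`ceph osd pool ls detail` gives the same information for every pool at once.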

 

On Mon, Jul 17, 2017, 4:55 PM Gencer Genç <gencer@xxxxxxxxxxxxx> wrote:

I don't think so.

Because I tried one thing a few minutes ago. I opened 4 SSH sessions,
ran rsync in each, and copied the big file to four different targets in
CephFS at the same time. Then I looked at the network graphs and saw
numbers up to 1.09 GB/s. But why can't a single copy/rsync exceed
200 MB/s? I really wonder what prevents it.

Gencer.
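As an aside, the four-session experiment above can be reproduced from a single shell with `xargs -P`. A self-contained sketch (it copies small dummy files under /tmp with `cp` so it runs anywhere; substitute `rsync` and the real CephFS mount point in practice):

```shell
# Run 4 copies in parallel, mimicking the four ssh sessions above.
src=/tmp/par_src; dst=/tmp/par_dst
mkdir -p "$src" "$dst"
for i in 1 2 3 4; do
    dd if=/dev/zero of="$src/big$i" bs=1M count=4 2>/dev/null
done
# -P4 => up to four concurrent copy processes
printf '%s\n' "$src"/big? | xargs -P4 -I{} cp {} "$dst/"
ls "$dst" | wc -l   # number of files copied
```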


-----Original Message-----
From: Patrick Donnelly [mailto:pdonnell@xxxxxxxxxx]
Sent: Monday, July 17, 2017 11:21 PM
To: gencer@xxxxxxxxxxxxx
Cc: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Yet another performance tuning for CephFS

On Mon, Jul 17, 2017 at 1:08 PM,  <gencer@xxxxxxxxxxxxx> wrote:
> But let's try another test. Let's say I have a 5 GB file on my server. If I
> do this:
>
> $ rsync ./bigfile /mnt/cephfs/targetfile --progress
>
> Then I see 200 MB/s at most. I think that is still slow :/ Is this expected?

Perhaps that is the bandwidth limit of your local device rsync is reading from?

--
Patrick Donnelly
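Patrick's hypothesis is easy to check: time a raw sequential read of the same file with `dd` and compare it against the 200 MB/s rsync figure. A self-contained sketch (it fabricates a small /tmp file so it runs anywhere; on the real system read `./bigfile` itself, and note that a recently written file measures the page cache, not the disk):

```shell
# Create a sample file, then measure sequential read throughput from it.
dd if=/dev/zero of=/tmp/bigfile bs=1M count=32 2>/dev/null
# dd reports throughput on stderr; this is the source-side ceiling rsync sees
dd if=/tmp/bigfile of=/dev/null bs=1M 2>&1 | tail -1
```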

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

