I opened a thread recently here asking what can generally be accepted as 'ceph overhead' when using the file system. I wonder if the performance loss I see on a CephFS 1x replication pool compared to native performance is really so large: between 2x and 5.6x slower than native disk performance.
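A comparison like this is typically made by running the same benchmark once against the native disk and once against the CephFS mount. A minimal sketch using fio; the mount points, file size and job parameters below are illustrative assumptions, not figures from the thread:

  # assumed native filesystem mounted at /mnt/native
  fio --name=native-write --directory=/mnt/native --rw=write --bs=4M --size=4G \
      --direct=1 --numjobs=1 --iodepth=1 --ioengine=libaio
  # identical job against the CephFS kernel mount, assumed at /mnt/cephfs
  fio --name=cephfs-write --directory=/mnt/cephfs --rw=write --bs=4M --size=4G \
      --direct=1 --numjobs=1 --iodepth=1 --ioengine=libaio

The ratio of the two reported bandwidths gives the overhead figure being asked about.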
From: Maged Mokhtar [mailto:mmokhtar@xxxxxxxxxxx]
Sent: 15 January 2019 22:55
To: Ketil Froyn; ceph-users@xxxxxxxxxxxxxx
Subject: Re: Recommendations for sharing a file system to a heterogeneous client network?
Hi Ketil,

I have not tested creation/deletion, but the read/write performance was much better than in the link you posted. Using a CTDB setup based on Robert's presentation, we were getting 800 MB/s write performance at queue depth = 1 and 2.2 GB/s at queue depth = 32 from a single CTDB/Samba gateway.

For the QD=32 test we used 2 Windows clients against the same gateway (to avoid limitations on the Windows side). Tests were done with the Microsoft diskspd tool using 4M blocks and caching disabled. The gateway had 2x 40G NICs, one for the Windows network and the other for the CephFS client; each was doing 20 Gbps (50% utilization). The CPU had 24 cores running at 85% utilization, taken up by the smbd process. We used Ubuntu 16.04 CTDB/Samba with a SUSE SLE15 kernel for the CephFS kernel client. Ceph was Luminous 12.2.7.

Maged
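The diskspd runs described above would have looked roughly like the following. The 4M block size, the two queue depths and the disabled caching come from the message; the UNC path, file size, thread count and duration are illustrative assumptions:

  # write-only test: 4M blocks, queue depth 1, caching disabled (-Sh)
  diskspd.exe -b4M -o1 -t1 -w100 -d60 -Sh -c64G \\gateway\share\testfile.dat
  # queue depth 32 variant (run from two Windows clients in the original test)
  diskspd.exe -b4M -o32 -t1 -w100 -d60 -Sh -c64G \\gateway\share\testfile.dat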
On 15/01/2019 22:04, Ketil Froyn wrote:
--
Maged Mokhtar
CEO PetaSAN
4 Emad El Deen Kamel
Cairo 11371, Egypt
www.petasan.org
+201006979931
skype: maged.mokhtar