Re: CephFS - Small file - single thread - read performance.

Yes, and to be sure I did the read test again from another client. 


-----Original Message-----
From: David C [mailto:dcsysengineer@xxxxxxxxx] 
Sent: 18 January 2019 16:00
To: Marc Roos
Cc: aderumier; Burkhard.Linke; ceph-users
Subject: Re: CephFS - Small file - single thread - read performance.



On Fri, 18 Jan 2019, 14:46 Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:


	 
	
	[@test]# time cat 50b.img > /dev/null
	
	real    0m0.004s
	user    0m0.000s
	sys     0m0.002s
	[@test]# time cat 50b.img > /dev/null
	
	real    0m0.002s
	user    0m0.000s
	sys     0m0.002s
	[@test]# time cat 50b.img > /dev/null
	
	real    0m0.002s
	user    0m0.000s
	sys     0m0.001s
	[@test]# time cat 50b.img > /dev/null
	
	real    0m0.002s
	user    0m0.001s
	sys     0m0.001s
	[@test]#
	
	Luminous, CentOS 7.6 kernel CephFS mount, 10Gbit, SSD metadata, HDD data, 
	MDS 2.2GHz
	


Did you drop the caches on your client before reading the file? 
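
For what it's worth, one way to start from a cold cache on the client before re-reading (needs root; 50b.img is the file from the test above):

sync
echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes on the client
time cat 50b.img > /dev/null         # this read should now include the MDS/OSD round trips again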




	-----Original Message-----
	From: Alexandre DERUMIER [mailto:aderumier@xxxxxxxxx] 
	Sent: 18 January 2019 15:37
	To: Burkhard Linke
	Cc: ceph-users
	Subject: Re: CephFS - Small file - single thread - read performance.
	
	Hi,
	I don't have such high latencies:
	
	# time cat 50bytesfile > /dev/null
	
	real    0m0,002s
	user    0m0,001s
	sys     0m0,000s
	
	
	(It's on a Ceph SSD cluster (Mimic), kernel CephFS client (4.18), 10Gbit 
	network with low latency too; client/server have 3GHz CPUs.)
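	
	To get a feeling for how much of the per-file time is pure network round 
	trip, one could also measure the client-to-MDS latency directly (the 
	hostname below is just a placeholder):
	
	ping -c 10 mds-host    # client -> MDS round-trip time, to compare against the per-file overhead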
	
	
	
	----- Original Message -----
	From: "Burkhard Linke" <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
	To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
	Sent: Friday, 18 January 2019 15:29:45
	Subject: Re: CephFS - Small file - single thread - read performance.
	
	Hi, 
	
	On 1/18/19 3:11 PM, jesper@xxxxxxxx wrote: 
	> Hi. 
	> 
	> We have the intention of using CephFS for some of our shares, which 
	> we'd like to spool to tape as part of our normal backup schedule. 
	> CephFS works nicely for large files, but for "small" ones .. < 0.1MB .. 
	> there seems to be an "overhead" of 20-40ms per file. I tested like this:
	> 
	> root@abe:/nfs/home/jk# time cat /ceph/cluster/rsyncbackups/13kbfile > /dev/null
	> 
	> real 0m0.034s
	> user 0m0.001s
	> sys 0m0.000s
	> 
	> And from local page-cache right after. 
	> root@abe:/nfs/home/jk# time cat /ceph/cluster/rsyncbackups/13kbfile > /dev/null
	> 
	> real 0m0.002s
	> user 0m0.002s
	> sys 0m0.000s
	> 
	> Giving a ~20ms overhead for a single file. 
	> 
	> This is about 3x higher than on our local filesystems (XFS) based on 
	> the same spindles. 
	> 
	> CephFS metadata is on SSD - everything else is on big, slow HDDs (in 
	> both cases). 
	> 
	> Is this what everyone else sees? 
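	
	A single small-file read is quite noisy at this scale; a more stable 
	estimate of the per-file overhead is to time a loop over many small, 
	uncached files and divide by the file count. A minimal sketch 
	("smallfiles" is a hypothetical directory of such files):
	
	sync; echo 3 > /proc/sys/vm/drop_caches    # cold client cache (needs root)
	time sh -c 'for f in /ceph/cluster/rsyncbackups/smallfiles/*; do cat "$f" > /dev/null; done'
	# real time / number of files ~= average per-file latency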
	
	
	Each file access on the client side requires the acquisition of a 
	corresponding locking entity ('file capability') from the MDS. This adds 
	an extra network round trip to the MDS. In the worst case the MDS needs 
	to request a capability release from another client which still holds 
	the cap (e.g. the file is still in its page cache), adding yet another 
	network round trip. 
	
	
	CephFS is not NFS, and has a strong consistency model. This comes at a 
	price. 
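	
	A rough way to see the MDS round trip separately from the OSD data read 
	on a cold client (sketch only; after the stat the dentry, inode and caps 
	are cached, so the cat mostly measures the object read):
	
	sync; echo 3 > /proc/sys/vm/drop_caches                    # cold client cache (needs root)
	time stat /ceph/cluster/rsyncbackups/13kbfile > /dev/null  # roughly the MDS lookup/capability round trip
	time cat /ceph/cluster/rsyncbackups/13kbfile > /dev/null   # adds the OSD read of the file data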
	
	
	Regards, 
	
	Burkhard 
	
	
	


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



