Ceph Fuse Strange Behavior (Very Strange)

Hi all,
    I ran two small tests on our CephFS cluster:

    time for i in {1..10000}; do echo hello > file${i}; done && time rm * && time for i in {1..10000}; do echo hello > file${i}; done && time rm *
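
    (The loop times four phases, create / rm / create / rm, which correspond to the four real/user/sys blocks in each result below. The same thing split out, assuming bash and that the current directory is the test folder:)

    # phase 1: create 10000 small files
    time for i in {1..10000}; do echo hello > file${i}; done
    # phase 2: remove them
    time rm *
    # phase 3: create them again
    time for i in {1..10000}; do echo hello > file${i}; done
    # phase 4: remove them again
    time rm *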
    
    Client A: kernel client

    Client B: ceph-fuse client

    First I created folder "create_by_A" on Client A and ran the test in "create_by_A" from Client A,
    and created folder "create_by_B" on Client B and ran the test in "create_by_B" from Client B.
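
    (For context, the mounts look roughly like the following; the mount point and the auth name/secretfile details are only placeholders, the monitor addresses are from ceph -s below:)

    # Client A: CephFS kernel client
    mount -t ceph 10.0.0.26,10.0.0.38,10.0.0.62:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # Client B: ceph-fuse client
    ceph-fuse -m 10.0.0.26,10.0.0.38,10.0.0.62 /mnt/cephfs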
   
    result on A (kernel client, in create_by_A):
      	real    0m3.768s
	user    0m0.236s
	sys     0m0.244s

	real    0m4.542s
	user    0m0.068s
	sys     0m0.228s

	real    0m3.545s
	user    0m0.200s
	sys     0m0.256s

	real    0m4.941s
	user    0m0.024s
	sys     0m0.264s

    result on B (fuse client, in create_by_B):
	real    0m16.768s
	user    0m0.368s
	sys     0m0.716s

	real    0m27.542s
	user    0m0.120s
	sys     0m0.888s

	real    0m15.990s
	user    0m0.288s
	sys     0m0.792s

	real    0m20.904s
	user    0m0.243s
	sys     0m0.577s

   That seems normal, but then I
   ran the test in folder "create_by_A" from Client B, and
   ran the test in folder "create_by_B" from Client A.

   result on A (kernel client, in create_by_B):
	real    0m3.832s
	user    0m0.200s
	sys     0m0.264s

	real    0m8.326s
	user    0m0.100s
	sys     0m0.192s

	real    0m5.934s
	user    0m0.264s
	sys     0m0.368s

	real    0m4.117s
	user    0m0.104s
	sys     0m0.200s
 
   result on B (fuse client, in create_by_A):
	real    2m25.713s
	user    0m0.592s
	sys     0m1.120s

	real    2m16.726s
	user    0m0.084s
	sys     0m1.228s

	real    2m9.301s
	user    0m0.440s
	sys     0m1.104s

	real    2m19.365s
	user    0m0.200s
	sys     0m1.184s

   The fuse client is now extremely slow, which seems very strange.
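
   If it helps, I can collect counters from the fuse client while a slow run is going, something like the following (the exact .asok filename under /var/run/ceph/ depends on the client name and pid, so that path is only a placeholder):

   # on Client B: dump ceph-fuse perf counters (client/objecter latencies)
   ceph daemon /var/run/ceph/ceph-client.admin.asok perf dump
   # on the active MDS (rndcl67): list client sessions and their caps
   ceph daemon mds.rndcl67 session ls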

   System version: Ubuntu 14.04
   Kernel version: 4.4.0-28-generic
   Ceph version: 10.2.2

    ceph -s :
        health HEALTH_WARN
            noscrub,nodeep-scrub,sortbitwise flag(s) set
     monmap e1: 3 mons at {rndcl26=10.0.0.26:6789/0,rndcl38=10.0.0.38:6789/0,rndcl62=10.0.0.62:6789/0}
            election epoch 40, quorum 0,1,2 rndcl26,rndcl38,rndcl62
      fsmap e24091: 1/1/1 up {0=rndcl67=up:active}, 1 up:standby
     osdmap e9202: 119 osds: 119 up, 119 in
            flags noscrub,nodeep-scrub,sortbitwise
      pgmap v11577714: 8256 pgs, 3 pools, 62234 GB data, 165 Mobjects
            211 TB used, 221 TB / 432 TB avail
                8256 active+clean


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



