Ceph vs. ext4 performance on a single node

Hi,

I ran an experiment that produced some interesting results.

I created two OSDs (ext4), each on its own SSD attached to the same PC. I also
configured one monitor and one MDS on that PC, so my OSDs, monitor,
and MDS are all located on the same node.
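
For reference, a minimal sketch of that layout in ceph.conf form (the
hostname, data paths, and journal placement below are placeholders, not
my exact values):

  [global]
          auth supported = none
  [mon.a]
          host = localnode
          mon addr = 127.0.0.1:6789
  [mds.a]
          host = localnode
  [osd.0]
          host = localnode
          osd data = /srv/osd.0             ; first SSD, ext4
          osd journal = /srv/osd.0/journal
  [osd.1]
          host = localnode
          osd data = /srv/osd.1             ; second SSD, ext4
          osd journal = /srv/osd.1/journal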

I started the Ceph services and mounted Ceph on a local directory on
that PC, so the client, OSDs, monitor, and MDS are all on the same node.
I expect this excludes the network communication cost.
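
Concretely, the client is the kernel client pointed at the local
monitor, something like this (/mnt/ceph is just an example mount point):

  mount -t ceph 127.0.0.1:6789:/ /mnt/ceph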

I ran the fio benchmark, which creates one 10 GB file (larger than main
memory) on the Ceph mount point. It performs sequential read/write and
random read/write on that file and reports the throughput.
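
The fio invocation is along these lines, with --rw switched among read,
randread, write, and randwrite for the four cases (the 4k block size and
job name here are illustrative, not necessarily what I used):

  fio --name=test --directory=/mnt/ceph --size=10g \
      --rw=write --bs=4k --direct=1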

Next I unmounted Ceph and stopped the Ceph services. I created ext4 on
the same SSD that was used as an OSD before, then ran the same workloads
and collected the throughput results.
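
That is, roughly (/dev/sdX stands in for the SSD):

  mkfs.ext4 /dev/sdX
  mount /dev/sdX /mnt/ext4

and then the same fio runs with --directory=/mnt/ext4.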

Here are the results (throughput in KB/s):

           Seq-read   Rand-read   Seq-write   Rand-write
ceph           7378        4740         790         1211
ext4          58260       17334       54697        34257

As you can see, Ceph shows a huge performance drop, even with the
monitor, MDS, client, and OSDs located on the same physical machine.
Another interesting thing is that sequential write has lower throughput
than random write under Ceph, which is not quite clear to me...

Does anyone have an idea why Ceph shows such a performance drop?

Thanks,
Sheng


-- 
Sheng Qiu
Texas A & M University
Room 332B Wisenbaker
email: herbert1984106@xxxxxxxxx
College Station, TX 77843-3259