Bad performance of CephFS (first use)

Hi everyone,

I am new to Ceph. I have a 5-PC test cluster on which I'd like to test
CephFS behavior and performance.
I used ceph-deploy from node pc1 and installed the Ceph software (emperor
0.72.2-0.el6) on all 5 machines.
I then set up pc1 as mon and mds, pc2 and pc3 as OSDs, and pc4 and pc5 as
Ceph clients. All machines run CentOS 6.5.
The Storage Cluster machines (pc1, pc2, pc3) use the stock
kernel 2.6.32-431.el6.x86_64, and
the clients (pc4, pc5) use 3.10.38-1.el6.elrepo.x86_64.
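
For reference, the deployment from pc1 was roughly as follows (a sketch from memory, so the exact ceph-deploy invocations and their order are approximate):

```shell
# initial cluster definition with pc1 as the first monitor
ceph-deploy new pc1
# install the emperor packages on all five machines
ceph-deploy install pc1 pc2 pc3 pc4 pc5
# bring up the monitor and gather keys
ceph-deploy mon create-initial
# metadata server for CephFS on pc1
ceph-deploy mds create pc1
```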

Machine spec:
CPU: 3.5GHz Intel Core i3-4330 (dual core)
RAM: 2x4GB 1600MHz
HDD: 750GB WD7502ABYS 7200RPM 32MB Cache SATA 3.0Gb/s

All machines are connected via 1Gb/s Ethernet through a switch.

Each OSD uses an approximately 730GB disk partition formatted with XFS,
mounted at /export/sda2/osdX, and set up with:

ceph-deploy osd prepare pc2:/export/sda2/osdX
ceph-deploy osd activate pc2:/export/sda2/osdX

Then I mounted CephFS on the clients (pc4, pc5):

mount -t ceph 10.0.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/root/admin.secret
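
A simple sequential test on the mounted filesystem can be done along these lines (illustrative paths and sizes, not the exact benchmark I used):

```shell
# sequential write of a 1GB file; conv=fsync makes dd wait until
# the data has actually been flushed to the cluster
dd if=/dev/zero of=/mnt/ceph/testfile bs=1M count=1024 conv=fsync

# drop the client page cache so the read really goes over the network
echo 3 > /proc/sys/vm/drop_caches

# sequential read back
dd if=/mnt/ceph/testfile of=/dev/null bs=1M
```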

I have run some tests; here are the results:
  write:  http://postimg.org/image/zblw6a2jz/
  rewrite: http://postimg.org/image/t9mzzszxz/
  read: http://postimg.org/image/6e73w046t/

Ideally the speed should be around 100MB/s (the network limit of a single
client), but writes are around 30MB/s and reads around 70MB/s. I suspect
write performance is half of read performance because the client writes the
file and its replica at the same time.
Even so, it seems low; by that reasoning it should be around 50MB/s.
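
To check whether the write penalty really comes from replication, the pool's replication factor and the raw RADOS throughput (bypassing CephFS and the MDS) can be inspected. This assumes the default CephFS data pool is named "data", as in emperor:

```shell
# replication factor of the data pool
# (size 2 would mean each write lands on both OSDs)
ceph osd pool get data size

# raw RADOS write throughput for 30 seconds, keeping the objects
rados bench -p data 30 write --no-cleanup

# sequential read of the objects just written
rados bench -p data 30 seq

# remove the benchmark objects afterwards
rados -p data cleanup
```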

Another test I ran was writing two 10GB files from a single client, one at
a time.
I measured the utilization of the cluster machines with sar and
plotted these graphs:
pc1 (mon, mds): http://postimg.org/image/44gp2929z/
pc2 (osd1): http://postimg.org/image/p5rommxyj/
pc3 (osd2): http://postimg.org/image/l94fijbop/
pc4 (client): http://postimg.org/image/3kaz82cix/
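
The sar sampling was along these lines (interval and count are approximate):

```shell
# sample CPU, per-disk and per-interface statistics once per second
# for the duration of the test, recording to a binary file for plotting
sar -u -d -n DEV -o /tmp/sar.data 1 600
```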

The network traffic looks strange, and the disks on both OSD nodes are
utilized at 100%. Is this
normal behavior for such a load?

I'm really new to Ceph and have done almost no configuration or
tuning of the system.
At the moment I'm a bit confused about what to do to make the filesystem
perform better.
Can anyone please give me some tips? Thanks a lot.

Michal Pazdera



