Re: Very bad performance on a ceph rbd pool via iSCSI to VMware esx


 



Hi Salsa,
 
More information about your Ceph cluster and VMware infrastructure is pretty much required.
 
What Ceph version?
Ceph cluster info - i.e. how many Monitors, OSD hosts, and iSCSI gateways, and are these components HW or VMs?
Do the Ceph components meet recommended hardware levels for CPU, RAM, HDs (Flash or spinners)?
Basic Ceph stats like "ceph osd tree" and "ceph df"
 
What VMware version?
Software or Hardware iSCSI on the VMW side?
 
What's the storage network speed, and are things like jumbo frames set?
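A quick way to verify jumbo frames end-to-end is a do-not-fragment ping sized to exactly fill the frame. The payload size comes from MTU minus the IP and ICMP headers (a sketch; the gateway IP is a placeholder):

```python
# Payload size for a do-not-fragment ping that exactly fills a 9000-byte
# jumbo frame: MTU minus 20-byte IP header minus 8-byte ICMP header.
MTU = 9000
IP_HDR = 20
ICMP_HDR = 8

payload = MTU - IP_HDR - ICMP_HDR
print(payload)  # 8972

# On Linux, test the path with (replace <iscsi-gateway-ip> with a real host):
#   ping -M do -s 8972 <iscsi-gateway-ip>
# If any hop lacks jumbo frames, the ping fails instead of fragmenting.
```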
 
In general, 3 OSD hosts is the bare minimum for Ceph, so you're going to get minimal performance.
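As a back-of-envelope sanity check (hypothetical numbers - assuming ~120 random-write IOPS per 7.2k spinner, no SSD WAL/DB devices, and a replica-3 pool), a 3-host / 30-HDD cluster's small-write ceiling is roughly:

```python
# Rough estimate of client-visible write capacity for a small all-HDD
# Ceph cluster. All figures are assumptions - adjust to the real hardware.
osds = 3 * 10          # 3 hosts x 10 HDDs = 30 OSDs
iops_per_hdd = 120     # typical 7.2k RPM spinner, random writes
replication = 3        # pool size 3: every client write hits 3 OSDs

raw_iops = osds * iops_per_hdd
client_iops = raw_iops / replication  # total disk work triples per write

print(f"raw cluster write IOPS:  {raw_iops}")
print(f"client write IOPS:       {client_iops:.0f}")

# At 4 KiB per IO that works out to only a few MB/s of random writes:
mb_s = client_iops * 4096 / 1e6
print(f"~{mb_s:.1f} MB/s of 4 KiB random writes")
```

Under those assumptions the ceiling lands in the single-digit MB/s range for small random writes, which is in the same ballpark as the ~4 MB/s reported below.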
 

Andrew Ferris
Network & System Management
UBC Centre for Heart & Lung Innovation
St. Paul's Hospital, Vancouver
http://www.hli.ubc.ca
 


>>> Salsa <salsa@xxxxxxxxxxxxxx> 2/13/2020 7:56 AM >>>
I have a 3-host Ceph storage setup with 10 4TB HDDs per host. I defined a 3-replica RBD pool and some images and presented them to a VMware host via iSCSI, but the write performance is so bad that I managed to freeze a VM doing a big rsync to a datastore inside Ceph and had to reboot its host (it seems I filled up VMware's iSCSI queue).

Right now I'm getting write latencies from 20 ms to 80 ms (per OSD), sometimes peaking at 600 ms (per OSD).
Client throughput is around 4 MB/s.
Using a 4 MB, stripe-count 1 image I got 1.955.359 B/s (~2 MB/s) inside the VM.
On a 1 MB, stripe-count 1 image I got 2.323.206 B/s (~2.3 MB/s) inside the same VM.
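Those latencies alone go a long way toward explaining the throughput: with a single outstanding IO, per-op latency caps IOPS directly. A rough rule-of-thumb sketch using the figures above:

```python
# IOPS achievable at queue depth 1 for a given per-op latency:
# one IO completes every latency_ms milliseconds, so IOPS = 1000 / latency_ms.
qd1_iops = {}
for latency_ms in (20, 80, 600):
    qd1_iops[latency_ms] = 1000 / latency_ms
    print(f"{latency_ms:>3} ms/op -> ~{qd1_iops[latency_ms]:.1f} IOPS at queue depth 1")
```

At 80 ms per op that's only ~12 IOPS per outstanding IO, so serialized small writes (like an rsync of many files) will crawl regardless of raw disk bandwidth.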

I think the performance is far slower than it should be, and that I can fix it by correcting some configuration.

Any advice?

--
Salsa
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



