Re: Very bad performance on a ceph rbd pool via iSCSI to VMware esx


 



What are the specs of your nodes? And what specific hard disks are you using?

On Fri, May 29, 2020, 18:41 Salsa <salsa@xxxxxxxxxxxxxx> wrote:

> I have a 3-host Ceph cluster with 10 4TB HDDs per host. I defined a
> 3-replica RBD pool and some images and presented them to a VMware host via
> iSCSI, but the write performance is so bad that I managed to freeze a VM
> doing a big rsync to a datastore inside Ceph and had to reboot its host
> (it seems I've filled up VMware's iSCSI queue).
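>
> For reference, a setup like this would typically have been created with
> something along these lines (pool and image names are placeholders, not my
> actual ones):
>
>     # 3-replica pool for RBD images (PG count is only illustrative)
>     ceph osd pool create rbd_pool 512 512 replicated
>     ceph osd pool set rbd_pool size 3
>     rbd pool init rbd_pool
>     # image later exported to the VMware host as an iSCSI LUN
>     rbd create --size 2T --pool rbd_pool vmware_lun0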
>
> Right now I'm getting per-OSD write latencies from 20 ms to 80 ms,
> sometimes peaking at 600 ms.
> Client throughput is around 4 MB/s.
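>
> (For context: the per-OSD latencies above are the kind of figures that
> "ceph osd perf" reports as commit_latency(ms) / apply_latency(ms), and the
> client throughput is comparable to the "client:" I/O line in "ceph -s".)
>
>     # per-OSD commit/apply latency snapshot
>     ceph osd perf
>     # cluster status, including client read/write throughput
>     ceph -s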
>
> Using a 4 MB stripe-unit, stripe-count 1 image I got 1,955,359 B/s
> (~1.9 MB/s) inside the VM.
> With a 1 MB stripe-unit, stripe-count 1 image I got 2,323,206 B/s
> (~2.3 MB/s) inside the same VM.
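>
> For clarity, those two test images correspond to rbd striping options
> roughly like this (sizes and names are illustrative):
>
>     rbd create --size 100G --stripe-unit 4M --stripe-count 1 rbd_pool/test_4m
>     rbd create --size 100G --stripe-unit 1M --stripe-count 1 rbd_pool/test_1m
>
> A raw-pool write baseline outside the iSCSI/VMware path can be taken with
> something like:
>
>     # 30 seconds of 4 MiB writes with 16 concurrent ops; --no-cleanup keeps
>     # the benchmark objects so a read bench could follow
>     rados bench -p rbd_pool 30 write -t 16 --no-cleanup
>     # remove the benchmark objects afterwards
>     rados -p rbd_pool cleanup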
>
> I think the performance is far slower than it should be, and that I can
> fix this by correcting some configuration.
>
> Any advice?
>
> --
> Salsa
>
> Sent with [ProtonMail](https://protonmail.com) Secure Email.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


