Re: Very bad performance on a ceph rbd pool via iSCSI to VMware esx

I'm currently running Nautilus:

[root@ceph01 ~]# ceph -s
  cluster:
    id:     eb4aea44-0c63-4202-b826-e16ea60ed54d
    health: HEALTH_WARN
            too few PGs per OSD (25 < min 30)

  services:
    mon:         3 daemons, quorum ceph01,ceph02,ceph03 (age 8d)
    mgr:         ceph02(active, since 9d), standbys: ceph03, ceph01
    osd:         30 osds: 30 up (since 9d), 30 in (since 9d)
    rbd-mirror:  3 daemons active (139316, 214105, 264563)
    tcmu-runner: 15 daemons active (ceph01:rbd/Disco_vm_01, ceph01:rbd/Disco_vm_02, ceph01:rbd/Disk1, ceph01:rbd/Disk2, ceph01:rbd/Disk3, ceph02:rbd/Disco_vm_01, ceph02:rbd/Disco_vm_02, ceph02:rbd/Disk1, ceph02:rbd/Disk2, ceph02:rbd/Disk3, ceph03:rbd/Disco_vm_01, ceph03:rbd/Disco_vm_02, ceph03:rbd/Disk1, ceph03:rbd/Disk2, ceph03:rbd/Disk3)

  data:
    pools:   1 pools, 256 pgs
    objects: 1.42M objects, 4.4 TiB
    usage:   13 TiB used, 96 TiB / 109 TiB avail
    pgs:     255 active+clean
             1   active+clean+scrubbing+deep

  io:
    client:   1.3 MiB/s rd, 1.8 MiB/s wr, 140 op/s rd, 82 op/s wr

[root@ceph01 ~]# ceph --version
ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
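
About the "too few PGs per OSD" warning: I'm thinking of raising the pool's PG count, something along these lines (512 is just a first guess for 30 OSDs x 3 replicas; I'd verify the target with the PG calculator first, and I believe Nautilus adjusts pgp_num on its own, but setting it explicitly shouldn't hurt):

[root@ceph01 ~]# ceph osd pool set rbd pg_num 512
[root@ceph01 ~]# ceph osd pool set rbd pgp_num 512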

3 hosts, each running 1 monitor, 1 MGR, 10 OSDs, 1 MDS, 1 tcmu-runner and 1 rbd-mirror. The hosts are physical hardware (not VMs).
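
The iSCSI gateways are the ceph-iscsi/tcmu-runner ones on those same three hosts; the gateway/LUN layout can be dumped from any gateway node with gwcli (as far as I recall the syntax):

[root@ceph01 ~]# gwcli ls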

Hardware-wise: each host has 64 GB RAM and 32 cores, but we are using HDDs (spinners).
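
Since these are spinners, per-OSD commit/apply latency can be watched live with:

[root@ceph01 ~]# ceph osd perf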

VMware: 'esxcli' reports version 6.0.0, and software iSCSI on the VMware side (I think).
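
To confirm it really is the software initiator, I can list the iSCSI adapters on the ESXi host (the software adapter should show up with the iscsi_vmk driver, if I remember right):

~ # esxcli iscsi adapter list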

The storage network is 1 Gb with no jumbo frames. I have a cluster network (1 Gb) and a public network (1 Gb) configured.
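
If jumbo frames turn out to matter here, the test would be raising the MTU end to end (NICs and switches) and verifying with a do-not-fragment ping; 8972 is 9000 minus 28 bytes of IP/ICMP headers, and "eth1" is just a placeholder for whatever the storage interface is on my hosts:

[root@ceph01 ~]# ip link set dev eth1 mtu 9000
[root@ceph01 ~]# ping -M do -s 8972 ceph02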

[root@ceph01 ~]# ceph osd tree
ID CLASS WEIGHT    TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       109.16061 root default
-3        36.38687     host ceph01
 0   hdd   3.63869         osd.0       up  1.00000 1.00000
 1   hdd   3.63869         osd.1       up  1.00000 1.00000
 3   hdd   3.63869         osd.3       up  1.00000 1.00000
 4   hdd   3.63869         osd.4       up  1.00000 1.00000
 5   hdd   3.63869         osd.5       up  1.00000 1.00000
 6   hdd   3.63869         osd.6       up  1.00000 1.00000
 7   hdd   3.63869         osd.7       up  1.00000 1.00000
 8   hdd   3.63869         osd.8       up  1.00000 1.00000
 9   hdd   3.63869         osd.9       up  1.00000 1.00000
10   hdd   3.63869         osd.10      up  1.00000 1.00000
-5        36.38687     host ceph02
11   hdd   3.63869         osd.11      up  1.00000 1.00000
12   hdd   3.63869         osd.12      up  1.00000 1.00000
13   hdd   3.63869         osd.13      up  1.00000 1.00000
14   hdd   3.63869         osd.14      up  1.00000 1.00000
15   hdd   3.63869         osd.15      up  1.00000 1.00000
16   hdd   3.63869         osd.16      up  1.00000 1.00000
17   hdd   3.63869         osd.17      up  1.00000 1.00000
18   hdd   3.63869         osd.18      up  1.00000 1.00000
19   hdd   3.63869         osd.19      up  1.00000 1.00000
20   hdd   3.63869         osd.20      up  1.00000 1.00000
-7        36.38687     host ceph03
21   hdd   3.63869         osd.21      up  1.00000 1.00000
22   hdd   3.63869         osd.22      up  1.00000 1.00000
23   hdd   3.63869         osd.23      up  1.00000 1.00000
24   hdd   3.63869         osd.24      up  1.00000 1.00000
25   hdd   3.63869         osd.25      up  1.00000 1.00000
26   hdd   3.63869         osd.26      up  1.00000 1.00000
27   hdd   3.63869         osd.27      up  1.00000 1.00000
28   hdd   3.63869         osd.28      up  1.00000 1.00000
29   hdd   3.63869         osd.29      up  1.00000 1.00000
30   hdd   3.63869         osd.30      up  1.00000 1.00000

[root@ceph01 ~]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL      USED       RAW USED     %RAW USED
    hdd       109 TiB     96 TiB     13 TiB       13 TiB         11.99
    TOTAL     109 TiB     96 TiB     13 TiB       13 TiB         11.99

POOLS:
    POOL     ID     STORED      OBJECTS     USED       %USED     MAX AVAIL
    rbd      29    4.3 TiB        1.42M   13 TiB       13.13        29 TiB
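
To separate the iSCSI/tcmu layer from raw cluster performance, a quick baseline with rados bench should help (pool name from my setup; a throwaway test pool would be cleaner, since the benchmark writes real objects into the pool):

[root@ceph01 ~]# rados bench -p rbd 30 write --no-cleanup
[root@ceph01 ~]# rados bench -p rbd 30 seq
[root@ceph01 ~]# rados -p rbd cleanup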

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, February 13, 2020 4:50 PM, Andrew Ferris <Andrew.Ferris@xxxxxxxxxx> wrote:

> Hi Salsa,
>
> More information about your Ceph cluster and VMware infrastructure is pretty much required.
>
> What Ceph version?
> Ceph cluster info - i.e. how many Monitors, OSD hosts, iSCSI gateways, and are these components HW or VMs?
> Do the Ceph components meet recommended hardware levels for CPU, RAM, HDs (Flash or spinners)?
> Basic Ceph stats like "ceph osd tree" and "ceph df"
>
> What VMware version?
> Software or Hardware iSCSI on the VMW side?
>
> What's the Storage network speed and are things like jumbo frames set?
>
> In general, 3 OSD hosts is the bare minimum for Ceph, so you're going to get minimum performance.
>
> Andrew Ferris
> Network & System Management
> UBC Centre for Heart & Lung Innovation
> St. Paul's Hospital, Vancouver
> [http://www.hli.ubc.ca](http://www.hli.ubc.ca/)
>
>>>> Salsa <salsa@xxxxxxxxxxxxxx> 2/13/2020 7:56 AM >>>
> I have a 3-host Ceph storage setup with 10 x 4 TB HDDs per host. I defined a 3-replica rbd pool and some images and presented them to a VMware host via iSCSI, but the write performance is so bad that I managed to freeze a VM doing a big rsync to a datastore inside Ceph and had to reboot its host (it seems I filled up VMware's iSCSI queue).
>
> Right now I'm getting write latencies from 20 ms to 80 ms (per OSD), sometimes peaking at 600 ms (per OSD).
> Client throughput is around 4 MB/s.
> Using a 4 MB stripe 1 image I got 1.955.359 B/s (~1.9 MB/s) inside the VM.
> On a 1 MB stripe 1 image I got 2.323.206 B/s (~2.3 MB/s) inside the same VM.
>
> I think the performance is far slower than it should be, and that I can fix this by correcting some configuration.
>
> Any advice?
>
> --
> Salsa
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



