Re: RBD Performance / High I/O Waits

Hi,

IIUC those are standalone OSDs (no separate RocksDB on faster devices, I assume). Have you checked OSD saturation (e.g. with iostat) on the nodes? I'd expect "slow requests" in the logs if the disks were saturated, but that would still be my first guess.
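In case it helps, on the OSD nodes I would start with something like the following (assuming the sysstat package is installed; the exact device names will of course depend on your setup):

	# per-device utilization and latency, refreshed every second;
	# watch %util and the await columns for the OSD disks
	iostat -x 1

	# commit/apply latency per OSD as reported by Ceph
	ceph osd perf

If %util stays close to 100% or await climbs while the VM is writing, the disks themselves are the bottleneck; otherwise the latency is being added somewhere above them.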


Quoting jameslipski@xxxxxxxxxxxxxx:

Greetings,

I'm using Ceph (14.2.2) in conjunction with Proxmox. Currently I'm just doing tests and ran into an issue relating to high I/O waits. To give a little background on my current Ceph configuration: we have 6 nodes, each with 2 OSDs (each node has 2x Intel SSDSC2KG019T8), and the OSD type is BlueStore. The global configuration (at least as shown in the Proxmox interface) is as follows:

[global]
	 auth_client_required = xxxx
	 auth_cluster_required = xxxx
	 auth_service_required = xxxx
	 cluster_network = 10.125.0.0/24
	 fsid = f64d2a67-98c3-4dbc-abfd-906ea7aaf314
	 mon_allow_pool_delete = true
	 mon_host = 10.125.0.101 10.125.0.102 10.125.0.103 10.125.0.105 10.125.0.106 10.125.0.104
	 osd_pool_default_min_size = 2
	 osd_pool_default_size = 3
	 public_network = 10.125.0.0/24

[client]
	keyring = /etc/pve/priv/$cluster.$name.keyring



If I'm missing any relevant information relating to my ceph setup (I'm still learning this), please let me know.

Each node has 2x Xeon E5-2660 v3. I ran into high I/O waits when running a VM. The VM is a MySQL replication server (using 8 cores) and is performing mostly writes. When I tried to narrow it down, it pointed to disk writes. The only thing I'm seeing in the Ceph logs is the following:

2020-06-08 02:43:01.062082 mgr.node01 (mgr.2914449) 8009571 : cluster [DBG] pgmap v8009574: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 2.4 MiB/s wr, 274 op/s
2020-06-08 02:43:03.063137 mgr.node01 (mgr.2914449) 8009572 : cluster [DBG] pgmap v8009575: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 0 B/s rd, 3.0 MiB/s wr, 380 op/s
2020-06-08 02:43:05.064125 mgr.node01 (mgr.2914449) 8009573 : cluster [DBG] pgmap v8009576: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 0 B/s rd, 2.9 MiB/s wr, 332 op/s
2020-06-08 02:43:07.065373 mgr.node01 (mgr.2914449) 8009574 : cluster [DBG] pgmap v8009577: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 0 B/s rd, 2.7 MiB/s wr, 313 op/s
2020-06-08 02:43:09.066210 mgr.node01 (mgr.2914449) 8009575 : cluster [DBG] pgmap v8009578: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 341 B/s rd, 2.9 MiB/s wr, 350 op/s
2020-06-08 02:43:11.066913 mgr.node01 (mgr.2914449) 8009576 : cluster [DBG] pgmap v8009579: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 341 B/s rd, 3.1 MiB/s wr, 346 op/s
2020-06-08 02:43:13.067926 mgr.node01 (mgr.2914449) 8009577 : cluster [DBG] pgmap v8009580: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 341 B/s rd, 3.5 MiB/s wr, 408 op/s
2020-06-08 02:43:15.068834 mgr.node01 (mgr.2914449) 8009578 : cluster [DBG] pgmap v8009581: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 341 B/s rd, 3.0 MiB/s wr, 320 op/s
2020-06-08 02:43:17.069627 mgr.node01 (mgr.2914449) 8009579 : cluster [DBG] pgmap v8009582: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 341 B/s rd, 2.5 MiB/s wr, 285 op/s
2020-06-08 02:43:19.070507 mgr.node01 (mgr.2914449) 8009580 : cluster [DBG] pgmap v8009583: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 341 B/s rd, 3.0 MiB/s wr, 349 op/s
2020-06-08 02:43:21.071241 mgr.node01 (mgr.2914449) 8009581 : cluster [DBG] pgmap v8009584: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 0 B/s rd, 2.8 MiB/s wr, 319 op/s
2020-06-08 02:43:23.072286 mgr.node01 (mgr.2914449) 8009582 : cluster [DBG] pgmap v8009585: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 2.7 MiB/s wr, 329 op/s
2020-06-08 02:43:25.073369 mgr.node01 (mgr.2914449) 8009583 : cluster [DBG] pgmap v8009586: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 2.8 MiB/s wr, 304 op/s
2020-06-08 02:43:27.074315 mgr.node01 (mgr.2914449) 8009584 : cluster [DBG] pgmap v8009587: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 2.2 MiB/s wr, 262 op/s
2020-06-08 02:43:29.075284 mgr.node01 (mgr.2914449) 8009585 : cluster [DBG] pgmap v8009588: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 682 B/s rd, 2.9 MiB/s wr, 342 op/s
2020-06-08 02:43:31.076180 mgr.node01 (mgr.2914449) 8009586 : cluster [DBG] pgmap v8009589: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 682 B/s rd, 2.4 MiB/s wr, 269 op/s
2020-06-08 02:43:33.077523 mgr.node01 (mgr.2914449) 8009587 : cluster [DBG] pgmap v8009590: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 682 B/s rd, 3.4 MiB/s wr, 389 op/s
2020-06-08 02:43:35.078543 mgr.node01 (mgr.2914449) 8009588 : cluster [DBG] pgmap v8009591: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 682 B/s rd, 3.1 MiB/s wr, 344 op/s
2020-06-08 02:43:37.079428 mgr.node01 (mgr.2914449) 8009589 : cluster [DBG] pgmap v8009592: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 682 B/s rd, 3.0 MiB/s wr, 334 op/s
2020-06-08 02:43:39.080419 mgr.node01 (mgr.2914449) 8009590 : cluster [DBG] pgmap v8009593: 512 pgs: 512 active+clean; 246 GiB data, 712 GiB used, 20 TiB / 21 TiB avail; 682 B/s rd, 3.3 MiB/s wr, 377 op/s


I'm not sure what could be causing the high I/O waits, or whether this is an issue with my Ceph configuration. Any suggestions would be appreciated; if you need any additional information, let me know and I'll post it.

Thank you.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



