Re: Yet another meltdown starting

OK, the command finally executed and it looks like the cluster is running stably for now. However, I'm afraid that 90s might not be sustainable.

Questions: Can I leave mon_mgr_beacon_grace at 90s? Is there a better parameter to set? Why does the MGR get overloaded on a rather small cluster with 160 OSDs? How does this scale?
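In case it helps others following along, this is how I check what the MONs currently hold for the option and whether the override is recorded persistently (assuming the centralized config database, i.e. Mimic or later; the default is 30s as far as I can tell):

# ceph config get mon mon_mgr_beacon_grace
# ceph config dump | grep mon_mgr_beacon_grace

The first prints the value the MONs will actually use, the second shows whether my "ceph config set global ..." from below ended up in the config database.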

Some more info:

Here is a pool stats output for the workload just after the beacon grace increase succeeded:

pool con-fs2-meta1 id 12
  client io 1007 KiB/s rd, 1.1 MiB/s wr, 4 op/s rd, 421 op/s wr

pool con-fs2-meta2 id 13
  client io 0 B/s wr, 0 op/s rd, 21 op/s wr

pool con-fs2-data id 14
  client io 172 MiB/s rd, 1.8 GiB/s wr, 70 op/s rd, 4.19 kop/s wr

This is well above the aggregated IOP/s limit for the fs data pool, and it might have been even higher before I got the prompt back. The fs status can be seen here, taken after the IO went down:

con-fs2 - 1674 clients
=======
+------+----------------+---------+---------------+-------+-------+
| Rank |     State      |   MDS   |    Activity   |  dns  |  inos |
+------+----------------+---------+---------------+-------+-------+
|  0   |     active     | ceph-08 | Reqs:  119 /s | 5157k | 4673k |
| 0-s  | standby-replay | ceph-12 | Evts:  437 /s | 27.5k | 21.4k |
+------+----------------+---------+---------------+-------+-------+
+---------------------+----------+-------+-------+
|         Pool        |   type   |  used | avail |
+---------------------+----------+-------+-------+
|    con-fs2-meta1    | metadata |  175M |  954G |
|    con-fs2-meta2    |   data   |    0  |  954G |
|     con-fs2-data    |   data   |  131T |  858T |
| con-fs2-data-ec-ssd |   data   |  177G | 2289G |
+---------------------+----------+-------+-------+

con-fs2-meta2 is the default data pool; it is not used for storing any file data.
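For completeness, this is how the pool roles can be checked (ceph fs get prints pool IDs, which match the IDs in the pool stats above; as far as I know, the default data pool is the first one in the data_pools list):

# ceph fs ls
# ceph fs get con-fs2 | grep data_pool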

Best regards and thanks for any pointers.
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Frank Schilder <frans@xxxxxx>
Sent: 11 May 2020 14:52:05
To: ceph-users
Subject:  Yet another meltdown starting

Hi all,

another client-load induced meltdown. It is just starting and I hope we get it under control. This time, it's the MGRs failing under the load. It looks like they don't manage to get their beacons to the MONs and are kicked out as unresponsive. However, the processes are up and fine. It's just an enormous load.

I'm trying to increase

# ceph config set global mon_mgr_beacon_grace 90

but the command doesn't complete, I guess because all the MGRs are out. Is there any way to force the MONs *not* to mark the MGRs as unresponsive?
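One thing I'm considering as a fallback (untested here, so treat it as a sketch): inject the option directly into the running MONs, which should not depend on the MGRs at all, and only persist it with "ceph config set" once things calm down:

# ceph tell mon.* injectargs '--mon_mgr_beacon_grace=90'

As far as I understand, injectargs only changes the in-memory value on each MON, so it does not survive a MON restart; that is why I still want the config set to go through eventually.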

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx