Re: _delete_some new onodes has appeared since PG removal started

Nope, upmap is currently impossible on this cluster 😬
due to the client lib (the team is working on an update now).
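
For anyone curious, a quick way to check whether the connected clients would
even allow it is something like the below (commands as in Nautilus;
set-require-min-compat-client is only what we'd run after the client lib update):

    # show the release/feature breakdown of connected clients
    ceph features

    # once every client reports luminous or newer, upmap can be allowed with:
    # ceph osd set-require-min-compat-client luminous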

ID   CLASS WEIGHT   REWEIGHT SIZE    RAW USE DATA    OMAP   META    AVAIL   %USE VAR  PGS STATUS TYPE NAME
-166       10.94385        -  11 TiB 382 GiB 317 GiB 64 KiB  66 GiB  11 TiB 3.42 1.00   -                host meta115
 768  nvme  0.91199  1.00000 932 GiB  36 GiB  30 GiB  8 KiB 6.0 GiB 896 GiB 3.85 1.13   1     up             osd.768
 769  nvme  0.91199  1.00000 932 GiB  22 GiB  18 GiB  4 KiB 4.0 GiB 909 GiB 2.41 0.71   0     up             osd.769
 770  nvme  0.91199  1.00000 932 GiB  38 GiB  31 GiB  8 KiB 6.3 GiB 894 GiB 4.04 1.18   2     up             osd.770
 771  nvme  0.91199  1.00000 932 GiB  22 GiB  18 GiB    0 B 3.9 GiB 910 GiB 2.33 0.68   0     up             osd.771
 772  nvme  0.91199  1.00000 932 GiB  37 GiB  30 GiB  4 KiB 6.1 GiB 895 GiB 3.93 1.15   2     up             osd.772
 773  nvme  0.91199  1.00000 932 GiB  34 GiB  28 GiB  4 KiB 6.0 GiB 898 GiB 3.65 1.07   1     up             osd.773
 774  nvme  0.91199  1.00000 932 GiB  32 GiB  26 GiB  8 KiB 5.4 GiB 900 GiB 3.43 1.00   1     up             osd.774
 775  nvme  0.91199  1.00000 932 GiB  36 GiB  30 GiB  4 KiB 6.1 GiB 895 GiB 3.91 1.14   2     up             osd.775
 776  nvme  0.91199  1.00000 932 GiB  36 GiB  30 GiB  4 KiB 6.4 GiB 895 GiB 3.90 1.14   1     up             osd.776
 777  nvme  0.91199  1.00000 932 GiB  36 GiB  30 GiB  8 KiB 6.1 GiB 895 GiB 3.89 1.14   2     up             osd.777
 778  nvme  0.91199  1.00000 932 GiB  32 GiB  27 GiB  8 KiB 5.5 GiB 899 GiB 3.48 1.02   1     up             osd.778
 779  nvme  0.91199  1.00000 932 GiB  21 GiB  17 GiB  4 KiB 3.7 GiB 911 GiB 2.23 0.65   0     up             osd.779
                       TOTAL  11 TiB 382 GiB 317 GiB 65 KiB  66 GiB  11 TiB 3.42
MIN/MAX VAR: 0.65/1.18  STDDEV: 0.66

The second PG landed... I don't see any huge spikes in the ceph_osd_op_latency metric.
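
(To cross-check the same thing from the OSD side, something like this against
the admin socket should work, assuming jq is on the node; osd.768 is just an
example id from the list above:

    # per-OSD op latency straight from the daemon's perf counters
    ceph daemon osd.768 perf dump | jq '.osd.op_latency'

)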


k


> On 21 Apr 2021, at 17:12, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> 
> Yes, with the fixes in 14.2.19 PG removal is really much much much
> better than before.
> 
> But on some clusters (in particular with rocksdb on the hdd) there is
> still a rare osd flap at the end of the PG removal -- indicated by the
> logs I shared earlier.
> Our workaround to prevent that new flap is to increase
> osd_heartbeat_grace (e.g. to 45).
> 
> With 3.5M objects in a PG, I suggest that you try moving one PG with
> upmap and watch how it goes (especially at the end).
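
(For reference, a rough sketch of those two steps using centralized config;
<pgid> and the OSD ids are placeholders to fill in, and neither has been
applied here yet:

    # give OSDs more heartbeat slack at the end of PG removal
    ceph config set osd osd_heartbeat_grace 45

    # move a single PG off one OSD onto another via upmap and watch it drain
    ceph osd pg-upmap-items <pgid> <from-osd> <to-osd>

)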

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



