Re: Unknown PGs after osd move

Hey Andreas,


Andreas John <aj@xxxxxxxxxxx> writes:

> Hey Nico,
>
> maybe you "pinned" the IP of the OSDs in question in ceph.conf to the IP
> of the old chassis?

That would be nice - unfortunately our ceph.conf is almost empty:


[22:11:59] server15.place6:/sys/class/block/sdg# cat /etc/ceph/ceph.conf
# cdist maintained - do not change

[global]

fsid = 1ccd84f6-e362-4c50-9ffe-59436745e445

public network  = 2a0a:e5c0:2:1::/64
cluster network = 2a0a:e5c0:2:1::/64

mon initial members = ceph1.place6.ungleich.ch, ceph2.place6.ungleich.ch, ceph3.place6.ungleich.ch
mon host            = ceph1.place6.ungleich.ch, ceph2.place6.ungleich.ch, ceph3.place6.ungleich.ch

auth cluster required = cephx
auth service required = cephx
auth client  required = cephx

osd pool default size = 3

# Required since nautilus, otherwise ceph fails to bind to public IP
# 2020-05-15, Nico!
ms_bind_ipv4 = false
ms_bind_ipv6 = true

# Restrain recovery operations so that normal cluster is not affected
[osd]
osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 2
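
In case we want to open these throttles up temporarily during the current
recovery, they should also be adjustable at runtime with the standard CLI -
a quick sketch (values only as an example, we have not changed them yet):

ceph config set osd osd_max_backfills 2
ceph tell osd.* injectargs '--osd-recovery-max-active 2'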

> Good Luck,
>
> derjohn
>
>
> P.S. < 100 MB/sec is terrible performance for recovery with 85 OSDs.
> Is it rotational disks on a 1 Gbit/sec network? You could run "ceph osd set
> nodeep-scrub" to prevent too many reads from the platters and get better
> recovery performance.

All nodes are connected with 2x 10 Gbit/s bonded/LACP, so I'd expect at
least a couple of hundred MB/s network bandwidth per OSD.
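
For reference, the scrub flags you suggest can be toggled cluster-wide and
removed again once recovery has settled - a quick sketch with the standard
CLI (nothing specific to our setup):

ceph osd set nodeep-scrub      # pause deep scrubs
ceph osd set noscrub           # optionally pause shallow scrubs as well
# once recovery has finished:
ceph osd unset nodeep-scrub
ceph osd unset noscrub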

On one server I just restarted the OSDs and now the read performance has
dropped to 1-4 MB/s per OSD, with the disks being about 90% busy.
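
To cross-check whether the disks themselves are the limit, I am looking at
something like the following on the OSD host (sdg only as an example device
from the prompt above):

iostat -xm 5 /dev/sdg     # per-disk utilisation and MB/s
ceph osd perf             # commit/apply latency per OSD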

Since Nautilus we have observed much longer OSD start-up times, and I
wonder whether the OSD does some kind of fsck on startup these days and
delays the peering process because of that?
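
If it is indeed an fsck on mount, the relevant bluestore options should show
it (option names as I know them from Nautilus/Octopus; I have not verified
the defaults on our version, and osd.0 is only an example id):

ceph config get osd bluestore_fsck_on_mount
ceph config get osd bluestore_fsck_on_mount_deep
# or per daemon, on the OSD host:
ceph daemon osd.0 config get bluestore_fsck_on_mount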

The disks in question are 3.5"/10 TB/6 Gbit/s SATA disks connected to an
H800 controller - so generally speaking I do not see an obvious
bottleneck here.



--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


