Re: RGWs offline after upgrade to Nautilus

Hi,
apparently, my previous suggestions don't apply here (full OSDs or the max_pgs_per_osd limit). Did you also check the rgw client keyrings? Did you upgrade the operating system as well? Maybe some AppArmor interference? Can you set debug to 30 to see if there's more to find? Anything in the mon or mgr logs, or in the syslog?
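
For reference, those checks could look roughly like this (the client name client.rgw.<rgw1> and the paths are placeholders, not taken from your setup):

# compare the key the cluster has with the keyring file the daemon reads
ceph auth get client.rgw.<rgw1>
cat /var/lib/ceph/radosgw/ceph-rgw.<rgw1>/keyring

# raise verbosity in ceph.conf under [client.rgw.<rgw1>], then restart the
# daemon and watch its log (the path depends on your log_file setting):
#   debug_rgw = 30
#   debug_ms = 1
tail -f /var/log/ceph/ceph-client.rgw.<rgw1>.log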

Thanks,
Eugen


Quoting bzieglmeier@xxxxxxxxx:

Changing my sending email address, as something was wrong with the last one. Still the OP here.

Cluster is generally healthy; we're not running out of storage space, and no pools are filling up. As mentioned in the original post, one RGW is able to come online. I've cross-compared just about every file permission, config file, keyring, etc. between the working RGW and all of the non-working RGWs, and nothing explains why the others can't rejoin the cluster.
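
A minimal sketch of that comparison, assuming ssh access between the hosts (hostnames are placeholders):

# run from a host that can reach both RGW nodes
diff <(ssh <working-rgw> cat /etc/ceph/ceph.conf) <(ssh <broken-rgw> cat /etc/ceph/ceph.conf)
diff <(ssh <working-rgw> ls -lR /var/lib/ceph/radosgw) <(ssh <broken-rgw> ls -lR /var/lib/ceph/radosgw)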

ceph -s:
[root@host ceph]# ceph -s
  cluster:
    id:    <id>
    health: HEALTH_WARN
            601 large omap objects
            502 pgs not deep-scrubbed in time
            1 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum <mon1>,<mon2>,<mon3> (age 28h)
    mgr: <mgr1>(active, since 28h), standbys: <mgr2>, <mgr3>
    osd: 130 osds: 130 up (since 3d), 130 in
    rgw: 1 daemon active (<rgw1>)

  task status:

  data:
    pools:   7 pools, 4288 pgs
    objects: 926.15M objects, 88 TiB
    usage:   397 TiB used, 646 TiB / 1.0 PiB avail
    pgs:     4258 active+clean
             30   active+clean+scrubbing+deep

  io:
    client:   340 KiB/s rd, 280 KiB/s wr, 370 op/s rd, 496 op/s wr

ceph df:
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       763 TiB     450 TiB     313 TiB      313 TiB         41.04
    ssd       279 TiB     196 TiB      80 TiB       84 TiB         29.95
    TOTAL     1.0 PiB     646 TiB     394 TiB      397 TiB         38.07

POOLS:
    POOL                       ID     PGS      STORED      OBJECTS     USED        %USED     MAX AVAIL
    .rgw.root                  51     32       172 KiB     98          14 MiB      0         177 TiB
    zone.rgw.control           60     32       0 B         8           0 B         0         177 TiB
    zone.rgw.meta              61     32       11 MiB      34.04k      5.0 GiB     0         177 TiB
    zone.rgw.log               62     32       508 GiB     438.39k     508 GiB     0.09      177 TiB
    zone.rgw.buckets.data      63     4096     88 TiB      925.20M     361 TiB     40.47     177 TiB
    zone.rgw.buckets.index     64     32       890 GiB     469.31k     890 GiB     0.16      177 TiB
    zone.rgw.buckets.non-ec    66     32       3.7 MiB     610         3.7 MiB     0         177 TiB
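
Regarding the "large omap objects" warning in the health output: the mon cluster log names the exact pool/PG involved, which on this cluster presumably points at the bucket index pool. One way to check (log path is the default mon cluster log, adjust if yours differs):

grep -i 'large omap object' /var/log/ceph/ceph.log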
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

