Stretch cluster data unavailable

Ceph reef 18.2.4

We have a pool with size 3 (2 copies in the first DC, 1 copy in the second) replicated between datacenters. When we put a host in a different datacenter into maintenance, some data is unavailable - why? How can we prevent or fix this?

2 nodes in each DC + a witness

pool 13 'VolumesStandardW2' replicated size 3 min_size 2 crush_rule 4 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode on last_change 6257 lfor 0/2232/2230 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd read_balance_score 2.30
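
For reference, the pool parameters and the rule it uses can be re-checked with the standard commands below (pool name and rule id taken from the pool detail above):

# re-read the pool's replication parameters
ceph osd pool get VolumesStandardW2 size
ceph osd pool get VolumesStandardW2 min_size
# dump the CRUSH rules (the pool uses crush_rule 4)
ceph osd crush rule dump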

Policy:

take W2
chooseleaf firstn 2 type host
emit
take W1
chooseleaf firstn -1 type host
emit
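
In case it is useful, the full rule can be extracted and test-mapped with crushtool (file names here are arbitrary):

# decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# simulate the mappings the rule produces for 3 replicas
crushtool -i crushmap.bin --test --rule 4 --num-rep 3 --show-mappings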

HEALTH_WARN 1 host is in maintenance mode; 1/5 mons down, quorum xxx xxx xxx xxx xxx; 3 osds down; 1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set; 1 host (3 osds) down; Reduced data availability: 137 pgs inactive; Degraded data redundancy: 203797/779132 objects degraded (26.157%), 522 pgs degraded, 554 pgs undersized

[WRN] HOST_IN_MAINTENANCE: 1 host is in maintenance mode
[WRN] PG_AVAILABILITY: Reduced data availability: 137 pgs inactive
    pg 12.5c is stuck undersized for 2m, current state active+undersized+degraded, last acting [1,9]
    pg 12.5d is stuck undersized for 2m, current state active+undersized+degraded, last acting [0,6]
    pg 12.5e is stuck undersized for 2m, current state active+undersized+degraded, last acting [2,11]
    pg 12.5f is stuck undersized for 2m, current state active+undersized+degraded, last acting [2,9]
    pg 13.0 is stuck inactive for 2m, current state undersized+degraded+peered, last acting [7,11]
    pg 13.1 is stuck inactive for 2m, current state undersized+degraded+peered, last acting [8,9]
    pg 13.2 is stuck inactive for 2m, current state undersized+degraded+peered, last acting [11,6]
    pg 13.4 is stuck inactive for 2m, current state undersized+degraded+peered, last acting [9,6]
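
One of the peered PGs can also be queried directly to see what is blocking activation, e.g.:

ceph pg 13.0 query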

ceph balancer status
{
    "active": true,
    "last_optimize_duration": "0:00:00.000198",
    "last_optimize_started": "Wed Sep 4 13:03:53 2024",
    "mode": "upmap",
    "no_optimization_needed": true,
    "optimize_result": "Some objects (0.261574) are degraded; try again later",
    "plans": []
}


