Recovery from node disaster

I have a cluster of three nodes, with three replicas per pool distributed
across the cluster nodes:
---------
# ceph orch host ls
HOST             ADDR             LABELS      STATUS
apcepfpspsp0101  192.168.114.157  _admin mon
apcepfpspsp0103  192.168.114.158  mon _admin
apcepfpspsp0105  192.168.114.159  mon _admin
3 hosts in cluster
---------
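
If the OSD-to-host layout matters for the answer, I believe it can be
checked with the following command (I have not included its output here):

# ceph osd tree
---------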
# ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_rule",
        "type": 1,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    }
]
---------
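
As far as I understand, this rule puts each of the three replicas on a
different host (failure domain "host"). I think the placement for any given
object can be checked with something like the command below, which prints
the PG and the up/acting OSD set; the object name is just an example, since
CRUSH computes the mapping even if the object does not exist:

# ceph osd map k8s-rbd test-object
---------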
# ceph osd dump
epoch 1033
fsid 9c35e594-2392-11ed-809a-005056ae050c
created 2022-08-24T09:53:36.481866+0000
modified 2023-02-12T18:57:34.447536+0000
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 51
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client luminous
min_compat_client luminous
require_osd_release quincy
stretch_mode_enabled false
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 21 flags
hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 2 'k8s-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 541 lfor 0/0/44
flags hashpspool,selfmanaged_snaps max_bytes 75161927680 stripe_width 0
application rbd
pool 3 'k8s-cephfs_metadata' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 543
lfor 0/0/57 flags hashpspool max_bytes 5368709120 stripe_width 0
pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 4 'k8s-cephfs_data' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 542
lfor 0/0/57 flags hashpspool max_bytes 32212254720 stripe_width 0
application cephfs
-----------
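
If I understand correctly, with size 3 / min_size 2 on all pools, each pool
keeps serving I/O with one host down, but PGs go inactive once two of the
three hosts are gone (only one replica left, which is below min_size). The
per-pool values can also be queried directly, for example:

# ceph osd pool get k8s-rbd size
# ceph osd pool get k8s-rbd min_size
# ceph osd pool get k8s-rbd crush_rule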

Is it possible to recover the data if two of the nodes, with all of their
physical disks, are lost for any reason?
What is the maximum number of node failures the cluster can tolerate?
Assume the default settings for this.
What changes should I make to increase fault tolerance?
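
For the last question, my guess is that it would involve adding a fourth
host and raising the replica count, but I am not sure this is the right
approach. The host name and address below are placeholders, and the pool
commands are shown for the k8s-rbd pool only as an example:

# ceph orch host add <new-host> <new-host-ip>
# ceph osd pool set k8s-rbd size 4
# ceph osd pool set k8s-rbd min_size 2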