OSD_SCRUB_ERRORS 1 scrub errors

Dear Team,

Ceph is frequently reporting the errors below, each time on a different disk.
Every time, a pg repair (shown after the health output below) resolves it.

# ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 1.574 is active+clean+inconsistent, acting [19,25,2]
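
For reference, this is the repair step that clears the error each time; it is
the standard per-PG repair command, using the pg id reported in the health
output above:

# ceph pg repair 1.574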

# cat /var/log/ceph/ceph-osd.19.log | grep error
2020-07-12 11:42:11.824 7f864e0b2700 -1 log_channel(cluster) log [ERR] : 1.574 shard 25 soid 1:2ea0a7a3:::rbd_data.515c96b8b4567.0000000000007a7c:head : candidate had a read error
2020-07-12 11:42:15.035 7f86520ba700 -1 log_channel(cluster) log [ERR] : 1.574 deep-scrub 1 errors

# ceph --version
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

# rados -p vpsacephcl01 list-inconsistent-obj 1.3c9 --format=json-pretty
{
    "epoch": 845,
    "inconsistents": [
        {
            "object": {
                "name": "rbd_data.515c96b8b4567.000000000000c377",
                "nspace": "",
                "locator": "",
                "snap": "head",
                "version": 21101
            },
            "errors": [],
            "union_shard_errors": [
                "read_error"
            ],
            "selected_object_info": {
                "oid": {
                    "oid": "rbd_data.515c96b8b4567.000000000000c377",
                    "key": "",
                    "snapid": -2,
                    "hash": 867656649,
                    "max": 0,
                    "pool": 1,
                    "namespace": ""
                },
                "version": "853'21101",
                "prior_version": "853'21100",
                "last_reqid": "client.2317742.0:24909022",
                "user_version": 21101,
                "size": 4194304,
                "mtime": "2020-07-16 21:02:20.564245",
                "local_mtime": "2020-07-16 21:02:20.572003",
                "lost": 0,
                "flags": [
                    "dirty",
                    "omap_digest"
                ],
                "truncate_seq": 0,
                "truncate_size": 0,
                "data_digest": "0xffffffff",
                "omap_digest": "0xffffffff",
                "expected_object_size": 4194304,
                "expected_write_size": 4194304,
                "alloc_hint_flags": 0,
                "manifest": {
                    "type": 0
                },
                "watchers": {}
            },
            "shards": [
                {
                    "osd": 5,
                    "primary": false,
                    "errors": [],
                    "size": 4194304,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x8ebd7de4"
                },
                {
                    "osd": 19,
                    "primary": true,
                    "errors": [],
                    "size": 4194304,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x8ebd7de4"
                },
                {
                    "osd": 24,
                    "primary": false,
                    "errors": [
                        "read_error"
                    ],
                    "size": 4194304
                }
            ]
        }
    ]
}

-- 
Thanks & Regards
Abhimnyu Dhobale
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


