Hi,

I have some puzzling deep-scrub errors and cannot find the reason. The cluster consists of 5 nodes with 20 OSDs in total, and everything works fine except for these scrub errors. Sometimes a deep-scrub finds inconsistencies, but it is not clear why. The content of the objects is exactly the same according to md5sum. The extended attributes also match; the only difference I can see is the time when the files were accessed or modified. The clocks are in sync on all nodes.

So my questions are: how is this checksum calculated (I have included a rough attempt to recompute it below, after the md5sum output), and what else should be checked when the checksum differs? Since the data matches on all 3 OSDs, I assume I can safely use "pg repair" in this case. Is that correct?

The Ceph version is the same on all nodes:

~# ceph-osd -v
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)

For example, I can see this status:

pg 1.da is active+clean+inconsistent, acting [18,12,1]

This is in the log of OSD 18:

2015-11-25 08:00:11.486633 7f3f2f0b5700  0 log_channel(cluster) log [INF] : 1.da deep-scrub starts
2015-11-25 08:01:13.820282 7f3f2f0b5700 -1 log_channel(cluster) log [ERR] : 1.da shard 12: soid 2d268cda/rb.0.532aa.238e1f29.00000002df78/head//1 data_digest 0x9c482f6c != known data_digest 0x2a8b75d1 from auth shard 1
2015-11-25 08:01:26.718426 7f3f2f0b5700 -1 log_channel(cluster) log [ERR] : 1.da deep-scrub 0 missing, 1 inconsistent objects
2015-11-25 08:01:26.718435 7f3f2f0b5700 -1 log_channel(cluster) log [ERR] : 1.da deep-scrub 1 errors

The md5sum of the object is identical on all OSDs:

~# md5sum /var/lib/ceph/osd/ceph-18/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1
4b051db39517ff4c38049a5c2d50ce81  /var/lib/ceph/osd/ceph-18/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1
~# md5sum /var/lib/ceph/osd/ceph-12/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1
4b051db39517ff4c38049a5c2d50ce81  /var/lib/ceph/osd/ceph-12/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1
~# md5sum /var/lib/ceph/osd/ceph-1/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1
4b051db39517ff4c38049a5c2d50ce81  /var/lib/ceph/osd/ceph-1/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1
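Since the md5sums match but the reported data_digest values differ, I also tried to recompute the digest myself. This is only a guess at how it is calculated: I assume data_digest is a plain CRC-32C (Castagnoli) over the whole 4 MB object, seeded with -1 (0xffffffff) and without a final XOR, but I have not verified that against the Ceph source, so the numbers may not line up with the log at all. The small Python sketch I ran against the replica files:

#!/usr/bin/env python
# Rough sketch: recompute what I *assume* the scrub data_digest to be.
# Assumption (not checked against the Ceph source): a table-driven
# CRC-32C (reflected polynomial 0x82F63B78) over the whole object file,
# seeded with 0xffffffff and with no final XOR.
import sys

# Build the 256-entry reflected CRC-32C lookup table once.
_TABLE = []
for i in range(256):
    crc = i
    for _ in range(8):
        crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    _TABLE.append(crc)

def crc32c(data, seed=0xFFFFFFFF):
    # Plain CRC update starting from 'seed'; no final XOR is applied.
    crc = seed
    for b in bytearray(data):
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc

if __name__ == '__main__':
    # Pass the object file path on each OSD as an argument.
    for path in sys.argv[1:]:
        with open(path, 'rb') as f:
            print('0x%08x  %s' % (crc32c(f.read()), path))

If my assumption about the algorithm is right, this should at least tell me whether the on-disk data corresponds to the 0x9c482f6c or the 0x2a8b75d1 value from the log; if it is wrong, the values simply will not match anything in the log.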
The stat output is a little bit different:

~# stat /var/lib/ceph/osd/ceph-18/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1
  File: '/var/lib/ceph/osd/ceph-18/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1'
  Size: 4194304    Blocks: 8200    IO Block: 4096    regular file
Device: 831h/2097d    Inode: 280718364    Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 0/ root)   Gid: ( 0/ root)
Access: 2015-11-22 19:35:39.508017680 +0100
Modify: 2015-11-22 19:35:39.512017725 +0100
Change: 2015-11-22 19:36:01.168257254 +0100
 Birth: -

~# stat /var/lib/ceph/osd/ceph-12/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1
  File: '/var/lib/ceph/osd/ceph-12/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1'
  Size: 4194304    Blocks: 8200    IO Block: 4096    regular file
Device: 831h/2097d    Inode: 405959653    Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 0/ root)   Gid: ( 0/ root)
Access: 2015-11-22 19:35:39.333851735 +0100
Modify: 2015-11-22 19:35:39.333851735 +0100
Change: 2015-11-22 19:36:01.157710026 +0100
 Birth: -

~# stat /var/lib/ceph/osd/ceph-1/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1
  File: '/var/lib/ceph/osd/ceph-1/current/1.da_head/DIR_A/DIR_D/DIR_C/rb.0.532aa.238e1f29.00000002df78__head_2D268CDA__1'
  Size: 4194304    Blocks: 8200    IO Block: 4096    regular file
Device: 841h/2113d    Inode: 408838486    Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 0/ root)   Gid: ( 0/ root)
Access: 2015-10-30 12:25:50.539795416 +0100
Modify: 2015-11-20 13:30:06.026592488 +0100
Change: 2015-11-22 00:39:15.778009055 +0100
 Birth: -

Thanks in advance,
Csaba