Re: inconsistent PG -> unfound objects on an erasure coded system

ceph3 is not the same host as ceph03?
-Sam

On Tue, Mar 8, 2016 at 11:48 AM, Jeffrey McDonald <jmcdonal@xxxxxxx> wrote:
> Hi Sam,
>
> 1) Are those two hardlinks to the same file? No:
>
> # find . -name '*fa202ec9b4b3b217275a*' -exec ls -ltr {} +
> -rw-r--r-- 1 root root       0 Jan 23 21:49
> ./DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long
> -rw-r--r-- 1 root root 1048576 Jan 23 21:49
> ./DIR_9/DIR_5/DIR_4/DIR_D/DIR_E/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long
>
> one has a zero size.
>
>  find . -name '*fa202ec9b4b3b217275a*' -exec lsattr {} +
> ----------------
> ./DIR_9/DIR_5/DIR_4/DIR_D/DIR_E/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long
> ----------------
> ./DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long
>
> 2) What size is/are it/they?
>
> The first has size 0, the second has size 1048576
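>
> Since the sizes differ they cannot be hardlinks to the same inode, but for
> completeness a quick way to double-check would be to compare inode and link
> counts, e.g.:
>
> # print inode, hard-link count, size and name for both copies
> find . -name '*fa202ec9b4b3b217275a*' -exec stat -c '%i %h %s %n' {} +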
>
> 3) Can you tarball it/them up with their xattrs and get it to me?
> yes:
>
> # find . -name '*fa202ec9b4b3b217275a*' -exec tar zcvp --xattrs -f /tmp/suspectfiles.tgz {} +
> ./DIR_9/DIR_5/DIR_4/DIR_D/DIR_E/default.724733.17\\u\\ushadow\\uprostate\\srnaseq\\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\\sUNCID\\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\\uUNC14-SN744\\u0400\\uAC3LWGACXX\\u7\\uGAGTGG.tar.gz.2~\\u1r\\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long
> ./DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\\u\\ushadow\\uprostate\\srnaseq\\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\\sUNCID\\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\\uUNC14-SN744\\u0400\\uAC3LWGACXX\\u7\\uGAGTGG.tar.gz.2~\\u1r\\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long
>
> Files are located at:
> https://drive.google.com/open?id=0Bzz8TrxFvfemYkI1WkdsQ19ScFk
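>
> (To restore the xattrs when unpacking, something like "tar --xattrs -xpf
> suspectfiles.tgz" should work, assuming a reasonably recent GNU tar.)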
>
>
> find . -name '*fa202ec9b4b3b217275a*' -exec xattr -l {} +
> ./DIR_9/DIR_5/DIR_4/DIR_D/DIR_E/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.cephos.lfn3:
> default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkLf.4\u156__head_79CED459__46_ffffffffffffffff_0
> ./DIR_9/DIR_5/DIR_4/DIR_D/DIR_E/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.cephos.spill_out:
> 0000   30 00                                              0.
>
> ./DIR_9/DIR_5/DIR_4/DIR_D/DIR_E/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.ceph._:
> 0000   0F 08 C1 01 00 00 04 03 FD 00 00 00 00 00 00 00    ................
> 0010   DC 00 00 00 64 65 66 61 75 6C 74 2E 37 32 34 37    ....default.7247
> 0020   33 33 2E 31 37 5F 5F 73 68 61 64 6F 77 5F 70 72    33.17__shadow_pr
> 0030   6F 73 74 61 74 65 2F 72 6E 61 73 65 71 2F 38 65    ostate/rnaseq/8e
> 0040   35 64 61 36 65 38 2D 38 38 38 31 2D 34 38 31 33    5da6e8-8881-4813
> 0050   2D 61 34 65 33 2D 33 32 37 64 66 35 37 66 64 31    -a4e3-327df57fd1
> 0060   62 37 2F 55 4E 43 49 44 5F 32 34 30 39 32 38 33    b7/UNCID_2409283
> 0070   2E 33 30 34 61 39 35 63 31 2D 32 31 38 30 2D 34    .304a95c1-2180-4
> 0080   61 38 31 2D 61 38 35 61 2D 38 38 30 34 32 37 65    a81-a85a-880427e
> 0090   39 37 64 36 37 2E 31 34 30 33 30 34 5F 55 4E 43    97d67.140304_UNC
> 00A0   31 34 2D 53 4E 37 34 34 5F 30 34 30 30 5F 41 43    14-SN744_0400_AC
> 00B0   33 4C 57 47 41 43 58 58 5F 37 5F 47 41 47 54 47    3LWGACXX_7_GAGTG
> 00C0   47 2E 74 61 72 2E 67 7A 2E 32 7E 5F 31 72 5F 46    G.tar.gz.2~_1r_F
> 00D0   47 69 64 6D 70 45 50 38 47 52 73 4A 6B 4E 4C 66    GidmpEP8GRsJkNLf
> 00E0   41 68 39 43 6F 6B 78 6B 4C 66 2E 34 5F 31 35 36    Ah9CokxkLf.4_156
> 00F0   FE FF FF FF FF FF FF FF 59 D4 CE 79 00 00 00 00    ........Y..y....
> 0100   00 46 00 00 00 00 00 00 00 06 03 1C 00 00 00 46    .F.............F
> 0110   00 00 00 00 00 00 00 FF FF FF FF 00 00 00 00 00    ................
> 0120   00 00 00 FF FF FF FF FF FF FF FF 00 00 00 00 A4    ................
> 0130   18 03 00 00 00 00 00 EC FD 01 00 A3 18 03 00 00    ................
> 0140   00 00 00 EC FD 01 00 02 02 15 00 00 00 08 01 DF    ................
> 0150   0A 00 00 00 00 00 A9 FE A7 01 00 00 00 00 00 00    ................
> 0160   00 00 00 00 40 00 00 00 00 00 D4 49 A4 56 1D 8F    ....@......I.V..
> 0170   96 18 02 02 15 00 00 00 00 00 00 00 00 00 00 00    ................
> 0180   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
> 0190   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
> 01A0   00 00 80 C6 2E 00 00 00 00 00 00 00 00 00 00 00    ................
> 01B0   00 00 00 24 00 00 00 D4 49 A4 56 D1 2F B1 19 FF    ...$....I.V./...
> 01C0   FF FF FF FF FF FF FF                               .......
>
> ./DIR_9/DIR_5/DIR_4/DIR_D/DIR_E/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.ceph.snapset:
> 0000   02 02 19 00 00 00 00 00 00 00 00 00 00 00 01 00    ................
> 0010   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00       ...............
>
> ./DIR_9/DIR_5/DIR_4/DIR_D/DIR_E/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.ceph.hinfo_key:
> 0000   01 01 24 00 00 00 00 00 10 00 00 00 00 00 06 00    ..$.............
> 0010   00 00 13 15 AE 5C 52 12 87 62 6A B8 71 B8 B8 16    ......R..bj.q...
> 0020   0A 14 7E DA 84 79 43 E0 3B B1                      ..~..yC.;.
>
> ./DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.cephos.lfn3:
> default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkLf.4\u156__head_79CED459__46_3189d_0
> ./DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.cephos.spill_out:
> 0000   30 00                                              0.
>
> ./DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.ceph._:
> 0000   0F 08 C1 01 00 00 04 03 FD 00 00 00 00 00 00 00    ................
> 0010   DC 00 00 00 64 65 66 61 75 6C 74 2E 37 32 34 37    ....default.7247
> 0020   33 33 2E 31 37 5F 5F 73 68 61 64 6F 77 5F 70 72    33.17__shadow_pr
> 0030   6F 73 74 61 74 65 2F 72 6E 61 73 65 71 2F 38 65    ostate/rnaseq/8e
> 0040   35 64 61 36 65 38 2D 38 38 38 31 2D 34 38 31 33    5da6e8-8881-4813
> 0050   2D 61 34 65 33 2D 33 32 37 64 66 35 37 66 64 31    -a4e3-327df57fd1
> 0060   62 37 2F 55 4E 43 49 44 5F 32 34 30 39 32 38 33    b7/UNCID_2409283
> 0070   2E 33 30 34 61 39 35 63 31 2D 32 31 38 30 2D 34    .304a95c1-2180-4
> 0080   61 38 31 2D 61 38 35 61 2D 38 38 30 34 32 37 65    a81-a85a-880427e
> 0090   39 37 64 36 37 2E 31 34 30 33 30 34 5F 55 4E 43    97d67.140304_UNC
> 00A0   31 34 2D 53 4E 37 34 34 5F 30 34 30 30 5F 41 43    14-SN744_0400_AC
> 00B0   33 4C 57 47 41 43 58 58 5F 37 5F 47 41 47 54 47    3LWGACXX_7_GAGTG
> 00C0   47 2E 74 61 72 2E 67 7A 2E 32 7E 5F 31 72 5F 46    G.tar.gz.2~_1r_F
> 00D0   47 69 64 6D 70 45 50 38 47 52 73 4A 6B 4E 4C 66    GidmpEP8GRsJkNLf
> 00E0   41 68 39 43 6F 6B 78 6B 4C 66 2E 34 5F 31 35 36    Ah9CokxkLf.4_156
> 00F0   FE FF FF FF FF FF FF FF 59 D4 CE 79 00 00 00 00    ........Y..y....
> 0100   00 46 00 00 00 00 00 00 00 06 03 1C 00 00 00 46    .F.............F
> 0110   00 00 00 00 00 00 00 FF FF FF FF 00 00 00 00 00    ................
> 0120   00 00 00 FF FF FF FF FF FF FF FF 00 00 00 00 9C    ................
> 0130   18 03 00 00 00 00 00 EC FD 01 00 00 00 00 00 00    ................
> 0140   00 00 00 00 00 00 00 02 02 15 00 00 00 08 01 DF    ................
> 0150   0A 00 00 00 00 00 89 FE A7 01 00 00 00 00 00 00    ................
> 0160   00 00 00 00 00 00 00 00 00 00 D4 49 A4 56 9F B6    ...........I.V..
> 0170   08 05 02 02 15 00 00 00 00 00 00 00 00 00 00 00    ................
> 0180   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
> 0190   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
> 01A0   00 00 78 C6 2E 00 00 00 00 00 00 00 00 00 00 00    ..x.............
> 01B0   00 00 00 34 00 00 00 D4 49 A4 56 7C A9 0F 06 FF    ...4....I.V|....
> 01C0   FF FF FF FF FF FF FF                               .......
>
> ./DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.ceph.snapset:
> 0000   02 02 19 00 00 00 00 00 00 00 00 00 00 00 01 00    ................
> 0010   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00       ...............
>
> ./DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.ceph.hinfo_key:
> 0000   01 01 24 00 00 00 00 00 00 00 00 00 00 00 06 00    ..$.............
> 0010   00 00 FF FF FF FF FF FF FF FF FF FF FF FF FF FF    ................
> 0020   FF FF FF FF FF FF FF FF FF FF                      ..........
>
> ./DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long:
> user.cephos.seq:
> 0000   01 01 10 00 00 00 BE 26 65 00 00 00 00 00 00 00    .......&e.......
> 0010   00 00 03 00 00 00 01                               .......
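>
> If it helps: the user.ceph._ blob above should be a ceph-encoded object_info_t,
> so a rough way to decode it offline (assuming a ceph-dencoder binary from the
> same 0.94.5 build is available) would be:
>
> # dump the raw xattr value, then decode it as an object_info_t
> getfattr --only-values -n user.ceph._ <path-to-file> > /tmp/oi.bin
> ceph-dencoder type object_info_t import /tmp/oi.bin decode dump_json
>
> (<path-to-file> stands for either of the two paths above.)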
>
>
> 4) Has anything unusual ever happened to the host which osd.307 is on?
> Particularly a power loss?
>
> I don't recall anything. A couple of times the data center overheated (on the
> air-cooled side), but these nodes are in a water-cooled enclosure and were OK.
> What I did have were stability issues with the older hardware (ceph1, ceph2,
> ceph3): there weren't outright power failures, but there were frequent episodes
> where the systems ran out of memory and became wedged. It's likely that this PG
> was migrated from there. Would migration preserve this problem?
>
> 5) Can you do an xfs fsck on osd.307's filesystem?
>
> Will do. I will report back shortly.
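>
> For the fsck I'll stop the OSD, unmount, and run xfs_repair in no-modify mode
> first, roughly as below (the device name is a placeholder):
>
> stop ceph-osd id=307        # upstart on these Ubuntu nodes; adjust for the init system
> umount /var/lib/ceph/osd/ceph-307
> xfs_repair -n /dev/sdX1     # -n = dry run, report problems without modifying anything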
>
> Jeff
>
> On Tue, Mar 8, 2016 at 1:12 PM, Samuel Just <sjust@xxxxxxxxxx> wrote:
>>
>> So, I did turn up something interesting.  There is an object with two
>> files (one in an invalid directory):
>>
>> ~/Downloads [deepthought●] » grep 'fa202ec9b4b3b217275a' dir.filtered
>>
>> ./70.459s0_head/DIR_9/DIR_5/DIR_4/DIR_D/DIR_E/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long
>>
>> ./70.459s0_head/DIR_9/DIR_5/DIR_4/DIR_D/default.724733.17\u\ushadow\uprostate\srnaseq\s8e5da6e8-8881-4813-a4e3-327df57fd1b7\sUNCID\u2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304\uUNC14-SN744\u0400\uAC3LWGACXX\u7\uGAGTGG.tar.gz.2~\u1r\uFGidmpEP8GRsJkNLfAh9CokxkL_fa202ec9b4b3b217275a_0_long
>>
>> That file shows up twice, once in DIR_9/DIR_5/DIR_4/DIR_D and once in
>> DIR_9/DIR_5/DIR_4/DIR_D/DIR_E.  The instance of it in
>> DIR_9/DIR_5/DIR_4/DIR_D is causing scrub to return extra objects.
>> Filestore also appears to be unable to delete it:
>>
>> 2016-03-07 21:44:02.193606 7fe96b56f700 15
>> filestore(/var/lib/ceph/osd/ceph-307) remove
>>
>> 70.459s0_head/79ced459/default.724733.17__shadow_prostate/rnaseq/8e5da6e8-8881-4813-a4e3-327df57fd1b7/UNCID_2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304_UNC14-SN744_0400_AC3LWGACXX_7_GAGTGG.tar.gz.2~_1r_FGidmpEP8GRsJkNLfAh9CokxkLf.4_156/head//70/202909/0
>> 2016-03-07 21:44:02.197676 7fe96b56f700 10
>> filestore(/var/lib/ceph/osd/ceph-307) remove
>>
>> 70.459s0_head/79ced459/default.724733.17__shadow_prostate/rnaseq/8e5da6e8-8881-4813-a4e3-327df57fd1b7/UNCID_2409283.304a95c1-2180-4a81-a85a-880427e97d67.140304_UNC14-SN744_0400_AC3LWGACXX_7_GAGTGG.tar.gz.2~_1r_FGidmpEP8GRsJkNLfAh9CokxkLf.4_156/head//70/202909/0
>> = -2
>>
>> That second symptom (the remove returning -2, i.e. ENOENT) probably explains
>> the ENOTEMPTY bugs you are seeing.
>> 1) Are those two hardlinks to the same file?
>> 2) What size is/are it/they?
>> 3) Can you tarball it/them up with their xattrs and get it to me?
>> 4) Has anything unusual ever happened to the host which osd.307 is on?
>>  Particularly a power loss?
>> 5) Can you do an xfs fsck on osd.307's filesystem?
>> -Sam
>>
>> On Tue, Mar 8, 2016 at 6:55 AM, Samuel Just <sjust@xxxxxxxxxx> wrote:
>> > Oh, for the pg with unfound objects, restart the primary, that should
>> > fix it.
>> > -Sam
>> >
>> > On Tue, Mar 8, 2016 at 6:44 AM, Jeffrey McDonald <jmcdonal@xxxxxxx>
>> > wrote:
>> >> Resent to ceph-users to be under the message size limit....
>> >>
>> >> On Tue, Mar 8, 2016 at 6:16 AM, Jeffrey McDonald <jmcdonal@xxxxxxx>
>> >> wrote:
>> >>>
>> >>> OK, this is  done and I've observed the state change of 70.459 from
>> >>> active+clean to active+clean+inconsistent after the first scrub.
>> >>>
>> >>> Files attached:  bash script of commands (setuposddebug.bash), log
>> >>> script
>> >>> from the script (setuposddebug.log), three pg queries, one at the
>> >>> start, one
>> >>> at the end of the first scrub, one at the end of the second scrub.
>> >>>
>> >>> At this point, I now have 27 active+clean+inconsistent PGs. While I'm not
>> >>> too concerned about how they are labeled, clients cannot extract objects
>> >>> which are in those PGs and are labeled as unfound. It's important for us to
>> >>> maintain user confidence in the system, so I need a fix as soon as
>> >>> possible.
>> >>>
>> >>> The log files from the OSDs are here:
>> >>>
>> >>> https://drive.google.com/open?id=0Bzz8TrxFvfembkt2XzlCZFVJZFU
>> >>>
>> >>> Thanks,
>> >>> Jeff
>> >>>
>> >>> On Mon, Mar 7, 2016 at 7:26 PM, Samuel Just <sjust@xxxxxxxxxx> wrote:
>> >>>>
>> >>>> Yep, just as before.  Actually, do it twice (wait for 'scrubbing' to
>> >>>> go away each time).
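>> >>>> One way to wait is to poll the pg state until the scrub flags drop out,
>> >>>> e.g. (rough sketch, adapt as needed):
>> >>>>
>> >>>> ceph pg deep-scrub 70.459
>> >>>> while ceph pg dump 2>/dev/null | grep 70.459 | grep -q scrub; do sleep 30; done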
>> >>>> -Sam
>> >>>>
>> >>>> On Mon, Mar 7, 2016 at 5:25 PM, Jeffrey McDonald <jmcdonal@xxxxxxx>
>> >>>> wrote:
>> >>>> > Just to be sure I grab what you need:
>> >>>> >
>> >>>> > 1- set debug logs for the pg 70.459
>> >>>> > 2 - Issue a deep-scrub ceph pg deep-scrub 70.459
>> >>>> > 3- stop once the 70.459 pg goes inconsistent?
>> >>>> >
>> >>>> > Thanks,
>> >>>> > Jeff
>> >>>> >
>> >>>> >
>> >>>> > On Mon, Mar 7, 2016 at 6:52 PM, Samuel Just <sjust@xxxxxxxxxx>
>> >>>> > wrote:
>> >>>> >>
>> >>>> >> Hmm, I'll look into this a bit more tomorrow.  Can you get the
>> >>>> >> tree
>> >>>> >> structure of the 70.459 pg directory on osd.307 (find . will do
>> >>>> >> fine).
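>> >>>> >> Something like this should be enough (path assumed from the default
>> >>>> >> osd data directory):
>> >>>> >>
>> >>>> >> cd /var/lib/ceph/osd/ceph-307/current/70.459s0_head
>> >>>> >> find . > /tmp/70.459s0_head.tree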
>> >>>> >> -Sam
>> >>>> >>
>> >>>> >> On Mon, Mar 7, 2016 at 4:50 PM, Jeffrey McDonald
>> >>>> >> <jmcdonal@xxxxxxx>
>> >>>> >> wrote:
>> >>>> >> > 307 is on ceph03.
>> >>>> >> > Jeff
>> >>>> >> >
>> >>>> >> > On Mon, Mar 7, 2016 at 6:48 PM, Samuel Just <sjust@xxxxxxxxxx>
>> >>>> >> > wrote:
>> >>>> >> >>
>> >>>> >> >> Which node is osd.307 on?
>> >>>> >> >> -Sam
>> >>>> >> >>
>> >>>> >> >> On Mon, Mar 7, 2016 at 4:43 PM, Samuel Just <sjust@xxxxxxxxxx>
>> >>>> >> >> wrote:
>> >>>> >> >> > ' I didn't see the errors in the tracker on the new nodes,
>> >>>> >> >> > but
>> >>>> >> >> > they
>> >>>> >> >> > were only receiving new data, not migrating it.' -- What do
>> >>>> >> >> > you
>> >>>> >> >> > mean
>> >>>> >> >> > by that?
>> >>>> >> >> > -Sam
>> >>>> >> >> >
>> >>>> >> >> > On Mon, Mar 7, 2016 at 4:42 PM, Jeffrey McDonald
>> >>>> >> >> > <jmcdonal@xxxxxxx>
>> >>>> >> >> > wrote:
>> >>>> >> >> >> The filesystem is xfs everywhere; there are nine hosts.
>> >>>> >> >> >> The
>> >>>> >> >> >> two
>> >>>> >> >> >> new
>> >>>> >> >> >> ceph
>> >>>> >> >> >> nodes 08, 09 have a new kernel.    I didn't see the errors
>> >>>> >> >> >> in
>> >>>> >> >> >> the
>> >>>> >> >> >> tracker on
>> >>>> >> >> >> the new nodes, but they were only receiving new data, not
>> >>>> >> >> >> migrating
>> >>>> >> >> >> it.
>> >>>> >> >> >> Jeff
>> >>>> >> >> >>
>> >>>> >> >> >> ceph2: Linux ceph2 3.13.0-65-generic #106-Ubuntu SMP Fri Oct
>> >>>> >> >> >> 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC
>> >>>> >> >> >> 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph1: Linux ceph1 3.13.0-65-generic #106-Ubuntu SMP Fri Oct
>> >>>> >> >> >> 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC
>> >>>> >> >> >> 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph3: Linux ceph3 3.13.0-65-generic #106-Ubuntu SMP Fri Oct
>> >>>> >> >> >> 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC
>> >>>> >> >> >> 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph03: Linux ceph03 3.13.0-65-generic #106-Ubuntu SMP Fri
>> >>>> >> >> >> Oct 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph01: Linux ceph01 3.13.0-65-generic #106-Ubuntu SMP Fri
>> >>>> >> >> >> Oct 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph02: Linux ceph02 3.13.0-65-generic #106-Ubuntu SMP Fri
>> >>>> >> >> >> Oct 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph06: Linux ceph06 3.13.0-65-generic #106-Ubuntu SMP Fri
>> >>>> >> >> >> Oct 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph05: Linux ceph05 3.13.0-65-generic #106-Ubuntu SMP Fri
>> >>>> >> >> >> Oct 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph04: Linux ceph04 3.13.0-65-generic #106-Ubuntu SMP Fri
>> >>>> >> >> >> Oct 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph08: Linux ceph08 3.19.0-51-generic #58~14.04.1-Ubuntu
>> >>>> >> >> >> SMP
>> >>>> >> >> >> Fri
>> >>>> >> >> >> Feb
>> >>>> >> >> >> 26
>> >>>> >> >> >> 22:02:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph07: Linux ceph07 3.13.0-65-generic #106-Ubuntu SMP Fri
>> >>>> >> >> >> Oct 2
>> >>>> >> >> >> 22:08:27
>> >>>> >> >> >> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >> ceph09: Linux ceph09 3.19.0-51-generic #58~14.04.1-Ubuntu
>> >>>> >> >> >> SMP
>> >>>> >> >> >> Fri
>> >>>> >> >> >> Feb
>> >>>> >> >> >> 26
>> >>>> >> >> >> 22:02:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>> >>>> >> >> >>
>> >>>> >> >> >>
>> >>>> >> >> >> On Mon, Mar 7, 2016 at 6:39 PM, Samuel Just
>> >>>> >> >> >> <sjust@xxxxxxxxxx>
>> >>>> >> >> >> wrote:
>> >>>> >> >> >>>
>> >>>> >> >> >>> What filesystem and kernel are you running on the osds?
>> >>>> >> >> >>> This
>> >>>> >> >> >>> (and
>> >>>> >> >> >>> your other bug, actually) could be explained by some kind
>> >>>> >> >> >>> of
>> >>>> >> >> >>> weird
>> >>>> >> >> >>> kernel readdir behavior.
>> >>>> >> >> >>> -Sam
>> >>>> >> >> >>>
>> >>>> >> >> >>> On Mon, Mar 7, 2016 at 4:36 PM, Samuel Just
>> >>>> >> >> >>> <sjust@xxxxxxxxxx>
>> >>>> >> >> >>> wrote:
>> >>>> >> >> >>> > Hmm, so much for that theory, still looking.  If you can
>> >>>> >> >> >>> > produce
>> >>>> >> >> >>> > another set of logs (as before) from scrubbing that pg,
>> >>>> >> >> >>> > it
>> >>>> >> >> >>> > might
>> >>>> >> >> >>> > help.
>> >>>> >> >> >>> > -Sam
>> >>>> >> >> >>> >
>> >>>> >> >> >>> > On Mon, Mar 7, 2016 at 4:34 PM, Jeffrey McDonald
>> >>>> >> >> >>> > <jmcdonal@xxxxxxx>
>> >>>> >> >> >>> > wrote:
>> >>>> >> >> >>> >> they're all the same.....see attached.
>> >>>> >> >> >>> >>
>> >>>> >> >> >>> >> On Mon, Mar 7, 2016 at 6:31 PM, Samuel Just
>> >>>> >> >> >>> >> <sjust@xxxxxxxxxx>
>> >>>> >> >> >>> >> wrote:
>> >>>> >> >> >>> >>>
>> >>>> >> >> >>> >>> Have you confirmed the versions?
>> >>>> >> >> >>> >>> -Sam
>> >>>> >> >> >>> >>>
>> >>>> >> >> >>> >>> On Mon, Mar 7, 2016 at 4:29 PM, Jeffrey McDonald
>> >>>> >> >> >>> >>> <jmcdonal@xxxxxxx>
>> >>>> >> >> >>> >>> wrote:
>> >>>> >> >> >>> >>> > I have one other very strange event happening, I've
>> >>>> >> >> >>> >>> > opened a
>> >>>> >> >> >>> >>> > ticket
>> >>>> >> >> >>> >>> > on
>> >>>> >> >> >>> >>> > it:
>> >>>> >> >> >>> >>> > http://tracker.ceph.com/issues/14766
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>> >>> > During this migration, OSDs failed probably over 400
>> >>>> >> >> >>> >>> >>> > times while moving data around. We moved the empty
>> >>>> >> >> >>> >>> >>> > directories and restarted the OSDs. I can't say whether
>> >>>> >> >> >>> >>> >>> > this is related; I have no reason to suspect it is.
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>> > Jeff
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>> > On Mon, Mar 7, 2016 at 5:31 PM, Shinobu Kinjo
>> >>>> >> >> >>> >>> > <shinobu.kj@xxxxxxxxx>
>> >>>> >> >> >>> >>> > wrote:
>> >>>> >> >> >>> >>> >>
>> >>>> >> >> >>> >>> >> What could cause this kind of unexpected behaviour?
>> >>>> >> >> >>> >>> >> Any assumption??
>> >>>> >> >> >>> >>> >> Sorry for interrupting you.
>> >>>> >> >> >>> >>> >>
>> >>>> >> >> >>> >>> >> Cheers,
>> >>>> >> >> >>> >>> >> S
>> >>>> >> >> >>> >>> >>
>> >>>> >> >> >>> >>> >> On Tue, Mar 8, 2016 at 8:19 AM, Samuel Just
>> >>>> >> >> >>> >>> >> <sjust@xxxxxxxxxx>
>> >>>> >> >> >>> >>> >> wrote:
>> >>>> >> >> >>> >>> >> > Hmm, at the end of the log, the pg is still
>> >>>> >> >> >>> >>> >> > inconsistent.
>> >>>> >> >> >>> >>> >> > Can
>> >>>> >> >> >>> >>> >> > you
>> >>>> >> >> >>> >>> >> > attach a ceph pg query on that pg?
>> >>>> >> >> >>> >>> >> > -Sam
>> >>>> >> >> >>> >>> >> >
>> >>>> >> >> >>> >>> >> > On Mon, Mar 7, 2016 at 3:05 PM, Samuel Just
>> >>>> >> >> >>> >>> >> > <sjust@xxxxxxxxxx>
>> >>>> >> >> >>> >>> >> > wrote:
>> >>>> >> >> >>> >>> >> >> If so, that strongly suggests that the pg was
>> >>>> >> >> >>> >>> >> >> actually
>> >>>> >> >> >>> >>> >> >> never
>> >>>> >> >> >>> >>> >> >> inconsistent in the first place and that the bug
>> >>>> >> >> >>> >>> >> >> is
>> >>>> >> >> >>> >>> >> >> in
>> >>>> >> >> >>> >>> >> >> scrub
>> >>>> >> >> >>> >>> >> >> itself
>> >>>> >> >> >>> >>> >> >> presumably getting confused about an object
>> >>>> >> >> >>> >>> >> >> during a
>> >>>> >> >> >>> >>> >> >> write.
>> >>>> >> >> >>> >>> >> >> The
>> >>>> >> >> >>> >>> >> >> next
>> >>>> >> >> >>> >>> >> >> step would be to get logs like the above from a
>> >>>> >> >> >>> >>> >> >> pg as
>> >>>> >> >> >>> >>> >> >> it
>> >>>> >> >> >>> >>> >> >> scrubs
>> >>>> >> >> >>> >>> >> >> transitioning from clean to inconsistent.  If
>> >>>> >> >> >>> >>> >> >> it's
>> >>>> >> >> >>> >>> >> >> really
>> >>>> >> >> >>> >>> >> >> a
>> >>>> >> >> >>> >>> >> >> race
>> >>>> >> >> >>> >>> >> >> between scrub and a write, it's probably just
>> >>>> >> >> >>> >>> >> >> non-deterministic,
>> >>>> >> >> >>> >>> >> >> you
>> >>>> >> >> >>> >>> >> >> could set logging on a set of osds and
>> >>>> >> >> >>> >>> >> >> continuously
>> >>>> >> >> >>> >>> >> >> scrub
>> >>>> >> >> >>> >>> >> >> any
>> >>>> >> >> >>> >>> >> >> pgs
>> >>>> >> >> >>> >>> >> >> which only map to those osds until you reproduce
>> >>>> >> >> >>> >>> >> >> the
>> >>>> >> >> >>> >>> >> >> problem.
>> >>>> >> >> >>> >>> >> >> -Sam
>> >>>> >> >> >>> >>> >> >>
>> >>>> >> >> >>> >>> >> >> On Mon, Mar 7, 2016 at 2:44 PM, Samuel Just
>> >>>> >> >> >>> >>> >> >> <sjust@xxxxxxxxxx>
>> >>>> >> >> >>> >>> >> >> wrote:
>> >>>> >> >> >>> >>> >> >>> So after the scrub, it came up clean?  The
>> >>>> >> >> >>> >>> >> >>> inconsistent/missing
>> >>>> >> >> >>> >>> >> >>> objects reappeared?
>> >>>> >> >> >>> >>> >> >>> -Sam
>> >>>> >> >> >>> >>> >> >>>
>> >>>> >> >> >>> >>> >> >>> On Mon, Mar 7, 2016 at 2:33 PM, Jeffrey McDonald
>> >>>> >> >> >>> >>> >> >>> <jmcdonal@xxxxxxx>
>> >>>> >> >> >>> >>> >> >>> wrote:
>> >>>> >> >> >>> >>> >> >>>> Hi Sam,
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> I've done as you requested:
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> pg 70.459 is active+clean+inconsistent, acting
>> >>>> >> >> >>> >>> >> >>>> [307,210,273,191,132,450]
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> # for i in 307 210 273 191 132 450 ; do
>> >>>> >> >> >>> >>> >> >>>>> ceph tell osd.$i injectargs  '--debug-osd 20
>> >>>> >> >> >>> >>> >> >>>>> --debug-filestore 20
>> >>>> >> >> >>> >>> >> >>>>> --debug-ms 1'
>> >>>> >> >> >>> >>> >> >>>>> done
>> >>>> >> >> >>> >>> >> >>>> debug_osd=20/20 debug_filestore=20/20
>> >>>> >> >> >>> >>> >> >>>> debug_ms=1/1
>> >>>> >> >> >>> >>> >> >>>> debug_osd=20/20 debug_filestore=20/20
>> >>>> >> >> >>> >>> >> >>>> debug_ms=1/1
>> >>>> >> >> >>> >>> >> >>>> debug_osd=20/20 debug_filestore=20/20
>> >>>> >> >> >>> >>> >> >>>> debug_ms=1/1
>> >>>> >> >> >>> >>> >> >>>> debug_osd=20/20 debug_filestore=20/20
>> >>>> >> >> >>> >>> >> >>>> debug_ms=1/1
>> >>>> >> >> >>> >>> >> >>>> debug_osd=20/20 debug_filestore=20/20
>> >>>> >> >> >>> >>> >> >>>> debug_ms=1/1
>> >>>> >> >> >>> >>> >> >>>> debug_osd=20/20 debug_filestore=20/20
>> >>>> >> >> >>> >>> >> >>>> debug_ms=1/1
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> # date
>> >>>> >> >> >>> >>> >> >>>> Mon Mar  7 16:03:38 CST 2016
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> # ceph pg deep-scrub 70.459
>> >>>> >> >> >>> >>> >> >>>> instructing pg 70.459 on osd.307 to deep-scrub
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> Scrub finished around
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> # date
>> >>>> >> >> >>> >>> >> >>>> Mon Mar  7 16:13:03 CST 2016
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> I've tar'd+gziped the files which can be
>> >>>> >> >> >>> >>> >> >>>> downloaded
>> >>>> >> >> >>> >>> >> >>>> from
>> >>>> >> >> >>> >>> >> >>>> here.
>> >>>> >> >> >>> >>> >> >>>> The
>> >>>> >> >> >>> >>> >> >>>> logs
>> >>>> >> >> >>> >>> >> >>>> start a minute or two after today at 16:00.
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> https://drive.google.com/folderview?id=0Bzz8TrxFvfema2NQUmotd1BOTnM&usp=sharing
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> Oddly (to me anyway), this pg is now
>> >>>> >> >> >>> >>> >> >>>> active+clean:
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> # ceph pg dump  | grep 70.459
>> >>>> >> >> >>> >>> >> >>>> dumped all in format plain
>> >>>> >> >> >>> >>> >> >>>> 70.459 21377 0 0 0 0 64515446306 3088 3088
>> >>>> >> >> >>> >>> >> >>>> active+clean
>> >>>> >> >> >>> >>> >> >>>> 2016-03-07
>> >>>> >> >> >>> >>> >> >>>> 16:26:57.796537 279563'212832 279602:628151
>> >>>> >> >> >>> >>> >> >>>> [307,210,273,191,132,450]
>> >>>> >> >> >>> >>> >> >>>> 307
>> >>>> >> >> >>> >>> >> >>>> [307,210,273,191,132,450] 307 279563'212832
>> >>>> >> >> >>> >>> >> >>>> 2016-03-07
>> >>>> >> >> >>> >>> >> >>>> 16:12:30.741984
>> >>>> >> >> >>> >>> >> >>>> 279563'212832 2016-03-07 16:12:30.741984
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> Regards,
>> >>>> >> >> >>> >>> >> >>>> Jeff
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> On Mon, Mar 7, 2016 at 4:11 PM, Samuel Just
>> >>>> >> >> >>> >>> >> >>>> <sjust@xxxxxxxxxx>
>> >>>> >> >> >>> >>> >> >>>> wrote:
>> >>>> >> >> >>> >>> >> >>>>>
>> >>>> >> >> >>> >>> >> >>>>> I think the unfound object on repair is fixed
>> >>>> >> >> >>> >>> >> >>>>> by
>> >>>> >> >> >>> >>> >> >>>>> d51806f5b330d5f112281fbb95ea6addf994324e (not
>> >>>> >> >> >>> >>> >> >>>>> in
>> >>>> >> >> >>> >>> >> >>>>> hammer
>> >>>> >> >> >>> >>> >> >>>>> yet).
>> >>>> >> >> >>> >>> >> >>>>> I
>> >>>> >> >> >>> >>> >> >>>>> opened http://tracker.ceph.com/issues/15002
>> >>>> >> >> >>> >>> >> >>>>> for
>> >>>> >> >> >>> >>> >> >>>>> the
>> >>>> >> >> >>> >>> >> >>>>> backport
>> >>>> >> >> >>> >>> >> >>>>> and
>> >>>> >> >> >>> >>> >> >>>>> to
>> >>>> >> >> >>> >>> >> >>>>> make sure it's covered in ceph-qa-suite.  No
>> >>>> >> >> >>> >>> >> >>>>> idea
>> >>>> >> >> >>> >>> >> >>>>> at
>> >>>> >> >> >>> >>> >> >>>>> this
>> >>>> >> >> >>> >>> >> >>>>> time
>> >>>> >> >> >>> >>> >> >>>>> why
>> >>>> >> >> >>> >>> >> >>>>> the
>> >>>> >> >> >>> >>> >> >>>>> objects are disappearing though.
>> >>>> >> >> >>> >>> >> >>>>> -Sam
>> >>>> >> >> >>> >>> >> >>>>>
>> >>>> >> >> >>> >>> >> >>>>> On Mon, Mar 7, 2016 at 1:57 PM, Samuel Just
>> >>>> >> >> >>> >>> >> >>>>> <sjust@xxxxxxxxxx>
>> >>>> >> >> >>> >>> >> >>>>> wrote:
>> >>>> >> >> >>> >>> >> >>>>> > The one just scrubbed and now inconsistent.
>> >>>> >> >> >>> >>> >> >>>>> > -Sam
>> >>>> >> >> >>> >>> >> >>>>> >
>> >>>> >> >> >>> >>> >> >>>>> > On Mon, Mar 7, 2016 at 1:57 PM, Jeffrey
>> >>>> >> >> >>> >>> >> >>>>> > McDonald
>> >>>> >> >> >>> >>> >> >>>>> > <jmcdonal@xxxxxxx>
>> >>>> >> >> >>> >>> >> >>>>> > wrote:
>> >>>> >> >> >>> >>> >> >>>>> >> Do you want me to enable this for the pg
>> >>>> >> >> >>> >>> >> >>>>> >> already
>> >>>> >> >> >>> >>> >> >>>>> >> with
>> >>>> >> >> >>> >>> >> >>>>> >> unfound
>> >>>> >> >> >>> >>> >> >>>>> >> objects
>> >>>> >> >> >>> >>> >> >>>>> >> or the
>> >>>> >> >> >>> >>> >> >>>>> >> placement group just scrubbed and now
>> >>>> >> >> >>> >>> >> >>>>> >> inconsistent?
>> >>>> >> >> >>> >>> >> >>>>> >> Jeff
>> >>>> >> >> >>> >>> >> >>>>> >>
>> >>>> >> >> >>> >>> >> >>>>> >> On Mon, Mar 7, 2016 at 3:54 PM, Samuel Just
>> >>>> >> >> >>> >>> >> >>>>> >> <sjust@xxxxxxxxxx>
>> >>>> >> >> >>> >>> >> >>>>> >> wrote:
>> >>>> >> >> >>> >>> >> >>>>> >>>
>> >>>> >> >> >>> >>> >> >>>>> >>> Can you enable
>> >>>> >> >> >>> >>> >> >>>>> >>>
>> >>>> >> >> >>> >>> >> >>>>> >>> debug osd = 20
>> >>>> >> >> >>> >>> >> >>>>> >>> debug filestore = 20
>> >>>> >> >> >>> >>> >> >>>>> >>> debug ms = 1
>> >>>> >> >> >>> >>> >> >>>>> >>>
>> >>>> >> >> >>> >>> >> >>>>> >>> on all osds in that PG, rescrub, and
>> >>>> >> >> >>> >>> >> >>>>> >>> convey to
>> >>>> >> >> >>> >>> >> >>>>> >>> us
>> >>>> >> >> >>> >>> >> >>>>> >>> the
>> >>>> >> >> >>> >>> >> >>>>> >>> resulting
>> >>>> >> >> >>> >>> >> >>>>> >>> logs?
>> >>>> >> >> >>> >>> >> >>>>> >>> -Sam
>> >>>> >> >> >>> >>> >> >>>>> >>>
>> >>>> >> >> >>> >>> >> >>>>> >>> On Mon, Mar 7, 2016 at 1:36 PM, Jeffrey
>> >>>> >> >> >>> >>> >> >>>>> >>> McDonald
>> >>>> >> >> >>> >>> >> >>>>> >>> <jmcdonal@xxxxxxx>
>> >>>> >> >> >>> >>> >> >>>>> >>> wrote:
>> >>>> >> >> >>> >>> >> >>>>> >>> > Here is a PG which just went
>> >>>> >> >> >>> >>> >> >>>>> >>> > inconsistent:
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> > pg 70.459 is active+clean+inconsistent,
>> >>>> >> >> >>> >>> >> >>>>> >>> > acting
>> >>>> >> >> >>> >>> >> >>>>> >>> > [307,210,273,191,132,450]
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> > Attached is the result of a pg query on
>> >>>> >> >> >>> >>> >> >>>>> >>> > this.
>> >>>> >> >> >>> >>> >> >>>>> >>> > I
>> >>>> >> >> >>> >>> >> >>>>> >>> > will
>> >>>> >> >> >>> >>> >> >>>>> >>> > wait
>> >>>> >> >> >>> >>> >> >>>>> >>> > for your
>> >>>> >> >> >>> >>> >> >>>>> >>> > feedback before issuing a repair.
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> > From what I read, the inconsistencies
>> >>>> >> >> >>> >>> >> >>>>> >>> > are
>> >>>> >> >> >>> >>> >> >>>>> >>> > more
>> >>>> >> >> >>> >>> >> >>>>> >>> > likely
>> >>>> >> >> >>> >>> >> >>>>> >>> > the
>> >>>> >> >> >>> >>> >> >>>>> >>> > result of
>> >>>> >> >> >>> >>> >> >>>>> >>> > ntp,
>> >>>> >> >> >>> >>> >> >>>>> >>> > but
>> >>>> >> >> >>> >>> >> >>>>> >>> > all nodes have the local ntp master and
>> >>>> >> >> >>> >>> >> >>>>> >>> > all
>> >>>> >> >> >>> >>> >> >>>>> >>> > are
>> >>>> >> >> >>> >>> >> >>>>> >>> > showing
>> >>>> >> >> >>> >>> >> >>>>> >>> > sync.
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> > Regards,
>> >>>> >> >> >>> >>> >> >>>>> >>> > Jeff
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> > On Mon, Mar 7, 2016 at 3:15 PM, Gregory
>> >>>> >> >> >>> >>> >> >>>>> >>> > Farnum
>> >>>> >> >> >>> >>> >> >>>>> >>> > <gfarnum@xxxxxxxxxx>
>> >>>> >> >> >>> >>> >> >>>>> >>> > wrote:
>> >>>> >> >> >>> >>> >> >>>>> >>> >>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> [ Keeping this on the users list. ]
>> >>>> >> >> >>> >>> >> >>>>> >>> >>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> Okay, so next time this happens you
>> >>>> >> >> >>> >>> >> >>>>> >>> >> probably
>> >>>> >> >> >>> >>> >> >>>>> >>> >> want
>> >>>> >> >> >>> >>> >> >>>>> >>> >> to
>> >>>> >> >> >>> >>> >> >>>>> >>> >> do a
>> >>>> >> >> >>> >>> >> >>>>> >>> >> pg
>> >>>> >> >> >>> >>> >> >>>>> >>> >> query
>> >>>> >> >> >>> >>> >> >>>>> >>> >> on
>> >>>> >> >> >>> >>> >> >>>>> >>> >> the PG which has been reported as
>> >>>> >> >> >>> >>> >> >>>>> >>> >> dirty. I
>> >>>> >> >> >>> >>> >> >>>>> >>> >> can't
>> >>>> >> >> >>> >>> >> >>>>> >>> >> help
>> >>>> >> >> >>> >>> >> >>>>> >>> >> much
>> >>>> >> >> >>> >>> >> >>>>> >>> >> beyond
>> >>>> >> >> >>> >>> >> >>>>> >>> >> that, but hopefully Kefu or David will
>> >>>> >> >> >>> >>> >> >>>>> >>> >> chime in
>> >>>> >> >> >>> >>> >> >>>>> >>> >> once
>> >>>> >> >> >>> >>> >> >>>>> >>> >> there's
>> >>>> >> >> >>> >>> >> >>>>> >>> >> a
>> >>>> >> >> >>> >>> >> >>>>> >>> >> little
>> >>>> >> >> >>> >>> >> >>>>> >>> >> more for them to look at.
>> >>>> >> >> >>> >>> >> >>>>> >>> >> -Greg
>> >>>> >> >> >>> >>> >> >>>>> >>> >>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> On Mon, Mar 7, 2016 at 1:00 PM, Jeffrey
>> >>>> >> >> >>> >>> >> >>>>> >>> >> McDonald
>> >>>> >> >> >>> >>> >> >>>>> >>> >> <jmcdonal@xxxxxxx>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> wrote:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > Hi Greg,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > I'm running the ceph version hammer,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > ceph version 0.94.5
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > (9764da52395923e0b32908d83a9f7304401fee43)
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > The hardware migration was performed
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > by
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > just
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > setting
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > the
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > crush
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > map to
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > zero
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > for the OSD we wanted to retire.
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > The
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > system
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > was
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > performing
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > poorly
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > with
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > these older OSDs and we had a
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > difficult
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > time
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > maintaining
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > stability of
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > the
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > system.    The old OSDs are still
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > there
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > but
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > all
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > of
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > the
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > data
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > is
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > now
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > migrated
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > to new and/or existing hardware.
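>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > (Concretely, for each retiring OSD that was roughly
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > "ceph osd crush reweight osd.<id> 0", with <id> as a
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > placeholder, and then waiting for backfill to drain it.)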
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > Thanks,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > Jeff
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > On Mon, Mar 7, 2016 at 2:56 PM,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > Gregory
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > Farnum
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > <gfarnum@xxxxxxxxxx>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > wrote:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> On Mon, Mar 7, 2016 at 12:07 PM,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> Jeffrey
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> McDonald
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> <jmcdonal@xxxxxxx>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> wrote:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Hi,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > For a while, we've been seeing
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > inconsistent
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > placement
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > groups
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > on
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > our
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > erasure
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > coded system.   The placement
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > groups
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > go
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > from
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > a
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > state
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > of
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > active+clean
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > to
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > active+clean+inconsistent after a
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > deep
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > scrub:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 2016-03-07 13:45:42.044131
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 7f385d118700 -1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log_channel(cluster)
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > [ERR] :
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 70.320s0 deep-scrub stat mismatch,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > got
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 21446/21428
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > objects,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 0/0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > clones,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 21446/21428 dirty, 0/0 omap, 0/0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > hit_set_archive,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 0/0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > whiteouts,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 64682334170/64624353083 bytes,0/0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > hit_set_archive
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > bytes.
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 2016-03-07 13:45:42.044416
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 7f385d118700 -1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log_channel(cluster)
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > [ERR] :
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 70.320s0 deep-scrub 18 missing, 0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > inconsistent
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > objects
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 2016-03-07 13:45:42.044464
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 7f385d118700 -1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log_channel(cluster)
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > [ERR] :
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 70.320 deep-scrub 73 errors
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > So I tell the placement group to
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > perform a
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > repair:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 2016-03-07 13:49:26.047177
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 7f385d118700  0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log_channel(cluster)
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > [INF] :
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 70.320 repair starts
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 2016-03-07 13:49:57.087291
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 7f3858b0a700  0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > --
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 10.31.0.2:6874/13937
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > >>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 10.31.0.6:6824/8127
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > pipe(0x2e578000
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > sd=697
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > :6874
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > The repair finds missing shards
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > and
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > repairs
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > them,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > but
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > then I
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > have
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 18
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 'unfound objects' :
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 2016-03-07 13:51:28.467590
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 7f385d118700 -1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log_channel(cluster)
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > [ERR] :
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 70.320s0 repair stat mismatch, got
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 21446/21428
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > objects,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 0/0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > clones,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 21446/21428 dirty, 0/0 omap, 0/0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > hit_set_archive,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 0/0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > whiteouts,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 64682334170/64624353083 bytes,0/0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > hit_set_archive
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > bytes.
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 2016-03-07 13:51:28.468358
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 7f385d118700 -1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log_channel(cluster)
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > [ERR] :
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 70.320s0 repair 18 missing, 0
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > inconsistent
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > objects
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 2016-03-07 13:51:28.469431
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 7f385d118700 -1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log_channel(cluster)
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > log
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > [ERR] :
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 70.320 repair 73 errors, 73 fixed
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > I've traced one of the unfound
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > objects
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > all
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > the
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > way
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > through the
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > system
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > and
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > I've found that they are not
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > really
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > lost.
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > I
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > can
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > fail
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > over
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > the
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > osd
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > and
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > recover the files.   This is
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > happening
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > quite
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > regularly
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > now
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > after a
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > large
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > migration of data from old
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > hardware to
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > new (migration
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > is
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > now
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > complete).
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > The system sets the PG into
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 'recovery',
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > but
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > we've
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > seen
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > the
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > system
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > in
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > a
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > recovering state for many days.
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Should
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > we
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > just
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > be
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > patient
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > or do
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > we
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > need
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > to dig further into the issue?
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> You may need to dig into this more,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> although
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> I'm
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> not
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> sure
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> what
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> the
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> issue is likely to be. What version
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> of
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> Ceph
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> are
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> you
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> running? How
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> did
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> you do this hardware migration?
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> -Greg
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >>
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > pg 70.320 is stuck unclean for
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 704.803040,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > current
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > state
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > active+recovering,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > last acting
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > [277,101,218,49,304,412]
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > pg 70.320 is active+recovering,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > acting
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > [277,101,218,49,304,412],
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 18
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > unfound
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > There is no indication of any
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > problems
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > with
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > down
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > OSDs
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > or
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > network
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > issues
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > with
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > OSDs.
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Thanks,
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Jeff
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > --
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Jeffrey McDonald, PhD
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Assistant Director for HPC
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Operations
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Minnesota Supercomputing Institute
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > University of Minnesota Twin
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Cities
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 599 Walter Library
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > email:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 117 Pleasant St SE
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > phone: +1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 612
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 625-6905
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > Minneapolis, MN 55455        fax:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > +1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 612
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > 624-8861
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > _______________________________________________
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > ceph-users mailing list
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > ceph-users@xxxxxxxxxxxxxx
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > --
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > Jeffrey McDonald, PhD
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > Assistant Director for HPC Operations
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > Minnesota Supercomputing Institute
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > University of Minnesota Twin Cities
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > 599 Walter Library           email:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > 117 Pleasant St SE           phone:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > +1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > 612
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > 625-6905
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > Minneapolis, MN 55455        fax:
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > +1
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > 612
>> >>>> >> >> >>> >>> >> >>>>> >>> >> > 624-8861
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> > --
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> > Jeffrey McDonald, PhD
>> >>>> >> >> >>> >>> >> >>>>> >>> > Assistant Director for HPC Operations
>> >>>> >> >> >>> >>> >> >>>>> >>> > Minnesota Supercomputing Institute
>> >>>> >> >> >>> >>> >> >>>>> >>> > University of Minnesota Twin Cities
>> >>>> >> >> >>> >>> >> >>>>> >>> > 599 Walter Library           email:
>> >>>> >> >> >>> >>> >> >>>>> >>> > jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> >> >> >>> >>> >> >>>>> >>> > 117 Pleasant St SE           phone: +1
>> >>>> >> >> >>> >>> >> >>>>> >>> > 612
>> >>>> >> >> >>> >>> >> >>>>> >>> > 625-6905
>> >>>> >> >> >>> >>> >> >>>>> >>> > Minneapolis, MN 55455        fax:   +1
>> >>>> >> >> >>> >>> >> >>>>> >>> > 612
>> >>>> >> >> >>> >>> >> >>>>> >>> > 624-8861
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> > _______________________________________________
>> >>>> >> >> >>> >>> >> >>>>> >>> > ceph-users mailing list
>> >>>> >> >> >>> >>> >> >>>>> >>> > ceph-users@xxxxxxxxxxxxxx
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >>>> >> >> >>> >>> >> >>>>> >>> >
>> >>>> >> >> >>> >>> >> >>>>> >>
>> >>>> >> >> >>> >>> >> >>>>> >>
>> >>>> >> >> >>> >>> >> >>>>> >>
>> >>>> >> >> >>> >>> >> >>>>> >>
>> >>>> >> >> >>> >>> >> >>>>> >> --
>> >>>> >> >> >>> >>> >> >>>>> >>
>> >>>> >> >> >>> >>> >> >>>>> >> Jeffrey McDonald, PhD
>> >>>> >> >> >>> >>> >> >>>>> >> Assistant Director for HPC Operations
>> >>>> >> >> >>> >>> >> >>>>> >> Minnesota Supercomputing Institute
>> >>>> >> >> >>> >>> >> >>>>> >> University of Minnesota Twin Cities
>> >>>> >> >> >>> >>> >> >>>>> >> 599 Walter Library           email:
>> >>>> >> >> >>> >>> >> >>>>> >> jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> >> >> >>> >>> >> >>>>> >> 117 Pleasant St SE           phone: +1 612
>> >>>> >> >> >>> >>> >> >>>>> >> 625-6905
>> >>>> >> >> >>> >>> >> >>>>> >> Minneapolis, MN 55455        fax:   +1 612
>> >>>> >> >> >>> >>> >> >>>>> >> 624-8861
>> >>>> >> >> >>> >>> >> >>>>> >>
>> >>>> >> >> >>> >>> >> >>>>> >>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> --
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>> Jeffrey McDonald, PhD
>> >>>> >> >> >>> >>> >> >>>> Assistant Director for HPC Operations
>> >>>> >> >> >>> >>> >> >>>> Minnesota Supercomputing Institute
>> >>>> >> >> >>> >>> >> >>>> University of Minnesota Twin Cities
>> >>>> >> >> >>> >>> >> >>>> 599 Walter Library           email:
>> >>>> >> >> >>> >>> >> >>>> jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> >> >> >>> >>> >> >>>> 117 Pleasant St SE           phone: +1 612
>> >>>> >> >> >>> >>> >> >>>> 625-6905
>> >>>> >> >> >>> >>> >> >>>> Minneapolis, MN 55455        fax:   +1 612
>> >>>> >> >> >>> >>> >> >>>> 624-8861
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> >>>>
>> >>>> >> >> >>> >>> >> > _______________________________________________
>> >>>> >> >> >>> >>> >> > ceph-users mailing list
>> >>>> >> >> >>> >>> >> > ceph-users@xxxxxxxxxxxxxx
>> >>>> >> >> >>> >>> >> >
>> >>>> >> >> >>> >>> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >>>> >> >> >>> >>> >>
>> >>>> >> >> >>> >>> >>
>> >>>> >> >> >>> >>> >>
>> >>>> >> >> >>> >>> >> --
>> >>>> >> >> >>> >>> >> Email:
>> >>>> >> >> >>> >>> >> shinobu@xxxxxxxxx
>> >>>> >> >> >>> >>> >> GitHub:
>> >>>> >> >> >>> >>> >> shinobu-x
>> >>>> >> >> >>> >>> >> Blog:
>> >>>> >> >> >>> >>> >> Life with Distributed Computational System based on
>> >>>> >> >> >>> >>> >> OpenSource
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>> > --
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>> > Jeffrey McDonald, PhD
>> >>>> >> >> >>> >>> > Assistant Director for HPC Operations
>> >>>> >> >> >>> >>> > Minnesota Supercomputing Institute
>> >>>> >> >> >>> >>> > University of Minnesota Twin Cities
>> >>>> >> >> >>> >>> > 599 Walter Library           email:
>> >>>> >> >> >>> >>> > jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> >> >> >>> >>> > 117 Pleasant St SE           phone: +1 612 625-6905
>> >>>> >> >> >>> >>> > Minneapolis, MN 55455        fax:   +1 612 624-8861
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>> >
>> >>>> >> >> >>> >>
>> >>>> >> >> >>> >>
>> >>>> >> >> >>> >>
>> >>>> >> >> >>> >>
>> >>>> >> >> >>> >> --
>> >>>> >> >> >>> >>
>> >>>> >> >> >>> >> Jeffrey McDonald, PhD
>> >>>> >> >> >>> >> Assistant Director for HPC Operations
>> >>>> >> >> >>> >> Minnesota Supercomputing Institute
>> >>>> >> >> >>> >> University of Minnesota Twin Cities
>> >>>> >> >> >>> >> 599 Walter Library           email:
>> >>>> >> >> >>> >> jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> >> >> >>> >> 117 Pleasant St SE           phone: +1 612 625-6905
>> >>>> >> >> >>> >> Minneapolis, MN 55455        fax:   +1 612 624-8861
>> >>>> >> >> >>> >>
>> >>>> >> >> >>> >>
>> >>>> >> >> >>
>> >>>> >> >> >>
>> >>>> >> >> >>
>> >>>> >> >> >>
>> >>>> >> >> >> --
>> >>>> >> >> >>
>> >>>> >> >> >> Jeffrey McDonald, PhD
>> >>>> >> >> >> Assistant Director for HPC Operations
>> >>>> >> >> >> Minnesota Supercomputing Institute
>> >>>> >> >> >> University of Minnesota Twin Cities
>> >>>> >> >> >> 599 Walter Library           email:
>> >>>> >> >> >> jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> >> >> >> 117 Pleasant St SE           phone: +1 612 625-6905
>> >>>> >> >> >> Minneapolis, MN 55455        fax:   +1 612 624-8861
>> >>>> >> >> >>
>> >>>> >> >> >>
>> >>>> >> >
>> >>>> >> >
>> >>>> >> >
>> >>>> >> >
>> >>>> >> > --
>> >>>> >> >
>> >>>> >> > Jeffrey McDonald, PhD
>> >>>> >> > Assistant Director for HPC Operations
>> >>>> >> > Minnesota Supercomputing Institute
>> >>>> >> > University of Minnesota Twin Cities
>> >>>> >> > 599 Walter Library           email: jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> >> > 117 Pleasant St SE           phone: +1 612 625-6905
>> >>>> >> > Minneapolis, MN 55455        fax:   +1 612 624-8861
>> >>>> >> >
>> >>>> >> >
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > --
>> >>>> >
>> >>>> > Jeffrey McDonald, PhD
>> >>>> > Assistant Director for HPC Operations
>> >>>> > Minnesota Supercomputing Institute
>> >>>> > University of Minnesota Twin Cities
>> >>>> > 599 Walter Library           email: jeffrey.mcdonald@xxxxxxxxxxx
>> >>>> > 117 Pleasant St SE           phone: +1 612 625-6905
>> >>>> > Minneapolis, MN 55455        fax:   +1 612 624-8861
>> >>>> >
>> >>>> >
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>>
>> >>> Jeffrey McDonald, PhD
>> >>> Assistant Director for HPC Operations
>> >>> Minnesota Supercomputing Institute
>> >>> University of Minnesota Twin Cities
>> >>> 599 Walter Library           email: jeffrey.mcdonald@xxxxxxxxxxx
>> >>> 117 Pleasant St SE           phone: +1 612 625-6905
>> >>> Minneapolis, MN 55455        fax:   +1 612 624-8861
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >>
>> >> Jeffrey McDonald, PhD
>> >> Assistant Director for HPC Operations
>> >> Minnesota Supercomputing Institute
>> >> University of Minnesota Twin Cities
>> >> 599 Walter Library           email: jeffrey.mcdonald@xxxxxxxxxxx
>> >> 117 Pleasant St SE           phone: +1 612 625-6905
>> >> Minneapolis, MN 55455        fax:   +1 612 624-8861
>> >>
>> >>
>>
>
>
>
> --
>
> Jeffrey McDonald, PhD
> Assistant Director for HPC Operations
> Minnesota Supercomputing Institute
> University of Minnesota Twin Cities
> 599 Walter Library           email: jeffrey.mcdonald@xxxxxxxxxxx
> 117 Pleasant St SE           phone: +1 612 625-6905
> Minneapolis, MN 55455        fax:   +1 612 624-8861
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




