I am using hammer 0.94

On Fri, Jul 31, 2015 at 4:01 PM, Mallikarjun Biradar
<mallikarjuna.biradar@xxxxxxxxx> wrote:
> Yeah. OSD service stopped.
> Nope, I am not using any orchestration system.
>
> user@host-1:~$ ps -ef | grep ceph
> root      2305     1  7 Jul27 ?        06:52:36 /usr/bin/ceph-osd --cluster=ceph -i 3 -f
> root      2522     1  6 Jul27 ?        06:19:42 /usr/bin/ceph-osd --cluster=ceph -i 0 -f
> root      2792     1  6 Jul27 ?        06:07:49 /usr/bin/ceph-osd --cluster=ceph -i 2 -f
> root      2904     1  8 Jul27 ?        07:48:19 /usr/bin/ceph-osd --cluster=ceph -i 1 -f
> root     13368     1  5 Jul28 ?        04:15:31 /usr/bin/ceph-osd --cluster=ceph -i 17 -f
> root     16685     1  6 Jul28 ?        04:36:54 /usr/bin/ceph-osd --cluster=ceph -i 16 -f
> root     26942     1  7 Jul29 ?        03:54:45 /usr/bin/ceph-osd --cluster=ceph -i 24 -f
> user     42767 42749  0 15:58 pts/3    00:00:00 grep --color=auto ceph
>
> user@host-1:~$ ceph osd tree
> ID WEIGHT    TYPE NAME             UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 170.99991 root default
> -7  68.39996     chassis chassis2
> -4  34.19998         host host-3
>  8   6.84000             osd.8          up  1.00000          1.00000
>  9   6.84000             osd.9          up  1.00000          1.00000
> 10   6.84000             osd.10         up  1.00000          1.00000
> 11   6.84000             osd.11         up  1.00000          1.00000
> 21   6.84000             osd.21         up  1.00000          1.00000
> -5  34.19998         host host-6
> 12   6.84000             osd.12         up  1.00000          1.00000
> 13   6.84000             osd.13         up  1.00000          1.00000
> 14   6.84000             osd.14         up  1.00000          1.00000
> 15   6.84000             osd.15         up  1.00000          1.00000
> 23   6.84000             osd.23         up  1.00000          1.00000
> -6 102.59995     chassis chassis1
> -2  47.87997         host host-1
>  0   6.84000             osd.0          up  1.00000          1.00000
>  1   6.84000             osd.1          up  1.00000          1.00000
>  2   6.84000             osd.2          up  1.00000          1.00000
>  3   6.84000             osd.3          up  1.00000          1.00000
> 16   6.84000             osd.16         up  1.00000          1.00000
> 17   6.84000             osd.17         up  1.00000          1.00000
> 24   6.84000             osd.24         up  1.00000          1.00000
> -3  54.71997         host host-2
>  4   6.84000             osd.4          up  1.00000          1.00000
>  5   6.84000             osd.5          up  1.00000          1.00000
>  6   6.84000             osd.6          up  1.00000          1.00000
>  7   6.84000             osd.7          up  1.00000          1.00000
> 18   6.84000             osd.18         up  1.00000          1.00000
> 19   6.84000             osd.19         up  1.00000          1.00000
> 25   6.84000             osd.25         up  1.00000          1.00000
> 26   6.84000             osd.26         up  1.00000          1.00000
> 20         0             osd.20         up  1.00000          1.00000
> 22         0             osd.22         up  1.00000          1.00000
> user@host-1:~$
>
> user@host-1:~$ df
> Filesystem      1K-blocks       Used  Available Use% Mounted on
> /dev/sdq1       414579696   11211248  382285928   3% /
> none                    4          0          4   0% /sys/fs/cgroup
> udev             65980912          4   65980908   1% /dev
> tmpfs            13198836       1124   13197712   1% /run
> none                 5120          0       5120   0% /run/lock
> none             65994176         12   65994164   1% /run/shm
> none               102400          0     102400   0% /run/user
> /dev/sdl1      7345777988 3233438932 4112339056  45% /var/lib/ceph/osd/ceph-2
> /dev/sda1      7345777988 4484766028 2861011960  62% /var/lib/ceph/osd/ceph-3
> /dev/sdn1      7345777988 3344604424 4001173564  46% /var/lib/ceph/osd/ceph-1
> /dev/sdp1      7345777988 3897260808 3448517180  54% /var/lib/ceph/osd/ceph-0
> /dev/sdc1      7345777988 3029110220 4316667768  42% /var/lib/ceph/osd/ceph-16
> /dev/sde1      7345777988 2673181020 4672596968  37% /var/lib/ceph/osd/ceph-17
> /dev/sdg1      7345777988 3537932824 3807845164  49% /var/lib/ceph/osd/ceph-24
> user@host-1:~$
>
> On Fri, Jul 31, 2015 at 3:53 PM, John Spray <john.spray@xxxxxxxxxx> wrote:
>>
>> On 31/07/15 09:47, Mallikarjun Biradar wrote:
>>>
>>> For a moment it de-lists the removed OSDs, and after some time they
>>> come up again in the ceph osd tree listing.
>>>
>>
>> Is the OSD service itself definitely stopped?  Are you using any
>> orchestration systems (puppet, chef) that might be re-creating its
>> auth key etc?
>>
>> John
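
For reference, a sketch of the usual manual removal sequence on hammer, assuming osd.20 is the OSD being removed (substitute the real id). If the daemon is left running, or the auth key is never deleted so a restarted daemon can re-register, the entry can keep reappearing in the tree:

user@host-1:~$ ceph osd out 20
user@host-1:~$ sudo stop ceph-osd id=20     # upstart; with sysvinit: sudo service ceph stop osd.20
user@host-1:~$ ceph osd crush remove osd.20
user@host-1:~$ ceph auth del osd.20
user@host-1:~$ ceph osd rm 20

Afterwards osd.20 should be gone from both "ceph osd tree" and "ceph auth list"; if it reappears, something is restarting the daemon or re-adding its key.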