My fellow ceph-ers,
What is the recommended way of replacing a journal disk/SSD in a
cluster node configured with ceph-deploy (no osd journal setting)?
ceph version 0.94.10 (b1e0532418e4631af01acbc0cedd426f1905f4af)
I do not have the space/connections for another disk on this node.
Taking the node down, taking the SSD out and running dd_rescue to a new
SSD may be my workaround, but I want to learn/test the failing-SSD case.
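For that workaround I would expect something along these lines (device
names are just placeholders, with the old and new SSD attached to
another box):

# clone the old journal ssd onto the replacement; a raw copy brings the
# partition table and partition GUIDs along with the data
dd_rescue /dev/sdX /dev/sdY
# compare the partition tables afterwards
sgdisk --print /dev/sdX
sgdisk --print /dev/sdY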
ceph osd set noout
/etc/init.d/ceph stop osd.0
/etc/init.d/ceph stop osd.1
/etc/init.d/ceph stop osd.2
/etc/init.d/ceph stop osd.3
/etc/init.d/ceph stop osd.14
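I would first check that the daemons are really down before flushing,
for example:

# noout keeps the osds "in", but they should now show as down
ceph osd tree
ps aux | grep ceph-osd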
ceph-osd -i 0 --flush-journal
ceph-osd -i 1 --flush-journal
ceph-osd -i 2 --flush-journal
ceph-osd -i 3 --flush-journal
ceph-osd -i 14 --flush-journal
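Before pulling the SSD I would also note down how each OSD points at
its journal; as far as I know ceph-disk prepared OSDs use a symlink to
the journal partition GUID, so something like:

# record the journal symlink and partition uuid for each osd on this node
for id in 0 1 2 3 14; do
    ls -l /var/lib/ceph/osd/ceph-$id/journal
    cat /var/lib/ceph/osd/ceph-$id/journal_uuid
done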
sgdisk --backup=/root/table /dev/sdg
sgdisk --load-backup=/root/table /dev/sdg
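I assume the load-backup runs after the new SSD is fitted and shows up
as /dev/sdg again; since sgdisk restores the partition GUIDs as well,
the journal symlinks above should still resolve, which I would check
with:

# re-read the restored partition table and confirm the partuuid links are back
partprobe /dev/sdg
ls -l /dev/disk/by-partuuid/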
Can I use mkjournal in this case?
ceph-osd -i 0 --mkjournal
ceph-osd -i 1 --mkjournal
ceph-osd -i 2 --mkjournal
ceph-osd -i 3 --mkjournal
ceph-osd -i 14 --mkjournal
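Followed by, if mkjournal is indeed the right call here, spinning
everything back up:

/etc/init.d/ceph start osd.0
/etc/init.d/ceph start osd.1
/etc/init.d/ceph start osd.2
/etc/init.d/ceph start osd.3
/etc/init.d/ceph start osd.14
ceph osd unset noout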
I am aiming at a command sequence to flush and remove a journal,
replace the SSD, recreate the journals and spin the OSDs back up.
# what I found so far
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-May/039434.html
root@ceph-deploy:~/ceph-cluster-two# ceph-deploy disk list ceph04
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.25): /usr/bin/ceph-deploy disk list ceph04
[ceph04][DEBUG ] connected to host: ceph04
[ceph04][DEBUG ] detect platform information from remote host
[ceph04][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: debian 8.9 jessie
[ceph_deploy.osd][DEBUG ] Listing disks on ceph04...
[ceph04][DEBUG ] find the location of an executable
[ceph04][INFO ] Running command: /usr/sbin/ceph-disk list
[ceph04][DEBUG ] /dev/sda :
[ceph04][DEBUG ] /dev/sda1 ceph data, active, cluster ceph, osd.0, journal /dev/sdg6
[ceph04][DEBUG ] /dev/sdb :
[ceph04][DEBUG ] /dev/sdb1 ceph data, active, cluster ceph, osd.1, journal /dev/sdg2
[ceph04][DEBUG ] /dev/sdc :
[ceph04][DEBUG ] /dev/sdc1 ceph data, active, cluster ceph, osd.2, journal /dev/sdg3
[ceph04][DEBUG ] /dev/sdd :
[ceph04][DEBUG ] /dev/sdd1 ceph data, active, cluster ceph, osd.3, journal /dev/sdg7
[ceph04][DEBUG ] /dev/sde :
[ceph04][DEBUG ] /dev/sde1 other, linux_raid_member
[ceph04][DEBUG ] /dev/sdf :
[ceph04][DEBUG ] /dev/sdf1 ceph data, active, cluster ceph, osd.14, journal /dev/sdg5
[ceph04][DEBUG ] /dev/sdg :
[ceph04][DEBUG ] /dev/sdg2 ceph journal, for /dev/sdb1
[ceph04][DEBUG ] /dev/sdg3 ceph journal, for /dev/sdc1
[ceph04][DEBUG ] /dev/sdg5 ceph journal, for /dev/sdf1
[ceph04][DEBUG ] /dev/sdg6 ceph journal, for /dev/sda1
[ceph04][DEBUG ] /dev/sdg7 ceph journal, for /dev/sdd1
Thank you in advance,
Kind regards,
Jelle de Jong