3 hosts:
1 CPU + 4 disks (3 TB SATA) per host
Ceph version: 0.80.6
OS: Red Hat 6.5
Cluster: 3 hosts, with 3 MONs + 9 OSDs (one OSD per disk)
1. When the cluster status is HEALTH_OK, I write a little data, and then I can find some block files in the PG directory:
[root@rhls-test2 release]# ll data/osd/ceph-0/current/2.106_head/
total 4100
-rw-r--r--. 1 root root 4194304 Dec 17 16:25 rb.0.1021.6b8b4567.000000000024__head_753F3906__2
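(For context, the data was written through RBD; the rb.0.* object names above are RBD block objects. A minimal way to write a little data, with a hypothetical image name and device node, would be roughly:)
rbd create testimg --size 1024
rbd map testimg
# device node may differ, e.g. /dev/rbd0 or /dev/rbd/rbd/testimg
dd if=/dev/zero of=/dev/rbd0 bs=4M count=16 oflag=direct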
2. Before replacing the OSD disk, we set the cluster NOOUT flag.
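Presumably with the standard flag commands:
ceph osd set noout
# and, after the replacement is finished:
ceph osd unset noout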
3. We stop osd.2, which holds PG 2.106 as a replica, and replace its disk with an empty disk.
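On Red Hat 6.5 this is presumably done with the sysvinit script (an assumption on my part, the exact command is not shown):
service ceph stop osd.2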
4. We format the new disk with an XFS filesystem and then initialize it with ceph-osd --mkfs:
ceph-osd -i 2 --mkfs --set-osd-fsid 86828adf-7579-4127-8789-cb5e8266f15c
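For completeness, the XFS formatting and mount before this mkfs step would look roughly like the following (device name and mount point are only examples):
mkfs.xfs -f /dev/sdb
mount /dev/sdb data/osd/ceph-2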
Note:
To simplify the disk replacement, we modified the ceph-osd code and added a --set-osd-fsid option so that the OSD reuses the old fsid.
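The OSD is then started again, presumably with the sysvinit script (again an assumption):
service ceph start osd.2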
5. The OSD starts OK, and all PGs report active+clean:
cluster 7c731223-9637-4e21-a6f5-c576a9cf92a4
health HEALTH_OK
monmap e1: 3 mons at {a=192.169.1.84:6789/0,b=192.169.1.85:6789/0,c=192.169.1.86:6789/0}, election epoch 78, quorum 0,1,2 a,b,c
osdmap e808: 9 osds: 9 up, 9 in
pgmap v36218: 3072 pgs, 3 pools, 7069 MB data, 8254 objects
48063 MB used, 22298 GB / 22345 GB avail
3072 active+clean
6. But I find that the osd.2 disk has no data blocks at all, only the metadata (omap, superblock, etc.). All the PG directories exist, but they are empty:
[root@rhls-test2 release]# ll data/osd/ceph-2/current/2.106_head/
total 0
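For reference, the PG can also be inspected with the standard commands; a deep scrub should compare the replica copies object by object:
ceph pg 2.106 query
ceph pg deep-scrub 2.106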