Re: Data replication not working

> I have almost the same problem.
> my cluster before: 1 mon, 2 mds, 3 osd (osd-00, osd-01, osd-02)
> when i wanted to add the fourth osd, osd-03, i did:
> mon-00 # ceph mon getmap -o /opt/cluster_debug/monmap
> osd-03 # cosd -c /etc/ceph/ceph.conf -i 3 --mkfs --monmap /root/monmap
> mon-00 # ceph osd setmaxosd 4
> osd-03 # service ceph start osd3
>    ceph osd getcrushmap -o /opt/cluster_debug/crushmap
>
>  crushtool -d /opt/cluster_debug/crushmap -o /opt/cluster_debug/crushmap.txt
>
>  vim /opt/cluster_debug/crushmap.txt
>
>  crushtool -c /opt/cluster_debug/crushmap.txt -o /opt/cluster_debug/crushmap.new
>
>  ceph osd setcrushmap -i /opt/cluster_debug/crushmap.new
>
> [root@mon-00 ~]# ceph osd stat
>
> 2011-04-29 03:25:31.039361 mon <- [osd,stat]
>
> 2011-04-29 03:25:31.039790 mon0 -> 'e8: 4 osds: 4 up, 4 in' (0)
>
> but no data was migrated to osd-03; additionally, the cosd process on all 4 osds
> disappeared!!!
>
> Attachment is the new crushmap.
>

Your crushmap is not correct. You should add the entry "item device3
weight 1.000" in the root domain; without that entry the new OSD has no
weight in CRUSH, so no data will ever be placed on it.
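
For reference, a minimal sketch of what the edited crushmap.txt might
look like. The bucket name, ids, and algorithm here are illustrative
assumptions (your decompiled map will show the real ones); only the
"item device3 weight 1.000" line is the actual fix:

```
# Illustrative crushmap.txt fragment -- bucket name/ids/alg are assumptions
device 0 device0
device 1 device1
device 2 device2
device 3 device3

root root {
	id -1
	alg straw
	hash 0
	item device0 weight 1.000
	item device1 weight 1.000
	item device2 weight 1.000
	item device3 weight 1.000	# the missing entry for osd-03
}
```

After recompiling with "crushtool -c" and injecting with "ceph osd
setcrushmap -i" as in your transcript above, data should start
rebalancing onto osd-03. You can confirm the entry survived compilation
by decompiling crushmap.new again with "crushtool -d".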

-- 
Henry Chang

