I am working with a small test cluster, but the problems described here will carry over to production. I have an external Fibre Channel storage array and have exported two 3TB disks from it (just as JBODs). I can use ceph-deploy to create an OSD for each of these disks on a node named Vashti. So far everything is fine.
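For reference, I created the OSDs with commands along these lines (sdb and sdc are placeholders for whatever device names the FC LUNs get on the node):

  ceph-deploy osd create vashti:sdb
  ceph-deploy osd create vashti:sdc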
The problem is that I have another machine, named Zadok (which will also be part of the Ceph cluster), on the same Fibre Channel network, so it can see the same two disks. This on its own is still not a problem. But the Ceph init script seems to scan all the devices it can see and, if it finds an OSD on any of them, simply starts it. So both machines find both disks and mount/start both of them, which leads to corruption. I have seen this happen already.
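To see which node has grabbed what, I have been checking the mounts on both hosts (and ceph-disk list, where available), e.g.:

  mount | grep ceph   # shows the /var/lib/ceph/osd/ceph-* data mounts
  ceph-disk list      # lists disks and the OSD partitions Ceph recognizes

Shortly after boot, both hosts show mounts for both OSDs.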
So how can I prevent this from happening? Ideally I would want one OSD running on each machine. I cannot use Fibre Channel zoning to make one disk invisible to one machine, because zoning only works at the FC port level, and both disks come from the same storage array and therefore share the same FC port.
Is there any way to manually configure which OSDs are started on which machines? The [osd.N] configuration block includes the OSD name and host, so is there a way to say that, say, osd.0 should only be started on host vashti and osd.1 only on host zadok? I tried a configuration along these lines:

  [osd.0]
  host = vashti

  [osd.1]
  host = zadok

But the init script still starts both of them. Is there any way to disable the automatic scanning of disks? I'm stuck with this hardware, so hopefully there is a way to make it work.

Thanks for any help.

Kevin
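P.S. To make the goal concrete: what I'm after is the boot-time equivalent of starting only an explicitly named daemon on each host, something like

  sudo service ceph start osd.0   # on vashti
  sudo service ceph start osd.1   # on zadok

with nothing auto-started from the device scan.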