Re: OSD on an external, shared device

Ah, that sounds like what I want. I'll look into that, thanks.

Kevin

On 11/27/2013 11:37 AM, LaSalle, Jurvis wrote:
Is LUN masking an option in your SAN?

On 11/27/13, 2:34 PM, "Kevin Horan" <khoran@xxxxxxxxxx> wrote:

Thanks. I may have to go this route, but it seems awfully fragile. One
stray command could destroy the entire cluster, replicas and all. Since
all disks are visible to all nodes, any one of them could mount
everything, corrupting all OSDs at once.
     Surely other people are using external FC drives; how do you limit
the visibility of the drives? Am I missing something here? Could a
configuration option or something be added to Ceph to ensure that it
never tries to mount things on its own?

Thanks.

Kevin
On 11/26/2013 05:14 PM, Kyle Bader wrote:
      Is there any way to manually configure which OSDs are started on
which machines? The osd configuration block includes the osd name and
host, so is there a way to say that, say, osd.0 should only be started
on host vashti and osd.1 should only be started on host zadok? I tried
using this configuration:
The Ceph udev rules will automatically mount disks that match the Ceph
"magic" GUIDs; to dig through the full logic you need to inspect these
files:

/lib/udev/rules.d/60-ceph-partuuid-workaround.rules
/lib/udev/rules.d/95-ceph-osd.rules
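A minimal sketch of the match those rules key on. It assumes the
well-known ceph-disk OSD data partition type GUID and that you would
feed it the value reported by `blkid -o value -s PART_ENTRY_TYPE -p
/dev/sdX1`; the helper name `is_ceph_osd_part` is made up for
illustration:

```shell
# Partition type GUID that ceph-disk writes on OSD data partitions;
# the udev rules trigger on it (assumption: standard ceph-disk layout).
CEPH_OSD_GUID="4fbd7e29-9d25-41b8-afd0-062c0ceff05d"

# Hypothetical helper: does a partition's type GUID mark it as a Ceph OSD?
# $1 = type GUID, e.g. from: blkid -o value -s PART_ENTRY_TYPE -p /dev/sdX1
is_ceph_osd_part() {
    # Compare case-insensitively, since tools differ in GUID casing.
    [ "$(printf '%s' "$1" | tr 'A-Z' 'a-z')" = "$CEPH_OSD_GUID" ]
}
```

Any partition whose type GUID matches will be picked up by the rules
regardless of which host happens to see the disk, which is why shared FC
LUNs are mounted everywhere.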

The upstart scripts look to see what is mounted at /var/lib/ceph/osd/
and start osd daemons as appropriate:

/etc/init/ceph-osd-all-starter.conf

In theory you should be able to remove the udev scripts and mount the
OSDs under /var/lib/ceph/osd if you're using upstart. You will want to
make sure that upgrades to the ceph package don't replace the files;
that might mean making a null rule and passing
-o Dpkg::Options::="--force-confold" in ceph-deploy/chef/puppet/whatever.
You will also want to avoid putting the mounts in fstab, because a
failed device or filesystem could render your node unbootable.
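The workaround above might be sketched roughly as follows. Assumptions:
an empty file of the same name in /etc/udev/rules.d shadows the packaged
rule in /lib/udev/rules.d (standard udev precedence), and RULES_DIR is
parameterized here only so the sketch can be dry-run against a scratch
directory instead of the live system:

```shell
#!/bin/sh
# Mask Ceph's packaged udev rules with empty overrides so udev never
# auto-mounts OSD partitions on hosts that can see the shared LUNs.
# Point RULES_DIR at a temp dir to dry-run; defaults to the real path.
RULES_DIR="${RULES_DIR:-/etc/udev/rules.d}"

for rule in 60-ceph-partuuid-workaround.rules 95-ceph-osd.rules; do
    # An empty file here takes precedence over /lib/udev/rules.d/$rule.
    : > "$RULES_DIR/$rule"
done

# On package upgrades, tell dpkg to keep your existing (masking) files:
#   apt-get -o Dpkg::Options::="--force-confold" install ceph
#
# Then mount each OSD yourself (not via fstab) before starting daemons:
#   mount /dev/disk/by-partuuid/<osd-part-uuid> /var/lib/ceph/osd/ceph-0
```

This keeps the decision of which host mounts which OSD entirely in your
own tooling rather than in udev's device-wide matching.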

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



