Re: Strange configuration with many SAN and few servers

Yes, you can get the OSDs back if you replace the server.

In fact, in your case you might not want to bother including hosts as a distinguishable entity in the CRUSH map; then to "replace the server" you could just mount the LUNs somewhere else and turn on the OSDs. You would need to set a few config options (like the one that automatically updates an OSD's CRUSH location on boot), but it shouldn't be too difficult.
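A minimal sketch of what those config options might look like in ceph.conf, assuming a flat CRUSH layout with no per-host buckets; check the option names and defaults against your Ceph release:

```ini
[osd]
# Prevent an OSD from relocating itself under a new host bucket
# when it starts on a different server after its LUN is remounted.
osd crush update on start = false
```

Alternatively, you could leave the update enabled and pin each OSD's placement explicitly with `osd crush location` in its per-OSD section.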

More concerning is that with a SAN you are relying on its hardware not to fail. Depending on what controls you have, you might choose to handle redundancy differently than normal.
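One hedged way to handle that redundancy differently: spread replicas across SANs instead of hosts. The sketch below assumes you have defined a custom bucket type "san" and created one bucket per SAN (san1..san4) in the CRUSH map; rule syntax follows the decompiled-crushmap format of that era:

```
rule replicate_across_sans {
    ruleset 1
    type replicated
    min_size 1
    max_size 4
    step take default
    # Pick each replica from a different "san" bucket, so losing
    # one SAN chassis costs at most one copy of each object.
    step chooseleaf firstn 0 type san
    step emit
}
```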
-Greg
On Fri, Nov 7, 2014 at 3:42 AM Mario Giammarco <mgiammarco@xxxxxxxxx> wrote:
Hello,
I need to build a ceph test lab.
I have to do it with existing hardware.
I have several iSCSI and Fibre Channel SANs but only a few servers.
Imagine I have:

- 4 SANs with 1 LUN on each SAN
- 2 diskless (apart from the boot disk) servers

I mount two LUNs on the first server and two LUNs on the second server.
Then (I suppose) I run four Ceph OSDs, one on each LUN.
Now if a server breaks I lose two OSDs. But the OSD data is not lost because
it is on disk.
My question is: if I replace the server, can I use the OSDs again by remounting
the LUNs on the new server?

Thanks,
Mario

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com