Our hardware is as follows: three identical servers, each with 8 OSD disks, 1 SSD
as journal, 1 disk for the OS, 32 GB of ECC RAM, and 4 Gb copper Ethernet. We have
been running this cluster since February 2015 and the system load is generally
light, with plenty of idle time.
Right now we have a node that maps RBD images and exports them over NFS. It
works quite well, but at the cost of one extra node acting as a bridge between
the storage clients (VMs) and the storage cluster (Ceph OSDs and MONs).
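For reference, the current gateway setup is roughly the following (pool, image,
paths and client subnet are just placeholders, not our real names):

    # map the RBD image on the gateway node (pool/image name is an example)
    rbd map rbd/nfs-share-01

    # mount it (the image was formatted once beforehand) and export it over NFS
    mount /dev/rbd/rbd/nfs-share-01 /export/nfs-share-01
    echo "/export/nfs-share-01 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
    exportfs -ra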
What I want to know is: is there any reason why I shouldn't map the RBD images on
one of the Ceph servers, the ones that already run the OSD and MON daemons, and
export them as NFS or iSCSI from there? Assuming I have done my homework to make
the setup highly available with Pacemaker (e.g. a floating IP and iSCSI/NFS
resources, sketched below), wouldn't that be better, as it is more reliable?
I.e. I remove the middle-man node(s), so I only have to look after the Ceph nodes
and the VM hosts.
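The Pacemaker side I have in mind is roughly this, using pcs (the IP, resource
names and paths are placeholders; the rbd map/unmap on failover would also need
to be handled, which I have left out here):

    # floating IP the NFS/iSCSI clients will talk to
    pcs resource create nfs_ip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24

    # filesystem on the mapped RBD device, plus the NFS export itself
    pcs resource create nfs_fs ocf:heartbeat:Filesystem \
        device=/dev/rbd/rbd/nfs-share-01 directory=/export/nfs-share-01 fstype=xfs
    pcs resource create nfs_export ocf:heartbeat:exportfs \
        clientspec=10.0.0.0/24 directory=/export/nfs-share-01 fsid=1 options=rw

    # keep everything together and start it in this order on the active node
    pcs resource group add nfs_group nfs_fs nfs_export nfs_ip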
Thank you