The clients will need to be able to contact the mons and the osds. NEVER use 2 mons: the mons form a quorum and work best with odd numbers (1, 3, 5, etc.), so 1 mon is better than 2 mons.

It is better to remove the raid and put the individual disks in as OSDs; ceph handles the redundancy through replica copies. It is much better still to have a third node for failure domain reasons, so you can keep 3 copies of your data with 1 copy in each of the 3 servers. Data is stored as objects, which are grouped into placement groups (PGs) that are assigned to the OSDs. To migrate your data into ceph you would set up CephFS and rsync the data into it, roughly as sketched below.
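Very roughly, the CephFS route would look something like this. Treat it only as an outline; the pool names, PG counts, mount point, monitor hostname and source path are just examples, so adjust them for your own cluster:

    # create the data and metadata pools (PG counts here are only example values)
    ceph osd pool create cephfs_data 128
    ceph osd pool create cephfs_metadata 64
    # keep 3 copies of everything (you can drop to 2 while you only have 2 nodes)
    ceph osd pool set cephfs_data size 3
    ceph osd pool set cephfs_metadata size 3
    # create the filesystem on those pools
    ceph fs new cephfs cephfs_metadata cephfs_data
    # mount it with the kernel client and copy the data in
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    rsync -aHAX /srv/data/ /mnt/cephfs/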
I don't usually recommend this, but you might prefer Gluster. You would use the raided disks as the brick in each node and set it up with 2 copies (3 would be better, but you only have 2 nodes). Each server can then export the gluster mount point over NFS. The files are stored as flat files on the bricks, but you still need to create the gluster volume first and then rsync the data into the mounted volume rather than directly onto the disk (see the sketch below). With this you don't have to worry about the mon service, the mds service, the osd services, balancing the crush map, etc. Gluster of course has its own complexities and limitations, but it might be closer to what you're looking for right now.
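Again only as a rough sketch (the volume name, hostnames and brick paths are made up, so adjust to taste), the gluster side would be along these lines:

    # from server1: join the two nodes into a trusted pool
    gluster peer probe server2
    # create a 2-way replicated volume, one raided disk set per node as the brick
    gluster volume create gvol0 replica 2 server1:/bricks/brick1 server2:/bricks/brick1
    gluster volume start gvol0
    # mount the volume and rsync the data into it (not directly onto the brick)
    mount -t glusterfs server1:/gvol0 /mnt/gluster
    rsync -aHAX /srv/data/ /mnt/gluster/

Clients can then mount the volume with the gluster fuse client from either server, or you can export the mount point over NFS.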
On Wed, May 3, 2017 at 4:06 PM Marcus Pedersén <marcus.pedersen@xxxxxx> wrote:
Hello everybody!
I am a newbie on ceph and I really like it and want to try it out.
I have a couple of thoughts and questions after reading the documentation and need some help to see that I am on the right path.
Today I have two file servers in production that I want to start my ceph fs on and expand from there.
I want these servers to function as a failover cluster, and as I see it I will be able to do that with ceph.
To get a failover cluster without a single point of failure I need at least 2 monitors, 2 mds and 2 osds (my existing file servers), right?
Today, both of the file servers use a raid on 8 disks. Do I format my raid with xfs and run my osds on the raid?
Or do I split up my raid and add the disks directly to the osds?
When I connect clients to my ceph fs, are they talking to the mds or are the clients talking to the osds directly as well?
If the clients just talk to the mds, then the osds and the monitors can be on a separate network, with the mds connected to both the client network and the local "ceph" network.
Today, we have about 11TB of data on these file servers, how do I move the data to the ceph fs? Is it possible to rsync to one of the osd disks, start the osd daemon and let it replicate itself?
Is it possible to set up the ceph fs with 2 mds, 2 monitors and 1 osd and add the second osd later?
This is to be able to have one file server in production, configure ceph and test with the other, swap to the ceph system and, when it is up and running, add the second osd.
Of course I will test this out before I bring it to production.
Many thanks in advance!
Best regards
Marcus
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com