Re: Adding Disks / Storage

Hello Jim,

On 14/08/13 13:10, Jim Summers wrote:
Hello All,

Just starting out with ceph and wanted to make sure that ceph will do a
couple of things.

1.  Has the ability to keep a cephfs available to users even if one of
the OSD servers has to be rebooted or whatever.

Yes, provided the pool is replicated and the OSD server you want to reboot (or that fails, or whatever) only holds a portion of the replicas.

i.e., if you have 2 OSDs on that pool, the pool has a replica size of 2, and each OSD is on a different server, then that pool will still be available if one of those OSDs is brought down (for whatever reason).
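To illustrate, the replica size is a per-pool setting you can adjust with the standard ceph CLI (the pool name "data" below is just an example):

```shell
# Set the replication factor of a pool named "data" (example name) to 2,
# so every object is stored on two different OSDs.
ceph osd pool set data size 2

# min_size 1 lets the pool keep serving I/O while one replica is down.
ceph osd pool set data min_size 1

# Verify the setting.
ceph osd pool get data size
```

Whether the two replicas actually land on different servers is governed by the CRUSH map, not by this setting alone.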

2.  It is possible to keep adding disks to build one large storage /
cephfs.  By this I am thinking I want users to see the mount point:
/data/ceph
and it is initially made from two OSD servers that have 24TB of local storage to
serve.  So that would give them about 40TB initially.

Yes, but you'll have to keep point 1 in mind.

So, say you have 24TB worth of OSDs per server, two servers in total. If you set your cephfs pool's replica size to 1, you'll be able to use the full 24TB*2; but since that means you're not replicating any of your data, point 1 goes out the window.

If, on the other hand, you set replica size 2, you'll guarantee point 1 but end up with 24TB of usable space.
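The arithmetic is simply raw capacity divided by the replica size; a quick sketch with the numbers above (two servers with 24TB of OSDs each):

```shell
# Usable capacity = raw capacity / replica size.
raw_tb=$((24 * 2))   # two servers, 24TB of OSDs each
for size in 1 2; do
  echo "replica size $size: $((raw_tb / size))TB usable"
done
```

(In practice you'll see a bit less than this, since you never want to run OSDs anywhere near 100% full.)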

For more on replicas and how to configure your CRUSH maps to take advantage of different failure domains, you should look into the docs [1,2,3] -- they should help you figure out the best approach for your use case.

[1] - http://ceph.com/docs/master/rados/operations/data-placement/
[2] - http://ceph.com/docs/master/rados/operations/pools/
[3] - http://ceph.com/docs/master/rados/operations/crush-map/


3.  Then over time add a third OSD server that also has 24TB and that just
becomes part of the cephfs that is mounted at /data/ceph which would
then allow it to have about 60TB.

Is that do-able?

You can easily expand or contract the cluster by adding or removing OSDs. So yes.
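As a sketch of what expansion looks like with ceph-deploy (the hostname and device path below are made up; your provisioning may differ):

```shell
# Prepare and activate a new OSD on host osd3, backed by /dev/sdb
# (hypothetical host and device names).
ceph-deploy osd create osd3:/dev/sdb

# Confirm the new OSD shows up in the CRUSH hierarchy...
ceph osd tree

# ...and watch the cluster rebalance data onto it.
ceph -w
```

The new capacity becomes part of the same pools, so the cephfs mounted at /data/ceph simply grows.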


4.  The OSD servers also have fiber channel access to some LUNs on a DDN
SAN.  Can I also add those into the same storage pool and mount point
/data/ceph

My short answer would be yes. As long as an OSD (the daemon) is configured to use that LUN as its backing device, it should be possible.
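To the OSD daemon, a multipathed FC LUN is just another block device. A hedged sketch, assuming a device-mapper path and OSD id that are entirely made up:

```shell
# Format the LUN (hypothetical multipath device) with XFS,
# a commonly used filesystem for OSD backing stores.
mkfs.xfs /dev/mapper/ddn-lun0

# Mount it where the OSD expects its data directory (id 12 is made up),
# then deploy an OSD on top of it the same way as with a local disk.
mkdir -p /var/lib/ceph/osd/ceph-12
mount /dev/mapper/ddn-lun0 /var/lib/ceph/osd/ceph-12
```

You may want to place SAN-backed OSDs in their own CRUSH hierarchy if their failure characteristics differ from the local disks.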

I would advise you to take a look through the docs, as I am sure most of your questions are answered there.


  -Joao


--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



