Re: Replication question


Ok, now if I run a lab and the data is somewhat important, but I can bear losing it, couldn't I shrink the pool's replica count and thereby increase the amount of storage I can use, without resorting to erasure coding?

So for 145TB with a replica count of 3: 145TB / 3 ≈ 48TB, minus the ~15% near-full headroom, gives ~41TB usable in the cluster.

But if that same cluster's replica count were decreased to 2, I could get 145TB / 2 ≈ 72TB, minus the same overhead, for roughly 62TB in the cluster at one time... correct?
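For reference, in case you want to try it: the replica count is a per-pool setting. A minimal sketch, assuming a hypothetical pool named 'mypool' (substitute your real pool name):

    # check the current replica count
    $ ceph osd pool get mypool size

    # reduce to two copies; min_size 1 lets I/O continue with only one copy left
    $ ceph osd pool set mypool size 2
    $ ceph osd pool set mypool min_size 1

Keep in mind that with size=2 a single disk failure leaves only one copy of the affected data until recovery finishes.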

Thanks in advance!


On Mar 12, 2015 11:53 AM, "Kamil Kuramshin" <kamil.kuramshin@xxxxxxxx> wrote:
For example, here is my configuration:

superuser@admin:~$ ceph df

GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    242T      209T       20783G          8.38
POOLS:
    NAME                  ID     USED      %USED     MAX AVAIL     OBJECTS
    ec_backup-storage     4      9629G      3.88          137T     2465171
    cache                 5       136G      0.06        38393M       35036
    block-devices         6      1953G      0.79        70202G      500060


ec_backup-storage - an erasure-coded pool, k=2, m=1 (the default profile)
cache - a replicated pool of 12 dedicated 60GB SSDs, replica size=3, used as a cache tier for the EC pool
block-devices - a replicated pool, replica size=3, on the same OSDs as the erasure-coded pool
In the 'MAX AVAIL' column you can see that the EC pool currently has 137TB of free space, while a replicated pool on the very same OSDs shows only 70TB. So in my case the EC pool gives about twice the effective space!
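For anyone who wants to reproduce a setup like this, the profile and pool creation look roughly as follows; the profile name, pool name, and PG count below are illustrative, not taken from my cluster:

    # inspect the default erasure-code profile (k=2, m=1)
    $ ceph osd erasure-code-profile get default

    # or define a profile explicitly and create an EC pool that uses it
    $ ceph osd erasure-code-profile set myprofile k=2 m=1
    $ ceph osd pool create ecpool 128 128 erasure myprofile

With k=2, m=1 each object is stored as 2 data chunks plus 1 coding chunk, so the raw-space overhead is 1.5x instead of the 3x of triple replication, which is why MAX AVAIL above shows roughly twice as much free space for the EC pool.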

On 12.03.2015 17:50, Thomas Foster wrote:

Thank you!  That helps a lot.

On Mar 12, 2015 10:40 AM, "Steve Anthony" <sma310@xxxxxxxxxx> wrote:
Actually, it's more like 41TB. It's a bad idea to run at near full capacity (by default past 85%) because you need some space where Ceph can replicate data as part of its healing process in the event of disk or node failure. You'll get a health warning when you exceed this ratio.
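The thresholds involved are the standard monitor settings; a sketch of the relevant ceph.conf entries, shown here with their defaults:

    [global]
    # warn (HEALTH_WARN) when any OSD passes 85% utilization
    mon osd nearfull ratio = .85
    # block writes when any OSD passes 95% utilization
    mon osd full ratio = .95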

You can use erasure coding to increase the amount of data you can store beyond 41TB, but you'll still need some replicated disk as a caching layer in front of the erasure coded pool if you're using RBD. See: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-December/036430.html
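The cache tier wiring itself is only a few commands. A minimal sketch, assuming a hypothetical EC pool 'ecpool' and a replicated SSD pool 'cachepool':

    # put the replicated pool in front of the EC pool as a writeback cache
    $ ceph osd tier add ecpool cachepool
    $ ceph osd tier cache-mode cachepool writeback
    # send client I/O to the cache tier
    $ ceph osd tier set-overlay ecpool cachepool

    # the cache tier needs a hit set and a size target to flush and evict
    $ ceph osd pool set cachepool hit_set_type bloom
    $ ceph osd pool set cachepool target_max_bytes 500000000000  # ~500GB, tune to taste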

As to how much space you can save with erasure coding, that will depend on whether you're using RBD (and therefore need a cache layer) and on the values you set for k and m (the number of data chunks and coding chunks). There's been some discussion on the list about choosing those values.
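As a rough rule of thumb, the usable fraction of raw space in an EC pool is k/(k+m). A few illustrative values (ignoring the near-full headroom and any cache pool):

    replica 3:    usable 1/3      = ~33% of raw, survives 2 lost copies
    EC k=2, m=1:  usable 2/(2+1)  = ~67% of raw, survives 1 lost chunk
    EC k=4, m=2:  usable 4/(4+2)  = ~67% of raw, survives 2 lost chunks
    EC k=8, m=3:  usable 8/(8+3)  = ~73% of raw, survives 3 lost chunks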

-Steve

On 03/12/2015 10:07 AM, Thomas Foster wrote:
I am looking into how to maximize my usable space given replication, and I am trying to understand how best to do that.

I have 145TB of space and a replication factor of 3 for the pool, and was thinking that the maximum data I can have in the cluster at one time is ~48TB... is that correct?  Or is there a way to get more data into the cluster using erasure coding?

Any help would be greatly appreciated.





-- 
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma310@xxxxxxxxxx







_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
