For example, here is my configuration:

superuser@admin:~$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    242T     209T      20783G       8.38
POOLS:
    NAME                  ID     USED      %USED     MAX AVAIL     OBJECTS
    ec_backup-storage     4      9629G     3.88      137T          2465171
    cache                 5      136G      0.06      38393M        35036
    block-devices         6      1953G     0.79      70202G        500060
ec_backup-storage - erasure-coded pool, k=2, m=1 (default)
cache - replicated pool made of 12 dedicated 60GB SSDs, replica size=3, used as a cache tier for the EC pool
block-devices - replicated pool, replica size=3, using the same OSDs as the erasure-coded pool
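For context, a sketch of how a setup like this can be put together (the profile name and PG counts below are just example values, and the CRUSH rule that keeps the cache pool on the SSDs is omitted):

# erasure-code profile and the EC pool
superuser@admin:~$ ceph osd erasure-code-profile set ec21 k=2 m=1
superuser@admin:~$ ceph osd pool create ec_backup-storage 1024 1024 erasure ec21
# replicated SSD pool used as the cache tier
superuser@admin:~$ ceph osd pool create cache 128 128 replicated
superuser@admin:~$ ceph osd pool set cache size 3
# attach it as a writeback cache in front of the EC pool
superuser@admin:~$ ceph osd tier add ec_backup-storage cache
superuser@admin:~$ ceph osd tier cache-mode cache writeback
superuser@admin:~$ ceph osd tier set-overlay ec_backup-storage cache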
In the 'MAX AVAIL' column you can see that the EC pool currently has 137TB of free space, while the replicated pool has only about 70TB, even though both pools sit on the same OSDs. So in my case the EC pool gives roughly twice the usable capacity!
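For anyone checking the arithmetic (my own back-of-the-envelope numbers, not ceph output): a replicated pool with size=3 stores 3 raw bytes per logical byte, while the EC pool with k=2, m=1 stores (k+m)/k = 1.5. Re-expressing the replicated pool's 70202G of MAX AVAIL through the EC overhead gives back the same ~137T that ceph df reports:

superuser@admin:~$ echo "70202/1024*3/1.5" | bc -l
137.11328125000000000000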
On 12.03.2015 17:50, Thomas Foster wrote:
Thank you! That helps a lot.
On Mar 12, 2015 10:40 AM, "Steve Anthony" <sma310@xxxxxxxxxx> wrote:
Actually, it's more
like 41TB. It's a bad idea to run at near full capacity (by
default past 85%) because you need some space where Ceph can
replicate data as part of its healing process in the event
of disk or node failure. You'll get a health warning when
you exceed this ratio.
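Roughly where that 41TB comes from (a back-of-the-envelope check, using the 85% warning threshold and 3 replicas):

$ echo "145 * 0.85 / 3" | bc -l
41.08333333333333333333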
As to how much space you can save with erasure coding, that
will depend on if you're using RBD and need a cache layer
and the values you set for k and m (number of data chunks
and coding chunks). There's been some discussion on the list
with regards to choosing those values.
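As a rough illustration of the trade-off (the profile name below is just an example): the raw overhead is (k+m)/k and you can lose up to m chunks, but each placement group then needs k+m OSDs, ideally in separate failure domains.

# k=2, m=1  -> 1.5x raw,   survives 1 lost chunk
# k=4, m=2  -> 1.5x raw,   survives 2 lost chunks, needs >= 6 OSDs per PG
# k=8, m=3  -> 1.375x raw, survives 3 lost chunks, needs >= 11 OSDs per PG
$ ceph osd erasure-code-profile set example_profile k=4 m=2
$ ceph osd erasure-code-profile get example_profile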
-Steve
On 03/12/2015 10:07 AM, Thomas Foster wrote:
I am looking into how I can maximize my
space with replication, and I am trying to understand
how I can do that.
I have 145TB of space and a replication of 3 for
the pool, and was thinking that the max data I can have
in the cluster at one time is ~47TB... is that correct? Or is
there a way to get more data into the cluster, using less
raw space, with erasure coding?