CephFS New EC Data Pool

Hello all,

I'm trying to add a new data pool to CephFS, as we need some
longer-term archival storage.

ceph mds add_data_pool archive
Error EINVAL: can't use pool 'archive' as it's an erasure-code pool
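
As a quick sanity check (a sketch; the exact output wording may differ),
the pool type can be confirmed with:

ceph osd dump | grep "'archive'"

which prints the pool line and shows 'erasure' rather than 'replicated'.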

Here are the steps I took to create the pools for this new data pool:
ceph osd pool create arccache 512 512 replicated replicated_ruleset
ceph osd pool set arccache min_size 2
ceph osd pool set arccache size 3
ceph osd erasure-code-profile set ec62profile k=6 m=2 ruleset-failure-domain=disktype ruleset-root=std
ceph osd pool create archive 2048 2048 erasure ec62profile ecpool
ceph osd tier add-cache archive arccache $((1024*1024*1024*1024*5))
ceph osd tier cache-mode arccache writeback
ceph osd tier set-overlay archive arccache
ceph osd pool set arccache cache_target_dirty_ratio 0.3
ceph osd pool set arccache target_max_objects 2000000
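
To confirm the tiering actually took effect, I'd check the pool lines
again (a sketch; the tier fields reference numeric pool ids, not names):

ceph osd dump | grep -E "'archive'|'arccache'"

The archive line should carry read_tier/write_tier pointing at the
arccache pool id, and the arccache line should show tier_of plus
cache_mode writeback.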

I'm running Ceph 0.94.2 on CentOS 7.1.

The other thing that is probably *not* what we want is that I *can* add
the cache tier (arccache) as a data pool to CephFS. Doing so adds pool
id 35 (the cache tier) to the mdsmap, which is not what happens when
you create a new CephFS filesystem with a tiered EC pool as the data pool.

dumped mdsmap epoch 63386
epoch   63386
flags   0
created 2015-06-19 09:52:52.598619
modified        2015-07-21 16:21:12.672241
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   17592186044416
last_failure    63309
last_failure_osd_epoch  86152
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
max_mds 1
in      0
up      {0=142503496}
failed
stopped
data_pools      34,35
metadata_pool   32
inline_data     disabled
141642223:      10.5.38.2:6800/78600 'hobbit02' mds.-1.0 up:standby seq 1
141732776:      10.5.38.14:6846/5875 'hobbit14' mds.-1.0 up:standby seq 1
156005649:      10.5.38.13:6892/20895 'hobbit13' mds.-1.0 up:standby seq 1
142503496:      10.5.38.1:6926/213073 'hobbit01' mds.0.2916 up:active seq 41344
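
For reference, mapping the pool ids in data_pools back to names (35 is
the arccache pool here) is easy enough with either of:

ceph osd lspools
ceph df

both of which list the pool id/name pairs.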

Any thoughts? Is it a bug? Any workarounds?

--
Adam


