Re: EC and rbd-mirroring

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



Hi Ilya

Ah, thanks. I misunderstood that part. However, I can't get it to work, data still goes to the wrong pool.

I did this, which seemed to stick.

# ceph config set global rbd_default_data_pool rbd_data
# ceph config dump | grep rbd_default
global advanced rbd_default_data_pool rbd_data

I turned on mirroring and image sync started alright:

# rbd mirror image status rbd/depot64
depot64:
  global_id:   9315eb0f-207c-4f7c-928c-ec4ba7ba7c47
  state:       up+syncing
  description: bootstrapping, IMAGE_SYNC/COPY_IMAGE 29%
  service:     dcn-ceph-02.qunlre on dcn-ceph-02
  last_update: 2021-08-18 19:21:22



# rbd info depot64
rbd image 'depot64':
	size 1 TiB in 262144 objects
	order 22 (4 MiB objects)
	snapshot_count: 2
	id: bfee43c1858765
	block_name_prefix: rbd_data.bfee43c1858765
	format: 2
	features: layering, exclusive-lock, journaling
	op_features:
	flags:
	create_timestamp: Wed Aug 18 19:02:45 2021
	access_timestamp: Wed Aug 18 19:02:45 2021
	modify_timestamp: Wed Aug 18 19:02:45 2021
	journal: bfee43c1858765
	mirroring state: enabled
	mirroring mode: journal
	mirroring global id: 9315eb0f-207c-4f7c-928c-ec4ba7ba7c47
	mirroring primary: false

No data in the rbd_data pool though, still going to the rbd pool:

# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    224 TiB  162 TiB   61 TiB    61 TiB      27.32
ssd    1.3 TiB  860 GiB  481 GiB   481 GiB      35.89
TOTAL  225 TiB  163 TiB   62 TiB    62 TiB      27.37

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1   47 MiB       61  140 MiB   0.02    240 GiB
cephfs_data             3  128   33 TiB   56.26M   57 TiB  36.29     50 TiB
cephfs_metadata         4   32   26 GiB    1.32M   53 GiB   6.80    360 GiB
rbd_data                5   32    8 KiB        1   16 KiB      0     50 TiB
rbd                     6  128  257 GiB   62.55k  420 GiB  36.84    360 GiB

Did I miss something obvious?

Thanks,

Torkil

On 18/08/2021 14.30, Ilya Dryomov wrote:
On Wed, Aug 18, 2021 at 12:40 PM Torkil Svensgaard <torkil@xxxxxxxx> wrote:

Hi

I am looking at one way mirroring from cluster A to B cluster B.

As pr [1] I have configured two pools for RBD on cluster B:

1) Pool rbd_data using default EC 2+2
2) Pool rbd using replica 2

I have a peer relationship set up so when I enable mirroring on an image
in cluster A it will be replicated to cluster B but it will put both
data and metadata in the rbd pool.

How do I get the rbd-mirror daemon to use rbd_data for data and rbd for
metadata only?

Thanks,

Torkil

[1]
https://docs.ceph.com/en/latest/rados/operations/erasure-code/#erasure-coding-with-overwrites

Hi Torkil,

This is covered here:

https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#data-pools

Thanks,

                 Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



[Index of Archives]     [Information on CEPH]     [Linux Filesystem Development]     [Ceph Development]     [Ceph Large]     [Ceph Dev]     [Linux USB Development]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]     [xfs]


  Powered by Linux