Can I remove the cephfs_data pool when using CephFS mode (mimic 13.2)?

Hi everyone,

When I started setting up a test cluster, I created a cephfs_data pool because I thought I was going to use the replicated type for the pool. This turned out not to be the most ideal choice when it comes to usable disk space, so I decided to create an additional erasure-coded pool with k=2, m=2, which I did. Please see the attached log file. I also attached the output of the pools from the dashboard.
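
For reference, I created the EC pool and added it to CephFS roughly like this, assuming the filesystem is simply named 'cephfs' (the profile name and failure domain below are from memory, so treat them as illustrative):

ceph osd erasure-code-profile set th-ec22-profile k=2 m=2 crush-failure-domain=host
ceph osd pool create th-ec22 96 96 erasure th-ec22-profile
ceph osd pool set th-ec22 allow_ec_overwrites true
ceph fs add_data_pool cephfs th-ec22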

The problem is this: whenever an OSD needs to be brought down, the cluster rebalances and enters degraded mode. The rebalance takes a while since all the PGs need to be redistributed. Now I could use something like:

ceph osd set noout && ceph osd set norebalance
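
and unset both flags again once the OSD is back up:

ceph osd unset noout && ceph osd unset norebalance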

However, I want the cluster to recover much faster, and the cephfs_data pool, which has a lot of PGs that I cannot reduce (decreasing pg_num is not possible in mimic), is keeping the cluster busy. This pool isn't used for storing any data, and I am not sure whether I can remove it without affecting the th-ec22 EC pool.
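
For completeness, this is roughly what I had in mind, assuming the filesystem is simply named 'cephfs', that cephfs_data is not the filesystem's default data pool, and that the monitors allow pool deletion (mon_allow_pool_delete = true): first detach the pool from the filesystem, then delete it.

ceph fs rm_data_pool cephfs cephfs_data
ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it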

Does anyone have any thoughts on this?

In case I've provided insufficient or missing information, please let me know. I'll gladly share it with you.

-- 
Met vriendelijke groeten, Kind regards

Valentin Bajrami
Target Holding

Attachment: Screenshot from 2019-08-07 20-34-28.png
Description: PNG image

# ceph df detail
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED     OBJECTS 
    1.6 TiB     1.3 TiB      280 GiB         17.48     34.88 k 
POOLS:
    NAME                           ID     QUOTA OBJECTS     QUOTA BYTES     USED        %USED     MAX AVAIL     OBJECTS     DIRTY       READ        WRITE       RAW USED 
    cephfs_data                    2      N/A               N/A                19 B         0       409 GiB           2          2       44 KiB      86 KiB         57 B 
    cephfs_metadata                3      N/A               N/A             204 MiB      0.05       409 GiB          91         91      3.1 MiB     103 KiB      611 MiB 
    th-ec22                        4      N/A               N/A             133 GiB     17.82       614 GiB       34462     34.46 k     134 KiB     158 KiB      266 GiB 
    .rgw.root                      5      N/A               N/A             1.1 KiB         0       409 GiB           4          4        846 B         4 B      3.3 KiB 
    default.rgw.meta               6      N/A               N/A             5.1 KiB         0       409 GiB          27         27      2.3 KiB       279 B       15 KiB 
    default.rgw.log                7      N/A               N/A                 0 B         0       409 GiB         207        207      4.7 MiB     3.1 MiB          0 B 
    default.rgw.control            8      N/A               N/A                 0 B         0       409 GiB           8          8          0 B         0 B          0 B 
    default.rgw.buckets.index      9      N/A               N/A                 0 B         0       409 GiB           7          7      1.3 KiB       565 B          0 B 
    default.rgw.buckets.data       10     N/A               N/A             202 MiB      0.05       409 GiB          71         71        199 B     1.0 KiB      606 MiB 
    default.rgw.buckets.non-ec     11     N/A               N/A                 0 B         0       409 GiB           0          0        159 B       111 B          0 B 



# ceph osd pool ls detail
pool 2 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 5130 lfor 0/1701 flags hashpspool,selfmanaged_snaps stripe_width 0 application cephfs
	removed_snaps [1~3]
pool 3 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 4069 flags hashpspool stripe_width 0 application cephfs
pool 4 'th-ec22' erasure size 4 min_size 3 crush_rule 1 object_hash rjenkins pg_num 96 pgp_num 96 last_change 8325 lfor 0/5645 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 8192 application cephfs
	removed_snaps [1~3]
pool 5 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 4069 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 4069 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 4069 flags hashpspool stripe_width 0 application rgw
pool 8 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 4069 flags hashpspool stripe_width 0 application rgw
pool 9 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 4069 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 10 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 4069 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 4069 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw

OR

# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS 
 7   hdd 0.19530  1.00000 200 GiB  33 GiB 167 GiB 16.62 0.95 214 
 1   hdd 0.19530  0.98708 200 GiB  36 GiB 164 GiB 18.22 1.04 212 
 2   hdd 0.19530  1.00000 200 GiB  35 GiB 165 GiB 17.65 1.01 214 
 6   hdd 0.19530  1.00000 200 GiB  34 GiB 166 GiB 17.01 0.97 214 
 5   hdd 0.19530  1.00000 200 GiB  35 GiB 165 GiB 17.43 1.00 213 
 0   hdd 0.19530  1.00000 200 GiB  36 GiB 164 GiB 17.98 1.03 214 
 3   hdd 0.19530  1.00000 200 GiB  34 GiB 166 GiB 16.88 0.97 210 
 4   hdd 0.19530  1.00000 200 GiB  36 GiB 164 GiB 18.05 1.03 213 
                    TOTAL 1.6 TiB 280 GiB 1.3 TiB 17.48          
MIN/MAX VAR: 0.95/1.04  STDDEV: 0.56


