Re: pool(s) do not have an application enabled after upgrade to 17.2.7

07.11.2023 19:02, Eugen Block wrote:
Sorry, I didn't really read your message until the end. Apparently, you only have one cephfs, so the other two pools can probably be deleted.
Just to get the full picture, can you share these outputs?

ceph orch ls mds
ceph orch ps --daemon_type mds


yes, sure

ceph orch ls mds
NAME        PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
mds.cephfs             3/3  2m ago     3M   count:3

ceph orch ps --daemon_type mds
NAME                    HOST  PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
mds.cephfs.vmx1.fhqrcn  vmx1         running (6h)  75s ago    3M   26.9M    -        17.2.7   5dff4047e947  9f58642a2e00
mds.cephfs.vmx2.ynfnqw  vmx2         running (6h)  2m ago     3M   26.7M    -        17.2.7   5dff4047e947  2e1514e544ed
mds.cephfs.vmx3.ytsueq  vmx3         running (6h)  76s ago    3M   23.6M    -        17.2.7   5dff4047e947  c9133ecb9d02




You could check the pool stats for those two pools with 'ceph osd pool stats'; if they don't contain any data you probably won't see any activity.

ceph osd pool stats
pool .mgr id 1
  nothing is going on

pool rbdpool id 2
  client io 6.3 MiB/s wr, 0 op/s rd, 527 op/s wr

pool cephfs_data id 3
  nothing is going on

pool cephfs_metadata id 4
  nothing is going on

pool cephfs.cephfs.meta id 5
  nothing is going on

pool cephfs.cephfs.data id 6
  nothing is going on


Currently even the real cephfs contains almost nothing; there are only libvirt VM configs in it...


But the cephfs uses only 2 pools:


ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]


and not cephfs.cephfs.meta and cephfs.cephfs.data, which have no application enabled.



To remove the (correct) service from cephadm you can either remove it via the dashboard in the "Services" tab or via the CLI: 'ceph orch rm <MDS_SERVICE>'.

There is only the mds service for the running cephfs and its pools.

Depending on the output I requested this may not be necessary. But if it is, this would remove the daemons and the service definition; then you can go ahead and delete those pools, again either via the dashboard or the CLI.


I think I need to remove the pools cephfs.cephfs.meta and cephfs.cephfs.data using

ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]

By the way, as far as I know, deleting pools is not allowed by default; I have to allow it first.

Is this the correct way to allow it?

ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'

Thank you!
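
Putting it together, a minimal sketch of the whole sequence, assuming those two pools really turn out to be unused (the pool name has to be given twice for the delete):

ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete cephfs.cephfs.meta cephfs.cephfs.meta --yes-i-really-really-mean-it
ceph osd pool delete cephfs.cephfs.data cephfs.cephfs.data --yes-i-really-really-mean-it
ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'

As far as I know, 'ceph config set mon mon_allow_pool_delete true' would also work as a persistent alternative to injectargs.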


Quoting Dmitry Melekhov <dm@xxxxxxxxxx>:

07.11.2023 16:34, Eugen Block wrote:
Hi,

can you check for existing metadata?

ceph osd pool ls detail --format=json-pretty|grep -A 2 application_metadata


Yes, it is here:


ceph osd pool ls detail --format=json-pretty|grep -A 2 application_metadata
        "application_metadata": {
            "mgr": {}
        }
--
        "application_metadata": {
            "rbd": {}
        }
--
        "application_metadata": {
            "cephfs": {
                "data": "cephfs"
--
        "application_metadata": {
            "cephfs": {
                "metadata": "cephfs"
--
        "application_metadata": {}
    },
    {
--
        "application_metadata": {}
    }
]


So, yes, the application is not set on 2 pools.



Not sure if it's the same issue as described here: https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/2NL2Q57HTSGDDBLARLRCVRVX2PE6FKDA/

There the fix was to set the application metadata like this:

ceph osd pool application set <your metadata pool name> cephfs metadata <your ceph fs filesystem name>



Yes, I guess that is another way to fix this: set the application on the pools.
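
In my case that would presumably be something like this, assuming the two extra pools were originally meant for the filesystem named cephfs (the data pool takes the 'data' key instead of 'metadata'):

ceph osd pool application enable cephfs.cephfs.meta cephfs
ceph osd pool application set cephfs.cephfs.meta cephfs metadata cephfs
ceph osd pool application enable cephfs.cephfs.data cephfs
ceph osd pool application set cephfs.cephfs.data cephfs data cephfs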

But I'm not sure I need to do this, because it looks like I don't use these 2 pools.

I think it is better to delete them if I don't need them and if it is safe.


Thank you!



Quoting Dmitry Melekhov <dm@xxxxxxxxxx>:

Hello!


I'm very new to ceph, sorry I'm asking extremely basic questions.


I just upgraded from 17.2.6 to 17.2.7 and got a warning:

2 pool(s) do not have an application enabled
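
If I understand it correctly, the affected pools can be listed with:

ceph health detail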

These pools are

5 cephfs.cephfs.meta
6 cephfs.cephfs.data

I don't remember why or how I created them; I just followed some instructions...
And I don't remember their state before the upgrade :-(
And I see in the dashboard that 0 bytes are used in both pools.
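
If I'm not mistaken, the same per-pool usage is also visible from the CLI:

ceph df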

But I have two other pools

3 cephfs_data
4 cephfs_metadata

which are in use by cephfs:

ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

and really have data in them.

Could you tell me, can I just remove these two pools that have no application enabled, as long as everything works, i.e. cephfs is mounted and accessible?

Thank you!

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



