Sorry, I didn't read your message all the way to the end. Apparently
you only have one cephfs, so the other two pools can probably be
deleted.
Just to get the full picture, can you share these outputs?
ceph orch ls mds
ceph orch ps --daemon_type mds
You could check the pool stats for those two pools with 'ceph osd pool
stats'; if they don't contain any data, you probably won't see any
activity.
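For example, a minimal check with the pool names from your message (an
idle, empty pool typically just reports 'nothing is going on'):

ceph osd pool stats cephfs.cephfs.meta
ceph osd pool stats cephfs.cephfs.data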
To remove the (correct) service from cephadm you can either remove it
via the dashboard in the "Services" tab or via CLI: 'ceph orch rm
<MDS_SERVICE>'. Depending on the output I requested, this may not be
necessary. But if it is, this would remove the daemons and the service
definition; then you can go ahead and delete those pools, again either
via dashboard or CLI.
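A rough sketch of the CLI variant (the service name is a placeholder
until we see your 'ceph orch ls' output, and be careful not to remove
the MDS service that serves your working cephfs; deleting pools also
requires mon_allow_pool_delete to be set):

ceph orch rm <MDS_SERVICE>
ceph config set mon mon_allow_pool_delete true
ceph osd pool rm cephfs.cephfs.meta cephfs.cephfs.meta --yes-i-really-really-mean-it
ceph osd pool rm cephfs.cephfs.data cephfs.cephfs.data --yes-i-really-really-mean-it

Only delete the unused cephfs.cephfs.* pools, not cephfs_data and
cephfs_metadata, which back your filesystem.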
Quoting Dmitry Melekhov <dm@xxxxxxxxxx>:
On 07.11.2023 16:34, Eugen Block wrote:
Hi,
can you check for existing metadata?
ceph osd pool ls detail --format=json-pretty|grep -A 2 application_metadata
Yes, it is here:
ceph osd pool ls detail --format=json-pretty|grep -A 2 application_metadata
"application_metadata": {
"mgr": {}
}
--
"application_metadata": {
"rbd": {}
}
--
"application_metadata": {
"cephfs": {
"data": "cephfs"
--
"application_metadata": {
"cephfs": {
"metadata": "cephfs"
--
"application_metadata": {}
},
{
--
"application_metadata": {}
}
]
So, yes, the application is not set on 2 pools.
Not sure if it's the same issue as described here:
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/2NL2Q57HTSGDDBLARLRCVRVX2PE6FKDA/
There, the fix was to set the application metadata like this:
ceph osd pool application set <your metadata pool name> cephfs
metadata <your ceph fs filesystem name>
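With your pool and filesystem names filled in, that would look like
this (note: only appropriate if those pools were actually backing the
filesystem, which doesn't seem to be the case here):

ceph osd pool application set cephfs.cephfs.meta cephfs metadata cephfs
ceph osd pool application set cephfs.cephfs.data cephfs data cephfs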
Yes, I guess that is another way to fix this: set the application on
the pools. But I'm not sure I need to do this, because it looks like I
don't use these 2 pools.
I think it is better to delete them if I don't need them and if it is safe.
Thank you!
Quoting Dmitry Melekhov <dm@xxxxxxxxxx>:
Hello!
I'm very new to ceph, sorry I'm asking extremely basic questions.
I just upgraded from 17.2.6 to 17.2.7 and got a warning:
2 pool(s) do not have an application enabled
These pools are
5 cephfs.cephfs.meta
6 cephfs.cephfs.data
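(The affected pool names can be listed with:

ceph health detail

which expands the POOL_APP_NOT_ENABLED warning with lines like
"application not enabled on pool '<pool>'".)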
I don't remember why and how I created them, I just followed some
instructions...
And I don't remember their state before the upgrade :-(
And I see in the dashboard that 0 bytes are used in both pools.
But I have two other pools
3 cephfs_data
4 cephfs_metadata
which are in use by cephfs:
ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
and really have data in them.
Could you tell me, can I just remove these two pools that have no
application set, provided that everything works, i.e. cephfs is
mounted and accessible?
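(I suppose I could also double-check that they are really empty with
something like:

ceph df
rados -p cephfs.cephfs.data ls

which should confirm whether any objects are stored in them.)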
Thank you!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx