Re: Stupid question about ceph fs volume

Hi,

it's really as easy as it sounds (fresh test cluster on 18.2.1 without any pools yet):

ceph:~ # ceph fs volume create cephfs

(wait a minute or two)

ceph:~ # ceph fs status
cephfs - 0 clients
======
RANK  STATE            MDS               ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  cephfs.soc9-ceph.uqcybj  Reqs:    0 /s    10     13     12      0
       POOL           TYPE     USED  AVAIL
cephfs.cephfs.meta  metadata  64.0k  13.8G
cephfs.cephfs.data    data       0   13.8G
      STANDBY MDS
cephfs.soc9-ceph.cgkvrf
MDS version: ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)

The pools and the daemons are created automatically (you can control the placement of the daemons with the --placement option). Note that the metadata pool should be on fast storage, so you might need to change the CRUSH rule for the metadata pool after creation in case you have HDDs in place.
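For example, something like this should work; the placement spec, rule name and device class are just placeholders, adjust them to your environment:

ceph:~ # ceph fs volume create cephfs --placement="2 host1 host2"
ceph:~ # ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph:~ # ceph osd pool set cephfs.cephfs.meta crush_rule replicated_ssd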
Adding data pools after creation can be done via the ceph fs commands:

ceph:~ # ceph osd pool create cephfs_data2
pool 'cephfs_data2' created

ceph:~ # ceph fs add_data_pool cephfs cephfs_data2
Pool 'cephfs_data2' (id '4') has pg autoscale mode 'on' but is not marked as bulk.
  Consider setting the flag by running
    # ceph osd pool set cephfs_data2 bulk true
added data pool 4 to fsmap

ceph:~ # ceph fs status
cephfs - 0 clients
======
RANK  STATE            MDS               ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  cephfs.soc9-ceph.uqcybj  Reqs:    0 /s    10     13     12      0
       POOL           TYPE     USED  AVAIL
cephfs.cephfs.meta  metadata  64.0k  13.8G
cephfs.cephfs.data    data       0   13.8G
   cephfs_data2       data       0   13.8G


You can't remove the default data pool, though (here it's cephfs.cephfs.data). If you want to control the pool creation, you can fall back to the method you mentioned: create the pools as you require them, then create a new cephfs and deploy the MDS service yourself.
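Roughly like this (the pool and fs names are only examples):

ceph:~ # ceph osd pool create cephfs2_metadata
ceph:~ # ceph osd pool create cephfs2_data
ceph:~ # ceph fs new cephfs2 cephfs2_metadata cephfs2_data
ceph:~ # ceph orch apply mds cephfs2 --placement=2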

I haven't looked too deeply into changing the default data pool yet, so there might be a way to switch that as well.

Regards,
Eugen


Quoting Albert Shih <Albert.Shih@xxxxxxxx>:

Hi everyone,

Stupid question about

  ceph fs volume create

how can I specify the metadata pool and the data pool?

I was able to create a cephfs «manually» with something like

  ceph fs new vo cephfs_metadata cephfs_data

but as I understand the documentation, with this method I need to deploy
the mds, and the «new» way to do it is to use ceph fs volume.

But with ceph fs volume I didn't find any documentation on how to set the
metadata/data pool.

I also didn't find any way to change the pools after the creation of the
volume.

Thanks

--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
Wed 24 Jan 2024 19:24:23 CET


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



