subvolume snapshot problem

Hi guys.  Disclaimer: I have been doing IT for decades, but Ceph, ehh … not so much.  That said, I have had a couple of clusters sitting at a good solid HEALTH_OK for several weeks now.  I created a CephFS filesystem with some subvolumes, mounted them, and they are all working great.  However, I'm now running into an issue I can't find a solution for: I want snapshots of our CephFS subvolumes, and it just won't do it.  I even upgraded today from 16.2.7 to 16.2.9, which went great, but still no snapshots.  I can snapshot the base volume main-test just fine without error, but any subvolume snapshot create command results in this (some info modified to hide private details):

[ceph: root@cepht02 /]# ceph fs subvolume snapshot create main-test swarm swarmsnap1
Error EPERM: error in mkdir /volumes/_nogroup/swarm/.snap/swarmsnap1
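In case it clarifies what I mean by snapshotting the base volume: I can create a snapshot directly by making a directory under .snap on the mounted filesystem, and I can try the same kind of manual mkdir inside the subvolume's path if that would help narrow it down.  Roughly like this, run from a client where the filesystem is mounted (mount point and snapshot names below are just examples, not verbatim from my systems):

[root@client ~]# mkdir /mnt/main-test/.snap/basesnap1
[root@client ~]# mkdir /mnt/main-test/volumes/_nogroup/swarm/.snap/manualsnap1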

Some other info, in case it helps:

[ceph: root@cepht02 /]# ceph fs subvolume info main-test testdata
{
    "atime": "2022-05-13 18:38:23",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 25567695102,
    "created_at": "2022-05-13 15:06:10",
    "ctime": "2022-05-13 18:38:23",
    "data_pool": "cephfs.main-test.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 3001,
    "mode": 16893,
    "mon_addrs": [
        "10.0.0.45:6789",
        "10.0.0.46:6789",
        "10.0.0.44:6789"
    ],
    "mtime": "2022-01-06 14:26:53",
    "path": "/volumes/_nogroup/testdata/147f9699-67c0-4576-8312-ab8b6d73e928",
    "pool_namespace": "",
    "state": "complete",
    "type": "subvolume",
    "uid": 3001
}

ceph version 16.2.9 (4c3647a322c0ff5a1dd2344e039859dcbd28c830) pacific (stable)

[ceph: root@cepht02 /]# ceph -s
  cluster:
    id: b315ef82-bf3a-11ec-ba0a-506b8d9376cc
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cepht02,cepht03,cepht01 (age 72m)
    mgr: cepht02.pcoour(active, since 73m), standbys: cepht01.opwneq
    mds: 1/1 daemons up, 1 standby
    osd: 10 osds: 10 up (since 67m), 10 in (since 6d)

  data:
    volumes: 1/1 healthy
    pools:   5 pools, 129 pgs
    objects: 249.37k objects, 190 GiB
    usage:   445 GiB used, 4.4 TiB / 4.9 TiB avail
    pgs:     129 active+clean

  io:
    client:   11 KiB/s wr, 0 op/s rd, 0 op/s wr
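Happy to gather more output if it would help, for example the filesystem flags or the snapshot listing on the subvolume, along these lines (if the grep pattern is off I can just paste the whole output):

[ceph: root@cepht02 /]# ceph fs get main-test | grep flags
[ceph: root@cepht02 /]# ceph fs subvolume snapshot ls main-test swarm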

John Selph
System Administrator
L3 Technologies Integrated mission systems
Subsidiary of L3Harris Technologies, Inc.
m +1 903 413 3060
L3Harris.com / john.d.selph@xxxxxxxxxxxx
10001 Jack Finney Blvd / Greenville, Tx 75402 / USA
<http://www.l3harris.com/>







[Index of Archives]     [Information on CEPH]     [Linux Filesystem Development]     [Ceph Development]     [Ceph Large]     [Ceph Dev]     [Linux USB Development]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]     [xfs]


  Powered by Linux