python error when adding subvolume permission in cli

Hi,

On one of my clusters, I'm having a problem authorizing new clients to an existing share.
The problem occurs on all three nodes with the same error.

Since a similar command works on another cluster, something seems to be wrong here. I also tried deleting and recreating the fs and the MDS, but it didn't help.

The servers are running Debian 12.1 (bookworm) with Proxmox. The Ceph version is 17.2.6 from the Proxmox repositories.

Please help me to find the cause of this problem.

Thank you very much.

Robert

~# ceph fs subvolume authorize shared cweb01lab glcweb01a.lab.xyz.host --access_level=rwp --verbose
parsed_args: Namespace(completion=False, help=False, cephconf=None, input_file=None, output_file=None, setuser=None, setgroup=None, client_id=None, client_name=None, cluster=None, admin_socket=None, status=False, watch=False, watch_debug=False, watch_info=False, watch_sec=False, watch_warn=False, watch_error=False, watch_channel=None, version=False, verbose=True, output_format=None, cluster_timeout=None, block=False, period=1), childargs: ['fs', 'subvolume', 'authorize', 'shared', 'cweb01lab', 'glcweb01a.lab.xyz.host', '--access_level=rwp']
cmd000: pg stat
cmd001: pg getmap
cmd002: pg dump [<dumpcontents:all|summary|sum|delta|pools|osds|pgs|pgs_brief>...]
cmd003: pg dump_json [<dumpcontents:all|summary|sum|pools|osds|pgs>...]
cmd004: pg dump_pools_json
cmd005: pg ls-by-pool <poolstr> [<states>...]
cmd006: pg ls-by-primary <id|osd.id> [<pool:int>] [<states>...]
cmd007: pg ls-by-osd <id|osd.id> [<pool:int>] [<states>...]
cmd008: pg ls [<pool:int>] [<states>...]
cmd009: pg dump_stuck [<stuckops:inactive|unclean|stale|undersized|degraded>...] [<threshold:int>]
cmd010: pg debug <debugop:unfound_objects_exist|degraded_pgs_exist>
cmd011: pg scrub <pgid>
cmd012: pg deep-scrub <pgid>
cmd013: pg repair <pgid>
cmd014: pg force-recovery <pgid>...
cmd015: pg force-backfill <pgid>...
cmd016: pg cancel-force-recovery <pgid>...
cmd017: pg cancel-force-backfill <pgid>...
cmd018: osd perf
cmd019: osd df [<output_method:plain|tree>] [<filter_by:class|name>] [<filter>]
cmd020: osd blocked-by
cmd021: osd pool stats [<pool_name>]
cmd022: osd pool scrub <who>...
cmd023: osd pool deep-scrub <who>...
cmd024: osd pool repair <who>...
cmd025: osd pool force-recovery <who>...
cmd026: osd pool force-backfill <who>...
cmd027: osd pool cancel-force-recovery <who>...
cmd028: osd pool cancel-force-backfill <who>...
cmd029: osd reweight-by-utilization [<oload:int>] [<max_change:float>] [<max_osds:int>] [--no-increasing]
cmd030: osd test-reweight-by-utilization [<oload:int>] [<max_change:float>] [<max_osds:int>] [--no-increasing]
cmd031: osd reweight-by-pg [<oload:int>] [<max_change:float>] [<max_osds:int>] [<pools>...]
cmd032: osd test-reweight-by-pg [<oload:int>] [<max_change:float>] [<max_osds:int>] [<pools>...]
cmd033: osd destroy <id|osd.id> [--force] [--yes-i-really-mean-it]
cmd034: osd purge <id|osd.id> [--force] [--yes-i-really-mean-it]
cmd035: osd safe-to-destroy <ids>...
cmd036: osd ok-to-stop <ids>... [<max:int>]
cmd037: osd scrub <who>
cmd038: osd deep-scrub <who>
cmd039: osd repair <who>
cmd040: service dump
cmd041: service status
cmd042: config show <who> [<key>]
cmd043: config show-with-defaults <who>
cmd044: device ls
cmd045: device info <devid>
cmd046: device ls-by-daemon <who>
cmd047: device ls-by-host <host>
cmd048: device set-life-expectancy <devid> <from> [<to>]
cmd049: device rm-life-expectancy <devid>
cmd050: alerts send
cmd051: balancer status
cmd052: balancer mode <mode:none|crush-compat|upmap>
cmd053: balancer on
cmd054: balancer off
cmd055: balancer pool ls
cmd056: balancer pool add <pools>...
cmd057: balancer pool rm <pools>...
cmd058: balancer eval-verbose [<option>]
cmd059: balancer eval [<option>]
cmd060: balancer optimize <plan> [<pools>...]
cmd061: balancer show <plan>
cmd062: balancer rm <plan>
cmd063: balancer reset
cmd064: balancer dump <plan>
cmd065: balancer ls
cmd066: balancer execute <plan>
cmd067: crash info <id>
cmd068: crash post
cmd069: crash ls [--format <value>]
cmd070: crash ls-new [--format <value>]
cmd071: crash rm <id>
cmd072: crash prune <keep:int>
cmd073: crash archive <id>
cmd074: crash archive-all
cmd075: crash stat
cmd076: crash json_report <hours:int>
cmd077: device query-daemon-health-metrics <who>
cmd078: device scrape-daemon-health-metrics <who>
cmd079: device scrape-health-metrics [<devid>]
cmd080: device get-health-metrics <devid> [<sample>]
cmd081: device check-health
cmd082: device monitoring on
cmd083: device monitoring off
cmd084: device predict-life-expectancy <devid>
cmd085: influx config-set <key> <value>
cmd086: influx config-show
cmd087: influx send
cmd088: influx config-show
cmd089: influx config-set <key> <value>
cmd090: influx send
cmd091: insights
cmd092: insights prune-health [<hours:int>]
cmd093: iostat [<width:int>] [--print-header]
cmd094: fs snapshot mirror enable [<fs_name>]
cmd095: fs snapshot mirror disable [<fs_name>]
cmd096: fs snapshot mirror peer_add <fs_name> [<remote_cluster_spec>] [<remote_fs_name>] [<remote_mon_host>] [<cephx_key>]
cmd097: fs snapshot mirror peer_list [<fs_name>]
cmd098: fs snapshot mirror peer_remove <fs_name> [<peer_uuid>]
cmd099: fs snapshot mirror peer_bootstrap create <fs_name> <client_name> [<site_name>]
cmd100: fs snapshot mirror peer_bootstrap import <fs_name> [<token>]
cmd101: fs snapshot mirror add <fs_name> [<path>]
cmd102: fs snapshot mirror remove <fs_name> [<path>]
cmd103: fs snapshot mirror dirmap <fs_name> [<path>]
cmd104: fs snapshot mirror show distribution [<fs_name>]
cmd105: fs snapshot mirror daemon status
cmd106: nfs export create cephfs <cluster_id> <pseudo_path> <fsname> [<path>] [--readonly] [--client_addr <value>...] [--squash <value>] [--sectype <value>...]
cmd107: nfs export create rgw <cluster_id> <pseudo_path> [<bucket>] [<user_id>] [--readonly] [--client_addr <value>...] [--squash <value>] [--sectype <value>...]
cmd108: nfs export rm <cluster_id> <pseudo_path>
cmd109: nfs export delete <cluster_id> <pseudo_path>
cmd110: nfs export ls <cluster_id> [--detailed]
cmd111: nfs export info <cluster_id> <pseudo_path>
cmd112: nfs export get <cluster_id> <pseudo_path>
cmd113: nfs export apply <cluster_id>
cmd114: nfs cluster create <cluster_id> [<placement>] [--ingress] [--virtual_ip <value>] [--port <int>]
cmd115: nfs cluster rm <cluster_id>
cmd116: nfs cluster delete <cluster_id>
cmd117: nfs cluster ls
cmd118: nfs cluster info [<cluster_id>]
cmd119: nfs cluster config get <cluster_id>
cmd120: nfs cluster config set <cluster_id>
cmd121: nfs cluster config reset <cluster_id>
cmd122: device ls-lights
cmd123: device light <enable:on|off> <devid> [<light_type:ident|fault>] [--force]
cmd124: orch host add <hostname> [<addr>] [<labels>...] [--maintenance]
cmd125: orch host rm <hostname> [--force] [--offline]
cmd126: orch host drain <hostname> [--force]
cmd127: orch host set-addr <hostname> <addr>
cmd128: orch host ls [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--host_pattern <value>] [--label <value>] [--host_status <value>]
cmd129: orch host label add <hostname> <label>
cmd130: orch host label rm <hostname> <label> [--force]
cmd131: orch host ok-to-stop <hostname>
cmd132: orch host maintenance enter <hostname> [--force]
cmd133: orch host maintenance exit <hostname>
cmd134: orch host rescan <hostname> [--with-summary]
cmd135: orch device ls [<hostname>...] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--refresh] [--wide]
cmd136: orch device zap <hostname> <path> [--force]
cmd137: orch ls [<service_type>] [<service_name>] [--export] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--refresh]
cmd138: orch ps [<hostname>] [--service_name <value>] [--daemon_type <value>] [--daemon_id <value>] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--refresh]
cmd139: orch apply osd [--all-available-devices] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--unmanaged] [--dry-run] [--no-overwrite]
cmd140: orch daemon add osd [<svc_arg>] [<method:raw|lvm>]
cmd141: orch osd rm <osd_id>... [--replace] [--force] [--zap]
cmd142: orch osd rm stop <osd_id>...
cmd143: orch osd rm status [--format {plain|json|json-pretty|yaml|xml-pretty|xml}]
cmd144: orch daemon add [<daemon_type:mon|mgr|rbd-mirror|cephfs-mirror|crash|alertmanager|grafana|node-exporter|ceph-exporter|prometheus|loki|promtail|mds|rgw|nfs|iscsi|snmp-gateway>] [<placement>]
cmd145: orch daemon add mds <fs_name> [<placement>]
cmd146: orch daemon add rgw <svc_id> [<placement>] [--port <int>] [--ssl]
cmd147: orch daemon add nfs <svc_id> [<placement>]
cmd148: orch daemon add iscsi <pool> <api_user> <api_password> [<trusted_ip_list>] [<placement>]
cmd149: orch <action:start|stop|restart|redeploy|reconfig|rotate-key> <service_name>
cmd150: orch daemon <action:start|stop|restart|reconfig|rotate-key> <name>
cmd151: orch daemon redeploy <name> [<image>]
cmd152: orch daemon rm <names>... [--force]
cmd153: orch rm <service_name> [--force]
cmd154: orch apply [<service_type:mon|mgr|rbd-mirror|cephfs-mirror|crash|alertmanager|grafana|node-exporter|ceph-exporter|prometheus|loki|promtail|mds|rgw|nfs|iscsi|snmp-gateway>] [<placement>] [--dry-run] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--unmanaged] [--no-overwrite]
cmd155: orch apply mds <fs_name> [<placement>] [--dry-run] [--unmanaged] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--no-overwrite]
cmd156: orch apply rgw <svc_id> [<placement>] [--realm <value>] [--zone <value>] [--port <int>] [--ssl] [--dry-run] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--unmanaged] [--no-overwrite]
cmd157: orch apply nfs <svc_id> [<placement>] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--port <int>] [--dry-run] [--unmanaged] [--no-overwrite]
cmd158: orch apply iscsi <pool> <api_user> <api_password> [<trusted_ip_list>] [<placement>] [--unmanaged] [--dry-run] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--no-overwrite]
cmd159: orch apply snmp-gateway <snmp_version:V2c|V3> <destination> [<port:int>] [<engine_id>] [<auth_protocol:MD5|SHA>] [<privacy_protocol:DES|AES>] [<placement>] [--unmanaged] [--dry-run] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--no-overwrite]
cmd160: orch set backend [<module_name>]
cmd161: orch pause
cmd162: orch resume
cmd163: orch cancel
cmd164: orch status [--detail] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}]
cmd165: orch tuned-profile apply [<profile_name>] [<placement>] [<settings>] [--no-overwrite]
cmd166: orch tuned-profile rm <profile_name>
cmd167: orch tuned-profile ls [--format {plain|json|json-pretty|yaml|xml-pretty|xml}]
cmd168: orch tuned-profile add-setting <profile_name> <setting> <value>
cmd169: orch tuned-profile rm-setting <profile_name> <setting>
cmd170: orch upgrade check [<image>] [<ceph_version>]
cmd171: orch upgrade ls [<image>] [--tags] [--show-all-versions]
cmd172: orch upgrade status
cmd173: orch upgrade start [<image>] [--daemon_types <value>] [--hosts <value>] [--services <value>] [--limit <int>] [--ceph_version <value>]
cmd174: orch upgrade pause
cmd175: orch upgrade resume
cmd176: orch upgrade stop
cmd177: osd perf query add <query:client_id|rbd_image_id|all_subkeys>
cmd178: osd perf query remove <query_id:int>
cmd179: osd perf counters get <query_id:int>
cmd180: osd pool autoscale-status [--format <value>]
cmd181: osd pool set threshold <num:float>
cmd182: osd pool get noautoscale
cmd183: osd pool unset noautoscale
cmd184: osd pool set noautoscale
cmd185: progress
cmd186: progress json
cmd187: progress clear
cmd188: progress on
cmd189: progress off
cmd190: prometheus file_sd_config
cmd191: healthcheck history ls [--format {plain|json|json-pretty|yaml}]
cmd192: healthcheck history clear
cmd193: rbd mirror snapshot schedule add <level_spec> <interval> [<start_time>]
cmd194: rbd mirror snapshot schedule remove <level_spec> [<interval>] [<start_time>]
cmd195: rbd mirror snapshot schedule list [<level_spec>]
cmd196: rbd mirror snapshot schedule status [<level_spec>]
cmd197: rbd perf image stats [<pool_spec>] [<sort_by:write_ops|write_bytes|write_latency|read_ops|read_bytes|read_latency>]
cmd198: rbd perf image counters [<pool_spec>] [<sort_by:write_ops|write_bytes|write_latency|read_ops|read_bytes|read_latency>]
cmd199: rbd task add flatten <image_spec>
cmd200: rbd task add remove <image_spec>
cmd201: rbd task add trash remove <image_id_spec>
cmd202: rbd task add migration execute <image_spec>
cmd203: rbd task add migration commit <image_spec>
cmd204: rbd task add migration abort <image_spec>
cmd205: rbd task cancel <task_id>
cmd206: rbd task list [<task_id>]
cmd207: rbd trash purge schedule add <level_spec> <interval> [<start_time>]
cmd208: rbd trash purge schedule remove <level_spec> [<interval>] [<start_time>]
cmd209: rbd trash purge schedule list [<level_spec>]
cmd210: rbd trash purge schedule status [<level_spec>]
cmd211: restful create-key <key_name>
cmd212: restful delete-key <key_name>
cmd213: restful list-keys
cmd214: restful create-self-signed-cert
cmd215: restful restart
cmd216: mgr self-test python-version
cmd217: mgr self-test run
cmd218: mgr self-test background start <workload:command_spam|throw_exception|shutdown>
cmd219: mgr self-test background stop
cmd220: mgr self-test config get <key>
cmd221: mgr self-test config get_localized <key>
cmd222: mgr self-test remote
cmd223: mgr self-test module <module>
cmd224: mgr self-test cluster-log <channel> <priority> <message>
cmd225: mgr self-test health set <checks>
cmd226: mgr self-test health clear [<checks>...]
cmd227: mgr self-test insights_set_now_offset <hours:int>
cmd228: mgr self-test eval [<s>]
cmd229: fs snap-schedule status [<path>] [<fs>] [--format <value>]
cmd230: fs snap-schedule list <path> [--recursive] [--fs <value>] [--format <value>]
cmd231: fs snap-schedule add <path> <snap_schedule> [<start>] [<fs>]
cmd232: fs snap-schedule remove <path> [<repeat>] [<start>] [<fs>]
cmd233: fs snap-schedule retention add <path> <retention_spec_or_period> [<retention_count>] [<fs>]
cmd234: fs snap-schedule retention remove <path> <retention_spec_or_period> [<retention_count>] [<fs>]
cmd235: fs snap-schedule activate <path> [<repeat>] [<start>] [<fs>]
cmd236: fs snap-schedule deactivate <path> [<repeat>] [<start>] [<fs>]
cmd237: fs perf stats [<mds_rank>] [<client_id>] [<client_ip>]
cmd238: fs status [<fs>] [--format <value>]
cmd239: osd status [<bucket>]
cmd240: telegraf config-show
cmd241: telegraf config-set <key> <value>
cmd242: telegraf send
cmd243: telemetry status
cmd244: telemetry diff
cmd245: telemetry on [<license>]
cmd246: telemetry off
cmd247: telemetry enable channel all [<channels>...]
cmd248: telemetry enable channel [<channels>...]
cmd249: telemetry disable channel all [<channels>...]
cmd250: telemetry disable channel [<channels>...]
cmd251: telemetry channel ls
cmd252: telemetry collection ls
cmd253: telemetry send [<endpoint:ceph|device>...] [<license>]
cmd254: telemetry show [<channels>...]
cmd255: telemetry preview [<channels>...]
cmd256: telemetry show-device
cmd257: telemetry preview-device
cmd258: telemetry show-all
cmd259: telemetry preview-all
cmd260: test_orchestrator load_data
cmd261: fs volume ls
cmd262: fs volume create <name> [<placement>]
cmd263: fs volume rm <vol_name> [<yes-i-really-mean-it>]
cmd264: fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]
cmd265: fs volume info <vol_name> [--human-readable]
cmd266: fs subvolumegroup ls <vol_name>
cmd267: fs subvolumegroup create <vol_name> <group_name> [<size:int>] [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>]
cmd268: fs subvolumegroup rm <vol_name> <group_name> [--force]
cmd269: fs subvolumegroup info <vol_name> <group_name>
cmd270: fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no-shrink]
cmd271: fs subvolumegroup exist <vol_name>
cmd272: fs subvolume ls <vol_name> [<group_name>]
cmd273: fs subvolume create <vol_name> <sub_name> [<size:int>] [<group_name>] [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>] [--namespace-isolated]
cmd274: fs subvolume rm <vol_name> <sub_name> [<group_name>] [--force] [--retain-snapshots]
cmd275: fs subvolume authorize <vol_name> <sub_name> <auth_id> [<group_name>] [<access_level>] [<tenant_id>] [--allow-existing-id]
cmd276: fs subvolume deauthorize <vol_name> <sub_name> <auth_id> [<group_name>]
cmd277: fs subvolume authorized_list <vol_name> <sub_name> [<group_name>]
cmd278: fs subvolume evict <vol_name> <sub_name> <auth_id> [<group_name>]
cmd279: fs subvolumegroup getpath <vol_name> <group_name>
cmd280: fs subvolume getpath <vol_name> <sub_name> [<group_name>]
cmd281: fs subvolume info <vol_name> <sub_name> [<group_name>]
cmd282: fs subvolume exist <vol_name> [<group_name>]
cmd283: fs subvolume metadata set <vol_name> <sub_name> <key_name> <value> [<group_name>]
cmd284: fs subvolume metadata get <vol_name> <sub_name> <key_name> [<group_name>]
cmd285: fs subvolume metadata ls <vol_name> <sub_name> [<group_name>]
cmd286: fs subvolume metadata rm <vol_name> <sub_name> <key_name> [<group_name>] [--force]
cmd287: fs subvolumegroup pin <vol_name> <group_name> <pin_type:export|distributed|random> <pin_setting>
cmd288: fs subvolumegroup snapshot ls <vol_name> <group_name>
cmd289: fs subvolumegroup snapshot create <vol_name> <group_name> <snap_name>
cmd290: fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]
cmd291: fs subvolume snapshot ls <vol_name> <sub_name> [<group_name>]
cmd292: fs subvolume snapshot create <vol_name> <sub_name> <snap_name> [<group_name>]
cmd293: fs subvolume snapshot info <vol_name> <sub_name> <snap_name> [<group_name>]
cmd294: fs subvolume snapshot metadata set <vol_name> <sub_name> <snap_name> <key_name> <value> [<group_name>]
cmd295: fs subvolume snapshot metadata get <vol_name> <sub_name> <snap_name> <key_name> [<group_name>]
cmd296: fs subvolume snapshot metadata ls <vol_name> <sub_name> <snap_name> [<group_name>]
cmd297: fs subvolume snapshot metadata rm <vol_name> <sub_name> <snap_name> <key_name> [<group_name>] [--force]
cmd298: fs subvolume snapshot rm <vol_name> <sub_name> <snap_name> [<group_name>] [--force]
cmd299: fs subvolume resize <vol_name> <sub_name> <new_size> [<group_name>] [--no-shrink]
cmd300: fs subvolume pin <vol_name> <sub_name> <pin_type:export|distributed|random> <pin_setting> [<group_name>]
cmd301: fs subvolume snapshot protect <vol_name> <sub_name> <snap_name> [<group_name>]
cmd302: fs subvolume snapshot unprotect <vol_name> <sub_name> <snap_name> [<group_name>]
cmd303: fs subvolume snapshot clone <vol_name> <sub_name> <snap_name> <target_sub_name> [<pool_layout>] [<group_name>] [<target_group_name>]
cmd304: fs clone status <vol_name> <clone_name> [<group_name>]
cmd305: fs clone cancel <vol_name> <clone_name> [<group_name>]
cmd306: zabbix config-show
cmd307: zabbix config-set <key> <value>
cmd308: zabbix send
cmd309: zabbix discovery
cmd310: pg map <pgid>
cmd311: pg repeer <pgid>
cmd312: osd last-stat-seq <id|osd.id>
cmd313: auth export [<entity>]
cmd314: auth get <entity>
cmd315: auth get-key <entity>
cmd316: auth print-key <entity>
cmd317: auth print_key <entity>
cmd318: auth list
cmd319: auth ls
cmd320: auth import
cmd321: auth add <entity> [<caps>...]
cmd322: auth get-or-create-key <entity> [<caps>...]
cmd323: auth get-or-create <entity> [<caps>...]
cmd324: auth get-or-create-pending <entity>
cmd325: auth clear-pending <entity>
cmd326: auth commit-pending <entity>
cmd327: fs authorize <filesystem> <entity> <caps>...
cmd328: auth caps <entity> <caps>...
cmd329: auth del <entity>
cmd330: auth rm <entity>
cmd331: compact
cmd332: fsid
cmd333: log <logtext>...
cmd334: log last [<num:int>] [<level:debug|info|sec|warn|error>] [<channel:*|cluster|audit|cephadm>]
cmd335: status
cmd336: health [<detail:detail>]
cmd337: health mute <code> [<ttl>] [--sticky]
cmd338: health unmute [<code>]
cmd339: time-sync-status
cmd340: df [<detail:detail>]
cmd341: report [<tags>...]
cmd342: features
cmd343: quorum_status
cmd344: mon ok-to-stop <ids>...
cmd345: mon ok-to-add-offline
cmd346: mon ok-to-rm <id>
cmd347: tell <type.id> <args>...
cmd348: version
cmd349: node ls [<type:all|osd|mon|mds|mgr>]
cmd350: mon scrub
cmd351: mon metadata [<id>]
cmd352: mon count-metadata <property>
cmd353: mon versions
cmd354: versions
cmd355: mds stat
cmd356: fs dump [<epoch:int>]
cmd357: mds metadata [<who>]
cmd358: mds count-metadata <property>
cmd359: mds versions
cmd360: mds ok-to-stop <ids>...
cmd361: mds freeze <role_or_gid> <val>
cmd362: mds set_state <gid:int> <state:int>
cmd363: mds fail <role_or_gid>
cmd364: mds repaired <role>
cmd365: mds rm <gid:int>
cmd366: mds rmfailed <role> [--yes-i-really-mean-it]
cmd367: mds compat show
cmd368: fs compat show <fs_name>
cmd369: mds compat rm_compat <feature:int>
cmd370: mds compat rm_incompat <feature:int>
cmd371: fs new <fs_name> <metadata> <data> [--force] [--allow-dangerous-metadata-overlay] [<fscid:int>] [--recover]
cmd372: fs fail <fs_name>
cmd373: fs rm <fs_name> [--yes-i-really-mean-it]
cmd374: fs reset <fs_name> [--yes-i-really-mean-it]
cmd375: fs ls
cmd376: fs get <fs_name>
cmd377: fs set <fs_name> <var:max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client> <val> [--yes-i-really-mean-it] [--yes-i-really-really-mean-it]
cmd378: fs flag set <flag_name:enable_multiple> <val> [--yes-i-really-mean-it]
cmd379: fs feature ls
cmd380: fs lsflags <fs_name>
cmd381: fs compat <fs_name> <subop:rm_compat|rm_incompat|add_compat|add_incompat> <feature:int> [<feature_str>]
cmd382: fs required_client_features <fs_name> <subop:add|rm> <val>
cmd383: fs add_data_pool <fs_name> <pool>
cmd384: fs rm_data_pool <fs_name> <pool>
cmd385: fs set_default <fs_name>
cmd386: fs set-default <fs_name>
cmd387: fs mirror enable <fs_name>
cmd388: fs mirror disable <fs_name>
cmd389: fs mirror peer_add <fs_name> <uuid> <remote_cluster_spec> <remote_fs_name>
cmd390: fs mirror peer_remove <fs_name> <uuid>
cmd391: fs rename <fs_name> <new_fs_name> [--yes-i-really-mean-it]
cmd392: mon dump [<epoch:int>]
cmd393: mon stat
cmd394: mon getmap [<epoch:int>]
cmd395: mon add <name> <addr> [<location>...]
cmd396: mon rm <name>
cmd397: mon remove <name>
cmd398: mon feature ls [--with-value]
cmd399: mon feature set <feature_name> [--yes-i-really-mean-it]
cmd400: mon set-rank <name> <rank:int>
cmd401: mon set-addrs <name> <addrs>
cmd402: mon set-weight <name> <weight:int>
cmd403: mon enable-msgr2
cmd404: mon set election_strategy <strategy>
cmd405: mon add disallowed_leader <name>
cmd406: mon rm disallowed_leader <name>
cmd407: mon set_location <name> <args>...
cmd408: mon enable_stretch_mode <tiebreaker_mon> <new_crush_rule> <dividing_bucket>
cmd409: mon set_new_tiebreaker <name> [--yes-i-really-mean-it]
cmd410: osd stat
cmd411: osd dump [<epoch:int>]
cmd412: osd info [<id|osd.id>]
cmd413: osd tree [<epoch:int>] [<states:up|down|in|out|destroyed>...]
cmd414: osd tree-from [<epoch:int>] <bucket> [<states:up|down|in|out|destroyed>...]
cmd415: osd ls [<epoch:int>]
cmd416: osd getmap [<epoch:int>]
cmd417: osd getcrushmap [<epoch:int>]
cmd418: osd getmaxosd
cmd419: osd ls-tree [<epoch:int>] <name>
cmd420: osd find <id|osd.id>
cmd421: osd metadata [<id|osd.id>]
cmd422: osd count-metadata <property>
cmd423: osd versions
cmd424: osd numa-status
cmd425: osd map <pool> <object> [<nspace>]
cmd426: osd lspools
cmd427: osd crush rule list
cmd428: osd crush rule ls
cmd429: osd crush rule ls-by-class <class>
cmd430: osd crush rule dump [<name>]
cmd431: osd crush dump
cmd432: osd setcrushmap [<prior_version:int>]
cmd433: osd crush set [<prior_version:int>]
cmd434: osd crush add-bucket <name> <type> [<args>...]
cmd435: osd crush rename-bucket <srcname> <dstname>
cmd436: osd crush set <id|osd.id> <weight:float> <args>...
cmd437: osd crush add <id|osd.id> <weight:float> <args>...
cmd438: osd crush set-all-straw-buckets-to-straw2
cmd439: osd crush class create <class>
cmd440: osd crush class rm <class>
cmd441: osd crush set-device-class <class> <ids>...
cmd442: osd crush rm-device-class <ids>...
cmd443: osd crush class rename <srcname> <dstname>
cmd444: osd crush create-or-move <id|osd.id> <weight:float> <args>...
cmd445: osd crush move <name> <args>...
cmd446: osd crush swap-bucket <source> <dest> [--yes-i-really-mean-it]
cmd447: osd crush link <name> <args>...
cmd448: osd crush rm <name> [<ancestor>]
cmd449: osd crush remove <name> [<ancestor>]
cmd450: osd crush unlink <name> [<ancestor>]
cmd451: osd crush reweight-all
cmd452: osd crush reweight <name> <weight:float>
cmd453: osd crush reweight-subtree <name> <weight:float>
cmd454: osd crush tunables <profile:legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default>
cmd455: osd crush set-tunable <tunable:straw_calc_version> <value:int>
cmd456: osd crush get-tunable <tunable:straw_calc_version>
cmd457: osd crush show-tunables
cmd458: osd crush rule create-simple <name> <root> <type> [<mode:firstn|indep>]
cmd459: osd crush rule create-replicated <name> <root> <type> [<class>]
cmd460: osd crush rule create-erasure <name> [<profile>]
cmd461: osd crush rule rm <name>
cmd462: osd crush rule rename <srcname> <dstname>
cmd463: osd crush tree [--show-shadow]
cmd464: osd crush ls <node>
cmd465: osd crush class ls
cmd466: osd crush class ls-osd <class>
cmd467: osd crush get-device-class <ids>...
cmd468: osd crush weight-set ls
cmd469: osd crush weight-set dump
cmd470: osd crush weight-set create-compat
cmd471: osd crush weight-set create <pool> <mode:flat|positional>
cmd472: osd crush weight-set rm <pool>
cmd473: osd crush weight-set rm-compat
cmd474: osd crush weight-set reweight <pool> <item> <weight:float>...
cmd475: osd crush weight-set reweight-compat <item> <weight:float>...
cmd476: osd setmaxosd <newmax:int>
cmd477: osd set-full-ratio <ratio:float>
cmd478: osd set-backfillfull-ratio <ratio:float>
cmd479: osd set-nearfull-ratio <ratio:float>
cmd480: osd get-require-min-compat-client
cmd481: osd set-require-min-compat-client <version> [--yes-i-really-mean-it]
cmd482: osd pause
cmd483: osd unpause
cmd484: osd erasure-code-profile set <name> [<profile>...] [--force]
cmd485: osd erasure-code-profile get <name>
cmd486: osd erasure-code-profile rm <name>
cmd487: osd erasure-code-profile ls
cmd488: osd set <key:full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|pglog_hardlimit> [--yes-i-really-mean-it]
cmd489: osd unset <key:full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim>
cmd490: osd require-osd-release <release:octopus|pacific|quincy> [--yes-i-really-mean-it]
cmd491: osd down <ids>... [--definitely-dead]
cmd492: osd stop <ids>...
cmd493: osd out <ids>...
cmd494: osd in <ids>...
cmd495: osd rm <ids>...
cmd496: osd add-noup <ids>...
cmd497: osd add-nodown <ids>...
cmd498: osd add-noin <ids>...
cmd499: osd add-noout <ids>...
cmd500: osd rm-noup <ids>...
cmd501: osd rm-nodown <ids>...
cmd502: osd rm-noin <ids>...
cmd503: osd rm-noout <ids>...
cmd504: osd set-group <flags> <who>...
cmd505: osd unset-group <flags> <who>...
cmd506: osd reweight <id|osd.id> <weight:float>
cmd507: osd reweightn <weights>
cmd508: osd force-create-pg <pgid> [--yes-i-really-mean-it]
cmd509: osd pg-temp <pgid> [<id|osd.id>...]
cmd510: osd pg-upmap <pgid> <id|osd.id>...
cmd511: osd rm-pg-upmap <pgid>
cmd512: osd pg-upmap-items <pgid> <id|osd.id>...
cmd513: osd rm-pg-upmap-items <pgid>
cmd514: osd primary-temp <pgid> <id|osd.id>
cmd515: osd primary-affinity <id|osd.id> <weight:float>
cmd516: osd destroy-actual <id|osd.id> [--yes-i-really-mean-it]
cmd517: osd purge-new <id|osd.id> [--yes-i-really-mean-it]
cmd518: osd purge-actual <id|osd.id> [--yes-i-really-mean-it]
cmd519: osd lost <id|osd.id> [--yes-i-really-mean-it]
cmd520: osd create [<uuid>] [<id|osd.id>]
cmd521: osd new <uuid> [<id|osd.id>]
cmd522: osd blocklist [<range>] <blocklistop:add|rm> <addr> [<expire:float>]
cmd523: osd blocklist ls
cmd524: osd blocklist clear
cmd525: osd blacklist <blacklistop:add|rm> <addr> [<expire:float>]
cmd526: osd blacklist ls
cmd527: osd blacklist clear
cmd528: osd pool mksnap <pool> <snap>
cmd529: osd pool rmsnap <pool> <snap>
cmd530: osd pool ls [<detail:detail>]
cmd531: osd pool create <pool> [<pg_num:int>] [<pgp_num:int>] [<pool_type:replicated|erasure>] [<erasure_code_profile>] [<rule>] [<expected_num_objects:int>] [<size:int>] [<pg_num_min:int>] [<pg_num_max:int>] [<autoscale_mode:on|off|warn>] [--bulk] [<target_size_bytes:int>] [<target_size_ratio:float>]
cmd532: osd pool delete <pool> [<pool2>] [--yes-i-really-really-mean-it] [--yes-i-really-really-mean-it-not-faking]
cmd533: osd pool rm <pool> [<pool2>] [--yes-i-really-really-mean-it] [--yes-i-really-really-mean-it-not-faking]
cmd534: osd pool rename <srcpool> <destpool>
cmd535: osd pool get <pool> <var:size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|pg_num_max|target_size_bytes|target_size_ratio|dedup_tier|dedup_chunk_algorithm|dedup_cdc_chunk_size|eio|bulk>
cmd536: osd pool set <pool> <var:size|min_size|pg_num|pgp_num|pgp_num_actual|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|pg_num_max|target_size_bytes|target_size_ratio|dedup_tier|dedup_chunk_algorithm|dedup_cdc_chunk_size|eio|bulk> <val> [--yes-i-really-mean-it]
cmd537: osd pool set-quota <pool> <field:max_objects|max_bytes> <val>
cmd538: osd pool get-quota <pool>
cmd539: osd pool application enable <pool> <app> [--yes-i-really-mean-it]
cmd540: osd pool application disable <pool> <app> [--yes-i-really-mean-it]
cmd541: osd pool application set <pool> <app> <key> <value>
cmd542: osd pool application rm <pool> <app> <key>
cmd543: osd pool application get [<pool>] [<app>] [<key>]
cmd544: osd utilization
cmd545: osd force_healthy_stretch_mode [--yes-i-really-mean-it]
cmd546: osd force_recovery_stretch_mode [--yes-i-really-mean-it]
cmd547: osd tier add <pool> <tierpool> [--force-nonempty]
cmd548: osd tier rm <pool> <tierpool>
cmd549: osd tier remove <pool> <tierpool>
cmd550: osd tier cache-mode <pool> <mode:writeback|readproxy|readonly|none> [--yes-i-really-mean-it]
cmd551: osd tier set-overlay <pool> <overlaypool>
cmd552: osd tier rm-overlay <pool>
cmd553: osd tier remove-overlay <pool>
cmd554: osd tier add-cache <pool> <tierpool> <size:int>
cmd555: config-key get <key>
cmd556: config-key set <key> [<val>]
cmd557: config-key put <key> [<val>]
cmd558: config-key del <key>
cmd559: config-key rm <key>
cmd560: config-key exists <key>
cmd561: config-key list
cmd562: config-key ls
cmd563: config-key dump [<key>]
cmd564: mgr stat
cmd565: mgr dump [<epoch:int>]
cmd566: mgr fail [<who>]
cmd567: mgr module ls
cmd568: mgr services
cmd569: mgr module enable <module> [--force]
cmd570: mgr module disable <module>
cmd571: mgr metadata [<who>]
cmd572: mgr count-metadata <property>
cmd573: mgr versions
cmd574: config set <who> <name> <value> [--force]
cmd575: config rm <who> <name>
cmd576: config get <who> [<key>]
cmd577: config dump
cmd578: config help <key>
cmd579: config ls
cmd580: config assimilate-conf
cmd581: config log [<num:int>]
cmd582: config reset <num:int>
cmd583: config generate-minimal-conf
cmd584: injectargs <injected_args>...
cmd585: smart [<devid>]
cmd586: mon_status
cmd587: heap <heapcmd:dump|start_profiler|stop_profiler|release|stats> [<value>]
cmd588: connection scores dump
cmd589: connection scores reset
cmd590: sync_force [--yes-i-really-mean-it]
cmd591: add_bootstrap_peer_hint <addr>
cmd592: add_bootstrap_peer_hintv <addrv>
cmd593: quorum enter
cmd594: quorum exit
cmd595: ops
cmd596: sessions
cmd597: dump_historic_ops
cmd598: dump_historic_slow_ops
validate_command: fs subvolume authorize shared cweb01lab glcweb01a.lab.xyz.host --access_level=rwp
better match: 0.5 > 0.0: pg stat
better match: 0.5 > 0.5: pg getmap
better match: 0.5 > 0.5: pg dump [<dumpcontents:all|summary|sum|delta|pools|osds|pgs|pgs_brief>...]
better match: 0.5 > 0.5: pg dump_json [<dumpcontents:all|summary|sum|pools|osds|pgs>...]
better match: 0.5 > 0.5: pg dump_pools_json
better match: 0.5 > 0.5: pg ls-by-pool <poolstr> [<states>...]
better match: 0.5 > 0.5: pg ls-by-primary <id|osd.id> [<pool:int>] [<states>...]
better match: 0.5 > 0.5: pg ls-by-osd <id|osd.id> [<pool:int>] [<states>...]
better match: 0.5 > 0.5: pg ls [<pool:int>] [<states>...]
better match: 0.5 > 0.5: pg dump_stuck [<stuckops:inactive|unclean|stale|undersized|degraded>...] [<threshold:int>]
better match: 0.5 > 0.5: pg debug <debugop:unfound_objects_exist|degraded_pgs_exist>
better match: 0.5 > 0.5: pg scrub <pgid>
better match: 0.5 > 0.5: pg deep-scrub <pgid>
better match: 0.5 > 0.5: pg repair <pgid>
better match: 0.5 > 0.5: pg force-recovery <pgid>...
better match: 0.5 > 0.5: pg force-backfill <pgid>...
better match: 0.5 > 0.5: pg cancel-force-recovery <pgid>...
better match: 0.5 > 0.5: pg cancel-force-backfill <pgid>...
better match: 0.5 > 0.5: osd perf
better match: 0.5 > 0.5: osd df [<output_method:plain|tree>] [<filter_by:class|name>] [<filter>]
better match: 0.5 > 0.5: osd blocked-by
better match: 0.5 > 0.5: osd pool stats [<pool_name>]
better match: 0.5 > 0.5: osd pool scrub <who>...
better match: 0.5 > 0.5: osd pool deep-scrub <who>...
better match: 0.5 > 0.5: osd pool repair <who>...
better match: 0.5 > 0.5: osd pool force-recovery <who>...
better match: 0.5 > 0.5: osd pool force-backfill <who>...
better match: 0.5 > 0.5: osd pool cancel-force-recovery <who>...
better match: 0.5 > 0.5: osd pool cancel-force-backfill <who>...
better match: 0.5 > 0.5: osd reweight-by-utilization [<oload:int>] [<max_change:float>] [<max_osds:int>] [--no-increasing]
better match: 0.5 > 0.5: osd test-reweight-by-utilization [<oload:int>] [<max_change:float>] [<max_osds:int>] [--no-increasing]
better match: 0.5 > 0.5: osd reweight-by-pg [<oload:int>] [<max_change:float>] [<max_osds:int>] [<pools>...]
better match: 0.5 > 0.5: osd test-reweight-by-pg [<oload:int>] [<max_change:float>] [<max_osds:int>] [<pools>...]
better match: 0.5 > 0.5: osd destroy <id|osd.id> [--force] [--yes-i-really-mean-it]
better match: 0.5 > 0.5: osd purge <id|osd.id> [--force] [--yes-i-really-mean-it]
better match: 0.5 > 0.5: osd safe-to-destroy <ids>...
better match: 0.5 > 0.5: osd ok-to-stop <ids>... [<max:int>]
better match: 0.5 > 0.5: osd scrub <who>
better match: 0.5 > 0.5: osd deep-scrub <who>
better match: 0.5 > 0.5: osd repair <who>
better match: 0.5 > 0.5: service dump
better match: 0.5 > 0.5: service status
better match: 0.5 > 0.5: config show <who> [<key>]
better match: 0.5 > 0.5: config show-with-defaults <who>
better match: 0.5 > 0.5: device ls
better match: 0.5 > 0.5: device info <devid>
better match: 0.5 > 0.5: device ls-by-daemon <who>
better match: 0.5 > 0.5: device ls-by-host <host>
better match: 0.5 > 0.5: device set-life-expectancy <devid> <from> [<to>]
better match: 0.5 > 0.5: device rm-life-expectancy <devid>
better match: 0.5 > 0.5: alerts send
better match: 0.5 > 0.5: balancer status
better match: 0.5 > 0.5: balancer mode <mode:none|crush-compat|upmap>
better match: 0.5 > 0.5: balancer on
better match: 0.5 > 0.5: balancer off
better match: 0.5 > 0.5: balancer pool ls
better match: 0.5 > 0.5: balancer pool add <pools>...
better match: 0.5 > 0.5: balancer pool rm <pools>...
better match: 0.5 > 0.5: balancer eval-verbose [<option>]
better match: 0.5 > 0.5: balancer eval [<option>]
better match: 0.5 > 0.5: balancer optimize <plan> [<pools>...]
better match: 0.5 > 0.5: balancer show <plan>
better match: 0.5 > 0.5: balancer rm <plan>
better match: 0.5 > 0.5: balancer reset
better match: 0.5 > 0.5: balancer dump <plan>
better match: 0.5 > 0.5: balancer ls
better match: 0.5 > 0.5: balancer execute <plan>
better match: 0.5 > 0.5: crash info <id>
better match: 0.5 > 0.5: crash post
better match: 0.5 > 0.5: crash ls [--format <value>]
better match: 0.5 > 0.5: crash ls-new [--format <value>]
better match: 0.5 > 0.5: crash rm <id>
better match: 0.5 > 0.5: crash prune <keep:int>
better match: 0.5 > 0.5: crash archive <id>
better match: 0.5 > 0.5: crash archive-all
better match: 0.5 > 0.5: crash stat
better match: 0.5 > 0.5: crash json_report <hours:int>
better match: 0.5 > 0.5: device query-daemon-health-metrics <who>
better match: 0.5 > 0.5: device scrape-daemon-health-metrics <who>
better match: 0.5 > 0.5: device scrape-health-metrics [<devid>]
better match: 0.5 > 0.5: device get-health-metrics <devid> [<sample>]
better match: 0.5 > 0.5: device check-health
better match: 0.5 > 0.5: device monitoring on
better match: 0.5 > 0.5: device monitoring off
better match: 0.5 > 0.5: device predict-life-expectancy <devid>
better match: 0.5 > 0.5: influx config-set <key> <value>
better match: 0.5 > 0.5: influx config-show
better match: 0.5 > 0.5: influx send
better match: 0.5 > 0.5: influx config-show
better match: 0.5 > 0.5: influx config-set <key> <value>
better match: 0.5 > 0.5: influx send
better match: 0.5 > 0.5: insights
better match: 0.5 > 0.5: insights prune-health [<hours:int>]
better match: 0.5 > 0.5: iostat [<width:int>] [--print-header]
better match: 1.5 > 0.5: fs snapshot mirror enable [<fs_name>]
better match: 1.5 > 1.5: fs snapshot mirror disable [<fs_name>]
better match: 1.5 > 1.5: fs snapshot mirror peer_add <fs_name> [<remote_cluster_spec>] [<remote_fs_name>] [<remote_mon_host>] [<cephx_key>]
better match: 1.5 > 1.5: fs snapshot mirror peer_list [<fs_name>]
better match: 1.5 > 1.5: fs snapshot mirror peer_remove <fs_name> [<peer_uuid>]
better match: 1.5 > 1.5: fs snapshot mirror peer_bootstrap create <fs_name> <client_name> [<site_name>]
better match: 1.5 > 1.5: fs snapshot mirror peer_bootstrap import <fs_name> [<token>]
better match: 1.5 > 1.5: fs snapshot mirror add <fs_name> [<path>]
better match: 1.5 > 1.5: fs snapshot mirror remove <fs_name> [<path>]
better match: 1.5 > 1.5: fs snapshot mirror dirmap <fs_name> [<path>]
better match: 1.5 > 1.5: fs snapshot mirror show distribution [<fs_name>]
better match: 1.5 > 1.5: fs snapshot mirror daemon status
better match: 1.5 > 1.5: fs snap-schedule status [<path>] [<fs>] [--format <value>]
better match: 1.5 > 1.5: fs snap-schedule list <path> [--recursive] [--fs <value>] [--format <value>]
better match: 1.5 > 1.5: fs snap-schedule add <path> <snap_schedule> [<start>] [<fs>]
better match: 1.5 > 1.5: fs snap-schedule remove <path> [<repeat>] [<start>] [<fs>]
better match: 1.5 > 1.5: fs snap-schedule retention add <path> <retention_spec_or_period> [<retention_count>] [<fs>]
better match: 1.5 > 1.5: fs snap-schedule retention remove <path> <retention_spec_or_period> [<retention_count>] [<fs>]
better match: 1.5 > 1.5: fs snap-schedule activate <path> [<repeat>] [<start>] [<fs>]
better match: 1.5 > 1.5: fs snap-schedule deactivate <path> [<repeat>] [<start>] [<fs>]
better match: 1.5 > 1.5: fs perf stats [<mds_rank>] [<client_id>] [<client_ip>]
better match: 1.5 > 1.5: fs status [<fs>] [--format <value>]
better match: 1.5 > 1.5: fs volume ls
better match: 1.5 > 1.5: fs volume create <name> [<placement>]
better match: 1.5 > 1.5: fs volume rm <vol_name> [<yes-i-really-mean-it>]
better match: 1.5 > 1.5: fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]
better match: 1.5 > 1.5: fs volume info <vol_name> [--human-readable]
better match: 1.5 > 1.5: fs subvolumegroup ls <vol_name>
better match: 1.5 > 1.5: fs subvolumegroup create <vol_name> <group_name> [<size:int>] [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>]
better match: 1.5 > 1.5: fs subvolumegroup rm <vol_name> <group_name> [--force]
better match: 1.5 > 1.5: fs subvolumegroup info <vol_name> <group_name>
better match: 1.5 > 1.5: fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no-shrink]
better match: 1.5 > 1.5: fs subvolumegroup exist <vol_name>
better match: 2.5 > 1.5: fs subvolume ls <vol_name> [<group_name>]
better match: 2.5 > 2.5: fs subvolume create <vol_name> <sub_name> [<size:int>] [<group_name>] [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>] [--namespace-isolated]
better match: 2.5 > 2.5: fs subvolume rm <vol_name> <sub_name> [<group_name>] [--force] [--retain-snapshots]
better match: 6.5 > 2.5: fs subvolume authorize <vol_name> <sub_name> <auth_id> [<group_name>] [<access_level>] [<tenant_id>] [--allow-existing-id]
bestcmds_sorted:
[{'flags': 8,
  'help': 'Allow a cephx auth ID access to a subvolume',
  'module': 'mgr',
  'perm': 'rw',
  'sig': [argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, positional=True, name=prefix, n=1, numseen=0, prefix=fs),
          argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, positional=True, name=prefix, n=1, numseen=0, prefix=subvolume),
          argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, positional=True, name=prefix, n=1, numseen=0, prefix=authorize),
          argdesc(<class 'ceph_argparse.CephString'>, req=True, positional=True, name=vol_name, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephString'>, req=True, positional=True, name=sub_name, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephString'>, req=True, positional=True, name=auth_id, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephString'>, req=False, positional=True, name=group_name, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephString'>, req=False, positional=True, name=access_level, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephString'>, req=False, positional=True, name=tenant_id, n=1, numseen=0),
          argdesc(<class 'ceph_argparse.CephBool'>, req=False, positional=True, name=allow_existing_id, n=1, numseen=0)]}]
Submitting command:  {'prefix': 'fs subvolume authorize', 'vol_name': 'shared', 'sub_name': 'cweb01lab', 'auth_id': 'glcweb01a.lab.xyz.host', 'access_level': 'rwp', 'target': ('mon-mgr', '')}
submit {"prefix": "fs subvolume authorize", "vol_name": "shared", "sub_name": "cweb01lab", "auth_id": "glcweb01a.lab.xyz.host", "access_level": "rwp", "target": ["mon-mgr", ""]} to mon-mgr
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1756, in _handle_command
    return self.handle_command(inbuf, cmd)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/volumes/module.py", line 533, in handle_command
    return handler(inbuf, cmd)
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/volumes/module.py", line 38, in wrap
    return f(self, inbuf, cmd)
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/volumes/module.py", line 631, in _cmd_fs_subvolume_authorize
    return self.vc.authorize_subvolume(vol_name=cmd['vol_name'],
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/volumes/fs/volume.py", line 304, in authorize_subvolume
    key = subvolume.authorize(authid, accesslevel, tenant_id, allow_existing_id)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/volumes/fs/operations/versions/subvolume_v1.py", line 415, in authorize
    key = self._authorize_subvolume(auth_id, access_level, existing_caps)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/volumes/fs/operations/versions/subvolume_v1.py", line 446, in _authorize_subvolume
    key = self._authorize(auth_id, access_level, existing_caps)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/volumes/fs/operations/versions/subvolume_v1.py", line 483, in _authorize
    return allow_access(self.mgr, client_entity, want_mds_cap, want_osd_cap,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/volumes/fs/operations/access.py", line 93, in allow_access
    caps = json.loads(out)
           ^^^^^^^^^^^^^^^
  File "/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
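
For what it's worth, the JSONDecodeError at the end ("Expecting value: line 1 column 1 (char 0)") is exactly what Python's json module raises when it is asked to parse an empty string, so the json.loads(out) call in access.py apparently received no (or non-JSON) output from whatever internal query it ran for this auth entity. A minimal sketch that reproduces the same exception (the value of out is only an assumption about what the mgr module got back):

import json

out = ""  # assumed: empty/non-JSON text handed to access.py's json.loads(out)
try:
    caps = json.loads(out)
except json.JSONDecodeError as e:
    # Prints: Expecting value: line 1 column 1 (char 0)
    print(e)
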
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


