Re: Ceph mds is stuck in creating status

I attached the osd & fs dumps. There are clearly two pools (cephfs_data, cephfs_metadata) for CephFS. And this system uses 40Gbps Ethernet for both the public and cluster networks, so I don't think network speed is the problem. Thank you.

On Tue, Oct 16, 2018 at 1:18 AM, John Spray <jspray@xxxxxxxxxx> wrote:
On Mon, Oct 15, 2018 at 4:24 PM Kisik Jeong <kisik.jeong@xxxxxxxxxxxx> wrote:
>
> Thank you for your reply, John.
>
> I restarted my Ceph cluster and captured the mds logs.
>
> I found that mds shows slow request because some OSDs are laggy.
>
> I followed the ceph mds troubleshooting with 'mds slow request', but there is no operation in flight:
>
> root@hpc1:~/iodc# ceph daemon mds.hpc1 dump_ops_in_flight
> {
>     "ops": [],
>     "num_ops": 0
> }
>
> Is there any other reason why the mds would show slow requests? Thank you.

Those stuck requests seem to be stuck because they're targeting pools
that don't exist.  Has something strange happened in the history of
this cluster that might have left a filesystem referencing pools that
no longer exist?  Ceph is not supposed to permit removal of pools in
use by CephFS, but perhaps something went wrong.

Check out the "ceph osd dump --format=json-pretty" and "ceph fs dump
--format=json-pretty" outputs and how the pool IDs relate.  According
to those logs, the data pool with ID 1 and the metadata pool with ID 2
do not exist.
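To cross-check the two dumps mechanically, something like the following sketch works (not from the thread; it just walks the JSON shapes that "ceph fs dump" and "ceph osd dump" emit, as shown further down in this message):

```python
def missing_cephfs_pools(fs_dump, osd_dump):
    """Return pool IDs referenced by the filesystem map ('ceph fs dump')
    that are absent from the OSD map ('ceph osd dump')."""
    existing = {p["pool"] for p in osd_dump["pools"]}
    missing = []
    for fs in fs_dump["filesystems"]:
        mdsmap = fs["mdsmap"]
        referenced = set(mdsmap["data_pools"]) | {mdsmap["metadata_pool"]}
        missing.extend(sorted(referenced - existing))
    return missing

# Trimmed-down stand-ins for the real dumps in this thread:
fs_dump = {"filesystems": [{"mdsmap": {"data_pools": [1], "metadata_pool": 2}}]}
osd_dump = {"pools": [{"pool": 1}, {"pool": 2}]}
print(missing_cephfs_pools(fs_dump, osd_dump))  # [] means every referenced pool exists
```

A non-empty result would confirm the "filesystem referencing pools that no longer exist" suspicion.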

John

> -Kisik
>
> On Mon, Oct 15, 2018 at 11:43 PM, John Spray <jspray@xxxxxxxxxx> wrote:
>>
>> On Mon, Oct 15, 2018 at 3:34 PM Kisik Jeong <kisik.jeong@xxxxxxxxxxxx> wrote:
>> >
>> > Hello,
>> >
>> > I successfully deployed a Ceph cluster with 16 OSDs and created CephFS before.
>> > But after rebooting due to an mds slow request problem, when I create CephFS, the Ceph mds goes into the creating state and never changes.
>> > Looking at the Ceph status, I don't think there is any other problem. Here is the 'ceph -s' result:
>>
>> That's pretty strange.  Usually if an MDS is stuck in "creating", it's
>> because an OSD operation is stuck, but in your case all your PGs are
>> healthy.
>>
>> I would suggest setting "debug mds=20" and "debug objecter=10" on your
>> MDS, restarting it and capturing those logs so that we can see where
>> it got stuck.
>>
>> John
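For reference, the debug levels suggested above translate to a ceph.conf fragment like this on the MDS host before restarting the daemon (a sketch using the stock option names; the same values can also be applied at runtime):

```ini
# On the MDS host (hpc1 in this thread), in the [mds] section:
[mds]
debug mds = 20
debug objecter = 10
```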
>>
>> > csl@hpc1:~$ ceph -s
>> >   cluster:
>> >     id:     1a32c483-cb2e-4ab3-ac60-02966a8fd327
>> >     health: HEALTH_OK
>> >
>> >   services:
>> >     mon: 1 daemons, quorum hpc1
>> >     mgr: hpc1(active)
>> >     mds: cephfs-1/1/1 up  {0=hpc1=up:creating}
>> >     osd: 16 osds: 16 up, 16 in
>> >
>> >   data:
>> >     pools:   2 pools, 640 pgs
>> >     objects: 7 objects, 124B
>> >     usage:   34.3GiB used, 116TiB / 116TiB avail
>> >     pgs:     640 active+clean
>> >
>> > However, CephFS still works in the setup with 8 OSDs.
>> >
>> > If anything about this phenomenon looks suspicious, please let me know. Thank you.
>> >
>> > PS. I attached my ceph.conf contents:
>> >
>> > [global]
>> > fsid = 1a32c483-cb2e-4ab3-ac60-02966a8fd327
>> > mon_initial_members = hpc1
>> > mon_host = 192.168.40.10
>> > auth_cluster_required = cephx
>> > auth_service_required = cephx
>> > auth_client_required = cephx
>> >
>> > public_network = 192.168.40.0/24
>> > cluster_network = 192.168.40.0/24
>> >
>> > [osd]
>> > osd journal size = 1024
>> > osd max object name len = 256
>> > osd max object namespace len = 64
>> > osd mount options f2fs = active_logs=2
>> >
>> > [osd.0]
>> > host = hpc9
>> > public_addr = 192.168.40.18
>> > cluster_addr = 192.168.40.18
>> >
>> > [osd.1]
>> > host = hpc10
>> > public_addr = 192.168.40.19
>> > cluster_addr = 192.168.40.19
>> >
>> > [osd.2]
>> > host = hpc9
>> > public_addr = 192.168.40.18
>> > cluster_addr = 192.168.40.18
>> >
>> > [osd.3]
>> > host = hpc10
>> > public_addr = 192.168.40.19
>> > cluster_addr = 192.168.40.19
>> >
>> > [osd.4]
>> > host = hpc9
>> > public_addr = 192.168.40.18
>> > cluster_addr = 192.168.40.18
>> >
>> > [osd.5]
>> > host = hpc10
>> > public_addr = 192.168.40.19
>> > cluster_addr = 192.168.40.19
>> >
>> > [osd.6]
>> > host = hpc9
>> > public_addr = 192.168.40.18
>> > cluster_addr = 192.168.40.18
>> >
>> > [osd.7]
>> > host = hpc10
>> > public_addr = 192.168.40.19
>> > cluster_addr = 192.168.40.19
>> >
>> > [osd.8]
>> > host = hpc9
>> > public_addr = 192.168.40.18
>> > cluster_addr = 192.168.40.18
>> >
>> > [osd.9]
>> > host = hpc10
>> > public_addr = 192.168.40.19
>> > cluster_addr = 192.168.40.19
>> >
>> > [osd.10]
>> > host = hpc9
>> > public_addr = 192.168.10.18
>> > cluster_addr = 192.168.40.18
>> >
>> > [osd.11]
>> > host = hpc10
>> > public_addr = 192.168.10.19
>> > cluster_addr = 192.168.40.19
>> >
>> > [osd.12]
>> > host = hpc9
>> > public_addr = 192.168.10.18
>> > cluster_addr = 192.168.40.18
>> >
>> > [osd.13]
>> > host = hpc10
>> > public_addr = 192.168.10.19
>> > cluster_addr = 192.168.40.19
>> >
>> > [osd.14]
>> > host = hpc9
>> > public_addr = 192.168.10.18
>> > cluster_addr = 192.168.40.18
>> >
>> > [osd.15]
>> > host = hpc10
>> > public_addr = 192.168.10.19
>> > cluster_addr = 192.168.40.19
>> >
>> > --
>> > Kisik Jeong
>> > Ph.D. Student
>> > Computer Systems Laboratory
>> > Sungkyunkwan University
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Kisik Jeong
> Ph.D. Student
> Computer Systems Laboratory
> Sungkyunkwan University


--
Kisik Jeong
Ph.D. Student
Computer Systems Laboratory
Sungkyunkwan University
csl@hpc1:~/iodc$ ceph fs dump --format=json-pretty
dumped fsmap epoch 7

{
    "epoch": 7,
    "compat": {
        "compat": {},
        "ro_compat": {},
        "incompat": {
            "feature_1": "base v0.20",
            "feature_2": "client writeable ranges",
            "feature_3": "default file layouts on dirs",
            "feature_4": "dir inode in separate object",
            "feature_5": "mds uses versioned encoding",
            "feature_6": "dirfrag is stored in omap",
            "feature_8": "no anchor table",
            "feature_9": "file layout v2"
        }
    },
    "feature_flags": {
        "enable_multiple": false,
        "ever_enabled_multiple": false
    },
    "standbys": [],
    "filesystems": [
        {
            "mdsmap": {
                "epoch": 7,
                "flags": 12,
                "ever_allowed_features": 0,
                "explicitly_allowed_features": 0,
                "created": "2018-10-15 23:59:41.928648",
                "modified": "2018-10-16 03:04:34.836401",
                "tableserver": 0,
                "root": 0,
                "session_timeout": 60,
                "session_autoclose": 300,
                "max_file_size": 1099511627776,
                "last_failure": 0,
                "last_failure_osd_epoch": 78,
                "compat": {
                    "compat": {},
                    "ro_compat": {},
                    "incompat": {
                        "feature_1": "base v0.20",
                        "feature_2": "client writeable ranges",
                        "feature_3": "default file layouts on dirs",
                        "feature_4": "dir inode in separate object",
                        "feature_5": "mds uses versioned encoding",
                        "feature_6": "dirfrag is stored in omap",
                        "feature_8": "no anchor table",
                        "feature_9": "file layout v2"
                    }
                },
                "max_mds": 1,
                "in": [
                    0
                ],
                "up": {
                    "mds_0": 4203
                },
                "failed": [],
                "damaged": [],
                "stopped": [],
                "info": {
                    "gid_4203": {
                        "gid": 4203,
                        "name": "hpc1",
                        "rank": 0,
                        "incarnation": 7,
                        "state": "up:creating",
                        "state_seq": 2,
                        "addr": "192.168.40.10:6801/550609",
                        "standby_for_rank": -1,
                        "standby_for_fscid": -1,
                        "standby_for_name": "",
                        "standby_replay": false,
                        "export_targets": [],
                        "features": 4611087853745930235
                    }
                },
                "data_pools": [
                    1
                ],
                "metadata_pool": 2,
                "enabled": true,
                "fs_name": "cephfs",
                "balancer": "",
                "standby_count_wanted": 0
            },
            "id": 1
        }
    ]
}
csl@hpc1:~/iodc$ ceph osd dump --format=json-pretty

{
    "epoch": 78,
    "fsid": "66b24069-fb7e-4842-9ff5-8f87f4ba0751",
    "created": "2018-10-15 23:50:51.883303",
    "modified": "2018-10-16 00:25:25.475329",
    "flags": "sortbitwise,recovery_deletes,purged_snapdirs",
    "crush_version": 33,
    "full_ratio": 0.950000,
    "backfillfull_ratio": 0.900000,
    "nearfull_ratio": 0.850000,
    "cluster_snapshot": "",
    "pool_max": 2,
    "max_osd": 16,
    "require_min_compat_client": "jewel",
    "min_compat_client": "jewel",
    "require_osd_release": "luminous",
    "pools": [
        {
            "pool": 1,
            "pool_name": "cephfs_data",
            "flags": 1,
            "flags_names": "hashpspool",
            "type": 1,
            "size": 1,
            "min_size": 1,
            "crush_rule": 0,
            "object_hash": 2,
            "pg_num": 512,
            "pg_placement_num": 512,
            "crash_replay_interval": 0,
            "last_change": "77",
            "last_force_op_resend": "0",
            "last_force_op_resend_preluminous": "0",
            "auid": 0,
            "snap_mode": "selfmanaged",
            "snap_seq": 0,
            "snap_epoch": 0,
            "pool_snaps": [],
            "removed_snaps": "[]",
            "quota_max_bytes": 0,
            "quota_max_objects": 0,
            "tiers": [],
            "tier_of": -1,
            "read_tier": -1,
            "write_tier": -1,
            "cache_mode": "none",
            "target_max_bytes": 0,
            "target_max_objects": 0,
            "cache_target_dirty_ratio_micro": 400000,
            "cache_target_dirty_high_ratio_micro": 600000,
            "cache_target_full_ratio_micro": 800000,
            "cache_min_flush_age": 0,
            "cache_min_evict_age": 0,
            "erasure_code_profile": "",
            "hit_set_params": {
                "type": "none"
            },
            "hit_set_period": 0,
            "hit_set_count": 0,
            "use_gmt_hitset": true,
            "min_read_recency_for_promote": 0,
            "min_write_recency_for_promote": 0,
            "hit_set_grade_decay_rate": 0,
            "hit_set_search_last_n": 0,
            "grade_table": [],
            "stripe_width": 0,
            "expected_num_objects": 0,
            "fast_read": false,
            "options": {},
            "application_metadata": {
                "cephfs": {}
            }
        },
        {
            "pool": 2,
            "pool_name": "cephfs_metadata",
            "flags": 1,
            "flags_names": "hashpspool",
            "type": 1,
            "size": 1,
            "min_size": 1,
            "crush_rule": 0,
            "object_hash": 2,
            "pg_num": 128,
            "pg_placement_num": 128,
            "crash_replay_interval": 0,
            "last_change": "77",
            "last_force_op_resend": "0",
            "last_force_op_resend_preluminous": "0",
            "auid": 0,
            "snap_mode": "selfmanaged",
            "snap_seq": 0,
            "snap_epoch": 0,
            "pool_snaps": [],
            "removed_snaps": "[]",
            "quota_max_bytes": 0,
            "quota_max_objects": 0,
            "tiers": [],
            "tier_of": -1,
            "read_tier": -1,
            "write_tier": -1,
            "cache_mode": "none",
            "target_max_bytes": 0,
            "target_max_objects": 0,
            "cache_target_dirty_ratio_micro": 400000,
            "cache_target_dirty_high_ratio_micro": 600000,
            "cache_target_full_ratio_micro": 800000,
            "cache_min_flush_age": 0,
            "cache_min_evict_age": 0,
            "erasure_code_profile": "",
            "hit_set_params": {
                "type": "none"
            },
            "hit_set_period": 0,
            "hit_set_count": 0,
            "use_gmt_hitset": true,
            "min_read_recency_for_promote": 0,
            "min_write_recency_for_promote": 0,
            "hit_set_grade_decay_rate": 0,
            "hit_set_search_last_n": 0,
            "grade_table": [],
            "stripe_width": 0,
            "expected_num_objects": 0,
            "fast_read": false,
            "options": {},
            "application_metadata": {
                "cephfs": {}
            }
        }
    ],
    "osds": [
        {
            "osd": 0,
            "uuid": "d66fad26-b234-4b72-bd9b-fde646cda5e9",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 5,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.18:6800/180692",
            "cluster_addr": "192.168.40.18:6801/180692",
            "heartbeat_back_addr": "192.168.40.18:6802/180692",
            "heartbeat_front_addr": "192.168.40.18:6803/180692",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 1,
            "uuid": "3d9bc6f2-1917-44e2-86d5-a33b2c519491",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 9,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.19:6800/210313",
            "cluster_addr": "192.168.40.19:6801/210313",
            "heartbeat_back_addr": "192.168.40.19:6802/210313",
            "heartbeat_front_addr": "192.168.40.19:6803/210313",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 2,
            "uuid": "909396c9-bb4f-478a-a147-219a528019a1",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 12,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.18:6804/181671",
            "cluster_addr": "192.168.40.18:6805/181671",
            "heartbeat_back_addr": "192.168.40.18:6806/181671",
            "heartbeat_front_addr": "192.168.40.18:6807/181671",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 3,
            "uuid": "3c94ac02-4e63-4387-9044-13e710db0b52",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 16,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.19:6804/211335",
            "cluster_addr": "192.168.40.19:6805/211335",
            "heartbeat_back_addr": "192.168.40.19:6806/211335",
            "heartbeat_front_addr": "192.168.40.19:6807/211335",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 4,
            "uuid": "b4e162a4-f7c8-4def-9b34-def31e54636e",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 20,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.18:6808/182682",
            "cluster_addr": "192.168.40.18:6809/182682",
            "heartbeat_back_addr": "192.168.40.18:6810/182682",
            "heartbeat_front_addr": "192.168.40.18:6811/182682",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 5,
            "uuid": "aab00ceb-1932-4d99-8736-38f7de5b8c3f",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 24,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.19:6808/212259",
            "cluster_addr": "192.168.40.19:6809/212259",
            "heartbeat_back_addr": "192.168.40.19:6810/212259",
            "heartbeat_front_addr": "192.168.40.19:6811/212259",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 6,
            "uuid": "adf9b122-036b-4bea-8c63-43a4e03593f3",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 28,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.18:6812/183594",
            "cluster_addr": "192.168.40.18:6813/183594",
            "heartbeat_back_addr": "192.168.40.18:6814/183594",
            "heartbeat_front_addr": "192.168.40.18:6815/183594",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 7,
            "uuid": "17520ef3-b8fa-4eec-8fec-654972e915a1",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 32,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.19:6812/213175",
            "cluster_addr": "192.168.40.19:6813/213175",
            "heartbeat_back_addr": "192.168.40.19:6814/213175",
            "heartbeat_front_addr": "192.168.40.19:6815/213175",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 8,
            "uuid": "38cd402b-98af-4902-82e1-c4700240e5e3",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 36,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.18:6816/184530",
            "cluster_addr": "192.168.40.18:6817/184530",
            "heartbeat_back_addr": "192.168.40.18:6818/184530",
            "heartbeat_front_addr": "192.168.40.18:6819/184530",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 9,
            "uuid": "665f52cc-67a3-4a1e-b5ab-13769983a42e",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 40,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.40.19:6816/214289",
            "cluster_addr": "192.168.40.19:6817/214289",
            "heartbeat_back_addr": "192.168.40.19:6818/214289",
            "heartbeat_front_addr": "192.168.40.19:6819/214289",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 10,
            "uuid": "6f74e272-af58-4d1c-88e0-5523b3ef517a",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 44,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.10.18:6800/185513",
            "cluster_addr": "192.168.40.18:6820/185513",
            "heartbeat_back_addr": "192.168.40.18:6821/185513",
            "heartbeat_front_addr": "192.168.10.18:6801/185513",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 11,
            "uuid": "dfb9c1ab-0e11-4e7c-b6f5-b60ecbd439b7",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 48,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.10.19:6800/215291",
            "cluster_addr": "192.168.40.19:6820/215291",
            "heartbeat_back_addr": "192.168.40.19:6821/215291",
            "heartbeat_front_addr": "192.168.10.19:6801/215291",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 12,
            "uuid": "396e5910-dca8-4408-8fbb-c46e844dad6b",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 52,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.10.18:6802/186471",
            "cluster_addr": "192.168.40.18:6822/186471",
            "heartbeat_back_addr": "192.168.40.18:6823/186471",
            "heartbeat_front_addr": "192.168.10.18:6803/186471",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 13,
            "uuid": "bfc7d56c-065d-4f3e-9230-e341436fac3e",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 56,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.10.19:6802/216266",
            "cluster_addr": "192.168.40.19:6822/216266",
            "heartbeat_back_addr": "192.168.40.19:6823/216266",
            "heartbeat_front_addr": "192.168.10.19:6803/216266",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 14,
            "uuid": "8e341b1d-794e-47af-85f2-15acdab437b4",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 60,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.10.18:6804/187400",
            "cluster_addr": "192.168.40.18:6824/187400",
            "heartbeat_back_addr": "192.168.40.18:6825/187400",
            "heartbeat_front_addr": "192.168.10.18:6805/187400",
            "state": [
                "exists",
                "up"
            ]
        },
        {
            "osd": 15,
            "uuid": "898425d3-844a-49d5-84b3-7b3fd1dd5bb5",
            "up": 1,
            "in": 1,
            "weight": 1.000000,
            "primary_affinity": 1.000000,
            "last_clean_begin": 0,
            "last_clean_end": 0,
            "up_from": 64,
            "up_thru": 74,
            "down_at": 0,
            "lost_at": 0,
            "public_addr": "192.168.10.19:6804/217357",
            "cluster_addr": "192.168.40.19:6824/217357",
            "heartbeat_back_addr": "192.168.40.19:6825/217357",
            "heartbeat_front_addr": "192.168.10.19:6805/217357",
            "state": [
                "exists",
                "up"
            ]
        }
    ],
    "osd_xinfo": [
        {
            "osd": 0,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 1,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 2,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 3,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 4,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 5,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 6,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 7,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 8,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 9,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 10,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 11,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 12,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 13,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 14,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        },
        {
            "osd": 15,
            "down_stamp": "0.000000",
            "laggy_probability": 0.000000,
            "laggy_interval": 0,
            "features": 4611087853745930235,
            "old_weight": 0
        }
    ],
    "pg_upmap": [],
    "pg_upmap_items": [],
    "pg_temp": [],
    "primary_temp": [],
    "blacklist": {
        "192.168.40.10:6801/137222136": "2018-10-17 00:25:25.475313"
    },
    "erasure_code_profiles": {
        "default": {
            "k": "2",
            "m": "1",
            "plugin": "jerasure",
            "technique": "reed_sol_van"
        }
    }
}
