Re: PG undersized+peered and inactive

On Thu, 24 Oct 2019, Liu, Changcheng wrote:
> On 11:24 Thu 24 Oct, Sage Weil wrote:
> > The default placement policy is to separate replicas across hosts, and
> > you only have one host.  You can create a new rule that places replicas
> > across OSDs with 'ceph osd crush rule create-replicated ...' and switch
> > your data pool to it with 'ceph osd pool set $pool crush_rule $rulename'.
> 
> If this is the case, why is there no problem when creating a pool on the
> 3-OSD cluster that vstart creates? Can this problem be solved on a
> single-node cluster?

vstart sets the default failure domain to OSD instead of host.
ceph-deploy is usually used for real clusters, where a host failure domain
is a better default.  For a single-node cluster you can override it as
sketched below.
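
A minimal sketch of the fix (the rule name 'replicated_osd' and pool name
'mypool' are placeholders; adjust them to your cluster):

    # create a replicated rule that places replicas across OSDs
    # (default CRUSH root, OSD-level failure domain)
    ceph osd crush rule create-replicated replicated_osd default osd

    # switch the data pool to the new rule, then verify
    ceph osd pool set mypool crush_rule replicated_osd
    ceph osd pool get mypool crush_rule

For a brand-new single-node cluster you can instead set the failure domain
up front in ceph.conf, before the OSDs are created:

    [global]
    # 0 = osd: the default rule will choose leaves at the OSD level
    osd crush chooseleaf type = 0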

> Previously, I also created a 2-node cluster with 4 OSDs (3 OSDs on node A,
> 1 OSD on node B, default crush rule). It hit the same problem.

With 2 nodes you probably ended up with PGs in active+undersized: CRUSH can
place only two of the three replicas (one per host), which is enough for the
PGs to go active but leaves them degraded.
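
If you want to keep the 2-node layout, one workaround for a test cluster is
to drop the pool's replica count to match the host count (again, 'mypool'
is a placeholder):

    # accept 2 replicas instead of the default 3 for this pool
    ceph osd pool set mypool size 2

Going below that, or setting min_size to 1, is not something you'd want on
a cluster holding real data.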

s

> > 
> > s
> > 
> > On Thu, 24 Oct 2019, Liu, Changcheng wrote:
> > 
> > > Hi all,
> > >    I'm using the ceph-deploy tool to deploy a single-node cluster based on
> > >    the Ceph master branch (head id: bf09a04d2). The PGs are inactive and undersized.
> > >        -bash-4.2$ ceph -s
> > >          cluster:
> > >            id:     93e5ff9c-1dee-4bd8-9b0a-318b917dfd8c
> > >            health: HEALTH_WARN
> > >                    Reduced data availability: 4 pgs inactive
> > >                    Degraded data redundancy: 4 pgs undersized
> > >         
> > >          services:
> > >            mon: 1 daemons, quorum rdmarhel0 (age 30m)
> > >            mgr: rdmarhel0(active, since 29m)
> > >            osd: 3 osds: 3 up (since 25m), 3 in (since 25m)
> > >         
> > >          data:
> > >            pools:   1 pools, 4 pgs
> > >            objects: 0 objects, 0 B
> > >            usage:   3.0 GiB used, 834 GiB / 837 GiB avail
> > >            pgs:     100.000% pgs not active
> > >                     4 undersized+peered
> > >         
> > >        -bash-4.2$
> > > 
> > >    Other info is attached.
> > > 
> > >    I applied the patch below on top of head id bf09a04d2; it is for a
> > >    Ceph/RDMA deployment. The problem occurs with both the RDMA messenger
> > >    and the POSIX TCP messenger. (A drop-in alternative to patching the
> > >    unit files is sketched after the quoted diff.)
> > >      diff --git a/systemd/ceph-fuse@.service b/systemd/ceph-fuse@.service
> > >      index d603042..ff2e907 100644
> > >      --- a/systemd/ceph-fuse@.service
> > >      +++ b/systemd/ceph-fuse@.service
> > >      @@ -12,6 +12,7 @@ ExecStart=/usr/bin/ceph-fuse -f --cluster ${CLUSTER} %I
> > >       LockPersonality=true
> > >       MemoryDenyWriteExecute=true
> > >       NoNewPrivileges=true
> > >      +LimitMEMLOCK=infinity
> > >       # ceph-fuse requires access to /dev fuse device
> > >       PrivateDevices=no
> > >       ProtectControlGroups=true
> > >      diff --git a/systemd/ceph-mds@.service b/systemd/ceph-mds@.service
> > >      index 39a2e63..0e58dfe 100644
> > >      --- a/systemd/ceph-mds@.service
> > >      +++ b/systemd/ceph-mds@.service
> > >      @@ -14,7 +14,8 @@ ExecReload=/bin/kill -HUP $MAINPID
> > >       LockPersonality=true
> > >       MemoryDenyWriteExecute=true
> > >       NoNewPrivileges=true
> > >      -PrivateDevices=yes
> > >      +LimitMEMLOCK=infinity
> > >      +PrivateDevices=no
> > >       ProtectControlGroups=true
> > >       ProtectHome=true
> > >       ProtectKernelModules=true
> > >      diff --git a/systemd/ceph-mgr@.service b/systemd/ceph-mgr@.service
> > >      index c98f637..682c7ec 100644
> > >      --- a/systemd/ceph-mgr@.service
> > >      +++ b/systemd/ceph-mgr@.service
> > >      @@ -18,7 +18,8 @@ LockPersonality=true
> > >       MemoryDenyWriteExecute=false
> > >       
> > >       NoNewPrivileges=true
> > >      -PrivateDevices=yes
> > >      +LimitMEMLOCK=infinity
> > >      +PrivateDevices=no
> > >       ProtectControlGroups=true
> > >       ProtectHome=true
> > >       ProtectKernelModules=true
> > >      diff --git a/systemd/ceph-mon@.service b/systemd/ceph-mon@.service
> > >      index c95fcab..51854fa 100644
> > >      --- a/systemd/ceph-mon@.service
> > >      +++ b/systemd/ceph-mon@.service
> > >      @@ -21,7 +21,8 @@ LockPersonality=true
> > >       MemoryDenyWriteExecute=true
> > >       # Need NewPrivileges via `sudo smartctl`
> > >       NoNewPrivileges=false
> > >      -PrivateDevices=yes
> > >      +LimitMEMLOCK=infinity
> > >      +PrivateDevices=no
> > >       ProtectControlGroups=true
> > >       ProtectHome=true
> > >       ProtectKernelModules=true
> > >      diff --git a/systemd/ceph-osd@.service b/systemd/ceph-osd@.service
> > >      index 1b5c9c8..06c20d7 100644
> > >      --- a/systemd/ceph-osd@.service
> > >      +++ b/systemd/ceph-osd@.service
> > >      @@ -16,6 +16,8 @@ LockPersonality=true
> > >       MemoryDenyWriteExecute=true
> > >       # Need NewPrivileges via `sudo smartctl`
> > >       NoNewPrivileges=false
> > >      +LimitMEMLOCK=infinity
> > >      +PrivateDevices=no
> > >       ProtectControlGroups=true
> > >       ProtectHome=true
> > >       ProtectKernelModules=true
> > >      diff --git a/systemd/ceph-radosgw@.service b/systemd/ceph-radosgw@.service
> > >      index 7e3ddf6..fe1a6b9 100644
> > >      --- a/systemd/ceph-radosgw@.service
> > >      +++ b/systemd/ceph-radosgw@.service
> > >      @@ -13,7 +13,8 @@ ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.%i --setuser ce
> > >       LockPersonality=true
> > >       MemoryDenyWriteExecute=true
> > >       NoNewPrivileges=true
> > >      -PrivateDevices=yes
> > >      +LimitMEMLOCK=infinity
> > >      +PrivateDevices=no
> > >       ProtectControlGroups=true
> > >       ProtectHome=true
> > >       ProtectKernelModules=true
> > >      diff --git a/systemd/ceph-volume@.service b/systemd/ceph-volume@.service
> > >      index c21002c..e2d1f67 100644
> > >      --- a/systemd/ceph-volume@.service
> > >      +++ b/systemd/ceph-volume@.service
> > >      @@ -9,6 +9,7 @@ KillMode=none
> > >       Environment=CEPH_VOLUME_TIMEOUT=10000
> > >       ExecStart=/bin/sh -c 'timeout $CEPH_VOLUME_TIMEOUT /usr/sbin/ceph-volume-systemd %i'
> > >       TimeoutSec=0
> > >      +LimitMEMLOCK=infinity
> > >       
> > >       [Install]
> > >       WantedBy=multi-user.target
> > >      -- 
> > >      1.8.3.1
> > > 
> > > 
> 
> 
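
A note on the patch quoted above: rather than editing the packaged unit
files, the same settings can be applied as a systemd drop-in, which
survives package upgrades. A sketch for the OSD units (the same pattern
works for the mon/mgr/mds/radosgw units; RDMA needs unlimited locked
memory, and PrivateDevices=no keeps /dev/infiniband visible to the daemon):

    # opens an editor on /etc/systemd/system/ceph-osd@.service.d/override.conf
    systemctl edit ceph-osd@.service

    # drop-in contents:
    [Service]
    LimitMEMLOCK=infinity
    PrivateDevices=no

    # apply the change
    systemctl daemon-reload
    systemctl restart ceph-osd.target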


