Re: How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects degraded (50.000%)

2017-01-16 12:24 GMT+01:00 Loris Cuoghi <lc@xxxxxxxxxxxxxxxxx>:
Hello,

On 16/01/2017 at 11:50, Stéphane Klein wrote:
Hi,

I have two nodes, each running an OSD and a MON.

I'm going to add a third OSD and MON to this cluster, but first I want to
fix this error:
> [SNIP SNAP]

You've just created your cluster.

With the default CRUSH rules (failure domain = host) and the default pool size of 3, you need OSDs on three different hosts before the cluster can reach active+clean.
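
For example, the pool replica counts and the CRUSH failure domain can be inspected with commands along these lines (a quick sketch, not tied to this specific cluster):

```
# Show each pool's replica count (size / min_size):
ceph osd pool ls detail

# Show the CRUSH rules; the default replicated rule contains
# "step chooseleaf firstn 0 type host", i.e. one replica per host:
ceph osd crush rule dump

# List the PGs that are currently stuck inactive:
ceph pg dump_stuck inactive
```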


With these parameters:

```
# cat /etc/ceph/ceph.conf
[global]
mon initial members = ceph-rbx-1,ceph-rbx-2
cluster network = 172.29.20.0/24
mon host = 172.29.20.10,172.29.20.11
osd_pool_default_size = 2
osd_pool_default_min_size = 1
public network = 172.29.20.0/24
max open files = 131072
fsid = ....

[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor

[osd]
osd mkfs options xfs = -f -i size=2048
osd mkfs type = xfs
osd journal size = 5120
osd mount options xfs = noatime,largeio,inode64,swalloc 
```
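
Note that osd_pool_default_size and osd_pool_default_min_size only apply to pools created after the options are set; pools that already exist keep their own values. A minimal sketch for checking and, if needed, lowering an existing pool to fit a two-host cluster (the pool name "rbd" is just the usual default here, substitute your own pools):

```
# Check the current replica count of every pool:
ceph osd dump | grep 'replicated size'

# Reduce an existing pool to two replicas, with one replica
# sufficient to keep serving I/O:
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1
```

With only two hosts and the default host failure domain, the alternatives are adding the planned third node or reducing the existing pools to size 2 as above.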
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
