Re: active+undersized+degraded due to OSD size differences?

Your first assumption was correct. You can set the 'size' parameter of the
pool to 2 (ceph osd pool set <name> size 2), but you'll also want to either
drop min_size to 1 or accept that you can never have either metadata OSD go
down. That's fine for a toy cluster, but for any production use case you'll
really want *at least* 3 hosts/drives here... and even that is a bare
minimum, which is why the default is size=3.
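
For example, assuming the metadata pool is the metadata0 from your mail
(adjust the name otherwise), roughly:

  ceph osd pool set metadata0 size 2
  ceph osd pool set metadata0 min_size 1   # only acceptable for a toy/test setup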

Tyler

On Sun, Jun 19, 2022, 12:51 PM Thomas Roth <t.roth@xxxxxx> wrote:

> Hi all,
>
> I have set up a cluster for use with cephfs. Trying to follow the
> recommendations for the MDS service, I picked two machines which provide
> SSD-based disk space, 2 TB each, to host the cephfs metadata pool.
> My ~20 HDD-based OSDs in the cluster have 43 TB each.
>
> I created a crush rule tied to this MDS hardware and then created the
> metadata pool by specifying the rule name, mds-ssd:
> > ceph osd pool create metadata0 128 128 replicated mds-ssd
> whereas the data pool was just created as a standard replicated pool.
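> (For context, a rule of this kind is typically created along the lines of
> the following; the root "default", failure domain "host" and device class
> "ssd" shown here are illustrative, not necessarily the exact parameters I
> used:)
> > ceph osd crush rule create-replicated mds-ssd default host ssd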
>
> CephFS creation seemed to work with these, but now the system is stuck with:
>
>  >    pgs:     22/72 objects degraded (30.556%)
>  >             513 active+clean
>  >             110 active+undersized
>  >             18  active+undersized+degraded
>
>
> What is the main reason here? I can think of these:
> 1. There are just two OSDs for the metadata pool - a replicated pool
> without further tweaks would need three OSDs/hosts?
> 2. Ceph might have placed the metadata pool onto those OSDs but still
> considers them valid targets for other pools, hence tries to reconcile
> OSDs of 2 TB and 43 TB and fails?
>
>
> Btw, how can I change the default failure domain? osd, host, whatever?
> This is all Quincy, cephadm, so there is no ceph.conf anymore, and I did
> not find the command to inject my failure domain into the config database...
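> (To illustrate what I mean: per pool I know I can create a rule with a
> given failure domain and point the pool at it, roughly like
> > ceph osd crush rule create-replicated rep-osd default osd   # "rep-osd" is just an example name
> > ceph osd pool set <pool> crush_rule rep-osd
> but what I am after is the cluster-wide default for new pools.)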
>
>
> Regards
> Thomas
> --
> --------------------------------------------------------------------
> Thomas Roth           IT-HPC-Linux
> Location: SB3 2.291   Phone: 1453
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


