Re: OSDs get full with bluestore logs

Den fre 28 aug. 2020 kl 11:47 skrev Khodayar Doustar <khodayard@xxxxxxxxx>:

> I've actually destroyed the cluster and installed a new one.
> I've just changed the installation method and version. I used
> ceph-ansible this time and installed Nautilus.
> The cluster worked fine with the same hardware.
> Yes Janne, you are right that it had very small disks (9X20GB disks, 3 for
> each node) but there was no problem with Nautilus.
>
>
I just don't think anyone finds it useful to spend time figuring out the
lowest possible limits for each release, for each type of OSD storage.

So 10 GB worked at one point and 20 GB at another, but if you are serious
about Ceph, make the disks thin provisioned and larger by a huge margin
(say 100 GB) and let them grow to whatever size your tests need. Whether
the real limit is 22 or 23.7 GB won't matter then, and your test will run
through.
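As a minimal sketch of the thin-provisioning approach (assuming the test
OSDs are backed by disk image files on a host filesystem that supports
sparse files, e.g. for a VM-based test cluster; the file name is just an
example):

```shell
# Create a sparse (thin-provisioned) 100 GB disk image for a test OSD.
# The file has an apparent size of 100G, but only blocks that are
# actually written consume space on the host filesystem.
truncate -s 100G osd-disk-0.img

# Verify: apparent size vs. actual allocation.
ls -lh osd-disk-0.img   # apparent size: 100G
du -h osd-disk-0.img    # actual usage: ~0 while empty
```

The same idea works with thin LVM volumes or qcow2 images; the point is
that the OSD sees a disk comfortably above any internal minimum, while
the host only pays for the space the test actually writes.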

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
