Re: [Urgent] Ceph system Down, Ceph FS volume in recovering

There ya go.

You have 4 hosts, one of which appears to be down and to have a single OSD that is too small to be useful.  Whatever cephgw03 is, it looks like a mistake.  OSDs much smaller than, say, 1TB often aren’t very useful.

Your pools appear to be replicated, size=3.

So each of your cephosd* hosts stores one replica of each RADOS object.

You added the 10TB spinners to only two of your hosts.  With size=3 and each host holding a full replica, usable capacity is pinned to the smallest host, so those drives are only being used as though they were 4TB OSDs.  That’s part of what’s going on.
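As a rough sanity check (the per-host capacities below are hypothetical placeholders, not taken from your pasted output), a few lines of Python show why the smallest host sets the ceiling when the pool size equals the number of OSD hosts:

# Toy model: replicated pool, size == number of OSD hosts, failure domain = host.
# Every host must hold one full replica, so usable capacity = smallest host.
# Capacities below are assumptions for illustration only.
host_capacity_tb = {
    "cephosd01": 10.0,  # assumed
    "cephosd02": 4.0,   # assumed: the host still waiting on a 10TB spinner
    "cephosd03": 10.0,  # assumed
}
usable_tb = min(host_capacity_tb.values())
raw_tb = sum(host_capacity_tb.values())
print(f"raw {raw_tb} TB, usable with size=3 roughly {usable_tb} TB")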

You want to add a 10TB spinner to cephosd02.  That will help your situation significantly.

After that, consider adding a cephosd04 host.  Having at least one more failure domain than you have replicas lets you make better use of uneven host capacities.
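A simplified model of the same math shows the jump you get from a fourth failure domain (it ignores CRUSH weight rounding, the balancer, and nearfull/full headroom, and the capacities are again assumptions): usable data U is roughly the largest value satisfying size * U <= sum(min(C_i, U)) over the hosts, because each host can hold at most one replica of any object and at most its raw capacity.

# Rough usable-capacity estimate for a replicated pool, failure domain = host.
# U TB of data is placeable (fractionally) when size * U <= sum(min(C_i, U)),
# since each host holds at most one replica per object and at most C_i total.
def usable_tb(host_caps_tb, size, step=0.01):
    u = 0.0
    while size * (u + step) <= sum(min(c, u + step) for c in host_caps_tb):
        u += step
    return round(u, 2)

print(usable_tb([10.0, 4.0, 10.0], size=3))        # ~4.0: capped by the small host
print(usable_tb([10.0, 4.0, 10.0, 10.0], size=3))  # ~11.33: fourth host unlocks the rest

In practice the balancer and full ratios will land you somewhat below these numbers, but the relative difference is the point.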




> On Feb 24, 2024, at 10:06 AM, nguyenvandiep@xxxxxxxxxxxxxx wrote:
> 
> Hi Mr Anthony,
> 
> pls check the output 
> 
> https://anotepad.com/notes/s7nykdmc
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



