Re: Cinder + CEPH Storage Full Scenario

Hi,

When an OSD gets full, all write operations to the entire cluster are disabled.

As a result, creating a single RBD image becomes impossible, and all VMs that need to write to one of their Ceph-backed RBDs will suffer the same pain.

Usually, this ends up as a bad story for the VMs.

The best practice is to monitor the disk space usage of the OSDs, and as a matter of fact RHCS 1.3 includes a ceph osd df command to do this. You can also use the output of the ceph report command to grab the appropriate information and compute it yourself, or rely on external SNMP monitoring tools to track the usage of the individual OSD disk drives.
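As a rough sketch of what such monitoring boils down to, here is a small hypothetical helper that classifies OSD utilization against the cluster's fill thresholds. The ratios mirror Ceph's long-standing defaults (mon_osd_nearfull_ratio = 0.85, mon_osd_full_ratio = 0.95); the sample OSD numbers below are invented, not real ceph osd df output:

```python
# Thresholds matching Ceph's historical defaults; real clusters may
# override mon_osd_nearfull_ratio / mon_osd_full_ratio.
NEARFULL_RATIO = 0.85
FULL_RATIO = 0.95

def classify_osd(kb_total, kb_used):
    """Return 'full', 'nearfull', or 'ok' for a single OSD's usage."""
    utilization = kb_used / kb_total
    if utilization >= FULL_RATIO:
        return "full"       # cluster blocks writes once any OSD hits this
    if utilization >= NEARFULL_RATIO:
        return "nearfull"   # cluster warns; time to add capacity or rebalance
    return "ok"

# Invented sample data standing in for per-OSD stats from "ceph osd df":
sample_osds = {
    "osd.0": (1_000_000, 500_000),  # 50% used
    "osd.1": (1_000_000, 880_000),  # 88% used -> nearfull warning
    "osd.2": (1_000_000, 960_000),  # 96% used -> full, writes stall
}

for name, (total, used) in sample_osds.items():
    print(name, classify_osd(total, used))
```

The key point the sketch illustrates: a single OSD crossing the full ratio is enough to stall writes cluster-wide, so alerting should trigger on the worst OSD, not the cluster average.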

Have a great day.
JC

> On Oct 19, 2015, at 02:32, Bharath Krishna <BKrishna@xxxxxxxxxxxxxxx> wrote:
> 
> I mean cluster OSDs are physically full.
> 
> I understand it's not a pretty way to operate CEPH, allowing it to become
> full, but I just wanted to know the boundary condition if it does become full.
> 
> Will the Cinder create-volume operation create a new volume at all, or is an
> error thrown at the Cinder API level itself stating that no space is available?
> 
> When IO stalls, will I be able to read data from the CEPH cluster, i.e. can
> I still read data from existing volumes created on the CEPH cluster?
> 
> Thanks for the quick reply.
> 
> Regards
> M Bharath Krishna
> 
> On 10/19/15, 2:51 PM, "Jan Schermer" <jan@xxxxxxxxxxx> wrote:
> 
>> Do you mean when the CEPH cluster (OSDs) is physically full or when the
>> quota is reached?
>> 
>> If CEPH becomes full it just stalls all IO (maybe just write IO, but
>> effectively the same thing) - not pretty, and you must never ever let it
>> become full.
>> 
>> Jan
>> 
>> 
>>> On 19 Oct 2015, at 11:15, Bharath Krishna <BKrishna@xxxxxxxxxxxxxxx>
>>> wrote:
>>> 
>>> Hi
>>> 
>>> What happens when the CEPH backend storage cluster used by the Cinder
>>> service is FULL?
>>> 
>>> What would be the outcome of a new cinder create-volume request?
>>> 
>>> Will the volume be created even though no space is available for use, or
>>> will an error be thrown from the Cinder API stating no space is available
>>> for the new volume?
>>> 
>>> I could not try this in my environment by filling up the cluster.
>>> 
>>> Please reply if you have ever tried and tested this.
>>> 
>>> Thank you.
>>> 
>>> Regards,
>>> M Bharath Krishna
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> 
> 

