Re: [GSoC] Queries regarding the Project

I understand that we can't write to objects that belong to a
particular PG (one with at least one full OSD). But a storage pool can
have multiple PGs, and some of them must map only to non-full OSDs.
Through those PGs, we could still write to the OSDs that are not full.

Did I understand it correctly?
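To make the question concrete, here is a minimal sketch (hypothetical names and a simplified hash, not Ceph's actual code; real Ceph uses rjenkins hashing plus CRUSH) of why a write cannot be rerouted through a different PG: the object-to-PG mapping is deterministic, so an object whose PG contains a full OSD cannot simply be placed in another PG.

```python
# Hypothetical sketch of deterministic object -> PG placement.
# Names (object_to_pg, can_write, PG_NUM, full_pgs) are illustrative,
# not part of Ceph's API.
import hashlib

PG_NUM = 8  # illustrative pg_num for a pool

def object_to_pg(object_name: str, pg_num: int = PG_NUM) -> int:
    """Deterministically map an object name to a placement group.

    Real Ceph uses a stable rjenkins hash masked to pg_num; a modulo
    over a cryptographic hash is enough to show the key property:
    the same name always yields the same PG.
    """
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % pg_num

# Pretend this PG is served by an OSD set that includes a full OSD.
full_pgs = {object_to_pg("myobject")}

def can_write(object_name: str) -> bool:
    """A write must block if the object's (fixed) PG has a full OSD."""
    return object_to_pg(object_name) not in full_pgs

# The same object always lands in the same PG, so the client cannot
# "choose" a different, non-full PG for it:
print(can_write("myobject"))  # -> False
```

The point of the sketch is that placement is a pure function of the object name (and pool parameters), so "writing through the other PGs" only works for objects that already hash into those PGs; objects that map to the affected PG stay blocked.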


On Fri, Mar 24, 2017 at 1:01 PM, kefu chai <tchaikov@xxxxxxxxx> wrote:
> Hi Spandan,
>
> Please do not email me privately; use the public mailing list
> instead, which allows other developers to help you if I am unable to
> do so. It also means you can start interacting with the rest of the
> community instead of only me (which is barely useful).
>
> On Fri, Mar 24, 2017 at 2:38 PM, Spandan Kumar Sahu
> <spandankumarsahu@xxxxxxxxx> wrote:
>> Hi
>>
>> I couldn't figure out why this is happening:
>>
>> "...Because once any of the storage device assigned to a storage pool is
>> full, the whole pool is not writeable anymore, even there is abundant space
>> in other devices."
>> -- Ceph GSoC Project Ideas (Smarter reweight-by-utilisation)
>>
>> I went through this[1] paper on CRUSH, and according to what I understand,
>> CRUSH pseudo-randomly chooses a device based on weights which can reflect
>> various parameters like the amount of space available.
>
> CRUSH is a variant of consistent hashing. Ceph cannot automatically
> choose *another* OSD that CRUSH did not select, even if that OSD is
> not full and has abundant space.
>
>>
>> What I don't understand is how one full device would stop a pool with
>> abundant space on other devices from being selected. Sure, the chances
>> of being selected might decrease if one device is full, but how does
>> that completely prevent writing to the pool?
>
> If a PG is served by three OSDs and any of them is full, how can we
> continue creating or writing to objects that belong to that PG?
>
>
> --
> Regards
> Kefu Chai



-- 
Spandan Kumar Sahu
IIT Kharagpur


