Re: How to set size for CephFs


 



This is just a playground, right?

One approach would be to separate the small OSDs from the larger ones by giving them their own device class, e.g. "small", and leaving "hdd" for the larger drives. Then you can create a small pool for cephfs_metadata (which usually doesn't require much space) with a CRUSH rule that points to the "small" devices. For the cephfs_data pool you need more space, so you use a CRUSH rule that points to the "hdd" devices. This way you can decide for each pool (e.g. rbd) which OSDs it uses.
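For illustration, a minimal sketch of that approach (the OSD IDs, the class name "small", the rule names and the "host" failure domain below are just examples and have to match your actual cluster layout):

  # give the two small OSDs their own device class
  ceph osd crush rm-device-class osd.0 osd.1
  ceph osd crush set-device-class small osd.0 osd.1

  # one replicated rule per device class
  ceph osd crush rule create-replicated small_rule default host small
  ceph osd crush rule create-replicated hdd_rule default host hdd

  # point each pool at the matching rule
  ceph osd pool set cephfs_metadata crush_rule small_rule
  ceph osd pool set cephfs_data crush_rule hdd_rule

On a small test cluster with all OSDs on one host you would use "osd" instead of "host" as the failure domain.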


Quoting Alokkumar Mahajan <alokkumar.mahajan@xxxxxxxxx>:

Hello Eugen,
Below is the output:

ceph osd df:-

[image: image.png]

ceph osd tree:-

[image: image.png]

On Thu, 28 Nov 2019 at 14:54, Eugen Block <eblock@xxxxxx> wrote:

Hi,

can you share the output of `ceph osd df` and `ceph osd tree`?
The smallest of your OSDs will be the bottleneck. Since Ceph tries to
distribute the data evenly across all OSDs, you won't be able to fully
use the large OSDs, at least not without adjusting your setup.
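As a rough back-of-the-envelope illustration of why the reported pool size can look so small (assuming a replicated size of 3, which is only a guess for this cluster):

  raw capacity:            2 x 50 GB + 2 x 300 GB = 700 GB
  usable with 3 replicas:  ~700 GB / 3            = ~233 GB
  MAX AVAIL per pool:      derived from the OSD that is projected to
                           fill up first, so with unevenly sized OSDs
                           it can end up far below the ~233 GB figure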

Regards,
Eugen


Quoting Alokkumar Mahajan <alokkumar.mahajan@xxxxxxxxx>:

> Thanks Wido.
> @ But normally CephFS will be able to use all the space inside your
> Ceph cluster.
> So, you are saying that even if I see the size of the CephFS pools as
> 55 GB, they can still use the whole 600 GB (or whatever disk space is
> available) in the cluster?
>
> This is what I have with pg_num = 150 (for data) and 32 (for metadata)
> in my cluster.
>
> Pool          Type       Size          Usage
> cephfs_data   data       55.5366 GiB   4%
> cephfs_meta   metadata   55.7469 GiB
>
> Thanks
>
>
> On Thu, 28 Nov 2019 at 13:49, Wido den Hollander <wido@xxxxxxxx> wrote:
>
>>
>>
>> On 11/28/19 6:41 AM, Alokkumar Mahajan wrote:
>> > Hello,
>> > I am new to Ceph and currently I am working on setting up a CephFS
>> > and RBD environment. I have successfully set up a Ceph cluster with
>> > 4 OSDs (2 OSDs with size 50 GB and 2 OSDs with size 300 GB).
>> >
>> > But while setting up CephFS, the size which I see allocated for the
>> > CephFS data and metadata pools is 55 GB, but I want to have 300 GB
>> > assigned to CephFS.
>> >
>> > I tried using the "target_size_bytes" flag while creating the pool,
>> > but it is not working (it says invalid command). Same result when I
>> > use target_size_bytes with `ceph osd pool set` after creating the pool.
>> >
>> > I am not sure if I am doing something silly here.
>> >
>> > Can someone please guide me on this?
>> >
>>
>>
>> You can set quotas on CephFS or on the RADOS pool for the CephFS 'data'
>> (haven't tried the latter myself, though).
>>
>> But normally CephFS will be able to use all the space inside your Ceph
>> cluster.
>>
>> It's not that you can easily just allocate X GB/TB to CephFS.
>>
>> Wido
>>
>> > Thanks in adv.!
>> >
>>
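For completeness, a minimal sketch of the two quota options mentioned above (the pool/path names and the 300 GiB figure are just examples); note that target_size_bytes is only a hint for the PG autoscaler (and only exists on more recent releases), it does not reserve or cap space:

  # cap the RADOS data pool
  ceph osd pool set-quota cephfs_data max_bytes 322122547200   # 300 GiB

  # or cap a directory on a mounted CephFS client
  setfattr -n ceph.quota.max_bytes -v 322122547200 /mnt/cephfs/somedir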





_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


