Re: WAL/DB size

On Tue, Aug 13, 2019 at 10:04 PM Wido den Hollander <wido@xxxxxxxx> wrote:
> I just checked an RGW-only setup. 6TB drive, 58% full, 11.2GB of DB in
> use. No slow db in use.

A random RGW-only setup here: 12TB drive, 77% full, 48GB of metadata and
10GB of omap for the index and whatever else.

That's 0.5% + 0.1%, and that's on a setup using mostly erasure coding
and small-ish objects.
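
Back-of-the-envelope, in case anyone wants to check the math (a rough
Python sketch assuming decimal TB and the rounded numbers above; the
actual ratio will shift with object size and EC profile):

    # metadata/omap as a fraction of the data actually stored on the OSD
    drive_tb = 12.0
    fill = 0.77
    stored_gb = drive_tb * 1000 * fill        # ~9240 GB of object data

    metadata_gb = 48.0
    omap_gb = 10.0

    print(f"metadata: {metadata_gb / stored_gb:.1%}")   # ~0.5%
    print(f"omap:     {omap_gb / stored_gb:.1%}")       # ~0.1%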


> I've talked with many people from the community and I don't see an
> agreement for the 4% rule.

Agreed, 4% isn't a reasonable default.
I've seen setups with even 10% metadata usage, but these are weird
edge cases with very small objects on NVMe-only setups (obviously
without a separate DB device).
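
To put rough numbers next to the 4% rule (just a sketch, not a
recommendation; the right reservation still depends heavily on object
count and workload), compare what different rules reserve against what
we actually see on RGW-only OSDs and against the ~60GB per 6TB drive
mentioned further down the thread:

    # what different sizing rules reserve for the DB partition
    for drive_tb in (6, 12):
        drive_gb = drive_tb * 1000
        print(f"{drive_tb}TB OSD:")
        print(f"  4% rule:          {drive_gb * 0.04:.0f} GB")   # 240 / 480 GB
        print(f"  ~1% (60GB / 6TB): {drive_gb * 0.01:.0f} GB")   #  60 / 120 GB
        print(f"  ~0.6% observed:   {drive_gb * 0.006:.0f} GB")  #  36 /  72 GB
    # the "observed" line is the metadata+omap fraction from the RGW
    # examples above, scaled to a full drive for comparison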

Paul

>
> Wido
>
> >
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director – Information Technology
> > Perform Air International Inc.
> > DHilsbos@xxxxxxxxxxxxxx
> > www.PerformAir.com
> >
> >
> >
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Wido den Hollander
> > Sent: Tuesday, August 13, 2019 12:51 PM
> > To: ceph-users@xxxxxxxxxxxxxx
> > Subject: Re:  WAL/DB size
> >
> >
> >
> > On 8/13/19 5:54 PM, Hemant Sonawane wrote:
> >> Hi All,
> >> I have 4 x 6TB HDDs and 2 x 450GB SSDs, and I am going to partition a
> >> 220GB slice for RocksDB for each disk. So my question is: does it make
> >> sense to use a WAL for my configuration? If yes, what size should it be?
> >> Help will be really appreciated.
> >
> > Yes, the WAL needs to be about 1GB in size. That should work in almost
> > all configurations.
> >
> > 220GB is more than you need for the DB as well. It doesn't hurt, but
> > it's not needed. For each 6TB drive you need about ~60GB of space for
> > the DB.
> >
> > Wido
> >
> >> --
> >> Thanks and Regards,
> >>
> >> Hemant Sonawane
> >>
> >>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



