Re: Add OSD with primary on HDD, WAL and DB on SSD

> -----Original Message-----
> From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
> Sent: Monday, August 24, 2020 7:30 PM
> To: Tony Liu <tonyliu0592@xxxxxxxxxxx>
> Subject: Re:  Re: Add OSD with primary on HDD, WAL and DB on
> SSD
> 
> Why such small HDDs? They're kinda not worth the drive bays and power.
> Instead of taking on the complexity of putting WAL+DB on a shared SSD,
> might you have been able to just buy SSDs and not split? YMMV.

2TB is for testing, it will bump up to 10TB for production.

> The limit is a function of the way the DB levels work; it's not
> intentional.
> 
> The WAL by default takes a fixed size, something like 512 MB.
> 
> 64 GB is a reasonable size; it accommodates the WAL and leaves space
> for DB compaction without overflowing.
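
If I'm reading that right, the 3GB/30GB/300GB steps mentioned further
down in the thread fall out of the RocksDB level targets: a level only
stays on the fast device if the whole level fits. Here is a rough
back-of-the-envelope sketch in Python; the 256 MB base size and the 10x
multiplier are just the commonly cited defaults, so treat the numbers
as approximate rather than what my cluster actually does:

    # Rough sketch only: assumes max_bytes_for_level_base = 256 MB and a
    # 10x level multiplier (commonly cited defaults), plus the pre-fix
    # behaviour where a level spills to the slow device unless the whole
    # level fits on the DB device.
    BASE = 256 * 1024**2      # assumed L1 target size, in bytes
    MULTIPLIER = 10           # assumed growth factor per level

    def usable_db_bytes(device_bytes, max_levels=6):
        used, level_size = 0, BASE
        for _ in range(max_levels):
            if used + level_size > device_bytes:
                break         # level would not fit entirely -> it stays on the HDD
            used += level_size
            level_size *= MULTIPLIER
        return used

    for gib in (20, 64, 100, 400):
        used = usable_db_bytes(gib * 1024**3) / 1024**3
        print(f"{gib:>3} GiB DB device -> ~{used:.1f} GiB actually used")

That would explain why a 20GB device only uses about 3GB, and why 64GB
behaves like a ~30GB DB plus headroom for the WAL and compaction.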

For each 10TB HDD, what's the recommended size of the shared DB/WAL
device? The docs recommend 1% - 4% of the data device, meaning
100GB - 400GB for each 10TB HDD. But given the actual WAL and DB data
sizes, I am not sure that 100GB - 400GB would be used efficiently.
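
Just to make my question concrete, the arithmetic I'm working from is
nothing more than the 1% - 4% guideline applied to a 10TB drive,
compared against the level steps sketched above (again only a rough
estimate, not a measurement from my cluster):

    # Hypothetical sizing check for one 10 TB HDD using the 1%-4% guideline.
    hdd_gb = 10 * 1000                        # 10 TB data device, in GB
    low, high = 0.01 * hdd_gb, 0.04 * hdd_gb
    print(f"1%-4% of 10 TB = {low:.0f} GB - {high:.0f} GB of DB/WAL per OSD")
    # The whole 100-400 GB range sits between the ~30 GB and ~300 GB steps,
    # so before an improvement like the one referenced below, it looks like
    # most of that space would sit idle unless the device reaches ~300 GB.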

> With this commit the situation should be improved, though you don’t
> mention what release you’re running
> 
> https://github.com/ceph/ceph/pull/29687

I am using ceph version 15.2.4 octopus (stable).

Thanks!
Tony

> >>>  I don't need to create a
> >>> WAL device, just the primary on HDD and the DB on SSD, and the WAL
> >>> will use the DB device because it's faster. Is that correct?
> >>
> >> Yes.
> >>
> >>
> >> But be aware that the DB sizes are limited to 3GB, 30GB and 300GB.
> >> Anything less than those sizes will have a lot of unutilised space,
> >> e.g. a 20GB device will only utilise 3GB.
> >
> > I have 1 480GB SSD and 7 2TB HDDs. 7 LVs are created on the SSD,
> > each about 64GB, one for each of the 7 OSDs.
> >
> > Since each LV is shared by DB and WAL, the DB will take 30GB and the
> > WAL will take the remaining 34GB. Is that correct?
> >
> > Is that DB and WAL sizing good for a 2TB HDD (for both the block
> > store and object store cases)?
> >
> > Could you share a bit more about the intention behind such limits?
> >
> >
> > Thanks!
> > Tony
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



