Re: tuning for backup target cluster

>
> I have certainly seen cases where the OMAPS have not stayed within the
> RocksDB/WAL NVME space and have been going down to disk.

How can I monitor the OMAP size and confirm that it does not spill out of the NVMe?
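So far I have only been looking at the OMAP/META columns and the spillover
health warning, roughly like this (osd.0 is just an example, and the last
command assumes access to that OSD's admin socket):

# ceph osd df                              <- OMAP and META usage per OSD
# ceph health detail | grep -i spillover   <- BLUEFS_SPILLOVER warning, if any
# ceph daemon osd.0 perf dump bluefs       <- db_used_bytes vs slow_used_bytes

Is that the right way to check it, or is there something better?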

> The OP's number suggest IIRC like 120GB-ish for WAL+DB, though depending on
> workload spillover could of course still be a thing.

Correct. But for the production deployment the plan is to use 3.2 TB of NVMe
for 10 HDDs (roughly 320 GB of DB/WAL per OSD). If we hit performance
problems we will move the non-EC pool to SSD (by replacing a few of the HDDs
with SSDs).
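If we do go that way, I assume the move itself would just be a device-class
based CRUSH rule plus a crush_rule change on the pool, something like the
sketch below (the rule name and <non-ec-pool> are placeholders):

# ceph osd crush rule create-replicated replicated-ssd default host ssd
# ceph osd pool set <non-ec-pool> crush_rule replicated-ssd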

Using cephadm, is it possible to carve out part of the NVMe drive for an OSD
and leave the rest for RocksDB/WAL? Right now my deployment is as simple as:

# ceph orch ls osd osd.dashboard-admin-1710711254620 --export
service_type: osd
service_id: dashboard-admin-1710711254620
service_name: osd.dashboard-admin-1710711254620
placement:
  host_pattern: cephbackup-osd3
spec:
  data_devices:
    rotational: true
  db_devices:
    rotational: false
  filter_logic: AND
  objectstore: bluestore
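
If I understand the drive group spec correctly, I could at least cap how much
of the NVMe the DBs consume with block_db_size, something like the sketch
below (the service_id and the 120G value are just examples), but I don't see
how the leftover space could then be used as a separate OSD:

service_type: osd
service_id: hdd-with-nvme-db
placement:
  host_pattern: cephbackup-osd3
spec:
  data_devices:
    rotational: true
  db_devices:
    rotational: false
  block_db_size: 120G
  filter_logic: AND
  objectstore: bluestore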

Thanks

On Mon, 3 Jun 2024 at 17:28, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:

>
> The OP's number suggest IIRC like 120GB-ish for WAL+DB, though depending
> on workload spillover could of course still be a thing.
>
> >
> > I have certainly seen cases where the OMAPS have not stayed within the
> RocksDB/WAL NVME space and have been going down to disk.
> >
> > This was on a large cluster with a lot of objects but the disks that
> where being used for the non-ec pool where seeing a lot more actual disk
> activity than the other disks in the system.
> >
> > Moving the non-ec pool onto NVME helped with a lot of operations that
> needed to be done to cleanup a lot of orphaned objects.
> >
> > Yes this was a large cluster with a lot of ingress data admitedly.
> >
> > Darren Soothill
> >
> > Want a meeting with me: https://calendar.app.google/MUdgrLEa7jSba3du9
> >
> > Looking for help with your Ceph cluster? Contact us at https://croit.io/
> >
> > croit GmbH, Freseniusstr. 31h, 81247 Munich
> > CEO: Martin Verges - VAT-ID: DE310638492
> > Com. register: Amtsgericht Munich HRB 231263
> > Web: https://croit.io/ | YouTube: https://goo.gl/PGE1Bx
> >
> >
> >
> >
> >> On 29 May 2024, at 21:24, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:
> >>
> >>
> >>
> >>> You also have the metadata pools used by RGW that ideally need to be
> on NVME.
> >>
> >> The OP seems to intend shared NVMe for WAL+DB, so that the omaps are on
> NVMe that way.
> >>
> >> _______________________________________________
> >> ceph-users mailing list -- ceph-users@xxxxxxx
> >> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


-- 
Łukasz Borek
lukasz@xxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



