Re: mon db high iops

Is there any suggestion on disk specs? I can't find any documentation
about it for Ceph either!

On Fri, Feb 5, 2021 at 11:37 AM Eugen Block <eblock@xxxxxx> wrote:

> Hi,
>
> > My disk latency is 25 ms because of the large block size that RocksDB
> > is using.
> > Should I provision a higher-performance disk than the one I'm using for
> > my monitor nodes?
>
> what are you currently using on the MON nodes? There are
> recommendations out there [1] to set up MONs with SSDs:
>
> > An SSD or other sufficiently fast storage type is highly recommended
> > for monitors, specifically for the /var/lib/ceph path on each
> > monitor node, as quorum may be unstable with high disk latencies.
> > Two disks in RAID 1 configuration is recommended for redundancy. It
> > is recommended that separate disks or at least separate disk
> > partitions are used for the monitor processes to protect the
> > monitor's available disk space from things like log file creep.
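>
> For example, a quick sanity check of the write latency on the device
> backing the MON path (just a sketch; /dev/sdX is a placeholder for
> whatever device df reports):
>
>      df /var/lib/ceph        # find the backing device
>      iostat -x 2 /dev/sdX    # watch the w_await column (write latency, ms)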
>
> Regards,
> Eugen
>
> [1]
> https://documentation.suse.com/ses/7/single-html/ses-deployment/#sysreq-mon
>
> Quoting Seena Fallah <seenafallah@xxxxxxxxx>:
>
> > This is my osdmap commit diff:
> > report 4231583130
> >     "osdmap_first_committed": 300814,
> >     "osdmap_last_committed": 304062,
> >
> > My disk latency is 25 ms because of the large block size that RocksDB
> > is using.
> > Should I provision a higher-performance disk than the one I'm using for
> > my monitor nodes?
> >
> > On Thu, Feb 4, 2021 at 3:09 AM Seena Fallah <seenafallah@xxxxxxxxx>
> > wrote:
> >
> >> Hi all,
> >>
> >> My monitor nodes keep going down and coming back up because of Paxos
> >> lease timeouts, and there is high I/O (2k IOPS and 500 MB/s of
> >> throughput) on /var/lib/ceph/mon/ceph.../store.db/.
> >> My cluster is in a recovery state and there is a bunch of degraded PGs.
> >>
> >> It seems RocksDB is doing I/O with a ~200 KB block size. Is that okay?!
> >> Also, is there any solution to fix these monitor downtimes?
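> >>
> >> (Regarding the ~200 KB request size: that matches the numbers above,
> >> since 500 MB/s / 2000 IOPS = 250 KB per request on average. One way to
> >> confirm, assuming store.db sits on a hypothetical /dev/sdX:
> >>
> >>     iostat -x 2 /dev/sdX
> >>
> >> and check the average request size column, wareq-sz on recent sysstat
> >> versions or avgrq-sz on older ones.)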
> >>
> >> Thanks for your help!
> >>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



