Re: Question about delayed write IOs, octopus, mixed storage


 



Here is a partial daemon perf dump from one of the OSDs.
Please let me know what else would be useful to look at.



    "bluefs": {
        "gift_bytes": 0,
        "reclaim_bytes": 0,
        "db_total_bytes": 30006042624,
        "db_used_bytes": 805298176,
        "wal_total_bytes": 0,
        "wal_used_bytes": 0,
        "slow_total_bytes": 80015917056,
        "slow_used_bytes": 0,
        "num_files": 17,
        "log_bytes": 12173312,
        "log_compactions": 1,
        "logged_bytes": 27766784,
        "files_written_wal": 2,
        "files_written_sst": 146,
        "bytes_written_wal": 33731940352,
        "bytes_written_sst": 2067345408,
        "bytes_written_slow": 0,
        "max_bytes_wal": 0,
        "max_bytes_db": 805298176,
        "max_bytes_slow": 0,
        "read_random_count": 78263,
        "read_random_bytes": 2065957797,
        "read_random_disk_count": 22530,
        "read_random_disk_bytes": 1838126860,
        "read_random_buffer_count": 55997,
        "read_random_buffer_bytes": 227830937,
        "read_count": 26224,
        "read_bytes": 1199878580,
        "read_prefetch_count": 26023,
        "read_prefetch_bytes": 1194535977
    },
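(For reference, the numbers above came from the OSD's admin socket; the osd id below is just an example, and jq is only used to trim the output down to the bluefs section:

    ceph daemon osd.12 perf dump | jq .bluefs

As far as I can tell, "slow_used_bytes": 0 above means nothing has spilled over onto the HDDs so far.)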




----- Original Message -----
From: "Maged Mokhtar" <mmokhtar@xxxxxxxxxxx>
To: "Philip Brown" <pbrown@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxx>
Sent: Friday, March 12, 2021 8:57:30 AM
Subject: Re:  Question about delayed write IOs, octopus, mixed storage

On 12/03/2021 18:25, Philip Brown wrote:
> Well, that is a very interesting statistic.
> Where did you come up with the 30GB partition size limit number?
>
> I believe it is using 28GB of SSD per HDD :-/
> So you are implying that if I "throw away" 1/8 of my HDDs, so that I can get that magic 30GB+ per HDD, things will magically improve?
> Before I do that kind of rework, I would like to better understand the theory behind it, please :)
>
> I vaguely recall reading something about WAL, SSD, and "db mostly".
> I believe there is some way to check the status of that, but Google search is being difficult without a more specific search term.
>
>
> ----- Original Message -----
> From: "Maged Mokhtar" <mmokhtar@xxxxxxxxxxx>
> To: "Philip Brown" <pbrown@xxxxxxxxxx>
> Cc: "ceph-users" <ceph-users@xxxxxxx>
> Sent: Friday, March 12, 2021 8:04:06 AM
> Subject: Re:  Question about delayed write IOs, octopus, mixed storage
>
>
>
> As a side issue, I do not know how cephadm would configure the 2 x 100
> GB SSDs for wal/db serving the 8 HDDs. You need over a 30 GB partition
> size, else the db will end up mostly on the slow HDDs.
>
> /maged

I do not believe this is slowing you down yet, else your cluster would
show a "BlueFS spillover detected" health warning, but it will eventually
happen as you write more and your db expands.

You can read more in this post by Nick Fisk
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030913.html
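Roughly where the 30 GB figure comes from, as I understand it, assuming the RocksDB defaults Ceph ships (max_bytes_for_level_base = 256 MB, level multiplier = 10):

    L1            ~  0.25 GB
    L2 (x10)      ~  2.5  GB
    L3 (x10)      ~ 25    GB
    -------------------------
    L1+L2+L3      ~ 28    GB

BlueFS only keeps a RocksDB level on the fast device if the whole level fits, so with a ~28 GB partition L3 spills onto the HDDs; a bit over 30 GB leaves room for L1-L3 plus WAL and compaction overhead.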

/maged
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



