Re: 3 node CEPH PVE hyper-converged cluster: serious fragmentation and performance loss in a matter of days.

> 
> The VMs don't do many writes, and I migrated the main testing VMs to the
> 2TB pool, which in turn fragments faster.
> 
> 
> I did a lot of tests and recreated the pools and OSDs in many ways, but
> within a matter of days every OSD gets severely fragmented and loses up
> to 80% of its write performance (tested with many fio tests, rados
> benches, osd benches and RBD benches).

Where are the rados bench results from before and after the problem?
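Something like this, run right after rebuilding the OSDs and again once the
fragmentation score has climbed, would make the comparison concrete (the pool
name is just an example, substitute your own):

rados bench -p rbd 60 write --no-cleanup
rados bench -p rbd 60 seq
rados bench -p rbd 60 rand
rados -p rbd cleanup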

> 
> If I delete the OSDs from a node and let it resync from the other two
> nodes, it is fine for a few days (0.1 - 0.2 bluestore fragmentation), but
> then it is soon back in a 0.8+ state.

What makes you think that this is the cause of your problem?
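If the OSDs really got slower, you should also see it in the per-OSD
commit/apply latencies over time, which says more than the allocator score
alone:

ceph osd perf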


> CEPH bluestore fragmentation:
> 
> osd.3  "fragmentation_rating": 0.090421032864104897
> osd.10 "fragmentation_rating": 0.093359029842755931
> osd.7  "fragmentation_rating": 0.083908842581664561
> osd.8  "fragmentation_rating": 0.067356428512611116
> 
> after 5 days:
> 
> osd.3  "fragmentation_rating": 0.2567613553223777
> osd.10 "fragmentation_rating": 0.25025098722978778
> osd.7  "fragmentation_rating": 0.77481281469969676
> osd.8  "fragmentation_rating": 0.82260745733487917
> 
> after a few weeks:
> 
> 0.882571391878622
> 0.891192311159292
> 

I see the same after a year or so, and my performance seems unchanged (running Ceph 14):

[@~]# ceph daemon osd.39 bluestore allocator score block
{
    "fragmentation_rating": 0.88113008621473943
}
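If you want to see what that score is based on, you can also dump the
allocator's free extents (this works on my Nautilus OSDs; the output can be
very large):

ceph daemon osd.39 bluestore allocator dump block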


> 
> osd.0: {
>     "bytes_written": 1073741824,
>     "blocksize": 4194304,
>     "elapsed_sec": 0.41652934400000002,
>     "bytes_per_sec": 2577829964.3638072,
>     "iops": 614.60255726905041
> }
> 
> After some time, IOPS in osd bench drop to as low as 108 at about 455 MB/s.



To me this looks like normal sequential write performance for an SSD.

My SSD:
[@c01 ~]# ceph tell osd.39 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.4248152520000001,
    "bytes_per_sec": 442813869.26875037,
    "iops": 105.57505351752052
}
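Keep in mind the default osd bench writes 1 GiB in 4 MiB blocks, which is
about the easiest workload an allocator can get. To see whether fragmentation
hurts small writes, you can pass the sizes explicitly, e.g. 8 MiB in 4 KiB
blocks (the OSD caps the total for small block sizes via
osd_bench_small_size_max_iops, so keep it modest):

ceph tell osd.39 bench 8388608 4096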

> 
> I noticed posts on the internet asking how to prevent or fix fragmentation,
> but they got no replies, and the Red Hat Ceph documentation just says to
> contact Red Hat for assistance with fragmentation.
> 
> Does anyone know what causes the fragmentation and how to solve it without
> deleting
> 

I am curious what makes you think this is related to the 'fragmentation_rating'.
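A repeatable client-side test against the same image, run when the score is
low and again when it is high, would show whether the VMs actually slow down
as the rating climbs. For example with fio's rbd engine (pool and image names
below are placeholders):

fio --name=fragtest --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=fio-test --rw=randwrite --bs=4k --iodepth=32 \
    --runtime=60 --time_based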


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


