Re: ceph rbox test on passive compressed pool

It is an hdd pool, all bluestore, configured with ceph-disk. Upgrades 
seem not to have 'updated' bluefs; some osds report like this:

{
    "/dev/sdb2": {
        "osd_uuid": "xxxxxxxxxxx",
        "size": 4000681103360,
        "btime": "2019-01-08 13:45:59.488533",
        "description": "main",
        "bluefs": "1",
        "ceph_fsid": "xxxxxxxxx",
        "kv_backend": "rocksdb",
        "magic": "ceph osd volume v026",
        "mkfs_done": "yes",
        "ready": "ready",
        "require_osd_release": "14",
        "whoami": "3"
    }
}

And some like this:
{
    "/dev/sdh2": {
        "osd_uuid": "xxxxxxxxxxx",
        "size": 3000487051264,
        "btime": "2017-07-14 14:45:59.212792",
        "description": "main",
        "require_osd_release": "14"
    }
}
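
(These labels are ceph-bluestore-tool show-label output, e.g. from 
ceph-bluestore-tool show-label --dev /dev/sdb2. The second osd is 
missing the bluefs and kv_backend entries and has a 2017 btime, which 
is what I mean by upgrades not having 'updated' bluefs.)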



-----Original Message-----
Cc: ceph-users
Subject: Re:  ceph rbox test on passive compressed pool

On 09/11 09:36, Marc Roos wrote:
> 
> Hi David,
> 
> Just to let you know, this hint is being set. What is the reason for 
> ceph compressing only half the objects? Can it be that there is some 
> issue with my osds? Like some maybe having an old fs (still created 
> with ceph-disk, not ceph-volume)? Is this still to be expected, or 
> does ceph drop compression under pressure?
> 
> https://github.com/ceph-dovecot/dovecot-ceph-plugin/blob/56d6c900cc9ec07dfb98ef2abac07aae466b7610/src/librmb/rados-storage-impl.cpp#L75
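> 
> For reference, a minimal librados sketch of how such a hint is set 
> (illustrative only: the function and variable names here are 
> assumptions, the plugin's actual code is at the link above):
> 
>     #include <rados/librados.hpp>
> 
>     // Write an object with a COMPRESSIBLE allocation hint attached,
>     // roughly what happens before a mail object is written.
>     int write_with_hint(librados::IoCtx &io, const std::string &oid,
>                         librados::bufferlist &data) {
>       librados::ObjectWriteOperation op;
>       // Advisory only: hints the expected object/write size and marks
>       // the data as compressible; the OSD is free to ignore it.
>       op.set_alloc_hint2(data.length(), data.length(),
>                          LIBRADOS_ALLOC_HINT_FLAG_COMPRESSIBLE);
>       op.write_full(data);
>       return io.operate(oid, &op);
>     }
> 
> With compression_mode=passive on the pool, only writes carrying this 
> hint should be candidates for compression.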


I was trying to look into this a bit :). Can you give me more info 
about the OSDs that you are using? What filesystem are they on?

Cheers!
> 
>  Thanks,
> Marc
> 
> 
> 
> -----Original Message-----
> Cc: jan.radon
> Subject: Re:  ceph rbox test on passive compressed pool
> 
> The hints have to be given from the client side as far as I 
> understand; can you share the client code too?
> 
> Also, note that it seems there are no guarantees that it will 
> actually do anything (best effort I guess):
> https://docs.ceph.com/docs/mimic/rados/api/librados/#c.rados_set_alloc_hint
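> 
> (Side note: rados_set_alloc_hint itself takes no flags; the 
> COMPRESSIBLE hint goes through the *2 variants, rados_set_alloc_hint2 
> in the C API or ObjectWriteOperation::set_alloc_hint2 in C++, which 
> accept LIBRADOS_ALLOC_HINT_FLAG_COMPRESSIBLE.)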
> 
> Cheers
> 
> 
> On 6 September 2020 15:59:01 BST, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>
> wrote:
> 
> 	
> 	
> 	I have been inserting 10790 copies of exactly the same 64kb text 
> 	message into a pool with passive compression enabled. I am still 
> 	counting, but it looks like only half the objects are compressed.
> 	
> 	mail/b08c3218dbf1545ff430000052412a8e mtime 2020-09-06 16:27:39.000000, size 63580
> 	mail/00f6043775f1545ff430000052412a8e mtime 2020-09-06 16:25:57.000000, size 525
> 	mail/b875f40571f1545ff430000052412a8e mtime 2020-09-06 16:25:53.000000, size 63580
> 	mail/e87c120b19f1545ff430000052412a8e mtime 2020-09-06 16:24:25.000000, size 525
> 	
> 	I am not sure if this should be expected from passive; these 
> 	docs[1] say that passive means 'compress if hinted COMPRESSIBLE'. 
> 	From that I would conclude that all these text messages should be 
> 	compressed. A previous test with a 64kb gzip attachment seemed not 
> 	to compress, although I did not look at all object sizes.
> 	
> 	
> 	
> 	This is on ceph 14.2.11.
> 	
> 	[1]
> 	https://documentation.suse.com/ses/5.5/html/ses-all/ceph-pools.html#sec-ceph-pool-compression
> 	https://docs.ceph.com/docs/mimic/rados/operations/pools/
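> 	
> 	(Per those docs, passive, e.g. set with 'ceph osd pool set <pool> 
> 	compression_mode passive', only compresses writes that carry the 
> 	COMPRESSIBLE hint, while aggressive compresses everything that is 
> 	not hinted INCOMPRESSIBLE.)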
> 
> 
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> 
> 

--
David Caro

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


