Re: Inline dedup/compression

Hi,

>>I like the idea of having a compress option implemented in e.g. librbd
>>and rgw, both of these cases involve scale-out clients and so concerns
>>of performance overhead can be largely brushed aside (e.g., most
>>OpenStack hypervisors seem to have plenty of free CPU).

Keep in mind that qemu uses only one I/O thread per disk, so I'm pretty sure that compression on the librbd side will hurt performance a lot for a single VM disk.
(Of course it will scale across a lot of VMs.)
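
As a back-of-the-envelope check on that, simply timing zlib on object-sized chunks on one core gives a feel for the latency a lone qemu I/O thread would have to absorb. A rough sketch (Python; the 4 MiB chunk size, zlib level and incompressible input are arbitrary assumptions, not librbd behaviour):

  # Rough single-core zlib timing to estimate the per-write CPU cost a
  # hypothetical compress-in-librbd option would add inside qemu's single
  # I/O thread. Chunk size and level are assumptions, not measurements.
  import os
  import time
  import zlib

  chunk = os.urandom(4 * 1024 * 1024)      # worst case: incompressible data
  t0 = time.time()
  for _ in range(10):
      zlib.compress(chunk, 1)              # level 1 = fastest
  elapsed = (time.time() - t0) / 10
  print("%.1f ms per 4 MiB chunk, ~%.0f MB/s on one core"
        % (elapsed * 1000, 4 / elapsed))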



----- Original Message -----
From: "Blair Bethwaite" <blair.bethwaite@xxxxxxxxx>
To: "Haomai Wang" <haomaiwang@xxxxxxxxx>
Cc: "James (Fei) Liu-SSI" <james.liu@xxxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
Sent: Tuesday, 30 June 2015 08:20:47
Subject: Re: Inline dedup/compression

On 30 June 2015 at 16:03, Haomai Wang <haomaiwang@xxxxxxxxx> wrote: 
> On Tue, Jun 30, 2015 at 4:55 AM, James (Fei) Liu-SSI 
> <james.liu@xxxxxxxxxxxxxxx> wrote: 
>> Hi Haomai, 
>> Thanks for moving the idea forward. Regarding compression: if we do compression at the client level it is not global, and the compression is only applied by the local client, am I right? I think there are pros and cons to both solutions, and we can get into more detail on each.
> 
> Yes, I have thought a lot about compression with Ceph myself. Firstly, we
> could easily use the objectstore backend to implement compression, e.g.
> filestore on zfs/btrfs and keyvaluestore on leveldb/rocksdb etc.
> The advantage is that we can enjoy it now. The con is that we may pay too
> much, especially in performance, for the benefit of compression.
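
(For what it's worth, the "enjoy it now" route needs little more than mount options; a sketch, untested and with option names from memory, letting btrfs compress underneath filestore:

  # ceph.conf sketch: let the filesystem under filestore do the compression
  [osd]
      osd mkfs type = btrfs
      osd mount options btrfs = rw,noatime,compress=lzo

The leveldb/rocksdb backends likewise have their own compression settings.)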

If you were going to compress at the OSD I imagine the main 
performance concern would be about adding to write latency? That might 
be mitigated by only compressing the actual datastore and not the 
journal? 
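
I.e., something shaped roughly like the following (purely a conceptual sketch in Python, not how FileStore is actually structured): ack after an uncompressed journal append, compress later on the way to the data store.

  # Conceptual sketch only: keep compression out of the write-latency path.
  import queue
  import zlib

  to_datastore = queue.Queue()

  def handle_write(oid, data, journal, ack):
      journal.append((oid, data))     # fast path: journal stays uncompressed
      ack(oid)                        # client sees the write acked here
      to_datastore.put((oid, data))   # compression deferred to the background

  def datastore_flusher(datastore):
      # would run in a background thread/worker pool
      while True:
          oid, data = to_datastore.get()
          datastore[oid] = zlib.compress(data, 1)   # CPU cost off the hot path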

I like the idea of having a compress option implemented in e.g. librbd 
and rgw, both of these cases involve scale-out clients and so concerns 
of performance overhead can be largely brushed aside (e.g., most 
OpenStack hypervisors seem to have plenty of free CPU). 
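
To make the "do it in the client" idea concrete, here is a toy sketch using python-rados and whole objects (no such option exists in librbd/rgw today; the pool name and conffile path are placeholders, and real block/object semantics would need far more care — this only shows where the CPU cost would land):

  # Toy sketch: compress on the client before the data ever reaches RADOS.
  import zlib
  import rados

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # placeholder path
  cluster.connect()
  ioctx = cluster.open_ioctx('rbd')                       # placeholder pool

  def put_compressed(oid, data):
      ioctx.write_full(oid, zlib.compress(data, 1))       # CPU spent client-side

  def get_decompressed(oid):
      return zlib.decompress(ioctx.read(oid, 64 * 1024 * 1024))

  put_compressed('demo-object', b'hello ceph ' * 100000)
  print(len(get_decompressed('demo-object')))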

-- 
Cheers, 
~Blairo 


