On Thu, Aug 25, 2016 at 10:11 PM, Marcin Mirosław <marcin@xxxxxxxx> wrote:
On 25.08.2016 at 13:09, Christopher James Halse Rogers wrote:
On Thu, Aug 25, 2016 at 7:21 PM, Marcin Mirosław <marcin@xxxxxxxx> wrote:
On 25.08.2016 at 02:03, Christopher James Halse Rogers wrote:
Hi!
On Thu, Aug 25, 2016 at 7:21 AM, marcin@xxxxxxxx wrote:
[...]
Does it mean that caching is unavailable and only tiering will be in bcachefs?
And... how do I mount a tiered FS? When I pass a single device to mount, I get:
bcache: bch_open_as_blockdevs() register_cache_set err insufficient devices
Tiering gets you all the advantages of caching, plus you can (with some effort) have the combined filesystem size be the sum of the SSD + HDD capacities, rather than the capacity being determined solely by the capacity of the slow tier (this is not currently the case for bcachefs).
I think that caching has at least these advantages over tiering:
- allows fast reads and writes of files compressed with a slow algorithm (gzip)
I think this is getting into ā€œwhat should we call this thingā€ arguments. A naive cache is just going to promote the gzipped data to the fast storage. On the other end, there's nothing much preventing a sophisticated tiering system from compressing/decompressing as part of tier demotion/promotion.
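As a rough sketch of that idea (a toy model, not bcachefs code): a tiering layer could keep raw bytes on the fast tier for quick access, apply heavy compression on demotion, and decompress on promotion. Using zlib to stand in for gzip:

```python
import zlib

# Toy tiering layer that recompresses as part of demotion/promotion.
fast_tier = {}  # key -> raw bytes (fast access)
slow_tier = {}  # key -> zlib-compressed bytes

def write(key, data):
    fast_tier[key] = data  # writes always land on the fast tier

def demote(key):
    # Move cold data down, compressing with a slow, heavy setting.
    slow_tier[key] = zlib.compress(fast_tier.pop(key), level=9)

def read(key):
    if key in fast_tier:
        return fast_tier[key]
    # Promotion: decompress back onto the fast tier transparently.
    data = zlib.decompress(slow_tier.pop(key))
    fast_tier[key] = data
    return data

write("a", b"hello" * 100)
demote("a")
assert read("a") == b"hello" * 100  # round-trips through the slow tier
```

The point is only that the (de)compression step can live inside the migration path, so the reader never touches gzipped data directly.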
If (de|re)compression is part of demotion or promotion, then I agree, a cache device has no advantage here.
You also don't get (de|re)compression for free in a caching strategy, depending on where you put your cache. For example, a traditional bcache cache backing a compressed filesystem will only ever see compressed data.
- can be optimized for SSD drives
It's not clear to me how? Tiering and caching are doing the same sort of thing.
I don't know how bcachefs handles SSD drives. SSDs are faster at sequential writes; I'm not sure whether that can be achieved using a tier device. A cache device's on-disk format can differ from what bcachefs uses, and can turn random writes into sequential writes.
I believe you mean *HDDs* are faster in sequential write? This is exactly what tiering gets you - all writes go to the fast SSD storage¹, and then data are migrated off to the slower HDD in big, sequential chunks.
¹: Indeed, IIRC this is the only data-loss bug I've found in bcachefs - I wrote too much data too fast, the migration from tier 0 to tier 1 couldn't keep up, and tier 0 filled up to the point where it could no longer write the necessary journal entries, leaving the filesystem unmountable.
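That write path can be sketched with a toy model (an assumption-laden illustration, not actual bcachefs internals): every write lands on the fast tier, and a background migrator drains it to the slow tier in large, offset-sorted batches, turning random writes into sequential ones:

```python
# Toy model: fast-tier write log drained in sorted (sequential) batches.
FAST_CAPACITY = 4   # blocks the fast tier can hold
MIGRATE_BATCH = 4   # blocks moved per migration pass

fast_tier = []      # append-only log of (offset, data) writes
slow_tier = {}      # offset -> data, written in sorted batches

def migrate():
    # Drain a batch in offset order, i.e. one big sequential HDD write.
    batch = sorted(fast_tier[:MIGRATE_BATCH])
    del fast_tier[:MIGRATE_BATCH]
    for offset, data in batch:
        slow_tier[offset] = data

def write(offset, data):
    if len(fast_tier) >= FAST_CAPACITY:
        migrate()   # if migration can't keep up, writes stall here
    fast_tier.append((offset, data))

for off in [7, 2, 9, 4, 1]:  # random-order incoming writes
    write(off, b"x")

# The first four writes were migrated as one sorted, sequential batch.
print(sorted(slow_tier))  # [2, 4, 7, 9]
```

The stall in `write()` is also a crude stand-in for the failure mode above: if the migrator cannot keep up, the fast tier fills and forward progress depends on draining it.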
--
To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html