Re: TIER: combine SSDs and HDDs into a single block device

I played with TIER about a week ago.  It's definitely a decent
implementation of HSM and seemed to work well in my testing.
Assuming a three-tier setup (say SSD, 15K SAS, 7K SATA), the code
lands sequential I/O on tier 2 (the 15K SAS) first, migrating blocks
down as tier 2 fills up and/or they go unused.  If you do a lot of
I/O on a particular segment, it gets promoted to the faster tiers.
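
To make the policy concrete, here's a toy sketch of it in Python.
This is entirely my own illustration, not TIER's code; the tier
numbering, thresholds, and segment granularity are made up.

# Toy model of the migration policy described above; thresholds and
# segment granularity are invented and are NOT TIER's actual code.

FASTEST, SLOWEST = 1, 3  # tier 1 = SSD, tier 2 = 15K SAS, tier 3 = 7K SATA

class Segment:
    def __init__(self):
        self.tier = 2    # sequential I/O lands on tier 2 first
        self.hits = 0    # recent access count

def on_access(seg, hot_threshold=8):
    """Promote a busy segment one tier up toward the SSD."""
    seg.hits += 1
    if seg.hits >= hot_threshold and seg.tier > FASTEST:
        seg.tier -= 1
        seg.hits = 0

def sweep(segments, cold_threshold=1):
    """Periodic sweep: demote idle segments toward the slow tier
    (this is also what frees room when an upper tier fills up)."""
    for seg in segments:
        if seg.hits < cold_threshold and seg.tier < SLOWEST:
            seg.tier += 1
        seg.hits = 0     # decay counters each sweep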

Kent Overstreet (the bcache developer) also took a look at the code
and weighed in.  He said it was impressive what TIER manages to do in
~3K lines of code, though he was concerned that TIER's heavy use of
rb-trees would limit random read I/O performance.
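
For intuition on the rb-tree point, here's a rough stand-alone
comparison (again my own Python, not TIER's structures): resolving
each random read through a sorted, tree-style index costs a log-time
search per I/O, while a flat table lookup is constant time.  In the
kernel the gap comes more from pointer chasing and cache misses, but
the shape of the problem is the same.

import bisect, random, time

N = 1_000_000
keys = list(range(N))            # sorted block numbers (index stand-in)
vals = list(range(N))            # mapping payload
flat = dict(zip(keys, vals))     # flat map stand-in

reads = [random.randrange(N) for _ in range(200_000)]

t0 = time.perf_counter()
for b in reads:
    _ = vals[bisect.bisect_left(keys, b)]   # log-time search per read
t1 = time.perf_counter()
for b in reads:
    _ = flat[b]                             # constant-time lookup per read
t2 = time.perf_counter()

print(f"sorted-index: {t1 - t0:.3f}s  flat: {t2 - t1:.3f}s")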

Current caveats:  TIER devices can't be expanded, and additional
tiers can't be added after setup.  The developer said he's working on
both features and hopes to have them ready in the next 2-3 months.

Calvin

On Thu, Aug 2, 2012 at 10:13 AM, Tommi Virtanen <tv@xxxxxxxxxxx> wrote:
> Sounds like bcache in writeback mode. Assumes all underlying block
> devices are RAIDed, or losing one will mean losing data; that is, for
> example RAID1(SSD+SSD) & RAID5(8*HDD).
>
> http://www.lessfs.com/wordpress/?p=776

