Re: ceph-disk improvements

> On 2 April 2016 at 10:52, Loic Dachary <loic@xxxxxxxxxxx> wrote:
> 
> 
> Hi Wido,
> 
> On 02/04/2016 07:54, Wido den Hollander wrote:
> > 
> >> On 1 April 2016 at 17:36, Sage Weil <sweil@xxxxxxxxxx> wrote:
> >>
> >>
> >> Hi all,
> >>
> >> There are a couple of looming features for ceph-disk:
> >>
> >> 1- Support for additional devices when using BlueStore.  There can be up 
> >> to three: the main device, a WAL/journal device (small, ~128MB, ideally 
> >> NVRAM), and a fast metadata device (as big as you have available; will be 
> >> used for internal metadata).
> >>
> >> 2- Support for setting up dm-cache, bcache, and/or FlashCache underneath 
> >> filestore or bluestore.
> >>
> > 
> > Keep in mind that you can't create a partition on a bcache device. So when
> > using bcache, the journal has to be file-based and not a partition.
> 
> Is this true of all bcache versions (https://bcache.evilpiepirate.org/)? Or
> is it a planned feature? Or is it never going to happen?

I am not sure, but in my experience partitioning a bcache device is not
supported on any kernel; I have tried up to version 4.2.
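
As a rough illustration (device and mount paths are only examples), the
workaround in practice looks like this:

    # bcache exposes a single block device with no partition table
    mkfs.xfs /dev/bcache0
    mount /dev/bcache0 /var/lib/ceph/osd/ceph-0
    # the journal then has to live as a file on that filesystem
    # (e.g. /var/lib/ceph/osd/ceph-0/journal) rather than on its own partition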

> 
> Cheers
> 
> > 
> > If we add the flag --file-based-journal or --no-partitions, we can create
> > OSDs on both bcache and dm-cache.
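
Something like this, say (purely hypothetical syntax; neither flag exists
today):

    # create the OSD directly on the bcache device, with a file-based journal
    ceph-disk prepare --no-partitions /dev/bcache0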
> > 
> > With BlueStore this becomes a problem, since it requires the small (XFS)
> > filesystem for its metadata.
> > 
> > Wido
> > 
> >> The current syntax of
> >>
> >>  ceph-disk prepare [--dmcrypt] [--bluestore] DATADEV [JOURNALDEV]
> >>
> >> isn't terribly expressive.  For example, the journal device size is set 
> >> via a config option, not on the command line.  For bluestore, the metadata 
> >> device will probably want/need explicit user input so they can ensure it's 
> >> 1/Nth of their SSD (if they have N HDDs to each SSD).
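
For reference, a minimal sketch of where that size comes from today and what
the 1/Nth split means (the numbers are only placeholders):

    # ceph.conf: the journal partition size is taken from here (in MB),
    # not from the ceph-disk command line
    [osd]
    osd journal size = 5120

    # 1/Nth sizing: a 400 GB SSD shared by N=4 HDD-backed OSDs would give
    # each metadata partition roughly 400 / 4 = 100 GB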
> >>
> >> And if we put dm-cache in there, that partition will need to be sized too.
> >>
> >> Another consideration is that right now we don't play nice with LVM at 
> >> all.  Should we?  dm-cache is usually used in conjunction with LVM 
> >> (although it doesn't have to be).  Does LVM provide value?  Like, the 
> >> ability for users to add a second SSD to a box and migrate cache, wal, or 
> >> journal partitions around?
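
For context, a minimal sketch of the usual LVM + dm-cache layering (volume
group and LV names are made up; sizes are placeholders):

    # one slow HDD plus one fast SSD partition in a single volume group
    pvcreate /dev/sdb /dev/nvme0n1p1
    vgcreate vg_osd0 /dev/sdb /dev/nvme0n1p1
    # origin LV on the HDD, cache data + metadata LVs on the SSD
    lvcreate -n data -l 100%PVS vg_osd0 /dev/sdb
    lvcreate -n cache -L 40G vg_osd0 /dev/nvme0n1p1
    lvcreate -n cache_meta -L 1G vg_osd0 /dev/nvme0n1p1
    # turn the fast LVs into a cache pool and attach it to the origin LV
    lvconvert --type cache-pool --poolmetadata vg_osd0/cache_meta vg_osd0/cache
    lvconvert --type cache --cachepool vg_osd0/cache vg_osd0/data

Being LVs, the cache could later be detached (lvconvert --splitcache) or moved
to another SSD (pvmove), which is roughly the migration value being asked
about here.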
> >>
> >> I'm interested in hearing feedback on requirements, approaches, and 
> >> interfaces before we go too far down the road...
> >>
> >> Thanks!
> >> sage
> > 
> 
> -- 
> Loïc Dachary, Artisan Logiciel Libre


