Re: Deprecating ext4 support

Hello,

On Fri, 15 Apr 2016 07:02:13 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:

> Hi,
> 
> On 15.04.2016 at 03:07, Christian Balzer wrote:
> >> We thought this was a good idea so that we could set the replication
> >> size differently for doc_root and raw-data if we liked. Seems this
> >> was a bad idea for all objects.
> > I'm not sure how you managed to get into that state or if it's a bug
> > after all, but I can't replicate it on the latest hammer.
> > Firstly I created a "default" FS, with the classic metadata and data
> > pools, mounted it and put some files into the root.
> > Then I added a second pool (filegoats) and set the layout for a
> > subdirectory to use it. After re-mounting the FS and copying data to
> > that subdir I get this, exactly what one would expect:
> > ---
> > 
> >     NAME          ID     USED       %USED     MAX AVAIL     OBJECTS 
> >     data          0      82043k         0         1181G         334 
> >     metadata      1       2845k         0         1181G          20 
> >     rbd           2        161G      2.84          787G       41914 
> >     filegoats     10     89034k         0         1181G         336 
> > ---
> > So no duplicate objects (or at least their headers) for me.
> > 
> > If nobody else has anything to say about this, I'd consider filing a
> > bug report.
> I must admit that we're currently using 0.87 (Giant) and haven't
> upgraded so far. It would be nice to know whether an upgrade would
> "clean" this state or whether we should rather start with a new
> cluster ... :(
> 
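For reference, the test I described in the quoted text above boils down
to roughly the following; the PG count and mount point are just what I
happened to use, and the exact command syntax may differ slightly on
older releases:
---
# create the extra data pool and tell CephFS about it
ceph osd pool create filegoats 128
ceph mds add_data_pool filegoats

# point a subdirectory at the new pool (needs setfattr from the attr
# package, works with both the kernel client and ceph-fuse)
setfattr -n ceph.dir.layout.pool -v filegoats /mnt/cephfs/filegoats

# files written under that subdirectory afterwards should only create
# objects in "filegoats", not in "data"
---
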
As for whether an upgrade would clean up the existing state, I can't
really comment, but you will probably want to wait for Jewel anyway:
it is an LTS release and comes with plenty of CephFS enhancements,
including an fsck.

Have you verified what those objects in your data pool actually are,
and whether they really exist on disk?
If they do, I'd expect them all to be zero length.
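Something along these lines should tell you (assuming the old data pool
is still called "data"; "rados stat" prints the size of each object):
---
# list a few of the objects in the old data pool
rados -p data ls | head

# print the size of every object in it; the interesting question
# is whether they all come back as "size 0"
rados -p data ls | while read obj; do
    rados -p data stat "$obj"
done
---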

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


