Re: Deprecating ext4 support

Hello,

On Fri, 15 Apr 2016 08:20:45 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:

> Hi,
> 
> Am 15.04.2016 um 07:43 schrieb Christian Balzer:
> > On Fri, 15 Apr 2016 07:02:13 +0200 Michael Metz-Martini | SpeedPartner
> > GmbH wrote:
> >> Am 15.04.2016 um 03:07 schrieb Christian Balzer:
> >>>> We thought this was a good idea so that we can set the replication
> >>>> size differently for doc_root and raw-data if we like. Seems this
> >>>> was a bad idea for all objects.
> [...]
> >>> If nobody else has anything to say about this, I'd consider filing a
> >>> bug report.
> >> I must admit that we're currently using 0.87 (Giant) and haven't
> >> upgraded so far. It would be nice to know whether an upgrade would
> >> "clean" this state or whether we'd better start with a new cluster ... :(

Actually, I ran some more tests, with larger and differing data sets.

I can now replicate this behavior here. Before:
---
    NAME          ID     USED       %USED     MAX AVAIL     OBJECTS 
    data          0       6224M      0.11         1175G        1870 
    metadata      1      18996k         0         1175G          24 
    filegoats     10       468M         0         1175G        1346 
---

And after copying /usr/ from the client where that CephFS is mounted into
the directory mapped to "filegoats":
---
    data          0       6224M      0.11         1173G       47274 
    metadata      1      42311k         0         1173G        4057 
    filegoats     10      1642M      0.03         1173G       43496 
---
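
For reference, this is roughly how such a mapping is set up; a minimal
sketch, assuming the extra pool is called "filegoats" and the directory is
/mnt/cephfs/filegoats (names, PG counts and mount point are illustrative,
and newer releases use "ceph fs add_data_pool" instead of the mds variant):
---
# Create the extra pool and register it with CephFS
ceph osd pool create filegoats 128 128
ceph mds add_data_pool filegoats

# Point a directory at the new pool via its layout; files created
# below it are then written to "filegoats" instead of "data"
setfattr -n ceph.dir.layout.pool -v filegoats /mnt/cephfs/filegoats

# Verify the layout
getfattr -n ceph.dir.layout /mnt/cephfs/filegoats
---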

So not a "bug" per se, but not exactly elegant when you consider the
per-object overhead.
This feels a lot like how cache-tiering is implemented as well (evicted
objects get zeroed, not deleted).
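
One quick way to see this is to stat some of the leftover objects in the
original pool directly; a sketch with rados, assuming the primary data
pool is called "data" and borrowing an object name from the listing
further down:
---
# List objects still accounted to the "data" pool
rados -p data ls | head

# Objects whose file data now lives in the other pool should report size 0
rados -p data stat 10003aed5cb.00000000
---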

I guess the best strategy here is to have the vast majority of the data
in "data" and only special cases in other pools (like SSD-based ones).

It would be nice if somebody from the devs or RH could pipe up, and if the
documentation could be updated to reflect this.

Christian

> > I can't really comment on that, but you will probably want to wait for
> > Jewel, being an LTS release and having plenty of CephFS enhancements
> > including a fsck.
> > Have you verified what those objects in your data pool are?
> > And that they are actually there on disk?
> > If so, I'd expect them all to be zero length. 
> They exist and are all of size 0 - right.
> 
> /var/lib/ceph/osd/ceph-21/current/0.179_head/DIR_9/DIR_7/DIR_1/DIR_0/DIR_0/DIR_0$
> ls -l
> total 492
> -rw-r--r--. 1 root root 0 Oct  6  2015
> 10003aed5cb.00000000__head_AF000179__0
> -rw-r--r--. 1 root root 0 Oct  6  2015
> 10003d09223.00000000__head_6D000179__0
> [..]
> 
> $ getfattr -d 10003aed5cb.00000000__head_AF000179__0
> # file: 10003aed5cb.00000000__head_AF000179__0
> user.ceph._=0sDQjpAAAABAM1AAAAAAAAABQAAAAxMDAwM2FlZDVjYi4wMDAwMDAwMP7/////////eQEArwAAAAAAAAAAAAAAAAAGAxwAAAAAAAAAAAAAAP////8AAAAAAAAAAP//////////AAAAAHTfAwAAAAAA2hoAAAAAAAAAAAAAAAAAAAICFQAAAAIAAAAAAAAAAGScLgEAAAAADQAAAAAAAAAAAAAAY4zeU3D2EwgCAhUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB03wMAAAAAAAAAAAAAAAAAAAQAAAA=
> user.ceph._parent=0sBQTvAAAAy9WuAwABAAAGAAAAAgIbAAAAldSuAwABAAAHAAAAOF81LmpwZ0gCAAAAAAAAAgIWAAAA1NGuAwABAAACAAAAMTKhAwAAAAAAAAICNAAAAHwIgwMAAQAAIAAAADBlZjY3MTk5OGMzNGE5MjViYzdjZjQxZGYyOTM5NmFlWgAAAAAAAAACAhYAAADce3oDAAEAAAIAAABmNscPAAAAAAAAAgIWAAAAJvV3AwABAAACAAAAMGWGeA0AAAAAAAICGgAAAAEAAAAAAAAABgAAAGltYWdlc28yNQAAAAAABgAAAAAAAAABAAAAAAAAAAAAAAA=
> user.cephos.spill_out=0sMQA=
> 
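
As an aside, the user.ceph._parent xattr above is the backtrace the MDS
stores so it can map an object back to its path. A rough sketch of how to
decode it, assuming ceph-dencoder is installed and supports the
inode_backtrace_t type (paths and object name taken from the listing above):
---
# Dump the raw xattr value and decode it; the output contains the inode
# number and the chain of ancestor dentries (i.e. the file's path)
getfattr --only-values -n user.ceph._parent \
    10003aed5cb.00000000__head_AF000179__0 > /tmp/backtrace
ceph-dencoder type inode_backtrace_t import /tmp/backtrace decode dump_json
---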


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


