Re: Deprecating ext4 support

On Thu, 14 Apr 2016 19:39:01 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:

> Hi,
> 
> Am 14.04.2016 um 03:32 schrieb Christian Balzer:
[massive snip]

Thanks for that tree/du output; it matches what I expected.
You'd think XFS wouldn't be that intimidated by directories of that size.

> 
> 
> >> As you can see, we have one data object in pool "data" per file saved
> >> somewhere else. I'm not sure what this is related to, but maybe it
> >> is required by CephFS.
> > That's rather confusing (even more so since I don't use CephFS), but it
> > feels wrong.
> > From what little I know about CephFS, you can have only one FS
> > per cluster, and the pools can be arbitrarily named (the defaults are
> > data and metadata).
> [...]
> > My guess is that you somehow managed to create things in a way that
> > puts references (not the actual data) to everything in "images" into
> > "data".
> You can tune the pool by e.g.
> cephfs /mnt/storage/docroot set_layout -p 4
> 
Yesterday morning I wouldn't have known what that meant, but since then I
did a lot of reading and created a CephFS on the test cluster as well,
including a second data pool and layouts.
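
For anyone else reading along: on Hammer the same thing can also be done
with the layout virtual xattrs instead of the old "cephfs ... set_layout"
tool. A rough sketch along those lines (pool name, PG count and mount
point are illustrative only, not my exact commands):
---
# Create a second data pool and register it with the MDS map
# (on Hammer this is "ceph mds add_data_pool").
ceph osd pool create docroot 64 64
ceph mds add_data_pool docroot

# Point a subdirectory's layout at the new pool; files created below it
# from now on will have their objects stored in "docroot".
setfattr -n ceph.dir.layout.pool -v docroot /mnt/cephfs/docroot

# Verify the layout that the directory will hand out to new files.
getfattr -n ceph.dir.layout /mnt/cephfs/docroot
---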

> We thought this was a good idea so that we could set the replication
> size differently for doc_root and raw-data if we like. Seems this was a
> bad idea for all objects.
> 
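Having separate data pools is indeed how you get different replication
sizes for doc_root and raw-data, for what it's worth; that part is a
per-pool setting, roughly like this (pool names are just examples):
---
# Replication size is a pool property, so each CephFS data pool can be
# set independently.
ceph osd pool set docroot size 3
ceph osd pool set rawdata size 2
---
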
I'm not sure how you managed to get into that state, or whether it's a
bug after all, but I can't reproduce it on the latest Hammer.

First I created a "default" FS with the classic metadata and data
pools, mounted it, and put some files into the root.
Then I added a second pool (filegoats) and set the layout of a
subdirectory to use it. After re-mounting the FS and copying data to that
subdir I get this, which is exactly what one would expect:
---

    NAME          ID     USED       %USED     MAX AVAIL     OBJECTS 
    data          0      82043k         0         1181G         334 
    metadata      1       2845k         0         1181G          20 
    rbd           2        161G      2.84          787G       41914 
    filegoats     10     89034k         0         1181G         336 
---
So no duplicate objects (or at least their headers) for me.
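
For completeness, besides the per-pool object counts from "ceph df"
above, you can also check an individual file: a CephFS file's first
object is named after its inode number in hex plus a ".00000000" suffix,
so something like this (mount point and file name are placeholders) shows
which pool actually holds the data:
---
# Derive the object name from the file's inode number.
obj=$(printf '%x.00000000' $(stat -c %i /mnt/cephfs/subdir/somefile))

# Ask each data pool whether it has that object.
rados -p filegoats stat "$obj"   # present for files in the subdir
rados -p data stat "$obj"        # should error out if there are no duplicates
---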

If nobody else has anything to say about this, I'd consider filing a bug
report.

Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


