Re: [ceph-users] use object size of 32k rather than 4M

Hi, Robert

Thanks for your quick reply. Yes, the number of files could really be the potential problem. But if it is just a memory problem, we could put more memory in our OSD servers.
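
To put rough numbers on that, here is a back-of-the-envelope sketch (the 1 TB-per-OSD figure is from Robert's mail below; the ~1 KiB of kernel memory per cached inode is my own assumption, a common ballpark for an XFS inode plus dentry):

# Rough object-count and inode-cache arithmetic.
# Assumptions: 1 TB of data per OSD, ~1 KiB of kernel memory per cached inode.
TB = 10**12
for label, size in (("4M", 4 * 1024**2), ("32k", 32 * 1024)):
    n_objects = TB // size              # one file (inode) per object
    cache = n_objects * 1024            # ~1 KiB per cached inode
    print("%4s: %11d objects, ~%.1f GiB inode cache" % (label, n_objects, cache / 1024**3))
# Output:
#   4M:      238418 objects, ~0.2 GiB inode cache
#  32k:    30517578 objects, ~29.1 GiB inode cache

With 10 OSDs per server, the 32k case needs on the order of 290 GiB just to keep the inodes cached, so adding memory alone may not be enough.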

Also, I tested it on XFS using mdtest; here is the result:


$ sudo ~/wulb/bin/mdtest -I 10000 -z 1 -b 1024 -R -F
--------------------------------------------------------------------------
[[10342,1],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: 10-180-0-34

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
-- started at 12/23/2015 18:59:16 --

mdtest-1.8.3 was launched with 1 total task(s) on 1 nodes
Command line used: /home/ceph/wulb/bin/mdtest -I 10000 -z 1 -b 1024 -R -F
Path: /home/ceph
FS: 824.5 GiB   Used FS: 4.8%   Inodes: 52.4 Mi   Used Inodes: 0.6%
random seed: 1450868356

1 tasks, 10250000 files

SUMMARY: (of 1 iterations)
   Operation                  Max        Min       Mean    Std Dev
   ---------                  ---        ---       ----    -------
   File creation     :  44660.505  44660.505  44660.505      0.000
   File stat         : 693747.783 693747.783 693747.783      0.000
   File read         : 365319.444 365319.444 365319.444      0.000
   File removal      :  62064.560  62064.560  62064.560      0.000
   Tree creation     :  69680.729  69680.729  69680.729      0.000
   Tree removal      :    352.905    352.905    352.905      0.000


From what I tested, the speed of file stat and file read does not slow down much. So, could I say that the speed of an operation like
looking up a file will not decrease much, only that the number of files will increase?
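
For reference, in case anyone wants to reproduce this with a non-default object size: if the data path is RBD (the thread does not say explicitly), the object size is chosen per image at creation time via the order parameter, which is log2 of the object size in bytes, so 32k is order 15 and the 4M default is order 22. A minimal sketch using the python rbd bindings (the conffile path, pool name, and image name are placeholders):

import rados
import rbd

# Connect to the cluster; the conffile path is an assumption.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')   # placeholder pool name
    try:
        # order=15 -> 2**15 = 32k objects (the default order 22 gives 4M)
        rbd.RBD().create(ioctx, 'test-32k', 4 * 1024**3, order=15)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

The same thing can be done from the command line with "rbd create --order 15".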


------------------				 
hzwulibin
2015-12-23

-------------------------------------------------------------
From: "Van Leeuwen, Robert" <rovanleeuwen@xxxxxxxx>
Date: 2015-12-23 20:57
To: hzwulibin, ceph-devel, ceph-users
Cc:
Subject: Re: [ceph-users] use object size of 32k rather than 4M


>In order to reduce the enlarge impact, we want to change the default object size from 4M to 32k.
>
>We know that will increase the number of objects on one OSD and make the remove process take longer.
>
>Hmm, here I want to ask you guys: are there any other potential problems a 32k size would have? If there is no obvious problem, we could dive into
>it and do more testing on it.


I assume the objects on the OSD's filesystem will become 32k when you do this.
So if you have 1TB of data on one OSD, you will have 31 million files == 31 million inodes.
This is excluding the directory structure, which might also be significant.
If you have 10 OSDs on a server, you will easily hit 310 million inodes.
You will need a LOT of memory to make sure the inodes are cached, but even then looking up an inode might add significant latency.

My guess is it will be fast in the beginning but will grind to a halt when the cluster gets fuller, due to inodes no longer being in memory.

Also, this does not take into account any other bottlenecks you might hit in Ceph, which other users can probably answer better.


Cheers,
Robert van Leeuwen
