Re: struggling to achieve high bandwidth on Ceph dev cluster - HELP

Thanks.

The Ceph source code contains a script called vstart.sh which allows
developers to quickly test their code using a simple deployment on their
development system.

Here: https://docs.ceph.com/en/latest/dev/quick_guide/

Although I completely agree with your point about deploying manually, I thought
the script might also give a good idea of how things work. Maybe I need to ask
in another email how far I can go with the script.
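
For anyone following along, the quick guide boils down to something like this
(a rough sketch only, assuming a built source tree and running from the build
directory; the daemon counts are just examples, not a recommendation):

    # start a fresh local test cluster with 1 monitor and 4 OSDs
    # (-n = create a new cluster, -d = debug output)
    MON=1 OSD=4 MDS=0 ../src/vstart.sh -n -d

    # check that it came up
    ./bin/ceph -s

    # tear it down again
    ../src/stop.sh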


Some more questions, please:
How many OSDs were you using in the tests from your second email, for 1gbit [1]
and 10gbit [2] ethernet? Or, to be precise, what did your cluster look like in
both cases?

On Wed, Feb 10, 2021 at 11:40 AM Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:

>
> > And you hit the nail on the head by asking about *replication factor*,
> > because I don't know how to change the replication factor. AFAIK, by
> > default it is *3x*, but I would like to change it, for example to *2x*.
>
> ceph osd pool get rbd size
> https://docs.ceph.com/en/latest/man/8/ceph/
>
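
(Noting for myself from that man page: the size can apparently also be changed
per pool, e.g. for the default rbd pool; min_size is the related setting for
how many replicas must be available to serve I/O.)

    ceph osd pool get rbd size       # show current replication factor
    ceph osd pool set rbd size 2     # change it to 2x
    ceph osd pool get rbd min_size   # minimum replicas needed to serve I/O
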
> > So please excuse me for two naive questions  before my cluster info [1]:
> >
> > - How can I change my replication factor? I am assuming I can change it
> >   through the vstart script.
>
> I have no idea what vstart is. If you want to learn Ceph (and you should,
> if you are going to play with large amounts of other people's data), install
> it manually. IMHO deployment tools are for making deployments easier and
> faster, not for "I don't know, so let's just run a script".
>
>
> > - How can I change the ethernet speed on the test cluster? For example,
> >   1gbit ethernet and 10gbit ethernet, like you had done. I am assuming I
> >   can change it through the vstart script.
>
> Don't do it, it is a waste of time; those numbers are just for reference,
> something I wanted to know when I started creating my test cluster.
>
> > [1]
> > I am running a minimal cluster of 4 OSDs.
>
> I am not sure you are going to get much more performance out of it then,
> because you are not utilizing the power of many OSDs.
>
> This is how my individual drives perform under the same rados bench test:
> all around 20 MB/s.
>
> [@~]# dstat -d -D sdb,sdc,sdd,sdf,sdl,sdg,sdh,sdi
>
> --dsk/sdb-----dsk/sdc-----dsk/sdd-----dsk/sdf-----dsk/sdl-----dsk/sdg-----dsk/sdh-----dsk/sdi--
>  read  writ: read  writ: read  writ: read  writ: read  writ: read  writ: read  writ: read  writ
> 3664k  284k:2507k  172k:2692k  204k:6676k  467k:2405k  322k:3220k  230k:1932k  196k:2050k  202k
>     0     0:    0     0:    0     0:    0     0:    0     0:    0     0:    0     0:    0     0
>     0 8192B:    0     0:    0   28k:    0   44k:    0  928k:    0   28k:    0     0:    0   12k
>     0 4096B:    0     0:    0   36k:  68k   32k:    0     0:    0     0:    0     0:    0     0
>     0     0:    0     0:    0     0:    0     0:    0     0:    0     0:    0     0:    0     0
>     0     0:    0     0:    0     0:    0   24k:    0     0:    0     0:    0     0:    0     0
> 4096B  104k:    0     0:4096B   20k:    0     0:8192B  152k:    0   80k:    0     0:    0   72k
>     0     0:    0     0:    0     0:    0     0:    0     0:    0     0:    0     0:    0 4096B
>     0     0:    0   12k:    0     0:    0     0:    0   12k:    0   24k:    0     0:    0   12k
>     0   72k:    0   16k:    0   32k:  20k  100k:    0 4096B:    0     0:    0     0:    0   24k
>     0 8200k:    0     0:    0   20M:  12k   20M:    0   28M:    0   20M:    0   12M:    0   20M
>     0   16M:    0   12M:    0   24M:    0   16M:    0   47M:    0   12M:    0 8212k:    0   20M
>     0   24M:    0   11M:    0   28M:    0   28M:    0   49M:    0   44M:    0   12M:    0   24M
>     0   38M:    0   13M:    0   42M:    0   32M:    0   28M:    0   31M:    0 4104k:    0   21M
>     0   50M:    0 8204k:    0   28M:4096B   44M:    0   61M:    0   33M:    0 8204k:    0   12M
>     0   32M:    0   20M:4096B   38M:    0   20M:    0   55M:8192B   39M:    0   32M:    0   24M
>     0   16M:    0   24M:4096B   29M:    0   36M:    0   28M:    0   17M:    0   37M:    0     0
> 4096B   44M:    0   16M:    0   40M:  44k   31M:4096B   28M:8192B   32M:    0   12M:    0   24M
>     0   12M:    0   28M:    0 6196k:    0   18M:    0   52M:    0   32M:    0   46M:  12k   40M
>     0   20M:    0   18M:    0   38M:    0   52M:    0   32M:    0   24M:    0   27M:    0   43M
>     0  128k:    0 2056k:    0   16k:  20k   12M:    0     0:    0 8212k:4096B   12k:8192B 9804k
>     0  520k:    0  116k:    0  280k:    0  452k:    0  364k:    0  208k:    0  152k:    0  144k
>     0   64k:    0   88k:    0   64k:    0  132k:4096B  156k:    0   88k:    0   72k:    0  184k
>     0  140k:    0     0:    0     0:8192B   20k:    0   12k:    0  112k:    0     0:    0     0
>     0     0:    0 8192B:    0   12k:  32k 1044k:    0     0:4096B   16k:4096B     0:    0   24k
>     0     0:    0     0:    0   36k:    0   12k:    0     0:    0     0:    0     0:    0     0
>     0     0:    0     0:    0     0:    0     0:    0   20k:    0     0:    0     0:    0     0
>     0   92k:    0   24k:    0     0:  12k   60k:    0     0:    0     0:    0     0: 320k     0
>     0     0:    0     0:    0     0:  12k   80k:    0   20k:    0     0:    0     0: 512k     0
>     0     0:    0     0:    0     0:    0     0:    0     0:    0     0:    0     0: 768k     0
>
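
(To be sure I reproduce the same kind of test, I assume this is a plain rados
bench write against a pool while watching the disks with dstat, roughly like
the sketch below; the pool name and duration are placeholders on my side.)

    # 60-second write benchmark against the rbd pool (4 MB objects by default)
    rados bench -p rbd 60 write --no-cleanup

    # meanwhile, on each OSD host, watch per-disk throughput
    dstat -d -D sdb,sdc,sdd,sdf,sdl,sdg,sdh,sdi
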
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


