Re: ceph-deploy 1.2.2 vs fedora 19


 



Jonas,

You can query the admin sockets of your monitors and OSDs to get a JSON listing of their running configuration. The commands will look something like:

# ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show

# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show



You can then inject new settings to running daemons with injectargs:

# ceph tell osd.* injectargs '--osd_max_backfills 10'

Or, you can add those settings to ceph.conf and restart the daemons.
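To make an injected setting permanent, the same value can go into ceph.conf. A minimal local sketch (the file path and [osd] section here are illustrative, reusing the osd_max_backfills value from the injectargs example above):

```shell
# Sketch: persist a setting in ceph.conf so it survives daemon restarts.
# Works on a local copy; the fsid is the one quoted later in this thread.
conf=./ceph.conf
cat > "$conf" <<'EOF'
[global]
fsid = f865694c-7a50-46a9-9550-f6b160c00313
EOF

# Append an [osd] section carrying the setting injected at runtime.
printf '\n[osd]\nosd_max_backfills = 10\n' >> "$conf"
grep -A1 '^\[osd\]' "$conf"
```

After pushing the edited file to the nodes, restarting the daemons picks the value up.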

Cheers,
Mike Dawson


On 12/5/2013 9:54 AM, Jonas Andersson wrote:
I mean, I have OSDs and MONs running now, but I see no mention of them in the current config file (/etc/ceph/ceph.conf), so backing that file up would not let me see where monitors/object stores/journals were placed. Is there a nifty command that pushes these defaults into something usable as a config file, so I can see how everything was set up once I am done with my tests? I want to run performance tests that I can tie to individual configs, so that I can revert to the best config found once I have tuned and compared the data.
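One way to keep such a per-test snapshot is to flatten the JSON from the admin socket's `config show` into ini-style lines that can be saved and diffed between runs. A sketch only: the JSON sample below stands in for real `ceph --admin-daemon ... config show` output, and the key names are taken from this thread.

```shell
# A captured sample stands in for real admin-socket JSON output.
cat > config-show.json <<'EOF'
{ "osd_max_backfills": "10", "osd_journal_size": "1024" }
EOF

# Flatten the JSON into sorted "key = value" lines for easy diffing.
python3 - <<'EOF'
import json

with open("config-show.json") as f:
    cfg = json.load(f)
for key in sorted(cfg):
    print(f"{key} = {cfg[key]}")
EOF
```

Saving one such dump per test run makes it straightforward to `diff` the configurations behind two benchmark results.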

-----Original Message-----
From: Alfredo Deza [mailto:alfredo.deza@xxxxxxxxxxx]
Sent: Thursday, December 05, 2013 3:34 PM
To: Jonas Andersson
Cc: ceph-users@xxxxxxxx
Subject: Re:  ceph-deploy 1.2.2 vs fedora 19

On Thu, Dec 5, 2013 at 9:18 AM, Jonas Andersson <Jonas.Andersson@xxxxxx> wrote:
Perfect, that worked very well. Thanks a lot.

Another question:

Using http://ceph.com/howto/deploying-ceph-with-ceph-deploy/ as a guide to set up my test cluster, I now have a working cluster with 12 OSDs in and up. I've created a client and a 10 GB RBD volume, mounted it, and written data; all good.

Looking at my ceph.conf it seems it's using all defaults:

[root@ceph02 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = f865694c-7a50-46a9-9550-f6b160c00313
mon_initial_members = ceph02, ceph03, ceph04
mon_host = 10.130.21.33,10.130.21.34,10.130.21.42
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true

Is there any way to dump the default running config to the config-file so I can start tinkering around?

What do you mean by dump? You can back up that copy (it is the one ceph-deploy uses), put a new one in place, push it to your nodes, and try that way.

"ceph --show-config" seems to show all running parameters, but I don't see any mention of the monitors/OSDs at all?

Last question:
I had too low a number of PGs, which caused a health warning. Since I sometimes type faster than I think, I adjusted the value to 128 (ceph osd pool set rbd pg_num 128), which turned out to be too high, but I can't seem to tune it down again. How do I achieve this?
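For reference: pg_num on an existing pool cannot be decreased, so "tuning it down" means copying the data into a new pool created with the desired pg_num. The pool name rbd_small below is hypothetical, and the commands are only written to a file as a sketch here, not executed against a cluster.

```shell
# pg_num cannot be reduced in place; the usual workaround is to create a
# new pool with the lower pg_num, copy the data, and swap the names.
# Written to a file as a sketch -- NOT run against a live cluster.
cat > pg-workaround.sh <<'EOF'
ceph osd pool create rbd_small 64 64
rados cppool rbd rbd_small
ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
ceph osd pool rename rbd_small rbd
EOF
cat pg-workaround.sh
```

Note that deleting the original pool destroys any data not yet copied, so the cppool step should be verified first.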

Thanks a lot in advance!

Kind regards

Jonas

-----Original Message-----
From: Alfredo Deza [mailto:alfredo.deza@xxxxxxxxxxx]
Sent: Sunday, December 01, 2013 6:30 PM
To: Jonas Andersson
Cc: ceph-users@xxxxxxxx
Subject: Re:  ceph-deploy 1.2.2 vs fedora 19

On Thu, Nov 28, 2013 at 8:25 AM, Jonas Andersson <Jonas.Andersson@xxxxxx> wrote:
Hi all,



I am seeing some weirdness when trying to deploy Ceph Emperor on
fedora 19 using ceph-deploy. Problem occurs when trying to install
ceph-deploy, and seems to point to the version of pushy in your repository:



Since ceph-deploy version 1.3 there is no longer a requirement on
pushy. You should update to the latest version (currently at 1.3.3)


[root@ceph02 ~]# yum install ceph-deploy

Loaded plugins: priorities, protectbase

imc-default                                       | 1.1 kB  00:00:00
imc-shared                                        | 1.1 kB  00:00:00
imc-systemimages                                  | 1.1 kB  00:00:00
imc-systemimages-shared                           | 1.1 kB  00:00:00

45 packages excluded due to repository priority protections

0 packages excluded due to repository protections

Resolving Dependencies

--> Running transaction check

---> Package ceph-deploy.noarch 0:1.2.2-0 will be installed

--> Processing Dependency: python-pushy >= 0.5.3 for package: ceph-deploy-1.2.2-0.noarch

--> Processing Dependency: pushy >= 0.5.3 for package: ceph-deploy-1.2.2-0.noarch

--> Processing Dependency: or for package: ceph-deploy-1.2.2-0.noarch

--> Processing Dependency: gdisk for package: ceph-deploy-1.2.2-0.noarch

--> Running transaction check

---> Package ceph-deploy.noarch 0:1.2.2-0 will be installed

--> Processing Dependency: python-pushy >= 0.5.3 for package: ceph-deploy-1.2.2-0.noarch

--> Processing Dependency: or for package: ceph-deploy-1.2.2-0.noarch

---> Package gdisk.x86_64 0:0.8.8-1.fc19 will be installed

--> Processing Dependency: libicuuc.so.50()(64bit) for package: gdisk-0.8.8-1.fc19.x86_64

--> Processing Dependency: libicuio.so.50()(64bit) for package: gdisk-0.8.8-1.fc19.x86_64

---> Package pushy.noarch 0:0.5.3-1 will be installed

--> Running transaction check

---> Package ceph-deploy.noarch 0:1.2.2-0 will be installed

--> Processing Dependency: python-pushy >= 0.5.3 for package: ceph-deploy-1.2.2-0.noarch

--> Processing Dependency: or for package: ceph-deploy-1.2.2-0.noarch

---> Package libicu.x86_64 0:50.1.2-9.fc19 will be installed

--> Finished Dependency Resolution

Error: Package: ceph-deploy-1.2.2-0.noarch (ceph-extras-noarch)
            Requires: python-pushy >= 0.5.3
            Available: python-pushy-0.5.1-6.1.noarch (ceph-extras-noarch)
                python-pushy = 0.5.1-6.1

Error: Package: ceph-deploy-1.2.2-0.noarch (ceph-extras-noarch)

            Requires: or

You could try using --skip-broken to work around the problem

You could try running: rpm -Va --nofiles --nodigest



To work around this I tried to use pip to install pushy 0.5.3:

[root@ceph02 pushy-master]# pip install pushy

Downloading/unpacking pushy

   Downloading pushy-0.5.3.zip (48kB): 48kB downloaded

   Running setup.py egg_info for package pushy



Installing collected packages: pushy

   Running setup.py install for pushy



Successfully installed pushy

Cleaning up...



Verifying:



[root@ceph02 ~]# pip list | grep pushy

pushy (0.5.3)



However, the installer does not seem to notice that pushy is there, and it fails on the same dependency with the same error.
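A likely explanation (an assumption, not stated in the thread): yum resolves dependencies against the RPM database, while pip installs into Python's site-packages, so a pip-installed pushy is invisible to yum. The two views can be compared; the commands are written to a file as a sketch here, since the package names from the thread are not installed locally.

```shell
# yum/rpm and pip keep separate package inventories and do not see each
# other's installs. Written to a file as a sketch -- NOT executed here.
cat > check-pushy.sh <<'EOF'
rpm -q python-pushy   # what yum/rpm consults when resolving dependencies
pip show pushy        # what pip consults (Python's site-packages)
EOF
cat check-pushy.sh
```

This is why upgrading to a ceph-deploy release without the pushy dependency (1.3+) sidesteps the problem entirely.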



Any clue what I'm doing wrong here?



Kind regards



Jonas


________________________________

The information in this e-mail is intended only for the person or
entity to which it is addressed.

It may contain confidential and /or privileged material. If someone
other than the intended recipient should receive this e-mail, he /
she shall not be entitled to read, disseminate, disclose or duplicate it.

If you receive this e-mail unintentionally, please inform us
immediately by "reply" and then delete it from your system. Although
this information has been compiled with great care, neither IMC
Financial Markets & Asset Management nor any of its related entities
shall accept any responsibility for any errors, omissions or other
inaccuracies in this information or for the consequences thereof, nor
shall it be bound in any way by the contents of this e-mail or its
attachments. In the event of incomplete or incorrect transmission,
please return the e-mail to the sender and permanently delete this message and any attachments.

Messages and attachments are scanned for all known viruses. Always
scan attachments before opening them.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





