Hello ceph-users,
I’m trying to set up a Ceph cluster, but it is taking me a little longer than I had hoped. There are some things I do not quite understand yet; hopefully some of you can help me out.
1) When using ceph-deploy, a ceph.conf file is created in the current directory and in the /etc/ceph directory. Which one is ceph-deploy using and which one should I edit?
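My guess is that the copy in the ceph-deploy working directory is the one to edit and that it then gets pushed out to /etc/ceph/ on the nodes, roughly like this (I’m not sure this is the intended workflow, so please correct me):

   # edit ./ceph.conf in the directory ceph-deploy runs from, then copy it to /etc/ceph/ on the node
   ceph-deploy --overwrite-conf config push cephnode1
   # ...and afterwards restart the daemons on that node so they pick up the change

Or is /etc/ceph/ceph.conf the file I should be editing directly?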
2) I have 6 OSDs running per machine. I zapped the disks with ceph-deploy disk zap, prepared/activated them with a separate journal on an SSD, and they are all running. (Roughly the commands I used, and the checks I fall back to, are shown after this list.)
a) ceph-deploy disk list doesn’t show me which file system is in use (or is ‘Linux Filesystem’, as it reports, a file system in its own right?). Nor does it show which partition or path is used for the journal.
b) Running parted doesn’t show me which file system is in use either (except, of course, that it is a ‘Linux Filesystem’)… I thought parted should be able to show me this?
c) When a GPT partition table is corrupt (a missing msdos/protective MBR, for example), ceph-deploy disk zap doesn’t work. But after repeating the command 4 times it does work, and the disk shows up as ‘Linux Filesystem’.
d) How can I set the file system that ceph-deploy disk zap uses when formatting the Ceph data disk? I would like to end up with XFS, for example.
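For reference, this is roughly what I ran per data disk (sdb and the SSD journal partition sdh1 are just placeholders for my actual devices):

   # wipe the disk and write a fresh GPT label
   ceph-deploy disk zap cephnode1:sdb
   # prepare and activate the OSD with its journal on a partition of the SSD
   ceph-deploy osd prepare cephnode1:sdb:/dev/sdh1
   ceph-deploy osd activate cephnode1:/dev/sdb1:/dev/sdh1

To find out which file system actually ended up on a data partition, I currently fall back to blkid and mount instead of ceph-deploy or parted:

   blkid /dev/sdb1          # prints TYPE="xfs" (or whatever was used)
   mount | grep sdb1        # shows the mount point and file system type

Is there a ceph-deploy command that shows this, including the journal partition each OSD uses?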
3) Is there a way to set the data partition for ceph-mon with ceph-deploy, or should I do it manually in ceph.conf? And how should I format that partition (which file system should I use)?
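If it has to be done by hand, I assume it would be something like the snippet below in ceph.conf, with the partition formatted and mounted at that path before the monitor is created (the section name and path are just my guesses):

   [mon.cephnode1]
   host = cephnode1
   # assumption on my side: point the monitor data directory at the dedicated partition
   mon data = /srv/ceph/mon/$id

Or is it better to simply mount the partition at the default /var/lib/ceph/mon/ location and leave the config alone?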
4) When running ceph status, this is the output I get:
root@cephnode1:/root# ceph status
   cluster: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
   health: HEALTH_WARN 37 pgs degraded; 192 pgs stuck unclean
   monmap e1: 1 mons at {cephnode1=172.16.1.2:6789/0}, election epoch 1, quorum 0 cephnode1
   osdmap e38: 6 osds: 6 up, 6 in
   pgmap v65: 192 pgs: 155 active+remapped, 37 active+degraded; 0 bytes data, 213 MB used, 11172GB / 11172GB avail
   mdsmap e1: 0/0/1 up
a) How do I get rid of the HEALTH_WARN message? Can I run some tool that initiates a repair? (See the commands listed below.)
b) I have not put any data in it yet, but it already uses a whopping 213 MB. Why is that?
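These are the commands I assume would show more detail about what is degraded/unclean, in case that output helps (I can post it if needed):

   ceph health detail           # lists the individual degraded / stuck PGs
   ceph osd tree                # shows how the 6 OSDs are placed in the CRUSH map
   ceph pg dump_stuck unclean   # dumps the PGs that are stuck unclean

I just don’t know yet what to look for in that output.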
5) Last but not least, my config file looks like this:
root@cephnode1:/root# cat /etc/ceph/ceph.conf
[global]
fsid = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
mon_initial_members = cephnode1
mon_host = 172.16.1.2
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true
This seems strange to me, since the documentation states that a minimal config for my setup should at least contain [mon.1] and [osd.1] through [osd.6] sections (a sketch of what I expected is below). I have set up separate journals for my OSDs, but they don’t show up in the conf file at all. Also, the journal partitions are 2 GB, not 1024 MB (if that is what osd_journal_size = 1024 means).
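To be concrete, based on the docs this is the kind of config I expected to see (a sketch only; the IDs, host names and journal path are how I imagine them, not something ceph-deploy generated):

   [mon.1]
   host = cephnode1
   mon addr = 172.16.1.2:6789

   [osd.1]
   host = cephnode1
   osd journal = /dev/sdh1   # placeholder for the 2 GB journal partition on the SSD

   # ...and similar [osd.2] through [osd.6] sections

Is it normal that ceph-deploy leaves all of this out of ceph.conf?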
I could really use your help, since I’m stuck at the moment.
Regards,
Johannes
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com