On Mon, Aug 26, 2013 at 10:45 AM, Johannes Klarenbeek <Johannes.Klarenbeek@xxxxxxx> wrote:

Hello ceph-users,

I'm trying to set up a Linux cluster, but it is taking me a little longer than I hoped for. There are some things that I do not quite understand yet. Hopefully some of you can help me out.
1)
When using ceph-deploy, a ceph.conf file is created in the current directory and in the /etc/ceph directory. Which one is ceph-deploy using, and which one should I edit?

The usual workflow is that ceph-deploy will use the one in the current directory to overwrite the remote one (the one in /etc/ceph/), but it will warn if the two differ and error out, saying it needs the `--overwrite-conf` flag to continue.

Aha, so the one in the current directory is leading. And if I make changes to that file, it is not overwritten by ceph-deploy?
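Right, the local file is the one to edit; ceph-deploy distributes it from there. A minimal sketch of the push step, assuming a node named cephnode1 (the hostname used later in this thread):

    # edit ceph.conf in the working directory, then push it to the node;
    # --overwrite-conf allows ceph-deploy to replace the remote
    # /etc/ceph/ceph.conf when the two files differ
    ceph-deploy --overwrite-conf config push cephnode1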
That ceph-deploy command calls `ceph-disk list` on the remote host, which in turn will not (as far as I could see) tell you what exact file system is in use.
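If you need the filesystem type, one workaround (not part of ceph-deploy) is to query the partition directly; /dev/sdd1 here is just the "ceph data" partition from the parted output further down:

    # blkid reports the filesystem type that ceph-disk list omits
    sudo blkid /dev/sdd1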
How are you calling parted? With what flags? Usually something like `sudo parted /dev/{device} print` would print some output. Can you show what you are getting back?
I started parted and then used the print command… hmm, but it now actually returns something else… my bad. This is what it returns, however (and it's using xfs):

root@cephnode1:/root# parted /dev/sdd print
Model: ATA WDC WD2000FYYZ-0 (scsi)
Disk /dev/sdd: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name       Flags
 1      1049kB  2000GB  2000GB  xfs          ceph data
So it didn't work 4 times and then it did? This does sound unexpected.

It is. But I have to say, I was fooling around a little with dd to wipe the disk clean.
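That dd pass may well explain the flakiness: GPT keeps a backup header and partition table at the end of the disk, so zeroing only the start of the drive can leave stale GPT metadata behind. A minimal sketch of a more thorough wipe, assuming the same /dev/sdd as above (sgdisk comes from the gdisk package):

    # zero the start of the disk (primary GPT header + partition table)
    sudo dd if=/dev/zero of=/dev/sdd bs=1M count=10
    # --zap-all destroys both the primary and the backup GPT structures
    sudo sgdisk --zap-all /dev/sdd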
This is currently not supported but could be added as a feature.
Seems like an important feature. How does ceph-deploy determine what file system the disk is zapped with?
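As far as I can tell it does not detect anything from the disk: the filesystem is created fresh at prepare time, and the type is configurable. A minimal sketch, assuming the stock `osd mkfs type` settings in ceph.conf (xfs being the default):

    [osd]
    # filesystem that ceph-disk will mkfs on the prepared data partition
    osd mkfs type = xfs
    # options passed to mkfs.xfs; -f forces mkfs over an existing signature
    osd mkfs options xfs = -f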
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com