Re: Which SSD method is better for performance?


On Mon, Feb 13, 2012 at 16:39, Paul Pettigrew
<Paul.Pettigrew@xxxxxxxxxxx> wrote:
> #1. place it in the main [osd] stanza and reference the whole drive as a single partition; or

As explained by others, that will not work.

> #2. partition up the disk, so 1x partition per SATA HDD, and place each partition in the [osd.N] portion
...
> I am asking therefore, is the added work (and constraints) of specifying down to individual partitions per #2 worth it in performance gains? Does it not also have a constraint, in that if I wanted to add more HDD's into the server (we buy 45 bay units, and typically provision HDD's "on demand" i.e. 15x at a time as usage grows), I would have to additionally partition the SSD (taking it offline) - but if it were #1 option, I would only have to add more [osd.N] sections (and not have to worry about getting the SSD with 45x partitions)?

If you need to touch the existing partitions on the SSD when you add
new disks, you'll need to take all the OSDs on that machine down,
which does not sound like a good idea. Maybe you should provision the
SSD with 1/45th-size journal partitions in the first place. You'll
lose some of the space to partitions that aren't in use yet, but I
really don't see a way around that without needing downtime on all the
OSDs when growing, and a good SSD will use the unallocated space
internally for wear leveling and asynchronous erase, so it may even
be faster.
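
To illustrate (device names, sizes, and hostnames below are made up
for the example, not taken from your setup), pre-partitioning the SSD
and pointing each [osd.N] section at its own partition would look
roughly like this in ceph.conf:

```ini
; hypothetical sketch: one SSD (/dev/sdb) pre-cut into 45 equal
; partitions, one journal partition per OSD on the host
[osd]
    osd journal size = 1000        ; journal size in MB, example value

[osd.0]
    host = node1
    osd journal = /dev/sdb1        ; 1/45th of the SSD

[osd.1]
    host = node1
    osd journal = /dev/sdb2

; ...add [osd.N] sections as you bring HDDs online; the SSD
; partitions already exist, so no downtime on the other OSDs
```

Adding disks later then only means adding new [osd.N] stanzas; the
existing journals and their partitions are never touched.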

> One final related question, if I were to use #1 method (which I would prefer if there is no material performance or other reason to use #2), then that specification (i.e. the "osd journal = /dev/sdb") SSD disk reference would have to be identical on all other hardware nodes, yes (I want to use the same ceph.conf file on all servers per the doco recommendations)? What would happen if for example, the SSD was on /dev/sde on a new node added into the cluster? References to /dev/disk/by-id etc are clearly no help, so should a symlink be used from the get-go? Eg something like "ln -s /dev/sdb /srv/ssd" on one box, and  "ln -s /dev/sde /srv/ssd" on the other box, so that in the [osd] section we could use this line which would find the SSD disk on all nodes "osd journal = /srv/ssd"?

Use labels: with a modern udev, labeled partitions show up under
/dev/disk/by-label/*, giving you a stable path that is the same on
every node regardless of whether the SSD enumerates as sdb or sde.
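
For example (label and device names here are illustrative, not from
this thread; note that raw journal partitions carry no filesystem, so
GPT partition labels via /dev/disk/by-partlabel/* are the variant that
applies to them, while /dev/disk/by-label/* matches filesystem labels):

```ini
; hypothetical ceph.conf fragment using a stable udev-provided path
; instead of a raw /dev/sdX name, so the same file works on all nodes
[osd.0]
    osd journal = /dev/disk/by-partlabel/journal-osd0

; the GPT partition label would be assigned once at provisioning
; time, e.g. with something like:
;   parted /dev/sdb mkpart journal-osd0 0% 2%
```

This avoids the symlink-per-host workaround entirely, since udev
recreates the by-label/by-partlabel links on every boot.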
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

