Information
  Name: test123
  ID: e13d0ffc-3ed4-4a22-b270-987e81b1ca8f
  Status: Available
Specs
  Size: 1 GB
  Created: Sept. 13, 2016, 7:12 p.m.
Attachments
  Attached To: Not attached
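As a sketch (assuming the cinder client is installed on the controller and admin credentials are sourced), the same volume state can be cross-checked from the CLI:

    # show status, size and attachment info for the volume above
    cinder show e13d0ffc-3ed4-4a22-b270-987e81b1ca8f
    # or with the unified OpenStack client, if available
    openstack volume show e13d0ffc-3ed4-4a22-b270-987e81b1ca8f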
[root@OSKVM1 ~]# fdisk -l
Disk /dev/sda: 599.6 GB, 599550590976 bytes, 1170997248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0002a631
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 1170997247 584985600 8e Linux LVM
Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 4294 MB, 4294967296 bytes, 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-home: 541.0 GB, 540977135616 bytes, 1056595968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb: 1099.5 GB, 1099526307840 bytes, 2147512320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/cinder--volumes-volume--e13d0ffc--3ed4--4a22--b270--987e81b1ca8f: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
2016-09-13 16:48:18.335 55367 INFO nova.compute.manager [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309 9ef033cefb684be68105e30ef2b3b651 - - -] [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] Attaching volume d90e4835-58f5-45a8-869e-fc3f30f0eaf3 to /dev/vdb
2016-09-13 16:48:20.548 55367 WARNING os_brick.initiator.connector [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309 9ef033cefb684be68105e30ef2b3b651 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0']. Will rescan & retry. Try number: 0.
2016-09-13 16:48:21.656 55367 WARNING os_brick.initiator.connector [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309 9ef033cefb684be68105e30ef2b3b651 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0']. Will rescan & retry. Try number: 1.
2016-09-13 16:48:25.772 55367 WARNING os_brick.initiator.connector [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309 9ef033cefb684be68105e30ef2b3b651 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0']. Will rescan & retry. Try number: 2.
2016-09-13 16:48:34.875 55367 WARNING os_brick.initiator.connector [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309 9ef033cefb684be68105e30ef2b3b651 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0']. Will rescan & retry. Try number: 3.
2016-09-13 16:48:42.418 55367 INFO nova.compute.resource_tracker [req-58348829-5b26-4835-ba5b-4e8796800b63 - - - - -] Auditing locally available compute resources for node controller
2016-09-13 16:48:43.841 55367 INFO nova.compute.resource_tracker [req-58348829-5b26-4835-ba5b-4e8796800b63 - - - - -] Total usable vcpus: 40, total allocated vcpus: 32
2016-09-13 16:48:43.842 55367 INFO nova.compute.resource_tracker [req-58348829-5b26-4835-ba5b-4e8796800b63 - - - - -] Final resource view: name=controller phys_ram=193168MB used_ram=47104MB phys_disk=503GB used_disk=296GB total_vcpus=40 used_vcpus=32 pci_stats=None
2016-09-13 16:48:43.872 55367 INFO nova.compute.resource_tracker [req-58348829-5b26-4835-ba5b-4e8796800b63 - - - - -] Compute_service record updated for OSKVM1:controller
2016-09-13 16:48:50.951 55367 WARNING os_brick.initiator.connector [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309 9ef033cefb684be68105e30ef2b3b651 - - -] ISCSI volume not yet found at: [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0']. Will rescan & retry. Try number: 4.
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309 9ef033cefb684be68105e30ef2b3b651 - - -] [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] Driver failed to attach volume d90e4835-58f5-45a8-869e-fc3f30f0eaf3 at /dev/vdb
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] Traceback (most recent call last):
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 256, in attach
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] device_type=self['device_type'], encryption=encryption)
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1108, in attach_volume
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] self._connect_volume(connection_info, disk_info)
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1058, in _connect_volume
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] driver.connect_volume(connection_info, disk_info)
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/iscsi.py", line 84, in connect_volume
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] device_info = self.connector.connect_volume(connection_info['data'])
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 254, in inner
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] return f(*args, **kwargs)
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/os_brick/initiator/connector.py", line 500, in connect_volume
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] raise exception.VolumeDeviceNotFound(device=host_devices)
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] VolumeDeviceNotFound: Volume device not found at [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0'].
2016-09-13 16:49:16.051 55367 ERROR nova.virt.block_device [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7]
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309 9ef033cefb684be68105e30ef2b3b651 - - -] [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] Failed to attach d90e4835-58f5-45a8-869e-fc3f30f0eaf3 at /dev/vdb
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] Traceback (most recent call last):
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4646, in _attach_volume
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] do_check_attach=False, do_driver_attach=True)
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 52, in wrapped
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] ret_val = method(obj, context, *args, **kwargs)
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 265, in attach
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] connector)
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] six.reraise(self.type_, self.value, self.tb)
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 256, in attach
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] device_type=self['device_type'], encryption=encryption)
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1108, in attach_volume
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] self._connect_volume(connection_info, disk_info)
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1058, in _connect_volume
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] driver.connect_volume(connection_info, disk_info)
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/iscsi.py", line 84, in connect_volume
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] device_info = self.connector.connect_volume(connection_info['data'])
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 254, in inner
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] return f(*args, **kwargs)
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] File "/usr/lib/python2.7/site-packages/os_brick/initiator/connector.py", line 500, in connect_volume
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] raise exception.VolumeDeviceNotFound(device=host_devices)
2016-09-13 16:49:16.773 55367 ERROR nova.compute.manager [instance: 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] VolumeDeviceNotFound: Volume device not found at [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0'].
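The os-brick warnings above mean the initiator never saw the expected by-path device. A minimal manual check of the iSCSI side (a sketch, assuming the target portal 10.24.0.4:3260 from the log) could look like:

    # list active iSCSI sessions and the SCSI devices attached to them
    iscsiadm -m session -P 3
    # force a rescan of all logged-in sessions
    iscsiadm -m session --rescan
    # check whether the expected by-path link was created by udev
    ls -l /dev/disk/by-path/ | grep d90e4835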
[root@OSKVM1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 558.4G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 557.9G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 4G 0 lvm [SWAP]
└─centos-home 253:2 0 503.8G 0 lvm /home
sdb 8:16 0 1T 0 disk
├─sdb1 8:17 0 1T 0 part
└─cinder--volumes-volume--e13d0ffc--3ed4--4a22--b270--987e81b1ca8f 253:3 0 1G 0 lvm
sdc 8:32 0 10G 0 disk
sdd 8:48 0 1G 0 disk
sde 8:64 0 1G 0 disk
sr0 11:0 1 1024M 0 rom
[root@OSKVM1 ~]# lsscsi -t
[0:2:0:0] disk /dev/sda
[10:0:0:0] cd/dvd sata: /dev/sr0
[12:0:0:0] disk iqn.2001-05.com.equallogic:0-1cb196-5fc83c107-b0a0000004f57ac9-volume1,t,0x1 /dev/sdb
[15:0:0:0] disk iqn.2010-10.org.openstack:volume-8139b29a-2b4e-43fb-bcde-9580738ba650,t,0x1 /dev/sdc
[16:0:0:0] disk iqn.2010-10.org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3,t,0x1 /dev/sdd
[17:0:0:0] disk iqn.2010-10.org.openstack:volume-e13d0ffc-3ed4-4a22-b270-987e81b1ca8f,t,0x1 /dev/sde
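Note that lsscsi shows the target for volume d90e4835-... is actually logged in and mapped to /dev/sdd, so the LUN itself is reachable; only the by-path symlink that os-brick waits for is missing. A quick, generic way to see which symlinks udev did create for that device (a sketch, nothing Ceph- or OpenStack-specific) is:

    # list the symlinks udev created for the block device
    udevadm info --query=symlink --name=/dev/sdd
    # wait for any pending udev events to finish processing
    udevadm settle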
Also, most of the terminology here looks like it comes from OpenStack and SAN. The
correct terminology that should be used for Ceph is here:
http://docs.ceph.com/docs/master/glossary/
On Thu, Aug 18, 2016 at 8:57 AM, Gaurav Goyal <er.gauravgoyal@xxxxxxxxx> wrote:
> Hello Mart,
>
> My apologies for that!
>
> We are a couple of office colleagues using a shared Gmail account, which has
> caused the confusion.
>
> Thanks for your response!
>
> On Thu, Aug 18, 2016 at 6:00 AM, Mart van Santen <mart@xxxxxxxxxxxx> wrote:
>>
>> Dear Gaurav,
>>
>> Please respect everyone's time and timezone differences. Flooding the
>> mailing list won't help.
>>
>> see below,
>>
>>
>>
>> On 08/18/2016 01:39 AM, Gaurav Goyal wrote:
>>
>> Dear Ceph Users,
>>
>> Awaiting some suggestion please!
>>
>>
>>
>> On Wed, Aug 17, 2016 at 11:15 AM, Gaurav Goyal <er.gauravgoyal@xxxxxxxxx>
>> wrote:
>>>
>>> Hello Mart,
>>>
>>> Thanks a lot for the detailed information!
>>> Please find my response inline and help me to get more knowledge on it
>>>
>>>
>>> Ceph works best with more hardware. It is not really designed for small
>>> scale setups. Of course small setups can work for a PoC or testing, but I
>>> would not advise this for production.
>>>
>>> [Gaurav] : We need this setup for PoC or testing.
>>>
>>> If you want to proceed however, have a good look at the manuals and this
>>> mailing list archive, and do invest some time to understand the logic and
>>> workings of Ceph before ordering hardware or starting work.
>>>
>>> At least you want:
>>> - 3 monitors, preferably on dedicated servers
>>> [Gaurav] : With my current setup, can I install a MON on Host 1 -->
>>> Controller + Compute1, Host 2 and Host 3?
>>>
>>> - Per disk you will be running a ceph-osd instance. So a host with 2
>>> disks will run 2 osd instances. More OSD processes means better performance,
>>> but also more memory and CPU usage.
>>>
>>> [Gaurav] : Understood. That means having 1TB x 4 would be better than
>>> 2TB x 2.
>>
>> Yes, more disks will do more IO
>>>
>>>
>>> - By default Ceph uses a replication factor of 3 (it is possible to set
>>> this to 2, but it is not advised)
>>> - You cannot fill up disks to 100%; also, data will not distribute evenly
>>> over all disks, so expect disks to be filled up (on average) to a maximum of
>>> 60-70%. You want to add more disks once you reach this limit.
>>>
>>> All in all, a setup of 3 hosts with 2x2TB disks each will result
>>> in a net data availability of (3x2x2TBx0.6)/3 = 2.4 TB
>>>
>>> [Gaurav] : As this is going to be a test lab environment, can we change
>>> the configuration to have more capacity rather than redundancy? How can we
>>> achieve it?
>>
>>
>> Ceph has excellent documentation. This is easy to find; search for
>> "the number of replicas". You want to set both "size" and "min_size" to 1 in
>> this case.
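For a throw-away test pool, the settings Mart refers to are per-pool options; a minimal sketch (assuming the default pool name rbd, and accepting that size/min_size of 1 means zero redundancy) would be:

    ceph osd pool set rbd size 1
    ceph osd pool set rbd min_size 1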
>>
>>> If speed is required, consider SSDs (for data & journals, or only
>>> journals).
>>>
>>> In your email you mention "compute1/2/3". Please note that if you use the
>>> rbd kernel driver, it can interfere with the OSD process, and it is not
>>> advised to run the OSD and the kernel driver on the same hardware. If you
>>> still want to do that, split it up using VMs (we have a small testing cluster
>>> where we mix compute and storage; there we have the OSDs running in VMs)
>>>
>>> [Gaurav] : Within my mentioned environment, how can we split the rbd kernel
>>> driver and the OSD process? Should it be the rbd kernel driver on the
>>> controller and OSD processes on the compute hosts?
>>>
>>> Since my Host 1 is Controller + Compute1, can you please share the steps
>>> to split it up using VMs, as suggested by you?
>>
>>
>> We are running kernel rbd on dom0 and OSDs in domU, as well as a monitor in
>> domU.
>>
>> Regards,
>>
>> Mart
>>
>>
>>
>>
>>>
>>> Regards
>>> Gaurav Goyal
>>>
>>>
>>> On Wed, Aug 17, 2016 at 9:28 AM, Mart van Santen <mart@xxxxxxxxxxxx>
>>> wrote:
>>>>
>>>>
>>>> Dear Gaurav,
>>>>
>>>> Ceph works best with more hardware. It is not really designed for small
>>>> scale setups. Of course small setups can work for a PoC or testing, but I
>>>> would not advise this for production.
>>>>
>>>> If you want to proceed however, have a good look at the manuals and this
>>>> mailing list archive, and do invest some time to understand the logic and
>>>> workings of Ceph before ordering hardware or starting work.
>>>>
>>>> At least you want:
>>>> - 3 monitors, preferably on dedicated servers
>>>> - Per disk you will be running a ceph-osd instance. So a host with 2
>>>> disks will run 2 osd instances. More OSD processes means better performance,
>>>> but also more memory and CPU usage.
>>>> - By default Ceph uses a replication factor of 3 (it is possible to set
>>>> this to 2, but it is not advised)
>>>> - You cannot fill up disks to 100%; also, data will not distribute evenly
>>>> over all disks, so expect disks to be filled up (on average) to a maximum of
>>>> 60-70%. You want to add more disks once you reach this limit.
>>>>
>>>> All in all, a setup of 3 hosts with 2x2TB disks each will result
>>>> in a net data availability of (3x2x2TBx0.6)/3 = 2.4 TB
>>>>
>>>>
>>>> If speed is required, consider SSDs (for data & journals, or only
>>>> journals).
>>>>
>>>> In your email you mention "compute1/2/3". Please note that if you use the
>>>> rbd kernel driver, it can interfere with the OSD process, and it is not
>>>> advised to run the OSD and the kernel driver on the same hardware. If you
>>>> still want to do that, split it up using VMs (we have a small testing cluster
>>>> where we mix compute and storage; there we have the OSDs running in VMs)
>>>>
>>>> Hope this helps,
>>>>
>>>> regards,
>>>>
>>>> mart
>>>>
>>>>
>>>>
>>>>
>>>> On 08/17/2016 02:21 PM, Gaurav Goyal wrote:
>>>>
>>>> Dear Ceph Users,
>>>>
>>>> I need your help to redesign my ceph storage network.
>>>>
>>>> As suggested in earlier discussions, I must not use SAN storage, so we
>>>> have decided to remove it.
>>>>
>>>> Now we are ordering Local HDDs.
>>>>
>>>> My Network would be
>>>>
>>>> Host1 --> Controller + Compute1, Host2 --> Compute2, Host3 --> Compute3
>>>>
>>>> Is this the right setup for a Ceph network? For Host1 and Host2, we are
>>>> using one 500GB disk for the OS on each host.
>>>>
>>>> Should we use the same size storage disks (500GB x 8) for the Ceph
>>>> environment, or can I order 2TB disks for the Ceph cluster?
>>>>
>>>> Making it
>>>>
>>>> 2TB x 2 on Host1, 2TB x 2 on Host2, 2TB x 2 on Host3
>>>>
>>>> 12TB in total. Replication factor 2 should make it 6TB?
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Mart van Santen
>>>> Greenhost
>>>> E: mart@xxxxxxxxxxxx
>>>> T: +31 20 4890444
>>>> W: https://greenhost.nl
>>>>
>>>> A PGP signature can be attached to this e-mail,
>>>> you need PGP software to verify it.
>>>> My public key is available in keyserver(s)
>>>> see: http://tinyurl.com/openpgp-manual
>>>>
>>>> PGP Fingerprint: CA85 EB11 2B70 042D AF66 B29A 6437 01A1 10A3 D3A5
>>>>
>
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com