RE: ceph-disk disk partition alignment

Hi Alfredo,
As expected, I wanted to confirm that this is happening with ceph-disk as well. After running 'ceph-disk prepare' I am seeing the same behavior:

Disk /dev/sdj: 7681.5 GB, 7681501126656 bytes
256 heads, 63 sectors/track, 116280 cylinders, total 1875366486 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 16384 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdj1               1  1875366485  3206498644   ee  GPT
Partition 1 does not start on physical sector boundary.
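
For reference, here is a minimal Python sketch of the same alignment check (this is not ceph-disk's code; it assumes the usual sysfs layout, that the kernel reports the partition start in 512-byte units, and 'sdj'/'sdj1' are only example names):

# Check whether a partition's byte offset is a multiple of the device's
# physical block size, using only sysfs. Sketch only, not ceph-disk code.
def is_aligned(dev: str, part: str) -> bool:
    with open(f"/sys/block/{dev}/queue/physical_block_size") as f:
        physical = int(f.read())
    # The kernel block layer reports 'start' in 512-byte units,
    # independent of the logical sector size.
    with open(f"/sys/block/{dev}/{part}/start") as f:
        start_512 = int(f.read())
    return (start_512 * 512) % physical == 0

print("aligned" if is_aligned("sdj", "sdj1") else "NOT aligned to physical sector boundary")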

Thanks & Regards
Somnath

-----Original Message-----
From: Somnath Roy 
Sent: Tuesday, February 23, 2016 8:58 AM
To: 'Alfredo Deza'
Cc: ceph-devel
Subject: RE: ceph-deploy disk partition alignment

Yes, as Loic mentioned, the subject should say ceph-disk, not ceph-deploy :-) I saw that ceph-disk returns a hardcoded sector 1 in the code under some condition (I forget which). Anyway, I will try ceph-disk directly to see the behavior.
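
For comparison, a rough sketch (not ceph-disk's actual logic) of how the first usable data sector could be derived from sysfs instead of being hardcoded to 1; the 1 MiB floor mirrors sgdisk's default alignment, and 'sdj' is only an example device:

# Derive an aligned first sector from the device's block sizes.
# Sketch under the assumptions above; not taken from ceph-disk.
def first_aligned_sector(dev: str, min_align_bytes: int = 1024 * 1024) -> int:
    with open(f"/sys/block/{dev}/queue/logical_block_size") as f:
        logical = int(f.read())
    with open(f"/sys/block/{dev}/queue/physical_block_size") as f:
        physical = int(f.read())
    align_bytes = max(physical, min_align_bytes)
    return align_bytes // logical  # 1 MiB / 4096 bytes = sector 256 on this disk

print(first_aligned_sector("sdj"))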

Thanks & Regards
Somnath

-----Original Message-----
From: Alfredo Deza [mailto:adeza@xxxxxxxxxx]
Sent: Tuesday, February 23, 2016 4:12 AM
To: Somnath Roy
Cc: ceph-devel
Subject: Re: ceph-deploy disk partition alignment

On Mon, Feb 22, 2016 at 1:27 PM, Somnath Roy <Somnath.Roy@xxxxxxxxxxx> wrote:
> Hi,
> I am seeing that ceph-deploy creates the data partition starting at sector 1, ignoring what sgdisk recommends (256 on my disk). Basically, the partition should be aligned to the physical sector size (reported in /sys/block/<device>/queue/physical_block_size). In my case the disk is 16K physical and 4K logical, so 256 is perfectly fine, as sgdisk/fdisk decides internally.
> Disk performance will be severely impacted by partitioning this way from ceph-deploy. Is there any reason why we are doing it like this?

I don't think ceph-deploy is particularly opinionated on this. Have you tried using ceph-disk directly to see if the behavior persists?

For OSDs/disks, ceph-deploy mostly proxies the work to ceph-disk on the remote node.
>
> Thanks & Regards
> Somnath