Re: copy full system from old disk to a new one

On 19/02/2013 20:15, Reindl Harald wrote:


On 19.02.2013 20:59, Gordan Bobic wrote:
On 19/02/2013 19:42, Reindl Harald wrote:


On 19.02.2013 20:24, Gordan Bobic wrote:
On 19/02/2013 19:05, Reindl Harald wrote:


On 19.02.2013 20:02, Gordan Bobic wrote:
what exactly do you need to align on the partitions?

For a start, making sure your RAID implementation puts the metadata
at the end of the disk, rather than the beginning.
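
A minimal sketch of what that means, assuming Linux md/RAID managed with mdadm
and placeholder device names: the superblock format chosen at creation time
decides where the metadata lives. Formats 0.90 and 1.0 put it at the end of
each member device, while 1.1 and 1.2 put it at or near the start.

# create a mirror whose metadata sits at the end of the members
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda1 /dev/sdb1
# confirm which superblock format the array ended up with
mdadm --detail /dev/md0 | grep Version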

"my RAID implementation"?
LINUX SOFTWARE RAID

and this is how the RAID partitions look
no problems for years

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000ae2c
      Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   fd  Linux raid autodetect
/dev/sda2         1026048    31746047    15360000   fd  Linux raid autodetect
/dev/sda3        31746048  3906971647  1937612800   fd  Linux raid autodetect

[root@srv-rhsoft:/downloads]$ sfdisk -d /dev/sda
# partition table of /dev/sda
unit: sectors
/dev/sda1 : start=     2048, size=  1024000, Id=fd, bootable
/dev/sda2 : start=  1026048, size= 30720000, Id=fd
/dev/sda3 : start= 31746048, size=3875225600, Id=fd
/dev/sda4 : start=        0, size=        0, Id= 0
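
On the thread's actual subject (copying a full system to a new disk), a dump
like the one above can be replayed onto the new disk to reproduce the same
partition layout. A minimal sketch, with placeholder device names and assuming
the new disk is at least as large as the old one:

sfdisk -d /dev/sda > sda.parttable   # save the old disk's partition table
sfdisk /dev/sdb < sda.parttable      # recreate the same layout on the new disk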

That's the MD partition alignment, not the alignment of the FS space within the MD device. The two are not the same.

maybe you should read older posts in the thread

Looking for what, exactly?

[root@srv-rhsoft:/downloads]$ tune2fs -l /dev/md1
[...]
This won't tell you the FS alignment against the raw underlying disk sectors.
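
A sketch of how to work it out instead, assuming 1.x metadata (which keeps a
data offset inside each member) and placeholder device names:

mdadm --examine /dev/sda2 | grep -i 'data offset'   # sectors skipped inside the member
sfdisk -d /dev/sda | grep sda2                      # the partition's own start sector
# raw start of the FS data = partition start sector + data offset; that is the
# number to check against whatever boundary you care about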

Blocks per group:         32768

This is sub-optimal in almost all cases except RAID1 or single-disk, as I explained earlier.

how do you measure "sub-optimal"?

If you have a disk that you have to hit for every file access that isn't cached, that's extremely sub-optimal when you could be distributing that load across all of your disks. It's essentially 1/n of the performance you could be getting, where n is the number of disks.
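
One common way to get that distribution is to tell mke2fs about the RAID
geometry, which staggers the block/inode bitmaps across the members. A sketch
with assumed numbers only: a 512K chunk, 4K blocks and a 4-disk RAID5, i.e.
three data disks:

# stride       = chunk size / block size = 512K / 4K = 128
# stripe-width = stride * data disks     = 128 * 3   = 384
mkfs.ext4 -b 4096 -E stride=128,stripe-width=384 /dev/md1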

the time wasted re-installing a complex setup and starting the configuration
from scratch, especially if you have more than one clone of the same
machine, will never be worth 1, 2 or 3% of theoretical performance

Theoretical performance gains are much greater than that. Real performance gains largely depend on the size of your FS and the amount of free RAM you have for caching the block group header metadata.

And yes - not getting things right the first time is expensive. So get it right first time. :)

so I do not give a damn about a few percent, and as long as SSDs are
way too expensive to store some TB in RAID setups they are not an
option, and once they are at a reasonable price they will be free from
early-adopter problems

as said: right NOW I would not store any important data on an SSD,
and I do not own any unimportant data, because if something is
unimportant I go ahead and delete it altogether

You really have little or no idea what you are talking about. I have had many more mechanical disks fail than SSDs, and I have yet to see an SSD (a proper SSD, not generic USB/CF/SD flash media) actually fail due to wear-out. SSDs at the very least tend to fail more gracefully (become read-only) than mechanical disks (typically massive media failure). Flash write endurance on reasonable branded SSDs simply isn't an issue in any way, even in a worst-case scenario (e.g. a runaway logging application). Even if your media is only good for 3000 erase cycles and you have a 60GB SSD, that is 180TB of writes before you wear it out. Even if you overwrite the whole disk once per day, that is still 8 years of continuous operation. It's not worth worrying about. Not by a long way.
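
Spelling that arithmetic out, with the same assumed figures of 3000 erase
cycles and a 60GB drive:

echo "$(( 3000 * 60 )) GB"      # total write endurance: 180000 GB, i.e. ~180TB
echo "$(( 3000 / 365 )) years"  # one full 60GB overwrite per day: ~8 years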

To give you an idea, I have a 24/7 server here with a 4GB rootfs (ext4, no journal), including /var/log, which gets yum-updated reasonably regularly. It was created in May 2011 and has since then seen a grand total of 31GB of writes, according to dumpe2fs. If it were on flash media, that would be about 8 full overwrites, with 2992 remaining. Or to put it another way, it has used up about 0.26% of its life expectancy so far over 20 months. I would worry far more about silent bit-rot on traditional RAID mechanical disks.
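
That figure comes straight out of the ext4 superblock; a sketch of reading it,
with a placeholder device name:

dumpe2fs -h /dev/md0 2>/dev/null | grep -i 'lifetime writes'
# prints something like "Lifetime writes:          31 GB"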

maybe you install from scratch regularly
I never do, and I have been on board since Fedora Core 3

My re-installation cycle broadly follows the EL release cycles.

Gordan

