Re: Booting Software RAID




..this is nice, but way more complicated than it needs to be.
we're talking just slices /containers/ and data /files/. at the end of
the so-called
day, everything is a file in *ix - even printers.. whatever.

... restoring the below would be a challenge for someone not truly 
intimate with
this scheme.  it's great until 'something goes wrong.. and it will'

this reminds me of the early Linux days when the man pages were useless
to the people
I would hire.. no examples, just raw statements from propeller-heads.
who cares if you geek out all night because you don't have a date on Friday?

.. in any event, you never need more than /four/ or /five/ partitions;
anything more and you're just showing off... really.

folks, I explained the other day all it takes to get up and running.

KEEP IT SIMPLE. this is not complicated stuff, unless you write code in
FORTRAN or something
sick like that, /enter: assembly.

about the OS (as you're calling it, must be a cross-over from
Microsoft): that partition SHOULD NEVER
fill up. if it does, fire the SysAdmin, because he's drinking or using the
non-drinking kind of Coke... in mass
quantities...

You could EASILY write a script (anyone do that here?) -- to monitor
that OS file system and send
alerts based on thresholds and triggers (notify the monitoring people
before they even get notified.. it's
a lot of fun!) -- and put it in the crontab - // cron // - get yourself
some...
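
a bare-bones sketch of that kind of monitor -- the threshold, mount point,
and mail address below are made-up placeholders, so pick your own:

#!/bin/sh
# check-osfs.sh -- complain when the OS file system gets too full
# (illustrative only: adjust THRESHOLD, MOUNTPOINT, and ADMIN to taste)
THRESHOLD=80                 # percent used that triggers an alert
MOUNTPOINT=/                 # the OS file system to watch
ADMIN=root@localhost         # who gets the nagging email

USED=$(df -P "$MOUNTPOINT" | awk 'NR==2 { gsub("%","",$5); print $5 }')

if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "$MOUNTPOINT is at ${USED}% on $(hostname)" \
        | mail -s "disk space warning: $(hostname)" "$ADMIN"
fi

drop it somewhere like /usr/local/sbin and let cron run it, e.g. every
fifteen minutes from root's crontab:

*/15 * * * * /usr/local/sbin/check-osfs.sh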

Better living through monitoring..

If a file system EVER filled up on my watch, the whole team would be
escorted out the door.... with their
badges in my hand... geezus.  I love these 'gurus' making it complicated.

The real talent and skill is in keeping it simple.
YOU WILL have to restore this someday.. and I've said this before, too:
YOU MAY, and probably will,
be gone in a few years (if you're not, you're really not that good at
this game.. anyone who
stays in an IT job for more than FIVE years is not growing, except
older....). Really.

Oh, I offer this as real-world, world-wide experience. Pick a country,
I've done this stuff there..
Probably where you are now, even Canada, as they have electricity
there now and lots
of CLOUDS based on their weather patterns.

PARTITIONS one could use:

swap  (you know the rules here; twice your memory and NOT any more --
why??? do you breathe more than you need?)
/home  (give your users a break; it makes restores VERY EASY for them.....
and they won't hate you)
/opt  (lots of code gets installed here if you have a real job and use
real applications, and aren't home alone on Friday night like I think some
of these posters are)
/var  (if you want bonus points)
/socalledDATA, which really could be put under /opt.

Wherever you're installing *ix code... that file system should have 52
to 60% of your store.
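
an /etc/fstab for that kind of layout might look something like the
following -- the device names and the ext4 choice are made up for the sake
of illustration, so map them onto your own disks (or LVs):

# device     mount point   type   options    dump  pass
/dev/sda1    /             ext4   defaults   1     1
/dev/sda2    swap          swap   defaults   0     0
/dev/sda3    /home         ext4   defaults   1     2
/dev/sda4    /opt          ext4   defaults   1     2
/dev/sda5    /var          ext4   defaults   1     2
/dev/sda6    /data         ext4   defaults   1     2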

I'm just saying.....

That person who said the OS file system grows and crashes should edit
their resume, as YOU'RE FIRED!
MONITOR.

Oh, those LOG FILES, um, put them on the SAN -- yeah, that's it.

I hope this helps the people who get it and don't overly complicate
things. It's just Linux -- it was mostly written
by 'one guy...' in Scandinavia as a class project -- then they all
jumped in.

I remember that post from that infamous creator.....

Get yourself some simplicity and contribute that... it works.  AND it 
makes you look smarter.

No one likes an arrogant geek.

Wizard of Hass!
Much more than a Linux man --


On 1/30/2014 6:29 AM, James B. Byrne wrote:
> On Wed, January 29, 2014 11:57, Lists wrote:
>> On 01/29/2014 08:15 AM, Matt wrote:
>>> If I am putting both 4TB drives in a single RAID1 array for /vz would
>>> there be any advantage to using LVM on it?
>> My (sometimes unpopular) advice is to set up the partitions on servers
>> into two categories:
>>
>> 1) OS
>> 2) Data
>>
>> OS partitions don't really grow much. Most of our servers' OS partitions
>> total less than 10 GB of used space after years of 24x7 use. I recommend
>> keeping things *very* *simple* here, avoid LVM. I use simple software
>> RAID1 with bare partitions.
>>
>> Data partitions, by definition, would be much more flexible. As your
>> service becomes more popular, you can get caught in a double bind that
>> can be very hard to escape: On one hand, you need to add capacity
>> without causing downtime because people are *using* your service
>> extensively, but on the other hand you can't easily handle a day or so
>> to transfer TBs of data because people are *relying* on your service
>> extensively. To handle these cases you need something that gives you the
>> ability to add capacity without (much) downtime.
>>
>> LVM can be very useful here, because you can add/upgrade storage without
>> taking the system offline, and although there *is* some downtime when
>> you have to grow the filesystem (EG when using Ext* file systems) it's
>> pretty minimal.
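
for what it's worth, the online grow being described there is only a
handful of commands. the volume group, logical volume, and disk names
below are placeholders, not anything from the poster's actual setup:

pvcreate /dev/sdc                       # prepare the new disk for LVM
vgextend vg_data /dev/sdc               # add it to the existing volume group
lvextend -L +2T /dev/vg_data/lv_data    # grow the logical volume
resize2fs /dev/vg_data/lv_data          # grow the ext4 file system to match

(lvextend -r would fold the resize2fs step into the lvextend call.)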
>>
>> So I would strongly recommend using something to manage large amounts of
>> data with minimal downtime if/when that becomes a likely scenario.
>>
>> Comparing LVM+XFS to ZFS, ZFS wins IMHO. You get all the benefits of LVM
>> and the file system, along with the almost magical properties that you
>> can get when you combine them into a single, integrated whole. Some of
>> ZFS' data integrity features (See RAIDZ) are in "you can do that?"
>> territory. The main downsides are the slightly higher risk that ZFS on
>> Linux' "non-native" status can cause problems, though in my case, that's
>> no worry since we'll be testing any updates carefully prior to roll out.
>>
>> In any event, realize that any solution like this (LVM + XFS/Ext, ZFS,
>> or BTRFS) will have a significant learning curve. Give yourself *time*
>> to understand exactly what you're working with, and use that time
>> carefully.
>
> Our default partitioning scheme for new hosts, whether virtualised or not,
> looks something like this:
>
> df
> Filesystem                      1K-blocks    Used Available Use% Mounted on
> /dev/mapper/vg_inet01b-lv_root    8063408 1811192   5842616  24% /
> tmpfs                             1961368       0   1961368   0% /dev/shm
> /dev/vda1                          495844  118294    351950  26% /boot
> /dev/mapper/vg_inet01b-lv_tmp     1007896   51800    904896   6% /tmp
> /dev/mapper/vg_inet01b-lv_log     1007896   45084    911612   5% /var/log
> /dev/mapper/vg_inet01b-lv_spool   8063408  150488   7503320   2% /var/spool
>
> The capacities assigned initially vary based on expected need and available
> disk.  As everything is an lv expanding volume sizes when required is not
> exceedingly burdensome.  I used to keep / as a non-lv but for the past few
> years, since CentOS-5 I think, I have made that an lv as well and my
> experience to date has been positive.
>
> Anything expected to continually increase over time goes under /var as a new
> lv.  For example:  On systems with business applications that store
> transaction files we have a dedicated lv mounted at /var/data/appname (for web
> apps) or /var/spool/appname (for everything else).  On a system hosting an
> RDBMS we generally give /var/lib or /var/lib/dbmsname its own lv although on
> the dedicated dbms hosts we typically just mount all of /var as an lv.
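
a dedicated per-application lv of the kind described above can be carved
out roughly like this; lv_appdata, the 20G size, and appname are made-up
placeholders, and vg_inet01b is only borrowed from the df listing above:

lvcreate -L 20G -n lv_appdata vg_inet01b              # new logical volume
mkfs.ext4 /dev/vg_inet01b/lv_appdata                  # put a file system on it
mkdir -p /var/data/appname                            # mount point for the app's data
mount /dev/vg_inet01b/lv_appdata /var/data/appname
# plus a matching /etc/fstab entry so it survives a reboot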
>
> This is based on past experience, usually bad, where the root file-system
> became filled by unmanaged processes (lack of trimming stale files), or DOS
> attacks (log files generally), or unexpectedly large transaction volumes
> (/var/spool).  As all but one of our hosts have no local users besides
> administrative accounts /home is left in root.  On the remaining host that has
> local user accounts /home is an lv as well.
>
>

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos



