But... the "cloud" isn't the same. One of the most dangerous things one can do is create the *perception* of security; it's one of the problems I have with the naked-pic cancer machines. A dog and a metal detector would be infinitely more effective, and less obtrusive, but they don't look as cool as a machine that sees through your clothes. A false sense of security is *dangerous*.
S3-backed images use S3... there aren't real "partitions" anyway. It's all HTTP requests to a massive web server that just handles PUT/POST/DELETE/HEAD/etc. requests; it's not discrete storage. One shouldn't consider it "secure," nor should one worry about breaking apart "partitions" for performance reasons when using S3. In the end, there's really no reason to have more than a single volume for an S3-backed instance.
"cloud" is a different paradigm, and it's important to not try to shoe-horn old-school best-practices on to the new platforms that are, really, not at all the same.
That said - EBS is a SAN-like interface, from my understanding. It is (supposedly) a discrete-ish volume specific to you. You can even encrypt it, with barely any more overhead than a normal encrypted SAN device would have. So yes - break apart the volumes for an EBS-backed instance. Amazon actually recommends this, because it spreads out the I/O. If you create 2 volumes, they won't be on the same SAN, which, through usage normalization, means your spikes won't line up with other people's spikes, and you can distribute those high-impact periods. EBS-backed instances are a bit less "cloud"-like, and somewhat more like what people are used to.
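As a rough sketch, splitting an EBS-backed instance across two volumes could look like this in /etc/fstab (the device names /dev/xvdf and /dev/xvdg, the mount points, and the filesystem type are all assumptions here - they depend on how you attach the volumes and what you run):

```
# hypothetical fstab entries, one line per attached EBS volume
/dev/xvdf  /data            ext4  defaults,noatime  0 2
/dev/xvdg  /var/lib/mysql   ext4  defaults,noatime  0 2
```

With something like that, database I/O and general application I/O land on separate volumes, which is the spread-out-the-spikes effect described above.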
Oh, while we're on the subject - NEVER use S3 for swap. You're better off running out of RAM than trying to swap to a web server somewhere else - think about it :)
Brian
On Mon, Dec 6, 2010 at 11:37 AM, István <leccine@xxxxxxxxx> wrote:
Yeah, it is a viable solution as well, but it requires a bit more attention from the admin side. I just used the xvdc device for /var, which was easier, and it killed two birds with one stone since logging also goes there:

/dev/xvdc1  40G  401M  37G  2%  /var

I also like the BSD sort of installation where everything is separated, so you can specify different behavior for /tmp, /home, and /var, like enabling the noexec or nosuid parameters which are typically applied to mount points. But of course you could just modify mysql/apache/... to sit on the pre-configured devices.

I.

On Mon, Dec 6, 2010 at 5:58 PM, Brian LaMere <brian@xxxxxxxxxxxxxxxxxxxx> wrote:
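A sketch of what those mount options might look like in /etc/fstab, following the example above (the ext4 filesystem type and the tmpfs-backed /tmp entry are assumptions, not from the original message):

```
# nosuid on /var; noexec+nosuid on a tmpfs /tmp (hypothetical entries)
/dev/xvdc1  /var  ext4   defaults,nosuid          0 2
tmpfs       /tmp  tmpfs  defaults,noexec,nosuid   0 0
```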
you could always just deploy the LAMP stuff to the other two dirs mounted - no reason mysql databases couldn't be in /data, for instance. /var/lib/mysql doesn't really make that much sense as a place to actually /leave/ it, after all ;)

Brian

On Sun, Dec 5, 2010 at 6:03 AM, István <leccine@xxxxxxxxx> wrote:
Brilliant! In the meantime I am trying to resize the root fs, somehow splitting /dev/xvdc for /var and so on.

Thank you guys.

Regards,
Istvan

On Sun, Dec 5, 2010 at 1:39 PM, Marek Goldmann <mgoldman@xxxxxxxxxx> wrote:
Hi Istvan,
Yes, we've talked about this before. The whole 10GB will be used for S3-based AMIs once we publish the updated AMIs, right Justin?
Thanks!
--Marek
On 2010-12-05, at 13:10, István wrote:
> Hey,
>
> Don't you think it would be a good idea to have at least 10G for / in the FC14 EC2 image?
>
>
> 2G is a bit small compared to the many hundreds of GB available; it is hardly enough for a typical LAMP installation or any kind of production server, even if you store the content in /mnt.
>
>
> Regards,
> Istvan
>
> --
> the sun shines for all
>
> http://blog.l1x.me
> cloud mailing list
> cloud@xxxxxxxxxxxxxxxxxxxxxxx
> https://admin.fedoraproject.org/mailman/listinfo/cloud
--
the sun shines for all
http://blog.l1x.me