Hello list,

I am soliciting opinions here, as opposed to technical help, on an idea I have. I've set up a Bacula backup system on an AWS volume. Bacula stores a LOT of information in its MySQL database (in my setup; you can also use PostgreSQL or SQLite if you choose). Since I started doing this, I've noticed that the MySQL data directory has swelled to over 700GB! That's quite a lot, and it's eating up valuable disk space.

So I had an idea: what about using the FUSE-based s3fs to mount an S3 bucket on the local filesystem and use that as your MySQL data dir? In other words, mount your S3 bucket on /var/lib/mysql.

I used this article to set up the s3fs filesystem:

http://benjisimon.blogspot.com/2011/01/setting-up-s3-backup-solution-on-centos.html

and everything went as planned.

So my question to you, dear listers, is: if I do start using a locally mounted S3 bucket as my mysqld data dir, will performance of the database be acceptable? If so, why? If not, are there any other reasons why it would NOT be a good idea to do this?

The steps I have in mind are basically this:

1) mysqldump --all-databases > alldb.sql
2) stop mysql
3) rm -rf /var/lib/mysql/*
4) mount the s3 bucket on /var/lib/mysql
5) start mysql
6) restore the alldb.sql dump

Thanks for your opinions on this!

Tim

--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
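
P.S. The steps above could be sketched as a shell script roughly like the following. This is only a sketch under assumptions, not something I've run end to end: the bucket name "my-bacula-bucket" is a placeholder, the uid/gid of 27 assumes the stock CentOS mysql user, and I'm assuming the CentOS init script will re-run mysql_install_db on an empty data dir.

```shell
#!/bin/bash
# Sketch of the migration steps described above. Assumes s3fs is already
# installed and configured with credentials (per the linked article).
# "my-bacula-bucket" and the uid/gid values are placeholders/assumptions.
set -e

# 1) Dump all databases while MySQL is still on local disk
mysqldump --all-databases > /root/alldb.sql

# 2) Stop MySQL so the data directory is quiescent
service mysqld stop

# 3) Clear out the old (local-disk) data directory
rm -rf /var/lib/mysql/*

# 4) Mount the S3 bucket over the data dir
#    (uid=27/gid=27 is the mysql user on a stock CentOS install)
s3fs my-bacula-bucket /var/lib/mysql -o allow_other,uid=27,gid=27

# 5) Start MySQL; the init script should re-create the system tables
#    in the now-empty, S3-backed data dir
service mysqld start

# 6) Restore the dump into the S3-backed data dir
mysql < /root/alldb.sql
```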