On Sat, Oct 4, 2014 at 3:46 PM, Andrus <kobruleht2@xxxxxx> wrote:
> In my db people often look up sales for different periods using different
> filters and sum them.
> There are lots of sales, and every sale is an individual record in the sales
> table, so sequential scan speed is important.
>
> I tried
>
> create table t1(v char(100), p numeric(12,5));
> create table t2(v varchar(100), p numeric(12,5));
> insert into t1 select '', generate_series from generate_series(1,1000000);
> insert into t2 select '', generate_series from generate_series(1,1000000);
>
> and after that measured the speed of
>
> select sum(p) from t1
>
> and
>
> select sum(p) from t2
>
> Both of them took approximately 800 ms,
> so there is no difference in sequential scan speed.
> Replacing char with varchar requires rewriting some parts of the code.
> Disk space is a minor issue compared to the cost of a code rewrite,
> so it looks like it is not reasonable to replace char with varchar.

Sure, in this trivial case there's no difference (both tables are small, fit in cache, and the numeric summation is where the bulk of the time is spent). But if char() doubles the size of your table, that's going to have an impact on many real-world workloads. I'm not in any way saying to go change up your database, but I'd definitely avoid char() for all new code.

merlin

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
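One way to see the cost Merlin is pointing at is to compare on-disk size rather than scan time. This is a sketch, assuming a PostgreSQL session where the quoted t1/t2 tables already exist; char(100) is blank-padded, so each t1 row carries roughly 100 bytes of spaces, while varchar(100) stores only the actual (here empty) string:

```sql
-- Compare total on-disk size of the char(100) and varchar(100) tables.
-- Expect t1 (char) to be substantially larger than t2 (varchar)
-- because every '' value in t1 is padded out to 100 characters.
SELECT pg_size_pretty(pg_total_relation_size('t1')) AS char_table_size,
       pg_size_pretty(pg_total_relation_size('t2')) AS varchar_table_size;
```

Once the table no longer fits in cache, that extra padding translates directly into extra I/O on every sequential scan, which is where the "double the size" concern bites.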