Re: Maintenance question / DB size anomaly...

Drat! I'm wrong again. I thought for sure there wouldn't be a wraparound problem. So does this affect the entire database server, or just this table? Is the best way to proceed to immediately ditch this db and promote one of my slaves to master? I'm mostly concerned about data integrity. Note that I don't really use OIDs for anything, so I'm hoping I'll be safe.
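In the meantime, here's the check I was planning to run to see how close each database is to the horizon; I think age() against pg_database.datfrozenxid is the right thing to look at on 8.0, but correct me if that's the wrong column:

-- how many transactions old each database's frozen XID is;
-- anything approaching ~2 billion would be wraparound territory
SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY xid_age DESC;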

Thanks again, Tom.

/kurt


pg_controldata output:

-bash-3.00$ pg_controldata
pg_control version number:            74
Catalog version number:               200411041
Database system identifier:           4903924957417782767
Database cluster state:               in production
pg_control last modified:             Wed 20 Jun 2007 03:19:52 PM CDT
Current log file ID:                  952
Next log file segment:                154
Latest checkpoint location:           3B8/920F0D78
Prior checkpoint location:            3B8/8328E4A4
Latest checkpoint's REDO location:    3B8/9200BBF0
Latest checkpoint's UNDO location:    0/0
Latest checkpoint's TimeLineID:       1
Latest checkpoint's NextXID:          1490547335
Latest checkpoint's NextOID:          3714961319
Time of latest checkpoint:            Wed 20 Jun 2007 03:17:50 PM CDT
Database block size:                  8192
Blocks per segment of large relation: 131072
Bytes per WAL segment:                16777216
Maximum length of identifiers:        64
Maximum number of function arguments: 32
Date/time type storage:               floating-point numbers
Maximum length of locale name:        128
LC_COLLATE:                           en_US.UTF-8
LC_CTYPE:                             en_US.UTF-8
-bash-3.00$ echo $PGDATA


Here's the list from pg_clog for June:

-rw-------  1 postgres postgres 262144 Jun  1 03:36 054D
-rw-------  1 postgres postgres 262144 Jun  1 08:16 054E
-rw-------  1 postgres postgres 262144 Jun  1 10:24 054F
-rw-------  1 postgres postgres 262144 Jun  1 17:03 0550
-rw-------  1 postgres postgres 262144 Jun  2 03:32 0551
-rw-------  1 postgres postgres 262144 Jun  2 10:04 0552
-rw-------  1 postgres postgres 262144 Jun  2 19:24 0553
-rw-------  1 postgres postgres 262144 Jun  3 03:38 0554
-rw-------  1 postgres postgres 262144 Jun  3 13:19 0555
-rw-------  1 postgres postgres 262144 Jun  4 00:02 0556
-rw-------  1 postgres postgres 262144 Jun  4 07:12 0557
-rw-------  1 postgres postgres 262144 Jun  4 12:37 0558
-rw-------  1 postgres postgres 262144 Jun  4 19:46 0559
-rw-------  1 postgres postgres 262144 Jun  5 03:36 055A
-rw-------  1 postgres postgres 262144 Jun  5 10:54 055B
-rw-------  1 postgres postgres 262144 Jun  5 18:11 055C
-rw-------  1 postgres postgres 262144 Jun  6 03:38 055D
-rw-------  1 postgres postgres 262144 Jun  6 10:15 055E
-rw-------  1 postgres postgres 262144 Jun  6 15:10 055F
-rw-------  1 postgres postgres 262144 Jun  6 23:21 0560
-rw-------  1 postgres postgres 262144 Jun  7 07:15 0561
-rw-------  1 postgres postgres 262144 Jun  7 13:43 0562
-rw-------  1 postgres postgres 262144 Jun  7 22:53 0563
-rw-------  1 postgres postgres 262144 Jun  8 07:12 0564
-rw-------  1 postgres postgres 262144 Jun  8 14:42 0565
-rw-------  1 postgres postgres 262144 Jun  9 01:30 0566
-rw-------  1 postgres postgres 262144 Jun  9 09:19 0567
-rw-------  1 postgres postgres 262144 Jun  9 20:19 0568
-rw-------  1 postgres postgres 262144 Jun 10 03:39 0569
-rw-------  1 postgres postgres 262144 Jun 10 15:38 056A
-rw-------  1 postgres postgres 262144 Jun 11 03:34 056B
-rw-------  1 postgres postgres 262144 Jun 11 09:14 056C
-rw-------  1 postgres postgres 262144 Jun 11 13:59 056D
-rw-------  1 postgres postgres 262144 Jun 11 19:41 056E
-rw-------  1 postgres postgres 262144 Jun 12 03:37 056F
-rw-------  1 postgres postgres 262144 Jun 12 09:59 0570
-rw-------  1 postgres postgres 262144 Jun 12 17:23 0571
-rw-------  1 postgres postgres 262144 Jun 13 03:32 0572
-rw-------  1 postgres postgres 262144 Jun 13 09:16 0573
-rw-------  1 postgres postgres 262144 Jun 13 16:25 0574
-rw-------  1 postgres postgres 262144 Jun 14 01:28 0575
-rw-------  1 postgres postgres 262144 Jun 14 08:40 0576
-rw-------  1 postgres postgres 262144 Jun 14 15:07 0577
-rw-------  1 postgres postgres 262144 Jun 14 22:00 0578
-rw-------  1 postgres postgres 262144 Jun 15 03:36 0579
-rw-------  1 postgres postgres 262144 Jun 15 12:21 057A
-rw-------  1 postgres postgres 262144 Jun 15 18:10 057B
-rw-------  1 postgres postgres 262144 Jun 16 03:32 057C
-rw-------  1 postgres postgres 262144 Jun 16 09:17 057D
-rw-------  1 postgres postgres 262144 Jun 16 19:32 057E
-rw-------  1 postgres postgres 262144 Jun 17 03:39 057F
-rw-------  1 postgres postgres 262144 Jun 17 13:26 0580
-rw-------  1 postgres postgres 262144 Jun 17 23:11 0581
-rw-------  1 postgres postgres 262144 Jun 18 04:40 0582
-rw-------  1 postgres postgres 262144 Jun 18 12:23 0583
-rw-------  1 postgres postgres 262144 Jun 18 17:22 0584
-rw-------  1 postgres postgres 262144 Jun 18 19:40 0585
-rw-------  1 postgres postgres 262144 Jun 19 03:38 0586
-rw-------  1 postgres postgres 262144 Jun 19 09:30 0587
-rw-------  1 postgres postgres 262144 Jun 19 10:23 0588
-rw-------  1 postgres postgres 262144 Jun 19 16:10 0589
-rw-------  1 postgres postgres 262144 Jun 19 21:45 058A
-rw-------  1 postgres postgres 262144 Jun 20 03:38 058B
-rw-------  1 postgres postgres 262144 Jun 20 12:17 058C
-rw-------  1 postgres postgres 131072 Jun 20 15:13 058D
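Doing some rough math on those mod times (assuming I have it right that each 256 kB clog segment covers about a million transactions at 2 bits apiece), the burn rate looks something like this:

-bash-3.00$ echo $(( (0x58D - 0x54D) * 1048576 / 20 ))
3355443

So call it 3-4 million XIDs a day, which if my arithmetic holds would mean something on the order of 300 days to chew through a billion transactions.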


On Jun 20, 2007, at 2:37 PM, Tom Lane wrote:

so I have to conclude that you've got a wraparound problem. What is the
current XID counter?  (pg_controldata will give you that, along with a
lot of other junk.)  It might also be interesting to take a look at
"ls -l $PGDATA/pg_clog"; the mod times on the files in there would give
us an idea how fast XIDs are being consumed.

			regards, tom lane


