Manal Helal wrote:
Hi
I am trying to construct a multidimensional array (as a linear array in
memory, but indexed with equations to represent the dimensions) with 6
dimensions of size 30 each, which is 30^6 = 729,000,000 elements.
Using a long type for the array crashes, as the size is outside its
range of values. I then used the long double data type, and got this
error: array subscript is not an integer.
Is there a workaround for this problem that I can implement, or another
way of creating similar arrays?
Are you expecting all 729 million entries to be used, or only a small
fraction? If you only expect a small fraction of the entries to actually
be used then you should probably use an associative (aka hash) map
instead. Bear in mind that even if you have the gigabytes of memory
that would be required to store your linear array, the essentially
random storage references that such an array invites would guarantee
cache misses, which could severely degrade the performance of your
application. Since an associative array stores only the entries that
are actually assigned values, it tends to avoid cache misses, which may
be enough to compensate for the slightly longer path length to locate
an entry. As long as the hash
vector is significantly larger than the maximum number of entries
inserted into the "array", and the hash function on the key returns
sufficiently well distributed values, the time to access a specific
element is essentially constant.
There are other sparse array techniques that you could try, depending
upon the exact pattern of access. For example, instead of allocating a
linear array of 30^6 entries, initially allocate a 30-element array of
pointers: Entry *****dim1[30]; Each time you assign a value to an
element, allocate only the chain of 30-element pointer arrays needed to
reach that specific element. Locating a specific
element requires using the 6 subscripts to run down the chain of
pointers until you get the desired element. This technique is
guaranteed constant access time, but will in many cases average a longer
time than the associative "array".
All compilers have an implementation limit on the maximum size of
arrays. In the old days that was not much of an issue because the limit
was typically larger than the maximum amount of memory available. With
4GB machines being not at all uncommon these days, this assumption is no
longer true.
--
Jim Cobban jcobban@xxxxxxxx
34 Palomino Dr.
Kanata, ON, CANADA
K2M 1M1
+1-613-592-9438