Christian Schoenebeck writes:
 > On Wednesday, 19. April 2006 10:30, Andrew Haley wrote:
 > > > Unfortunately we found a case [1] which did not work at all: a type cast
 > > > from float vector to integer vector, like:
 > > >
 > > > typedef float v4sf __attribute__ ((vector_size(16),aligned(16)));
 > > > typedef int v4i __attribute__ ((vector_size(sizeof(int)*4)));
 > > >
 > > > int main() {
 > > >     const v4sf v = { 1.2f, 2.2f, 3.3f, 4.4f };
 > > >     const v4i vRes = (v4i) v;
 > > > }
 > > >
 > > > The resulting integer vector vRes would simply contain crap.
 > > > Is this a bug, not implemented yet or even intentional?
 > >
 > > I don't know, because there's not enough information here.  Can you
 > > produce a runnable test case?
 >
 > Ok, attached you find one with output. When you run it, it should actually
 > show this:
 >
 > v4sf v = { 1.200000, 2.200000, 3.300000, 4.400000 }
 > v4i vRes = { 1, 2, 3, 4 }
 >
 > but instead I get this:
 >
 > v4sf v = { 1.200000, 2.200000, 3.300000, 4.400000 }
 > v4i vRes = { 1067030938, 1074580685, 1079194419, 1082969293 }
 >
 > I can hardly believe this is intentional, is it?

I've started a conversation on the gcc discuss list, and we think it's
probably a bug.

There's a thread at http://gcc.gnu.org/ml/gcc/2006-04/msg00349.html.

Andrew.
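
For anyone who needs the expected { 1, 2, 3, 4 } result while the semantics
of the (v4i) cast are being sorted out, below is a minimal sketch of an
explicit per-element conversion. The helper name convert_v4sf_to_v4i is
purely illustrative (it is not a GCC builtin), and the union-based element
access is just one way to get at the individual lanes.

#include <stdio.h>

typedef float v4sf __attribute__ ((vector_size(16), aligned(16)));
typedef int   v4i  __attribute__ ((vector_size(sizeof(int) * 4)));

/* Convert each float lane to an int lane by value (C truncation),
   instead of relying on the (v4i) vector cast.  Illustrative helper,
   not part of GCC. */
static v4i convert_v4sf_to_v4i(v4sf v)
{
    union { v4sf vf; float f[4]; } in;
    union { v4i  vi; int   i[4]; } out;
    int k;

    in.vf = v;
    for (k = 0; k < 4; k++)
        out.i[k] = (int) in.f[k];   /* per-element value conversion */
    return out.vi;
}

int main(void)
{
    const v4sf v = { 1.2f, 2.2f, 3.3f, 4.4f };
    v4i vRes = convert_v4sf_to_v4i(v);
    union { v4i vi; int i[4]; } r;

    r.vi = vRes;
    printf("v4i vRes = { %d, %d, %d, %d }\n", r.i[0], r.i[1], r.i[2], r.i[3]);
    return 0;
}

With this, the program prints "v4i vRes = { 1, 2, 3, 4 }" regardless of
whether the compiler treats the vector cast as a value conversion or as a
reinterpretation of the underlying bits.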