On Wednesday, 19 April 2006 10:30, Andrew Haley wrote:
> > Unfortunately we found a case [1] which did not work at all: a type
> > cast from a float vector to an integer vector, like:
> >
> > typedef float v4sf __attribute__ ((vector_size(16), aligned(16)));
> > typedef int v4i __attribute__ ((vector_size(sizeof(int) * 4)));
> >
> > int main() {
> >     const v4sf v = { 1.2f, 2.2f, 3.3f, 4.4f };
> >     const v4i vRes = (v4i) v;
> > }
> >
> > The resulting integer vector vRes simply contains garbage.
> > Is this a bug, not yet implemented, or even intentional?
>
> I don't know, because there's not enough information here. Can you
> produce a runnable test case?

OK, attached you'll find one, with its output. When run, it should show this:

v4sf v = { 1.200000, 2.200000, 3.300000, 4.400000 }
v4i vRes = { 1, 2, 3, 4 }

but instead I get this:

v4sf v = { 1.200000, 2.200000, 3.300000, 4.400000 }
v4i vRes = { 1067030938, 1074580685, 1079194419, 1082969293 }

I can hardly believe this is intentional, is it?

CU
Christian
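For reference, the four integers printed above are exactly the IEEE 754 bit patterns of 1.2f, 2.2f, 3.3f, and 4.4f, which suggests the vector cast reinterprets the bits rather than converting the values. A minimal standalone check of that reading, using memcpy-based type punning (the names here are purely illustrative):

#include <stdio.h>
#include <string.h>

int main() {
    const float f[4] = { 1.2f, 2.2f, 3.3f, 4.4f };
    int bits[4];
    /* Copy the raw float bits into ints, with no value conversion. */
    memcpy(bits, f, sizeof(bits));
    /* Prints 1067030938 1074580685 1079194419 1082969293 on an
       IEEE 754 little- or big-endian machine alike, i.e. exactly
       the values reported for vRes above. */
    printf("%d %d %d %d\n", bits[0], bits[1], bits[2], bits[3]);
    return 0;
}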
#include <stdio.h>

typedef float v4sf __attribute__ ((vector_size(16), aligned(16)));
typedef int   v4i  __attribute__ ((vector_size(sizeof(int) * 4)));

int main() {
    const v4sf v = { 1.2f, 2.2f, 3.3f, 4.4f };
    printf("v4sf v = { %f, %f, %f, %f }\n",
           ((float *)&v)[0], ((float *)&v)[1],
           ((float *)&v)[2], ((float *)&v)[3]);

    const v4i vRes = (v4i) v;
    printf("v4i vRes = { %d, %d, %d, %d }\n",
           ((int *)&vRes)[0], ((int *)&vRes)[1],
           ((int *)&vRes)[2], ((int *)&vRes)[3]);
    return 0;
}
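If element-wise value conversion is what's actually wanted, one way is an explicit per-element cast; a sketch along the lines of the test case above (accessing elements through pointer casts as the test case does; much newer compilers also offer __builtin_convertvector for this, but nothing like it existed in 2006-era GCC):

#include <stdio.h>

typedef float v4sf __attribute__ ((vector_size(16), aligned(16)));
typedef int   v4i  __attribute__ ((vector_size(sizeof(int) * 4)));

int main() {
    const v4sf v = { 1.2f, 2.2f, 3.3f, 4.4f };
    v4i vRes;
    /* Convert each element explicitly; this truncates toward zero
       like a scalar (int) cast and yields { 1, 2, 3, 4 }. */
    for (int i = 0; i < 4; i++)
        ((int *)&vRes)[i] = (int)((const float *)&v)[i];
    printf("v4i vRes = { %d, %d, %d, %d }\n",
           ((int *)&vRes)[0], ((int *)&vRes)[1],
           ((int *)&vRes)[2], ((int *)&vRes)[3]);
    return 0;
}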