Hello guys,
I created a "custom" DPU design, which basically just adds some GPIO. The added GPIO works.
But when I load my .xmodel and query the input tensors, I get the wrong dimensions. I recompiled my custom NN models with both of these arch.json files:
/ZCU104/binary_container_1/link/vivado/vpl/prj/prj.gen/sources_1/bd/dpu/ip/dpu_DPUCZDX8G_1_0/arch.json
/ZCU104/binary_container_1/link/vivado/vpl/prj/prj.gen/sources_1/bd/dpu/ip/dpu_DPUCZDX8G_2_0/arch.json
Both yield the same wrong input tensors. I am using the example MNIST notebook and just swapped in my own .xmodel and .bit files.
This is my error:
ValueError Traceback (most recent call last)
<ipython-input-19-adf53e77cd67> in <module>
2
3 for i in range(num_pics):
----> 4 image[0,...] = test_data[i]
5
6 job_id = dpu.execute_async(input_data, output_data)
ValueError: could not broadcast input array from shape (28,28,1) into shape (4,14,14)
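For what it's worth, both shapes in the error hold exactly the same number of values, which would suggest the compiler only repacked the input layout rather than dropping data. A quick sanity check (shape tuples taken from the error message above):

```python
from math import prod

notebook_shape = (28, 28, 1)   # one MNIST image, as the notebook prepares it
xmodel_shape = (4, 14, 14)     # what my compiled .xmodel reports

# Broadcasting fails because the shapes differ, but the total
# element count is identical in both layouts: 784 values each.
print(prod(notebook_shape), prod(xmodel_shape))
```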
The notebook works with the dpu_mnist_classifier.xmodel, but not with my own compiled model. What could cause this issue?
Edit: So far this is only an issue for this model:
from tensorflow.keras.layers import Activation, Dense, Flatten, Input
from tensorflow.keras.models import Model

def create_fc_small(input_shape, output_shape):
    x = x_in = Input(input_shape, name='input_1_m')
    x = Flatten(name='flatten_1_m')(x)
    x = Dense(16, name='dense_1_m')(x)
    x = Activation('relu', name='act_1_m')(x)
    x = Dense(128, name='dense_2_m')(x)
    x = Activation('relu', name='act_2_m')(x)
    x = Dense(output_shape, name='dense_3_m')(x)
    x = Activation('softmax', name='act_3_m')(x)
    model = Model(inputs=[x_in], outputs=[x])
    return model
A CNN works as intended.
Edit 2: It works if I just reshape my data like this: x_test = x_test.reshape(10000, 4, 14, 14), but that can't be intended, right?
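If this really is just a layout change, a more robust approach than hard-coding (10000, 4, 14, 14) might be to read the expected shape from the runner at runtime (the example notebook already does something like shapeIn = tuple(dpu.get_input_tensors()[0].dims)) and reshape each image to that. As a hardware-free sketch of the idea, here is a hypothetical helper (my own, not from the notebook) that repacks a flat pixel sequence into whatever shape the DPU reports, preserving element order:

```python
from math import prod

def fit_to_dpu_shape(flat_pixels, dpu_shape):
    """Repack a flat sequence of pixel values into the nested layout
    the DPU reports, e.g. (4, 14, 14), without reordering elements."""
    if len(flat_pixels) != prod(dpu_shape):
        raise ValueError("element counts differ; not just a layout change")
    out = list(flat_pixels)
    # Group from the innermost dimension outwards; the outermost
    # dimension is whatever is left over at the end.
    for dim in reversed(dpu_shape[1:]):
        out = [out[i:i + dim] for i in range(0, len(out), dim)]
    return out
```

With NumPy arrays, x.reshape(dpu_shape) does the same thing in one call; the helper just makes explicit that no values are moved relative to each other, only regrouped.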
Greetings Henning