Sudden weird behavior for my application

I had an application running well a couple of days ago. Suddenly I am seeing weird values on my board after adding a different function that averages the data in a vector.

The code I have implemented is:


int16_t algo::Process(int input)
{
    bool detection  = false;
    bool excitation = false;
    uint8_t event = 0;
    int32_t debug_avgFilter = 0;
    int32_t debug_stdFilter = 0;
    int32_t debug_threshold = 0;
    int32_t output = 0;

    for (int i = lag_ - 1; i > 0; i--)
    {
        backwardBuffer[i] = backwardBuffer[i - 1];
        filteredY[i] = filteredY[i - 1];
    }
    backwardBuffer[0] = input;
    filteredY[0] = mean(backwardBuffer, lag_);

    output = filteredY[0];
    return output;
}

// Vector functions
// Calculates the mean of a vector given the size of the array
int16_t algo::mean(const int16_t *y, int16_t size)
{
    float sum = 0.0f;
    for (int i = 0; i < size; ++i)
    {
        sum += y[i];
    }
    return sum / size;
}

The top function of my application is:

#include "ap_axi_sdata.h"
#include "hls_stream.h"
#include "algo.h"

void example(hls::stream< ap_axis<32,2,5,6> > &A,
             hls::stream< ap_axis<32,2,5,6> > &B)
{
#pragma HLS INTERFACE axis port=A
#pragma HLS INTERFACE axis port=B
#pragma HLS INTERFACE s_axilite port=return
    algo algo_instance(2000);
    ap_axis<32,2,5,6> tmp;
    while (1)
    {
        A.read(tmp);
        tmp.data = algo_instance.Process(tmp.data.to_int());
        B.write(tmp);

        if (tmp.last)
        {
            break;
        }
    }
}

and the block design is:

I expect the output to be the mean of the signal, basically a moving-window average. I am using a window of 200 samples at a sampling frequency of 2 kHz, i.e. a 100 ms averaging window. However, I get very weird output:

I have been racking my brain over this issue. If I set a constant output value of, say, 500, that value is shown correctly in the plot. But when I use the mean function, the output is all around 1e09.

Any ideas? Suggestions? I am really stuck with this.

Hi @GGChe,

Have you validated the HLS code in co-simulation?

Mario

Hi Mario,

No, I haven’t. I don’t really know how to perform the co-simulation. I have a simple test bench, but I have never performed a co-simulation as such; I am debugging on the target. How could I do that? Do you have any documentation on hand?

Thanks!

Hi @GGChe,

This is well covered in the documentation; if you already have a testbench, it should be easy to run. It should also be much faster than testing on target.

https://docs.xilinx.com/r/2022.1-English/ug1399-vitis-hls/C/RTL-Co-Simulation-in-Vitis-HLS

Mario

I have run the co-simulation but I don’t really see anything in it.

I mean, this is supposed to be my app, and I am feeding 5000 samples, but I am not sure what I am seeing here.

Hi @GGChe,

Do you have a validation in your test bench for the expected output?
If so, this validation should pass after cosim. You only need to look at the waveform if you want to debug at low level.

Mario

I do have an expected output, but I cannot see the waveform of the output; I guess this is more of a Vitis-related question. Anyway, I found something interesting: changing the data types to regular int seems to solve the issue:

algo::algo(int fs)
{
    fs_ = fs;          // store the sampling frequency
    sampleCount_ = 0;
}

// Process the sample
int algo::Process(int input)
{
    for (int i = lag_ - 1; i > 0; i--)
    {
        backwardBuffer[i] = backwardBuffer[i - 1];
        filteredY[i] = filteredY[i - 1];
    }
    backwardBuffer[0] = input;
    filteredY[0] = mean(backwardBuffer, lag_);
    return filteredY[0];
}

int algo::mean(int *y, int size)
{
    int sum = 0;
    for (int i = 0; i < size; ++i)
    {
        sum += y[i];
    }
    return static_cast<int>(sum / size);
}

but if I change the data type to int16_t or ap_fixed<16>, it starts to show the same kind of overflow behavior…

Hi @GGChe,

You have not indicated the range of your input samples, so you may well be running into overflow.

Mario

OK, I found the problem: the conversion to 16 bits produced the overflow. Using an ap_int<16> for the mean solved the issue. Now I have the same problem with the variance, I guess due to overflow again, so I will have to find a solution for that too. Thanks for the help and feedback!


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.