The approach we're taking involves accumulators based on wave geometries, expressed through integer bitmasking. Instead of normalizing values in the traditional way, we construct standing waves and interference patterns to essentially match the 'root' of very large vectors that have almost no angularity in the chunk currently being streamed. The results build up in constant time with the input, and we can pretty much tune the system for sensors with different bands.

The accumulator values are of course then further vectorized and compressed according to their geometries; that's usually where we switch back to floats. Before that step we can also run some specialized signal-crashing tests on the hardware, so we keep a buffer of the most recent accumulations for post-processing while the next chunk is coming in.
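The post doesn't spell out the bitmask/wave accumulators themselves, but the double-buffering it describes (post-process the previous chunk's sums while the next chunk streams in) can be sketched roughly as follows. `CHUNK`, `NBANDS`, and the band-routing mask are illustrative assumptions, not the author's actual scheme:

```python
import numpy as np

CHUNK = 1024           # samples per streamed chunk (illustrative size)
NBANDS = 4             # hypothetical number of sensor bands

def accumulate(chunk, acc):
    """O(1) work per sample: fold each sample into its band's integer sum.
    A stand-in for the post's bitmask accumulators, which aren't specified."""
    band = chunk & (NBANDS - 1)           # cheap bitmask routing into bands
    np.add.at(acc, band, chunk >> 2)      # integer accumulation, no float normalize
    return acc

# double buffer: post-process `front` while `back` fills from the stream
front = np.zeros(NBANDS, dtype=np.int64)
back = np.zeros(NBANDS, dtype=np.int64)

rng = np.random.default_rng(1)
for _ in range(3):                        # three simulated chunks
    chunk = rng.integers(0, 4096, CHUNK)  # fake sensor words
    back = accumulate(chunk, back)
    # post-process the previous chunk's sums, e.g. switch back to floats here
    scaled = front.astype(np.float64) / CHUNK
    front, back = back, np.zeros(NBANDS, dtype=np.int64)
```

The swap at the end of each iteration is what lets accumulation for the incoming chunk and post-processing of the finished one overlap.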

I've been wondering whether the Parallella would be a good fit for this project since I first saw the board. Good to see similar interests going on.

Statistics: Posted by stealthpaladin — Wed Aug 21, 2013 5:02 am


Basically, compressed sensing means you take a huge matrix and look for a huge sparse vector that, multiplied with it, reproduces the results your sensors have received. It's an optimization problem: you want to minimize the number of non-zero entries in that vector. (A sparse vector is one where most entries are zero; it requires a different way of storing the vector.) Done right, you can compress a big input into a sparse vector using relatively little data. For example, in face recognition the input is a big pile of pixels from a photo, and the output is a short list of photos, drawn from a huge set of different people and angles/shadows, that can be combined to reconstruct that input.

Keep in mind this is not just a system of linear equations that outputs some unique answer: the linear system has many possible solutions, and you are looking for the one with the smallest number of non-zero entries in a huge vector. So if you have 20 photos of your face stored, you hope the solution of face recognition applied to another photo will use pixels from those 20 photos and no others, because then you can be fairly certain it's your face. Solving that optimization problem does require some sort of random number generator, but at least the whole process can be parallelized.
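The sparse-recovery problem described above is commonly attacked with l1-minimization or greedy methods. As a minimal illustration (not the randomized solver the post alludes to), here is orthogonal matching pursuit in NumPy; the matrix sizes and support positions are made up for the demo:

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: find a k-sparse x with A @ x ~ y."""
    n = A.shape[1]
    residual = y.astype(float)
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with what is still unexplained
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all chosen columns jointly by least squares
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coeffs
        residual = y - A @ x
    return x

# tiny demo: a wide random matrix and a 2-sparse ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(100)
x_true[[7, 31]] = [3.0, -2.0]
y = A @ x_true                            # the "sensor measurements"
x_hat = omp(A, y, k=2)
```

The "many solutions" point from the post shows up here as the wide matrix: with 100 unknowns and 60 measurements the system is underdetermined, and the sparsity constraint is what singles out a useful answer.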

Statistics: Posted by piotr5 — Wed Apr 10, 2013 3:56 pm
