You've given me a lot to think about, and I appreciate your thoughtful response. My experience is definitely more high-level; I typically haven't had to think much about things like memory access and how data is moved.

Your questions about my previous algorithms are valid; I didn't really give any details. I've implemented solutions before in MATLAB and LabVIEW, but the questions you bring up are things I haven't been forced to think about in those contexts. I have previously used the Fast Radial Symmetry Transform (FRST), which I believe is related to the Hough transform you suggest, and which I've read is well suited to parallel computing. It can determine the positions of the particles and be used to establish regions of interest, from which I then need to extract further information about the focus of the particles. The simplest focus measure I've used is the image variance of each region, which reaches a maximum when the particle is in sharp focus and drops off as it becomes blurry. Other measures I've used take advantage of the Newton-ring pattern of the particles and are more computationally intense: they involve calculating a radial intensity profile for each particle and either fitting the curve with a model or pattern-matching it against a set of curves from a calibration.

I've got to go back and do my homework on this, thanks so much for your input.

Ken

Statistics: Posted by halvorka — Tue Sep 17, 2013 12:15 pm


halvorka wrote:

I already have algorithms for the analysis.

It seems that the question of whether or not this will work will depend on two separate factors:

1. Whether the analysis algorithm is a good fit for the Epiphany, and

2. Whether you can apply your general skills to developing the solution.

As for your algorithm, you don't give many details. Some things you might want to think about include:

- Is your existing algorithm actually suited to working in parallel?

- If so, what are the natural "units" of work (eg, your circles, a complete image, a region of the image, or something else)?

- Can the algorithm that operates on one unit fit within the memory space of a single Epiphany core?

- Is there a better algorithm that would be more fitting for the architecture (eg, if you're talking about comparing circles, would a neural network be more appropriate)?

- If the image has to be segmented, should this be done on the ARM side or the Epiphany side?

- Would the Hough transform be applicable for finding patterns in the image?

- Assuming everything else checks out, can you do a back-of-the-envelope calculation to ensure that your algorithm will be able to do everything it needs in the time available (assuming input of roughly 3 bytes (ie, 24-bit) × 5,000,000 pixels (5 MP) × 10 fps, with 8 Gb/s max I/O)?

- If not, can you do pre-processing to alleviate data-transfer bandwidth or memory-usage bottlenecks?

- Does your solution depend on something that the Epiphany is particularly bad at (eg, a large memory space or fast division)?

- Etc., etc., etc.
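That back-of-the-envelope calculation takes only a few lines. This is just a sketch using the figures assumed above (5 MP frames, 24-bit pixels, 10 fps, 8 Gb/s I/O ceiling), not measured numbers:

```python
# Assumed figures from the discussion above.
pixels_per_frame = 5_000_000   # 5 MP
bytes_per_pixel = 3            # 24-bit colour
fps = 10

input_rate_bytes = pixels_per_frame * bytes_per_pixel * fps   # bytes/s
input_rate_bits = input_rate_bytes * 8                        # bits/s

io_limit_bits = 8e9            # 8 Gb/s max I/O

print(f"input: {input_rate_bits / 1e9:.2f} Gb/s "
      f"of {io_limit_bits / 1e9:.0f} Gb/s available")
# → input: 1.20 Gb/s of 8 Gb/s available
```

On these assumptions the raw input stream is well under the I/O ceiling, so the bottleneck, if any, is more likely compute or per-core memory than transfer bandwidth.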

At least one thing is pretty certain, though: if you need to run on batteries, then you're probably not going to find a platform with as good a performance-per-watt figure. So from that point of view, it's definitely a good fit.

Edit: I overlooked one important point: besides the ARM and Epiphany side of things, there's also the FPGA. If you can express processing units as purely digital building blocks, then there should also be a good bit of potential for using it in lots of image-processing applications.

Statistics: Posted by over9000 — Tue Sep 17, 2013 5:04 am


I'm considering the Parallella for a scientific instrument I'm developing, but I'm a little concerned about the learning curve. I have a technical background and programming experience in scientific environments like LabVIEW and MATLAB, but only a passing familiarity with languages like C++ and Fortran. Here is the application: I'm acquiring video from a Gigabit Ethernet camera at about 75 MB/s (5 MP at 15 fps) in a small and (basically) isolated space. I'd like to do one of two things:

1) save the video to an SD card (with or without compression) for later analysis

or

2) perform real-time image analysis, consisting of center finding for ~5000 identical circular patterns and calculating the local image variance for each of those regions, then save the results to SD or transmit them wirelessly. I already have algorithms for the analysis.

Due to the constraints of the system, this also needs to be done on battery power. Based on my browsing of these message boards, that may pose some additional difficulties.

It seems like the Parallella may work for this application, but my main concern is the learning curve involved in implementing it. Does anyone here have an idea of how long this might take?

Any input would be appreciated...

Ken

Statistics: Posted by halvorka — Mon Sep 16, 2013 2:03 pm
