Scientific video processing application

Post by halvorka » Mon Sep 16, 2013 2:03 pm

Hi,

I'm considering the Parallella for a scientific instrument I'm developing, but I'm a little concerned about the learning curve. I have a technical background and programming experience in scientific environments like LabVIEW and MATLAB, but only a familiarity with languages like C++ and Fortran. Here is the application: I'm acquiring video from a gigabit Ethernet camera at about 75 MB/s (5 MP at 15 fps) in a small and (basically) isolated space. I'd like to do one of two things:

1) save the video to an SD card (with or without compression) for later analysis
or
2) perform real-time image analysis consisting of center finding for ~5000 identical circular patterns and calculating the local image variance for each of the 5000 regions. I would then need to save this data to SD or transmit it wirelessly. I already have algorithms for the analysis.

Due to the constraints of the system, this also needs to be done on battery power. Based on my browsing of these message boards, that may pose some additional difficulties.

It seems like the Parallella may be able to work in this application, but my main concern is the learning curve involved. Does anyone here have an idea of how long this might take to implement?

Any input would be appreciated...

Ken

Re: Scientific video processing application

Post by over9000 » Tue Sep 17, 2013 5:04 am

halvorka wrote: I already have algorithms for the analysis.

It seems that the question of whether or not this will work will depend on two separate factors:
    1. Whether this analysis algorithm is a good fit for the Epiphany,
    2. Whether you can apply your general skills to developing the solution
As for the second point, I'd suggest taking a bottom-up approach. Read about the instruction set that the Epiphany uses and the overall architecture, particularly as it pertains to memory accesses and getting data into and out of the chip. If you're more used to working with high-level abstractions (eg, focusing on declarative solutions without having to worry too much about implementation details), then this may be moving beyond your comfort zone. However, unless you actually understand the low-level architectural details, you'll find it very difficult to make progress.

Working from the bottom up should pay dividends because it means that when you come to design a solution for the Epiphany, you should be able to avoid a lot of the false starts and rework that come with finding that a particular implementation doesn't work as well as it needs to. You're going to have to refer to the architectural and SDK documentation an awful lot if you're implementing something non-trivial like this, so I think it pays to understand it as thoroughly as you can before you start.

Also, definitely read about the assembly language. You may not need to use it, but it will give you very good insight into what the platform is good at and what it's bad at. It may even be that you'll need to learn enough of it to code some parts of your project in it in the long run anyway (don't be too scared, though: there are fewer than 40 instructions in all).
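To give you a flavour of what the low-level workflow looks like, here's a minimal host-side sketch using the e-hal calls from the Epiphany SDK. The device binary name ("e_task.elf"), the 4x4 workgroup, and the result address are placeholders for illustration, not working code for your problem:

/* Minimal host-side sketch of the Epiphany SDK (e-hal) workflow.
 * Assumes a device-side binary "e_task.elf" (placeholder name) that
 * writes a result word at local address 0x2000 on each core. */
#include <stdio.h>
#include <e-hal.h>

int main(void)
{
    e_platform_t platform;
    e_epiphany_t dev;
    unsigned result;

    e_init(NULL);                 /* parse the default platform description */
    e_reset_system();             /* put the chip into a known state        */
    e_get_platform_info(&platform);

    /* Open a 4x4 workgroup starting at core (0,0) and load the
     * device program onto every core, starting it immediately. */
    e_open(&dev, 0, 0, 4, 4);
    e_load_group("e_task.elf", &dev, 0, 0, 4, 4, E_TRUE);

    /* ... wait for the cores to finish, eg by polling a flag ... */

    /* Read one word back from core (0,0)'s local memory. */
    e_read(&dev, 0, 0, 0x2000, &result, sizeof(result));
    printf("core (0,0) result: 0x%08x\n", result);

    e_close(&dev);
    e_finalize();
    return 0;
}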
As for your algorithm, you don't give many details. Some things you might want to think about include:
    * is your existing algorithm actually suited to working in parallel?
    * if so, what are the natural "units" of work (eg, is it your circles, a complete image, a region of the image, or something else)?
    * can the algorithm that operates on the unit fit within the memory space of a single Epiphany core?
    * is there a better algorithm that would be more fitting for the architecture (eg, if you're talking about comparing circles, would a neural network be more appropriate?)
    * if the image has to be segmented, then should this be done on the ARM side or the Epiphany side?
    * would the Hough transform be applicable for finding patterns in the image?
    * assuming everything else checks out, can you do a back-of-the-envelope calculation to ensure that your algorithm will be able to do everything it needs in the time available (your own figures of 5,000,000 pixels (5 MP) at 15 fps and 75 MB/s imply one byte per pixel, against roughly 8 Gb/s max I/O; see the sketch after this list)?
    * if not, can you do pre-processing to alleviate data transfer bandwidth or memory usage bottlenecks?
    * does your solution depend on something that the Epiphany is particularly bad at (eg, a large memory space or fast division)?
    * etc., etc., etc.
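To make that back-of-the-envelope point concrete, here's a quick C sketch using the figures from your first post (the 8 Gb/s link figure is the nominal off-chip maximum, so treat the result as a rough upper bound):

/* Rough bandwidth check, using the figures from the original post:
 * 5 MP frames at 15 fps and 75 MB/s, which implies 8-bit mono pixels,
 * against a nominal 8 Gb/s off-chip link. */
#include <stdio.h>

int main(void)
{
    const double pixels_per_frame = 5e6;      /* 5 MP               */
    const double fps              = 15.0;
    const double bytes_per_pixel  = 1.0;      /* implied by 75 MB/s */
    const double link_bytes_per_s = 8e9 / 8;  /* 8 Gb/s -> 1 GB/s   */

    double input = pixels_per_frame * fps * bytes_per_pixel;

    printf("input : %6.1f MB/s\n", input / 1e6);
    printf("link  : %6.1f MB/s\n", link_bytes_per_s / 1e6);
    printf("usage : %6.1f %% of link bandwidth\n",
           100.0 * input / link_bytes_per_s);
    /* ~75 MB/s against ~1000 MB/s: raw input is well under 10% of
     * the link, so getting data in is unlikely to be the bottleneck;
     * the per-frame compute budget (~66 ms at 15 fps) is the thing
     * to check. */
    return 0;
}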
As you can see, it's not easy to answer these questions without having a very good idea of both the problem domain and the target architecture...
At least one thing is pretty certain, though: if you need to run on batteries, then you're probably not going to find a platform that has as good a performance/watt figure. So from that point of view, it's definitely a good fit :)
Edit: I overlooked one important point: besides the ARM and Epiphany sides of things, there's also the FPGA. If you can express your processing steps as purely digital building blocks, then there's also a good deal of potential for using it in image-processing applications.

Re: Scientific video processing application

Post by halvorka » Tue Sep 17, 2013 12:15 pm

Thanks over9000,

You've given me a lot to think about, and I appreciate your thoughtful response. My experience is definitely more high-level, where I typically don't have to think much about things like memory access and how data is moved.

You're right that I didn't really give any details about my previous algorithms. I've implemented solutions before in MATLAB and LabVIEW, but the questions you bring up are things I haven't really been forced to think about in those contexts.

I have previously used the Fast Radial Symmetry Transform (FRST), which I believe is related to the Hough transform you suggest, and which I've read is well suited to parallel computing. It can determine the positions of the particles and establish regions of interest, from which I then need to extract further information about the focus of the particles.

The simplest focus measure I've used is the image variance of each region, which reaches a maximum when the particle is in sharp focus and drops off as it becomes blurry (roughly as in the sketch below). Other measures I've used take advantage of the Newton ring pattern of the particles and are more computationally intensive: they involve calculating a radial intensity profile for each particle and either fitting the curve with a model or pattern matching against a set of curves from a calibration.
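To be concrete, the variance measure I mean looks roughly like this per region (a minimal C sketch with placeholder names, not my actual implementation):

/* Minimal sketch of the image-variance focus measure for one region
 * of interest (ROI). The variance of the pixel intensities peaks when
 * the particle is in sharp focus and drops off as it blurs. */
#include <stddef.h>

/* img: 8-bit grayscale frame, row-major, 'stride' bytes per row.
 * (x0, y0): top-left corner of the ROI; w, h: ROI size in pixels. */
double roi_variance(const unsigned char *img, size_t stride,
                    size_t x0, size_t y0, size_t w, size_t h)
{
    double sum = 0.0, sum_sq = 0.0;
    size_t n = w * h;

    for (size_t y = y0; y < y0 + h; y++) {
        for (size_t x = x0; x < x0 + w; x++) {
            double p = img[y * stride + x];
            sum    += p;
            sum_sq += p * p;
        }
    }

    double mean = sum / n;
    return sum_sq / n - mean * mean;  /* E[p^2] - (E[p])^2 */
}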

I've got to go back and do my homework on this. Thanks so much for your input.

Ken

