I am working on a project that uses a genetic algorithm to generate design permutations, which are then analyzed by a separate program and the results returned. This is a perfect candidate for parallel computing, as each design permutation can be analyzed completely independently. I'm hoping the Parallella can scale with the problem, as my current design space is 1,000*N^2 permutations!

I would really like to order the cluster, but I have some feasibility questions:

- Everything is written in C or Fortran, so compiling within Ubuntu should be straightforward. Can the Parallella be used as its own development environment, or will it just be a headless cluster?

- The analysis program is about 2 MB in size. Ideally, once the design stack is generated, every available core would take a design, analyze it, output results, and fetch the next waiting design. Is this even possible with Parallella? Is the analysis program too big, i.e. only executable by the ARM cores?

- And most importantly: To leverage the Epiphany cores, would I need to rewrite all my code? If I don't, will it just execute sequentially?

I'm really excited about the possibilities Parallella offers, as the more cores the better for these types of problems. Unfortunately, the only examples I've seen so far involve simple math/matrix operations, and I can't determine just what is possible for larger problems. I just hope my problem fits!