A GPU is designed to do the same simple operation across a large data set, so I'm guessing it could be faster if you generated a lot of children at each step. But after you generate the children you also have to figure out which child is the best, which generally means some kind of validation and test on the data. So there's a need to switch between operations. That's why I tend to think that if you want fewer children and more iterations, rather than more children and fewer iterations, the Parallella might be the better processor for it.
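For concreteness, here's a minimal sketch of the generate-then-select loop I mean. It's plain C, not tied to either chip, and the names (generate_child, fitness, NUM_CHILDREN, the genome layout) are all hypothetical placeholders. The first loop is the wide, uniform phase a GPU likes; the second is the selection/reduction that forces the switch between operations:

```c
/* Minimal sketch of one generate-then-select iteration.
 * All names here are hypothetical, not any real library API. */
#include <stdlib.h>

#define GENOME_LEN   8
#define NUM_CHILDREN 1024   /* wide generation is the GPU-friendly phase */

typedef struct { double genome[GENOME_LEN]; } child_t;

/* Hypothetical: derive a child by randomly perturbing the parent.
 * Same simple operation applied to every child (SIMD-friendly). */
static child_t generate_child(const child_t *parent) {
    child_t c = *parent;
    for (int i = 0; i < GENOME_LEN; i++)
        c.genome[i] += ((double)rand() / RAND_MAX - 0.5) * 0.1;
    return c;
}

/* Hypothetical fitness function: lower is better. */
static double fitness(const child_t *c) {
    double f = 0.0;
    for (int i = 0; i < GENOME_LEN; i++)
        f += c->genome[i] * c->genome[i];
    return f;
}

/* One iteration: generate many children, then reduce to the best.
 * The reduction is the operation switch described above. */
child_t step(const child_t *parent) {
    child_t children[NUM_CHILDREN];
    for (int i = 0; i < NUM_CHILDREN; i++)
        children[i] = generate_child(parent);

    child_t best = children[0];
    double best_f = fitness(&best);
    for (int i = 1; i < NUM_CHILDREN; i++) {
        double f = fitness(&children[i]);
        if (f < best_f) { best_f = f; best = children[i]; }
    }
    return best;
}
```

Shrinking NUM_CHILDREN and running step() more times is the "fewer children, more iterations" shape that I'd expect to suit the Parallella better.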
But honestly I really don't know; it would make for an interesting benchmark. Until the Epiphany has access to faster shared memory, though, the Parallella will probably end up being slower, unless the data set the algorithm is being applied to can fit in each core's 32 kB of local memory.