Well, that's not really true; it's only the single-connection case that is slower than on the Epiphany. So, theoretically, it could be fixed by removing that procedural ARM processor altogether.
But seriously, the FPGA is an obstacle to scalability as of now. It creates so much heat that you won't get far without worrying about heat management. You can install a fan, but the Parallella's heat is still trapped in your room!
Besides, why would every single one of your Parallellas need programmable logic at all? If the Parallellas communicate through the Epiphany, no network, no USB and no HDMI are needed! Only three connections to other boards and one to the main processor!
So what I am suggesting is not getting rid of the FPGA, but creating a new slave board without it and selling it as a separate product!
Well, of course, software first! Implementations of both the virtual box and the scalability support are needed in order to test the functionality of the current FPGA code...
Oh, and for clarity's sake: of course x86 emulation is not a scalable virtual box. You could offer additional processors, but each processor's speed is limited (in the worst-case scenario). Similarly, memory could be added, but a single thread can still address only 1 GB. So some form of parallel programming is still needed to actually make use of the scalability with x86 programs. For graphics processors, however, there is no such scalability bottleneck; sadly that direction isn't as well documented as x86 itself...
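To make that last point concrete, here is a minimal sketch (nothing Parallella-specific, just plain POSIX threads; the thread count, array size and the summing workload are made up for illustration): an x86 program only benefits from additional processors if it explicitly splits its own work, for example like this:

[code]
/* Minimal sketch: split a summation across several threads so that extra
 * processors can actually be used. Thread count and data size are arbitrary. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 4
#define N (1 << 22)                 /* 4M elements of dummy data */

static double data[N];

struct slice { size_t begin, end; double partial; };

/* Each thread sums its own slice of the array independently. */
static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    double acc = 0.0;
    for (size_t i = s->begin; i < s->end; ++i)
        acc += data[i];
    s->partial = acc;
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice sl[NTHREADS];

    for (size_t i = 0; i < N; ++i)
        data[i] = 1.0;              /* dummy workload */

    /* Split the index range evenly; the last thread takes any leftovers. */
    size_t chunk = N / NTHREADS;
    for (int t = 0; t < NTHREADS; ++t) {
        sl[t].begin = (size_t)t * chunk;
        sl[t].end   = (t == NTHREADS - 1) ? N : (size_t)(t + 1) * chunk;
        pthread_create(&tid[t], NULL, sum_slice, &sl[t]);
    }

    /* Collect the partial sums once every thread has finished. */
    double total = 0.0;
    for (int t = 0; t < NTHREADS; ++t) {
        pthread_join(tid[t], NULL);
        total += sl[t].partial;
    }
    printf("sum = %.0f\n", total);
    return 0;
}
[/code]

Without something along these lines, a single-threaded x86 program stays stuck on one emulated processor no matter how many you add.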