Also, the bandwidth available between these 32 MB of RAM and the Epiphany chip is much lower than the bandwidth inside each Epiphany chip, and the latency added by connecting multiple Parallellas over the network interface does not help either.
In other words, it might make sense to provide one big machine on the network, which serves data slices on demand to all Parallella systems. There is no need to keep the data in RAM, since any SSD can saturate multiple network links simultaneously - which is not true for the SD cards. Then write your Parallella host program to fetch these data slices and feed them into the Epiphany chips as you need them. 8 GB of data is not too large for a single system.

Statistics: Posted by sebraa — Fri Jul 31, 2015 3:00 pm
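The slice-on-demand setup above can be sketched in a few lines. This is only an illustration with a made-up wire protocol (a fixed `(offset, length)` header followed by raw bytes), not anything Parallella-specific; the function names `serve_slices` and `fetch_slice` are invented for the example, and a real host program would be C code talking to the Epiphany via e-hal.

```python
# Sketch of an on-demand "slice server": one big machine holds the full
# data set, and each Parallella host fetches only the byte range it needs,
# so it never has to buffer more than one slice at a time.
# The protocol here is hypothetical: a 16-byte request header
# (offset, length) in network byte order, answered with raw bytes.
import socket
import struct
import threading

SLICE_HDR = struct.Struct("!QQ")  # (offset, length) as two unsigned 64-bit ints


def serve_slices(data: bytes, host: str = "127.0.0.1", port: int = 0) -> int:
    """Serve byte ranges of `data` on demand; returns the bound port."""
    srv = socket.create_server((host, port))

    def loop() -> None:
        while True:
            conn, _addr = srv.accept()
            with conn:
                hdr = conn.recv(SLICE_HDR.size)
                if len(hdr) < SLICE_HDR.size:
                    continue  # malformed request; drop it
                offset, length = SLICE_HDR.unpack(hdr)
                conn.sendall(data[offset:offset + length])

    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]


def fetch_slice(port: int, offset: int, length: int,
                host: str = "127.0.0.1") -> bytes:
    """What the host program would do per work unit: fetch one slice."""
    with socket.create_connection((host, port)) as s:
        s.sendall(SLICE_HDR.pack(offset, length))
        buf = b""
        while len(buf) < length:
            chunk = s.recv(length - len(buf))
            if not chunk:
                break
            buf += chunk
    return buf
```

For example, `fetch_slice(port, 2, 4)` against a server holding `b"0123456789"` returns `b"2345"`; the host would hand that buffer to the Epiphany and request the next slice while the chip is busy.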