It's a truism that if an application's dataset is larger than, or organised differently from, the limits of the Parallella architecture, then that application will need to work within those constraints - such as the 8KB segmentation mentioned.
It's probably only a small subset of applications/developers that really need to access large contiguous datasets in the current Parallella's global memory - so not a big issue.
For those who do need it, perhaps it would be best to just document how to adjust the size and location of the global memory more flexibly/easily - especially for an E64 build, say - rather than have it fixed at 32MB. { ... like one of shodruck's Ubuntu tips maybe?}
But having access to a large global memory and then actually using it is not straightforward.
The performance hit of accessing the off-chip memory is itself a major constraint - and many developers will have to take this into account even for basic applications.
There's good analysis and advice from @tnt on the performance impact of reading data and running code from the external global memory on one of the early boards.

Basically: don't. Unless you must.
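If you do have to touch external global memory, the usual mitigation is to stage data into fast local SRAM in bulk and do the real work there, rather than reading the slow off-chip memory element by element. Here's a minimal, hypothetical C sketch of that chunked-staging pattern - plain memcpy stands in for whatever bulk-transfer primitive the platform actually provides (e.g. a DMA copy), and the 8KB chunk size just mirrors the segmentation mentioned above:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CHUNK 8192  /* stand-in for the 8KB local-bank granularity */

/* Sum a large "external" buffer by copying it into a small local
 * scratch buffer one chunk at a time, then iterating over the fast
 * local copy.  The point is the access pattern: one bulk transfer
 * per chunk instead of many individual off-chip reads. */
long sum_staged(const uint8_t *external, size_t len)
{
    static uint8_t scratch[CHUNK];   /* stand-in for fast local SRAM */
    long total = 0;

    for (size_t off = 0; off < len; off += CHUNK) {
        size_t n = (len - off < CHUNK) ? (len - off) : CHUNK;
        memcpy(scratch, external + off, n);   /* one bulk transfer...  */
        for (size_t i = 0; i < n; i++)        /* ...then local work    */
            total += scratch[i];
    }
    return total;
}
```

On a real board you'd replace the memcpy with the SDK's DMA routine and overlap the next transfer with computation on the current chunk, but the shape of the loop is the same.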
tery

Statistics: Posted by greytery — Mon Apr 14, 2014 11:07 am