I get that accessing data from a large in-memory dataset as columns or rows of a fixed-size array holds some interest, but the operations involved don't carry the same tradeoffs as a "regular" DMA engine, where the concerns are memory banks and pages, startup time, possibly the size of the data elements, and the provisions of the cache(s) that can accelerate certain parts of the data access.
Purely as a tool for instructing the Parallella's infrastructure to feed you a strided vector of data, that's fine, of course.
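To make the stride-vector case concrete, here's a plain-C sketch (the function name and layout are my own for illustration, not anything from the eSDK) of the access pattern a 2D DMA descriptor describes: gathering one column of a row-major matrix into a contiguous buffer. The tradeoff I mean is that the CPU version pays per strided load and leans on the cache, while a DMA engine pays mostly in descriptor setup and in how the stride interacts with memory banks and pages.

[code]
#include <stddef.h>

/* Illustrative only: a CPU-loop equivalent of what a 2D DMA transfer
 * (inner count/stride, outer count/stride) does when it gathers one
 * column of a row-major matrix into a contiguous buffer.  On the
 * Epiphany you would express the same pattern with a DMA descriptor
 * instead of a loop. */
static void gather_column(float *dst, const float *src,
                          size_t rows, size_t cols, size_t col)
{
    /* Outer count = rows, outer source stride = cols elements,
     * inner count = 1 element per row.  Here every strided load is
     * paid for by the core (and its cache, if any); with DMA the cost
     * shifts to setup time and the memory system's handling of the
     * stride. */
    for (size_t r = 0; r < rows; r++)
        dst[r] = src[r * cols + col];
}
[/code]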
Theo V.