For example, in the matmul code provided in the SDK, the parallel part hinges upon the arithmetic behaviour of sums. To wit: the total of a (finite) sum is independent of the order of the operands. This is down to commutativity (plus associativity, which lets you regroup the terms) and is a form of symmetry, in that the result is invariant with respect to how the operands are presented.
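A minimal sketch of the point (not the SDK's actual matmul code; the values and the two-way split are illustrative only): the dot products inside a matrix multiply can be chopped into independent partial sums, because regrouping and reordering exact additions cannot change the total. Each partial sum here stands in for what one core could compute on its own.

```python
# Illustrative data only -- think of a[] and b[] as one row/column pair
# inside a matmul.
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]

# Serial dot product, summed strictly left to right.
serial = sum(x * y for x, y in zip(a, b))

# "Parallel" version: two independent partial sums (one per notional
# core), combined only at the end. Commutativity + associativity say
# the grouping cannot change the exact result.
lo = sum(a[i] * b[i] for i in range(4))
hi = sum(a[i] * b[i] for i in range(4, 8))

assert serial == lo + hi == 120
```

One caveat worth keeping in mind: with floating point the reordering can change the rounding slightly, so real parallel reductions are "equal up to rounding" rather than bit-identical; with integer values, as here, the result is exact.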
In a fashion, any time you parameterise a function you are assuming some symmetry, in that (often) the algorithm enacted by the function call does not depend on the value of the operand within some range (there are exceptions, of course). Otherwise you have identified a dependency (of algorithm upon operand range) and an opportunity for factorisation/separation of code components. The gut idea here is that independent aspects may generally be parallelised, while dependencies often lead to serialisation (as a very broad brush stroke comment).
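To make that gut idea concrete, here is a toy sketch (names and data are mine, purely for illustration): a function whose result depends only on its own operand can be applied to the inputs in any order, so the applications are parallelisable; a running total, by contrast, depends on everything before it, which is exactly the kind of dependency that serialises.

```python
import itertools

def square(x):
    # Result depends only on x, not on when or in what order it is called.
    return x * x

inputs = [3, 1, 4, 1, 5]

# Independent work: applying square() forwards or backwards gives the
# same per-element results -- order is a symmetry we can exploit.
forward = [square(x) for x in inputs]
backward = list(reversed([square(x) for x in reversed(inputs)]))
assert forward == backward == [9, 1, 16, 1, 25]

# Dependent work: each prefix sum needs all earlier elements, so the
# intermediate values are inherently order-dependent.
prefix = list(itertools.accumulate(inputs))
assert prefix == [3, 4, 8, 9, 14]
```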
A further example: the Fast Fourier Transform is the application of a transform matrix to a vector. For a given transform size (i.e. vector length and dimension) the transform matrix may be expressed as a product of sparser matrices. Matrix multiplication isn't commutative, but associativity rules apply. A left-sided multiplication forms linear combinations of rows, while a right-sided one does that for columns. So any given product AB may be viewed as A combining B's rows, or B combining A's columns. Thus in a product like ABC you can do (AB)C or A(BC) and get the same result either way, though there may be more efficiency in doing an AB rather than a BC. Hence one has identified a constancy (independence of the result) with respect to associativity. Does that give parallelism per se? In one sense no, because you have to do an AB or a BC before a final multiplication. But in another sense yes, because one can begin the final multiplication (of A with BC, or AB with C) before either AB or BC is fully evaluated, provided that such evaluation proceeds in a fashion where a partial/interim result can be used. Here one can leverage the relative sparsity of the matrix factors to good effect. Well, that's my plan anyway ... ![Smile :-)](https://parallella.org/forums/images/smilies/icon_e_smile.gif)
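For anyone who wants to poke at the associativity claim directly, here is a tiny self-contained check (plain nested lists, no libraries; the 2x2 matrices are illustrative, nothing like real FFT factor sizes). It also shows the row-combining view: B below is a permutation matrix, and left-multiplying by it reorders the rows of the other factor.

```python
def matmul(X, Y):
    # Naive matrix product of conformable nested-list matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]   # permutation matrix: left-multiplying swaps rows
C = [[2, 0], [0, 2]]   # 2 * identity

# Associativity: (AB)C == A(BC), even though AB != BA in general.
assert matmul(matmul(A, B), C) == matmul(A, matmul(B, C))
assert matmul(A, B) != matmul(B, A)

# Left multiplication combines (here: permutes) the rows of the
# right-hand factor: BA is A with its rows swapped.
assert matmul(B, A) == [[3, 4], [1, 2]]
```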
Cheers, Mike.

Statistics: Posted by hewsmike — Fri Dec 28, 2012 2:00 am