OK, given a set of processor cores each running APL, how do we communicate between them?
(This is a continuation of the discussion begun in: Why port array languages such as Apl to Parallella?)
A conventional APL system will offer a range of communication mechanisms, now usually including object-oriented methods of accessing .NET, Java, R, etc. (for AplX, see the documentation). We need to find a set of communication methods that compactly gives us rapid communication between processes running on the cores of a Parallella.
1) The methods built into aplc a number of years ago are based on piping:
a) using two pipes in (for data and control) and two pipes out (for result and status): essentially an extension of std-in, std-out, std-err.
b) piping data out of APL through a pipeline of shell commands and back into APL: a new mechanism, []Pipe.
c) spawning a new long-lived process and connecting as many pipes to it as needed: a new mechanism, []Spawn.
See the aplc.doc for details.
2) Another possible mechanism is a global APL variable-handling program, to which local APLs can make read and write requests. We wish to preserve the rigorous APL way of knowing the full specification of the data (its size, shape and type), and this information is kept by the global program.
3) Alternatively, a variable can simply be shared: several processes share the variable, but the data itself is held by the process that most recently wrote it. Any read then fetches the data (together with its type and structure information) from that process, so a read becomes a local-to-local transfer over the global interconnect. This still needs to be 'policed' by a process with a global overview.
Techniques 2 and 3 are similar to the []SVC Shared Variable Control in AplX.
Your thoughts on these and other possible mechanisms are most welcome.
If you have the knowledge / ability / time to implement any of these using the local/shared memory system on the Parallella, that would be most welcome.