by polas » Wed May 27, 2015 12:31 pm
Hi Dar,
Yes, you are right - it does the lexing and parsing on the host, translating into a byte code representation which is placed onto the cores and interpreted there (along with the symbol table etc.). The reason I adopted this approach is that I wanted to use lex & yacc (rather than having to write my own parser), and there is no way the generated code and its library requirements would fit into the memory per core. I think an added advantage is that much of the back-end can be trivially reused for other languages - I have been thinking about supporting a simple subset of Python, which would mainly require changes to the tokens and grammar.
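As a rough illustration of the split described above - parse once on the host, then run a compact byte code through a small dispatch loop of the kind that fits on a core - here is a minimal sketch in Python. The opcode set and encoding are invented for illustration; they are not eBASIC's actual format.

```python
# Hypothetical opcodes - NOT eBASIC's real encoding.
PUSH, ADD, MUL, STORE, HALT = range(5)

def compile_expr(rpn_tokens, symbols):
    """'Compile' a reverse-Polish token list into byte code.
    A real front end would use lex/yacc on the host instead."""
    code = []
    for tok in rpn_tokens:
        if isinstance(tok, int):
            code += [PUSH, tok]
        elif tok == '+':
            code.append(ADD)
        elif tok == '*':
            code.append(MUL)
        else:  # variable name -> slot in the symbol table
            code += [STORE, symbols.setdefault(tok, len(symbols))]
    code.append(HALT)
    return code

def interpret(code, nslots):
    """Dispatch loop: the only part that needs to fit on a core."""
    data = [0] * nslots   # the 'data area'
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:
            stack.append(code[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == STORE:
            data[code[pc]] = stack.pop(); pc += 1
        elif op == HALT:
            return data

symbols = {}
code = compile_expr([2, 3, '+', 4, '*', 'x'], symbols)  # x = (2+3)*4
print(interpret(code, len(symbols))[symbols['x']])      # prints 20
```

The point of the shape is that the heavy front end (tokens, grammar, libraries) never leaves the host; only the flat integer array and the symbol table travel to the device.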
In terms of your question, the simple answer is I don't know (and I'm not entirely sure how to determine it either). Everything (symbol table, byte code, data area etc.) is placed starting at 0x6000 and packed in, which seems to work OK. Additionally, it is possible (via command line options, and/or automatically if the byte code reaches a certain size) to locate the byte code in shared memory instead. Similarly, the data area (used for arrays) can live on-core or in shared memory, and the sdim keyword (as opposed to dim) will place an array in shared memory programmatically. Obviously there is a performance penalty when locating these items in shared memory, but the interpreter isn't particularly fast in the first place so it doesn't really matter much.
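The packing and spill-to-shared-memory behaviour could be sketched roughly as follows. Only the 0x6000 base comes from the description above; the section sizes, the spill threshold, the top-of-core address and the shared-memory base are all made-up placeholders for illustration.

```python
CORE_BASE = 0x6000         # from the post: packing starts here
CORE_TOP = 0x8000          # hypothetical end of usable core memory
SPILL_THRESHOLD = 0x1000   # hypothetical 'certain size' trigger
SHARED_BASE = 0x10000000   # hypothetical shared-memory base address

def place(sections, force_shared=()):
    """sections: list of (name, size) packed in order.
    Returns name -> ('core' | 'shared', address). A section spills
    to shared memory if forced (think sdim), too big, or if it no
    longer fits on the core."""
    layout, core_addr, shared_addr = {}, CORE_BASE, SHARED_BASE
    for name, size in sections:
        spill = (name in force_shared
                 or size > SPILL_THRESHOLD
                 or core_addr + size > CORE_TOP)
        if spill:
            layout[name] = ('shared', shared_addr)
            shared_addr += size
        else:
            layout[name] = ('core', core_addr)
            core_addr += size
    return layout

layout = place([('symbol table', 0x400),
                ('byte code', 0x2000),   # over threshold: spills
                ('data area', 0x800)])
for name, (where, address) in layout.items():
    print(f'{name:12s} -> {where} @ {address:#x}')
```

With these toy numbers the symbol table lands on-core at 0x6000, the oversized byte code spills to shared memory, and the data area packs in on-core right after the symbol table.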
I am currently extending eBASIC (as my free time allows) to support hybrid execution of code across both the cores and the host ARM processor.
Cheers,
Nick