Parallella Epiphany IV

Any technical questions about the Epiphany chip and Parallella HW Platform.

Moderator: aolofsson

Re: Parallella Epiphany IV

Postby sebraa » Thu Feb 12, 2015 2:06 pm

piotr5 wrote:tell me, what application did you have in mind which would require more than 32k, for example requiring 256k?
I would have liked to have more space for the data. The algorithm I used becomes more efficient with larger block sizes, plus a double-buffering approach would have allowed for a streaming implementation.
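For what it's worth, the double-buffering idea can be sketched in plain C++. The block size and the trivial reduction below are made-up stand-ins, and the std::copy models what would be an asynchronous DMA fetch on the real hardware:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Stream a large input through two small local buffers: while one
// buffer is processed, the next block is fetched into the other.
// On Epiphany the fetch would be a DMA transfer running in the
// background; here a plain copy stands in for it.
constexpr std::size_t BLOCK = 4; // made-up stand-in for the real block size

long process_stream(const std::vector<int>& input) {
    std::array<int, BLOCK> buf[2];
    const std::size_t nblocks = input.size() / BLOCK;
    if (nblocks == 0) return 0;
    long sum = 0;
    // Prefetch block 0 into buffer 0.
    std::copy(input.begin(), input.begin() + BLOCK, buf[0].begin());
    for (std::size_t b = 0; b < nblocks; ++b) {
        const std::size_t cur = b % 2;
        // Start the "DMA" of the next block into the idle buffer.
        if (b + 1 < nblocks)
            std::copy(input.begin() + (b + 1) * BLOCK,
                      input.begin() + (b + 2) * BLOCK,
                      buf[1 - cur].begin());
        // Compute on the current buffer (here: a trivial reduction).
        for (int v : buf[cur]) sum += v;
    }
    return sum;
}
```

With larger local memory the two buffers (and hence each transfer) could simply be bigger, which is the point being made above.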

piotr5 wrote:if memory really is preventing epiphany landing at every door-step, then why did such computers like c64 zxspectrum and so on land on so many door-steps back in the eighties? did back then programmers exist which do not exist anymore?
That is a straw man.

First, back then there was no alternative to "little memory". So memory efficiency was far more important than CPU throughput, given that there was no usable mass storage either.

Second, nobody cared about the time needed to develop software. Since distributing software took much longer, "time to market" was not the issue it is now. Today it is: faster computers and more memory are far cheaper than a programmer's time. Guess why you need the best computers to play current games?

Third, the applications were simpler. The amount of functionality which made those programmers proud is nowadays just a joke.

piotr5 wrote:for the c64 literally thousands of programs were created, it was a gaming platform.
Try to sell (for money!) a game today which could in theory run on a C64. Apart from a few retro enthusiasts, nobody cares unless it has 3D graphics and correct physics. (Indie games have changed that a bit, though.)

piotr5 wrote:do you really create objects which monolithically store data of size>16k, instead of being full of pointers? it's not better hardware parallella needs, it's better programs, better compilers!
There is an inherent complexity in programs, which cannot be hidden, not by the programmer and not by the compiler. If you modularize, the complexity is in the communication. And a few things invented in the last 40+ years of compiler and language development are nice.

piotr5 wrote:as for EpiphanyIV, maybe it would help to put a countdown on the site displaying how much money is needed for that?
That is an interesting idea, but I don't think Adapteva will do that. Sadly.
sebraa
 
Posts: 495
Joined: Mon Jul 21, 2014 7:54 pm

Re: Parallella Epiphany IV

Postby aolofsson » Thu Feb 12, 2015 2:55 pm

Flemming,
Thanks for the kind words and humorous post. :D We are still drumming and one day we will get it right!
2014 was better than 2013 and 2015 will be better yet!
Andreas
aolofsson
 
Posts: 1005
Joined: Tue Dec 11, 2012 6:59 pm
Location: Lexington, Massachusetts,USA

Re: Parallella Epiphany IV

Postby piotr5 » Thu Feb 12, 2015 8:54 pm

sebraa wrote:I would have liked to have more space for the data. The algorithm I used becomes more efficient with larger block sizes, plus a double-buffering approach would have allowed for a streaming implementation.


This I fail to understand. Maybe I'm too inexperienced to say this, but wouldn't, as you put it, "shifting complexity over to communication" do exactly what you want and eliminate the memory requirements? You'd have to alter the algorithm, though. I agree that you can't achieve the same level of compression as with a bigger block size, but you could make two cores work together on the same block. (That's just an example of how to alter a compression algorithm to get a better compression ratio; I'm not saying this is what you do, just what I know about.) With two cores on the same task, you basically need to make sure data gets exchanged accordingly. It reminds me of a multiplication algorithm: divide the data, then merge part of it again so that all the necessary permutations get processed by one of the cores...
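The "two cores on one block" idea might look roughly like this toy sketch, with std::thread standing in for a second Epiphany core and a plain sum standing in for the real per-core work:

```cpp
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Split one block between two workers and merge their partial
// results. On Epiphany the workers would be two cores and the merge
// a message over the mesh; here std::thread and a join model that.
long process_block_pair(const std::vector<int>& block) {
    const std::size_t half = block.size() / 2;
    long lo = 0, hi = 0;
    std::thread worker([&] { // "core 1" handles the second half
        hi = std::accumulate(block.begin() + half, block.end(), 0L);
    });
    // "core 0" handles the first half concurrently.
    lo = std::accumulate(block.begin(), block.begin() + half, 0L);
    worker.join(); // the join models the data-exchange / merge step
    return lo + hi;
}
```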

sebraa wrote:Second, nobody cared about the time needed to program some software. Since distributing software took much longer, "time to market" was not such an issue as it is now. Today it is, faster computers and more memory is way cheaper than a programmer. Guess why you need the best computers to play current games?


Thanks for this clarification, I didn't know that. However, it seems you missed my point: the reason games sold back then was that they contained new ideas nobody had thought of before, and for that you need no incredible hardware setup. The same is true nowadays: indie games succeed when they have a unique idea. So to attract people to Parallella, we first need to attract people who have unique ideas for games. You said it yourself, better hardware is cheaper than programmers, so why not make better software cheaper than programmers? We'd need to lower the complexity of programming under Epiphany's memory constraints, so let's make use of the large amount of memory and CPU speed available to the compilers and make the programming languages and libraries more powerful! What I have in mind is C++, which allows program execution during program compilation. Why are there no attempts to handle the memory restrictions from there?

sebraa wrote:There is an inherent complexity in programs, which cannot be hidden, not by the programmer and not by the compiler. If you modularize, the complexity is in the communication. And a few things invented in the last 40+ years of compiler and language development are nice.


What I have in mind in particular is a wrapper around the notion of "pointers". Basically, on Epiphany you need objects to have dynamic addresses, changing physical location whenever they are needed. You use such a pointer, and somewhere else in the code a fitting DMA copy instruction is inserted. For that, the compiler naturally needs to know what the other cores will be doing during that time, so the timing can be adjusted for optimal data flow. This really is an advanced kind of artificial intelligence I'm talking about here. GCC seems to be taking that path, or at least the people who came up with the idea of using libgraphite did. It's not really a new programming language that's needed (in C++, pointers are abstracted away from mere addresses anyway); the compilers need to change the way they work and start compiling larger chunks of source code at the same time, with multiple files processed together to feed Graphite the data it requires...
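One possible shape for such a pointer wrapper, sketched in portable C++ with std::memcpy standing in for the DMA copy (this is not Epiphany SDK code, just an illustration of where the hook would sit):

```cpp
#include <cstring>

// A smart pointer whose dereference first fetches the pointed-to
// object into local memory. On Epiphany the fetch would be a DMA
// copy into the core's local store, ideally scheduled by the
// compiler to overlap with other work; memcpy models it here.
template <typename T>
class remote_ptr {
    const T* remote;         // object living in another core's memory
    mutable T local{};       // local scratch copy
    mutable bool cached = false;
public:
    explicit remote_ptr(const T* p) : remote(p) {}
    const T& operator*() const {
        if (!cached) {
            std::memcpy(&local, remote, sizeof(T)); // stand-in for DMA
            cached = true;
        }
        return local;
    }
};
```

A compiler with whole-program knowledge could hoist the fetch earlier to hide its latency; the wrapper only marks the point where that decision belongs.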

(In case you didn't know: one approach to artificial intelligence for solving mathematical problems is to create a graph and manipulate it accordingly. In the implementation you only see matrix manipulations, of course, but that is just to make it comprehensible to the computer; to the programmer it's obfuscation.)

By the way, since we're talking of complexity and Epiphany IV: creating programs for Epiphany III is much easier than for E-IV, and before the new hardware gets out, this must change! To explain: of the 8x8 cores, only 28 are at the edge, which is less than 50%; for 4x4 cores it's 12 at the edge, or 75%. So with Epiphany IV we suddenly get a lot of cores which have four neighbours each, and we also get a lot of circular paths. Therefore, if the programmer has to handle all the possible data flows, a lot of work is required to calculate the actual timings and optimal data-flow paths. Is there any tool that could help in that respect? Something which analyzes Epiphany assembler code and figures out which data is where, and when? Does such a tool already exist for other hardware?
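The edge-core arithmetic above checks out; as a one-liner:

```cpp
// In an n-by-n mesh (n >= 2), the boundary cores number 4n - 4.
constexpr int edge_cores(int n) { return 4 * n - 4; }
// 4x4: 12 of 16 cores (75%) on the edge; 8x8: 28 of 64 (43.75%).
```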
piotr5
 
Posts: 230
Joined: Sun Dec 23, 2012 2:48 pm

Re: Parallella Epiphany IV

Postby sebraa » Thu Feb 12, 2015 9:48 pm

piotr5 wrote:
sebraa wrote:I would have liked to have more space for the data. The algorithm I used becomes more efficient with larger block sizes, plus a double-buffering approach would have allowed for a streaming implementation.
this I fail to understand. maybe I'm too inexperienced to say that, but wouldn't as you say "shifting complexity over to communication" just do what you want and eliminate the memory-requirements?
Yes it would. And it would slow down the algorithm at the same time, making it completely useless.

Since this is way off-topic, I'll answer the remaining things in private.

Re: Parallella Epiphany IV

Postby piotr5 » Fri Feb 13, 2015 11:56 am

sebraa wrote:Since this is way off-topic, I'll answer the remaining things in private.

I think private discussion is a bad idea; we're still talking about things others might want to have a word in. That said, you're right that it is better to stay focused: my intent was to discuss how we could help Parallella succeed, so its creators get the money for doing their Epiphany IV or V. Since that is off-topic here, I posted my reply to the discussion board under the title "strategies for increasing sales of Epiphany III and IV".
