Sharing the DRAM in a more natural way.

Postby notzed » Fri Nov 29, 2013 4:30 am


Re: Sharing the DRAM in a more natural way.

Postby LamsonNguyen » Sat Nov 30, 2013 10:27 am

I approve of this thread.

Re: Sharing the DRAM in a more natural way.

Postby shodruk » Sat Nov 30, 2013 12:03 pm


Re: Sharing the DRAM in a more natural way.

Postby notzed » Sat Nov 30, 2013 11:31 pm


Re: Sharing the DRAM in a more natural way.

Postby shodruk » Sun Dec 01, 2013 7:03 am


Re: Sharing the DRAM in a more natural way.

Postby ysapir » Sun Dec 01, 2013 10:45 am

This topic was discussed briefly in an earlier thread, which mentioned the remap() API as a way to get the same effect (and follows the original idea of @dar):

viewtopic.php?f=13&t=646&p=4037&hilit=remap#p4037

As @shodruk noted, since that functionality does not guarantee that the requested address will actually be mapped to the physical address, we felt that encapsulating the translation within the eSDK was the better solution.

That, of course, should not keep users from implementing such a mapping themselves, as the concept is pretty simple.
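For anyone who does want to roll their own, here is a minimal sketch of that kind of translation: a pair of helpers that convert pointer values by offset between the Epiphany-side view of the shared segment and the host's mapped view. The base address and names below are illustrative assumptions, not part of the eSDK.

[code]
#include <stdint.h>

/* Illustrative assumptions only: the Epiphany-side base of the shared
 * segment, and a pointer to wherever the host has mapped the same pages. */
#define E_SHARED_BASE   0x8e000000u     /* Epiphany-side view (assumed)   */
static void *host_shared_base;          /* set once from the host's mmap  */

/* Convert an Epiphany-side address into a host-side pointer. */
static inline void *e_to_host(uint32_t e_addr)
{
    return (uint8_t *)host_shared_base + (e_addr - E_SHARED_BASE);
}

/* Convert a host-side pointer back into the value an e-core should use. */
static inline uint32_t host_to_e(const void *h_addr)
{
    return E_SHARED_BASE +
           (uint32_t)((const uint8_t *)h_addr - (const uint8_t *)host_shared_base);
}
[/code]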

Re: Sharing the DRAM in a more natural way.

Postby tnt » Thu Dec 12, 2013 8:00 am

I'm with ysapir on this. The two addresses are used in different address spaces and the code should reflect that. Not wanting to deal with it is just laziness. It also restricts the HAL to one particular implementation.

Re: Sharing the DRAM in a more natural way.

Postby fdeutschmann » Thu Dec 12, 2013 5:31 pm

The approach taken by the OP, mapping to a specific requested address range, is quite standard and is commonly used to implement "pointer swizzling" in object-oriented systems. mmap will ignore the requested address if it cannot satisfy it (e.g. because of a collision), but for this sort of application it should work great, and it would be good if future evolutions of the SDK supported this, if only by avoiding breaking it.
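Just to illustrate that behaviour (this is only a sketch; the device path, physical offset, size and requested address are assumptions, not values from this thread): the address is passed to mmap as a hint, without MAP_FIXED, and the code checks afterwards whether the kernel honoured it. On a 32-bit host a hint that high may lie outside the user address range, in which case the kernel places the mapping elsewhere and the warning fires, which is exactly the case ysapir is worried about.

[code]
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* All of these are assumptions for illustration, not official values. */
#define WANTED_ADDR  ((void *)0x8e000000u)   /* where we'd like the mapping */
#define SHARED_SIZE  (32u * 1024u * 1024u)   /* size of the shared segment  */
#define PHYS_OFFSET  0x3e000000u             /* physical offset of the RAM  */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    /* No MAP_FIXED: WANTED_ADDR is only a hint, so the call cannot
     * clobber an existing mapping, but it may land somewhere else.   */
    void *p = mmap(WANTED_ADDR, SHARED_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, PHYS_OFFSET);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    if (p != WANTED_ADDR)
        fprintf(stderr, "mapping placed at %p instead of %p\n", p, WANTED_ADDR);

    /* ... use the mapping ... */

    munmap(p, SHARED_SIZE);
    close(fd);
    return 0;
}
[/code]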

The kittens are safe!

-frank

Re: Sharing the DRAM in a more natural way.

Postby mhonman » Fri Dec 13, 2013 9:06 am

I'd agree with the OP on this, because the limited internal RAM of the e-cores means that there needs to be a close working relationship between host and core programs. And C does not exactly lend itself to pointerless programming!

If the shared RAM appears at different places in host and Epiphany address space, one has to adopt a FORTRAN IV mentality, i.e. "everything is an array in a COMMON block". Then only the start addresses of the arrays need to be converted - most sensibly at program startup - and things in the common structure are accessed via offsets from its start.
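Roughly what that "COMMON block" style looks like in C (a sketch only, with made-up field names): one struct describes the entire shared area, each side converts just the base address once at startup, and everything after that is plain member access.

[code]
#include <stdint.h>

/* One struct describing the whole shared area, "COMMON block" style.
 * The fields are made up purely for illustration.                    */
typedef struct {
    volatile uint32_t host_ready;
    volatile uint32_t core_done[16];
    float             input[1024];
    float             output[1024];
} shared_block_t;

static volatile shared_block_t *shm;

/* Called once at startup on each side with that side's own view of the
 * shared segment: the host passes its mmap'ed pointer, an e-core passes
 * the segment's Epiphany-side global address.  From here on, no further
 * address arithmetic is needed.                                        */
static inline void shared_init(void *base)
{
    shm = (volatile shared_block_t *)base;
}
[/code]

After shared_init(), both the host and the e-core code just write shm->input[i] and read shm->output[i] with no further conversions.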

This is OK to do, although it does add another little layer of complexity (a bit annoying when the focus of the exercise is the parallel algorithm). However, there is one really nasty side effect of these address conversions, on the Zedboard at least: if an error in the conversion produces an invalid address, dereferencing that address locks up the host processor, which then has to be rebooted. That makes debugging a bit tedious...

In the case of Parallella - and especially bearing the stated goal of "supercomputing for everyone" in mind - it would thus be very helpful to have locations in shared RAM and on the Epiphany mapped to their Epiphany-side addresses in the host program, as originally suggested by notzed.

