Pooling RAM across cluster

Any technical questions about the Epiphany chip and Parallella HW Platform.

Moderator: aolofsson

Pooling RAM across cluster

Postby dcd4408 » Wed Jul 29, 2015 3:28 pm

I currently have an 8-node cluster and was wondering if there is a way to pool all the RAM into one large shared memory (i.e., combining the eight 1 GB boards into a single 8 GB pool).

Can anyone speak to this or point me in the direction of posts that I've yet to find?

Thanks!
dcd4408
 
Posts: 7
Joined: Wed Jul 29, 2015 3:19 pm

Re: Pooling RAM across cluster

Postby dcd4408 » Thu Jul 30, 2015 2:09 pm

dcd4408 wrote:I currently have an 8-node cluster and was wondering if there is a way to pool all the RAM into one large shared memory (i.e., combining the eight 1 GB boards into a single 8 GB pool).

Can anyone speak to this or point me in the direction of posts that I've yet to find?


The reason being that I need to load MANY GB into RAM at one time.
dcd4408

Re: Pooling RAM across cluster

Postby sebraa » Thu Jul 30, 2015 6:30 pm

Well, if you ignore the Epiphany chip (which cannot access all of that memory anyway), a Parallella is just another Linux system. So look at other Linux clustering solutions.
sebraa
 
Posts: 495
Joined: Mon Jul 21, 2014 7:54 pm

Re: Pooling RAM across cluster

Postby dcd4408 » Fri Jul 31, 2015 1:07 pm

sebraa wrote:Well, if you ignore the Epiphany chip (which cannot access all of that memory anyway), a Parallella is just another Linux system. So look at other Linux clustering solutions.


I don't necessarily want to ignore the Epiphany. I want to expand the memory that all eight Epiphanys have access to, and make each of them aware of the full 8 GB (minus what Linux itself needs).
dcd4408

Re: Pooling RAM across cluster

Postby sebraa » Fri Jul 31, 2015 3:00 pm

The Epiphany architecture is a 32-bit architecture and can only address 4 GB of memory in total. If you need more address space, you have to implement some kind of mapping yourself, which drags down performance. On the Parallella board, the single Epiphany chip can only access 32 MB of memory, not the full 1 GB.
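The "mapping yourself" that sebraa mentions is essentially windowed addressing. A minimal Python sketch of the idea (the function name and staging comment are my own illustration, not from the thread): translate a logical offset in a dataset larger than the window into a (window index, local offset) pair for the 32 MB shared region.

```python
WINDOW_SIZE = 32 * 1024 * 1024  # the 32 MB shared-memory window described above

def split_offset(global_offset, window_size=WINDOW_SIZE):
    """Map a logical offset in a large dataset to (window index, offset
    within that window).

    The host program would stage window `index` into the shared region
    before the Epiphany side reads from `local`; every window swap costs
    a full copy, which is the performance drag mentioned above.
    """
    return global_offset // window_size, global_offset % window_size

# Example: locate byte 5 GiB + 123 of a dataset far beyond 32 MB
index, local = split_offset(5 * 2**30 + 123)
```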

Also, the available bandwidth between these 32 MB of RAM and the Epiphany chip is much lower than the available bandwidth inside each Epiphany chip, and the latency introduced by connecting multiple Parallellas through the network interface is not going to help either.

In other words, it might make sense to put one big machine on the network that serves data slices on demand to all the Parallella systems. There is no need to keep the data in RAM, since any SSD can saturate multiple network links simultaneously, which is not true for the SD cards. Then write your Parallella host program to fetch these data slices and feed them into the Epiphany chips as needed. 8 GB of data is not too large for a single system.
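The fetching side of that slice-on-demand setup could look like this. A minimal Python sketch (the slice size and function names are my own assumptions, not from the thread); on a real cluster the read would go over the network, e.g. from an NFS export on the big machine, but the seek/read pattern is the same:

```python
import os

SLICE = 4 * 1024 * 1024  # hypothetical 4 MB slice, well under the 32 MB window

def read_slice(path, index, slice_size=SLICE):
    """Fetch one data slice from a large file without loading it all into RAM."""
    with open(path, "rb") as f:
        f.seek(index * slice_size)
        return f.read(slice_size)

def stream_slices(path, slice_size=SLICE):
    """Yield consecutive slices, ready to be copied into the shared region
    for the Epiphany to process."""
    total = os.path.getsize(path)
    for i in range((total + slice_size - 1) // slice_size):
        yield read_slice(path, i, slice_size)
```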
sebraa

