Neural Network

Re: Neural Network

Postby stealthpaladin » Tue Oct 01, 2013 2:46 am

nickoppen wrote:Hi Guys,

Here is the second part of the design - Training. It is a little more complicated than the feed-forward pass, so please give it a read and let me know if I've missed anything.

http://nicksparallellaideas.blogspot.com/2013/10/training-in-parallel.html

Thanks,

nick


Hey nick, great post there. I'm going to have to read it twice :D

Re: Neural Network

Postby roberto » Wed Oct 02, 2013 8:27 am



Nick,
I can't post my observations there because you only allow comments from people who have a Google account.
Bad decision, man, bad decision.

Re: Neural Network

Postby nickoppen » Wed Oct 02, 2013 10:50 am

Thanks Roberto, I didn't realise that. I'll try to figure out how to make it more public.

In the meantime, please post your comments on this forum.

nick
Sharing is what makes the internet Great!

Re: Neural Network

Postby CIB » Wed Oct 02, 2013 7:24 pm



This is actually a really good post about ANNs, even ignoring the fact that it's written specifically with the Epiphany in mind. I've never really worked with ANNs, but I've read a few tutorials, and this stuff is as good as any of the top Google hits on the subject. Well done!

Re: Neural Network

Postby CIB » Fri Oct 04, 2013 6:31 pm

@nickoppen: Good read. It seems the big problem is calculating the error for the hidden nodes. It'll be a real bottleneck if we can't get that to process about as fast as the feed-forward part of the algorithm. We might have to revise the whole idea of how the weights are stored and distributed to get this to work efficiently on the Epiphany (as in, making actual use of the Epiphany architecture, and thus having a reason to use the Epiphany over highly mass-produced mainstream CPUs/GPUs). Or maybe it's not possible at all. That, of course, would be ironic considering the Parallella's new logo! =)
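For anyone following along, here is a minimal Python sketch (plain illustration of the maths, not Epiphany code) of why the hidden-node error is awkward: in the feed-forward pass each core can own one hidden node's incoming weights, but the hidden delta needs a sum across the *outgoing* weights, i.e. the transpose access pattern over the same weight matrix.

```python
# Hidden-layer error terms for standard backprop with sigmoid units.
# weights[j][k] connects hidden node j to output node k, so each hidden
# delta sums along a ROW of the hidden-to-output matrix -- the opposite
# access pattern to the column-wise feed-forward distribution.

def hidden_deltas(hidden, weights, output_deltas):
    """hidden: sigmoid activations of the hidden layer.
    weights: weights[j][k], hidden node j -> output node k.
    output_deltas: error terms already computed for the output layer."""
    deltas = []
    for j, h in enumerate(hidden):
        back_err = sum(w * d for w, d in zip(weights[j], output_deltas))
        deltas.append(h * (1.0 - h) * back_err)  # sigmoid derivative h(1-h)
    return deltas

# Tiny worked example: 2 hidden nodes, 2 output nodes.
print(hidden_deltas([0.5, 0.8], [[0.1, 0.4], [0.2, 0.3]], [0.05, -0.02]))
```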

Re: Neural Network

Postby nickoppen » Fri Oct 04, 2013 10:25 pm

Thanks CIB. I agree that the hidden node errors are the trickiest bit. As I think about the various ways of solving this problem, I'm liking option 3 the best, i.e. push the output errors back to the host CPUs and get them to do some work. I know that this is not a "pure" Epiphany solution, but I'm not a purist and I don't see any problem with treating the ARM cores as just another resource.
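A rough Python sketch of that option-3 split (the function names and the division of labour are my own assumption of what nick means; the real thing would be C on the Epiphany cores plus host code on the ARM): the cores compute their own output-layer deltas, and the host does the transpose-pattern sum for the hidden layer.

```python
# Hypothetical sketch of "option 3": Epiphany cores compute the
# output-layer deltas for the nodes they own, then hand them back to
# the ARM host, which does the awkward row-wise sum for every hidden
# node. Sigmoid units and squared-error loss are assumed.

def device_output_deltas(outputs, targets):
    # Each core would run this for its own slice of output nodes.
    return [o * (1.0 - o) * (t - o) for o, t in zip(outputs, targets)]

def host_hidden_errors(weights, output_deltas):
    # ARM side: weights[j][k] is hidden j -> output k, so each hidden
    # node's back-propagated error is a dot product along row j.
    return [sum(w * d for w, d in zip(row, output_deltas))
            for row in weights]

deltas = device_output_deltas([0.9, 0.2], [1.0, 0.0])
print(host_hidden_errors([[1.0, 0.5]], deltas))
```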

Re: Neural Network

Postby nickoppen » Sat Oct 05, 2013 4:52 am

Very happy with the announcement of the "new" secret features of the e16.

The multi-core multicast transactions look good for sharing the hidden node values and the output deltas. The DMA messaging will be interesting, and I hope that a multicast message can be sent and arrive while the cores are still working on the next node value. The e-Mesh traffic monitoring will be great for evaluating the relative performance of the inter-core communication.

Let's hope they announce hardware support for floating point vector calculations next.
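The overlap I'm hoping for looks something like this double-buffered loop (a Python sketch only; `send_async` and `wait` are stand-ins for whatever the real non-blocking DMA/multicast primitives turn out to be, not actual e-lib calls):

```python
# Double-buffered overlap of communication and compute: while the
# multicast of node value i-1 is still in flight, the core computes
# node value i into the OTHER buffer, then waits for the old send to
# drain before starting the next one.

def pipelined_broadcast(producers, send_async, wait):
    bufs = [None, None]   # two slots: one being sent, one being filled
    pending = None        # handle for the multicast still in flight
    for i, compute in enumerate(producers):
        slot = i % 2
        bufs[slot] = compute()        # overlaps with the previous send
        if pending is not None:
            wait(pending)             # previous multicast must finish
        pending = send_async(bufs[slot])  # kick off send, don't block
    if pending is not None:
        wait(pending)                 # drain the final multicast
```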

Re: Neural Network

Postby nickoppen » Sun Oct 06, 2013 10:48 am

I found this great resource for training data:

http://archive.ics.uci.edu/ml/index.html

Some weird stuff in there but some of the data sets look great.

Re: Neural Network

Postby CIB » Sun Oct 06, 2013 11:07 am

nickoppen wrote: ...I'm liking option 3 the best, i.e. push the output errors back to the host CPUs and get them to do some work...


The question is, why use the Parallella then? Apart from the problem of distributing the data, the hidden error calculation is as much a parallel task as the feed-forward passes. If this step doesn't scale with the number of cores, then IMO it will have been a rather pointless exercise, as the algorithm will only work well on the Parallella (if even there) and cannot be adapted for similar hardware with a higher number of cores.

Re: Neural Network

Postby nickoppen » Mon Oct 07, 2013 9:45 am

Hi CIB,

I don't think that taking on this project for the Parallella is a pointless exercise. Firstly, the back-propagation part of the process may not have an elegant, scalable solution on this architecture, but that does not mean that you won't get speed improvements. Secondly, those speed improvements (and the bottleneck penalties) are not yet quantified and, being neither a hardware guru nor a mathematician, I can't predict what they'll be. The only way I have to find out is to suck it and see. Also, as I get to know the hardware's capabilities better, a better solution may become evident.

As regards why do it on the Parallella when it cannot be adapted - there is no "standard" parallel architecture, so whether you write it for the Epiphany, the Phi or the Cell, there will always be idiosyncrasies due to the processor architecture. I'm happy to write it for the Parallella because I can buy one for $99. The Phi costs around $1500 and the Cell is dead.

All that aside, I think the Parallella is cool, and a feed-forward, back-propagation neural network is the only algorithm I know of that seems likely to benefit from being written for it, so that's the one I'm doing.

nick
