Re: Neural Network
Posted:
Tue Oct 01, 2013 2:46 am
by stealthpaladin
Re: Neural Network
Posted:
Wed Oct 02, 2013 8:27 am
by roberto
Re: Neural Network
Posted:
Wed Oct 02, 2013 10:50 am
by nickoppen
Thanks Roberto, I didn't realise that. I'll try to figure out how to make it more public.
In the meantime, please post your comments on this forum.
nick
Re: Neural Network
Posted:
Wed Oct 02, 2013 7:24 pm
by CIB
Re: Neural Network
Posted:
Fri Oct 04, 2013 6:31 pm
by CIB
@nickoppen: Good read. It seems the big problem is calculating the error for the hidden nodes. It'll be a real bottleneck if we can't get that to process about as fast as the feedforward part of the algorithm. We might have to revise the whole idea of how the weights are stored and distributed to get this to work efficiently on the Epiphany (as in, making actual use of the Epiphany architecture, and thus having a reason to use the Epiphany over highly mass-produced mainstream CPUs/GPUs). Or maybe it's not possible at all. That, of course, would be ironic considering the Parallella's new logo! =)
Re: Neural Network
Posted:
Fri Oct 04, 2013 10:25 pm
by nickoppen
Thanks CIB. I agree that the hidden node errors are the trickiest bit. As I think about the various ways of solving this problem, I'm liking option 3 the best, i.e. push the output errors back to the host CPUs and get them to do some work. I know that this is not a "pure" Epiphany solution, but I'm not a purist and I don't see any problem with treating the ARM cores as just another resource.
Re: Neural Network
Posted:
Sat Oct 05, 2013 4:52 am
by nickoppen
Very happy with the announcement of the "new" secret features of the e-16.
The multi-core multicast transactions look good for sharing the hidden node values and the output deltas. The DMA messaging will be interesting, and I hope that a multicast message can be sent and arrive while the cores are still working on the next node value. The e-Mesh traffic monitoring will be great for evaluating the relative performance of the inter-core communication.
Let's hope they announce hardware support for floating point vector calculations next.
Re: Neural Network
Posted:
Sun Oct 06, 2013 10:48 am
by nickoppen
I found this great resource for training data:
Some weird stuff in there but some of the data sets look great.
Re: Neural Network
Posted:
Sun Oct 06, 2013 11:07 am
by CIB
Re: Neural Network
Posted:
Mon Oct 07, 2013 9:45 am
by nickoppen
Hi CIB,
I don't think that taking on this project for the Parallella is a pointless exercise. Firstly, the back propagation part of the process may not have an elegant, scalable solution on this architecture, but that does not mean that you won't get speed improvements. Secondly, those speed improvements (and the bottleneck penalties) are not yet quantified and, being neither a hardware guru nor a mathematician, I can't predict what they'll be. The only way to find out is to suck it and see. Also, in getting to know the capabilities better, a better solution may become evident.
As to why do it on the Parallella when it cannot be adapted - there is no "standard" architecture, so whether you write it for the Epiphany, the Phi or the Cell, there will always be idiosyncrasies due to the processor architecture. I'm happy to write it for the Parallella because I can buy one for $99. The Phi costs around $1500 and the Cell is dead.
All that aside, I think the Parallella is cool, and a feed-forward, back-propagation neural network is the only algorithm I know of that would seem to benefit from being written for it, so that's the one I'm doing.
nick