Parallella is dead. So Long, and Thanks for All the Fish

Forum for anything not suitable for the other forums.

Re: Parallella is dead. So Long, and Thanks for All the Fish

Postby ed2k » Sat Jul 22, 2017 5:32 am

The lesson I learned is that parallel computing needs fast inter-core communication and a big cache. Many individual computing cores cannot do much on their own. Maybe that is why we need to put smart people in the same room.
ed2k
 
Posts: 113
Joined: Mon Dec 17, 2012 3:27 am

Re: Parallella is dead. So Long, and Thanks for All the Fish

Postby e97 » Fri Jul 28, 2017 1:33 am

I backed Parallella on Kickstarter. Super impressed that it shipped and works!

Sad to hear that Andreas has moved on, but it is understandable. Starting a company is hard. Building and shipping products is hard.

I don't think Parallella is dead. It needs a killer app to showcase its power and usefulness.

As a software guy learning about hardware, I find the board has all the bells and whistles: ARM SoC, FPGA, and Epiphany-III, and there are tutorials out there (doing an FFT, building a cluster...), but I have yet to see a "HOLY SHIT" moment where someone does something amazing.

I've since bought a separate FPGA development kit because it has lots of materials and mini projects that help me learn how to build and create for an FPGA. Same with the Raspberry Pi and why it's such a big success: lots of show and tell, community, and documentation.

If someone showed me TensorFlow or some AI library running on Parallella with a better compute/power ratio than GPUs, or something like that, it would be an overnight success.

I will continue to tinker with it and learn. I'm currently deciding whether to keep or sell my Kickstarter edition A101040; if anyone is interested, shoot me a PM.

edit: Right now the killer app is *coin mining... if you could create something plug-and-play that could efficiently mine *coin (BTC, Litecoin, Ethereum), that would be the HOLY SHIT moment as of right now (2017-07-25)
e97
 
Posts: 10
Joined: Sun Jul 19, 2015 1:56 am

Re: Parallella is dead. So Long, and Thanks for All the Fish

Postby DonQuichotte » Sat Jul 29, 2017 3:57 pm

Hi

(I'm not a fan of electronic *coins: they are not ecologically friendly.
Just for studying, though, why not.)

There is a Riecoin solver in the Parallella examples.
It may inspire you.
Good luck!
DonQuichotte
 
Posts: 46
Joined: Fri Apr 29, 2016 9:58 pm

Re: Parallella is dead. So Long, and Thanks for All the Fish

Postby KNERDY » Sun Jul 30, 2017 12:04 am

e97 wrote:I don't think Parallella is dead. It needs a killer app to showcase its power and usefulness.
[...]
edit: Right now the killer app is *coin mining... if you could create something plug-and-play that could efficiently mine *coin (BTC, Litecoin, Ethereum), that would be the HOLY SHIT moment as of right now (2017-07-25)



There is already a "killer app" for cryptocurrency: dedicated mining hardware (ASICs), which blows away GPUs. A popular one is the AntMiner.
KNERDY
 
Posts: 13
Joined: Thu Jul 20, 2017 9:23 pm

Re: Parallella is dead. So Long, and Thanks for All the Fish

Postby dobkeratops » Thu Aug 03, 2017 11:12 pm

ed2k wrote:The lesson I learned is that parallel computing needs fast inter-core communication and a big cache. Many individual computing cores cannot do much on their own. Maybe that is why we need to put smart people in the same room.


Absolutely false.

It just can't run traditional programs.
Neural nets and graphics workloads fit perfectly well; that's proven.

Note that the Chinese supercomputer (Sunway TaihuLight) works in a similar way to the Epiphany architecture.

We had the Cell processor in gamedev years ago; again, it was very bad when it came to adapting traditional software designed for PCs, but when people designed new systems from the ground up (raytracers, etc.), they *were* blindingly fast on it.

e97 wrote:I don't think Parallella is dead. It needs a killer app to showcase its power and usefulness.


The killer app is neural nets: all the rage today, applicable to so many walks of life, and a *perfect fit* as a dataflow problem. Unfortunately there seems to have been a chicken-and-egg situation getting the advanced chip built, so people keep running these on GPUs. GPUs are already massively parallel, but don't give as much control over local caching. The Epiphany-V would have been able to keep some types of vision net cached permanently on chip, with a transistor count similar to the NVIDIA chips.
dobkeratops
 
Posts: 189
Joined: Fri Jun 05, 2015 6:42 pm
Location: uk

Re: Parallella is dead. So Long, and Thanks for All the Fish

Postby DonQuichotte » Fri Aug 04, 2017 9:31 am

It is also false for a class of backtracking algorithms.
As long as a backtracking algorithm can be split into millions of independent subproblems with a small memory footprint,
the Parallella is the best choice for the energy/performance ratio.
Epiphany-V would have been tremendous :'(
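
To make that concrete, here is a minimal, hardware-independent sketch (plain C; nothing below comes from the eSDK) of how a backtracking search like N-queens splits into independent subproblems by fixing the first two rows. Each subproblem only needs a few dozen bytes of state, so it would fit comfortably in a 32 KB Epiphany scratchpad; on real hardware you would dispatch one subproblem per core and merge the counts.

Code: Select all
/* Splitting N-queens into independent subproblems: fix the queens in the
 * first two rows, then let each subproblem finish the search on its own
 * (on real hardware, one subproblem per core). Plain C sketch; nothing
 * here is from any SDK. */
#include <stdio.h>
#include <stdlib.h>

#define N 12

/* Check that a queen in (row, col) does not attack queens in earlier rows. */
static int safe(const int *cols, int row, int col)
{
    for (int r = 0; r < row; r++) {
        if (cols[r] == col || abs(cols[r] - col) == row - r)
            return 0;
    }
    return 1;
}

/* Classic recursive backtracker, counting solutions from 'row' downward.
 * Its whole state is the small 'cols' array. */
static long solve(int *cols, int row)
{
    if (row == N)
        return 1;
    long count = 0;
    for (int col = 0; col < N; col++) {
        if (safe(cols, row, col)) {
            cols[row] = col;
            count += solve(cols, row + 1);
        }
    }
    return count;
}

int main(void)
{
    long total = 0;
    /* Every valid placement of rows 0 and 1 is an independent subproblem.
     * In a parallel version, each (c0, c1) pair would go to a different
     * core and only the per-subproblem counts would be merged. */
    for (int c0 = 0; c0 < N; c0++) {
        for (int c1 = 0; c1 < N; c1++) {
            int cols[N] = { c0, c1 };
            if (safe(cols, 1, c1))
                total += solve(cols, 2);
        }
    }
    printf("solutions for N=%d: %ld\n", N, total);
    return 0;
}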

(I never tried to learn about neural nets, but I was shocked recently when I heard about Google's AlphaGo beating the human world champion.
Just a few years ago, computers were unable to beat even a modest human amateur at Go...
and now this AI crushes everything else AND is creative the way a human genius mind sometimes is; this is just amazing.)

For neural nets there is a new actor with incredible performance: the Movidius Fathom stick. 80 to 150 Gflops... for 1 W, it seems hard to believe.
I wonder how it deals with IF/ELSE and whether it can do anything other than video processing; and this theoretical peak performance is probably only reached with an exotic instruction sequence. Anyway, appealing.
DonQuichotte
 
Posts: 46
Joined: Fri Apr 29, 2016 9:58 pm

Re: Parallella is dead. So Long, and Thanks for All the Fish

Postby dobkeratops » Fri Aug 04, 2017 8:12 pm

DonQuichotte wrote:For neural nets there is a new actor with incredible performance: the Movidius Fathom stick. 80 to 150 Gflops... for 1 W, it seems hard to believe.


> 80 to 150 Gflops

They might be quoting 'ops' rather than flops, but that's what counts.

I think the E5 would have been great for that, depending on what SIMD instructions it had (i.e. the packed low-precision stuff, 8x8 bits, etc.). Movidius has one big scratchpad shared between its cores, but with Epiphany's inter-core addressing I think you could have partitioned a fraction of each core's scratchpad to do the same thing: a tiled "shared scratchpad". But the vision stuff fits individual cores perfectly well IMO, e.g. store different filters on each core, then stream the image through them.
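
A rough sketch of that "different filter on each core, stream the image through them" idea, written as plain C so it runs anywhere; the workgroup size, kernel size, and buffer names are just illustrative, and the sequential loop at the bottom stands in for all the cores running at once. On Epiphany, each kernels[core] array would live in that core's local scratchpad.

Code: Select all
/* Per-core filter bank sketch: each simulated "core" keeps one 3x3 kernel
 * in a small local buffer (on Epiphany this would live in that core's
 * 32 KB scratchpad) and the same input tile is streamed past all of them.
 * Names and sizes are illustrative. */
#include <stdio.h>

#define NCORES 16          /* e.g. a 4x4 Epiphany workgroup */
#define W 64
#define H 64

static float kernels[NCORES][3][3];   /* one tiny filter per core */
static float input[H][W];
static float output[NCORES][H][W];

/* What a single core would do: convolve the streamed tile with its
 * locally stored kernel and write its own output plane. */
static void core_convolve(int core)
{
    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
            float acc = 0.0f;
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    acc += kernels[core][ky + 1][kx + 1] * input[y + ky][x + kx];
            output[core][y][x] = acc;
        }
    }
}

int main(void)
{
    /* Fill the input and give every core a slightly different kernel. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            input[y][x] = (float)((x + y) % 7);
    for (int c = 0; c < NCORES; c++)
        for (int ky = 0; ky < 3; ky++)
            for (int kx = 0; kx < 3; kx++)
                kernels[c][ky][kx] = (c + ky * 3 + kx) * 0.01f;

    /* Sequential stand-in for "all cores run at once". */
    for (int c = 0; c < NCORES; c++)
        core_convolve(c);

    printf("sample output, core 0: %f\n", output[0][1][1]);
    return 0;
}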

DonQuichotte wrote:...Google's AlphaGo beating the human world champion...


The details of their TPU are out: it's just a big low-precision matrix multiply engine, again with some on-chip storage. The CPU has to guide it, but then it just churns through a big array, with 256x256 8-bit multipliers wired up to accumulators, or something like that. The really interesting thing about the E5 is that it would have been able to implement any kind of compression scheme/sparsity for the weights... it has 64 MB on chip; even if half was workspace, that's 32 MB, and there are papers describing how to fit an entire vision net into that ballpark: https://www.oreilly.com/ideas/compressi ... l-networks
That would have been amazing on this chip.
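
For reference, the "low-precision multipliers wired up to accumulators" arithmetic boils down to something like the following: 8-bit operands multiplied and summed into 32-bit accumulators. This is a plain-C sketch of the math only, not the TPU's actual interface, and the matrix sizes are illustrative.

Code: Select all
/* Sketch of TPU-style low-precision matrix multiply: int8 operands,
 * int32 accumulators. Matrix sizes are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define M 4
#define K 8
#define N 4

static void matmul_i8(const int8_t a[M][K], const int8_t b[K][N],
                      int32_t c[M][N])
{
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++) {
            int32_t acc = 0;                 /* wide accumulator */
            for (int k = 0; k < K; k++)
                acc += (int32_t)a[i][k] * (int32_t)b[k][j];
            c[i][j] = acc;
        }
    }
}

int main(void)
{
    int8_t a[M][K], b[K][N];
    int32_t c[M][N];
    for (int i = 0; i < M; i++)
        for (int k = 0; k < K; k++)
            a[i][k] = (int8_t)(i - k);
    for (int k = 0; k < K; k++)
        for (int j = 0; j < N; j++)
            b[k][j] = (int8_t)(k + j);
    matmul_i8(a, b, c);
    printf("c[0][0] = %d\n", c[0][0]);
    return 0;
}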
dobkeratops
 
Posts: 189
Joined: Fri Jun 05, 2015 6:42 pm
Location: uk

Re: Parallella is dead. So Long, and Thanks for All the Fish

Postby frankbuss » Thu Aug 17, 2017 11:54 am

ninlar wrote:Some others mentioned that they enjoyed this board because it was "hard." I agree, I like the challenge. But at the same time, some things are a pain in the ass. So many of the GitHub repos have submodules that reference other repositories. The Readme.md files are confusing as to which branch I should be using. If the community is still motivated, I'd love to see the repositories consolidated better and many of the submodules eliminated. It fits well for the examples, but not so much for the Oh! Library, FPGA sources, etc. And then the Epiphany SDK is posted under the Adapteva account. It should all be brought under a single GitHub account without the submodules. The branches should be killed and replaced with a well-known model like GitFlow to remove confusion.


I agree. There is nothing wrong with being hard, but the website should be updated with some tutorials on how to do the basic things. When I look at the parallella.org page under "programming", there are some links to example projects, but which one should I use? Maybe I missed it, but I think a step-by-step tutorial for a "hello world" program for the Epiphany chip, including power-on, how to log in to the Parallella (if using the headless version), compiling, etc., should be linked from the main page.
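
For what it's worth, the host side of such a hello world could look roughly like the sketch below, in the spirit of the eSDK examples. It uses the standard e-hal calls (e_init, e_open, e_load_group, e_read), but the device binary name and the 0x2000 result address are assumptions for illustration; the canonical version lives in the epiphany-examples repository.

Code: Select all
/* Minimal host-side sketch: load a device program onto a 4x4 workgroup
 * and read one word back from each core's local memory. "e_hello.elf"
 * and the 0x2000 result address are assumptions for illustration.
 * Build on the Parallella, roughly: gcc hello_host.c -o hello_host -le-hal */
#include <stdio.h>
#include <unistd.h>
#include <e-hal.h>

int main(void)
{
    e_platform_t platform;
    e_epiphany_t dev;
    unsigned rows = 4, cols = 4;

    e_init(NULL);                      /* use the default platform HDF */
    e_reset_system();
    e_get_platform_info(&platform);

    e_open(&dev, 0, 0, rows, cols);    /* 4x4 workgroup at (0,0) */
    e_load_group("e_hello.elf", &dev, 0, 0, rows, cols, E_TRUE);

    usleep(100000);                    /* crude wait for the cores to finish */

    for (unsigned r = 0; r < rows; r++) {
        for (unsigned c = 0; c < cols; c++) {
            unsigned result = 0;
            /* Assumed: the device program left a result word at 0x2000. */
            e_read(&dev, r, c, 0x2000, &result, sizeof(result));
            printf("core (%u,%u): 0x%08x\n", r, c, result);
        }
    }

    e_close(&dev);
    e_finalize();
    return 0;
}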

Same for the advanced things. I'm most interested in the FPGA, and the information on how to use it is scattered across the forum, blogs, and GitHub. This should also be linked from the parallella.org webpage, e.g. an updated article for the accelerator. The blog post at https://www.parallella.org/2016/01/21/c ... 5-minutes/ didn't work out of the box for me; there was no /dev/epiphany device, I guess because I installed the headless Parabuntu version without Epiphany support, so I had to write my own driver. This could be simplified by using UIO, which would work for any Parabuntu configuration (if enabled in the Linux kernel).
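
A minimal user-space UIO sketch of what I mean: map the FPGA register window through /dev/uio0 and read/write a register directly, assuming the AXI register block has been declared as a generic-uio device in the device tree. The device node, map size, and register offset below are assumptions.

Code: Select all
/* Sketch: access FPGA registers from user space via UIO instead of a
 * custom driver. Assumes the register block shows up as /dev/uio0;
 * the 4 KB map size and register offset 0 are illustrative. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE 0x1000   /* one page of FPGA registers */

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/uio0");
        return 1;
    }

    /* UIO exposes its first memory region at mmap offset 0. */
    volatile uint32_t *regs = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    printf("reg[0] before: 0x%08x\n", regs[0]);
    regs[0] = 0x12345678;              /* write a test value */
    printf("reg[0] after:  0x%08x\n", regs[0]);

    munmap((void *)regs, MAP_SIZE);
    close(fd);
    return 0;
}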

And I guess for many people it would be useful to have a "hello world" example showing how to use the GPIOs from inside the FPGA, including a Linux driver and a user-mode application, as I've done for my 64-channel / 100 MHz logic analyzer ( http://www.frank-buss.de/parallella/sampler ). It is not that difficult anymore with the new Vivado IDE, compared to the older Xilinx PlanAhead tools, but currently it is a lot of work to set everything up and write the boilerplate code for such a project before you can even start on the actual project itself. This might discourage people early, and then they use something simpler like an FPGA dev board, where you can just open an example, upload your design to the FPGA with one click, and connect a microcontroller to it (which of course is much less powerful, or more complicated in the end on the hardware and interface side).

I think a good idea would be a Vivado project with an easy-to-use interface for your own Verilog/VHDL components: something like just passing the register number and a read/write signal, because the accelerator.v file still has a lot of overhead and is more complicated to use and extend than necessary, which distracts from your own projects. A Linux kernel module could use the same interface, so that you have just a simple C file with a read and a write function to access the FPGA registers of your project, called from the kernel module, and a user-mode application with ioctl examples showing how to use both functions.
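
The user-mode half of that could be as small as the following sketch: a struct carrying a register number and a value, passed through two ioctl codes to a hypothetical /dev/fpga_regs node. The device name, magic number, and command codes are made up for illustration; the matching kernel module would simply forward them to the read and write functions that touch the FPGA.

Code: Select all
/* Sketch of the simple ioctl interface described above: the user-mode
 * program passes a register number plus value, and a small kernel module
 * (not shown) forwards it to a pair of read/write functions. Device name,
 * magic number, and command codes are hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct fpga_reg {
    uint32_t num;     /* register number, decoded by the FPGA logic */
    uint32_t value;   /* value to write, or value read back */
};

#define FPGA_IOC_MAGIC  'F'
#define FPGA_IOC_WRITE  _IOW(FPGA_IOC_MAGIC, 1, struct fpga_reg)
#define FPGA_IOC_READ   _IOWR(FPGA_IOC_MAGIC, 2, struct fpga_reg)

int main(void)
{
    int fd = open("/dev/fpga_regs", O_RDWR);
    if (fd < 0) {
        perror("open /dev/fpga_regs");
        return 1;
    }

    struct fpga_reg reg = { .num = 3, .value = 0xdeadbeef };
    if (ioctl(fd, FPGA_IOC_WRITE, &reg) < 0)
        perror("FPGA_IOC_WRITE");

    reg.value = 0;
    if (ioctl(fd, FPGA_IOC_READ, &reg) < 0)
        perror("FPGA_IOC_READ");
    printf("reg %u = 0x%08x\n", reg.num, reg.value);

    close(fd);
    return 0;
}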
frankbuss
 
Posts: 14
Joined: Mon Dec 17, 2012 3:22 am
