boost compute, boost lambda.


Postby dobkeratops » Mon Apr 17, 2017 11:19 am

Would it be possible to get an approach like this working on the Epiphany? http://kylelutz.blogspot.co.uk/2014/03/custom-opencl-functions-in-c-with.html ('boost compute'), which seems to use tricks like http://www.boost.org/doc/libs/1_60_0/doc/html/lambda.html ('boost lambda') to create OpenCL kernel code. Basically, it appears to use template magic to roll 'function objects' and compile them as OpenCL kernels.
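To make the idea concrete, here's a minimal sketch of the technique being described (this is not Boost.Compute's real internals, just an illustration with no Boost dependency): an expression-template 'lambda' whose tree prints itself out as OpenCL C source.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: an expression-template tree that can render
// itself as OpenCL C source, in the spirit of boost::compute::lambda.
struct Arg { std::string name; };              // a named kernel operand

template <class L, class R>
struct Add { L l; R r; };                      // node representing l + r

inline Add<Arg, Arg> operator+(Arg a, Arg b) { return {a, b}; }

inline std::string to_source(const Arg& a) { return a.name; }
template <class L, class R>
std::string to_source(const Add<L, R>& e) {
    return "(" + to_source(e.l) + " + " + to_source(e.r) + ")";
}

// Wrap the rendered expression body in a complete OpenCL kernel string.
template <class Expr>
std::string make_kernel(const std::string& name, const Expr& body) {
    return "__kernel void " + name +
           "(__global const float* a, __global const float* b,"
           " __global float* out) {\n"
           "    int i = get_global_id(0);\n"
           "    out[i] = " + to_source(body) + ";\n"
           "}\n";
}
```

Usage would look like `make_kernel("vadd", Arg{"a[i]"} + Arg{"b[i]"})`, producing a kernel whose body is `out[i] = (a[i] + b[i]);` and which could then be handed to an OpenCL (or Epiphany) toolchain.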

The appeal would be writing portable code (CPU, GPU, e-cores...): you could write iterators that work on both the GPU and the Epiphany, and on the Epiphany they'd do something smarter with the dataflow where possible.

It's not as nice as real lambda functions, but with a rich enough set of 'iterators' it might handle a lot of useful cases.

(e.g. one could write a 'convolutional iterator' and pass in a function to apply to the result, like the activation function for a neural net. Or rely on some expression-template magic to combine 'convolve' and 'map' operations.)
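A minimal sketch of that 'convolve then map' fusion on the host side (the name `convolve_map` and the 1-D shape are assumptions for illustration); the activation is applied inside the convolution loop, so no second pass over the output is needed:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: a 1-D sliding-window 'convolution' fused with a
// user-supplied activation functor, as the post suggests.
// (Strictly a correlation: the kernel is not flipped.)
template <class Act>
std::vector<float> convolve_map(const std::vector<float>& in,
                                const std::vector<float>& k,
                                Act act)
{
    std::size_t n = in.size() - k.size() + 1;   // 'valid' output length
    std::vector<float> out(n);
    for (std::size_t i = 0; i < n; ++i) {
        float acc = 0.f;
        for (std::size_t j = 0; j < k.size(); ++j)
            acc += in[i + j] * k[j];
        out[i] = act(acc);                      // fused map, no extra pass
    }
    return out;
}
```

For example, `convolve_map({1,-1,2}, {1,1}, relu)` with a ReLU functor yields `{0, 1}`. An Epiphany backend could in principle perform the same fusion inside the generated kernel rather than on the host.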

I think 'boost compute' still relies on the ability to invoke the OpenCL compiler from within an application.
I envisage the framework rolling stub code into the Epiphany kernels for synchronization/DMA, and inlining the code generated by the lambda function objects.
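A rough sketch of that stub-plus-inlined-body idea (all names here are assumptions; a real version would carry the Epiphany synchronization/DMA boilerplate where the comment sits):

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: wrap a generated expression body in a fixed
// Epiphany kernel stub; the stub would own per-core chunking, DMA
// transfers, and barriers, while the body is spliced in verbatim.
std::string wrap_in_stub(const std::string& body_expr)
{
    return
        "void kernel_main(const float* a, const float* b,"
        " float* out, int n) {\n"
        "    /* stub: per-core chunking / DMA / barrier would go here */\n"
        "    for (int i = 0; i < n; ++i)\n"
        "        out[i] = " + body_expr + ";\n"
        "}\n";
}
```

So `wrap_in_stub("(a[i] + b[i])")` yields a complete kernel with the generated body inlined into the synchronization skeleton.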

(Other inspiration along these lines is Microsoft's C++ AMP. I originally had some more complex ideas in mind, e.g. writing an LLVM preprocessor to extract functions during the build process, but I'm not sure there's LLVM support for the Epiphany.)
dobkeratops
 
Posts: 162
Joined: Fri Jun 05, 2015 6:42 pm
Location: uk

Re: boost compute, boost lambda.

Postby jar » Tue Apr 18, 2017 1:59 pm

Yes, it has been done with CLETE from BDT years ago. That solution was a bit rough around the edges. We have a new approach to this, and I'll see about getting the paper and code available this week. It's still a bit rough, but it begins to address OpenCL's shortcomings for the Epiphany (or anything besides a GPU), as well as providing the high-level C++.

One of the practical problems with using OpenCL is that there is a lot of latency, particularly if you have to compile your kernel at runtime, copy memory from host space to device space (less of an issue with Parallella UVA), and copy your kernel binary to the device (though that last one is not unique to OpenCL).

Forum member upcfrost is porting LLVM. I'm not a compiler guy, but if a compiler had the ability to target multiple architectures with a (kernel) function attribute, it would greatly improve programmability by enabling monolithic code and clearly-defined kernel interfaces.
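Something like the following, perhaps. To be clear, this is purely speculative: `KERNEL`, the `__EPIPHANY__` macro, and the section name are all invented for illustration; no such multi-target attribute exists today.

```cpp
#include <cassert>

// Hypothetical sketch of a per-target kernel attribute. On a (made-up)
// Epiphany build the function would be placed in a dedicated section for
// offload; on the host it compiles as a plain function.
#if defined(__EPIPHANY__)
#  define KERNEL __attribute__((section(".kernel")))
#else
#  define KERNEL
#endif

// One monolithic source file; the same function serves host and device.
KERNEL void vadd(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}
```

The attractive part is the clearly-defined interface: the kernel's signature is an ordinary C function prototype, visible to both the host and device sides of the build.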
jar
 
Posts: 243
Joined: Mon Dec 17, 2012 3:27 am

Re: boost compute, boost lambda.

Postby dobkeratops » Tue Apr 18, 2017 9:04 pm

Oh nice.. LLVM is so useful with the IR (I haven't worked on the compiler itself, but I have had a bash at writing a pet language targeting LLVM IR).

Compiling shaders/kernels on startup does seem like a hazard for embedded scenarios. I think extracting lambdas/kernel functions during the build process should get around that.
dobkeratops
 
Posts: 162
Joined: Fri Jun 05, 2015 6:42 pm
Location: uk

