
opencv and epiphany

PostPosted: Tue Mar 10, 2015 12:20 pm
by Nader_Jendoubi
I want to implement a simple OpenCV program: apply an image transformation (a Harris corner detector) to a video stream from a webcam and display the result in real time.
I have a few questions:
Should I use the same approach as in the face-detect example? (I only have a video stream and the code; I don't have a classifier.)
Should I use DRAM, or just the Epiphany's local memory (the per-core memory banks)?
Does the OpenCV code have to be implemented on the host (ARM) side?
How do I link the OpenCV code with the ARM (host) side code?
Do I need to include any header files in the project?
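For what it's worth, the Harris response itself is just a little per-pixel math, which matters here because the Epiphany cores can't link against OpenCV — on the host you would call OpenCV's cornerHarris, but anything offloaded to the cores has to be written by hand. Here is a rough plain-NumPy sketch of that math (function and parameter names are mine, central differences stand in for OpenCV's Sobel gradients):

```python
import numpy as np

def harris_response(gray, k=0.04, window=3):
    """Harris corner response R for a 2-D grayscale image (illustrative sketch)."""
    gray = gray.astype(np.float64)
    # Image gradients; axis 0 = rows (y), axis 1 = cols (x).
    Iy, Ix = np.gradient(gray)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    # Average the gradient products over a small window (separable box filter).
    kern = np.ones(window) / window
    def smooth(a):
        a = np.apply_along_axis(np.convolve, 1, a, kern, mode='same')
        return np.apply_along_axis(np.convolve, 0, a, kern, mode='same')

    Sxx, Syy, Sxy = smooth(Ixx), smooth(Iyy), smooth(Ixy)

    # R = det(M) - k * trace(M)^2 for the 2x2 structure tensor M.
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

On a synthetic image of a white square on black, the response peaks at the square's corners, goes negative along its edges, and stays near zero in flat regions, which is the behaviour you would threshold on.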



Re: opencv and epiphany

PostPosted: Sat Apr 04, 2015 6:09 am
by nickoppen
Hi Nader,

I think that your corner detection project is a good one and well suited to the Epiphany chip.

I don't know the answers to your questions, but here's what you should do: START! Decide what you want to code with, the eSDK or OpenCL. Chip off a small sub-project to start with and implement it. Getting to the end of a tiny project will answer a few of the hundreds of questions that you will run into along the way. That is what everybody else is doing.

If you want to use OpenCL, have a look at my blog. I'm working on a neural network simulator and I've had some ideas about how to do it (see the earliest post). Along the way I've written posts on each little sub-project I've undertaken, starting with getting something to compile. I've also posted example code on github if you want to have a look at it.

That's the only way available at the current time. At some point in the future there might be a book entitled "OpenCV on the Parallella", but right now there is the documentation from Adapteva and Brown Deer, a few examples and blog posts, and that's it.

Please consider writing up your experiences in a blog of some sort because... "Sharing is what makes the internet Great".