So, I'm working on my own information representation system, which has a few things in common with Prolog. For that reason, I recently looked up how Prolog does proof search. As it turns out, the concept is almost identical to my own idea, which also means this should be incredibly simple to parallelize.
Here's the gist of it: http://www.learnprolognow.org/lpnpage.p ... pn-htmlse6
From what I understand, each core could simply be assigned an item that it tries to prove. The database to check for proofs could be stored in core-local memory (if very small), in on-die memory (if small), or, if large-ish, it could be stored in SDRAM and "streamed" through the Epiphany. Each core can issue new "search commands", which would be scheduled to run against the complete database on one of the cores. On the ARM cores, a "search graph" would be built, and the results from the individual cores would be assembled there as they come in.
Overall, this seems like a problem that could make very good use of both the Parallella's parallel nature *and* its ability to do branches, execute different code on each core, and so on.
Implementation is another matter. As I said, I'm working on something like this myself, and will probably eventually write a proof search optimized for the Epiphany for my own system. But whether Prolog itself is a simple enough framework that pulling its proof search out onto the Epiphany is worthwhile, I don't know.