RPi Camera bounty

Any technical questions about the Epiphany chip and Parallella HW Platform.

Moderator: aolofsson

RPi Camera bounty

Postby tnt » Sun Jun 07, 2015 6:41 pm

Hi,

Just wanted to share my current thoughts on this challenge. Note that I don't think I'll really go for the bounty myself, since it would be quite a bit of work on the software side too and I have no real interest in that, but the HW interfacing is interesting to me.

Here are a few of the relevant points:

RPi Camera board has no schematics

This part of the RPi is closed. The pinout is known though, and we also know the sensor is an OV5647.
The things to notice that aren't clear from the pinout are:
* CAM_GPIO is the "enable" pin. It must be pulled high to enable the onboard voltage regulator. When low, everything is shut down in a low-power mode.
* CAM_CLK is not a clock at all. It controls the red LED on the board; apply a 'high' signal to light it up. (The current-limit resistor is already on the camera board.)
* The clock for the sensor is generated on-board and is 25 MHz.

OV5647 sensor

The datasheet for the sensor is in theory under NDA.
But it's easy enough to find a leaked preliminary version : http://www.seeedstudio.com/wiki/images/ ... 7_full.pdf

However the register description is not perfect: some registers are missing and some are just wrong (for some regs you can actually find two different descriptions in the same PDF ... neither of which seems to match reality). But it's better than nothing.

The code configuring the sensor from the RPi is closed-source AFAICT, and it wouldn't be all that useful anyway, because the RPi uses the GPU to manually control exposure / white balance / pixel defect correction / ... and a bunch of other stuff that the camera can do on its own. Thankfully you can find source code for other SoCs that use the OV5647 and find some pre-built register sets that can be used as a base.
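To illustrate the kind of register-table bring-up that implies, here's a minimal Python sketch of pushing such a table over I2C. The 7-bit address 0x36, the example register values, and the `bus_write` callback are assumptions for illustration; what's solid is that OmniVision parts take a 16-bit register address followed by 8-bit data.

```python
# Sketch of pushing a register table into the sensor over I2C (SCCB).
# The 7-bit address 0x36 and the example registers are assumptions;
# OmniVision parts take a 16-bit register address, then 8-bit data.

def sccb_write_payload(reg, val):
    """Bytes to send after the I2C address for a single register write."""
    return bytes([(reg >> 8) & 0xFF, reg & 0xFF, val & 0xFF])

def apply_table(bus_write, table):
    """bus_write(addr, payload) is whatever your I2C layer provides."""
    for reg, val in table:
        bus_write(0x36, sccb_write_payload(reg, val))

# Hypothetical base table, e.g. lifted from another SoC's OV5647 driver:
BASE_TABLE = [
    (0x0100, 0x00),  # software standby while configuring
    (0x0103, 0x01),  # software reset
]
```

From there, a real bring-up would append the mode/timing registers found in whatever other-SoC driver you use as a base.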

CSI-2 & D-PHY specs are not public

Not really an issue; google finds them without too much trouble.
http://electronix.ru/forum/index.php?ac ... t&id=67362

Electrical interface

The LVDS25 receivers of the Zynq should be able to interpret the HS signal from the sensor; it's within their specified range, especially when configuring the sensor to its maximum HS output common mode.

Another potential issue is that the 100 ohm termination resistor is going to be present all the time and not just when in HS mode, so each LP signal will influence the other (and that's visible in the captures below). One way to fight this a bit is to set the LP drive strength to its maximum in the sensor configuration, or possibly to not use the internal 100 ohm resistor and use an external one of slightly higher value.

Finally, the main problem is that packet start detection and alignment rely on the LP->HS transition ... and the LVDS receiver won't be able to detect that. We may need to use another GPIO and a transistor/MOSFET to detect the LP 1->0 transition so that alignment can be performed. The Porcupine board will need mods for this.

I2C might also be an issue. The camera board has 18k pulldowns on the I2C lines, so the pullups will need to be strong enough to drive the line high enough for the Zynq to detect a '1'. That I2C bus was originally designed to work ...
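For a rough feel of the constraint, assuming a 3.3 V bus and a generic 0.7 * VDD input threshold (both assumptions; check the actual Zynq bank specs), the divider formed by the pullup and the 18k pulldown works out like this:

```python
# Worked example (assumptions: 3.3 V bus, VIH = 0.7 * VDD as a generic
# LVCMOS input threshold). With the camera's 18k pulldown to ground,
# the pullup Rp and the pulldown form a divider when the bus is released:
#     V_high = VDD * R_down / (R_down + Rp)

VDD = 3.3
R_DOWN = 18_000.0  # pulldown on the camera board

def v_high(rp):
    return VDD * R_DOWN / (R_DOWN + rp)

def max_pullup(vih_ratio=0.7):
    # Solve VDD * R_down / (R_down + Rp) >= vih_ratio * VDD for Rp
    return R_DOWN * (1.0 - vih_ratio) / vih_ratio

# A common 4.7k pullup still yields ~2.6 V, above a 2.31 V threshold:
print(round(v_high(4_700), 2))   # 2.62
print(round(max_pullup(), 0))    # 7714.0
```

So under those assumptions anything up to roughly 7.7k should still read as a '1'; stronger (smaller) pullups just add margin.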

Sample captures

Here are a few captures looking at the 'P' signal of data lane 0.

* This is a capture of a "Frame Start" packet :
img_frame_start.png

You can clearly see the influence of the termination resistor in LP mode. When this signal goes to '0' but the other one is still at '1', this signal doesn't go all the way down to 0 V but to more like 200 mV. Then when the other signal finally goes to 0 too, both sit at 0 V until the HS drivers are enabled.

* This is a capture of several lines of data when shining a flashlight at the camera.
img_flashlight.png

You can see the block of data stuck at '1' where the image is saturated.

* This is a zoomed in view of the header of a line of data
img_data.png

You can see the SoT sequence followed by the data type identifier for 'RAW10'.
Last edited by tnt on Wed Jun 10, 2015 5:32 am, edited 1 time in total.
tnt
 
Posts: 408
Joined: Mon Dec 17, 2012 3:21 am

Re: RPi Camera bounty

Postby patc » Mon Jun 08, 2015 3:33 pm

Thanks for sharing this info. Interesting what you said about "no schematics", "NDA", "not public". A couple of years ago I had no prior experience with camera stuff and was surprised by the general lack of reference designs out there. At the time I somehow got the impression it was some kind of jealously guarded secret!

I wanted to display 30 frames/second on an LCD and do simple image recognition with an STM32F4 MCU, so eventually I settled on the Toshiba TCM8230 (have not tried the higher-resolution TCM8240 though).

I designed a simple interface for a couple of bucks and it worked really great, at least for my needs: a vision system for a homemade pick-and-place https://www.youtube.com/watch?v=DJ44HgG9Oa0

Starting at 02:09: see that quick flash before the head gets positioned right above the 402 to pick it up? That's when the image is being transferred to the MCU for analysis.

Anyway, back to your post: maybe a prospective designer should check with Andreas whether it is mandatory to use the RPi camera module to qualify for this bounty...
patc
 
Posts: 83
Joined: Wed Aug 06, 2014 7:18 pm

Re: RPi Camera bounty

Postby tnt » Mon Jun 08, 2015 8:51 pm

Well, the RPi camera isn't a bad choice; it's readily available and very cheap. And even though not all the documentation is available, enough is known to get data out of it, and people have already succeeded in making it work with other SoCs.

Re: RPi Camera bounty

Postby aolofsson » Tue Jun 09, 2015 6:15 pm

patc wrote:I designed a simple interface for a couple of bucks and it worked really great, at least for my needs: a vision system for a homemade pick-and-place https://www.youtube.com/watch?v=DJ44HgG9Oa0

Anyway back to your post, may be a prospective designer should check with Andreas whether it is mandatory to use the RPi camera module to qualify for this bounty...


patc,
Another amazing project. Thanks for sharing! Let me think about the bounty to see if we can modify it. I am certainly not married to the Raspberry Pi module. In fact, the closed-source nature of it is quite annoying...
Andreas
aolofsson
 
Posts: 1005
Joined: Tue Dec 11, 2012 6:59 pm
Location: Lexington, Massachusetts,USA

Re: RPi Camera bounty

Postby aolofsson » Tue Jun 09, 2015 6:19 pm

tnt wrote:Just wanted to share my current thoughts on this challenge. Note that I don't think I'll really go for the bounty myself since this would be quite a bit of work on the software side too and I have no real interest in that, but the hw interfacing is interesting to me.


Thanks for sharing this information!! I am sure this will be very useful to many. Agreed that this would be a lot of work, but I do think the infrastructure that would come out of it would be generally useful for anyone working with FPGAs (thus a lot of work, but a lot of value to a lot of people). Wish we could offer a bigger bounty to everyone :D :D
Andreas

Re: RPi Camera bounty

Postby tnt » Wed Jun 10, 2015 6:08 am

Yeah, the closed-source nature of the RPi module is a bit annoying, but:
* They can't do anything about the sensor datasheet, and it's pretty much the same situation for all of them. And OmniVision is definitely one of the big ones.
* Same thing for the CSI interface. It's very common for a lot of cameras, and it uses few GPIOs, which is nice.
* It's available cheap with a semi-convenient pinout (i.e. not the raw sensor or an annoyingly small, impossible-to-find connector), and there are even clones of it by now.

So all in all only the schematic is missing, and there isn't much in there. All the missing details are filled in above.

It seems the IOs can indeed capture the data without much trouble. Below is a capture of a shift register where I save the last 8 captured bits on data lane 0:

img_shift.png


(Note that due to the layout of my GPIO breakout, the P/N pairs are swapped, and so every bit is inverted.)

You can see 0x47 (= ~0xB8), which is the SoT (Start of Transmission) sequence for the link synchronization. And then 4 cycles later (i.e. 8 DDR bits later): 0xD4 (= ~0x2B), which is the Data Type identifier for a RAW10 data frame. (And yes, on the picture I screwed up and marked 'EoT' instead of 'SoT'.)

The test setup looks at the state of a shift register inside the FPGA that I exported to some other GPIOs so I could monitor it with my scope, because I don't have JTAG on this breakout board and so can't use ChipScope :p Don't pay attention to the analog trace signal quality; the grounding lead was not properly set up for these high-speed signals and I was just using it to trigger on the LP -> HS transition.
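For reference, the alignment search that shift register enables can be modeled in a few lines of Python. This is a sketch, not the actual FPGA code; it assumes LSB-first byte serialization, which is how D-PHY puts bytes on the wire (and ignores the P/N inversion of my particular breakout).

```python
# Software model of the alignment search the FPGA has to do: D-PHY sends
# each byte LSB first, so we shift captured bits into bit 7 of a register
# and wait until the last 8 bits read back as the 0xB8 sync byte.

SYNC = 0xB8

def bits_lsb_first(data):
    for byte in data:
        for i in range(8):
            yield (byte >> i) & 1

def find_alignment(bitstream):
    """Return the bit offset where the sync byte starts, or None."""
    sr = 0
    for n, bit in enumerate(bitstream):
        sr = ((sr >> 1) | (bit << 7)) & 0xFF
        if n >= 7 and sr == SYNC:
            return n - 7
    return None

# A lane idles at 0 before SoT, so the sync byte lands at bit offset 8:
stream = list(bits_lsb_first(bytes([0x00, 0xB8, 0x2B])))
print(find_alignment(stream))  # 8
```

This is also why the pattern alone isn't a reliable sync point: the same 8 bits can appear by chance inside payload data, hence the need for the LP->HS transition as a qualifier.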

I have put IDELAYs in the PHY but they're currently fixed at 0 ... I just put them there so I can later control them from the ARM and possibly do some sort of link margin calibration if need be.

I will probably add a small BJT connected to another GPIO to detect the LP->HS transition, to provide a sync point for where to start looking for the SoT sequence (being so short, it naturally also appears inside the data stream). The 1.2 V of LP mode should be high enough to turn a small BJT on, and the ~0-500 mV range of HS mode should keep it off.

Re: RPi Camera bounty

Postby tnt » Wed Jun 10, 2015 8:07 pm

Just a small update: turns out that trying to detect LP with a BJT or a FET didn't work at all ... either not switching fast enough (or at all), or staying on when HS transfers happen ...

So I ended up just using another LVDS input and abusing it as a comparator. I feed the P branch with the data lane 0 positive signal (through a 1k to minimize the loading) and the N branch with a fixed 0.8 V from a resistor divider. Inside the FPGA, I read a '1' when in LP mode and a '0' when in HS mode.

And this seems to work fine. Total component count is just four 1k resistors.

Re: RPi Camera bounty

Postby tnt » Fri Jun 12, 2015 12:05 am

A bit more progress:

I did a quick hack to find the sync patterns and trigger captures aligned to them, push that into a FIFO toward an AXI DMA core, and then recover a few megabytes of packets at a time from the ARM.

This was basically to confirm I could see valid packets in there and indeed I can :)

I wrote a quick Python script to parse the packets and extract the payload, then fed that to GIMP so I could do some quick and dirty debayering; the result is attached.
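For anyone curious, the parsing boils down to something like the sketch below (hypothetical function names, not my actual script; ECC and CRC checks are skipped). The header layout itself is standard CSI-2: one Data ID byte (2-bit virtual channel + 6-bit data type), a 16-bit word count sent LSB first, and an ECC byte.

```python
# Minimal sketch of the packet parsing: a CSI-2 packet header is one
# Data ID byte, a 16-bit word count (LSB first), and an ECC byte (not
# validated here). Long packets carry WC payload bytes plus a 2-byte CRC.

RAW10 = 0x2B  # data type seen in the captures

def parse_header(hdr):
    data_id, wc_lo, wc_hi, ecc = hdr[:4]
    return {
        "virtual_channel": data_id >> 6,
        "data_type": data_id & 0x3F,
        "word_count": wc_lo | (wc_hi << 8),
    }

def extract_payload(packet):
    h = parse_header(packet)
    if h["data_type"] == RAW10:
        return packet[4:4 + h["word_count"]]
    return b""  # short packets (frame/line start etc.) carry no payload

# A 2592-pixel RAW10 line is 2592 * 10 / 8 = 3240 bytes = 0x0CA8:
hdr = bytes([0x2B, 0xA8, 0x0C, 0x00])
print(parse_header(hdr)["word_count"])  # 3240
```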
Attachments
rpi.jpg

Re: RPi Camera bounty

Postby 9600 » Fri Jun 12, 2015 7:24 am

tnt wrote:I wrote a quick python script to parse the packets and extract the payload, then fed that to gimp so I could do some quick and dirty debayering and the result is in attachement.


So, basically, you win? I'd call that more than a "bit more" progress* :D

Cheers,

Andrew

* - unless "bit" is some sort of intended pun.
Andrew Back (a.k.a. 9600 / carrierdetect)
9600
 
Posts: 997
Joined: Mon Dec 17, 2012 3:25 am

Re: RPi Camera bounty

Postby tnt » Fri Jun 12, 2015 8:29 am

Well, no. I said "a bit" more because in the grand scheme of things there is still a _lot_ to do.

Currently I'm just feeding raw data directly to the ARM, including the garbage between packets, and it's not a continuous stream either, just a 4 MByte 'snapshot' of packets. That leaves a ton of work for the ARM, and you couldn't really do video like that without using a lot of CPU.

Next steps are:
* Implement a packet state machine so I only push full packets into the FIFO. This includes properly detecting the sync and the header, validating the ECC, and counting the number of words to end the packet properly.
* Instead of streaming raw packets to the ARM, implement something that interprets those packets and reconstructs RAW video lines from them, properly aligned to frame start.
* Switch from a simple DMA to a Video DMA so I can feed 2D data to it and recover it in a frame buffer.
* Implement some form of simple debayering in hardware so I can send RGB data instead of RAW, so the ARM doesn't have to handle that (which is pretty CPU intensive).
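As a rough software model of what those last steps replace, here's a sketch of RAW10 unpacking and a naive half-resolution debayer. Pure Python, just to show the data flow; the BGGR pattern is an assumption about this sensor's readout order, and real hardware would interpolate rather than decimate.

```python
# Sketch of the RAW10 handling the hardware would take over: CSI-2 RAW10
# packs 4 pixels into 5 bytes (4 MSB bytes, then 1 byte holding the two
# LSBs of each pixel). The half-resolution debayer below assumes a BGGR
# pattern, which is an assumption about this sensor's readout order.

def unpack_raw10(data):
    pixels = []
    for i in range(0, len(data) - 4, 5):
        b0, b1, b2, b3, lsbs = data[i:i + 5]
        for j, msb in enumerate((b0, b1, b2, b3)):
            pixels.append((msb << 2) | ((lsbs >> (2 * j)) & 0x3))
    return pixels

def debayer_bggr(rows):
    """rows: equal-length pixel rows; returns half-res (R, G, B) rows."""
    out = []
    for y in range(0, len(rows) - 1, 2):
        line = []
        for x in range(0, len(rows[y]) - 1, 2):
            b = rows[y][x]
            g = (rows[y][x + 1] + rows[y + 1][x]) // 2
            r = rows[y + 1][x + 1]
            line.append((r, g, b))
        out.append(line)
    return out

print(unpack_raw10(bytes([0xFF, 0x00, 0x80, 0x01, 0x1B])))  # [1023, 2, 513, 4]
```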

And that's probably where I'll stop ... someone else would need to do the whole software stack: writing a proper V4L driver so it's usable from applications, properly supporting the sensor options and different video modes, and so on.
