Stepping into next generation ADAS multi-camera architectures

December 14, 2016 //By Thorsten Lorenzen, Texas Instruments
A highly integrated approach achieves extended synchronization and advanced HDR image quality to enable automotive surround view and mirror replacement applications.

Multi-Channel Pixel Processing

This section discusses the signal chain of the TI reference design “TIDA-00455” (see Figure 1). FPD-Link III serializers such as the DS90UB933 transform an incoming parallel LVCMOS or serial CSI-2 data bus into a single high-speed differential pair. The serializers can accept up to 12 bits of data plus 2 control bits (e.g. HSYNC/VSYNC) and the pixel clock (PCLK). In turn, the FPD-Link III deserializer hub (e.g. DS90UB964), when coupled with serializers, receives the streams from up to four image sensors concurrently. It provides two MIPI CSI-2 output ports consisting of four physical lanes each, and decodes the incoming streams to be multiplexed onto one or both of these ports.

To keep the incoming video streams separated, the MIPI CSI-2 ports offer up to four virtual channels. Every data stream is partitioned into packets designated for a virtual channel, and a unique channel identification number (VC-ID) in each packet header identifies that channel. Separated virtually in this way, the video streams are sent out through the CSI-2 ports.

The image pixels of the video streams arrive at the ISP separated into long (L), short (S) and very short (VS) exposure values, each indicated by its most significant bits. Once pre-processing steps such as lens correction, white balancing and defective pixel correction are completed, the ISP combines the differently exposed values to generate the output image frames: dark areas are filled with pixels from the L exposure, while bright areas are filled with pixels from either the S or the VS exposure. As a result, the image pixels of each video stream provide extended dynamic range. The weighted output of the combination feeds back into blocks such as automatic gain control (AGC) and automatic exposure control (AEC) in order to calculate statistics. The statistics, including a histogram, can be transferred to the host as part of idle rows within the video stream.
Finally, the high-quality video streams are transmitted to the vision processor for image rendering and vision analytics purposes prior to being displayed on a monitor.

Figure 1. Quad camera design used for surround view applications.

