The resolution of a camera system determines the fidelity of visual features in captured images. Higher resolution implies greater fidelity and, thus, greater accuracy when performing automated vision tasks such as object detection, recognition, and tracking. However, the resolution of any camera is fundamentally limited by geometric aberrations. In the past, it has generally been accepted that the resolution of lenses with geometric aberrations cannot be increased beyond a certain threshold. We derive an analytic scaling law showing that, for lenses with spherical aberrations, resolution can be increased beyond the aberration limit by applying a post-capture deblurring step. We then show that resolution can be further increased when image priors are introduced. Based on our analysis, we advocate for computational camera designs consisting of a spherical lens shared by several small planar sensors. We show example images captured with a proof-of-concept gigapixel camera, demonstrating that high resolution can be achieved with a compact form factor and low complexity.

In this paper we propose a new method to jointly design a sensor and its neural-network-based processing. Using a differentiable ray tracing (DRT) model, we simulate the sensor point-spread function (PSF) and its partial derivatives with respect to any of the lens parameters. The proposed ray-tracing model makes neither thin-lens nor paraxial approximations and is valid for any field of view and point-source position. Using the gradient-backpropagation framework for neural-network optimization, any of the lens parameters can then be jointly optimized along with the neural-network parameters. We validate our method for image restoration applications using three proofs of concept, including optimization of the focus setting of a given sensor. We provide interpretations of the joint optical and processing optimization results obtained with the proposed method in these simple cases. Our method paves the way to end-to-end design of a neural network and lens using the complete set of optical parameters within the full sensor field of view.

In this paper we present an approach to extending the Depth-of-Field (DoF) of cell-phone miniature cameras by concurrently optimizing the optical system and post-capture digital processing. Our lens design seeks to increase the longitudinal chromatic aberration in a desired fashion such that, for a given object distance, at least one color plane of the RGB image contains the in-focus scene information. Typically, red is made sharp for objects at infinity, green for intermediate distances, and blue for close distances. Comparing sharpness across colors gives an estimate of the object distance and therefore allows choosing the right set of digital filters as a function of the object distance. Then, by copying the high frequencies of the sharpest color onto the other colors, we show theoretically and experimentally that it is possible to achieve a sharp image for all colors within a larger DoF. We compare our technique with other approaches that also aim to increase the DoF, such as wavefront coding.
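The post-capture deblurring step in the first abstract is, at its simplest, a deconvolution with the known aberration PSF. Below is a minimal 1-D Wiener-style sketch in pure Python; the hand-rolled DFT, the 3-tap circular blur kernel, and the `snr` regularization constant are illustrative assumptions, not the paper's actual pipeline.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def wiener_deblur(blurred, psf, snr=100.0):
    """Divide out the blur's transfer function, regularized by an assumed SNR."""
    B, H = dft(blurred), dft(psf)
    X = [b * h.conjugate() / (abs(h) ** 2 + 1.0 / snr) for b, h in zip(B, H)]
    return idft(X)

# Toy scene: a point source, blurred by a symmetric 3-tap "aberration" kernel.
N = 16
sharp = [0.0] * N
sharp[5] = 1.0
psf = [0.0] * N
psf[0], psf[1], psf[-1] = 0.5, 0.25, 0.25  # circular blur centered at index 0
blurred = [sum(sharp[(n - m) % N] * psf[m] for m in range(N)) for n in range(N)]
restored = wiener_deblur(blurred, psf)
# Deconvolution re-concentrates the energy near the original point at index 5.
```

The regularizer keeps the division stable at frequencies where the blur's transfer function is near zero, which is exactly where an unregularized inverse filter would amplify noise.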
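The joint-design method in the second abstract hinges on differentiating a simulated PSF with respect to lens parameters and handing those gradients to a standard optimizer. The toy below stands in for that idea: ray tracing is replaced by a one-line defocus-blur model and autodiff by central finite differences. `blur_radius`, the aperture constant, and the learning rate are all made-up stand-ins, not the paper's model.

```python
# Assumed toy model: blur radius as a smooth function of the focus-distance
# parameter z_f, for an object at distance z_obj (thin-lens-like behavior).
def blur_radius(z_f, z_obj, aperture=2.0):
    return aperture * abs(1.0 / z_f - 1.0 / z_obj)

def loss(z_f, z_obj):
    return blur_radius(z_f, z_obj) ** 2

def grad(z_f, z_obj, eps=1e-6):
    # Central finite difference, standing in for backpropagated gradients.
    return (loss(z_f + eps, z_obj) - loss(z_f - eps, z_obj)) / (2 * eps)

z_obj = 3.0   # object we want in focus
z_f = 1.0     # initial (wrong) focus setting
lr = 0.5
for _ in range(500):
    z_f -= lr * grad(z_f, z_obj)
# z_f converges toward z_obj, i.e. gradient descent "focuses" the sensor.
```

In the paper's setting the same loop would update many lens parameters at once, with the image-restoration network's loss supplying the gradient signal instead of this hand-written blur model.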
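The sharpness-transport idea in the third abstract — copy the high frequencies of the sharpest color onto the other colors — can be sketched in 1-D. The box-blur low-pass, the gradient-energy sharpness measure, and the step-edge scene below are assumptions chosen for brevity, not the paper's filters.

```python
def box_blur(x, r=1):
    """Simple moving-average low-pass (edge-clamped)."""
    n = len(x)
    return [sum(x[max(0, min(n - 1, i + d))] for d in range(-r, r + 1)) / (2 * r + 1)
            for i in range(n)]

def sharpness(x):
    """Gradient energy: larger means more high-frequency content."""
    return sum((a - b) ** 2 for a, b in zip(x[1:], x[:-1]))

def transport(channels):
    """Copy the high frequencies of the sharpest channel onto the others."""
    ref = max(channels, key=sharpness)
    high = [v - l for v, l in zip(ref, box_blur(ref))]
    return [[c + h for c, h in zip(ch, high)] if ch is not ref else ch
            for ch in channels]

# Toy scene: a step edge; "red" captured in focus, "green"/"blue" defocused.
step = [0.0] * 8 + [1.0] * 8
red = step
green = box_blur(step, r=2)
blue = box_blur(step, r=3)
red2, green2, blue2 = transport([red, green, blue])
# green2 and blue2 regain edge contrast close to the sharp red channel.
```

Here the in-focus channel is picked by comparing sharpness, mirroring the abstract's use of cross-channel sharpness both for depth estimation and for choosing which channel donates its high frequencies.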