Three-dimensional (3D) images can be captured by, for example, holographic imaging or stereo-imaging techniques. To avoid using expensive optical components that are limited to specialized bands of wavelengths, Sun et al. (p. 844; see the Perspective by Faccio and Leach) projected pulses of randomly textured light onto an object. They were able to reconstruct an image of the 3D object by detecting the reflected light with several photodetectors, without any need for lenses. Because the detectors require no imaging optics, the patterned illumination can in principle be generated with light sources of any wavelength.
Computational imaging enables retrieval of the spatial information of an object with the use of single-pixel detectors. By projecting a series of known random patterns and measuring the backscattered intensity, it is possible to reconstruct a two-dimensional (2D) image. We used several single-pixel detectors in different locations to capture the 3D form of an object. From each detector we derived a 2D image that appeared to be illuminated from a different direction, even though only a single digital projector was used for illumination. From the shading of the images, the surface gradients could be derived and the 3D object reconstructed. We compare our result to that obtained from a stereophotogrammetric system using multiple cameras. Our simplified approach to 3D imaging can readily be extended to nonvisible wavebands.
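The 2D reconstruction step described above can be sketched in a few lines: project known random patterns, record the total backscattered intensity for each with a single-pixel ("bucket") detector, then form a correlation-weighted sum of the patterns. The sketch below uses a hypothetical 32×32 test scene and simulated detector readings (not data from the paper), and a simple ghost-imaging-style correlation estimator rather than the authors' exact reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test scene (an assumption for illustration):
# a bright square on a dark background.
N = 32
scene = np.zeros((N, N))
scene[8:24, 8:24] = 1.0

# Project M known random binary patterns; for each, a single-pixel detector
# records only the total backscattered intensity (one number per pattern).
M = 4000
patterns = (rng.random((M, N, N)) < 0.5).astype(float)
signals = patterns.reshape(M, -1) @ scene.ravel()  # bucket-detector readings

# Correlation reconstruction: weight each pattern by its mean-subtracted
# signal and sum. Pixels that are bright in the scene correlate with the
# patterns that happened to illuminate them.
image = np.tensordot(signals - signals.mean(), patterns, axes=1) / M

# Sanity check: the recovered image is brighter inside the square.
inside = image[8:24, 8:24].mean()
outside = np.concatenate([image[:8].ravel(), image[24:].ravel()]).mean()
```

With several such detectors at different positions, each yields its own 2D image with a distinct apparent illumination direction, and standard shape-from-shading (photometric stereo) then recovers the surface gradients for the 3D reconstruction.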