==Overcoming DOF limitations==

Some methods and equipment allow altering the apparent {{abbr|DOF|depth of field}}, and some even allow the {{abbr|DOF|depth of field}} to be determined after the image is made. These are based on, or supported by, computational imaging processes. For example, [[focus stacking]] combines multiple images focused on different planes, resulting in an image with a greater (or shallower, if so desired) apparent depth of field than any of the individual source images. Similarly, in order to [[3D reconstruction|reconstruct]] the three-dimensional shape of an object, a [[depth map]] can be generated from multiple photographs with different depths of field. Xiong and Shafer concluded, in part, "...{{nbsp}}the improvements on precisions of focus ranging and defocus ranging can lead to efficient shape recovery methods."<ref>Xiong, Yalin, and Steven A. Shafer. "[https://apps.dtic.mil/sti/pdfs/ADA266644.pdf Depth from focusing and defocusing]." Proceedings of the 1993 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '93). IEEE, 1993.</ref>

Another approach is focus sweep: the focal plane is swept across the entire relevant range during a single exposure. This creates a blurred image, but with a convolution kernel that is nearly independent of object depth, so the blur is almost entirely removed after computational deconvolution. This has the added benefit of dramatically reducing motion blur.<ref>Bando et al. "[http://web.media.mit.edu/~bandy/invariant/ Near-Invariant Blur for Depth and 2D Motion via Time-Varying Light Field Analysis]." ACM Transactions on Graphics, Vol. 32, No. 2, Article 13, 2013.</ref>

[[Light scanning photomacrography]] (LSP) is another technique used to overcome depth-of-field limitations in macro and micro photography, allowing high-magnification imaging with exceptional depth of field. A thin plane of light is scanned across the subject, which is mounted on a stage that moves perpendicular to the light plane; this keeps the entire subject in sharp focus, from the nearest to the farthest details, in a single image. Initially developed in the 1960s and further refined in the 1980s and 1990s, LSP was particularly valuable in scientific and biomedical photography before digital focus stacking became prevalent.<ref>Root, N. (January 1991). "[https://pubmed.ncbi.nlm.nih.gov/2010421 A simplified unit for making deep-field (scanning) Macrographs]". Journal of Biological Photography, Vol. 59, No. 1, pp. 3–8.</ref><ref name="Clarke2024">Clarke, T. "[https://www.mccrone.com/mm/scanning-light-photomacrography-system/ Constructing a Scanning Light Photomacrography System]." The McCrone Group (accessed July 7, 2024).</ref>
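The essence of such digital focus stacking can be illustrated in a few lines of code: estimate the local sharpness of every source frame, then build the composite (and, as a by-product, a rough depth map) by taking each pixel from the frame in which it is sharpest. The sketch below is illustrative rather than drawn from the cited sources; it assumes pre-aligned frames of identical size and uses NumPy with OpenCV, with the function name and filter sizes chosen arbitrarily.

<syntaxhighlight lang="python">
import cv2
import numpy as np

def focus_stack(images):
    """Merge aligned images focused at different planes into a single
    image with extended apparent depth of field.

    images: list of same-sized BGR frames (illustrative assumption).
    Returns the composite and a rough per-pixel depth map.
    """
    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Absolute Laplacian response as a local-sharpness measure,
        # smoothed so that neighbouring pixels make consistent choices.
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))

    # Index of the frame that is sharpest at each pixel: because each
    # frame focuses on a different plane, this doubles as a coarse depth map.
    depth = np.argmax(np.stack(sharpness), axis=0)

    # Assemble the composite by picking every pixel from its sharpest frame.
    stack = np.stack(images)              # shape: (n, h, w, 3)
    rows, cols = np.indices(depth.shape)
    composite = stack[depth, rows, cols]  # shape: (h, w, 3)
    return composite.astype(np.uint8), depth
</syntaxhighlight>

Inverting the selection (taking each pixel from its ''least'' sharp frame, via <code>np.argmin</code>) yields the opposite effect: a composite with a shallower apparent depth of field than any source image.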
Other technologies use a combination of lens design and post-processing: [[wavefront coding]] is a method by which controlled aberrations are added to the optical system so that the focus and depth of field can be improved later in the process.<ref>{{Cite journal|last1=Mary|first1=D.|last2=Roche|first2=M.|last3=Theys|first3=C.|last4=Aime|first4=C.|title=Introduction to Wavefront Coding for Incoherent Imaging|journal=EAS Publications Series|volume=59|year=2013|pages=77–92|issn=1633-4760|doi=10.1051/eas/1359005|bibcode=2013EAS....59...77R |s2cid=120502243 |url=https://www.edp-open.org/images/stories/books/fulldl/eas_59/eas59_pp077-092.pdf |archive-url=https://web.archive.org/web/20220614000804/https://www.edp-open.org/images/stories/books/fulldl/eas_59/eas59_pp077-092.pdf |archive-date=2022-06-14 |url-status=live }}</ref>

The lens design can be changed even further: in colour [[apodization]] the lens is modified so that each colour channel has a different lens aperture. For example, the red channel may be {{f/|2.4}}, green may be {{f/|2.4}}, whilst the blue channel may be {{f/|5.6}}; the blue channel will therefore have a greater depth of field than the other colours. The image processing identifies blurred regions in the red and green channels and, in these regions, copies the sharper edge data from the blue channel. The result is an image that combines the best features from the different {{nowrap|f-numbers}}.{{sfn|Kay|Mather|Walton|2011}}

At the extreme, a [[plenoptic camera]] captures [[4D light field]] information about a scene, so the focus and depth of field can be altered after the photo is taken.
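A standard way to realise such post-capture refocusing is the shift-and-add algorithm: each sub-aperture view of the light field is translated in proportion to its offset from the centre of the aperture, and the shifted views are averaged, synthetically focusing on the plane selected by the shift factor. The sketch below is a minimal illustration under assumed conventions; the array layout, parameter name, and sign convention are not taken from the sources above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import shift as translate

def refocus(light_field, alpha):
    """Shift-and-add synthetic refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W) holding one grayscale
        sub-aperture image per (u, v) viewpoint on the lens aperture
        (an assumed storage convention).
    alpha: refocus factor; 0 keeps the original focal plane, while other
        values select nearer or farther planes (sign depends on the
        assumed geometry).
    """
    U, V, H, W = light_field.shape
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Translate each view in proportion to its offset from the
            # aperture centre, then average: points on the chosen plane
            # line up across views and come out sharp, others blur.
            du = alpha * (u - (U - 1) / 2)
            dv = alpha * (v - (V - 1) / 2)
            acc += translate(light_field[u, v], (du, dv), order=1)
    return acc / (U * V)
</syntaxhighlight>

Because the full light field is retained, the same capture can be refocused repeatedly with different values of the shift factor, and averaging only a central subset of the (u, v) views simulates a smaller aperture, broadening the effective depth of field.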