===Imaging===
The imaging device (e.g. a camera) can either be separate from the main image-processing unit or combined with it, in which case the combination is generally called a [[smart camera]] or smart sensor.<ref>{{cite book | title = Smart Cameras | editor = Belbachir, Ahmed Nabil | publisher = Springer | date = 2009 | isbn = 978-1-4419-0952-7}}{{page needed|date=December 2012}}</ref><ref name="VSD201302">{{cite journal| url=http://www.vision-systems.com/articles/print/volume-18/issue-2/departments/leading-edge-views/explore-the-fundamentals-of-machine-vision-part-i.html | title=Explore the Fundamentals of Machine Vision: Part 1| volume=18 | issue=2 | date=February 2013 |author=Dechow, David |journal=Vision Systems Design |pages=14–15| access-date=2013-03-05}}</ref> Inclusion of the full processing function in the same enclosure as the camera is often referred to as embedded processing.<ref name="PhotonicsSpectra2019">''Critical Considerations for Embedded Vision Design'' by Dave Rice and Amber Thousand, ''Photonics Spectra'' magazine, published by Laurin Publishing Co., July 2019 issue, pages 60–64</ref> When separated, the connection may be made to specialized intermediate hardware, a custom processing appliance, or a [[frame grabber]] within a computer using either an analog or standardized digital interface ([[Camera Link]], [[CoaXPress]]).<ref name = coaxexpress>{{cite journal| url=http://www.vision-systems.com/articles/2011/05/coaxpress-standard-camera-frame-grabber-support.html | title=CoaXPress standard gets camera, frame grabber support | date= May 31, 2011 |author=Wilson, Andrew |journal=Vision Systems Design |access-date=2012-11-28}}</ref><ref name = VSDCompliantCameras>{{cite journal| url=http://www.vision-systems.com/articles/2012/11/cameras-certified-as-compliant-with-coaxpress-standard.html | title=Cameras certified as compliant with CoaXPress standard | author=Wilson, Dave |journal=Vision Systems Design | date= November 12, 2012 |access-date=2013-03-05}}</ref><ref name = Davies2nd/><ref name = Dinev>{{cite journal |author=Dinev, Petko |title=Digital or Analog? Selecting the Right Camera for an Application Depends on What the Machine Vision System is Trying to Achieve |journal=Vision & Sensors |date=March 2008 |pages=10–14 |url=http://www.visionsensorsmag.com/Articles/Feature_Article/BNP_GUID_9-5-2006_A_10000000000000276728 |archive-url=https://web.archive.org/web/20200314042249/http://www.visionsensorsmag.com/Articles/Feature_Article/BNP_GUID_9-5-2006_A_10000000000000276728 |url-status=dead |archive-date=2020-03-14 |access-date=2012-05-12 }}</ref> MV implementations also use digital cameras capable of direct connection (without a frame grabber) to a computer via [[IEEE 1394|FireWire]], [[USB]] or [[Gigabit Ethernet]] interfaces.<ref name = Dinev/><ref name = VSDInterfaces>{{cite journal | url=http://www.vision-systems.com/articles/print/volume-16/issue-12/features/looking-to-the-future-of-vision.html | title=Product Focus - Looking to the Future of Vision | author=Wilson, Andrew | journal=Vision Systems Design |volume=16| issue=12 | date=December 2011 |access-date=2013-03-05}}</ref>

While conventional (2D visible-light) imaging is most commonly used in MV, alternatives include [[Multispectral image|multispectral imaging]], [[hyperspectral imaging]], imaging of various infrared bands,<ref name =InfraredVSDApril2011>{{cite journal |author=Wilson, Andrew | title=The Infrared Choice | journal= Vision Systems Design |date= April 2011 |pages=20–23 | url=http://www.vision-systems.com/articles/print/volume-16/issue-4/features/the-infrared-choice.html |volume=16 |issue=4|access-date=2013-03-05}}</ref> line-scan imaging, [[3D imaging]] of surfaces, and X-ray imaging.<ref name = NASAarticle>{{cite journal|journal= [[NASA Tech Briefs]] |volume= 35 |issue= 6 |date= June 2011 |title=Machine Vision Fundamentals, How to Make Robots See|author=Turek, Fred D. |pages=60–62 |url= http://www.techbriefs.com/privacy-footer-69/10531 | access-date=2011-11-29}}</ref> Key differentiations within MV 2D visible-light imaging are monochromatic vs. color, [[frame rate]], resolution, and whether or not the imaging process is simultaneous over the entire image, making it suitable for moving processes.<ref name = WestHSRT>West, Perry, ''High Speed, Real-Time Machine Vision'', CyberOptics, pages 1–38</ref>

Though the vast majority of machine vision applications are solved using two-dimensional imaging, applications utilizing 3D imaging are a growing niche within the industry.<ref name=DN201202>{{cite journal |title=3D Machine Vison Comes into Focus |author=Murray, Charles J |journal=[[Design News]] |date=February 2012 |url=http://www.designnews.com/document.asp?doc_id=237971 |access-date=2012-05-12 |url-status=dead |archive-url=https://web.archive.org/web/20120605095256/http://www.designnews.com/document.asp?doc_id=237971 |archive-date=2012-06-05 }}</ref><ref name=Davies4th410-411>{{cite book|pages=410–411|author=Davies, E.R. | edition=4th | date=2012 | title=Computer and Machine Vision: Theory, Algorithms, Practicalities | publisher=Academic Press| isbn=9780123869081 | url=https://books.google.com/books?id=AhVjXf2yKtkC&pg=PA410 | access-date=2012-05-13}}</ref> The most commonly used method for 3D imaging is scanning-based triangulation, which utilizes motion of the product or image during the imaging process. A laser line is projected onto the surface of an object; the scanning motion is achieved either by moving the workpiece or by moving the camera and laser imaging system. The line is viewed by a camera from a different angle, and the deviation of the line represents shape variations. Lines from multiple scans are assembled into a [[depth map]] or point cloud.<ref name = QualityMagazine/> Stereoscopic vision is used in special cases involving unique features present in both views of a pair of cameras.<ref name = QualityMagazine>''3-D Imaging: A Practical Overview for Machine Vision'' by Fred Turek & Kim Jackson, Quality Magazine, March 2014 issue, Volume 53/Number 3, pages 6–8</ref> Other 3D methods used for machine vision are [[Time-of-flight camera|time of flight]] and grid-based approaches.<ref name =QualityMagazine/><ref name =DN201202/> One grid-based approach uses a pseudorandom structured-light system, as employed by the Microsoft Kinect circa 2012.<ref name = hybrid>Yueyi Zhang, Zhiwei Xiong, Feng Wu, "Hybrid Structured Light for Scalable Depth Sensing", University of Science and Technology of China, Hefei, China / Microsoft Research Asia, Beijing, China, http://research.microsoft.com/en-us/people/fengwu/depth-icip-12.pdf</ref><ref name = pseudorandom>R. Morano, C. Ozturk, R. Conn, S. Dubin, S. Zietz, J. Nissano, "Structured light using pseudorandom codes", IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (3) (1998) 322–327</ref>
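The laser-triangulation geometry described above can be sketched numerically. In a minimal idealized setup (all angles, calibration constants, and deviation values below are illustrative assumptions, not from the source), the camera views the laser plane at a known angle, so a change in surface height shifts the imaged line sideways, and per-scan shifts can be stacked into a depth map:

```python
import math

# Minimal sketch of single-line laser triangulation (assumed geometry):
# the camera views the laser plane at angle THETA, so a surface height
# change h shifts the imaged laser line laterally by h * tan(THETA),
# i.e. h = lateral_shift / tan(THETA).

THETA = math.radians(30.0)   # assumed camera-to-laser-plane angle
MM_PER_PIXEL = 0.1           # assumed image-plane calibration factor

def height_from_deviation(deviation_px: float) -> float:
    """Convert the laser line's lateral pixel deviation to surface height (mm)."""
    return deviation_px * MM_PER_PIXEL / math.tan(THETA)

def depth_map_from_scans(scan_deviations):
    """Assemble per-scan line deviations (pixels) into a 2D height map (mm).

    Each inner list is one scan line captured as the part (or the
    camera-laser head) moves; stacking them yields the depth map.
    """
    return [[height_from_deviation(d) for d in scan] for scan in scan_deviations]

# Two scan lines across a moving part with a raised feature in the middle.
scans = [[0.0, 0.0, 5.0, 5.0, 0.0],
         [0.0, 5.0, 10.0, 5.0, 0.0]]
dmap = depth_map_from_scans(scans)
```

A real system would recover the deviations by detecting the line's peak in each camera column and would use a full calibration model rather than a single scale factor; this sketch only illustrates why the line's deviation encodes shape.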