===Modelling===
In the domain of [[computer vision]], efforts have been made to model the mechanism of human attention, especially the bottom-up attentional mechanism<ref name="Li J, Levine MD, An X, Xu X, He H 2012">{{cite journal | vauthors = Li J, Levine MD, An X, Xu X, He H | title = Visual saliency based on scale-space analysis in the frequency domain | journal = IEEE Transactions on Pattern Analysis and Machine Intelligence | volume = 35 | issue = 4 | pages = 996–1010 | date = April 2013 | pmid = 22802112 | doi = 10.1109/TPAMI.2012.147 | arxiv = 1605.01999 | s2cid = 350786 }}</ref> and its semantic significance in the classification of video contents.<ref name="Zang Wang Liu Zhang 2018 pp. 97–108">{{cite book | last1=Zang | first1=Jinliang | last2=Wang | first2=Le | last3=Liu | first3=Ziyi | last4=Zhang | first4=Qilin | last5=Hua | first5=Gang | last6=Zheng | first6=Nanning | title=Artificial Intelligence Applications and Innovations | chapter=Attention-Based Temporal Weighted Convolutional Neural Network for Action Recognition | publisher=Springer International Publishing | publication-place=Cham | volume=519 | date=2018 | isbn=978-3-319-92006-1 | doi=10.1007/978-3-319-92007-8_9 | pages=97–108}}</ref><ref name="Wang Zang Zhang Niu p=1979">{{cite journal | vauthors = Wang L, Zang J, Zhang Q, Niu Z, Hua G, Zheng N | title = Action Recognition by an Attention-Aware Temporal Weighted Convolutional Neural Network | journal = Sensors | volume = 18 | issue = 7 | pages = 1979 | date = June 2018 | pmid = 29933555 | pmc = 6069475 | doi = 10.3390/s18071979 | bibcode = 2018Senso..18.1979W | url = https://qilin-zhang.github.io/_pages/pdfs/sensors-18-01979-Action_Recognition_by_an_Attention-Aware_Temporal_Weighted_Convolutional_Neural_Network.pdf | doi-access = free }}</ref> Both [[Visual spatial attention|spatial attention]] and [[Visual temporal attention|temporal attention]] have been incorporated in such classification efforts.

Generally speaking, there are two kinds of models that mimic the bottom-up salience mechanism in static images. The first kind is based on spatial contrast analysis: for example, a center–surround mechanism has been used to define salience across scales, inspired by the putative neural mechanism.<ref>{{cite journal | vauthors = Itti L, Koch C, Niebur E |title=A Model of Saliency-Based Visual Attention for Rapid Scene Analysis |journal=IEEE Trans Pattern Anal Mach Intell|volume=20 |issue=11 |pages=1254–1259 |year=1998|doi=10.1109/34.730558 |citeseerx=10.1.1.53.2366 |s2cid=3108956 }}</ref> It has also been hypothesized that some visual inputs are intrinsically salient in certain background contexts, and that such salience is task-independent. This model has established itself as the exemplar for salience detection and is consistently used for comparison in the literature.<ref name="Li J, Levine MD, An X, Xu X, He H 2012"/>
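The center–surround idea admits a compact illustration. The following is a minimal sketch on a single intensity channel, not the full Itti–Koch–Niebur model, which also builds colour and orientation channels from an image pyramid and normalises across scales before combining them; the function name and parameter defaults here are illustrative assumptions, not values from the cited papers.

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(image, center_sigmas=(1.0, 2.0), surround_scale=4.0):
    """Toy across-scale center-surround salience on one intensity channel.

    A minimal sketch only; all names and defaults are illustrative.
    """
    img = image.astype(float)
    saliency = np.zeros_like(img)
    for sigma in center_sigmas:
        center = gaussian_filter(img, sigma)                     # fine "center" scale
        surround = gaussian_filter(img, sigma * surround_scale)  # coarse "surround" scale
        saliency += np.abs(center - surround)                    # center-surround contrast
    return saliency / (saliency.max() + 1e-8)                    # normalise to [0, 1]
</syntaxhighlight>

Regions whose local intensity differs strongly from their coarser-scale surroundings receive high values, which is the essence of the contrast-based definition of salience described above.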
The second kind of model is based on frequency-domain analysis. This approach was first proposed, as the spectral residual (SR) method, by Hou et al.<ref>{{Cite book |vauthors=Hou X, Zhang L |doi=10.1109/CVPR.2007.383267 |chapter-url=http://www.klab.caltech.edu/~xhou/papers/cvpr07.pdf |access-date=2015-01-10 |archive-url=https://web.archive.org/web/20150212171627/http://www.klab.caltech.edu/~xhou/papers/cvpr07.pdf |archive-date=2015-02-12 |url-status=dead |title=2007 IEEE Conference on Computer Vision and Pattern Recognition |pages=1–8 |year=2007 |isbn=978-1-4244-1179-5 |chapter=Saliency Detection: A Spectral Residual Approach |citeseerx=10.1.1.579.1650 |s2cid=15611611 }}</ref> The PQFT method was introduced later; both SR and PQFT use only the phase information of the spectrum.<ref name="Li J, Levine MD, An X, Xu X, He H 2012"/> In 2012, the HFT method was introduced, which makes use of both the amplitude and the phase information.<ref>{{cite journal | vauthors = Li J, Levine MD, An X, Xu X, He H | title = Visual saliency based on scale-space analysis in the frequency domain | journal = IEEE Transactions on Pattern Analysis and Machine Intelligence | volume = 35 | issue = 4 | pages = 996–1010 | date = April 2013 | pmid = 22802112 | doi = 10.1109/TPAMI.2012.147 | url = http://www.cim.mcgill.ca/~lijian/06243147.pdf | arxiv = 1605.01999 | s2cid = 350786 | archive-url = https://web.archive.org/web/20130301015810/http://www.cim.mcgill.ca/~lijian/06243147.pdf | url-status = dead | archive-date = 2013-03-01 }}</ref> A minimal sketch of the SR computation is given below.

The Neural Abstraction Pyramid<ref>{{Cite book| vauthors = Behnke S |url=http://link.springer.com/10.1007/b11963|title=Hierarchical Neural Networks for Image Interpretation|date=2003|publisher=Springer Berlin Heidelberg|isbn=978-3-540-40722-5|series=Lecture Notes in Computer Science|volume=2766|location=Berlin, Heidelberg|doi=10.1007/b11963|s2cid=1304548}}</ref> is a hierarchical recurrent convolutional model that incorporates bottom-up and top-down flows of information to iteratively interpret images.
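The SR computation is short enough to sketch directly: take the Fourier transform of the image, subtract a locally averaged log-amplitude spectrum from the log-amplitude spectrum, and transform back using the original phase. The sketch below follows the published description of Hou and Zhang; the 3×3 averaging filter matches the paper, while the final blur width and the normalisation step are illustrative assumptions rather than the authors' exact settings.

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(image, avg_size=3, blur_sigma=2.5):
    """Saliency map via the spectral residual (SR) approach (Hou & Zhang, 2007).

    `image` is a 2-D grayscale array. avg_size=3 follows the paper;
    blur_sigma and the normalisation are illustrative choices.
    """
    spectrum = np.fft.fft2(image.astype(float))
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)  # log-amplitude spectrum
    phase = np.angle(spectrum)                       # phase spectrum, kept unchanged

    # Spectral residual: log amplitude minus its local average.
    residual = log_amplitude - uniform_filter(log_amplitude, size=avg_size)

    # Back-transform with the residual amplitude and the original phase.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2

    # Smooth and normalise to [0, 1] for display.
    saliency = gaussian_filter(saliency, sigma=blur_sigma)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
</syntaxhighlight>

Because only the deviation of the log-amplitude spectrum from its smoothed version is retained, statistically "unexpected" spectral components, which tend to correspond to salient objects rather than repeated background texture, dominate the reconstructed map.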