==== Monte Carlo methods ====

[[Monte Carlo sampling|Monte Carlo methods]]<ref>{{Cite journal |last1=Singh |first1=Satinder P. |last2=Sutton |first2=Richard S. |date=1996-03-01 |title=Reinforcement learning with replacing eligibility traces |url=https://link.springer.com/article/10.1007/BF00114726 |journal=Machine Learning |language=en |volume=22 |issue=1 |pages=123–158 |doi=10.1007/BF00114726 |issn=1573-0565}}</ref> are used to solve reinforcement learning problems by averaging sample returns. Unlike methods that require full knowledge of the environment's dynamics, Monte Carlo methods rely solely on actual or [[Simulation|simulated]] experience: sequences of states, actions, and rewards obtained from interaction with an environment. This makes them applicable when the complete dynamics are unknown. Learning from actual experience requires no prior knowledge of the environment and can still lead to optimal behavior. When simulated experience is used, only a model capable of generating sample transitions is required, rather than the full specification of [[Markov chain|transition probabilities]] that [[dynamic programming]] methods need.

Monte Carlo methods apply to episodic tasks, where experience is divided into episodes that eventually terminate. Policy and value function updates occur only after an episode completes, so these methods are incremental on an episode-by-episode basis, though not on a step-by-step (online) basis. Although the term "Monte Carlo" generally refers to any method involving [[random sampling]], in this context it refers specifically to methods that compute averages from ''complete'' returns rather than ''partial'' returns.

These methods function similarly to [[Multi-armed bandit|bandit algorithms]], in which returns are averaged for each state-action pair. The key difference is that actions taken in one state affect the returns of subsequent states within the same episode, making the problem [[non-stationary]]. To address this non-stationarity, Monte Carlo methods use the framework of generalized policy iteration (GPI). Whereas dynamic programming computes [[value function]]s from full knowledge of the [[Markov decision process]] (MDP), Monte Carlo methods learn them from sample returns. The value functions and policies interact much as in dynamic programming to achieve [[Mathematical optimization|optimality]], first addressing the prediction problem and then extending to policy improvement and control, all based on sampled experience.<ref name=":0" />
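The prediction step can be made concrete with a minimal sketch. The Python snippet below implements first-visit Monte Carlo evaluation of a fixed policy; the <code>generate_episode</code> callable is a hypothetical stand-in for real or simulated interaction, assumed to return one complete, terminated episode as a list of (state, action, reward) tuples.

<syntaxhighlight lang="python">
from collections import defaultdict

def first_visit_mc_prediction(generate_episode, num_episodes, gamma=1.0):
    """Estimate V(s) for a fixed policy by averaging complete sample returns.

    ``generate_episode`` is an assumed helper that returns one terminated
    episode as a list of (state, action, reward) tuples.
    """
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    V = defaultdict(float)

    for _ in range(num_episodes):
        episode = generate_episode()
        G = 0.0
        # Walk the episode backwards, accumulating the discounted return;
        # estimates are updated only after the episode has terminated.
        for t in range(len(episode) - 1, -1, -1):
            state, _, reward = episode[t]
            G = gamma * G + reward
            # First-visit: record the return only for the first occurrence
            # of the state within this episode.
            if all(s != state for s, _, _ in episode[:t]):
                returns_sum[state] += G
                returns_count[state] += 1
                V[state] = returns_sum[state] / returns_count[state]
    return V
</syntaxhighlight>

Extending prediction to control follows the GPI pattern described above: action values are evaluated from sampled returns, and an epsilon-greedy policy derived from them is improved between episodes. The environment interface in this sketch (<code>reset</code>, <code>step</code>, <code>actions</code>) is an assumption for illustration, not a specific library API.

<syntaxhighlight lang="python">
import random
from collections import defaultdict

def mc_control_epsilon_greedy(env, num_episodes, gamma=1.0, epsilon=0.1):
    """On-policy first-visit Monte Carlo control in the GPI style.

    ``env`` is an assumed episodic environment with ``reset() -> state``,
    ``actions(state) -> list`` and ``step(action) -> (state, reward, done)``.
    """
    Q = defaultdict(dict)                       # Q[state][action] -> estimate
    counts = defaultdict(lambda: defaultdict(int))

    def policy(state, available):
        # Epsilon-greedy improvement over the current estimates; acts
        # randomly until the state has at least one evaluated action.
        if random.random() < epsilon or not Q[state]:
            return random.choice(available)
        return max(Q[state], key=Q[state].get)

    for _ in range(num_episodes):
        # Evaluation data: roll out one complete episode with the
        # current policy.
        episode, state, done = [], env.reset(), False
        while not done:
            action = policy(state, env.actions(state))
            state_next, reward, done = env.step(action)
            episode.append((state, action, reward))
            state = state_next
        # Update Q only once the episode terminates, averaging complete
        # returns for each first-visited (state, action) pair.
        G = 0.0
        for t in range(len(episode) - 1, -1, -1):
            s, a, r = episode[t]
            G = gamma * G + r
            if all((s2, a2) != (s, a) for s2, a2, _ in episode[:t]):
                counts[s][a] += 1
                Q[s].setdefault(a, 0.0)
                Q[s][a] += (G - Q[s][a]) / counts[s][a]
    return Q
</syntaxhighlight>

Both sketches update their estimates only after an episode has terminated, reflecting the episode-by-episode (rather than step-by-step) nature of Monte Carlo learning noted above.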