
Reinforcement learning and control

We now begin our study of reinforcement learning and adaptive control.

In supervised learning, we saw algorithms that tried to make their outputs mimic the labels y given in the training set. In that setting, the labels gave an unambiguous "right answer" for each of the inputs x. In contrast, for many sequential decision-making and control problems, it is very difficult to provide this type of explicit supervision to a learning algorithm. For example, if we have just built a four-legged robot and are trying to program it to walk, then initially we have no idea what the "correct" actions to take are to make it walk, and so we do not know how to provide explicit supervision for a learning algorithm to try to mimic.

In the reinforcement learning framework, we will instead provide our algorithms only a reward function, which indicates to the learning agent when it is doing well, and when it is doing poorly. In the four-legged walking example, the reward function might give the robot positive rewards for moving forwards, and negative rewards for either moving backwards or falling over. It will then be the learning algorithm's job to figure out how to choose actions over time so as to obtain large rewards.

Reinforcement learning has been successful in applications as diverse as autonomous helicopter flight, robot legged locomotion, cell-phone network routing, marketing strategy selection, factory control, and efficient web-page indexing. Our study of reinforcement learning will begin with a definition of Markov decision processes (MDPs), which provide the formalism in which RL problems are usually posed.

Markov decision processes

A Markov decision process is a tuple (S, A, {P_sa}, γ, R), where:

  • S is a set of states. (For example, in autonomous helicopter flight, S might be the set of all possible positions and orientations of the helicopter.)
  • A is a set of actions. (For example, the set of all possible directions in which you can push the helicopter's control sticks.)
  • P_sa are the state transition probabilities. For each state s ∈ S and action a ∈ A, P_sa is a distribution over the state space. We'll say more about this later, but briefly, P_sa gives the distribution over what states we will transition to if we take action a in state s.
  • γ ∈ [0, 1) is called the discount factor.
  • R : S × A → ℝ is the reward function. (Rewards are sometimes also written as a function of the state alone, in which case we would have R : S → ℝ.)

The dynamics of an MDP proceed as follows: We start in some state s_0, and get to choose some action a_0 ∈ A to take in the MDP. As a result of our choice, the state of the MDP randomly transitions to some successor state s_1, drawn according to s_1 ~ P_{s_0 a_0}. Then, we get to pick another action a_1. As a result of this action, the state transitions again, now to some s_2 ~ P_{s_1 a_1}. We then pick a_2, and so on. Pictorially, we can represent this process as follows:

s_0 --a_0--> s_1 --a_1--> s_2 --a_2--> s_3 --a_3--> ...
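This sampling process can be sketched in code. The toy two-state MDP below is purely illustrative: the state names, action names, and transition probabilities are made up for this sketch, not taken from the text.

```python
import random

# A toy MDP, for illustration only (states, actions, and
# probabilities here are invented for this sketch).
states = ["A", "B"]
actions = ["left", "right"]

# P[(s, a)] is the distribution P_sa over successor states.
P = {
    ("A", "left"):  {"A": 0.9, "B": 0.1},
    ("A", "right"): {"A": 0.2, "B": 0.8},
    ("B", "left"):  {"A": 0.7, "B": 0.3},
    ("B", "right"): {"A": 0.1, "B": 0.9},
}

def step(s, a):
    """Sample a successor state s' ~ P_sa."""
    dist = P[(s, a)]
    return random.choices(list(dist), weights=list(dist.values()))[0]

def rollout(s0, policy, horizon):
    """Generate the sequence (s_0, a_0), (s_1, a_1), ... by
    repeatedly choosing an action and sampling a transition."""
    trajectory = []
    s = s0
    for _ in range(horizon):
        a = policy(s)
        trajectory.append((s, a))
        s = step(s, a)
    return trajectory

# For example, a fixed policy that always picks "right":
traj = rollout("A", lambda s: "right", horizon=5)
```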

Upon visiting the sequence of states s_0, s_1, ... with actions a_0, a_1, ..., our total payoff is given by

R(s_0, a_0) + γ R(s_1, a_1) + γ^2 R(s_2, a_2) + ⋯
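The discounted sum above is straightforward to compute from a finite sequence of per-step rewards; the helper name `total_payoff` below is my own, not from the text.

```python
def total_payoff(rewards, gamma):
    """Discounted sum R(s_0,a_0) + gamma*R(s_1,a_1) + gamma^2*R(s_2,a_2) + ...
    for a finite list of per-step rewards."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# A reward of 1 at each of three steps with gamma = 0.5:
# 1 + 0.5 + 0.25 = 1.75
total_payoff([1, 1, 1], 0.5)
```

Because γ < 1, rewards received later are worth less; this is what makes the infinite sum well-defined when rewards are bounded.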


Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4