Adaptive Representations for Reinforcement Learning

By Shimon Whiteson

This book presents new algorithms for reinforcement learning, a form of machine learning in which an autonomous agent seeks a control policy for a sequential decision task. Since current methods typically rely on manually designed solution representations, agents that automatically adapt their own representations have the potential to dramatically improve performance. This book introduces novel approaches for automatically discovering high-performing representations. The first approach synthesizes temporal difference methods, the traditional approach to reinforcement learning, with evolutionary methods, which can learn representations for a broad class of optimization problems. This synthesis is accomplished by customizing evolutionary methods to the on-line nature of reinforcement learning and using them to evolve representations for value function approximators. The second approach automatically learns representations based on piecewise-constant approximations of value functions. It begins with coarse representations and gradually refines them during learning, analyzing the current policy and value function to deduce the best refinements. This book also introduces a novel method for devising input representations. This method addresses the feature selection problem by extending an algorithm that evolves the topology and weights of neural networks such that it evolves their inputs too. In addition to introducing these new methods, this book presents extensive empirical results in multiple domains demonstrating that these techniques can substantially improve performance over methods with manual representations.
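The first approach described above can be pictured as an outer evolutionary loop whose fitness evaluations refine each candidate's weights with temporal difference backups. Below is a minimal, self-contained Python sketch of that structure only; the LinearQ class, toy_step environment, and mutation scheme are invented stand-ins for illustration, not the book's actual NEAT+Q method, which evolves neural network topologies and weights.

    import random

    class LinearQ:
        """Toy linear Q-function over a 1-D state; evolution perturbs its weights."""
        def __init__(self, n_actions=2):
            self.w = [random.uniform(-1, 1) for _ in range(n_actions)]

        def predict(self, state):
            return [w * state for w in self.w]

        def td_update(self, state, action, target, alpha):
            # One gradient step toward the Q-learning target for the taken action.
            self.w[action] += alpha * (target - self.w[action] * state) * state

        def mutated(self):
            child = LinearQ(len(self.w))
            child.w = [w + random.gauss(0, 0.1) for w in self.w]
            return child

    def run_episode(step_fn, q, alpha=0.05, gamma=0.95, epsilon=0.1, horizon=50):
        """Run one episode, refining q with Q-learning backups as it acts."""
        state, total = 1.0, 0.0
        for _ in range(horizon):
            values = q.predict(state)
            action = (random.randrange(len(values)) if random.random() < epsilon
                      else values.index(max(values)))
            next_state, reward = step_fn(state, action)
            target = reward + gamma * max(q.predict(next_state))
            q.td_update(state, action, target, alpha)
            state, total = next_state, total + reward
        return total

    def toy_step(state, action):
        """Stub environment: action 1 yields a slightly higher expected reward."""
        return 1.0, (0.1 if action == 0 else 0.2) + random.gauss(0, 0.01)

    def evolve(generations=10, pop_size=8, evals=5):
        """Outer evolutionary loop: fitness is the score of a TD-refined evaluation."""
        population = [LinearQ() for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population,
                            key=lambda q: sum(run_episode(toy_step, q) for _ in range(evals)),
                            reverse=True)
            parents = scored[: pop_size // 2]
            population = parents + [random.choice(parents).mutated()
                                    for _ in range(pop_size - len(parents))]
        return population[0]

    best = evolve()

The point of the sketch is only the division of labor: evolution searches over representations (here, crudely, over weight vectors), while Q-learning tunes each candidate during its fitness evaluation.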

Example text

… (i.e. the final generation champions) and evaluated them each for 1,000 additional episodes. In mountain car, using on-line evolution has no noticeable effect: the best policies of off-line and all three versions of on-line NEAT receive an average score of approximately −52, which matches the best results achieved in previous research on this domain (129; 144). While the mountain car domain is simple enough that all the methods find approximately optimal policies, the same is not true in scheduling, where ε-greedy performs substantially worse.
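The ε-greedy and softmax selection rules being contrasted are standard mechanisms; a minimal sketch of both follows, assuming values holds the estimated values of whatever is being chosen among (actions, or candidate policies in on-line evolution), with placeholder parameter settings.

    import math
    import random

    def epsilon_greedy(values, epsilon=0.1):
        """With probability epsilon explore uniformly; otherwise exploit the best estimate."""
        if random.random() < epsilon:
            return random.randrange(len(values))
        return max(range(len(values)), key=values.__getitem__)

    def softmax(values, temperature=0.5):
        """Select in proportion to exp(value / temperature): smoother than epsilon-greedy."""
        m = max(values)  # subtract the max for numerical stability
        prefs = [math.exp((v - m) / temperature) for v in values]
        r, cumulative = random.uniform(0, sum(prefs)), 0.0
        for i, p in enumerate(prefs):
            cumulative += p
            if r <= cumulative:
                return i
        return len(prefs) - 1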

… verify that both on-line evolutionary computation and evolutionary function approximation can significantly boost performance in reinforcement learning tasks. This section presents experiments that assess how well these two ideas work together.

[Figure: uniform moving average score per episode over 1,000,000 episodes, comparing Off-Line NEAT, Off-Line NEAT+Q, Softmax NEAT, and Softmax NEAT+Q in (a) Mountain Car and (b) Server Job Scheduling.]

We tested both Darwinian and Lamarckian NEAT+Q in this manner. Both perform well, though which is preferable appears to be domain dependent. For simplicity, in this section and those that follow, we present results only for Darwinian NEAT+Q. A later section presents a comparison of the two approaches. To test Q-learning without NEAT, we tried 24 different configurations in each domain. These configurations correspond to every possible combination of the following parameter settings. The networks had feed-forward topologies with 0, 4, or 8 hidden nodes.
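As a rough illustration of how "every possible combination of the following parameter settings" could yield 24 configurations, the sketch below enumerates such a grid. Only the hidden-node counts (0, 4, or 8) come from the text; the remaining parameters and their values are hypothetical placeholders.

    from itertools import product

    # Only the hidden-node counts are taken from the text; the other parameters
    # and their values are hypothetical, shown just to illustrate enumerating a
    # 24-configuration sweep (3 x 2 x 2 x 2).
    hidden_nodes = [0, 4, 8]
    learning_rates = [0.1, 0.01]                   # hypothetical
    exploration = ["epsilon-greedy", "softmax"]    # hypothetical
    anneal_learning_rate = [True, False]           # hypothetical

    configs = [
        {"hidden": h, "alpha": a, "explore": e, "anneal": an}
        for h, a, e, an in product(hidden_nodes, learning_rates,
                                   exploration, anneal_learning_rate)
    ]
    assert len(configs) == 24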
