Comments (2)
Hi, the `episode_reward` is only used for reporting. The actual algorithms use properly discounted rewards to estimate the Q-values; see, for example, https://github.com/matthiasplappert/keras-rl/blob/master/rl/agents/dqn.py#L252.
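To make the distinction concrete, here is a minimal sketch of an undiscounted episode reward versus a discounted return and a one-step DQN-style bootstrapped target. The function names are illustrative, not the keras-rl internals:

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Discounted sum r0 + gamma*r1 + gamma^2*r2 + ..."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def dqn_target(reward, gamma, next_q_values, terminal):
    """One-step target: r + gamma * max_a Q(s', a); no bootstrap after terminal."""
    return reward + (0.0 if terminal else gamma * np.max(next_q_values))

# episode_reward, by contrast, is just the undiscounted sum used for logging:
episode_reward = sum([1.0, 1.0, 1.0])        # -> 3.0
g = discounted_return([1.0, 1.0, 1.0], 0.9)  # 1 + 0.9 + 0.81 = 2.71
```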
Getting RL algorithms to work can often be quite tricky. Starting points are:
- Scale your inputs (i.e. the observations) so that they are all on the same scale and, preferably, have zero mean and unit variance.
- Experiment with different scales for your reward function. This is especially important if your rewards differ by orders of magnitude over the course of the game (e.g. very small rewards at first and very large rewards in later stages of the env).
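Both suggestions can be sketched as follows — a running observation normalizer (Welford's algorithm) and a simple reward rescale/clip. The class and function names are illustrative, not part of keras-rl:

```python
import numpy as np

class RunningNorm:
    """Track a running mean/variance (Welford) and normalize observations to ~N(0, 1)."""
    def __init__(self, shape, eps=1e-8):
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)  # running sum of squared deviations
        self.count = 0
        self.eps = eps

    def update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def __call__(self, x):
        self.update(x)
        var = self.m2 / max(self.count, 1)
        return (x - self.mean) / np.sqrt(var + self.eps)

def scale_reward(r, scale=0.01):
    """Rescale (and clip) rewards so early and late-game magnitudes stay comparable."""
    return np.clip(r * scale, -1.0, 1.0)
```

Wrapping the environment so every observation passes through `RunningNorm` before reaching the agent keeps the network inputs well-conditioned without hand-tuned constants.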
I hope that helps.
from keras-rl.
Hi, thank you very much for your suggestion.
My environment has linear constraints on the state-action space of the form Ax <= b, so I included a penalty function that squares the output when the constraints are violated. Hence the rewards can become very large at some points. Since, as you said, this may be an issue, I am going to try something else: I will treat any violated constraint as a terminal state with a fixed negative reward.
Also, the state space is [0, 5]*ones(2,1) and the action space is [0, 0.5]*ones(4,1). I will try to shift and scale them.
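A rough sketch of both ideas — treating any violation of Ax <= b as a terminal transition with a fixed penalty instead of a quadratic one, plus an affine shift/scale into [-1, 1]. The constraint matrix, dynamics, and task reward below are placeholders, not the actual environment:

```python
import numpy as np

# Placeholder constraints on x = [state; action], state in R^2, action in R^4.
A = np.array([[1.0, 0.0, 1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]])
b = np.array([5.0, 5.0])

VIOLATION_REWARD = -10.0  # fixed negative reward for leaving the feasible set

def to_unit_interval(x, lo, hi):
    """Affine shift/scale of a box [lo, hi] into [-1, 1]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def step(state, action):
    """One transition: terminate with a bounded penalty on constraint violation."""
    x = np.concatenate([state, action])
    if np.any(A @ x > b):
        return state, VIOLATION_REWARD, True
    next_state = np.clip(state + action[:2] - action[2:], 0.0, 5.0)  # toy dynamics
    reward = 1.0  # placeholder task reward
    return next_state, reward, False
```

With this shape the reward magnitude is bounded regardless of how far outside the feasible set the agent tries to go, which avoids the exploding-penalty issue of the squared formulation.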