C++ implementation of the continuous LunarLander environment (updated Jan 13, 2021).
A Deep Q-Learning (DQN) neural network that optimizes a lunar lander control policy in the OpenAI Gym environment.
The landing pad is always at coordinates (0, 0), which are the first two numbers in the state vector. The reward for moving from the top of the screen to the landing pad with zero speed is about 100 to 140 points. If the lander moves away from the landing pad, it loses that reward. An episode finishes if the lander crashes or comes to rest, receiving an additional -100 or +…
A lunar-lander type retro video game, written in Zig for the WASM-4 fantasy console.
Developed TD Actor-Critic and solved the Grid-World, OpenAI Gym 'LunarLander-v2', and 'CartPole-v1' environments.
Because sometimes you want to K.I.S.S. your thrusters.
A solution for LunarLander from OpenAI Gym using deep Q-learning, implemented in Python using only TensorFlow.
Project for Artificial Intelligence (CS181), Fall 2023, ShanghaiTech.
React.js Planetary Lander Game
BASIC language subset/dialect in C++
This project solves the Lunar-Lander problem from the OpenAI Gym library of environments. The agent was trained using an Actor-Critic DQN model.
Programming Assignments for Reinforcement Learning Specialization
Lunar Landing using DQN and DDQN
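The difference between DQN and DDQN targets is easy to state in code. A minimal sketch with hypothetical Q-values for a next state: DQN lets the target network both select and evaluate the next action, while DDQN selects with the online network and evaluates with the target network, which reduces overestimation bias.

```python
import numpy as np

# Hypothetical Q-values for the next state s' under two networks.
q_online = np.array([1.0, 2.5, 2.4, 0.5])  # online network Q(s', a)
q_target = np.array([1.2, 2.0, 3.0, 0.4])  # target network Q(s', a)
reward, gamma = 1.0, 0.99

# DQN: the target network both selects and evaluates the next action.
y_dqn = reward + gamma * np.max(q_target)

# DDQN: the online network selects, the target network evaluates.
a_star = int(np.argmax(q_online))
y_ddqn = reward + gamma * q_target[a_star]
```

Here DQN's target uses the target network's own maximum (3.0), while DDQN evaluates the online network's argmax action (2.0), giving a lower, less biased target.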
A library for calculating the Vietnamese lunar calendar.
Behaviour cloning on an OpenAI Gym environment.
Implements a DQN agent for OpenAI Gym's LunarLander-v2 environment.
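A DQN agent for LunarLander-v2 typically combines two ingredients: an epsilon-greedy policy over the network's Q-values and an experience replay buffer sampled in minibatches. A minimal stdlib-only sketch (the Q-values below are placeholders, not a trained network):

```python
import random
from collections import deque

def select_action(q_values, epsilon, rng=random):
    """Epsilon-greedy: explore with probability epsilon, else pick argmax Q."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Experience replay: store transitions, then sample minibatches for training.
buffer = deque(maxlen=100_000)
buffer.append(((0.0, 0.0), 2, 1.5, (0.1, -0.2), False))  # (s, a, r, s', done)
batch = random.sample(buffer, k=min(32, len(buffer)))

# With epsilon=0 the policy is purely greedy over the Q-values.
action = select_action([0.1, 0.9, 0.3, 0.2], epsilon=0.0)
```

The bounded deque makes old transitions drop out automatically, and sampling uniformly from it breaks the temporal correlation between consecutive steps.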
Implementation of the DDPG algorithm to safely land a lunar lander from Gymnasium environments
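DDPG maintains target copies of its actor and critic and nudges them toward the online weights after each gradient step with a soft (Polyak) update, theta_target <- tau * theta_online + (1 - tau) * theta_target. A small sketch with plain lists standing in for network parameters:

```python
def soft_update(online, target, tau=0.005):
    """Polyak averaging: move target weights a small step toward online weights."""
    return [tau * o + (1 - tau) * t for o, t in zip(online, target)]

online_w = [1.0, -2.0]   # stand-in for online network parameters
target_w = [0.0, 0.0]    # stand-in for target network parameters
target_w = soft_update(online_w, target_w)
```

A small tau keeps the target networks slowly moving, which stabilizes the bootstrapped critic targets during training.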
A TensorFlow model that plays the lunar lander game.