
Welcome!
Welcome to Dynosaur, a machine learning project I started to learn about reinforcement learning and to beat the Google Dinosaur Jumper Game.

Branches

Neuroev
Neuroev is the first version of Dynosaur. Borrowing principles from evolution, it modifies neural network weights and biases using genetic operators. Try it out here.
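The genetic operators could look something like the sketch below: genomes are flat lists of weights, mutation adds Gaussian noise, and crossover mixes two parents. The function names, rates, and elitism scheme are illustrative assumptions, not the branch's actual implementation.

```python
import random

def mutate(genome, rate=0.1, scale=0.5):
    """Perturb each weight with probability `rate` by Gaussian noise (assumed scheme)."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]

def crossover(parent_a, parent_b):
    """Uniform crossover: each weight is taken from a randomly chosen parent."""
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

def next_generation(population, fitness, elite=2):
    """Keep the top `elite` genomes, then refill with mutated crossovers of survivors."""
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:max(elite, len(population) // 2)]
    children = ranked[:elite]
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        children.append(mutate(crossover(a, b)))
    return children
```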
NEAT
Similar to Neuroev, NEAT (short for NeuroEvolution of Augmenting Topologies) modifies neural network weights and biases, while also mutating the topology of the network itself. Try it out here.
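A minimal sketch of one such topology mutation is NEAT's classic "add node" operator: a genome is a list of connection genes, and an enabled connection is split in two by inserting a new node. The dictionary-based gene representation here is an assumption for illustration.

```python
import random

def add_node_mutation(genome, next_node, next_innov):
    """Split a random enabled connection by inserting a new node.
    Genes are dicts: {"in", "out", "weight", "enabled", "innov"} (assumed layout)."""
    enabled = [g for g in genome if g["enabled"]]
    conn = random.choice(enabled)
    conn["enabled"] = False  # the old connection is disabled, not deleted
    # Incoming half gets weight 1.0, outgoing half keeps the old weight,
    # so the new structure initially behaves like the old connection.
    genome.append({"in": conn["in"], "out": next_node, "weight": 1.0,
                   "enabled": True, "innov": next_innov})
    genome.append({"in": next_node, "out": conn["out"],
                   "weight": conn["weight"], "enabled": True,
                   "innov": next_innov + 1})
    return genome
```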
Parallel NEAT
Parallel NEAT was designed to make evolution run faster. Instead of simulating the dinosaurs sequentially, an entire generation of dinosaurs is simulated at once, drastically reducing simulation time. Try it out here (refer to the documentation to start).
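The parallel idea can be sketched as stepping every still-alive dinosaur one frame per loop iteration, so the whole generation shares a single game clock instead of running one full game after another. The `step`/`alive` callbacks and fitness-as-frames-survived metric are assumptions for illustration.

```python
def evaluate_generation(dinos, step, alive, max_frames=1000):
    """Advance all living dinosaurs frame by frame.
    step(dino) advances one frame; alive(dino) reports survival.
    Returns frames survived per dinosaur (used as its fitness)."""
    fitness = [0] * len(dinos)
    for frame in range(max_frames):
        running = False
        for i, d in enumerate(dinos):
            if alive(d):
                step(d)
                fitness[i] = frame + 1
                running = True
        if not running:  # the whole generation has died; stop early
            break
    return fitness
```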
Continuous NEAT
Continuous NEAT improves on Parallel NEAT by making each simulation independent of the others. This reduces the time spent each generation waiting for higher-fitness dinosaurs to finish, and smooths out variable evolution times. Try it out here (refer to the documentation to start).
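One way to picture the independence is each dinosaur's simulation running as its own task, with results collected as they complete, so a slow, high-fitness run never blocks the others from finishing (in the actual branch, finished dinosaurs could even be replaced immediately rather than per generation). This worker-pool sketch is an assumption, not the branch's implementation.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate_independently(dinos, simulate, workers=4):
    """Run simulate(dino) -> fitness as independent tasks.
    Results are gathered as each finishes; returned in original order."""
    fitness = [None] * len(dinos)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(simulate, d): i for i, d in enumerate(dinos)}
        for fut in as_completed(futures):
            fitness[futures[fut]] = fut.result()
    return fitness
```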
Backprop
Backprop uses an LSTM network to learn when to jump and duck, with the user's own game decisions serving as labeled training data. Try it out here.
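A sketch of how that labeled data might be framed: each recorded frame pairs a game-state vector with the action the user took, and sliding windows of frames become the sequences an LSTM trains on. The state features, action set, and window length here are illustrative assumptions.

```python
# Assumed action labels; the real branch may encode them differently.
ACTIONS = {"run": 0, "jump": 1, "duck": 2}

def build_sequences(frames, seq_len=10):
    """frames: list of (state_vector, action_name) recorded during play.
    Returns (X, y): sliding windows of states, and the action taken at
    the end of each window (the label an LSTM would be trained to predict)."""
    X, y = [], []
    for end in range(seq_len, len(frames) + 1):
        window = frames[end - seq_len:end]
        X.append([state for state, _ in window])
        y.append(ACTIONS[window[-1][1]])
    return X, y
```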
Q
Q is a branch of Dynosaur that uses Q-learning, a reinforcement learning technique, to train the dinosaurs to jump over obstacles. Try it out here (refer to the documentation to start).
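The core of tabular Q-learning is the standard update rule Q(s,a) ← Q(s,a) + α·(r + γ·max Q(s',·) − Q(s,a)), paired with an exploration policy such as epsilon-greedy. The state/action encodings and hyperparameters below are assumptions; only the update rule itself is the textbook algorithm.

```python
import random
from collections import defaultdict

def make_q(actions):
    """Q-table mapping state -> {action: value}, zero-initialized."""
    return defaultdict(lambda: {a: 0.0 for a in actions})

def q_update(q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward reward + gamma * best next value."""
    best_next = max(q[s_next].values())
    q[s][a] += alpha * (reward + gamma * best_next - q[s][a])

def epsilon_greedy(q, s, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the best known action."""
    if random.random() < epsilon:
        return random.choice(list(q[s]))
    return max(q[s], key=q[s].get)
```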