Optimizers are used to evolve models, either through supervised or unsupervised learning. NeuralFit offers in-house algorithms as well as algorithms from literature.
During the current alpha phase, only a single (in-house) optimizer is available. This is because the prime goal of the alpha phase is to establish the main framework of NeuralFit, so that it can keep up with the rate at which new algorithms are published in literature.
The mutation rate. Higher values result in more mutation and possibly faster convergence. Can range from 0 (no mutation) to 1 (all parameters mutate, resulting in pseudo-random search). Note that the mutation rate parameter is not yet implemented.
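Although the parameter is not yet implemented in NeuralFit, the idea behind a mutation rate can be sketched as follows. This is a generic, hypothetical illustration (the function `mutate` and its arguments are not part of the NeuralFit API): a fraction `rate` of the parameters is selected at random and perturbed with Gaussian noise, while the rest are left untouched.

```python
import numpy as np

def mutate(params, rate, scale=0.1, rng=None):
    """Return a copy of `params` where roughly a fraction `rate`
    of the entries is perturbed with Gaussian noise.

    rate=0 leaves every parameter unchanged; rate=1 perturbs all
    of them, which amounts to a pseudo-random search.
    """
    rng = rng or np.random.default_rng()
    # Boolean mask selecting which parameters mutate this generation.
    mask = rng.random(params.shape) < rate
    noise = rng.normal(0.0, scale, size=params.shape)
    return np.where(mask, params + noise, params)

params = np.zeros(1000)
# With rate=0.0 no entries change; with rate=1.0 all entries change.
unchanged = mutate(params, 0.0)
fully_mutated = mutate(params, 1.0)
```

In an evolutionary loop, `mutate` would be applied to each offspring's parameter vector once per generation; `scale` (the mutation step size, also hypothetical here) controls how far mutated parameters move, while `rate` controls how many of them move at all.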