## Getting started

This page provides a simple walkthrough to get you started with NeuralFit. For more technical details, please visit the documentation. There is also a library of examples that you can use as a starting point for your project.

##### Installation instructions

NeuralFit is tested and supported on 64-bit machines running Windows or Ubuntu with Python 3.7-3.10 installed. You can install NeuralFit via pip (version 19.3 or higher):

```shell
pip install neuralfit
```

If you want to export models to Keras, make sure you also have tensorflow installed!

NeuralFit is free to use; however, a supporter license is available that offers larger networks and additional functionality. If you own a license, you can set it as shown below.

```python
import neuralfit as nf

nf.set_license('<your license key>')
```

Note that an active internet connection is required at all times when using the library, even if you have no paid license.

### A simple example

From this point on, we will walk through a simple example that shows the basics of NeuralFit. You can consult the full example code here.

##### 1. Creating a model

The first step is to create a model, specifying the inputs and outputs. We will consider the XOR gate as our dataset, which requires 2 inputs and 1 output.

```python
model = nf.Model(2, 1)
```
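
For reference, the XOR gate outputs 1 exactly when its two binary inputs differ. A quick plain-Python check of the truth table (independent of NeuralFit):

```python
# XOR truth table: output is 1 exactly when the inputs differ
table = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
for a, b, out in table:
    print(f"{a} XOR {b} = {out}")
# 0 XOR 0 = 0
# 0 XOR 1 = 1
# 1 XOR 0 = 1
# 1 XOR 1 = 0
```
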
##### 2. Specifying a dataset

We can specify the dataset by defining an input x and a target output y. Both should always be numpy arrays and match the dimensions of the network: the input should have shape (n_samples, n_inputs) and the output should have shape (n_samples, n_outputs). Note that it is not yet possible to provide N-dimensional samples, as the current evolutionary algorithm does not have any spatial considerations. For the XOR gate, the dataset is as follows.

```python
import numpy as np

x = np.asarray([[0, 0], [0, 1], [1, 0], [1, 1]])  # shape: (4, 2)
y = np.asarray([[0], [1], [1], [0]])              # shape: (4, 1)
```
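
The shape convention can be verified directly; a quick NumPy check (nothing here is NeuralFit-specific):

```python
import numpy as np

x = np.asarray([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.asarray([[0], [1], [1], [0]])

# 4 samples, 2 inputs and 1 output per sample -> matches a model with 2 inputs, 1 output
print(x.shape)  # (4, 2)
print(y.shape)  # (4, 1)
```
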
##### 3. Compiling the model

Before evolving the model, we have to specify how the loss is calculated and which other metrics and monitors we want to keep track of. A metric calculates a single scalar that indicates the performance of the model on the dataset, while a monitor tracks a single scalar property of the model itself. Please consult the list of metrics and monitors for more information.

```python
model.compile(optimizer='alpha', loss='mse', monitors=['size'])

The above code will use the mean-squared-error to evaluate the performance of the genomes. During evolution, it tracks the size of the best-performing genome.
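
As a sanity check, the value reported by a mean-squared-error loss can be reproduced by hand with NumPy. A sketch, independent of NeuralFit; the prediction values are made up for illustration:

```python
import numpy as np

y_true = np.asarray([[0], [1], [1], [0]])
y_pred = np.asarray([[0.0], [0.9], [1.1], [0.2]])  # hypothetical predictions

# Mean-squared-error: average of the squared differences over all samples
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # ≈ 0.015
```
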

##### 4. Evolving the model

After compilation, you can evolve the model by passing the dataset and specifying the number of epochs. Currently no early stopping is supported, but this will be implemented in the near future.

```python
model.evolve(x, y, epochs=100)
```

Running the above function will generate output indicating how the performance changes as a function of the number of epochs. Note that we can also see our metrics and monitors here (i.e. the size of the best-performing genome).

```
Epoch 86/100 - 1/1 [==============================] - 9ms 4ms/step - loss: 0.000062 - size: 7
Epoch 87/100 - 1/1 [==============================] - 9ms 4ms/step - loss: 0.000054 - size: 7
Epoch 88/100 - 1/1 [==============================] - 10ms 5ms/step - loss: 0.000054 - size: 7
Epoch 89/100 - 1/1 [==============================] - 11ms 5ms/step - loss: 0.000054 - size: 7
Epoch 90/100 - 1/1 [==============================] - 8ms 4ms/step - loss: 0.000045 - size: 7
Epoch 91/100 - 1/1 [==============================] - 12ms 6ms/step - loss: 0.000045 - size: 7
Epoch 92/100 - 1/1 [==============================] - 8ms 4ms/step - loss: 0.000032 - size: 7
Epoch 93/100 - 1/1 [==============================] - 10ms 5ms/step - loss: 0.000032 - size: 7
```

Evolution is stochastic; every run will be different, and past convergence is no guarantee of future convergence.

##### 5. Evaluating the model

Once evolution has finished, you can use the evolved model to make predictions. To see how well the model performs, we simply predict on the training dataset.

```python
print(model.predict(x))
# [[0.        ]   target: 0
#  [0.99853663]  target: 1
#  [1.01800795]  target: 1
#  [0.        ]] target: 0
```
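
Since the XOR targets are binary, the raw outputs can be thresholded to recover hard 0/1 predictions. A plain NumPy sketch; the prediction values are copied from the run above and will differ between runs:

```python
import numpy as np

# Raw outputs from the run above (illustrative; evolution is stochastic)
preds = np.asarray([[0.0], [0.99853663], [1.01800795], [0.0]])
targets = np.asarray([[0], [1], [1], [0]])

# Threshold at 0.5 to obtain binary predictions
binary = (preds >= 0.5).astype(int)
print(binary.ravel())             # [0 1 1 0]
print((binary == targets).all())  # True
```
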

Furthermore, you can always evaluate the performance on metrics other than the one used for evolution by recompiling the model and running the evaluate function. In the example below, we evaluate the mean-absolute-error of the model.

```python
model.compile('mae')
print(model.evaluate(x, y))
# [0.010809107499241069]
```
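
For reference, the mean-absolute-error can also be computed directly with NumPy. A sketch using the predictions from the run above; your values will differ between runs:

```python
import numpy as np

y_true = np.asarray([[0], [1], [1], [0]])
# Predictions copied from the run above (illustrative only)
y_pred = np.asarray([[0.0], [0.99853663], [1.01800795], [0.0]])

# Mean-absolute-error: average of the absolute differences over all samples
mae = np.mean(np.abs(y_true - y_pred))
print(mae)  # ≈ 0.00487
```
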