# Trainable Neural Network

Implements a simple neural network that supports on-chip training in addition to inference.
The two hidden layers use leaky ReLU as their activation function while the output layer uses a
rough approximation of softmax.

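The exact constants are not spelled out here, so the following is a rough illustration only: a minimal C model of hardware-friendly versions of these activations, assuming a 16-bit fixed-point format with 8 fractional bits, a leaky-ReLU slope of 1/16 (a single shift), and a power-of-two substitute for `exp()` in the softmax. None of these values are taken from the RTL.

```c
#include <stdint.h>

#define FRAC_BITS 8                       /* assumed Q7.8 format, not from the RTL */

/* Leaky ReLU: positive values pass through, negative values are scaled
 * down by a power of two, which costs a single shift in hardware.
 * The slope 1/16 is an assumed value. */
static int16_t leaky_relu(int16_t x) {
    return x >= 0 ? x : (int16_t)(x >> 4);
}

/* Rough softmax: e^x is replaced by 2^x of the integer part of each
 * input's gap to the maximum, so every term is again a single shift. */
void rough_softmax(const int16_t *z, uint16_t *p, int n) {
    int16_t zmax = z[0];
    for (int i = 1; i < n; i++)           /* find the maximum for range safety */
        if (z[i] > zmax) zmax = z[i];

    uint32_t e[16], sum = 0;              /* assumes at most 16 outputs */
    for (int i = 0; i < n; i++) {
        int shift = (zmax - z[i]) >> FRAC_BITS;   /* integer exponent gap */
        e[i] = shift > 16 ? 0 : (0x10000u >> shift);
        sum += e[i];
    }
    for (int i = 0; i < n; i++) {         /* normalize: p[i] ~ e[i]/sum in Q0.16 */
        uint32_t q = (uint32_t)(((uint64_t)e[i] << 16) / sum);
        p[i] = q > 0xFFFFu ? 0xFFFFu : (uint16_t)q;
    }
}
```
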
Unlike usual neural network implementations, we use fixed-point saturation arithmetic, and
each neuron and synapse is represented as an individual instance.
Inputs, outputs and weights, as well as forward and backward propagation, can be managed
through the Wishbone bus.

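Since the width and scaling of the fixed-point format are not stated in this README, here is a minimal sketch of what saturation arithmetic means in practice, again assuming signed 16-bit values with 8 fractional bits (Q7.8): on overflow a result clamps to the nearest representable extreme instead of wrapping around.

```c
#include <stdint.h>

#define FRAC_BITS 8          /* assumed Q7.8 format, not from the RTL */

/* Clamp a 32-bit intermediate into the 16-bit range. Plain two's-
 * complement arithmetic would wrap (e.g. 0x7FFF + 1 -> -0x8000), which
 * is catastrophic for a neuron's accumulator; saturation pins the
 * value at the rail instead. */
static int16_t saturate(int32_t v) {
    if (v > INT16_MAX) return INT16_MAX;
    if (v < INT16_MIN) return INT16_MIN;
    return (int16_t)v;
}

/* Saturating add, e.g. a neuron summing its weighted inputs. */
static int16_t fxp_add(int16_t a, int16_t b) {
    return saturate((int32_t)a + (int32_t)b);
}

/* Saturating multiply, e.g. a synapse applying its weight: the raw
 * product of two Q7.8 values is Q14.16, so shift back by FRAC_BITS. */
static int16_t fxp_mul(int16_t a, int16_t b) {
    return saturate(((int32_t)a * (int32_t)b) >> FRAC_BITS);
}
```

Saturation matters especially during training, since gradient updates can momentarily push weights and activations past the representable range.
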
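The README does not give the register map, so every address, offset and control bit below is an invented placeholder; this sketch only shows the general shape of driving a memory-mapped Wishbone peripheral from firmware. The real layout and commands have to come from the RTL.

```c
#include <stdint.h>

/* All addresses below are hypothetical placeholders, not the design's
 * actual register map. */
#define TNN_BASE      0x30000000u                     /* placeholder base  */
#define TNN_INPUT(i)  (TNN_BASE + 0x0000u + 4u * (i)) /* input registers   */
#define TNN_WEIGHT(i) (TNN_BASE + 0x1000u + 4u * (i)) /* synapse weights   */
#define TNN_OUTPUT(i) (TNN_BASE + 0x2000u + 4u * (i)) /* output registers  */
#define TNN_CTRL      (TNN_BASE + 0x3000u)            /* placeholder ctrl  */

static inline void wb_write(uint32_t addr, uint32_t v) {
    *(volatile uint32_t *)(uintptr_t)addr = v;
}
static inline uint32_t wb_read(uint32_t addr) {
    return *(volatile uint32_t *)(uintptr_t)addr;
}

/* One inference step: load inputs, trigger a forward pass, read outputs.
 * The "write 1 to start, poll until clear" protocol is an assumption. */
void tnn_forward(const uint32_t *in, int n_in, uint32_t *out, int n_out) {
    for (int i = 0; i < n_in; i++)
        wb_write(TNN_INPUT(i), in[i]);
    wb_write(TNN_CTRL, 1u);
    while (wb_read(TNN_CTRL) & 1u)
        ;                                 /* busy-wait until the pass ends */
    for (int i = 0; i < n_out; i++)
        out[i] = wb_read(TNN_OUTPUT(i));
}
```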