commit 85d323389ba061616a70f2b3e212ecc4a3dbcb31
author Tamas Hubai <efabless@htamas.net>  Mon Dec 05 10:05:35 2022 +0100
committer Tamas Hubai <efabless@htamas.net>  Mon Dec 05 10:05:35 2022 +0100
tree 118f16c2f864b6958b59f162012524ad59a30945
parent cdc0ab22ee5297b76aea412260056d0f45fcce46
Harden user_project_wrapper
Implements a simple neural network that supports on-chip training in addition to inference. The two hidden layers use leaky ReLU as their activation function, while the output layer uses a rough approximation of softmax.
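For reference, the standard definitions of leaky ReLU and softmax are sketched below in Python. The negative slope of 1/16 is only an assumption (a power-of-two slope is cheap in hardware since it reduces to a shift), and the commit does not specify which softmax approximation the output layer actually uses, so the exact softmax here is just the function being approximated.

```python
import numpy as np

def leaky_relu(x, slope=1.0 / 16):
    # Leaky ReLU: pass positive values through, scale negatives by a small
    # slope. The 1/16 slope is an assumption, not taken from the design.
    return np.where(x >= 0, x, slope * x)

def softmax(x):
    # Exact reference softmax; the output layer implements a rough
    # hardware approximation of this, whose form is not stated here.
    e = np.exp(x - np.max(x))
    return e / e.sum()
```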
Unlike typical neural network implementations, we use fixed-point saturating arithmetic, and each neuron and synapse is represented as an individual instance. Inputs, outputs, and weights, as well as forward and backward propagation, can be managed through the Wishbone bus.
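As an illustration of fixed-point saturating arithmetic, here is a minimal Python sketch. The signed 16-bit word width and 8 fractional bits are assumptions for the example only; the actual widths and the Wishbone register layout are defined by the RTL, not by this commit message.

```python
def saturate(value, width=16):
    # Clamp a result to the representable range of the assumed signed
    # word width instead of letting it wrap around on overflow.
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    return max(lo, min(hi, value))

def fixed_mul(a, b, frac_bits=8, width=16):
    # Multiply two fixed-point values (assumed 8 fractional bits),
    # rescale, and saturate rather than overflow.
    return saturate((a * b) >> frac_bits, width)

def fixed_add(a, b, width=16):
    # Saturating addition: overflow clips to the extreme representable value.
    return saturate(a + b, width)
```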