Recently, there has been interest in using logic gates to approximate linear functions in neural networks, such as in DiffLogic, to reduce computational resource consumption. However, I’ve been considering an alternative approach: using MOSFETs as artificial neurons, since a MOSFET inherently exhibits a roughly linear output under certain operating conditions. I talked the idea through with ChatGPT, and a few points seem worth sharing:
MOSFETs can emulate the three fundamental properties of biological neurons:
- Weighted Summation: Multiple MOSFETs can be used in parallel, with gate voltages controlling different weights.
- Non-Linear Activation: MOSFETs exhibit inherent non-linearity (quadratic in saturation mode, exponential in subthreshold), similar to common activation functions like ReLU and sigmoid.
- Synaptic Plasticity: The conductance of a MOSFET can be dynamically tuned via bias voltages, mimicking real neural adaptation.
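The three properties above can be sketched in code. This is a behavioral toy model, not a circuit simulation: `drain_current`, `mosfet_neuron`, and all parameter values (`k`, `vt`, the bias voltages) are illustrative assumptions, not taken from any specific device.

```python
# Hypothetical sketch of a "MOSFET neuron": several devices in parallel
# sum their drain currents (weighted summation), each device's square-law
# I-V curve supplies the non-linearity, and a per-device bias voltage
# plays the role of a tunable weight (plasticity). All values illustrative.

def drain_current(vg, vbias, k=1e-3, vt=0.7):
    """Square-law saturation current; vbias shifts the effective gate drive."""
    vov = vg + vbias - vt          # overdrive voltage
    return k * vov**2 if vov > 0 else 0.0

def mosfet_neuron(inputs, biases):
    """Parallel devices: the output is the sum of the branch currents."""
    return sum(drain_current(vg, vb) for vg, vb in zip(inputs, biases))

# "Learning" here means retuning the biases, which changes the output
# current for the same input voltages.
i_before = mosfet_neuron([0.9, 1.1], [0.0, 0.0])
i_after  = mosfet_neuron([0.9, 1.1], [0.3, -0.2])
```

With the default parameters, retuning the biases shifts the summed current, which is the plasticity mechanism the list describes.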
A simple MOSFET neuron could work as follows:
Input (Synaptic Signal): Applied as gate voltage Vg.
Weight Adjustment: Controlled by gate biasing.
Output (Activation Value): Determined by the drain current Id.
The MOSFET’s drain current follows a non-linear equation:
Id = k (Vg − Vt)^2, for Vg > Vt (saturation region)
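This square law already looks like a one-sided activation function: zero below threshold, smoothly increasing above it. A minimal sketch, assuming illustrative values for the transconductance parameter `k` and threshold voltage `vt`:

```python
# Square-law MOSFET drain current treated as a ReLU-like activation.
# k and vt are illustrative values, not from any specific device.

def mosfet_activation(vg, k=0.5e-3, vt=0.7):
    """Id = k * (Vg - Vt)^2 in saturation; device is off below threshold."""
    overdrive = vg - vt
    return k * overdrive**2 if overdrive > 0 else 0.0

print(mosfet_activation(0.5))   # below Vt: 0.0, like ReLU's dead zone
print(mosfet_activation(1.2))   # above Vt: k * 0.5^2 = 1.25e-4
```

The off region below Vt gives the same hard cutoff as ReLU, while the quadratic growth above threshold is smoother than ReLU's linear ramp.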
It also suggested that RRAM could be a good fit for hybrid digital-analog integration of physical neural networks.
Looking forward to a constructive discussion!