Exploring MOSFET-Based Neurons: A More Efficient Alternative to Logic-Gate-Based Neural Networks?

Recently, there has been interest in using logic gates to approximate linear functions in neural networks, such as in DiffLogic, to reduce computational resource consumption. However, I’ve been considering an alternative approach: using MOSFETs as artificial neurons, since MOSFETs inherently exhibit a linear output under certain conditions. I talked it through with ChatGPT; here is something worth mentioning:

MOSFETs can emulate three fundamental properties of biological neurons (see the sketch after this list):

  1. Weighted Summation: Multiple MOSFETs can be used in parallel, with gate voltages controlling different weights.
  2. Non-Linear Activation: MOSFETs exhibit inherent non-linearity (quadratic in saturation mode, exponential in subthreshold), similar to common activation functions like ReLU and sigmoid.
  3. Synaptic Plasticity: The conductance of a MOSFET can be dynamically tuned via bias voltages, mimicking real neural adaptation.
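To make the mapping concrete, here is a minimal Python sketch of such a neuron. This is a toy model of my own: `drain_current`, `MosfetNeuron`, and all parameter values are illustrative assumptions, not real hardware. Each input drives the gate of one device, each device’s square-law parameter k acts as a tunable weight, and parallel drains sum their currents at a shared node.

```python
# Toy model of a "MOSFET neuron": parallel devices in saturation,
# output = sum of drain currents (Kirchhoff's current law does the summation).
# All parameter names and values are illustrative assumptions.

def drain_current(vg, k=0.5e-3, vt=0.7):
    """Square-law MOSFET model: Id = k*(Vg - Vt)^2 above threshold, 0 below.

    The cutoff below Vt gives the ReLU-like knee; k (A/V^2) scales the
    output and plays the role of a synaptic weight here.
    """
    return k * (vg - vt) ** 2 if vg > vt else 0.0

class MosfetNeuron:
    def __init__(self, ks, vts):
        self.ks = list(ks)    # per-input "weights" (square-law parameters)
        self.vts = list(vts)  # per-input threshold voltages

    def forward(self, gate_voltages):
        # Weighted summation: parallel drains share a node, so currents add.
        return sum(drain_current(vg, k, vt)
                   for vg, k, vt in zip(gate_voltages, self.ks, self.vts))

    def adapt(self, i, new_k):
        # "Synaptic plasticity": changing one device's effective strength.
        # In a real circuit this would be done by re-biasing, as noted above.
        self.ks[i] = new_k

neuron = MosfetNeuron(ks=[0.5e-3, 1.0e-3, 0.2e-3], vts=[0.7, 0.7, 0.7])
print(neuron.forward([1.2, 0.5, 2.0]))  # middle input is below Vt -> contributes 0
neuron.adapt(1, 2.0e-3)                 # strengthen the second "synapse"
print(neuron.forward([1.2, 0.9, 2.0]))  # now it conducts and adds current
```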

A simple MOSFET neuron could work as follows:

  1. Input (Synaptic Signal): Applied as the gate voltage Vg.
  2. Weight Adjustment: Controlled by gate biasing.
  3. Output (Activation Value): Determined by the drain current Id.

The MOSFET’s drain current in saturation follows a non-linear equation (valid for Vg > Vt; below threshold, Id is effectively 0):

Id = k (Vg − Vt)^2
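As a quick sanity check with illustrative values (my own numbers, not from any datasheet): take k = 0.5 mA/V² and Vt = 0.7 V. An input of Vg = 1.2 V gives Id = 0.5 mA/V² × (0.5 V)² = 0.125 mA, while any Vg ≤ 0.7 V gives Id ≈ 0. That one-sided, monotonically increasing curve is why the device behaves like a ReLU with a quadratic rather than linear rise.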

ChatGPT also suggested that RRAM could be a good fit for hybrid digital-analog integration of physical neural networks.
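For context on the RRAM point: the usual proposal is a crossbar array, where weights are stored as device conductances and Ohm’s law plus Kirchhoff’s current law perform an analog matrix-vector multiply. Below is a minimal, idealized sketch of that idea (ignoring wire resistance, device variability, and sneak-path currents; all values are illustrative assumptions):

```python
# Idealized RRAM crossbar: weights stored as conductances G (siemens),
# inputs applied as row voltages V (volts). Column current j is
# sum_i G[i][j] * V[i], i.e. the matrix-vector product comes "for free"
# from Ohm's and Kirchhoff's laws. Values are illustrative, not measured.

def crossbar_mvm(conductances, voltages):
    """conductances[i][j]: device at row i, column j; voltages[i]: row drive."""
    n_cols = len(conductances[0])
    return [sum(conductances[i][j] * voltages[i] for i in range(len(voltages)))
            for j in range(n_cols)]

G = [[1e-6, 5e-6],   # 2x2 array of conductances (S)
     [2e-6, 1e-6]]
V = [0.3, 0.1]       # input voltages (V)
print(crossbar_mvm(G, V))  # column currents in amperes
```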

Looking forward to a constructive discussion.

Observation:
Analog computers have been around for many decades.

True. Compared to digital computers, analog computers are less reliable in several respects (noise, drift, and device variability). However, analog circuits seem to have a natural advantage in mimicking neurons: their device physics inherently provides a non-linear activation function.