AGI Hardware is Here: Neural Computation boils down to Computation (2/X)
In 1943, Warren McCulloch and Walter Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which showed that neurons could be modeled as logical units performing calculations.
One could model a neuron as a simple logical unit:
- It receives inputs from other neurons
- It computes based on those inputs
- Depending on the result of that computation, it either fires or it doesn't.
- McCulloch and Pitts described firing as a binary 0 or 1, but in actuality it's more complex (a minimal sketch of their model follows this list).
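Here is that model as a minimal sketch in plain Python; the inputs and threshold are illustrative, not taken from the 1943 paper:

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """Binary threshold unit: fire (1) if enough inputs are active, otherwise stay silent (0)."""
    return 1 if sum(inputs) >= threshold else 0

# With a threshold of 2, this behaves like an AND gate over two inputs.
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # 1: fires
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # 0: stays silent
```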
In reality, a biological neuron fires discrete action potentials that all have the same amplitude and shape. The information is carried by the rate and timing of those spikes, much like FM radio encodes a signal in frequency rather than amplitude.
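To make that concrete, here's a toy rate-coding sketch. The rates and time step are invented for illustration; the point is that every spike is identical, and only how many occur per second changes with the stimulus:

```python
import random

def poisson_spike_train(rate_hz, duration_s=1.0, dt=0.001):
    """Every spike has the same 'amplitude' (a 1); only the rate carries
    information about how strong the stimulus is."""
    n_steps = int(duration_s / dt)
    return [1 if random.random() < rate_hz * dt else 0 for _ in range(n_steps)]

weak = poisson_spike_train(rate_hz=10)    # ~10 identical spikes per second
strong = poisson_spike_train(rate_hz=80)  # ~80 identical spikes per second
print(sum(weak), sum(strong))
```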
Artificial neural networks (ANNs) instead use continuous floating-point values, and the developer chooses an activation function suited to the computation: Sigmoid, which outputs a value between 0 and 1; ReLU, which passes positive inputs through and outputs 0 otherwise; Tanh, which outputs between -1 and 1; and many others.
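For reference, those activations are one-liners in PyTorch; here's a quick sketch with an arbitrary input tensor:

```python
import torch

x = torch.tensor([-2.0, 0.0, 2.0])

print(torch.sigmoid(x))  # squashes each value into (0, 1)
print(torch.relu(x))     # max(0, x): keeps positives, zeroes out negatives
print(torch.tanh(x))     # squashes each value into (-1, 1)
```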
And if you take a look at the PyTorch code, you'll see the intention: it simulates the math, not the mechanism. When a developer defines a neural network in PyTorch, the library is effectively saying: "multiply inputs by weights, sum them, apply a nonlinearity." That's the functional behavior we care about from a synapse and a neuron.
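You can see that intent in even the tiniest model definition. This is a generic sketch, not any particular architecture, and the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

# "Multiply inputs by weights, sum them, apply a nonlinearity" in a few lines.
layer = nn.Sequential(
    nn.Linear(4, 3),  # weights times inputs, plus bias: the multiply-and-sum
    nn.ReLU(),        # the nonlinearity
)

x = torch.randn(1, 4)  # a batch of one input vector
print(layer(x))        # the functional behavior of three "neurons"
```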
Keep in mind that the biology does the following differently:
- The synapse doesn't literally multiply - it releases neurotransmitter in proportion to some internal state (its effective "weight")
- The neuron doesn't literally compute a sigmoid - it integrates incoming current until it crosses a threshold (a toy sketch follows this list)
- There's no "backward pass" in biology - learning happens through local chemical signals (spike-timing-dependent plasticity (STDP), neuromodulators)
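Here is that toy integrate-until-threshold behavior as a sketch. The threshold, leak, and input values are arbitrary, chosen only to demonstrate the idea, not to model any real cell:

```python
def leaky_integrate_and_fire(input_current, threshold=1.0, leak=0.9, steps=100):
    """Toy neuron: the membrane potential integrates input and slowly leaks;
    a spike is emitted whenever the threshold is crossed, then it resets.
    There is no sigmoid anywhere - the 'nonlinearity' is the threshold itself."""
    v = 0.0
    spikes = []
    for _ in range(steps):
        v = v * leak + input_current   # integrate the input, with leak
        if v >= threshold:
            spikes.append(1)           # an all-or-nothing spike
            v = 0.0                    # reset after firing
        else:
            spikes.append(0)
    return spikes

print(sum(leaky_integrate_and_fire(0.05)))  # weak input never reaches threshold: 0 spikes
print(sum(leaky_integrate_and_fire(0.30)))  # stronger input: spikes at a higher rate
```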
The key takeaway is that in biology, all of those steps - the neurotransmitter diffusion, the membrane charging, the vesicle dynamics - together are the multiply-accumulate operation. There is no separation between "data" (the weights) and "compute" (the neurons).
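On a conventional computer, by contrast, the multiply-accumulate is an explicit operation over weights stored in memory, separate from the compute that reads them. A plain-Python sketch of that separation, with arbitrary numbers:

```python
# The weights are *data*, sitting in memory...
weights = [0.2, -0.5, 0.8]
inputs  = [1.0,  2.0, 3.0]

# ...and the *compute* is a separate loop that fetches and combines them.
acc = 0.0
for w, x in zip(weights, inputs):
    acc += w * x          # the explicit multiply-accumulate
print(acc)                # roughly 1.6
```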
So when a developer defines a neural network in PyTorch, they're writing down only that functional behavior: multiply inputs by weights, sum them, apply a nonlinearity. The machinery is different, but the computation is the same.