Fundamentals

Neuron

Artificial Neuron, Perceptron, Node
The basic computational unit of a neural network. An artificial neuron receives inputs, multiplies each by a weight, sums them, adds a bias, and passes the result through an activation function to produce an output. Thousands to millions of these neurons, organized in layers and connected by billions of learned weights, form the neural networks that power all modern AI.

Why it matters

Neurons are the atoms of deep learning. Understanding a single neuron — weighted sum plus activation — makes the rest of neural network architecture intuitive. A layer is a group of neurons. A network is a stack of layers. Training is adjusting the weights. Everything else is details (important details, but details).
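"Training is adjusting the weights" can be made concrete with the smallest possible case: one weight, one data point, gradient descent on squared error. A minimal sketch; all values here are illustrative, not from any real model:

```python
# Train a single weight w so that w * x approximates a target y.
x, y = 2.0, 6.0   # made-up data point: we want w * x ≈ 6, so w should approach 3
w, lr = 0.0, 0.1  # start from zero with a small learning rate

for _ in range(100):
    pred = w * x
    grad = 2 * (pred - y) * x  # derivative of the squared error (pred - y)**2 w.r.t. w
    w -= lr * grad             # gradient descent: nudge w against the gradient

# w has converged toward 3.0
```

Real training does exactly this, just simultaneously over billions of weights, with gradients computed by backpropagation.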

Deep Dive

The artificial neuron is loosely inspired by biological neurons but shouldn't be taken as a literal analogy. A biological neuron receives electrical signals through dendrites, integrates them in the cell body, and fires (or doesn't) through the axon. An artificial neuron computes: output = activation(w1·x1 + w2·x2 + ... + wn·xn + bias). The weights (w) determine how much each input matters. The bias shifts the activation threshold. The activation function (e.g., ReLU or GELU) introduces non-linearity.
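The weighted-sum-plus-activation computation fits in a few lines. A minimal sketch with ReLU as the activation; the inputs, weights, and bias below are arbitrary illustrative values:

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, plus bias, through ReLU."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, z)  # ReLU: pass positive values through, clamp negatives to 0

# Hypothetical example: two inputs with hand-picked weights.
out = neuron([1.0, 2.0], [0.5, -0.25], bias=0.1)
# 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1, and ReLU(0.1) = 0.1
```

A layer is just this function applied with many different weight vectors over the same inputs, which is why layer computation is a matrix multiplication in practice.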

From Perceptron to Deep Learning

The perceptron (Rosenblatt, 1958) was the first artificial neuron — a single unit that could learn to classify linearly separable data. Minsky and Papert showed in 1969 that a single perceptron couldn't learn XOR (a simple non-linear function), contributing to the first AI winter. The solution: stack multiple layers of neurons (multi-layer perceptrons / MLPs), which can approximate any continuous function to arbitrary accuracy given enough hidden neurons. This is the universal approximation theorem — the theoretical foundation of deep learning.
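Two hidden neurons are enough to solve XOR. A minimal sketch using hand-picked weights rather than learned ones, just to show why stacking layers breaks the single-perceptron limit:

```python
def step(z):
    """Perceptron-style threshold activation."""
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    # Hidden layer: one neuron fires for OR, another for AND.
    h_or  = step(x1 + x2 - 0.5)   # 1 if at least one input is 1
    h_and = step(x1 + x2 - 1.5)   # 1 only if both inputs are 1
    # Output neuron: "OR but not AND" is exactly XOR.
    return step(h_or - h_and - 0.5)

truth_table = [xor_mlp(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# truth_table == [0, 1, 1, 0]
```

No single linear threshold unit can produce that truth table, but composing two of them makes it trivial — the hidden layer re-represents the inputs so the output neuron faces a linearly separable problem.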

Neurons in Modern LLMs

A model like Llama-70B has roughly 70 billion parameters (weights and biases connecting neurons). Each feedforward layer has thousands of neurons. But modern research shows that individual neurons often don't correspond to single concepts — instead, concepts are encoded as directions in activation space across many neurons (superposition). A single neuron might participate in encoding dozens of different features, making interpretation challenging.
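Superposition can be sketched with a toy example: give each of many features its own random direction in a lower-dimensional activation space. The dimensions and counts below are made up for illustration, not taken from any real model:

```python
import random

random.seed(0)
n_neurons, n_features = 8, 20  # more concepts than neurons to store them

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(v):
    n = dot(v, v) ** 0.5
    return [x / n for x in v]

# Each feature gets a random unit direction in 8-dimensional activation space.
directions = [unit([random.gauss(0, 1) for _ in range(n_neurons)])
              for _ in range(n_features)]

# "Activate" feature 3: the activation vector points along its direction.
act = directions[3]

# Read out every feature by projecting the activations onto its direction.
scores = [dot(act, d) for d in directions]
# scores[3] is 1.0; other features score near zero but not exactly zero —
# that residue is the interference cost of packing 20 features into 8 neurons.
```

Because random directions in high-dimensional space are nearly (but not exactly) orthogonal, each of the 8 neurons participates in many features at once, which is why inspecting one neuron in isolation rarely reveals a single clean concept.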
