Zubnet AIAprenderWiki › Neuron
Fundamentals

Neuron

Artificial Neuron, Perceptron, Node
The basic computational unit of a neural network. An artificial neuron receives inputs, multiplies each by a weight, sums them, adds a bias, and passes the result through an activation function to produce an output. Thousands to billions of these neurons, organized in layers and connected by learned weights, form the neural networks that power all of modern AI.

Why it matters

Neurons are the atoms of deep learning. Understanding a single neuron — weighted sum plus activation — makes the rest of neural network architecture intuitive. A layer is a group of neurons. A network is a stack of layers. Training is adjusting the weights. Everything else is details (important details, but details).
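The "layer is a group of neurons, network is a stack of layers" idea can be sketched in a few lines of NumPy. Shapes, weights, and layer sizes here are illustrative, not from any real model:

```python
import numpy as np

def layer(x, W, b):
    """One dense layer: ReLU(W @ x + b) evaluates every neuron at once."""
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                 # 3 input features
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # layer of 4 neurons
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)   # layer of 2 neurons

out = layer(layer(x, W1, b1), W2, b2)                  # a 2-layer "network"
```

Each row of a weight matrix is one neuron's weight vector, so a layer is just a matrix-vector product followed by the activation.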

Deep Dive

The artificial neuron is loosely inspired by biological neurons but shouldn't be taken as a literal analogy. A biological neuron receives electrical signals through dendrites, integrates them in the cell body, and fires (or doesn't) through the axon. An artificial neuron computes: output = activation(w1·x1 + w2·x2 + ... + wn·xn + bias). The weights (w) determine how much each input matters. The bias shifts the activation threshold. The activation function (ReLU, GELU) introduces non-linearity.
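The formula above — output = activation(w1·x1 + ... + wn·xn + bias) — translates directly into code. A minimal sketch with made-up weight and bias values, using ReLU as the activation:

```python
def relu(z):
    """ReLU activation: max(0, z)."""
    return max(0.0, z)

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through the activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(z)

# 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1, and ReLU(0.1) = 0.1
out = neuron([1.0, 2.0], weights=[0.5, -0.25], bias=0.1)
```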

From Perceptron to Deep Learning

The perceptron (Rosenblatt, 1958) was the first artificial neuron — a single unit that could learn to classify linearly separable data. Minsky and Papert showed in 1969 that a single perceptron couldn't learn XOR (a simple non-linear function), contributing to the first AI winter. The solution: stack multiple layers of neurons (multi-layer perceptrons / MLPs), which can learn any function given enough neurons. This is the universal approximation theorem — the theoretical foundation of deep learning.
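The XOR limitation and its fix can be shown concretely. Below is a hand-wired two-layer network that computes XOR exactly — the weights are chosen by hand rather than learned, purely to illustrate why stacking layers adds expressive power that a single perceptron lacks:

```python
def relu(z):
    return max(0.0, z)

def xor_net(x1, x2):
    # Hidden layer: two ReLU units.
    h1 = relu(x1 + x2)          # nonzero when at least one input is on
    h2 = relu(x1 + x2 - 1.0)    # nonzero only when both inputs are on
    # Output layer: subtract the "both on" case twice.
    return h1 - 2.0 * h2

# xor_net(0,0)=0, xor_net(0,1)=1, xor_net(1,0)=1, xor_net(1,1)=0
```

No single linear threshold over x1 and x2 can separate these four points, but one hidden layer makes it trivial.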

Neurons in Modern LLMs

A model like Llama-70B has roughly 70 billion parameters (weights and biases connecting neurons). Each feedforward layer has thousands of neurons. But modern research shows that individual neurons often don't correspond to single concepts — instead, concepts are encoded as directions in activation space across many neurons (superposition). A single neuron might participate in encoding dozens of different features, making interpretation challenging.

Related concepts

← All terms
← Neural Network Normalization →