Core Concepts

Neuron

Artificial Neuron, Perceptron, Node
The basic computational unit of a neural network. An artificial neuron receives inputs, multiplies each one by a weight, sums them, adds a bias, and passes the result through an activation function to produce an output. From thousands up to billions of these neurons, organized into layers and connected by learned weights, make up the neural networks that power modern AI.

Why it matters

Neurons are the atoms of deep learning. Understanding a single neuron (weighted sum plus activation) makes the rest of neural network architecture intuitive. A layer is a group of neurons. A network is a stack of layers. Training is adjusting the weights. Everything else is details (important details, but details).

Deep Dive

The artificial neuron is loosely inspired by biological neurons but shouldn't be taken as a literal analogy. A biological neuron receives electrical signals through dendrites, integrates them in the cell body, and fires (or doesn't) through the axon. An artificial neuron computes: output = activation(w1·x1 + w2·x2 + ... + wn·xn + bias). The weights (w) determine how much each input matters. The bias shifts the activation threshold. The activation function (ReLU, GELU) introduces non-linearity.
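The formula above can be sketched directly in code. This is a minimal illustration, not a library implementation; the input and weight values are made up for the example.

```python
def relu(z: float) -> float:
    """ReLU activation: max(0, z)."""
    return max(0.0, z)

def neuron(inputs, weights, bias, activation=relu):
    """A single artificial neuron: activation(w1*x1 + ... + wn*xn + bias)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

# Hand-picked illustrative values: two inputs, two weights, one bias.
out = neuron([1.0, 2.0], [0.5, -0.25], bias=0.1)
print(out)  # 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1, and ReLU(0.1) = 0.1
```

Swapping `relu` for another activation (GELU, sigmoid, ...) changes only the final non-linearity; the weighted sum and bias are the same in every variant.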

From Perceptron to Deep Learning

The perceptron (Rosenblatt, 1958) was the first artificial neuron — a single unit that could learn to classify linearly separable data. Minsky and Papert showed in 1969 that a single perceptron couldn't learn XOR (a simple non-linear function), contributing to the first AI winter. The solution: stack multiple layers of neurons (multi-layer perceptrons / MLPs), which can approximate any continuous function arbitrarily well given enough hidden neurons. This is the universal approximation theorem, the theoretical foundation of deep learning.

Neurons in Modern LLMs

A model like Llama-70B has roughly 70 billion parameters (weights and biases connecting neurons). Each feedforward layer has thousands of neurons. But modern research shows that individual neurons often don't correspond to single concepts — instead, concepts are encoded as directions in activation space across many neurons (superposition). A single neuron might participate in encoding dozens of different features, making interpretation challenging.

Related Concepts
