Zubnet AI學習Wiki › Regression
Basics

Regression

Linear Regression, Prediction
A machine learning task that predicts a continuous numeric value rather than a category. "What will tomorrow's temperature be?" (regression: predict a number) vs. "Will it rain tomorrow?" (classification: predict a category). Linear regression fits a straight line; neural-network regression can learn arbitrary nonlinear relationships between inputs and outputs.

Why It Matters

Regression is one of the two fundamental ML tasks (the other is classification), powering everything from stock-price prediction to real-estate valuation to scientific modeling. It is also the simplest entry point for understanding machine learning: fitting a line to data points is something most people can visualize, and the conceptual leap from linear regression to neural networks is small.

Deep Dive

Linear regression: y = w1·x1 + w2·x2 + ... + bias. Find the weights that minimize the difference between predicted and actual values (usually mean squared error). This is the simplest ML model and is still widely used when relationships are roughly linear. Logistic regression (despite the name) is actually classification — it predicts probabilities of categories by applying a sigmoid function to the linear output.
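The fit described above can be sketched in a few lines of NumPy. The data, true weights, and noise level here are illustrative assumptions; the bias is learned by appending a constant column, and the MSE-minimizing weights come from ordinary least squares.

```python
import numpy as np

# Synthetic data: y = 2*x1 - 3*x2 + 0.5 plus a little noise (assumed values).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                     # two input features
true_w, true_b = np.array([2.0, -3.0]), 0.5
y = X @ true_w + true_b + rng.normal(scale=0.1, size=100)

# Append a column of ones so the bias is learned as just another weight.
X1 = np.hstack([X, np.ones((100, 1))])
w = np.linalg.lstsq(X1, y, rcond=None)[0]         # closed-form MSE minimizer

mse = np.mean((X1 @ w - y) ** 2)
print("weights:", w, "MSE:", mse)
```

With only mild noise, the recovered weights land very close to the true values, which is exactly the "find the weights that minimize mean squared error" objective stated above.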

Neural Network Regression

Replace the linear function with a neural network and you can learn arbitrarily complex relationships. The output layer has a single neuron with no activation function (or a linear activation), and the loss function is typically mean squared error or mean absolute error. This is used for: predicting prices, estimating time-to-completion, forecasting demand, and any task where the output is a number rather than a label.
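A minimal sketch of that idea, assuming a tiny one-hidden-layer MLP with hand-written backpropagation (the architecture, target function, and hyperparameters are all illustrative): the output neuron is linear with no activation, and the loss is mean squared error, as described above.

```python
import numpy as np

# Target: y = x^2, a nonlinear relationship no straight line can fit.
rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
y = x ** 2

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []

for _ in range(3000):
    h = np.maximum(0.0, x @ W1 + b1)          # hidden layer with ReLU
    pred = h @ W2 + b2                        # single output neuron, linear
    err = pred - y
    losses.append(float(np.mean(err ** 2)))   # mean squared error loss
    # Backpropagation written out by hand.
    g_pred = 2.0 * err / len(x)
    g_W2, g_b2 = h.T @ g_pred, g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_h[h <= 0.0] = 0.0                       # gradient of ReLU
    g_W1, g_b1 = x.T @ g_h, g_h.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The loss falls steadily as the network bends its piecewise-linear ReLU segments to approximate the parabola, something a plain linear model cannot do.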

Regression in LLMs

Interestingly, LLMs can perform regression through text: "Given these house features, predict the price" can be handled by prompting an LLM. Research shows LLMs perform surprisingly well on simple regression tasks, though they're less reliable than dedicated regression models for precision-critical applications. Where LLMs shine is when the regression requires understanding unstructured context: "Given this product review, predict the star rating" combines text understanding with numerical prediction.
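The star-rating example above can be sketched as prompt construction plus numeric parsing. `call_llm` is a hypothetical stand-in for any chat-completion API and is stubbed with a canned reply here; the prompt wording is also an assumption.

```python
import re

def build_prompt(review: str) -> str:
    # Constrain the model to answer with a bare number so parsing is easy.
    return (
        "Given this product review, predict the star rating (1-5).\n"
        f"Review: {review}\n"
        "Answer with a single number only."
    )

def parse_rating(reply: str) -> float:
    # Pull the first number out of free-form text, since models often add words.
    match = re.search(r"\d+(?:\.\d+)?", reply)
    if match is None:
        raise ValueError(f"no number in reply: {reply!r}")
    return float(match.group())

def call_llm(prompt: str) -> str:
    # Hypothetical stub: swap in a real LLM API call here.
    return "I'd estimate 4.5 stars."

rating = parse_rating(call_llm(build_prompt("Great battery life, slightly heavy.")))
print(rating)
```

In practice the fragile step is parsing: a dedicated regression model returns a float by construction, while an LLM returns text that must be coerced into one, which is part of why dedicated models remain more reliable for precision-critical use.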
