Zubnet AI Learning Wiki › Regression
Basics

Regression

Linear Regression, Prediction
A machine learning task that predicts a continuous numeric value rather than a category. "What will tomorrow's temperature be?" (regression: predict a number) vs. "Will it rain tomorrow?" (classification: predict a category). Linear regression fits a straight line; neural network regression can learn arbitrary nonlinear relationships between inputs and outputs.

Why It Matters

Regression is one of the two fundamental ML tasks (the other is classification), powering everything from stock price prediction to real estate valuation to scientific modeling. It is also the simplest entry point for understanding machine learning: fitting a line to data points is something most people can visualize, and the conceptual jump from linear regression to neural networks is small.

Deep Dive

Linear regression: y = w1·x1 + w2·x2 + ... + bias. Find the weights that minimize the difference between predicted and actual values (usually mean squared error). This is the simplest ML model and is still widely used when relationships are roughly linear. Logistic regression (despite the name) is actually classification — it predicts probabilities of categories by applying a sigmoid function to the linear output.
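The fit described above can be sketched in a few lines of NumPy: append a constant column so the bias is learned as an extra weight, then solve the least-squares problem, which minimizes mean squared error. The synthetic data and the true weights here are illustrative.

```python
import numpy as np

# Synthetic 2-feature data with known weights (illustrative values).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # 200 samples, 2 features
true_w, true_b = np.array([3.0, -2.0]), 0.5
y = X @ true_w + true_b + rng.normal(scale=0.1, size=200)  # noisy targets

# Append a column of ones so the bias is learned as an extra weight,
# then solve the least-squares problem (minimizes mean squared error).
X1 = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

pred = X1 @ w
mse = np.mean((pred - y) ** 2)
```

With low noise, the recovered weights land close to the true values of 3.0, -2.0, and 0.5.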

Neural Network Regression

Replace the linear function with a neural network and you can learn arbitrarily complex relationships. The output layer has a single neuron with no activation function (or a linear activation), and the loss function is typically mean squared error or mean absolute error. This is used for: predicting prices, estimating time-to-completion, forecasting demand, and any task where the output is a number rather than a label.
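A minimal from-scratch sketch of this setup, assuming a one-hidden-layer network trained with full-batch gradient descent: ReLU in the hidden layer, a single linear output neuron, and MSE as the loss. The hidden size, learning rate, and step count are illustrative choices, not tuned values.

```python
import numpy as np

# Nonlinear 1-D regression target that a straight line cannot fit.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(256, 1))
y = X[:, 0] ** 2

H = 32                                         # hidden width (illustrative)
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    h = np.maximum(X @ W1 + b1, 0.0)           # hidden ReLU activations
    pred = (h @ W2 + b2)[:, 0]                 # linear output: no activation
    err = pred - y
    loss = np.mean(err ** 2)                   # mean squared error

    # Backpropagate the MSE gradient through both layers.
    g_pred = (2.0 / len(y)) * err[:, None]
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (h > 0)
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The loss falls well below the variance of the targets, showing the network has captured the curvature a linear model would miss.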

Regression in LLMs

Interestingly, LLMs can perform regression through text: "Given these house features, predict the price" can be handled by prompting an LLM. Research shows LLMs perform surprisingly well on simple regression tasks, though they're less reliable than dedicated regression models for precision-critical applications. Where LLMs shine is when the regression requires understanding unstructured context: "Given this product review, predict the star rating" combines text understanding with numerical prediction.
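A hypothetical sketch of the prompting approach: format the features as text, send the prompt to any chat model (the API call itself is omitted), and parse a number out of the free-text reply. The feature names, prompt wording, and sample reply are all illustrative assumptions.

```python
import re

def build_prompt(features: dict) -> str:
    """Render house features as a prompt asking for a single number."""
    lines = [f"- {k}: {v}" for k, v in features.items()]
    return ("Given these house features, predict the price in USD. "
            "Reply with a single number.\n" + "\n".join(lines))

def parse_number(reply: str) -> float:
    """Grab the first number in the model's reply, tolerating $ and commas."""
    match = re.search(r"-?[\d,]*\d(?:\.\d+)?", reply.replace("$", ""))
    if match is None:
        raise ValueError(f"no number found in: {reply!r}")
    return float(match.group().replace(",", ""))

prompt = build_prompt({"bedrooms": 3, "sqft": 1450, "zip": "94110"})
# A sample free-text reply such as a model might return:
price = parse_number("Based on the features, roughly $712,500.")
```

Robust parsing matters here: dedicated regression models emit numbers directly, while an LLM wraps its estimate in prose.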

Related Concepts

← All Terms
← Red Teaming · Reinforcement Learning →