Linear regression: y = w1·x1 + w2·x2 + ... + bias. Find the weights that minimize the difference between predicted and actual values, typically measured by mean squared error. It is among the simplest ML models and is still widely used when relationships are roughly linear. Logistic regression (despite the name) is actually a classification model — it predicts category probabilities by applying a sigmoid function to the linear output.
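A minimal sketch of fitting those weights, using numpy's closed-form least-squares solver on toy data (the data and the true weights here are made up for illustration):

```python
import numpy as np

# Toy data generated from y = 2*x1 + 3*x2 + 5 plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0] + 3.0 * X[:, 1] + 5.0 + rng.normal(scale=0.01, size=100)

# Append a column of ones so the bias is learned as just another weight
X_b = np.hstack([X, np.ones((100, 1))])

# Ordinary least squares: minimizes mean squared error in closed form
weights, *_ = np.linalg.lstsq(X_b, y, rcond=None)
w1, w2, bias = weights
print(w1, w2, bias)  # should recover roughly 2, 3, and 5
```

The bias-as-extra-column trick keeps the model a single matrix equation, which is why the same machinery scales to any number of features.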
Replace the linear function with a neural network and you can learn far more complex, nonlinear relationships. The output layer has a single neuron with no activation function (or a linear activation), and the loss function is typically mean squared error or mean absolute error. This is used for predicting prices, estimating time-to-completion, forecasting demand, and any task where the output is a number rather than a label.
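A numpy-only sketch of that recipe, assuming one tanh hidden layer trained by plain gradient descent: note the single output neuron has no activation, and the loss is mean squared error. The target function and hyperparameters are chosen just for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])  # a nonlinear target a straight line cannot fit

# One hidden layer (tanh), single linear output neuron
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.2
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = (h @ W2 + b2).ravel()      # linear output: no activation
    err = pred - y                    # MSE gradient w.r.t. pred, up to a constant
    # Backpropagate the error through both layers
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)   # tanh' = 1 - tanh^2
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((pred - y) ** 2)
print(mse)  # far below the ~0.5 variance of y: the net learned the curve
```

Swapping mean squared error for mean absolute error only changes the `err` line (sign of the residual instead of the residual itself); everything else stays the same.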
Interestingly, LLMs can perform regression through text: "Given these house features, predict the price" can be handled by prompting an LLM. Research shows LLMs perform surprisingly well on simple regression tasks, though they're less reliable than dedicated regression models for precision-critical applications. Where LLMs shine is when the regression requires understanding unstructured context: "Given this product review, predict the star rating" combines text understanding with numerical prediction.
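A sketch of the review-to-rating idea. `call_llm` is a hypothetical stand-in for any chat-completion client (stubbed here with a canned reply so the example runs); the prompt wording and the number-parsing step are assumptions, not a specific provider's API:

```python
import re

def call_llm(prompt: str) -> str:
    # Hypothetical stub: replace with a real LLM API call
    return "Predicted star rating: 4"

def predict_rating(review: str) -> float:
    prompt = (
        "Given this product review, predict the star rating (1-5).\n"
        f"Review: {review}\n"
        "Answer with a single number."
    )
    reply = call_llm(prompt)
    # LLMs return text, so the numeric prediction must be parsed out
    match = re.search(r"-?\d+(?:\.\d+)?", reply)
    if match is None:
        raise ValueError(f"no number in LLM reply: {reply!r}")
    return float(match.group())

rating = predict_rating("Great battery life, camera is okay.")
print(rating)  # 4.0 with the stubbed reply above
```

The parsing step is where text-based regression gets brittle in practice, which is one reason dedicated regression models remain more reliable for precision-critical applications.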