I have been brushing up on my probability and measure theory lately. I understand how measure theory defines a sigma algebra over a sample space, and how a "measure" then assigns a numerical value to the sets in that sigma algebra. However, I am struggling to understand the connection between the sigma algebra in a probability space and a regression model with its regressor variables in statistics. In looking at measure theory texts, like Axler or Billingsley, the authors end with the assignment of probabilities but do not discuss the connection between statistics and measure theory.
But I am trying to remember how measure theory/probability formally connects to statistical models. I recall from grad school that the CLT says a suitably normalized sum of random variables is approximately normal, and I remember the general idea of sampling. But I am looking for a more mathematical or formal understanding of how probability connects back to specific statistical models.
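For concreteness, the version of the CLT I have in mind is the classical i.i.d. one (please correct me if I am misremembering it):

$$ \sqrt{n}\left(\bar{X}_n - \mu\right) \xrightarrow{d} \mathcal{N}(0, \sigma^2), \qquad \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, $$

where the $X_i$ are i.i.d. random variables with mean $\mu$ and finite variance $\sigma^2$.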
Let me be a little more precise. As I understand it, a sigma algebra is a collection of subsets of the sample space (the events) that contains the sample space itself and is closed under complements and countable unions, and hence under countable intersections. Now in a linear regression we have regressor variables. I am trying to understand, in a precise mathematical sense, how the elements of the sigma algebra relate to the regressor variables in a regression, or whether there is any relationship at all.
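For reference, the formal setup I am working from is the standard probability space (this is just my paraphrase of the textbook definitions):

$$ (\Omega, \mathcal{F}, P), \qquad \mathcal{F} \subseteq 2^{\Omega}, \qquad P : \mathcal{F} \to [0,1], $$

where $\Omega \in \mathcal{F}$; $A \in \mathcal{F} \Rightarrow A^{c} \in \mathcal{F}$; $A_1, A_2, \ldots \in \mathcal{F} \Rightarrow \bigcup_{i=1}^{\infty} A_i \in \mathcal{F}$; and $P$ is countably additive with $P(\Omega) = 1$. A random variable is then a measurable function $X : \Omega \to \mathbb{R}$, i.e., $X^{-1}(B) \in \mathcal{F}$ for every Borel set $B$.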
In a linear regression our model looks like:
$$ \mathbf{Y} = \mathbf{X}\mathbf{B} + \boldsymbol{\epsilon} $$ where $\mathbf{Y}$ is a vector of outcomes, $\mathbf{X}$ is a design matrix containing the regressor variables, $\mathbf{B}$ is a vector of coefficients, and $\boldsymbol{\epsilon}$ is a vector of observation-level errors.
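My best attempt at writing this measure-theoretically, which may well be wrong (and is really part of what I am asking about), is to view each error and each outcome as a measurable function on one common probability space:

$$ \epsilon_i : (\Omega, \mathcal{F}, P) \to (\mathbb{R}, \mathcal{B}(\mathbb{R})), \qquad Y_i(\omega) = \mathbf{x}_i^{\top}\mathbf{B} + \epsilon_i(\omega), $$

where $\mathbf{x}_i^{\top}$ denotes the $i$-th row of $\mathbf{X}$. Under this reading the rows of $\mathbf{X}$ are fixed constants rather than random variables, but I am not sure whether that is the right picture, or whether the regressors should themselves be measurable functions on $\Omega$.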
So, in a mathematical sense, is the sigma algebra from measure theory related to the variables in $\mathbf{X}$, or is it only related to the error term $\boldsymbol{\epsilon}$?