What is the null hypothesis for the individual p-values in multiple regression?

I have a linear regression model for a dependent variable $Y$ based on two independent variables, $X_1$ and $X_2$, so the regression equation has the general form
$Y = A + B_1 \cdot X_1 + B_2 \cdot X_2 + \epsilon$,
where $A$ is the intercept, $\epsilon$ is the error term, and $B_1$ and $B_2$ are the coefficients of $X_1$ and $X_2$, respectively. I fit the model with software (statsmodels in Python) and obtain coefficient estimates $A = a$, $B_1 = b_1$, $B_2 = b_2$. The software also reports a $p$-value for each coefficient: $p_a$, $p_1$, and $p_2$. My question is: what is the null hypothesis behind each of these individual $p$-values? For example, I know that the null hypothesis for $p_1$ sets the coefficient $B_1$ to 0, but what about the other parameters? In other words, if the null hypothesis is $Y = A + 0 \cdot X_1 + B_2 \cdot X_2$, what values do $A$ and $B_2$ take in the null hypothesis from which the $p$-value for $B_1$ is derived?

Tags: regression, p-value

asked Dec 30 '18 at 19:24 by tmldwn, edited Dec 31 '18 at 19:47 by Michael M
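For concreteness, here is a minimal sketch of the kind of fit the question describes, using statsmodels on synthetic data; the sample size and the true coefficient values below are illustrative assumptions, not taken from the question:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 1.0 + 0.5 * X1 - 0.3 * X2 + rng.normal(size=n)  # true A, B1, B2 chosen arbitrarily

X = sm.add_constant(np.column_stack([X1, X2]))  # design matrix with an intercept column
results = sm.OLS(Y, X).fit()

print(results.params)   # estimates a, b1, b2 (const, x1, x2)
print(results.pvalues)  # p_a, p_1, p_2: one two-sided t-test per coefficient
```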
Comments:
– Andreas Dzemski (Dec 30 '18 at 19:48): Your model is missing an error term.
3 Answers
Answer (Andreas Dzemski, answered Dec 30 '18 at 19:47, edited Dec 30 '18 at 21:26):

The null hypothesis is
$$
H_0: B_1 = 0 \ \text{and}\ B_2 \in \mathbb{R} \ \text{and}\ A \in \mathbb{R},
$$
which means that the null hypothesis does not restrict $B_2$ and $A$.
The alternative hypothesis is
$$
H_1: B_1 \neq 0 \ \text{and}\ B_2 \in \mathbb{R} \ \text{and}\ A \in \mathbb{R}.
$$
In this sense, the null hypothesis in the multiple regression model is a composite hypothesis. It is "fortunate" that we can construct a pivotal test statistic whose null distribution does not depend on the true values of $B_2$ and $A$, so we pay no penalty for testing a composite null hypothesis.
In other words, many different distributions of $(Y, X_1, X_2)$ are compatible with the null hypothesis $H_0$. However, all of them lead to the same behavior of the test statistic used to test $H_0$.
In my answer I have not addressed the distribution of $\epsilon$; I implicitly assumed it is an independent, centered normal random variable. If we only assume something like
$$
E[\epsilon \mid X_1, X_2] = 0,
$$
then a similar conclusion holds asymptotically (under regularity assumptions).
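The pivotality point can be illustrated with a small simulation sketch; the sample size, number of replications, and the two arbitrary choices of $A$ and $B_2$ below are assumptions made purely for illustration. Data are generated inside the null $B_1 = 0$ at two different values of the unrestricted parameters, and the t-statistic for $B_1$ behaves the same way in both cases:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def t_stats_for_B1(A, B2, n=50, reps=2000, seed=0):
    """Simulate data with B1 = 0 (inside H0) and collect the t-statistic for B1."""
    rng = np.random.default_rng(seed)
    t_vals = []
    for _ in range(reps):
        X1 = rng.normal(size=n)
        X2 = rng.normal(size=n)
        Y = A + 0.0 * X1 + B2 * X2 + rng.normal(size=n)
        X = sm.add_constant(np.column_stack([X1, X2]))
        t_vals.append(sm.OLS(Y, X).fit().tvalues[1])  # index 1 = coefficient on X1
    return np.array(t_vals)

# Two very different points in the null space: the nuisance values of A and B2
# differ, but the distribution of the t-statistic for B1 should not.
t_a = t_stats_for_B1(A=0.0, B2=0.0, seed=1)
t_b = t_stats_for_B1(A=10.0, B2=-5.0, seed=2)

print(stats.ks_2samp(t_a, t_b))                # the two samples look alike
print(stats.kstest(t_a, "t", args=(50 - 3,)))  # and match a t distribution with n - 3 df
```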
Comments:
– tmldwn (Dec 30 '18 at 20:35): But as I understand it, doesn't the null hypothesis have to be a probability distribution? If I have specific values for the coefficients, I can generate a probability distribution by adding noise ($\epsilon$) to the regression equation. But if I don't have specific values for the coefficients, how would I generate the null probability distribution?
– Andreas Dzemski (Dec 30 '18 at 20:52): A composite null hypothesis is a whole set of possible probability measures.
– Andreas Dzemski (Dec 30 '18 at 21:28): I have edited my answer to emphasize this point.
– Andreas Dzemski (Dec 31 '18 at 11:02): @tmldwn: Here, the marginal distribution of the t-statistic does indeed not depend on where we are in the null. If you find this hard to understand, I suggest you go carefully through the derivation of the distribution of the t-statistic. Note that the t-statistic depends on the LS estimator. In a way, this automatically adjusts the test statistic for the "true" hypothesis in the null space (we don't have to take a stand on what $A$ and $B_2$ are because we don't need them to compute the test statistic).
– Josh (Dec 31 '18 at 15:47): This answer is completely wrong. As explained in this document, there is an ANOVA for the whole regression, but a t-test for each coefficient: reliawiki.org/index.php/…
Answer (Logicseeker, answered Dec 30 '18 at 19:52):

You can make the same assumptions for the other variables as for $X_1$. The ANOVA table of the regression gives specific information about each variable's significance as well as about the overall significance of the model. As far as regression analysis is concerned, accepting the null hypothesis means treating the coefficient of that variable as zero, at the chosen level of significance.
If you want a more intuitive grasp of the issue, you can study more about hypothesis testing.
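To make the distinction between the overall test and the per-coefficient tests concrete, here is a short sketch on made-up data (variable names and true values are illustrative) showing the overall F-test, the individual t-tests, and an ANOVA table from statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 100
df = pd.DataFrame({"X1": rng.normal(size=n), "X2": rng.normal(size=n)})
df["Y"] = 2.0 + 0.0 * df["X1"] + 1.5 * df["X2"] + rng.normal(size=n)

results = smf.ols("Y ~ X1 + X2", data=df).fit()

print(results.fvalue, results.f_pvalue)   # overall F-test: H0 is B1 = B2 = 0
print(results.pvalues)                    # individual t-tests: H0 is B_j = 0, one per coefficient
print(sm.stats.anova_lm(results, typ=2))  # ANOVA table with an F-test per term
```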
Answer (Josh, answered Dec 31 '18 at 15:49, edited Dec 31 '18 at 17:07 by StatsStudent):

The $p$-values are the result of a series of $t$-tests. For each coefficient, the null hypothesis is $B_j = 0$, while the alternative hypothesis is $B_j \neq 0$
(see here for more details: http://reliawiki.org/index.php/Multiple_Linear_Regression_Analysis#Test_on_Individual_Regression_Coefficients_.28t__Test.29).
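A minimal sketch, on synthetic data with arbitrary data-generating values, of how each reported $p$-value can be reproduced by hand as a two-sided $t$-test of $B_j = 0$:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 80
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 1.0 + 0.8 * X1 + rng.normal(size=n)  # B2 happens to be 0 here
X = sm.add_constant(np.column_stack([X1, X2]))
results = sm.OLS(Y, X).fit()

t = results.params / results.bse                 # t_j = b_j / se(b_j), testing B_j = 0
p = 2 * stats.t.sf(np.abs(t), results.df_resid)  # two-sided p-value, n - 3 residual df

print(np.allclose(t, results.tvalues))  # True: matches the reported t-statistics
print(np.allclose(p, results.pvalues))  # True: matches the reported p-values
```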