

Logistic regression (reposted)


Original article: https://en.wikipedia.org/wiki/Logistic_regression

In statistics, logistic regression, or logit regression, or logit model[1] is a regression model where the dependent variable (DV) is categorical.

Logistic regression was developed by statistician David Cox in 1958[2][3]. The binary logistic model is used to estimate the probability of a binary response based on one or more predictor (or independent) variables (features). As such it is not a classification method. It could be called a qualitative response/discrete choice model in the terminology of economics.

Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors.[citation needed]

Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression. The model of logistic regression, however, is based on quite different assumptions (about the relationship between dependent and independent variables) from those of linear regression. In particular the key differences of these two models can be seen in the following two features of logistic regression. First, the conditional distribution $y \mid x$ is a Bernoulli distribution rather than a Gaussian distribution, because the dependent variable is binary. Second, the predicted values are probabilities and are therefore restricted to (0,1) through the logistic distribution function, because logistic regression predicts the probability of particular outcomes.

Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis.[4] If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis.[citation needed]


Contents

  • 1 Fields and example applications
    • 1.1 Example: Probability of passing an exam versus hours of study
  • 2 Basics
  • 3 Latent variable interpretation
  • 4 Logistic function, odds, odds ratio, and logit
    • 4.1 Definition of the logistic function
    • 4.2 Definition of the inverse of the logistic function
    • 4.3 Interpretation of these terms
    • 4.4 Definition of the odds
    • 4.5 Definition of the odds ratio
    • 4.6 Multiple explanatory variables
  • 5 Model fitting
    • 5.1 Estimation
      • 5.1.1 Maximum likelihood estimation
    • 5.2 Evaluating goodness of fit
      • 5.2.1 Deviance and likelihood ratio tests
      • 5.2.2 Pseudo-R²s
      • 5.2.3 Hosmer–Lemeshow test
  • 6 Coefficients
    • 6.1 Likelihood ratio test
    • 6.2 Wald statistic
    • 6.3 Case-control sampling
  • 7 Formal mathematical specification
    • 7.1 Setup
    • 7.2 As a generalized linear model
    • 7.3 As a latent-variable model
    • 7.4 As a two-way latent-variable model
      • 7.4.1 Example
    • 7.5 As a "log-linear" model
    • 7.6 As a single-layer perceptron
    • 7.7 In terms of binomial data
  • 8 Bayesian logistic regression
    • 8.1 Gibbs sampling with an approximating distribution
  • 9 Extensions
  • 10 Software
  • 11 See also
  • 12 References
  • 13 Further reading
  • 14 External links


Fields and example applications

Logistic regression is used widely in many fields, including the medical and social sciences. For example, the Trauma and Injury Severity Score (TRISS), which is widely used to predict mortality in injured patients, was originally developed by Boyd et al. using logistic regression.[5] Many other medical scales used to assess severity of a patient have been developed using logistic regression.[6][7][8][9] Logistic regression may be used to predict whether a patient has a given disease (e.g. diabetes; coronary heart disease), based on observed characteristics of the patient (age, sex, body mass index, results of various blood tests, etc.).[1][10] Another example might be to predict whether an American voter will vote Democratic or Republican, based on age, income, sex, race, state of residence, votes in previous elections, etc.[11] The technique can also be used in engineering, especially for predicting the probability of failure of a given process, system or product.[12][13] It is also used in marketing applications such as prediction of a customer's propensity to purchase a product or halt a subscription, etc.[citation needed] In economics it can be used to predict the likelihood of a person's choosing to be in the labor force, and a business application would be to predict the likelihood of a homeowner defaulting on a mortgage. Conditional random fields, an extension of logistic regression to sequential data, are used in natural language processing.

Example: Probability of passing an exam versus hours of study

A group of 20 students spend between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability that the student will pass the exam?

The table shows the number of hours each student spent studying, and whether they passed (1) or failed (0).

Hours  0.50  0.75  1.00  1.25  1.50  1.75  1.75  2.00  2.25  2.50  2.75  3.00  3.25  3.50  4.00  4.25  4.50  4.75  5.00  5.50
Pass   0     0     0     0     0     0     1     0     1     0     1     0     1     0     1     1     1     1     1     1

The graph shows the probability of passing the exam versus the number of hours studying, with the logistic regression curve fitted to the data.

[Figure: graph of a logistic regression curve showing probability of passing an exam versus hours studying]

The logistic regression analysis gives the following output.

            Coefficient   Std. Error   z-value   P-value (Wald)
Intercept   -4.0777       1.7610       -2.316    0.0206
Hours        1.5046       0.6287        2.393    0.0167

The output indicates that hours studying is significantly associated with the probability of passing the exam (p=0.0167, Wald test). The output also provides the coefficients for Intercept = -4.0777 and Hours = 1.5046. These coefficients are entered in the logistic regression equation to estimate the probability of passing the exam:

  • Probability of passing exam =1/(1+exp(-(-4.0777+1.5046* Hours)))

For example, for a student who studies 2 hours, entering the value Hours =2 in the equation gives the estimated probability of passing the exam of p=0.26:

  • Probability of passing exam =1/(1+exp(-(-4.0777+1.5046*2))) = 0.26.

Similarly, for a student who studies 4 hours, the estimated probability of passing the exam is p=0.87:

  • Probability of passing exam =1/(1+exp(-(-4.0777+1.5046*4))) = 0.87.

This table shows the probability of passing the exam for several values of hours studying.

Hours of study   Probability of passing exam
1                0.07
2                0.26
3                0.61
4                0.87
5                0.97
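These calculations are easy to reproduce. Below is a minimal Python sketch (not part of the original analysis) that plugs the coefficients reported in the output above into the logistic function:

```python
import math

# Coefficients reported in the regression output above.
INTERCEPT = -4.0777
HOURS = 1.5046

def pass_probability(hours_studied):
    """Estimated probability of passing: 1 / (1 + exp(-(b0 + b1 * hours)))."""
    t = INTERCEPT + HOURS * hours_studied
    return 1.0 / (1.0 + math.exp(-t))

for h in range(1, 6):
    print(h, round(pass_probability(h), 2))
# Prints 0.07, 0.26, 0.61, 0.87, 0.97 -- the values in the table above.
```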

The output from the logistic regression analysis gives a p-value of p=0.0167, which is based on the Wald z-score. Rather than the Wald method, the recommended method to calculate the p-value for logistic regression is the likelihood ratio test (LRT), which for this data gives p=0.0006.

Basics

Logistic regression can be binomial, ordinal or multinomial. Binomial or binary logistic regression deals with situations in which the observed outcome for a dependent variable can have only two possible types (for example, "dead" vs. "alive" or "win" vs. "loss"). Multinomial logistic regression deals with situations where the outcome can have three or more possible types (e.g., "disease A" vs. "disease B" vs. "disease C") that are not ordered. Ordinal logistic regression deals with dependent variables that are ordered. In binary logistic regression, the outcome is usually coded as "0" or "1", as this leads to the most straightforward interpretation.[14] If a particular observed outcome for the dependent variable is the noteworthy possible outcome (referred to as a "success" or a "case") it is usually coded as "1" and the contrary outcome (referred to as a "failure" or a "noncase") as "0". Logistic regression is used to predict the odds of being a case based on the values of the independent variables (predictors). The odds are defined as the probability that a particular outcome is a case divided by the probability that it is a noncase.

Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting binary dependent variables (treating the dependent variable as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated. In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive). To do that, logistic regression first takes the odds of the event happening for different levels of each independent variable, then takes the ratio of those odds (which is continuous but cannot be negative), and then takes the logarithm of that ratio (this is referred to as the logit or log-odds) to create a continuous criterion as a transformed version of the dependent variable.

Thus the logit transformation is referred to as the link function in logistic regression: although the dependent variable in logistic regression is binomial, the logit is the continuous criterion upon which linear regression is conducted.[14]

The logit of success is then fitted to the predictors using linear regression analysis. The predicted value of the logit is converted back into predicted odds via the inverse of the natural logarithm, namely the exponential function. Thus, although the observed dependent variable in logistic regression is a zero-or-one variable, the logistic regression estimates the odds, as a continuous variable, that the dependent variable is a success (a case). In some applications the odds are all that is needed. In others, a specific yes-or-no prediction is needed for whether the dependent variable is or is not a case; this categorical prediction can be based on the computed odds of a success, with predicted odds above some chosen cutoff value being translated into a prediction of a success.

Latent variable interpretation

The logistic regression can be understood simply as finding the $\beta$ parameters that best fit:

$y = 1$ if $\beta_0 + \beta_1 x + \epsilon > 0$,
$y = 0$ otherwise,

where $\epsilon$ is an error distributed by the standard logistic distribution. (If the standard normal distribution is used instead, it is a probit regression.)

The associated latent variable is $y' = \beta_0 + \beta_1 x + \epsilon$. The error term $\epsilon$ is not observed, and so $y'$ is also unobserved, hence termed "latent". (The observed data are values of $y$ and $x$.) Unlike ordinary regression, however, the $\beta$ parameters cannot be expressed by any direct formula of the $y$ and $x$ values in the observed data. Instead they are to be found by an iterative search process, usually implemented by a software program, that finds the maximum of a complicated "likelihood expression" that is a function of all of the observed $y$ and $x$ values. The estimation approach is explained below.

Logistic function, odds, odds ratio, and logit

Figure 1. The standard logistic function $\sigma(t)$; note that $\sigma(t) \in (0,1)$ for all $t$.

Definition of the logistic function

An explanation of logistic regression can begin with an explanation of the standard logistic function. The logistic function is useful because it can take an input with any value from negative to positive infinity, whereas the output always takes values between zero and one[14] and hence is interpretable as a probability. The logistic function $\sigma(t)$ is defined as follows:

$$\sigma(t) = \frac{e^t}{e^t + 1} = \frac{1}{1 + e^{-t}}$$

A graph of the logistic function on the?t-interval (-6,6) is shown in Figure 1.

Let us assume that $t$ is a linear function of a single explanatory variable $x$ (the case where $t$ is a linear combination of multiple explanatory variables is treated similarly). We can then express $t$ as follows:

$$t = \beta_0 + \beta_1 x$$

And the logistic function can now be written as:

$$F(x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}$$

Note that $F(x)$ is interpreted as the probability of the dependent variable equaling a "success" or "case" rather than a failure or non-case. It is clear that the response variables $Y_i$ are not identically distributed: $P(Y_i = 1 \mid X)$ differs from one data point $X_i$ to another, though they are independent given the design matrix $X$ and shared parameters $\beta$.[1]

Definition of the inverse of the logistic function

We can now define the inverse of the logistic function, $g$, the logit (log odds):

$$g(F(x)) = \ln\left(\frac{F(x)}{1 - F(x)}\right) = \beta_0 + \beta_1 x,$$

and equivalently, after exponentiating both sides:

$$\frac{F(x)}{1 - F(x)} = e^{\beta_0 + \beta_1 x}.$$
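Numerically, the logit and logistic functions undo each other, which is what makes the logit a convenient link. A short sketch (assuming NumPy) checking this inverse relationship:

```python
import numpy as np

def logistic(t):
    """F(t) = 1 / (1 + exp(-t)): maps the real line onto (0, 1)."""
    return 1.0 / (1.0 + np.exp(-t))

def logit(p):
    """g(p) = ln(p / (1 - p)): maps (0, 1) back onto the real line."""
    return np.log(p / (1.0 - p))

t = np.linspace(-6.0, 6.0, 25)
assert np.allclose(logit(logistic(t)), t)  # g(F(t)) recovers t
```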

Interpretation of these terms

In the above equations, the terms are as follows:

  • $g(\cdot)$ refers to the logit function. The equation for $g(F(x))$ illustrates that the logit (i.e., log-odds or natural logarithm of the odds) is equivalent to the linear regression expression.
  • $\ln$ denotes the natural logarithm.
  • $F(x)$ is the probability that the dependent variable equals a case, given some linear combination of the predictors. The formula for $F(x)$ illustrates that the probability of the dependent variable equaling a case is equal to the value of the logistic function of the linear regression expression. This is important in that it shows that the value of the linear regression expression can vary from negative to positive infinity and yet, after transformation, the resulting expression for the probability $F(x)$ ranges between 0 and 1.
  • $\beta_0$ is the intercept from the linear regression equation (the value of the criterion when the predictor is equal to zero).
  • $\beta_1 x$ is the regression coefficient multiplied by some value of the predictor.
  • base $e$ denotes the exponential function.

Definition of the odds

The odds of the dependent variable equaling a case (given some linear combination $x$ of the predictors) is equivalent to the exponential function of the linear regression expression. This illustrates how the logit serves as a link function between the probability and the linear regression expression. Given that the logit ranges between negative and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.[14]

So we define the odds of the dependent variable equaling a case (given some linear combination $x$ of the predictors) as follows:

$$\text{odds} = e^{\beta_0 + \beta_1 x}.$$

Definition of the odds ratio

For a continuous independent variable the odds ratio can be defined as:

$$\mathrm{OR} = \frac{\operatorname{odds}(x+1)}{\operatorname{odds}(x)} = \frac{\frac{F(x+1)}{1-F(x+1)}}{\frac{F(x)}{1-F(x)}} = \frac{e^{\beta_0 + \beta_1(x+1)}}{e^{\beta_0 + \beta_1 x}} = e^{\beta_1}$$

This exponential relationship provides an interpretation for $\beta_1$: the odds multiply by $e^{\beta_1}$ for every 1-unit increase in x.[15]

For a binary independent variable the odds ratio is defined as $\frac{ad}{bc}$, where a, b, c and d are cells in a 2×2 contingency table.[16]
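For a concrete illustration, here is a sketch using the coefficients from the exam example earlier: the odds ratio per additional hour of study is the same no matter the starting point.

```python
import math

b0, b1 = -4.0777, 1.5046  # intercept and Hours coefficient from the exam example

def odds(x):
    """Odds of passing at x hours of study: exp(b0 + b1 * x)."""
    return math.exp(b0 + b1 * x)

for x in (0.5, 2.0, 4.5):
    print(round(odds(x + 1) / odds(x), 3))  # always exp(b1), about 4.502
```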

Multiple explanatory variables

If there are multiple explanatory variables, the above expression $\beta_0 + \beta_1 x$ can be revised to $\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_m x_m$. Then when this is used in the equation relating the logged odds of a success to the values of the predictors, the linear regression will be a multiple regression with m explanators; the parameters $\beta_j$ for all j = 0, 1, 2, ..., m are all estimated.

Model fitting

Estimation

Because the model can be expressed as a generalized linear model (see below), for 0 < p < 1, ordinary least squares can suffice, with R-squared as the measure of goodness of fit in the fitting space. When p = 0 or 1, more complex methods are required.[citation needed]

Maximum likelihood estimation

The regression coefficients are usually estimated using maximum likelihood estimation.[17] Unlike linear regression with normally distributed residuals, it is not possible to find a closed-form expression for the coefficient values that maximize the likelihood function, so that an iterative process must be used instead; for example, Newton's method. This process begins with a tentative solution, revises it slightly to see if it can be improved, and repeats this revision until improvement is minute, at which point the process is said to have converged.[18]
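A compact sketch of this iterative scheme, assuming NumPy, a design matrix X whose first column is all ones, and a 0/1 outcome vector y. This is Newton–Raphson on the log-likelihood (equivalently, iteratively reweighted least squares); a real analysis would use a statistical package, but the mechanics are just these few lines:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25, tol=1e-10):
    """Newton-Raphson maximum likelihood for logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # current predicted probabilities
        grad = X.T @ (y - p)                   # gradient of the log-likelihood
        W = p * (1.0 - p)                      # Bernoulli variances (IRLS weights)
        hess = X.T @ (X * W[:, None])          # negative Hessian
        step = np.linalg.solve(hess, grad)
        beta += step
        if np.max(np.abs(step)) < tol:         # converged: improvement is minute
            break
    return beta

# Check against the hours-vs-pass example from the earlier section:
hours = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50,
                  2.75, 3.00, 3.25, 3.50, 4.00, 4.25, 4.50, 4.75, 5.00, 5.50])
passed = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])
X = np.column_stack([np.ones_like(hours), hours])
print(fit_logistic(X, passed))  # approximately [-4.0777, 1.5046]
```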

In some instances the model may not reach convergence. Nonconvergence of a model indicates that the coefficients are not meaningful because the iterative process was unable to find appropriate solutions. A failure to converge may occur for a number of reasons: having a large ratio of predictors to cases, multicollinearity, sparseness, or complete separation.

  • Having a large ratio of variables to cases results in an overly conservative Wald statistic (discussed below) and can lead to nonconvergence.
  • Multicollinearity refers to unacceptably high correlations between predictors. As multicollinearity increases, coefficients remain unbiased but standard errors increase and the likelihood of model convergence decreases.[17] To detect multicollinearity amongst the predictors, one can conduct a linear regression analysis with the predictors of interest for the sole purpose of examining the tolerance statistic[17] used to assess whether multicollinearity is unacceptably high.
  • Sparseness in the data refers to having a large proportion of empty cells (cells with zero counts). Zero cell counts are particularly problematic with categorical predictors. With continuous predictors, the model can infer values for the zero cell counts, but this is not the case with categorical predictors. The model will not converge with zero cell counts for categorical predictors because the natural logarithm of zero is an undefined value, so that final solutions to the model cannot be reached. To remedy this problem, researchers may collapse categories in a theoretically meaningful way or add a constant to all cells.[17]
  • Another numerical problem that may lead to a lack of convergence is complete separation, which refers to the instance in which the predictors perfectly predict the criterion: all cases are accurately classified. In such instances, one should reexamine the data, as there is likely some kind of error.[14]

As a rule of thumb, logistic regression models require a minimum of about 10 events per explanatory variable (where event denotes the cases belonging to the less frequent category in the dependent variable).[19]

Evaluating goodness of fit

Discrimination in linear regression models is generally measured using R². Since this has no direct analog in logistic regression, various methods,[20]:ch.21 including the following, can be used instead.

Deviance and likelihood ratio tests

In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations – variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. In logistic regression analysis, deviance is used in lieu of sum of squares calculations.[21] Deviance is analogous to the sum of squares calculations in linear regression[14] and is a measure of the lack of fit to the data in a logistic regression model.[21] When a "saturated" model is available (a model with a theoretically perfect fit), deviance is calculated by comparing a given model with the saturated model.[14] This computation gives the likelihood-ratio test:[14]

$$D = -2\ln\frac{\text{likelihood of the fitted model}}{\text{likelihood of the saturated model}}.$$

In the above equation D represents the deviance and ln represents the natural logarithm. The log of this likelihood ratio (the ratio of the fitted model to the saturated model) will produce a negative value, hence the need for a negative sign. D can be shown to follow an approximate chi-squared distribution.[14] Smaller values indicate better fit as the fitted model deviates less from the saturated model. When assessed upon a chi-square distribution, nonsignificant chi-square values indicate very little unexplained variance and thus, good model fit. Conversely, a significant chi-square value indicates that a significant amount of the variance is unexplained.

When the saturated model is not available (a common case), deviance is calculated simply as -2·(log likelihood of the fitted model), and the reference to the saturated model's log likelihood can be removed from all that follows without harm.

Two measures of deviance are particularly important in logistic regression: null deviance and model deviance. The null deviance represents the difference between a model with only the intercept (which means "no predictors") and the saturated model. The model deviance represents the difference between a model with at least one predictor and the saturated model.[21] In this respect, the null model provides a baseline upon which to compare predictor models. Given that deviance is a measure of the difference between a given model and the saturated model, smaller values indicate better fit. Thus, to assess the contribution of a predictor or set of predictors, one can subtract the model deviance from the null deviance and assess the difference on a $\chi^2_{s-p}$ chi-square distribution with degrees of freedom[14] equal to the difference in the number of parameters estimated.

Let

$$\begin{aligned} D_{\text{null}} &= -2\ln\frac{\text{likelihood of null model}}{\text{likelihood of the saturated model}} \\ D_{\text{fitted}} &= -2\ln\frac{\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}. \end{aligned}$$

Then the difference of both is:

$$\begin{aligned} D_{\text{null}} - D_{\text{fitted}} &= -2\left(\ln\frac{\text{likelihood of null model}}{\text{likelihood of the saturated model}} - \ln\frac{\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}\right) \\ &= -2\ln\frac{\left(\frac{\text{likelihood of null model}}{\text{likelihood of the saturated model}}\right)}{\left(\frac{\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}\right)} \\ &= -2\ln\frac{\text{likelihood of the null model}}{\text{likelihood of fitted model}}. \end{aligned}$$

If the model deviance is significantly smaller than the null deviance then one can conclude that the predictor or set of predictors significantly improved model fit. This is analogous to the F-test used in linear regression analysis to assess the significance of prediction.[21]
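A sketch of this comparison for 0/1 data, where the saturated model's log-likelihood is zero so each deviance reduces to −2 times the model's log-likelihood. SciPy is assumed for the chi-square tail probability, and the exam data from earlier serve as the example:

```python
import numpy as np
from scipy.stats import chi2

def bernoulli_loglik(y, p):
    """Log-likelihood of 0/1 outcomes y under fitted probabilities p."""
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def deviance_test(y, p_hat, df=1):
    """Null deviance minus model deviance, referred to chi-square(df)."""
    p_null = np.full_like(p_hat, y.mean())        # intercept-only fit
    stat = -2 * (bernoulli_loglik(y, p_null) - bernoulli_loglik(y, p_hat))
    return stat, chi2.sf(stat, df)

hours = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50,
                  2.75, 3.00, 3.25, 3.50, 4.00, 4.25, 4.50, 4.75, 5.00, 5.50])
passed = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])
p_hat = 1 / (1 + np.exp(-(-4.0777 + 1.5046 * hours)))
print(deviance_test(passed, p_hat))  # p-value ~ 0.0006, the LRT value quoted earlier
```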

Pseudo-R²s

In linear regression the squared multiple correlation, R², is used to assess goodness of fit as it represents the proportion of variance in the criterion that is explained by the predictors.[21] In logistic regression analysis, there is no agreed upon analogous measure, but there are several competing measures each with limitations.[21][22]

Four of the most commonly used indices and one less commonly used one are examined on this page:

  • Likelihood ratio $R^2_{\text{L}}$
  • Cox and Snell $R^2_{\text{CS}}$
  • Nagelkerke $R^2_{\text{N}}$
  • McFadden $R^2_{\text{McF}}$
  • Tjur $R^2_{\text{T}}$

$R^2_{\text{L}}$ is given by[21]

$$R^2_{\text{L}} = \frac{D_{\text{null}} - D_{\text{fitted}}}{D_{\text{null}}}.$$

This is the most analogous index to the squared multiple correlation in linear regression.[17] It represents the proportional reduction in the deviance, wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis.[17] One limitation of the likelihood ratio R² is that it is not monotonically related to the odds ratio,[21] meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases.

$R^2_{\text{CS}}$ is an alternative index of goodness of fit related to the R² value from linear regression.[22] It is given by:

$$R^2_{\text{CS}} = 1 - \left(\frac{L_M}{L_0}\right)^{2/n},$$

where $L_M$ and $L_0$ are the likelihoods for the model being fitted and the null model, respectively. The Cox and Snell index is problematic as its maximum value is $1 - L_0^{2/n}$. The highest this upper bound can be is 0.75, but it can easily be as low as 0.48 when the marginal proportion of cases is small.[22]

$R^2_{\text{N}}$ provides a correction to the Cox and Snell R² so that the maximum value is equal to 1. Nevertheless, the Cox and Snell and likelihood ratio R²s show greater agreement with each other than either does with the Nagelkerke R².[21] Of course, this might not be the case for values exceeding .75 as the Cox and Snell index is capped at this value. The likelihood ratio R² is often preferred to the alternatives as it is most analogous to R² in linear regression, is independent of the base rate (both Cox and Snell and Nagelkerke R²s increase as the proportion of cases increases from 0 to .5) and varies between 0 and 1.

$R^2_{\text{McF}}$ is defined as

$$R^2_{\text{McF}} = 1 - \frac{\ln(L_M)}{\ln(L_0)},$$

and is preferred over $R^2_{\text{CS}}$ by Allison.[22] The two expressions $R^2_{\text{McF}}$ and $R^2_{\text{CS}}$ are then related respectively by:

$$R^2_{\text{CS}} = 1 - \left(\frac{1}{L_0}\right)^{2R^2_{\text{McF}}/n}, \qquad R^2_{\text{McF}} = -\frac{n}{2}\cdot\frac{\ln(1 - R^2_{\text{CS}})}{\ln(L_0)}$$

However, Allison now prefers $R^2_{\text{T}}$, which is a relatively new measure developed by Tjur.[23] It can be calculated in two steps:[22]

  • For each level of the dependent variable, find the mean of the predicted probabilities of an event.
  • Take the absolute value of the difference between these means (see the sketch below).

A word of caution is in order when interpreting pseudo-R² statistics. The reason these indices of fit are referred to as pseudo R² is that they do not represent the proportionate reduction in error as the R² in linear regression does.[21] Linear regression assumes homoscedasticity, that the error variance is the same for all values of the criterion. Logistic regression will always be heteroscedastic: the error variances differ for each value of the predicted score. For each value of the predicted score there would be a different value of the proportionate reduction in error. Therefore, it is inappropriate to think of R² as a proportionate reduction in error in a universal sense in logistic regression.[21]
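Returning to Tjur's measure: the two steps above amount to a one-liner. The sketch below (assuming NumPy, with y the 0/1 outcomes and p_hat the fitted probabilities) computes it alongside the likelihood-based indices discussed above, for comparison.

```python
import numpy as np

def pseudo_r2(y, p_hat):
    """Tjur, McFadden, and Cox-Snell pseudo-R2 for a binary model."""
    # Tjur: absolute difference of mean fitted probability between outcome groups.
    tjur = abs(p_hat[y == 1].mean() - p_hat[y == 0].mean())
    ll_model = np.sum(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))
    p0 = y.mean()                                  # null (intercept-only) fit
    ll_null = np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    mcfadden = 1 - ll_model / ll_null              # 1 - ln(L_M)/ln(L_0)
    cox_snell = 1 - np.exp(2 * (ll_null - ll_model) / len(y))  # 1 - (L_0/L_M)^(2/n)
    return tjur, mcfadden, cox_snell
```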

Hosmer–Lemeshow test

The Hosmer–Lemeshow test uses a test statistic that asymptotically follows a $\chi^2$ distribution to assess whether or not the observed event rates match expected event rates in subgroups of the model population. This test is considered to be obsolete by some statisticians because of its dependence on arbitrary binning of predicted probabilities and relatively low power.[24]

Coefficients

After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor.[21] In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient, the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t-test. In logistic regression, there are several different tests designed to assess the significance of an individual predictor, most notably the likelihood ratio test and the Wald statistic.

Likelihood ratio test

The likelihood-ratio test discussed above to assess model fit is also the recommended procedure to assess the contribution of individual "predictors" to a given model.[14][17][21] In the case of a single predictor model, one simply compares the deviance of the predictor model with that of the null model on a chi-square distribution with a single degree of freedom. If the predictor model has a significantly smaller deviance (cf. chi-square using the difference in degrees of freedom of the two models), then one can conclude that there is a significant association between the "predictor" and the outcome. Although some common statistical packages (e.g. SPSS) do provide likelihood ratio test statistics, without this computationally intensive test it would be more difficult to assess the contribution of individual predictors in the multiple logistic regression case. To assess the contribution of individual predictors one can enter the predictors hierarchically, comparing each new model with the previous to determine the contribution of each predictor.[21] There is some debate among statisticians about the appropriateness of so-called "stepwise" procedures. The fear is that they may not preserve nominal statistical properties and may become misleading.[1]

Wald statistic

Alternatively, when assessing the contribution of individual predictors in a given model, one may examine the significance of the Wald statistic. The Wald statistic, analogous to the t-test in linear regression, is used to assess the significance of coefficients. The Wald statistic is the ratio of the square of the regression coefficient to the square of the standard error of the coefficient and is asymptotically distributed as a chi-square distribution.[17]

$$W_j = \frac{B_j^2}{SE_{B_j}^2}$$

Although several statistical packages (e.g., SPSS, SAS) report the Wald statistic to assess the contribution of individual predictors, the Wald statistic has limitations. When the regression coefficient is large, the standard error of the regression coefficient also tends to be large, increasing the probability of Type-II error. The Wald statistic also tends to be biased when data are sparse.[21]
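A minimal sketch of the Wald computation (SciPy assumed for the chi-square tail area), using the Hours coefficient and standard error reported in the exam example:

```python
from scipy.stats import chi2

def wald_test(coef, se):
    """Wald statistic B^2 / SE^2 and its chi-square(1) p-value."""
    w = (coef / se) ** 2
    return w, chi2.sf(w, df=1)

print(wald_test(1.5046, 0.6287))  # p ~ 0.0167, as in the exam output above
```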

Case-control sampling

Suppose cases are rare. Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals. Thus, we may evaluate more diseased individuals. This is also called unbalanced data. As a rule of thumb, sampling controls at a rate of five times the number of cases will produce sufficient control data.[25]

If we form a logistic model from such data, if the model is correct, the $\beta_j$ parameters are all correct except for $\beta_0$. We can correct $\beta_0$ if we know the true prevalence as follows:[25]

$$\hat{\beta}_0^* = \hat{\beta}_0 + \log\frac{\pi}{1-\pi} - \log\frac{\tilde{\pi}}{1-\tilde{\pi}}$$

where $\pi$ is the true prevalence and $\tilde{\pi}$ is the prevalence in the sample.
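A sketch of this intercept correction in Python; the numbers are hypothetical, chosen only to mirror the rare-disease scenario above (true prevalence 1 in 10,000, with cases oversampled so that they form one sixth of the sample):

```python
import math

def correct_intercept(b0_hat, sample_prev, true_prev):
    """Shift a case-control intercept onto the population scale.

    Slope coefficients need no adjustment; only beta_0 absorbs the
    difference between the sampling rate and the true prevalence.
    """
    logit = lambda q: math.log(q / (1.0 - q))
    return b0_hat + logit(true_prev) - logit(sample_prev)

print(correct_intercept(-2.0, sample_prev=1 / 6, true_prev=1 / 10_000))
```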

Formal mathematical specification

There are various equivalent specifications of logistic regression, which fit into different types of more general models. These different specifications allow for different sorts of useful generalizations.

Setup

The basic setup of logistic regression is the same as for standard linear regression.

It is assumed that we have a series of N observed data points. Each data point i consists of a set of m explanatory variables x1,i ... xm,i (also called independent variables, predictor variables, input variables, features, or attributes), and an associated binary-valued outcome variable Yi (also known as a dependent variable, response variable, output variable, outcome variable or class variable), i.e. it can assume only the two possible values 0 (often meaning "no" or "failure") or 1 (often meaning "yes" or "success"). The goal of logistic regression is to explain the relationship between the explanatory variables and the outcome, so that an outcome can be predicted for a new set of explanatory variables.

Some examples:

  • The observed outcomes are the presence or absence of a given disease (e.g. diabetes) in a set of patients, and the explanatory variables might be characteristics of the patients thought to be pertinent (sex, race, age, blood pressure, body-mass index, etc.).
  • The observed outcomes are the votes (e.g. Democratic or Republican) of a set of people in an election, and the explanatory variables are the demographic characteristics of each person (e.g. sex, race, age, income, etc.). In such a case, one of the two outcomes is arbitrarily coded as 1, and the other as 0.

As in linear regression, the outcome variables Yi are assumed to depend on the explanatory variables x1,i ... xm,i.

Explanatory variables

As shown in the above examples, the explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables (such as income, age and blood pressure) and discrete variables (such as sex or race). Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), that is, separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have that value". For example, a four-way discrete variable of blood type with the possible values "A, B, AB, O" can be converted to four separate two-way dummy variables, "is-A, is-B, is-AB, is-O", where only one of them has the value 1 and all the rest have the value 0. This allows for separate regression coefficients to be matched for each possible value of the discrete variable. (In a case like this, only three of the four dummy variables are independent of each other, in the sense that once the values of three of the variables are known, the fourth is automatically determined. Thus, it is necessary to encode only three of the four possibilities as dummy variables. This also means that when all four possibilities are encoded, the overall model is not identifiable in the absence of additional constraints such as a regularization constraint. Theoretically, this could cause problems, but in reality almost all logistic regression models are fitted with regularization constraints.)

Outcome variables

Formally, the outcomes Yi are described as being Bernoulli-distributed data, where each outcome is determined by an unobserved probability pi that is specific to the outcome at hand, but related to the explanatory variables. This can be expressed in any of the following equivalent forms:

$$\begin{aligned} Y_i \mid x_{1,i},\ldots,x_{m,i}\ &\sim \operatorname{Bernoulli}(p_i) \\ \operatorname{E}[Y_i \mid x_{1,i},\ldots,x_{m,i}] &= p_i \\ \Pr(Y_i = y \mid x_{1,i},\ldots,x_{m,i}) &= \begin{cases} p_i & \text{if } y = 1 \\ 1 - p_i & \text{if } y = 0 \end{cases} \\ \Pr(Y_i = y \mid x_{1,i},\ldots,x_{m,i}) &= p_i^{\,y}(1-p_i)^{1-y} \end{aligned}$$

The meanings of these four lines are:

  • The first line expresses the probability distribution of each Yi: conditioned on the explanatory variables, it follows a Bernoulli distribution with parameter pi, the probability of the outcome of 1 for trial i. As noted above, each separate trial has its own probability of success, just as each trial has its own explanatory variables. The probability of success pi is not observed, only the outcome of an individual Bernoulli trial using that probability.
  • The second line expresses the fact that the expected value of each Yi is equal to the probability of success pi, which is a general property of the Bernoulli distribution. In other words, if we run a large number of Bernoulli trials using the same probability of success pi, then take the average of all the 1 and 0 outcomes, then the result would be close to pi. This is because doing an average this way simply computes the proportion of successes seen, which we expect to converge to the underlying probability of success.
  • The third line writes out the probability mass function of the Bernoulli distribution, specifying the probability of seeing each of the two possible outcomes.
  • The fourth line is another way of writing the probability mass function, which avoids having to write separate cases and is more convenient for certain types of calculations. This relies on the fact that Yi can take only the value 0 or 1. In each case, one of the exponents will be 1, "choosing" the value under it, while the other is 0, "canceling out" the value under it. Hence, the outcome is either pi or 1 − pi, as in the previous line.

Linear predictor function

The basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e. a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials. The linear predictor function $f(i)$ for a particular data point i is written as:

$$f(i) = \beta_0 + \beta_1 x_{1,i} + \cdots + \beta_m x_{m,i},$$

where $\beta_0, \ldots, \beta_m$ are regression coefficients indicating the relative effect of a particular explanatory variable on the outcome.

The model is usually put into a more compact form as follows:

  • The regression coefficients β0, β1, ..., βm are grouped into a single vector β of size m + 1.
  • For each data point i, an additional explanatory pseudo-variable x0,i is added, with a fixed value of 1, corresponding to the intercept coefficient β0.
  • The resulting explanatory variables x0,i, x1,i, ..., xm,i are then grouped into a single vector Xi of size m + 1.

This makes it possible to write the linear predictor function as follows:

$$f(i) = \boldsymbol{\beta} \cdot \mathbf{X}_i,$$

using the notation for a dot product between two vectors.
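In code this compact form is literally a dot product. A tiny sketch (NumPy assumed), reusing the exam-example coefficients for concreteness:

```python
import numpy as np

beta = np.array([-4.0777, 1.5046])   # [beta_0, beta_1] from the exam example
X_i = np.array([1.0, 2.0])           # pseudo-variable x_0 = 1, then hours = 2
print(beta @ X_i)                    # linear predictor f(i) = beta . X_i = -1.0685
```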

As a generalized linear model

The particular model used by logistic regression, which distinguishes it from standard linear regression and from other types of regression analysis used for binary-valued outcomes, is the way the probability of a particular outcome is linked to the linear predictor function:

$$\operatorname{logit}(\operatorname{E}[Y_i \mid x_{1,i},\ldots,x_{m,i}]) = \operatorname{logit}(p_i) = \ln\left(\frac{p_i}{1-p_i}\right) = \beta_0 + \beta_1 x_{1,i} + \cdots + \beta_m x_{m,i}$$

Written using the more compact notation described above, this is:

$$\operatorname{logit}(\operatorname{E}[Y_i \mid \mathbf{X}_i]) = \operatorname{logit}(p_i) = \ln\left(\frac{p_i}{1-p_i}\right) = \boldsymbol{\beta} \cdot \mathbf{X}_i$$

This formulation expresses logistic regression as a type of generalized linear model, which predicts variables with various types of probability distributions by fitting a linear predictor function of the above form to some sort of arbitrary transformation of the expected value of the variable.

The intuition for transforming using the logit function (the natural log of the odds) was explained above. It also has the practical effect of converting the probability (which is bounded to be between 0 and 1) to a variable that ranges over $(-\infty, +\infty)$, thereby matching the potential range of the linear prediction function on the right side of the equation.

Note that both the probabilities pi and the regression coefficients are unobserved, and the means of determining them is not part of the model itself. They are typically determined by some sort of optimization procedure, e.g. maximum likelihood estimation, that finds values that best fit the observed data (i.e. that give the most accurate predictions for the data already observed), usually subject to regularization conditions that seek to exclude unlikely values, e.g. extremely large values for any of the regression coefficients. The use of a regularization condition is equivalent to doing maximum a posteriori (MAP) estimation, an extension of maximum likelihood. (Regularization is most commonly done using a squared regularizing function, which is equivalent to placing a zero-mean Gaussian prior distribution on the coefficients, but other regularizers are also possible.) Whether or not regularization is used, it is usually not possible to find a closed-form solution; instead, an iterative numerical method must be used, such as iteratively reweighted least squares (IRLS) or, more commonly these days, a quasi-Newton method such as the L-BFGS method.

The interpretation of the βj parameter estimates is as the additive effect on the log of the odds for a unit change in the jth explanatory variable. In the case of a dichotomous explanatory variable, for instance gender, $e^{\beta}$ is the estimate of the odds of having the outcome for, say, males compared with females.

An equivalent formula uses the inverse of the logit function, which is the logistic function, i.e.:

$$\operatorname{E}[Y_i \mid \mathbf{X}_i] = p_i = \operatorname{logit}^{-1}(\boldsymbol{\beta} \cdot \mathbf{X}_i) = \frac{1}{1 + e^{-\boldsymbol{\beta} \cdot \mathbf{X}_i}}$$

The formula can also be written as a probability distribution (specifically, using a probability mass function):

$$\Pr(Y_i = y \mid \mathbf{X}_i) = p_i^{\,y}(1-p_i)^{1-y} = \left(\frac{e^{\boldsymbol{\beta}\cdot\mathbf{X}_i}}{1+e^{\boldsymbol{\beta}\cdot\mathbf{X}_i}}\right)^{y}\left(1-\frac{e^{\boldsymbol{\beta}\cdot\mathbf{X}_i}}{1+e^{\boldsymbol{\beta}\cdot\mathbf{X}_i}}\right)^{1-y} = \frac{e^{\boldsymbol{\beta}\cdot\mathbf{X}_i\, y}}{1+e^{\boldsymbol{\beta}\cdot\mathbf{X}_i}}$$

As a latent-variable model

The above model has an equivalent formulation as a latent-variable model. This formulation is common in the theory of discrete choice models, and makes it easier to extend to certain more complicated models with multiple, correlated choices, as well as to compare logistic regression to the closely related probit model.

Imagine that, for each trial i, there is a continuous latent variable Yi* (i.e. an unobserved random variable) that is distributed as follows:

$$Y_i^* = \boldsymbol{\beta} \cdot \mathbf{X}_i + \varepsilon$$

where

$$\varepsilon \sim \operatorname{Logistic}(0,1)$$

i.e. the latent variable can be written directly in terms of the linear predictor function and an additive random error variable that is distributed according to a standard logistic distribution.

Then Yi can be viewed as an indicator for whether this latent variable is positive:

$$Y_i = \begin{cases} 1 & \text{if } Y_i^* > 0, \text{ i.e. } -\varepsilon < \boldsymbol{\beta} \cdot \mathbf{X}_i, \\ 0 & \text{otherwise.} \end{cases}$$
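This latent-variable story can be checked by simulation: drawing logistic errors and thresholding the latent variable at zero reproduces the logistic probability. A sketch assuming NumPy, with arbitrary illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, x = -1.0, 2.0, 0.8                # illustrative values only

eps = rng.logistic(loc=0.0, scale=1.0, size=1_000_000)
y_star = beta0 + beta1 * x + eps                # latent variable Y*
y = (y_star > 0).astype(int)                    # observed indicator Y

p_theory = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
print(y.mean(), p_theory)                       # both ~ 0.6457
```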

The choice of modeling the error variable specifically with a standard logistic distribution, rather than a general logistic distribution with the location and scale set to arbitrary values, seems restrictive, but in fact it is not. It must be kept in mind that we can choose the regression coefficients ourselves, and very often can use them to offset changes in the parameters of the error variable's distribution. For example, a logistic error-variable distribution with a non-zero location parameter μ (which sets the mean) is equivalent to a distribution with a zero location parameter, where μ has been added to the intercept coefficient. Both situations produce the same value for Yi* regardless of settings of explanatory variables. Similarly, an arbitrary scale parameter s is equivalent to setting the scale parameter to 1 and then dividing all regression coefficients by s. In the latter case, the resulting value of Yi* will be smaller by a factor of s than in the former case, for all sets of explanatory variables; but critically, it will always remain on the same side of 0, and hence lead to the same Yi choice.

(Note that this predicts that the irrelevancy of the scale parameter may not carry over into more complex models where more than two choices are available.)

It turns out that this formulation is exactly equivalent to the preceding one, phrased in terms of the generalized linear model and without any latent variables. This can be shown as follows, using the fact that the cumulative distribution function (CDF) of the standard logistic distribution is the logistic function, which is the inverse of the logit function, i.e.

$$\Pr(\varepsilon < x) = \operatorname{logit}^{-1}(x)$$

Then:

$$\begin{aligned} \Pr(Y_i = 1 \mid \mathbf{X}_i) &= \Pr(Y_i^* > 0 \mid \mathbf{X}_i) \\ &= \Pr(\boldsymbol{\beta} \cdot \mathbf{X}_i + \varepsilon > 0) \\ &= \Pr(\varepsilon > -\boldsymbol{\beta} \cdot \mathbf{X}_i) \\ &= \Pr(\varepsilon < \boldsymbol{\beta} \cdot \mathbf{X}_i) && \text{(because the logistic distribution is symmetric)} \\ &= \operatorname{logit}^{-1}(\boldsymbol{\beta} \cdot \mathbf{X}_i) \\ &= p_i && \text{(see above)} \end{aligned}$$

This formulation, which is standard in discrete choice models, makes clear the relationship between logistic regression (the "logit model") and the probit model, which uses an error variable distributed according to a standard normal distribution instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, "bell curve" shape. The only difference is that the logistic distribution has somewhat heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more robust to model mis-specifications or erroneous data).

As a two-way latent-variable model

Yet another formulation uses two separate latent variables:

$$\begin{aligned} Y_i^{0*} &= \boldsymbol{\beta}_0 \cdot \mathbf{X}_i + \varepsilon_0 \\ Y_i^{1*} &= \boldsymbol{\beta}_1 \cdot \mathbf{X}_i + \varepsilon_1 \end{aligned}$$

where

$$\begin{aligned} \varepsilon_0 &\sim \operatorname{EV}_1(0,1) \\ \varepsilon_1 &\sim \operatorname{EV}_1(0,1) \end{aligned}$$

where EV1(0,1) is a standard type-1 extreme value distribution, i.e.

$$\Pr(\varepsilon_0 = x) = \Pr(\varepsilon_1 = x) = e^{-x}e^{-e^{-x}}$$

Then

$$Y_i = \begin{cases} 1 & \text{if } Y_i^{1*} > Y_i^{0*}, \\ 0 & \text{otherwise.} \end{cases}$$

This model has a separate latent variable and a separate set of regression coefficients for each possible outcome of the dependent variable. The reason for this separation is that it makes it easy to extend logistic regression to multi-outcome categorical variables, as in the multinomial logit model. In such a model, it is natural to model each possible outcome using a different set of regression coefficients. It is also possible to motivate each of the separate latent variables as the theoretical utility associated with making the associated choice, and thus motivate logistic regression in terms of utility theory. (In terms of utility theory, a rational actor always chooses the choice with the greatest associated utility.) This is the approach taken by economists when formulating discrete choice models, because it both provides a theoretically strong foundation and facilitates intuitions about the model, which in turn makes it easy to consider various sorts of extensions. (See the example below.)

The choice of the type-1 extreme value distribution seems fairly arbitrary, but it makes the mathematics work out, and it may be possible to justify its use through rational choice theory.

It turns out that this model is equivalent to the previous model, although this seems non-obvious, since there are now two sets of regression coefficients and error variables, and the error variables have a different distribution. In fact, this model reduces directly to the previous one with the following substitutions:

$$\boldsymbol{\beta} = \boldsymbol{\beta}_1 - \boldsymbol{\beta}_0, \qquad \varepsilon = \varepsilon_1 - \varepsilon_0$$

An intuition for this comes from the fact that, since we choose based on the maximum of two values, only their difference matters, not the exact values; this effectively removes one degree of freedom. Another critical fact is that the difference of two type-1 extreme-value-distributed variables is a logistic distribution, i.e. $\varepsilon = \varepsilon_1 - \varepsilon_0 \sim \operatorname{Logistic}(0,1)$.
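This fact is also easy to confirm by simulation: differencing two independent standard type-1 extreme value (Gumbel) samples yields a standard logistic sample. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
eps1 = rng.gumbel(0.0, 1.0, n)       # standard type-1 extreme value draws
eps0 = rng.gumbel(0.0, 1.0, n)
diff = eps1 - eps0                    # claimed to be Logistic(0, 1)

# Empirical CDF of the difference vs. the standard logistic CDF:
for x in (-2.0, 0.0, 2.0):
    print((diff < x).mean(), 1.0 / (1.0 + np.exp(-x)))
```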

We can demonstrate the equivalence as follows:

$$\begin{aligned} \Pr(Y_i = 1 \mid \mathbf{X}_i) &= \Pr(Y_i^{1*} > Y_i^{0*} \mid \mathbf{X}_i) \\ &= \Pr(Y_i^{1*} - Y_i^{0*} > 0 \mid \mathbf{X}_i) \\ &= \Pr(\boldsymbol{\beta}_1 \cdot \mathbf{X}_i + \varepsilon_1 - (\boldsymbol{\beta}_0 \cdot \mathbf{X}_i + \varepsilon_0) > 0) \\ &= \Pr((\boldsymbol{\beta}_1 - \boldsymbol{\beta}_0) \cdot \mathbf{X}_i + (\varepsilon_1 - \varepsilon_0) > 0) \\ &= \Pr((\boldsymbol{\beta}_1 - \boldsymbol{\beta}_0) \cdot \mathbf{X}_i + \varepsilon > 0) && \text{(substitute } \varepsilon \text{ as above)} \\ &= \Pr(\boldsymbol{\beta} \cdot \mathbf{X}_i + \varepsilon > 0) && \text{(substitute } \boldsymbol{\beta} \text{ as above)} \\ &= \Pr(\varepsilon > -\boldsymbol{\beta} \cdot \mathbf{X}_i) && \text{(now, same as the model above)} \\ &= \Pr(\varepsilon < \boldsymbol{\beta} \cdot \mathbf{X}_i) \\ &= \operatorname{logit}^{-1}(\boldsymbol{\beta} \cdot \mathbf{X}_i) \\ &= p_i \end{aligned}$$

Example

As an example, consider a province-level election where the choice is between a right-of-center party, a left-of-center party, and a secessionist party (e.g. the Parti Québécois, which wants Quebec to secede from Canada). We would then use three latent variables, one for each choice. Then, in accordance with utility theory, we can interpret the latent variables as expressing the utility that results from making each of the choices. We can also interpret the regression coefficients as indicating the strength that the associated factor (i.e. explanatory variable) has in contributing to the utility, or more correctly, the amount by which a unit change in an explanatory variable changes the utility of a given choice. A voter might expect that the right-of-center party would lower taxes, especially on rich people. This would give low-income people no benefit, i.e. no change in utility (since they usually don't pay taxes); would cause moderate benefit (i.e. somewhat more money, or moderate utility increase) for middle-income people; and would cause significant benefits for high-income people. On the other hand, the left-of-center party might be expected to raise taxes and offset it with increased welfare and other assistance for the lower and middle classes. This would cause significant positive benefit to low-income people, perhaps weak benefit to middle-income people, and significant negative benefit to high-income people. Finally, the secessionist party would take no direct actions on the economy, but simply secede. A low-income or middle-income voter might expect basically no clear utility gain or loss from this, but a high-income voter might expect negative utility, since he/she is likely to own companies, which will have a harder time doing business in such an environment and probably lose money.

These intuitions can be expressed as follows. Estimated strength of regression coefficient for different outcomes (party choices) and different values of explanatory variables:

                 Center-right   Center-left   Secessionist
  High-income    strong +       strong −      strong −
  Middle-income  moderate +     weak +        none
  Low-income     none           strong +      none

This clearly shows that

  • Separate sets of regression coefficients need to exist for each choice. When phrased in terms of utility, this can be seen very easily. Different choices have different effects on net utility; furthermore, the effects vary in complex ways that depend on the characteristics of each individual, so there need to be separate sets of coefficients for each characteristic, not simply a single extra per-choice characteristic.
  • Even though income is a continuous variable, its effect on utility is too complex for it to be treated as a single variable. Either it needs to be directly split up into ranges, or higher powers of income need to be added so that polynomial regression on income is effectively done.

As a "log-linear" model

Yet another formulation combines the two-way latent variable formulation above with the original formulation higher up without latent variables, and in the process provides a link to one of the standard formulations of the multinomial logit.

Here, instead of writing the logit of the probabilities pi as a linear predictor, we separate the linear predictor into two, one for each of the two outcomes:

$$\begin{aligned} \ln\Pr(Y_i = 0) &= \boldsymbol{\beta}_0 \cdot \mathbf{X}_i - \ln Z \\ \ln\Pr(Y_i = 1) &= \boldsymbol{\beta}_1 \cdot \mathbf{X}_i - \ln Z \end{aligned}$$

Note that two separate sets of regression coefficients have been introduced, just as in the two-way latent variable model, and the two equations take a form that writes the logarithm of the associated probability as a linear predictor, with an extra term $-\ln Z$ at the end. This term, as it turns out, serves as the normalizing factor ensuring that the result is a distribution. This can be seen by exponentiating both sides:

$$\begin{aligned} \Pr(Y_i = 0) &= \frac{1}{Z}e^{\boldsymbol{\beta}_0 \cdot \mathbf{X}_i} \\ \Pr(Y_i = 1) &= \frac{1}{Z}e^{\boldsymbol{\beta}_1 \cdot \mathbf{X}_i} \end{aligned}$$

In this form it is clear that the purpose of Z is to ensure that the resulting distribution over Yi is in fact a probability distribution, i.e. it sums to 1. This means that Z is simply the sum of all un-normalized probabilities, and by dividing each probability by Z, the probabilities become "normalized". That is:

$$Z = e^{\boldsymbol{\beta}_0 \cdot \mathbf{X}_i} + e^{\boldsymbol{\beta}_1 \cdot \mathbf{X}_i}$$

and the resulting equations are

$$\begin{aligned} \Pr(Y_i = 0) &= \frac{e^{\boldsymbol{\beta}_0 \cdot \mathbf{X}_i}}{e^{\boldsymbol{\beta}_0 \cdot \mathbf{X}_i} + e^{\boldsymbol{\beta}_1 \cdot \mathbf{X}_i}} \\ \Pr(Y_i = 1) &= \frac{e^{\boldsymbol{\beta}_1 \cdot \mathbf{X}_i}}{e^{\boldsymbol{\beta}_0 \cdot \mathbf{X}_i} + e^{\boldsymbol{\beta}_1 \cdot \mathbf{X}_i}} \end{aligned}$$

    Or generally:

    {\displaystyle \Pr(Y_{i}=c)={\frac {e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}}{\sum _{h}e^{{\boldsymbol {\beta }}_{h}\cdot \mathbf {X} _{i}}}}}

    This shows clearly how to generalize this formulation to more than two outcomes, as in multinomial logit. Note that this general formulation is exactly the softmax function as in

    {\displaystyle \Pr(Y_{i}=c)=\operatorname {softmax} (c,{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i},{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i},\dots ).}
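    To make the normalization by Z concrete, the following is a minimal NumPy sketch of this two-coefficient-set parameterization (the function name, data, and coefficient values are illustrative assumptions, not taken from any particular library):

        import numpy as np

        def class_probabilities(X, betas):
            """Softmax: Pr(Y=c) = exp(beta_c . x) / sum_h exp(beta_h . x)."""
            scores = X @ betas.T                           # (n, C) linear predictors
            scores -= scores.max(axis=1, keepdims=True)    # stabilize exp() numerically
            expd = np.exp(scores)
            return expd / expd.sum(axis=1, keepdims=True)  # divide by Z

        # Two outcomes, three explanatory variables (made-up numbers)
        X = np.array([[1.0, 0.5, -1.2],
                      [1.0, 2.0,  0.3]])
        betas = np.array([[0.0,  0.0, 0.0],    # beta_0
                          [0.4, -1.1, 0.8]])   # beta_1
        p = class_probabilities(X, betas)
        print(p, p.sum(axis=1))                # each row sums to 1

    Subtracting the row maximum before exponentiating does not change the result (it cancels in the ratio, exactly as the constant vector C does below) but avoids numerical overflow.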

    In order to prove that this is equivalent to the previous model, note that the above model is overspecified, in that {\displaystyle \Pr(Y_{i}=0)} and {\displaystyle \Pr(Y_{i}=1)} cannot be independently specified: rather {\displaystyle \Pr(Y_{i}=0)+\Pr(Y_{i}=1)=1}, so knowing one automatically determines the other. As a result, the model is nonidentifiable, in that multiple combinations of β0 and β1 will produce the same probabilities for all possible explanatory variables. In fact, it can be seen that adding any constant vector to both of them will produce the same probabilities:

    {\displaystyle {\begin{aligned}\Pr(Y_{i}=1)&={\frac {e^{({\boldsymbol {\beta }}_{1}+\mathbf {C} )\cdot \mathbf {X} _{i}}}{e^{({\boldsymbol {\beta }}_{0}+\mathbf {C} )\cdot \mathbf {X} _{i}}+e^{({\boldsymbol {\beta }}_{1}+\mathbf {C} )\cdot \mathbf {X} _{i}}}}\,\\&={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}e^{\mathbf {C} \cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}e^{\mathbf {C} \cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}e^{\mathbf {C} \cdot \mathbf {X} _{i}}}}\,\\&={\frac {e^{\mathbf {C} \cdot \mathbf {X} _{i}}e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{e^{\mathbf {C} \cdot \mathbf {X} _{i}}(e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}})}}\,\\&={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}\,\\\end{aligned}}}

    As a result, we can simplify matters, and restore identifiability, by picking an arbitrary value for one of the two vectors. We choose to set {\displaystyle {\boldsymbol {\beta }}_{0}=\mathbf {0} .} Then,

    {\displaystyle e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}=e^{\mathbf {0} \cdot \mathbf {X} _{i}}=1}

    and so

    {\displaystyle \Pr(Y_{i}=1)={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{1+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}={\frac {1}{1+e^{-{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}=p_{i}}

    which shows that this formulation is indeed equivalent to the previous formulation. (As in the two-way latent variable formulation, any settings where {\displaystyle {\boldsymbol {\beta }}={\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0}} will produce equivalent results.)
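    The nonidentifiability argument above is easy to check numerically. The following sketch (with made-up coefficient values) verifies that shifting both coefficient vectors by an arbitrary C leaves Pr(Yi = 1) unchanged, and that the pivoted form with β = β1 − β0 reproduces the logistic sigmoid:

        import numpy as np

        def softmax_p1(x, b0, b1):
            """Pr(Y=1) under the two-vector parameterization."""
            e0, e1 = np.exp(b0 @ x), np.exp(b1 @ x)
            return e1 / (e0 + e1)

        x  = np.array([1.0, -0.7, 2.3])       # one observation (made up)
        b0 = np.array([0.2, 1.0, -0.5])
        b1 = np.array([-0.3, 0.4, 0.1])
        C  = np.array([5.0, -2.0, 3.3])       # arbitrary constant shift vector

        p       = softmax_p1(x, b0, b1)
        p_shift = softmax_p1(x, b0 + C, b1 + C)
        assert np.isclose(p, p_shift)         # shifting both vectors changes nothing

        beta = b1 - b0                        # pivot on beta_0 = 0
        p_sigmoid = 1.0 / (1.0 + np.exp(-beta @ x))
        assert np.isclose(p, p_sigmoid)       # recovers the logistic sigmoid
        print(p)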

    Note that most treatments of the multinomial logit model start out either by extending the "log-linear" formulation presented here or the two-way latent variable formulation presented above, since both clearly show the way that the model could be extended to multi-way outcomes. In general, the presentation with latent variables is more common in econometrics and political science, where discrete choice models and utility theory reign, while the "log-linear" formulation here is more common in computer science, e.g. machine learning and natural language processing.

    As a single-layer perceptron

    The model has an equivalent formulation

    {\displaystyle p_{i}={\frac {1}{1+e^{-(\beta _{0}+\beta _{1}x_{1,i}+\cdots +\beta _{k}x_{k,i})}}}.\,}

    This functional form is commonly called a single-layer perceptron or single-layer artificial neural network. A single-layer neural network computes a continuous output instead of a step function. The derivative of pi with respect to X = (x1, ..., xk) is computed from the general form:

    {\displaystyle y={\frac {1}{1+e^{-f(X)}}}}

    where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation. This function is also preferred because its derivative is easily calculated:

    {\displaystyle {\frac {\mathrm {d} y}{\mathrm {d} X}}=y(1-y){\frac {\mathrm {d} f}{\mathrm {d} X}}.}
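    As a quick sanity check of this derivative formula, the sketch below (taking f to be a linear predictor with made-up coefficients) compares the closed form y(1 − y)·df/dX against a finite-difference approximation:

        import numpy as np

        beta0, beta = 0.5, np.array([1.2, -0.8])      # made-up coefficients

        def f(x):                  # f(X) taken as a linear predictor for illustration
            return beta0 + beta @ x

        def y(x):                  # logistic output y = 1 / (1 + exp(-f(X)))
            return 1.0 / (1.0 + np.exp(-f(x)))

        x = np.array([0.3, 1.7])
        grad_closed = y(x) * (1.0 - y(x)) * beta      # dy/dX = y(1-y) df/dX; df/dX = beta

        eps = 1e-6                                    # central finite differences
        grad_fd = np.array([(y(x + eps * e) - y(x - eps * e)) / (2 * eps)
                            for e in np.eye(2)])
        print(grad_closed, grad_fd)                   # the two gradients agree to ~1e-9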

    In terms of binomial data

    A closely related model assumes that each i is associated not with a single Bernoulli trial but with ni independent identically distributed trials, where the observation Yi is the number of successes observed (the sum of the individual Bernoulli-distributed random variables), and hence follows a binomial distribution:

    {\displaystyle Y_{i}\ \sim \operatorname {Bin} (n_{i},p_{i}),{\text{ for }}i=1,\dots ,n}

    An example of this distribution is the fraction of seeds (pi) that germinate after ni are planted.

    In terms of expected values, this model is expressed as follows:

    {\displaystyle p_{i}=\mathbb {E} \left[\left.{\frac {Y_{i}}{n_{i}}}\,\right|\,\mathbf {X} _{i}\right],}

    so that

    {\displaystyle \operatorname {logit} \left(\mathbb {E} \left[\left.{\frac {Y_{i}}{n_{i}}}\,\right|\,\mathbf {X} _{i}\right]\right)=\operatorname {logit} (p_{i})=\ln \left({\frac {p_{i}}{1-p_{i}}}\right)={\boldsymbol {\beta }}\cdot \mathbf {X} _{i},}

    Or equivalently:

    {\displaystyle \operatorname {Pr} (Y_{i}=y\mid \mathbf {X} _{i})={n_{i} \choose y}p_{i}^{y}(1-p_{i})^{n_{i}-y}={n_{i} \choose y}\left({\frac {1}{1+e^{-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{y}\left(1-{\frac {1}{1+e^{-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{n_{i}-y}}

    This model can be fit using the same sorts of methods as the above more basic model.
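    For illustration, here is one way such a binomial model could be fit by direct maximum likelihood with NumPy and SciPy; the germination-style data are invented for the example:

        import numpy as np
        from scipy.optimize import minimize

        # Invented germination-style data: n_i seeds planted per batch, Y_i germinated,
        # with an intercept and one covariate (e.g. a dose) per batch
        X = np.column_stack([np.ones(4), [0.0, 1.0, 2.0, 3.0]])
        n = np.array([50, 50, 50, 50])
        Y = np.array([10, 22, 35, 44])

        def neg_log_lik(beta):
            """Negative binomial log-likelihood, up to the constant binomial coefficients."""
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            return -np.sum(Y * np.log(p) + (n - Y) * np.log(1.0 - p))

        fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
        print(fit.x)  # estimated (intercept, slope) on the logit scale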

    Bayesian logistic regression

    (Figure: comparison of the logistic function with a scaled inverse probit function (i.e. the CDF of the normal distribution), comparing {\displaystyle \sigma (x)} vs. {\displaystyle \Phi ({\sqrt {\frac {\pi }{8}}}x)}, which makes the slopes the same at the origin. This shows the heavier tails of the logistic distribution.)

    In a Bayesian statistics context, prior distributions are normally placed on the regression coefficients, usually in the form of Gaussian distributions. Unfortunately, the Gaussian distribution is not the conjugate prior of the likelihood function in logistic regression. As a result, the posterior distribution is difficult to calculate, even using standard simulation algorithms (e.g. Gibbs sampling)[citation needed].

    There are various possibilities:

    • Don't do a proper Bayesian analysis, but simply compute a maximum a posteriori point estimate of the parameters. This is common, for example, in "maximum entropy" classifiers in machine learning.
    • Use a more general approximation method such as the Metropolis–Hastings algorithm.
    • Draw a Markov chain Monte Carlo sample from the exact posterior by using the independent Metropolis–Hastings algorithm with a heavy-tailed multivariate candidate distribution, found by matching the mode and curvature at the mode of the normal approximation to the posterior and then using a Student's t shape with low degrees of freedom.[26] This is shown to have excellent convergence properties.
    • Use a latent variable model and approximate the logistic distribution using a more tractable distribution, e.g. a Student's t-distribution or a mixture of normal distributions.
    • Do probit regression instead of logistic regression. This is actually a special case of the previous situation, using a normal distribution in place of a Student's t, mixture of normals, etc. This will be less accurate but has the advantage that probit regression is extremely common, and a ready-made Bayesian implementation may already be available.
    • Use the Laplace approximation of the posterior distribution.[27] This approximates the posterior with a Gaussian distribution. This is not a terribly good approximation, but it suffices if all that is desired is an estimate of the posterior mean and variance. In such a case, an approximation scheme such as variational Bayes can be used.[28]
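    As an illustration of the last option, the sketch below implements a Laplace approximation for a logistic regression with independent Gaussian priors on the coefficients: Newton's method finds the posterior mode, and the inverse of the negative Hessian there serves as the approximate posterior covariance (the data and prior variance are made up):

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(200), rng.normal(size=200)])
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * X[:, 1]))))
        tau2 = 10.0                        # prior: beta_j ~ N(0, tau2), independent

        beta = np.zeros(2)
        for _ in range(25):                # Newton's method to the posterior mode (MAP)
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            grad = X.T @ (y - p) - beta / tau2                 # gradient of log-posterior
            H = -(X.T * (p * (1 - p))) @ X - np.eye(2) / tau2  # Hessian of log-posterior
            beta -= np.linalg.solve(H, grad)

        cov = np.linalg.inv(-H)            # Laplace: posterior approx. N(beta_MAP, cov)
        print(beta, np.sqrt(np.diag(cov))) # MAP estimate and approximate posterior sds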

    Gibbs sampling with an approximating distribution

    As shown above, logistic regression is equivalent to a latent variable model with an error variable distributed according to a standard logistic distribution. The overall distribution of the latent variable {\displaystyle Y_{i}^{\ast }} is also a logistic distribution, with the mean equal to {\displaystyle {\boldsymbol {\beta }}\cdot \mathbf {X} _{i}} (i.e. the fixed quantity added to the error variable). This model considerably simplifies the application of techniques such as Gibbs sampling. However, sampling the regression coefficients is still difficult, because of the lack of conjugacy between the normal and logistic distributions. Changing the prior distribution over the regression coefficients is of no help, because the logistic distribution is not in the exponential family and thus has no conjugate prior.
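    The latent-variable equivalence is easy to confirm by simulation: thresholding β·Xi plus standard logistic noise at zero reproduces the logistic regression probability (the coefficient values below are arbitrary):

        import numpy as np

        rng = np.random.default_rng(1)
        beta = np.array([0.8, -1.3])          # arbitrary coefficients
        x = np.array([1.0, 0.6])              # one observation (intercept + covariate)

        # Latent variable: Y* = beta . x + eps with eps ~ standard logistic,
        # and Y = 1 whenever Y* > 0
        eps = rng.logistic(size=1_000_000)
        frac = (beta @ x + eps > 0).mean()

        p = 1.0 / (1.0 + np.exp(-beta @ x))   # logistic regression probability
        print(frac, p)                        # empirical frequency matches the sigmoid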

    One possibility is to use a more general Markov chain Monte Carlo technique, such as the Metropolis–Hastings algorithm, which can sample arbitrary distributions. Another possibility, however, is to replace the logistic distribution with a similar-shaped distribution that is easier to work with using Gibbs sampling. In fact, the logistic and normal distributions have a similar shape, and thus one possibility is simply to have normally distributed errors. Because the normal distribution is conjugate to itself, sampling the regression coefficients becomes easy. In fact, this model is exactly the model used in probit regression.

    However, the normal and logistic distributions differ in that the logistic has heavier tails. As a result, it is more robust to inaccuracies in the underlying model (which are inevitable, in that the model is essentially always an approximation) or to errors in the data. Probit regression loses some of this robustness.

    Another alternative is to use errors distributed as a Student's t-distribution. The Student's t-distribution has heavy tails, and is easy to sample from because it is the compound distribution of a normal distribution with variance distributed as an inverse gamma distribution. In other words, if a normal distribution is used for the error variable, and another latent variable, following an inverse gamma distribution, is added corresponding to the variance of this error variable, the marginal distribution of the error variable will follow a Student's t distribution. Because of the various conjugacy relationships, all variables in this model are easy to sample from.

    The Student's t distribution that best approximates a standard logistic distribution can be determined by matching the moments of the two distributions. The Student's t distribution has three parameters, and since the skewness of both distributions is always 0, the first four moments can all be matched, using the following equations:

    {\displaystyle {\begin{aligned}\mu &=0\\{\frac {\nu }{\nu -2}}s^{2}&={\frac {\pi ^{2}}{3}}\\{\frac {6}{\nu -4}}&={\frac {6}{5}}\end{aligned}}}

    This yields the following values:

    {\displaystyle {\begin{aligned}\mu &=0\\s&={\sqrt {{\frac {7}{9}}{\frac {\pi ^{2}}{3}}}}\\\nu &=9\end{aligned}}}

    The following graphs compare the standard logistic distribution with the Student's t distribution that matches the first four moments using the above-determined values, as well as the normal distribution that matches the first two moments. Note how much more closely the Student's t distribution agrees, especially in the tails. Beyond about two standard deviations from the mean, the logistic and normal distributions diverge rapidly, but the logistic and Student's t distributions don't start diverging significantly until more than 5 standard deviations away.
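    The moment-matching solution and the tail behavior described above can be reproduced with SciPy as follows (the tail points, expressed in logistic standard deviations, are chosen just for illustration):

        import numpy as np
        from scipy import stats

        nu = 9.0                                     # from 6/(nu - 4) = 6/5
        s = np.sqrt((nu - 2) / nu * np.pi**2 / 3)    # from nu/(nu - 2) * s^2 = pi^2/3

        sigma = np.pi / np.sqrt(3)                   # sd of the standard logistic
        for z in [2, 3, 5, 7]:                       # tail points, in logistic sds
            x = z * sigma
            print(z,
                  stats.logistic.sf(x),              # standard logistic upper tail
                  stats.t.sf(x / s, nu),             # moment-matched Student's t tail
                  stats.norm.sf(z))                  # normal matched on two moments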

    (Another possibility, also amenable to Gibbs sampling, is to approximate the logistic distribution using a mixture density of normal distributions.)

    (Figures: comparison of the logistic distribution with the approximating Student's t and normal distributions, shown for the central region, the tails, the further tails, and the extreme tails of the distributions.)

    Extensions

    There are many extensions:

    • Multinomial logistic regression (or multinomial logit) handles the case of a multi-way categorical dependent variable (with unordered values, also called "classification"). Note that the general case of having dependent variables with more than two values is termed polytomous regression.
    • Ordered logistic regression (or ordered logit) handles ordinal dependent variables (ordered values).
    • Mixed logit is an extension of multinomial logit that allows for correlations among the choices of the dependent variable.
    • An extension of the logistic model to sets of interdependent variables is the conditional random field.

    Software

    Most statistical software can do binary logistic regression.

    • SAS
      • PROC LOGISTIC for basic logistic regression.[29]
      • PROC CATMOD when all the variables are categorical.[30]
      • PROC GLIMMIX for multilevel model logistic regression.[31]
    • R
      • glm in the stats package (using family = binomial)[32]
      • glmnet package for an efficient implementation of regularized logistic regression
      • glmer in the lme4 package for mixed-effects logistic regression
    • python (see also the scikit-learn sketch after this list)
      • Logistic Regression with ARD prior: code, tutorial
      • Bayesian Logistic Regression with Laplace Approximation: code, tutorial
      • Variational Logistic Regression: code, tutorial
    • NCSS
      • Logistic Regression in NCSS
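
    For a concrete Python starting point, here is a short usage sketch with scikit-learn (an assumption on my part: it is not in the list above, but is widely used; note that its LogisticRegression applies L2 regularization by default):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)
        X = rng.normal(size=(500, 2))
        y = (X @ np.array([1.0, -2.0]) + rng.logistic(size=500) > 0).astype(int)

        model = LogisticRegression().fit(X, y)   # note: L2-regularized by default
        print(model.intercept_, model.coef_)     # estimated beta_0 and beta
        print(model.predict_proba(X[:3]))        # columns: Pr(Y=0), Pr(Y=1)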

    See also

    • Logistic function
    • Discrete choice
    • Jarrow–Turnbull model
    • Limited dependent variable
    • Multinomial logit model
    • Ordered logit
    • Hosmer–Lemeshow test
    • Brier score
    • MLPACK - contains a C++ implementation of logistic regression
    • Local case-control sampling
    • Logistic model tree

    References

  • David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 128.
  • Walker, SH; Duncan, DB (1967). "Estimation of the probability of an event as a function of several independent variables". Biometrika 54: 167–178. doi:10.2307/2333860.
  • Cox, DR (1958). "The regression analysis of binary sequences (with discussion)". J Roy Stat Soc B 20: 215–242.
  • Gareth James; Daniela Witten; Trevor Hastie; Robert Tibshirani (2013). An Introduction to Statistical Learning. Springer. p. 6.
  • Boyd, C. R.; Tolson, M. A.; Copes, W. S. (1987). "Evaluating trauma care: The TRISS method. Trauma Score and the Injury Severity Score". The Journal of Trauma 27 (4): 370–378. doi:10.1097/00005373-198704000-00005. PMID 3106646.
  • Kologlu, M; Elker, D; Altun, H; Sayek, I (2001). "Validation of MPI and OIA II in two different groups of patients with secondary peritonitis". Hepato-Gastroenterology 48 (37): 147–151.
  • Biondo, S; Ramos, E; Deiros, M; et al. (2000). "Prognostic factors for mortality in left colonic peritonitis: a new scoring system". J. Am. Coll. Surg. 191 (6): 635–642.
  • Marshall, JC; Cook, DJ; Christou, NV; et al. (1995). "Multiple Organ Dysfunction Score: A reliable descriptor of a complex clinical outcome". Crit. Care Med. 23: 1638–1652.
  • Le Gall, J-R; Lemeshow, S; Saulnier, F (1993). "A new Simplified Acute Physiology Score (SAPS II) based on a European/North American multicenter study". JAMA 270: 2957–2963.
  • Truett, J; Cornfield, J; Kannel, W (1967). "A multivariate analysis of the risk of coronary heart disease in Framingham". Journal of Chronic Diseases 20 (7): 511–24. doi:10.1016/0021-9681(67)90082-3. PMID 6028270.
  • Harrell, Frank E. (2001). Regression Modeling Strategies. Springer-Verlag. ISBN 0-387-95232-2.
  • Strano, M.; Colosimo, B.M. (2006). "Logistic regression analysis for experimental determination of forming limit diagrams". International Journal of Machine Tools and Manufacture 46 (6): 673–682. doi:10.1016/j.ijmachtools.2005.07.005.
  • Palei, S. K.; Das, S. K. (2009). "Logistic regression model for prediction of roof fall risks in bord and pillar workings in coal mines: An approach". Safety Science 47: 88–96. doi:10.1016/j.ssci.2008.01.002.
  • Hosmer, David W.; Lemeshow, Stanley (2000). Applied Logistic Regression (2nd ed.). Wiley. ISBN 0-471-35632-8.
  • http://www.planta.cn/forum/files_planta/introduction_to_categorical_data_analysis_805.pdf
  • Everitt, Brian (1998). The Cambridge Dictionary of Statistics. Cambridge, UK; New York: Cambridge University Press. ISBN 0521593468.
  • Menard, Scott W. (2002). Applied Logistic Regression (2nd ed.). SAGE. ISBN 978-0-7619-2208-7.
  • Menard, ch. 1.3
  • Peduzzi, P; Concato, J; Kemper, E; Holford, TR; Feinstein, AR (December 1996). "A simulation study of the number of events per variable in logistic regression analysis". Journal of Clinical Epidemiology 49 (12): 1373–9. doi:10.1016/s0895-4356(96)00236-3. PMID 8970487.
  • Greene, William N. (2003). Econometric Analysis (5th ed.). Prentice-Hall. ISBN 0-13-066189-9.
  • Cohen, Jacob; Cohen, Patricia; West, Steven G.; Aiken, Leona S. (2002). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd ed.). Routledge. ISBN 978-0-8058-2223-6.
  • Measures of Fit for Logistic Regression
  • Tjur, Tue (2009). "Coefficients of determination in logistic regression models". American Statistician: 366–372.
  • Hosmer, D.W. (1997). "A comparison of goodness-of-fit tests for the logistic regression model". Stat in Med 16: 965–980. doi:10.1002/(sici)1097-0258(19970515)16:9<965::aid-sim509>3.3.co;2-f.
  • https://class.stanford.edu/c4x/HumanitiesScience/StatLearning/asset/classification.pdf slide 16
  • Bolstad, William M. (2010). Understanding Computational Bayesian Statistics. Wiley. ISBN 978-0-470-04609-8.
  • Bishop, Christopher M. "Chapter 4. Linear Models for Classification". Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC. pp. 217–218. ISBN 978-0387-31073-2.
  • Bishop, Christopher M. "Chapter 10. Approximate Inference". Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC. pp. 498–505. ISBN 978-0387-31073-2.
  • https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#logistic_toc.htm
  • https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_catmod_sect003.htm
  • https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#glimmix_toc.htm
  • Gelman, Andrew; Hill, Jennifer (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. New York: Cambridge University Press. pp. 79–108. ISBN 978-0-521-68689-1.
    Further reading

    • Agresti, Alan (2002). Categorical Data Analysis. New York: Wiley-Interscience. ISBN 0-471-36093-7.
    • Amemiya, Takeshi (1985). "Qualitative Response Models". Advanced Econometrics. Oxford: Basil Blackwell. pp. 267–359. ISBN 0-631-13345-3.
    • Balakrishnan, N. (1991). Handbook of the Logistic Distribution. Marcel Dekker, Inc. ISBN 978-0-8247-8587-1.
    • Gouriéroux, Christian (2000). "The Simple Dichotomy". Econometrics of Qualitative Dependent Variables. New York: Cambridge University Press. pp. 6–37. ISBN 0-521-58985-1.
    • Greene, William H. (2003). Econometric Analysis (5th ed.). Prentice Hall. ISBN 0-13-066189-9.
    • Hilbe, Joseph M. (2009). Logistic Regression Models. Chapman & Hall/CRC Press. ISBN 978-1-4200-7575-5.
    • Hosmer, David (2013). Applied Logistic Regression. Hoboken, New Jersey: Wiley. ISBN 978-0470582473.
    • Howell, David C. (2010). Statistical Methods for Psychology (7th ed.). Belmont, CA: Thomson Wadsworth. ISBN 978-0-495-59786-5.
    • Peduzzi, P.; J. Concato; E. Kemper; T.R. Holford; A.R. Feinstein (1996). "A simulation study of the number of events per variable in logistic regression analysis". Journal of Clinical Epidemiology 49 (12): 1373–1379. doi:10.1016/s0895-4356(96)00236-3. PMID 8970487.

    External links


    • Econometrics Lecture (topic: Logit model) on YouTube by Mark Thoma
    • Logistic Regression Interpretation
    • Logistic Regression tutorial
    • Open source Excel add-in implementation of Logistic Regression

    Reprinted from: https://www.cnblogs.com/davidwang456/articles/5592886.html

