ML | LiR & Lasso: (9→1) regression prediction on the sklearn diabetes dataset using linear regression (LiR) and Lasso, with 3D scatter-plot visualization
Contents

(9→1) regression prediction on the diabetes dataset with LiR and Lasso (3D scatter-plot visualization)
Design approach
Output results
Lasso core code
Related articles

ML | LiR & Lasso: (9→1) regression prediction on the diabetes dataset using LiR and Lasso (3D scatter-plot visualization)
ML | LiR & Lasso: (9→1) regression prediction on the diabetes dataset using LiR and Lasso (3D scatter-plot visualization) — implementation
(9→1) regression prediction on the diabetes dataset with LiR and Lasso (3D scatter-plot visualization)
Design approach
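The original design diagram is not reproduced in this post; the following is a minimal sketch of the pipeline it describes, under stated assumptions: the standard sklearn diabetes dataset, 9 of its 10 features used as inputs (matching the "9→1" in the title), an `alpha=0.1` Lasso, and an 80/20 train/test split — none of these specifics come from the original.

```python
from sklearn import datasets, linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Load the diabetes dataset (442 samples, 10 features)
X, y = datasets.load_diabetes(return_X_y=True)
X = X[:, :9]  # assumption: first 9 features predict 1 target, per "9 -> 1"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit ordinary linear regression (LiR) and L1-regularized Lasso side by side
lir = linear_model.LinearRegression().fit(X_train, y_train)
lasso = linear_model.Lasso(alpha=0.1).fit(X_train, y_train)

print("LiR   test MSE:", mean_squared_error(y_test, lir.predict(X_test)))
print("Lasso test MSE:", mean_squared_error(y_test, lasso.predict(X_test)))
# The L1 penalty tends to drive some coefficients exactly to zero
print("Lasso zeroed coefficients:", int((lasso.coef_ == 0).sum()))
```

Comparing the two coefficient vectors is the point of the exercise: LiR keeps all 9 weights, while Lasso's sparsity shows which features the L1 penalty considers dispensable.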
Output results
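The original output figures did not survive this copy of the post. A hedged sketch of how such a 3D scatter plot could be produced follows; the choice of feature columns 2 and 3 (BMI and blood pressure in the diabetes dataset) as the two plotted axes is an assumption for illustration, not taken from the original.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt
from sklearn import datasets

X, y = datasets.load_diabetes(return_X_y=True)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Assumption: plot feature columns 2 (bmi) and 3 (bp) against the target,
# coloring points by target value
ax.scatter(X[:, 2], X[:, 3], y, c=y, cmap="viridis", s=15)
ax.set_xlabel("bmi")
ax.set_ylabel("bp")
ax.set_zlabel("target")
fig.savefig("diabetes_3d_scatter.png")
```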
Lasso core code
```python
# class Lasso, found at: sklearn.linear_model._coordinate_descent

class Lasso(ElasticNet):
    """Linear Model trained with L1 prior as regularizer (aka the Lasso)

    The optimization objective for Lasso is::

        (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1

    Technically the Lasso model is optimizing the same objective function as
    the Elastic Net with ``l1_ratio=1.0`` (no L2 penalty).

    Read more in the :ref:`User Guide <lasso>`.

    Parameters
    ----------
    alpha : float, default=1.0
        Constant that multiplies the L1 term. Defaults to 1.0.
        ``alpha = 0`` is equivalent to an ordinary least square, solved
        by the :class:`LinearRegression` object. For numerical
        reasons, using ``alpha = 0`` with the ``Lasso`` object is not advised.
        Given this, you should use the :class:`LinearRegression` object.

    fit_intercept : bool, default=True
        Whether to calculate the intercept for this model. If set
        to False, no intercept will be used in calculations
        (i.e. data is expected to be centered).

    normalize : bool, default=False
        This parameter is ignored when ``fit_intercept`` is set to False.
        If True, the regressors X will be normalized before regression by
        subtracting the mean and dividing by the l2-norm.
        If you wish to standardize, please use
        :class:`sklearn.preprocessing.StandardScaler` before calling ``fit``
        on an estimator with ``normalize=False``.

    precompute : 'auto', bool or array-like of shape (n_features, n_features),\
        default=False
        Whether to use a precomputed Gram matrix to speed up
        calculations. If set to ``'auto'`` let us decide. The Gram
        matrix can also be passed as argument. For sparse input
        this option is always ``True`` to preserve sparsity.

    copy_X : bool, default=True
        If ``True``, X will be copied; else, it may be overwritten.

    max_iter : int, default=1000
        The maximum number of iterations

    tol : float, default=1e-4
        The tolerance for the optimization: if the updates are
        smaller than ``tol``, the optimization code checks the
        dual gap for optimality and continues until it is smaller
        than ``tol``.

    warm_start : bool, default=False
        When set to True, reuse the solution of the previous call to fit as
        initialization, otherwise, just erase the previous solution.
        See :term:`the Glossary <warm_start>`.

    positive : bool, default=False
        When set to ``True``, forces the coefficients to be positive.

    random_state : int, RandomState instance, default=None
        The seed of the pseudo random number generator that selects a random
        feature to update. Used when ``selection`` == 'random'.
        Pass an int for reproducible output across multiple function calls.
        See :term:`Glossary <random_state>`.

    selection : {'cyclic', 'random'}, default='cyclic'
        If set to 'random', a random coefficient is updated every iteration
        rather than looping over features sequentially by default. This
        (setting to 'random') often leads to significantly faster convergence
        especially when tol is higher than 1e-4.

    Attributes
    ----------
    coef_ : ndarray of shape (n_features,) or (n_targets, n_features)
        parameter vector (w in the cost function formula)

    sparse_coef_ : sparse matrix of shape (n_features, 1) or \
        (n_targets, n_features)
        ``sparse_coef_`` is a readonly property derived from ``coef_``

    intercept_ : float or ndarray of shape (n_targets,)
        independent term in decision function.

    n_iter_ : int or list of int
        number of iterations run by the coordinate descent solver to reach
        the specified tolerance.

    Examples
    --------
    >>> from sklearn import linear_model
    >>> clf = linear_model.Lasso(alpha=0.1)
    >>> clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
    Lasso(alpha=0.1)
    >>> print(clf.coef_)
    [0.85 0.  ]
    >>> print(clf.intercept_)
    0.15...

    See also
    --------
    lars_path
    lasso_path
    LassoLars
    LassoCV
    LassoLarsCV
    sklearn.decomposition.sparse_encode

    Notes
    -----
    The algorithm used to fit the model is coordinate descent.

    To avoid unnecessary memory duplication the X argument of the fit method
    should be directly passed as a Fortran-contiguous numpy array.
    """
    path = staticmethod(enet_path)

    @_deprecate_positional_args
    def __init__(self, alpha=1.0, *, fit_intercept=True, normalize=False,
                 precompute=False, copy_X=True, max_iter=1000, tol=1e-4,
                 warm_start=False, positive=False, random_state=None,
                 selection='cyclic'):
        super().__init__(
            alpha=alpha, l1_ratio=1.0, fit_intercept=fit_intercept,
            normalize=normalize, precompute=precompute, copy_X=copy_X,
            max_iter=max_iter, tol=tol, warm_start=warm_start,
            positive=positive, random_state=random_state,
            selection=selection)

###############################################################################
# Functions for CV with paths functions
```
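The class source is easier to digest with a small runnable usage example. This one simply mirrors the toy example from the docstring above (three collinear 2-feature points), so the expected coefficients come from sklearn's own documentation rather than from the diabetes experiment:

```python
from sklearn import linear_model

# Fit Lasso on the docstring's toy data: y equals the first feature exactly
clf = linear_model.Lasso(alpha=0.1)
clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])

# The L1 penalty keeps one of the two (identical) features and zeroes the other
print(clf.coef_)       # approximately [0.85, 0.]
print(clf.intercept_)  # approximately 0.15
```

Note how the L1 penalty resolves the redundancy between the two identical columns by assigning all weight to one of them, the sparsity behavior that motivates using Lasso over plain LiR for feature selection.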