[Python Machine Learning] Experiment 04 (1): Multi-class Classification with Logistic Regression in Practice


Multi-class classification in machine learning practice

How do we classify among more than two categories?

The Iris dataset is a classic benchmark for classification experiments, collected and organized by Fisher (1936). Also known as the iris flower dataset, it is a standard multivariate dataset: it contains 150 samples in 3 classes, 50 samples per class, each described by 4 attributes. The task is to predict which of the three species (Setosa, Versicolour, Virginica) a flower belongs to from its sepal length, sepal width, petal length, and petal width.

The data are physical measurements of iris flowers and are widely used in classification work. The dataset consists of 50 samples from each of 3 different species. One species is linearly separable from the other two; the remaining two are not linearly separable from each other.

The dataset contains 4 attributes:
Sepal.Length (sepal length), in cm;
Sepal.Width (sepal width), in cm;
Petal.Length (petal length), in cm;
Petal.Width (petal width), in cm.

Species: Iris Setosa, Iris Versicolour, and Iris Virginica.

1.1 Data preprocessing

import sklearn.datasets as datasets
import pandas as pd
import numpy as np
data=datasets.load_iris()
data
{'data': array([[5.1, 3.5, 1.4, 0.2],
        [4.9, 3. , 1.4, 0.2],
        [4.7, 3.2, 1.3, 0.2],
        [4.6, 3.1, 1.5, 0.2],
        [5. , 3.6, 1.4, 0.2],
        [5.4, 3.9, 1.7, 0.4],
        [4.6, 3.4, 1.4, 0.3],
        [5. , 3.4, 1.5, 0.2],
        [4.4, 2.9, 1.4, 0.2],
        [4.9, 3.1, 1.5, 0.1],
        [5.4, 3.7, 1.5, 0.2],
        [4.8, 3.4, 1.6, 0.2],
        [4.8, 3. , 1.4, 0.1],
        [4.3, 3. , 1.1, 0.1],
        [5.8, 4. , 1.2, 0.2],
        [5.7, 4.4, 1.5, 0.4],
        [5.4, 3.9, 1.3, 0.4],
        [5.1, 3.5, 1.4, 0.3],
        [5.7, 3.8, 1.7, 0.3],
        [5.1, 3.8, 1.5, 0.3],
        [5.4, 3.4, 1.7, 0.2],
        [5.1, 3.7, 1.5, 0.4],
        [4.6, 3.6, 1. , 0.2],
        [5.1, 3.3, 1.7, 0.5],
        [4.8, 3.4, 1.9, 0.2],
        [5. , 3. , 1.6, 0.2],
        [5. , 3.4, 1.6, 0.4],
        [5.2, 3.5, 1.5, 0.2],
        [5.2, 3.4, 1.4, 0.2],
        [4.7, 3.2, 1.6, 0.2],
        [4.8, 3.1, 1.6, 0.2],
        [5.4, 3.4, 1.5, 0.4],
        [5.2, 4.1, 1.5, 0.1],
        [5.5, 4.2, 1.4, 0.2],
        [4.9, 3.1, 1.5, 0.2],
        [5. , 3.2, 1.2, 0.2],
        [5.5, 3.5, 1.3, 0.2],
        [4.9, 3.6, 1.4, 0.1],
        [4.4, 3. , 1.3, 0.2],
        [5.1, 3.4, 1.5, 0.2],
        [5. , 3.5, 1.3, 0.3],
        [4.5, 2.3, 1.3, 0.3],
        [4.4, 3.2, 1.3, 0.2],
        [5. , 3.5, 1.6, 0.6],
        [5.1, 3.8, 1.9, 0.4],
        [4.8, 3. , 1.4, 0.3],
        [5.1, 3.8, 1.6, 0.2],
        [4.6, 3.2, 1.4, 0.2],
        [5.3, 3.7, 1.5, 0.2],
        [5. , 3.3, 1.4, 0.2],
        [7. , 3.2, 4.7, 1.4],
        [6.4, 3.2, 4.5, 1.5],
        [6.9, 3.1, 4.9, 1.5],
        [5.5, 2.3, 4. , 1.3],
        [6.5, 2.8, 4.6, 1.5],
        [5.7, 2.8, 4.5, 1.3],
        [6.3, 3.3, 4.7, 1.6],
        [4.9, 2.4, 3.3, 1. ],
        [6.6, 2.9, 4.6, 1.3],
        [5.2, 2.7, 3.9, 1.4],
        [5. , 2. , 3.5, 1. ],
        [5.9, 3. , 4.2, 1.5],
        [6. , 2.2, 4. , 1. ],
        [6.1, 2.9, 4.7, 1.4],
        [5.6, 2.9, 3.6, 1.3],
        [6.7, 3.1, 4.4, 1.4],
        [5.6, 3. , 4.5, 1.5],
        [5.8, 2.7, 4.1, 1. ],
        [6.2, 2.2, 4.5, 1.5],
        [5.6, 2.5, 3.9, 1.1],
        [5.9, 3.2, 4.8, 1.8],
        [6.1, 2.8, 4. , 1.3],
        [6.3, 2.5, 4.9, 1.5],
        [6.1, 2.8, 4.7, 1.2],
        [6.4, 2.9, 4.3, 1.3],
        [6.6, 3. , 4.4, 1.4],
        [6.8, 2.8, 4.8, 1.4],
        [6.7, 3. , 5. , 1.7],
        [6. , 2.9, 4.5, 1.5],
        [5.7, 2.6, 3.5, 1. ],
        [5.5, 2.4, 3.8, 1.1],
        [5.5, 2.4, 3.7, 1. ],
        [5.8, 2.7, 3.9, 1.2],
        [6. , 2.7, 5.1, 1.6],
        [5.4, 3. , 4.5, 1.5],
        [6. , 3.4, 4.5, 1.6],
        [6.7, 3.1, 4.7, 1.5],
        [6.3, 2.3, 4.4, 1.3],
        [5.6, 3. , 4.1, 1.3],
        [5.5, 2.5, 4. , 1.3],
        [5.5, 2.6, 4.4, 1.2],
        [6.1, 3. , 4.6, 1.4],
        [5.8, 2.6, 4. , 1.2],
        [5. , 2.3, 3.3, 1. ],
        [5.6, 2.7, 4.2, 1.3],
        [5.7, 3. , 4.2, 1.2],
        [5.7, 2.9, 4.2, 1.3],
        [6.2, 2.9, 4.3, 1.3],
        [5.1, 2.5, 3. , 1.1],
        [5.7, 2.8, 4.1, 1.3],
        [6.3, 3.3, 6. , 2.5],
        [5.8, 2.7, 5.1, 1.9],
        [7.1, 3. , 5.9, 2.1],
        [6.3, 2.9, 5.6, 1.8],
        [6.5, 3. , 5.8, 2.2],
        [7.6, 3. , 6.6, 2.1],
        [4.9, 2.5, 4.5, 1.7],
        [7.3, 2.9, 6.3, 1.8],
        [6.7, 2.5, 5.8, 1.8],
        [7.2, 3.6, 6.1, 2.5],
        [6.5, 3.2, 5.1, 2. ],
        [6.4, 2.7, 5.3, 1.9],
        [6.8, 3. , 5.5, 2.1],
        [5.7, 2.5, 5. , 2. ],
        [5.8, 2.8, 5.1, 2.4],
        [6.4, 3.2, 5.3, 2.3],
        [6.5, 3. , 5.5, 1.8],
        [7.7, 3.8, 6.7, 2.2],
        [7.7, 2.6, 6.9, 2.3],
        [6. , 2.2, 5. , 1.5],
        [6.9, 3.2, 5.7, 2.3],
        [5.6, 2.8, 4.9, 2. ],
        [7.7, 2.8, 6.7, 2. ],
        [6.3, 2.7, 4.9, 1.8],
        [6.7, 3.3, 5.7, 2.1],
        [7.2, 3.2, 6. , 1.8],
        [6.2, 2.8, 4.8, 1.8],
        [6.1, 3. , 4.9, 1.8],
        [6.4, 2.8, 5.6, 2.1],
        [7.2, 3. , 5.8, 1.6],
        [7.4, 2.8, 6.1, 1.9],
        [7.9, 3.8, 6.4, 2. ],
        [6.4, 2.8, 5.6, 2.2],
        [6.3, 2.8, 5.1, 1.5],
        [6.1, 2.6, 5.6, 1.4],
        [7.7, 3. , 6.1, 2.3],
        [6.3, 3.4, 5.6, 2.4],
        [6.4, 3.1, 5.5, 1.8],
        [6. , 3. , 4.8, 1.8],
        [6.9, 3.1, 5.4, 2.1],
        [6.7, 3.1, 5.6, 2.4],
        [6.9, 3.1, 5.1, 2.3],
        [5.8, 2.7, 5.1, 1.9],
        [6.8, 3.2, 5.9, 2.3],
        [6.7, 3.3, 5.7, 2.5],
        [6.7, 3. , 5.2, 2.3],
        [6.3, 2.5, 5. , 1.9],
        [6.5, 3. , 5.2, 2. ],
        [6.2, 3.4, 5.4, 2.3],
        [5.9, 3. , 5.1, 1.8]]),
 'target': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]),
 'frame': None,
 'target_names': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'),
 'DESCR': '.. _iris_dataset:\n\nIris plants dataset\n--------------------\n\n**Data Set Characteristics:**\n\n    :Number of Instances: 150 (50 in each of three classes)\n    :Number of Attributes: 4 numeric, predictive attributes and the class\n    :Attribute Information:\n        - sepal length in cm\n        - sepal width in cm\n        - petal length in cm\n        - petal width in cm\n        - class:\n                - Iris-Setosa\n                - Iris-Versicolour\n                - Iris-Virginica\n                \n    :Summary Statistics:\n\n    ============== ==== ==== ======= ===== ====================\n                    Min  Max   Mean    SD   Class Correlation\n    ============== ==== ==== ======= ===== ====================\n    sepal length:   4.3  7.9   5.84   0.83    0.7826\n    sepal width:    2.0  4.4   3.05   0.43   -0.4194\n    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)\n    petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)\n    ============== ==== ==== ======= ===== ====================\n\n    :Missing Attribute Values: None\n    :Class Distribution: 33.3% for each of 3 classes.\n    :Creator: R.A. Fisher\n    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)\n    :Date: July, 1988\n\nThe famous Iris database, first used by Sir R.A. Fisher. The dataset is taken\nfrom Fisher\'s paper. Note that it\'s the same as in R, but not as in the UCI\nMachine Learning Repository, which has two wrong data points.\n\nThis is perhaps the best known database to be found in the\npattern recognition literature.  Fisher\'s paper is a classic in the field and\nis referenced frequently to this day.  (See Duda & Hart, for example.)  The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant.  One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\n.. topic:: References\n\n   - Fisher, R.A. "The use of multiple measurements in taxonomic problems"\n     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to\n     Mathematical Statistics" (John Wiley, NY, 1950).\n   - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.\n     (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.\n   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System\n     Structure and Classification Rule for Recognition in Partially Exposed\n     Environments".  IEEE Transactions on Pattern Analysis and Machine\n     Intelligence, Vol. PAMI-2, No. 1, 67-71.\n   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions\n     on Information Theory, May 1972, 431-433.\n   - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II\n     conceptual clustering system finds 3 classes in the data.\n   - Many, many more ...',
 'feature_names': ['sepal length (cm)',
  'sepal width (cm)',
  'petal length (cm)',
  'petal width (cm)'],
 'filename': 'iris.csv',
 'data_module': 'sklearn.datasets.data'}
data_x=data["data"]
data_y=data["target"]
data_x.shape,data_y.shape
((150, 4), (150,))
data_y=data_y.reshape([len(data_y),1])
data_y
array([[0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2]])
# Method 1: concatenate the label column onto the features
data=np.hstack([data_x,data_y])
# Method 2: insert the labels as the last column. Note that data_y must be
# flattened to 1-D here; passing the (150,1) array makes np.insert broadcast
# it into 150 extra columns instead of one.
np.insert(data_x,data_x.shape[1],data_y.ravel(),axis=1)[:3]
array([[5.1, 3.5, 1.4, 0.2, 0. ],
       [4.9, 3. , 1.4, 0.2, 0. ],
       [4.7, 3.2, 1.3, 0.2, 0. ]])
data=pd.DataFrame(data,columns=["F1","F2","F3","F4","target"])
data
F1 F2 F3 F4 target
0 5.1 3.5 1.4 0.2 0.0
1 4.9 3.0 1.4 0.2 0.0
2 4.7 3.2 1.3 0.2 0.0
3 4.6 3.1 1.5 0.2 0.0
4 5.0 3.6 1.4 0.2 0.0
... ... ... ... ... ...
145 6.7 3.0 5.2 2.3 2.0
146 6.3 2.5 5.0 1.9 2.0
147 6.5 3.0 5.2 2.0 2.0
148 6.2 3.4 5.4 2.3 2.0
149 5.9 3.0 5.1 1.8 2.0

150 rows × 5 columns

data.insert(0,"ones",1)
data
ones F1 F2 F3 F4 target
0 1 5.1 3.5 1.4 0.2 0.0
1 1 4.9 3.0 1.4 0.2 0.0
2 1 4.7 3.2 1.3 0.2 0.0
3 1 4.6 3.1 1.5 0.2 0.0
4 1 5.0 3.6 1.4 0.2 0.0
... ... ... ... ... ... ...
145 1 6.7 3.0 5.2 2.3 2.0
146 1 6.3 2.5 5.0 1.9 2.0
147 1 6.5 3.0 5.2 2.0 2.0
148 1 6.2 3.4 5.4 2.3 2.0
149 1 5.9 3.0 5.1 1.8 2.0

150 rows × 6 columns

data["target"]=data["target"].astype("int32")
data
ones F1 F2 F3 F4 target
0 1 5.1 3.5 1.4 0.2 0
1 1 4.9 3.0 1.4 0.2 0
2 1 4.7 3.2 1.3 0.2 0
3 1 4.6 3.1 1.5 0.2 0
4 1 5.0 3.6 1.4 0.2 0
... ... ... ... ... ... ...
145 1 6.7 3.0 5.2 2.3 2
146 1 6.3 2.5 5.0 1.9 2
147 1 6.5 3.0 5.2 2.0 2
148 1 6.2 3.4 5.4 2.3 2
149 1 5.9 3.0 5.1 1.8 2

150 rows × 6 columns

1.2 Preparing the training data

data_x
array([[5.1, 3.5, 1.4, 0.2],
       [4.9, 3. , 1.4, 0.2],
       [4.7, 3.2, 1.3, 0.2],
       [4.6, 3.1, 1.5, 0.2],
       [5. , 3.6, 1.4, 0.2],
       [5.4, 3.9, 1.7, 0.4],
       [4.6, 3.4, 1.4, 0.3],
       [5. , 3.4, 1.5, 0.2],
       [4.4, 2.9, 1.4, 0.2],
       [4.9, 3.1, 1.5, 0.1],
       [5.4, 3.7, 1.5, 0.2],
       [4.8, 3.4, 1.6, 0.2],
       [4.8, 3. , 1.4, 0.1],
       [4.3, 3. , 1.1, 0.1],
       [5.8, 4. , 1.2, 0.2],
       [5.7, 4.4, 1.5, 0.4],
       [5.4, 3.9, 1.3, 0.4],
       [5.1, 3.5, 1.4, 0.3],
       [5.7, 3.8, 1.7, 0.3],
       [5.1, 3.8, 1.5, 0.3],
       [5.4, 3.4, 1.7, 0.2],
       [5.1, 3.7, 1.5, 0.4],
       [4.6, 3.6, 1. , 0.2],
       [5.1, 3.3, 1.7, 0.5],
       [4.8, 3.4, 1.9, 0.2],
       [5. , 3. , 1.6, 0.2],
       [5. , 3.4, 1.6, 0.4],
       [5.2, 3.5, 1.5, 0.2],
       [5.2, 3.4, 1.4, 0.2],
       [4.7, 3.2, 1.6, 0.2],
       [4.8, 3.1, 1.6, 0.2],
       [5.4, 3.4, 1.5, 0.4],
       [5.2, 4.1, 1.5, 0.1],
       [5.5, 4.2, 1.4, 0.2],
       [4.9, 3.1, 1.5, 0.2],
       [5. , 3.2, 1.2, 0.2],
       [5.5, 3.5, 1.3, 0.2],
       [4.9, 3.6, 1.4, 0.1],
       [4.4, 3. , 1.3, 0.2],
       [5.1, 3.4, 1.5, 0.2],
       [5. , 3.5, 1.3, 0.3],
       [4.5, 2.3, 1.3, 0.3],
       [4.4, 3.2, 1.3, 0.2],
       [5. , 3.5, 1.6, 0.6],
       [5.1, 3.8, 1.9, 0.4],
       [4.8, 3. , 1.4, 0.3],
       [5.1, 3.8, 1.6, 0.2],
       [4.6, 3.2, 1.4, 0.2],
       [5.3, 3.7, 1.5, 0.2],
       [5. , 3.3, 1.4, 0.2],
       [7. , 3.2, 4.7, 1.4],
       [6.4, 3.2, 4.5, 1.5],
       [6.9, 3.1, 4.9, 1.5],
       [5.5, 2.3, 4. , 1.3],
       [6.5, 2.8, 4.6, 1.5],
       [5.7, 2.8, 4.5, 1.3],
       [6.3, 3.3, 4.7, 1.6],
       [4.9, 2.4, 3.3, 1. ],
       [6.6, 2.9, 4.6, 1.3],
       [5.2, 2.7, 3.9, 1.4],
       [5. , 2. , 3.5, 1. ],
       [5.9, 3. , 4.2, 1.5],
       [6. , 2.2, 4. , 1. ],
       [6.1, 2.9, 4.7, 1.4],
       [5.6, 2.9, 3.6, 1.3],
       [6.7, 3.1, 4.4, 1.4],
       [5.6, 3. , 4.5, 1.5],
       [5.8, 2.7, 4.1, 1. ],
       [6.2, 2.2, 4.5, 1.5],
       [5.6, 2.5, 3.9, 1.1],
       [5.9, 3.2, 4.8, 1.8],
       [6.1, 2.8, 4. , 1.3],
       [6.3, 2.5, 4.9, 1.5],
       [6.1, 2.8, 4.7, 1.2],
       [6.4, 2.9, 4.3, 1.3],
       [6.6, 3. , 4.4, 1.4],
       [6.8, 2.8, 4.8, 1.4],
       [6.7, 3. , 5. , 1.7],
       [6. , 2.9, 4.5, 1.5],
       [5.7, 2.6, 3.5, 1. ],
       [5.5, 2.4, 3.8, 1.1],
       [5.5, 2.4, 3.7, 1. ],
       [5.8, 2.7, 3.9, 1.2],
       [6. , 2.7, 5.1, 1.6],
       [5.4, 3. , 4.5, 1.5],
       [6. , 3.4, 4.5, 1.6],
       [6.7, 3.1, 4.7, 1.5],
       [6.3, 2.3, 4.4, 1.3],
       [5.6, 3. , 4.1, 1.3],
       [5.5, 2.5, 4. , 1.3],
       [5.5, 2.6, 4.4, 1.2],
       [6.1, 3. , 4.6, 1.4],
       [5.8, 2.6, 4. , 1.2],
       [5. , 2.3, 3.3, 1. ],
       [5.6, 2.7, 4.2, 1.3],
       [5.7, 3. , 4.2, 1.2],
       [5.7, 2.9, 4.2, 1.3],
       [6.2, 2.9, 4.3, 1.3],
       [5.1, 2.5, 3. , 1.1],
       [5.7, 2.8, 4.1, 1.3],
       [6.3, 3.3, 6. , 2.5],
       [5.8, 2.7, 5.1, 1.9],
       [7.1, 3. , 5.9, 2.1],
       [6.3, 2.9, 5.6, 1.8],
       [6.5, 3. , 5.8, 2.2],
       [7.6, 3. , 6.6, 2.1],
       [4.9, 2.5, 4.5, 1.7],
       [7.3, 2.9, 6.3, 1.8],
       [6.7, 2.5, 5.8, 1.8],
       [7.2, 3.6, 6.1, 2.5],
       [6.5, 3.2, 5.1, 2. ],
       [6.4, 2.7, 5.3, 1.9],
       [6.8, 3. , 5.5, 2.1],
       [5.7, 2.5, 5. , 2. ],
       [5.8, 2.8, 5.1, 2.4],
       [6.4, 3.2, 5.3, 2.3],
       [6.5, 3. , 5.5, 1.8],
       [7.7, 3.8, 6.7, 2.2],
       [7.7, 2.6, 6.9, 2.3],
       [6. , 2.2, 5. , 1.5],
       [6.9, 3.2, 5.7, 2.3],
       [5.6, 2.8, 4.9, 2. ],
       [7.7, 2.8, 6.7, 2. ],
       [6.3, 2.7, 4.9, 1.8],
       [6.7, 3.3, 5.7, 2.1],
       [7.2, 3.2, 6. , 1.8],
       [6.2, 2.8, 4.8, 1.8],
       [6.1, 3. , 4.9, 1.8],
       [6.4, 2.8, 5.6, 2.1],
       [7.2, 3. , 5.8, 1.6],
       [7.4, 2.8, 6.1, 1.9],
       [7.9, 3.8, 6.4, 2. ],
       [6.4, 2.8, 5.6, 2.2],
       [6.3, 2.8, 5.1, 1.5],
       [6.1, 2.6, 5.6, 1.4],
       [7.7, 3. , 6.1, 2.3],
       [6.3, 3.4, 5.6, 2.4],
       [6.4, 3.1, 5.5, 1.8],
       [6. , 3. , 4.8, 1.8],
       [6.9, 3.1, 5.4, 2.1],
       [6.7, 3.1, 5.6, 2.4],
       [6.9, 3.1, 5.1, 2.3],
       [5.8, 2.7, 5.1, 1.9],
       [6.8, 3.2, 5.9, 2.3],
       [6.7, 3.3, 5.7, 2.5],
       [6.7, 3. , 5.2, 2.3],
       [6.3, 2.5, 5. , 1.9],
       [6.5, 3. , 5.2, 2. ],
       [6.2, 3.4, 5.4, 2.3],
       [5.9, 3. , 5.1, 1.8]])
data_x=np.insert(data_x,0,1,axis=1)
data_x.shape,data_y.shape
((150, 5), (150, 1))
# Features and labels of the training data
data_x,data_y
(array([[1. , 5.1, 3.5, 1.4, 0.2],
        [1. , 4.9, 3. , 1.4, 0.2],
        [1. , 4.7, 3.2, 1.3, 0.2],
        [1. , 4.6, 3.1, 1.5, 0.2],
        [1. , 5. , 3.6, 1.4, 0.2],
        [1. , 5.4, 3.9, 1.7, 0.4],
        [1. , 4.6, 3.4, 1.4, 0.3],
        [1. , 5. , 3.4, 1.5, 0.2],
        [1. , 4.4, 2.9, 1.4, 0.2],
        [1. , 4.9, 3.1, 1.5, 0.1],
        [1. , 5.4, 3.7, 1.5, 0.2],
        [1. , 4.8, 3.4, 1.6, 0.2],
        [1. , 4.8, 3. , 1.4, 0.1],
        [1. , 4.3, 3. , 1.1, 0.1],
        [1. , 5.8, 4. , 1.2, 0.2],
        [1. , 5.7, 4.4, 1.5, 0.4],
        [1. , 5.4, 3.9, 1.3, 0.4],
        [1. , 5.1, 3.5, 1.4, 0.3],
        [1. , 5.7, 3.8, 1.7, 0.3],
        [1. , 5.1, 3.8, 1.5, 0.3],
        [1. , 5.4, 3.4, 1.7, 0.2],
        [1. , 5.1, 3.7, 1.5, 0.4],
        [1. , 4.6, 3.6, 1. , 0.2],
        [1. , 5.1, 3.3, 1.7, 0.5],
        [1. , 4.8, 3.4, 1.9, 0.2],
        [1. , 5. , 3. , 1.6, 0.2],
        [1. , 5. , 3.4, 1.6, 0.4],
        [1. , 5.2, 3.5, 1.5, 0.2],
        [1. , 5.2, 3.4, 1.4, 0.2],
        [1. , 4.7, 3.2, 1.6, 0.2],
        [1. , 4.8, 3.1, 1.6, 0.2],
        [1. , 5.4, 3.4, 1.5, 0.4],
        [1. , 5.2, 4.1, 1.5, 0.1],
        [1. , 5.5, 4.2, 1.4, 0.2],
        [1. , 4.9, 3.1, 1.5, 0.2],
        [1. , 5. , 3.2, 1.2, 0.2],
        [1. , 5.5, 3.5, 1.3, 0.2],
        [1. , 4.9, 3.6, 1.4, 0.1],
        [1. , 4.4, 3. , 1.3, 0.2],
        [1. , 5.1, 3.4, 1.5, 0.2],
        [1. , 5. , 3.5, 1.3, 0.3],
        [1. , 4.5, 2.3, 1.3, 0.3],
        [1. , 4.4, 3.2, 1.3, 0.2],
        [1. , 5. , 3.5, 1.6, 0.6],
        [1. , 5.1, 3.8, 1.9, 0.4],
        [1. , 4.8, 3. , 1.4, 0.3],
        [1. , 5.1, 3.8, 1.6, 0.2],
        [1. , 4.6, 3.2, 1.4, 0.2],
        [1. , 5.3, 3.7, 1.5, 0.2],
        [1. , 5. , 3.3, 1.4, 0.2],
        [1. , 7. , 3.2, 4.7, 1.4],
        [1. , 6.4, 3.2, 4.5, 1.5],
        [1. , 6.9, 3.1, 4.9, 1.5],
        [1. , 5.5, 2.3, 4. , 1.3],
        [1. , 6.5, 2.8, 4.6, 1.5],
        [1. , 5.7, 2.8, 4.5, 1.3],
        [1. , 6.3, 3.3, 4.7, 1.6],
        [1. , 4.9, 2.4, 3.3, 1. ],
        [1. , 6.6, 2.9, 4.6, 1.3],
        [1. , 5.2, 2.7, 3.9, 1.4],
        [1. , 5. , 2. , 3.5, 1. ],
        [1. , 5.9, 3. , 4.2, 1.5],
        [1. , 6. , 2.2, 4. , 1. ],
        [1. , 6.1, 2.9, 4.7, 1.4],
        [1. , 5.6, 2.9, 3.6, 1.3],
        [1. , 6.7, 3.1, 4.4, 1.4],
        [1. , 5.6, 3. , 4.5, 1.5],
        [1. , 5.8, 2.7, 4.1, 1. ],
        [1. , 6.2, 2.2, 4.5, 1.5],
        [1. , 5.6, 2.5, 3.9, 1.1],
        [1. , 5.9, 3.2, 4.8, 1.8],
        [1. , 6.1, 2.8, 4. , 1.3],
        [1. , 6.3, 2.5, 4.9, 1.5],
        [1. , 6.1, 2.8, 4.7, 1.2],
        [1. , 6.4, 2.9, 4.3, 1.3],
        [1. , 6.6, 3. , 4.4, 1.4],
        [1. , 6.8, 2.8, 4.8, 1.4],
        [1. , 6.7, 3. , 5. , 1.7],
        [1. , 6. , 2.9, 4.5, 1.5],
        [1. , 5.7, 2.6, 3.5, 1. ],
        [1. , 5.5, 2.4, 3.8, 1.1],
        [1. , 5.5, 2.4, 3.7, 1. ],
        [1. , 5.8, 2.7, 3.9, 1.2],
        [1. , 6. , 2.7, 5.1, 1.6],
        [1. , 5.4, 3. , 4.5, 1.5],
        [1. , 6. , 3.4, 4.5, 1.6],
        [1. , 6.7, 3.1, 4.7, 1.5],
        [1. , 6.3, 2.3, 4.4, 1.3],
        [1. , 5.6, 3. , 4.1, 1.3],
        [1. , 5.5, 2.5, 4. , 1.3],
        [1. , 5.5, 2.6, 4.4, 1.2],
        [1. , 6.1, 3. , 4.6, 1.4],
        [1. , 5.8, 2.6, 4. , 1.2],
        [1. , 5. , 2.3, 3.3, 1. ],
        [1. , 5.6, 2.7, 4.2, 1.3],
        [1. , 5.7, 3. , 4.2, 1.2],
        [1. , 5.7, 2.9, 4.2, 1.3],
        [1. , 6.2, 2.9, 4.3, 1.3],
        [1. , 5.1, 2.5, 3. , 1.1],
        [1. , 5.7, 2.8, 4.1, 1.3],
        [1. , 6.3, 3.3, 6. , 2.5],
        [1. , 5.8, 2.7, 5.1, 1.9],
        [1. , 7.1, 3. , 5.9, 2.1],
        [1. , 6.3, 2.9, 5.6, 1.8],
        [1. , 6.5, 3. , 5.8, 2.2],
        [1. , 7.6, 3. , 6.6, 2.1],
        [1. , 4.9, 2.5, 4.5, 1.7],
        [1. , 7.3, 2.9, 6.3, 1.8],
        [1. , 6.7, 2.5, 5.8, 1.8],
        [1. , 7.2, 3.6, 6.1, 2.5],
        [1. , 6.5, 3.2, 5.1, 2. ],
        [1. , 6.4, 2.7, 5.3, 1.9],
        [1. , 6.8, 3. , 5.5, 2.1],
        [1. , 5.7, 2.5, 5. , 2. ],
        [1. , 5.8, 2.8, 5.1, 2.4],
        [1. , 6.4, 3.2, 5.3, 2.3],
        [1. , 6.5, 3. , 5.5, 1.8],
        [1. , 7.7, 3.8, 6.7, 2.2],
        [1. , 7.7, 2.6, 6.9, 2.3],
        [1. , 6. , 2.2, 5. , 1.5],
        [1. , 6.9, 3.2, 5.7, 2.3],
        [1. , 5.6, 2.8, 4.9, 2. ],
        [1. , 7.7, 2.8, 6.7, 2. ],
        [1. , 6.3, 2.7, 4.9, 1.8],
        [1. , 6.7, 3.3, 5.7, 2.1],
        [1. , 7.2, 3.2, 6. , 1.8],
        [1. , 6.2, 2.8, 4.8, 1.8],
        [1. , 6.1, 3. , 4.9, 1.8],
        [1. , 6.4, 2.8, 5.6, 2.1],
        [1. , 7.2, 3. , 5.8, 1.6],
        [1. , 7.4, 2.8, 6.1, 1.9],
        [1. , 7.9, 3.8, 6.4, 2. ],
        [1. , 6.4, 2.8, 5.6, 2.2],
        [1. , 6.3, 2.8, 5.1, 1.5],
        [1. , 6.1, 2.6, 5.6, 1.4],
        [1. , 7.7, 3. , 6.1, 2.3],
        [1. , 6.3, 3.4, 5.6, 2.4],
        [1. , 6.4, 3.1, 5.5, 1.8],
        [1. , 6. , 3. , 4.8, 1.8],
        [1. , 6.9, 3.1, 5.4, 2.1],
        [1. , 6.7, 3.1, 5.6, 2.4],
        [1. , 6.9, 3.1, 5.1, 2.3],
        [1. , 5.8, 2.7, 5.1, 1.9],
        [1. , 6.8, 3.2, 5.9, 2.3],
        [1. , 6.7, 3.3, 5.7, 2.5],
        [1. , 6.7, 3. , 5.2, 2.3],
        [1. , 6.3, 2.5, 5. , 1.9],
        [1. , 6.5, 3. , 5.2, 2. ],
        [1. , 6.2, 3.4, 5.4, 2.3],
        [1. , 5.9, 3. , 5.1, 1.8]]),
 array([[0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [0],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [1],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2],
        [2]]))

Since there are three classes, the labels have to be re-coded separately for each classifier at training time: each one-vs-rest model treats its own class as 1 and the other two as 0. The three explicit copies follow; a compact alternative is sketched right below.
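
As a compact alternative to the three hand-built copies that follow, the one-vs-rest label vectors can be generated in a single comprehension (a minimal sketch; the names ovr_targets and features are introduced here for illustration, not part of the original experiment):

# One binary label vector per class: 1 for that class, 0 for the rest.
# Assumes `data` is the DataFrame built above, with labels in its "target" column.
ovr_targets={k:(data["target"]==k).astype(int).values for k in range(3)}
features=data.drop(columns="target").values   # keeps the "ones" bias column
ovr_targets[0][:5],features.shape             # (array([1, 1, 1, 1, 1]), (150, 5))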

data1=data.copy()
data1
ones F1 F2 F3 F4 target
0 1 5.1 3.5 1.4 0.2 0
1 1 4.9 3.0 1.4 0.2 0
2 1 4.7 3.2 1.3 0.2 0
3 1 4.6 3.1 1.5 0.2 0
4 1 5.0 3.6 1.4 0.2 0
... ... ... ... ... ... ...
145 1 6.7 3.0 5.2 2.3 2
146 1 6.3 2.5 5.0 1.9 2
147 1 6.5 3.0 5.2 2.0 2
148 1 6.2 3.4 5.4 2.3 2
149 1 5.9 3.0 5.1 1.8 2

150 rows × 6 columns


data1.loc[data["target"]!=0,"target"]=0
data1.loc[data["target"]==0,"target"]=1
data1
ones F1 F2 F3 F4 target
0 1 5.1 3.5 1.4 0.2 1
1 1 4.9 3.0 1.4 0.2 1
2 1 4.7 3.2 1.3 0.2 1
3 1 4.6 3.1 1.5 0.2 1
4 1 5.0 3.6 1.4 0.2 1
... ... ... ... ... ... ...
145 1 6.7 3.0 5.2 2.3 0
146 1 6.3 2.5 5.0 1.9 0
147 1 6.5 3.0 5.2 2.0 0
148 1 6.2 3.4 5.4 2.3 0
149 1 5.9 3.0 5.1 1.8 0

150 rows × 6 columns

data1_x=data1.iloc[:,:data1.shape[1]-1].values
data1_y=data1.iloc[:,data1.shape[1]-1].values
data1_x.shape,data1_y.shape
((150, 5), (150,))
# Data for the second classifier (class 1 vs. the rest)
data2=data.copy()
data2.loc[data["target"]==1,"target"]=1
data2.loc[data["target"]!=1,"target"]=0
data2["target"]==0
0      True
1      True
2      True
3      True
4      True
       ... 
145    True
146    True
147    True
148    True
149    True
Name: target, Length: 150, dtype: bool
data2.shape[1]
6
data2.iloc[50:55,:]
ones F1 F2 F3 F4 target
50 1 7.0 3.2 4.7 1.4 1
51 1 6.4 3.2 4.5 1.5 1
52 1 6.9 3.1 4.9 1.5 1
53 1 5.5 2.3 4.0 1.3 1
54 1 6.5 2.8 4.6 1.5 1
data2_x=data2.iloc[:,:data2.shape[1]-1].values
data2_y=data2.iloc[:,data2.shape[1]-1].values
# Data for the third classifier (class 2 vs. the rest)
data3=data.copy()
data3.loc[data["target"]==2,"target"]=1
data3.loc[data["target"]!=2,"target"]=0
data3
ones F1 F2 F3 F4 target
0 1 5.1 3.5 1.4 0.2 0
1 1 4.9 3.0 1.4 0.2 0
2 1 4.7 3.2 1.3 0.2 0
3 1 4.6 3.1 1.5 0.2 0
4 1 5.0 3.6 1.4 0.2 0
... ... ... ... ... ... ...
145 1 6.7 3.0 5.2 2.3 1
146 1 6.3 2.5 5.0 1.9 1
147 1 6.5 3.0 5.2 2.0 1
148 1 6.2 3.4 5.4 2.3 1
149 1 5.9 3.0 5.1 1.8 1

150 rows × 6 columns

data3_x=data3.iloc[:,:data3.shape[1]-1].values
data3_y=data3.iloc[:,data3.shape[1]-1].values

1.3 Defining the hypothesis, cost function, and gradient descent (copied over from Experiment 3)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def h(X,w):
    # Hypothesis: predicted probability for each sample
    z=X@w
    return sigmoid(z)

# Cross-entropy cost function
def cost(X,w,y):
    # Shapes: X (m,n+1), y (m,), w (n+1,1)
    y_hat=sigmoid(X@w)
    right=np.multiply(y.ravel(),np.log(y_hat).ravel())+np.multiply((1-y).ravel(),np.log(1-y_hat).ravel())
    return -np.sum(right)/X.shape[0]

def gradient_descent(X,y,iter_num,alpha):
    # Batch gradient descent for logistic regression
    y=y.reshape((X.shape[0],1))
    w=np.zeros((X.shape[1],1))
    cost_lst=[]
    for i in range(iter_num):
        error=h(X,w)-y                      # prediction error for every sample
        temp=np.zeros((X.shape[1],1))
        for j in range(X.shape[1]):         # update each weight in turn
            right=np.multiply(error.ravel(),X[:,j])
            gradient=np.sum(right)/X.shape[0]
            temp[j,0]=w[j,0]-alpha*gradient
        w=temp
        cost_lst.append(cost(X,w,y.ravel()))
    return w,cost_lst
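
The inner loop above updates one weight at a time. The whole update collapses to a single matrix product, grad = X^T (h(Xw) - y) / m, so a vectorized variant runs much faster. A minimal sketch (the name gradient_descent_vec is ours; it reuses the h and cost defined above and produces the same iterates):

def gradient_descent_vec(X,y,iter_num,alpha):
    # Same batch gradient descent, with the per-weight loop replaced by X.T @ error
    y=y.reshape((X.shape[0],1))
    w=np.zeros((X.shape[1],1))
    cost_lst=[]
    for i in range(iter_num):
        w=w-(alpha/X.shape[0])*(X.T@(h(X,w)-y))
        cost_lst.append(cost(X,w,y.ravel()))
    return w,cost_lst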

1.4 Running gradient descent to learn the parameters of the three classifiers

# Initialize the hyperparameters
iter_num,alpha=600000,0.001
# Train the first model
w1,cost_lst1=gradient_descent(data1_x,data1_y,iter_num,alpha)
import matplotlib.pyplot as plt
plt.plot(range(iter_num),cost_lst1,"b-o")
[<matplotlib.lines.Line2D at 0x2562630b100>]

[Figure: training cost curve for model 1]

# Train the second model
w2,cost_lst2=gradient_descent(data2_x,data2_y,iter_num,alpha)
import matplotlib.pyplot as plt
plt.plot(range(iter_num),cost_lst2,"b-o")
[<matplotlib.lines.Line2D at 0x25628114280>]

[Figure: training cost curve for model 2]

# Train the third model
w3,cost_lst3=gradient_descent(data3_x,data3_y,iter_num,alpha)
w3
array([[-3.22437049],
       [-3.50214058],
       [-3.50286355],
       [ 5.16580317],
       [ 5.89898368]])
import matplotlib.pyplot as plt
plt.plot(range(iter_num),cost_lst3,"b-o")
[<matplotlib.lines.Line2D at 0x2562e0f81c0>]

[Figure: training cost curve for model 3]

1.5 Making predictions with the models

h(data_x,w3)
array([[1.48445441e-11],
       [1.72343968e-10],
       [1.02798153e-10],
       [5.81975546e-10],
       [1.48434710e-11],
       [1.95971176e-11],
       [2.18959639e-10],
       [5.01346874e-11],
       [1.40930075e-09],
       [1.12830635e-10],
       [4.31888744e-12],
       [1.69308343e-10],
       [1.35613372e-10],
       [1.65858883e-10],
       [7.89880725e-14],
       [4.23224675e-13],
       [2.48199140e-12],
       [2.67766642e-11],
       [5.39314286e-12],
       [1.56935848e-11],
       [3.47096426e-11],
       [4.01827075e-11],
       [7.63005509e-12],
       [8.26864773e-10],
       [7.97484594e-10],
       [3.41189783e-10],
       [2.73442178e-10],
       [1.75314894e-11],
       [1.48456174e-11],
       [4.84204982e-10],
       [4.84239990e-10],
       [4.01914238e-11],
       [1.18813180e-12],
       [3.14985611e-13],
       [2.03524473e-10],
       [2.14461446e-11],
       [2.18189955e-12],
       [1.16799745e-11],
       [5.92281641e-10],
       [3.53217554e-11],
       [2.26727669e-11],
       [8.74004884e-09],
       [2.93949962e-10],
       [6.26783110e-10],
       [2.23513465e-10],
       [4.41246960e-10],
       [1.45841303e-11],
       [2.44584721e-10],
       [6.13010507e-12],
       [4.24539165e-11],
       [1.64123143e-03],
       [8.55503211e-03],
       [1.65105645e-02],
       [9.87814122e-02],
       [3.97290777e-02],
       [1.11076040e-01],
       [4.19003715e-02],
       [2.88426221e-03],
       [6.27161978e-03],
       [7.67020481e-02],
       [2.27204861e-02],
       [2.08212169e-02],
       [4.58067633e-03],
       [9.90450665e-02],
       [1.19419048e-03],
       [1.41462060e-03],
       [2.22638069e-01],
       [2.68940904e-03],
       [3.66014737e-01],
       [6.97791873e-03],
       [5.78803255e-01],
       [2.32071970e-03],
       [5.28941621e-01],
       [4.57649874e-02],
       [2.69208900e-03],
       [2.84603646e-03],
       [2.20421076e-02],
       [2.07507605e-01],
       [9.10460936e-02],
       [2.44824946e-04],
       [8.37509821e-03],
       [2.78543808e-03],
       [3.11283202e-03],
       [8.89831833e-01],
       [3.65880536e-01],
       [3.03993844e-02],
       [1.18930239e-02],
       [4.99150151e-02],
       [1.10252946e-02],
       [5.15923462e-02],
       [1.43653056e-01],
       [4.41610209e-02],
       [7.37513950e-03],
       [2.88447014e-03],
       [5.07366744e-02],
       [7.24617687e-03],
       [1.83460602e-02],
       [5.40874928e-03],
       [3.87210511e-04],
       [1.55791816e-02],
       [9.99862942e-01],
       [9.89637526e-01],
       [9.86183040e-01],
       [9.83705644e-01],
       [9.98410187e-01],
       [9.97834502e-01],
       [9.84208537e-01],
       [9.85434538e-01],
       [9.94141336e-01],
       [9.94561329e-01],
       [7.20333384e-01],
       [9.70431293e-01],
       [9.62754456e-01],
       [9.96609064e-01],
       [9.99222270e-01],
       [9.83684437e-01],
       [9.26437633e-01],
       [9.83486260e-01],
       [9.99950496e-01],
       [9.39002061e-01],
       [9.88043323e-01],
       [9.88637702e-01],
       [9.98357641e-01],
       [7.65848930e-01],
       [9.73006160e-01],
       [8.76969899e-01],
       [6.61137141e-01],
       [6.97324053e-01],
       [9.97185846e-01],
       [6.11033594e-01],
       [9.77494647e-01],
       [6.58573810e-01],
       [9.98437920e-01],
       [5.24529693e-01],
       [9.70465066e-01],
       [9.87624920e-01],
       [9.97236435e-01],
       [9.26432706e-01],
       [6.61104746e-01],
       [8.84442100e-01],
       [9.96082862e-01],
       [8.40940308e-01],
       [9.89637526e-01],
       [9.96974990e-01],
       [9.97386310e-01],
       [9.62040470e-01],
       [9.52214579e-01],
       [8.96902215e-01],
       [9.90200940e-01],
       [9.28785160e-01]])
# Feed the data through all three models and inspect the results
multi_pred=pd.DataFrame(zip(h(data_x,w1).ravel(),h(data_x,w2).ravel(),h(data_x,w3).ravel()))
multi_pred
0 1 2
0 0.999297 0.108037 1.484454e-11
1 0.997061 0.270814 1.723440e-10
2 0.998633 0.164710 1.027982e-10
3 0.995774 0.231910 5.819755e-10
4 0.999415 0.085259 1.484347e-11
... ... ... ...
145 0.000007 0.127574 9.620405e-01
146 0.000006 0.496389 9.522146e-01
147 0.000010 0.234745 8.969022e-01
148 0.000006 0.058444 9.902009e-01
149 0.000014 0.284295 9.287852e-01

150 rows × 3 columns

multi_pred.values[:3]
array([[9.99297209e-01, 1.08037473e-01, 1.48445441e-11],
       [9.97060801e-01, 2.70813780e-01, 1.72343968e-10],
       [9.98632728e-01, 1.64709623e-01, 1.02798153e-10]])
# Predicted class for each sample
np.argmax(multi_pred.values,axis=1)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2,
       2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], dtype=int64)
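
The decision rule used here, scoring each sample with every binary classifier and taking the argmax, can be wrapped in a small helper (a sketch; predict_ovr is a name introduced for illustration, not part of the original experiment):

def predict_ovr(X,weights):
    # Column k holds classifier k's predicted probability; pick the most confident
    scores=np.hstack([h(X,w) for w in weights])   # shape (m, n_classes)
    return np.argmax(scores,axis=1)

predict_ovr(data_x,[w1,w2,w3])   # reproduces the argmax result above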
# True class of each sample
data_y
array([[0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [0],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2],
       [2]])

1.6 Evaluating the models

np.argmax(multi_pred.values,axis=1)==data_y.ravel()
array([ True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True, False,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True, False, False,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True, False,  True,  True,  True, False,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True])
np.sum(np.argmax(multi_pred.values,axis=1)==data_y.ravel())
145
np.sum(np.argmax(multi_pred.values,axis=1)==data_y.ravel())/len(data)
0.9666666666666667
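
The same accuracy, plus a per-class breakdown, can be read off with scikit-learn's metrics (a short sketch over the multi_pred frame built above):

from sklearn.metrics import accuracy_score, confusion_matrix

y_hat=np.argmax(multi_pred.values,axis=1)
accuracy_score(data_y.ravel(),y_hat)      # same 0.9666... as above
confusion_matrix(data_y.ravel(),y_hat)    # rows: true class, columns: predicted class

The confusion matrix should show all five errors inside the versicolor/virginica pair, consistent with those two classes not being linearly separable.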

1.7 Trying sklearn

from sklearn.linear_model import LogisticRegression
# Build the first model
clf1=LogisticRegression()
clf1.fit(data1_x,data1_y)
# Build the second model
clf2=LogisticRegression()
clf2.fit(data2_x,data2_y)
# Build the third model
clf3=LogisticRegression()
clf3.fit(data3_x,data3_y)
LogisticRegression()
y_pred1=clf1.predict(data_x)
y_pred2=clf2.predict(data_x)
y_pred3=clf3.predict(data_x)
# Inspect each model's predictions side by side
multi_pred=pd.DataFrame(zip(y_pred1,y_pred2,y_pred3),columns=["model 1","model 2","model 3"])
multi_pred
model 1 model 2 model 3
0 1 0 0
1 1 0 0
2 1 0 0
3 1 0 0
4 1 0 0
... ... ... ...
145 0 0 1
146 0 1 1
147 0 0 1
148 0 0 1
149 0 0 1

150 rows × 3 columns

# Derive the predicted class from the three outputs
np.argmax(multi_pred.values,axis=1)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0,
       0, 1, 1, 1, 2, 0, 1, 1, 0, 0, 0, 2, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1,
       0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1, 2,
       2, 2, 2, 1, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 1, 2,
       2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2], dtype=int64)
data_y.ravel()
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
# Compute the accuracy
np.sum(np.argmax(multi_pred.values,axis=1)==data_y.ravel())/data.shape[0]
0.7333333333333333
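
The accuracy drops here because predict returns hard 0/1 labels: whenever two classifiers both output 0 (or both 1), np.argmax breaks the tie by position rather than by confidence. Keeping the probabilities restores the soft one-vs-rest vote, as in this sketch using predict_proba (column 1 is the positive class):

# Positive-class probability from each binary model
proba=np.column_stack([clf.predict_proba(data_x)[:,1] for clf in (clf1,clf2,clf3)])
np.sum(np.argmax(proba,axis=1)==data_y.ravel())/data.shape[0]

This should recover an accuracy close to the hand-rolled models. Note also that LogisticRegression accepts the three-class target directly, so LogisticRegression().fit(data_x,data_y.ravel()) would handle the multi-class decomposition internally.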

Exercise: now work through your first multi-class classification problem yourself. Good luck! Complete the code below.

1. Reading the data

data_x,data_y=datasets.make_blobs(n_samples=200, n_features=6,  centers=4,random_state=0)
data_x.shape,data_y.shape
((200, 6), (200,))

2. Preparing the training data

data=np.insert(data_x,data_x.shape[1],data_y,axis=1)
data=pd.DataFrame(data,columns=["F1","F2","F3","F4","F5","F6","target"])
data
F1 F2 F3 F4 F5 F6 target
0 2.116632 7.972800 -9.328969 -8.224605 -12.178429 5.498447 2.0
1 1.886449 4.621006 2.841595 0.431245 -2.471350 2.507833 0.0
2 2.391329 6.464609 -9.805900 -7.289968 -9.650985 6.388460 2.0
3 -1.034776 6.626886 9.031235 -0.812908 5.449855 0.134062 1.0
4 -0.481593 8.191753 7.504717 -1.975688 6.649021 0.636824 1.0
... ... ... ... ... ... ... ...
195 5.434893 7.128471 9.789546 6.061382 0.634133 5.757024 3.0
196 -0.406625 7.586001 9.322750 -1.837333 6.477815 -0.992725 1.0
197 2.031462 7.804427 -8.539512 -9.824409 -10.046935 6.918085 2.0
198 4.081889 6.127685 11.091126 4.812011 -0.005915 5.342211 3.0
199 0.985744 7.285737 -8.395940 -6.586471 -9.651765 6.651012 2.0

200 rows × 7 columns

data["target"]=data["target"].astype("int32")
data
F1 F2 F3 F4 F5 F6 target
0 2.116632 7.972800 -9.328969 -8.224605 -12.178429 5.498447 2
1 1.886449 4.621006 2.841595 0.431245 -2.471350 2.507833 0
2 2.391329 6.464609 -9.805900 -7.289968 -9.650985 6.388460 2
3 -1.034776 6.626886 9.031235 -0.812908 5.449855 0.134062 1
4 -0.481593 8.191753 7.504717 -1.975688 6.649021 0.636824 1
... ... ... ... ... ... ... ...
195 5.434893 7.128471 9.789546 6.061382 0.634133 5.757024 3
196 -0.406625 7.586001 9.322750 -1.837333 6.477815 -0.992725 1
197 2.031462 7.804427 -8.539512 -9.824409 -10.046935 6.918085 2
198 4.081889 6.127685 11.091126 4.812011 -0.005915 5.342211 3
199 0.985744 7.285737 -8.395940 -6.586471 -9.651765 6.651012 2

200 rows × 7 columns

data.insert(0,"ones",1)
data
ones F1 F2 F3 F4 F5 F6 target
0 1 2.116632 7.972800 -9.328969 -8.224605 -12.178429 5.498447 2
1 1 1.886449 4.621006 2.841595 0.431245 -2.471350 2.507833 0
2 1 2.391329 6.464609 -9.805900 -7.289968 -9.650985 6.388460 2
3 1 -1.034776 6.626886 9.031235 -0.812908 5.449855 0.134062 1
4 1 -0.481593 8.191753 7.504717 -1.975688 6.649021 0.636824 1
... ... ... ... ... ... ... ... ...
195 1 5.434893 7.128471 9.789546 6.061382 0.634133 5.757024 3
196 1 -0.406625 7.586001 9.322750 -1.837333 6.477815 -0.992725 1
197 1 2.031462 7.804427 -8.539512 -9.824409 -10.046935 6.918085 2
198 1 4.081889 6.127685 11.091126 4.812011 -0.005915 5.342211 3
199 1 0.985744 7.285737 -8.395940 -6.586471 -9.651765 6.651012 2

200 rows × 8 columns

# Data for the first classifier (class 0 vs. the rest)
data1=data.copy()
data1.loc[data["target"]==0,"target"]=1
data1.loc[data["target"]!=0,"target"]=0
data1
ones F1 F2 F3 F4 F5 F6 target
0 1 2.116632 7.972800 -9.328969 -8.224605 -12.178429 5.498447 0
1 1 1.886449 4.621006 2.841595 0.431245 -2.471350 2.507833 1
2 1 2.391329 6.464609 -9.805900 -7.289968 -9.650985 6.388460 0
3 1 -1.034776 6.626886 9.031235 -0.812908 5.449855 0.134062 0
4 1 -0.481593 8.191753 7.504717 -1.975688 6.649021 0.636824 0
... ... ... ... ... ... ... ... ...
195 1 5.434893 7.128471 9.789546 6.061382 0.634133 5.757024 0
196 1 -0.406625 7.586001 9.322750 -1.837333 6.477815 -0.992725 0
197 1 2.031462 7.804427 -8.539512 -9.824409 -10.046935 6.918085 0
198 1 4.081889 6.127685 11.091126 4.812011 -0.005915 5.342211 0
199 1 0.985744 7.285737 -8.395940 -6.586471 -9.651765 6.651012 0

200 rows × 8 columns

data1_x=data1.iloc[:,:data1.shape[1]-1].values
data1_y=data1.iloc[:,data1.shape[1]-1].values
data1_x.shape,data1_y.shape
((200, 7), (200,))
# Data for the second classifier (class 1 vs. the rest)
data2=data.copy()
data2.loc[data["target"]==1,"target"]=1
data2.loc[data["target"]!=1,"target"]=0
data2
ones F1 F2 F3 F4 F5 F6 target
0 1 2.116632 7.972800 -9.328969 -8.224605 -12.178429 5.498447 0
1 1 1.886449 4.621006 2.841595 0.431245 -2.471350 2.507833 0
2 1 2.391329 6.464609 -9.805900 -7.289968 -9.650985 6.388460 0
3 1 -1.034776 6.626886 9.031235 -0.812908 5.449855 0.134062 1
4 1 -0.481593 8.191753 7.504717 -1.975688 6.649021 0.636824 1
... ... ... ... ... ... ... ... ...
195 1 5.434893 7.128471 9.789546 6.061382 0.634133 5.757024 0
196 1 -0.406625 7.586001 9.322750 -1.837333 6.477815 -0.992725 1
197 1 2.031462 7.804427 -8.539512 -9.824409 -10.046935 6.918085 0
198 1 4.081889 6.127685 11.091126 4.812011 -0.005915 5.342211 0
199 1 0.985744 7.285737 -8.395940 -6.586471 -9.651765 6.651012 0

200 rows × 8 columns

data2_x=data2.iloc[:,:data2.shape[1]-1].values
data2_y=data2.iloc[:,data2.shape[1]-1].values
# Data for the third classifier (class 2 vs. the rest)
data3=data.copy()
data3.loc[data["target"]==2,"target"]=1
data3.loc[data["target"]!=2,"target"]=0
data3
ones F1 F2 F3 F4 F5 F6 target
0 1 2.116632 7.972800 -9.328969 -8.224605 -12.178429 5.498447 1
1 1 1.886449 4.621006 2.841595 0.431245 -2.471350 2.507833 0
2 1 2.391329 6.464609 -9.805900 -7.289968 -9.650985 6.388460 1
3 1 -1.034776 6.626886 9.031235 -0.812908 5.449855 0.134062 0
4 1 -0.481593 8.191753 7.504717 -1.975688 6.649021 0.636824 0
... ... ... ... ... ... ... ... ...
195 1 5.434893 7.128471 9.789546 6.061382 0.634133 5.757024 0
196 1 -0.406625 7.586001 9.322750 -1.837333 6.477815 -0.992725 0
197 1 2.031462 7.804427 -8.539512 -9.824409 -10.046935 6.918085 1
198 1 4.081889 6.127685 11.091126 4.812011 -0.005915 5.342211 0
199 1 0.985744 7.285737 -8.395940 -6.586471 -9.651765 6.651012 1

200 rows × 8 columns

data3_x=data3.iloc[:,:data3.shape[1]-1].values
data3_y=data3.iloc[:,data3.shape[1]-1].values
# Data for the fourth classifier (class 3 vs. the rest)
data4=data.copy()
data4.loc[data["target"]==3,"target"]=1
data4.loc[data["target"]!=3,"target"]=0
data4
ones F1 F2 F3 F4 F5 F6 target
0 1 2.116632 7.972800 -9.328969 -8.224605 -12.178429 5.498447 0
1 1 1.886449 4.621006 2.841595 0.431245 -2.471350 2.507833 0
2 1 2.391329 6.464609 -9.805900 -7.289968 -9.650985 6.388460 0
3 1 -1.034776 6.626886 9.031235 -0.812908 5.449855 0.134062 0
4 1 -0.481593 8.191753 7.504717 -1.975688 6.649021 0.636824 0
... ... ... ... ... ... ... ... ...
195 1 5.434893 7.128471 9.789546 6.061382 0.634133 5.757024 1
196 1 -0.406625 7.586001 9.322750 -1.837333 6.477815 -0.992725 0
197 1 2.031462 7.804427 -8.539512 -9.824409 -10.046935 6.918085 0
198 1 4.081889 6.127685 11.091126 4.812011 -0.005915 5.342211 1
199 1 0.985744 7.285737 -8.395940 -6.586471 -9.651765 6.651012 0

200 rows × 8 columns

data4_x=data4.iloc[:,:data4.shape[1]-1].values
data4_y=data4.iloc[:,data4.shape[1]-1].values

3. Defining the hypothesis, cost function, and gradient descent

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def h(X,w):
    # Hypothesis: predicted probability for each sample
    z=X@w
    return sigmoid(z)

# Cross-entropy cost function
def cost(X,w,y):
    # Shapes: X (m,n+1), y (m,), w (n+1,1)
    y_hat=sigmoid(X@w)
    right=np.multiply(y.ravel(),np.log(y_hat).ravel())+np.multiply((1-y).ravel(),np.log(1-y_hat).ravel())
    return -np.sum(right)/X.shape[0]

def gradient_descent(X,y,iter_num,alpha):
    # Batch gradient descent (identical to Section 1.3)
    y=y.reshape((X.shape[0],1))
    w=np.zeros((X.shape[1],1))
    cost_lst=[]
    for i in range(iter_num):
        error=h(X,w)-y
        temp=np.zeros((X.shape[1],1))
        for j in range(X.shape[1]):
            right=np.multiply(error.ravel(),X[:,j])
            gradient=np.sum(right)/X.shape[0]
            temp[j,0]=w[j,0]-alpha*gradient
        w=temp
        cost_lst.append(cost(X,w,y.ravel()))
    return w,cost_lst
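
Since the four training runs in the next step differ only in their data, they can also be driven by one loop (a sketch; ws and curves are names introduced here, and iter_num and alpha are assumed to be initialized as in the next cell):

ws,curves=[],[]
for X_k,y_k in [(data1_x,data1_y),(data2_x,data2_y),(data3_x,data3_y),(data4_x,data4_y)]:
    w_k,cost_k=gradient_descent(X_k,y_k,iter_num,alpha)
    ws.append(w_k)
    curves.append(cost_k)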

4. Learning the four classifiers

import matplotlib.pyplot as plt
# Initialize the hyperparameters
iter_num,alpha=600000,0.001
# Train model 1
w1,cost_lst1=gradient_descent(data1_x,data1_y,iter_num,alpha)
plt.plot(range(iter_num),cost_lst1,"b-o")
[<matplotlib.lines.Line2D at 0x25624eb08e0>]

[Figure: training cost curve for model 1]

# Train model 2
w2,cost_lst2=gradient_descent(data2_x,data2_y,iter_num,alpha)
plt.plot(range(iter_num),cost_lst2,"b-o")
[<matplotlib.lines.Line2D at 0x25631b87a60>]

[Figure: training cost curve for model 2]

# Train model 3
w3,cost_lst3=gradient_descent(data3_x,data3_y,iter_num,alpha)
plt.plot(range(iter_num),cost_lst3,"b-o")
[<matplotlib.lines.Line2D at 0x2562bcdfac0>]

[Figure: training cost curve for model 3]

# Train model 4
w4,cost_lst4=gradient_descent(data4_x,data4_y,iter_num,alpha)
plt.plot(range(iter_num),cost_lst4,"b-o")
[<matplotlib.lines.Line2D at 0x25631ff4ee0>]

[Figure: training cost curve for model 4]

5. Making predictions with the models

data_x
array([[ 2.11663151e+00,  7.97280013e+00, -9.32896918e+00,
        -8.22460526e+00, -1.21784287e+01,  5.49844655e+00],
       [ 1.88644899e+00,  4.62100554e+00,  2.84159548e+00,
         4.31244563e-01, -2.47135027e+00,  2.50783257e+00],
       [ 2.39132949e+00,  6.46460915e+00, -9.80590050e+00,
        -7.28996786e+00, -9.65098460e+00,  6.38845956e+00],
       ...,
       [ 2.03146167e+00,  7.80442707e+00, -8.53951210e+00,
        -9.82440872e+00, -1.00469351e+01,  6.91808489e+00],
       [ 4.08188906e+00,  6.12768483e+00,  1.10911262e+01,
         4.81201082e+00, -5.91530191e-03,  5.34221079e+00],
       [ 9.85744105e-01,  7.28573657e+00, -8.39593964e+00,
        -6.58647097e+00, -9.65176507e+00,  6.65101187e+00]])
data_x=np.insert(data_x,0,1,axis=1)
data_x.shape
(200, 7)
w3.shape
(7, 1)
multi_pred=pd.DataFrame(zip(h(data_x,w1).ravel(),h(data_x,w2).ravel(),h(data_x,w3).ravel(),h(data_x,w4).ravel()))
multi_pred
0 1 2 3
0 0.020436 4.556248e-15 9.999975e-01 2.601227e-27
1 0.820488 4.180906e-05 3.551499e-05 5.908691e-05
2 0.109309 7.316201e-14 9.999978e-01 7.091713e-24
3 0.036608 9.999562e-01 1.048562e-09 5.724854e-03
4 0.003075 9.999292e-01 2.516742e-09 6.423038e-05
... ... ... ... ...
195 0.017278 3.221293e-06 3.753372e-14 9.999943e-01
196 0.003369 9.999966e-01 6.673394e-10 2.281428e-03
197 0.000606 1.118174e-13 9.999941e-01 1.780212e-28
198 0.013072 4.999118e-05 9.811154e-14 9.996689e-01
199 0.151548 1.329623e-13 9.999447e-01 2.571989e-24

200 rows × 4 columns

6. Computing the accuracy

np.sum(np.argmax(multi_pred.values,axis=1)==data_y.ravel())/len(data)
1.0
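
As with the Iris example, this 100% accuracy is measured on the very data the models were trained on. A held-out split gives a more honest estimate (a sketch using scikit-learn's train_test_split; the variable names are ours):

from sklearn.model_selection import train_test_split

# Hold out 30% of the samples before any training
X_tr,X_te,y_tr,y_te=train_test_split(data_x,data_y,test_size=0.3,random_state=0)
# Re-code y_tr one-vs-rest, train the four models on (X_tr, y_tr) exactly as above,
# then compare the argmax of the four probability columns on X_te against y_te.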

