Date: 2021-05-22
Basic concepts
precision: of the samples predicted as positive, the proportion that are actually positive (higher is better; 1 is ideal)
recall: of the samples that are actually positive, the proportion that are predicted as positive (higher is better; 1 is ideal)
F-measure: a combined measure that trades off precision against recall (higher is better; 1 is ideal, which requires both precision and recall to be 1)
accuracy: the proportion of all samples that are predicted correctly, i.e. actual positives predicted as positive plus actual negatives predicted as negative (higher is better; 1 is ideal)
fp rate: the proportion of actual negatives that are predicted as positive (lower is better; 0 is ideal)
tp rate: the proportion of actual positives that are predicted as positive (higher is better; 1 is ideal)
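As a quick illustration of these definitions (not part of the original tutorial), the following is a minimal sketch that computes the same metrics with scikit-learn on made-up labels and predictions, and derives the fp rate and tp rate from the confusion matrix:

```python
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             accuracy_score, confusion_matrix)

# Hypothetical ground-truth labels and predictions for a binary problem
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1:       ", f1_score(y_true, y_pred))         # trade-off of the two
print("accuracy: ", accuracy_score(y_true, y_pred))   # (TP + TN) / total

# fp rate and tp rate from the confusion matrix
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("fp rate:", fp / (fp + tn))  # actual negatives predicted positive
print("tp rate:", tp / (tp + fn))  # actual positives predicted positive (= recall)
```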
An ROC curve typically has the true positive rate on the Y axis and the false positive rate on the X axis. This means the top-left corner of the plot is the "ideal" point: a false positive rate of zero and a true positive rate of one. That point is rarely reached in practice, but it does imply that a larger area under the curve (AUC) is usually better.
Binary classification: ROC curve
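The binary-classification code itself is not included in this excerpt. As a minimal, hypothetical sketch (using made-up labels and scores rather than the original data), an ROC curve for a binary classifier can be drawn with sklearn.metrics.roc_curve and auc like this:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

# Hypothetical binary labels and predicted scores (e.g. probabilities for class 1)
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, lw=2, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--', lw=2)  # chance level (diagonal)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic (binary)')
plt.legend(loc='lower right')
plt.show()
```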
The resulting ROC plot is shown below:
Multi-class classification: ROC curve
ROC curves are typically used in binary classification to study a classifier's output. To extend ROC curves and ROC areas to multi-class or multi-label classification, the output must be binarized. (1) One ROC curve can be drawn per label. (2) An ROC curve can also be drawn by treating each element of the label indicator matrix as a binary prediction (micro-averaging). (3) Another evaluation method for multi-class classification is macro-averaging, which gives equal weight to the classification of each label.
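As a small, hypothetical illustration (assuming a 5-class problem, matching nb_classes in the script below), label_binarize turns integer class labels into the label indicator matrix that the per-class and micro-averaged ROC computations operate on:

```python
from sklearn.preprocessing import label_binarize

# Hypothetical integer labels for a 5-class problem
y = [0, 2, 1, 4, 3, 2]
Y = label_binarize(y, classes=[0, 1, 2, 3, 4])
print(Y)
# [[1 0 0 0 0]
#  [0 0 1 0 0]
#  [0 1 0 0 0]
#  [0 0 0 0 1]
#  [0 0 0 1 0]
#  [0 0 1 0 0]]
```

Each column of Y corresponds to one class and can be treated as an independent binary problem.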
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import time
start_time = time.time()

import matplotlib.pyplot as plt
import numpy as np
from itertools import cycle

from sklearn.metrics import roc_curve, auc
from sklearn.metrics import recall_score, accuracy_score
from sklearn.metrics import precision_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

from keras.optimizers import Adam
from keras.models import load_model

nb_classes = 5

print('Loading data')
X_train = np.load('x_train-resized_5.npy')
Y_train = np.load('y_train-resized_5.npy')
print(X_train.shape)
print(Y_train.shape)

print('Splitting into training and validation data')
X_train, X_valid, Y_train, Y_valid = train_test_split(X_train, Y_train,
                                                      test_size=0.1,
                                                      random_state=666)
Y_train = np.asarray(Y_train, np.uint8)
Y_valid = np.asarray(Y_valid, np.uint8)
X_valid = np.asarray(X_valid, np.float32) / 255.

print('Loading the model')
model = load_model('./model/SE-InceptionV3_model.h5')
opt = Adam(lr=1e-4)
model.compile(optimizer=opt, loss='categorical_crossentropy')

print("Predicting")
Y_pred = model.predict(X_valid)
# Take the index of the largest element in each row, i.e. the predicted class.
# Note: this discards the predicted probabilities, so the ROC curves below are
# built from hard 0/1 predictions rather than continuous scores.
Y_pred = [np.argmax(y) for y in Y_pred]
Y_valid = [np.argmax(y) for y in Y_valid]

# Binarize the output into a label indicator matrix
Y_valid = label_binarize(Y_valid, classes=[i for i in range(nb_classes)])
Y_pred = label_binarize(Y_pred, classes=[i for i in range(nb_classes)])

# average='micro': global counts over all classes
# average='weighted': per-class metrics averaged, weighted by class support
#                     (useful for imbalanced classes)
# average='macro': per-class metrics averaged, each class given equal weight
precision = precision_score(Y_valid, Y_pred, average='micro')
recall = recall_score(Y_valid, Y_pred, average='micro')
f1 = f1_score(Y_valid, Y_pred, average='micro')
accuracy = accuracy_score(Y_valid, Y_pred)
print("Precision_score:", precision)
print("Recall_score:", recall)
print("F1_score:", f1)
print("Accuracy_score:", accuracy)

# roc_curve: the y axis is the true positive rate (TPR, sensitivity),
# the x axis is the false positive rate (FPR)

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(nb_classes):
    fpr[i], tpr[i], _ = roc_curve(Y_valid[:, i], Y_pred[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])

# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(Y_valid.ravel(), Y_pred.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])

# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(nb_classes)]))

# Then interpolate all ROC curves at these points
# (np.interp replaces the deprecated scipy.interp)
mean_tpr = np.zeros_like(all_fpr)
for i in range(nb_classes):
    mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])

# Finally average it and compute AUC
mean_tpr /= nb_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])

# Plot all ROC curves
lw = 2
plt.figure()
plt.plot(fpr["micro"], tpr["micro"],
         label='micro-average ROC curve (area = {0:0.2f})'.format(roc_auc["micro"]),
         color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
         label='macro-average ROC curve (area = {0:0.2f})'.format(roc_auc["macro"]),
         color='navy', linestyle=':', linewidth=4)

colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(nb_classes), colors):
    plt.plot(fpr[i], tpr[i], color=color, lw=lw,
             label='ROC curve of class {0} (area = {1:0.2f})'.format(i, roc_auc[i]))

plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.savefig("../images/ROC/ROC_5分类.png")
plt.show()

print("--- %s seconds ---" % (time.time() - start_time))

The resulting multi-class ROC plot is shown below:
That concludes this tutorial on implementing binary and multi-class ROC curves in Python; I hope it serves as a useful reference.