Credit Scorecard Modeling and Analysis with Python

Credit scoring is an applied-statistics technique that assigns a risk score to loan (or credit card) applicants. A credit scorecard evaluates an applicant's creditworthiness from data such as the application form and credit bureau records.

The scorecard is built in the following stages:

1. Modeling preparation (reject inference, de-duplication, variable transformation, constructing the training set)

2. Coarse variable screening

3. Variable cleaning

4. Fine variable screening and level compression

5. Modeling and deployment

The main types of risk scoring models:

  • Application scoring: predicts the probability of future default/delinquency from information available at application time
  • Behavioral scoring: predicts the probability of future default/delinquency from the customer's past behavior
  • Collection scoring: predicts, for accounts already in arrears, the probability of repayment versus further deterioration from past behavior

How it works

An application scorecard is a statistical model that evaluates the applicant's information and produces a single score, a quantitative prediction of the applicant's ability to repay.

A scorecard consists of a set of characteristics, each corresponding to a question on the application form (for example age, bank statements, income). Each characteristic has a set of possible attributes, corresponding to the question's possible answers (for age: under 30, 30 to 45, and so on). When developing the scorecard, the relationship between each attribute and the applicant's future credit performance is established first, and point weights reflecting that relationship are then assigned to the attributes: the higher the points, the better the credit performance the attribute indicates. An application's score is the simple sum of its attribute points. If the score is at or above the cutoff set by the lender, the application is within the acceptable risk level and is approved; applications below the cutoff are declined or flagged for further review.
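The additive scoring and cutoff logic described above can be sketched in a few lines of Python; the bins, point values, and cutoff below are hypothetical, purely for illustration:

```python
# Hypothetical bins and point values, for illustration only:
# each characteristic maps a value range (upper bound) to points.
POINTS = {
    "age": [(30, 15), (45, 25), (float("inf"), 35)],
    "income": [(3000, 10), (8000, 20), (float("inf"), 30)],
}

def attribute_points(feature, value):
    """Return the points of the bin that `value` falls into."""
    for upper, pts in POINTS[feature]:
        if value < upper:
            return pts
    raise ValueError(f"no bin for {feature}={value}")

def total_score(applicant):
    """An application's score is the simple sum of its attribute points."""
    return sum(attribute_points(f, v) for f, v in applicant.items())

CUTOFF = 45  # hypothetical lender cutoff
applicant = {"age": 33, "income": 5000}
score = total_score(applicant)   # 25 + 20 = 45
approved = score >= CUTOFF       # at or above the cutoff -> approve
```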

1. Modeling preparation


#Read the data
#accepts.csv: applicants that passed risk review and received loans
#rejects.csv: applicants that did not pass risk review
import pandas as pd
import numpy as np
accepts = pd.read_csv('script_credit/accepts.csv')
rejects = pd.read_csv('script_credit/rejects.csv')

(1) Inspect the data structure

accepts.info()
#check missing values and column dtypes


(2) Reject inference: predict bad_ind for the rejects file

#Reject inference
#first separate the predictors from the target variable
accepts_X = accepts[['tot_derog','age_oldest_tr','rev_util','fico_score','ltv']]
accepts_y = accepts['bad_ind']
rejects_X = rejects[['tot_derog','age_oldest_tr','rev_util','fico_score','ltv']]

(3) Fill missing values in accepts_X

# The fancyimpute package's KNN method would be the preferred fill,
# but it could not be installed on this 32-bit system, so mean fill is used instead
# Use 3 nearest rows which have a feature to fill in each row's missing features
# import fancyimpute as fimp
#accepts_X_filled = pd.DataFrame(fimp.KNN(3).complete(accepts_X.as_matrix()))
#accepts_X_filled.columns = accepts_X.columns
#rejects_X_filled = pd.DataFrame(fimp.KNN(3).complete(rejects_X.as_matrix()))
#rejects_X_filled.columns = rejects_X.columns
accepts_X_filled = accepts_X.fillna(accepts_X.mean())
rejects_X_filled = rejects_X.fillna(rejects_X.mean())
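As a modern alternative to the unavailable fancyimpute package, scikit-learn (0.22+) ships `sklearn.impute.KNNImputer`, which fills each missing value from the nearest rows, much as the commented-out code intends. A minimal sketch on a toy frame reusing the five predictor names from above:

```python
import pandas as pd
from sklearn.impute import KNNImputer

# toy frame with the same five predictors; None marks missing values
demo = pd.DataFrame({'tot_derog':     [0, 1, None, 2],
                     'age_oldest_tr': [100, 120, 110, None],
                     'rev_util':      [0.3, 0.5, 0.4, 0.6],
                     'fico_score':    [700, 650, 680, 640],
                     'ltv':           [90, 95, 92, 99]})
imputer = KNNImputer(n_neighbors=3)   # fill each gap from the 3 nearest rows
filled = pd.DataFrame(imputer.fit_transform(demo), columns=demo.columns)
```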

(4) Scale the data

# Scale the data (Normalizer rescales each sample/row to unit norm)
from sklearn.preprocessing import Normalizer
accepts_X_norm = pd.DataFrame(Normalizer().fit_transform(accepts_X_filled))
accepts_X_norm.columns = accepts_X_filled.columns
rejects_X_norm = pd.DataFrame(Normalizer().fit_transform(rejects_X_filled))
rejects_X_norm.columns = rejects_X_filled.columns

(5) Predict bad_ind for the rejects file

# Predict bad_ind with a KNN classifier
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=5, weights='distance')
neigh.fit(accepts_X_norm, accepts_y)
rejects['bad_ind'] = neigh.predict(rejects_X_norm)

(6) Merge the data and construct the training set

Merge the data of the approved and rejected applicants into one table.

# The accepts data oversamples defaulted customers,
# so the rejects must be sampled at the same ratio
rejects_res = rejects[rejects['bad_ind'] == 0].sample(1340)
rejects_res = pd.concat([rejects_res, rejects[rejects['bad_ind'] == 1]], axis = 0)
data = pd.concat([accepts.iloc[:, 2:-1], rejects_res.iloc[:,1:]], axis = 0)
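The sampling step above keeps the rejects at the same bad rate as the oversampled accepts. A small sketch of such a proportion check, on toy frames and with the sample size derived from the bad rate rather than hard-coded:

```python
import pandas as pd

# toy stand-ins for accepts / rejects after reject inference
accepts_demo = pd.DataFrame({'bad_ind': [0] * 4 + [1] * 2})
rejects_demo = pd.DataFrame({'bad_ind': [0] * 8 + [1] * 2})

bad_rate = accepts_demo['bad_ind'].mean()        # oversampled bad rate in accepts
n_bad = (rejects_demo['bad_ind'] == 1).sum()
# keep just enough inferred goods so rejects matches the accepts bad rate
n_good = int(n_bad * (1 - bad_rate) / bad_rate)
rejects_res_demo = pd.concat([
    rejects_demo[rejects_demo['bad_ind'] == 0].sample(n_good, random_state=0),
    rejects_demo[rejects_demo['bad_ind'] == 1],
])
```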

2. Coarse variable screening

#bankruptcy_ind --- prior-bankruptcy flag: N = no, Y = yes
bankruptcy_dict = {'N':0, 'Y':1}  #map to 0/1
data.bankruptcy_ind = data.bankruptcy_ind.map(bankruptcy_dict)

# Cap outliers in the vehicle-year variable, then convert year to vehicle age

#values below the 0.1 quantile are raised to it; values above the 0.99 quantile are lowered to it
year_min = data.vehicle_year.quantile(0.1)
year_max = data.vehicle_year.quantile(0.99)
data.vehicle_year = data.vehicle_year.map(lambda x: year_min if x <= year_min else x)
data.vehicle_year = data.vehicle_year.map(lambda x: year_max if x >= year_max else x)
data.vehicle_year = data.vehicle_year.map(lambda x: 2018 - x)
data.drop(['vehicle_make'], axis = 1, inplace = True)  #drop the vehicle manufacturer column
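The two `map()` calls above implement the cap (winsorizing); for reference, pandas' `Series.clip` achieves the same in one call (toy series below):

```python
import pandas as pd

# toy vehicle-year series
s = pd.Series([1990, 1999, 2001, 2005, 2016, 2017])
lo, hi = s.quantile(0.1), s.quantile(0.99)
capped = s.clip(lower=lo, upper=hi)   # same effect as the two map() calls
```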

#Fill missing values

#fancyimpute's KNN fill was the original plan; mean fill is used here instead
data_filled = data.fillna(data.mean())
data_filled.columns = data.columns

Define X and y

X = data_filled[['age_oldest_tr', 'bankruptcy_ind', 'down_pyt', 'fico_score',
       'loan_amt', 'loan_term', 'ltv', 'msrp', 'purch_price', 'rev_util',
       'tot_derog', 'tot_income', 'tot_open_tr', 'tot_rev_debt',
       'tot_rev_line', 'tot_rev_tr', 'tot_tr', 'used_ind', 'veh_mileage',
       'vehicle_year']]
y = data_filled['bad_ind']

#Coarse variable screening

# Screen variables with a random forest
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(max_depth=5, random_state=0)
clf.fit(X, y)
#rank the feature importances and keep the top 9 as the coarse screen
#(sorting (importance, name) pairs avoids the tie problem of list.index)
importances = list(clf.feature_importances_)
cols = list(X.columns)
col_top = sorted(zip(importances, cols), reverse=True)[:9]
col_top

[(0.32921535609407487, 'fico_score'),
 (0.12722011801837413, 'age_oldest_tr'),
 (0.10428283609878117, 'ltv'),
 (0.084528506996671832, 'tot_derog'),
 (0.074201234487731263, 'rev_util'),
 (0.071344607737941074, 'tot_tr'),
 (0.067959721613501806, 'tot_rev_line'),
 (0.027759028579637572, 'msrp'),
 (0.01973823706017484, 'tot_rev_debt')]

col = [i[1] for i in col_top]

3. Fine variable screening and data cleaning

#The WoE module comes from GitHub
#its line cuts, bins = pd.qcut(df["X"], self.qnt_num, retbins=True, labels=False)
#raises an error in pd.qcut; add duplicates='raise' or duplicates='drop' to fix it
from WoE import *
import warnings
warnings.filterwarnings("ignore")
iv_c = {}
for i in col:
    try:
        iv_c[i] = WoE(v_type='c').fit(data_filled[i], data_filled['bad_ind']).optimize().iv()
    except Exception:
        print(i)  # report variables whose binning failed

#Variable binning and WoE transformation

woe_a = data_filled[col].apply(lambda x: WoE(v_type='c').fit(x, data_filled['bad_ind'])
        .optimize().fit_transform(x, data_filled['bad_ind']))
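The WoE class used here is third-party, but the underlying formulas are standard: for bin i, WoE_i = ln((bad_i/bad_total)/(good_i/good_total)), and IV sums (bad% − good%)·WoE_i over bins (some references flip the ratio, which only changes the sign convention). A minimal self-contained sketch:

```python
import numpy as np
import pandas as pd

def woe_iv(binned, target):
    """WoE per bin and total IV for a pre-binned feature and a 0/1 target."""
    tab = pd.crosstab(binned, target)        # rows: bins; columns: 0 (good), 1 (bad)
    good = tab[0] / tab[0].sum()             # share of goods per bin
    bad = tab[1] / tab[1].sum()              # share of bads per bin
    woe = np.log(bad / good)
    iv = ((bad - good) * woe).sum()
    return woe, iv

# symmetric toy split: bin A is good-heavy, bin B is bad-heavy
binned = pd.Series(['A'] * 40 + ['B'] * 40)
target = pd.Series([0] * 30 + [1] * 10 + [0] * 10 + [1] * 30)
woe, iv = woe_iv(binned, target)             # iv == ln(3) for this split
```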

# Build the classification model

from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in sklearn 0.20
X = woe_a
y = data_filled['bad_ind']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit a logistic regression to predict the probability of default

import itertools
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, recall_score, classification_report
lr = LogisticRegression(C=1, penalty='l1', solver='liblinear')  # liblinear supports the l1 penalty
lr.fit(X_train,y_train.values.ravel())
y_pred = lr.predict(X_test.values)
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test,y_pred)
np.set_printoptions(precision=2)
print("Recall metric in the testing dataset: ", 
       cnf_matrix[1,1]/(cnf_matrix[1,0]+cnf_matrix[1,1]))

#Plot the confusion matrix

import matplotlib.pyplot as plt

def plot_confusion_matrix(cm, classes, title='Confusion matrix', cmap=plt.cm.Blues):
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=0)
    plt.yticks(tick_marks, classes)
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
# Plot non-normalized confusion matrix
class_names = [0,1]
plt.figure()
plot_confusion_matrix(cnf_matrix
                      , classes=class_names
                      , title='Confusion matrix')
plt.show()
#accuracy: (1744+44)/(1744+44+433+38) = 79.1%


## Add class weighting (cost-sensitive learning) and refit

lr = LogisticRegression(C=1, penalty='l1', class_weight='balanced', solver='liblinear')
lr.fit(X_train,y_train.values.ravel())
y_pred = lr.predict(X_test.values)
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test,y_pred)
np.set_printoptions(precision=2)
print("Recall metric in the testing dataset: ", 
       cnf_matrix[1,1]/(cnf_matrix[1,0]+cnf_matrix[1,1]))
class_names = [0,1]
plt.figure()
plot_confusion_matrix(cnf_matrix
                      , classes=class_names
                      , title='Confusion matrix')
plt.show() 
# accuracy: (1170+366)/(1170+622+366+111) = 68%


## Model validation

from sklearn.metrics import roc_curve, auc
fpr, tpr, threshold = roc_curve(y_test, y_pred, drop_intermediate=False)  ###compute TPR and FPR
roc_auc = auc(fpr, tpr)  ###compute the AUC
plt.figure()  
lw = 2  
plt.figure(figsize=(10,10))  
plt.plot(fpr, tpr, color='darkorange',
         lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)  ###FPR on the x-axis, TPR on the y-axis
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')  
plt.xlim([0.0, 1.0])  
plt.ylim([0.0, 1.05])  
plt.xlabel('False Positive Rate')  
plt.ylabel('True Positive Rate')  
plt.title('Receiver operating characteristic example')  
plt.legend(loc="lower right")  
plt.show()
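Note that `roc_curve` above is fed the hard 0/1 predictions, which yields a degenerate ROC with a single interior point; scoring with predicted probabilities (`predict_proba`) gives the full curve. A minimal sketch on synthetic data (all names below are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
X_demo = rng.randn(200, 3)
y_demo = (X_demo[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)

clf = LogisticRegression().fit(X_demo, y_demo)
proba = clf.predict_proba(X_demo)[:, 1]                 # P(bad) for each sample -> full ROC curve
auc_proba = roc_auc_score(y_demo, proba)
auc_label = roc_auc_score(y_demo, clf.predict(X_demo))  # hard labels -> one-point ROC
```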


#Plot the KS curve from the tpr/fpr returned by roc_curve
fig, ax = plt.subplots()
ax.plot(1 - threshold, tpr, label='tpr')  # the KS curve runs over descending predicted probability, hence the 1 - threshold mirror
ax.plot(1 - threshold, fpr, label='fpr')
ax.plot(1 - threshold, tpr - fpr, label='KS')
plt.xlabel('score')
plt.title('KS Curve')
legend = ax.legend(loc='upper left', shadow=True, fontsize='x-large')
plt.show()
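The KS statistic that the plot visualizes is simply the maximum gap between the TPR and FPR curves, so it can also be read off numerically without a plot (toy scores below):

```python
import numpy as np
from sklearn.metrics import roc_curve

# toy scores: class-1 samples tend to score higher
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.6, 0.4, 0.7, 0.8, 0.9])
fpr, tpr, _ = roc_curve(y_true, scores)
ks = np.max(tpr - fpr)    # KS statistic = maximum gap between the two curves
```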


4. Scorecard development

# Compute the points for each level of each variable
scorecard_parts = []
for i in X.columns:
    temp = WoE(v_type='c').fit(data_filled[i], data_filled['bad_ind']).optimize().bins
    temp['name'] = [i] * len(temp)
    scorecard_parts.append(temp)
scorecard = pd.concat(scorecard_parts, axis=0)
scorecard['score'] = scorecard['woe'].map(lambda x: -int(np.ceil(28.8539*x)))
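The 28.8539 factor is the standard points-to-double-odds scaling: with PDO = 20 points, factor = PDO / ln 2 ≈ 28.8539, so every 20 points doubles the odds (the constant 513 added when totalling scores later appears to play the role of the base/offset score):

```python
import math

PDO = 20                     # points to double the odds
factor = PDO / math.log(2)   # ≈ 28.8539, the constant used in the line above
# one bin's points: score_i = -ceil(factor * woe_i), matching the map() above
```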

# Compute each sample's total score from the original data table

def fico_score_cnvnt(x):
    if x < 6.657176e+02:
        return -21
    else:
        return 16
    
def age_oldest_tr_cnvnt(x):
    if x < 1.618624e+02:
        return -9
    else:
        return 20

def rev_util_cnvnt(x):
    if x < 7.050000e+01:
        return 7
    else:
        return -19    

def ltv_cnvnt(x):
    if x < 9.450000e+01:
        return 16
    else:
        return -8    

def tot_tr_cnvnt(x):
    if x < 1.085218e+01:
        return -13
    elif x < 1.330865e+01:
        return -4
    elif x < 1.798767e+01:
        return 3
    else:
        return 11    

def tot_rev_line_cnvnt(x):
    if x < 1.201000e+04:
        return -12
    else:
        return 19   

def tot_derog_cnvnt(x):
    if x < 1.072596e+00:
        return 8
    else:
        return -13   

def purch_price_cnvnt(x):
    if x < 1.569685e+04:
        return -5
    else:
        return 3    

def tot_rev_debt_cnvnt(x):
    if x < 1.024000e+04:
        return -2
    else:
        return 8
func = [fico_score_cnvnt, age_oldest_tr_cnvnt, rev_util_cnvnt, ltv_cnvnt,
        tot_tr_cnvnt, tot_rev_line_cnvnt, tot_derog_cnvnt,
        purch_price_cnvnt, tot_rev_debt_cnvnt]

Compute the scores

X_score_dict = {i:j for i,j in zip(X.columns,func)}
X_score = data_filled[X.columns].copy()
for i in X_score.columns:
    X_score[i] = X_score[i].map(X_score_dict[i])
X_score['SCORE'] = X_score[X.columns].apply(lambda x: sum(x) + 513, axis = 1)
X_score_label = pd.concat([X_score, data_filled['bad_ind']], axis = 1)
X_score_label.head()
import seaborn as sns
fig, ax = plt.subplots()
ax1 = sns.kdeplot(X_score_label[X_score_label['bad_ind'] == 1]['SCORE'],label='1')
ax2 = sns.kdeplot(X_score_label[X_score_label['bad_ind'] == 0]['SCORE'],label='0')
plt.show()


5. Summary and outlook

Following the scorecard-building methodology, this article went from data preprocessing through modeling to construct a simple automatic credit scoring system.

A machine-learning-based scorecard system can stay effective by dropping stale data (say, older than two years), automatically re-fitting and re-evaluating the model, and continually refining the feature variables.

6. References

Hellobi Live | Build a credit scorecard in one hour (small-scale financial data analysis with Python)

    Original author: 你竟然说我黑
    Original article: https://zhuanlan.zhihu.com/p/37160149