Hi everyone, I have a question about CV (cross-validation).
I'm using two different CV methods from sklearn.model_selection:
1. train_test_split
2. StratifiedKFold
Training the same model with each, I get:
1. train acc ~90%, test acc ~75% (overfitting)
2. train acc ~90%, test acc ~30% (average acc across folds)
Why is the gap on the test set so large?
Does this mean the model from method 1 is massively overfitting?
Or is my dataset simply impossible to learn from?
Or did I just make a dumb mistake in my code?
In the Keras logs:
with method 1, val_acc rises along with train acc, but with method 2 val_acc sits dead flat at 30% every epoch.
Python code:
1.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=87)
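One thing worth checking: by default train_test_split does not stratify, so the class proportions in the single split can differ from what StratifiedKFold produces. Passing stratify=y makes the two setups more comparable. A minimal sketch, using make_classification as a stand-in for your real X and y (an assumption, since the actual dataset isn't shown):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy data standing in for the real X, y (assumption: a multi-class task).
X, y = make_classification(n_samples=400, n_classes=4, n_informative=6,
                           random_state=87)

# stratify=y keeps the class proportions the same in train and test,
# which makes this single split behave more like one StratifiedKFold fold.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=87, stratify=y)
```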
2.
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=4)
for train_index, test_index in skf.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
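Two common causes of a flat ~30% across folds are (a) StratifiedKFold's default shuffle=False combined with ordered data, and (b) reusing the same already-trained Keras model object across folds instead of rebuilding it inside the loop. A hedged sketch of the per-fold pattern, using LogisticRegression and make_classification as stand-ins for your Keras model and data (both assumptions, since neither is shown in the post):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

# Toy data in place of the real X, y (assumption).
X, y = make_classification(n_samples=400, n_classes=4, n_informative=6,
                           random_state=87)

# shuffle=True guards against any ordering in the data; a fresh model
# is constructed inside the loop, so no weights carry over between folds.
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=87)
fold_accs = []
for train_index, test_index in skf.split(X, y):
    model = LogisticRegression(max_iter=1000)  # new model every fold
    model.fit(X[train_index], y[train_index])
    fold_accs.append(model.score(X[test_index], y[test_index]))

print("mean CV accuracy:", np.mean(fold_accs))
```

If the Keras version builds and compiles the model once outside the loop, each later fold starts from weights already fitted on earlier folds, which can produce misleading per-fold scores.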
Any pointers would be much appreciated, thanks!