- Scala Machine Learning Projects
- Md. Rezaul Karim
Selecting the best model for deployment
From the preceding results, it can be seen that the LR and SVM models have similar false positive rates, both higher than those of Random Forest and DT. So we can say that DT and Random Forest achieve better overall accuracy in terms of true positive counts. Let's check the validity of this claim with the prediction distributions shown as pie charts for each model:
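The comparison above rests on the confusion-matrix cells for each model. As a minimal sketch of how those counts and the false positive rate can be computed, here is a self-contained example on made-up (prediction, label) pairs; in the chapter these pairs would come from each model's output on the test set:

```scala
object ConfusionSketch {
  // Hypothetical (prediction, label) pairs for one model; the values here
  // are made up purely for illustration.
  val predictionAndLabels: Seq[(Double, Double)] =
    Seq((1.0, 1.0), (1.0, 0.0), (0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0))

  // Count the four confusion-matrix cells for the positive class (churn = 1.0).
  def confusion(pairs: Seq[(Double, Double)]): (Int, Int, Int, Int) = {
    val tp = pairs.count { case (p, l) => p == 1.0 && l == 1.0 }
    val fp = pairs.count { case (p, l) => p == 1.0 && l == 0.0 }
    val tn = pairs.count { case (p, l) => p == 0.0 && l == 0.0 }
    val fn = pairs.count { case (p, l) => p == 0.0 && l == 1.0 }
    (tp, fp, tn, fn)
  }

  def main(args: Array[String]): Unit = {
    val (tp, fp, tn, fn) = confusion(predictionAndLabels)
    // False positive rate = FP / (FP + TN)
    println(s"TP=$tp FP=$fp TN=$tn FN=$fn FPR=${fp.toDouble / (fp + tn)}")
  }
}
```

Running this per model makes the "same but higher false positive rate" observation directly checkable from the raw predictions rather than from the pie charts alone.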

Now, it's worth mentioning that with Random Forest we actually get higher accuracy, but it is a resource- and time-consuming job; the training, in particular, takes considerably longer than for LR and SVM.
Therefore, if you have limited memory or computing power, it is recommended to increase the Java heap space prior to running this code to avoid OOM errors.
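As a sketch of how the heap can be raised, assuming the job is launched with spark-submit (the class name and jar path below are hypothetical placeholders; adjust the sizes to your machine):

```shell
# Give both the driver and the executors a larger heap before training.
spark-submit \
  --driver-memory 8G \
  --executor-memory 8G \
  --class ChurnPrediction \
  target/scala-2.11/churn-assembly.jar

# Or, for a plain JVM/SBT run, raise the heap directly:
export JAVA_OPTS="-Xmx8G"
```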
Finally, if you want to deploy the best model (that is, Random Forest in our case), it is recommended to save the cross-validated model immediately after the fit() method invocation:
// Save the workflow
cvModel.write.overwrite().save("model/RF_model_churn")
Your trained model will be saved to that location. The directory will include:
- The best model
- Estimator
- Evaluator
- The metadata of the training itself
Now the next task will be restoring the same model, as follows:
// Load the workflow back
import org.apache.spark.ml.tuning.CrossValidatorModel
val cvModel = CrossValidatorModel.load("model/RF_model_churn/")
Next, we need to pass the test set through the model pipeline, which maps the features according to the same mechanism we described in the preceding feature engineering step:
val predictions = cvModel.transform(Preprocessing.testSet)
Finally, we evaluate the restored model:
val evaluator = new BinaryClassificationEvaluator()
.setLabelCol("label")
.setRawPredictionCol("prediction")
val accuracy = evaluator.evaluate(predictions)
println("Accuracy: " + accuracy)
println(evaluator.explainParams())
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
val predictionAndLabels = predictions
    .select("prediction", "label")
    .rdd.map(row => (row.getDouble(0), row.getDouble(1)))
val metrics = new BinaryClassificationMetrics(predictionAndLabels)
val areaUnderPR = metrics.areaUnderPR
println("Area under the precision-recall curve: " + areaUnderPR)
val areaUnderROC = metrics.areaUnderROC
println("Area under the receiver operating characteristic (ROC) curve: " + areaUnderROC)
>>>
You will receive the following output:

Well done! We have managed to reuse the model and make the same predictions. However, probably due to the randomness of the data, we observed slightly different predictions.
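The area under the ROC curve reported above has a simple geometric reading: sweep the decision threshold from high to low, trace the (FPR, TPR) points, and integrate with the trapezoidal rule. A minimal self-contained sketch on made-up scored predictions (in the chapter the scores would come from the model's output column):

```scala
object RocSketch {
  // Toy (score, label) pairs, made up for illustration; assumes at least
  // one positive and one negative example.
  val scored: Seq[(Double, Double)] =
    Seq((0.9, 1.0), (0.8, 1.0), (0.7, 0.0), (0.6, 1.0), (0.4, 0.0), (0.2, 0.0))

  def areaUnderRoc(pairs: Seq[(Double, Double)]): Double = {
    val pos = pairs.count(_._2 == 1.0).toDouble
    val neg = pairs.size - pos
    // Sweep thresholds from the highest score down, accumulating raw
    // (false positive, true positive) counts at each step.
    val sorted = pairs.sortBy(-_._1)
    val points = sorted.scanLeft((0.0, 0.0)) { case ((fp, tp), (_, label)) =>
      if (label == 1.0) (fp, tp + 1) else (fp + 1, tp)
    }.map { case (fp, tp) => (fp / neg, tp / pos) } // normalize to (FPR, TPR)
    // Trapezoidal rule over consecutive curve points.
    points.sliding(2).map { case Seq((x1, y1), (x2, y2)) =>
      (x2 - x1) * (y1 + y2) / 2
    }.sum
  }

  def main(args: Array[String]): Unit =
    println(f"AUC = ${areaUnderRoc(scored)}%.3f") // prints AUC = 0.889
}
```

This is what `metrics.areaUnderROC` computes at scale; the area under the precision-recall curve is built the same way from (recall, precision) points.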