In today's competitive telecommunication business, reliable customer retention techniques give an extra advantage to any telecommunication company, and customer churn prediction is considered a major instrument for customer retention (Kim et al. 2014). An experiment was conducted on the data of an Iranian mobile company by applying different data-mining techniques. Four techniques were employed and their outputs compared with one another to arrive at a more accurate hybrid method: Decision Tree (DT), Artificial Neural Network (ANN), K-Nearest Neighbors (KNN) and Support Vector Machine (SVM). The dataset contained 3,150 customers' records over a one-year span (Keramati et al. 2014). For the experiment, the dataset was separated into training and test sets containing four labels: churn, non-churn, labeled non-churn by mistake, and labeled churn by mistake. Each experiment was conducted on a training set of 347 records labeled as churned and 1,858 records labeled as non-churned. Using this procedure, the authors arranged 5 blocks on which to run the classification techniques and compare the results.

To examine how decision trees handle the churn prediction problem, they built an ANOVA complete block design. For every decision-tree run, each cell of the table contained three values: recall (RE), precision (PR) and misclassification (MI). Among the several decision trees, the Random Forest implementation in WEKA produced the most acceptable response in both F-score and misclassification measures. The ANN classification procedure was then used to find the most suitable network for the churn prediction problem. To monitor performance, they built networks with one and with two hidden layers across 10 randomly arranged new experiments. The results showed that a network with two hidden layers can beat the performance of a network with just one hidden layer.
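The three values recorded per run follow directly from the cells of a binary confusion matrix. A minimal sketch, using hypothetical counts rather than the paper's actual numbers:

```python
def classification_measures(tp, fp, fn, tn):
    """Recall (RE), precision (PR), F-score and misclassification (MI)
    computed from the four cells of a binary confusion matrix."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)
    misclassification = (fp + fn) / (tp + fp + fn + tn)
    return recall, precision, f_score, misclassification

# Hypothetical counts for one classifier run (illustrative only):
re, pr, f1, mi = classification_measures(tp=300, fp=50, fn=47, tn=1808)
```

Reporting all three measures matters here because the classes are imbalanced (347 churners vs. 1,858 non-churners): a classifier that never predicts churn would still score a low misclassification rate while having zero recall.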
The KNN technique was employed with four distance metrics as tuning parameters: Euclidean, city block, cosine and correlation. The outcome demonstrated that the best F-score was obtained using the cosine distance with 1 neighbor. For the SVM technique, RBF, MLP and polynomial kernels were the tuning parameters used to compare F-scores; among these, the polynomial kernel achieved the best F-score (Keramati et al. 2014). The dataset had 11 features: number of failed calls, complaints, subscription length, charge amount, seconds of use, frequency of use, status, frequency of SMS, distinct called numbers, age group and type of service. For feature extraction the authors applied the DT classifier to 2,048 different feature subsets. The final result of the feature extraction showed that frequency of use (FU), number of complaints (CO) and seconds of use (SU) are the most important features (Keramati et al. 2014).
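The best-performing KNN configuration, cosine distance with a single neighbor, can be sketched in a few lines. The toy feature vectors and labels below are hypothetical, not drawn from the Iranian dataset:

```python
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def knn_predict(train_X, train_y, query, k=1, dist=cosine_distance):
    """Majority vote among the k training points nearest to the query."""
    nearest = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical 2-feature training data (e.g., frequency of use, complaints):
X = [[5, 1], [4, 2], [1, 5], [2, 4]]
y = ["non-churn", "non-churn", "churn", "churn"]
```

With k=1 the prediction is simply the label of the single closest customer; cosine distance compares the direction of the feature vectors rather than their magnitude.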
In their proposed methodology, the authors used a hybrid method applying all four classifiers. They separated the dataset into four training sets: a KNN train set, an SVM train set, an ANN train set and a DT train set. In the suggested technique, the score of each classifier is multiplied by its decision, and the sum of these products is compared with the overall score of the interior classifiers: if the summation is greater than half of the overall score, the customer is considered a churner. In their proposed table they also added validity and reliability alongside the scores, average and variance. At the end of the calculation, they showed that a telecommunication company can obtain 95% accuracy from this hybrid technique in terms of precision and recall measures (Keramati et al. 2014).
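The combination rule described above can be sketched as a weighted vote. This is a minimal reading of the description, assuming binary decisions (1 = churn, 0 = non-churn); the scores below are illustrative, not the paper's values:

```python
def hybrid_decision(scores, decisions):
    """Weighted-vote combination: each interior classifier's score is
    multiplied by its binary decision (1 = churn, 0 = non-churn), and the
    customer is flagged as a churner when the weighted sum exceeds half
    of the total score of all classifiers."""
    weighted_sum = sum(s * d for s, d in zip(scores, decisions))
    return "churn" if weighted_sum > sum(scores) / 2 else "non-churn"

# Hypothetical scores for DT, ANN, KNN, SVM (illustrative only):
result = hybrid_decision([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1])
```

Here the weighted sum is 0.9 + 0.8 + 0.6 = 2.3, which exceeds half the total score (1.5), so the customer is flagged as a churner: strong classifiers that vote churn outweigh a weaker dissenting one.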