Activity: Preparing to Train a Predictive Model for the Employee-Retention Problem
Suppose you are hired to do freelance work for a company that wants to find insights into why its employees are leaving. The company has compiled a set of data it thinks will be helpful in this respect. It includes details on employee satisfaction levels, evaluations, time spent at work, department, and salary.
The company shares this data with you by sending you a file called hr_data.csv and asking what you think can be done to help stop employees from leaving. In this activity, we apply the concepts we've learned thus far to this real-life problem. In particular, we seek to:
- Determine a plan for using predictive analytics to provide impactful business insights, given the available data.
- Prepare the data for use in machine learning models.
- With the chapter-2-workbook.ipynb notebook file open, scroll to the Activity section.
- Check the head of the table by running the following code:
%%bash
head ../data/hr-analytics/hr_data.csv
Judging by the output, convince yourself that it appears to be in standard CSV format. For CSV files, we should be able to simply load the data with pd.read_csv.
- Load the data with Pandas by running df = pd.read_csv('../data/hr-analytics/hr_data.csv'). Use tab completion to help type the file path.
- Inspect the columns by printing df.columns and make sure the data has loaded as expected by printing the DataFrame head and tail with df.head() and df.tail():
We can see that it appears to have loaded correctly. Based on the tail index values, there are nearly 15,000 rows; let's make sure we didn't miss any.
- Check the number of rows (including the header) in the CSV file with the following code:
with open('../data/hr-analytics/hr_data.csv') as f:
print(len(f.read().splitlines()))
- Compare this result to len(df) to make sure we've loaded all the data:
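A minimal sketch of this comparison follows, assuming df was loaded as in the previous step (the exact check in the notebook may differ):
with open('../data/hr-analytics/hr_data.csv') as f:
    n_lines = len(f.read().splitlines())
# The file includes a header row, so the DataFrame should have one fewer row
assert len(df) == n_lines - 1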
Now that our client's data has been properly loaded, let's think about how we can use predictive analytics to find insights into why their employees are leaving.
Let's run through the first steps for creating a predictive analytics plan:
- Look at the available data: We've already done this by looking at the columns, data types, and the number of samples
- Determine the business needs: The client has clearly expressed their needs: reduce the number of employees who leave
- Assess the data for suitability: Let's try to determine a plan that can help satisfy the client's needs, given the provided data
Recall, as mentioned earlier, that effective analytics techniques lead to impactful business decisions. With that in mind, if we were able to predict how likely an employee is to quit, the business could selectively target those employees for special treatment. For example, their salary could be raised or their number of projects reduced. Furthermore, the impact of these changes could be estimated using the model!
To assess the validity of this plan, let's think about our data. Each row represents an employee who either works for the company or has left, as labeled by the column named left. We can, therefore, train a model to predict this target, given a set of features.
- Assess the target variable. Check the distribution and number of missing entries by running the following code:
df.left.value_counts().plot('barh')
print(df.left.isnull().sum())
Here's the output of the second code line:
About three-quarters of the samples are employees who have not left; the group who has left makes up the remaining quarter. This tells us we are dealing with an imbalanced classification problem, which means we'll have to take special measures to account for each class when calculating accuracies. We also see that none of the target values are missing (no NaN values).
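As a quick sanity check, the class balance can also be viewed as fractions. This is a minimal sketch rather than the notebook's exact code:
# Fraction of samples in each class of the target variable
print(df.left.value_counts(normalize=True))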
Now, we'll assess the features:
- Print the data type of each feature by executing df.dtypes. Observe how we have a mix of continuous and discrete features:
- Display the feature distributions by running the following code:
for f in df.columns:
    try:
        fig = plt.figure()
        …
    print('-'*30)
This code snippet is a little complicated, but it's very useful for showing an overview of both the continuous and discrete features in our dataset. Essentially, it assumes each feature is continuous and attempts to plot its distribution, and reverts to simply plotting the value counts if the feature turns out to be discrete.
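Because the snippet above is elided, here is a minimal sketch of what such a loop could look like. The specific plotting calls (a pandas histogram in the try block and a value-counts fallback) are assumptions for illustration and may differ from the notebook's actual code:
import matplotlib.pyplot as plt

for f in df.columns:
    try:
        # Assume the feature is continuous and plot its distribution
        fig = plt.figure()
        df[f].hist(bins=30)
        plt.title(f)
        plt.show()
    except Exception:
        # Fall back to value counts for discrete / non-numeric features
        plt.close(fig)
        print(df[f].value_counts())
    print('-' * 30)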
The result is as follows:
For many features, we see a wide distribution over the possible values, indicating a good variety in the feature spaces. This is encouraging; features that are strongly grouped around a small range of values may not be very informative for the model. This is the case for promotion_last_5years, where we see that the vast majority of samples are 0.
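One hedged way to quantify this observation is to look at the share of the single most common value in each column; this snippet is for illustration and is not part of the notebook:
# Share of the most common value per column; values near 1.0 indicate
# a feature with little variety, such as promotion_last_5years
print(df.apply(lambda s: s.value_counts(normalize=True).iloc[0]))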
The next thing we need to do is remove any NaN values from the dataset.
- Check how many NaN values are in each column by running the following code:
df.isnull().sum() / len(df) * 100
We can see there are about 2.5% missing for average_montly_hours, 1% missing for time_spend_company, and 98% missing for is_smoker! Let's use a couple of different strategies that we've learned about to handle these.
- Since there is barely any information in the is_smoker metric, let's drop this column. Do this by running del df['is_smoker'].
- Since time_spend_company is an integer field, we'll use the median value to fill the NaN values in this column. This can be done with the following code:
fill_value = df.time_spend_company.median()
df.time_spend_company = df.time_spend_company.fillna(fill_value)
The final column to deal with is average_montly_hours. We could do something similar and use the median or rounded mean as the integer fill value. Instead, let's try to take advantage of its relationship with another variable. This may allow us to fill the missing data more accurately.
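For comparison only, the simpler fill mentioned above might look like the following; we won't run it here, because we'll use the group-based approach instead:
# Alternative (not used): fill with the rounded mean as an integer value
fill_value = int(round(df.average_montly_hours.mean()))
df.average_montly_hours = df.average_montly_hours.fillna(fill_value)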
- Make a boxplot of average_montly_hours segmented by number_project. This can be done by running the following code:
sns.boxplot(x='number_project', y='average_montly_hours', data=df)
We can see how the number of projects is correlated with average_montly_hours, a result that is hardly surprising. We'll exploit this relationship by filling in the NaN values of average_montly_hours differently, depending on the number of projects for that sample. Specifically, we'll use the mean of each group.
- Calculate the mean of each group by running the following code:
mean_per_project = df.groupby('number_project')\
    .average_montly_hours.mean()
mean_per_project = dict(mean_per_project)
print(mean_per_project)
We can then map this onto the number_project column and pass the resulting series object as the argument to fillna.
- Fill the NaN values in average_montly_hours by executing the following code:
fill_values = df.number_project.map(mean_per_project)
df.average_montly_hours = df.average_montly_hours.fillna(fill_values)
- Confirm that df has no more NaN values by running the following assertion test. If it does not raise an error, then you have successfully removed the NaNs from the table:
assert df.isnull().sum().sum() == 0
- Finally, we will transform the string and Boolean fields into integer representations. In particular, we'll manually convert the target variable left from yes and no to 1 and 0 and build the one-hot encoded features. Do this by running the following code:
df.left = df.left.map({'no': 0, 'yes': 1})
df = pd.get_dummies(df)
- Print df.columns to show the fields:
We can see that department and salary have been split into various binary features.
The final step to prepare our data for machine learning is scaling the features, but for various reasons (for example, some models do not require scaling), we'll do it as part of the model-training workflow in the next activity.
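Although scaling is deferred to the next activity, a minimal sketch of how it could be done with scikit-learn's StandardScaler is shown below; the choice of columns here is an assumption for illustration, and the next activity may handle scaling differently:
from sklearn.preprocessing import StandardScaler

# Hypothetical example: scale two of the continuous columns seen earlier
cols_to_scale = ['average_montly_hours', 'time_spend_company']
scaler = StandardScaler()
df_scaled = df.copy()
df_scaled[cols_to_scale] = scaler.fit_transform(df[cols_to_scale])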
- We have completed the data preprocessing and are ready to move on to training models! Let's save our preprocessed data by running the following code:
df.to_csv('../data/hr-analytics/hr_data_processed.csv', index=False)
Again, we pause here to note how well the Jupyter Notebook suited our needs when performing this initial data analysis and clean-up. Imagine, for example, we left this project in its current state for a few months. Upon returning to it, we would probably not remember what exactly was going on when we left it. Referring back to this notebook though, we would be able to retrace our steps and quickly recall what we previously learned about the data. Furthermore, we could update the data source with any new data and re-run the notebook to prepare the new set of data for use in our machine learning algorithms. Recall that in this situation, it would be best to make a copy of the notebook first, so as not to lose the initial analysis.
To summarize, we've learned and applied methods for preparing to train a machine learning model. We started by discussing steps for identifying a problem that can be solved with predictive analytics. This consisted of:
- Looking at the available data
- Determining the business needs
- Assessing the data for suitability
We also discussed how to identify supervised versus unsupervised and regression versus classification problems.
After identifying our problem, we learned techniques for using Jupyter Notebooks to build and test a data transformation pipeline. These techniques included methods and best practices for filling missing data, transforming categorical features, and building train/test data sets.
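For completeness, a train/test split with scikit-learn might look like the following minimal sketch, assuming left is the target as above; the split used later in the chapter may differ:
from sklearn.model_selection import train_test_split

# Separate features and target, then hold out 20% of samples for testing
X = df.drop('left', axis=1)
y = df.left
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)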
In the remainder of this chapter, we will use this preprocessed data to train a variety of classification models. To avoid blindly applying algorithms we don't understand, we start by introducing them and overviewing how they work. Then, we use Jupyter to train and compare their predictive capabilities. Here, we have the opportunity to discuss more advanced topics in machine learning like overfitting, k-fold cross-validation, and validation curves.