
Chapter 2. Data Cleaning

Without any further ado, let's kick-start the engine and begin our foray into the world of predictive analytics. However, remember that our fuel is data. In order to do any predictive analysis, one needs to access and import data for the engine to rev up.

I assume that you have already installed Python and the required packages with an IDE of your choice. Predictive analytics, like any other art, is best learnt hands-on and practiced as frequently as possible. This book will be of most use if you open a Python IDE of your choice and practice the explained concepts on your own. So, if you haven't installed Python and its packages yet, now is the time. If not all the packages, at least pandas should be installed, as it is the mainstay of the things we will learn in this chapter.
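If you want to confirm your setup before moving on, a quick check is to install pandas (for example, with pip) and import it. The exact installation command depends on your environment, so treat the following as a minimal sketch rather than the only way to do it:

    # Install pandas if it is not already available (run in a terminal):
    #   pip install pandas
    # Then confirm that it imports correctly from a Python session:
    import pandas as pd
    print(pd.__version__)  # prints the installed pandas version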

After reading this chapter, you should be familiar with the following topics:

  • Handling various kinds of data import scenarios, that is, importing various kinds of datasets (.csv, .txt), different kinds of delimiters (comma, tab, pipe), and different methods (read_csv, read_table); a minimal sketch follows this list
  • Getting basic information, such as dimensions, column names, and statistics summary
  • Getting basic data cleaning done, that is, removing NAs and blank spaces, imputing values for missing data points, changing a variable type, and so on
  • Creating dummy variables in various scenarios to aid modelling
  • Generating simple plots like scatter plots, bar charts, histograms, box plots, and so on
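To give a flavour of the first two items, the following is a minimal sketch using pandas; the filenames are placeholders, so point them at datasets from the chapter's folder on your local computer:

    import pandas as pd

    # Importing datasets with different delimiters (filenames are placeholders)
    df_csv = pd.read_csv('data.csv')            # comma-separated file
    df_tab = pd.read_table('data.txt')          # tab-separated by default
    df_pipe = pd.read_csv('data.txt', sep='|')  # pipe-delimited file

    # Basic information about a dataset
    print(df_csv.shape)       # dimensions: (rows, columns)
    print(df_csv.columns)     # column names
    print(df_csv.describe())  # summary statistics for numeric columns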

From now on, we will be using a lot of publicly available datasets to illustrate concepts and examples. All the used datasets have been stored in a Google Drive folder, which can be accessed from this link: https://goo.gl/zjS4C6.

Note

This folder is called "Datasets for Predictive Modelling with Python". This folder has a subfolder dedicated to each chapter of the book. Each subfolder contains the datasets that were used in the chapter.

The dataset paths used in this book are paths on my local computer. You can download the datasets from these subfolders to your local computer before using them. Better still, you can download the entire folder at once and save it somewhere on your local computer.
