
Chapter 2. Data Cleaning

Without any further ado, let's kick-start the engine and begin our foray into the world of predictive analytics. However, you need to remember that our fuel is data. In order to do any predictive analysis, you need to access and import data for the engine to rev up.

I assume that you have already installed Python and the required packages with an IDE of your choice. Predictive analytics, like any other art, is best learnt hands-on and practiced as frequently as possible. You will get the most out of this book if you open a Python IDE of your choice and practice the explained concepts on your own. So, if you haven't installed Python and its packages yet, now is the time. If not all the packages, at least pandas should be installed, as it is the mainstay of the things we will learn in this chapter.

After reading this chapter, you should be familiar with the following topics:

  • Handling various data import scenarios, that is, importing various kinds of datasets (.csv, .txt), different kinds of delimiters (comma, tab, pipe), and different methods (read_csv, read_table); see the short sketch after this list
  • Getting basic information, such as dimensions, column names, and summary statistics
  • Getting basic data cleaning done, that is, removing NAs and blank spaces, imputing values for missing data points, changing a variable type, and so on
  • Creating dummy variables in various scenarios to aid modelling
  • Generating simple plots like scatter plots, bar charts, histograms, box plots, and so on
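The following is a minimal sketch of what these steps look like in pandas; the file names and the column names (Age, Gender) are hypothetical placeholders, not datasets from the book's folder:

import pandas as pd

# Hypothetical file and column names, used only for illustration
data = pd.read_csv('my_data.csv')                          # comma-delimited (the default)
# data = pd.read_table('my_data.txt', sep='\t')            # tab-delimited
# data = pd.read_csv('my_data.txt', sep='|')               # pipe-delimited

print(data.shape)       # dimensions: (number of rows, number of columns)
print(data.columns)     # column names
print(data.describe())  # summary statistics for the numerical columns

data = data.dropna()                                       # drop rows containing NAs
# data['Age'] = data['Age'].fillna(data['Age'].mean())     # or impute missing values with the mean

dummies = pd.get_dummies(data['Gender'], prefix='Gender')  # dummy variables for a categorical column

data['Age'].hist()      # a simple histogram (matplotlib is required for plotting)

Each of these one-liners is explained in detail in the rest of the chapter.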

From now on, we will be using a lot of publicly available datasets to illustrate concepts and examples. All the datasets used have been stored in a Google Drive folder, which can be accessed from this link: https://goo.gl/zjS4C6.

Note

This folder is called "Datasets for Predictive Modelling with Python" and has a subfolder dedicated to each chapter of the book. Each subfolder contains the datasets used in that chapter.

The dataset paths used in this book are paths on my local computer. You can download the datasets from these subfolders to your local computer before using them. Better still, you can download the entire folder at once and save it somewhere on your local computer.
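Once you have downloaded the folder, the path you pass to pandas should point to your own copy; the exact path and file name below are hypothetical:

import pandas as pd

# Replace this with the location where you saved the downloaded folder (hypothetical path)
filepath = 'C:/Datasets for Predictive Modelling with Python/Chapter 2/my_data.csv'
data = pd.read_csv(filepath)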
