- R Deep Learning Cookbook
- Dr. PKS Prakash, Achyutuni Sri Krishna Rao
How to do it...
- The data is imported using a standard R function, as shown in the following code. The read.csv call imports the data, which is then transformed into matrix format, and the features used for modeling are selected as defined in xFeatures and yFeatures. The next step in TensorFlow is to set up a graph to run the optimization:
# Loading input and test data
xFeatures = c("Temperature", "Humidity", "Light", "CO2", "HumidityRatio")
yFeatures = "Occupancy"
occupancy_train <- as.matrix(read.csv("datatraining.txt", stringsAsFactors = TRUE))
occupancy_test <- as.matrix(read.csv("datatest.txt", stringsAsFactors = TRUE))
# Subset features for modeling and transform to numeric values
occupancy_train <- apply(occupancy_train[, c(xFeatures, yFeatures)], 2, FUN = as.numeric)
occupancy_test <- apply(occupancy_test[, c(xFeatures, yFeatures)], 2, FUN = as.numeric)
# Data dimensions
nFeatures <- length(xFeatures)
nRow <- nrow(occupancy_train)
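- As a quick sanity check (this snippet is not part of the original recipe), the dimensions and the class balance of the Occupancy label can be inspected with base R:
# Optional sanity check on the imported data
dim(occupancy_train)                 # expect nRow rows and nFeatures + 1 columns
table(occupancy_train[, yFeatures])  # 0/1 class balance of the Occupancy label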
- Before setting up the graph, let's reset the graph using the following command:
# Reset the graph
tf$reset_default_graph()
- Additionally, let's start an interactive session: it installs itself as the default session, which allows us to evaluate variables without explicitly referring to the session object:
# Starting session as interactive session
sess <- tf$InteractiveSession()
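- For illustration (this snippet is not in the original recipe), with an interactive session a tensor can be evaluated directly through its eval() method, which is equivalent to calling sess$run() on it:
# eval() uses the default (interactive) session implicitly
a <- tf$constant(2)
a$eval()        # same result as sess$run(a)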
- Define the logistic regression model in TensorFlow:
# Setting-up logistic regression graph
# Input features as a constant tensor (nRow x nFeatures)
x <- tf$constant(unlist(occupancy_train[, xFeatures]), shape=c(nRow, nFeatures), dtype=np$float32)
# Weights and bias as trainable variables
W <- tf$Variable(tf$random_uniform(shape(nFeatures, 1L)))
b <- tf$Variable(tf$zeros(shape(1L)))
# Symbolic logits: y = xW + b
y <- tf$matmul(x, W) + b
- The input feature x is defined as a constant, as it is an input to the system. The weight W and bias b are defined as variables that will be adjusted during optimization. The tensor y is a symbolic expression in terms of x, W, and b. W is initialized from a random uniform distribution and b is initialized to zero.
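- Note that y holds logits rather than probabilities. If predicted probabilities are needed, a sigmoid node can be added to the graph; this is a minimal sketch, and pred is an illustrative name not part of the original recipe:
# Probability node; evaluate with sess$run(pred) after the
# variables have been initialized (see the initialization step below)
pred <- tf$sigmoid(y)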
- The next step is to set up the cost function for logistic regression:
# Setting-up cost function and optimizer
y_ <- tf$constant(unlist(occupancy_train[, yFeatures]), dtype="float32", shape=c(nRow, 1L))
cross_entropy <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(labels=y_, logits=y, name="cross_entropy"))
optimizer <- tf$train$GradientDescentOptimizer(0.15)$minimize(cross_entropy)
- The variable y_ is the response variable. Logistic regression is set up using sigmoid cross-entropy as the loss function. The loss is passed to a gradient descent optimizer with a learning rate of 0.15. Before running the optimization, initialize the global variables:
# Initialize global variables in the current session
init <- tf$global_variables_initializer()
sess$run(init)
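- As a sanity check (a minimal sketch, not in the original recipe), the loss TensorFlow reports can be reproduced in base R using the numerically stable form documented for sigmoid_cross_entropy_with_logits, max(x, 0) - x * z + log(1 + exp(-abs(x))) for logits x and labels z:
# Manual cross-entropy check; run only after the initialization above
logits <- sess$run(y)
labels <- occupancy_train[, yFeatures]
mean(pmax(logits, 0) - logits * labels + log(1 + exp(-abs(logits))))
sess$run(cross_entropy)  # should match the value above up to float precision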
- Execute the gradient descent algorithm for the optimization of weights using cross entropy as the loss function:
# Running optimization
for (step in 1:5000) {
  sess$run(optimizer)
  if (step %% 20 == 0)
    cat(step, "-", sess$run(W), sess$run(b), "==>", sess$run(cross_entropy), "\n")
}
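- The recipe ends with training; as a hedged extension (the names xt, test_prob, and test_pred are illustrative and not in the original), the learned W and b can be applied to the held-out test set to estimate accuracy:
# Evaluating the trained model on the test data
xt <- tf$constant(unlist(occupancy_test[, xFeatures]),
                  shape=c(nrow(occupancy_test), nFeatures), dtype=np$float32)
test_prob <- sess$run(tf$sigmoid(tf$matmul(xt, W) + b))
test_pred <- as.numeric(test_prob > 0.5)
mean(test_pred == occupancy_test[, yFeatures])  # test-set accuracy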