Simple Linear Regression

knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)

Source: https://www.guru99.com/pytorch-tutorial.html

Creating the network model

Our network model is a single Linear layer with one input feature and one output feature.

The printed network should look like this:

Net(
  (hidden): Linear(in_features=1, out_features=1, bias=True)
)
library(rTorch)

nn       <- torch$nn
Variable <- torch$autograd$Variable

torch$manual_seed(123)

py_run_string("import torch")
main <- py_run_string("
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.layer = nn.Linear(1, 1)   # one input feature, one output feature

    def forward(self, x):
        x = self.layer(x)
        return x
")


# build a Linear Regression model
net <- main$Net()
print(net)
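
Before training, you can inspect the layer's randomly initialized weight and bias. This is just a quick sanity check, a minimal sketch using the standard weight and bias attributes of the Linear layer defined above:

# inspect the untrained weight and bias of the Linear layer
print(net$layer$weight)
print(net$layer$bias)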

Datasets

Before you start the training process, you need to know your data. Here you create a random dataset to test the model: $y = x^3 \sin(x) + 3x + 0.8\,\mathrm{rand}(100)$

np$random$seed(123L)

x = np$random$rand(100L)
y = np$sin(x) * np$power(x, 3L) + 3L * x + np$random$rand(100L) * 0.8

plot(x, y)

Before you start the training process, you need to convert the numpy arrays to Variables, which are supported by Torch and autograd.

Converting from numpy to tensor

Notice that before converting to a Torch tensor, we first need to convert the R numeric vector to a numpy array:

# convert the numpy arrays to float tensors of shape (100, 1)
x <- r_to_py(x)
y <- r_to_py(y)
x <- torch$from_numpy(x$reshape(-1L, 1L))$float()
y <- torch$from_numpy(y$reshape(-1L, 1L))$float()
print(x)
print(y)
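
Both tensors should now have shape [100, 1]: one row per observation and a single feature column. A quick check of this (a sketch using the tensor's shape attribute):

# confirm the reshape produced 100 rows and 1 column
print(x$shape)
print(y$shape)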

Optimizer and Loss

Next, you should define the Optimizer and the Loss Function for the training process.

# Define Optimizer and Loss Function
optimizer <- torch$optim$SGD(net$parameters(), lr=0.2)
loss_func <- torch$nn$MSELoss()
print(optimizer)
print(loss_func)
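
Before training starts, you can measure the loss of the untrained network as a baseline. The following is a minimal sketch; it relies on reticulate's with() support for Python context managers to wrap the forward pass in torch$no_grad() so no gradients are recorded:

# baseline MSE of the untrained network (no gradients recorded)
with(torch$no_grad(), {
   init_loss <- loss_func(net(x), y)
   cat("initial loss:", init_loss$item(), "\n")
})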

Training

Now let's start the training process. Over 250 epochs, you will iterate over the data to find the best values for the model's weight and bias.

inputs  <- Variable(x)
outputs <- Variable(y)

# base plot: the raw data in blue
plot(x$data$numpy(), y$data$numpy(), col = "blue")

for (i in 1:250) {
   prediction <- net(inputs)               # forward pass
   loss <- loss_func(prediction, outputs)  # mean squared error
   optimizer$zero_grad()                   # clear gradients from the previous step
   loss$backward()                         # backpropagate
   optimizer$step()                        # apply the gradient update

   if (i %% 10 == 0) {
      # plot and show the learning process
      points(x$data$numpy(), prediction$data$numpy(), col = "red")
      cat(i, loss$data$numpy(), "\n")
   }
}
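
Because the network is a single linear layer, its learned weight and bias can be compared with an ordinary least-squares fit of the same data. This is a sketch of that check, assuming the training loop above has run to completion:

# extract the learned slope and intercept from the Linear layer
w <- net$layer$weight$data$numpy()
b <- net$layer$bias$data$numpy()
cat("net:  slope =", as.numeric(w), " intercept =", as.numeric(b), "\n")

# closed-form least-squares fit on the same data for comparison
fit <- lm(as.vector(y$data$numpy()) ~ as.vector(x$data$numpy()))
cat("lm(): slope =", coef(fit)[2], " intercept =", coef(fit)[1], "\n")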

Result

As you can see below, you successfully performed regression with a neural network. On every iteration, the red points in the plot update and move to fit the data; the picture shows only the final result.


