knitr::opts_chunk$set( collapse = TRUE, comment = "#>" )
Source: https://www.guru99.com/pytorch-tutorial.html
Our network model is a single linear layer with an input and an output size of one.
Printing the network should produce output like this:
Net(
  (layer): Linear(in_features=1, out_features=1, bias=True)
)
library(rTorch)

nn       <- torch$nn
Variable <- torch$autograd$Variable

torch$manual_seed(123)

py_run_string("import torch")
main <- py_run_string("
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.layer = torch.nn.Linear(1, 1)

    def forward(self, x):
        x = self.layer(x)
        return x
")

# build a linear regression model
net <- main$Net()
print(net)
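As a side note, an equivalent one-layer model can also be built directly from R without dropping into py_run_string, by calling the Python classes through rTorch's torch object. This is a minimal sketch, assuming torch is exposed as in the chunk above; the tutorial itself keeps the py_run_string version.

# equivalent one-layer model built directly from R (illustrative)
net_seq <- torch$nn$Sequential(torch$nn$Linear(1L, 1L))
print(net_seq)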
Before you start the training process, you need some data. Generate a random data set to test the model: $y = x^3 \sin(x) + 3x + 0.8\,\mathrm{rand}(100)$
np$random$seed(123L)
x = np$random$rand(100L)
y = np$sin(x) * np$power(x, 3L) + 3L * x + np$random$rand(100L) * 0.8
plot(x, y)
Before you start the training process, you need to convert the numpy arrays into Variables, which are supported by Torch and autograd.
Notice that before converting to a Torch tensor, we first need to convert the R numeric vector to a numpy array:
# convert the numpy arrays to tensors with the shape of the input size
x <- r_to_py(x)
y <- r_to_py(y)
x <- torch$from_numpy(x$reshape(-1L, 1L))$float()
y <- torch$from_numpy(y$reshape(-1L, 1L))$float()
print(x)
print(y)
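It can be useful to confirm that the conversion produced what you expect. The following quick sanity check is not part of the original tutorial; it simply inspects the shape and dtype of one of the resulting tensors:

# quick sanity check on the converted tensor (illustrative)
print(x$shape)   # expected: torch.Size([100, 1])
print(x$dtype)   # expected: torch.float32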
Next, define the optimizer and the loss function for the training process.
# define the optimizer and the loss function
optimizer <- torch$optim$SGD(net$parameters(), lr = 0.2)
loss_func <- torch$nn$MSELoss()
print(optimizer)
print(loss_func)
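To see what MSELoss actually computes, here is a small illustrative example, not part of the original tutorial: the loss is the mean of the squared element-wise differences between the prediction and the target.

# MSE between two tiny tensors: mean((0-1)^2, (1-1)^2) = 0.5 (illustrative)
a <- torch$tensor(c(0, 1))$float()
b <- torch$tensor(c(1, 1))$float()
print(loss_func(a, b))   # expected: tensor(0.5000)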
Now let's start the training process. You will iterate over the data for 250 epochs to find the best values for the model's weight and bias.
inputs  <- Variable(x)
outputs <- Variable(y)

# base plot of the data
plot(x$data$numpy(), y$data$numpy(), col = "blue")

for (i in 1:250) {
    prediction <- net(inputs)                 # forward pass
    loss <- loss_func(prediction, outputs)    # compute the loss
    optimizer$zero_grad()                     # clear gradients for the next step
    loss$backward()                           # backpropagation, compute gradients
    optimizer$step()                          # apply the gradients

    if (i %% 10 == 0) {
        # plot and show the learning process
        points(x$data$numpy(), prediction$data$numpy(), col = "red")
        # cat(i, loss$data$numpy(), "\n")
    }
}
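Once training finishes, the fitted network can be used to predict on new inputs. The sketch below is illustrative; the values in x_new are arbitrary points chosen within the original [0, 1) range of the data.

# predict on a few new inputs with the trained model (illustrative)
x_new <- torch$tensor(c(0.25, 0.50, 0.75))$float()$reshape(-1L, 1L)
y_new <- net(x_new)
print(y_new$data$numpy())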
As you can see, you have successfully performed regression with a neural network. On every tenth iteration, the red predicted points are added to the plot and move closer to the blue data points as the model learns; the final plot shows the result at the end of training.
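Because the model is a single linear layer, you can also read off the fitted slope and intercept directly from its parameters. This is an optional check, assuming the layer is named layer as in the class definition above:

# inspect the learned weight (slope) and bias (intercept) (illustrative)
print(net$layer$weight$data$numpy())
print(net$layer$bias$data$numpy())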