The Sequential class allows us to build neural networks ‘on the fly’ without having to define a model class explicitly. This makes it much easier to quickly assemble a network and get straight to using it, because the Sequential class implements the forward() function for us. In this lesson we will learn how to use the PyTorch Sequential class to build a ConvNet.
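To see what that buys us, here is a minimal sketch (a toy two-layer network with arbitrary sizes, not the ConvNet we build below) comparing the Sequential version with the explicit class-based version:

import torch
import torch.nn as nn

# With Sequential, forward() is generated for us: each module is applied in order.
seq_model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2))

# The explicit alternative: subclass nn.Module and write forward() by hand.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

x = torch.randn(1, 4)
print(seq_model(x).shape)   # torch.Size([1, 2])
print(TinyNet()(x).shape)   # torch.Size([1, 2])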

We’ll start with the imports. We import everything we need.

import torch
import torchvision
import torchvision.transforms as transforms

import numpy as np
import matplotlib.pyplot as plt

Dataset

Next we prepare a dataset for training, because we are not only going to build the sequential model, we are also going to train it.

For this tutorial we use the CIFAR10 dataset. It has ten classes. The images in CIFAR-10 are 3x32x32, i.e. 3-channel color images of 32×32 pixels.
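For reference, the ten class names are listed below; they are not needed for training, but they are handy if you want to label images when displaying them.

# CIFAR-10 class names, indexed by the integer targets returned by the dataset
classes = ('airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')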

PyTorch provides a package called torchvision that has data loaders for common datasets such as ImageNet, CIFAR10, and MNIST.

The torchvision datasets output PIL images with values in the [0, 1] range. We transform them into tensors normalized to the [-1, 1] range.

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

torchvision makes it extremely easy to load CIFAR10.

train_ds = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
train_ds_loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=2)

test_ds = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
test_ds_loader = torch.utils.data.DataLoader(test_ds, batch_size=32, shuffle=False, num_workers=2)

From the training dataset we can access the first element, which is a tuple containing the image tensor and its label. Examining the shape of the image, we see it is 3x32x32: a color image with 3 channels and a height and width of 32.

image, label = train_ds[0]
image.shape  # torch.Size([3, 32, 32])

Let's write a helper function to show an image. The only thing we need to do here is unnormalize the image so that it can be displayed correctly.

def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

dataiter = iter(train_ds_loader)
images, labels = next(dataiter)

imshow(torchvision.utils.make_grid(images))

We are now ready to use this data to build our first sequential model.

Creating a model

At this stage you may be wondering about operations such as activation functions, pooling, and flattening. Most of these operations live in the functional API (torch.nn.functional), but PyTorch also wraps each of them as a module in the torch.nn package.

The Sequential class builds the forward() method implicitly by chaining the network's architecture in order. The PyTorch Sequential model is a container class, also known as a wrapper class, with which we can create neural network models. We can build any neural network model with it: we stack layers to create a network, and we can even compose multiple networks together.
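As a quick sketch of that last point (the layer sizes here are arbitrary and only for illustration), one Sequential model can be built out of other Sequential models:

import torch
import torch.nn as nn

# Two sub-networks, each itself a Sequential container...
features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2, 2))

classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10))

# ...composed into a single model: Sequential containers can be nested.
net = nn.Sequential(features, classifier)
print(net(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])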

import torch.nn as nn
import torch.nn.functional as F

Importing torch.nn.functional as F gives us functional versions of our layers, activation functions, pooling, and flattening operations. The key takeaway, though, is that because all of these operations are also available as modules in torch.nn, we can use the Sequential class to wrap and chain them: convolutions, ReLU activations, max pooling, and flattening can simply be stacked in the order they should be applied.
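As a quick illustration of that equivalence, using the imports above (x here is just a random tensor), the functional call and the module version produce the same result:

x = torch.randn(2, 3, 32, 32)

out_functional = F.relu(x)   # functional API: call the operation directly on a tensor
out_module = nn.ReLU()(x)    # module API: instantiate the module, then call it

print(torch.equal(out_functional, out_module))  # True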

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),

    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),

    nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),

    nn.Flatten(),
    nn.Linear(256 * 4 * 4, 1024),
    nn.ReLU(),
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 10))

This looks very simple compared to defining a model with a class-based approach. The Sequential class lives in the nn package; we create an instance of it and pass it the modules we want chained in series.
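Because Sequential is a container, the wrapped modules also stay easy to inspect: printing the model lists every layer in order, and individual layers can be accessed by index.

print(model)     # shows every module in the order it will be applied
print(model[0])  # the first layer, a Conv2d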

Loss function and optimizer

Let's use cross-entropy loss and SGD with momentum.

import torch.optim as optim

loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

Train model

All we have to do now is loop over our data iterator, feed the inputs to the network, and optimize.

model = model.cuda()  # the loop below sends data to the GPU, so the model must live there too

num_epoch = 10  # number of training epochs (pick any value; not specified above)

train_losses = []
valid_losses = []

for epoch in range(1, num_epoch + 1):
    train_loss = 0.0
    valid_loss = 0.0

    # training phase
    model.train()
    for img, lbl in train_ds_loader:
        img = img.cuda()
        lbl = lbl.cuda()

        optimizer.zero_grad()
        predict = model(img)
        loss = loss_fn(predict, lbl)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * img.size(0)

    # validation phase
    model.eval()
    with torch.no_grad():
        for img, lbl in test_ds_loader:
            img = img.cuda()
            lbl = lbl.cuda()

            predict = model(img)
            loss = loss_fn(predict, lbl)
            valid_loss += loss.item() * img.size(0)

    # average the accumulated losses over the datasets
    train_loss = train_loss / len(train_ds_loader.sampler)
    valid_loss = valid_loss / len(test_ds_loader.sampler)

    train_losses.append(train_loss)
    valid_losses.append(valid_loss)

    print('Epoch:{} train loss:{:.4f} valid loss:{:.4f}'.format(epoch, train_loss, valid_loss))

We will not evaluate the trained network or make predictions here, because the goal was simply to see how to create a sequential model and train it: we access the Sequential class, create an instance of it, and pass it any number of neural network modules to be chained in sequence.

Run this code in Google Colab
