Draw Neural Network Diagram Python


If you've been following along with this series of blog posts, then you already know what a huge fan I am of Keras.

Keras is a super powerful, easy-to-use Python library for building neural networks and deep learning networks.

In the remainder of this blog post, I'll demonstrate how to build a simple neural network using Python and Keras, and then apply it to the task of image classification.

Looking for the source code to this post?

Jump Right to the Downloads Section

A simple neural network with Python and Keras

To start this post, we'll quickly review the most common neural network architecture — feedforward networks.

We'll then discuss our project structure, followed by writing some Python code to define our feedforward neural network and specifically apply it to the Kaggle Dogs vs. Cats classification challenge. The goal of this challenge is to correctly classify whether a given image contains a dog or a cat.

We'll review the results of our simple neural network architecture and discuss methods to improve it.

Our final step will be to build a test script that will load images and classify them with OpenCV, Keras, and our trained model.

Feedforward neural networks

While there are many, many different neural network architectures, the most common architecture is the feedforward network:

Figure 1: An example of a feedforward neural network with 3 input nodes, a hidden layer with 2 nodes, a second hidden layer with 3 nodes, and a final output layer with 2 nodes.

In this type of architecture, a connection between two nodes is only permitted from nodes in layer i to nodes in layer i + 1 (hence the term feedforward; there are no backward or inter-layer connections allowed).

Furthermore, the nodes in layer i are fully connected to the nodes in layer i + 1. This implies that every node in layer i connects to every node in layer i + 1. For example, in the figure above, there are a total of 3 x 2 = 6 connections between layer 0 and layer 1 — this is where the term "fully connected," or "FC" for short, comes from.

We commonly use a sequence of integers to quickly and concisely describe the number of nodes in each layer.

For example, the network above is a 3-2-3-2 feedforward neural network (sketched in code just after the list below):

  • Layer 0 contains 3 inputs, our x_{i} values. These could be raw pixel intensities or entries from a feature vector.
  • Layers 1 and 2 are hidden layers, containing two and three nodes, respectively.
  • Layer 3 is the output layer or the visible layer — this is where we obtain the overall output classification from our network. The output layer normally has as many nodes as class labels; one node for each potential output. In our Kaggle Dogs vs. Cats example, we have two output nodes — one for "dog" and another for "cat".
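To make that notation concrete, here is a minimal Keras sketch of the 3-2-3-2 network from Figure 1. The layer sizes come straight from the figure; the ReLU and softmax activations are placeholder choices on my part, since the figure doesn't specify any:

# a minimal 3-2-3-2 feedforward network matching Figure 1
# (the activations are placeholder choices -- the figure doesn't specify them)
from keras.models import Sequential
from keras.layers import Dense

toy = Sequential()
toy.add(Dense(2, input_dim=3, activation="relu"))	# layer 1: 2 nodes, fed by 3 inputs
toy.add(Dense(3, activation="relu"))	# layer 2: 3 nodes
toy.add(Dense(2, activation="softmax"))	# layer 3: 2 output nodes
toy.summary()	# prints each layer and its parameter count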

Project directory structure

Figure 2: The Kaggle Dogs vs. Cats dataset is used in our simple neural network with Keras.

Before we begin, head to the "Downloads" section of this blog post and download the files and data. From there you'll be able to follow along as we work through today's examples.

Once your zip is downloaded, extract the files.

From within the directory, let's run the tree command with two command line arguments to list our project structure:

$ tree --filelimit 10 --dirsfirst
.
├── kaggle_dogs_vs_cats
│   └── train [25000 entries exceeds filelimit, not opening dir]
├── test_images [50 entries exceeds filelimit, not opening dir]
├── output
│   └── simple_neural_network.hdf5
├── simple_neural_network.py
└── test_network.py

4 directories, 4 files

The first command line argument is important as it prevents tree from displaying all of the image files and cluttering our terminal.

The Kaggle Dogs vs. Cats dataset is in the relevant directory (kaggle_dogs_vs_cats). All 25,000 images are contained in the train subdirectory. This data came from the train.zip dataset available on Kaggle.

I've also included 50 samples from the Kaggle test1.zip available on their website.

The output directory contains our serialized model that we'll generate with Keras at the bottom of the first script.

We'll review the two Python scripts, simple_neural_network.py and test_network.py, in the next sections.

Implementing our own neural network with Python and Keras

Now that we understand the basics of feedforward neural networks, let's implement one for image classification using Python and Keras.

To start, you'll want to follow the appropriate tutorial for your system to install TensorFlow and Keras:

  • Configuring Ubuntu for deep learning with Python
  • Setting up Ubuntu 16.04 + CUDA + GPU for deep learning with Python
  • Configuring macOS for deep learning with Python

Note: A GPU is not needed for today's blog post — your laptop can run this very simple network easily. That said, in general I do not recommend using a laptop for deep learning. Laptops are for productivity rather than for working with the terabyte-sized datasets required for many deep learning tasks. I recommend Amazon AWS using my pre-configured AMI or Microsoft's DSVM. Both of these environments are ready to go in less than five minutes.

From there, open up a new file, name it simple_neural_network.py, and we'll get coding:

# import the necessary packages
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Activation
from keras.optimizers import SGD
from keras.layers import Dense
from keras.utils import np_utils
from imutils import paths
import numpy as np
import argparse
import cv2
import os

We start off by importing our required Python packages. We'll be using a number of scikit-learn implementations along with Keras layers and activation functions. If you do not already have your development environment configured for Keras, please see this blog post.

We'll also be using imutils, my personal library of OpenCV convenience functions. If you do not already have imutils installed on your system, you can install it via pip:

$ pip install imutils          

Next, let's define a method to accept an image and describe it. In previous tutorials, we've extracted color histograms from images and used these distributions to characterize the contents of an image.

This time, let'south use the raw pixel intensities instead. To accomplish this, nosotros ascertain the image_to_feature_vector function which accepts an input image and resizes it to a fixed size , ignoring the aspect ratio:

def image_to_feature_vector(image, size=(32, 32)):
	# resize the image to a fixed size, then flatten the image into
	# a list of raw pixel intensities
	return cv2.resize(image, size).flatten()

We resize our image to fixed spatial dimensions to ensure each and every image in the input dataset has the same "feature vector" size. This is a requirement when utilizing our neural network — each image must be represented by a vector.

In this case, we resize our image to 32 x 32 pixels and then flatten the 32 x 32 x 3 image (where we have three channels, one each for the Red, Green, and Blue channels, respectively) into a 3,072-d feature vector.
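As a quick sanity check, here's a small snippet (my own addition, using a random dummy array rather than a real photo) verifying that any input image ends up as a 3,072-d vector:

# verify that image_to_feature_vector always produces a 3,072-d vector
# (uses a random dummy "image" rather than a real photo)
import numpy as np
dummy = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
print(image_to_feature_vector(dummy).shape)	# => (3072,) since 32 * 32 * 3 = 3072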

The next code block handles parsing our command line arguments and taking care of a few initializations:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
	help="path to input dataset")
ap.add_argument("-m", "--model", required=True,
	help="path to output model file")
args = vars(ap.parse_args())

# grab the list of images that we'll be describing
print("[INFO] describing images...")
imagePaths = list(paths.list_images(args["dataset"]))

# initialize the data matrix and labels list
data = []
labels = []

We need two switches here: --dataset, which is the path to the input directory containing the Kaggle Dogs vs. Cats images, and --model, the path to where our serialized model will be saved after training. The dataset itself can be downloaded from the official Kaggle Dogs vs. Cats competition page.

Line 30 grabs the paths to the images residing on disk in our --dataset directory. We then initialize the data and labels lists, respectively, on Lines 33 and 34.

Now that we have our imagePaths, we can loop over them individually, load them from disk, convert the images to feature vectors, and update the data and labels lists:

# loop over the input images
for (i, imagePath) in enumerate(imagePaths):
	# load the image and extract the class label (assuming that our
	# path has the format: /path/to/dataset/{class}.{image_num}.jpg)
	image = cv2.imread(imagePath)
	label = imagePath.split(os.path.sep)[-1].split(".")[0]

	# construct a feature vector of raw pixel intensities, then update
	# the data matrix and labels list
	features = image_to_feature_vector(image)
	data.append(features)
	labels.append(label)

	# show an update every 1,000 images
	if i > 0 and i % 1000 == 0:
		print("[INFO] processed {}/{}".format(i, len(imagePaths)))

The data list now contains the flattened 32 x 32 x 3 = 3,072-d representations of every image in our dataset. However, before we can train our neural network, we first need to perform a bit of preprocessing:

# encode the labels, converting them from strings to integers
le = LabelEncoder()
labels = le.fit_transform(labels)

# scale the input image pixels to the range [0, 1], then transform
# the labels into vectors in the range [0, num_classes] -- this
# generates a vector for each label where the index of the label
# is set to `1` and all other entries to `0`
data = np.array(data) / 255.0
labels = np_utils.to_categorical(labels, 2)

# partition the data into training and testing splits, using 75%
# of the data for training and the remaining 25% for testing
print("[INFO] constructing training/testing split...")
(trainData, testData, trainLabels, testLabels) = train_test_split(
	data, labels, test_size=0.25, random_state=42)

Lines 61 and 62 handle scaling the input data to the range [0, 1], followed by converting the labels from a set of integers to a set of vectors (a requirement for the cross-entropy loss function we will apply when training our neural network).
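If the one-hot step is new to you, here's a tiny illustration of what LabelEncoder and np_utils.to_categorical produce, using made-up labels rather than our actual data (the imports are the same ones from the top of the script):

# toy illustration of the label encoding + one-hot conversion above
# (made-up labels, not the actual Kaggle data)
toy_labels = ["cat", "dog", "dog", "cat"]
encoded = LabelEncoder().fit_transform(toy_labels)	# => array([0, 1, 1, 0])
print(np_utils.to_categorical(encoded, 2))
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]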

We then construct our training and testing splits on Lines 67 and 68, using 75% of the data for training and the remaining 25% for testing.

For a more detailed review of the data preprocessing stage, please see this blog post.

We are now ready to define our neural network using Keras:

# define the architecture of the network
model = Sequential()
model.add(Dense(768, input_dim=3072, kernel_initializer="uniform",
	activation="relu"))
model.add(Dense(384, activation="relu", kernel_initializer="uniform"))
model.add(Dense(2))
model.add(Activation("softmax"))

On Lines 71-76 we construct our neural network architecture — a 3072-768-384-2 feedforward neural network.

Our input layer has 3,072 nodes, one for each of the 32 x 32 x 3 = 3,072 raw pixel intensities in our flattened input images.

We then have two hidden layers, with 768 and 384 nodes, respectively. These node counts were determined via a cross-validation and hyperparameter tuning experiment performed offline.

The output layer has 2 nodes — one for each of the "dog" and "cat" labels.
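If you're curious how many parameters that implies: a fully connected layer from m nodes to n nodes carries m x n weights plus n biases. A quick check (just arithmetic, plus Keras' built-in summary):

# parameter counts implied by the 3072-768-384-2 architecture:
# a fully connected layer from m to n nodes has m * n weights + n biases
print(3072 * 768 + 768)	# => 2,360,064 params in the first hidden layer
print(768 * 384 + 384)	# => 295,296 params in the second hidden layer
print(384 * 2 + 2)	# => 770 params in the output layer
model.summary()	# Keras reports the same totals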

We then apply a softmax activation function on top of the network — this will give us our actual output class label probabilities.
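If you've never seen softmax before, here's a quick numpy sketch of what that final activation computes (the raw scores below are made up for illustration):

# softmax turns raw output scores into probabilities that sum to 1
# (the scores here are made up for illustration)
scores = np.array([2.0, 0.5])	# raw outputs of the final Dense(2) layer
exp_scores = np.exp(scores - scores.max())	# subtract the max for numerical stability
print(exp_scores / exp_scores.sum())	# => [0.8176 0.1824], which sums to 1.0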

The next step is to train our model using Stochastic Gradient Descent (SGD):

# train the model using SGD
print("[INFO] compiling model...")
sgd = SGD(lr=0.01)
model.compile(loss="binary_crossentropy", optimizer=sgd,
	metrics=["accuracy"])
model.fit(trainData, trainLabels, epochs=50, batch_size=128,
	verbose=1)

To train our model, we'll set the learning rate parameter of SGD to 0.01. We'll use the binary_crossentropy loss function for the network as well.

In most cases, you'll want to use categorical_crossentropy, but since there are only two class labels, we use binary_crossentropy. For > 2 class labels, make sure you use categorical_crossentropy.
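For instance, if we were classifying three classes instead of two (a hypothetical variation, not part of this project), the output layer and loss function would change like so:

# hypothetical 3-class variant: resize the output layer and switch the
# loss to categorical_crossentropy (the labels would also need
# np_utils.to_categorical(labels, 3))
model3 = Sequential()
model3.add(Dense(768, input_dim=3072, kernel_initializer="uniform",
	activation="relu"))
model3.add(Dense(384, activation="relu", kernel_initializer="uniform"))
model3.add(Dense(3, activation="softmax"))	# one node per class
model3.compile(loss="categorical_crossentropy", optimizer=SGD(lr=0.01),
	metrics=["accuracy"])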

The network is then allowed to train for a total of 50 epochs, meaning that the model "sees" each individual training example 50 times in an attempt to learn an underlying pattern.
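To put those 50 epochs in perspective: with 18,750 training images (75% of 25,000) and a batch size of 128, each epoch performs roughly 147 weight updates. A quick back-of-the-envelope check:

# back-of-the-envelope: how many SGD updates do 50 epochs perform?
train_samples = int(25000 * 0.75)	# 18,750 training images
updates_per_epoch = -(-train_samples // 128)	# ceiling division => 147
print(updates_per_epoch * 50)	# => 7350 total weight updates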

The final code block evaluates our Keras neural network on the testing data:

# show the accuracy on the testing set
print("[INFO] evaluating on testing set...")
(loss, accuracy) = model.evaluate(testData, testLabels,
	batch_size=128, verbose=1)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss,
	accuracy * 100))

# dump the network architecture and weights to file
print("[INFO] dumping architecture and weights to file...")
model.save(args["model"])

Classifying images using neural networks with Python and Keras

To execute our simple_neural_network.py script, make sure you have already downloaded the source code and data for this post by using the "Downloads" section at the bottom of this tutorial.

The following command can be used to train our neural network using Python and Keras:

$ python simple_neural_network.py --dataset kaggle_dogs_vs_cats \
	--model output/simple_neural_network.hdf5

The output of our script can be seen in the screenshot below:

Figure 3: Training a simple neural network using the Keras deep learning library and the Python programming language.

On my Titan X GPU, the entire process of feature extraction, training the neural network, and evaluation took a total of 1m 15s, with each epoch taking less than one second to complete.

At the end of the 50th epoch, we see that we are getting ~76% accuracy on the training data and 67% accuracy on the testing data.

This ~9% difference in accuracy implies that our network is overfitting a bit; however, it is very common to see ~10% gaps in training versus testing accuracy, especially if you have limited training data.

You should start to become very worried about overfitting when your training accuracy reaches 90%+ and your testing accuracy is substantially lower than that.

In either case, this 67.376% is the highest accuracy we've obtained thus far in this series of tutorials. As we'll find out later on, we can easily obtain > 95% accuracy by utilizing Convolutional Neural Networks.

Classifying images using our Keras model

We're going to build a test script to verify our results visually.

So let's go ahead and create a new file named test_network.py in your favorite editor and enter the following code:

# import the necessary packages
from __future__ import print_function
from keras.models import load_model
from imutils import paths
import numpy as np
import argparse
import imutils
import cv2

def image_to_feature_vector(image, size=(32, 32)):
	# resize the image to a fixed size, then flatten the image into
	# a list of raw pixel intensities
	return cv2.resize(image, size).flatten()

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", required=True,
	help="path to output model file")
ap.add_argument("-t", "--test-images", required=True,
	help="path to the directory of testing images")
ap.add_argument("-b", "--batch-size", type=int, default=32,
	help="size of mini-batches passed to network")
args = vars(ap.parse_args())

On Lines 2-8, we load the necessary packages. These should be familiar, as we used each of them above, with the exception of load_model from keras.models. The load_model function simply loads the serialized Keras model from disk so that we can send images through the network and obtain predictions.

The image_to_feature_vector function is identical, and we include it in the test script because we want to preprocess our images in the same way as during training.

Our script has three command line arguments which can be provided at runtime (Lines 16-23):

  • --model : The path to our serialized model file.
  • --test-images : The path to the directory of test images.
  • --batch-size : Optionally, the size of mini-batches can be specified, with the default being 32.

You do not need to modify Lines 16-23 — if you are unfamiliar with argparse and command line arguments, just give this blog post a read.

Moving on, let's define our classes and load our serialized model from disk:

# initialize the class labels for the Kaggle dogs vs cats dataset
CLASSES = ["cat", "dog"]

# load the network
print("[INFO] loading network architecture and weights...")
model = load_model(args["model"])
print("[INFO] testing on images in {}".format(args["test_images"]))

Line 26 creates a list of the classes we're working with today — a cat and a dog.

From there we load the model into memory so that we can easily classify images as needed (Line 30).

Let's begin looping over the test images and predicting whether each image is a cat or a dog:

# loop over our testing images
for imagePath in paths.list_images(args["test_images"]):
	# load the image, resize it to a fixed 32 x 32 pixels (ignoring
	# aspect ratio), and then extract features from it
	print("[INFO] classifying {}".format(
		imagePath[imagePath.rfind("/") + 1:]))
	image = cv2.imread(imagePath)
	features = image_to_feature_vector(image) / 255.0
	features = np.array([features])

We begin looping over all images in the testing directory on Line 34.

First, we load the image and preprocess it (Lines 39-41).

From there, let's send the image through the neural network:

	# classify the image using our extracted features and pre-trained
	# neural network
	probs = model.predict(features)[0]
	prediction = probs.argmax(axis=0)

	# draw the class and probability on the test image and display it
	# to our screen
	label = "{}: {:.2f}%".format(CLASSES[prediction],
		probs[prediction] * 100)
	cv2.putText(image, label, (10, 35), cv2.FONT_HERSHEY_SIMPLEX,
		1.0, (0, 255, 0), 3)
	cv2.imshow("Image", image)
	cv2.waitKey(0)

A prediction is made on Lines 45 and 46.

The remaining lines build a display label containing the class name and probability score and overlay it on the image (Lines 50-54). On each iteration of the loop, we wait for a keypress so that we can check images one at a time (Line 55).

Testing our neural network with Keras

Now that we're finished implementing our test script, let's run it and see our hard work in action. To grab the code and images, be sure to scroll down to the "Downloads" section of this blog post.

Once you have the files extracted, to run test_network.py we simply execute it in the terminal and provide two command line arguments:

$ python test_network.py --model output/simple_neural_network.hdf5 \
	--test-images test_images
Using TensorFlow backend.
[INFO] loading network architecture and weights...
[INFO] testing on images in test_images
[INFO] classifying 48.jpg
[INFO] classifying 49.jpg
[INFO] classifying 8.jpg
[INFO] classifying 9.jpg
[INFO] classifying 14.jpg
[INFO] classifying 28.jpg

Did you see the following error message?

Using TensorFlow backend.
usage: test_network.py [-h] -m MODEL -t TEST_IMAGES [-b BATCH_SIZE]
test_network.py: error: the following arguments are required: -m/--model, -t/--test-images

This message describes how to use the script with command line arguments.

Are you unfamiliar with command line arguments and argparse? No worries — just give this blog post on command line arguments a quick read.

If everything worked correctly, after the model loads and runs the first inference, we're presented with a picture of a dog:

Figure 4: A dog from the Kaggle Dogs vs. Cats competition test dataset is correctly classified using our simple neural network with Keras script.

The network correctly classified the dog with 71% confidence. So far so good!

When you're ready, press a key to cycle to the next image (the window must be active).

Figure 5: Even a simple neural network with Keras can achieve relatively good accuracy and distinguish between dogs and cats.

Our cute and cuddly cat with white chest hair passed the test with 77% confidence!

On to Lois, a dog:

Figure 6: Lois likes the snow. He also likes when a simple deep learning neural network correctly classifies him as a dog!

Lois is definitely a dog — our model is 97% sure of it.

Let's try another cat:

Figure 7: Deep learning classification allows us to do just that — classify the image contents. Using the Kaggle Dogs vs. Cats dataset, we have built an elementary model to classify dog and cat images.

Hooray! This ball of fur is correctly predicted to be a cat.

Let's try yet another dog:

Figure 8: This is an example of a misclassification. Our elementary neural network built with Keras has room for improvement, as it is only 67% accurate. To learn how to improve the model, check out DL4CV.

DOH! Our network thinks this dog is a cat with 61% confidence. Clearly this is a misclassification.

How could that be? Well, our network is only 67% accurate, as we demonstrated above. It is common to see a number of misclassifications.

Our last image is of one of the most adorable kittens in the test_images folder. I've named this kitten Simba. But is Simba a cat according to our model?

Figure 9: Our simple neural network built with Keras (TensorFlow backend) misclassifies a number of images, such as this cat (it predicted the image contains a dog). Deep learning requires experimentation and iterative development to improve accuracy.

Alas, our network has failed us, but only by 3.29 percent. I was almost certain that our network would classify Simba correctly, but I was wrong.

Not to worry — there are improvements we can make to rank on the Top-25 leaderboard of the Kaggle Dogs vs. Cats challenge.

In my new book, Deep Learning for Computer Vision with Python, I demonstrate how to do just that. In fact, I'll go so far as to say that you'll probably reach a Top-5 position with what you'll learn in the book.

To pick up your copy, simply use this link: Deep Learning for Computer Vision with Python.

What's next? I recommend PyImageSearch University.

Course information:
35+ total classes • 39h 44m video • Last updated: February 2022
★★★★★ 4.84 (128 Ratings) • 3,000+ Students Enrolled

I strongly believe that if you had the right teacher you could master computer vision and deep learning.

Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?

That's not the case.

All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that's exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.

If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you'll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.

Inside PyImageSearch University you'll find:

  • ✓ 35+ courses on essential computer vision, deep learning, and OpenCV topics
  • ✓ 35+ Certificates of Completion
  • ✓ 39h 44m on-demand video
  • ✓ Brand new courses released every month, ensuring you can keep up with state-of-the-art techniques
  • ✓ Pre-configured Jupyter Notebooks in Google Colab
  • ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
  • ✓ Access to centralized code repos for all 500+ tutorials on PyImageSearch
  • ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
  • ✓ Access on mobile, laptop, desktop, etc.

Click here to join PyImageSearch University

Summary

In today's blog post, I demonstrated how to train a simple neural network using Python and Keras.

We then applied our neural network to the Kaggle Dogs vs. Cats dataset and obtained 67.376% accuracy utilizing only the raw pixel intensities of the images.

Starting next week, I'll begin discussing optimization methods such as gradient descent and Stochastic Gradient Descent (SGD). I'll also include a tutorial on backpropagation to help you understand the inner workings of this important algorithm.

Before you go, be sure to enter your email address in the form below to be notified when future blog posts are published — you won't want to miss them!

Download the Source Code and FREE 17-page Resource Guide

Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!


Source: https://pyimagesearch.com/2016/09/26/a-simple-neural-network-with-python-and-keras/
