Neural Networks

 

What Is a Neural Network?

A neural network (also called an artificial neural network) is an adaptive system that learns by using interconnected nodes or neurons in a layered structure that resembles a human brain. A neural network can learn from data—so it can be trained to recognize patterns, classify data, and forecast future events.

A neural network breaks down the input into layers of abstraction. It can be trained using many examples to recognize patterns in speech or images, for example, just as the human brain does. Its behavior is defined by the way its individual elements are connected and by the strength, or weights, of those connections. These weights are automatically adjusted during training according to a specified learning rule until the artificial neural network performs the desired task correctly.

Why Do Neural Networks Matter?

Neural networks are a machine learning approach inspired by how neurons signal to each other in the human brain. They are especially suitable for modeling nonlinear relationships, and they are typically used to perform pattern recognition and to classify objects or signals in speech, vision, and control systems.

Here are a few examples of how neural networks are used in machine learning applications:

Neural networks, particularly deep neural networks, have become known for their proficiency at complex identification applications such as face recognition, text translation, and voice recognition. They are also a key technology driving innovation in advanced driver assistance systems, including tasks such as lane classification and traffic sign recognition.

How Do Neural Networks Work?

Inspired by biological nervous systems, a neural network combines several processing layers, using simple elements operating in parallel. The network consists of an input layer, one or more hidden layers, and an output layer. Each layer contains several nodes, or neurons, and the nodes in each layer use the outputs of all nodes in the previous layer as inputs, so that the neurons interconnect across the layers. Each connection is typically assigned a weight that is adjusted during the learning process; decreases or increases in the weight change the strength of that connection’s signal.

Typical neural network architecture.
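
To make this concrete, the following is a minimal sketch of that layered computation in plain MATLAB. The network size, weights, and biases are made-up illustrative values, not taken from the figure:

    % Minimal forward pass through a tiny network: 2 inputs -> 3 hidden neurons -> 1 output.
    % All weights and biases are made-up illustrative values.
    x  = [0.5; -1.2];                     % input vector

    W1 = [0.2 -0.4; 0.7 0.1; -0.3 0.5];   % hidden-layer weights (3x2)
    b1 = [0.1; -0.2; 0.05];               % hidden-layer biases
    W2 = [0.6 -0.1 0.8];                  % output-layer weights (1x3)
    b2 = 0.3;                             % output-layer bias

    h = tanh(W1*x + b1);                  % hidden activations (nonlinear transfer function)
    y = W2*h + b2;                        % network output
    disp(y)

Training adjusts W1, b1, W2, and b2 so that the output y matches the training targets as closely as possible.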

Like other machine learning algorithms:

  • Neural networks can be used for supervised learning (classification, regression) and unsupervised learning (pattern recognition, clustering)
  • Model parameters are set by training the neural network on data (“learning”), typically by optimizing the weights to minimize prediction error

Types of Neural Networks

The first and simplest neural network was the perceptron, introduced by Frank Rosenblatt in 1958. It consisted of a single neuron and was essentially a linear model whose weighted sum of inputs is passed through a threshold (step) activation function. Since then, increasingly complex neural networks have been explored, leading up to today’s deep networks, which can contain hundreds of layers.
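
As an illustrative sketch (not Rosenblatt's original implementation), a single perceptron with a threshold activation can learn a linearly separable function such as logical AND using the classic perceptron update rule. The learning rate and number of epochs below are arbitrary choices:

    % Perceptron learning rule on the logical AND problem (illustrative values).
    X = [0 0 1 1; 0 1 0 1];        % four 2-D input patterns, one per column
    t = [0 0 0 1];                 % target outputs for AND

    w  = zeros(2,1);               % weights
    b  = 0;                        % bias
    lr = 0.1;                      % learning rate

    for epoch = 1:20
        for k = 1:size(X,2)
            y   = double(w'*X(:,k) + b > 0);   % hard-threshold (step) activation
            err = t(k) - y;                    % prediction error
            w   = w + lr*err*X(:,k);           % perceptron weight update
            b   = b + lr*err;
        end
    end

    yhat = double(w'*X + b > 0)    % reproduces the AND truth table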

Deep learning refers to neural networks with many layers, whereas neural networks with only two or three layers of connected neurons are also known as shallow neural networks. Deep learning has become popular because it eliminates the need to manually extract features from images, a step that previously limited the application of machine learning to image and signal processing. However, although feature extraction can be omitted in image processing applications, some form of feature extraction is still commonly applied to signal processing tasks to improve model accuracy.

The types of neural networks commonly used for engineering applications include the following (a layer-definition sketch follows the list):

  • Feedforward neural network: Consists of an input layer, one or a few hidden layers, and an output layer (a typical shallow neural network)
  • Convolutional neural network (CNN): Deep neural network architecture widely applied to image processing and characterized by convolutional layers that shift windows across the input with nodes that share weights, abstracting the (typically image) input to feature maps
  • Recurrent neural network (RNN): Neural network architecture with feedback loops that model sequential dependencies in the input, as in time series, sensor, and text data; the most popular type of RNN is a long short-term memory network (LSTM)
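
The sketch below shows how two of these architectures are typically specified with Deep Learning Toolbox layer functions. The image size, filter counts, number of classes, and hidden-unit count are assumptions chosen only for illustration:

    % Small CNN for (assumed) 28x28 grayscale images and 10 classes.
    cnnLayers = [
        imageInputLayer([28 28 1])
        convolution2dLayer(3,16,'Padding','same')   % 3x3 filters with shared weights
        batchNormalizationLayer
        reluLayer
        maxPooling2dLayer(2,'Stride',2)
        convolution2dLayer(3,32,'Padding','same')
        batchNormalizationLayer
        reluLayer
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer];

    % Small LSTM network for sequence classification with (assumed) 12 features per time step.
    lstmLayers = [
        sequenceInputLayer(12)
        lstmLayer(100,'OutputMode','last')          % long short-term memory layer
        fullyConnectedLayer(5)                      % assumed 5 classes
        softmaxLayer
        classificationLayer];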

Developing Neural Networks with MATLAB

MATLAB® offers specialized toolboxes for machine learning, neural networks, deep learning, computer vision, and automated driving applications.

With just a few lines of code, MATLAB lets you develop neural networks without being an expert. Get started quickly, create and visualize neural network models, integrate them into your existing applications, and deploy them to servers, enterprise systems, clusters, clouds, and embedded devices.
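
For example, a shallow feedforward network for curve fitting can be created and trained in a few lines. The synthetic data and the hidden layer size of 10 below are arbitrary illustrative choices:

    % Fit a noisy sine curve with a shallow feedforward network (Deep Learning Toolbox).
    x = linspace(-2*pi, 2*pi, 200);        % inputs
    t = sin(x) + 0.1*randn(size(x));       % noisy targets

    net = feedforwardnet(10);              % one hidden layer with 10 neurons
    net = train(net, x, t);                % train with the default solver

    y = net(x);                            % predictions from the trained network
    plot(x, t, '.', x, y, '-')
    legend('Training data','Network output')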

Typical Workflow for Building Neural Networks

Developing AI applications, and specifically neural networks, typically involves these steps:

1. Data Preparation

  • You acquire sufficient labeled training data, with much more required to train deep neural networks; labeler apps such as Image Labeler, Video Labeler, and Signal Labeler can expedite this process
  • You can use simulation to generate training data, especially if gathering data from real systems is impractical (e.g., failure conditions)
  • You can augment data to represent more variability in the training set (see the sketch after this list)
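
The sketch below shows one way labeled image data might be loaded and augmented. The folder name, output size, and augmentation ranges are assumptions for illustration, and the images are assumed to be grayscale:

    % Load labeled images from subfolders (folder names become the labels).
    imds = imageDatastore('trainingImages', ...      % assumed folder name
        'IncludeSubfolders', true, ...
        'LabelSource', 'foldernames');

    % Add variability with random rotation, translation, and reflection.
    augmenter = imageDataAugmenter( ...
        'RandRotation',     [-10 10], ...
        'RandXTranslation', [-5 5], ...
        'RandYTranslation', [-5 5], ...
        'RandXReflection',  true);

    % Resize to 28x28 and apply the augmentation on the fly during training.
    augimds = augmentedImageDatastore([28 28], imds, ...
        'DataAugmentation', augmenter);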

2. AI Modeling

  • You can train shallow neural networks interactively in the Classification Learner and Regression Learner apps from Statistics and Machine Learning Toolbox™, or you can use command-line functions; this is recommended if you want to compare the performance of shallow neural networks with other conventional machine learning algorithms, such as decision trees or SVMs, or if you have only limited labeled training data available
  • You can specify and train neural networks (shallow or deep) interactively using the Deep Network Designer app or command-line functions from Deep Learning Toolbox™; this is particularly suitable for deep neural networks, or if you need more flexibility in customizing the network architecture and solvers (a command-line sketch follows this list)
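
As a command-line sketch, the illustrative CNN defined earlier can be trained on the augmented datastore from the data preparation sketch. The solver and its settings are arbitrary choices, and cnnLayers, augimds, and the image file name are assumed placeholders:

    % Train the illustrative CNN on the augmented image datastore.
    options = trainingOptions('sgdm', ...
        'InitialLearnRate', 0.01, ...
        'MaxEpochs', 10, ...
        'Shuffle', 'every-epoch', ...
        'Plots', 'training-progress', ...
        'Verbose', false);

    trainedNet = trainNetwork(augimds, cnnLayers, options);

    % Classify a new image (placeholder file name; assumed grayscale to match the input layer).
    img  = imread('newImage.png');
    pred = classify(trainedNet, imresize(img, [28 28]));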

3. Simulation and Test

  • You can integrate neural networks into Simulink® models as blocks, which can facilitate integration with a larger system, testing, and deployment to many types of hardware

4. Deployment

  • Generate plain C/C++ code from shallow neural networks trained with Statistics and Machine Learning Toolbox for deployment to embedded hardware and high-performance computing systems (a rough sketch follows this list)
  • Generate optimized CUDA and plain C/C++ code from neural networks trained with Deep Learning Toolbox for fast inference on GPUs and other types of industrial hardware (ARM, FPGA)
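
A rough sketch of the first bullet, assuming a shallow neural network classifier trained with fitcnet and compiled with MATLAB Coder; the variable names, file names, and input size are placeholders, and which model types support code generation depends on the toolbox releases in use:

    % Train a shallow neural network classifier and save it for code generation.
    % XTrain (predictors) and YTrain (labels) are assumed to exist in the workspace.
    Mdl = fitcnet(XTrain, YTrain, 'LayerSizes', 10);
    saveLearnerForCoder(Mdl, 'shallowNetModel');

    % Entry-point function (saved as predictShallowNet.m) that MATLAB Coder compiles:
    %
    %   function label = predictShallowNet(x) %#codegen
    %   Mdl = loadLearnerForCoder('shallowNetModel');
    %   label = predict(Mdl, x);
    %   end

    % Generate C code for the entry-point function (input assumed to be a 1x4 double).
    codegen predictShallowNet -args {ones(1,4)}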