Machine Learning News Hubb

A Guide to Neural Architecture Search | by Nilotpal Sinha PhD | Sep, 2022

by admin
September 7, 2022
in Machine Learning


This article provides a general overview of Neural Architecture Search (NAS) methods.

Photo by Andrea De Santis on Unsplash

In recent years, neural networks (NNs) have been instrumental in solving problems across domains such as computer vision and natural language processing. However, the architectures of these networks (such as AlexNet, ResNet, and VGGNet) have mainly been designed by humans, relying on their intuition and understanding of specific problems.

This has led to growing interest in a new kind of algorithm that can automatically design the architecture of a neural network. Such algorithms are known as "Neural Architecture Search," or NAS for short.

NAS aims to replace the reliance on human intuition with an automated search for a neural architecture suited to a given task.

An artificial neural network is based on a collection of connected units or nodes, called artificial neurons (shown as yellow circles in Figure 1 below). These neurons are connected in a specific structure to perform a task.

Figure 1

Figure 1 shows two different neural network architectures; we need to choose the better one for our task.

In the early days, people designed these architectures by hand, which is very time-consuming. This manual work is now being replaced by Neural Architecture Search (NAS) algorithms.

Now, you might be wondering why you need to know about this topic.

The simple answer is that this technology is used at large companies such as Google and Microsoft to find AI solutions to problems that fall outside the classical computer science benchmarks.

For example, an AI system designed for face recognition cannot be used to predict the weather. For weather prediction, you would have to design the neural network architecture again, which is time-consuming.

A better solution is to use a NAS algorithm to find the architecture automatically.

Any NAS method has three parts (as shown in Figure 2 below): a search space, a search strategy, and performance estimation.

Figure 2

Search Space

The search space defines which architectures can be represented in principle. It is the architectural landscape in which the search algorithm performs its search.

For example, the search space for a CNN might include choices such as the number of channels (filters) in each convolutional layer, the kernel size of each filter, and the depth or width of the network.
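As a minimal sketch of this idea (the option names and values below are illustrative, not from any specific NAS system), a simple CNN search space can be encoded as a set of per-decision choices, from which candidate architectures are sampled:

```python
import random

# Hypothetical CNN search space: one architecture = one choice per decision.
SEARCH_SPACE = {
    "num_filters": [16, 32, 64],  # channels produced by each conv layer
    "kernel_size": [3, 5, 7],     # spatial size of the conv kernel
    "depth":       [2, 4, 6],     # number of conv layers
}

def sample_architecture(rng=random):
    """Draw one architecture uniformly at random from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def space_size(space):
    """Total number of distinct architectures the space can represent."""
    size = 1
    for options in space.values():
        size *= len(options)
    return size

arch = sample_architecture()
print(arch)                       # e.g. {'num_filters': 32, 'kernel_size': 3, 'depth': 4}
print(space_size(SEARCH_SPACE))   # 27 distinct architectures in this toy space
```

Even this toy space already contains 27 architectures; realistic spaces grow combinatorially, which is why an efficient search strategy matters.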

Search Strategy

The search strategy defines the process used to explore the search space. Typical approaches include reinforcement learning (RL)-based methods, evolutionary algorithm (EA)-based methods, and gradient-based methods.

Performance Estimation

Lastly, performance estimation refers to the process of estimating the performance of a candidate neural network architecture. This is the most important part, as the search strategy uses it to navigate the search space.

The objective of any NAS method is to find a high-performing architecture for a given task, guided by performance estimation.
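Because fully training every candidate is prohibitively expensive, performance estimation in practice is usually a cheap proxy, for example training for only a few epochs or on a data subset. The sketch below illustrates this low-fidelity idea; the trainer is a synthetic stand-in (not a real training framework), used only so the example runs:

```python
def toy_train(arch, epochs):
    # Synthetic trainer: "accuracy" grows with the epoch budget and (weakly)
    # with model capacity, capped at 1.0. A real system would train a network.
    capacity = arch.get("depth", 1) * arch.get("filters", 1)
    return {"val_accuracy": min(1.0, 0.01 * epochs * capacity ** 0.25)}

def proxy_estimate(arch, train_fn, budget_epochs=5):
    """Low-fidelity performance estimation: train the candidate briefly and
    use its validation accuracy as a stand-in for fully trained accuracy."""
    model = train_fn(arch, epochs=budget_epochs)
    return model["val_accuracy"]

score = proxy_estimate({"depth": 4, "filters": 32}, toy_train)
print(score)
```

The key design trade-off is fidelity versus cost: a smaller budget evaluates more candidates per unit of compute, but its ranking of architectures agrees less reliably with full training.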

How the Search Strategy Works

A search strategy first selects an architecture and sends it to the performance estimation block, which returns a performance metric for that architecture (see the figure above). Based on this metric, the different search strategies update their process as follows:

Evolutionary algorithm (EA)-based NAS maintains a population of architectures and updates it based on the performance each architecture receives from the performance estimation process.

Reinforcement learning (RL)-based NAS uses an RL agent to sample architectures from the search space; the agent is updated according to the performance of the sampled architectures as determined by the performance estimation process.

Lastly, gradient-based methods start from a random neural architecture and update it using gradient information derived from the performance estimation process.
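The evolutionary loop described above can be sketched end to end. Everything here is a deliberately tiny, hypothetical example: the search space, the scoring function (a stand-in for real performance estimation), and the mutation rule are all illustrative, not taken from any published NAS method.

```python
import random

rng = random.Random(0)

# Illustrative search space and synthetic "optimum" (both hypothetical).
SPACE = {"filters": [16, 32, 64], "kernel": [3, 5, 7], "depth": [2, 4, 6]}
BEST = {"filters": 64, "kernel": 3, "depth": 6}

def sample():
    return {k: rng.choice(v) for k, v in SPACE.items()}

def estimate(arch):
    # Stand-in for performance estimation: fraction of decisions matching the
    # synthetic optimum. A real system would train and validate the network.
    return sum(arch[k] == BEST[k] for k in SPACE) / len(SPACE)

def mutate(arch):
    # Re-sample one randomly chosen decision of the parent architecture.
    child = dict(arch)
    key = rng.choice(list(SPACE))
    child[key] = rng.choice(SPACE[key])
    return child

# Evolutionary loop: rank the population by estimated performance, keep the
# top half (elitism), and refill with mutations of the current best.
population = [sample() for _ in range(8)]
for generation in range(30):
    ranked = sorted(population, key=estimate, reverse=True)
    population = ranked[:4] + [mutate(ranked[0]) for _ in range(4)]

best = max(population, key=estimate)
print(best, estimate(best))
```

Because the top-ranked architectures are always carried over, the best score in the population never decreases from one generation to the next; the mutations provide the exploration that lets the population improve.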

Conclusion

NAS is one of the fastest-growing subfields of AI and will help accelerate the spread of AI into areas beyond the standard computer science benchmarks. This is because NAS algorithms do not need a human expert in the target field to guide the search; they can perform it automatically from the available data alone.






© 2023 Machine Learning News Hubb All rights reserved.
