This article provides a general overview of Neural Architecture Search (NAS) methods.
In recent years, neural networks (NNs) have been instrumental in solving problems in various domains, such as computer vision and natural language processing. However, the architectures of these networks (such as AlexNet, ResNet, and VGGNet) have mainly been designed by humans, relying on their intuition and understanding of the specific problem.
This has led to growing interest in a new kind of algorithm that can design the architecture of a neural network automatically. This type of algorithm is known as “Neural Architecture Search,” or NAS for short.
NAS aims to replace this reliance on human intuition with an automated search for a suitable architecture for a given task.
An artificial neural network is based on a collection of connected units, or nodes, called artificial neurons (shown as yellow circles in Figure 1 below). These neurons are connected in a specific structure to perform a task.

Figure 1 shows two different neural network architectures, and we need to choose the better one for our task.
In earlier days, people designed these architectures by hand, which is very time-consuming. This manual work is now being replaced by Neural Architecture Search (NAS) algorithms.
Now, you might be wondering why you need to know about this topic.
The simple answer is that this technology is being used by big companies like Google and Microsoft to find AI solutions to problems that fall outside the classical computer science benchmarks.
For example, an AI system designed to perform face recognition cannot be used to predict the weather. So, for weather prediction, you have to design the network architecture all over again, which is time-consuming.
A better solution is to use a NAS algorithm to find the architecture automatically.
Any NAS method has three parts (as shown in Figure 2 below): a search space, a search strategy, and performance estimation.

Search Space
The search space defines the types of architecture that can be represented in principle. It determines the architectural landscape in which the search algorithm will carry out its search.
For example, for a convolutional neural network (CNN), the search space might include decisions such as the number of layers, the number of channels (filters) in each layer, and the kernel size of each filter.
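As a concrete illustration, here is a minimal sketch of such a search space in Python. The decision names and value ranges below are hypothetical choices made for this example, not taken from any particular NAS system:

```python
import random

# A hypothetical CNN search space: each key lists the allowed
# values for one architectural decision.
SEARCH_SPACE = {
    "num_layers": [2, 4, 6, 8],
    "num_filters": [16, 32, 64, 128],  # channels per conv layer
    "kernel_size": [3, 5, 7],
}

def sample_architecture(space):
    """Draw one random architecture, i.e. one point in the search space."""
    return {name: random.choice(options) for name, options in space.items()}

print(sample_architecture(SEARCH_SPACE))
# e.g. {'num_layers': 4, 'num_filters': 64, 'kernel_size': 3}
```

Even this tiny space contains 4 × 4 × 3 = 48 architectures; realistic search spaces are vastly larger, which is what makes an automated search strategy necessary.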
Search Strategy
The search strategy defines the process used to explore the search space. Typical approaches include reinforcement learning (RL)-based methods, evolutionary algorithm (EA)-based methods, and gradient-based methods.
Performance Estimation
Lastly, performance estimation refers to the process of estimating the performance of a neural network architecture, for example its accuracy on a validation set. This is a crucial component, as the search strategy relies on these estimates to navigate the search space. Because fully training every candidate is expensive, the performance is often estimated with a cheaper proxy, such as training for only a few epochs.
The objective of any NAS method is to find a high-performing architecture for a given task, guided by these performance estimates.
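To make this concrete, below is a minimal sketch of a performance estimator that plugs into the search space above. In a real NAS system this function would build the candidate network, train it briefly, and return its validation accuracy; the toy scoring rule here is a stand-in so the sketch runs on its own, and its preferences (for example, favoring 3x3 kernels) are invented purely for illustration:

```python
def estimate_performance(arch):
    """Estimate how well an architecture would perform.

    A real estimator would build the network described by `arch`,
    train it (or a cheap proxy of it, e.g. for a few epochs), and
    return its validation accuracy. The toy rule below merely
    pretends that mid-sized models with small kernels suit our
    imaginary task.
    """
    score = 1.0
    score -= abs(arch["num_filters"] - 64) / 256   # prefer ~64 filters
    score -= abs(arch["kernel_size"] - 3) / 10     # prefer 3x3 kernels
    score -= abs(arch["num_layers"] - 6) / 20      # prefer ~6 layers
    return score
```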
How the Search Strategy Works
Any search strategy first selects an architecture and sends it to the performance estimation block, which returns a performance metric for that architecture (see Figure 2 above). Based on this metric, the different search strategies update their process as follows (a minimal code sketch follows the three descriptions below):
Evolutionary algorithm (EA)-based NAS maintains a population of architectures and updates this population based on the performance of its members, as reported by the performance estimation process.
Reinforcement learning (RL)-based NAS uses an RL agent to sample architectures from the search space; the agent is updated depending on the performance of the sampled architectures, as determined by the performance estimation process.
Lastly, gradient-based methods begin with a random neural architecture and then update it using gradient information derived from the performance estimate.
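As promised, here is a minimal sketch of the first of these strategies: an evolutionary search loop built from the sample_architecture and estimate_performance helpers sketched above. The population size, tournament size, mutation rule, and iteration count are arbitrary illustrative choices:

```python
import random

def mutate(arch, space):
    """Create a child architecture by re-sampling one random decision."""
    child = dict(arch)
    key = random.choice(list(space))
    child[key] = random.choice(space[key])
    return child

def evolutionary_search(space, population_size=8, iterations=50):
    # Start from a random population and score every member.
    population = [sample_architecture(space) for _ in range(population_size)]
    scored = [(estimate_performance(a), a) for a in population]

    for _ in range(iterations):
        # Tournament selection: the best of three random members is the parent.
        parent = max(random.sample(scored, 3), key=lambda s: s[0])[1]
        child = mutate(parent, space)
        scored.append((estimate_performance(child), child))
        # Drop the worst member to keep the population size fixed.
        scored.remove(min(scored, key=lambda s: s[0]))

    return max(scored, key=lambda s: s[0])

best_score, best_arch = evolutionary_search(SEARCH_SPACE)
print(best_arch, best_score)
```

Each iteration replaces the worst member of the population with a mutated copy of a strong one. RL-based and gradient-based strategies differ mainly in how they use the same performance feedback to propose the next architecture.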
Conclusion
NAS is one of the fastest-growing subfields of AI and will help accelerate the adoption of AI in areas beyond the classical computer science benchmarks. This is because we no longer need a human expert in a specific field to guide the search process: NAS algorithms can perform the search automatically with the available data alone.