
Sensors: Achieving Safety and Accuracy Control in 21st-century Robotics

by admin | February 14, 2023 | Edge AI


With increasing demand for automation in the 21st century, robots have seen rapid and unprecedented growth across many industries, including logistics, warehousing, manufacturing, and food delivery.

Human-robot interaction (HRI), precise control, and safe collaboration between humans and robots are the cornerstones of adopting automation. In robotics, safety encompasses multiple tasks, with collision detection, obstacle avoidance, navigation and localization, force detection, and proximity detection being a few examples. All these tasks are enabled by a suite of sensors, including LiDAR, imaging/vision sensors (cameras), tactile sensors, and ultrasonic sensors. With the advancement of machine vision technology, cameras are becoming increasingly important in robots.

Working Principle of Sensors in Robotics – Vision Sensors/Cameras

CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor) sensors are the two common types of vision sensors. A CMOS sensor is a digital device that converts the charge of each pixel to a corresponding voltage, and it typically includes amplifiers, noise-correction, and digitization circuits. By contrast, a CCD sensor is an analog device containing an array of photosensitive sites. Although each has its strengths, with the development of CMOS technology, CMOS sensors are now widely considered the better fit for machine vision in robots thanks to their smaller footprint, lower cost, and lower power consumption compared with CCD sensors.

Vision sensors can be used for motion and distance estimation, object identification, and localization. Their key benefit is that they collect significantly more information at high resolution than other sensors such as LiDAR and ultrasonic sensors. The diagram below compares commonly used sensors across nine benchmarks. Vision sensors offer high resolution at low cost; however, they are inherently susceptible to adverse weather and lighting conditions, so other sensors are often needed to increase overall system robustness when robots work in unpredictable weather or difficult terrain. A more detailed analysis and comparison of these benchmarks is included in IDTechEx's latest report, "Sensors for Robotics 2023-2043: Technologies, Markets, and Forecasts".


Figure: Comparison of several commonly used sensors in robots.

How Are Vision Sensors Used for Safety in Mobile Robots?

Mobile robotics is one of the largest robotic applications in which cameras are used for object classification, safety, and navigation. Mobile robots primarily refer to automated guided vehicles (AGVs) and autonomous mobile robots (AMRs). However, many other robots, ranging from food delivery robots to autonomous agricultural robots (e.g., mowers), also rely on autonomous mobility. Autonomous mobility is an inherently complicated task requiring obstacle avoidance and collision detection.

Depth estimation is one of the key steps in obstacle avoidance. The task requires one or more input RGB images collected from vision sensors. These images are used to reconstruct a 3D point cloud with machine vision algorithms, thereby estimating the distance between an obstacle and the robot. At this stage (2023), the majority of mobile robots (e.g., AGVs, AMRs, food delivery robots, robotic vacuums) are still used indoors, such as in warehouses, factories, shopping malls, and restaurants, where the environment is well controlled with a stable internet connection and illumination. Under these conditions, cameras can achieve their best performance, and machine vision tasks can be performed in the cloud, significantly reducing the computational power required on the robot itself and thereby lowering cost.
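To make the depth-estimation step concrete, below is a minimal sketch of classical stereo depth estimation using OpenCV's block matcher. It assumes a calibrated, rectified stereo pair; the file names and camera parameters (FOCAL_LENGTH_PX, BASELINE_M) are illustrative placeholders, and production robots typically use more robust matchers or learned depth models.

import cv2
import numpy as np

# Assumed camera parameters -- in practice these come from calibration.
FOCAL_LENGTH_PX = 700.0   # focal length in pixels
BASELINE_M = 0.12         # distance between the two cameras, in meters

# Load a rectified grayscale stereo pair (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
# compute() returns fixed-point disparities scaled by 16; convert to pixels.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Convert disparity (pixels) to metric depth, masking invalid pixels.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]

# A robot could now flag any obstacle closer than a safety threshold.
if valid.any():
    print("nearest obstacle: %.2f m" % depth_m[valid].min())

The conversion relies on the standard relation depth = focal_length x baseline / disparity, which is why calibration quality directly bounds how accurately a robot can judge the distance to an obstacle.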

For grid-based AGVs, for example, cameras are only needed to monitor the magnetic tape or QR codes on the floor (a minimal detection sketch follows below). While this approach is widely used, it does not work well for outdoor sidewalk robots or inspection robots operating in areas with limited Wi-Fi coverage (e.g., under tree canopies). To solve this problem, in-camera computer vision is emerging: as the name indicates, all image processing is finished within the camera itself. Given the increasing demand for outdoor robots, IDTechEx believes that in-camera computer vision will be increasingly needed in the long term, especially for robots designed to work in difficult terrain and harsh environments (e.g., exploration robots). In the short term, however, the high power consumption of onboard computer vision, along with the high cost of chips, will likely hold back adoption. IDTechEx expects that many robot original equipment manufacturers (OEMs) will prefer to incorporate other sensors (e.g., ultrasonic sensors, LiDAR) as a first step to enhance the safety and robustness of their products' environment perception.
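As an illustration of the grid-based AGV case above, this sketch detects and decodes a floor QR code in each camera frame using OpenCV's built-in QRCodeDetector. The camera index and the "waypoint" interpretation of the decoded payload are assumptions for illustration, not any specific AGV vendor's protocol.

import cv2

cap = cv2.VideoCapture(0)          # assumed downward-facing onboard camera
detector = cv2.QRCodeDetector()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # detectAndDecode returns the decoded string (empty if nothing found)
    # and the corner points of the code in image coordinates.
    data, points, _ = detector.detectAndDecode(frame)
    if data:
        # Hypothetically, the payload identifies the grid cell, while the
        # corner positions in the image can drive lateral course correction.
        print("waypoint:", data, "corners:", points.reshape(-1, 2))

cap.release()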

Yulin Wang
Technology Analyst, IDTechEx




