Machine Learning News Hubb

Self-Driving Technology and Self-Driving Cars: When Will They Be on the Roads? | by Sasha Andrieiev | Jan, 2023

by admin
January 12, 2023
in Artificial Intelligence


The Future Comes a Little Later

In the near future, tens of thousands of self-driving cars may be on the roads. Big companies like BMW and Tesla continue to invest.

When self-driving cars first made headlines, their arrival seemed a matter of the near future. “You’ll be a permanent backseat driver in 2020,” the Guardian foresaw in 2015. “10 million self-driving cars will be on the road by 2020,” brayed a Forbes headline from 2017. Those declarations were accompanied by General Motors, Toyota, Waymo, and Honda announcing that they would have autonomous cars ready around 2020. As early as 2014, Elon Musk was promising that Teslas would drive themselves. Well, 2022 has come and gone, but self-driving cars haven’t arrived.

Despite extraordinary efforts from top automakers, which have spent at least $16 billion on self-driving technology, drivers have received only intermediate benefits. They can buy a car that brakes automatically, anticipates collisions, helps keep them in their lane, or even one whose autopilot handles most highway driving.

But the fully autonomous car is still nowhere to be seen. What happened? Here are answers to common questions about this long-promised technology and why the future has yet to arrive.

[Image: self-driving car (Jelvix)]

The optimistic scenarios for improving convenience and quality of life with self-driving cars are endless. They could change many industries by offering more flexibility, lowering operating costs, taking on dangerous missions, and enhancing consumer convenience.

The elderly and the disabled will be able to be freer and more independent. Parents can send their children to school and pets to the vet. City services can use autonomous vehicles for search and rescue operations and for logistics, such as delivering packages. Agriculture will appreciate the benefits of unmanned tractors and robotics for sowing, weeding, and harvesting.

Experts have identified three trends that could unleash autonomous cars’ potential: vehicle automation, electrification, and ridesharing. Adopted concurrently, these initiatives could, by 2050:

  • Cut costs for transportation by 40% (in terms of fuel and infrastructure);
  • Clear parking spaces near parks, schools, and shopping centers;
  • Save 50 minutes per day dedicated to driving;
  • Decrease global CO2 emissions by 80%;
  • Minimize traffic congestion by 30% (regarding the number of vehicles);
  • Improve walkability and livability;
  • Lower the cost of short-hop rides to enable new types of shared transport.

But several downsides complicate the transition to autonomous traffic.

  • Self-driving cars’ average cost is over $100,000;
  • Any malfunction or minor failure can lead to serious accidents;
  • Overall traffic may move more slowly;
  • Operating at full capacity is impossible without road and infrastructure upgrades;
  • The car’s software is a target for hackers, who could take control of the vehicle;
  • Professional drivers will lose their jobs as self-driving cars take over.

Gartner’s “hype cycle” may be a useful compass if you’re skeptical of self-driving’s biggest claims. According to Gartner, people project success too aggressively once a technology becomes possible. When the first flush of excitement passes, it is replaced by disappointment as the technology fails to meet inflated expectations within the promised timeframe. After early predictions became fodder for ridicule, self-driving technology is now moving toward its potential more modestly.

[Image: self-driving car on the road (Jelvix)]

A self-driving (autonomous, driverless, or robotic) car is a vehicle that can drive independently, sensing and navigating its surroundings without human aid. Typically such a technical marvel is equipped with an inertial navigation system, cameras, GPS units, and sensors such as lidar. The vehicle combines GPS fixes with inertial navigation data to refine its position estimate and build a 3D view of its environment.
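The GPS-plus-inertial blending described above can be sketched with a simple complementary filter. This is a toy illustration with made-up numbers, not any vendor’s actual estimator; production systems typically use Kalman-style filters:

```python
# Minimal sketch: blending a noisy GPS fix with a dead-reckoned inertial
# (INS) position estimate. All values here are illustrative.

def fuse_position(gps_pos, ins_pos, gps_weight=0.2):
    """Blend a GPS fix with an inertial dead-reckoning estimate.

    A low gps_weight trusts the smooth INS estimate between fixes,
    while still letting GPS correct the INS's long-term drift.
    """
    return tuple(
        gps_weight * g + (1.0 - gps_weight) * i
        for g, i in zip(gps_pos, ins_pos)
    )

# The INS has drifted 2 m east of the GPS fix; fusion pulls it back.
fused = fuse_position(gps_pos=(100.0, 50.0), ins_pos=(102.0, 50.0))
print(fused)  # approximately (101.6, 50.0)
```

Real estimators also weigh each source by its reported uncertainty, but the blending idea is the same.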

Autonomous driving is commonly described in terms of the automation levels defined by the Society of Automotive Engineers (SAE):

[Image: SAE self-driving automation levels (Jelvix)]

At this point, Level 3 is the highest level humanity has achieved in production cars, so there are no fully autonomous vehicles yet.

The basic notion is clear: outfit a vehicle with cameras that track the objects around it and make it react if it’s about to steer into them. Additionally, teach in-car computers the road rules and set them loose to navigate to their destination.

But numerous attempts have shown that everything is much more complicated. Driving is one of the trickiest activities humans routinely perform, and following the rules isn’t enough to drive as a human does. People drive through visual contact with other drivers and pedestrians, reacting to weather conditions, and making split-second decisions in situations that are impossible to encode in hard-and-fast rules.

Even the seemingly uncomplicated parts of driving, such as object tracking, are much trickier than they sound. Take Waymo, Google’s sister company under Alphabet and the industry leader in uncrewed cars. Waymo’s vehicles, like other self-driving cars, use high-resolution cameras and lidar, estimating distances by bouncing light pulses off surrounding objects.

The car’s computers combine all this to show where other vehicles, pedestrians, cyclists, and obstacles are and where they’re moving. This part requires lots of training data: the algorithm must draw on billions of miles of gathered driving data to form expectations about how any object might move.
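As a toy illustration of forming expectations about how a tracked object might move (not Waymo’s actual models, which are learned from driving data), here is the simplest possible motion predictor: constant-velocity extrapolation from two observations.

```python
# Constant-velocity motion prediction: given two recent positions of a
# tracked object, extrapolate where it will be one time step later.
# Production perception stacks learn far richer motion models from data.

def predict_next_position(prev, curr, dt=1.0):
    """Extrapolate the next (x, y) position assuming constant velocity."""
    vx = (curr[0] - prev[0]) / dt  # estimated velocity from two observations
    vy = (curr[1] - prev[1]) / dt
    return (curr[0] + vx * dt, curr[1] + vy * dt)

# A cyclist seen at (0, 0) and then (1, 2) is expected near (2, 4).
print(predict_next_position((0.0, 0.0), (1.0, 2.0)))  # (2.0, 4.0)
```

Even this trivial model shows why training data matters: a constant-velocity assumption fails exactly in the cases that matter most, such as a pedestrian suddenly changing direction.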

It’s tough to get enough training data on real roads, so the vehicles also train on simulation data, and engineers work to make their AI systems generalize correctly from simulation to the real world. That’s far from a thorough description of the systems at work in a self-driving car, but it suggests an answer to where our self-driving vehicles are: even the “easy” things hide surprising complexity.

[Image: self-driving Volvo (Jelvix)]

Despite the difficulty, companies continue to invest because self-driving cars will change the world — and bring their creators a lot of money. Almost every major car maker is trying to find its own way of creating autonomy. But some of them are much more serious about it than others.

Mercedes recently introduced a Level 3 autonomous car to consumers. The car is currently only available in Germany and is equipped with the updated Drive Pilot autonomous driving technology. Level 3 systems allow the vehicle to respond to its environment without needing the driver to take control. This is called conditionally automated driving (but you still cannot nap while sitting in the driver’s seat).

The iDrive 8 system in the 2022 BMW iX brings standout features and vastly improved graphics. But the model iX has also taken a step towards automated driving and parking functions. BMW claims to take its car to level 3 functionality with next-generation sensors and software. In total, iX has five cameras, five radar sensors, and twelve ultrasonic sensors to monitor the environment. It also features the most advanced standard driver assistance systems ever used in a BMW vehicle.

The Tesla Model 3 is an example of a semi-autonomous (Level 2) system already in use and available on the market. Tesla’s Autopilot originally used both cameras and radar for adaptive cruise control, but the automaker recently phased out radar in its new Tesla Vision system. The system now relies on cameras and neural network processing alone, and all new Model 3 and Model Y vehicles are radar-less.

Ford is behind other brands in developing autonomous driving technologies. Its BlueCruise system, formerly Active Drive Assist, is available in the eye-catching new Ford F-150 and Mustang Mach-E models. The system is built around Intelligent Adaptive Cruise Control with Stop-and-Go, Lane Centering, and Speed Sign Recognition, and it includes all sorts of handy features that allow hands-free driving on certain stretches of road Ford calls Hands-Free Blue Zones.

[Image: self-driving car on a road (Jelvix)]

Google’s parent company, Alphabet, is rapidly moving toward an autonomous vehicle in which an intelligent autopilot completely replaces the driver. The autopilot can drive at highway speeds and in stop-and-go traffic, adjust its speed to traffic conditions, steer around curves within a lane, and change route at the rider’s request.

To date, Waymo vehicles are the closest to being fully autonomous and have been tested as taxi services in Arizona, US.

An unmanned car is a cyber-physical system: it combines physical components (motors, transmission systems, actuators, etc.) with software that forms the system’s core and coordinates all its elements. AI-powered driverless assistance systems make intelligent decisions by:

  • Maintaining an internal map of the surroundings;
  • Using this map to find the best path to the destination while avoiding obstacles such as road structures, pedestrians, and other vehicles;
  • Breaking the chosen path down into commands that are transmitted to the vehicle’s actuators;
  • Letting those actuators control the vehicle’s steering, throttle, and brakes. The cycle of mapping, obstacle avoidance, localization, and path planning is repeated by the powerful onboard processor until the vehicle reaches its destination.
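The repeated cycle above can be sketched as a sense-plan-act loop. Every name in this sketch is a hypothetical stand-in, not part of any real vehicle stack:

```python
# Sketch of the repeated sense → plan → act cycle: build an internal map
# from sensors, plan toward the destination, and command the actuators,
# looping until the vehicle arrives.

def drive_to(destination, sense, plan, act, at_destination, max_steps=1000):
    """Repeat the mapping/planning/actuation cycle until arrival."""
    for _ in range(max_steps):
        world = sense()                      # internal map of surroundings
        if at_destination(world, destination):
            return True                      # arrived
        commands = plan(world, destination)  # e.g. steering/throttle/brake
        act(commands)                        # forward commands to actuators
    return False                             # gave up after max_steps

# Toy 1-D demo: the "car" advances one meter per step toward position 5.
state = {"pos": 0}
arrived = drive_to(
    destination=5,
    sense=lambda: dict(state),
    plan=lambda world, dest: {"throttle": 1},
    act=lambda cmd: state.update(pos=state["pos"] + cmd["throttle"]),
    at_destination=lambda world, dest: world["pos"] >= dest,
)
print(arrived)  # True
```

The real difficulty is hidden inside `sense` and `plan`; the loop itself is the easy part.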
[Image: self-driving car sensors (Jelvix)]

Depending on their budget and operating constraints, car manufacturers use a variety of actuators, sensors, and robust processors, but the process is typically similar.

  • Radar sensors monitor the positions of nearby objects;
  • Ultrasonic sensors identify curbs and other cars when parking;
  • Video cameras observe traffic lights, track other vehicles, recognize road signs, and spot pedestrians;
  • Lidar sensors bounce light pulses off the car’s surroundings to detect road edges and measure distances.
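The lidar ranging in the last bullet reduces to a time-of-flight calculation: a pulse travels to the target and back, so distance d = c * t / 2. A minimal sketch with an illustrative pulse time:

```python
# Lidar time-of-flight ranging: the round-trip travel time of a light
# pulse gives the distance to the target, d = c * t / 2.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_seconds):
    """Distance to the target from a pulse's round-trip travel time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after 200 ns corresponds to a target roughly 30 m away.
print(round(lidar_distance(200e-9), 2))  # 29.98
```

The tiny times involved are why lidar needs picosecond-accurate timing electronics to resolve centimeters.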

That sensory information is analyzed by complex software, which calculates the course and delivers commands to the actuators, which control acceleration, braking, and steering. Predictive modeling, obstacle avoidance algorithms, and object recognition help the car follow traffic laws and overcome obstacles.

Computers in unmanned cars analyze vast amounts of data, but engineers can’t write rules for every situation a vehicle might encounter on public roads. So they use machine learning to prepare the system for contingencies. For example, they can feed the system thousands of photographs of people crossing roads so that it learns the subtle variations.
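As a toy illustration of learning from labeled examples rather than hand-written rules, here is a nearest-centroid classifier over invented 2-D “features.” Real pedestrian detectors are deep networks trained on millions of labeled images; this only shows the underlying idea of generalizing from examples:

```python
# Nearest-centroid classification: average the feature vectors of each
# labeled class during training, then assign a new sample to the class
# whose centroid is closest. Features and values below are hypothetical.

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Return the label whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], vec))

# Hypothetical features: (height/width ratio, vertical symmetry score).
training = [
    ((2.8, 0.9), "pedestrian"), ((3.1, 0.8), "pedestrian"),
    ((0.5, 0.3), "vehicle"), ((0.4, 0.4), "vehicle"),
]
model = train_centroids(training)
print(classify(model, (2.9, 0.85)))  # pedestrian
```

The point is that no rule for “pedestrian” was ever written down; the boundary emerges from the labeled examples, which is why annotation quality matters so much.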

[Image: self-driving technology (Jelvix)]

Fully autonomous vehicles are being tested in many countries worldwide, but they are not yet available to the general public. Why? Because many technological, legislative, and environmental problems remain. Here are just a few.

Google and other autonomous car makers report that weather has been a main cause of system failures requiring drivers to take control. Rain and snow can interfere with roof-mounted laser sensors and cameras. This is yet another problem that still needs to be solved.

Self-driving cars use sophisticated cameras to read traffic signs. However, signs covered in graffiti and stickers, or otherwise damaged, can confuse these systems, so engineers must train their models to recognize defaced signs correctly.

Traffic is a system that can produce situations with no clean solution. Will autonomous cars have problems in tunnels or on bridges? How will they behave in rush-hour traffic? Will driverless vehicles be restricted to a dedicated lane?

The latest blueprints suggest that a fully autonomous Level 5 vehicle would have no dashboard or steering wheel, so there would be no way for a human passenger even to take control of the car in an emergency. Then who will be held responsible for accidents caused by a self-driving car? Passenger or manufacturer?

One more drawback of self-driving cars is their difficulty making judgments between multiple unfavorable outcomes. For example, what if a self-driving vehicle faced only two options: veering to the left and striking a pedestrian, or veering to the right and crashing into a tree, potentially injuring its passengers? Since both options are undesirable, which would the autonomous car choose?

Lidar prices have come down, but the technology still has to balance range against resolution. Another open question is whether lidar signals will interfere with one another when multiple driverless vehicles share the same road.

Generally, human drivers rely heavily on subtle cues and non-verbal signals to make split-second decisions and predict behavior in ways that a self-driving car cannot. Face-reading AI has already proven itself in identification and security use cases. Can it be applied to this problem for autonomous vehicles?

As the technology advances, federal, state, and municipal governments are analyzing, discussing, and addressing the potential challenges of this evolving transportation industry. Some are concerned that self-driving vehicles pose a risk to other drivers and pedestrians, especially while self-driving and traditional cars share the road. Others worry about cybersecurity and personal privacy. It also remains an open question to what extent “connected vehicles” should communicate with other cars, and what infrastructure is necessary and feasible to maximize benefits and safety.

As we can see, the technology needs further improvement and refinement until the systems reach satisfactory precision and near risk-free operation. We at Jelvix offer many related services in the area of autonomous driving.

Human-annotated data is more accurate and of higher quality than machine-annotated data. To ensure a superb machine learning experience, Jelvix has specialists with the skills to provide high-quality data annotation services: 2D bounding boxes, semantic segmentation, panoptic segmentation, polygons, labeling, and point and landmark annotations. Our specialists process images so that vehicles can see the world as humans do.

The accuracy of video annotation and object tracking depends on your annotators. Our data annotation experts work with the top annotation tools for computer vision. We annotate videos by converting them into frames and reconstructing them after annotation is complete. With Jelvix experts, you’ll get consistent object tracking across labeled videos.

If you need to systematize and process large files and collections of files, delegate these tasks to a team of Jelvix data annotation experts. Our teams can provide classification, named-entity recognition, relationship extraction, keyword tagging, sentiment analysis, and text categorization.

Today, many companies are looking for 3D point-cloud processing. We can offer a data labeling team to collect model data points from the real world in three dimensions. Jelvix experts can also apply 3D segmentation to identify an object’s motion in a video and draw 3D cuboids around objects to detect them.








© 2023 Machine Learning News Hubb All rights reserved.
