Applied Deep Learning and Computer Vision for Self-Driving Cars
Sumit Ranjan; Dr. S. Senthamilarasu
Deep learning and computer vision approaches for SDCs
Perhaps the most exciting new technology in the world today is deep neural networks, especially convolutional neural networks, known collectively as deep learning. These networks are conquering some of the most challenging problems in AI and pattern recognition. Thanks to the rise in computational power, milestones in AI have been reached with increasing frequency in recent years, often exceeding human capabilities. Deep learning offers some compelling properties, such as the ability to learn complex mapping functions automatically and to scale with data. In many real-world applications, such as large-scale image classification and recognition tasks, these properties are essential. Most machine learning algorithms plateau after a certain point, whereas deep neural networks continue to improve as they are given more and more data. The deep neural network is probably the only machine learning algorithm that can leverage the enormous amounts of training data produced by autonomous car sensors.
Using various sensor fusion algorithms, many autonomous car manufacturers are developing their own solutions, such as Google's LIDAR and Tesla's purpose-built computer, a chip specifically optimized for running neural networks.
Neural network systems have improved markedly at image recognition over the past several years, and on some benchmarks they have exceeded human capabilities.
SDCs process this sensory data and use it to make informed decisions, performing tasks such as the following:
Lane detection: This is useful for driving correctly, as the car needs to know which side of the road it is on. Lane detection also makes it easier to follow a curved road (a minimal detection sketch appears after this list).
Road sign recognition: The system must recognize road signs and be able to act accordingly.
Pedestrian detection: The system must detect pedestrians as it drives through a scene. It needs to know whether an object is a pedestrian so that it can place more emphasis on not hitting people, driving more carefully around pedestrians than around less important objects, such as litter.
Traffic light detection: The vehicle needs to detect and recognize traffic lights so that, just like human drivers, it can comply with road rules.
Car detection: The presence of other cars in the environment must also be detected.
Face recognition: An SDC may need to identify and recognize the driver's face, other people inside the car, and perhaps even people outside it. If the vehicle is connected to a suitable network, it can match those faces against a database, for example to recognize car thieves.
Obstacle detection: Obstacles can be detected using other means, such as ultrasound, but the car also needs to use its camera systems to identify any obstacles.
Vehicle action recognition: The vehicle should know how to interact with other drivers since autonomous cars will drive alongside non-autonomous cars for many years to come.
The list of requirements goes on. Indeed, deep learning systems are compelling tools, but there are certain properties that can affect their practicality, particularly when it comes to autonomous cars. We will implement solutions for all of these problems in later chapters.
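To give a flavor of how the first item on the preceding list, lane detection, is commonly tackled with classical computer vision, here is a minimal sketch using OpenCV's Canny edge detector and probabilistic Hough transform. The region-of-interest polygon and all thresholds are illustrative assumptions that would need tuning for a real camera; the full pipelines are developed in later chapters.

```python
import cv2
import numpy as np

def detect_lane_segments(frame_bgr):
    """Return candidate lane-marking line segments for one camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)          # thresholds are assumptions

    # Keep only a trapezoid roughly covering the road ahead; the corner
    # fractions below are guesses that depend on the camera mounting.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w, h),
                     (int(0.55 * w), int(0.6 * h)),
                     (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    masked_edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform turns edge pixels into line segments.
    segments = cv2.HoughLinesP(masked_edges, rho=2, theta=np.pi / 180,
                               threshold=50, minLineLength=40, maxLineGap=100)
    return segments
```

In a complete pipeline, the returned segments would typically be split into left and right groups, averaged into two lane lines, and drawn back onto the frame.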
LIDAR and computer vision for SDC vision
Some people may be surprised to learn that early-generation cars from Google barely used their cameras. The LIDAR sensor is useful, but it cannot perceive color or lights, so the camera was used mostly to recognize things such as red and green traffic lights.
Google has since become one of the world's leading players in neural network technology and has made a substantial effort to fuse data from LIDARs, cameras, and other sensors. Neural-network-based sensor fusion is likely to serve Google's vehicles very well. Other firms, such as Daimler, have also demonstrated an excellent ability to fuse camera and LIDAR information. LIDARs work today and are expected to become cheaper; however, we have still not crossed the threshold needed to make the leap toward new neural network technology.
One of the shortcomings of LIDAR is its relatively low resolution: while it is very unlikely to miss an object in front of the car, it may not be able to work out what exactly that obstacle is. We have already seen, in the section The cheapest computer and hardware, how fusing the camera (processed with convolutional neural networks) with LIDAR makes these systems much better in this area; knowing and recognizing what things are allows better predictions about where they are going to be in the future.
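To make the fusion idea more concrete, the geometric part of camera-LIDAR fusion amounts to projecting each LIDAR point into the camera image so that it can be associated with the pixels (and hence the object classes) an image network sees there. The sketch below is a minimal version of that projection; T_cam_lidar and K are hypothetical placeholders for a calibrated 4x4 LIDAR-to-camera extrinsic transform and a 3x3 camera intrinsic matrix obtained from an offline calibration.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project N x 3 LIDAR points into pixel coordinates.

    T_cam_lidar: assumed 4x4 extrinsic transform (LIDAR frame -> camera frame).
    K:           assumed 3x3 camera intrinsic matrix.
    Returns pixel coordinates (M x 2) and depths (M,) for points in front of
    the camera.
    """
    n = points_lidar.shape[0]
    points_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous, N x 4
    points_cam = (T_cam_lidar @ points_h.T).T[:, :3]        # N x 3, camera frame

    in_front = points_cam[:, 2] > 0                          # drop points behind the camera
    points_cam = points_cam[in_front]

    pixels = (K @ points_cam.T).T                            # M x 3
    pixels = pixels[:, :2] / pixels[:, 2:3]                  # perspective divide
    return pixels, points_cam[:, 2]
```

Once each LIDAR point has a pixel coordinate, the class label predicted by the image network at that pixel can be attached to the point, which is one simple way to label a sparse depth measurement with what it actually is.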
Many people claim that computer vision systems will be good enough to allow a car to drive on any road without a map, in the same manner as a human being. In practice, this approach applies mostly to very simple roads, such as highways, which are uniform in layout and easy to interpret. Autonomous systems are not inherently intended to function as human beings do. The vision system plays an important role because it can classify objects well enough, but maps are important and cannot be neglected; without that data, we might end up driving down unknown roads.