Access to robotaxis is constantly expanding. Just this month, Waymo and Uber opened an interest list in the Uber app for Austin riders. Despite their growing presence, most people do not fully understand the different levels of automation in autonomous vehicles (AVs) and see them as a black box that, for all intents and purposes, works through magic. To continue building public trust, more must be done to educate consumers about what lies under the hood of autonomous vehicles. This article presents a high-level architecture that explains the inner workings of AVs.
High-level architecture for autonomous vehicles. Items in green are inputs and those in blue are …
When driving, we use our senses to observe our surroundings. We combine inputs from what we see and hear to paint a picture of where we are and what is happening around us. Using the current state of the road and all its users, we create a plan to reach our destination in a safe and efficient way. We finally act on that plan through inputs to our steering wheel and pedals. Autonomous vehicles, like many other robotics applications, follow a similar process that can be described in four main subsystems: perception, state estimation, planning and prediction, and control.
Perception
A visual comparison of camera, lidar and radar data
When driving, we use our senses to observe our surroundings. Each autonomous vehicle company has its own preferences for which sensors to use and where to place them on the car; however, three sensors are most commonly used in the industry: cameras, radar, and lidar. By combining the inputs of these sensors, a vehicle can paint a complete picture of the driving environment.
Cameras offer some of the richest information about a vehicle's surroundings. For example, when observing a pedestrian at a crosswalk, a camera can show which direction they are facing, their facial expressions, and their body language, details that are important in determining where that road user may go next. Despite all this information, cameras struggle to operate in low-light or overly bright environments. Radar can detect objects in the dark and can measure the exact distance to objects and the speed at which they are moving. However, radar can have a difficult time distinguishing two pedestrians walking near each other. This can degrade pedestrian tracking, which can lead to incorrect decisions in navigation and safety responses.
Lidar emits lasers and measures the time it takes for each laser pulse to reflect back to the vehicle. Its output is a detailed 3D map of the world, where each point is a measurement of how far away that object is. These maps paint an accurate picture of the world that helps the car detect and avoid the objects around it. Despite its accuracy, lidar can struggle in rain or snow, as its lasers can reflect off individual raindrops, distorting the image the vehicle sees. Moreover, lidar is the most expensive of the three sensors; however, there is reason to be hopeful that the cost of these sensors will continue to decline as autonomous vehicles become more common.
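To make the time-of-flight idea concrete, here is a minimal sketch that converts a pulse's round-trip time into a range; it assumes an idealized single pulse, and the function name and 200-nanosecond example are illustrative, not any vendor's API.

```python
# Minimal sketch of lidar time-of-flight ranging (illustrative only).
SPEED_OF_LIGHT_M_S = 299_792_458  # meters per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to a reflecting object from the laser's round-trip time.

    The pulse travels to the object and back, so the one-way
    distance is half of the total distance covered.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A return after 200 nanoseconds corresponds to an object ~30 m away.
print(f"{range_from_time_of_flight(200e-9):.1f} m")  # -> 30.0 m
```

Repeating this measurement millions of times per second across many laser beams is what builds the point cloud shown in the comparison above.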
Despite the different strengths and weaknesses of cameras, radar, and lidar, when combined they create a more robust and reliable AV. Inputs from all three sensors feed into a machine learning algorithm that labels objects into important categories such as vehicles, pedestrians, cyclists, motorcycles, construction zones, and lane closures.
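As a hedged sketch of how per-sensor detections might be combined, the toy example below clusters detections that land close together and keeps the most confident label per cluster. Real AV stacks use learned fusion models; every class, function, and threshold here is a hypothetical stand-in.

```python
# Toy "late fusion": each sensor proposes detections with a label and a
# confidence, and the fusion step merges detections that agree on location.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str       # "camera", "radar", or "lidar"
    x: float          # position in the vehicle frame, meters
    y: float
    label: str        # e.g. "pedestrian", "vehicle", "cyclist"
    confidence: float

def fuse(detections: list[Detection], match_radius_m: float = 1.0) -> list[dict]:
    """Greedily cluster nearby detections, then keep the highest-confidence
    label per cluster. A simple stand-in for learned sensor fusion."""
    clusters: list[list[Detection]] = []
    for det in detections:
        for cluster in clusters:
            ref = cluster[0]
            if (abs(det.x - ref.x) <= match_radius_m
                    and abs(det.y - ref.y) <= match_radius_m):
                cluster.append(det)
                break
        else:
            clusters.append([det])
    fused = []
    for cluster in clusters:
        best = max(cluster, key=lambda d: d.confidence)
        fused.append({"x": best.x, "y": best.y, "label": best.label,
                      "sensors": sorted({d.sensor for d in cluster})})
    return fused

# Camera and lidar agree on a pedestrian; radar sees the object but is unsure.
print(fuse([
    Detection("camera", 12.1, 3.0, "pedestrian", 0.90),
    Detection("lidar", 12.0, 3.1, "pedestrian", 0.85),
    Detection("radar", 12.2, 2.9, "vehicle", 0.40),
]))
```

The point of the example is the principle: a detection confirmed by multiple sensors is more trustworthy than any single sensor's view, which is exactly why combining cameras, radar, and lidar yields a more reliable AV.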
State Estimation
When driving, we combine inputs from what we see, hear, and feel to paint a picture of where we are and what is happening around us. State estimation serves the same purpose. A big part of state estimation is localization, the process by which the vehicle determines exactly where it is. Localization uses GPS to find a rough location on the map, camera data to figure out which lane the vehicle is currently in, and, if needed, a comparison of the vehicle's current position against a previously recorded high-definition map to pin down a known location with centimeter-level accuracy. For us, lacking state estimation would be the equivalent of trying to drive with no idea of where we are while experiencing vertigo. State estimation also uses the information collected from perception to determine where all other road users are relative to the vehicle's position and where they are moving. All of this is useful for planning what the vehicle should do.
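To illustrate the flavor of localization, here is a minimal one-dimensional Kalman filter that blends a wheel-odometry motion estimate with a noisy GPS fix; the noise values are made up for the example, and production localization fuses far more signals, including lidar matching against high-definition maps.

```python
# Toy 1-D localization: predict with odometry, correct with GPS.

def kalman_step(position: float, variance: float,
                odometry_delta: float, odometry_var: float,
                gps_measurement: float, gps_var: float) -> tuple[float, float]:
    # Predict: move by the odometry estimate; uncertainty grows.
    position += odometry_delta
    variance += odometry_var
    # Update: blend in the GPS fix, weighted by relative certainty.
    gain = variance / (variance + gps_var)
    position += gain * (gps_measurement - position)
    variance *= (1 - gain)
    return position, variance

pos, var = 0.0, 1.0
for gps in [1.1, 2.0, 2.9]:  # GPS fixes arriving after each 1 m step
    pos, var = kalman_step(pos, var, odometry_delta=1.0, odometry_var=0.04,
                           gps_measurement=gps, gps_var=4.0)
    print(f"position ~ {pos:.2f} m, variance ~ {var:.3f}")
```

Because the GPS variance is large relative to the odometry variance, the filter leans on the motion estimate and only nudges the position toward each fix, which is how a vehicle can stay precisely localized even when any single signal is noisy.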
Planning and Prediction
When driving, we observe the current state of the road and all its users to create a plan to reach our destination in the safest, most efficient way. State estimation helps the vehicle understand its position and the surrounding environment. Planning and prediction use that knowledge to estimate how the environment will change over the next few seconds and to determine what route the vehicle should take in response. Planning operates at both a high level and a low level. At a high level, a vehicle must determine which roads to take to reach its destination. This works similarly to how we use a map to get around. This plan may incorporate traffic data and road-closure information to find the most efficient route at the time. At a low level, the planner makes shorter-horizon decisions: which lane the vehicle should be in, how fast it should go based on the vehicles around it, and split-second maneuvers to avoid obstacles. The planner acts as the brain that decides what the vehicle should do.
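As a simplified sketch of low-level planning and prediction working together, the example below forecasts a neighboring car with a constant-velocity model and checks whether a lane change keeps a safe gap over a short horizon; the horizon, gap threshold, and function names are assumptions for illustration, while real planners use richer, learned predictions.

```python
# Toy prediction + maneuver check: assume other road users keep their
# current speed, and reject a lane change if the predicted gap gets small.

def predict_position(x: float, v: float, t: float) -> float:
    """Forecast along-lane position under a constant-velocity model."""
    return x + v * t

def lane_change_is_safe(ego_x: float, ego_v: float,
                        neighbor_x: float, neighbor_v: float,
                        horizon_s: float = 3.0, min_gap_m: float = 10.0) -> bool:
    """Check the predicted gap at half-second steps over the horizon."""
    steps = int(horizon_s / 0.5)
    for i in range(steps + 1):
        t = i * 0.5
        gap = abs(predict_position(neighbor_x, neighbor_v, t)
                  - predict_position(ego_x, ego_v, t))
        if gap < min_gap_m:
            return False
    return True

# A faster car 20 m behind in the target lane closes the gap within 3 s,
# so the planner would wait rather than change lanes now.
print(lane_change_is_safe(ego_x=0.0, ego_v=15.0,
                          neighbor_x=-20.0, neighbor_v=20.0))  # -> False
```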
Control
When driving, we act on our plan through inputs to our steering wheel and pedals. Control is responsible for translating the planner's chosen actions into reality. For humans, control while driving is a subconscious process refined over years of experience. When we first learn to drive, we may accelerate and brake sharply until we become familiar with the right amount of input to our pedals. The planner for an autonomous vehicle can specify that the vehicle should speed up to 45 mph, but control turns that into precise throttle output to ensure a smooth ride, while also not accelerating so slowly as to disturb other drivers. The same is true for steering. The vehicle may want to change lanes, but if it turns too quickly, it can overshoot into another lane or lose control altogether. If you ride in a vehicle with a friend and end up carsick by the end of the trip, your friend is not necessarily a bad driver; they are just bad at control.
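One classic way to achieve this kind of smooth tracking is a PID controller, sketched below for speed; the gains, the toy vehicle response, and the clamping to a 0-to-1 throttle range are illustrative assumptions, not any real vehicle's tuning.

```python
# Minimal PID speed controller sketch: the planner requests a target speed
# and the controller converts the speed error into a throttle command.

class PIDController:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target: float, measured: float, dt: float) -> float:
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        command = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(1.0, command))  # clamp throttle to [0, 1]

controller = PIDController(kp=0.08, ki=0.01, kd=0.02)
speed = 0.0
for _ in range(10):                  # simulate 10 control ticks at 10 Hz
    throttle = controller.step(target=45.0, measured=speed, dt=0.1)
    speed += throttle * 3.0          # toy vehicle response: throttle -> accel
    print(f"speed {speed:5.2f} mph, throttle {throttle:.2f}")
```

The gains determine the ride quality: too aggressive and passengers feel every correction, too timid and the vehicle lags behind traffic, which is the same trade-off a new human driver learns by feel.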
Final thoughts
Autonomous vehicles are very complex systems. AV companies may group tasks under different subsystems or call them different names altogether. The performance of each subsystem makes a major difference in an AV's safety and its ability to navigate more complex scenarios. Moreover, there are many other systems, such as fault handling and teleoperation, that are just as important, highly complex, and deserving of their own discussion. This breakdown provides a high-level description aimed at helping the public understand how autonomous vehicles operate so they can be better informed about the safety of the technology. While companies share excellent research explaining their systems and their performance, I encourage AV companies to share additional descriptions of the inner workings of AVs to continue building public trust.