Application of ADAS in car navigation equipment
First, let's cover the basic concepts. ADAS stands for Advanced Driver Assistance Systems. With continuing urbanization and rising living standards, there are more and more cars on the road, and, sadly, more and more traffic accidents as well. The causes are complex: poor safety awareness, illegal driving, road conditions, fatigue, and so on. The demand for a system that assists the driver has therefore grown stronger and stronger. We all know that a capable passenger in the co-driver's seat can help the driver watch the traffic ahead and occasionally remind the driver to drive safely and keep a safe following distance; with such a passenger on board, the proportion of traffic accidents is lower than without one. But sometimes there simply is no co-pilot and the driver is alone. In that case, treat ADAS as a 24/7 security guard, your co-pilot! With that said, let's look at how ADAS can be implemented on the Android system.
Most current ADAS solutions are based on image analysis: a camera captures the driving scene in front of the vehicle, and an algorithmic model analyzes the digitized frames to determine the position and size of the vehicle ahead and the location of the lane lines. The algorithm itself is not the focus of our discussion; there are specialists doing in-depth research on it. We mainly apply it in production practice.
From the above, it is clear that the camera is quite critical. The clarity, brightness, saturation, and contrast of the original image all affect vehicle detection and directly influence the analysis results. So when choosing a camera, we need a relatively high resolution. In addition, its field of view (both horizontal and vertical angles), the quality of the lens, the size of the sensor, the focal length and back focal length, and the position and size of the camera's exposure window all directly determine the quality of image acquisition. In theory, the better the quality of the captured images, the more accurate the algorithm's analysis.
Having covered the parameters that influence the image source, let's discuss some matters related to algorithm processing. At present, most ADAS deployments need to be calibrated before use, that is, tailored to the vehicle, a bit like personal customization. Put simply, calibration adjusts the parameters the algorithm uses according to the actual situation of your car: the width of the car, the length of the hood, the mounting height of the camera, and so on. The vehicle speed the algorithm needs can be taken directly from GPS; if possible, it is better to read the car's own speed through the OBD interface. In special places such as underground passages and tunnels, GPS satellite signals may not be received, which degrades the results; usually only inertial (dead-reckoning) navigation can be used there, a last-resort remedy.
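As a minimal sketch of the calibration and speed-source ideas above, the snippet below defines a hypothetical calibration record and a speed selector that prefers OBD over GPS. The struct fields and function names are illustrative placeholders, not part of any real ADAS SDK.

```cpp
#include <cstdint>

// Hypothetical per-vehicle calibration record: field names are
// illustrative, not from any specific ADAS SDK.
struct AdasCalibration {
    float vehicle_width_m;   // outer width of the car body
    float hood_length_m;     // distance from the camera to the front bumper
    float camera_height_m;   // camera mount height above the ground
};

// Speed source preference, per the article: OBD (the car's own speed)
// when available, otherwise fall back to GPS. In tunnels or underground
// passages GPS may have no fix, so the caller passes gps_valid = false.
float selectSpeedKmh(float obd_kmh, bool obd_valid,
                     float gps_kmh, bool gps_valid) {
    if (obd_valid) return obd_kmh;   // preferred: true vehicle speed
    if (gps_valid) return gps_kmh;   // fallback: GPS-derived speed
    return -1.0f;                    // no source: caller must dead-reckon
}
```

A negative return value signals that neither source is valid, which is where the inertial/dead-reckoning fallback mentioned above would take over.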
On the Android system, there is a ready-made camera processing module. In-vehicle solutions generally already include driving-recorder (dash-cam) functionality, so ADAS adds no extra hardware cost: it can share the frames the recorder captures. The work then happens at the camera HAL layer. Anyone familiar with camera processing knows there is a capture thread inside. We could run the ADAS processing in that thread, but ADAS is generally CPU-intensive and not that fast: a typical driving recorder reaches 25 fps, while ADAS usually cannot process that much data. So we create a secondary working buffer queue, separate from the camera's original queue and fed from it. If ADAS has not yet consumed a buffer, the secondary queue still accepts the newest buffer. This keeps ADAS processing smooth without dropping any frames from the driving record. A dedicated thread then runs the ADAS algorithm against this secondary queue and passes the results out, usually through a registered callback function. Inside that callback, the result data is forwarded to the upper-layer application. The forwarding path is fairly involved, but once you understand how the system camera reports its own data, adding a similar path is not difficult.
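The secondary buffer queue described above can be sketched as a small bounded queue that drops the oldest frame when full, so the capture thread never blocks and ADAS always sees the newest frames. `Frame`, `AdasResult`, and the class name are illustrative placeholders, not real camera-HAL types.

```cpp
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// Illustrative placeholder types, not actual camera-HAL structures.
struct Frame { std::vector<uint8_t> pixels; int64_t timestampNs; };
struct AdasResult { bool laneDeparture; float leadDistanceM; };

// Bounded "keep the newest" queue between the camera capture thread
// and the ADAS worker thread.
class AdasFrameQueue {
public:
    explicit AdasFrameQueue(size_t capacity) : capacity_(capacity) {}

    // Called from the capture thread: never blocks the recorder.
    // If ADAS has fallen behind, the oldest buffered frame is dropped
    // so the queue always holds the most recent frames.
    void push(Frame f) {
        std::lock_guard<std::mutex> lk(mu_);
        if (q_.size() == capacity_) q_.pop_front();  // drop stale frame
        q_.push_back(std::move(f));
        cv_.notify_one();
    }

    // Called from the ADAS worker thread; blocks until a frame arrives.
    Frame pop() {
        std::unique_lock<std::mutex> lk(mu_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Frame f = std::move(q_.front());
        q_.pop_front();
        return f;
    }

    size_t size() {
        std::lock_guard<std::mutex> lk(mu_);
        return q_.size();
    }

private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::deque<Frame> q_;
    const size_t capacity_;
};
```

A worker thread would then loop on `pop()`, run the detection algorithm on each frame, and forward the resulting `AdasResult` through the registered callback toward the upper-layer application, while the recorder's own queue continues untouched at full frame rate.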