Application of Optical Radar (LiDAR) in Driverless Technology

The success of driverless cars depends on high-precision maps, real-time positioning, and obstacle detection, all of which rely on optical radar (LiDAR). This article examines how LiDAR is used in driverless technology. It first introduces the working principle of LiDAR, including how a laser scan produces a point cloud; it then explains LiDAR's applications in driverless technology, namely map drawing, positioning, and obstacle detection; finally, it discusses the challenges facing current LiDAR technology, including external environmental interference, large data volumes, and high cost.

Introduction to driverless technology

Driverless technology is an integration of many technologies, including sensors, positioning, deep learning, high-precision maps, path planning, obstacle detection and avoidance, mechanical control, system integration and optimization, energy and thermal management, and more. Although existing unmanned vehicles have many different implementations, their system architectures are similar. In the generic architecture of an unmanned vehicle, the sensing end consists of several sensors: GPS is used for positioning, Light Detection And Ranging (LiDAR) is used for positioning and obstacle detection, and cameras are used for deep-learning-based object recognition and to assist positioning.

After the sensor information is collected, the system enters the perception phase, which mainly covers positioning and object recognition (Figure 1). At this stage, mathematical methods such as the Kalman filter and the particle filter can fuse the information from the various sensors to derive the most probable current position. If LiDAR is the primary positioning sensor, the vehicle's position can be obtained by matching the points returned by a LiDAR scan against a known high-precision map. If no map is available, the current LiDAR scan can even be matched against the previous scan using the ICP algorithm to compute the current vehicle position, as sketched below. Once a LiDAR-based position estimate is obtained, it can be fused with the other sensor information to derive a more accurate position.
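
To make the scan-matching idea concrete, here is a minimal 2D point-to-point ICP sketch in Python with NumPy. The brute-force nearest-neighbor search, the SVD-based alignment step, and all names (icp_2d, best_rigid_transform) are illustrative assumptions, not a production implementation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (SVD method)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp_2d(scan, prev_scan, iters=30):
    """Align `scan` (N x 2) to `prev_scan` (M x 2); returns accumulated R, t."""
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = scan.copy()
    for _ in range(iters):
        # brute-force nearest neighbor for each point in the current scan
        d = np.linalg.norm(cur[:, None, :] - prev_scan[None, :, :], axis=2)
        matches = prev_scan[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total   # relative vehicle motion between the two scans
```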

Finally, the system enters the planning and control phase. At this stage, the vehicle's driving plan is adjusted in real time based on the position information and the recognized image information (such as traffic lights), and the driving plan is converted into control signals to drive the vehicle. Global path planning can be implemented with an algorithm such as A*, and local path planning with algorithms such as DWA.
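
As a rough illustration of the global-planning step, the following is a minimal grid-based A* sketch; the 4-connected occupancy grid, the Manhattan heuristic, and the function name a_star are simplifying assumptions for illustration only.

```python
import heapq, itertools

def a_star(grid, start, goal):
    """Shortest path on a 2D occupancy grid (0 = free, 1 = blocked), 4-connected."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()                                    # heap tiebreaker
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                                   # already finalized
            continue
        came_from[cur] = parent
        if cur == goal:                                        # reconstruct path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None   # no path exists
```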

Optical radar basics

Let's first understand the working principle of optical radar, especially the process of generating point clouds.

Working principle

Optical radar is an optical remote-sensing technology that determines the distance to a target by emitting a laser beam at the target and measuring the time interval between emission and the reception of the reflection. From this distance and the emission angle of the laser, the position of the object can be derived through simple geometric transformations. Because laser propagation is relatively unaffected by the environment, LiDAR can generally detect objects at ranges of 100 m or more. Unlike conventional radar, which uses radio waves, LiDAR uses laser light, with commercial systems typically operating at wavelengths between 600 nm and 1000 nm, far shorter than the wavelengths used by conventional radar. LiDAR can therefore measure object distance and surface shape with higher precision, generally reaching centimeter level.
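
The underlying time-of-flight relation is straightforward: if t is the measured round-trip time of the pulse and c the speed of light, then

distance = (c * t) / 2

where the factor of 2 accounts for the pulse traveling to the target and back.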

A LiDAR system generally consists of three parts. The first is the laser emitter, which emits laser light at a wavelength between 600 nm and 1000 nm. The second is the scanning and optical assembly, which records the distance to the reflection point along with the time and horizontal angle (azimuth) of the measurement. The third is the photosensitive component, which detects the intensity of the returned light. Each detected point therefore carries spatial coordinate information (x, y, z) and light-intensity information (i). The intensity is directly related to the surface reflectivity of the object, so a preliminary classification of the detected object can be made from the measured intensity alone.
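
In code, one convenient (purely hypothetical) representation of such a return is a small record type holding the four values just described:

```python
from dataclasses import dataclass

@dataclass
class LidarPoint:
    """One LiDAR return: spatial coordinates plus reflected-light intensity."""
    x: float           # meters, in the sensor or vehicle frame
    y: float
    z: float
    intensity: float   # unitless value related to surface reflectivity
```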

What is a point cloud?

The LiDAR on an unmanned vehicle is not static. While the vehicle drives, the LiDAR rotates at a constant angular velocity, continuously emitting laser pulses and collecting the reflection-point information, thereby building up a full picture of the surrounding environment. For each reflection point, LiDAR records the measurement time and horizontal angle (azimuth), and each laser emitter has a fixed number and a fixed vertical angle; from these data the coordinates of every reflection point can be computed. The set of coordinates of all reflection points collected in one full revolution forms a point cloud.

LiDAR measures the distance to an object by laser reflection. Because the vertical angle of each laser is fixed, denote it a; the z-axis coordinate is then simply sin(a)*distance. From cos(a)*distance we obtain the projection of the distance onto the xy-plane, denoted xy_dist. While recording the distance to the reflection point, LiDAR also records the current horizontal rotation angle b. By a simple geometric conversion, the x-axis and y-axis coordinates of the point are cos(b)*xy_dist and sin(b)*xy_dist, respectively.
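
A minimal sketch of this conversion in Python (the function name polar_to_xyz and the angle conventions are assumptions matching the description above):

```python
import math

def polar_to_xyz(distance, vertical_angle, azimuth):
    """Convert one LiDAR return to Cartesian coordinates.

    distance        -- measured range to the reflection point (m)
    vertical_angle  -- fixed vertical angle a of the emitter (radians)
    azimuth         -- horizontal rotation angle b at measurement time (radians)
    """
    z = math.sin(vertical_angle) * distance
    xy_dist = math.cos(vertical_angle) * distance   # projection onto the xy-plane
    x = math.cos(azimuth) * xy_dist
    y = math.sin(azimuth) * xy_dist
    return x, y, z
```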

LiDAR's application in driverless technology

Next, we examine how optical radar is applied in driverless technology, focusing on high-precision map drawing, point-cloud-based positioning, and obstacle detection.

Drawing of high-definition maps

The HD map here is different from the navigation maps we use every day. A high-definition map is built from numerous point clouds and is mainly used for the precise positioning of unmanned vehicles. HD maps are also produced with LiDAR: a map data-collection vehicle equipped with LiDAR repeatedly drives along the route to be mapped, collecting point cloud data. Afterwards, the data are manually annotated to filter out erroneous points, such as reflections from cars and pedestrians on the road, and the collected point clouds are then aligned and stitched together to form the final HD map.

Point cloud based positioning

First, consider why positioning matters at all. Many people ask: if a precise GPS is available, don't we already know the current location, and is separate positioning still needed? In fact, GPS alone is not enough. High-precision military-grade differential GPS can achieve centimeter-level accuracy only when static and in an "ideal" environment, meaning one with little suspended matter in the atmosphere and a strong received GPS signal. Unmanned vehicles, however, drive through complex, dynamic environments. In large cities in particular, tall buildings cause pronounced GPS multipath reflections, so the resulting position fix can easily be off by tens of centimeters or even several meters. For a car traveling at high speed in a lane of limited width, errors of that size can cause traffic accidents. Means beyond GPS are therefore required to improve the positioning accuracy of unmanned vehicles.

As mentioned above, LiDAR continuously collects point clouds of the surrounding environment as the vehicle travels, so it is natural to use this environmental information for positioning. The problem can be expressed as a simplified probability question: given the GPS reading at time t0, the point cloud at time t0, and three candidate positions P1, P2, and P3 where the unmanned vehicle might be at time t1 (to simplify the problem, assume the vehicle must be at one of these three positions), find the probability of the vehicle being at each of the three points at t1. According to Bayes' rule, the positioning problem of an unmanned vehicle can be simplified to the following probability formula:
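
In standard Bayesian form, writing X for the vehicle position at t1 and Z for the point cloud observed at t1 (a reconstruction consistent with the explanation that follows):

P(X | Z) ∝ P(Z | X) * P(X)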

The first term on the right is the probability distribution of the point cloud given the current position. It is generally computed in one of two ways: local estimation or global estimation. A simple local method matches the current point cloud against the point cloud from the previous moment and derives geometrically how likely the vehicle is to be at each candidate position. Global estimation matches the current point cloud against the high-definition map described above, yielding the likelihood of the vehicle being at a given position on the map. In practice, the two methods are generally used in combination. The second term on the right is the prior probability distribution over the current position; here the position given by GPS can simply serve as the prediction. By computing the posterior probabilities of the three points P1, P2, and P3, we can estimate which position the vehicle is most likely to occupy. Multiplying the two probability distributions in this way greatly improves positioning accuracy, as shown in the figure.
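
As a toy numerical sketch of this update over the three candidates (all probability values below are made-up illustrative numbers):

```python
# Hypothetical measurement likelihoods P(Z | X): how well the current point
# cloud matches the map (or previous scan) at each candidate position.
likelihood = {"P1": 0.7, "P2": 0.2, "P3": 0.1}

# Hypothetical GPS-based prior P(X) over the same candidates.
prior = {"P1": 0.3, "P2": 0.5, "P3": 0.2}

# Bayes' rule: posterior is proportional to likelihood * prior, then normalize.
unnormalized = {p: likelihood[p] * prior[p] for p in likelihood}
total = sum(unnormalized.values())
posterior = {p: v / total for p, v in unnormalized.items()}

print(posterior)   # P1 dominates: {'P1': 0.636..., 'P2': 0.303..., 'P3': 0.060...}
```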

Obstacle detection

As is well known, a hard problem in machine vision is judging the distance to an object. Accurate distance information cannot be recovered from the 2D image of a single camera, and generating a depth map from multiple cameras requires heavy computation that struggles to meet the real-time demands of an unmanned vehicle. Another thorny problem is that optical cameras are strongly affected by lighting conditions, making object-recognition accuracy unstable. Figure 4 illustrates image-feature matching under poor lighting: because of insufficient camera exposure, the feature points in the left image fail to match in the right image. The left side of Figure 5 shows a successful 2D object-feature match: the beer-bottle template is correctly identified in the 2D image. But once the camera is pulled back, we can see that the "beer bottle" on the right is merely printed on the surface of another 3D object. Lacking the depth dimension, 2D matching cannot identify such objects correctly.

The point cloud generated by LiDAR largely solves both problems. Thanks to LiDAR's characteristics, we can estimate the distance, height, and even surface shape of a reflecting obstacle much more accurately, which greatly improves obstacle-detection accuracy. Moreover, this approach has lower algorithmic complexity than camera-based vision algorithms and can therefore better satisfy the real-time requirements of unmanned vehicles.
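
As a very rough sketch of the idea, here is a naive ground-removal-plus-clustering pipeline; the z-threshold, grid cell size, and function name detect_obstacles are all illustrative assumptions, and real systems use far more robust segmentation methods.

```python
import numpy as np

def detect_obstacles(points, ground_z=0.2, cell=0.5):
    """Group above-ground LiDAR points (N x 3 array) into coarse obstacle clusters.

    ground_z -- points with z below this height (m) are treated as road surface
    cell     -- xy grid cell size (m) used for naive proximity clustering
    """
    above = points[points[:, 2] > ground_z]          # crude ground removal
    cells = np.floor(above[:, :2] / cell).astype(int)
    clusters = {}
    for pt, c in zip(above, map(tuple, cells)):      # bucket points by grid cell
        clusters.setdefault(c, []).append(pt)
    # each occupied cell approximates one obstacle region in this naive sketch
    return [np.vstack(v) for v in clusters.values()]
```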

LiDAR technology challenges

So far we have focused on how LiDAR helps a driverless system, but in practice LiDAR also faces many challenges, spanning technology, computing performance, and price. To turn unmanned-vehicle systems into products, these problems must be solved.

Technical challenge: suspended matter in the air

The accuracy of LiDAR is also affected by the weather. Suspended matter in the air interferes with the propagation of the laser, so heavy fog and rain both degrade LiDAR accuracy.

In one test, two LiDARs, A and B, from different manufacturers were used. As the experimental rainfall increased, the maximum detection range of both LiDARs decreased roughly linearly. With the spread of laser technology, laser propagation characteristics in rain and fog have received increasing academic attention in recent years. Studies show that both rain and fog consist of small water droplets; the droplet radius and the droplets' distribution density in the air directly determine the probability of the laser colliding with droplets during propagation, and the higher the collision probability, the greater the impact on laser propagation.

Computational performance challenge: heavy computational load

Even a 16-beam LiDAR produces up to 300,000 points per second. Processing such a volume of data while keeping the positioning and obstacle-detection algorithms real-time is a major challenge. For example, the raw data returned by LiDAR, as described above, contain only the distance to the reflecting object; every generated point must be geometrically transformed into position coordinates, which takes at least four floating-point operations and three trigonometric evaluations per point. Later stages of point cloud processing involve many more complex operations, such as coordinate-system transformations, placing heavy demands on computing resources (CPU, GPU, FPGA).
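
One common mitigation is to vectorize the per-point geometry. A minimal NumPy sketch (array shapes and names are assumptions) converts a whole batch of returns in one call instead of looping point by point:

```python
import numpy as np

def batch_polar_to_xyz(distances, vertical_angles, azimuths):
    """Vectorized version of the per-point conversion shown earlier.

    All three inputs are length-N NumPy arrays (meters and radians);
    returns an N x 3 array of (x, y, z) coordinates.
    """
    xy_dist = np.cos(vertical_angles) * distances
    x = np.cos(azimuths) * xy_dist
    y = np.sin(azimuths) * xy_dist
    z = np.sin(vertical_angles) * distances
    return np.column_stack((x, y, z))

# e.g. roughly one second of data from a 16-beam LiDAR in a single call
pts = batch_polar_to_xyz(np.random.rand(300_000) * 100,
                         np.random.rand(300_000) - 0.5,
                         np.random.rand(300_000) * 2 * np.pi)
```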

Looking to the future

Although driverless technology is maturing, LiDAR has remained a hurdle. Pure-vision and GPS/IMU positioning and obstacle-avoidance schemes are cheap but still immature and difficult to deploy in outdoor scenes; at the same time, LiDAR prices are so high that consumers can hardly bear a car costing hundreds of thousands of dollars. Rapidly reducing system cost, especially the cost of LiDAR, is therefore imperative. One promising approach is to use lower-priced LiDAR: although some accuracy is lost, information from other low-cost sensors can be fused with the LiDAR data to estimate the vehicle's position more accurately. In other words, better algorithms can compensate for weaker hardware sensors. We believe this is the near-term direction for unmanned vehicles. Growing market demand should also push the price of high-precision LiDAR down over the next year or two, paving the way for the further popularization of unmanned vehicles.
