Applied technology: multi-sensor fusion perception solution. Application field: intelligent driving. Unique advantage: perception has long been a technical difficulty for L3/L4 autonomous driving, because any single sensor is constrained by weather, lighting, the application scenario, and its own physical limitations, and cannot deliver comprehensive, accurate perception of targets and the environment.
Yu Ganwei fuses the detection data of the visible-light camera, the infrared camera, and the radar at the front end, at the moment of data acquisition, and unifies the detections of the individual sensors: the image and radar data undergo pixel-level, real-time spatio-temporal alignment and synchronization, and the result is output in a “multi-dimensional pixel” format, which addresses the key pain points on the perception side.
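As an illustration of what such a pixel-aligned fused frame could look like, here is a minimal sketch; the field names, resolution, and pinhole projection model are assumptions for illustration, not Yu Ganwei’s actual format. A radar point is projected into the camera image through the intrinsic matrix, and its range and radial velocity are written into the matching pixel of a multi-channel frame:

```python
import numpy as np

# Hypothetical "multi-dimensional pixel" frame: for each camera pixel we
# keep RGB, an infrared intensity, radar range, and radar radial velocity.
H, W = 1080, 1920
frame = np.zeros((H, W), dtype=[("rgb", np.uint8, 3),
                                ("ir", np.float32),
                                ("range_m", np.float32),
                                ("vel_mps", np.float32)])

# Assumed pinhole intrinsics for the visible-light camera.
K = np.array([[1000.0, 0.0, W / 2],
              [0.0, 1000.0, H / 2],
              [0.0, 0.0, 1.0]])

def splat_radar(points_cam, radial_vel):
    """Project radar points (already in the camera frame, Nx3) onto the
    image plane and write range/velocity into the fused frame."""
    keep = points_cam[:, 2] > 0.5            # drop points behind / too close
    uvw = (K @ points_cam[keep].T).T         # homogeneous image coordinates
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (0 <= u) & (u < W) & (0 <= v) & (v < H)
    frame["range_m"][v[ok], u[ok]] = np.linalg.norm(points_cam[keep][ok], axis=1)
    frame["vel_mps"][v[ok], u[ok]] = radial_vel[keep][ok]

# Toy radar detection 20 m straight ahead, closing at 3 m/s.
splat_radar(np.array([[0.0, 0.0, 20.0]]), np.array([-3.0]))
print(frame["range_m"][H // 2, W // 2])      # -> 20.0
```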
The advantages of Yu Ganwei’s fusion perception technology are as follows: 1) it is data-driven on a foundation of “physical perception,” which avoids the corner-case problem of purely visual neural networks; 2) sensor pre-fusion preserves the original detection data to the greatest possible extent and plays to each sensor’s strengths, so the perception system can recognize targets accurately in real time without being limited by weather, lighting, or scene; 3) it supports completing target recognition and sample collection at the same time, helping carmakers build a data advantage.
Yu Ganwei’s fusion perception system includes a dedicated data-acquisition module that combines target recognition with the capture of effective samples, providing a sample-collection function that supports the carmaker’s own development work.
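One plausible way such an acquisition module could work, sketched here under assumed interfaces (the confidence thresholds and archive layout are hypothetical, not Yu Ganwei’s published design), is to archive any fused frame on which the recognizer is uncertain, so that hard cases automatically become candidate training samples:

```python
import pickle, time
from pathlib import Path

SAMPLE_DIR = Path("harvested_samples")   # hypothetical archive location
SAMPLE_DIR.mkdir(exist_ok=True)

# Uncertainty band: confident hits and confident misses are not worth
# storing; ambiguous detections are the valuable training samples.
LOW, HIGH = 0.3, 0.7

def harvest(frame, detections):
    """Archive the fused frame whenever any detection is ambiguous."""
    if any(LOW < d["score"] < HIGH for d in detections):
        path = SAMPLE_DIR / f"{time.time_ns()}.pkl"
        with path.open("wb") as f:
            pickle.dump({"frame": frame, "detections": detections}, f)

# Toy usage with a stubbed detector output.
harvest(frame={"rgb": "..."}, detections=[{"label": "pedestrian", "score": 0.55}])
```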
The success of Tesla’s FSD is inseparable from Tesla’s massive data advantage. Yu Ganwei’s perception technology, together with the perception data it accumulates, can help Chinese carmakers overtake on the curve in the autonomous-driving race, pull ahead of Tesla FSD, and accelerate their capture of overseas markets.
In terms of customers’ product cost management, the “multi-dimensional pixel” data output by the solution saves customers a large amount of raw perception-data transmission cost and reduces the computing cost of core components of the customer’s product, such as the central domain controller of the autonomous-driving system.
Moreover, Yu Ganwei’s “multi-dimensional pixels” are fully compatible with the existing mainstream AI computing platforms and can reuse existing image data samples, eliminating the need to re-collect training data for the product’s neural networks, so customers can upgrade their perception capability at low cost and with high efficiency.
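To see why such compatibility is plausible, here is one common technique (my illustration, not Yu Ganwei’s documented method) for feeding extra per-pixel channels into a network pretrained on ordinary RGB images: inflate the first convolution, copy the existing RGB weights, and zero-initialize the new channels so the pretrained behavior is preserved at the start of fine-tuning.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

IN_CHANNELS = 6  # e.g. RGB + infrared + radar range + radial velocity (assumed)

model = resnet18(weights="IMAGENET1K_V1")

# Replace the 3-channel stem with a 6-channel one, reusing RGB filters.
old = model.conv1
new = nn.Conv2d(IN_CHANNELS, old.out_channels,
                kernel_size=old.kernel_size, stride=old.stride,
                padding=old.padding, bias=False)
with torch.no_grad():
    new.weight[:, :3] = old.weight          # keep pretrained RGB filters
    new.weight[:, 3:] = 0.0                 # new channels start neutral
model.conv1 = new

# A fused "multi-dimensional pixel" batch now flows through unchanged layers.
x = torch.randn(1, IN_CHANNELS, 224, 224)
print(model(x).shape)                       # -> torch.Size([1, 1000])
```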
Application scenarios: Yu Ganwei’s multi-sensor fusion perception products can be applied to L3/L4 autonomous driving. Beyond passenger cars, they are also suitable for industrial robots such as automatic floor sweepers and autonomous agricultural machinery.
Because multi-dimensional pixels are data-driven on a foundation of “physical perception,” they avoid the corner-case problem of purely visual neural networks: the system can identify all kinds of objects on the road, and a target will not go unrecognized merely because it was never sampled in training. “Multi-dimensional pixels” can also detect the undulation of the road surface, helping the autonomous-driving system make decisions appropriate to the degree of unevenness. They can likewise report the material and condition of covers on the road, so that covers detected as fragile or damaged can be avoided.
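A minimal sketch of how a planner might consume such per-pixel attributes follows; the material codes, grid size, and thresholds are invented for illustration. The material and condition channels are combined with the height map into a single “avoid” mask:

```python
import numpy as np

H, W = 120, 160                       # coarse ground-plane grid (assumed)
MATERIAL_OK = {0, 1}                  # 0=asphalt, 1=concrete (hypothetical codes)

material = np.zeros((H, W), dtype=np.int32)
damaged = np.zeros((H, W), dtype=bool)
height = np.zeros((H, W), dtype=np.float32)   # metres above the road plane

material[60:70, 80:90] = 2            # toy region: fragile plastic cover
damaged[65, 85] = True
height[30:35, :] += 0.12              # toy speed bump

MAX_STEP = 0.10                       # tolerable height step in metres

avoid = (~np.isin(material, list(MATERIAL_OK))
         | damaged
         | (np.abs(height) > MAX_STEP))
print(f"{avoid.mean():.1%} of the grid flagged for avoidance")
```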
Furthermore, Yu Ganwei’s fusion perception scheme does not rely on high-precision maps at all: on roads without lane lines or boundaries, the autonomous-driving system can still plan a path from the multi-modal perception information provided by “multi-dimensional pixels.” In addition, the accuracy of the fused perception reaches 5 cm, which satisfies the requirements of precise maneuvers.
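As a toy example of planning without lane lines (a sketch of one generic approach, not Yu Ganwei’s planner), the vehicle can steer toward the widest free corridor in a look-ahead row of the avoidance grid built above:

```python
import numpy as np

def widest_free_gap(avoid_row):
    """Return (start, end) column indices of the widest contiguous run of
    free cells in one look-ahead row of the avoidance grid."""
    free = ~avoid_row
    best, start = (0, 0), None
    for i, f in enumerate(np.append(free, False)):  # sentinel closes last run
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best

# Toy look-ahead row: an obstacle occupies columns 0-59.
row = np.zeros(160, dtype=bool)
row[:60] = True
lo, hi = widest_free_gap(row)
steer_col = (lo + hi) / 2             # aim for the middle of the free corridor
print(lo, hi, steer_col)              # -> 60 160 110.0
```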
Future outlook: Yu Ganwei’s “multi-dimensional pixel” technology can not only support an “end-to-end + VLM (vision-language model) + generative verification” system, but also directly and efficiently support occupancy-network (Occupancy Network) algorithms. An occupancy grid divides the perception space into three-dimensional cells (voxels); because multi-dimensional pixels already carry 3D position, target velocity, and material information, they can feed the voxel representation of an occupancy grid directly and in real time.
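Since each multi-dimensional pixel already carries a 3D position, feeding them into an occupancy grid reduces to quantizing positions into voxel indices; the grid extent and resolution below are assumptions for illustration:

```python
import numpy as np

# Assumed grid: an 80 m x 40 m x 4 m volume at 0.2 m voxels.
RES = 0.2
ORIGIN = np.array([0.0, -20.0, -1.0])        # grid origin in the vehicle frame
SHAPE = (400, 200, 20)                       # (x, y, z) voxel counts

occupied = np.zeros(SHAPE, dtype=bool)
velocity = np.zeros(SHAPE, dtype=np.float32)

def update_grid(positions, speeds):
    """positions: Nx3 points from multi-dimensional pixels (vehicle frame);
    speeds: N radial velocities carried by the same pixels."""
    idx = np.floor((positions - ORIGIN) / RES).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(SHAPE)), axis=1)
    i, j, k = idx[ok].T
    occupied[i, j, k] = True
    velocity[i, j, k] = speeds[ok]

# Toy detection: a moving object 15 m ahead, slightly to the left.
update_grid(np.array([[15.0, 1.0, 0.5]]), np.array([4.2]))
print(occupied.sum(), "voxels occupied")
```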
Tesla is currently promoting “BEV + Transformer + Occupancy Network,” and domestic players have adopted the same architecture, including Huawei’s GOD 2.0 and Xiaomi. Many more intelligent-driving teams are expected to introduce occupancy networks to strengthen their systems in the future.
The application prospects of multi-dimensional pixels are therefore broad: Yu Ganwei’s fusion perception technology combined with BEV + Transformer + occupancy grid is expected to become the best production-ready solution for L3/L4 autonomous driving.
The Golden Award is organized by Galaxy Automobile with the aim of “discovering good companies, promoting good technology, and achieving Autobots,” around the theme of “China’s Top 100 New Automotive Supply Chains.” This year’s Golden Award focuses on intelligent driving, intelligent cockpits, intelligent chassis, automotive software, automotive-grade chips, big data and artificial intelligence, powertrain and energy replenishment, thermal management, body and interior/exterior trim, and new materials, selecting outstanding enterprises and advanced technical solutions and presenting them to leaders inside and outside the industry, to jointly promote the development and progress of the industry.