The LFT SW applications focus on the generation of purely LiDAR-based environment models as well as on the safety-critical safeguarding of function and performance for autonomous driving. Rule-based fusion of features and objects from LiDAR data processing (LiDAR Perception) with data from independent processing chains of other sensors is part of these products.
A tiger's eyes are located on the front of the head rather than on the sides, enabling three-dimensional vision; the pair of eyes looks directly forward.
The LFT product "ASPP TigerEye®" (Advanced Sensor Perception Processing) is a modular SW suite for processing LiDAR data. It builds on LFT's experience with LiDAR data filtering, segmentation and classification in the aerospace industry and uses only classical, deterministic algorithms. Unlike AI-based approaches, these algorithms do not need to be trained. Because the amount of data is reduced in the early processing stages, the required computing time is low. ASPP TigerEye® consists of six main modules which together comprise the complete LiDAR data processing chain for environment perception and monitoring.
The modules are:
All modules can be purchased from LFT and used independently of each other, and they adapt flexibly to different LiDAR sensors and their data formats.
LiDAR sensor technology is not free of artifacts and unwanted drop-ins, which are caused by various effects.
For example, there are drop-in pixels caused by receiver noise, direct sunlight or bright haze, i.e. effects that are not caused by the light pulse itself. There are also drop-ins caused by scattering of the LiDAR pulse on fog, clouds, snow or dust. LFT offers filters for all of these effects.
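The drop-in filtering described above can be illustrated with a minimal sketch. The idea shown here, removing returns that have no neighbours within a small radius, is one common way to suppress noise-induced drop-ins; the function name, radius and neighbour count are illustrative assumptions, not LFT's actual filter.

```python
import math

def filter_drop_ins(points, radius=0.5, min_neighbors=1):
    """Remove isolated 'drop-in' returns: a point whose neighborhood
    within `radius` metres contains fewer than `min_neighbors` other
    points is treated as noise (e.g. receiver noise, direct sunlight,
    bright haze). Parameters are illustrative, not production values."""
    kept = []
    for i, p in enumerate(points):
        n = sum(
            1 for j, q in enumerate(points)
            if i != j and math.dist(p, q) <= radius
        )
        if n >= min_neighbors:
            kept.append(p)
    return kept
```

A real implementation would use a spatial index (k-d tree, voxel grid) instead of the O(n²) loop, and scattering-induced drop-ins (fog, snow, dust) need additional cues such as intensity and multi-echo information.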
Three other closely related modules of ASPP TigerEye® are Free Space Detection, Lane Detection and Object Clustering & Tracking.
The "Free Space" module first segments the LiDAR point cloud into the ground surface and raised objects. Within the ground area, the accessible area belonging to the road is then determined and can be made available to further procedures.
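The first step, splitting the point cloud into ground and raised points, can be sketched as follows. For brevity this assumes a flat ground plane at a known height; a real system would estimate the plane (e.g. by RANSAC fitting), and the threshold value is an illustrative assumption.

```python
def segment_ground(points, ground_z=0.0, tolerance=0.2):
    """Split a LiDAR point cloud into ground-surface points and raised
    (potential obstacle) points by height above an assumed flat ground
    plane at ground_z. Minimal illustration only: production systems
    estimate the ground plane rather than assuming it."""
    ground, raised = [], []
    for x, y, z in points:
        (ground if abs(z - ground_z) <= tolerance else raised).append((x, y, z))
    return ground, raised
```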
The "Lane Detection" module searches the ground area for lane boundaries, which are used to further delimit the free space.
The "Object Clustering & Tracking" module combines the LiDAR pixels not belonging to the ground area into objects and tracks them.
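The clustering step of the "Object Clustering & Tracking" module can be sketched with a simple single-linkage grouping: two raised points belong to the same object if they are connected by a chain of close neighbours. The linkage distance is an illustrative assumption, and this is a generic clustering sketch, not LFT's actual algorithm.

```python
import math
from collections import deque

def cluster_points(points, link_dist=1.0):
    """Group raised LiDAR points into object clusters: points joined
    by a chain of neighbours closer than link_dist end up in the same
    cluster (single-linkage grouping via breadth-first search)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) < link_dist]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append([points[k] for k in cluster])
    return clusters
```

Tracking would then associate the resulting clusters from frame to frame, e.g. by nearest cluster centroid.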
The sensor system of an autonomously driven vehicle has to be calibrated both during production and in later operation. Within ASPP TigerEye®, LFT has developed a SW module that identifies reference objects in LiDAR point clouds as well as in camera images, providing automated camera image rectification as well as alignment between LiDAR and camera.
In addition, LFT has developed online methods that continuously monitor (Continuous Built-In Test) or fine-tune the alignment while driving.
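The principle behind such an alignment check can be sketched with a pinhole projection: a LiDAR reference point is projected into the image, and the pixel offset to the same reference detected in the camera image is the alignment residual. To stay short, the LiDAR-to-camera rotation is assumed to be identity and the intrinsics are placeholder values; a real calibration carries a full rotation matrix and lens distortion model.

```python
def project_to_image(point_lidar, t=(0.0, 0.0, 0.0),
                     fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Project a 3D point into the camera image with a pinhole model.
    Assumptions for brevity: identity LiDAR->camera rotation, only a
    translation t; camera frame axes x right, y down, z forward."""
    x = point_lidar[0] + t[0]
    y = point_lidar[1] + t[1]
    z = point_lidar[2] + t[2]
    if z <= 0:
        return None                      # behind the camera
    return (fx * x / z + cx, fy * y / z + cy)

def alignment_residual(point_lidar, pixel_detected, **kwargs):
    """Pixel offset between the projected LiDAR reference point and the
    same reference detected in the image; a continuous built-in test
    can track this residual while driving and flag drift."""
    u, v = project_to_image(point_lidar, **kwargs)
    return (u - pixel_detected[0], v - pixel_detected[1])
```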
For autonomous driving applications with correspondingly high ASIL levels, live monitoring of the sensor system's current visibility is essential. Depending on the LiDAR principle, LiDARs react differently to contamination; LFT has procedures to detect different types of degradation, from local blind spots to complete degradation across the entire field of view.
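One simple way to distinguish local blind spots from full field-of-view degradation is to compare the return rate per angular sector against an expected baseline. The sketch below illustrates this idea only; the sectoring, threshold and state names are assumptions and not LFT's actual monitoring procedure.

```python
def detect_degradation(returns_per_sector, expected_rate, rel_threshold=0.5):
    """Flag field-of-view sectors whose return rate dropped below a
    fraction of the expected rate. Isolated flagged sectors suggest a
    local blind spot (e.g. dirt on the window); all sectors flagged
    suggests degradation of the entire field of view."""
    flagged = [i for i, r in enumerate(returns_per_sector)
               if r < rel_threshold * expected_rate]
    if not flagged:
        return "ok", flagged
    if len(flagged) == len(returns_per_sector):
        return "full_degradation", flagged
    return "local_blind_spot", flagged
```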
Depending on light and weather conditions, small obstacles (e.g. tires, wooden pallets and squared timber) can be difficult or impossible for today's camera systems to detect at a relevant distance of approx. 100 m in front of the vehicle, which is required to enable safe braking or evasion on motorways or expressways.
RADAR systems are technologically unable to detect such objects because their radar cross-section (RCS) is typically very small. With the "Detection of Relevant Small Obstacles" module, LFT offers the detection of small relevant obstacles that endanger the vehicle, based on recorded or available LiDAR sensor data.
The drivable area of the road can be determined by taking into account the vehicle speed and the permitted curve radii of a motorway or multi-lane expressway.
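The geometry behind this can be sketched as follows: the required lookahead distance follows from the stopping distance at the current speed, and the lateral corridor to scan follows from the minimum permitted curve radius, since on a curve of radius R the road centre deviates from straight-ahead by roughly d²/(2R) at distance d. All numeric values (deceleration, reaction time, lane width, radius) are illustrative assumptions.

```python
def stopping_distance(v_mps, decel=4.0, reaction_s=1.0):
    """Distance needed to come to a stop from speed v (m/s), assuming
    a constant deceleration and a driver/system reaction time. Grows
    quadratically with speed, which sets the required lookahead."""
    return v_mps * reaction_s + v_mps ** 2 / (2 * decel)

def corridor_half_width(lookahead_m, min_radius_m, lane_half_width=1.875):
    """Half-width of the corridor to scan for obstacles at the
    lookahead distance: lane half-width plus the lateral offset
    d^2 / (2R) of a curve with the minimum permitted radius R
    (small-angle approximation)."""
    return lane_half_width + lookahead_m ** 2 / (2 * min_radius_m)
```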
A mimicry of the mantis, the so-called "praying mantis".
Why? Because this flying insect directs all of its processes precisely towards one "action": catching its prey.
This is the merging and processing of fragmented and sometimes contradictory sensor data into an overall picture that can be understood by humans in real time.
The LFT product ADFS MentisFusion® (Advanced Data Fusion System) consists of deterministic, rule-based algorithms and SW packages for data fusion that accept pre-processed inputs, such as 2D and 3D objects as well as segmented 3D data points, from the various data sources. Data sources include, for example, processed camera, LiDAR and database information (including cloud-based information) as well as RADAR information.
The purpose of sensor data fusion is to compensate for systematic gaps in the individual technologies by combining different sensors, and thus to cover the entire environment with consistent quality over the widest possible range of applications. In order to meet the high requirements for data integrity, the data is first processed in independent functional chains and only then merged. Mutual monitoring and support ("Doer-Checker" principle) between the multiple functional chains with different sensor technologies is thus an important part of the data fusion.
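The "Doer-Checker" idea described above can be sketched minimally: one chain proposes objects (the doer), and an independent chain either confirms them or not (the checker); unconfirmed proposals are not silently accepted but downgraded. The matching distance and the confidence penalty are illustrative assumptions, not LFT's actual fusion rules.

```python
import math

def doer_checker_fuse(doer_objs, checker_objs, match_dist=2.0):
    """'Doer-Checker' sketch: doer_objs is a list of (position,
    confidence) proposals from one functional chain; checker_objs is a
    list of positions from an independent chain. A proposal with a
    checker object nearby is confirmed; otherwise its confidence is
    halved rather than accepted on the word of a single chain."""
    fused = []
    for pos, conf in doer_objs:
        confirmed = any(math.dist(pos, c) <= match_dist for c in checker_objs)
        fused.append((pos, conf if confirmed else conf * 0.5, confirmed))
    return fused
```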
The modules are:
All modules can be purchased from LFT and used independently of each other, and they adapt flexibly to different sensor branches and their data formats.
Results of "doer-checker" and "checker-checker" fusion for perception processing from independent multimodal functional chains (LiDAR and camera, rule-based and AI), overlaid on video. We selected a somewhat challenging sequence with an overhanging load, an uphill road, ruts and low curb stones in low-contrast ambient light. The fusion results nicely show (from 00:19) the benefit of checking and supplementing the results of one functional chain with those of the other. The capabilities of the LiDAR sensor and LiDAR processing are especially evident in the nominal case of low-contrast lighting and in the corner case of an overhanging load.
The MF-1 software combines road marking information coming from one or more sensor branches.
In a first step, lane marking hypotheses are analysed to determine whether they refer to the same lane marking, i.e. whether there is sufficient agreement in alignment and distance from each other and whether they overlap. If a match is found, the hypotheses are combined with known probabilistic fusion techniques. If there is no match but also no contradiction with other hypotheses, a check is made as to whether the missing match has a legitimate cause, e.g. the intensity of the retroreflective lane markings in the LiDAR data being too low for a stable hypothesis. If the missing data from the other source does not necessarily indicate a malfunction of that functional chain, the single-source hypotheses are entered into memory with an appropriate probabilistic weighting. If a new hypothesis contradicts an existing one, a more complex analysis must be performed.
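The matching and combination steps above can be sketched as follows. A hypothesis is reduced here to a lateral offset, a heading and a variance; the tolerances are illustrative assumptions, while the combination itself is standard inverse-variance weighting, one of the "known probabilistic fusion techniques" mentioned.

```python
def hypotheses_match(h1, h2, max_offset=0.3, max_heading=0.05):
    """Two lane-marking hypotheses refer to the same marking if their
    lateral offsets (m) and headings (rad) agree within tolerances.
    Tolerance values are illustrative assumptions."""
    return (abs(h1["offset"] - h2["offset"]) <= max_offset and
            abs(h1["heading"] - h2["heading"]) <= max_heading)

def fuse_hypotheses(h1, h2):
    """Combine two matching hypotheses by inverse-variance weighting:
    the more certain hypothesis (smaller var) dominates, and the fused
    variance is smaller than either input variance."""
    w1, w2 = 1.0 / h1["var"], 1.0 / h2["var"]
    return {
        "offset": (w1 * h1["offset"] + w2 * h2["offset"]) / (w1 + w2),
        "heading": (w1 * h1["heading"] + w2 * h2["heading"]) / (w1 + w2),
        "var": 1.0 / (w1 + w2),
    }
```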
The MF-2 software combines information on the accessible road area from the semantic segmentation of camera data with LiDAR data based polygons of the obstacle-free road (see TE-2).
The fusion of these data is done with mutual verification, as no single sensor or processing branch has clear advantages over the other; it is therefore a so-called "checker-checker" principle. This method allows specific shortcomings of the individual processes to be identified and compensated for. For example, it is known that camera-based free space detection can only detect overhanging loads to a limited extent. On the other hand, camera-based free space detection is better at detecting purely intensity- or color-based boundaries, such as the transition between different pavement patterns in urban environments. Depending on the safety requirements, the process can be further secured by requiring that no moving objects be located within the free space area. Moving objects from a functionally independent RADAR branch are suitable for this purpose.
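On a common occupancy grid, the checker-checker rule above reduces to an intersection: a cell counts as free only if both independent branches mark it free, and no moving RADAR object occupies it. This grid representation is an illustrative assumption; the text describes polygon-based free space.

```python
def checker_checker_free_space(cam_free, lidar_free, radar_moving_cells=()):
    """'Checker-checker' free-space fusion on a common grid: cam_free
    and lidar_free are sets of free cells from two independent
    branches; a cell is declared free only if both agree and no moving
    object from the RADAR branch occupies it."""
    blocked = set(radar_moving_cells)
    return {cell for cell in cam_free & lidar_free if cell not in blocked}
```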
The MF-3 software combines object information derived from the semantic segmentation of camera data, from LiDAR and from RADAR data (see TE-4).
In a first step, a spatial match of the relevant objects is checked. If a sufficient match is found, information from the different sources is taken into account. Note that not all information from each source has to be included in the merged data package; instead, the best available information from each source should be used. For example, camera-based semantic segmentation typically provides the best classification, LiDAR-based clustering and tracking contributes good localization in all three dimensions, and RADAR in turn provides the best speed value. Any of this information can be cross-checked against data from another processing source and corrected if necessary. For example, if semantic segmentation classifies a small, weakly contrasting object as "road" and thus overlooks it, size and class (possibly "unknown") from LiDAR data processing are used. Depending on the distance to the ego vehicle, different strategies are used to minimise the risk. The challenge is to achieve a high level of performance, i.e. a low number of false negative detections (a real object is rejected) together with an acceptably low number of false positive detections (a non-solid object, e.g. a water spray cloud, is not rejected).
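The best-attribute-per-source idea can be sketched directly: after a spatial match, the fused object takes its class from the camera, its 3D position and size from LiDAR, and its speed from RADAR. The dictionary layout and matching distance are illustrative assumptions.

```python
import math

def fuse_objects(cam_obj, lidar_obj, radar_obj, match_dist=2.0):
    """Attribute-wise object fusion: each source contributes its best
    attribute (camera: class; LiDAR: 3D position and size; RADAR:
    speed). Objects must first match spatially, checked here as 2D
    ground-plane distance against the LiDAR position."""
    p_lidar = lidar_obj["pos"][:2]
    if (math.dist(cam_obj["pos"][:2], p_lidar) > match_dist or
            math.dist(radar_obj["pos"][:2], p_lidar) > match_dist):
        return None                      # no spatial match
    return {
        "class": cam_obj["class"],       # from semantic segmentation
        "pos":   lidar_obj["pos"],       # 3D localization from LiDAR
        "size":  lidar_obj["size"],
        "speed": radar_obj["speed"],     # Doppler speed from RADAR
    }
```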
The MF-4 software is based on the object fusion of different sources, which is the basis of MF-3, and tracks these objects over time.
The different data sources in turn have different roles. LiDAR and RADAR data processing provide speed estimates that have to be combined and filtered. The object class typically derived from semantic segmentation provides important information for tracking objects in complex scenes, e.g. with occlusions, and for classifying static objects as such. The meta-information of the LiDAR data in turn provides information on where occlusions can occur at all. The aim is first to separate static from moving objects and then to track the moving objects with speed information through the sensor's field of view.
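The filtering of speed estimates over time can be sketched with a classic alpha-beta tracker, a lightweight relative of the Kalman filter. This is a generic one-dimensional illustration of the combine-and-filter step, not LFT's actual tracking; the gains and time step are assumptions.

```python
def alpha_beta_track(measurements, dt=0.1, alpha=0.5, beta=0.1):
    """Track one fused object coordinate with an alpha-beta filter:
    predict the position from the current speed estimate, then correct
    position (gain alpha) and speed (gain beta) with the innovation.
    An object whose filtered speed stays near zero can be classified
    as static; otherwise it is tracked as a moving object."""
    x, v = measurements[0], 0.0
    for z in measurements[1:]:
        x_pred = x + v * dt
        r = z - x_pred                  # innovation (residual)
        x = x_pred + alpha * r
        v = v + (beta / dt) * r
    return x, v
```

For a constant-velocity target the filter converges with zero steady-state lag, which is why it is a common baseline for object tracking.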
The LFT product Environment Perception is the holistic system for recording the vehicle environment and transferring it to the downstream vehicle system. It contains the corresponding sensor HW (e.g. LiDAR, camera, RADAR) for the acquisition of the scenery, processing software for scene understanding on the basis of the respective data (both AI-based and deterministic), and the fusion processes required for the functional safeguarding of the environment perception. The latter is a prerequisite for safe operation, and thus for certification, of a vehicle with a higher level of autonomy.
The functional decomposition, the selection of the sensor technology and also the structure of the processing chains and fusion procedures must be adapted to the reliability and safety requirements of the "intended function" to be delivered. Here, too, mutual monitoring and support ("Doer - Checker" principle) of dissimilar sensors is a prerequisite for a complete provision of the "intended function".
LFT has a Reference System for recording time-synchronized and georeferenced raw data from automotive sensors.
It consists of a scalable HW/SW recording platform which currently records sensor raw data from four LiDARs, four cameras, two navigation systems and one RADAR with sub-µs time resolution.
The angular calibration of the recorded sensor data is so good that the resolution of the sensors is not degraded.
The Reference System is primarily used to obtain raw data for the development and validation of our LiDAR Perception products.
It is also used to optimize sensor calibration and performance monitoring procedures. Due to its scalability it is also predestined for the rapid validation of new sensor technology.
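Time-synchronized recording makes it possible to associate frames from different sensors by nearest timestamp, which the sketch below illustrates. The microsecond units and the allowed skew are illustrative assumptions about such a recording platform, not specifics of the LFT Reference System.

```python
import bisect

def nearest_frame(timestamps, t_query, max_skew_us=500):
    """Return the index of the recorded frame whose timestamp (µs,
    sorted ascending) is closest to t_query, or None if even the
    closest frame is further away than the allowed skew. With
    synchronized clocks, this associates frames across sensors."""
    i = bisect.bisect_left(timestamps, t_query)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    best = min(candidates, key=lambda j: abs(timestamps[j] - t_query))
    return best if abs(timestamps[best] - t_query) <= max_skew_us else None
```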
The counterpart of this recording platform is a scalable visualization platform (V-SW), which is able to play back and process the recorded 3D data in real time and synchronously.
The V-SW can generate a raw-data fusion of any combination of sensors and provides flexible displays for visualization.
At the same time the LFT LiDAR Perception products are integrated in the V-SW and can be evaluated.
It is also possible to connect further data sources to evaluate fusion processes.
Due to the aviation background of its founding team, LFT builds on extensive experience in the design of safety-critical systems.
The safety requirements of an intended function in an autonomous system are one of the most important inputs for system design. The system architecture providing the function must decompose the system into available components which, taken together, can guarantee the functional safety of the overall function.
The basis for this is ISO 26262; at the same time, it must be ensured that the function is safeguarded under all permitted boundary conditions (SOTIF).
As a result, weaknesses in the subcomponents of a system are analyzed and compensated for by supplementing them with suitable other elements (Safety by Design).
The LFT team has more than 20 years of experience in the design of airborne LiDAR sensor technology, which our customers can draw on. Among other things, we advise you on the following topics: