
Safety-Kit

LFT is a pioneer in the development of safety-critical software products for LiDAR perception and data fusion in autonomous driving. Our rule-based fusion methods combine properties and objects from LiDAR data processing (LiDAR Perception) with data from independent processing chains of other, preferably dissimilar sensor technologies (e.g. an AI-based camera performance path).

Our strengths lie not only in generating purely LiDAR-based environment perception models, but above all in assuring the function and performance of environment perception in the context of autonomous driving.

  • The Safety-Kit is your gateway to SAE Level 4 approval for autonomous vehicles.
  • The Safety-Kit enables safety levels up to ASIL D (SOTIF & ISO 26262).
  • LFT makes autonomous driving as safe as flying and guides it through to certification.

ASPP TigerEye®
(LiDAR Perception)

ASPP TigerEye® (Advanced Sensor Perception Processing) is a modular software suite for processing LiDAR data. Building on LFT's aerospace expertise in filtering, segmenting and classifying LiDAR data, the product uses only classical, deterministic algorithms. Unlike AI-based methods, these require no extensive training, and reducing the data volume in the early processing stages keeps the computational overhead low.
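
To make "classical, deterministic" concrete, here is a minimal Python sketch of the kind of early-stage processing described: a median test removes speckle artifacts from a range image, and a voxel grid thins the cloud before later stages. It is illustrative only, not LFT's implementation; all function names and thresholds are assumptions.

    # Minimal sketch of deterministic LiDAR pre-processing (illustrative;
    # names and thresholds are assumptions, not ASPP TigerEye® code).
    import numpy as np
    from scipy.ndimage import median_filter

    def filter_depth_image(depth: np.ndarray, max_jump: float = 1.0) -> np.ndarray:
        """Suppress speckle returns (receiver noise, sun glints) in a 2D
        range image by comparing each pixel to its local median."""
        smoothed = median_filter(depth, size=3)
        speckle = np.abs(depth - smoothed) > max_jump  # isolated outliers
        cleaned = depth.copy()
        cleaned[speckle] = 0.0                         # 0 = no valid return
        return cleaned

    def voxel_downsample(points: np.ndarray, voxel: float = 0.2) -> np.ndarray:
        """Keep one representative point per voxel, cutting the data
        volume before the more expensive processing stages."""
        keys = np.floor(points / voxel).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        return points[np.sort(idx)]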

ASPP TigerEye® includes 8 main modules covering the complete LiDAR data processing chain for environment awareness and monitoring:

  • TE-1: LiDAR Sensor Data Filtering: removes artifacts from LiDAR depth images caused by receiver noise, solar radiation, haze or scattering.

  • TE-2: Free Space Detection: determines the free, passable space in front of the vehicle by segmenting the ground surface and raised objects (see the sketch after this list).

  • TE-3: Detection of lanes from LiDAR sensor data.

  • TE-4: Segmentation, clustering and tracking of 3D objects.

  • TE-5: Offline/Online Calibration: automated camera image rectification and alignment between LiDAR and camera.

  • TE-6: LiDAR Detection Performance Monitoring: real-time monitoring of sensor range, identification of degradation levels, and delivery of CBITs (continuous built-in tests) for LiDAR sensing.

  • TE-7: Detection of relevant small obstacles.

  • TE-8: Dynamic Collision Warning: prediction of the trajectories of dynamic objects for collision avoidance.
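
As a rough illustration of the TE-2 approach referenced above, the sketch below fits the ground plane with a deterministic RANSAC loop and treats everything off the plane as a raised object; free space is the plane's support in front of the vehicle. Parameters and helper names are assumptions, not values from ASPP TigerEye®.

    # Illustrative ground segmentation for free-space detection.
    import numpy as np

    def fit_ground_plane(pts, iters=100, tol=0.1, seed=0):
        """RANSAC: return (n, d) with n @ p + d = 0 for the plane
        supported by the most points within `tol` metres."""
        rng = np.random.default_rng(seed)
        best_n, best_d, best_count = None, 0.0, -1
        for _ in range(iters):
            p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                   # degenerate (collinear) sample
            n /= norm
            d = -n @ p0
            count = np.sum(np.abs(pts @ n + d) < tol)
            if count > best_count:
                best_n, best_d, best_count = n, d, count
        return best_n, best_d

    def split_ground(pts, n, d, tol=0.1):
        """Split the cloud into ground points (free-space support) and
        raised points (potential obstacles)."""
        dist = pts @ n + d
        ground = np.abs(dist) < tol
        return pts[ground], pts[~ground]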

All modules are available separately from LFT and can be flexibly adapted to different LiDAR sensors and their data formats.

ADFS MentisFusion®
(Data Fusion)

Our ADFS MentisFusion® (Advanced Data Fusion System) product range is inspired by nature, more precisely by the praying mantis. Why? Because this impressive insect focuses all its processes on a single action: catching prey.
Similarly, our product focuses on merging fragmented and sometimes conflicting sensor data into a coherent overall view for the vehicle in real time and processing it safely.

ADFS MentisFusion® uses deterministic, rule-based algorithms to fuse pre-processed data such as 2D structures, 3D objects and segmented 3D data points from different sources. These sources can be camera or LiDAR data, but also RADAR information and data from databases (including cloud-based data).

The purpose of sensor data fusion is to bridge the systematic gaps of different technologies and provide comprehensive coverage of the environment with consistent quality across a wide range of applications.

To meet data integrity requirements, data is first generated in independent functional chains and only then fused. Mutual monitoring and support between multiple sensor functional chains across technologies (the "doer-checker" principle) forms an essential part of this fusion.
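
A minimal sketch of how such a doer-checker arrangement can look in code, assuming a deliberately simplified free-space representation; the names and the 2 m disagreement threshold are illustrative, not part of ADFS MentisFusion®.

    # Doer-checker: an independent chain confirms or vetoes the primary
    # chain's output; on disagreement the system degrades to the safer claim.
    from dataclasses import dataclass

    @dataclass
    class Freespace:
        max_range_m: float  # simplified: length of the free corridor ahead

    def doer_checker(doer: Freespace, checker: Freespace,
                     max_disagreement_m: float = 2.0) -> Freespace:
        """Use the doer's result only where the checker agrees; otherwise
        fall back to the smaller (safer) free-space claim."""
        if abs(doer.max_range_m - checker.max_range_m) <= max_disagreement_m:
            return doer                                # checker confirms doer
        return Freespace(min(doer.max_range_m, checker.max_range_m))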

Fusion results for comprehensive perception

The fusion of independent multimodal functional chains (LiDAR and camera, rule-based and AI) and the overlay with video data results in impressive perception processing. In a challenging sequence with complex scenarios such as overhanging loads, uphill roads, ruts, and low curbs in low ambient light, the fusion results demonstrate the benefits of cross-checking and complementing the results of different functional chains.

Modules of the ADFS MentisFusion® product line:


  • MF-1: Lane Marker Fusion

The MF-1 software fuses lane marking information detected independently in the camera- and LiDAR-based processing chains (cf. TE-3). Cross-checking the detections against each other yields a consistent, reliable lane model.


  • MF-2: Freespace Detection

The MF-2 software combines drivable road-surface information from camera data with LiDAR-based polygons. Using the "doer-checker" principle, the data is cross-checked to exploit the strengths of the different sensors and compensate for their deficiencies. This enables accurate and reliable detection of the traversable area.
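
A minimal sketch of such a conservative polygon fusion, assuming the shapely geometry library and invented example polygons; only space confirmed by both chains counts as free.

    # Freespace fusion: intersect the camera chain's drivable polygon
    # with the LiDAR chain's polygon (coordinates are invented examples).
    from shapely.geometry import Polygon

    camera_free = Polygon([(-5, 0), (5, 0), (4, 40), (-4, 40)])  # camera chain
    lidar_free = Polygon([(-6, 0), (6, 0), (3, 35), (-3, 35)])   # LiDAR chain

    confirmed_free = camera_free.intersection(lidar_free)
    print(f"confirmed free area: {confirmed_free.area:.1f} m^2")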


  • MF-3: Object Fusion

MF-3 combines object information from semantic camera segmentation, LiDAR and RADAR data. The spatial correspondence of relevant objects is checked, and the information from the different sources is combined. The goal is a comprehensive, accurate picture of the detected objects.
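
As a rough sketch of such a spatial-correspondence check, the snippet below pairs LiDAR and camera detections by bounding-box overlap (IoU); the threshold and box layout are assumptions, not MF-3's actual interface.

    # Associate detections from two sources by axis-aligned IoU.
    def iou(a, b):
        """IoU of two (x_min, y_min, x_max, y_max) boxes."""
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def fuse_objects(lidar_boxes, camera_boxes, min_iou=0.3):
        """Return (lidar_idx, camera_idx) pairs whose boxes overlap
        enough to be treated as one physical object."""
        matches = []
        for i, lb in enumerate(lidar_boxes):
            best = max(range(len(camera_boxes)),
                       key=lambda j: iou(lb, camera_boxes[j]), default=None)
            if best is not None and iou(lb, camera_boxes[best]) >= min_iou:
                matches.append((i, best))
        return matches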


  • MF-4: Combined Object Tracking

This software builds on the fusion results of MF-3 and tracks objects over time. The different data sources provide velocity estimates that are combined into accurate tracking results. This interplay of data sources enables precise localization of both static and moving objects.
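
A minimal sketch of one way such per-source velocity estimates can be combined, using inverse-variance weighting so the more certain source dominates; the numbers are invented for illustration.

    # Fuse velocity estimates from several sources by inverse variance.
    def fuse_velocities(estimates):
        """estimates: list of (velocity_mps, variance), e.g. RADAR
        Doppler (precise) vs. LiDAR position differencing (noisier)."""
        weights = [1.0 / var for _, var in estimates]
        fused = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
        return fused, 1.0 / sum(weights)

    v, var = fuse_velocities([(12.1, 0.04), (11.4, 0.5)])
    print(f"fused velocity: {v:.2f} m/s (variance {var:.3f})")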

Our ADFS MentisFusion® products represent an innovative approach to data fusion and provide a robust solution to create holistic, accurate and reliable perception for autonomous driving.

Other products

  • Reference System
  • Toolchain
  • Services