
Safety Critical Software Applications


The LFT SW applications focus on the generation of purely LiDAR-based environment models as well as the safety-critical safeguarding of function and performance for autonomous driving. Rule-based fusion of features and objects from LiDAR data processing (LiDAR Perception) with data from independent processing chains of other sensors is also part of these products.

ASPP TIGEREYE® (LIDAR PERCEPTION)

The visual organs of a tiger are located on the front of the head and not on the side, thus enabling a three-dimensional view; the pair of eyes looks directly forward.

The LFT product “ASPP TigerEye®” (Advanced Sensor Perception Processing) is a modular SW suite for processing LiDAR data. It is based on LFT’s experience with LiDAR data filtering, segmentation and classification in the aerospace industry and uses only classical deterministic algorithms, which, unlike AI-based methods, do not need to be trained. Because the amount of data is reduced early in the processing chain, the required computing time is low. ASPP TigerEye® consists of eight main modules which together comprise the complete LiDAR data processing chain for environmental perception and monitoring.

The modules are:

TE-1: LiDAR sensor data filtering to remove weather- or sensor-related artifacts (false pixels) from LiDAR depth images
TE-2: Free Space Detection: uses LiDAR data to determine first the ground area and then the free space in front of the vehicle
TE-3: Lane Detection, i.e. the derivation of lanes from the LiDAR image
TE-4: Segmentation, clustering and tracking of raised 3D objects
TE-5: Offline/Online Calibration: a toolset for online calibration and calibration verification of LiDAR sensors
TE-6: LiDAR Detection Performance Monitor: a surveillance application that live-monitors the current detection range for relevant obstacles based on the information available in the depth image. It can determine the degree of contamination or degradation of the sensor system and thus provides an additional CBIT (Continuous Built-In Test) for LiDAR sensors
TE-7: Detection of relevant Small Obstacles
TE-8: Dynamic Collision Warning: trajectory prediction for dynamic objects to support collision avoidance

All modules can be purchased from LFT and used independently of each other, and they can be flexibly adapted to different LiDAR sensors and their data formats.


TE-1: LiDAR Sensor Data Filtering

LiDAR sensor technology is not free of artifacts and unwanted drop-in pixels, which are caused by various effects.

 

For example, there are drop-in pixels caused by receiver noise, direct sunlight, or bright haze. These are all effects that are not caused by the light pulse itself. There are also drop-ins caused by scattering of the LiDAR pulse on fog, clouds, snow or dust. LFT offers filters for all of these effects.
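As an illustration of this kind of filtering (not LFT’s actual algorithm), the sketch below flags isolated returns in a depth image that deviate strongly from their local neighbourhood; the function name, window size and threshold are illustrative assumptions.

```python
import numpy as np

def filter_dropins(depth: np.ndarray, window: int = 3, max_dev: float = 1.5) -> np.ndarray:
    """Flag isolated depth pixels that deviate strongly from their neighbourhood.

    depth   : 2D range image in metres, 0.0 marks "no return".
    window  : half-size of the neighbourhood used for the local median.
    max_dev : maximum allowed deviation from the local median in metres.
    Returns a copy of the depth image with suspected drop-ins set to 0.0.
    """
    filtered = depth.copy()
    rows, cols = depth.shape
    for r in range(rows):
        for c in range(cols):
            if depth[r, c] == 0.0:
                continue  # already "no return"
            r0, r1 = max(0, r - window), min(rows, r + window + 1)
            c0, c1 = max(0, c - window), min(cols, c + window + 1)
            patch = depth[r0:r1, c0:c1]
            valid = patch[patch > 0.0]
            # An isolated return far from the local median is treated as a
            # drop-in (receiver noise, sunlight, scattering on fog/snow/dust).
            if abs(depth[r, c] - np.median(valid)) > max_dev:
                filtered[r, c] = 0.0
    return filtered
```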

Three other closely related modules of ASPP TigerEye® are Free Space Detection, Lane Detection and Object Clustering & Tracking.

 

 

TE-2: Free Space Detection

The “Free Space” module first segments the LiDAR point cloud into the ground surface and raised objects. Within the ground area, the accessible area belonging to the road is then determined and can be made available to downstream procedures.
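A minimal sketch of such a ground segmentation step, assuming a roughly planar ground and using a plain RANSAC plane fit; parameter values and the function name are illustrative, not LFT’s implementation.

```python
import numpy as np

def segment_ground(points: np.ndarray, iterations: int = 100,
                   threshold: float = 0.15, seed: int = 0) -> np.ndarray:
    """Split a point cloud (N x 3, metres) into ground and raised points.

    Fits a plane with a simple RANSAC loop and returns a boolean mask that is
    True for points within `threshold` metres of the best plane (= ground).
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-6:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        mask = dist < threshold
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask  # True = ground, False = raised object
```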

TE-3: Lane Detection

The “Lane Detection” module searches within the ground area for lane boundaries, which are used to further refine the free space.
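A minimal sketch of the idea, assuming that lane markings appear as retroreflective (high-intensity) returns within the ground area and that a single boundary is fitted as a polynomial; names and thresholds are illustrative assumptions.

```python
import numpy as np

def extract_lane_boundary(ground_xyz: np.ndarray, intensity: np.ndarray,
                          intensity_min: float = 0.6) -> np.ndarray:
    """Pick retroreflective ground points and fit one lane boundary.

    ground_xyz : M x 3 ground points (vehicle frame, x forward, y left).
    intensity  : M normalised reflectivity values in [0, 1].
    Returns polynomial coefficients of y(x) for the fitted boundary;
    assumes enough high-intensity marking points are present.
    """
    marks = ground_xyz[intensity > intensity_min]
    # A 2nd-order polynomial is enough for moderate curvature at short range.
    return np.polyfit(marks[:, 0], marks[:, 1], deg=2)
```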

TE-4: Object Clustering & Tracking

The “Object Clustering & Tracking” module combines the LiDAR pixels not belonging to the ground area into objects and tracks them.
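The clustering step could look roughly like the following region-growing sketch (an O(N²) toy version; production code would use a k-d tree or range-image connectivity, and the tracking step would then associate clusters across frames).

```python
import numpy as np

def euclidean_cluster(points: np.ndarray, radius: float = 0.5,
                      min_size: int = 5) -> list:
    """Group non-ground points into objects by connecting nearby neighbours.

    points : N x 3 non-ground points in metres.
    Returns a list of index arrays, one per cluster with >= min_size points.
    """
    unassigned = list(range(len(points)))
    clusters = []
    while unassigned:
        queue = [unassigned.pop()]
        members = []
        while queue:
            idx = queue.pop()
            members.append(idx)
            if unassigned:
                dists = np.linalg.norm(points[unassigned] - points[idx], axis=1)
                near = [unassigned[i] for i in np.where(dists < radius)[0]]
                for n in near:
                    unassigned.remove(n)
                queue.extend(near)
        if len(members) >= min_size:
            clusters.append(np.array(members))
    return clusters
```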


TE-5: Offline/Online Calibration

The sensor system of an autonomously driven vehicle has to be calibrated both during production and in later operation. Within ASPP TigerEye®, LFT has developed an SW module for the identification of reference objects in LiDAR point clouds as well as camera images, which provides automated camera image rectification and the alignment between LiDAR and camera.

 

In addition, LFT has developed online methods that can continuously monitor (Continuous Built-In Test) or fine-tune the alignment while driving.
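To illustrate how such an alignment can be checked, the sketch below projects LiDAR points into the camera image with assumed extrinsics and intrinsics; comparing the projected reference target against its position in the camera image exposes calibration errors. All names and parameters are illustrative, not LFT’s toolset.

```python
import numpy as np

def project_lidar_to_image(points_lidar: np.ndarray, R: np.ndarray,
                           t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project LiDAR points into the camera image with a given alignment.

    points_lidar : N x 3 points in the LiDAR frame (metres).
    R, t         : extrinsic rotation (3x3) and translation (3,), LiDAR -> camera.
    K            : 3x3 pinhole camera intrinsics.
    Returns M x 2 pixel coordinates for the points in front of the camera.
    """
    cam = points_lidar @ R.T + t          # transform into the camera frame
    cam = cam[cam[:, 2] > 0.1]            # keep points in front of the camera
    uvw = cam @ K.T                       # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]       # perspective division -> pixels
```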


TE-6: LiDAR Detection Performance Monitoring

For autonomous driving applications with correspondingly higher ASIL levels, live monitoring of the current visibility of the sensor system is essential.

 

Depending on the LiDAR principle, LiDARs react differently to contamination: LFT has procedures to detect different types of degradation, from local blind spots to complete degradation in the entire field of view.
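A simple way to illustrate this kind of monitoring (not the LFT procedure itself) is to evaluate the share of valid returns per field-of-view sector of the depth image; sectors with too few returns are candidates for local blindness or contamination. Grid size and threshold are illustrative assumptions.

```python
import numpy as np

def blockage_report(depth: np.ndarray, grid: tuple = (4, 8),
                    min_valid_ratio: float = 0.5) -> np.ndarray:
    """Estimate sensor blockage per field-of-view sector of a depth image.

    depth : 2D range image, 0.0 marks "no return".
    grid  : number of (vertical, horizontal) sectors to evaluate.
    Returns a boolean grid, True where the share of valid returns falls
    below min_valid_ratio.
    """
    rows = np.array_split(np.arange(depth.shape[0]), grid[0])
    cols = np.array_split(np.arange(depth.shape[1]), grid[1])
    blocked = np.zeros(grid, dtype=bool)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            sector = depth[np.ix_(r, c)]
            blocked[i, j] = (sector > 0.0).mean() < min_valid_ratio
    return blocked
```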


TE-7: Detection of relevant Small Obstacles

Depending on light and weather conditions, small obstacles (e.g. tires, wooden pallets and squared timber) can be difficult or impossible for today’s camera systems to detect at the distance of approx. 100 m in front of the vehicle that is needed for safe braking or evasion on motorways or expressways.


RADAR systems are technologically not able to detect such objects because their radar cross-section (RCS) is typically very small. With the “Detection of relevant Small Obstacles” module, LFT offers the detection of small relevant obstacles that endanger the vehicle, based on recorded or live LiDAR sensor data.


The drivable area of the road can be determined taking into account the speed and the permitted curve radii of a motorway or multi-lane expressway.
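A simplified sketch of the detection idea, assuming ground-segmented clusters (see TE-2/TE-4) and a straight driving corridor; the thresholds and the corridor model are illustrative assumptions, not LFT parameters.

```python
import numpy as np

def flag_small_obstacles(clusters: list, ground_height: float = 0.0,
                         min_height: float = 0.15, max_extent: float = 1.5,
                         corridor_halfwidth: float = 3.5) -> list:
    """Flag compact raised clusters inside an assumed straight driving corridor.

    clusters : list of N x 3 point arrays (vehicle frame, x forward, y left).
    Returns the clusters whose largest footprint edge stays below max_extent,
    whose top exceeds min_height above the ground, and whose centre lies
    within the corridor.
    """
    obstacles = []
    for pts in clusters:
        height = pts[:, 2].max() - ground_height
        extent = np.ptp(pts[:, :2], axis=0).max()   # largest footprint edge
        centre_y = pts[:, 1].mean()
        if (height >= min_height and extent <= max_extent
                and abs(centre_y) <= corridor_halfwidth):
            obstacles.append(pts)
    return obstacles
```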


TE-8: Dynamic Collision Warning

Interpretation of the dynamic environment is an essential requirement for autonomous vehicle systems. In real-world scenarios, objects move and these dynamics must be accounted for in a complete environment perception system. The TE-8 module is a multi-class tracking system with partial and full occlusion handling, labeling and dynamics estimation.
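As a hedged illustration of the dynamics-estimation part, the sketch below implements a minimal 2D constant-velocity Kalman filter for a single track; predicting without an update is what carries a track through short occlusions. It is a generic textbook filter, not the TE-8 implementation.

```python
import numpy as np

class ConstantVelocityTrack:
    """Minimal 2D constant-velocity Kalman filter for one tracked object.

    State: [x, y, vx, vy]. predict() keeps the track alive through short
    occlusions; update() fuses new position measurements.
    """

    def __init__(self, x0, y0, pos_var=0.5, vel_var=4.0, meas_var=0.25):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.diag([pos_var, pos_var, vel_var, vel_var])
        self.R = np.eye(2) * meas_var
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)

    def predict(self, dt: float, accel_var: float = 1.0):
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                      [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        Q = np.eye(4) * accel_var * dt      # simplified process noise
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, zx: float, zy: float):
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```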

ADFS MENTISFUSION® (DATA FUSION)

The name is an allusion to the mantis, the so-called “praying mantis”.

Why? Because this flying insect directs all of its processes precisely towards one “action”: the catching of prey.

In our case, this “action” is the merging and processing of fragmented and sometimes contradictory sensor data into an overall picture that can be understood by humans in real time.

 

The LFT product ADFS MentisFusion® (Advanced Data Fusion System) consists of deterministic, rule-based algorithms and SW packages for data fusion that accept pre-processed inputs in the form of 2D and 3D objects as well as segmented 3D data points from various data sources. Data sources include processed camera, LiDAR and database information (including cloud-based information) as well as RADAR information.

 

The purpose of sensor data fusion is to compensate for systematic gaps in the individual technologies by combining different sensors and thus to cover the entire environment with the same quality over the widest possible range of applications. In order to meet the high requirements for data integrity, the data is first calculated in independent functional chains and only then merged. Mutual monitoring and support (“Doer – Checker” principle) of the multiple sensor functional chains with different technologies is thus an important part of data fusion.
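Conceptually, the “Doer – Checker” principle can be reduced to a small acceptance gate: the output of one functional chain is only used if an independent chain confirms it. The sketch below is purely illustrative; names and the fallback behaviour are assumptions.

```python
def doer_checker(doer_result, checker_result, agree):
    """Accept the doer's output only if the independent checker confirms it.

    doer_result / checker_result : outputs of two independent functional chains.
    agree : callable deciding whether the two outputs are consistent.
    Returns the doer result when confirmed, otherwise None as a stand-in
    for a degraded / safe-state fallback.
    """
    if checker_result is not None and agree(doer_result, checker_result):
        return doer_result
    return None
```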

 

The modules are:

MF-1: Lane-Marker-Fusion

MF-2: Freespace-Fusion

MF-3: Object-Fusion

MF-4: Combined Tracking of Objects

 

All modules can be purchased from LFT and used independently of each other, and they can be flexibly adapted to different sensor branches and their data formats.

Results of “doer-checker” or “checker-checker” fusion of perception processing from independent multimodal functional chains (LiDAR and camera, rule-based and AI), overlaid on video. We selected a somewhat challenging sequence with an overhanging load, an uphill road, ruts and low curb stones in low-contrast ambient light.

 

The fusion results show (from 00:19) the benefit of checking and supplementing the results of one functional chain with those of the other. The capabilities of the LiDAR sensor and LiDAR processing are particularly evident in the nominal case of low-contrast lighting and in the corner case of an overhanging load.

MF-1: Lane-Marker-Fusion

The MF-1 software combines road marking information coming from one or more sensor branches.

 

In a first step, lane marking hypotheses are analysed to determine whether they correspond to the same lane marking, i.e. whether there is sufficient agreement in terms of alignment and distance from each other and whether they overlap. If a match is found, the hypotheses are combined with known probabilistic fusion techniques.

 

If no match but no contradiction to other hypotheses is found, a check is made to see whether the lack of match is due to a legitimate cause, e.g. if the intensity of the retroreflective lane markings in the LiDAR data is too low for a stable hypothesis. If the missing data from the other source do not necessarily indicate a malfunction of this functional chain, the hypotheses from a single source are entered into memory with the appropriate probabilistic weighting. If the new hypothesis contradicts an existing hypothesis, a more complex analysis must be performed.
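A minimal sketch of the matching and fusion step, assuming each lane-marking hypothesis is represented as a quadratic polynomial y(x) with a scalar variance as confidence; the thresholds, names and inverse-variance combination are illustrative assumptions, not the MF-1 algorithm.

```python
import numpy as np

def fuse_lane_hypotheses(coeffs_a, var_a, coeffs_b, var_b,
                         max_offset: float = 0.3, max_heading: float = 0.05):
    """Match and fuse two lane-marking hypotheses y(x) = c2*x^2 + c1*x + c0.

    coeffs_* : polynomial coefficients [c2, c1, c0] from two sensor branches.
    var_*    : scalar variances describing each hypothesis' confidence.
    A match requires agreement in lateral offset (c0) and heading (c1);
    matched hypotheses are combined by inverse-variance weighting. If there
    is no match, (None, None) is returned and the single-source /
    contradiction handling described above takes over.
    """
    offset_ok = abs(coeffs_a[2] - coeffs_b[2]) < max_offset
    heading_ok = abs(coeffs_a[1] - coeffs_b[1]) < max_heading
    if offset_ok and heading_ok:
        w_a, w_b = 1.0 / var_a, 1.0 / var_b
        fused = (w_a * np.asarray(coeffs_a) + w_b * np.asarray(coeffs_b)) / (w_a + w_b)
        return fused, 1.0 / (w_a + w_b)
    return None, None
```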


MF-2: Freespace-Fusion

The MF-2 software combines information on the accessible road area from the semantic segmentation of camera data with polygons of the obstacle-free road derived from LiDAR data (see TE-2).

 

The fusion of these data is done with a mutual verification, as no single sensor or processing branch has clear advantages over the other. It is therefore a so-called “checker-checker” principle. This method allows specific shortcomings of the individual processes to be identified and compensated for.

 

For example, it is known that camera-based free space data only allows the detection of overhanging loads to a limited extent. On the other hand, camera-based free space detection can better detect purely intensity-based or color-based limitations, such as the transition between different pavement patterns in urban environments.

Depending on the safety requirements, the process can be further secured by requiring that no moving objects be located within the free space area. The use of moving objects from a functionally independent RADAR branch is suitable for this purpose.
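On a common occupancy grid, the checker-checker combination (including the optional RADAR cross-check) can be illustrated in a few lines; the grid representation and names are assumptions for this sketch.

```python
import numpy as np

def fuse_free_space(free_cam: np.ndarray, free_lidar: np.ndarray,
                    moving_obj_cells: np.ndarray) -> np.ndarray:
    """Checker-checker fusion of free-space estimates on a common 2D grid.

    free_cam, free_lidar : boolean grids (True = free) from the two
                           independent processing branches.
    moving_obj_cells     : boolean grid, True where an independent RADAR
                           branch reports a moving object.
    A cell is reported as free only when both branches agree and no moving
    object from the RADAR branch lies inside it.
    """
    return free_cam & free_lidar & ~moving_obj_cells
```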

MF-3: Object-Fusion

The MF-3 software combines object information derived from the semantic segmentation of camera data, from LiDAR and from RADAR data (see TE-4).

 

In a first step, a spatial match of the relevant objects is checked. If a sufficient match is found, information from the different sources is taken into account. Not all information from each source has to be included in the merged data package; instead, the best available information from each source should be used. For example, camera-based semantic segmentation typically provides the best classification, while LiDAR-based clustering and tracking contributes good localization in all three dimensions.

 

RADAR in turn provides the best speed value. Any of this information can be cross-checked with data from another processing source and corrected if necessary. For example, if semantic segmentation classifies a small, weakly contrasting object as “road” and thus overlooks it, size and class (possibly “unknown”) from LiDAR data processing are used.
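A hedged sketch of this best-attribute-per-source merging for one matched object triple; the dictionary fields and the fallback behaviour are illustrative, not the MF-3 data model.

```python
def fuse_object(cam_obj, lidar_obj, radar_obj):
    """Merge matched object hypotheses, taking the best attribute per source.

    cam_obj   : e.g. {"cls": "car"} from semantic segmentation, or None if
                the camera branch missed the object.
    lidar_obj : e.g. {"position": (x, y, z), "size": (l, w, h)} from clustering.
    radar_obj : e.g. {"speed": v} from the RADAR branch, or None.
    Camera contributes the class, LiDAR the 3D localisation and extent,
    RADAR the speed; if the camera missed the object, the LiDAR-derived
    class ("unknown" if nothing better) is used instead.
    """
    return {
        "position": lidar_obj["position"],
        "size": lidar_obj["size"],
        "cls": cam_obj["cls"] if cam_obj else lidar_obj.get("cls", "unknown"),
        "speed": radar_obj["speed"] if radar_obj else lidar_obj.get("speed"),
    }
```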

 

Depending on the distance to the ego vehicle, different strategies are used to minimize the risk. The challenge is to achieve a high level of performance, i.e. a low number of false negative detections (a real object is rejected) together with an acceptably low number of false positive detections (a non-solid object, e.g. a water spray cloud, is not rejected).

MF-4: Combined Tracking of Objects

The MF-4 software builds on the object fusion of different sources performed in MF-3 and tracks the fused objects over time.

 

The different data sources in turn have different roles. LiDAR and RADAR data processing provide speed estimates that have to be combined and filtered.

 

The object class typically derived from semantic segmentation provides important information for tracking objects in complex scenes, e.g. with occlusions, and for classifying static objects as such. The meta-information of the LiDAR data in turn provides information on where occlusions can occur at all.

 

The aim is first to separate static from moving objects and then to track the moving objects with speed information through the sensor’s field of view.
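As an illustration of the speed fusion and the static/moving separation (generic inverse-variance fusion, not the MF-4 algorithm; thresholds and names are assumptions):

```python
def fuse_speed(v_lidar: float, var_lidar: float,
               v_radar: float, var_radar: float):
    """Inverse-variance fusion of the LiDAR and RADAR speed estimates."""
    w_l, w_r = 1.0 / var_lidar, 1.0 / var_radar
    v = (w_l * v_lidar + w_r * v_radar) / (w_l + w_r)
    return v, 1.0 / (w_l + w_r)   # fused speed and its variance

def classify_motion(fused_speed: float, speed_sigma: float,
                    static_threshold: float = 0.5) -> str:
    """Label a track as static or moving from the fused speed estimate.

    A track is only considered moving when the fused speed exceeds the
    threshold by a margin of its own uncertainty, so that noisy estimates
    of parked objects are not promoted to moving objects.
    """
    return "moving" if fused_speed - speed_sigma > static_threshold else "static"
```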