The LFT SW applications focus on the generation of purely LiDAR-based environment models as well as the safety-critical safeguarding of function and performance for autonomous driving. Rule-based fusion of features and objects from LiDAR data processing (LiDAR Perception) with data from independent processing chains of other sensors is part of these products.
A tiger's eyes are located at the front of the head rather than on the sides and look directly forward, enabling three-dimensional vision.
The LFT product "ASPP-TigerEye" (Advanced Sensor Perception Processing) is a modular SW suite for processing LiDAR data. It builds on LFT's aerospace experience in LiDAR data filtering, segmentation and classification and uses only classical, deterministic algorithms. Accordingly, it does not need to be trained the way AI-based methods do. Because the amount of data is reduced early in the processing chain, the required computing time is low. ASPP-TigerEye consists of six main modules which together comprise the complete LiDAR data processing chain for environmental perception and monitoring.
The six modules are described in the following. All modules can be purchased from LFT individually, can be used independently of each other, and can be flexibly adapted to different LiDAR sensors and their data formats.
LiDAR sensor technology is not free of artifacts and unwanted drop-in returns, which are caused by various effects.
For example, there are drop-in pixels caused by receiver noise, direct sunlight or bright haze, i.e. effects that are not triggered by the light pulse itself. There are also drop-ins caused by scattering of the LiDAR pulse on fog, clouds, snow or dust. LFT offers filters for all of these effects.
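The idea behind such a filter can be sketched in a few lines: a return that deviates strongly from the median of its scan neighbourhood is treated as a drop-in and discarded. This is a minimal illustrative stand-in, not LFT's actual filter; the window size and deviation threshold are assumptions.

```python
from statistics import median

def filter_drop_ins(ranges, window=5, max_dev=2.0):
    """Keep only range returns that agree with the median of their
    scan neighbourhood; isolated outliers (drop-ins from noise,
    sunlight, scattering) are discarded. Illustrative sketch only."""
    half = window // 2
    kept = []
    for i, r in enumerate(ranges):
        lo = max(0, i - half)
        hi = min(len(ranges), i + half + 1)
        neighbourhood = ranges[lo:i] + ranges[i + 1:hi]
        if abs(r - median(neighbourhood)) <= max_dev:
            kept.append((i, r))
    return kept
```

A real filter would of course work on the full 3D point cloud and model each effect separately; the sketch only shows the principle of neighbourhood consistency.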
Three other closely related modules of ASPP-TigerEye are Free Space Detection, Lane Detection and Object Clustering & Tracking.
The "Free Space" module first segments the LiDAR point cloud into the ground surface and raised objects. Within the ground area, the accessible area belonging to the road is then determined and can be made available to downstream procedures.
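The first step of this module can be illustrated with a minimal height-threshold segmentation. The flat-ground assumption and the tolerance value are illustrative choices, not LFT's actual method:

```python
def segment_ground(points, z_tol=0.2):
    """Split a point cloud (x, y, z tuples) into ground and raised
    points by comparing each z value against an estimated ground
    height. Minimal flat-ground sketch for illustration only."""
    ground_z = min(z for _, _, z in points)
    ground, raised = [], []
    for p in points:
        (ground if p[2] - ground_z <= z_tol else raised).append(p)
    return ground, raised
```

A production implementation would fit a ground model (e.g. piecewise planes) rather than assume a single flat level, but the ground/raised split is the same structural idea.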
The "Lane Detection" module searches within the ground area for lane boundaries, which are used to further refine the free space.
The "Object Clustering & Tracking" module combines the LiDAR pixels not belonging to the ground area into objects and tracks them.
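The clustering step above can be sketched as a simple distance-based grouping: points closer than a threshold are connected into one object. This single-linkage flood fill is an illustrative stand-in (O(n²), 2D only), not the algorithm LFT ships:

```python
from math import hypot

def cluster_points(points, eps=0.5):
    """Group non-ground points (x, y tuples) into objects by
    connecting points closer than `eps`. Illustrative sketch of the
    clustering step that precedes tracking."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if hypot(points[i][0] - points[j][0],
                             points[i][1] - points[j][1]) <= eps]
            for j in near:
                unvisited.remove(j)
                cluster.append(j)
                frontier.append(j)
        clusters.append([points[k] for k in cluster])
    return clusters
```

Tracking would then associate these clusters across frames, e.g. via a motion model per object; that part is omitted here.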
The sensor system of an autonomously driven vehicle has to be calibrated both during production and in later operation. Within ASPP-TigerEye, LFT has developed an SW module for the identification of reference objects in LiDAR point clouds as well as camera images, which provides automated camera image rectification as well as alignment between LiDAR and camera.
In addition, LFT has developed online methods that can continuously monitor (Continuous Built-In Test) or fine-tune the alignment while driving.
For autonomous driving applications with correspondingly higher ASIL levels, live monitoring of the current visibility of the sensor system is essential. Depending on the LiDAR principle, LiDARs react differently to contamination: LFT has procedures to detect different types of degradation, from local blind spots to complete degradation in the entire field of view.
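The distinction between local blind spots and field-of-view-wide degradation can be illustrated with a per-sector check on valid-return rates. The thresholds and the two-level classification are assumptions for the sketch, not LFT's actual procedure:

```python
def detect_blockage(sector_return_rates, local_thresh=0.2, global_thresh=0.5):
    """Classify sensor degradation from per-sector valid-return rates:
    individually weak sectors suggest local blind spots, a weak
    average suggests degradation across the entire field of view.
    Illustrative thresholds only."""
    blind = [i for i, r in enumerate(sector_return_rates) if r < local_thresh]
    avg = sum(sector_return_rates) / len(sector_return_rates)
    if avg < global_thresh:
        return "complete_degradation", blind
    if blind:
        return "local_blind_spots", blind
    return "nominal", blind
```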
An imitation of the mantis, the so-called "praying mantis".
Why? Because this flying insect directs all of its processes precisely toward one "action": the catching of prey.
Data fusion does the same: it merges and processes fragmented and sometimes contradictory sensor data into an overall picture that can be understood by humans in real time.
The LFT product ADFS-MentisFusion (Advanced Data Fusion System) consists of deterministic, rule-based algorithms and SW packages for data fusion that accept pre-processed inputs as 2D and 3D objects as well as segmented 3D data points from the various data sources. Data sources are e.g. the processed camera, LiDAR and database information (including cloud based information) as well as RADAR information.
The purpose of sensor data fusion is to compensate for systematic gaps in the individual technologies by combining different sensors and thus to cover the entire environment with the same quality over the widest possible range of applications. In order to meet the high requirements for data integrity, the data is first calculated in independent functional chains and only then merged. Mutual monitoring and support ("Doer - Checker" principle) of the multiple sensor functional chains with different technologies is thus an important part of data fusion.
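The "Doer - Checker" principle described above can be sketched as a cross-check between two independently computed estimates of the same quantity (e.g. an object distance from the LiDAR chain versus the camera chain). The function name, interface and degradation policy are illustrative assumptions, not LFT's actual API:

```python
def doer_checker(doer_estimate, checker_estimate, max_disagreement):
    """Accept the doer chain's estimate only while the independent
    checker chain agrees within a bound; otherwise report a degraded
    state instead of a value. Illustrative sketch of the principle."""
    if abs(doer_estimate - checker_estimate) <= max_disagreement:
        return doer_estimate, "valid"
    return None, "degraded"
```

The essential design point is that both estimates come from functional chains that share no processing up to this comparison, so a fault in one chain cannot silently corrupt both.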
The LFT product Environment Perception is the holistic system for recording the vehicle environment and transferring it to the downstream vehicle system. It contains the corresponding sensor HW (e.g. LiDAR, camera, radar) for acquiring the scenery, processing software (both AI-based and deterministic) that derives scene understanding from the respective data, and the fusion procedures required for the functional safeguarding of the environment perception. The latter is a prerequisite for safe operation and thus for certification of a vehicle with a higher level of autonomy.
The functional decomposition, the selection of the sensor technology and also the structure of the processing chains and fusion procedures must be adapted to the reliability and safety requirements of the "intended function" to be delivered. Here, too, mutual monitoring and support ("Doer - Checker" principle) of dissimilar sensors is a prerequisite for a complete provision of the "intended function".
LFT has a Reference System for recording time-synchronized and georeferenced raw data from automotive sensors.
It consists of a scalable HW/SW recording platform which currently records sensor raw data from four LiDARs, four cameras, two navigation systems and one RADAR with sub-µs resolution.
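What sub-µs time synchronization enables can be illustrated by pairing frames from two sensors by nearest timestamp, rejecting pairs whose skew exceeds a bound. The function and its parameters are illustrative, not the Reference System's actual interface; timestamps are in µs and assumed sorted:

```python
from bisect import bisect_left

def nearest_frames(lidar_ts, camera_ts, max_skew_us=500):
    """Pair each LiDAR frame with the camera frame closest in time,
    dropping pairs whose skew exceeds `max_skew_us`. Both timestamp
    lists must be sorted ascending. Illustrative sketch only."""
    pairs = []
    for t in lidar_ts:
        i = bisect_left(camera_ts, t)
        candidates = camera_ts[max(0, i - 1):i + 1]
        if not candidates:
            continue
        best = min(candidates, key=lambda c: abs(c - t))
        if abs(best - t) <= max_skew_us:
            pairs.append((t, best))
    return pairs
```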
The angular calibration of the recorded sensor data is accurate enough that the resolution of the sensors is not degraded.
The Reference System is primarily used to obtain raw data for the development and validation of our LiDAR Perception products.
It is also used to optimize sensor calibration and performance monitoring procedures. Due to its scalability it is also predestined for the rapid validation of new sensor technology.
The counterpart of this recording platform is a scalable visualization platform (V-SW), which is able to play back and process the recorded 3D data in real time and synchronously.
The V-SW offers the possibility to generate a RAW fusion of any sensors and can provide any kind of display for visualization.
At the same time the LFT LiDAR Perception products are integrated in the V-SW and can be evaluated.
It is also possible to connect further data sources to evaluate fusion processes.
Owing to the aviation background of its founding team, LFT builds on extensive experience in the design of safety-critical systems.
The safety requirements of an intended function in an autonomous system are among the most important inputs for system design. The system architecture providing the function must decompose the system into available components which, taken together, can guarantee the functional safety of the overall function.
The basis for this is ISO 26262, while at the same time it must be ensured that the function is safeguarded under all permitted boundary conditions (SOTIF, Safety Of The Intended Functionality).
As a result, weaknesses in the subcomponents of a system are analyzed and compensated for by supplementing them with suitable other elements (Safety by Design).
The LFT team has more than 20 years of experience in the design of airborne LiDAR sensor technology, which our customers can access. Among other things, we advise you on the following topics: