Vision-AI IP

Versatile Vision-AI Empowering Smart Vehicles

The Vision-AI models are developed in-house using a comprehensive AI pipeline that encompasses data collection, labeling, training, quantization, and inference. The models can be sold independently to meet specific customer requirements. They have been successfully applied across vehicle markets, including passenger cars, commercial vehicles, and two-wheelers, and have been optimized for low-bit, cross-platform applications. The types of Vision-AI models are as follows:

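To give a rough sense of the quantization step in such a pipeline, the sketch below applies standard PyTorch post-training static quantization to a small placeholder network; the model, layer sizes, and calibration data are illustrative assumptions, not the actual in-house pipeline.

```python
import torch
import torch.nn as nn

class TinyVisionNet(nn.Module):
    """Hypothetical stand-in for an in-house vision model."""
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()      # marks where inputs become int8
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = torch.ao.quantization.DeQuantStub()  # marks where outputs return to float

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyVisionNet().eval()
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
prepared = torch.ao.quantization.prepare(model)

# Calibrate on a few representative frames so activation ranges are observed.
for _ in range(8):
    prepared(torch.randn(1, 3, 224, 224))

int8_model = torch.ao.quantization.convert(prepared)  # low-bit model for deployment
```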

Multi-camera image stitching

Multi-camera stitching technology seamlessly combines images from multiple cameras into a fully integrated 2D/3D surround view, providing comprehensive visual monitoring around the vehicle. It also supports a transparent-chassis feature that lets drivers see the road underneath the vehicle, enhancing driving safety. The technology is widely used in ADAS to reduce blind spots, especially when driving in tight or complex environments.

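As a minimal sketch of the general idea, the snippet below warps each camera frame onto a common ground plane and blends the overlaps, assuming per-camera homographies have already been obtained from calibration; the homographies and canvas size here are placeholders, not the product's calibration.

```python
import cv2
import numpy as np

# Placeholder calibration: homography mapping each camera image onto a common
# bird's-eye-view ground plane (in practice obtained from extrinsic calibration).
homographies = {
    "front": np.eye(3), "rear": np.eye(3), "left": np.eye(3), "right": np.eye(3),
}
BEV_SIZE = (800, 800)  # width, height of the composite top view

def stitch_surround_view(frames: dict) -> np.ndarray:
    """Warp each camera frame to the ground plane and average overlapping regions."""
    canvas = np.zeros((BEV_SIZE[1], BEV_SIZE[0], 3), dtype=np.float32)
    weight = np.zeros((BEV_SIZE[1], BEV_SIZE[0], 1), dtype=np.float32)
    for name, frame in frames.items():
        warped = cv2.warpPerspective(frame.astype(np.float32), homographies[name], BEV_SIZE)
        mask = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        canvas += warped * mask
        weight += mask
    return (canvas / np.maximum(weight, 1.0)).astype(np.uint8)
```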

Object detection, classification & tracking

The object detection, classification & tracking technology accurately identifies various object types and estimates their relative positions with a 3D bounding box for each object. It is particularly effective at detecting low obstacles, helping to prevent potential collisions. The capability to track multiple objects simultaneously further enhances overall driving safety, enabling smarter ADAS solutions.

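To illustrate the tracking side in the simplest terms, the sketch below associates detections across frames with greedy IoU matching on 2D boxes; this is a generic textbook approach with assumed data structures, not the production tracker or its 3D boxes.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Track:
    track_id: int
    box: tuple   # (x1, y1, x2, y2) in image coordinates
    label: str

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

_ids = count()

def update_tracks(tracks, detections, iou_thresh=0.3):
    """Greedy IoU matching: reuse a track id when a detection overlaps it enough.
    detections is a list of (box, label) tuples from the current frame."""
    updated, unmatched = [], list(detections)
    for tr in tracks:
        best = max(unmatched, key=lambda d: iou(tr.box, d[0]), default=None)
        if best is not None and iou(tr.box, best[0]) >= iou_thresh:
            updated.append(Track(tr.track_id, best[0], best[1]))
            unmatched.remove(best)
    for box, label in unmatched:  # detections with no match start new tracks
        updated.append(Track(next(_ids), box, label))
    return updated
```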

Lane detection and classification

Lane detection and classification technology accurately identifies different lane types and colors, such as solid lines, dashed lines, and double white lines. The technology uses cubic polynomial equations to fit lane curves. Additionally, on-road calibration ensures the precision and stability of lane detection, significantly enhancing the safety and reliability of autonomous driving systems.

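Fitting a lane curve with a cubic polynomial can be sketched in a few lines; the sample points below are invented for illustration, and a real system would fit its own detected boundary points in image or vehicle coordinates.

```python
import numpy as np

# Hypothetical lane-boundary points (x: distance ahead in metres, y: lateral offset).
x = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 30.0])
y = np.array([0.05, 0.12, 0.30, 0.58, 0.95, 2.10])

# Cubic fit: y = c3*x^3 + c2*x^2 + c1*x + c0
c3, c2, c1, c0 = np.polyfit(x, y, deg=3)
lane = np.poly1d([c3, c2, c1, c0])

print(f"lateral offset at 25 m ahead: {lane(25.0):.2f} m")
```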

Image segmentation

Image segmentation technology precisely differentiates between drivable freespace, vehicles, motorcycles, pedestrians, and objects such as curbs, walls, pillars, speed bumps, and rising or falling parking locks. This technology enhances the environmental perception of autonomous driving systems, ensuring safer navigation in complex road conditions.

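A small sketch of how a per-pixel class map might be turned into a drivable-freespace mask is shown below; the class IDs and the dummy output are assumptions for illustration only.

```python
import numpy as np

# Hypothetical class IDs for a segmentation model's per-pixel output.
FREESPACE, VEHICLE, PEDESTRIAN, CURB, SPEED_BUMP = 0, 1, 2, 3, 4

def freespace_mask(class_map: np.ndarray) -> np.ndarray:
    """Return a boolean mask of drivable pixels from an HxW class-ID map."""
    return class_map == FREESPACE

def freespace_ratio(class_map: np.ndarray) -> float:
    """Fraction of the image that is drivable, a simple sanity metric."""
    return float(freespace_mask(class_map).mean())

# Example with a dummy 4x4 class map instead of real model output.
demo = np.array([[0, 0, 1, 3],
                 [0, 0, 2, 3],
                 [0, 0, 2, 4],
                 [0, 0, 0, 0]])
print(freespace_ratio(demo))  # 0.625
```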

Parking space detection

Parking space detection technology accurately identifies the direction and type of parking spaces while checking their availability in real time. It also recognizes parking space numbers and, combined with an automatic parking system, allows drivers to easily find and park in the appropriate space, improving both parking efficiency and safety.

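One way to picture the detector's output is a simple slot structure carrying geometry, type, availability, and an optional space number; the fields and helper below are illustrative assumptions, not the product's actual interface.

```python
from dataclasses import dataclass
from typing import Optional
import math

@dataclass
class ParkingSlot:
    corners: list                 # four (x, y) points in vehicle coordinates, metres
    slot_type: str                # e.g. "perpendicular", "parallel", "angled"
    is_free: bool                 # availability checked in real time
    space_number: Optional[str]   # recognized slot number, if visible

def nearest_free_slot(slots):
    """Pick the closest available slot to the vehicle origin (0, 0)."""
    free = [s for s in slots if s.is_free]
    if not free:
        return None
    def dist(s):
        cx = sum(p[0] for p in s.corners) / 4.0
        cy = sum(p[1] for p in s.corners) / 4.0
        return math.hypot(cx, cy)
    return min(free, key=dist)
```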

Multi-camera Visual SLAM

Multi-camera visual SLAM technology uses a combination of semantic and non-semantic feature points to achieve precise 360-degree mapping and localization, unaffected by wide-angle lens distortion. It also demonstrates robust adaptability to lighting variations, ensuring accurate positioning and map generation in diverse environments for autonomous driving and robotics.

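As a minimal sketch of the non-semantic feature side of such a system, the snippet below matches ORB keypoints between consecutive frames with OpenCV; multi-camera fusion, semantic features, and map management are outside the scope of this illustration.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_frames(prev_gray, curr_gray, max_matches=200):
    """Detect ORB keypoints in two grayscale frames and return matched point pairs."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
```

In a fuller pipeline, matched point pairs like these would typically feed pose estimation and map updates.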

Facial recognition and landmark detection

Facial recognition and landmark detection technology enables efficient face detection and identity verification, accurately capturing mouth open/close states, eye open/close states, and head pose. It also tracks eye gaze, providing comprehensive driver and user behavior analysis that enhances driving security and the interaction experience.

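Eye open/close state, for example, is commonly derived from landmarks with an eye-aspect-ratio style measure; the sketch below shows that generic technique with an assumed six-point eye layout and threshold, not the product's specific model.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye (p1..p6).
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops toward 0 as the eye closes."""
    d = lambda a, b: math.dist(a, b)
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

def eye_is_open(eye, threshold=0.2):
    # Threshold is an assumed typical value; a real system would calibrate per driver.
    return eye_aspect_ratio(eye) > threshold
```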

Let’s find the perfect solution for you