FAQ

System Performance
What is the protection level of the Zenmuse L2?
Zenmuse L2 achieves an IP54 rating according to the IEC 60529 standard under controlled laboratory conditions. To ensure the highest levels of protection:
• Before installing, make sure that the interface and surface of the gimbal are dry;
• Before use, make sure that the gimbal is firmly installed on the drone and the SD card protective cap is clean, free of foreign objects, and closed;
• Before opening the SD card protective cap, wipe the surface of the drone clean.

The protection level will decrease over time due to normal device use and wear.
What aircraft is Zenmuse L2 compatible with? Which gimbal interface can it be mounted on?
Zenmuse L2 is compatible with the Matrice 350 RTK and Matrice 300 RTK and only supports the DJI RC Plus remote controller. Before use, update the firmware of the aircraft and remote controller to the latest version. To ensure mapping accuracy, make sure the L2 is mounted on a single downward gimbal connector with the cable connected to the right USB-C port (when facing the aircraft).
What is the Field of View (FOV) of Zenmuse L2's LiDAR?
Repetitive scanning: Horizontal 70°, Vertical 3°
Non-repetitive scanning: Horizontal 70°, Vertical 75°
What is the maximum detection range of Zenmuse L2?
Detection range:
250 m @ 10% reflectivity, 100 klx
450 m @ 50% reflectivity, 0 klx
The recommended operating altitude is 30-150 m.
How many returns does Zenmuse L2 support?
Zenmuse L2 supports five return modes: single return (strongest echo), dual returns, triple returns, quad returns, and penta returns.
What is the point cloud rate of Zenmuse L2?
Single return: max. 240,000 pts/s
Multiple returns: max. 1,200,000 pts/s
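To get a rough sense of what these rates mean on the ground, the sketch below estimates average point density from the point rate, flight altitude, speed, and horizontal FOV. The altitude, speed, and FOV values are assumptions taken from elsewhere in this FAQ, and the result is an illustrative estimate only; actual density depends on the scanning mode, number of returns, overlap, and terrain.

```python
import math

# Illustrative estimate only; values below are assumptions from this FAQ.
point_rate = 240_000        # pts/s, single return
altitude = 150.0            # m, relative flight altitude
speed = 15.0                # m/s, flight speed
fov_horizontal = 70.0       # degrees, LiDAR horizontal FOV

# Swath width on flat ground directly below the aircraft.
swath = 2 * altitude * math.tan(math.radians(fov_horizontal / 2))

# Ground area swept per second, and resulting average point density.
area_per_second = swath * speed              # m^2/s
density = point_rate / area_per_second       # pts/m^2

print(f"Swath width: {swath:.0f} m")                     # ~210 m
print(f"Approx. point density: {density:.0f} pts/m^2")   # ~76 pts/m^2
```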
How many scanning modes does Zenmuse L2 have? In what scenarios do they apply?
Zenmuse L2 has two scanning modes: repetitive scanning and non-repetitive scanning.
In repetitive scanning mode, LiDAR can achieve more uniform and accurate scanning, meeting high-precision mapping requirements.
In non-repetitive scanning mode, it offers stronger penetration, gathering more structural information, making it suitable for power line inspection, forestry surveying, and other scenarios.
What is L2's RGB camera used for?
When collecting point cloud data, the RGB camera provides real-time color information for the data, and the photos taken can be used for reconstructing 2D RGB models. When there is no need to gather point cloud data, the RGB camera can take photos and videos and collect images for reconstructing 2D or 3D RGB models.
What is the surveying and mapping accuracy of Zenmuse L2?
Horizontal accuracy: 5 cm
Vertical accuracy: 4 cm

Measured under the following conditions in a DJI laboratory environment: Zenmuse L2 mounted on a Matrice 350 RTK and powered on, with the flight route planned using DJI Pilot 2's Area Route (Calibrate IMU enabled), repetitive scanning mode, and RTK in the FIX status. The relative altitude was set to 150 m, flight speed to 15 m/s, gimbal pitch to -90°, and each straight segment of the flight route was less than 1500 m. The field contained objects with obvious angular features, and exposed hard-ground check points conforming to the diffuse reflection model were used. DJI Terra was used for post-processing with Optimize Point Cloud Accuracy enabled. Under the same conditions with Optimize Point Cloud Accuracy not enabled, the vertical accuracy is 4 cm and the horizontal accuracy is 8 cm.
What CMOS size is Zenmuse L2’s RGB camera? And what is its pixel size?
The RGB camera uses a 4/3 CMOS, and the pixel size is 3.3 × 3.3 μm.
What improvements does Zenmuse L2 have compared to the previous generation?
The performance of the LiDAR has improved: at a distance of 100 m, the laser spot size is about 1/5 that of the L1. The LiDAR's penetration ability has been significantly increased, and both its detection range and accuracy have improved. The pixel area of the RGB camera has increased by 89% compared with L1's 2.4 × 2.4 μm pixels (3.3² / 2.4² ≈ 1.89). The LiDAR also supports a laser rangefinder (RNG) function.
Field Data Collection
Which flight platforms support the Power Line Follow feature of the Zenmuse L2?
Currently, the Power Line Follow feature is supported only when the Zenmuse L2 is mounted on the Matrice 350 RTK. Support for the Matrice 300 RTK will be available soon. Please stay tuned for official updates.
What types of power lines is the Zenmuse L2's Power Line Follow feature suitable for?
The Power Line Follow feature of the Zenmuse L2 is designed for transmission and distribution lines with voltage levels of 10 kV and above. However, it cannot guarantee effective recognition for low-voltage lines, such as those at 400 V, or communication and broadcasting cables.
Is it necessary to install the CSM Radar on the flight platform before performing power line follow tasks?
To ensure flight safety, it is recommended to install the CSM Radar on the flight platform and enable Horizontal Radar Obstacle Avoidance in the DJI Pilot 2 app.
What should I pay attention to when performing power line follow tasks?
During power line follow tasks, ensure flight safety and verify that the aircraft is correctly following the power line. It is recommended to continuously monitor the FPV live view to identify potential risks such as intersecting lines. Additionally, use the point cloud and visible light views from the Zenmuse L2 to verify the accuracy of the power line being followed.
In what scenarios could the recognition and following performance of power lines be affected? How can these issues be resolved?
The performance of power line recognition and following may be compromised under the following scenarios:
1. Insulated lines.
2. Tree canopies that are too close to or even obstructing the lines.
3. Dense distribution of multiple power lines, such as in substation entry and exit lines.
4. Complex intersections between power lines and other cables.
In these situations, while ensuring flight safety, you can attempt to lower the power line follow altitude and speed to improve recognition and following performance.
How do I enhance the accuracy of power line recognition at the beginning of a power line follow task?
To enhance the accuracy of line recognition at the beginning of a power line follow task, you can take the following measures:
1. Start the task from the transmission tower. This can significantly improve recognition accuracy, especially in scenarios with densely distributed parallel power lines. Specifically, position the flight platform and adjust the gimbal angle so that the top and upper part of the transmission tower are centered horizontally and sit toward the bottom of the camera view. Then configure the parameters and initiate the power line follow task.
2. While ensuring safety, lower the power line follow altitude to get closer to the power lines and transmission towers.
Is it necessary to perform IMU calibrations during a power line follow task?
It is recommended to perform an IMU calibration at both the beginning and end of the power line follow task. There is no need to specifically perform IMU calibrations during the power line follow task, as the acceleration and deceleration movements of the flight platform can also serve as IMU calibrations to some extent.
Why is the AR recognition of the power lines sometimes inaccurate in the camera view? Does this affect the quality of the modeling?
During a power line follow task, the app uses AR projection to display the current power line being followed, assisting users in determining the current power line and flight direction. The AR projection may occasionally deviate from or extend beyond the edges of the power lines, but such discrepancies do not affect the quality of the modeling.
Why do the recognized power line branches change at branch points during power line follow tasks? Why are the branch numbers not sequential?
At complex branching points (e.g., low-voltage lines, trees, streetlight poles), the flight platform continuously recognizes and assesses the surrounding environment. However, recognition of interfering objects may be inconsistent. Incorrect recognition results are discarded, and their corresponding numbers are no longer included, which is why the final branch numbering is not sequential.
How efficient is the surveying and mapping operation of Zenmuse L2?
Zenmuse L2 can collect data covering an area of up to 2.5 km² in a single flight.

Measured with Zenmuse L2 mounted on a Matrice 350 RTK at a flight speed of 15 m/s, flight altitude of 150 m, side overlap rate of 20%, Calibrate IMU enabled, Elevation Optimization turned off, and terrain follow turned off.
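As a back-of-the-envelope check of this figure, the sketch below estimates the flight time needed to cover 2.5 km² under the stated settings, assuming the strip spacing equals the LiDAR swath width reduced by the 20% side overlap and ignoring turns, takeoff, and landing. It is an illustrative estimate, not a DJI-published calculation.

```python
import math

# Optimistic estimate: turning between flight lines and takeoff/landing are ignored.
altitude = 150.0       # m, flight altitude
speed = 15.0           # m/s, flight speed
fov_horizontal = 70.0  # degrees, LiDAR horizontal FOV
side_overlap = 0.20    # 20% side overlap between adjacent strips
area_km2 = 2.5         # target coverage from this FAQ

swath = 2 * altitude * math.tan(math.radians(fov_horizontal / 2))
strip_spacing = swath * (1 - side_overlap)    # new ground covered per strip
coverage_rate = strip_spacing * speed         # m^2 of new ground per second

flight_time_min = (area_km2 * 1_000_000) / coverage_rate / 60
print(f"Strip spacing: {strip_spacing:.0f} m")                       # ~168 m
print(f"Estimated time for {area_km2} km^2: {flight_time_min:.0f} min")  # ~17 min
```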
What are the application scenarios of Zenmuse L2?
Zenmuse L2 can be widely used in multiple scenarios including topographic surveying and mapping, power line modeling, forestry management, surveying measurement, and more.
What type of SD card is required for Zenmuse L2?
An SD card with a sequential write speed of 50 MB/s or above and a UHS-I Speed Grade 3 rating or above is required; max capacity: 256 GB. Recommended microSD cards:
Lexar 1066x 64GB U3 A2 V30 microSDXC
Lexar 1066x 128GB U3 A2 V30 microSDXC
Kingston Canvas Go! Plus 128GB U3 A2 V30 microSDXC
Lexar 1066x 256GB U3 A2 V30 microSDXC
What does real-time point cloud modeling of Zenmuse L2 mean? Which coloring modes are supported? What operations are supported during viewing?
During the collection of original point cloud data, Zenmuse L2 can generate and display a real-time point cloud model in the DJI Pilot 2 app processed with sparse resolution. Four coloring modes are supported, including reflectivity, height, distance and RGB. When viewing models in the album on the remote controller, you can rotate, drag, zoom, quickly switch the perspective, and re-center the view.
Which types of flight tasks does Zenmuse L2 support?
The L2 currently supports Waypoint Route, Area Route and Linear Route flight tasks.
Does Zenmuse L2 require warm-up before performing flight tasks?
No warm-up is required. Once the aircraft’s RTK is in the FIX status, it can take off and operate.
Does Zenmuse L2 need to calibrate the IMU during operation?
To ensure the accuracy of the collected data, enable Calibrate IMU before executing a flight task. Before manual flight, you can tap Calibrate to manually trigger a calibration. During the operation, trigger the IMU calibration again when prompted by the countdown.
What is the purpose of Zenmuse L2’s task quality report?
The Task Quality Report records the effective data duration of the LiDAR, the camera, and the IMU module. Operators can judge the validity of data collection based on the status of each module.
What are the different types of data saved on the SD card of Zenmuse L2?
CLC (camera LiDAR calibration file)
CLI (LiDAR IMU calibration file)
LDR (LiDAR data)
RTK (RTK data of main antenna)
RTL (compensation data of RTK pole)
RTS (RTK data of auxiliary antenna)
RTB (base station RTCM data)
IMU (IMU raw data)
SIG (PPK signature file)
LDRT (point cloud file for playback on the app)
RPT (point cloud quality report)
RPOS (real-time POS solution data)
JPG (photos taken during flight)
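If you want to verify that a task folder on the SD card contains all of these data types before leaving the field, a short script such as the hypothetical sketch below can tally files by extension. It assumes the type names listed above appear as file extensions inside the task folder; adjust it to match the actual layout of your SD card.

```python
from collections import Counter
from pathlib import Path

# Hypothetical helper: count L2 raw-data files by extension in a task folder.
# Adjust EXPECTED to match the actual files written for your flight.
EXPECTED = {"CLC", "CLI", "LDR", "RTK", "RTL", "RTS", "RTB",
            "IMU", "SIG", "LDRT", "RPT", "RPOS", "JPG"}

def summarize(task_folder: str) -> None:
    folder = Path(task_folder)
    if not folder.is_dir():
        print(f"Folder not found: {folder}")
        return
    counts = Counter(p.suffix.lstrip(".").upper()
                     for p in folder.rglob("*") if p.is_file())
    for ext in sorted(EXPECTED):
        print(f"{ext:>5}: {counts.get(ext, 0)} file(s)")
    missing = EXPECTED - set(counts)
    if missing:
        print("Missing types:", ", ".join(sorted(missing)))

summarize("/path/to/sd_card/task_folder")  # placeholder path
```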
Will there be a difference in accuracy when Zenmuse L2 is mounted on Matrice 300 RTK and Matrice 350 RTK?
When the RTK is in the FIX status, there is no difference in accuracy between the two.
During the operation, can operators play back the point cloud results?
Yes. Operators can view the current point cloud collection on the real-time point cloud display and quickly preview the recorded 3D point cloud model. After the operation is completed, you can download and view the 3D point cloud model in the library, and also perform operations such as merging 3D point cloud models from multiple flights.

Operations like model playback and merging need to be performed when the aircraft and Zenmuse L2 are connected.
Is the real-time liveview and playback of the 3D point cloud model a 1:1 match with the model rebuilt in post-processing?
It is not a 1:1 match. Both the liveview and the playback of the 3D point cloud models are processed with sparse representation, so they differ from the model rebuilt in DJI Terra in terms of point count and accuracy.
Post-processing
How to build a high-precision model with data collected by Zenmuse L2?
Launch DJI Terra and create a new "LiDAR Point Cloud" task. Follow the instructions to import the data from the SD card into DJI Terra, complete the related settings, and then initiate high-precision modeling.
In DJI Terra, what result formats can be generated with the data of Zenmuse L2?
Point cloud formats: PNTS, LAS, PLY, PCD, S3MB
Trajectory formats: sbet.out, sbet.txt
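As an example of working with the exported results, the sketch below reads a LAS file produced by DJI Terra and prints basic statistics using laspy, a third-party Python library that is not part of DJI Terra; the file path is a placeholder.

```python
import numpy as np
import laspy  # third-party LAS reader, not part of DJI Terra

# Minimal sketch: inspect a point cloud exported from DJI Terra in LAS format.
# Replace the path with the actual exported file.
las = laspy.read("/path/to/terra_output.las")

x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
print("Point count:", las.header.point_count)
print("X range:", x.min(), "-", x.max())
print("Y range:", y.min(), "-", y.max())
print("Z range:", z.min(), "-", z.max())

# Intensity may carry a reflectivity-derived value, depending on export settings.
intensity = np.asarray(las.intensity)
print("Intensity range:", int(intensity.min()), "-", int(intensity.max()))
```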
What new point cloud processing functions can be achieved by Zenmuse L2 with DJI Terra?
1. Ground point classification;
2. Output digital elevation model (DEM);
3. A new Accuracy Control and Check function that supports local coordinate systems, ensuring the results meet surveying and mapping accuracy requirements;
4. Optimization of the point cloud thickness between the flight strips, making it thinner and more consistent;
5. More comprehensive point cloud quality report.
How to understand the reflectivity value in DJI Terra?
The reflectivity range is from 0 to 255, with 0 to 150 corresponding to 0-100% reflectivity under Lambertian diffuse reflectance, and 151 to 255 corresponding to targets with retro-reflective properties (full reflection).

The reflectivity value is related to multiple factors, such as the surface topography of the object, lighting conditions, and the incident angle, so it does not correspond strictly to absolute reflectance.
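If you need to interpret raw reflectivity values programmatically, a minimal sketch of the mapping described above is shown below; the function name and the handling of the 151-255 band are illustrative choices, not a DJI-defined API.

```python
def reflectivity_to_percent(value: int):
    """Map a raw reflectivity value (0-255) per the mapping described above.

    0-150   -> approximate Lambertian diffuse reflectivity in percent (0-100%).
    151-255 -> returned as None here, since these values indicate
               retro-reflective (fully reflective) targets rather than
               a diffuse-reflectivity percentage.
    """
    if not 0 <= value <= 255:
        raise ValueError("reflectivity must be in the range 0-255")
    if value <= 150:
        return value * 100.0 / 150.0
    return None  # retro-reflective target; no diffuse percentage applies

print(reflectivity_to_percent(75))   # ~50.0 (% diffuse reflectivity)
print(reflectivity_to_percent(200))  # None (retro-reflective target)
```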