Abstract: When performing flight perception and decision-making tasks such as target detection and recognition and online route planning, intelligent aircraft encounter scenarios that directly affect flight safety, including false alarms, missed target detections, and obstacle-avoidance failures. Moreover, because of the combinatorial explosion of the state space of data-driven intelligent models and the black-box nature of their computational logic, it is difficult to discover and identify the cognitive-deception scenarios to which they are vulnerable. In this study, a spoofing-attack method was applied to generate targeted micro-disturbances in the system input, creating scenarios that pose risks and challenges to intelligent aircraft. The intelligent aircraft system was then repeatedly exercised against these disturbances to probe its operational limits, thereby identifying safety-critical boundary scenarios. This method revealed potential vulnerabilities that standard testing methods may fail to detect. In addition, deception testing of intelligent aircraft across different risk scenarios helps ensure their safety and performance in the most challenging situations. Generating such complex scenarios is crucial for enhancing the robustness of autonomous flight systems and preparing them for a broader range of real-world challenges.
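
To make the mechanism concrete, the sketch below illustrates one common way such targeted micro-disturbances can be produced: a single FGSM-style gradient step against a differentiable perception model. This is a minimal sketch under stated assumptions, not the study's actual procedure; the model, loss function, and step size are hypothetical placeholders.

```python
# Minimal sketch (not the paper's exact method): an FGSM-style targeted
# micro-perturbation applied to a detector input. `model` and `loss_fn`
# are hypothetical placeholders for a differentiable PyTorch perception
# model and a task loss.
import torch

def targeted_micro_perturbation(model, x, target, loss_fn, epsilon=2 / 255):
    """Return x plus a small perturbation that nudges the model toward `target`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), target)  # loss measured against the attacker's target
    loss.backward()
    with torch.no_grad():
        # Step against the gradient to move the prediction toward `target`,
        # keeping the disturbance small ("micro") and the pixels in range.
        x_adv = x_adv - epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

In a scenario-generation loop, the perturbed inputs would be fed back to the aircraft's perception stack and the resulting detections or planned routes checked against safety criteria, flagging any input on which the system's behavior crosses a safety-critical boundary.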