Adaptive Interference Waveform Design Based on Reinforcement Learning

CHEN Tao1,2, ZHANG Ying1,2, HUANG Xiangsong1,2

1. College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, Heilongjiang, China; 2. Key Lab on Multi-Disciplines Cooperation Cognition Artificial Intelligence Technologies and Applications of Heilongjiang Province, Harbin 150001, Heilongjiang, China
Abstract To address the problem that traditional jamming algorithms cannot adapt to changes in the signal environment, the Q-learning algorithm is combined with a "cutting" hypothesis method in the design of the jamming waveform, so that the waveform adapts to changes in radar signal length. The algorithm mainly targets jamming of the radar detection link. A constant false alarm rate (CFAR) detector is adopted as the environment interaction model, and the sampling time and forwarding time of the intermittent-sampling signal are adjusted adaptively through reinforcement learning. On this basis, radar signals of unknown length are processed by "cutting" to achieve optimal jamming. Finally, simulations of jamming an unknown radar signal are carried out. The results show that, under an uncertain signal model, the reinforcement learning algorithm makes full use of historical data when making decisions and achieves a better jamming effect than traditional algorithms.
Received: 26 November 2020
Published: 21 June 2021
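The abstract describes a tabular Q-learning loop in which the agent tunes the sampling and forwarding durations of an intermittent-sampling jammer and receives its reward from a CFAR detection model. The following is a minimal sketch of such an update rule; the state/action encoding, the duration-tweak actions, and the hyperparameter values are illustrative assumptions, not the paper's actual implementation, and the CFAR-based reward would be supplied by an external simulation.

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (not taken from the paper).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Each action nudges the jammer's sampling and forwarding durations
# up, down, or not at all.
actions = [(ds, df) for ds in (-1, 0, 1) for df in (-1, 0, 1)]

# Q-table over (state, action) pairs; unseen entries default to 0.
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy action selection over the duration tweaks."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step: Q <- Q + alpha * (r + gamma * max_a' Q(s', a') - Q)."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In the setting the abstract outlines, `reward` would reflect whether the CFAR detector was successfully deceived at the current sampling/forwarding configuration, so the table gradually concentrates value on the duration pair that best matches the (unknown) radar signal length.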
|
|
|
|