This study proposes a computational impact angle control guidance law for a stationary target that respects the seeker's field-of-view (FOV) limit, based on deep reinforcement learning (RL). The proposed guidance law generates the acceleration command as the sum of a baseline command and a bias command, where the bias command is learned through a sequence of learning stages. Each stage trains an RL agent that addresses either the impact angle constraint or the FOV limit constraint individually. This approach is favorable in that each succeeding training stage tends to preserve the functionality of the guidance law attained in the previous stage. In addition, the proposed method extends readily to missile models with additional elements, such as rotational dynamics, without modifying the algorithm. The proximal policy optimization algorithm is used to train the RL agents. Numerical simulations are carried out under various conditions to analyze the performance and the effect of a design parameter on it. The learning strategy proposed in this study provides a way to apply a data-driven method to developing a guidance law under multiple design objectives and more realistic missile models.
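The command structure described in the abstract, a baseline guidance command plus a learned bias term, can be sketched as follows. This is an illustrative assumption only: the proportional-navigation baseline, the function names, and the placeholder policy are not taken from the paper itself.

```python
def baseline_pn_command(closing_speed, los_rate, nav_gain=3.0):
    """Baseline acceleration command; proportional navigation is assumed
    here purely for illustration of the 'baseline + bias' structure."""
    return nav_gain * closing_speed * los_rate


def guidance_command(closing_speed, los_rate, bias_policy, state):
    """Total command = baseline command + learned bias command.

    `bias_policy` stands in for the trained RL agent: any callable
    mapping the engagement state to a bias acceleration."""
    a_baseline = baseline_pn_command(closing_speed, los_rate)
    a_bias = bias_policy(state)
    return a_baseline + a_bias


# A zero-bias "policy" recovers the pure baseline law, which mirrors the
# idea that later training stages start from the earlier stage's behavior.
zero_bias = lambda state: 0.0
cmd = guidance_command(closing_speed=300.0, los_rate=0.02,
                       bias_policy=zero_bias, state=None)
```

In the paper the bias term is produced by agents trained stage by stage with proximal policy optimization; the sketch only shows how such a bias would compose with a fixed baseline.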
Impact Angle Control Guidance Considering Seeker’s Field-of-View Limit Based on Reinforcement Learning
Journal of Guidance, Control, and Dynamics; Vol. 46, No. 11; pp. 2168-2182
2023-06-22
15 pages
Article (Journal)
Electronic Resource
English
A Two-Phased Guidance Law for Impact Angle Control with Seeker’s Field-of-View Limit
DOAJ | 2018
Guidance law switching logic considering the seeker's field-of-view limits
Online Contents | 2009; SAGE Publications | 2009
Time-Varying Biased Proportional Guidance with Seeker's Field-of-View Limit
DOAJ | 2016