This technology is a method of compensating for both look-angle-dependent radome refraction and rate gyro scale factor errors using deep learning, significantly improving the accuracy and effectiveness of a guidance and control system.

Background: Look-angle-dependent radome refraction, which can be modeled as a time-varying scale factor error applied to the line-of-sight direction vector, can create a false indication of target motion, potentially destabilizing a guidance and control system. Similarly, rate gyro scale factor errors create a time-varying bias in the integrated rotational velocity, which is used to estimate the change in vehicle attitude over the trajectory. When this biased attitude change is used to stabilize a seeker platform, computationally in the case of strap-down seekers or mechanically in the case of gimballed seekers, stabilization is imperfect, again producing a false indication of target motion. Prior art typically compensates for these errors by estimating the scale factors with a bank of Kalman filters built on adaptive models. Because these models adapt only parameters in the ordinary differential equations describing the equations of motion, performance suffers when the structure of the model differs from the actual deployment environment. In contrast, this approach learns a more expressive representation of the dynamics and adapts quickly to the actual deployment environment. The result is a novel method for adaptively correcting sensor scale factor errors: deep learning techniques predict both the rotational velocity and line-of-sight scale factor errors, and the predictions are used to compensate for those errors, improving the performance of a guidance and control system.

Applications:
Advantages:
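
To make the compensation step concrete, the following is a minimal sketch of how predicted scale-factor errors could be removed from raw measurements. The measurement models, function names, and numeric values are illustrative assumptions, not taken from the source; in the described technology the predicted scale factors would come from a learned deep-learning model rather than being supplied directly.

```python
# Illustrative sketch (assumptions, not the source's implementation):
# a measurement is modeled as (1 + k) * true_value, where k is a
# scale-factor error. Given a prediction k_hat of that error, the
# corrected value is measurement / (1 + k_hat).

def compensate_gyro(omega_meas, k_gyro_pred):
    """Remove a predicted rate gyro scale-factor error.
    Assumed model: omega_meas = (1 + k_gyro) * omega_true.
    """
    return omega_meas / (1.0 + k_gyro_pred)

def compensate_los(los_meas, k_radome_pred):
    """Remove a predicted look-angle-dependent radome scale factor.
    Assumed model: los_meas = (1 + k_radome(look_angle)) * los_true.
    """
    return los_meas / (1.0 + k_radome_pred)

# Synthetic example: a 2% gyro scale-factor error on a 0.10 rad/s rate.
omega_true = 0.10
omega_meas = (1.0 + 0.02) * omega_true

# If the learned predictor estimates the scale factor accurately,
# the corrected rate recovers the true rate.
omega_corr = compensate_gyro(omega_meas, k_gyro_pred=0.02)
print(abs(omega_corr - omega_true))
```

If the predicted scale factor is imperfect, the residual error is proportional to the prediction error, which is why the accuracy of the learned predictor directly drives stabilization quality.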