RL Airfoil Geometry Design

Optimizing aerodynamic efficiency using Reinforcement Learning and XFOIL.


Project Summary

Overview: Aerodynamic shape optimization typically relies on computationally expensive CFD simulations or gradient-based methods that can get stuck in local optima. This project explores a Reinforcement Learning (RL) approach to autonomously modify airfoil geometry to maximize the lift-to-drag ratio (Cl/Cd). By coupling a Python-based RL agent with the XFOIL solver, the system learns optimal shape modifications through trial-and-error interaction with the flow environment.

Methodology: The environment is built around the NACA 0012 airfoil as a baseline. The RL agent observes the current state (lift coefficient, drag coefficient, and the current geometry parameters) and takes discrete actions that modify the shape. The reward signal is tied directly to the improvement in the lift-to-drag ratio, so each step rewards incremental aerodynamic gains. The project implements a Q-Learning algorithm to learn a policy for shape deformation.
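The state-action-reward loop described above can be sketched with tabular Q-Learning. This is a minimal, self-contained illustration: the state is a single discretized camber parameter, actions nudge it up or down, and the reward is the change in a toy lift-to-drag surrogate that stands in for an XFOIL evaluation. All names, constants, and the surrogate function are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

N_STATES = 21          # discretized camber levels, 0.0 .. 0.10
ACTIONS = [-1, 0, +1]  # decrease / keep / increase the camber index

def lift_to_drag(state):
    """Toy surrogate for XFOIL's Cl/Cd (illustrative constants, not XFOIL data)."""
    camber = state / (N_STATES - 1) * 0.10
    return 10.92 - 200.0 * (camber - 0.04) ** 2

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Learn a Q-table with epsilon-greedy exploration; reward = L/D improvement."""
    rng = np.random.default_rng(seed)
    q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(episodes):
        s = int(rng.integers(N_STATES))          # random initial geometry
        for _ in range(20):                      # shape-modification steps
            a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(q[s]))
            s2 = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
            r = lift_to_drag(s2) - lift_to_drag(s)        # reward = L/D gain
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
            s = s2
    return q

def greedy_rollout(q, s=0, steps=30):
    """Follow the learned policy from the baseline (zero-camber) state."""
    for _ in range(steps):
        s = int(np.clip(s + ACTIONS[int(np.argmax(q[s]))], 0, N_STATES - 1))
    return s

if __name__ == "__main__":
    q = train()
    s = greedy_rollout(q)
    print("final state:", s, "L/D:", round(lift_to_drag(s), 2))
```

In the real pipeline the surrogate call is replaced by an XFOIL run, which makes each reward evaluation far more expensive but follows the same update rule.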

Key Outcomes:

  • Automated Optimization Pipeline: Successfully integrated Python with the XFOIL executable to automate aerodynamic analysis steps (paneling, solving for alpha, parsing polar files) within the RL training loop.
  • Performance Gains: The agent learned policies that modified the baseline NACA 0012 profile, raising the lift-to-drag ratio from a baseline of ~10.6 to over 10.9 within just a few optimization steps.
  • Robust Handling of Solver Failures: Implemented error handling for non-convergence cases in XFOIL, ensuring the RL training process remains stable even when generated geometries are physically infeasible.
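The automation and failure handling described in the bullets above can be sketched as a thin wrapper that pipes a command script to the XFOIL executable and parses the resulting polar file. The command sequence and 12-line polar header skip reflect common XFOIL usage, but the exact script, paths, and parameters here are assumptions about the setup, not the project's code.

```python
import os
import subprocess
import tempfile

def evaluate_airfoil(alpha=5.0, reynolds=1e6, xfoil_path="xfoil", timeout=20):
    """Run one XFOIL analysis and return (Cl, Cd), or None on any failure."""
    polar = tempfile.mktemp(suffix=".pol")   # PACC needs a non-existent file
    commands = "\n".join([
        "NACA 0012",          # baseline geometry (LOAD <file> for modified shapes)
        "PANE",               # re-panel for a smoother point distribution
        "OPER",
        f"VISC {reynolds:g}", # viscous analysis at the given Reynolds number
        "ITER 200",           # cap the viscous iteration count
        "PACC",               # start polar accumulation
        polar, "",            # polar save file, no dump file
        f"ALFA {alpha}",
        "", "QUIT", "",
    ])
    try:
        subprocess.run([xfoil_path], input=commands, text=True,
                       capture_output=True, timeout=timeout)
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return None                    # hung solver or missing executable
    if not os.path.exists(polar):
        return None                    # XFOIL never wrote a polar
    with open(polar) as f:
        lines = f.readlines()[12:]     # skip the polar header block
    os.remove(polar)
    for line in lines:
        fields = line.split()
        if len(fields) >= 3 and abs(float(fields[0]) - alpha) < 1e-3:
            return float(fields[1]), float(fields[2])   # (Cl, Cd)
    return None                        # no converged point at this alpha
```

Returning None for every failure mode (timeout, missing binary, empty polar, non-converged point) lets the training loop assign a penalty reward and continue, which is what keeps training stable when the agent proposes infeasible geometries.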

Note: A separate investigation into Physics-Informed Neural Networks (PINNs) for flow field prediction was also conducted to potentially accelerate the evaluation step in future iterations.


View Code on GitHub