From 9b2debb7f035fe16879e3e8ff42ef920f1d840d9 Mon Sep 17 00:00:00 2001
From: Peter Sharpe
Date: Mon, 11 Mar 2024 20:24:21 -0400
Subject: [PATCH] sync ASB version; update README
---
 README.md | 28 +++++++++++++++++-----------
 setup.py  |  2 +-
 2 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index 2c82ef2..0255963 100644
--- a/README.md
+++ b/README.md

by [Peter Sharpe](https://peterdsharpe.github.io) ()

NeuralFoil is a tool for rapid aerodynamics analysis of airfoils, similar to [XFoil](https://web.mit.edu/drela/Public/web/xfoil/). Under the hood, NeuralFoil consists of physics-informed neural networks trained on [tens of millions of XFoil runs](#geometry-parameterization-and-training-data).

NeuralFoil is available here as a pure Python+NumPy standalone, but it is also [available within AeroSandbox](#extended-features-transonics-post-stall-control-surface-deflections), which extends it with many more advanced features. Using the AeroSandbox extension, NeuralFoil can give you **viscous, compressible airfoil aerodynamics for (nearly) any airfoil, with control surface deflections, across $360^\circ$ angle of attack, at any Reynolds number, all nearly instantly** (~5 milliseconds). And, it's guaranteed to return an answer (no non-convergence issues), it's vectorized, and it's $C^\infty$-continuous (critical for gradient-based optimization).

For aerodynamics experts: NeuralFoil also gives you fine-grained boundary layer control ($N_{\rm crit}$, forced trips) and inspection outputs ($\theta$, $H$, $u_e/V_\infty$, and pressure distributions).

A unique feature of NeuralFoil is that every analysis also returns an `"analysis_confidence"` output, which is a measure of uncertainty. This is especially useful for design optimization, where constraining this uncertainty parameter helps ensure your designs are [robust to small changes in shape and flow conditions](https://web.mit.edu/drela/OldFiles/Public/papers/Pros_Cons_Airfoil_Optimization.pdf).

NeuralFoil is [~10x faster than XFoil for a single analysis, and ~1000x faster for multipoint analysis](#table), all with [minimal loss in accuracy compared to XFoil](#performance). Due to the wide variety of training data and the embedding of several physics-based invariants, [this accuracy holds even on out-of-sample airfoils](#performance) (i.e., airfoils it wasn't trained on). It also has [many nice features](#xfoil-benefit-question) (e.g., smoothness, vectorization, all in Python+NumPy) that make it much easier to use.

aero = nf.get_aero_from_airfoil( # You can use AeroSandbox airfoils as an entry

## Performance

Qualitatively, NeuralFoil tracks XFoil very closely across a wide range of $\alpha$ and $Re$ values. In the figure below, we compare the performance of NeuralFoil to XFoil on $C_L, C_D$ polar prediction. Notably, the airfoil analyzed here was developed "from scratch" for a [real-world aircraft development program](https://www.prnewswire.com/news-releases/electra-flies-solar-electric-hybrid-research-aircraft-301633713.html) and is completely separate from [the airfoils used during NeuralFoil's training](#geometry-parameterization-and-training-data), so NeuralFoil isn't cheating by "memorizing" this airfoil's performance. Each color in the figure below represents analyses at a different Reynolds number.

*(Figure: $C_L$/$C_D$ polars, NeuralFoil compared to XFoil, for the out-of-sample airfoil described above, at several Reynolds numbers.)*
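A polar like this is produced by sweeping angle of attack. Because NeuralFoil is vectorized (per the claims above), a whole sweep can be analyzed in one array-valued call rather than a Python loop — this batching is where the "~1000x faster for multipoint analysis" figure comes from. The sketch below mocks the aerodynamics function with a toy thin-airfoil model purely to show the vectorized call pattern; it is *not* NeuralFoil's actual model or API.

```python
import numpy as np

def mock_aero(alpha_deg: np.ndarray) -> dict:
    """Toy stand-in for a vectorized NeuralFoil-style analysis.

    Returns a dict of arrays, one entry per angle of attack (thin-airfoil-ish
    CL, made-up parabolic drag polar). NOT NeuralFoil's actual model or API.
    """
    alpha = np.radians(alpha_deg)
    CL = 2 * np.pi * alpha
    CD = 0.01 + 0.02 * CL**2
    return {"CL": CL, "CD": CD}

# One vectorized call analyzes the whole polar at once, instead of 41 separate runs.
alphas = np.linspace(-5, 15, 41)
aero = mock_aero(alphas)
assert aero["CL"].shape == alphas.shape  # outputs are arrays matching the input sweep
```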

NeuralFoil is typically accurate to within a few percent of XFoil's predictions. Note that this figure shows a truly out-of-sample airfoil, so airfoils closer to the training set will see even more accurate results.

NeuralFoil also [has the benefit of smoothing out XFoil's "jagged" predictions](#xfoil-benefit-question) (for example, near $C_L=1.4$ at $Re=\mathrm{80k}$) in cases where XFoil does not reliably converge, which would otherwise make optimization difficult.

On that note, NeuralFoil will also give you an `"analysis_confidence"` output, which is a measure of uncertainty. Below, we show the same figure as before, but with the NeuralFoil results colored by analysis confidence. This illustrates how regions with delicate or uncertain aerodynamic behavior are flagged.

*(Figure: the same $C_L$/$C_D$ polars, with NeuralFoil results colored by `"analysis_confidence"`.)*
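To make this concrete, here is a minimal sketch of how one might use the `"analysis_confidence"` output to screen candidate operating points during optimization. The numbers below are made-up illustrative values, not real NeuralFoil output; only the dictionary keys (`"CL"`, `"CD"`, `"analysis_confidence"`) follow the output format described in this README.

```python
# Made-up stand-ins for NeuralFoil results at several candidate operating points.
# In practice, each dict would come from a NeuralFoil analysis call.
results = [
    {"CL": 1.20, "CD": 0.015, "analysis_confidence": 0.98},
    {"CL": 1.45, "CD": 0.019, "analysis_confidence": 0.55},  # delicate region, e.g. near stall
    {"CL": 0.80, "CD": 0.011, "analysis_confidence": 0.99},
]

# Treat low-confidence analyses as untrustworthy: constrain or discard them,
# so the optimizer can't exploit regions where the aerodynamics are uncertain.
CONFIDENCE_FLOOR = 0.90
trusted = [r for r in results if r["analysis_confidence"] >= CONFIDENCE_FLOOR]

# Optimize only over trusted points, e.g., maximize L/D.
best = max(trusted, key=lambda r: r["CL"] / r["CD"])
print(best["CL"], best["CD"])  # → 1.2 0.015
```

In a real optimization loop, the same idea is usually expressed as a constraint (`analysis_confidence >= floor`) rather than a filter.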

In the table below, we quantify the performance of the NeuralFoil ("NF") models with respect to XFoil more precisely. At a basic level, we care about two things: how accurate each model is, and how fast it runs.

This table details both of these considerations. The first few columns show the error with respect to XFoil on the test dataset. [The test dataset is completely isolated from the training dataset, and NeuralFoil was not allowed to learn from it](#geometry-parameterization-and-training-data). Thus, performance on the test dataset gives a good idea of NeuralFoil's performance "in the wild". The second set of columns gives the runtime of each model, both for a single analysis and for a large batch analysis.
*Columns 2–7: Mean Absolute Error (MAE) of the given metric, on the test dataset, with respect to XFoil. Last two columns: computational cost to run.*

| Aerodynamics Model | Lift Coeff.<br>$C_L$ | Fractional Drag Coeff.<br>$\ln(C_D)$ † | Moment Coeff.<br>$C_M$ | Max Overspeed<br>$u_\max / u_\infty$ ‡ | Top Transition Loc.<br>$x_{tr, top}/c$ | Bottom Trans. Loc.<br>$x_{tr, bot}/c$ | Runtime<br>(1 run) | Total Runtime<br>(100,000 runs) |
|---|---|---|---|---|---|---|---|---|
| NF Linear $C_L$ Model | 0.116 | - | - | - | - | - | 1 ms | 0.020 sec |
| NF "xxsmall" | 0.065 | 0.121 | 0.010 | 0.215 | 0.073 | 0.100 | 3 ms | 0.190 sec |
| NF "xsmall" | 0.042 | 0.075 | 0.007 | 0.134 | 0.039 | 0.055 | 4 ms | 0.284 sec |
| NF "small" | 0.039 | 0.069 | 0.006 | 0.122 | 0.036 | 0.050 | 4 ms | 0.402 sec |
| NF "medium" | 0.027 | 0.051 | 0.004 | 0.088 | 0.022 | 0.033 | 5 ms | 0.784 sec |
| NF "large" | 0.024 | 0.045 | 0.004 | 0.079 | 0.020 | 0.029 | 6 ms | 1.754 sec |
| NF "xlarge" | 0.023 | 0.043 | 0.004 | 0.076 | 0.019 | 0.028 | 10 ms | 3.330 sec |
| NF "xxlarge" | 0.021 | 0.040 | 0.003 | 0.071 | 0.018 | 0.025 | 13 ms | 4.297 sec |
| NF "xxxlarge" | 0.020 | 0.039 | 0.003 | 0.070 | 0.016 | 0.024 | 38 ms | 8.980 sec |
| XFoil | 0 | 0 | 0 | 0 | 0 | 0 | 73 ms | 42 min |
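As a quick sanity check of the † interpretation used in the footnote below: an absolute error $\varepsilon$ in $\ln(C_D)$ corresponds to a multiplicative factor of $e^{\varepsilon}$ on $C_D$, so an MAE of 0.020 in $\ln(C_D)$ is a typical relative drag error of $e^{0.020} - 1 \approx 2.0\%$. A small numerical check (illustrative only; the 0.020 value is taken from the table above):

```python
import math

# An absolute error eps in ln(C_D) means predicted C_D = true C_D * exp(±eps).
# For small eps, exp(eps) - 1 ≈ eps, so "MAE of ln(C_D)" ≈ typical fractional C_D error.
mae_ln_cd = 0.020  # e.g., the NF "xxxlarge" row in the table above

relative_error = math.exp(mae_ln_cd) - 1
print(f"{relative_error:.1%}")  # → 2.0%
```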
> † The deviation of $\ln(C_D)$ can be thought of as "the typical relative error in $C_D$". For example, if the mean absolute error ("MAE", or $L^1$ norm) of $\ln(C_D)$ is 0.020, you can think of it as "typically, drag is accurate to within 2.0% of XFoil." Note that this doesn't necessarily mean that NeuralFoil is *less* accurate than XFoil - although XFoil is quite accurate, it is clearly not a perfect "ground truth" in all cases (see $Re=\mathrm{90k}$ in the [figure above](#clcd-polar)). So, NeuralFoil's true accuracy compared to experiment may differ (in either direction) from the numbers in this table.
>
> ‡ This "maximum overspeed" lets you compute $C_{p,\min}$, which can be used to calculate the critical Mach number $M_\mathrm{crit}$. [More details below.](#extended-features-transonics-post-stall-control-surface-deflections)

Based on these performance numbers, you can select the right tradeoff between accuracy and computational cost for your application. In general, I recommend starting with the ["large"](#overview) model and adjusting from there.

In addition to accuracy vs. speed, another consideration when choosing the right model is what you're trying to use NeuralFoil for.
Larger models will be more complicated ("less parsimonious," as the math kids would say), which means that they may have more "wiggles" in their outputs as they track XFoil's physics more closely. This might be undesirable for gradient-based optimization. On the other hand, larger models will be able to capture a wider range of airfoils (e.g., nonsensical, weirdly-shaped airfoils that might be seen mid-optimization), so larger models could have a benefit in that sense. If you try a specific application and have better/worse results with a specific model, let me know by opening a GitHub issue!

## Extended Features (transonics, post-stall, control surface deflections)

diff --git a/setup.py b/setup.py
index 0bf5821..9cb2929 100644
--- a/setup.py
+++ b/setup.py
@@ -55,7 +55,7 @@ def get_version(rel_path):
     python_requires='>=3.7',
     install_requires=[
         'numpy >= 1',
-        'aerosandbox >= 4.1.0'
+        'aerosandbox >= 4.2.3'
     ],
     extras_require={
         "training": [