
CFD Blog Feeds

Another Fine Mesh

► Farewell, Another Fine Mesh. Hello, Cadence CFD Blog.
    5 May, 2023

They say the only constant in life is change and that’s as true for blogs as anything else. After almost a dozen years blogging here as Another Fine Mesh, it’s time to move to a new home, the … Continue reading

The post Farewell, Another Fine Mesh. Hello, Cadence CFD Blog. first appeared on Another Fine Mesh.

► This Week in CFD
  31 Mar, 2023

Welcome to the 500th edition of This Week in CFD on the Another Fine Mesh blog. Over 12 years ago we decided to start blogging to connect with CFDers across the interwebs. “Out-teach the competition” was the mantra. Almost immediately … Continue reading

The post This Week in CFD first appeared on Another Fine Mesh.

► Create Better Designs Faster with Data Analysis for CFD – A Webinar on March 28th
  23 Mar, 2023

Automated design optimization is a key technology in the pursuit of more efficient engineering design. It supports the design engineer in finding better designs faster. A computerized approach that systematically searches the design space and provides feedback on many more … Continue reading

The post Create Better Designs Faster with Data Analysis for CFD – A Webinar on March 28th first appeared on Another Fine Mesh.

► This Week in CFD
    3 Mar, 2023

It’s nice to see a healthy set of events in the CFD news this week and I’d be remiss if I didn’t encourage you to register for CadenceCONNECT CFD on 19 April. And I don’t even mention the International Meshing … Continue reading

The post This Week in CFD first appeared on Another Fine Mesh.

► This Week in CFD
  24 Feb, 2023

Some very cool applications of CFD (like the one shown here) dominate this week’s CFD news including asteroid impacts, fish, and a mesh of a mesh. For those of you with access, NAFEM’s article 100 Years of CFD is worth … Continue reading

The post This Week in CFD first appeared on Another Fine Mesh.

► This Week in CFD
  17 Feb, 2023

This week’s aggregation of CFD bookmarks from around the internet clearly exhibits the quote attributed to Mark Twain, “I didn’t have time to write a short letter, so I wrote a long one instead.” Which makes no sense in this … Continue reading

The post This Week in CFD first appeared on Another Fine Mesh.

F*** Yeah Fluid Dynamics

► Stopping a Bottle’s Bounce
  28 Sep, 2023

A few years ago, the Internet was abuzz with water bottle flips. Experimentalists are still looking at how they can arrest a partially fluid-filled container’s bounce, but now they’re rotating the bottles vertically rather than flipping them end-over-end. Their work shows that faster rotating bottles have little to no bounce after impacting a surface.

This image sequence shows how water in a rotating bottle moves during its fall (top row) and after impact (bottom row). Water climbs the walls during the fall, creating a shell of fluid that, after impact, forms a central jet that arrests the bottle’s momentum.

The reason for this is visible in the image sequence above, which shows a falling bottle (top row) and the aftermath of its impact (bottom row). When the bottle rotates and falls, water climbs up the sides of the bottle, forming a shell. On impact, the water collapses, forming a central jet that shoots up the middle of the bottle, expending momentum that would otherwise go into a bounce. It’s a bit like the water is stomping the landing.

The authors hope their observations will be useful in fluid transport, but they also note that this bit of physics is easily recreated at home with a partially-filled water bottle. (Image and research credit: K. Andrade et al.; via APS Physics)

► Mitigating Urban Floods
  27 Sep, 2023

For densely-populated urban areas, floods are one of the most damaging and expensive natural disasters. We can’t control the amount of rain that falls, so engineers need other ways to mitigate damage. It’s not usually possible to remove people and property from floodplains, so instead civil engineers look below the surface, building flood tunnel networks to alleviate floodwaters. In this Practical Engineering video, Grady demonstrates how these systems work and what some of their challenges are. (Video and image credit: Practical Engineering)

► Weathering Spilled Oil
  26 Sep, 2023

As long as we continue to extract and transport oil, marine oil spills will continue to be a problem. Recent work shows that spilled oil weathers differently depending on both sunlight and water temperature. When exposed to sunlight, crude oil undergoes chemical reactions that can change its makeup. Researchers studied the mechanical properties of crude oil samples kept at different temperatures in both sunlight and the dark.

They discovered that sunlight-exposed crude oil kept at a high temperature had twice the viscosity of a sample kept in the dark at the same temperature. In contrast, the high-temperature sunlit sample’s viscosity was 8 times lower than a sunlit sample kept at a lower temperature. That’s quite a large difference, and it implies that tropical oil spills may behave quite differently than Arctic ones. Cold-water spills will entrain and dissolve less than warm-water ones, so there may be more surface oil to collect at high-latitude spills. The differences in viscosity may also necessitate different spill mitigation techniques. (Image credit: NOAA; research credit: D. Freeman et al.; via APS Physics)

► Rolling Over Wisconsin
  25 Sep, 2023

Although they may look sinister, roll clouds like this one are no tornado. These unusual clouds form near advancing cold fronts when downdrafts cause warm, moist air to rise, cool below the dew point, and condense into a cloud. Air in the cloud can circulate around its long horizontal axis, but the clouds won’t transform into a tornado. Roll clouds are also known as Morning Glory clouds because they often form early in the day along the Queensland coast, where springtime breezes off the water promote their growth. The clouds do form elsewhere, though; this example is from Wisconsin in 2007. (Image credit: M. Hanrahan; via APOD)

► Diving From Above
  22 Sep, 2023

Blue-footed boobies, like many other seabirds, climb to a particular altitude before folding their wings and diving head-first into the water. This acrobatic feat balances the bird’s force of impact and the depth it can reach to ensnare fish swimming there. It’s an incredible process to watch, a fascinating one to study, and, here, a beautiful glimpse of the natural world from a perspective we don’t typically see. (Image credit: H. Spiers, Bird POTY; via Colossal)

► Butterfly Scales
  21 Sep, 2023

Catch a butterfly, and you’ll notice a dust-like residue left behind on your fingers. These are tiny scales from the butterfly’s wing. Under a microscope, those scales overlap like shingles all over the wing. Their downstream edges tilt upward, leaving narrow gaps between one scale and the next. Experiments show that, although butterflies can fly without their scales, these tiny features make a big difference in their efficiency.

At the microscale, a butterfly’s scales overlap like roof shingles but are tilted upward, leaving cavities in the downstream direction.

When air flows over the scales, tiny vortices form in the gaps between. These laminar vortices act like roller bearings, helping the flow overhead move along with less friction and, thus, less drag. Compared to a smooth surface, the scales reduce skin friction on the wing by 26-45%. (Image credit: butterfly – E. Minuskin, scales – N. Slegers et al., experiment – S. Gautam; research credit: N. Slegers et al. and S. Gautam; via Physics Today)

This lab-scale experiment shows how air moves over butterfly scales. As flow moves from left to right, small persistent vortices form in the gaps between scales. These act like roller bearings that reduce the skin friction from air moving past.

CFD Online

► So gone and lost not even God can find me
  19 May, 2023
I started a CFD project and lost my way. I keep going around in circles like the Blair Witch Project. Nothing I have tried has worked.

I did some simple simulations and got good results -- numbers agree with empirical results.

So I decided to try to apply CFD to an inline gas filter. The filter consists of a housing and a pleated fabric filter. The fabric has around 50 pleats with a porosity of 0.4. I simulated the porosity by cutting slots in the CAD-model pleats amounting to about 40% of the surface area.

Initially, I started out with USCS units without changing the OpenFOAM unit system and got results within expectations.
Unfortunately, ParaView only appends metric units to the charts, so I decided to change to metric units.

By the way, I am using the snappyHexMesh GUI for Blender and STL files generated by Fusion 360. I am trying to use icoFoam. I measure U using a ParaView line plot across a suitable section of the filter, and I use U to calculate the flow rate in SCFH.

I fix the pressure across the filter and want to calculate U. The solution I now have is not affected by the pressure values, has huge Courant numbers, and has divergent residuals. Because of the pleated fabric filter, there are a lot of small cells in the model and my delta t is very, very small.

So far I can't seem to get a handle on the problem.
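For readers hitting the same wall: the Courant number is Co = U·Δt/Δx per cell, so the smallest cells in the slotted pleats dictate the stable time step. A quick sketch of the scaling (all values below are hypothetical, chosen only for illustration):

```python
# Cell Courant number Co = U * dt / dx. The smallest cells dominate the
# stability limit; values here are hypothetical, for illustration only.
def courant(u, dt, dx):
    """Courant number for velocity u, time step dt, cell size dx."""
    return u * dt / dx

# A 1 mm bulk cell vs. a 20 micron cell inside a filter slot, both seeing
# 1 m/s flow with a 1e-4 s time step:
print(courant(1.0, 1e-4, 1e-3))   # bulk cell: Co = 0.1
print(courant(1.0, 1e-4, 20e-6))  # slot cell: Co = 5.0, far above the limit
```

With explicit-style solvers such as icoFoam, the time step has to shrink until the worst cell satisfies Co < 1, which is why a few tiny slot cells can make Δt "very, very small".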
► Choice of Turbulence Model
    3 May, 2023
Turbulence is a complex phenomenon that occurs in most engineering applications involving fluid dynamics. It is characterized by the irregular and chaotic motion of fluid particles, which can cause significant fluctuations in velocity and pressure. To date, no single, practical turbulence model can reliably predict all turbulent flows with sufficient accuracy. Instead, many turbulence models have been developed, each representing a different compromise between solution accuracy and computational cost. Going from DNS to RANS models, passing through DES, LES, and many others, the computational cost decreases significantly at the price of more and more flow averaging, which in some cases may lead to the loss of relatively important flow features.

In the following discussion, we will mainly focus on the use of RANS models, since they are the most widely used approach for calculating industrial flows and can be found in most commercial CFD software (notably STAR-CCM+ and Fluent) and non-commercial CFD software (like OpenFOAM).

RANS stands for Reynolds-averaged Navier-Stokes equations. The main advantage of the method is its capacity to simulate complex geometries at a relatively low computational cost, made possible by the small number of degrees of freedom resulting from flow averaging. The three most popular turbulence models using the RANS approach are, to my knowledge:
  • k-epsilon model
  • k-omega model
  • Spalart-Allmaras model

k-epsilon: The k-epsilon model is a two-equation model that solves for the turbulent kinetic energy and the dissipation rate. It is the most widely used engineering turbulence model for industrial applications. It is robust, reasonably accurate, and contains submodels for compressibility, buoyancy, combustion, and many others. Its main limitations are that the epsilon equation contains a term which cannot be calculated at the wall (therefore, wall functions must be used), and that it generally performs poorly for flows with strong separation, large streamline curvature, and large pressure gradients. k-epsilon models are best suited to applications that contain complex recirculation, with or without heat transfer.

k-omega: The k-omega model is similar to the k-epsilon model in that two transport equations are solved, but it differs in the choice of the second transported turbulence variable: it solves for the specific dissipation rate in addition to the turbulent kinetic energy. The added value of this substitution is that the specific dissipation rate can be integrated at the wall, so there is no need to use wall functions. It is accurate and robust for a wide range of boundary layer flows with pressure gradients and is thus well suited for aerospace and turbomachinery applications.
An interesting variation of the standard k-omega model is the k-omega SST, where SST stands for Shear Stress Transport. The k-omega SST contains a blending function to gradually transition from the standard k-omega model near the wall to a high-Reynolds-number version of the k-epsilon model in the outer portion of the boundary layer. In other words, it uses the standard k-omega formulation in the inner parts of the boundary layer and switches to k-epsilon behaviour in the free stream. This ensures that the appropriate model is used throughout the flow field. Although this model comes with many advantages, its main disadvantage is that it is harder to converge than the standard models, and thus is more numerically expensive.

Spalart-Allmaras: The Spalart-Allmaras model is relatively new compared to the first two models discussed. It mainly differs by being a single-equation model that solves for a modified eddy viscosity. It is thus also relatively less expensive, especially since the transport of the modified eddy viscosity is easy to resolve near the wall. It is best suited for aerospace and turbomachinery applications where boundary layers are largely attached and any separation is mild; flows over airfoils and boundary-layer flows are typical examples. The Spalart-Allmaras model is gaining in popularity, but it faces some limitations since it is not suited for flows with complex recirculation. It also usually over-predicts the boundary layer thickness, which mainly degrades the accuracy of the heat transfer solution.

For more curious readers, I would suggest the following book: Rodriguez, Sal. (2019). Applied Computational Fluid Dynamics and Turbulence Modeling: Practical Tools, Tips and Techniques. 10.1007/978-3-030-28691-0.

Or watch the following video: RANS Turbulence Models: Which Should I Choose?

You can also download my open-source calculator of initial values and boundary conditions for some of the most common turbulence models:

The following animation shows the velocity profile of an air flow over a NACA 4415 airfoil; the free-stream velocity is 1 m/s.
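As a rough sketch of what such a calculator computes, the usual textbook estimates for freestream k, epsilon, and omega from a turbulence intensity I and length scale l can be written in a few lines. The input values below are hypothetical examples, not values from the calculator itself:

```python
import math

C_MU = 0.09  # standard model constant

def turbulence_estimates(U, intensity, length_scale):
    """Common freestream estimates:
    k = 1.5*(I*U)^2, epsilon = C_mu^(3/4)*k^(3/2)/l, omega = sqrt(k)/(C_mu^(1/4)*l)."""
    k = 1.5 * (intensity * U) ** 2
    epsilon = C_MU ** 0.75 * k ** 1.5 / length_scale
    omega = math.sqrt(k) / (C_MU ** 0.25 * length_scale)
    return k, epsilon, omega

# Example: 1 m/s free stream, 5% intensity, 1 cm length scale
k, eps, omg = turbulence_estimates(1.0, 0.05, 0.01)
```

Note that the three estimates are consistent with each other through the identity omega = epsilon / (C_mu * k), which is a handy sanity check for any set of initial values.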
► Monthly informal OpenFOAM meeting
  31 Jan, 2023
Hello everyone,

I am happy to announce the next occasion of the monthly meeting on the 18th of February 2023 at 14:00 German time (UTC+1). The meeting takes place via Zoom with the attached room details (only visible when logged in). In parallel, the meeting times are announced in this calendar, which can be integrated into a mail client or viewed as a website. Details can be found in this thread.

Everybody is welcome.

See you there.
► How to install an Ubuntu system and OpenFOAM on your computer with a VM
    2 Jan, 2023
How to install an Ubuntu system and OpenFOAM on your computer with a VM

First, you should go to this site, where you can download the package. The package's name is ubuntu-22.04.1-desktop-amd64 (1).iso; its type is iso. After you download it, keep the file rather than deleting it. Then, install a piece of software named VMware Workstation. Last, download OpenFOAM and ThirdParty. When you finish the installation of the VM and Ubuntu, you can begin to install OpenFOAM.

First, when you open the terminal in the Ubuntu system, run: mkdir OpenFOAM. Then copy your OpenFOAM and ThirdParty files into the OpenFOAM directory.

Then, you should input this code:

sudo apt-get update

sudo apt-get install build-essential autoconf autotools-dev cmake gawk gnuplot

sudo apt-get install flex libfl-dev libreadline-dev zlib1g-dev openmpi-bin libopenmpi-dev mpi-default-bin mpi-default-dev

sudo apt-get install libgmp-dev libmpfr-dev libmpc-dev

After that finishes, you should also input these commands to check your software versions:

sudo apt-cache show gcc
sudo apt-cache show libopenmpi-dev
sudo apt-cache show cmake
sudo apt-cache show flex
sudo apt-cache show m4

After the system finishes running those, you can input:

sudo apt-get install libfftw3-dev libscotch-dev libptscotch-dev libboost-system-dev libboost-thread-dev libcgal-dev

Then, it is time for you to set up the environment variables:

gedit ~/.bashrc

When you input this command, a text file will appear; add the line source ~/OpenFOAM/[your openfoam name]/etc/bashrc at the end to set up the correct environment variables.

After you finish, close the terminal and restart it to load the variables.

Finally, it is time to install the OpenFOAM you downloaded from the Internet. First, input: cd OpenFOAM

then: ./Allwmake -j -s -q -l [Pay attention: if you receive the error "icoFoam not installed" at the end, you should exclude the -p from this command]

Finally, we can install the ThirdParty tools. First, input: sudo apt install paraview-dev

sudo apt install cmake qtbase5-dev qttools5-dev qttools5-dev-tools libqt5opengl5-dev libqt5x11extras5-dev libxt-dev

After you finish that, you can input:
cd OpenFOAM [if you are already in this directory, you do not need to input this command]

Second, input: ./Allwmake -j -s -q -l

Finally, you will have OpenFOAM and ParaView installed in your virtual system.

I hope this helps you solve your troubles.

At last, if you do not want to follow this process, you can download a complete package from this site and install it in your virtual system directly.

If you have any questions about it, we can discuss them in the comments below.
► Unofficial theory guide for relativeVelocityModel in OpenFOAM8
  19 Sep, 2022
Here's the theory for relativeVelocityModel in OpenFOAM8 that I uncovered manually going through the code and commit history of OpenFOAM8.

Before we proceed, since there are a couple of main scientific schools in the world that use different notation, let me declare some notations that I'm going to be using:

\cdot <-- this dot is just a general sign for multiplication; both multiplication of scalars and scalar multiplication of vectors can be denoted by it. Obviously, if I multiply vectors, I will denote them as vectors (i.e. with an arrow above); everything that doesn't have an arrow above is a scalar

tg and ctg are tangent and cotangent respectively

lg is logarithm with the base of 10

ln is natural logarithm

momentum, impulse and quantity \ of \ motion are all the same thing

General idea

If we want to describe a two-phase gas-liquid or liquid-liquid flow mathematically, we write the Navier-Stokes equations for each phase. That is the general consensus of the fluid mechanics community (though I, personally, do not entirely agree with it).

Such a system of equations is difficult to solve. Therefore, people started simplifying the equations - even throwing away some equations - by, of course, simplifying the physics of the flow they want to describe.

Such systems of equations are called reduced-order models. Note that when you simplify and throw away equations, you generally end up with fewer equations than unknowns. Therefore, people try to come up with so-called closure relations, which are meant to be very simple (preferably linear algebraic equations) and bring the total number of equations up to the total number of unknowns.

That changes the flow physics a lot, but it gives you a general understanding of the flow behavior. In other words, it doesn't give you the details of the flow but, rather, its general characteristics.

One such model is called the drift-flux model. Its closure relation is called the slip relation.

The drift-flux model is one of those models that simplifies the physics to the highest degree possible. It is not suitable for detailed flow description. But if, for instance, you are interested in the approximate pressure drop in an oil well several kilometers deep, it is your model of choice. It gives a general understanding of what pumps to use, and the cost of running it is very low.

The theory of the drift-flux model was developed by Mamoru Ishii, an emeritus professor at Purdue.

The development of the slip relation started before Mamoru Ishii, but he made a significant contribution to it. The slip relation is used on its own sometimes.

Mamoru Ishii, Takashi Hibiki, "Thermo-fluid dynamics of two-phase flow", 2nd edition, 2011, Springer is the fundamental book on the modeling of two-phase flows in general and the drift-flux model in particular.

The reduction of the physics in the drift-flux model can be briefly described as follows. Imagine a fluid-fluid flow as the flow of a fully diluted gas mixture, for which the theory is well developed. One can do that, but one must deal with the fact that, as opposed to a gas mixture, a bubble in water moves relative to the water due to buoyancy. The theory of gas mixture flow doesn't account for that. Therefore, one must amend the theory of gas mixture flow to account for the drift (slip) velocity of bubbles in order to apply that theory to bubbly flows (or other two-phase flows).

In order to account for that, one should use the slip relation.

One of the main parameters in the slip relation is the drift velocity. There are many empirical equations for the drift velocity.

OpenFOAM offers the choice of two equations for the drift velocity.

Those equations are accessible under the relativeVelocityModels in OpenFOAM.

NOTE: I have a suspicion that OpenFOAM means something else under driftFluxFoam, I'm still investigating that.


The structure of the code behind relativeVelocityModels is shown here.

You can choose between simple and general drift velocity models.

Note that C++ uses a two-file system: in .H files, you declare variables and functions; in .C files, you define the variables and functions declared in the .H files.

Therefore, the formula for the simple drift velocity model is shown in the file simple.C, see line 66. It was declared in the file simple.H, see line 90.

The simple drift velocity model goes as follows:

U_{dm} = \frac{\rho_c}{\rho} \cdot V_0 \cdot 10^{-A \cdot \alpha_d}

The formula for the general drift velocity model is shown in the file general.C, see line 67. It was declared in the file general.H, see line 93.

The general drift velocity model goes as follows:

U_{dm} = \frac{\rho_c}{\rho} \cdot V_0 \cdot (e^{-A \cdot (\alpha_d - \alpha_{residual})} - e^{-a_1 \cdot (\alpha_d - \alpha_{residual})})

The names of some of the parameters in these formulas are:
  • U_{dm} is called diffusion velocity, see, e.g., general.H line 92
  • V_0 is called drift velocity, see, e.g., general.H line 63
  • \rho = \alpha_1 \cdot \rho_1 + \alpha_2 \cdot \rho_2 is declared in the createFields.H file (see line 57), which is a part of interPhaseChangeFoam, and not the part of driftFluxFoam.
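To make the two formulas above concrete, here is a small numerical sketch of both drift velocity models. The parameter values used in the sanity checks are hypothetical placeholders, not values from any tutorial:

```python
import math

def udm_simple(rho_c, rho, V0, A, alpha_d):
    # U_dm = (rho_c / rho) * V0 * 10^(-A * alpha_d)
    return (rho_c / rho) * V0 * 10.0 ** (-A * alpha_d)

def udm_general(rho_c, rho, V0, A, a1, alpha_d, alpha_residual):
    # U_dm = (rho_c / rho) * V0 * (exp(-A*(a_d - a_r)) - exp(-a1*(a_d - a_r)))
    x = alpha_d - alpha_residual
    return (rho_c / rho) * V0 * (math.exp(-A * x) - math.exp(-a1 * x))

# Two structural sanity checks that follow directly from the formulas:
# the general model vanishes when alpha_d == alpha_residual, and the
# simple model reduces to (rho_c/rho)*V0 when A == 0.
```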
In order to find the article on which these equations are based, I would have had to go deep into the commit history of even previous versions of OpenFOAM, which I didn't do.

Instead, I noted that these equations are pretty much the same in OpenFOAM10 (the differences are negligible), and the OpenFOAM10 commit history readily gives you the commit where the reference to the article appears.

Thus, these equations and their parameters are after Michaels, Bolger, "Settling rates and sediment volumes of flocculated kaolin suspensions", 1962, Industrial and engineering chemistry fundamentals, 1(1), p.24-33. See this commit in the OpenFOAM10 general.C file.

Once I found the article, it became clear to me that the drift velocity models used in driftFluxFoam are designed for liquid-liquid flows, where one of the liquids is typically a non-Newtonian mud (sludge, slurry).

It also became clear why all the driftFluxFoam tutorials focus on liquid-liquid scenarios, especially the dahl tutorial, which involves sludge and water.

That is sufficient knowledge for me at this point, because I'm working with gas-liquid flows, closure relations for which are different from liquid-liquid flows. That is why I didn't look deeper into the theory of the presented closure relations for drift velocity and, thus, I'm not talking about them here. Dear community members with the knowledge on them, please, provide them in the comments and I'll amend the blog.

I'm turning my attention to the main system of equations that constitutes driftFluxFoam.

I've been digging them out from the code for several days already to no success so far. Once I'm ready, I'll post them in another blog entry.
► Installing foam-extend-4.1 from Source (Fedora 36)
  30 Aug, 2022
Just a reminder what I did on my Fedora 36
dnf install -y python3-pip m4 flex bison git git-core mercurial cmake cmake-gui openmpi openmpi-devel metis metis-devel metis64 metis64-devel llvm llvm-devel zlib zlib-devel ....
{
  echo 'export PATH=/usr/local/cuda/bin:$PATH'
  echo 'module load mpi/openmpi-x86_64'
} >> ~/.bashrc

cd ~
mkdir foam && cd foam
git clone foam-extend-4.1
{
  echo '#source ~/foam/foam-extend-4.1/etc/bashrc'
  echo "alias fe41='source ~/foam/foam-extend-4.1/etc/bashrc'"
} >> ~/.bashrc
pip install --user PyFoam
cd ~/foam/foam-extend-4.1/etc/
Edit the settings below, using 'which bison' to locate the system-installed tool:
# Specify system openmpi
# ~~~~~~~~~~~~~~~~~~~~~~
# System installed CMake
export CMAKE_DIR=/usr/bin/cmake

# System installed Python
export PYTHON_DIR=/usr/bin/python

# System installed PyFoam

# System installed ParaView
export PARAVIEW_DIR=/usr/bin/paraview 

# System installed bison
export BISON_DIR=/usr/bin/bison

# System installed flex. FLEX_DIR should point to the directory where
# $FLEX_DIR/bin/flex is located
export FLEX_SYSTEM=1
export FLEX_DIR=/usr/bin/flex  #export FLEX_DIR=/usr

# System installed m4
export M4_SYSTEM=1
export M4_DIR=/usr/bin/m4
Do the same with 'which flex' and 'which m4', and for all the rest of the ThirdParty stuff.

./Allwmake.firstInstall -j

Cadence CFD Blog

► Role of Simulation In Making Aviation Cleaner
  28 Sep, 2023
A viable approach to achieving cleaner aviation is switching to sustainable aviation fuels and leveraging simulations in lieu of real-world testing during the aircraft design cycle. Simulation technology ought to play a significant role as we continue to work towards a cleaner and greener aviation industry. (read more)
► Women in CFD with Mary Alarcón Herrera
  26 Sep, 2023
The eighth edition of the Women in CFD series features Mary Alarcon Herrera, a product engineer for the Cadence Computational Fluid Dynamics (CFD) team. Read our conversation with Mary as she shares insights on her career journey, her thoughts on women in engineering, and her advice for anyone interested in pursuing a career in CFD. (read more)
► AESIN TECH TALK: Accurate Electro-Thermal Simulation for Thermal Management of Electric Drivetrain
  21 Sep, 2023
In this AESIN TECH TALK, Matt Evans, Principal Product Engineer at Cadence, will discuss the thermal management of electric drivetrains, the importance of thermal simulation, and the role of Celsius EC Solver in this process. (read more)
► Discover What’s New in Fine Marine CFD
  20 Sep, 2023
The newest version of Fine Marine offers critical enhancements that improve solver performances and sharpen the C-Wizard’s capabilities even further. Check out the highlights...(read more)
► Sustainable Design of Data Centers Using the Cadence Digital Twin Platform
  20 Sep, 2023
In the Cadence Live Silicon Valley presentation on From Chips to Chillers: Electronics Cooling Through to Sustainability, Sherman Ikemoto, System Sales Group Director for the Cadence Digital Twin platform, discusses the sustainable methods for electronics cooling using Cadence tools. (read more)
► Automatically Generate the Best Mesh Each Time with Adaptive Grid Refinement
  13 Sep, 2023
The goal of simulation preprocessing is to create a mesh that is suitable for the analysis. We aim for computational efficiency when generating a mesh while resolving the geometry and physics. The automated adaptive grid refinement feature in Fidelity Pointwise manages numerical errors, adheres to user-defined boundaries, and resolves all flow features for diverse applications. (read more)

GridPro Blog

► Know your mesh for Hypersonic Intake CFD Simulations
    6 Jul, 2023

Figure 1: Hexahedral mesh for HiFire6 vehicle with Busemann hypersonic intake.

1200 words / 6 minutes read

Hypersonic flow phenomena, such as shock waves, shock-boundary layer interactions, and laminar to turbulent transitions, necessitate flow-aligned, high-resolution hexahedral meshes. These meshes effectively discretize the flow physics regions, enabling accurate prediction of their impact on the flow.


In light of successful scramjet-powered hypersonic flight tests conducted by numerous countries, the pressure is mounting for other nations to keep up with this technology. Extensive testing and computational fluid dynamics (CFD) simulations are underway to develop a scramjet design capable of withstanding the demanding conditions of hypersonic flight.

As an effective and efficient design tool, CFD plays a pivotal role in rapidly designing and optimizing various parametric scramjet configurations. However, simulating these extreme flow fields using CFD is a formidable challenge, and proper meshing is of utmost importance.

The meshing requirements for CFD of hypersonic flows in intakes differ significantly from those for low Mach number flows. High-speed flows involve elevated temperatures and interactions between shockwaves and boundary layers, which were previously negligible. Boundary layers are particularly critical as they experience high rates of heat transfer. Furthermore, the transition of the boundary layer from laminar to turbulent flow is a complex phenomenon that is challenging to capture and simulate accurately. Nonetheless, this transition is of paramount importance, as it has a profound impact on flow behaviour.

What Should the Mesh Capture?

Figure 2: Scramjet hypersonic intake flow physics. Image source Ref [9].

Change in the flow field demands a change in meshing requirements. As one may expect, the boundary layer should have a high resolution to capture the velocity boundary layer and the enthalpy boundary layer. Next, the shocks must also be captured precisely since the flow turns through the shock wave in hypersonic flows. But more importantly, shocks have extremely strong gradients, which can lead to large errors if not resolved accurately.

Multiple shocks and boundary layer interactions happen in hypersonic intake flows at different locations. If these effects are not resolved precisely, it is impossible to predict whether the hypersonic engine works effectively or not. To summarise, we must deal with multiple effects with different strength levels. The gridding system we adopt should create a grid that adequately resolves all effects with sufficient precision to achieve the needed level of solution reliability.

Other regions of concern in scramjet are the inlet leading edge, injector and cavity. Not only does the mesh topology have to be appropriately structured around these regions, but it must also align with the surfaces as best as possible to avoid introducing unnecessary skewing and warpage.

Structured mesh for the HiFiRE-6 hypersonic vehicle with Busemann intake, combustion chamber and exhaust.
Figure 3: High-resolution structured hexahedral mesh through the Busemann intake, combustion chamber and exhaust of the HiFiRE-6 hypersonic configuration.

Boundary Layer Capturing

The boundary layer, home to laminar-to-turbulent transition and shock-induced separation, must be properly resolved. Structured meshes are usually preferred; even hybrid unstructured approaches adopt finely resolved stacks of prism or hexahedral cells in the viscous padding.

This is necessary because resolving the boundary layer close to the wall aids in accurately representing its profile, leading to correct predictions of wall shear stress, surface pressure and the effect of adverse pressure gradients and forces.

Further, at hypersonic speeds, the laminar-to-turbulent transition within the boundary layer significantly influences the vehicle's aerodynamic characteristics. It affects the thermal loads, the drag coefficient and the lift-to-drag ratio. Hence, paying attention to how well the cells are arranged in the boundary layer padding is critical.
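As a rough guide to what "high resolution" at the wall means, the first cell height for a target y+ can be estimated from a flat-plate skin-friction correlation. The sketch below is a minimal Python estimate; the correlation, the freestream numbers and the function name are illustrative assumptions, not values from the article, and real hypersonic wall heating will shift the answer.

```python
import math

def first_cell_height(rho, mu, u_inf, length, y_plus=1.0):
    """Estimate the first wall-normal cell height for a target y+.

    Uses the turbulent flat-plate correlation Cf = 0.026 / Re^(1/7).
    For hypersonic intakes this is only a starting guess: wall heating
    and compressibility shift the result, so y+ should be re-checked
    from the converged solution.
    """
    re = rho * u_inf * length / mu          # Reynolds number
    cf = 0.026 / re ** (1.0 / 7.0)          # skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2     # wall shear stress
    u_tau = math.sqrt(tau_w / rho)          # friction velocity
    return y_plus * mu / (rho * u_tau)      # y1 = y+ * nu / u_tau

# Illustrative (assumed) freestream, roughly Mach-6 air at altitude:
h1 = first_cell_height(rho=0.04, mu=1.5e-5, u_inf=1800.0, length=1.0)
# h1 comes out in the micron range, i.e. an extremely thin first cell.
```

The point of the sketch is the order of magnitude: resolving the enthalpy and velocity boundary layers at y+ ≈ 1 demands micron-scale wall spacing, which is why structured, strongly anisotropic padding is used.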

Shock Wave Boundary Layer Interactions (SWBLI)

shock boundary layer interaction locations in a hypersonic aircraft.
Figure 4: Typical locations where shock boundary layer interactions likely occur in hypersonic aircraft. Image source Ref [7].

Another benefit of properly resolving the boundary layer is that it helps predict shock-induced flow separation. Shock wave interaction with a turbulent boundary layer generates significant undesirable changes in local flow properties: drag rise, large-scale flow separation, adverse aerodynamic loading and heating, shock unsteadiness and poor engine inlet performance.

CFD simulation results showing the shock boundary layer interactions in a scramjet intake.
Figure 5: Numerical schlieren showing the shock boundary layer interactions in a scramjet hypersonic intake. Image source Ref [2].

Unsteadiness induces substantial variations in pressure and shear stress, leading to flutter that threatens the integrity of aircraft components. Additionally, engine operating efficiency can be considerably compromised if the shock-induced boundary layer separation deviates from its anticipated location. If the computational grid fails to represent the shock wave boundary layer interaction accurately, whether through inadequate resolution or improper cell placement, the CFD results will have little practical value. This underscores the critical importance of well-designed grids for hypersonic flows.

Shock Capturing

Figure shows the effect of grid misalignment with the shock waves.
Figure 6: a. Near the leading edge, the O-grid edge is aligned with the curved shock, and the cells follow the shape of the sonic line. b. Grid misalignment results in non-physical waves. Image source Ref [1].

Ideally, grid lines need to be aligned with the shock shape. Hexahedral meshes are better suited for this: they can be tailored to the shock pattern and made finer in the direction normal to the shock, or adaptively refined. This brings the captured shock thickness closer to its physical value and improves solution quality by aligning the faces of the control volumes with the shock front. Shock-aligned grids reduce the numerical errors induced by captured shock waves, thereby significantly enhancing the computed solution quality in the entire region downstream of the shock.

This grid alignment is necessary for both oblique and normal bow shocks. Grid studies have shown that solver convergence is extremely sensitive to the shape of the O-grid at the stagnation point; matching the edge of the O-grid with the curved standing shock and maintaining cell orthogonality at the walls was found to be necessary for good convergence.

Figure shows the effect of proper and poor mesh alignment with shock waves.
Figure 7: Effect of a. Fair b. Poor mesh alignment with the leading edge shock. Image source Ref [1].

Grid misalignment is also observed to generate non-physical waves, as shown in Figure 7. For CFD solvers with low numerical dissipation, a strong shock generates spurious waves when it crosses a 'cell step', i.e., moves from one cell to another. Such numerical artefacts can be avoided, or at least the strength of the spurious waves minimised, by reducing the cell growth ratio and the cell misalignment w.r.t. the shock shape.
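The growth-ratio constraint can be made concrete with a small sketch: given the spacing at the shock and a maximum allowed growth ratio, a geometric progression determines how many cells are needed to march away from the refined region. The helper below is hypothetical, not part of any particular mesher:

```python
def expansion_layers(d0, growth, distance):
    """Count the cells needed to march a given distance away from a
    refined shock region when each cell may be at most `growth` times
    its neighbour (geometric stretching). Keeping `growth` modest
    (say 1.1-1.2) limits the 'cell step' a captured shock experiences
    as it crosses from one cell to the next."""
    n, covered, d = 0, 0.0, d0
    while covered < distance:
        covered += d
        d *= growth
        n += 1
    return n

# 1 mm cells at the shock, marching 0.5 m away from it:
n_loose = expansion_layers(0.001, 1.2, 0.5)    # 26 cells at 20% growth
n_tight = expansion_layers(0.001, 1.05, 0.5)   # 67 cells at 5% growth
```

The trade-off is explicit: tightening the growth ratio to suppress spurious waves roughly triples the cell count along that direction, which is part of the cost of a clean hypersonic solution.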

Aspects to Consider While Doing Grid Refinement

A sparser grid density may suffice in areas where flow is uniform and surfaces have slight curvatures. Nevertheless, it becomes necessary to employ grid clustering and increase the resolution in regions characterized by abrupt flow gradients, geometric or topological variations, regions accommodating critical flow phenomena (such as near walls, shear and boundary layers, shock interactions), geometric cavities, injectors, and other solid structures. The appropriate refinement of these regions holds significance as it contributes to enhancing the efficacy of numerical schemes and models at both local and global levels. Consequently, this refinement leads to the generation of more precise and reliable results.

When employing solution-based grid adaptation, the choice of refinement ratio and initial grid density becomes crucial. If the refinement ratio is too low, successive grids are too similar to reveal the convergence behaviour: the limited coverage of the asymptotic region makes the study inefficient and may necessitate many flow solutions before a valid conclusion can be reached.

Another aspect that needs due attention in grid adaptation is the initial grid. It should possess a sufficient level of resolution: a too-coarse initial grid can lead to inaccurate simulation results and unsatisfactory flow field solutions. On the other hand, an excessively refined initial grid may not be feasible for high-fidelity studies involving viscous, turbulent or reacting flows, because the initial cell density may already be so high that creating subsequent, even finer grids becomes impractical.
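The convergence behaviour mentioned above is usually quantified with Richardson extrapolation over three systematically refined grids. A minimal sketch, using the standard textbook formulae rather than anything specific to this article:

```python
import math

def grid_convergence(f_coarse, f_medium, f_fine, r):
    """Richardson extrapolation for three solutions on grids with a
    constant refinement ratio r. Returns the observed order of
    convergence p and the extrapolated grid-independent value.
    Roache's Grid Convergence Index adds a safety factor on top of
    this estimate."""
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_exact

# Synthetic second-order data, f(h) = 1 + h^2, on h = 1, 0.5, 0.25:
p, f_ext = grid_convergence(2.0, 1.25, 1.0625, 2.0)
# Recovers the order p = 2 and the exact value 1.0.
```

If the three grids are too close together (refinement ratio near 1), the differences in the numerator and denominator become noise-dominated, which is exactly the inefficiency described above.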

Structured surface mesh for the HiFiRE-6 hypersonic vehicle.
Figure 8: Structured multiblock mesh for the HiFiRE-6 hypersonic vehicle.


Grid accuracy plays a critical role in the reliability and precision of hypersonic CFD simulations, as it directly influences the computed flow field. Given the high velocities involved, errors introduced upstream can rapidly amplify downstream.

Consequently, it is imperative to employ a meticulous grid or topology design to achieve suitable cell discretization and blocking structures. Factors such as grid resolution, grid clustering, cell shape, and cell size distribution must be thoroughly evaluated and selected both locally and across the entire domain. This careful assessment is essential for preventing the introduction of errors and inaccuracies into the computed results through numerical artefacts and uncaptured phenomena.


1. "Experimental Study of Hypersonic Fluid-Structure Interaction with Shock Impingement on a Cantilevered Plate", Gaetano M. D. Currao, PhD Thesis, UNSW Australia, March 2018.
2. "Investigation of '6X' Scramjet Inlet Configurations", Stephen J. Alter, NASA/TM-2012-217761, September 2012.
3. "Numerical Simulation of Hypersonic Air Intake Flow in Scramjet Propulsion Using a Mesh-Adaptive Approach", Sarah Frauholz et al., AIAA conference paper, September 2012.
4. "Parametric Geometry, Structured Grid Generation, and Initial Design Study for REST-Class Hypersonic Inlets", Paul G. Ferlemann et al.
5. "Numerical Simulation of Hypersonic Air Intake Flow in Scramjet Propulsion", Sarah Frauholz et al., 5th European Conference for Aeronautics and Space Sciences (EUCASS), July 2013.
6. "Computational Prediction of NASA Langley HYMETS Arc Jet Flow with KATS", Umran Duzel, AIAA conference paper, January 2018.
7. "Numerical Simulations of the Shock Wave-Boundary Layer Interactions", Ismaïl Ben Hassan Saïdi, HAL Id: tel-02410034, 13 December 2019.
8. "The Role of Mesh Generation, Adaptation, and Refinement on the Computation of Flows Featuring Strong Shocks", Aldo Bonfiglioli et al., Modelling and Simulation in Engineering, Hindawi, Volume 2012, Article ID 631276.
9. "Numerical Investigation of Compressible Turbulent Boundary Layer Over Expansion Corner", Tue T. Q. Nguyen et al., AIAA conference paper, October 2009.


The post Know your mesh for Hypersonic Intake CFD Simulations appeared first on GridPro Blog.

► The Importance of Flow Alignment of Mesh
  16 Jan, 2023

Figure 1: Flow-aligned mesh around an MDA -3 element configuration.

1350 words / 7 minutes read

Alignment of grid lines with the flow aids in lower diffusion and numerical error, faster convergence, and accurate capturing of high-gradient flow features like shocks. This subtle gridding detail makes a significant difference to the CFD simulation's solution quality and accuracy.


In the fast-paced world of product design, CFD simulations are expected to generate quick results. Quick results mean faster grid generation, which inevitably leads to a loss of attention to subtle gridding details. One critically important aspect that most CFD practitioners under-appreciate is the alignment of the grid to the flow.

Three aspects of gridding dictate the final solver outcome: grid quality, mesh resolution and grid alignment. Most grid generators pay attention to the first two but ignore grid line alignment to the flow. This is understandable, as rapid domain-filling algorithms like unstructured and Cartesian meshing are inherently unable to meet the flow-alignment criterion. Only inside the boundary layer, where they adopt stacked prism or hexahedral cells, is some flow alignment achieved. Currently, only the structured multi-block technique can orient grid cells to the flow both inside the boundary layer padding and outside it.

It is essential that CFD practitioners understand how alignment or non-alignment of the grid to the flow affects the solution, how mesh singularities of different degrees affect the flow field, and how grid alignment to high-gradient flow phenomena like shocks influences the final outcome. This article attempts to address these meshing aspects.

A Gridding Experiment to Demonstrate the Need for Alignment of Grid to Flow:

Flow aligned grids with no diffusion or numerical error.
Figure 2: Structured grids with cells aligned to the flow. a. Cells also aligned to the regular Cartesian coordinate system. b. Cells not aligned to the regular Cartesian coordinate system. Image source Ref [4].

The importance of grid cell orientation w.r.t. the flow can be demonstrated with a simple convective-diffusive flow in a square domain. Figures 2 and 3 show the errors produced by different orientations of the cells to the flow direction.

If two velocities, V1 and V2, flow on a structured mesh in the direction of the grid lines, the solution will be captured without any diffusion or numerical error, as shown in Figure 2a. This holds even for a grid whose mesh lines are not oriented along the coordinate directions, as illustrated in Figure 2b.

Flow dissipation due to non-alignment of cells in unstructured meshes to flow direction.
Figure 3: a. Random orientation of cells to the flow direction. b. Structured mesh with cells not oriented to flow direction. Image source Ref [4].

However, on an unstructured mesh, or on a structured mesh that is not aligned with the flow, diffusion takes place. The amount of diffusion depends on the differencing scheme used in the flow solver and on the mesh size: the finer the mesh, the lower the diffusion. Nevertheless, it never disappears entirely.
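This grid-orientation error is often quantified with the classical false-diffusion estimate of de Vahl Davis and Mallinson for first-order upwinding on a uniform 2D grid. The sketch below assumes that formula; it reproduces the behaviour described above: zero artificial diffusion for grid-aligned flow, a maximum near 45 degrees, and a decrease with cell size.

```python
import math

def false_diffusion(rho, u, dx, dy, theta_deg):
    """De Vahl Davis & Mallinson estimate of the artificial diffusion
    coefficient introduced by first-order upwinding on a uniform 2D
    grid when the flow crosses the grid at angle theta. It vanishes
    for grid-aligned flow, peaks near 45 degrees, and shrinks with
    cell size."""
    t = math.radians(theta_deg)
    num = rho * u * dx * dy * math.sin(2.0 * t)
    den = 4.0 * (dy * math.sin(t) ** 3 + dx * math.cos(t) ** 3)
    return num / den

aligned = false_diffusion(1.0, 1.0, 0.01, 0.01, 0.0)   # 0: flow along grid lines
worst = false_diffusion(1.0, 1.0, 0.01, 0.01, 45.0)    # maximum false diffusion
```

Higher-order schemes reduce the coefficient but do not change the qualitative picture, which is why flow-aligned structured grids retain an accuracy advantage even with modern solvers.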

Effect of Grid Singularities

A grid singularity is simply a grid point where, in two dimensions, more or fewer than four grid lines meet. Singularities exist in large numbers in unstructured meshes and in very small numbers in multi-block meshes for complex configurations.

Negligible flow disruption due to 3- and 5-way singularities.
Figure 4: 3 and 5-way singularities. Image source Ref [3].

Results from the gridding experiment on singularities show that error magnitudes are smallest for low-order singularities (3-way) and largest for high-order ones such as an 8-way singularity, as shown in Figures 5 and 6.

Flow dissipation due to 6 -point singularity.
Figure 5: 3- and 6- way singularities. Image source Ref [3].

A closer review shows that the results for 3- and 5-way singularity grids are quite acceptable and, in fact, as good as those from non-singular grids produced by the same grid generator.

Flow dissipation due to 8 -point singularity.
Figure 6: 3- and 8-way singularities. Image source Ref [3].

Hex Cells in Cartesian and Structured Grids are Not the Same

Though both Cartesian grids and classical structured grids use hexahedral cells, their effect on the flow solver output is not the same. The subtle difference in cell alignment and the need for interpolation in Cartesian grids show up in the computed results: in a Cartesian grid, the grid lines follow the Cartesian coordinate axes, while in structured grids they follow the geometric body and the flow field.

Interpolation results on cartesian and flow aligned structured meshes.
Figure 7: Comparison of the interpolation on a Cartesian mesh (thin line) and on a structured flow-aligned mesh (thick line) with the exact solution, for two stoichiometric scalar dissipation rates of 0.014 and 653. a. Mass fraction of H vs mixture fraction Z. b. Temperature in Kelvin vs mixture fraction Z. Image source Ref [1].

Figure 7 illustrates the computed species mass fraction and temperature distribution for a CFD simulation of fuel injection in the combustor of a hypersonic vehicle. As shown in Figure 7a, Cartesian interpolation leads to dramatic spurious oscillations in the species mass fraction, especially at small stoichiometric scalar dissipation rates. The structured curvilinear mesh, on the other hand, shows a very smooth interpolation without any oscillation. Similar behaviour can be seen in the computed temperature distribution in Figure 7b. As V. E. Terrapon, the author of the research work [Ref 1], says,

“The small additional lookup cost in a curvilinear mesh is largely compensated by a much smoother interpolation.”

Flow Aligned Mesh for Boundary Layer Capturing

Flow-aligned cells in the viscous padding to accurately capture the boundary layer profile.
Figure 8: Flow-aligned mesh inside the viscous padding to capture the boundary layer profile accurately. Image source leap australia.

The boundary layer, home to wall-bounded viscous flows, experiences high gradients. Capturing them requires finely stacked, flow-aligned cells, and maintaining cell orthogonality w.r.t. the wall is another key factor in boundary layer meshing. To keep the cell count optimal while still finely resolving the boundary layer, stretched elements in the form of prisms or hexahedral cells are preferred. For the same reason, even the hybrid unstructured approach adopts stacked prism cells in the viscous padding, since stacking high aspect ratio tetrahedra is undesirable due to the resulting deterioration in cell skewness.

An orderly arranged, flow-aligned mesh in the boundary layer is critical, as it aids in the accurate representation of the layer's profile, leading to accurate predictions of wall shear stress, surface pressure, and the effects of adverse pressure gradients and forces.

Further, at high Mach numbers in the supersonic and hypersonic regimes, laminar-to-turbulent transition and shock boundary layer interactions significantly influence aircraft aerodynamic characteristics. They affect the thermal loads, the drag coefficient and the vehicle lift-to-drag ratio. Hence, it is critically important to pay attention to how well the cells are arranged in the boundary layer padding.

Flow Aligned Mesh for Shock Capturing

Figure showing flow aligned mesh to curved shock and grid misalignment leading to non-physical waves.
Figure 9: a. Near the leading edge, the O-grid edge is aligned with the curved shock, and the cells follow the shape of the sonic line. b. Grid misalignment results in non-physical waves. Image source Ref [5].

To capture the effects of high gradient flow phenomena like shocks on the flow field downstream, it is essential to align the grid lines to the shock shape and have refined cells.

For this, hexahedral meshes are better suited. They can be tailored to the shock pattern and can be made finer in the shock normal direction or can be adaptively refined. This not only brings the captured shock thickness closer to its physical value but also allows for the improvement of the solution quality by aligning the faces of the control volumes with the shock front. Aligned grids reduce the numerical errors induced by the captured shock waves and thereby significantly enhance the computed solution quality in the entire region downstream of the shock.

Grid alignment is necessary for both oblique and normal bow shock. Grid studies have shown that solver convergence is extremely sensitive to the shape of the O-grid at the stagnation point. Matching the edge of the O-grid with the curved standing shock and maintaining cell orthogonality at the walls was found to be necessary to get good convergence.

Effect of fair and poorly flow aligned mesh with shock.
Figure 10: Effect of a. Fair b. Poor mesh alignment with the leading edge shock. Image source Ref [5].

Grid misalignment is also observed to generate non-physical waves, as shown in Figure 10. For CFD solvers with low numerical dissipation, a strong shock generates spurious waves when it crosses a 'cell step', i.e., moves from one cell to another. Such numerical artefacts can be avoided, or at least minimised, by reducing the cell growth ratio and the cell misalignment w.r.t. the shock shape.

Check out the importance of flow alignment and a comparison of various grid types for an airfoil and the Onera M6 wing.

Do Mesh Still Play a Critical Role in CFD?


For ultra-accurate CFD results, flow alignment of grids is a must. It is a subtle detail in grid generation that can make a mammoth difference to the computed solution. Of all the gridding methodologies developed to date, structured hexahedral meshing is the best candidate for the job. Whether near the wall in the boundary layer or in the interior of the domain where shocks are discretised, structured meshes align optimally to the flow features and help avoid dissipation and numerical errors.

To sum up, if accurate CFD results are the top priority in your CFD cycle, then having flow-aligned grids is your secret recipe.

To know about generating flow-aligned meshes in GridPro, contact us at:

Further Reading


1. “A flamelet-based model for supersonic combustion”, V. E. Terrapon et al, Center for Turbulence Research Annual Research Briefs, 2009.
2. “HEC-RAS 2D – AN ACCESSIBLE AND CAPABLE MODELLING TOOL“, C. M. Lintott Beca Ltd, Water New Zealand’s 2017 Stormwater Conference.
3. “Effect of Grid Singularities on the Solution Accuracy of a CAA Code”, R. Hixon et al, 41st Aerospace Sciences Meeting and Exhibit, 6-9 January 2003, Reno, Nevada.
4. “Challenges to 3D CFD modelling of rotary positive displacement machines”, Prof Ahmed Kovacevic, SCORG Webinar.
5. “Experimental Study of Hypersonic Fluid-Structure Interaction with Shock Impingement on a Cantilevered Plate”, Gaetano M D Currao, PhD Thesis, March 2018.


The post The Importance of Flow Alignment of Mesh appeared first on GridPro Blog.

► The Challenges of Meshing Ice Accretion for CFD
  12 Jul, 2022

Figure 1: Hexahedral mesh for an aircraft icing surface.

1228 words / 6 minutes read

Complex ice shapes make generating well-resolved meshes extremely difficult, compelling CFD practitioners to make geometric and meshing compromises in understanding the effect of ice accretion on UAVs.


Flying safely and reliably depends on how well icing conditions are managed. Atmospheric icing is one of the main causes of operational limitations: it disturbs the aerodynamics and limits flight capabilities such as range and endurance. In some scenarios, it can even lead to crashes.

Icing has been under research for manned aircraft since the 1940s. However, the need to understand icing effects for different flying scenarios in unmanned aerial vehicles (UAVs) or drones has reignited the research. Drones are used for a wide range of applications like package delivery, military, glacier studies, pipeline monitoring, search and rescue, etc.

Ice accumulation on different aircraft parts such as nose cone, engine, pitot probe.
Figure 2: a. Ice on nose cone. b. Ice on an engine. c. Ice on a pitot probe. Image source – Ref [4]

The well-understood icing process of manned civil and military aircraft does not hold for most UAVs. UAVs fly at lower airspeeds and are smaller in size. They operate at low Reynolds numbers in the range of 0.1-1.0 million, as against manned aviation, which flies at Reynolds numbers of the order of 10-100 million. This huge difference necessitates a better understanding of the icing process at low Reynolds numbers.
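The quoted ranges follow directly from the definition Re = ρUc/μ. A quick sketch with illustrative, assumed numbers for a small UAV and a transport-category aircraft, using sea-level air properties for simplicity:

```python
def reynolds(rho, u, chord, mu=1.8e-5):
    """Chord-based Reynolds number, Re = rho * U * c / mu.
    Sea-level air density and viscosity assumed throughout."""
    return rho * u * chord / mu

re_uav = reynolds(1.225, 20.0, 0.3)        # small UAV: ~20 m/s, 0.3 m chord
re_transport = reynolds(1.225, 230.0, 5.0) # airliner scale: ~230 m/s, 5 m chord
# re_uav falls in the 0.1-1.0 million band; re_transport in the
# 10-100 million band: the two-orders-of-magnitude gap quoted above.
```

At the lower Reynolds numbers, boundary layers are thicker and more prone to laminar separation, which is why manned-aircraft icing correlations transfer poorly to UAVs.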

CFD simulation of aircraft ice accretion is a natural choice for researchers due to its cost-effective approach when compared to flight testing. In this article, we will discuss how researchers navigate through geometry and meshing challenges to understand the icing effects.

Ice Accretion Analysis

Icing analysis covers a large variety of physical phenomena, from droplet or ice-crystal impact on cold surfaces to solidification processes at different scales. Ice accumulation degrades aerodynamic performance, i.e., the lift, drag, stability and stall behaviour of lifting surfaces, by modifying the leading-edge geometry and the state of the boundary layer downstream. This results in premature and highly undesirable flow separation.

Aircraft Icing: Flow field around an iced airfoil.
Figure 3: Aircraft Icing: Flow field around an iced airfoil. Image source – [Ref 5, 6]

Such transitional and turbulently active regions need well-resolved grids. However, the complex icing undulations make meshing very hard, forcing CFD practitioners into geometric and meshing compromises.

Complex Geometric Shapes

Icing develops different kinds of geometric features such as conic shapes, jagged ridges, narrow, deep valleys and concave regions. In 3D, the spanwise variation of these features creates further complexities.

Meshes for aircraft icing simulation: Inviscid unstructured mesh using tetrahedral elements to discretize the complex 3D iced wing.
Figure 4: Inviscid unstructured mesh using tetrahedral elements to discretize the complex 3D iced wing. [Image source: Ref 3]

Geometric simplification is most often done when attempting 3D simulations. Even though finely resolved 3D-scanned ice feature data is available, the inability to create quality wall-normal cells compels CFD practitioners either to simplify the ice features or to settle for some form of inviscid simulation that does not capture the viscous effects. Figure 4 shows such a compromised unstructured mesh without viscous padding for a DLES simulation. Figure 5 shows the extraction of a smoothened and simplified ice geometry from an actual icing surface.

Aircraft icing: Geometric simplification done to 3D ice surface to ease meshing difficulties.
Figure 5: Geometric simplification done to a 3D ice surface to ease meshing difficulties. [Image source: Ref 9].

Meshing such realistic ice shapes is extremely difficult for any mesh generation algorithm, let alone doing so with good mesh quality.

As a compromise, the sub-scale surface roughness is smoothened out and not captured. As a consequence, the turbulence effects due to sub-scale geometric features are ignored.

Wide-Ranging Geometric Scales

Ice features span a wide range of geometric scales. For example, ice horns can be as big as 1-2 centimetres, while sub-scale surface roughness can be as small as a few microns.

The level of performance deterioration is directly related to the ice shapes and to the degree of aerodynamic flow disruption they cause. Sub-scale ice surface roughness triggers laminar-to-turbulent transition, while large ice horns cause large-scale separation.

Orthogonal boundary layer padding to capture the viscous activities near the wall.
Figure 6: Orthogonal boundary layer padding to capture the viscous activities near the wall.

Meshing such wide-ranging geometric scales poses a few challenges. Capturing the micron-level features demands a massive number of cells, which directly strains the available computational power and requires considerable time for both meshing and CFD.

A literature review shows that some CFD practitioners, foreseeing these challenges, settle for 2D simulations to avoid computationally expensive 3D ones. Even at the 2D level, finer ice-roughness features are smoothened to make viscous padding creation more manageable.

Finely refined flow aligned hexahedral grid to capture the ice horn wake using GridPro.
Figure 7: Finely refined flow-aligned hexahedral grid to capture the ice horn wake.

Horns and Crevices

Crevices and concave regions are home to re-circulation flows. These viscous regions need finely resolved unit aspect ratio cells to capture them. But since many grid generators find it difficult to mesh these regions, the crevices are removed and replaced by a small depression.

Hexahedral meshing of the narrow crevices and concave regions of the aircraft icing surface using GridPro.
Figure 8: Hexahedral meshing of the narrow crevices and concave regions of the aircraft icing surface.

Aft of the horns, large-scale wakes are created, which are highly unsteady and three-dimensional in nature. Also, with an increase in the angle of attack, these turbulent features grow in size and start to extend further in the normal and axial direction w.r.t the wing surface. In concave regions and narrow crevices, recirculation flows can be observed.

Boundary-Layer Mesh

The boundary layer padding needs good wall-normal resolution, with a first cell spacing corresponding to a y+ of no more than 1. Rough ice surfaces aggravate flow separation, so adequate viscous padding, with a uniform number of layers of orthogonal cells, is necessary at all locations.

Growing wall-normal quadrilateral or hexahedral cells from the ice walls over the entire region is a challenge: the crevices are very narrow with irregular protrusions, and generating continuous viscous padding causes cells to collapse onto one another.

Aircraft icing meshes: Viscous boundary layer padding in narrow crevices. a. Hybrid unstructured mesh. b. Hexahedral mesh.
Figure 9: Viscous boundary layer padding in narrow crevices. a. Hybrid unstructured mesh. Image source [Ref 7] b. Hexahedral mesh.

To overcome this, some grid generators resort to partial wall-normal padding to the extent the local geometry permits, and quickly transition to unstructured meshing, as shown in Figure 9a.

Meshing Transient Ice Accumulation

Research has shown that airframe size and airspeed are the two main parameters influencing ice accretion.

One of the icing simulation requirements is computing ice accumulation over a finite time period spanning 15 to 20 minutes. Multiple CFD simulations are done for different chord lengths and air velocities. As one can imagine, this is a numerically intensive job requiring automated geometry building and mesh generation. In such studies, a new mesh must be generated for every minute, or even more frequently, so that a CFD run can be made for each new instance of ice deposition.
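The multi-shot process can be caricatured with a quasi-steady rime-ice growth loop. Everything here is a deliberate simplification for illustration: the collection efficiency is held constant (a real loop would re-mesh and re-run CFD each step precisely because it changes with the ice shape), and all impinging water is assumed to freeze.

```python
def rime_ice_thickness(lwc, v, beta, minutes, dt_min=1.0, rho_ice=880.0):
    """Quasi-steady rime-ice growth: each step deposits
    beta * LWC * V * dt of water mass per unit area, all assumed to
    freeze (rime conditions). Returns the thickness history in mm.
    In a real multi-shot simulation beta would be recomputed by CFD on
    a fresh mesh each step; holding it constant stands in for that loop."""
    thickness, history = 0.0, []
    for _ in range(int(minutes / dt_min)):
        dm = beta * lwc * v * (dt_min * 60.0)  # kg/m^2 deposited this step
        thickness += dm / rho_ice              # metres of ice added
        history.append(thickness * 1000.0)     # record in mm
    return history

# LWC 0.5 g/m^3, 25 m/s airspeed, stagnation-line beta ~0.5, 20 minutes:
history = rime_ice_thickness(5e-4, 25.0, 0.5, 20.0)
```

Even this toy loop accumulates several millimetres of ice in 20 minutes at UAV speeds, which is why each re-meshing step has to cope with a genuinely new geometry.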

Figure 10: Ice accumulation due to change in a. Airframe. b. Airspeed. Image source Ref [5].

With each time step, the shape of the ice features changes, and over time they take on fairly complex shapes with horns and crevices, making local manual intervention inevitable.

GridPro's single-topology multiple grid approach helps to rapidly generate high-quality meshes for multiple icing variants.
Figure 11: GridPro's single-topology multiple-grid approach helps to rapidly generate high-quality meshes for multiple icing variants in ice accretion analysis automatically.

Parting Remarks

For the safe operation of UAVs without an ice protection system, the common solution is to ground the aircraft when icing conditions prevail. This limitation can be overcome with a better de-icing system. Through CFD analysis of ice accretion at different atmospheric conditions, the optimal onboard electrical power needed for de-icing can be determined.

However, accurate CFD analysis hinges on precise capturing of the ice features by the mesh. A meshing system which can aptly meet this requirement without making geometric or meshing compromises is the need of the hour.

For structured meshing needs for icing analysis, please reach out to GridPro at:

Further Reading


1. "Comparison of LEWICE 1.6 and LEWICE/NS with IRT Experimental Data from Modern Airfoil Tests", William B. Wright, Mark G. Potapczuk.
2. "Geometry Modeling and Grid Generation for Computational Aerodynamic Simulations around Iced Airfoils and Wings", Yung K. Choo, John W. Slater, Mary B. Vickerman, Judith F. VanZante.
3. "Computational Modeling of Rotor Blade Performance Degradation due to Ice Accretion", Christine M. Brown, Thesis in Aerospace Engineering, The Pennsylvania State University, December 2013.
5. "Atmospheric Ice Accretions, Aerodynamic Icing Penalties, and Ice Protection Systems on Unmanned Aerial Vehicles", Richard Hann, PhD Thesis, Norwegian University of Science and Technology, July 2020.
6. "Icing on UAVs", Richard Hann, NASA Seminar.
9. "An Integrated Approach to Swept Wing Icing Simulation", Mark G. Potapczuk et al., presented at the 7th European Conference for Aeronautics and Space Sciences, Milan, Italy, July 3-6, 2017.


The post The Challenges of Meshing Ice Accretion for CFD appeared first on GridPro Blog.

► Challenges in Meshing Scroll Compressors
  25 Mar, 2022

Figure 1: Structured multi-block mesh for scroll compressors with tip seal.

804 words / 4 minutes read

Scroll compressors, with their deforming fluid space and narrow flank and axial clearances, pose immense meshing challenges to any mesh generation technique.


Scroll compressors and expanders have been in extensive use in the refrigeration, air-conditioning and automotive industries since the 1980s. A slight improvement in scroll efficiency results in significant energy savings and reduced environmental pollution. It is therefore important to minimise the frictional power loss at each pair of compressor elements, as well as the fluid leakage power loss at each clearance between them. Developing ways to minimise leakage losses is thus essential to improving scroll performance.

Scroll Compressor CFD Challenges

Unlike turbomachines such as compressors and turbines, positive displacement (PD) machines like scrolls have seen fewer innovative designs and performance enhancements. This is mainly due to the difficulty of applying CFD to these machines: challenging meshing, real-fluid equations of state, and long computation times.

Figure 3: Deforming fluid pockets at different stages in the compression process. Image source Ref [11].

Geometric Challenges for meshing

Deforming Flow Field:

The fluid flow is transient and the flow volume changes with time (Figure 3). The fluid is compressed and expanded as it passes through different stages of the compression process. The mesh for the fluid space should be able to ‘follow’ the deformation imposed by the machine without losing its quality.

When the deformation is small, the initial mesh maintains cell quality. For large deformations, however, mesh quality deteriorates and cells collapse near the contact points between the stationary and moving parts.

Figure 4: Leakage through flank clearance. Image source – Ref [10].

Flank Clearance:

The narrow passage between the stationary and moving scroll in the radial direction is called the flank clearance. A clearance of about 0.05 mm is generally used to avoid contact and rubbing wear.

Adequately resolving this clearance with a fine mesh is one of the key factors in obtaining an accurate CFD simulation. However, the narrowness of this gap poses meshing challenges for many grid generators.

Figure 5: Leakage through axial clearance. Image source – Ref [10].

Axial Clearance:

The narrow passage between the stationary and moving scroll in the axial direction is called the axial clearance. The axial clearance is about one thousandth of the scroll plate height, which is much smaller than the flank clearance.

In some cases, the gap forces the mesh to be split into separate zones. Adequate resolution of the axial clearance is equally important, since under-resolving it leads to inaccurate flow field prediction.

Figure 6: Tip seal used to reduce axial clearance leakage. Image source Ref [5, 8].

Tip Seal Modeling:

Tip seals are used to reduce the axial leakage caused by wear and tear. The tip seals influence the mass flow rate of the fluid. Modeling internal leakage with tip seals requires numerical techniques ranging from fluid-structure interaction to special treatments for thermal deformation and tip-seal efficiency.

Figure 7: GridPro’s structured mesh for capturing axial gap and tip seal: a. With axial gap. b. Axial gap with tip seal.

Discharge Check Valve Modeling:

Reed valves are installed at the discharge to prevent reverse flow. Understanding the dynamics of these check valves is important because they significantly influence scroll efficiency and noise levels, and the losses at the discharge can significantly reduce the overall efficiency.

However, modeling the valve with appropriate simplification is a challenge for any meshing technique.

Figure 8: a. Reed valve geometry. b. Flip valve geometry. Image source Ref [2].

Influence of Mesh Element Type

Many different meshing methods, from tetrahedral to hexahedral to polyhedral cells, have been employed to discretize the fluid passage. However, researchers who prioritize solution accuracy tend to favor structured hexahedral meshes.

Hexahedral meshing outperforms other element types with respect to grid quality, domain discretization efficiency, solution accuracy, solver robustness, and convergence.

One reason structured hexahedral meshes offer better accuracy is that their cells can be squeezed without deteriorating in quality. This allows a large number of mesh layers to be placed in the narrow clearance gap, and better resolution of this critical gap results in better CFD prediction.
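To make the layer-placement point concrete, here is a small sketch in plain Python (not GridPro code; the 0.05 mm gap comes from the flank-clearance discussion above, while the 12-layer count and stretching ratio are assumed example values) comparing uniform and geometrically stretched layer heights across a clearance:

```python
# Illustrative sketch (not GridPro code): distributing hexahedral mesh
# layers across a narrow flank clearance. Layer count and stretching
# ratio are assumed values for illustration.

def layer_heights(gap, n_layers, ratio=1.0):
    """Heights of n_layers cells spanning `gap`, each layer `ratio`
    times thicker than the previous one (ratio=1.0 gives uniform cells)."""
    if ratio == 1.0:
        return [gap / n_layers] * n_layers
    # First-cell height from the geometric-series sum h0*(r^n - 1)/(r - 1) = gap
    h0 = gap * (ratio - 1.0) / (ratio ** n_layers - 1.0)
    return [h0 * ratio ** i for i in range(n_layers)]

gap = 0.05e-3                                  # ~0.05 mm flank clearance, in metres
uniform = layer_heights(gap, 12)
stretched = layer_heights(gap, 12, ratio=1.2)

print(f"uniform first-cell height:   {uniform[0]:.2e} m")
print(f"stretched first-cell height: {stretched[0]:.2e} m")
```

With a stretching ratio above 1, the first cell off the wall is several times thinner than a uniform layer of the same count, which is what lets a squeezed structured mesh resolve near-wall gradients inside the gap.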

Parting Remarks

Understanding the key meshing challenges before setting out to mesh scrolls is essential. Becoming aware of the regions that are difficult to mesh and the regions that strongly influence the accuracy of the CFD prediction is critically important. Equally, the choice of meshing approach, whether structured, unstructured, or Cartesian, influences the quality and accuracy of your CFD prediction.

In the next article, on automating meshing for scroll compressors, we discuss how scroll compressors can be meshed in GridPro.


1. “Study on the Scroll Compressors Used in the Air and Hydrogen Cycles of FCVs by CFD Modeling”, Qingqing Zhang et al, 24th International Compressor Engineering Conference at Purdue, July 9-12, 2018.
2. “Numerical Simulation of Unsteady Flow in a Scroll Compressor”, Haiyang Gao et al, 22nd International Compressor Engineering Conference at Purdue, July 14-17, 2014.
3. “Novel structured dynamic mesh generation for CFD analysis of scroll compressors”, Jun Wang et al, Proc IMechE Part A: J Power and Energy, Vol. 229(8), IMechE 2015.
4. “Modeling A Scroll Compressor Using A Cartesian Cut-Cell Based CFD Methodology With Automatic Adaptive Meshing”, Ha-Duong Pham et al, 24th International Compressor Engineering Conference at Purdue, July 9-12, 2018.
5. “3D Transient CFD Simulation of Scroll Compressors with the Tip Seal”, Haiyang Gao et al, IOP Conf. Series: Materials Science and Engineering 90 (2015) 012034.
6. “CFD simulation of a dry scroll vacuum pump with clearances, solid heating and thermal deformation”, A Spille-Kohoff et al, IOP Conf. Series: Materials Science and Engineering 232 (2017).
7. “Structured Mesh Generation and Numerical Analysis of a Scroll Expander in an Open-Source Environment”, Ettore Fadiga et al, Energies 2020, 13, 666.
8. “Analysis of the Inner Fluid-Dynamics of Scroll Compressors and Comparison between CFD Numerical and Modelling Approaches”, Giovanna Cavazzini et al, Energies 2021, 14, 1158.
9. “Flow Modeling of Scroll Compressors and Expanders”, George Karagiorgis, PhD Thesis, The City University, August 1998.
10. “Heat Transfer and Leakage Analysis for R410A Refrigeration Scroll Compressor”, Bin Peng et al, ICMD 2017: Advances in Mechanical Design, pp 1453-1469.
11. “Implementation of scroll compressors into the Cordier diagram”, C Thomas et al, IOP Conf. Series: Materials Science and Engineering 604 (2019) 012079.


The post Challenges in Meshing Scroll Compressors appeared first on GridPro Blog.

► Automation of Hexahedral Meshing for Scroll Compressors
  25 Mar, 2022

Figure 1: Structured multi-block mesh for scroll compressors.

1167 words / 5 minutes read

Developing a three-dimensional mesh of a scroll compressor for reliable Computational Fluid Dynamics (CFD) Analysis is challenging. The challenges not only demand an automated meshing strategy but also a high-quality structured hexahedral mesh for accurate CFD results in a shorter turnaround time.


The geometric complexities of Meshing Scroll Compressors discussed in our previous article give us a window into the need for creating a high-quality structured mesh of scroll compressors.

A good mesher should handle the following challenges in a positive displacement machine:

  • The continuously deforming pocket volume.
  • Since compression is a complex, time-dependent fluid-dynamic phenomenon, the mesher should accurately “follow” the deformation imposed by the machine's moving parts without losing mesh quality.
  • The mesh should not suffer quality decay, uncontrolled refinement, or cell collapse near the contact points between the stator and the moving parts.
  • It should enable high numerical accuracy and a short simulation turnaround time.

Meshing Strategy

On a given plane, the scroll compressor's fluid region is a helical passage of varying thickness that expands and contracts with the crank angle. Topologically, however, the fluid domain is a rectangular passage, so the same approach used to mesh a rectangle can be applied to the scroll compressor.
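For readers who want to experiment with the geometry itself, scroll wraps are classically constructed as involutes of a base circle; the sketch below generates such a profile (generic textbook geometry, not GridPro's internal representation; the base radius and sampling are assumed values):

```python
import math

def involute(r_base, phi, offset=0.0):
    """Point on the involute of a circle of radius r_base at unwrap
    angle phi; `offset` rotates the starting angle of the involute."""
    x = r_base * (math.cos(phi + offset) + phi * math.sin(phi + offset))
    y = r_base * (math.sin(phi + offset) - phi * math.cos(phi + offset))
    return x, y

r_base = 3.0e-3                          # assumed 3 mm base circle
phis = [0.05 * i for i in range(377)]    # unwrap angles 0 .. ~6*pi (~3 wraps)
fixed_wall = [involute(r_base, p) for p in phis]
orbit_wall = [involute(r_base, p, offset=math.pi) for p in phis]

# The radius from the base-circle centre grows with the unwrap angle,
# producing the expanding helical passage described above.
print(f"first-point radius: {math.hypot(*fixed_wall[0]):.4f} m")
print(f"last-point radius:  {math.hypot(*fixed_wall[-1]):.4f} m")
```

The orbiting wall is the same involute phase-shifted by pi; the gap between the two curves is the crescent-shaped fluid passage that the blocking has to follow.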

Figure 2: Blocking for a linear and curved rectangular passage.
Animation video 1: Block creation by sweeping in GridPro.

Mesh Topology

One of the main obstacles to simulating scroll compressors is generating a dynamic mesh in the fluid domain, especially in the region of the flank clearance. The topology-based approach offers an ideal solution for such scenarios, primarily because the deforming fluid domain in the scroll compressor does not change the topology of the fluid region.

Animation video 2: Mesh at every time step for a scroll compressor.

Advantages of Topology based Meshing:

  • At each time step, when the orbiting rotor moves to a new position, the new mesh is generated without any user intervention.
Animation video 3: Mesh in the Discharge Chamber of the Scroll Compressor.
  • The blocking becomes a template for new variations of the scroll rotors, which makes it ideal for optimization and even for meshing variable-thickness scrolls.
  • Since the meshes share the same topology (the number of blocks, their connectivity, and the cells remain the same), interpolation of results between time steps is not needed. The computational effort is significantly reduced and the mesh quality is high, leading to reliable CFD analysis.

Flank Clearance and its Meshing Needs

The flank clearance can be as small as 0.05 mm, and adequate resolution of this clearance with low-skewness cells is the key reason structured meshes predict performance better than unstructured meshes.

Animation video 4: Mesh in the flank clearance at different scroll rotor positions. 12 layers of cells finely discretize the narrow flank clearance.

GridPro's dynamic boundary-conforming algorithm automatically moves the blocks into the compressed space and generates the mesh. The smoother ensures that the mesh has a homogeneous cell distribution and is orthogonal. Orthogonality is another important mesh-quality metric that sets structured meshes apart from moving-mesh approaches: it improves numerical accuracy and solution stability, and prevents numerical diffusion.
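Orthogonality can be quantified, for instance, as the angle between the line joining two adjacent cell centres and their shared face normal; below is a minimal sketch of that generic metric (a common textbook definition, not GridPro's internal quality measure):

```python
import math

def orthogonality_angle(cell_a, cell_b, face_normal):
    """Angle in degrees between the line joining two cell centres and
    the shared face normal; 0 degrees is perfectly orthogonal."""
    d = [b - a for a, b in zip(cell_a, cell_b)]
    norm_d = math.sqrt(sum(c * c for c in d))
    norm_n = math.sqrt(sum(c * c for c in face_normal))
    cos_t = sum(dc * nc for dc, nc in zip(d, face_normal)) / (norm_d * norm_n)
    cos_t = max(-1.0, min(1.0, cos_t))  # guard against round-off
    return math.degrees(math.acos(cos_t))

# A perfectly orthogonal pair of hex cells ...
print(orthogonality_angle((0, 0, 0), (1, 0, 0), (1, 0, 0)))  # prints 0.0
# ... and a sheared (skewed) pair, where the angle grows.
print(orthogonality_angle((0, 0, 0), (1, 0.5, 0), (1, 0, 0)))
```

Solvers lose accuracy and robustness as this angle grows, which is why a smoother that drives it toward zero matters in the squeezed clearance regions.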

Solid Scroll Meshing for FSI

Understanding the heat transfer towards and inside the solid components is important since the heat transfer influences the leakage gap size. Heat transfer analysis is especially required in vacuum pumps where the fluid has low densities and low mass flow rates.

Figure 4: Structured hexahedral mesh for the solid and fluid zones in a scroll compressor.


One of the major drawbacks of scroll compressors is the high working temperature (maximum temperatures of up to 250 degrees Celsius have been reported [Ref 3]). The higher temperatures excessively increase the thermal expansion of the scroll spirals, leading to a significant increase in internal leakage and thereby reducing efficiency.
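A back-of-the-envelope estimate shows why this matters. Using an assumed expansion coefficient and wrap dimension (illustrative numbers, not values from Ref [3]), the thermal growth is on the same order as, or larger than, the ~0.05 mm flank clearance discussed above:

```python
# Rough linear thermal-expansion estimate (illustrative numbers only).
alpha = 12e-6      # 1/K, typical for steel (assumed material)
L = 0.05           # m, assumed characteristic scroll wrap dimension
dT = 250 - 20      # K, reported max working temperature minus ambient

dL = alpha * L * dT
print(f"thermal growth:  {dL * 1e3:.3f} mm")
print("flank clearance: 0.050 mm")
# The estimated growth exceeds the flank clearance itself, so ignoring
# thermal deformation can misrepresent the leakage gaps entirely.
```

This is exactly why a conjugate-heat-transfer mesh that supports computing the thermal deformation of the scrolls is valuable.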

A mesh created for conjugate heat transfer has to model the compression chamber in between, the scrolls, and the convective boundary condition at the outer surface of the scrolls. Such a mesh makes it possible to obtain consistent temperatures in the solids and, from them, to calculate the thermal deformation of the scrolls.

Automation and Optimization of Scroll Compressor

Even though scroll compressors enjoy a high volumetric efficiency, in the range of 80-95%, there is still room for improvement. Optimization of the geometric parameters is necessary to reduce the performance degradation caused by leakage flows in the radial and axial clearances.

CFD as a design tool plays a significant role in optimizing scroll geometry. The major advantage of a 3D CFD simulation combined with fluid-structure interaction (FSI) is that the 3D geometry effect is directly considered. This makes CFD analysis highly suitable for the optimization of the design.

GridPro provides an excellent platform for automating hexahedral meshing because of its working principle and its Python-based API.

The key features are:

  • Quick set up of a CFD model from CAD geometry.
  • Parametric design of geometry can be incorporated into the same blocking and can be used even for variable thickness scrolls.
  • The mesh at each time interval is of high quality with orthogonal cells and even distribution.
  • This strategy also respects the space conservation law while conserving mass, momentum, energy, and species.

GridPro offers both process automation through scripting and API-level automation, so the automation can be triggered either outside of a CAD environment or inside it.

This flexibility enables companies and researchers to develop full-scale meshing automation with GridPro while the user interacts only with the CAD/CFD tool or a software-connector platform.

Figure 5: GridPro coupled with CAESES software connector to generate meshes automatically for every change in geometry.

Parting Remarks

Generating a structured mesh for the entire scroll domain, including the port region, is a very challenging task; the narrow gaps and complex features of the geometry can be very difficult to model. However, with GridPro's template-based approach and dynamic boundary-conforming technology, the setup is reduced to a few specifications, and users can develop their own automation modules for structured hexahedral meshing.

If scroll compressor meshing is your need and you are looking for solutions, feel free to reach out to us at:

Contact GridPro


1. “Analysis of the Inner Fluid-Dynamics of Scroll Compressors and Comparison between CFD Numerical and Modelling Approaches”, Giovanna Cavazzini et al, Advances in Energy Research: 2nd Edition, 2021.

2. “Structured Mesh Generation and Numerical Analysis of a Scroll Expander in an Open-Source Environment”, Ettore Fadiga et al, Energies 2020, 13, 666.

3. “Waste heat recovery for commercial vehicles with a Rankine process“, Seher, D.; Lengenfelder, T.; Gerhardt, J.; Eisenmenger, N.; Hackner, M.; Krinn, I., In Proceedings of the 21st Aachen Colloquium on Automobile and Engine Technology, Aachen, Germany, 8–10 October 2012; pp. 7–9.


The post Automation of Hexahedral Meshing for Scroll Compressors appeared first on GridPro Blog.

► GridPro Version 8.1 Released
  16 Feb, 2022

About GridPro Version 8.1 

The GridPro version 8.1 release marks the completion of yet another endeavor to provide a feature-rich, powerful, and reliable structured meshing package to the CAE community.

In every development cycle, we fulfill feature requests from our users, smooth out workflow challenges, and simplify features so that newer users can transition without much learning. Along the way, we keep improving the tool's performance to meet the growing demand for meshing challenging geometries.

Here is a quick Preview of the Major Features:

  • New License Monitoring System for Network license users.
  • Automatic grouping of Boundary faces for quicker workflow.
  • New Face display for better understanding of Topology.
  • Faster and more robust block extrusion for tubular geometries (ducts, arteries, volutes, etc.).

Major Highlights of Version 8.1

License Monitoring System

Network / Float Licenses

The License Management System now has GUI access to most of the features that a user or a system admin would look for, and the License Manager GUI displays all the license-related information. When the user loads the license file, the entire initialization process is completed before the license manager starts. The license manager also displays the number of licenses in use and the MAC ID/hostname of each user holding a license.

Node-locked / Served Licenses

The client license management system is now packaged along with the GUI. When the GUI is opened for the first time, a license popup appears asking the user to upload the license and initialize. The initialization process runs in the background and then opens the GUI. This removes the need to go through the list of commands in section 9.11 of the utility manual.

Smart Face Groups to Enhance user workflow in GridPro

The quest to improve the user experience and provide easy access to entities continues, and this version makes a major stride in that direction. From version 8.1 onwards, a list of smart face-group selections is available as part of the Selection Panel. From the blocking, the algorithm calculates the boundary faces and groups them based on certain checks. These face groups are displayed, and the user can select a single group or a combination of groups to further modify the structure or assign faces to surfaces.

The selection panel also has a temporary selection group to provide flexibility in the workflow. In the past, the user had to create a group to select entities in the GL. The present version enables an alternative workflow: users can right-click and drag in the GL to select faces or blocks. The selected blocks/faces/edges/corners are stored in the Selection Group, which is overwritten when the next selection is made; however, the user has the option to move the selection into one of the permanent groups.

Topology now has Face Display for Better Visualization

The topology now has a face display along with the corners and edges, which helps the user better perceive the faces and blocks displayed in the GL and grouped in individual groups. To reassure the user about the topology entities selected, the display mode automatically changes to face-display mode in the following scenarios:

  • User selects corners and edges into a group.
  • Wrap displays the new faces created after an operation.
  • Copy shows the blocks that are created when a face or faces are copied.
  • Extrude displays the output blocks created.

There are many such scenarios where the user is provided feedback on the operations visually.

Fast Blocking for Tubular Geometries (Arteries, Ducts, etc)

The improved centreline evaluation tool is now robust and fast, which speeds up topology building for geometries like pipes, human arteries, and ducts. The algorithm extrudes the given input along the centreline of the geometry, respecting changes in cross-sectional area, and is now available under the Extrude option in the GUI.

For more details about the new features, enhancements, and bug fixes, please refer to:

Supported Platforms

GridPro WS works on Windows 7 and above, Ubuntu 12.04 and above, RHEL 5.6 and above, and macOS 10 and above.

The support for the 32-bit platform has been discontinued for all operating systems.

GridPro AZ will be discontinued from version 9 onwards.


GridPro Version 8.1 can be downloaded by registering here.

All tutorials can be found in the Doc folder in the GridPro installation directory. Alternatively, they can be downloaded from the link here.

All earlier software versions can be found in the Download sections.


The post GridPro Version 8.1 Released appeared first on GridPro Blog.

Hanley Innovations top

► Aerodynamics of a golf ball
  29 Mar, 2022

 Stallion 3D is an aerodynamics analysis software package that can be used to analyze golf balls in flight. The software runs on MS Windows 10 & 11 and can compute the lift, drag and moment coefficients to determine the trajectory.  The STL file, even with dimples, can be read directly into Stallion 3D for analysis.

What we learn from the aerodynamics:

  • The spinning golf ball produces lift and drag similar to an airplane wing
  • Trailing vortices can be seen at the "wing tips"
  • The extra lift helps the ball to travel further

Stallion 3D's strengths are:

  • The built-in Reynolds Averaged Navier-Stokes equations provide high fidelity CFD solutions
  • The grid is generated automatically 
  • Built-in  menus are used to specify speed, angle, altitude and even spin
  • Built-in visualization
  • The computed aerodynamic coefficients can be used to determine the trajectory of the ball
  • The software runs on your laptop or desktop under Windows 7, 10 and 11
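As a rough illustration of how such coefficients determine the trajectory, the sketch below integrates a point-mass ball flight with constant lift and drag coefficients (all input values are assumed for illustration, not Stallion 3D output):

```python
import math

# Simple point-mass trajectory from lift/drag coefficients.
# Illustrative values; a real run would take CL, CD from the CFD.
m, d = 0.0459, 0.0427          # golf ball mass (kg) and diameter (m)
A = math.pi * (d / 2) ** 2     # frontal area (m^2)
rho = 1.225                    # sea-level air density (kg/m^3)
CL, CD = 0.20, 0.25            # assumed lift and drag coefficients
g, dt = 9.81, 0.001            # gravity and time step

x, y = 0.0, 0.0
v0, launch = 70.0, math.radians(12)            # assumed launch conditions
vx, vy = v0 * math.cos(launch), v0 * math.sin(launch)

while y >= 0.0:
    v = math.hypot(vx, vy)
    q = 0.5 * rho * v * v * A                  # dynamic pressure * area
    drag_x, drag_y = -q * CD * vx / v, -q * CD * vy / v
    # Lift acts perpendicular to the velocity (backspin: rotate v by +90 deg).
    lift_x, lift_y = -q * CL * vy / v, q * CL * vx / v
    vx += (drag_x + lift_x) / m * dt
    vy += ((drag_y + lift_y) / m - g) * dt
    x += vx * dt
    y += vy * dt

print(f"carry distance: {x:.0f} m")
```

The lift term is why the spinning ball carries much farther than a plain ballistic arc, which is the "extra lift" point above.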
More information about Stallion 3D can be found at
Thanks for reading 🙋

► Accurate Aircraft Performance Predictions using Stallion 3D
  26 Feb, 2020

Stallion 3D uses your CAD design to simulate the performance of your aircraft.  This enables you to verify your design and compute quantities such as cruise speed, power required and range at a given cruise altitude. Stallion 3D is used to optimize the design before moving forward with building and testing prototypes.

The table below shows the results of Stallion 3D around the cruise angles of attack of the Cessna 402c aircraft.  The CAD design can be obtained from the OpenVSP hangar.

The results were obtained by simulating 5 angles of attack in Stallion 3D on an ordinary laptop computer running MS Windows 10. Given the aircraft geometry and flight conditions, Stallion 3D computed the CL, CD, L/D and other aerodynamic quantities. With these accurate aerodynamic results, preliminary performance data such as cruise speed, power, range and endurance can be obtained.

Lift Coefficient versus Angle of Attack computed with Stallion 3D

Lift to Drag Ratio versus True Airspeed at 10,000 feet

Power Required versus True Airspeed at 10,000 feet

The Stallion 3D results show good agreement with the published data for the Cessna 402. For example, the cruise speed of the aircraft at 10,000 feet is around 140 knots, which coincides with the speed at the maximum L/D (best range) shown in the graph and table above.
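The chain from aerodynamic coefficients to performance numbers is short; below is a hedged sketch using the standard steady-level-flight relations (the weight, wing area, and coefficients are assumed illustrative values, not the actual table data):

```python
import math

# Steady level flight: L = W fixes the speed, and P = D * V gives the
# power required. All inputs are assumed illustrative values.
W = 3000 * 9.81        # aircraft weight, N (assumed ~3000 kg mass)
S = 21.0               # wing area, m^2 (assumed)
rho = 0.905            # air density at ~10,000 ft, kg/m^3
CL, CD = 0.45, 0.035   # assumed cruise-point coefficients from the CFD

V = math.sqrt(2 * W / (rho * S * CL))   # from L = 0.5*rho*V^2*S*CL = W
D = W * CD / CL                         # since L = W, drag = W / (L/D)
P = D * V                               # power required, W

print(f"cruise speed: {V:.1f} m/s ({V * 1.944:.0f} kt)")
print(f"power required: {P / 1e3:.0f} kW")
```

Sweeping CL and CD over the simulated angles of attack reproduces curves like the L/D-versus-airspeed and power-required plots referenced above.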

 More information about Stallion 3D can be found at the following link.

About Hanley Innovations
Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit >

► 5 Tips For Excellent Aerodynamic Analysis and Design
    8 Feb, 2020
Stallion 3D analysis of Uber Elevate eCRM-100 model

Being the best aerodynamics engineer requires meticulous planning and execution.  Here are 5 steps you can follow to start your journey to becoming one of the best aerodynamicists.

1.  Airfoils analysis (VisualFoil) - the wing will not be better than the airfoil. Start with the best airfoil for the design.

2.  Wing analysis (3Dfoil) - know the benefits/limits of taper, geometric & aerodynamic twist, dihedral angles, sweep, induced drag and aspect ratio.

3. Stability analysis (3Dfoil) - longitudinal & lateral static & dynamic stability analysis.  If the airplane is not stable, it might not fly (well).

4. High Lift (MultiElement Airfoils) - airfoil arrangements can do wonders for takeoff, climb, cruise and landing.

5. Analyze the whole arrangement (Stallion 3D) - this is the best information you will get until you flight test the design.
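As an example of the wing-analysis trade-offs in step 2, the classical finite-wing relation CDi = CL² / (π·e·AR) shows how aspect ratio drives induced drag (the span efficiency e and the CL below are assumed typical values):

```python
import math

def induced_drag_coeff(CL, AR, e=0.85):
    """Classical finite-wing induced drag: CDi = CL^2 / (pi * e * AR).
    e is the span efficiency factor (assumed typical value)."""
    return CL ** 2 / (math.pi * e * AR)

# Doubling the aspect ratio at the same lift coefficient halves
# the induced drag, at the cost of structural weight and stiffness.
for AR in (6, 12):
    print(f"AR={AR:2d}: CDi = {induced_drag_coeff(0.8, AR):.4f}")
```

This is the kind of taper/aspect-ratio trade a tool like 3Dfoil lets you explore before committing to a planform.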

About Hanley Innovations
Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit >

► Accurate Aerodynamics with Stallion 3D
  17 Aug, 2019

Stallion 3D is an extremely versatile tool for 3D aerodynamics simulations.  The software solves the 3D compressible Navier-Stokes equations using novel algorithms for grid generation, flow solutions and turbulence modeling. 

The proprietary grid generation and immersed boundary methods find objects arbitrarily placed in the flow field and then automatically place an accurate grid around them without user intervention. 

Stallion 3D's algorithms are fine-tuned to analyze inviscid flow with minimal losses. The above figure shows the surface pressure of the BD-5 aircraft (obtained from the OpenVSP hangar) using the compressible Euler algorithm.

Stallion 3D solves the Reynolds Averaged Navier-Stokes (RANS) equations using a proprietary implementation of the k-epsilon turbulence model in conjunction with an accurate wall function approach.

Stallion 3D can be used to solve aerodynamics problems about complex geometries in subsonic, transonic and supersonic flows.  The software computes and displays the lift, drag and moments for complex geometries in the STL file format.  Actuator discs (up to 100) can be added to simulate prop wash for propeller and VTOL/eVTOL aircraft analysis.

Stallion 3D is a versatile and easy-to-use software package for aerodynamic analysis.  It can be used for computing performance and stability (both static and dynamic) of aerial vehicles including drones, eVTOL aircraft, light airplanes and dragons (above graphics via Thingiverse).

More information about Stallion 3D can be found at:

► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse

Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only an STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator discs (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2-D plots of Cp and other quantities along constant-coordinate lines on the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017

Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.

More information about the software can be found at the following url:

Thanks for reading.

CFD and others... top

► Is High-Order Wall-Modeled Large Eddy Simulation Ready for Prime Time?
  27 Dec, 2022

During the past summer, AIAA successfully organized the 4th High Lift Prediction Workshop (HLPW-4) concurrently with the 3rd Geometry and Mesh Generation Workshop (GMGW-3), and the results are documented on a NASA website. For the first time in the workshop's history, scale-resolving approaches were included in addition to the Reynolds-Averaged Navier-Stokes (RANS) approach. These approaches were covered by three Technology Focus Groups (TFGs): High Order Discretization; Hybrid RANS/LES; and Wall-Modeled LES (WMLES) and Lattice-Boltzmann.

The benchmark problem is the well-known NASA high-lift Common Research Model (CRM-HL), which is shown in the following figure. It contains many difficult-to-mesh features such as narrow gaps and slat brackets. The Reynolds number based on the mean aerodynamic chord (MAC) is 5.49 million, which makes wall-resolved LES (WRLES) prohibitively expensive.
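The expense argument can be made concrete with the well-known grid-point scaling estimates for LES of wall-bounded flows (Choi & Moin 2012: wall-resolved scales roughly as Re^(13/7), wall-modeled as Re). The sketch below compares only the two scalings at the workshop Reynolds number, ignoring prefactors:

```python
# Grid-point scaling estimates for LES of wall-bounded flows
# (Choi & Moin 2012): wall-resolved ~ Re^(13/7), wall-modeled ~ Re.
# Prefactors are omitted, so only the ratio is meaningful here.
Re = 5.49e6  # based on mean aerodynamic chord, as in the workshop case

ratio = Re ** (13.0 / 7.0) / Re  # = Re^(6/7)
print(f"WRLES needs roughly {ratio:.1e}x the grid points of WMLES")
```

The ratio comes out in the hundreds of thousands at this Reynolds number, which is why wall modeling is the only practical LES route for the CRM-HL.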

The geometry of the high lift Common Research Model

University of Kansas (KU) participated in two TFGs: High Order Discretization and WMLES. We learned a lot during the productive discussions in both TFGs. Our workshop results demonstrated the potential of high-order LES in reducing the number of degrees of freedom (DOFs) but also contained some inconsistency in the surface oil-flow prediction. After the workshop, we continued to refine the WMLES methodology. With the addition of an explicit subgrid-scale (SGS) model, the wall-adapting local eddy-viscosity (WALE) model, and the use of an isotropic tetrahedral mesh produced by the Barcelona Supercomputing Center, we obtained very good results in comparison to the experimental data. 

At the angle of attack of 19.57 degrees (free-air), the computed surface oil flows agree well with the experiment with a 4th-order method using a mesh of 2 million isotropic tetrahedral elements (for a total of 42 million DOFs/equation), as shown in the following figures. The pizza-slice-like separations and the critical points on the engine nacelle are captured well. Almost all computations produced a separation bubble on top of the nacelle, which was not observed in the experiment. This difference may be caused by a wire near the tip of the nacelle used to trip the flow in the experiment. The computed lift coefficient is within 2.5% of the experimental value. A movie is shown here.        

Comparison of surface oil flows between computation and experiment 

Comparison of surface oil flows between computation and experiment 

Here are some lessons we learned from this case. Besides the space and time discretization methods, the computational mesh and the SGS model strongly affect WMLES results. 
  • Since we obtain wall model data from the 2nd element away from the wall, it is important that isotropic elements be used near solid walls to ensure that turbulent eddies are resolved well there. That's why we prefer tetrahedral elements for complex geometries since one can always generate isotropic elements. In other words, inviscid meshes are preferred for WMLES!

  • For very under-resolved turbulent flow, the use of an explicit SGS model such as WALE produces more accurate and robust results than a shock-capturing limiter. It is quite difficult to determine the appropriate amount of limiting.  
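For context, the algebraic core of a typical equilibrium wall model is a log-law inversion: given the LES velocity sampled at a height y off the wall (here, at the 2nd element), solve for the friction velocity u_tau. A minimal Newton-iteration sketch (standard log-law constants; a generic illustration, not the solver's actual implementation):

```python
import math

def friction_velocity(u_les, y, nu, kappa=0.41, B=5.2):
    """Solve u/u_tau = (1/kappa) * ln(y * u_tau / nu) + B for u_tau
    by Newton iteration (equilibrium log-law wall model)."""
    u_tau = math.sqrt(nu * u_les / y)   # initial guess from linear sublayer
    for _ in range(50):
        f = u_tau * (math.log(y * u_tau / nu) / kappa + B) - u_les
        df = math.log(y * u_tau / nu) / kappa + B + 1.0 / kappa
        step = f / df
        u_tau -= step
        if abs(step) < 1e-12 * u_tau:
            break
    return u_tau

# Sampled LES velocity of 20 m/s at y = 1 mm in air (nu ~ 1.5e-5 m^2/s);
# illustrative numbers only.
u_tau = friction_velocity(20.0, 1e-3, 1.5e-5)
y_plus = 1e-3 * u_tau / 1.5e-5
print(f"u_tau = {u_tau:.3f} m/s, sampling-point y+ = {y_plus:.0f}")
```

The resulting u_tau sets the wall shear stress fed back to the LES as a boundary condition, which is why the quality of the sampled velocity (and hence isotropic near-wall elements) matters so much.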
The recent progress has been documented in an AIAA Journal paper, and an upcoming conference paper in SciTech 2023. The latest high-order results indicate that high-order LES can reduce the total DOFs by an order of magnitude compared to 2nd order methods. We believe it is ready for prime time for high-lift configurations, turbomachinery, and race car aerodynamics. You are welcome to try high-order WMLES by getting the flow solver from   

► A Benchmark for Scale Resolving Simulation with Curved Walls
  28 Jun, 2021

Multiple international workshops on high-order CFD methods (e.g., 1, 2, 3, 4, 5) have demonstrated the advantage of high-order methods for scale-resolving simulation such as large eddy simulation (LES) and direct numerical simulation (DNS). The most popular benchmark from the workshops has been the Taylor-Green (TG) vortex case. I believe the following reasons contributed to its popularity:

  • Simple geometry and boundary conditions;
  • Simple and smooth initial condition;
  • Effective indicator for resolution of disparate space/time scales in a turbulent flow.

Using this case, we are able to assess the relative efficiency of high-order schemes over a 2nd order one with the 3-stage SSP Runge-Kutta algorithm for time integration. The 3rd order FR/CPR scheme turns out to be 55 times faster than the 2nd order scheme to achieve a similar resolution. The results will be presented in the upcoming 2021 AIAA Aviation Forum.

Unfortunately the TG vortex case cannot assess turbulence-wall interactions. To overcome this deficiency, we recommend the well-known Taylor-Couette (TC) flow, as shown in Figure 1.


Figure 1. Schematic of the Taylor-Couette flow (r_i/r_o = 1/2)

The problem has a simple geometry and boundary conditions. The Reynolds number (Re) is based on the gap width and the inner wall velocity. When Re is low (~10), the problem has a steady laminar solution, which can be used to verify the order of accuracy for high-order mesh implementations. We choose Re = 4000, at which the flow is turbulent. In addition, we mimic the TG vortex by designing a smooth initial condition, and also employing enstrophy as the resolution indicator. Enstrophy is the integrated vorticity magnitude squared, which has been an excellent resolution indicator for the TG vortex. Through a p-refinement study, we are able to establish the DNS resolution. The DNS data can be used to evaluate the performance of LES methods and tools. 
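For reference, the enstrophy used as the resolution indicator is the volume integral of the squared vorticity magnitude (some authors include a factor of 1/2):

```latex
\mathcal{E}(t) \;=\; \int_{\Omega} \lvert \boldsymbol{\omega} \rvert^{2} \, \mathrm{d}V,
\qquad \boldsymbol{\omega} \;=\; \nabla \times \mathbf{u}
```

Because vorticity is weighted toward the smallest resolved scales, the enstrophy history is a sensitive measure of whether those scales are captured, which is what makes it a good indicator for both the TG vortex and this benchmark.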


Figure 2. Enstrophy histories in a p-refinement study

A movie showing the transition from a regular laminar flow to a turbulent one is posted here. One can clearly see vortex generation, stretching, tilting, and breakdown in the transition process. Details of the benchmark problem have been published in Advances in Aerodynamics.
► The Darkest Hour Before Dawn
    2 Jan, 2021

Happy 2021!

The year of 2020 will be remembered in history more than the year of 1918, when the last great pandemic hit the globe. As we speak, daily new cases in the US are on the order of 200,000, while the daily death toll oscillates around 3,000. According to many infectious disease experts, the darkest days may still be to come. In the next three months, we all need to do our very best by wearing a mask, practicing social distancing and washing our hands. We are also seeing a glimmer of hope with several recently approved COVID vaccines.

2020 will be remembered more for what Trump tried and is still trying to do: to overturn the results of a fair election. His accusations of widespread election fraud were proven wrong in Georgia and Wisconsin through multiple hand recounts. If there was any truth to the accusations, the paper recounts would have uncovered the fraud, because computer hackers or software cannot change paper votes.

Trump's dictatorial habits were there for the world to see in the last four years. Given another 4-year term, he might just turn a democracy into a Trump dictatorship. That's precisely why so many voted in the middle of a pandemic. Biden won the popular vote by over 7 million, and won the electoral college in a landslide. Many churchgoers support Trump because they dislike Democrats' stances on abortion, LGBT rights, etc. However, if a Trump dictatorship becomes reality, religious freedom may not exist any more in the US.

Is the darkest day going to be January 6th, 2021, when Trump will make a last-ditch effort to overturn the election results in the Electoral College certification process? Everybody knows it is futile, but it will give Trump another opportunity to extort money from his supporters.   

But, the dawn will always come. Biden will be the president on January 20, 2021, and the pandemic will be over, perhaps as soon as 2021.

The future of CFD is, however, as bright as ever. On the front of large eddy simulation (LES), high-order methods and GPU computing are making LES more efficient and affordable. See a recent story from GE.

the darkest hour is just before dawn...

► Facts, Myths and Alternative Facts at an Important Juncture
  21 Jun, 2020
We live in an extraordinary time in modern human history. A global pandemic did the unthinkable to billions of people: a nearly total lock-down for months. Like many universities in the world, KU has been closed to students since early March of 2020, with all courses offered online.

Millions watched in horror when George Floyd was murdered, and when a 75-year-old man was shoved to the ground and started bleeding from the back of his skull...

Meanwhile, Trump and his allies routinely ignore facts, fabricate alternative facts, and advocate often-debunked conspiracy theories to push his agenda. The political system designed by the founding fathers is assaulted from all directions. The rule of law and the free press are attacked on a daily basis. One often wonders how we managed to get to this point, and if the political system can survive the constant sabotage...It appears the struggle between facts, myths and alternative facts hangs in the balance.

In any scientific discipline, conclusions are drawn, and decisions are made based on verifiable facts. Of course, we are humans, and honest mistakes can be made. There are others, who push alternative facts or misinformation with ulterior motives. Unfortunately, mistaken conclusions and wrong beliefs are sometimes followed widely and become accepted myths. Fortunately, we can always use verifiable scientific facts to debunk them.

There have been many myths in CFD, and quite a few have been rebutted. Some have continued to persist. I'd like to refute several in this blog. I understand some of the topics can be very controversial, but I welcome fact-based debate.

Myth No. 1 - My LES/DNS solution has no numerical dissipation because a central-difference scheme is used.

A central finite difference scheme is indeed free of numerical dissipation in space. However, the time integration scheme inevitably introduces both numerical dissipation and dispersion. Since DNS/LES is unsteady in nature, the solution is not free of numerical dissipation.  
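This is easy to check on the scalar model problem: a central-difference discretization of linear advection has purely imaginary semi-discrete eigenvalues, z = iθ, yet a three-stage Runge-Kutta update damps those modes. A minimal sketch (using the classical RK3 stability polynomial; a production code's scheme may differ):

```python
def rk3_amplification(theta):
    """Amplification factor G(z) = 1 + z + z^2/2 + z^3/6 of a 3-stage,
    3rd-order Runge-Kutta scheme, evaluated on the imaginary axis
    z = i*theta, where a non-dissipative central-difference spatial
    operator places all of its eigenvalues."""
    z = 1j * theta
    return 1 + z + z**2 / 2 + z**3 / 6

# |G| < 1 for 0 < theta < sqrt(3): the fully discrete scheme is
# dissipative even though the spatial operator is not.
for theta in (0.5, 1.0, 1.5):
    print(theta, abs(rk3_amplification(theta)))
```

The damping grows with θ = kΔt, so the high-frequency content of an unsteady DNS/LES is always dissipated to some degree by the time integration.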

Myth No. 2 - You should use non-dissipative schemes in LES/DNS because upwind schemes have too much numerical dissipation.

It sounds reasonable, but it is far from true. We all agree that fully upwind schemes (the stencil shown in Figure 1) are bad. Upwind-biased schemes, on the other hand, are not necessarily bad at all. In fact, in a numerical test with the Burgers equation [1], the upwind-biased scheme performed better than the central difference scheme because of its smaller dispersion error. In addition, the numerical dissipation in the upwind-biased scheme makes the simulation more robust, since under-resolved high-frequency waves are naturally damped.

Figure 1. Various discretization stencils for the red point
The Riemann solver used in the DG/FR/CPR scheme also introduces a small amount of dissipation. However, because of its small dispersion error, it outperforms the central difference and upwind-biased schemes. This study shows that dissipation and dispersion characteristics are equally important. Higher order schemes clearly perform better than a low order non-dissipative central difference scheme.
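The dispersion/dissipation trade-off can be made concrete with a modified-wavenumber check. A sketch using the standard 2nd-order central and 3rd-order upwind-biased stencils (the schemes tested in the referenced Burgers study may differ):

```python
import numpy as np

def symbol_central2(phi):
    """Fourier symbol of h*d/dx for the 2nd-order central difference:
    purely imaginary, i.e., dispersion error only, no dissipation."""
    return 1j * np.sin(phi)

def symbol_upwind3(phi):
    """Fourier symbol of h*d/dx for the standard 3rd-order upwind-biased
    stencil (u_{j-2} - 6 u_{j-1} + 3 u_j + 2 u_{j+1}) / 6."""
    return (np.exp(-2j * phi) - 6 * np.exp(-1j * phi)
            + 3 + 2 * np.exp(1j * phi)) / 6

phi = 0.5  # a moderately resolved wave, k*h = 0.5
# Imaginary part = effective wavenumber (exact value: phi);
# real part = dissipation (positive means damping for a right-running wave).
print(abs(symbol_upwind3(phi).imag - phi), "vs",
      abs(symbol_central2(phi).imag - phi))
```

For this mode the upwind-biased scheme's dispersion error is over an order of magnitude smaller than the central scheme's, at the price of a small positive real part that damps under-resolved waves — exactly the robustness benefit noted above.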

Myth No. 3 - The Smagorinsky model is a physics-based sub-grid-scale (SGS) model.

There have been numerous studies based on experimental or DNS data which show that the SGS stress produced with the Smagorinsky model does not correlate with the true SGS stress. The role of the model is instead to add numerical dissipation to stabilize the simulation. The model coefficient is usually determined by matching a certain turbulent energy spectrum. This suggests that the model is purely numerical in nature, calibrated for certain numerical schemes using a particular turbulent energy spectrum. The calibration is not universal, because many simulations have produced worse results with the model.
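For concreteness, here is the model in code form with a commonly quoted constant (an illustrative helper, not tied to any particular solver):

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Smagorinsky eddy viscosity nu_t = (C_s * Delta)^2 * |S|, where S
    is the symmetric part of the velocity gradient tensor and
    |S| = sqrt(2 S_ij S_ij). C_s ~ 0.17 comes from matching a Kolmogorov
    spectrum, i.e., a calibration rather than first-principles physics."""
    s = 0.5 * (grad_u + grad_u.T)                 # resolved strain-rate tensor
    s_mag = np.sqrt(2.0 * np.einsum("ij,ij->", s, s))
    return (c_s * delta) ** 2 * s_mag

# Pure shear du/dy = 1 gives |S| = 1, so nu_t = (C_s * Delta)^2.
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(smagorinsky_nu_t(grad_u, delta=1.0))
```

Nothing in this expression references the true SGS stress; the single constant simply sets how much dissipation is added, which is why its "best" value ends up depending on the numerical scheme and the flow.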

► What Happens When You Run a LES on a RANS Mesh?
  27 Dec, 2019

Surely, you will get garbage because there is no way your LES will have any chance of resolving the turbulent boundary layer. As a result, your skin friction will be way off. Therefore, your drag and lift will be a total disaster.

To put this point of view to the test, we recently embarked on a numerical experiment: an implicit large eddy simulation (ILES) of the NASA CRM high-lift configuration from the 3rd AIAA High-Lift Prediction Workshop. The flow conditions are: Mach = 0.2, Reynolds number = 3.26 million based on the mean aerodynamic chord, and angle of attack = 16 degrees.

A quadratic (Q2) mesh was generated by Dr. Steve Karman of Pointwise, and is shown in Figure 1.

 Figure 1. Quadratic mesh for the NASA CRM high-lift configuration (generated by Pointwise)

The mesh has roughly 2.2 million mixed elements, and is highly clustered near the wall with an average equivalent y+ value smaller than one. A p-refinement study was conducted to assess the mesh sensitivity using our high-order LES tool based on the FR/CPR method, hpMusic. Simulations were performed with solution polynomial degrees of p = 1, 2 and 3, corresponding to 2nd, 3rd and 4th order accuracy, respectively. No wall-model was used. Needless to say, the higher order simulations captured finer turbulence scales, as shown in Figure 2, which displays iso-surfaces of the Q-criterion colored by the Mach number.

p = 1

p = 2

p = 3
Figure 2. Iso-surfaces of the Q-criterion colored by the Mach number

Clearly the flow is mostly laminar on the pressure side, and transitional/turbulent on the suction side of the main wing and the flap. Although the p = 1 simulation captured the fewest scales, it still correctly identified the laminar and turbulent regions.

The drag and lift coefficients from the present p-refinement study are compared with experimental data from NASA in Table I. Although the 2nd order (p = 1) results are quite different from those of the higher orders, the 3rd and 4th order results are very close, demonstrating very good p-convergence in both the lift and drag coefficients. The lift agrees better with the experimental data than the drag, bearing in mind that the experiment includes wind tunnel wall effects and other small instruments which are not present in the computational model.

Table I. Comparison of lift and drag coefficients with experimental data

This exercise seems to contradict the common-sense logic stated at the beginning of this blog. So what happened? The answer is that in this high-lift configuration the dominant force is due to pressure rather than friction. In fact, 98.65% of the drag and 99.98% of the lift are due to the pressure force. For such flow problems, running an LES on a RANS mesh (with sufficient accuracy) may produce reasonable predictions in drag and lift. More studies are needed to draw any definite conclusion. We would like to hear from you if you have done something similar.
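The pressure/friction split quoted above comes from integrating the two surface force contributions separately. A toy sketch of such a decomposition over discrete surface panels (hypothetical helper and data, not hpMusic's actual post-processing):

```python
import numpy as np

def pressure_friction_split(p, tau_w, normals, tangents, areas, drag_dir):
    """Split the integrated surface force into pressure and friction parts.
    p, tau_w: per-panel pressure and wall-shear magnitudes; normals point
    into the body so p*n is the force on it; tangents align with the local
    wall shear. Returns (pressure drag, friction drag, pressure %)."""
    f_p = (p[:, None] * normals * areas[:, None]).sum(axis=0)
    f_f = (tau_w[:, None] * tangents * areas[:, None]).sum(axis=0)
    d_p, d_f = f_p @ drag_dir, f_f @ drag_dir
    return d_p, d_f, 100.0 * d_p / (d_p + d_f)

# Two hypothetical panels: one pressure-loaded, one carrying wall shear.
p = np.array([2.0, 0.0])
tau_w = np.array([0.0, 0.5])
normals = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tangents = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
areas = np.array([1.0, 1.0])
d_p, d_f, pct = pressure_friction_split(p, tau_w, normals, tangents,
                                        areas, np.array([1.0, 0.0, 0.0]))
print(d_p, d_f, pct)
```

When the pressure share is as dominant as reported here, resolving the wall shear accurately matters much less for the integrated loads — the crux of the post's conclusion.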

This study will be presented in the forthcoming AIAA SciTech conference, to be held on January 6th to 10th, 2020 in Orlando, Florida. 

► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but be dissipative for non-resolvable high wave numbers. In this way, the simulation will resolve a wide turbulent spectrum while damping the non-resolvable small eddies to prevent energy pile-up, which can cause the simulation to diverge.

We want to emphasize the equal importance of both numerical dissipation and dispersion, which can be generated from both the space and time discretizations. It is well-known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., explicit Runge-Kutta schemes.     

We recently analysed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, with the same time integration approach (a Runge-Kutta scheme) and the same time step. The FCD schemes have an 8th order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees-of-freedom (DOFs) in Figure 1. The initial condition is a Gaussian profile, and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the differences.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error from the central FD scheme caused it to miss the peak, and also generate large errors elsewhere.
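The Gaussian advection test is easy to reproduce in spirit. The sketch below uses a standard 6th-order central and a 5th-order upwind-biased stencil with a classical RK3 step (the post's exact schemes and time integrator may differ); with 36 DOFs and 200 traversals, printing the error norms lets you compare the schemes' long-time behavior:

```python
import numpy as np

def advect(u0, coeffs, offsets, n_steps, cfl=0.4):
    """Advance u_t + u_x = 0 on a periodic unit grid. coeffs[k] multiplies
    u at grid offset offsets[k]; time stepping is classical 3-stage,
    3rd-order Runge-Kutta with dt = cfl * h."""
    def rhs(u):
        # rhs = -h * du/dx; the 1/h cancels against dt = cfl * h below.
        return -sum(c * np.roll(u, -k) for c, k in zip(coeffs, offsets))
    u = u0.copy()
    for _ in range(n_steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * cfl * k1)
        k3 = rhs(u - cfl * k1 + 2.0 * cfl * k2)
        u = u + cfl * (k1 + 4.0 * k2 + k3) / 6.0
    return u

n = 36                                           # 36 DOFs, as in the post
x = np.arange(n) / n
u0 = np.exp(-0.5 * ((x - 0.5) * n / 2.0) ** 2)   # marginally resolved Gaussian
central6 = np.array([-1, 9, -45, 0, 45, -9, 1]) / 60.0   # 6th-order central
upwind5 = np.array([-2, 15, -60, 20, 30, -3]) / 60.0     # 5th-order upwind-biased
steps = int(round(200 * n / 0.4))                # 200 domain traversals
u_c = advect(u0, central6, [-3, -2, -1, 0, 1, 2, 3], steps)
u_u = advect(u0, upwind5, [-3, -2, -1, 0, 1, 2], steps)
print("central L2 error:", np.linalg.norm(u_c - u0))
print("upwind-biased L2 error:", np.linalg.norm(u_u - u0))
```

Both stencils conserve the mean exactly and remain stable at this CFL; the interesting part is how the errors differ in character — phase scrambling for the central scheme versus damping of the marginal modes for the upwind-biased one.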

Finally, simulation results with the viscous Burgers' equation are shown in Figure 2, which compares the energy spectrum computed with various schemes against that of a direct numerical simulation (DNS).

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme, with a significant high-wave-number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. The upwind-biased FD scheme clearly outperformed the central FD scheme, since it resolved the same range of wave numbers without the energy pile-up.

AirShaper top

► World’s most aerodynamic suits - Interview with Deanna Panting, founder of Qwixskinz.
  31 Jul, 2023
Aerodynamic clothing can make a substantial difference when it comes to aerodynamic drag. Over the years, Qwixskinz has helped numerous athletes win medals & break records.
► Free Speed: how properly designed clothing can make Olympic athletes faster.
    5 Jul, 2023
Free Speed: how properly designed clothing can make Olympic athletes faster.
► How does a Wind Tunnel work?
    9 Jun, 2023
We discover how a closed loop Wind Tunnel works and the techniques used to condition the airflow to achieve accurate results.
► Reducing the drag of an AUV with AirShaper
  17 May, 2023
AUVs are becoming essential research tools for oceanographers, but the range of the onboard battery limits the lengths of missions. AirShaper CFD software was used to reduce the drag and improve the hydrodynamic efficiency of a new AUV design, increasing its range.
► Hydrofoil Design - America's Cup Technology for Commercial Products
  12 May, 2023
The America's Cup is the pinnacle of sail boat technology. The move to hydrofoils has opened up many opportunities which are finding their way to commercial products.
► Technology - Can it improve lives?
  12 May, 2023
Technology often comes with the promise to improve our lives. Is this so? And how long do those improvements last?

Convergent Science Blog top

► Leonardo Pagamonci Wins 2023 CONVERGE Academic Competition With Tandem Onshore Wind Turbine Study
    1 Sep, 2023

Leonardo Pagamonci

We’re thrilled to announce Leonardo Pagamonci, graduate student at the University of Florence, as the winner of the 2023 CONVERGE Academic Competition. The competition challenged students to design and run a novel CONVERGE simulation that demonstrates significant engineering knowledge, accurately reflects the real world, and represents progress for the engineering community. 

Leonardo, who is pursuing a Ph.D. in industrial engineering, developed an interest in wind energy during his studies. “It strongly caught my attention because it’s a very interesting, modern field. The wind energy sector is relatively new, compared to other energy sectors.”

For his Ph.D., Leonardo is combining wind energy with another passion of his: computational fluid dynamics (CFD). He is developing a modeling approach to study the aeroelastic response of the wind turbine blades, i.e., the mutual interaction between the rotor structure and aerodynamics. When he learned about the CONVERGE Academic Competition, he thought it was the perfect opportunity to put his new modeling approach to the test. For his submission, he performed an aero-servo-elastic study of tandem onshore wind turbines operating in an atmospheric boundary layer (ABL), with the upwind turbine undergoing a yaw maneuver.

Visualization of Leonardo’s CONVERGE simulation showing tandem onshore wind turbines operating in an atmospheric boundary layer. At T=500, the upwind turbine maneuvers to a 25° yaw angle.

“The goal of this project was to simulate the operation of two turbines in an atmospheric boundary layer with realistic wind field conditions using a control technique that is common for wind farms,” said Leonardo.

The geometry for his study consists of two 5 MW onshore turbines separated by a distance of 7 rotor diameters (Figure 1). To simulate the rotor, Leonardo employed CONVERGE’s actuator line model (ALM), which is a cost-efficient method to model the aeroelastic response of the rotor blades without needing to solve the 3D geometry. He also included an actuator line for the wind turbine tower in his model to account for the aerodynamic effects of the tower and the aeroelastic interactions between the tower and the blades. 

Figure 1: Mesh resolution around the rotors and contour visualization of the turbulent flow field in the simulation domain.

To conduct the aero-servo-elastic study, Leonardo coupled CONVERGE with OpenFAST, a multi-physics tool for simulating the coupled dynamic response of wind turbines, through a user-defined function in CONVERGE. With this approach, CONVERGE solves the flow domain, predicting the inflow velocities. These data are passed to OpenFAST and used as inputs to solve for the aerodynamics of the structure and calculate the new positions of the ALM nodes. Furthermore, Leonardo used a synthetic turbulence generator developed at the University of Florence [1] to generate the macro-structures of the turbulent wind conditions.

The purpose of Leonardo’s study was to investigate the effects of a yaw misalignment on the tandem wind turbines. Initially, the two rotors operate with zero yaw angle. At a specified time, the upwind rotor (T1) is controlled to maneuver to a 25° yaw angle. The effects of this maneuver on the downwind turbine (T2), as well as on the system as a whole, are then quantified.

Table 1 shows the results for aerodynamic power both before (pre) and after (post) the yaw maneuver. The yaw maneuver caused a decrease in performance in T1 and an increase in performance in T2, although of a smaller magnitude. Overall, the yaw maneuver resulted in a 3.6% decrease in performance for the whole system. The decrease in total power is likely because the yaw angle is not optimal. Further simulation studies of different angles could help identify an optimal configuration.

                 T1       T2       Tandem
Power_pre (kW)   2935     1263     4198
Power_post (kW)  2376     1672     4048
Delta            -559 kW  +409 kW  -3.6%
Table 1: Aerodynamic power before (pre) and after (post) the yaw maneuver for the upwind turbine (T1), downwind turbine (T2), and tandem system.
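The totals and the quoted 3.6% drop follow directly from the per-turbine numbers in Table 1:

```python
pre = {"T1": 2935, "T2": 1263}    # kW, from Table 1
post = {"T1": 2376, "T2": 1672}
total_pre, total_post = sum(pre.values()), sum(post.values())
delta_pct = 100 * (total_post - total_pre) / total_pre
print(total_pre, total_post, round(delta_pct, 1))  # 4198 4048 -3.6
```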

Looking at the structural response of the blades, Leonardo found a substantial redistribution of the loads following the yaw maneuver, with significant changes in the mean displacements of the blade tips (Figure 2). 

Figure 2: Top – Blade span distribution of blade deformation in the flapwise direction (line indicates mean values; shading indicates the standard deviation of the time series data). Bottom – Power spectral density (PSD) of the time series trends of blade tip displacement.

“Aeroelasticity is a very important aspect of wind turbine analysis, especially because horizontal-axis wind turbines have very large rotors,” Leonardo explained. “With such long, slender, and flexible blades, it is important to analyze the mutual interaction of the aerodynamics and the structure, since each one interacts with and modifies the response of the other.”

Being able to accurately predict these interactions becomes even more important when looking at larger wind farms, where the wakes from the upwind rows propagate to the downwind ones, affecting the performance of the entire wind farm. In addition, the structural response of each individual turbine must be taken into account. These kinds of studies are exactly what Leonardo has planned for the future using this methodology.

“This tool is applicable to a very wide range of analyses,” said Leonardo. “You could analyze more yaw maneuver angles to see which is optimal, look at a broad range of operating conditions, investigate cases where the turbines aren’t aligned with the wind, study a greater number of turbines, or simulate much larger turbines. And because the controller is available with this tool, the studies have another degree of realism.”

Leonardo’s work is not only extending the modeling capabilities of CONVERGE, but also enabling more realistic studies of complex wind turbine dynamics, which will ultimately help the wind energy industry continue to grow to meet rising consumer demand. We look forward to seeing more of Leonardo’s impressive work in the future!

Learn more about the CONVERGE Academic Program here.


[1] Balduzzi, F., Zini, M., Ferrara, G., and Bianchini, A., “Development of a Computational Fluid Dynamics Methodology to Reproduce the Effects of Macroturbulence on Wind Turbines and Its Application to the Particular Case of a VAWT,” Journal of Engineering for Gas Turbines and Power, 141(11), 2019. DOI: 10.1115/1.4044231

► In Memoriam: Scott Drennan – A Legacy of Dedication and Friendship
  18 Aug, 2023

Scott Drennan
November 5, 1962 – August 7, 2023

It is with heavy hearts that we mourn the passing and honor the life of Scott Drennan, a remarkable individual whose impact reached far beyond his professional achievements. As the director of both gas turbine and aftertreatment applications at Convergent Science, Scott’s journey was one of dedication, innovation, and unwavering support for his colleagues, friends, and family.

Scott joined Convergent Science in 2012, when the company was aiming to branch out into gas turbine and aftertreatment modeling. In search of someone who would own and evolve our presence in these new markets, Scott emerged as a natural choice to lead our endeavors because of his renowned reputation in the field. Relocating his family from California to Texas demonstrated not only his dedication but also his willingness to embrace new challenges. Scott’s contributions to our gas turbine solutions were nothing short of transformative, a reflection of his ability to drive progress. 

Throughout his years at the helm of the Aftertreatment team, Scott exhibited an inspiring passion for growth. He masterfully guided the team’s evolution, from nurturing talent to crafting the very training program that paved the way for groundbreaking aftertreatment modeling with CONVERGE. Scott’s commitment to validation laid the cornerstone for client acquisition, future benchmarks, and software development. His oversight of key initiatives, such as urea deposit and filter modeling, was a testament to his visionary leadership.

Scott was more than just a professional. His love for live music, sports, and culinary experiences showcased his zest for life. His ability to find hidden gems in gastronomy enriched every journey. As a friend and colleague, he radiated warmth, leaving memories of shared laughter and camaraderie from countless trips and projects.

Above all, Scott’s conversations were frequently punctuated with stories of his greatest treasures: his wife, Julie, and his three children. His dedication to family radiated as he spoke with pride about his daughter’s accomplishments and his boys’ martial arts victories and educational achievements. Scott’s anecdotes and wisdom on parenting forged a bond, reminding us of the shared joys and challenges of fatherhood.

Scott’s legacy will forever remain a testament to the power of friendship, the pursuit of excellence, and the importance of cherishing those we hold dear. As we grieve this immeasurable loss, let us remember the light he brought to our lives and extend our deepest condolences to his beloved family. Though he is no longer with us, his spirit lives on in the memories we share and the values he instilled. Rest in peace, dear friend.

**Following his wishes, in lieu of flowers, contributions may be made to the boys’ college funds at Codes for: Christopher Q17-G8X, Sean H5R-C42

► Streamline Your CONVERGE Workflow With In Situ Post-Processing
  17 Jul, 2023

Alexandre Minot

Senior Research Engineer

At Convergent Science, we recently selected ParaView Catalyst as our in situ post-processing solution for solving computational fluid dynamics (CFD) problems. ParaView Catalyst is a library that allows ParaView, an open-source data analysis and visualization program distributed by Kitware, to connect to simulation codes. With ParaView Catalyst, ParaView can access the simulation code’s data and post-process it on the fly directly on the high-performance computing (HPC) cluster. This feature eliminates the need to write large 3D results files. Additionally, you get results tailored to your application during the run.

Coupling with ParaView Catalyst allows you to track high frequency phenomena, monitor the convergence of your simulation, or simply have your results ready to go for your presentation at any time. Because in situ post-processing allows you to extract only the most important data from your simulation, it significantly reduces the size of the files you need to download from the computational server to your workstation.

While the simulation is running, CONVERGE uses ParaView Catalyst to open background instances of ParaView automatically. CONVERGE then shares its data with ParaView and triggers the run of a post-processing script. ParaView runs in parallel on the same HPC nodes as CONVERGE and accesses CONVERGE’s memory directly, guaranteeing fast and fully automatic data processing. ParaView will write only the data and images you asked for in the CONVERGE results directory.

Figure 1: Diagram of how Catalyst links CONVERGE and ParaView.

Suppose you want to visualize autoignition in a piston engine, a fast-moving phenomenon. In a typical CFD workflow, you would need to save the 3D data at a high frequency, potentially at every time-step, in order to capture the autoignition. At the end of the simulation, this large amount of data is downloaded onto the post-processing machine, where it has to be loaded again and processed for visualization.

For knock identification, we recommend extracting a temperature isosurface at 1700 K to visualize the main flame front, and an isosurface of pressure difference colored by the mass fraction of CH2O to identify the autoignition pockets. With ParaView Catalyst, CONVERGE can write out these isosurfaces directly during the simulation. For our knock demonstration case, this coupling decreases the total runtime of the simulation by about 20%, compared with saving 3D files at the same frequency. Since no post-processing of the 3D files is necessary, you can then directly load the isosurfaces in your favorite visualization tool.

There are two ways to configure in situ post-processing actions in CONVERGE. The first way is through predefined scripts in CONVERGE Studio. Using these predefined scripts, you can set up in situ post-processing in just a few clicks. No knowledge of ParaView is required to configure a Catalyst script, and everything is accessible directly in a classic CONVERGE Studio panel (Figure 2).

Figure 2: ParaView Catalyst panel in CONVERGE Studio.

Figure 3 shows an image of a slice generated during a spray simulation. Its extraction was set up directly in CONVERGE Studio using the ParaView Catalyst panel. Slices, which allow us to easily visualize flow, are among the most common CFD data extractions. By extracting slices at high frequency during the simulation, you can access more detailed information sooner than with a classic post-processing workflow.

Figure 3: Rendering of a slice output during a simulation of an example LES spray case.

The second way to configure in situ post-processing actions is to create a custom Catalyst script in ParaView. Creating your own post-processing scripts can be done easily before you start your simulation using Studio ParaView, our integration of the ParaView software available starting in CONVERGE Studio 3.1_10May2023. Using the Studio ParaView graphical user interface, you can set up your post-processing the way you would a classic post-processing workflow. Once configured, ParaView allows you to export your setup in the form of a Catalyst script, which is ready to be used by CONVERGE during the simulation.

Figure 4: ParaView Catalyst rendering of gas venting in a battery cell undergoing thermal runaway.

For example, Figure 4 shows a video of gas venting in a single cell undergoing thermal runaway in an e-bike battery pack. To generate the images for this video, we used ParaView to set up isosurfaces of H2, C2H2, and CH4 and exported the setup to a Catalyst script.

ParaView Catalyst allows you to extract only the most important data from your simulation in real time, enabling you to transfer results faster and incorporate them directly into your design review process. In situ post-processing with ParaView Catalyst filters the unnecessary data and saves only the data you need for your analysis.

Interested in finding out more about how ParaView Catalyst can help you streamline your CFD workflow? Contact us today!

► Capturing Heart Valve Dynamics With Implicit Fluid-Structure Interaction Modeling
  28 Jun, 2023

Wendy Lovinger

The heart is a vital organ that pumps blood throughout the body, carrying oxygen and nutrients critical to organ function and sustaining life. It is, nevertheless, susceptible to disease. Heart disease touches the lives of almost everyone. The line between a healthy heart and an unhealthy heart is a fine one. Modern medicine has made significant advances in the technology needed to successfully intervene in the event of illness, but the technology can always be improved. One of the areas where improvements can continue to be made is mechanical heart valves.

Determining whether an implanted mechanical heart valve will open and close properly based on the actual blood flow usually requires patient participation, a high-risk proposition. Using computational fluid dynamics (CFD) to model mechanical heart valves, on the other hand, is a low-cost, low-risk method to evaluate device performance before performing an invasive procedure.

In this blog post, we explain how we simulated an idealized 3D mechanical heart valve with a small leaflet-to-blood density ratio using CONVERGE. We validated our results against the data of Banks et al. (2018) [1].

We modeled the motion of the mechanical heart valve with CONVERGE's implicit fluid-structure interaction (FSI) solver. Because the density of blood is so close to the density of the heart valve, the added mass effect is significant, which can cause explicit FSI solvers to become unstable. CONVERGE's implicit FSI solver can account for the additional inertial forces from the added mass effect. The implicit method tightly couples the CFD solver with the six degree-of-freedom rigid-body FSI solver, iterating between the two within a single time-step until the solution converges.
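The added-mass instability, and why iterating within a time-step cures it, can be seen in a toy 1-DOF model (illustrative only, not CONVERGE's actual FSI algorithm):

```python
def coupled_acceleration(f_ext, m_solid, m_added, relax, max_it=100, tol=1e-12):
    """Toy added-mass problem: m_solid * a = f_ext - m_added * a, whose
    exact solution is a = f_ext / (m_solid + m_added). A partitioned
    scheme evaluates the fluid force with the previous iterate of a;
    relax=1 mimics a purely explicit staggering, which diverges whenever
    m_added/m_solid > 1 (a body barely denser than the fluid, like a
    valve leaflet in blood). Under-relaxed sub-iteration converges."""
    a = 0.0
    for it in range(1, max_it + 1):
        a_star = (f_ext - m_added * a) / m_solid   # body update, lagged fluid force
        a = relax * a_star + (1.0 - relax) * a     # under-relaxed coupling iterate
        if abs(a * (m_solid + m_added) - f_ext) < tol:
            return a, it
    return a, max_it

# Added mass 10x the solid mass, as for a low leaflet-to-blood density ratio.
a_impl, its = coupled_acceleration(1.0, 1.0, 10.0, relax=0.1)
a_expl, _ = coupled_acceleration(1.0, 1.0, 10.0, relax=1.0)
print(a_impl, its)   # converges toward 1/11
print(abs(a_expl))   # diverges: the added-mass instability
```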

This implicit coupling allows us to predict the movement of an FSI object submerged in a fluid of a similar or higher density, such as a mechanical heart valve in blood. Figure 1 shows that our implicit FSI solver can accurately model how an idealized heart valve opens and closes for a range of leaflet-to-blood density ratios.

Figure 1: Comparing the leaflet motion with published data at different density ratios

To capture the moving geometry of the mechanical heart valve, we used CONVERGE’s Cartesian cut-cell method with autonomous mesh generation. In some CFD solvers, creating an appropriate mesh for an FSI simulation can be challenging because you don’t know the motion profile ahead of time. In CONVERGE, the mesh is automatically regenerated near the FSI object at each time step, easily accommodating the motion without any additional setup. We also deployed our Adaptive Mesh Refinement (AMR) to refine the grid in areas of high velocity gradient, which allows us to accurately capture the changes in velocity around the valve leaflet.
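The flag-and-refine idea behind gradient-based AMR can be shown with a deliberately simplified one-dimensional sketch. CONVERGE’s AMR operates on the full 3D cut-cell mesh and also coarsens; the function below is hypothetical and only illustrates the criterion of splitting cells where the local velocity gradient exceeds a threshold.

```python
# Illustrative 1D sketch of a velocity-gradient AMR criterion: cells whose
# local velocity jump per unit length exceeds a threshold are flagged and
# split in two; others stay coarse. Purely didactic, not CONVERGE's algorithm.

def refine_by_gradient(x, u, threshold):
    """Return new cell-center coordinates, halving cells with steep gradients."""
    new_x = []
    for i in range(len(x) - 1):
        new_x.append(x[i])
        # finite-difference gradient between neighboring cell centers
        grad = abs(u[i + 1] - u[i]) / (x[i + 1] - x[i])
        if grad > threshold:
            new_x.append(0.5 * (x[i] + x[i + 1]))  # insert midpoint cell
    new_x.append(x[-1])
    return new_x

x = [0.0, 1.0, 2.0, 3.0, 4.0]
u = [0.0, 0.0, 5.0, 5.0, 5.0]      # sharp velocity jump between x=1 and x=2
print(refine_by_gradient(x, u, threshold=1.0))
# only the steep region gains resolution; the stagnant regions stay coarse
```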

Figure 2 shows four velocity contour images at different stages of the heart valve opening and closing. CONVERGE’s AMR refines the grid only where the velocity changes the most and leaves the grid coarser where the flow is stagnant, greatly reducing computational expense.

Figure 2: Velocity contour of idealized mechanical heart valve showing Adaptive Mesh Refinement and streamlines

Our results show you can accurately simulate an artificial heart valve with CONVERGE’s implicit FSI solver and autonomous meshing feature. Because CONVERGE allows you to easily modify your geometry, it is an excellent tool for evaluating the performance of different heart valve designs. Interested in finding out what other biomedical applications CONVERGE can be used for? Check out our biomedical webpage here!


[1] Banks, J.W., Henshaw, W.D., Schwendeman, D.W., and Tang, Q., “A Stable Partitioned FSI Algorithm for Rigid Bodies and Incompressible Flow in Three Dimensions,” Journal of Computational Physics, 373, 455-492, 2018. DOI: 10.1016/

► Analyzing Flashback in Hydrogen-Fueled Gas Turbines with CONVERGE
  19 Jun, 2023

Jameil Kolliyil

Engineer, Technical Marketing

From refineries to planes, gas turbines are vital to several industries. In addition to providing thrust to keep planes in the air, gas turbines account for almost a quarter of the world’s electricity production.1 Given their prominence in the industry, reducing emissions from gas turbines is crucial. Hydrogen has emerged as one of the more attractive alternative fuels for gas turbines and is backed by several nations to replace or supplement conventional fuels. Hydrogen offers numerous advantages: it has a higher calorific value, produces no greenhouse gases when combusted, and can be blended with existing fuels without major changes to the combustor. 

While the use of hydrogen fuel is desirable, there are a number of design, storage, and operational challenges that come with it. One major challenge in designing new gas turbines or retrofitting old ones is the prevention of a phenomenon called flashback in the combustor. During flashback, the flame propagates upstream at speeds higher than the incoming gas flow. Sustained upstream propagation can cause substantial thermal damage to the combustor hardware. Hydrogen has faster kinetics and a higher flame speed than conventional fuels, making it more prone to flashback. To mitigate the phenomenon, various studies are being performed to find the limits of safe operation for hydrogen fuel. At Convergent Science, we used CONVERGE to perform one such study to analyze flashback in a swirling combustor.2 We compared our simulation results with experimental work performed at The University of Texas at Austin by D. Ebi.3
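As a back-of-the-envelope illustration of why hydrogen is more flashback-prone, one can compare a representative laminar flame speed against the incoming flow speed. The numbers below are order-of-magnitude textbook values for near-stoichiometric, atmospheric conditions; real flashback limits depend on turbulence, geometry, and boundary layers, which is exactly why the detailed simulations described here are needed.

```python
# Toy illustration of the flashback condition: the flame moves upstream when
# it burns faster than the fresh gas arrives. Flame speeds are illustrative
# order-of-magnitude laminar values; the inflow velocity is made up.

def prone_to_flashback(flame_speed, inflow_speed):
    """True if the flame can propagate upstream against the incoming flow."""
    return flame_speed > inflow_speed

laminar_flame_speed = {"CH4": 0.38, "H2": 2.9}   # m/s, illustrative values
inflow = 1.0                                      # m/s, hypothetical bulk velocity
for fuel, s in laminar_flame_speed.items():
    print(fuel, prone_to_flashback(s, inflow))
```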


Figure 1 shows the geometry of the swirling combustor that was investigated in our study. Premixed fuel and air enter through the bottom, pass the swirler, and ignite in the combustion chamber. To accurately predict flashback, we employed the dynamic structure large eddy simulation (LES) model and a detailed chemistry mechanism4 fully coupled with the flow solver. Because the flame travels upstream during flashback, the mesh in the premixing section and the combustion chamber must be refined enough to capture the flame front. However, refining these regions uniformly would result in unrealistically long simulation times. To obtain accurate results in a reasonable timeframe, we used CONVERGE’s Adaptive Mesh Refinement (AMR) technology to add mesh resolution along the flame front while maintaining a coarser mesh in other parts of the computational domain.

Figure 1: Geometry of the swirling combustor.


In Figure 2, we show a visual comparison between experimental3 and simulation results for a CH4 + air (equivalence ratio Φ = 0.8) fuel mixture. The flame structure and its temporal location agree well. We also analyzed the flashback limit for a CH4 + H2 + air (Φ = 0.4) fuel mixture. For this particular fuel mixture, the experimental value for the onset of flashback is 75% H2 by volume.3 Based on our simulations, we predicted a value of 77% H2 by volume.

Figure 2: Flashback in CH4 + air flame at Φ = 0.8, Tin = 293K, Reh = 4000. Experimental data3 is at the top, and the simulated flame is at the bottom. 


The present study demonstrates an engineering solution for accurately predicting flashback and analyzing flame propagation using CONVERGE. For more details about this research, take a look at our paper here! With a long history of simulating complex geometries and combustion, CONVERGE is the go-to tool for all your gas turbine flow simulations. Check out our gas turbine webpage for more information on how CONVERGE can help you design the gas turbines of the future! 


[1] “bp Statistical Review of World Energy, 2022 | 71st Edition”, bp, 2022.

[2] Kumar, G., and Attal, N., “Accurate Predictions of Flashback in a Swirling Combustor with Detailed Chemistry and Adaptive Mesh Refinement,” AIAA SciTech Forum, San Diego, CA, United States, Jan 3–7, 2022. DOI: 10.2514/6.2022-1722

[3] Ebi, D.F., “Boundary Layer Flashback of Swirl Flames,” Ph.D. thesis, The University of Texas at Austin, Austin, TX, United States, 2016.

[4] G.P. Smith, Y. Tao, and H. Wang, Foundational Fuel Chemistry Model Version 1.0 (FFCM-1), 2016.

► Academic Spotlight: Assessing Wind Turbine and Wind Farm Wakes on Uneven Terrain
  10 Apr, 2023

Jameil Kolliyil

Engineer, Technical Marketing

Last year while traveling through the countryside of Tamil Nadu, India, I was struck by the sight of numerous wind turbines dotting the landscape. Those towering machines were not only a testament to the ingenuity of human engineering but also a symbol of the growing importance of wind energy in India. In recent years, wind energy has emerged as a significant source of renewable energy in India, contributing to the country’s efforts to reduce its dependence on fossil fuels and mitigate the effects of climate change. With its vast coastline, ample wind resources, and growing demand for electricity, India has the potential to become a global leader in wind energy.

To promote research and development of wind energy technology, the Indian government is taking steps to support universities and research institutions by providing funding, incentives, and skill development programs. At Convergent Science, we recognize the importance of advancing research through academia and offer exclusive CONVERGE license deals to universities. Kingshuk Mondal is a graduate student working with Professor Niranjan S. Ghaisas at the Indian Institute of Technology Hyderabad (IITH), and he is leveraging CONVERGE to study wind farm wakes on complex terrain. Kingshuk also presented his research at the CONVERGE User Conference–India 2023. I’ll let Kingshuk explain what he’s been working on.

Kingshuk presenting his research at the CONVERGE User Conference–India 2023.

Kingshuk Mondal

Graduate Student, Indian Institute of Technology Hyderabad (IITH)

The wind energy sector has seen rapid growth in the context of sustainable development, resulting in large installations of onshore and offshore wind farms. Onshore wind turbines are often situated on complex terrain because of the high wind resource potential in hilly regions. Accurate estimations of power output and turbine lifetime are essential aspects of wind turbine and wind farm design and operation. To achieve accurate estimations, you must predict the turbulent flow conditions, the wind turbine wake recovery, and the interactions between wakes of multiple turbines in a wind farm. The wake of a wind turbine evolves differently when sited on complex terrain (e.g., on a hill) compared to a flat surface. Our study aims to optimize the layout of a wind farm over a complex topology for efficient energy extraction and minimal structural stresses.

In this work, we focus on the evolution of an isolated wind turbine’s wake and the wake interactions in a row of wind turbines sited on an idealized cosine-shaped hill. CONVERGE is a useful tool for these simulations because of its ability to simulate flow in complex geometries without time-consuming mesh generation and the flexibility to use a range of turbulence closure models. In addition, CONVERGE’s Adaptive Mesh Refinement feature automatically concentrates grid points in regions with large gradients. For this work, we used large eddy simulations (LES) with the dynamic Smagorinsky model as the sub-grid scale model.

First, we validated a single turbine on a flat surface with an experimental study by Chamorro et al., 2009.1 We found fair quantitative and qualitative agreement between the simulation results and the experimental data. We then proceeded to simulate the flow over a cosine-shaped hill. The flow accelerates on the windward slope of the hill and attains the highest velocity at the top of the hill as shown in Figure 1(a). These areas have low turbulence intensity (TI) and total shear stress (TSS), making them appropriate sites for installing wind turbines. A long wake region is formed on the leeward side of the hill stretching up to 15 hill heights. This region is characterized by enhanced TI and TSS along with low wind potential, making it unfavorable for wind turbine installation.

Placing a wind turbine in front of and on the top of the hill has a similar effect on the hill wake. The wake recovery behind the hill is faster due to the influence of TI from the turbine wake. Because of this, reasonable wind potential is observed after 5 hill distances on the leeward side of the hill as shown in Figure 1(b).

Figure 1: Contours of streamwise velocity for (a) flow over a hill and (b) turbine placed
before a hill. The solid black line represents the wind turbine.

With these findings in mind, we placed a row of five turbines (T1–T5) along the hill as shown in Figure 2. T3 and T4 are placed on the windward slope and on top of the hill, respectively, to minimize the effect of the wakes from T1 and T2. 

Figure 2: Schematic of the case with a row of five turbines along a cosine-shaped hill. H is the turbine hub height; D is the turbine diameter; z0 is the aerodynamic surface roughness.

Because the flow accelerates as it climbs the slope of the hill, T5 is placed at a distance of approximately 5H after the hill to get reasonable wind potential. In addition to considerable wind input, T5 encounters high TI and TSS—reinforcing the structure of T5 is imperative to reduce fatigue stresses. These results are shown in Figure 3.

Figure 3: Contours of (a) streamwise velocity, (b) TI, and (c) TSS for a row of five turbines placed along a cosine-shaped hill. The solid black lines represent the wind turbines.

This study is a first step toward optimizing the layout of a wind farm over complex topology. Future work will consist of rigorous validation of different cases with multiple turbines and flow over various topologies. We also aim to estimate the power output for the optimized layout.

Thanks, Kingshuk! Analyzing potential wind farm locations to extract maximum energy and ensure smooth operation is crucial to future wind energy projects. Wind energy is expected to play a critical role in the world’s energy transition to help meet our climate goals, and Kingshuk’s work is a promising step toward creating more efficient wind farms. From analyzing renewable sources of energy to assessing battery energy storage systems where the generated electricity is stored, CONVERGE is the go-to tool for designing sustainable technologies! 


[1] Chamorro, L.P. and Porté-Agel, F., “A Wind-Tunnel Investigation of Wind-Turbine Wakes: Boundary-Layer Turbulence Effects,” Boundary-Layer Meteorology, 132, 129-149, 2009.

Numerical Simulations using FLOW-3D top

► 2024 Conference Registration
  19 Sep, 2023

Conference Registration

Registration is now open for the FLOW-3D World Users Conference 2024 in Hamburg, Germany, June 10-12! Connect with FLOW-3D users around the world. Enjoy social events, a poster session, technical presentations, product development talks and free advanced training.

Registration closes Friday, May 10.

Everyone, especially presenters, is strongly encouraged to attend both days of the conference.
Training sessions will be held the afternoon of June 10. The Opening Reception takes place on June 10 and the Conference Dinner on June 11. Guests are welcome for a 50 € charge, which includes access to the opening reception and conference dinner but not to the conference itself.


Requesting an Invitation Letter

If you are traveling from a country that is not within the Schengen area, we recommend that you request a letter of invitation and begin your visa application process 60-90 days before your travel date, as visa wait times may be significant.


Cancellation Policy

Flow Science reserves the right to refuse or cancel a conference registration at any time. In such cases, a refund will be given for the full registration amount paid, less the payment processing fees. Flow Science is not responsible for any costs incurred. Registrants who are unable to attend the conference may cancel up to Friday, May 10 to receive a full refund, less payment processing fees. After that date, no refunds will be given.

► Flow Science Receives the 2023 Flying 40
  13 Sep, 2023

Flow Science Receives the 2023 Flying 40

Flow Science is named one of the fastest growing technology companies in New Mexico for the eighth year running.

Santa Fe, NM, September 13, 2023 – Flow Science has been named a New Mexico Flying 40 recipient for the eighth consecutive year. The Flying 40 awards recognize the forty fastest-growing, locally owned technology companies in New Mexico each year, highlighting the positive impact the tech sector has on growing and diversifying New Mexico’s economy.

The New Mexico Tech Council (NMTC) announced that Flow Science, Inc. has been selected as a recipient of the prestigious 2023 Flying 40 Award in the “Top 10 Firms with $7.5 to $23 Million/Year in Revenue in 2022” category. An awards ceremony was held in conjunction with the NM Tech Summit, the flagship tech event in New Mexico, on September 7 at the Albuquerque Convention Center.

“I would like to congratulate Flow Science for their accomplishments and impact on the local tech industry. Cheers to another successful year ahead!” said Mia Petersen, NMTC Executive Director and CEO.

The 2023 Flying 40 awards are based on the following categories: top 10 firms by revenue growth (2018-2022); top 10 firms with $1-7.4 million in revenue in 2022; top 10 firms with $7.5-23 million in revenue in 2022; top 10 firms with $23-500 million in revenue in 2022; and the Falcon Start Up Award.

“We continue to expand important areas of growth for our company including modeling highly complex processes in additive manufacturing, metal casting, and aerospace. We are also proud to provide our cutting-edge software to the civil and environmental engineering industry, supporting their efforts to improve and build hydraulic infrastructure resilient to the demands of climate change and to the automotive industry whose innovative work on lightweighting and giga casts for the EV market is helping to create a future where we can burn less fuel and extract less gas, ultimately creating a better planet for all of us,” said Dr. Amir Isfahani, President & CEO of Flow Science.

About Flow Science

Flow Science, Inc. is a privately held software company specializing in computational fluid dynamics software for industrial and scientific applications worldwide. Flow Science has distributors and technical support services for its FLOW-3D products in nations throughout the Americas, Europe, Asia, the Middle East, and Australasia. Flow Science is headquartered in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.
683 Harkle Rd.
Santa Fe, NM 87505
+1 505-982-0088

► A Life in CFD by Dr. C.W. “Tony” Hirt
  23 Aug, 2023

A Life in CFD by Dr. C.W. “Tony” Hirt

The views and opinions expressed in this article are the author’s own and do not represent those of Flow Science Inc. or its affiliates.

I have spent 60 years working in computational fluid dynamics (CFD). These notes consist of personal remembrances, history, incremental advances, and a discussion of where CFD is today and where it will be in the future. Predicting the future of CFD is not so easy. The reasons for this and what might be possible in the future are briefly discussed. Along the way, it is hoped that readers will get some idea of what it was like to pursue CFD from its beginnings, when pre- and post-processors, advanced programming languages, and graphic display methods were not available, not to mention that the computers back then had little memory or speed.

Joining Group T-3 – How it all began

The history presented here is primarily about the Los Alamos National Laboratory Group T-3, which was headed by Dr. Francis H. Harlow, originator of many fundamental computational techniques for modeling fluids that are still used today. I was lucky to be invited into this group at the beginning of my career in 1963, having been a summer graduate student in 1960 and 1961, when I discovered that CFD was an interesting and rewarding activity to pursue. In Group T-3 members worked in teams, so anything that I contributed to new computational methods was done in collaboration with other group members.

What is CFD and why do we need it?

To begin any discussion of the future of CFD, it seems appropriate to question whether CFD will be important and useful in the future. This question was addressed in a short article, Why CFD? The short answer is that there are many things CFD can contribute to understanding fluid dynamic processes that cannot be studied in any other way. This is why CFD will continue to be important and should be the subject of considerably more development.

It is clear to everyone involved with CFD that the major advancements in computational speed and complexity over the past 60 years have been on the hardware side, which has exhibited an exponential growth in computer memory sizes and speeds as well as reduced cost for computer components. Over the same 60 years, computational methodology has not advanced in any dramatic way. The reasons for this lack of advancement are varied and are partially answered later in this article.

LOS ALAMOS MANIAC, all vacuum tube computer. Ulam/Pasta “CFD” was simple force interaction between points, all parameters were powers of 2 for binary shifts instead of multiply/divide operations. Credit: LANL.

Computational fluid dynamic modeling is based on the idea of using computers to solve mathematical equations describing the dynamics of a fluid. Such equations are the equations of conservation of mass, momentum, and energy. These equations are typically partial differential equations that describe, for example, the change in mass of a small element of fluid over a short period of time. Such advancement occurs by interactions of the fluid element with its environment. That is, the mass particle is moved over a short period of time through its local neighborhood whose property variations are estimated by spatial gradients, i.e., partial differential equations. Computationally, the idea is to repeat these small incremental changes for all the particles in a fluid over many small time steps to evolve the dynamics of a large fluid region.

In general, there seems to be no way to avoid the use of advancement in small time steps for the dynamics of every fluid element to have an accurate time-dependent solution. Some simplifications may be made for special cases, but the basic incremental time advancement of small fluid elements would appear to be essential.

If the idea of CFD is to advance the motion of small fluid elements through a sequence of small time increments, then the only way to speed up computations is to do more such incremental advances in a given time. The current trend in CFD takes advantage of processors with multiple CPUs, parallel computing, and cloud computing, all of which employ many processors at once. Again, this is a hardware advancement, not a new way to solve the fluid dynamic equations of conservation.

The role of particles in CFD

To see where we are today it is useful to review where CFD started. For that, there are a couple of issues that must first be addressed. For instance, what is meant by a fluid particle or element? This can be answered in a variety of ways. Typically, a fluid region is divided into a set of volumetric elements in the form of some sort of grid. Alternatively, a fluid can be condensed into a set of mass particles, often an attractive option. Or a fluid may even be represented by an expansion in a series of mathematical basis functions (e.g., as in solid mechanics).

Lagrangian vs. Eulerian particles

Whatever choice is made for “particle” there are two views, or choices for the reference frame, used to advance the equations of motion. One is the Lagrangian method in which the fluid particles move with the fluid, while the other is the Eulerian method in which the grid remains fixed in space and the fluid is moved through it. Historically, only the Lagrangian method was used in the early days of computing to study converging and expanding spherical masses of fluids associated with explosions being developed in the Manhattan Project during WWII.
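The two viewpoints can be contrasted with a toy uniform-advection problem. This is a didactic sketch, not drawn from any particular code: Lagrangian markers simply ride the flow, while the Eulerian update transports a field through fixed cells.

```python
# Minimal illustration of the two reference frames for uniform advection at
# speed U: Lagrangian markers move with the fluid; an Eulerian scheme keeps
# the grid fixed and moves the fluid (here, a "color" field) through it.

U, dt, steps = 1.0, 0.1, 10

# Lagrangian: particles carry their identity and move with the fluid.
particles = [0.0, 0.5, 1.0]
for _ in range(steps):
    particles = [x + U * dt for x in particles]
print([round(x, 6) for x in particles])   # each marker moved U*dt*steps = 1.0

# Eulerian: fixed cells; the field is transported by upwind differencing.
dx = 0.1
c = [1.0 if i < 3 else 0.0 for i in range(20)]   # step profile of "color"
for _ in range(steps):
    cfl = U * dt / dx                             # = 1 here, so the shift is exact
    c = [c[0]] + [c[i] - cfl * (c[i] - c[i - 1]) for i in range(1, len(c))]
print(c.index(0.0))   # the 1/0 interface has moved 10 cells downstream
```

With a unit CFL number the Eulerian shift happens to be exact; the smearing that appears at CFL < 1 is taken up later in the discussion of numerical diffusion.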

Simple one-dimensional, Lagrangian models were also used to investigate shock interactions passing through layers of different materials. Because of the limited memory and speeds of the earliest computers these models were typically confined to a small number of fluid elements. For example, Francis Harlow reported that his earliest use of a computer to solve fluid dynamic problems typically involved 25 one-dimensional Lagrangian elements that took on the order of 1 hour of computer time on the first commercial IBM 701 computer [1]. Harlow also reported that he thought the same computation could be done on his mobile phone in a couple of milliseconds.

IBM 701 circa 1953, 2048 36-bit Word Memory. Credit BRL Report No. 1115, 1961.

The extension of the one-dimensional Lagrangian models to more dimensions creates some difficult problems. Foremost is the fact that grid elements do not retain their shape as they move; for example, they typically undergo shears that distort them so much that they can no longer be used for accurate numerical approximations. To overcome this problem some scheme must be introduced for rezoning the grid to undo large distortions. That means introducing some sort of averaging process to convert between old and new grid shapes.

Averaging always introduces some smoothing and therefore may introduce a loss of fine scale details, something that is difficult to avoid. Many researchers have proposed different averaging methods, attempting to reduce the smoothing process and, in some cases, obtaining improved results. However, there is no perfect answer because the distribution of the material in a grid element is unknown. The amount of material may be known, but how it is distributed is not. Thus, subdividing the material for a new rezoned distribution cannot be perfect.

Programming back in the day

A little history is in order. As late as 1963, the computer programs developed at Los Alamos were written in machine language, which required dedicated programmers. Developers would write out the equations they wanted solved and give them to the programmers to translate into machine language. If a bug appeared, debugging consisted of the programmer, with a listing of the program, and a developer, with his list of instructions, sitting down together. The programmer would say something like “quantity so and so is added to such and such,” and the developer would agree with that operation. Eventually, step by step, a mistake would be found in which the program was not doing what the developer wanted. It was a time-consuming process, yet it was, in some ways, a social and enjoyable process.

The programmers would then take their programs, most often a deck of punched cards, to the computer. They could sit at the computer console and watch all the flashing lights on a dashboard, and based on the lights that were flashing, they could tell, for example, when the computer was doing multiplication or a division. Those days quickly changed with the introduction of programming languages like FORTRAN and professional computer operators, and programmers could no longer sit at the computer consoles.

IBM 7030 “Stretch”. Programmers could sit at the computer and tell what was going on from the flashing lights. Debugging was a two-person affair. Slow, but in its peculiar way, fun. Credit: LANL.

One of the first multi-dimensional CFD programs – the PIC method

One of the earliest two-dimensional CFD programs to be written was the particle-in-cell (PIC) method created in 1955 by Francis Harlow [2]. This was an innovative development because the program could model the compressible flow of multiple materials undergoing large distortions, something that no other computer code could do. The PIC method combined both Eulerian and Lagrangian elements by using a fixed Eulerian grid of rectangular elements and mass particles, which are Lagrangian elements, to represent the different fluid materials. The motion of the mass particles carried the fluid material from one grid cell to another. Of course, this required the averaging of the particles in a grid cell at the end of a time step to determine the new density and pressure in each grid cell. Velocities were computed with respect to the Eulerian grid and interpolated to the particles to move them. The PIC code was for compressible fluids, the application of most interest at Los Alamos in the early days. Today there are numerous variants of the PIC method, including three-dimensional versions and schemes to smooth out the particles to reduce the discrete changes in material properties when a particle moves from one cell to another.
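A toy one-dimensional version conveys the particle/grid bookkeeping. The velocity field below is prescribed rather than solved, so this is only a sketch of how Lagrangian mass particles move through a fixed Eulerian grid and are re-averaged into cell densities, not the compressible PIC solver itself.

```python
# Toy 1D particle-in-cell step in the spirit of PIC: Lagrangian mass
# particles move through a fixed Eulerian grid, and cell densities are
# re-averaged from the particles at the end of each step. The velocity
# field is prescribed here; the real method computes it on the grid.

dx, ncells = 1.0, 8

def cell_of(x):
    """Index of the Eulerian cell containing position x."""
    return int(x // dx)

def density(particles, mass=1.0):
    """Average particle mass into each Eulerian cell (end-of-step averaging)."""
    rho = [0.0] * ncells
    for x in particles:
        rho[cell_of(x)] += mass / dx
    return rho

def move(particles, grid_u, dt):
    """Interpolate grid velocity to each particle and advect it."""
    return [x + grid_u[cell_of(x)] * dt for x in particles]

particles = [0.25, 0.75, 1.25, 1.75]   # two particles each in cells 0 and 1
grid_u = [1.0] * ncells                # uniform rightward grid velocity
for _ in range(4):
    particles = move(particles, grid_u, dt=0.5)
print(density(particles))
```

Because the material is carried by the particles while velocities live on the grid, the density field jumps discretely whenever a particle crosses a cell boundary; this is the behavior the smoothing schemes mentioned above try to reduce.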

It might also be mentioned that there were no graphic display methods available to show the results of a PIC computation, so Frank and his team had to plot all the particles on a piece of paper by hand (Fig. 1). Of course, the computers at the time also had limited memory, which restricted the number of particles that could be plotted (not a very satisfying trade-off). This hand plotting was later replaced by a commercial pen plotter that used two perpendicular wires laid across a sheet of paper with a pen held at the crossing of the wires. The wires would be moved to the x and y coordinates of a particle according to input data and then the pen would place a mark on the paper. This sped up the plotting and was more accurate than the hand-plotted results.

Plotting along

It was quite some time before better graphic display software became available. Until that time, a CFD program would typically have input data for plots written out at the beginning of the program and during execution, requested plots would be generated as part of the program. In other words, there were no pre or post processors available to CFD users. To get different plots it was necessary to rerun the program.

Plotting was next improved by displaying results on CRT screens that were then photographed on 35mm film. These films could be displayed on viewers that projected the images onto a screen. Programmers submitted their programs on decks of punched cards and after being run by the computer operators, they would retrieve the output in the form of printed results on large pages of paper stacked accordion style, plus a reel of 35mm film. Some programmers accumulated stacks of output paper so high they were in danger of having the piles fall on top of them.

When three-dimensional simulations became possible because of hardware advances in the 1970s, the need for better graphic displays became essential. Trying to understand a three-dimensional flow structure using only two-dimensional plots is difficult. Consequently, a three-dimensional perspective plotting capability was developed in Group T-3 with some simple hidden-line capabilities [3]. An example, shown without hidden lines in Figure 2, is flow over a simplified cab-and-trailer configuration, where multiple vortices appear along the top and bottom surfaces of the vehicle. This flow structure was not evident until revealed by the 3D plots.

Figure 1. Early PIC plot from reference [1], showing the growth of a spherical gas bubble. The dashed line represents theory; the solid line is PIC. Particles inside the bubble are shown as dots and outside as crosses. The cylindrical axis is at the bottom. Not all particles are shown.

The arbitrary Lagrangian-Eulerian (ALE) method

Figure 2. Three-dimensional perspective plot of flow over a large truck. Vortices over the top and bottom edges of the vehicle were not evident until viewed in perspective [4].

The idea of combining Lagrangian and Eulerian methods in one program, as was done in the PIC method, was the inspiration for what is now referred to as the arbitrary Lagrangian-Eulerian (ALE) method, another Group T-3 advancement. Should you want to move a Lagrangian grid line to a new position to straighten a distorted grid cell, you compute how much mass, momentum, and energy the line sweeps across when moved and subtract and add those amounts to the grid cells on either side of the grid line. This is a specialized rezoning method that allows arbitrary possibilities for the movement of grid lines.

One possibility, of course, is to return the grid back to its initial shape, which would make it a Eulerian model. But the advantage of the ALE method is that you can limit the amount of rezoning to only what seems necessary and, in this way, reduce the amount of smoothing that rezoning introduces. Figure 3 illustrates the technique in its first publication [4].
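Reduced to one dimension, the swept-mass bookkeeping described above looks like the sketch below. It is illustrative only, under the simplest possible assumptions: mass is distributed uniformly within each cell, and a single face moves while its neighbors stay put.

```python
# 1D sketch of ALE rezoning: when a cell face moves by delta, the mass in the
# swept volume is removed from the cell it sweeps into and added to the cell
# on the other side. Assumes uniform mass distribution within each cell.

def move_face(masses, faces, i, delta):
    """Move face i by delta, transferring swept mass between cells i-1 and i."""
    left_width = faces[i] - faces[i - 1]
    right_width = faces[i + 1] - faces[i]
    if delta > 0:                      # face sweeps into the right cell (cell i)
        swept = masses[i] * delta / right_width
        masses[i] -= swept
        masses[i - 1] += swept
    else:                              # face sweeps into the left cell (cell i-1)
        swept = masses[i - 1] * (-delta) / left_width
        masses[i - 1] -= swept
        masses[i] += swept
    faces[i] += delta
    return masses, faces

# Two unit-width cells with masses 1.0 and 3.0; move the shared face right by 0.25.
masses, faces = [1.0, 3.0], [0.0, 1.0, 2.0]
masses, faces = move_face(masses, faces, 1, 0.25)
print(masses, faces)   # total mass is conserved; only its split between cells changes
```

Note the uniform-distribution assumption is exactly the limitation the article raises: the amount of material in a cell is known, but how it is distributed is not, so the swept fraction is always an estimate.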

The majority of Eulerian CFD codes today do not use PIC particles, but instead try to improve the estimate of the mass, momentum, or energy that moves across a grid cell boundary from one cell to a neighboring cell in each time step interval. A simple example illustrates a fundamental problem with this process. Consider a one-dimensional arrangement of grid cells lined up in the x direction. Suppose there is some color concentration in a cell numbered i and none in any of the cells for larger i values. If there is a uniform flow U in the positive x direction, a simple estimate of the color that moves from cell i into cell i+1 in a time step δt is AUδtC, where A is the area of the boundary between the cells and C is the color concentration. The coefficient of C is the volume of fluid moved across the cell boundary at speed U in time interval δt. This concentration change would then be averaged in with the existing concentrations in cell i and cell i+1. Cell i+1 now has some non-zero color concentration.

Here’s the rub: in the following time cycle, some of this color in cell i+1 will be advected into the next cell, i+2, because it has been mixed uniformly into cell i+1. With each additional cycle, some color is advected into cells farther downstream, so the leading edge of the color moves faster than the flow velocity. This is often referred to as numerical diffusion. A great deal of work has gone into devising methods to minimize this behavior. It can certainly be reduced, but unfortunately not eliminated.
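This smearing is easy to reproduce. The sketch below is a minimal illustration rather than any production scheme: it advects a single cell of color with donor-cell fluxes exactly as described above, and all names and parameters are arbitrary demo choices.

```python
# A minimal 1-D illustration of numerical diffusion in donor-cell
# advection: color initially in one cell, uniform velocity u > 0.

def advect_upwind(c, u, dx, dt, steps):
    """Advance concentrations with first-order donor-cell fluxes (u > 0)."""
    c = list(c)
    for _ in range(steps):
        # Flux across face i+1/2 is u*dt*c[i]/dx, i.e. the donor cell's
        # (fully mixed) concentration -- the estimate AUdtC from the text.
        flux = [u * dt * ci / dx for ci in c]
        new = c[:]
        for i in range(1, len(c)):
            new[i] = c[i] + flux[i - 1] - flux[i]
        new[0] = c[0] - flux[0]          # nothing enters from the left
        c = new
    return c

c0 = [0.0] * 40
c0[5] = 1.0                              # color in cell 5 only
c = advect_upwind(c0, u=1.0, dx=1.0, dt=0.5, steps=10)

# The exact solution would put all the color 5 cells downstream, in
# cell 10. Instead it has leaked across many cells:
occupied = [i for i, ci in enumerate(c) if ci > 1e-6]
print(occupied)   # cells 5 through 15, not just cell 10
print(sum(c))     # total color is conserved (~1.0), but smeared
```

After ten steps at a Courant number of 0.5, the peak has flattened and the leading edge has run well ahead of the exact position, even though the total amount of color is conserved.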

The idea of using particles, as in the PIC method, appears at first sight to have some advantages, since the location of a particle carrying some color is known precisely, and it could require several time steps for the particle to cross cell i+1 before entering cell i+2. The difficulty with this approach is defining exactly what the particle represents. It is usually thought of as a small region of fluid, but the assumption that it will remain a small region of fluid may not hold.

Figure 3. Examples of the ALE method for modifying moving grids in a simple slosh problem. (a) pure Lagrangian, (b) vertical lines lie beneath a surface vertex, (c) all vertices at initial horizontal locations, (d) same as (c) plus uniform spacing on vertical lines, (e) bottom four rows treated as Eulerian, (f) same as (e) but each vertex above the bottom four rows has moved to the average position of its 8 nearest neighbors. [4]

Physics inspiration over the years – anecdotes and observations

Smoking a pipe – understanding material distribution

A small personal experience may be useful to illustrate this problem with particles. Years ago, I was watching home slides one night, back when I still smoked a pipe. Out of curiosity I blew a small puff of smoke into the light cone from the projector. What blew me away was that the puff didn’t remain a simple blob and certainly didn’t diffuse isotropically, as generally assumed in turbulence models. Instead, it was immediately sheared out into thin curtains of smoke by the eddies in the air. The initial puff was quickly dispersed into a region that was no longer local but had one or more dimensions much larger than the diameter of the original puff. That picture has remained vivid in my mind. I’ve often tried to think what could be done to better model such dispersion. I still have no clue!

The point of this discussion is that you have no idea how material is distributed within a grid element. Some people believe that introducing particles, like the smoke puff, is one way to improve advection between grid elements, but as the smoke puff testifies, it is not much of a solution. The dispersion of material in a particle is not well represented by Lagrangian methods, including particles themselves. Some “rezoning” must be done to account for the shears and other distortions occurring in the flow. A puff that immediately spreads out into thin sheets of smoke is no longer a “particle.”

The problem with scales

A problem of scales occurs in CFD in many ways. Fluid flows that mostly span large scales often contain small-scale phenomena, such as thin boundary layers or the fine-scale structure of shock waves, that affect the larger-scale behavior. Incorporating both large and small scales in a computational model is difficult. There is not only the problem of devising a grid that accommodates both; small spatial scales may also require small time steps, making it computationally expensive to simulate the times appropriate for the larger scales. Typical solutions are to approximate the small-scale processes, for example with wall frictional losses, or with an artificial viscosity that spreads a shock wave out to where it can be resolved on a coarser grid. These often require specialized treatments and are not generally available in most CFD codes.
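A rough back-of-the-envelope calculation illustrates the cost. For an explicit scheme, the advective (CFL) stability limit ties the time step to the smallest cell, δt ≤ δx/u, so refining a three-dimensional grid by a factor r multiplies the cell count by r³ and the number of time steps by r. This is a standard estimate, sketched here with illustrative names:

```python
# Back-of-the-envelope work scaling for an explicit 3-D scheme:
# the CFL limit dt <= dx/u ties the time step to the cell size,
# so refining by r costs r**3 more cells times r more steps.

def relative_cost(refinement):
    cells = refinement ** 3      # 3-D grid refined in each direction
    steps = refinement           # CFL: dt shrinks in proportion to dx
    return cells * steps

print(relative_cost(2))   # 16x the work for a 2x finer grid
print(relative_cost(10))  # 10000x for a 10x finer grid
```

Resolving a boundary layer ten times thinner than the coarse grid can resolve thus costs roughly four orders of magnitude more work, which is why approximate small-scale treatments are so attractive.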

Gridding approaches – introducing FAVOR™

Another important issue with CFD grids is how to represent a complicated flow region, such as a die for an engine block casting, or the intake, spillway, and other important elements of a hydroelectric power plant. The majority of commercial CFD codes use what are called “body-fitted grids,” in which the grid elements are distorted so that their faces lie on curved solid surfaces. Finite difference approximations are more complicated when the grid elements are not simply rectangular, but that is the price paid for conforming to complicated shapes. Generating such grids is not a simple process, especially since the elements should not change too much in shape and size between neighbors. If some solid object is to move, such as an opening and closing valve, a rotating crankshaft, or a flying projectile, then the grid would have to be regenerated each time step and its state values recomputed. Not a very convenient or efficient computational method.

There is an alternative gridding scheme that is worth mentioning [5]: the Fractional Area Volume Obstacle Representation (FAVOR™) method. The basic grid is a simple rectangular mesh, possibly with varying element sizes. Each grid element stores the open volume fraction of the element, i.e., the portion not blocked by an obstacle, along with the open area fractions of the element’s sides. This is a kind of generalized porous-media approach. The advantage over body-fitted grids is that the underlying grid is very simple and easy to set up. Computing the area and volume fractions blocked by obstacles can also be done with relatively simple preprocessor routines. Of course, the difference equations must incorporate these blockages, but that is not difficult. As for moving obstacles, instead of having to create a new grid each cycle, it is only necessary to make the area and volume fractions time dependent, which is much simpler.
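To make the idea concrete, here is a minimal sketch, not the actual FAVOR™ preprocessor, of how open volume fractions might be estimated by subsampling each cell against a circular obstacle; the function and variable names are invented for illustration.

```python
# Estimate each cell's open volume fraction (the part not blocked by
# a solid obstacle) by point-sampling a subgrid inside the cell.

def open_fraction(x0, y0, dx, dy, is_solid, n=8):
    """Fraction of the cell [x0,x0+dx] x [y0,y0+dy] not blocked by solid."""
    open_pts = 0
    for i in range(n):
        for j in range(n):
            x = x0 + (i + 0.5) * dx / n
            y = y0 + (j + 0.5) * dy / n
            if not is_solid(x, y):
                open_pts += 1
    return open_pts / (n * n)

# Test geometry: circular obstacle of radius 0.3 centered in a unit square.
solid = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.3 ** 2

dx = dy = 0.1
vf = [[open_fraction(ix * dx, iy * dy, dx, dy, solid) for ix in range(10)]
      for iy in range(10)]

print(vf[0][0])   # corner cell, fully open: 1.0
print(vf[5][5])   # cell inside the circle, fully blocked: 0.0
print(vf[5][2])   # cell cut by the circle: between 0 and 1
```

Cells blocked by the obstacle get a fraction of 0, open cells get 1, and cells cut by the boundary get intermediate values; the difference equations then scale fluxes and volumes by these fractions, and for moving obstacles the fractions simply become time dependent.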

The diffusion of Lagrangian particles

Returning briefly to particles. Suppose particles are introduced as a mass being emitted by some source, for instance, pollution. In addition to advection by the surrounding air flow, this mass should be diffusing. To add diffusion to the particles one option might be to subdivide each particle into several smaller particles each time step with a variety of random velocities, but this approach increases the number of particles exponentially and would quickly overwhelm the computer’s memory. An alternative approach introduced in Group T-3 was to imagine that at each time step a particle is a new source of pollution that spreads out in a Gaussian distribution. But, instead of dividing the particle up, imagine this distribution to be a probability of where the particle might move in a given time step. Choosing random numbers to pick a position in the Gaussian distribution and then moving the particle to that location approximates the process of diffusion. One can think of the selection of random numbers as a Monte Carlo sampling process or as each particle undergoing a random walk process [6], see Figure 4.
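A minimal sketch of this random-walk idea follows, with illustrative parameters; it relies on the standard result that a diffusivity D acting over a step δt corresponds to Gaussian displacements with variance 2Dδt per coordinate.

```python
# Random-walk particle diffusion: each particle takes a Gaussian
# random step per time step instead of being subdivided.

import random
import statistics

def diffuse(positions, D, dt, steps, rng):
    """Give each particle one Gaussian displacement per time step."""
    sigma = (2.0 * D * dt) ** 0.5       # standard deviation per step
    for _ in range(steps):
        positions = [x + rng.gauss(0.0, sigma) for x in positions]
    return positions

rng = random.Random(1)                   # fixed seed for reproducibility
D, dt, steps = 0.5, 0.01, 200            # total time t = steps*dt = 2.0
xs = diffuse([0.0] * 20000, D, dt, steps, rng)

# Theory for a point source: the cloud's variance grows as 2*D*t = 2.0.
print(statistics.pvariance(xs))          # close to 2.0
```

For a point source, the theoretical variance of the particle cloud grows as 2Dt, which the sampled variance reproduces to within statistical noise, just as in the comparison of Figure 4.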

Figure 4. Random velocity particle diffusion compared with theory for a Gaussian puff [6].

The bottom line

The bottom line for CFD is that numerically computing the dynamics of a fluid is not an easy problem, in general. Some of the fundamental problems have been outlined above. There are, of course, some special problems for which simpler methods may be introduced that can reduce computational effort. The example of “incompressible fluids” is one case that deserves mention.

Historically it was thought that a fluid simulation based on an equation of state would apply to any type of fluid, compressible or incompressible. The reality is not that simple. If one tries to compute a nearly incompressible flow, in which the fluid speeds are much smaller than the speed of sound, the equation of state still requires tracking all the sound waves bouncing around, which demands a great amount of computing time, and it turns out that the pressure waves cannot be averaged out in any simple way. For these reasons an alternative solution procedure was developed, in which an implicit treatment is used to enforce a vanishing velocity divergence, the expression of incompressibility.

The introduction of the marker-and-cell (MAC) method

The first incompressible fluid code for two-dimensional applications was again a creation of Francis Harlow, who introduced a flood of new developments in a single advancement called the marker-and-cell (MAC) method [7]. To ensure incompressibility, Harlow devised a way to drive the velocity divergence for each cell in a Eulerian grid of rectangular cells to zero. First, he modified the placement of velocity components in a cell from the cell center to the respective faces for each velocity component, such that each face of the grid cell stores the velocity component normal to that face. Nowadays this is referred to as a staggered grid.

This made it easy to compute the volume of fluid passing through each cell face in a time step. Clearly, if there is a net flux of volume out of the cell then reducing the pressure in the cell (still located at cell center) will reduce all velocity components at the cell’s faces and hence the net outflow. If there is a net flux into the cell an increase in the pressure will increase the velocities out of the cell, which reduces the inflow. The actual computational process requires an iteration because an adjustment in one cell will affect the net flow in all its neighboring cells since they share the faces. This carryover to neighboring cells can be somewhat accounted for by over-relaxing the change in pressure needed to drive a cell’s divergence to zero.
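The iteration is simple enough to sketch in a few lines. The demo below is an illustration rather than the original MAC code: it fills a small closed box of staggered-grid velocities with random values and repeatedly nudges each cell's pressure to drive its divergence to zero. The over-relaxation factor and all sizes are arbitrary demo choices.

```python
# Pressure-velocity iteration on a small 2-D staggered grid: adjust
# each cell's pressure until the velocity divergence vanishes.

import random

nx, ny = 6, 6                       # cells in x and y (demo size)
dx = dy = 1.0
dt, rho = 0.1, 1.0
omega = 1.7                         # over-relaxation factor, 1 < omega < 2
beta = omega * rho / (2.0 * dt * (1.0 / dx**2 + 1.0 / dy**2))

rng = random.Random(0)
# Staggered grid: u lives on vertical faces, v on horizontal faces.
# Boundary-normal velocities are held at zero (a closed box).
u = [[0.0] * ny for _ in range(nx + 1)]
v = [[0.0] * (ny + 1) for _ in range(nx)]
for i in range(1, nx):
    for j in range(ny):
        u[i][j] = rng.uniform(-1.0, 1.0)
for i in range(nx):
    for j in range(1, ny):
        v[i][j] = rng.uniform(-1.0, 1.0)

def divergence(i, j):
    return (u[i + 1][j] - u[i][j]) / dx + (v[i][j + 1] - v[i][j]) / dy

for sweep in range(500):
    dmax = 0.0
    for i in range(nx):
        for j in range(ny):
            d = divergence(i, j)
            dmax = max(dmax, abs(d))
            dp = -beta * d          # pressure change that shrinks d
            if i < nx - 1:          # push/pull only non-wall faces
                u[i + 1][j] += dt * dp / (rho * dx)
            if i > 0:
                u[i][j] -= dt * dp / (rho * dx)
            if j < ny - 1:
                v[i][j + 1] += dt * dp / (rho * dy)
            if j > 0:
                v[i][j] -= dt * dp / (rho * dy)
    if dmax < 1e-8:
        break

print(max(abs(divergence(i, j)) for i in range(nx) for j in range(ny)))
```

When all four faces of a cell are free to move, one local update multiplies that cell's divergence by (1 − ω); sweeping repeatedly over the grid in this way is equivalent to a successive over-relaxation solution of the pressure Poisson equation.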

In this way the MAC method, using a pressure-velocity iteration, was able to generate a velocity field that satisfied zero divergence in every grid element (to some degree of convergence). But there was more to MAC. Marker particles were placed only in grid elements that contained fluid, while elements without markers were treated as void elements. This meant the MAC method was able to model fluids having free surfaces, another first, as shown in Figure 5.

The implicit treatment – a pressure-velocity iteration – smoothed out and damped the pressure waves. An important feature of the incompressible method is that the pressures introduced are not physical pressures, that is, pressures arising from an equation of state, but pressures needed to drive the velocity divergences locally to zero. It’s an artifice that works well.

Figure 5. MAC example of the collapse of a reservoir of fluid, showing its free surface capability [7].
A T-3 first: the von Kármán vortex street, first computed by Fromm using the vorticity-stream function method and later reproduced using the marker-and-cell method.

Basically, the idea is to determine the pressure needed in each computational grid cell to make the velocity divergence in that cell (i.e., the net volume flow across the sides of the cell) equal to zero,

$latex \displaystyle \frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}=0$

Harlow did this by replacing each velocity component in the divergence, for example u, by its difference equation,

$latex \displaystyle {{u}^{n+1}}=\overline{u}-\frac{\delta t}{\rho }\frac{\partial p}{\partial x}$

where $latex \displaystyle \overline{u}$ contains the initial velocity $latex {{u}^{n}}$ plus all the accelerations of u except that of the pressure gradient. This leads to an equation for p that couples the cell pressure to all its neighboring cell pressures, which is a Poisson equation. Boundary conditions are a little complicated because at a solid wall, for instance, the pressure set outside the wall must be chosen to keep the normal velocity at the wall zero. Solving these equations simultaneously for all the cells in the grid can be done, for example, with a simple Gauss-Seidel iteration. This approach may be compared with an alternative published a couple of years after the MAC method [8], in which the author, A.J. Chorin, simply set the change in a cell’s pressure proportional to the negative of the velocity divergence,

$latex \displaystyle \delta p=-\alpha \cdot Divergence$

Here the divergence is evaluated in terms of the new time-level velocities, and α is a relaxation coefficient. This approach eliminates the complication of setting pressures outside solid walls; instead it simply sets the normal velocities to zero directly. The principal difficulty is that the relaxation coefficient α must be chosen experimentally: too small and convergence is too slow, too large and the iteration is unstable. The present author, meanwhile, was trying to apply the Harlow technique for incompressible flow to a two-dimensional Lagrangian grid, where, because of the distorted geometry of the grid cells, complicated numerical pressure-gradient expressions appear in the velocity divergence.

A light came on: the two approaches to incompressibility could be combined into a much simpler and better method than either original, by using the velocities directly, as in the second method, and using the Harlow approach to evaluate analytically the exact value of the relaxation coefficient α. This is the approach used ever since by many modelers, because it replaces a constant relaxation coefficient with one that also accounts for variable grid sizes and different types of boundary conditions.
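For the record, in two dimensions the analytically derived coefficient takes, in the SOLA-type codes that grew out of this idea, a form equivalent to the following (a reconstruction from the standard description, not a quotation from the original reports):

$latex \displaystyle \alpha =\frac{\omega \rho }{2\delta t\left( \frac{1}{\delta {{x}^{2}}}+\frac{1}{\delta {{y}^{2}}} \right)}$

where ω is an over-relaxation factor between 1 and 2, and δx, δy are the cell dimensions. One local update then multiplies a cell’s divergence by (1 − ω), and the coefficient adjusts itself automatically to nonuniform cell sizes.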

One problem with the MAC method was that it was not computationally stable unless enough viscosity was introduced. The amount of viscosity needed was determined by experimentation. This problem was not fully understood until an analysis of the truncation errors arising from finite difference approximations provided explanations about why some approximations were unstable [9]. In the case of the MAC method the culprit was using centered difference approximations for the advection terms. By changing to an upstream, or donor, cell approximation, for example, the instability was eliminated, and it was no longer necessary to introduce viscosity.
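The instability is easy to confirm numerically. The sketch below is illustrative, with periodic boundaries and arbitrary parameters: it advances a sine wave of “color” by pure advection with both schemes, and the forward-time, centered-space form amplifies the wave every step while the donor-cell form stays bounded at the same Courant number.

```python
# Centered vs. donor-cell (upwind) differencing of pure advection,
# forward in time, on a periodic 1-D grid.

import math

def step_centered(c, cou):
    n = len(c)
    return [c[i] - 0.5 * cou * (c[(i + 1) % n] - c[(i - 1) % n])
            for i in range(n)]

def step_upwind(c, cou):
    n = len(c)
    return [c[i] - cou * (c[i] - c[(i - 1) % n]) for i in range(n)]

n, cou = 32, 0.5                       # Courant number u*dt/dx = 0.5
wave = [math.sin(2 * math.pi * i / n) for i in range(n)]

cc, cu = wave[:], wave[:]
for _ in range(200):
    cc = step_centered(cc, cou)
    cu = step_upwind(cu, cou)

print(max(abs(x) for x in cc))   # amplified: grows every step
print(max(abs(x) for x in cu))   # bounded (and damped) by upwinding
```

The price of upwinding is the numerical diffusion discussed earlier (the upwind result is damped, not just bounded); the truncation-error analysis of [9] makes both effects explicit.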

The challenge: Capturing physical accuracy with numerical models

Truncation errors

Looking at the truncation errors associated with difference equations has opened many possibilities, because they reveal quite a bit about what a numerical approximation actually generates. Researchers are always interested in improving accuracy, so they spend considerable time devising higher-order approximations. One way to do this is to first evaluate the truncation errors and then add terms to the difference equations to cancel them. One case of truncation-error subtraction that was quite successful, for the modeling of chemical lasers, was done by members of the fluid dynamics Group T-3 [10]. It is curious that more examples of subtracting truncation errors have not appeared. One reason may be that higher-order truncation errors involve higher-order derivatives, and approximating such terms means reaching out over more than just one or two grid elements. This becomes a problem at material boundaries, where there are no grid elements extending outside the boundary.

Incompressible flows

Incompressible flow models have been a remarkable success, but they are not without their limitations. A simple example will illustrate one of the difficulties of CFD that needs more attention: the collapse of a steam bubble in a pool of water, a problem associated with steam suppression in light-water nuclear reactors. The injection of the steam bubble is slow enough that the water can be treated as incompressible, but as the steam condenses the bubble collapses. At the instant of collapse, all the water rushing in to fill the bubble space must instantly be stopped if the flow is incompressible. This requires a large pressure pulse to terminate the inflowing momentum, one much larger than experimentally observed.

The problem is that the final collapse happens over a small time interval, and the assumption of incompressibility in the fluid is not satisfied during the short collapse time. In this case some compressibility must be allowed for the pressure to propagate out at a certain rate which only stops the incoming fluid momentum out to the distance the pressure wave has traveled at any given time [11].
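The limited-compressibility idea of [11] can be summarized, as a paraphrase in the notation of the divergence equation above rather than a quotation, by replacing the zero-divergence condition with

$latex \displaystyle \frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}=-\frac{1}{\rho {{c}^{2}}}\frac{\partial p}{\partial t}$

where c is the speed of sound. Pressure signals now propagate at the finite speed c, so the inflowing momentum is stopped only out to the distance a pressure wave has traveled, and the incompressible limit is recovered as c becomes large.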

This complicates the numerical solution a bit but is necessary for physical accuracy. Importantly, it illustrates the need for considerable caution in developing numerical models. Effort must be made to prepare for exceptions and the possible need for the addition of more physical processes to make the models more realistic. This is one major area of development necessary for the future of CFD.

The breakthrough of the VOF method

The use of marker particles in the MAC method was a breakthrough that allowed free fluid surfaces to be modeled. The use of particles, however, is limited: the discrete changes that occur when cells gain or lose particles can only be smoothed by using many particles, and more particles imply more computational time. Additionally, marker particles do not remain uniformly distributed. For example, in a drop of fluid falling onto a rigid surface, the particles bunch closer together in the downward direction and spread farther apart in the horizontal direction. The spreading might even leave grid cells without markers where there should be some.

Figure 6. The earliest example of a volume-of-fluid treatment for free fluid surfaces; the left edge is an axis of rotation for cylindrical coordinates. A cylinder of liquid hits the surface of a pool of liquid, generating a splash [13].

To remedy this, an alternative model was proposed that has caught on, as evidenced by the 19,134 citations (as of May 2023) to the original publication [12]. This is the volume-of-fluid (VOF) method, in which the fractional volume of a grid element occupied by fluid is recorded in a variable typically denoted by F, for fluid fraction. The advantage of this variable is that it ranges continuously from zero to one, so it does not have the discreteness of particles. Furthermore, it automatically accounts for the breakup or coalescence of fluid masses. For these reasons, the VOF method is now the most often used method for numerically tracking free surfaces and other fluid interfaces.

The origin of the VOF method lay in models being developed by the T-3 Group for water/steam flows associated with light-water nuclear reactor safety studies. In two-phase water/steam modeling it is customary to use a steam volume fraction in the mixture to evaluate the mixture mass and other properties. Musing on this, it was natural to wonder: why not allow the volume fraction to take values between 0 and 1, with the transition located at a liquid surface? For this to work, special numerical approximations were required to keep the interface sharp as it moved through the grid. The first example [13] is shown (crudely) in Figure 6.
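The flavor of those special approximations can be seen in one dimension. The demo below is a simplified reconstruction of the donor-acceptor flux idea, not the production algorithm of [12]: the fluid volume crossing a face in a step is limited by what the donor cell holds, while using the downstream (acceptor) cell's fluid fraction keeps the interface compressed to about one cell.

```python
# 1-D donor-acceptor advection of the fluid fraction F for uniform
# rightward flow; cou = U*dt/dx is the Courant number.

def vof_step(F, cou):
    """One donor-acceptor step (inflow of full fluid, F = 1, at the left)."""
    n = len(F)
    flux = [0.0] * (n + 1)            # fluid volume crossing each face
    flux[0] = cou                     # fully filled fluid enters at the left
    for f in range(1, n):             # face f: donor cell f-1, acceptor cell f
        FD, FA = F[f - 1], F[f]
        # Extra fluid carried when the void ahead cannot absorb the step:
        CF = max((1.0 - FA) * cou - (1.0 - FD), 0.0)
        flux[f] = min(FA * cou + CF, FD)   # never more than the donor holds
    flux[n] = F[n - 1] * cou          # plain donor flux at the outflow
    return [min(1.0, max(0.0, F[i] + flux[i] - flux[i + 1]))
            for i in range(n)]

F = [1.0] * 10 + [0.0] * 10           # sharp interface after cell 9
for _ in range(8):
    F = vof_step(F, cou=0.5)          # 8 steps x 0.5 cells = 4 cells

mixed = [i for i, f in enumerate(F) if 1e-9 < f < 1.0 - 1e-9]
print(F[13], F[14])                   # 1.0 0.0 -- interface still sharp
print(mixed)                          # []  -- no smeared cells remain
```

After eight steps at a Courant number of 0.5, the interface has advanced exactly four cells and is still perfectly sharp; plain donor-cell advection of F would instead have smeared it over many cells, as in the color example earlier.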

A discussion of limitations and what’s next for CFD?

Numerous alternative computational models have been devised for the advection of the discontinuous fluid fraction variable F, although none is perfect. Nevertheless, the models have been extremely successful in solving many fluid problems having multiple and complicated free surfaces.

While the incompressible flow models solve one important problem, they are still based on partial differential equations requiring the advancement of small fluid elements through a sequence of small time steps. This remains a basic limitation of CFD.

With the continued development of computing hardware, it became possible, in the mid-1970s, to perform fully three-dimensional computations. No new computational techniques were required, only more computations.

What about other approaches to CFD? Is there likely to be a breakthrough technique that will revolutionize the computation of fluid dynamic processes? The past may offer a clue.

Several announcements have been made in the past few years of entirely new modeling methods that are sure to revolutionize CFD and replace current finite-difference methods. Among these are the lattice-Boltzmann method and the smoothed particle hydrodynamics method. While innovative, neither has come close to delivering its promised revolution. Perhaps more study will increase their applicability. In the meantime, it would be wise to be skeptical of extreme claims that lack reliable verification.

Artificial intelligence (AI) has been suggested as an advance that will greatly improve CFD modeling. This is not yet clear. AI rests on the evaluation of many simulations and what might be learned from them. However, the choice of examples to include is critical, and how can one be assured that all possible physical features are represented in the sampling? A real difficulty with AI is that one cannot know what has and has not been included, so its outcomes cannot be properly evaluated at this time.

Quantum computing has advanced recently, and for some special problems has shown great promise. For the vast majority of CFD problems, however, there will have to be many more quantum particles, or qubits, introduced and properly entangled to represent all the complexity of real fluid dynamics. This area requires more study.

Transitioning to commercial CFD

Flow Science Corporate Officers

The discussion so far has been about the development of CFD methods, all of which was accomplished under government and/or academic programs. It was perhaps inevitable that I would eventually transition to commercial CFD. All my experience at the Los Alamos National Laboratory was a lucky break, i.e., being in the right place at the right time. But up until the 1980s, the laboratory could not perform work for non-government organizations, so the CFD methods being developed there could not be applied to important problems facing industry.

It was disappointing to see that work believed to be useful to others was not being used. To rectify this, in 1980 I started a commercial company called Flow Science, Inc. Initially the company performed contract work using the new CFD tools. The truth is that a small company cannot easily exist on contract work because it either has too much work and not enough workers, or too many workers and not enough work.

In 1984, Flow Science, Inc. began to sell its software under the name FLOW-3D instead of selling contract work. It was a good decision. At the time there were a few other companies marketing CFD software, so Flow Science made the decision to concentrate on its expertise, which was CFD for fluid problems involving free surfaces.

The company also decided to use a simple gridding technique, instead of body-fitted grids, and to represent geometry using a technique it developed called the Fractional Area and Volume Obstacle Representation (FAVOR™) method [5]. These choices were based on a long history of using many different CFD techniques and have made FLOW-3D a powerful and easy to use software that is used worldwide.


  1. F.H. Harlow, “Adventures in Physics and Pueblo Pottery: Memoirs of a Los Alamos Scientist,” p. 57, Museum of New Mexico Press (2016).
  2. M.W. Evans and F.H. Harlow, “The Particle-in-Cell Method for Hydrodynamic Calculations,” Los Alamos Scientific Laboratory report LA-2130 (1957).
  3. C.W. Hirt and J.L. Cook, “Perspective Displays for Three-Dimensional Finite Difference Calculations,” J. Comp. Phys. 3, 293 (1975).
  4. C.W. Hirt, “An Arbitrary Lagrangian-Eulerian Computing Technique,” Proc. Second International Conference on Numerical Methods in Fluid Dynamics, University of California, Berkeley, CA, Sept. 15-19 (1970).
  5. C.W. Hirt and J.M. Sicilian, “A Porosity Technique for the Definition of Obstacles in Rectangular Cell Meshes,” Proc. Fourth International Conf. on Ship Hydrodynamics, Natl. Acad. Sciences, Washington, D.C., September 24-27 (1985).
  6. R.S. Hotchkiss and C.W. Hirt, “Particulate Transport in Highly Distorted Three-Dimensional Flow Fields,” Proc. Computer Simulation Conf., San Diego, CA, June (1972).
  7. J. Welch, F.H. Harlow, J.P. Shannon and B.J. Daly, “The MAC Method: A Computing Technique for Solving Viscous, Incompressible, Transient Fluid-Flow Problems Involving Free Surfaces,” Los Alamos Scientific Laboratory report LA-3425 (1965).
  8. A.J. Chorin, “A Numerical Solution of the Navier-Stokes Equations,” Math. Comp. 22, 745 (1968).
  9. C.W. Hirt, “Heuristic Stability Theory for Finite Difference Equations,” J. Comp. Phys. 2, No. 4, 339 (1968); also Los Alamos report LA-DC-8976.
  10. W.C. Rivard, O.A. Farmer, T.D. Butler and P.J. O’Rourke, “A Method for Increased Accuracy in Eulerian Fluid Dynamics Calculations,” Los Alamos Scientific Laboratory report LA-5426-MS (Oct. 1973).
  11. C.W. Hirt and B.D. Nichols, “Adding Limited Compressibility to Incompressible Hydrocodes,” J. Comp. Phys. 34, 390 (1980).
  12. C.W. Hirt and B.D. Nichols, “Volume of Fluid (VOF) Method for the Dynamics of Free Boundaries,” J. Comp. Phys. 39, 201 (1981).
  13. B.D. Nichols and C.W. Hirt, “Methods for Calculating Multi-Dimensional, Transient Free Surface Flows Past Bodies,” Proc. First International Conference on Numerical Ship Hydrodynamics, Gaithersburg, Maryland, Oct. 20-23 (1975).
► Host a FLOW-3D HYDRO Workshop
  13 Jul, 2023

If your company is located in the US or Canada and is interested in hosting a FLOW-3D HYDRO workshop, please complete the form below. Hosts receive three free workshop registrations. Contact Workshop Support with any questions.

► Flow Science Earns Family Friendly Business Award® at the GOLD Level
  11 Jul, 2023

Flow Science is recognized as a committed leader in implementing family friendly policies in the workplace.

Santa Fe, NM, July 11, 2023 — For the seventh year in a row, Flow Science has earned distinction for its workplace policies by Family Friendly New Mexico, a statewide initiative developed to recognize employers that have adopted policies that give New Mexico businesses an advantage in recruiting and retaining the best employees.

“Flow Science is proud to offer benefits and policies that support our employees and their families in the pursuit of wellness, work-life balance, job satisfaction and community connection. Benefits and policies such as employer paid healthcare, wellness incentives, paid vacation, sick, parental and medical leave, generous employer match on 401k and charitable contributions, yearend cash bonuses, and ability to work remotely make Flow Science an employer of choice in New Mexico and nationwide,” said Aimee Abby, Director of HR at Flow Science.

The Family Friendly New Mexico initiative offers training, support and resources to businesses on how to implement family friendly policies, provides recognition to businesses and organizations that offer their employees family friendly benefits, and acts as a resource and clearinghouse of information for businesses and community leaders as they develop policies on issues such as paid family leave and childcare assistance.

“As we grow the state’s economy, we have the opportunity to be a national leader in offering New Mexicans workplaces that help companies attract and keep the best workers,” said Giovanna Rossi, founder and Director of Family Friendly New Mexico. “Implementing family friendly policies can be a simple, concrete investment a company can make to ensure it can compete for highly qualified employees. Studies have shown that costs associated with creating family friendly benefits are more than made up for in improved productivity, employee morale and employee retention. We are happy to recognize Flow Science as a committed leader in implementing family friendly policies.”

About Flow Science

Flow Science, Inc. is a privately held software company specializing in computational fluid dynamics software for industrial and scientific applications worldwide. Flow Science has distributors and technical support services for its FLOW-3D products in nations throughout the Americas, Europe, Asia, the Middle East, and Australasia. Flow Science is headquartered in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.

683 Harkle Rd.

Santa Fe, NM 87505

+1 505-982-0088

► FLOW-3D World Users Conference 2024
    5 Jul, 2023

We invite our customers from around the world to join us at the FLOW-3D World Users Conference 2024. The conference will be held at the Steigenberger Hotel Hamburg on June 10-12, 2024 in Hamburg, Germany. Join fellow engineers, researchers and scientists from some of the world’s most renowned companies and institutions to hone your simulation skills, explore new modeling approaches and learn about the latest software developments. The conference will feature application-specific tracks, free advanced training sessions, technical presentations by our customers, and the latest product developments presented by Flow Science’s senior technical staff. The conference will be co-hosted by Flow Science Deutschland. 

Registration is now open!

Social Events

Opening Reception

We invite all conference attendees and their guests to join us for the opening reception, which will be held on Monday, June 10 from 18:30-21:00. Drinks and refreshments will be served in the restaurant of the conference hotel.

Conference Dinner

We are very pleased to invite all conference attendees and their guests to the conference dinner on the evening of Tuesday, June 11 at the VLET in der Speicherstadt, a renowned restaurant in Hamburg. The restaurant is a short walk from the conference hotel. Directions will be provided in the conference materials.

VLET in der Speicherstadt
Am Sandtorkai 23/24
20457 Hamburg

TEL: +49 40 200064-222

Conference Registration

Register for the FLOW-3D World Users Conference 2024 in Hamburg, Germany, June 10-12! Connect with FLOW-3D users around the world. Enjoy social events, a poster session, technical presentations, product development talks and free advanced training.

Conference Information

Important Dates

  • June 10, 2024: Advanced Training Sessions
  • June 11-12, 2024: Conference Sessions
  • June 11, 2024: Conference Dinner


Conference Hotel

Steigenberger Hotel Hamburg

Heiligengeistbrücke 4
20459 Hamburg Germany

tel: +49 40 36806-0


Meeting Room Rate

Standard room, including breakfast: 203 euro

Superior room, including breakfast: 223 euro

Mentor Blog top

► News Article: Graphcore leverages multiple Mentor technologies for its massive, second-generation AI platform
  10 Nov, 2020

Graphcore has used a range of technologies from Mentor, a Siemens business, to successfully design and verify its latest M2000 platform based on the Graphcore Colossus™ GC200 Intelligence Processing Unit (IPU) processor.

► Technology Overview: Simcenter FLOEFD 2020.1 Package Creator Overview
  20 Jul, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo or CATIA V5. With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 Electrical Element Overview
  20 Jul, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo or CATIA V5. With this release, Simcenter FLOEFD allows users to add a component into a direct current (DC) electro-thermal calculation by the given component’s electrical resistance. The corresponding Joule heat is calculated and applied to the body as a heat source. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 Battery Model Extraction Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo or CATIA V5. With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get to the required input parameters faster and more easily. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 BCI-ROM and Thermal Netlist Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo or CATIA V5. With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate, while still maintaining a high level of accuracy. Watch this short video to learn how.

► On-demand Web Seminar: Avoiding Aerospace Electronics Failures, thermal testing and simulation of high-power semiconductor components
  27 May, 2020

High semiconductor temperatures may lead to component degradation and ultimately failure. Proper semiconductor thermal management is key for design safety, reliability and mission critical applications.

Tecplot Blog top

► What Computer Hardware Should I Buy for Tecplot 360?
  15 Mar, 2023

A common question from Tecplot 360 users is what hardware they should buy to achieve the best performance. The answer is, invariably: it depends. That said, we’ll try to demystify how Tecplot 360 utilizes your hardware so you can make an informed decision in your hardware purchase.

Let’s have a look at each of the major hardware components on your machine and show some test results that illustrate the benefits of improved hardware.

Test data

Our test data is an OVERFLOW simulation of a wind turbine. The data consists of 5,863 zones, totaling 263,075,016 elements and the file size is 20.9GB. For our test we:

  • Load the data.
  • Compute Q-Criterion.
  • Display an iso-surface of Q-Criterion (the resulting iso-surface consists of 32,248,635 triangular elements).
  • Export an image to PNG format.

The test was performed using 1, 2, 4, 8, 16, and 32 CPU-cores, with the data on a local HDD (spinning hard drive) and local SSD (solid state disk). Limiting the number of CPU cores was done using Tecplot 360’s --max-available-processors command line option.

Data was cleared from the disk cache between runs using RamMap.

Machine Specs

  • Windows 10
  • 32 logical (16 physical) CPU cores. Intel Xeon E5-2650 v2 @ 2.60GHz
  • ATA ST2000DM001 Spinning Hard Disk
  • ATA INTEL SSDSC2BA40 Solid State Disk
  • Intel Gigabit Ethernet Adapter
  • 128GB DDR3 RAM
  • Nvidia Quadro K4000 graphics card


Advice: Buy the fastest disk you can afford.

In order to generate any plot in Tecplot 360, you need to load data from disk. Some plots require more data to be loaded off disk than others, and some file formats are more efficient than others – particularly formats that summarize the file’s contents in a single header at the top or bottom of the file. Tecplot’s SZPLT is a good example of a highly efficient file format.

We found that the SSD was 61% faster than the HDD when using all 32 CPU-cores for this post-processing task.

All this said – if your data are on a remote server (network drive, cloud storage, HPC, etc…), you’ll want to ensure you have a fast disk on the remote resource and a fast network connection.

With Tecplot 360 the SZPLT file format coupled with the SZL Server could help here. With FieldView you could run in client-server mode.

Disk Performance at 32-cores


Advice: Buy the fastest CPU, with the most cores, that you can afford. But realize that performance is not always linear with the number of cores.

Most of Tecplot 360’s data compute algorithms are multi-threaded – meaning they’ll use all available CPU-cores during the computation. These include (but are not limited to): Calculation of new variables, slices, iso-surfaces, streamtraces, and interpolations. The performance of these algorithms improves linearly with the number of CPU-cores available.

You’ll also notice that the overall performance improvement is not linear with the number of CPU-cores. This is because loading data off disk becomes the dominant operation, so the scaling curve asymptotically approaches the disk read speed.

HDD vs SSD Performance Scaling

You might notice that the HDD performance actually got worse beyond 8 CPU-cores. We believe this is because the HDD on this machine was just too slow to keep up with 16 and 32 concurrent threads requesting data.

It’s important to note that with data on the SSD the performance improved all the way to 32 CPU-cores. Further reinforcing the earlier advice – buy the fastest disk you can afford.
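
The scaling behavior described above can be sketched with a simple Amdahl-style model (the timings below are made-up numbers for illustration, not Tecplot measurements): the disk read is serial, while the compute phase parallelizes across cores.

```python
# Amdahl-style sketch: wall time = serial disk read + perfectly parallel compute.
def total_time(cores, read_s=60.0, compute_s=120.0):
    """Wall time for a post-processing run on `cores` CPU-cores."""
    return read_s + compute_s / cores

speedup = {n: total_time(1) / total_time(n) for n in (1, 2, 4, 8, 16, 32)}
# Speedup flattens as cores grow: the limit is (read_s + compute_s) / read_s,
# so past a certain core count a faster disk buys more than extra cores.
```

With these example numbers the speedup can never exceed 3x no matter how many cores are added, which is exactly why the disk advice comes first.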


Advice: Buy as much RAM as you need, but no more.

You might be thinking: “Thanks for nothing – really, how much RAM do I need?”

Well, that’s something you’re going to have to figure out for yourself. The more data Tecplot 360 needs to load to create your plot, the more RAM you’re going to need. Computed iso-surfaces can also be a large consumer of RAM – such as the iso-surface computed in this test case.

If you have transient data, you may want enough RAM to post-process a couple time steps simultaneously – as Tecplot 360 may start loading a new timestep before unloading data from an earlier timestep.

The amount of RAM required is going to be different depending on your file format, cell types, and the post-processing activities you’re doing. For example:

  • A structured dataset requires less RAM than an unstructured dataset: structured data has implicit cell connectivity, while unstructured data must store explicit cell connectivity, which takes additional memory. For example, we compared a 100 million cell structured dataset with an equivalent unstructured dataset, plotting one slice and one iso-surface. The peak RAM required by Tecplot 360 2022 R2 was:
    • Structured dataset: 2.1GB RAM
    • Unstructured dataset: 8.8GB RAM
  • A simple plot of your surface data colored by a scalar is going to require less RAM than computing Q-Criterion and rendering an iso-surface. Why? Because computing Q-Criterion requires loading the volume data, plus several scalars. And then plotting the iso-surface requires the generation of new data in RAM.
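
A back-of-envelope sketch of that connectivity overhead (assuming 8-node hex cells and 32-bit node indices; Tecplot 360’s actual allocations will differ):

```python
def explicit_connectivity_bytes(num_cells, nodes_per_cell=8, index_bytes=4):
    """Unstructured grids store a node index per cell corner; structured
    grids derive connectivity from (i, j, k) ordering, so they store none."""
    return num_cells * nodes_per_cell * index_bytes

# 100 million hex cells -> ~3.2 GB of connectivity alone, before any
# solution variables, which is why the unstructured case needs far more RAM.
connectivity_gb = explicit_connectivity_bytes(100_000_000) / 1e9
```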

When testing the amount of RAM used by Tecplot 360, make sure to set the Load On Demand strategy to Minimize Memory Use (available under Options>Performance).

Load on Demand

This will give you an understanding of the minimum amount of RAM required to accomplish your task. When set to Auto Unload (the default), Tecplot 360 will maintain more data in RAM, which improves performance. The amount of data Tecplot 360 holds in RAM is dictated by the Memory threshold (%) field, seen in the image above. So you – the user – have control over how much RAM Tecplot 360 is allowed to consume.

Graphics Card

Advice: Most modern graphics cards are adequate; even Intel integrated graphics provide reasonable performance. Just make sure you have up-to-date graphics drivers. If you have an Nvidia graphics card, favor the “Studio” drivers over the “Game Ready” drivers. The “Studio” drivers are typically more stable and offer better performance for the types of plots produced by Tecplot 360.

Many people ask specifically what type of graphics card they should purchase. This is, interestingly, the least important hardware component (at least for most of the plots our users make). Most of the post-processing pipeline is dominated by the disk and CPU, so the time spent rendering the scene is a small percentage of the total.

That said – there are some scenes that will stress your graphics card more than others. Examples are:

  • Showing lots of spherical scatter symbols
  • Many iso-surfaces or a complex iso-surface

Note that Tecplot 360’s interactive graphics performance currently (2023) suffers on Apple Silicon (M1 & M2 chips). The Tecplot development team is actively investigating solutions.


As with most things in life, striking a balance is important. You can spend a huge amount of money on CPUs and RAM, but if you have a slow disk or slow network connection, you’re going to be limited in how fast your post-processor can load the data into memory.

So, evaluate your post-processing activities to try to understand which pieces of hardware may be your bottleneck.

For example, if you:

  • Load a lot of timesteps and render simple objects like slices or just surfaces, your process is dominated by I/O – consider a fast disk or network connection.
  • Have a process that is compute heavy – like creating complicated iso-surfaces, computing new variables, or doing interpolations – consider more CPU cores.
  • Render a lot of images for a single dataset – for example multiple view angles of the same dataset, your process will spend a lot of time rendering – consider a higher-end GPU.

And again – make sure you have enough RAM for your workflow.

Try Tecplot 360 for Free

The post What Computer Hardware Should I Buy for Tecplot 360? appeared first on Tecplot Website.

► FieldView joins – Merger Update
  27 Feb, 2023

Three years after our merger began, we can report that the combined FieldView and Tecplot team is stronger than ever. Customers continue to receive the highest quality support and new product releases and we have built a solid foundation that will allow us to continue contributing to our customers’ successes long into the future.

This month we have taken another step by merging the FieldView website into ours. Our social media outreach will also be combined. Stay up to date with news and announcements by subscribing and following us on social media.

AIAA SciTech Team 2023

Members of Tecplot 360 & FieldView teams exhibit together at AIAA SciTech 2023. From left to right: Shane Wagner, Charles Schnake, Scott Imlay, Raja Olimuthu, Jared McGarry and Yves-Marie Lefebvre. Not shown are Scott Fowler and Brandon Markham.

It’s been a pleasure seeing two groups that were once competitors come together as a team, learn from each other and really enjoy working together.

– Yves-Marie Lefebvre, Tecplot CTO & FieldView Product Manager.

Our customers have seen some of the benefits of our merger in the form of streamlined services from the common Customer Portal, simplified licensing, and license renewals. Sharing expertise and assets across teams has already led to the faster implementation of modules such as licensing and CFD data loaders. By sharing our development resources, we’ve been able to invest more in new technology, which will soon translate to increased performance and new features for all products.

Many of the improvements are internal to our organization but will have lasting benefits for our customers. Using common development tools and infrastructure will enable us to be as efficient as possible to ensure we can put more of our energy into improving the products. And with the backing of the larger organization, we have a firm foundation to look long term at what our customers will need in years to come.

We want to thank our customers and partners for their support and continued investment as we endeavor to create better tools that empower engineers and scientists to discover, analyze and understand information in complex data, and effectively communicate their results.

Subscribe to Tecplot News

The post FieldView joins – Merger Update appeared first on Tecplot Website.

► Faster Visualization of Higher-Order Finite-Element Data
  13 Feb, 2023

One of the most memorable parts of my finite-elements class in graduate school was a comparison of linear elements and higher-order elements for the structural analysis of a dam. As I remember, they were able to duplicate the results obtained with 34 linear elements by using a SINGLE high-order element. This made a big impression on me, but the skills I learned at that time remained largely unused until recently.

You see, my Ph.D. research and later work used finite-volume CFD codes to solve steady-state viscous flows. For steady flows, there didn’t seem to be much advantage to using higher than 2nd- or 3rd-order accuracy.

Increasing Usage of Higher-Order Methods

This has changed recently as the analysis of unsteady vortical flows has become more common. The use of higher-order (greater than second order) computational fluid dynamics (CFD) methods is increasing. Popular government and academic CFD codes such as FUN3D, KESTREL, and SU2 have released, or are planning to release, versions that include higher-order methods. This is because higher-order accurate methods offer the potential for better accuracy and stability, especially for unsteady flows. This trend is likely to continue.

CFD 2030 Vision

Commercial visual analysis codes are not yet providing full support for higher-order solutions. The CFD Vision 2030 Study states

 “…higher-order methods will likely increase in utilization during this time frame, although currently the ability to visualize results from higher order simulations is highly inadequate. Thus, software and hardware methods to handle data input/output (I/O), memory, and storage for these simulations (including higher-order methods) on emerging HPC systems must improve. Likewise, effective CFD visualization software algorithms and innovative information presentation (e.g., virtual reality) are also lacking.”

The isosurface algorithm described in this paper is the first step toward improving higher-order element visualization in the commercial visualization code Tecplot 360.

Higher-Order Finite-Element Techniques

Higher-order methods can be based on either finite-difference methods or finite-element methods. While some popular codes use higher-order finite-difference methods (OVERFLOW, for example), this paper will focus on higher-order finite-element techniques. Specifically, we will present a memory-efficient recursive subdivision algorithm for visualizing the isosurface of higher-order element solutions.

In previous papers we demonstrated this technique for quadratic tetrahedral, hexahedral, pyramid, and prism elements with Lagrangian polynomial basis functions. In this paper Optimized Implementation of Recursive Sub-Division Technique for Higher-Order Finite-Element Isosurface and Streamline Visualization we discuss the integration of these techniques into the engine of the commercial visualization code Tecplot 360 and discuss speed optimizations. We also discuss the extension of the recursive subdivision algorithm to cubic tetrahedral and pyramid elements, and quartic tetrahedral elements. Finally, we discuss the extension of the recursive subdivision algorithm to the computation of streamlines.
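
As a hypothetical 1D analogue of the core idea (the paper’s actual algorithm operates on 3D elements), one can recursively subdivide a quadratic Lagrange element until it is locally near-linear, then locate the iso-value crossing by linear interpolation:

```python
def quad_lagrange(f0, f1, f2, t):
    """Quadratic Lagrange interpolation on [0, 1] with nodes at t = 0, 0.5, 1."""
    return f0 * 2*(t - 0.5)*(t - 1) - f1 * 4*t*(t - 1) + f2 * 2*t*(t - 0.5)

def iso_crossings(f0, f1, f2, iso, t0=0.0, t1=1.0, tol=1e-6, depth=0):
    """Return parametric locations where the quadratic crosses `iso`."""
    tm = 0.5 * (t0 + t1)
    fa, fm, fb = (quad_lagrange(f0, f1, f2, t) - iso for t in (t0, tm, t1))
    # Midpoint deviation from the linear chord measures local curvature.
    if depth >= 30 or abs(fm - 0.5 * (fa + fb)) < tol:
        if fa == 0.0:
            return [t0]                               # crossing at left end
        if fa * fb < 0.0:
            return [t0 + (t1 - t0) * fa / (fa - fb)]  # linear interpolation
        return []
    return (iso_crossings(f0, f1, f2, iso, t0, tm, tol, depth + 1) +
            iso_crossings(f0, f1, f2, iso, tm, t1, tol, depth + 1))
```

For f(t) = t² (node values 0, 0.25, 1) and iso = 0.25, the recursion returns the single crossing at t = 0.5; the same subdivide-until-linear test drives the treatment of full 3D elements.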

Read the White Paper (PDF)



The post Faster Visualization of Higher-Order Finite-Element Data appeared first on Tecplot Website.

► Webinar: Tecplot 360 2022 R2
  15 Dec, 2022

In this release, we are very excited to offer “Batch-Pack” licensing for the first time. A Batch-Pack license enables a single user access to multiple concurrent batch instances of our Python API (PyTecplot) while consuming only a single license seat. This option will reduce license contention and allow for faster turnaround times by running jobs in parallel across multiple nodes of an HPC. All at a substantially lower cost than buying additional license seats.


Data courtesy of ZJ Wang, University of Kansas, visualization by Tecplot.

Webinar Agenda for 360 2022 R2

  • Tecplot at a Glance
  • Tecplot 360 Suite of Tools [02:11]
  • Overview of What’s New in Tecplot 360 2022 R2 [03:15]
  • Batch-Packs [04:25]
  • Critical Bug Fixes [8:29]
  • Loader Updates [11:16]
  • TecIO Updates [15:37]
  • Platform Updates [17:15]
  • Higher-Order Element Technology Preview [18:50]
  • Questions & Answers [27:26]


Get a Free Trial   Update Your Software

The post Webinar: Tecplot 360 2022 R2 appeared first on Tecplot Website.

► Introducing 360 “Batch-Packs”
  15 Dec, 2022

A license booster for engineers who want maximum throughput at minimum cost.

Ask us about Batch-Packs!

Call 1.800.763.7005 or 425.653.1200

Batch-mode is a term nearly as old as computers themselves. Despite its age, however, it is representative of a concept that is as relevant today as it ever was, perhaps even more so: headless (scripted, programmatic, automated, etc.) execution of instructions. Lots of engineering is done interactively, of course, but oftentimes the task is a known quantity and there is a ton of efficiency to be gained by automating the computational elements. That efficiency is realized ten times over when batch-mode meets parallelization – and that’s why we thought it was high time we offered a batch-mode licensing model for Tecplot 360’s Python API, PyTecplot. We call them “batch-packs.”

Tecplot 360 Batch-Packs

Tecplot 360 batch-packs work by enabling users to run multiple concurrent instances of our Python API (PyTecplot) while consuming only a single license seat. It’s an optional upgrade that any customer can add to their license for a fee. The benefit? The fee for a batch-pack is substantially lower than buying an equivalent number of license seats – which makes it easier to justify outfitting your engineers with the software access they need to reach peak efficiency.

Batch-Packs Explained

Here is a handy little diagram we drew to help explain it better:

Batch Packs in Tecplot 360 2022 R2

Each network license allows ‘n’ seats. Traditionally, each instance of PyTecplot consumes 1 seat. Prior to the 2022 R2 release of Tecplot 360 EX, licenses only operated using the paradigm illustrated in the first two rows of the diagram above (that is, a user could check out up to ‘n’ seats, or ‘n’ users could check out a single seat). Now customers can elect to purchase batch-packs, which will enable each seat to provide a single user with access to ‘m’ instances of PyTecplot, as shown in the bottom row of the figure.
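
The seat arithmetic can be sketched as follows (a hypothetical illustration, not actual license-server logic):

```python
def max_batch_instances(seats, m=1):
    """Concurrent PyTecplot instances available to one user: each checked-out
    seat grants `m` instances with a batch-pack, or 1 without one."""
    return seats * m

# Traditional licensing: 4 seats -> 4 concurrent instances.
# With an m = 8 batch-pack: a single seat runs 8 parallel batch jobs.
```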

Batch-Pack Benefits

In addition to a cost reduction (vs. purchasing an equivalent number of network seats), batch-pack licensees will enjoy:

  • Reduced license contention. Since each user is guaranteed “m” PyTecplot instances they can run post-processing jobs in parallel without fear of their job failing due to license contention.
  • Faster turnaround times by running your post-processing jobs in parallel across multiple nodes of an HPC, or even on a single workstation. Running across multiple nodes may help alleviate memory limitations for large datasets.

Learn More

We’re excited to offer this new option and hope that our customers can make the most of it.

The post Introducing 360 “Batch-Packs” appeared first on Tecplot Website.

► Colormap in Tecplot 360
    7 Dec, 2022

The Rainbow Colormap Sucks and Here’s Why…

If you care about how you present your data and how people perceive your results, stop reading and watch this talk by Kristen Thyng on YouTube. Seriously, I’ll wait, I’ve got the time.

Why Colormaps are Important

Which colormap you choose, and which data values are assigned to each color can be vitally important to how you (or your clients) interpret the data being presented. To illustrate the importance of this, consider the image below.

Why Colormaps are Important

Figure 1. Visualization of the Southeast United States. [4]

With the colormap on the left, one can hardly tell what the data represents, but with a modified colormap and strategic transitions at zero (sea level) one can clearly tell that the data represents the Southeast of the United States. Even without data labels, one might infer that the color represents elevation. Without a good colormap, and without strategic placement of the color transitions you may be inaccurately representing your data.

Why You Should Consider Perceptually Uniform Colormaps

Before I explain what a perceptually uniform colormap is, let’s start with everyone’s favorite: the rainbow colormap. We all love the rainbow colormap because it’s pretty and is recognizable. Everyone knows “ROY G BIV” so we think of this color progression as intuitive, but in reality (for scalar values) it’s anything but.

Consider the image below, which represents the “Estimated fraction of precipitation lost to evapotranspiration”. This image makes it appear that there’s a very distinct difference in the scalar value right down the center of the United States. Is there really a sudden change in the values right in the middle of the Great Plains? No – this is an artifact of the colormap, which is misleading you!

Rainbow Colormap

Figure 2. This plot illustrates how the rainbow colormap is misleading, giving the perception that there is a distinct difference in the middle of the US, when in fact the values are more continuous. [2]

To interpret the data correctly it’s important that “the perceptual interpolation matches the underlying scalars of the map” [6]

Comparison of Perceptually Uniform and Rainbow Colormaps

So let’s dive a little deeper into the rainbow colormap and how it compares to perceptually uniform (or perceptually linear) colormaps.

Consider the six images below: what are we looking at? If you were to look only at the top three images, you might get the impression that the scalar value has non-linear changes, while this value (radius) is actually changing linearly. If presented with the rainbow colormap, you’d be forgiven for not guessing that the object is a cone, colored by radius.


Figure 3. An example of how the rainbow colormap imparts information that does not actually exist in the data.

So why does the rainbow colormap mislead? It’s because the color values are not perceptually uniform. In this image you can see how the perceptual changes in the colormap vary from one end to the other. The gray scale and “cmocean – haline” colormaps shown here are perceptually uniform, while the rainbow colormap adds information that doesn’t actually exist.

Perceptual Change

Figure 4. Visualization of the perceptual changes of three colormaps. [5]
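
One rough way to quantify this non-uniformity is to check whether brightness changes monotonically along the colormap (a sketch using the Rec. 709 relative-luminance formula; real perceptual metrics such as CIELAB capture more than brightness):

```python
def rel_luminance(rgb):
    """Relative luminance of an 8-bit sRGB color (Rec. 709 coefficients)."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Rainbow anchor colors, blue -> cyan -> green -> yellow -> red:
rainbow = [(0, 0, 255), (0, 255, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)]
lums = [rel_luminance(c) for c in rainbow]
monotonic = all(a < b for a, b in zip(lums, lums[1:]))  # False: brightness dips
```

A grayscale ramp passes this monotonicity check; the rainbow anchors do not, which is one reason the eye reads spurious structure into rainbow-mapped scalars.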

This blog post isn’t meant to be a technical article, so I won’t go into all the specifics here, but if you want to dive deeper into the how and why of the perceptual changes in colors, check out the References.

So which colormap should I use?

Well, that depends. Tecplot 360 and FieldView are typically used to represent scalar data, so Sequential and Diverging colormaps will probably get used the most – but there are others we will discuss as well.

Sequential colormaps

Sequential colormaps are ideal for scalar values in which there’s a continuous range of values. Think pressure, temperature, and velocity magnitude. Here we’re using the ‘cmocean – thermal’ colormap in Tecplot 360 to represent fluid temperature in a Barracuda Virtual Reactor simulation of a cyclone separator.


Diverging Colormaps

Diverging colormaps are a great option when you want to highlight a change in values. Think ratios, where the values span from -1 to 1 and it helps to highlight the value at zero.

Diverging Colormaps

The diverging colormap is also useful for “delta plots” – In the plot below, the bottom frame is showing a delta between the current time step and the time average. Using a diverging colormap, it’s easy to identify where the delta changes from negative to positive.


Qualitative Colormaps

If you have discrete data that represent things like material properties – say “rock, sand, water, oil” – these data can be represented using integer values and a qualitative colormap. This type of colormap will do a good job of supplying distinct colors for each value. An example of this, from a CONVERGE simulation, can be seen below. Instructions to create this plot can be found in our blog, Creating a Materials Legend in Tecplot 360.
Qualitative Colormaps

Circular (Phase) Colormaps

Perhaps infrequently used, but still important to point out, is the “phase” colormap. This is particularly useful for values which are cyclic – such as a theta value used to represent wind direction in this FVCOM simulation result. If we were to use a simple sequential colormap (inset plot below) you would observe what appears to be a large gradient where the wind direction is 360° vs. 0°. Logically these are the same value, and using the “cmocean – phase” colormap allows you to communicate the continuous nature of the data.
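
The wrap-around property can be sketched with a plain HSV hue mapping (illustrative only; unlike cmocean’s “phase” map, raw HSV is not perceptually uniform):

```python
import colorsys

def phase_color(theta_deg):
    """Map a cyclic angle (e.g. wind direction) to a color via hue,
    so 0 and 360 degrees receive the identical color."""
    hue = (theta_deg % 360.0) / 360.0
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

A sequential colormap applied to the same angle would place its two extreme colors at 0° and 360°, creating exactly the false gradient described above.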

Contrast in Pink

Purposeful Breaks in the Colormap

There are times when you want to force a break in a continuous colormap. In the image below, the colormap is continuous from green to white, but we want to ensure that values at or below zero are represented as blue – to indicate water. In Tecplot 360 this can be done using the “Override band colors” option, in which we override the first color band to be blue. This makes the plot more realistic and therefore easier to interpret.
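
The override amounts to a piecewise colormap, which can be sketched like this (the hex colors and 0-1000 range are invented for illustration):

```python
def lerp_hex(c0, c1, t):
    """Linearly blend two '#rrggbb' colors by fraction t in [0, 1]."""
    a = [int(c0[i:i + 2], 16) for i in (1, 3, 5)]
    b = [int(c1[i:i + 2], 16) for i in (1, 3, 5)]
    return "#" + "".join(f"{round(x + (y - x) * t):02x}" for x, y in zip(a, b))

def elevation_color(z, z_max=1000.0):
    """Green-to-white ramp for land, with everything at or below sea level
    overridden to a single blue band so it reads as water."""
    if z <= 0.0:
        return "#2060c0"  # overridden first band: water
    return lerp_hex("#208020", "#ffffff", min(z / z_max, 1.0))
```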


Best Practices

  • Avoid red and green in the same plot. About 1 in 12 men are color blind, with red-green color blindness being the most common [7].
  • Use different colormaps for different data and objects. Using colors that are associated with the physical object or property can help make the visualization more intuitive. For example, blue hues for rain and ice, green hues for algae, yellow and orange hues for heat.
  • Don’t like our colormaps? Create your own! Tecplot 360 allows you to supply your own custom colormaps as well as change which colormap is default. [1]




The post Colormap in Tecplot 360 appeared first on Tecplot Website.

Schnitger Corporation, CAE Market top

► Ansys adds Zemax optical imaging system simulation to its portfolio
  31 Aug, 2021

Ansys adds Zemax optical imaging system simulation to its portfolio

Ansys has announced that it will acquire Zemax, maker of high-performance optical imaging system simulation solutions. The terms of the deal were not announced, but it is expected to close in the fourth quarter of 2021.

Zemax’s OpticStudio is often mentioned when users talk about designing optical, lighting, or laser systems. Ansys says that the addition of Zemax will enable Ansys to offer a “comprehensive solution for simulating the behavior of light in complex, innovative products … from the microscale with the Ansys Lumerical photonics products, to the imaging of the physical world with Zemax, to human vision perception with Ansys Speos [acquired with Optis]”.

This feels a lot like what we’re seeing in other forms of CAE, for example, when we simulate materials from nano-scale all the way to fully-produced-sheet-of-plastic-scale. There is something to be learned at each point, and simulating them all leads, ultimately, to a more fit-for-purpose end result.

Ansys is acquiring Zemax from its current owner, EQT Private Equity. EQT’s announcement of the sale says that “[w]ith the support of EQT, Zemax expanded its management team and focused on broadening the Company’s product portfolio through substantial R&D investment focused on the fastest growing segments in the optics space. Zemax also revamped its go-to-market sales approach and successfully transitioned the business model toward recurring subscription revenue”. EQT had acquired Zemax in 2018 from Arlington Capital Partners, a private equity firm, which had acquired Zemax in 2015. Why does this matter? Because the path each company takes is different — and it’s sometimes not a straight line.

Ansys says the transaction is not expected to have a material impact on its 2021 financial results.

► Sandvik building CAM powerhouse by acquisition
  30 Aug, 2021

Sandvik building CAM powerhouse by acquisition

Last year Sandvik acquired CGTech, makers of Vericut. I, like many people, thought “well, that’s interesting” and moved on. Then in July, Sandvik announced it was snapping up the holding company for Cimatron, GibbsCAM (both acquired by Battery Ventures from 3D Systems), and SigmaTEK (acquired by Battery Ventures in 2018). Then, last week, Sandvik said it was adding Mastercam to that list … It’s clearly time to dig a little deeper into Sandvik and why it’s doing this.

First, a little background on Sandvik. Sandvik operates in three main spheres: rocks, machining, and materials. For the rocks part of the business, the company makes mining/rock extraction and rock processing (crushing, screening, and the like) solutions. Very cool stuff but not relevant to the CAM discussion.

The materials part of the business develops and sells industrial materials; Sandvik is in the process of spinning out this business. Also interesting but …

The machining part of the business is where things get more relevant to us. Sandvik Machining & Manufacturing Solutions (SMM) has been supplying cutting tools and inserts for many years, via brands like Sandvik, SECO, Miranda, Walter, and Dormer Pramet, and sees a lot of opportunity in streamlining the processes around the use of specific tools and machines. Lightweighting and sustainability efforts in end-industries are driving interest in new materials and more complex components, as well as tighter integration between design and manufacturing operations. That digitalization across an enterprise’s areas of business, Sandvik thinks, plays into its strengths.

According to info from the company’s 2020 Capital Markets Day, rocks and materials are steady but slow revenue growers. The company had set a modest 5% revenue growth target but had consistently been delivering closer to 3% — what to do? Like many others, the focus shifted to (1) software and (2) growth by acquisition. Buying CAM companies ticked both of those boxes, bringing repeatable, profitable growth. In an area the company already had some experience in.

Back to digitalization. If we think of a manufacturer as having (in-house or with partners) a design function, which sends the concept on to production preparation, then to machining, and, finally, to verification/quality control, Sandvik wants to expand outwards from machining to that entire world. Sandvik wants to help customers optimize the selection of tools, the machining strategy, and the verification and quality workflow.

The Manufacturing Solutions subdivision within SMM was created last year to go after this opportunity. It’s got 3 areas of focus: automating the manufacturing process, industrializing additive manufacturing, and expanding the use of metrology to real-time decision making.

The CGTech acquisition last year was the first step in realizing this vision. Vericut is prized for its ability to work with any CAM, machine tool, and cutting tool for NC code simulation, verification, optimization, and programming. CGTech is a long-time supplier of Vericut software to Sandvik’s Coromant production units, so the companies knew one another well. Vericut helps Sandvik close that digitalization/optimization loop — and, of course, gives it access to the many CAM users out there who do not use Coromant.

But verification is only one part of the overall loop, and in some senses, the last. CAM, on the other hand, is the first (after design). Sandvik saw CAM as “the most important market to enter due to attractive growth rates – and its proximity to Sandvik Manufacturing and Machining Solutions’ core business.” Adding Cimatron, GibbsCAM, SigmaTEK, and Mastercam gets Sandvik that much closer to offering clients a set of solutions to digitize their complete workflows.

And it makes business sense to add CAM to the bigger offering:

  1. Sandvik has over 100,000 machining customers, many of which are relatively small, and most of which have a low level of digitalization. Sandvik believes it can bring significant value to these customers, while also providing point solutions to much larger clients
  2. Software is attractive — recurring revenue, growth rates, and margins
  3. CAM lets Sandvik grow in strategic importance with its customers, integrating cutting and tool data with process planning, as a way of improving productivity and part quality
  4. The acquisitions are strong in the Americas and Asia — expanding Sandvik’s footprint to a more even global basis

To head off one question: As of last week’s public statements, anyway, Sandvik has no interest in getting into CAD, preferring to leave that battlefield to others, and continue on its path of openness and neutrality.

And because some of you asked: there is some overlap in these acquisitions, but remarkably little, considering how established these companies all are. GibbsCAM is mostly used for production milling and turning; Cimatron is used in mold and die — and with a big presence in automotive, where Sandvik already has a significant interest; and SigmaNEST is for sheet metal fabrication and material requisitioning.

One interesting (to me, anyway) observation: 3D Systems sold Gibbs and Cimatron to Battery in November 2020. Why didn’t Sandvik snap it up then? Why wait until July 2021? A few possible reasons: Sandvik CEO Stefan Widing has been upfront about his company’s relative lack of efficiency in finding/closing/incorporating acquisitions; perhaps it was simply not ready to do a deal of this type and size eight months earlier. Another possible reason: One presumes 3D Systems “cleaned up” Cimatron and GibbsCAM before the sale (meaning, separating business systems and financials from the parent, figuring out HR, etc.) but perhaps there was more to be done, and Sandvik didn’t want to take that on. And, finally, maybe the real prize here for Sandvik was SigmaNEST, which Battery Ventures had acquired in 2018, and Cimatron and GibbsCAM simply became part of the deal. We may never know.

This whole thing is fascinating. A company out of left field, acquiring these premium PLMish assets. Spending major cash (although we don’t know how much because of non-disclosures between buyer and sellers) for a major market presence.

No one has ever asked me about a CAM roll-up, yet I’m constantly asked about how an acquirer could create another Ansys. Perhaps that was the wrong question, and it should have been about CAM all along. It’s possible that the window for another company to duplicate what Sandvik is doing may be closing since there are few assets left to acquire.

Sandvik’s CAM acquisitions haven’t closed yet, but assuming they do, there’s a strong fit between CAM and Sandvik’s other manufacturing-focused business areas. It’s more software, with its happy margins. And, finally, it lets Sandvik address the entire workflow from just after component design to machining and on to verification. Mr. Widing says that Sandvik first innovated in hardware, then in services – and now, in software to optimize the component manufacturing process. These are the areas where gains will come, he says, in maximizing productivity and tool longevity. Further out, he sees measuring every part to learn how the process can be further optimized. It’s a sound investment in the evolution of both Sandvik and manufacturing.

We all love a good reinvention story, and how Sandvik executes on this vision will, of course, determine if the reinvention was successful. And, of course, there’s always the potential for more news of this sort …

► Missed it: Sandvik also acquiring GibbsCAM, Cimatron & SigmaNEST
  25 Aug, 2021

Missed it: Sandvik also acquiring GibbsCAM, Cimatron & SigmaNEST

I missed this last month — Sandvik also acquired Cambrio, which is the combined brand for what we might know better as GibbsCAM (milling, turning), Cimatron (mold and die), and SigmaNEST (nesting, obvs). These three were spun out of 3D Systems last year, acquired by Battery Ventures — and now sold on to Sandvik.

This was announced in July, and the acquisition is expected to close in the second half of 2021 — we’ll find out on Friday if it already has.

At that time, Sandvik said its strategic aim is to “provide customers with software solutions enabling automation of the full component manufacturing value chain – from design and planning to preparation, production and verification … By acquiring Cambrio, Sandvik will establish an important position in the CAM market that includes both toolmaking and general-purpose machining. This will complement the existing customer offering in Sandvik Manufacturing Solutions”.

Cambrio has around 375 employees and, in 2020, had revenue of about $68 million.

If we do a bit of math, Cambrio’s $68 million + CNC Software’s $60 million + CGTech’s (the maker of Vericut) $54 million add up to $182 million in acquired CAM revenue. Not bad.

More on Friday.

► Mastercam will be independent no more
  25 Aug, 2021

Mastercam will be independent no more

CNC Software and its Mastercam have been a mainstay among CAM providers for decades, marketing its solutions as independent, focused on the workgroup and individual. That is about to change: Sandvik, which bought CGTech late last year, has announced that it will acquire CNC Software to build out its CAM offerings.

According to Sandvik’s announcement, CNC Software brings a “world-class CAM brand in the Mastercam software suite with an installed base of around 270,000 licenses/users, the largest in the industry, as well as a strong market reseller network and well-established partnerships with leading machine makers and tooling companies”.

We were taken by surprise by the CGTech deal — but shouldn’t be by the Mastercam acquisition. Stefan Widing, Sandvik’s CEO explains it this way: “[Acquiring Mastercam] is in line with our strategic focus to grow in the digital manufacturing space, with special attention on industrial software close to component manufacturing. The acquisition of CNC Software and the Mastercam portfolio, in combination with our existing offerings and extensive manufacturing capabilities, will make Sandvik a leader in the overall CAM market, measured in installed base. CAM plays a vital role in the digital manufacturing process, enabling new and innovative solutions in automated design for manufacturing.” The announcement goes on to say, “CNC Software has a strong market position in CAM, and particularly for small and medium-sized manufacturing enterprises (SME’s), something that will support Sandvik’s strategic ambitions to develop solutions to automate the manufacturing value chain for SME’s – and deliver competitive point solutions for large original equipment manufacturers (OEM’s).”

Sandvik says that CNC Software has 220 employees, with revenue of $60 million in 2020, and a “historical annual growth rate of approximately 10 percent and is expected to outperform the estimated market growth of 7 percent”.

No purchase price was disclosed, but the deal is expected to close during the fourth quarter.

Sandvik is holding a call about this on Friday — more updates then, if warranted.

► Bentley saw a rebound in infrastructure in Q2 but is cautious about China
  18 Aug, 2021

Bentley saw a rebound in infrastructure in Q2 but is cautious about China

Bentley continues to grow its deep expertise in various AEC disciplines — most recently, expanding its focus in underground resource mapping and analysis. This diversity serves it well; read on.

In Q2,

  • Total revenue was $223 million, up 21% as reported. Seequent contributed about $4 million per the quarterly report filed with the US SEC, so almost all of this growth was organic
  • Subscription revenue was $186 million, up 18%
  • Perpetual license revenue was $11 million, down 8% as Bentley continues to focus on selling subscriptions
  • Services revenue was $26 million, up 86% as Bentley continues to build out its Maximo-related consulting and implementation business, the Cohesive Companies

Unlike AspenTech, Bentley’s revenue growth is speeding up (total revenue up 21% in Q2, including a wee bit from Seequent, and up 17% for the first six months of 2021). Why the difference? IMHO, because Bentley has a much broader base, selling into many more end industries as well as to road/bridge/water/wastewater infrastructure projects that keep going, Covid or not. CEO Greg Bentley told investors that some parts of the business are back to — or even better than — pre-pandemic levels, but not yet all. He said that the company continues to struggle in industrial and resources capital expenditure projects, and therefore in the geographies (the Middle East and Southeast Asia) that are the most dependent on this sector. This is balanced against continued success in new accounts and the company’s reinvigorated selling to small and medium enterprises via its Virtuosity subsidiary — and in a resurgence in the overall commercial/facilities sector. In general, it appears that sales to contractors such as architects and engineers lag behind those to owners and operators of commercial facilities — makes sense, as many new projects are still on pause until pandemic-related effects settle down.

One unusual comment from Bentley’s earnings call that we’re going to listen for on others: The government of China is asking companies to explain why they are not using locally-grown software solutions; it appears to be offering preferential tax treatment for buyers of local software. As Greg Bentley told investors, “[d]uring the year to date, we have experienced a rash of unanticipated subscription cancellations within the mid-sized accounts in China that have for years subscribed to our China-specific enterprise program … Because we don’t think there are product issues, we will try to reinstate these accounts through E365 programs, where we can maintain continuous visibility as to their usage and engagement”. So, to recap: the government is using taxation to prefer one set of vendors over another, and all Bentley can do (really) is try to bring these accounts back and then monitor them constantly to keep on top of emerging issues. FWIW, in the pre-pandemic filings for Bentley’s IPO, “greater China, which we define as the People’s Republic of China, Hong Kong and Taiwan … has become one of our largest (among our top five) and fastest-growing regions as measured by revenue, contributing just over 5% of our 2019 revenues”. Something to watch.

The company updated its financial outlook for 2021 to include the recent Seequent acquisition and this moderate level of economic uncertainty. Bentley might actually join the billion-dollar club on a pro forma basis — as if the acquisition of Seequent had occurred at the beginning of 2021. On a reported basis, the company sees total revenue between $945 million and $960 million, or an increase of around 18%, including Seequent. Excluding Seequent, Bentley sees organic revenue growth of 10% to 11%.

Much more here, on Bentley’s investor website.

► AspenTech is cautious about F2022, citing end-market uncertainty
  18 Aug, 2021

AspenTech is cautious about F2022, citing end-market uncertainty

We still have to hear from Autodesk, but there’s been a lot of AECish earnings news over the last few weeks. This post starts a modest series as we try to catch up on those results.

AspenTech reported results for its fiscal fourth quarter of 2021 last week. Total revenue was $198 million in FQ4, down 2% from a year ago. License revenue was $145 million, down 3%; maintenance revenue was $46 million, basically flat when compared to a year earlier; and services and other revenue was $7 million, up 9%.

For the year, total revenue was up 19% to $709 million, license revenue was up 28%, maintenance was up 4% and services and other revenue was down 18%.

Looking ahead, CEO Antonio Pietri said that he is “optimistic about the long-term opportunity for AspenTech. The need for our customers to operate their assets safely, sustainably, reliably and profitably has never been greater … We are confident in our ability to return to double-digit annual spend growth over time as economic conditions and industry budgets normalize.” The company sees fiscal 2022 total revenue of $702 million to $737 million, which is up just $10 million from final 2021 at the midpoint.

Why the slowdown in FQ4 from earlier in the year? And why the modest guidance for fiscal 2022? One word: Covid. And the uncertainty it creates among AspenTech’s customers when it comes to spending precious cash. AspenTech expects its visibility to improve when new budgets are set in the calendar fourth quarter. By then, AspenTech hopes, its customers will have a clearer view of reopening, consumer spending, and the timing of an eventual recovery.

Lots more detail here on AspenTech’s investor website.

Next up, Bentley. Yup. Alphabetical order.

Symscape top

► CFD Simulates Distant Past
  25 Jun, 2019

There is an interesting new trend in the use of Computational Fluid Dynamics (CFD). Until recently, CFD simulation was focused on existing and future things (think flying cars). Now we see CFD being applied to simulate fluid flow in the distant past (think fossils).

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation

read more

► Background on the Caedium v6.0 Release
  31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature

read more

► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long-extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)

read more

► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment, researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward direction (illustration only, not part of the study)

read more

► Wind Turbine Design According to Insects
  14 Jun, 2017

One of nature's smallest aerodynamic specialists - the insect - has provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (license: CC BY-SA 2.5, André Karwath)

read more

► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2-hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The boldest and most obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

read more

curiosityFluids top

► Creating curves in blockMesh (An Example)
  29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

y = H\sin\left(\pi x\right)

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
    (-1 0 0)    // 0
    (0 0 0)     // 1
    (1 0 0)     // 2
    (2 0 0)     // 3
    (-1 2 0)    // 4
    (0 2 0)     // 5
    (1 2 0)     // 6
    (2 2 0)     // 7

    (-1 0 1)    // 8
    (0 0 1)     // 9
    (1 0 1)     // 10
    (2 0 1)     // 11
    (-1 2 1)    // 12
    (0 2 1)     // 13
    (1 2 1)     // 14
    (2 2 1)     // 15
);

blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

boundary
(
    // patch names below are illustrative reconstructions
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    lowerWall
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    upperWall
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary that is just a list of interpolation points:

edges
(
    polyLine 1 2
    (
        (0      0               0)
        (0.1    0.0309016994    0)
        (0.2    0.0587785252    0)
        (0.3    0.0809016994    0)
        (0.4    0.0951056516    0)
        (0.5    0.1             0)
        (0.6    0.0951056516    0)
        (0.7    0.0809016994    0)
        (0.8    0.0587785252    0)
        (0.9    0.0309016994    0)
        (1      0               0)
    )

    polyLine 9 10
    (
        (0      0               1)
        (0.1    0.0309016994    1)
        (0.2    0.0587785252    1)
        (0.3    0.0809016994    1)
        (0.4    0.0951056516    1)
        (0.5    0.1             1)
        (0.6    0.0951056516    1)
        (0.7    0.0809016994    1)
        (0.8    0.0587785252    1)
        (0.9    0.0309016994    1)
        (1      0               1)
    )
);

The sub-dictionary above is just a list of points on the curve y=H\sin(\pi x). The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.
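If you'd rather not type those interpolation points by hand, a short Python snippet can generate them. This is my own helper (the function name polyline_points is hypothetical, not part of the original post), assuming the bump height H = 0.1 implied by the point values above:

```python
import math

def polyline_points(H=0.1, z=0.0, n=10):
    """Sample y = H*sin(pi*x) at n+1 evenly spaced points on x in [0, 1]."""
    pts = []
    for i in range(n + 1):
        x = i / n
        pts.append((x, H * math.sin(math.pi * x), z))
    return pts

# Print both edges in a blockMesh-friendly layout (z = 0 and z = 1 planes)
for z in (0.0, 1.0):
    for x, y, zz in polyline_points(z=z):
        print("    ({:g}\t{:.10f}\t{:g})".format(x, y, zz))
```

Increasing n refines the polyLine; the printed tuples can be pasted straight into the edges sub-dictionary.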

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!


This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
  28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have may be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, shadowgraph has no direction and shows you the Laplacian of the refractive index field (or density field).
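As a quick aside before the ParaView walkthrough: the same two quantities are easy to compute directly from a density array with NumPy. This is a sketch on a synthetic Gaussian "density" blob (my own stand-in data, not the simulation used in this post):

```python
import numpy as np

# Stand-in density field: a Gaussian blob on a uniform 2D grid
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
rho = 1.0 + 0.2 * np.exp(-(X**2 + Y**2) / 0.1)

dx = x[1] - x[0]
dy = y[1] - y[0]

# Synthetic Schlieren: one directional first derivative of density.
# A vertical knife edge corresponds to the horizontal (x) derivative.
drho_dx, drho_dy = np.gradient(rho, dx, dy)
schlieren_vertical_knife = drho_dx

# Synthetic shadowgraph: Laplacian of density, built as div(grad(rho))
d2rho_dx2 = np.gradient(drho_dx, dx, axis=0)
d2rho_dy2 = np.gradient(drho_dy, dy, axis=1)
shadowgraph = d2rho_dx2 + d2rho_dy2
```

Plotting one component of the gradient (or the Laplacian) in a grayscale colormap reproduces the look of the corresponding optical technique, just as in the ParaView steps below.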

In this post, I’ll use a simple case I did previously ( as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

So how do we create these images in paraview?

Well, as you might expect from the introduction, we simply do this by visualizing the gradients of the density field.

In ParaView the necessary tool for this is:

Gradient of Unstructured DataSet:

Finding “Gradient of Unstructured DataSet” using the Filters-> Search

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

Change the “Scalar Array” Drop down to the density field (rho), and change the name to Synthetic Schlieren

To do this, simply set the “Scalar Array” to the density field (rho), and change the name of the result array to SyntheticSchlieren. Now you should see something like this:

This is NOT a synthetic Schlieren Image – but it sure looks nice

There are a few problems with the above image: (1) Schlieren images are directional and this is a magnitude, and (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

Horizontal Knife Edge

Vertical Knife Edge

Now how about ShadowGraph?

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

\nabla^2 \left[\,\cdot\,\right] = \nabla \cdot \nabla \left[\,\cdot\,\right]

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, Deselect “Compute Gradient” and the select “Compute Divergence” and change the Divergence array name to Shadowgraph.

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Shadowgraph Image

So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these mean: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
  24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post:

The law given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}
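Comparing the two forms term by term gives the mapping between the coefficient sets:

A_s = C_1 = \frac{\mu_o\left(T_o + C\right)}{T_o^{3/2}}, \qquad T_s = C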

In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can create your own Sutherland coefficients using least-squares fitting in Python 3.

So why would you do this? There are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find; if you do find them, they can be hard to reference, and you may not know how accurate they are. Second, creating your own Sutherland coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and, better still, you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of Nitrogen N2 – and we can’t find the coefficients anywhere – or for the second reason above, you’ve decided its best to create your own.

By far the simplest way to achieve this is using Python and the Scipy.optimize package.

Step 1: Get Data

The first step is to find some well-known, and easily cited, source for viscosity data. I usually use the NIST WebBook, but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:

Temperature (K) Viscosity (Pa.s)
400 0.000022217
600 0.000029602
800 0.000035932
1000 0.000041597
1200 0.000046812
1400 0.000051704
1600 0.000056357
1800 0.000060829
2000 0.000065162

This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database for 0.101 MPa. (Note that in these ranges viscosity should be only temperature dependent.)

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next we input the data from the table above:

T = [400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu = [0.000022217, 0.000029602, 0.000035932, 0.000041597, 0.000046812,
      0.000051704, 0.000056357, 0.000060829, 0.000065162]

Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least squares minimization to solve for the unknown coefficients. The output variable popt is an array that contains our desired variables As and Ts.

popt, pcov = curve_fit(sutherland, T, mu)

Now we can just output our data to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')


plt.plot(T, mu, 'o')
plt.plot(T, sutherland(np.array(T), *popt))
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T = [400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu = [0.000022217, 0.000029602, 0.000035932, 0.000041597, 0.000046812,
      0.000051704, 0.000056357, 0.000060829, 0.000065162]

popt, pcov = curve_fit(sutherland, T, mu)
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')


plt.plot(T, mu, 'o')
plt.plot(T, sutherland(np.array(T), *popt))
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
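Since a key selling point here is being able to quote an error bound, a short follow-up sketch (my own addition, using the fitted coefficients above and the NIST table data) quantifies the worst-case relative error of the fit:

```python
import numpy as np

def sutherland(T, As, Ts):
    # Simplified Sutherland form: mu = As*T^(3/2) / (T + Ts)
    return As * T**1.5 / (Ts + T)

# NIST N2 data from the table above (T in K, mu in Pa.s)
T = np.array([400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000], dtype=float)
mu = np.array([0.000022217, 0.000029602, 0.000035932, 0.000041597,
               0.000046812, 0.000051704, 0.000056357, 0.000060829,
               0.000065162])

# Fitted coefficients quoted above
As, Ts = 1.55902e-6, 168.766

rel_err = np.abs(sutherland(T, As, Ts) - mu) / mu
print("max relative error: {:.2f}% over {:.0f}-{:.0f} K".format(
    100 * rel_err.max(), T.min(), T.max()))
```

This is exactly the kind of number you can then report in a thesis or paper alongside the coefficients themselves.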


In this post, we looked at how to use a database of viscosity-temperature data and the Python package SciPy to solve for unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, which was then loaded into Python and curve-fit using scipy.optimize's curve_fit function.

This task could also easily be accomplished using the MATLAB curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
  23 Apr, 2019

The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep with any other software.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is just as likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and Bash and Python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but its not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by
F. Moukalled, L. Mangani, and M. Darwish

(b) An introduction to computational fluid dynamics – the finite volume method – by H K Versteeg and W Malalasekera

(c) Computational fluid dynamics – the basics with applications – By John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a very long time to set up any case from scratch – and you'll probably make a bunch of mistakes, forget key variable entries, etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!
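If you would rather search the tutorials programmatically than browse by hand, a small helper like the following sketch can list candidate template cases. The function is my own, and it rests on the assumption that every runnable OpenFOAM case contains a system/controlDict file:

```python
from pathlib import Path

def find_tutorial_cases(tutorials_root: Path, keyword: str) -> list:
    """Find tutorial cases under tutorials_root whose path mentions keyword.

    A directory is treated as a case if it contains system/controlDict,
    which every runnable OpenFOAM case does.
    """
    cases = []
    for ctrl in tutorials_root.rglob("system/controlDict"):
        case = ctrl.parent.parent
        if keyword.lower() in str(case).lower():
            cases.append(case)
    return sorted(cases)
```

Point tutorials_root at your installation's tutorials directory (the $FOAM_TUTORIALS environment variable in a sourced OpenFOAM shell) and pass a solver or case name as the keyword.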

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion, but I have found it to be true. Yes, it's true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can't really speak to how well they work – mostly because I've never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (unless you're a gamer – and even then, more and more games now run on Linux). Not only that, the VAST majority of the OpenFOAM forum threads and troubleshooting you'll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is a primary desktop running Ubuntu, plus a Windows VirtualBox VM, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier with a VM.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to adjust the numerical schemes. A great source for how schemes should be adjusted based on mesh non-orthogonality is:
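As an aside, once you are scripting many cases it is handy to skim checkMesh logs automatically rather than eyeballing each one. Here is a rough sketch; the exact log wording varies between OpenFOAM versions, so treat these patterns as a starting point, not gospel:

```python
import re

def summarize_checkmesh(log: str):
    """Pull the pass/fail verdict and the max non-orthogonality out of
    a checkMesh log. Pattern wording is an assumption based on typical
    checkMesh output and may need adjusting for your OpenFOAM version.
    """
    ok = "Mesh OK" in log
    m = re.search(r"non-orthogonality Max:\s*([\d.]+)", log)
    max_nonorth = float(m.group(1)) if m else None
    return ok, max_nonorth
```

Feed it the captured text of `checkMesh > log.checkMesh` and you can flag failing meshes across a whole sweep in one pass.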

(8) CFL Number Matters

If you are running a transient case, the Courant-Friedrichs-Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When a transient simulation of mine crashes, if I know my mesh is OK, I decrease the time step by a factor of 2. More often than not, this solves the problem.
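The back-of-envelope version of this: the convective Courant number is Co = u·Δt/Δx, so a rough cap on the time step falls straight out of the definition. A sketch, using the fastest velocity and smallest cell as the worst case:

```python
def max_timestep(u_max: float, dx_min: float, co_target: float = 0.5) -> float:
    """Largest time step keeping the convective Courant number
    Co = u*dt/dx below co_target. A back-of-envelope estimate using
    the fastest velocity and smallest cell size in the domain."""
    return co_target * dx_min / u_max

# e.g. 10 m/s through 1 mm cells at a target Co of 0.5
dt = max_timestep(10.0, 1e-3)
```

OpenFOAM reports the actual Courant number each time step (and adjustableRunTime/maxCo can manage it for you), but an estimate like this is useful for picking a sane initial deltaT in controlDict.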

For large time steps, you can add outer correction loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

For the record, this point falls under point (1), Understanding CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

If you are a graduate student with no job to do other than learn OpenFOAM, it will not take 3 weeks. The series touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude from the get-go that they should be using different software. They think that somehow open source means it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open-source package, and the number of OpenFOAM citations has grown consistently every year.

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it's worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren't good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program – which is silly – and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post, most things can be accomplished in OpenFOAM, and there are enough third-party meshing programs out there that you should have no problem.


Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that's a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
  22 Apr, 2019
Airfoil Mesh Generated with

Here I will present something I've been experimenting with: a simplified workflow for meshing airfoils in OpenFOAM. If you're like me (who knows if you are), I simulate a lot of airfoils – partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain, especially if you are starting from scratch.

The main ways that I have meshed airfoils to date have been:

(a) Mesh it as a C- or O-grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise – oh how I miss it.

But getting the mesh to look good was always somewhat tedious. So I attempted to come up with a python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(e) be mostly automatic (few user inputs)
(f) have good mesh quality – pass all checkMesh tests
(g) Quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(h) be able to do both closed and open trailing edges
(i) be able to handle most airfoils (up to high cambers)
(j) automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) through (g). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!

Hopefully, this is useful to some of you out there!


You can download the script here:

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.


(1) Copy to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3
(5) If no errors – run blockMesh

You need to run this with python 3, and you need to have numpy installed.


The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length, if not equal to 1. The airfoil .dat file should have a chord length of 1; this variable allows you to scale the domain to a different size.

airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the required size, you can use the curiosityFluids y+ calculator

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge

TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading-edge grading, and can help improve mesh uniformity.

trailingBlockAngle: This is an angle in degrees that sets the angle of the trailing-edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
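On firstLayerHeight: if you would rather not use an online y+ calculator, the usual flat-plate estimate is easy to code up yourself. This sketch uses the common turbulent skin-friction correlation Cf = 0.026/Re^(1/7) (my choice of correlation, not necessarily what the curiosityFluids calculator uses); treat the result as a sizing estimate only, and check the actual y+ after the run:

```python
import math

def first_layer_height(y_plus, U, L, rho=1.225, mu=1.81e-5):
    """Estimate the first-cell height for a target y+ on a wall of
    reference length L in a freestream U, using the flat-plate
    correlation Cf = 0.026/Re^(1/7). Defaults are sea-level air."""
    re_x = rho * U * L / mu            # Reynolds number based on L
    cf = 0.026 / re_x ** (1.0 / 7.0)   # turbulent skin-friction estimate
    tau_w = 0.5 * cf * rho * U ** 2    # wall shear stress
    u_tau = math.sqrt(tau_w / rho)     # friction velocity
    return y_plus * mu / (rho * u_tau)

# e.g. a wall-resolved mesh (y+ ~ 1) on a 1 m chord at 30 m/s
h = first_layer_height(1.0, 30.0, 1.0)
```

For a wall-function mesh you would instead target something like y+ of 30 to 100, and the same function scales the answer linearly.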


12% Joukowski Airfoil


With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in ParaView:

Clark-y Airfoil

The Clark-Y has some camber, so I thought it would be a logical next test after the previous symmetric one. The inputs I used are basically the same as for the previous airfoil:

With these inputs, the result looks like this:

Mesh Quality:

Visualizing the mesh quality:

MH60 – Flying Wing Airfoil

Here is an example of a flying-wing airfoil (a good test since the trailing edge is tilted upwards).


Again, these inputs are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change the MaxCellSize, firstLayerHeight, and gradings, some modification may be required. However, if you just halve the MaxCellSize and halve the firstLayerHeight, you “should” get similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality


Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.

The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!

Comments and bug reporting encouraged!

DISCLAIMER: This script is intended as an educational and productivity tool and a starting point. You may use and modify it how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
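For reference, the standard normal-shock relations such a calculator implements are straightforward to code up yourself. Here is a sketch for a calorically perfect gas (the function name is mine):

```python
import math

def normal_shock(m1: float, gamma: float = 1.4):
    """Normal-shock relations for a calorically perfect gas: returns
    the downstream Mach number and the static pressure, density, and
    temperature ratios across the shock, for upstream Mach m1 > 1."""
    g = gamma
    m2 = math.sqrt((1 + 0.5 * (g - 1) * m1**2) / (g * m1**2 - 0.5 * (g - 1)))
    p_ratio = 1 + 2 * g / (g + 1) * (m1**2 - 1)          # p2/p1
    rho_ratio = (g + 1) * m1**2 / ((g - 1) * m1**2 + 2)  # rho2/rho1
    t_ratio = p_ratio / rho_ratio                        # T2/T1 (ideal gas)
    return m2, p_ratio, rho_ratio, t_ratio
```

For example, at Mach 2 in air (gamma = 1.4) these give the textbook values: a subsonic downstream Mach number of about 0.577 and a static pressure ratio of 4.5.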

If you found this useful and have the need for more, one of STF Solutions' specialties is providing clients with custom software developed for their needs – ranging from custom CFD codes to simpler targeted codes, scripts, macros, and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure-loss calculations, heat-transfer calculations, 1D flow transients, optimization, and more. Visit STF Solutions for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or suitability, or outcome for any given purpose.

