They say the only constant in life is change and that’s as true for blogs as anything else. After almost a dozen years blogging here on WordPress.com as Another Fine Mesh, it’s time to move to a new home, the Cadence CFD Blog.
The post Farewell, Another Fine Mesh. Hello, Cadence CFD Blog. first appeared on Another Fine Mesh.
Welcome to the 500th edition of This Week in CFD on the Another Fine Mesh blog. Over 12 years ago we decided to start blogging to connect with CFDers across the interwebs. “Out-teach the competition” was the mantra. Almost immediately …
Automated design optimization is a key technology in the pursuit of more efficient engineering design. It supports the design engineer in finding better designs faster. A computerized approach that systematically searches the design space and provides feedback on many more …
The post Create Better Designs Faster with Data Analysis for CFD – A Webinar on March 28th first appeared on Another Fine Mesh.
It’s nice to see a healthy set of events in the CFD news this week and I’d be remiss if I didn’t encourage you to register for CadenceCONNECT CFD on 19 April. And I don’t even mention the International Meshing …
Some very cool applications of CFD (like the one shown here) dominate this week’s CFD news including asteroid impacts, fish, and a mesh of a mesh. For those of you with access, NAFEMS’ article 100 Years of CFD is worth …
This week’s aggregation of CFD bookmarks from around the internet clearly exhibits the quote attributed to Mark Twain, “I didn’t have time to write a short letter, so I wrote a long one instead.” Which makes no sense in this …
A few years ago, the Internet was abuzz with water bottle flips. Experimentalists are still looking at how they can arrest a partially fluid-filled container’s bounce, but now they’re spinning the bottles about their vertical axis rather than flipping them end-over-end. Their work shows that faster-rotating bottles have little to no bounce after impacting a surface.
The reason for this is visible in the image sequence above, which shows a falling bottle (top row) and the aftermath of its impact (bottom row). When the bottle rotates and falls, water climbs up the sides of the bottle, forming a shell. On impact, the water collapses, forming a central jet that shoots up the middle of the bottle, expending momentum that would otherwise go into a bounce. It’s a bit like the water is stomping the landing.
The authors hope their observations will be useful in fluid transport, but they also note that this bit of physics is easily recreated at home with a partially-filled water bottle. (Image and research credit: K. Andrade et al.; via APS Physics)
For densely-populated urban areas, floods are one of the most damaging and expensive natural disasters. We can’t control the amount of rain that falls, so engineers need other ways to mitigate damage. It’s not usually possible to remove people and property from floodplains, so instead civil engineers look below the surface, building flood tunnel networks to alleviate floodwaters. In this Practical Engineering video, Grady demonstrates how these systems work and what some of their challenges are. (Video and image credit: Practical Engineering)
As long as we continue to extract and transport oil, marine oil spills will continue to be a problem. Recent work shows that spilled oil weathers differently depending on both sunlight and water temperature. When exposed to sunlight, crude oil undergoes chemical reactions that can change its makeup. Researchers studied the mechanical properties of crude oil samples kept at different temperatures in both sunlight and the dark.
They discovered that sunlight-exposed crude oil kept at a high temperature had twice the viscosity of a sample kept in the dark at the same temperature. In contrast, the high-temperature sunlit sample’s viscosity was 8 times lower than a sunlit sample kept at a lower temperature. That’s quite a large difference, and it implies that tropical oil spills may behave quite differently than Arctic ones. Cold-water spills will entrain and dissolve less than warm-water ones, so there may be more surface oil to collect at high-latitude spills. The differences in viscosity may also necessitate different spill mitigation techniques. (Image credit: NOAA; research credit: D. Freeman et al.; via APS Physics)
Although they may look sinister, roll clouds like this one are no tornado. These unusual clouds form near advancing cold fronts when downdrafts cause warm, moist air to rise, cool below the dew point, and condense into a cloud. Air in the cloud can circulate around its long horizontal axis, but the clouds won’t transform into a tornado. Roll clouds are also known as Morning Glory clouds because they often form early in the day along the Queensland coast, where springtime breezes off the water promote their growth. The clouds do form elsewhere, though; this example is from Wisconsin in 2007. (Image credit: M. Hanrahan; via APOD)
Blue-footed boobies, like many other seabirds, climb to a particular altitude before folding their wings and diving head-first into the water. This acrobatic feat balances the bird’s force of impact and the depth it can reach to ensnare fish swimming there. It’s an incredible process to watch, a fascinating one to study, and, here, a beautiful glimpse of the natural world from a perspective we don’t typically see. (Image credit: H. Spiers, Bird POTY; via Colossal)
Catch a butterfly, and you’ll notice a dust-like residue left behind on your fingers. These are tiny scales from the butterfly’s wing. Under a microscope, those scales overlap like shingles all over the wing. Their downstream edges tilt upward, leaving narrow gaps between one scale and the next. Experiments show that, although butterflies can fly without their scales, these tiny features make a big difference in their efficiency.
When air flows over the scales, tiny vortices form in the gaps between. These laminar vortices act like roller bearings, helping the flow overhead move along with less friction and, thus, less drag. Compared to a smooth surface, the scales reduce skin friction on the wing by 26-45%. (Image credit: butterfly – E. Minuskin, scales – N. Slegers et al., experiment – S. Gautam; research credit: N. Slegers et al. and S. Gautam; via Physics Today)
# Install build dependencies (Fedora/RHEL, via dnf):
dnf install -y python3-pip m4 flex bison git git-core mercurial \
    cmake cmake-gui openmpi openmpi-devel metis metis-devel \
    metis64 metis64-devel llvm llvm-devel zlib zlib-devel ....
# Add CUDA to the PATH and load the OpenMPI module in every new shell:
{
  echo 'export PATH=/usr/local/cuda/bin:$PATH'
  echo 'module load mpi/openmpi-x86_64'
} >> ~/.bashrc
# Fetch the foam-extend-4.1 sources:
cd ~
mkdir foam && cd foam
git clone https://git.code.sf.net/p/foam-extend/foam-extend-4.1 foam-extend-4.1
# Add a convenience alias (fe41) for loading the foam-extend environment:
{
  echo '#source ~/foam/foam-extend-4.1/etc/bashrc'
  echo "alias fe41='source ~/foam/foam-extend-4.1/etc/bashrc'"
} >> ~/.bashrc
pip install --user PyFoam
# Start from the example preferences file:
cd ~/foam/foam-extend-4.1/etc/
cp prefs.sh-EXAMPLE prefs.sh
# Specify system openmpi
# ~~~~~~~~~~~~~~~~~~~~~~
export WM_MPLIB=SYSTEMOPENMPI

# System installed CMake
export CMAKE_SYSTEM=1
export CMAKE_DIR=/usr/bin/cmake

# System installed Python
export PYTHON_SYSTEM=1
export PYTHON_DIR=/usr/bin/python

# System installed PyFoam
export PYFOAM_SYSTEM=1

# System installed ParaView
export PARAVIEW_SYSTEM=1
export PARAVIEW_DIR=/usr/bin/paraview

# System installed bison
export BISON_SYSTEM=1
export BISON_DIR=/usr/bin/bison

# System installed flex. FLEX_DIR should point to the directory where
# $FLEX_DIR/bin/flex is located
export FLEX_SYSTEM=1
export FLEX_DIR=/usr/bin/flex
#export FLEX_DIR=/usr

# System installed m4
export M4_SYSTEM=1
export M4_DIR=/usr/bin/m4
# Load the environment first (fe41); 'foam' then changes to the installation
# directory, and the build script compiles everything:
fe41
foam
./Allwmake.firstInstall -j
Figure 1: Hexahedral mesh for the HIFiRE-6 vehicle with a Busemann hypersonic intake.
1200 words / 6 minutes read
Hypersonic flow phenomena, such as shock waves, shock-boundary layer interactions, and laminar to turbulent transitions, necessitate flow-aligned, high-resolution hexahedral meshes. These meshes effectively discretize the flow physics regions, enabling accurate prediction of their impact on the flow.
In light of successful scramjet-powered hypersonic flight tests conducted by numerous countries, the pressure is mounting for other nations to keep up with this technology. Extensive testing and computational fluid dynamics (CFD) simulations are underway to develop a scramjet design capable of withstanding the demanding conditions of hypersonic flight.
As an effective and efficient design tool, CFD plays a pivotal role in rapidly designing and optimizing various parametric scramjet configurations. However, simulating these extreme flow fields using CFD is a formidable challenge, and proper meshing is of utmost importance.
The meshing requirements for CFD of hypersonic flows in intakes differ significantly from those for low Mach number flows. High-speed flows involve elevated temperatures and interactions between shock waves and boundary layers, effects that are negligible at low Mach numbers. Boundary layers are particularly critical as they experience high rates of heat transfer. Furthermore, the transition of the boundary layer from laminar to turbulent flow is a complex phenomenon that is challenging to capture and simulate accurately. Nonetheless, this transition is of paramount importance, as it has a profound impact on flow behaviour.
Change in the flow field demands a change in meshing requirements. As one may expect, the boundary layer should have a high resolution to capture the velocity boundary layer and the enthalpy boundary layer. Next, the shocks must also be captured precisely since the flow turns through the shock wave in hypersonic flows. But more importantly, shocks have extremely strong gradients, which can lead to large errors if not resolved accurately.
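As a point of reference (this relation is standard compressible-flow theory rather than part of the original article), the shock angle that grid lines would ideally follow can be estimated from the oblique-shock theta–beta–M relation for a calorically perfect gas:

\[
\tan\theta \;=\; 2\cot\beta\,\frac{M_1^{2}\sin^{2}\beta - 1}{M_1^{2}\left(\gamma + \cos 2\beta\right) + 2}
\]

where M1 is the upstream Mach number, theta the flow deflection angle, beta the shock angle, and gamma the ratio of specific heats. Solving for beta on the weak-shock branch for the expected deflections gives a first estimate of where to place and align grid lines before any solution-based adaptation.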
Multiple shocks and boundary layer interactions happen in hypersonic intake flows at different locations. If these effects are not resolved precisely, it is impossible to predict whether the hypersonic engine works effectively or not. To summarise, we must deal with multiple effects with different strength levels. The gridding system we adopt should create a grid that adequately resolves all effects with sufficient precision to achieve the needed level of solution reliability.
Other regions of concern in a scramjet are the inlet leading edge, the injector, and the cavity. Not only does the mesh topology have to be appropriately structured around these regions, but it must also align with the surfaces as closely as possible to avoid introducing unnecessary skewness and warpage.
The boundary layer, a home for laminar to turbulent transitions and shock-induced boundary layer separation, must be properly resolved. Usually, structured meshes are preferred. Even the hybrid unstructured approach adopts finely resolved stacked prism or hexahedral cells in viscous padding.
This is necessary because resolving the boundary layer close to the wall aids in accurately representing its profile, leading to correct predictions of wall shear stress, surface pressure and the effect of adverse pressure gradients and forces.
Further, at hypersonic speeds, the laminar-to-turbulent transition inside the boundary layer significantly influences the aircraft's aerodynamic characteristics. It affects the thermal processes, the drag coefficient and the vehicle lift-to-drag ratio. Hence, paying attention to how well the cells are arranged in the boundary layer padding is critical.
Another important aspect of the proper resolution of the boundary layer is how it helps predict shock-induced flow separation. Shock wave interaction with a turbulent boundary layer generates significant undesirable changes in local flow properties, such as increased drag rise, large-scale flow separation, adverse aerodynamic loading and heating, shock unsteadiness and poor engine inlet performance.
Unsteadiness induces substantial variations in pressure and shear stress, leading to flutter that impacts the integrity of aircraft components. Additionally, the operational efficiency of engines can be considerably compromised if the shock-induced boundary layer separation deviates from the anticipated location. If the computational grid fails to accurately represent the interaction between shock waves and boundary layers due to inadequate resolution or improper cell placement, the CFD results will lack practical utility. This underscores the critical significance of well-designed grids in the context of hypersonic flows.
Ideally, grid lines need to be aligned to the shock shape. For this, hexahedral meshes are better suited. They can be tailored to the shock pattern and made finer in the direction normal to the shock or adaptively refined. This brings the captured shock thickness closer to its physical value and improves the solution quality by aligning the faces of the control volumes with the shock front. Shock-aligned grids reduce the numerical errors induced by the captured shock waves, thereby significantly enhancing the computed solution quality in the entire region downstream of the shock.
This grid alignment is necessary for both oblique shocks and normal bow shocks. Grid studies have shown that solver convergence is extremely sensitive to the shape of the O-grid at the stagnation point. Matching the edge of the O-grid with the curved standing shock and maintaining cell orthogonality at the walls was necessary to get good convergence.
Also, grid misalignment is observed to generate non-physical waves, as shown in Figure 7. For CFD solvers with low numerical dissipation, a strong shock generates spurious waves when it goes through a ‘cell step’ or moves from one cell to another. Such numerical artefacts can be avoided, or at least the strength of the spurious waves can be minimized by reducing the cell growth ratio and cell misalignment w.r.t the shock shape.
A sparser grid density may suffice in areas where flow is uniform and surfaces have slight curvatures. Nevertheless, it becomes necessary to employ grid clustering and increase the resolution in regions characterized by abrupt flow gradients, geometric or topological variations, regions accommodating critical flow phenomena (such as near walls, shear and boundary layers, shock interactions), geometric cavities, injectors, and other solid structures. The appropriate refinement of these regions holds significance as it contributes to enhancing the efficacy of numerical schemes and models at both local and global levels. Consequently, this refinement leads to the generation of more precise and reliable results.
When employing a solution-based grid adaptation approach, the selection of an appropriate refinement ratio and initial grid density becomes crucial. If the refinement ratio is too low, it may be inefficient and ineffective. This is due to the limited coverage of the asymptotic region, which may not be sufficient to accurately determine the convergence behaviour. Additionally, it may necessitate multiple flow solutions before reaching a valid conclusion.
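For reference, the standard grid-convergence relations (not quoted in the article, but the usual basis for such statements) make the role of the refinement ratio concrete. With solutions f1, f2 and f3 on the fine, medium and coarse grids generated with a constant refinement ratio r, the observed order of accuracy and the Richardson-extrapolated estimate are

\[
p_{\mathrm{obs}} = \frac{\ln\!\big[(f_3 - f_2)/(f_2 - f_1)\big]}{\ln r},
\qquad
f_{\mathrm{exact}} \approx f_1 + \frac{f_1 - f_2}{r^{\,p_{\mathrm{obs}}} - 1}.
\]

If r is too close to 1, the differences f3 − f2 and f2 − f1 become comparable to iterative and round-off errors, and the estimated order of accuracy, and hence the discretization-error estimate, becomes unreliable.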
Another aspect that needs due attention when performing grid adaptation is the initial grid employed. The initial grid should possess a sufficient level of resolution. Employing a low initial grid density can lead to inaccurate simulation results and unsatisfactory flow field solutions. On the other hand, an excessively refined initial grid may not be feasible for high-fidelity studies involving viscous, turbulent or fully reacting flows, because the initial cell density may already be too high, making the creation of subsequent, even finer grids impractical.
Grid accuracy plays a critical role in the reliability and precision of hypersonic CFD simulations, as it directly influences the computed flow field. Given the high velocities involved, errors introduced upstream can rapidly amplify downstream.
Consequently, it is imperative to employ a meticulous grid or topology design to achieve suitable cell discretization and blocking structures. Factors such as grid resolution, grid clustering, cell shape, and cell size distribution must be thoroughly evaluated and selected both locally and across the entire domain. This careful assessment is essential for preventing the introduction of errors and inaccuracies into the computed results through numerical artefacts and uncaptured phenomena.
The post Know your mesh for Hypersonic Intake CFD Simulations appeared first on GridPro Blog.
Figure 1: Flow-aligned mesh around an MDA 3-element configuration.
1350 words / 7 minutes read
Alignment of grid lines with the flow aids in lower diffusion and numerical error, faster convergence, and accurate capturing of high-gradient flow features like shocks. This subtle gridding detail makes a significant difference to the CFD simulation’s solution quality and accuracy.
In the fast-paced world of product design, CFD simulations are expected to generate quick results. Quick results mean faster grid generation, which inevitably leads to a loss of attention to subtle gridding details. One such critically important gridding aspect, which most CFD practitioners under-appreciate, is the alignment of the grid to the flow.
Three aspects of gridding dictate the final solver solution outcome: grid quality, mesh resolution and grid alignment. Most grid generators pay attention to the first two aspects, cell quality and refinement, but ignore grid line alignment with the flow. This is understandable, as rapid domain-filling algorithms like unstructured and Cartesian meshing are inherently unable to meet the flow-alignment criterion. Only inside the boundary layer, where they adopt stacked prism or hexahedral cells, is some flow alignment achieved. Currently, only the structured multi-block technique is capable of orienting the grid cells to the flow inside the boundary layer padding as well as outside it.
It is critically important that CFD practitioners know how alignment (or non-alignment) of the grid with the flow affects the solution, how mesh singularities of different degrees affect the flow field, and how grid alignment with high-gradient flow phenomena like shocks influences the final solution outcome. This article attempts to address these meshing aspects.
The importance of grid cell orientation w.r.t. the flow can be demonstrated with a simple convective-diffusive flow in a square domain. Figures 2 and 3 show the errors produced by different orientations of the cells relative to the flow direction.
If two streams with velocities V1 and V2 flow on a structured mesh in the direction of the grid lines, the solution shows no diffusion or numerical error, as shown in Figure 2a. This is true even for a grid where the mesh lines are not oriented in the direction of the coordinate system, as illustrated in Figure 2b.
However, on an unstructured mesh, or on a structured mesh where the flow is not aligned with the grid lines, numerical diffusion occurs. The amount of diffusion depends on the differencing scheme used in the flow solver and on the mesh size: the finer the mesh, the lower the diffusion. Nevertheless, it still exists.
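This effect is easy to reproduce. The sketch below is not from the article; the grid size, angles and thresholds are illustrative assumptions. It convects a sharp scalar step with a first-order upwind scheme on a uniform grid, once with the flow aligned to the grid lines and once at 45 degrees to them:

import numpy as np

def steady_upwind_convection(n=80, angle_deg=45.0):
    """Steady pure convection of a scalar step, first-order upwind, on an n x n grid.

    The uniform velocity makes an angle angle_deg with the grid lines (u, v >= 0,
    dx = dy = 1). Inlet values put a sharp step across the streamline through the
    domain centre, so the exact solution keeps that step perfectly sharp.
    """
    theta = np.radians(angle_deg)
    u, v = np.cos(theta), np.sin(theta)
    c = (n - 1) / 2.0
    y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    side = u * (y - c) - v * (x - c)          # signed distance from the centre streamline
    # Interior values below are overwritten by the sweep; only the i = 0 row and
    # j = 0 column act as inlet boundary conditions.
    phi = np.where(side < 0.0, 1.0, 0.0)
    # March from the upwind (lower-left) corner: one sweep is the converged upwind solution
    for i in range(1, n):
        for j in range(1, n):
            phi[i, j] = (u * phi[i, j - 1] + v * phi[i - 1, j]) / (u + v)
    return phi

for ang in (0.0, 45.0):
    phi = steady_upwind_convection(angle_deg=ang)
    outlet = phi[:, -1]
    smeared = int(np.sum((outlet > 0.05) & (outlet < 0.95)))
    print(f"flow angle {ang:4.1f} deg -> {smeared} smeared cells at the outlet")

With the aligned flow the outlet profile stays a sharp step, while at 45 degrees the same scheme on the same grid smears it over many cells: exactly the false diffusion described above.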
A grid singularity, in 2D, is a grid point from which more or fewer than four grid lines radiate. Singularities exist in large numbers in unstructured meshes and in very small numbers in multi-block meshes for complex configurations.
Results from the gridding experiment on singularities show that the error magnitudes are lowest for low-order singularities (such as a 3-way singularity), while they are high for higher-order singularities like an 8-way singularity, as shown in Figures 5 and 6.
A closer review shows that the results for 3- and 5-way singularity grids are quite acceptable and, in fact, as good as the results from non-singular grids produced by the same grid generator.
Though both Cartesian grids and the classical structured grids use hexahedral cells, the effect of the grid on the flow solver output is not the same. The subtle difference in the alignment of the cells and the need for interpolation in Cartesian grids show up in the computed results. In a Cartesian grid, the grid lines are aligned to the regular Cartesian coordinates, while the grid lines in structured grids are aligned to the geometric body and the flow field.
Figure 7 illustrates the computed species mass fraction and temperature distribution for a CFD simulation involving fuel injection in a combustor of a hypersonic vehicle. As shown in Figure 7a, the Cartesian interpolation leads to dramatic spurious oscillations for the species mass fraction, especially at small stoichiometric scalar dissipation rate. On the other hand, structured curvilinear meshes show a very smooth interpolation without any oscillation. Similar results can be seen in the computed temperature distribution in Figure 7b. As V. E. Terrapon, the author of the research work [ref 1], says,
“The small additional lookup cost in a curvilinear mesh is largely compensated by a much smoother interpolation.”
The boundary layer, which is home to wall-bounded viscous flows, experiences high gradients. To capture these gradients, finely stacked, flow-aligned cells are required. Maintaining cell orthogonality w.r.t. the wall is another key factor in boundary layer generation. So, to maintain an optimal cell count and yet finely resolve the boundary layer, stretched elements in the form of prisms or hexahedral cells are preferred. For the same reason, even the hybrid unstructured meshing approach adopts stacked prism cells in the viscous padding, as stacking high-aspect-ratio tetrahedra is not preferred due to the deterioration in cell skewness.
An orderly arranged, flow-aligned mesh in the boundary layer is critical, as it aids in the accurate representation of the boundary layer profile, leading to accurate predictions of wall shear stress, surface pressure, and the effect of adverse pressure gradients and forces.
Further, at very high Mach numbers in the supersonic or hypersonic flow regimes, the laminar-to-turbulent boundary layer transition and shock-boundary layer interactions significantly influence aircraft aerodynamic characteristics. They affect the thermal processes, the drag coefficient and the vehicle lift-to-drag ratio. Hence, it is critically important to pay attention to how well the cells are arranged in the boundary layer padding.
To capture the effects of high gradient flow phenomena like shocks on the flow field downstream, it is essential to align the grid lines to the shock shape and have refined cells.
For this, hexahedral meshes are better suited. They can be tailored to the shock pattern and can be made finer in the shock normal direction or can be adaptively refined. This not only brings the captured shock thickness closer to its physical value but also allows for the improvement of the solution quality by aligning the faces of the control volumes with the shock front. Aligned grids reduce the numerical errors induced by the captured shock waves and thereby significantly enhance the computed solution quality in the entire region downstream of the shock.
Grid alignment is necessary for both oblique shocks and normal bow shocks. Grid studies have shown that solver convergence is extremely sensitive to the shape of the O-grid at the stagnation point. Matching the edge of the O-grid with the curved standing shock and maintaining cell orthogonality at the walls was found to be necessary to get good convergence.
Also, grid misalignment is observed to generate non-physical waves, as shown in Figure 10. For CFD solvers with low numerical dissipation, a strong shock generates spurious waves when it goes through a ‘cell step’ or moves from one cell to another. Such numerical artefacts can be avoided, or at least the strength of the spurious waves can be minimized by reducing the cell growth ratio and cell misalignment w.r.t the shock shape.
Check out the importance of flow alignment and a comparison of various grid types for an airfoil and the ONERA M6 wing.
Do Mesh Still Play a Critical Role in CFD?
For ultra-accurate CFD results, flow alignment of grids is a must. It is a subtle detail in grid generation which can make a mammoth difference in the computed solution. Of all the gridding methodologies developed to date, structured hexahedral meshing is the best candidate for the job. Whether near the wall in the boundary layer or in the interior of the domain to discretize shocks, structured meshes optimally align to the flow features and help avoid dissipation and numerical errors.
To sum up, if accurate CFD results are the top priority in your CFD cycle, then having flow-aligned grids is your secret recipe.
To know about generating flow-aligned meshes in GridPro, contact us at: support@gridpro.com.
1. “A flamelet-based model for supersonic combustion”, V. E. Terrapon et al, Center for Turbulence Research Annual Research Briefs, 2009.
2. “HEC-RAS 2D – AN ACCESSIBLE AND CAPABLE MODELLING TOOL“, C. M. Lintott Beca Ltd, Water New Zealand’s 2017 Stormwater Conference.
3. “Effect of Grid Singularities on the Solution Accuracy of a CAA Code”, R. Hixon et al, 41st Aerospace Sciences Meeting and Exhibit, 6-9 January 2003, Reno, Nevada.
4. “Challenges to 3D CFD modelling of rotary positive displacement machines”, Prof Ahmed Kovacevic, SCORG Webinar.
5. “Experimental Study of Hypersonic Fluid-Structure Interaction with Shock Impingement on a Cantilevered Plate”, Gaetano M D Currao, PhD Thesis, March 2018.
The post The Importance of Flow Alignment of Mesh appeared first on GridPro Blog.
Figure 1: Hexahedral mesh for an aircraft icing surface.
1228 words / 6 minutes read
Complex ice shapes make generating a well-resolved mesh extremely difficult, compelling CFD practitioners to make geometric and meshing compromises to understand the effect of ice accretion on UAVs.
Flying safely and reliably depends on how well icing conditions are managed. Atmospheric icing is one of the main reasons for operational limitations: icing disturbs the aerodynamics and limits flight capabilities such as range and duration. In some scenarios, it can even lead to crashes.
Icing has been under research for manned aircraft since the 1940s. However, the need to understand icing effects for different flying scenarios in unmanned aerial vehicles (UAVs) or drones has reignited the research. Drones are used for a wide range of applications like package delivery, military, glacier studies, pipeline monitoring, search and rescue, etc.
The well-understood icing process for manned civil and military aircraft does not hold for most UAVs. UAVs fly at lower airspeeds and are smaller in size. They operate at low Reynolds numbers, in the range of 0.1–1.0 million, whereas manned aircraft fly at Reynolds numbers of the order of 10–100 million. This huge difference necessitates a better understanding of the icing process at low Reynolds numbers.
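To see where those two ranges come from, here is a small chord-based Reynolds number calculation; the UAV and airliner values used are illustrative assumptions, not data from the article:

def reynolds(rho, velocity, chord, mu):
    """Chord-based Reynolds number, Re = rho * V * c / mu."""
    return rho * velocity * chord / mu

# Assumed, typical values: a small UAV at low altitude vs. an airliner at cruise.
print(f"small UAV: Re = {reynolds(1.225, 20.0, 0.3, 1.8e-5):.2e}")   # ~4e5 (0.4 million)
print(f"airliner : Re = {reynolds(0.4, 230.0, 7.0, 1.5e-5):.2e}")    # ~4e7 (tens of millions)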
CFD simulation of aircraft ice accretion is a natural choice for researchers due to its cost-effective approach when compared to flight testing. In this article, we will discuss how researchers navigate through geometry and meshing challenges to understand the icing effects.
Icing analysis covers a large variety of physical phenomena, from droplet or ice crystal impact on cold surfaces to solidification processes at different scales. Ice accumulation degrades aerodynamic performance, such as the lift, drag, stability and stall behaviour of lifting surfaces, by modifying the leading-edge geometry and the state of the boundary layer downstream. This results in premature and highly undesirable flow separation.
Such flow transition and turbulently active regions need well-resolved grids. However, the complex icing undulations make meshing very hard, forcing the CFD practitioners to face geometric and meshing challenges.
Icing develops different kinds of geometric features such as conic shapes, jagged ridges, narrow, deep valleys and concave regions. In 3D, the spanwise variation of these features creates further complexities.
Geometric simplification is most often done when attempting 3D simulations. Even though finely resolved 3D-scanned ice feature data is available, the inability to create good-quality wall-normal cells compels CFD practitioners either to simplify the ice features or to settle for some kind of inviscid simulation that does not capture the viscous effects. Figure 4 shows such a compromised unstructured mesh without viscous padding for a DLES simulation. Figure 5 shows the extraction of a smoothed and simplified ice geometry from an actual icing surface.
Such realistic ice shapes are extremely difficult for any mesh generation algorithm to mesh at all, let alone with good mesh quality.
As a compromise, the sub-scale surface roughness is smoothened out and is not captured. As a consequence, the turbulence effects due to sub-scale geometric features get ignored.
Ice features span a wide range of geometric scales. For example, ice horns can be as large as 1–2 centimetres, while sub-scale surface roughness can be as small as a few microns.
The level of deterioration in performance is directly related to the ice shapes and to the degree of aerodynamic flow disruption they cause. Sub-scale ice surface roughness triggers laminar-to-turbulent transition, while large ice horns cause large-scale separation.
Meshing such wide-ranging geometric scales poses a few challenges. Firstly, a massive number of cells is needed to capture the micron-level features, which directly challenges the available computational power and demands considerable time for both meshing and CFD.
A literature review shows that some CFD practitioners, foreseeing these challenges, settle for 2D simulations to avoid computationally expensive 3D simulations. Even at the 2D level, finer ice-roughness features are smoothed out to make viscous padding creation more manageable.
Crevices and concave regions are home to re-circulation flows. These viscous regions need finely resolved unit aspect ratio cells to capture them. But since many grid generators find it difficult to mesh these regions, the crevices are removed and replaced by a small depression.
Aft of the horns, large-scale wakes are created, which are highly unsteady and three-dimensional in nature. Also, with an increase in the angle of attack, these turbulent features grow in size and start to extend further in the normal and axial direction w.r.t the wing surface. In concave regions and narrow crevices, recirculation flows can be observed.
The boundary layer padding needs good wall-normal resolution, with a first spacing equivalent to y+ of no more than 1. Rough ice surfaces aggravate flow separation, so adequate viscous padding, with a uniform number of layers of orthogonal cells, is necessary at all locations.
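As a rough illustration of what a y+ ≤ 1 requirement implies for the first cell height, here is a minimal sketch; it uses a common flat-plate skin-friction correlation as an approximation, and the flow values are illustrative assumptions rather than numbers from the article:

import math

def first_cell_height(u_inf, rho, mu, x_ref, y_plus=1.0):
    """Estimate the wall-normal first-cell height for a target y+.

    Uses the flat-plate turbulent skin-friction approximation
    Cf = 0.026 / Re_x**(1/7) to get the wall shear stress.
    """
    re_x = rho * u_inf * x_ref / mu            # Reynolds number at x_ref
    cf = 0.026 / re_x ** (1.0 / 7.0)           # approximate skin-friction coefficient
    tau_w = 0.5 * cf * rho * u_inf ** 2        # wall shear stress
    u_tau = math.sqrt(tau_w / rho)             # friction velocity
    return y_plus * mu / (rho * u_tau)         # first cell height in metres

# Assumed values for a small UAV wing section in icing conditions:
print(f"first cell height ~ {first_cell_height(25.0, 1.2, 1.8e-5, 0.3):.1e} m")

For these assumed values the first cell height comes out around 10–15 microns, of the same order as the sub-scale ice roughness itself, which is one way to see why viscous padding around realistic ice shapes is so demanding.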
Growing wall-normal quadrilateral or hexahedral cells from the ice walls for the entire region is a challenge since the crevices are very narrow with irregular protrusions, and generating continuous viscous padding causes cells to collapse one over the other.
To overcome this, some grid generators resort to partial wall-normal padding to the extent the local geometry permits and quickly transition to unstructured meshing, as shown in Figure 9a.
Research has shown that airframe size and air speed are two main important parameters influencing ice accretion.
One of the icing simulation requirements is computing ice accumulation over a finite time period spanning 15 to 20 minutes. Multiple CFD simulations are done for different chord lengths and air velocities. As one can imagine, this is a numerically intensive job requiring automated geometry building and mesh generation. In such studies, it is necessary to generate a new mesh for every minute, or even less, of ice deposition so that a CFD run can be made for each new ice shape.
With each time step the ice features change shape, and over time they develop fairly complex shapes with horns and crevices, making local manual intervention inevitable.
For the safe operation of UAVs without an icing protection system, the common solution is to ground the aircraft when icing conditions prevail. This limitation can be overcome by having a better de-icing system. Through CFD analysis of ice accretion at different atmospheric conditions, the optimal amount of onboard electrical power needed for de-icing can be determined.
However, accurate CFD analysis hinges on precise capturing of the ice features by the mesh. A meshing system which can aptly meet this requirement without making geometric or meshing compromises is the need of the hour.
For structured meshing for icing analysis, please reach out to GridPro at: gridpro@gridpro.com.
The post The Challenges of Meshing Ice Accretion for CFD appeared first on GridPro Blog.
Figure 1: Structured multi-block mesh for scroll compressors with tip seal.
804 words / 4 minutes read
Scroll compressors with deforming fluid space, narrow flank, and axial clearance pose immense meshing challenges to any mesh generation technique.
Scroll compressors and expanders have been in extensive use in the refrigeration, air-conditioning, and automobile industries since the 1980s. A slight improvement in scroll efficiency results in significant energy savings and reduced environmental pollution. It is therefore important to minimize the frictional power loss at each pair of compressor elements and also the fluid leakage power loss at each clearance between the compressor elements. Developing ways to minimize leakage losses is thus essential to improving scroll performance.
Unlike turbomachines such as compressors and turbines, positive displacement (PD) machines like scrolls lag behind in innovative designs and performance enhancements. This is mainly due to the difficulties in applying CFD to these machines: challenges in meshing, real-fluid equations, and long computational times.
Deforming Flow Field:
The fluid flow is transient and the flow volume changes with time (Figure 3). The fluid is compressed and expanded as it passes through different stages of the compression process. The mesh for the fluid space should be able to ‘follow’ the deformation imposed by the machine without losing its quality.
When the deformation is small, the initial mesh maintains cell quality; however, for large deformations, mesh quality deteriorates and cells collapse near the contact points between the stator and the moving parts.
Flank Clearance:
The narrow passage between the stationary and moving scroll in the radial direction is called the flank clearance. A clearance of approximately 0.05 mm is generally used to avoid contact, rubbing, and wear.
Adequately resolving this clearance with a fine mesh is one of the key factors in obtaining an accurate CFD simulation. However, the narrowness of this gap poses meshing challenges for many grid generators.
Axial Clearance:
The narrow passage between the stationary and moving scroll in the axial direction is called the axial clearance. The axial clearance is about one-thousandth of the axial scroll plate height, which is much smaller than the flank clearance.
In some cases, the gap forces the use of separate mesh zones. Adequate resolution of the axial clearance gap is equally important, since inadequate resolution leads to inaccurate flow field prediction.
Tip Seal Modeling:
Tip seals are used to reduce axial leakage caused by wear and tear. The tip seals influence the mass flow rate of the fluid. Modeling internal leakages with tip seals requires several numerical techniques, ranging from fluid-structure interaction to special treatments for thermal deformation and tip seal efficiency.
Discharge Check Valve Modeling:
Valves called reed valves are installed at the discharge to prevent reverse flow. Understanding the dynamics of the check valves is important because they significantly influence scroll efficiency and noise levels. The losses at the discharge can significantly reduce the overall efficiency.
However, modeling the valve with appropriate simplification is a challenge for any meshing technique.
Many different meshing methods, from tetrahedral to hexahedral to polyhedral cells, have been employed to discretize the fluid passage. However, researchers who prioritize solution accuracy tend to prefer structured hexahedral cells.
Hexahedral meshing outperforms other element types w.r.t. grid quality, domain discretization efficiency, solution accuracy, solver robustness, and convergence.
One of the reasons structured hexahedral meshes offer better accuracy is that they can be squeezed without deteriorating cell quality. This allows a large number of mesh layers to be placed in the narrow clearance gap. Better resolution of this critical gap results in better CFD prediction.
Understanding the key meshing challenges before setting out to mesh scrolls is essential. Becoming aware of the regions that are difficult to mesh, and of the regions that strongly influence the accuracy of the CFD prediction, is critically important. More importantly, the choice of meshing approach, whether structured, unstructured, or Cartesian, also influences the quality and accuracy of your CFD prediction.
In the next article, on automating meshing for scroll compressors, we discuss how scroll compressors can be meshed in GridPro.
The post Challenges in Meshing Scroll Compressors appeared first on GridPro Blog.
Figure 1: Structured multi-block mesh for scroll compressors.
1167 words / 5 minutes read
Developing a three-dimensional mesh of a scroll compressor for reliable Computational Fluid Dynamics (CFD) Analysis is challenging. The challenges not only demand an automated meshing strategy but also a high-quality structured hexahedral mesh for accurate CFD results in a shorter turnaround time.
The geometric complexities of Meshing Scroll Compressors discussed in our previous article give us a window into the need for creating a high-quality structured mesh of scroll compressors.
A good mesher should handle the following challenges in a positive displacement machine:
On a given plane, the scroll compressor fluid region is a helical passage of varying thickness that expands and contracts with the crank angle; topologically, the fluid domain is a rectangular passage. So we use the same approach for the scroll compressor as for meshing a rectangle.
One of the main obstacles for simulation of scroll compressors is the generation of a dynamic mesh in the fluid domain, especially in the region of the flank clearance. The topology-based approach offers a perfect solution for such scenarios, primarily because the deforming fluid domain in the scroll compressor does not change the topology of the fluid region.
Advantages of Topology based Meshing:
The flank clearance can reduce to as low as 0.05 mm, and adequate resolution of the flank clearance with low skewness is the key reason structured meshes predict performance better than unstructured meshes.
GridPro’s dynamic boundary-conforming algorithm automatically moves the blocks into the compressed space and generates the mesh. The smoother ensures that the mesh has a homogeneous distribution and is orthogonal. Orthogonality is another important mesh quality metric that distinguishes structured meshes from other moving-mesh approaches: it improves numerical accuracy and solution stability and prevents numerical diffusion.
Understanding the heat transfer towards and inside the solid components is important since the heat transfer influences the leakage gap size. Heat transfer analysis is especially required in vacuum pumps where the fluid has low densities and low mass flow rates.
One of the major drawbacks of scroll compressors is the high working temperature (a maximum temperature of up to 250 degrees Celsius has been reported [Ref 3]). The higher temperatures excessively increase the thermal expansion of the scroll spirals, leading to significant increases in internal leakage and thereby affecting the efficiency.
A mesh created for conjugate heat transfer has to model the compression chamber in between, the scrolls, and the convective boundary condition at the outer surface of the scrolls. This type of mesh makes it possible to obtain consistent temperatures in the solids and to calculate the thermal deformation of the scrolls.
Even though scroll compressors enjoy a high volumetric efficiency in the range of 80-95%, there is still room for improvements. Optimization of the geometric parameters is necessary to reduce the performance degradation due to leakage flows in radial and axial clearances.
CFD as a design tool plays a significant role in optimizing scroll geometry. The major advantage of a 3D CFD simulation combined with fluid-structure interaction (FSI) is that the 3D geometry effect is directly considered. This makes CFD analysis highly suitable for the optimization of the design.
GridPro provides an excellent platform for automating hexahedral meshing because of its working principle and its Python-based API.
The key features are:
GridPro offers both process automation through scripting and API-level automation, so the automation can be triggered either outside of a CAD environment or inside it.
This flexibility allows companies and researchers to develop full-scale meshing automation with GridPro while the user interacts only with the CAD/CFD tool or a software connector platform.
The generation of a structured mesh for the entire scroll domain, including the port region, is a very challenging task. It can be very difficult to model narrow gaps and complex features of the geometry. However, with GridPro’s template-based approach and dynamic boundary-conforming technology, the setup is reduced to a few specifications and users can develop their own automation module for structured hexahedral meshing.
If scroll compressor meshing is your need and you are looking for solutions, feel free to reach out to us at: support@gridpro.com
1.”Analysis of the Inner Fluid-Dynamics of Scroll Compressors and Comparison between CFD Numerical and Modelling Approaches“, Giovanna Cavazzini et al, Advances in Energy Research: 2nd Edition, 2021.
2. “Structured Mesh Generation and Numerical Analysis of a Scroll Expander in an Open-Source Environment”, Ettore Fadiga et al, Energies 2020, 13, 666.
3. “Waste heat recovery for commercial vehicles with a Rankine process“, Seher, D.; Lengenfelder, T.; Gerhardt, J.; Eisenmenger, N.; Hackner, M.; Krinn, I., In Proceedings of the 21st Aachen Colloquium on Automobile and Engine Technology, Aachen, Germany, 8–10 October 2012; pp. 7–9.
The post Automation of Hexahedral Meshing for Scroll Compressors appeared first on GridPro Blog.
The GridPro version 8.1 release marks the completion of yet another endeavor to provide a feature-rich, powerful, and reliable structured meshing package to the CAE community.
In every development cycle, we fulfill feature requests from our users, improve workflow challenges, and democratize features so that newer users can transition without much learning. Along the way, we are improving the performance of the tool to meet the growing demand of handling challenging geometries in meshing.
The License Management System now has GUI access to most of the features that a user or a system admin would look for. The License Manager GUI displays all the license-related information. When the user loads the license file and starts the license manager, the entire initialization process is completed before the license manager starts. The license manager also displays the number of licenses in use and the MAC ID/hostname of each user holding a license.
The client license management system is now packaged along with the GUI. When the GUI is opened for the first time, a license popup appears asking the user to upload the license and initialize. The initialization process runs in the background and then opens the GUI. This removes the need to go through the list of specific commands in section 9.11 of the utility manual.
The quest to improve the user experience and provide easy access to entities continues, and the current version makes a major stride in this direction. From version 8.1 onwards, the user has a list of smart face-group selections available as part of the Selection Panel. From the blocking, the algorithm calculates the boundary faces and smart groups based on certain checks. These face groups are displayed, and the user can select a single group or a combination of groups to further modify the structure or assign it to surfaces.
The selection panel also has a temporary selection group to provide flexibility in the workflow. In the past, the user had to create a group in order to select entities in the GL. The present version enables an alternative workflow where the user can right-click and drag in the GL to select faces or blocks. The selected blocks/faces/edges/corners are stored in the Selection Group, which is overwritten when the next selection is made; however, the user has the option to move the selection into one of the permanent groups.
The topology now has a face display along with the corners and edges. The face display helps the user better perceive the faces and blocks, both displayed in the GL and grouped into individual groups. To reassure the user about the topology entities selected, the display mode is automatically changed to face display mode in the following scenarios.
There are many such scenarios where the user is provided feedback on the operations visually.
The improved centreline evaluation tool is now robust and fast. This speeds up topology building for geometries like pipes, human arteries, and ducts. The algorithm extrudes the given input along the centreline of the geometry, respecting the change in cross-sectional area. The algorithm is now available under the extrude option in the GUI.
For more details about the new features, enhancements, and bug fixes, please refer to:
GridPro WS works on Windows 7 and above, Ubuntu 12.04 and above, RHEL 5.6 and above, and macOS 10 and above.
The support for the 32-bit platform has been discontinued for all operating systems.
GridPro AZ will be discontinued from version 9 onwards.
GridPro Version 8.1 can be downloaded by registering here.
All tutorials can be found in the Doc folder in the GridPro installation directory. Alternatively, they can be downloaded from the link here.
All earlier software versions can be found in the Downloads section.
The post GridPro Version 8.1 Released appeared first on GridPro Blog.
Stallion 3D is an aerodynamics analysis software package that can be used to analyze golf balls in flight. The software runs on MS Windows 10 & 11 and can compute the lift, drag and moment coefficients to determine the trajectory. The STL file, even with dimples, can be read directly into Stallion 3D for analysis.
What we learn from the aerodynamics:
Stallion 3D strengths are:
During the past summer, AIAA successfully organized the 4th High Lift Prediction Workshop (HLPW-4) concurrently with the 3rd Geometry and Mesh Generation Workshop (GMGW-3), and the results are documented on a NASA website. For the first time in the workshop's history, scale-resolving approaches were included in addition to the Reynolds-averaged Navier-Stokes (RANS) approach. These approaches were covered by three Technology Focus Groups (TFGs): High Order Discretization; Hybrid RANS/LES; and Wall-Modeled LES (WMLES) and Lattice-Boltzmann.
The benchmark problem is the well-known NASA high-lift Common Research Model (CRM-HL), which is shown in the following figure. It contains many difficult-to-mesh features such as narrow gaps and slat brackets. The Reynolds number based on the mean aerodynamic chord (MAC) is 5.49 million, which makes wall-resolved LES (WRLES) prohibitively expensive.
Figure: The geometry of the high lift Common Research Model
University of Kansas (KU) participated in two TFGs: High Order Discretization and WMLES. We learned a lot during the productive discussions in both TFGs. Our workshop results demonstrated the potential of high-order LES in reducing the number of degrees of freedom (DOFs) but also contained some inconsistency in the surface oil-flow prediction. After the workshop, we continued to refine the WMLES methodology. With the addition of an explicit subgrid-scale (SGS) model, the wall-adapting local eddy-viscosity (WALE) model, and the use of an isotropic tetrahedral mesh produced by the Barcelona Supercomputing Center, we obtained very good results in comparison to the experimental data.
At the angle of attack of 19.57 degrees (free-air), the computed surface oil flows agree well with the experiment with a 4th-order method using a mesh of 2 million isotropic tetrahedral elements (for a total of 42 million DOFs/equation), as shown in the following figures. The pizza-slice-like separations and the critical points on the engine nacelle are captured well. Almost all computations produced a separation bubble on top of the nacelle, which was not observed in the experiment. This difference may be caused by a wire near the tip of the nacelle used to trip the flow in the experiment. The computed lift coefficient is within 2.5% of the experimental value. A movie is shown here.
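As a rough consistency check (assuming the 4th-order scheme corresponds to a complete p = 3 polynomial basis on each tetrahedron, which is not stated explicitly above), the degrees of freedom work out as

\[
N_{\mathrm{DOF/element}} = \frac{(p+1)(p+2)(p+3)}{6}\bigg|_{p=3} = 20,
\qquad
2\times10^{6}\ \text{elements}\times 20 \approx 4\times10^{7},
\]

which is in line with the quoted 42 million DOFs per equation.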
Figures: Comparison of surface oil flows between computation and experiment
Multiple international workshops on high-order CFD methods (e.g., 1, 2, 3, 4, 5) have demonstrated the advantage of high-order methods for scale-resolving simulation such as large eddy simulation (LES) and direct numerical simulation (DNS). The most popular benchmark from the workshops has been the Taylor-Green (TG) vortex case. I believe the following reasons contributed to its popularity:
Using this case, we are able to assess the relative efficiency of high-order schemes over a 2nd order one with the 3-stage SSP Runge-Kutta algorithm for time integration. The 3rd order FR/CPR scheme turns out to be 55 times faster than the 2nd order scheme to achieve a similar resolution. The results will be presented in the upcoming 2021 AIAA Aviation Forum.
Unfortunately the TG vortex case cannot assess turbulence-wall interactions. To overcome this deficiency, we recommend the well-known Taylor-Couette (TC) flow, as shown in Figure 1.
Figure 1. Schematic of the Taylor-Couette flow (r_i/r_o = 1/2)
The problem has a simple geometry and boundary conditions. The Reynolds number (Re) is based on the gap width and the inner wall velocity. When Re is low (~10), the problem has a steady laminar solution, which can be used to verify the order of accuracy for high-order mesh implementations. We choose Re = 4000, at which the flow is turbulent. In addition, we mimic the TG vortex by designing a smooth initial condition, and also employing enstrophy as the resolution indicator. Enstrophy is the integrated vorticity magnitude squared, which has been an excellent resolution indicator for the TG vortex. Through a p-refinement study, we are able to establish the DNS resolution. The DNS data can be used to evaluate the performance of LES methods and tools.
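For reference, and consistent with the description above (the exact normalisation conventions may differ from those used in the study), the gap-based Reynolds number and the enstrophy can be written as

\[
Re = \frac{u_i\,(r_o - r_i)}{\nu},
\qquad
\mathcal{E}(t) = \frac{1}{|\Omega|}\int_{\Omega}\tfrac{1}{2}\,\lvert\boldsymbol{\omega}\rvert^{2}\,\mathrm{d}V,
\]

where u_i is the inner-wall velocity, r_i and r_o the inner and outer radii, nu the kinematic viscosity, and omega the vorticity. As the resolution is increased in a p-refinement study, the enstrophy history converges, which is how the DNS resolution is established.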
Figure 2. Enstrophy histories in a p-refinement study
Happy 2021!
The year 2020 will be remembered in history even more than 1918, when the last great pandemic swept the globe. As I write this, daily new cases in the US are on the order of 200,000, while the daily death toll hovers around 3,000. According to many infectious disease experts, the darkest days may still be to come. In the next three months, we all need to do our very best by wearing a mask, practicing social distancing, and washing our hands. We are also seeing a glimmer of hope with several recently approved COVID vaccines.
2020 will also be remembered for what Trump tried, and is still trying, to do: overturn the results of a fair election. His accusations of widespread election fraud were proven wrong in Georgia and Wisconsin through multiple hand recounts. If there were any truth to the accusations, the paper recounts would have uncovered the fraud, because computer hackers or software cannot change paper votes.
Trump's dictatorial habits were there for the world to see over the last four years. Given another four-year term, he might just turn a democracy into a Trump dictatorship. That's precisely why so many voted in the middle of a pandemic. Biden won the popular vote by over 7 million and won the electoral college in a landslide. Many churchgoers support Trump because they dislike Democrats' stances on abortion, LGBT rights, and other issues. However, if a Trump dictatorship became reality, religious freedom might not exist any more in the US.
Is the darkest day going to be January 6th, 2021, when Trump will make a last-ditch effort to overturn the election results in the Electoral College certification process? Everybody knows it is futile, but it will give Trump another opportunity to extort money from his supporters.
But, the dawn will always come. Biden will be the president on January 20, 2021, and the pandemic will be over, perhaps as soon as 2021.
The future of CFD is, however, as bright as ever. On the front of large eddy simulation (LES), high-order methods and GPU computing are making LES more efficient and affordable. See a recent story from GE.
Figure 1. Various discretization stencils for the red point (panels for p = 1, p = 2, and p = 3)
|  | CL | CD |
|---|---|---|
| p = 1 | 2.020 | 0.293 |
| p = 2 | 2.411 | 0.282 |
| p = 3 | 2.413 | 0.283 |
| Experiment | 2.479 | 0.252 |
Leonardo Pagamonci
We’re thrilled to announce Leonardo Pagamonci, graduate student at the University of Florence, as the winner of the 2023 CONVERGE Academic Competition. The competition challenged students to design and run a novel CONVERGE simulation that demonstrates significant engineering knowledge, accurately reflects the real world, and represents progress for the engineering community.
Leonardo, who is pursuing a Ph.D. in industrial engineering, developed an interest in wind energy during his studies. “It strongly caught my attention because it’s a very interesting, modern field. The wind energy sector is relatively new, compared to other energy sectors.”
For his Ph.D., Leonardo is combining wind energy with another passion of his: computational fluid dynamics (CFD). He is developing a modeling approach to study the aeroelastic response of the wind turbine blades, i.e., the mutual interaction between the rotor structure and aerodynamics. When he learned about the CONVERGE Academic Competition, he thought it was the perfect opportunity to put his new modeling approach to the test. For his submission, he performed an aero-servo-elastic study of tandem onshore wind turbines operating in an atmospheric boundary layer (ABL), with the upwind turbine undergoing a yaw maneuver.
“The goal of this project was to simulate the operation of two turbines in an atmospheric boundary layer with realistic wind field conditions using a control technique that is common for wind farms,” said Leonardo.
The geometry for his study consists of two 5 MW onshore turbines separated by a distance of 7 rotor diameters (Figure 1). To simulate the rotor, Leonardo employed CONVERGE's actuator line model (ALM), a cost-efficient method for modeling the aeroelastic response of the rotor blades without having to resolve the full 3D blade geometry. He also included an actuator line for the wind turbine tower in his model to account for the aerodynamic effects of the tower and the aeroelastic interactions between the tower and the blades.
To conduct the aero-servo-elastic study, Leonardo coupled CONVERGE with OpenFAST, a multi-physics tool for simulating the coupled dynamic response of wind turbines, through a user-defined function in CONVERGE. With this approach, CONVERGE solves the flow domain, predicting the inflow velocities. These data are passed to OpenFAST and used as inputs to solve for the aerodynamics of the structure and calculate the new positions of the ALM nodes. Furthermore, Leonardo used a synthetic turbulence generator developed at the University of Florence1 to generate the macro-structures of the turbulent wind conditions.
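A minimal, self-contained sketch of that per-time-step exchange is shown below. The function names and the toy physics are placeholders (the real coupling is done through a CONVERGE user-defined function talking to OpenFAST), but the order of the data exchange mirrors the description above.

```python
# Toy sketch of the CONVERGE <-> OpenFAST exchange; all "solvers" below are
# trivial placeholders. Only the structure of the exchange is meaningful.

def sample_inflow(node_positions):
    """Placeholder for CONVERGE sampling the flow velocity at each actuator-line node."""
    return [(8.0, 0.0, 0.0) for _ in node_positions]          # uniform 8 m/s inflow

def openfast_advance(inflow, node_positions):
    """Placeholder for OpenFAST: returns blade loads and deflected node positions."""
    loads = [(0.0, 0.0, 1.0e3) for _ in inflow]               # dummy line loads
    deflected = [(x, y + 1e-3, z) for (x, y, z) in node_positions]
    return loads, deflected

def apply_actuator_forces(node_positions, loads):
    """Placeholder for projecting the line loads back onto the CFD grid as body forces."""
    pass

def coupled_step(node_positions):
    inflow = sample_inflow(node_positions)                            # 1. CFD -> structure
    loads, node_positions = openfast_advance(inflow, node_positions)  # 2. structure/servo solve
    apply_actuator_forces(node_positions, loads)                      # 3. structure -> CFD
    return node_positions

positions = [(0.0, 0.0, 0.1 * k) for k in range(10)]                  # toy ALM node positions
for _ in range(3):
    positions = coupled_step(positions)
```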
The purpose of Leonardo’s study was to investigate the effects of a yaw misalignment on the tandem wind turbines. Initially, the two rotors operate with zero yaw angle. At a specified time, the upwind rotor (T1) is controlled to maneuver to a 25° yaw angle. The effects of this maneuver on the downwind turbine (T2), as well as on the system as a whole, are then quantified.
Table 1 shows the results for aerodynamic power both before (pre) and after (post) the yaw maneuver. The yaw maneuver caused a decrease in performance in T1 and an increase in performance in T2, although of a smaller magnitude. Overall, the yaw maneuver resulted in a 3.6% decrease in performance for the whole system. The decrease in total power is likely because the yaw angle is not optimal. Further simulation studies of different angles could help identify an optimal configuration.
|  | T1 | T2 | Tandem |
|---|---|---|---|
| Power_pre (kW) | 2935 | 1263 | 4198 |
| Power_post (kW) | 2376 | 1672 | 4048 |
| Delta | -559 kW | +409 kW | -3.6% |
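The system-level figure follows directly from the per-turbine numbers in Table 1; a quick check:

```python
# Quick sanity check of the totals quoted in Table 1.
power_pre  = {"T1": 2935, "T2": 1263}   # kW, before the yaw maneuver
power_post = {"T1": 2376, "T2": 1672}   # kW, after the 25-degree yaw maneuver

total_pre, total_post = sum(power_pre.values()), sum(power_post.values())
print(total_pre, total_post)                                   # 4198 4048
print(f"{100 * (total_post - total_pre) / total_pre:.1f}%")    # -3.6%
```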
Looking at the structural response of the blades, Leonardo found a substantial redistribution of the loads following the yaw maneuver, with significant changes in the mean displacements of the blade tips (Figure 2).
“Aeroelasticity is a very important aspect of wind turbine analysis, especially because horizontal-axis wind turbines have very large rotors,” Leonardo explained. “With such long, slender, and flexible blades, it is important to analyze the mutual interaction of the aerodynamics and the structure, since each one interacts with and modifies the response of the other.”
Being able to accurately predict these interactions becomes even more important when looking at larger wind farms, where the wakes from the upwind rows propagate to the downwind ones, affecting the performance of the entire wind farm. In addition, the structural response of each individual turbine must be taken into account. These kinds of studies are exactly what Leonardo has planned for the future using this methodology.
“This tool is applicable to a very wide range of analyses,” said Leonardo. “You could analyze more yaw maneuver angles to see which is optimal, look at a broad range of operating conditions, investigate cases where the turbines aren’t aligned with the wind, study a greater number of turbines, or simulate much larger turbines. And because the controller is available with this tool, the studies have another degree of realism.”
Leonardo’s work is not only extending the modeling capabilities of CONVERGE, but also enabling more realistic studies of complex wind turbine dynamics, which will ultimately help the wind energy industry continue to grow to meet rising consumer demand. We look forward to seeing more of Leonardo’s impressive work in the future!
Learn more about the CONVERGE Academic Program here.
[1] Balduzzi, F., Zini, M., Ferrara, G., and Bianchini, A., “Development of a Computational Fluid Dynamics Methodology to Reproduce the Effects of Macroturbulence on Wind Turbines and Its Application to the Particular Case of a VAWT,” Journal of Engineering for Gas Turbines and Power, 141(11), 2019. DOI: 10.1115/1.4044231
Scott Drennan
November 5, 1962 – August 7, 2023
It is with heavy hearts that we mourn the passing and honor the life of Scott Drennan, a remarkable individual whose impact reached far beyond his professional achievements. As the director of both gas turbine and aftertreatment applications at Convergent Science, Scott’s journey was one of dedication, innovation, and unwavering support for his colleagues, friends, and family.
Scott joined Convergent Science in 2012, when the company was aiming to branch out into gas turbine and aftertreatment modeling. In search of someone who would own and evolve our presence in these new markets, Scott emerged as a natural choice to lead our endeavors because of his renowned reputation in the field. Relocating his family from California to Texas demonstrated not only his dedication but also his willingness to embrace new challenges. Scott’s contributions to our gas turbine solutions were nothing short of transformative, a reflection of his ability to drive progress.
Throughout his years at the helm of the Aftertreatment team, Scott exhibited an inspiring passion for growth. He masterfully guided the team’s evolution, from nurturing talent to crafting the very training program that paved the way for groundbreaking aftertreatment modeling with CONVERGE. Scott’s commitment to validation laid the cornerstone for client acquisition, future benchmarks, and software development. His oversight of key initiatives, such as urea deposit and filter modeling, was a testament to his visionary leadership.
Scott was more than just a professional. His love for live music, sports, and culinary experiences showcased his zest for life. His ability to find hidden gems in gastronomy enriched every journey. As a friend and colleague, he radiated warmth, leaving memories of shared laughter and camaraderie from countless trips and projects.
Above all, Scott’s conversations were frequently punctuated with stories of his greatest treasures: his wife, Julie, and his three children. His dedication to family radiated as he spoke with pride about his daughter’s accomplishments and his boys’ martial arts victories and educational achievements. Scott’s anecdotes and wisdom on parenting forged a bond, reminding us of the shared joys and challenges of fatherhood.
Scott’s legacy will forever remain a testament to the power of friendship, the pursuit of excellence, and the importance of cherishing those we hold dear. As we grieve this immeasurable loss, let us remember the light he brought to our lives and extend our deepest condolences to his beloved family. Though he is no longer with us, his spirit lives on in the memories we share and the values he instilled. Rest in peace, dear friend.
**Following his wishes, in lieu of flowers, contributions may be made to the boys' college funds at Ugift529.com. Codes: Christopher Q17-G8X, Sean H5R-C42**
Author:
Alexandre Minot
Senior Research Engineer
At Convergent Science, we recently selected ParaView Catalyst as our in situ post-processing solution for solving computational fluid dynamics (CFD) problems. ParaView Catalyst is a library that allows ParaView, an open-source data analysis and visualization program distributed by Kitware, to connect to simulation codes. With ParaView Catalyst, ParaView can access the simulation code’s data and post-process it on the fly directly on the high-performance computing (HPC) cluster. This feature eliminates the need to write large 3D results files. Additionally, you get results tailored to your application during the run.
Coupling with ParaView Catalyst allows you to track high frequency phenomena, monitor the convergence of your simulation, or simply have your results ready to go for your presentation at any time. Because in situ post-processing allows you to extract only the most important data from your simulation, it significantly reduces the size of the files you need to download from the computational server to your workstation.
While the simulation is running, CONVERGE uses ParaView Catalyst to open background instances of ParaView automatically. CONVERGE then shares its data with ParaView and triggers the run of a post-processing script. ParaView runs in parallel on the same HPC nodes as CONVERGE and accesses CONVERGE’s memory directly, guaranteeing fast and fully automatic data processing. ParaView will write only the data and images you asked for in the CONVERGE results directory.
Suppose you want to visualize autoignition in a piston engine, a fast-moving phenomenon. In a typical CFD workflow, you would need to save the 3D data at a high frequency, potentially at every time-step, in order to capture the autoignition. At the end of the simulation, this large amount of data is downloaded onto the post-processing machine, where it has to be loaded again and processed for visualization.
For knock identification, we recommend the extraction of an isosurface of 1700 K to visualize the main flame front and an isosurface of pressure difference colored by the mass fraction of CH2O to identify the autoignition pockets. With ParaView Catalyst, CONVERGE can write out these isosurfaces directly during the simulation. For our knock demonstration case, this coupling decreases the total runtime of the simulation by about 20%, compared with saving 3D files at the same frequency. Since no post-processing of the 3D files is necessary, you can then directly load the isosurfaces in your favorite visualization tool.
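For readers curious what such an extraction looks like in code, below is a hypothetical sketch using ParaView's Python API (paraview.simple), which is what Catalyst scripts build on. The field name "TEMPERATURE", the data source, and the output path are assumptions rather than CONVERGE's actual variable names; the real script is generated by CONVERGE Studio or ParaView.

```python
# Hypothetical sketch of a flame-front extraction with paraview.simple; the field
# and file names are placeholders, not CONVERGE's actual output names.
from paraview.simple import Contour, SaveData

def extract_flame_front(source, output_path="flame_front.vtp"):
    # Isosurface of temperature at 1700 K, used as a marker of the main flame front.
    flame = Contour(Input=source,
                    ContourBy=["POINTS", "TEMPERATURE"],
                    Isosurfaces=[1700.0])
    # Write only the extracted surface instead of the full 3D field.
    SaveData(output_path, proxy=flame)
```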
There are two ways to configure in situ post-processing actions in CONVERGE. The first way is through predefined scripts in CONVERGE Studio. Using these predefined scripts, you can set up in situ post-processing in just a few clicks. No knowledge of ParaView is required to configure a Catalyst script, and everything is accessible directly in a classic CONVERGE Studio panel (Figure 2).
Figure 3 shows an image of a slice generated during a spray simulation. Its extraction was set up directly in CONVERGE Studio using the ParaView Catalyst panel. Slices, which allow us to easily visualize flow, are among the most common CFD data extractions. By extracting slices at high frequency during the simulation, you can access more detailed information sooner than with a classic post-processing workflow.
The second way to configure in situ post-processing actions is to create a custom Catalyst script in ParaView. Creating your own post-processing scripts can be done easily before you start your simulation using Studio ParaView, our integration of the ParaView software available starting in CONVERGE Studio 3.1_10May2023. Using the Studio ParaView graphical user interface, you can set up your post-processing the way you would a classic post-processing workflow. Once configured, ParaView allows you to export your setup in the form of a Catalyst script, which is ready to be used by CONVERGE during the simulation.
For example, Figure 4 shows a video of gas venting in a single cell undergoing thermal runaway in an e-bike battery pack. To generate the images for this video, we used ParaView to set up isosurfaces of H2, C2H2, and CH4 and exported the setup to a Catalyst script.
ParaView Catalyst allows you to extract only the most important data from your simulation in real time, enabling you to transfer results faster and incorporate them directly into your design review process. In situ post-processing with ParaView Catalyst filters the unnecessary data and saves only the data you need for your analysis.
Interested in finding out more about how ParaView Catalyst can help you streamline your CFD workflow? Contact us today!
Author:
Wendy Lovinger
The heart is a vital organ that pumps blood throughout the body, carrying oxygen and nutrients critical to organ function and sustaining life. It is, nevertheless, susceptible to disease. Heart disease touches the lives of almost everyone. The line between a healthy heart and an unhealthy heart is a fine one. Modern medicine has made significant advances in the technology needed to successfully intervene in the event of illness, but the technology can always be improved. One of the areas where improvements can continue to be made is mechanical heart valves.
Determining whether an implanted mechanical heart valve will open and close properly based on the actual blood flow usually requires patient participation, a high-risk proposition. Using computational fluid dynamics (CFD) to model mechanical heart valves, on the other hand, is a low-cost, low-risk method to evaluate device performance before performing an invasive procedure.
In this blog post, we explain how we simulated an idealized mechanical 3D heart valve with a small leaflet-to-blood density ratio using CONVERGE. We validated our results with the data from Banks et al., 2018.1
We modeled the motion of the mechanical heart valve with CONVERGE's implicit fluid-structure interaction (FSI) solver. Because the density of blood is so close to the density of the heart valve, the added mass effect is significant, which can cause explicit FSI solvers to become unstable. CONVERGE's implicit FSI solver can account for the additional inertial forces from the added mass effect. The implicit method tightly couples the CFD solver with the six-degree-of-freedom rigid-body FSI solver, iterating between the two within a single time-step until the solution converges.
This implicit coupling allows us to predict the movement of an FSI object submerged in a fluid of a similar or higher density, such as a mechanical heart valve in blood. Figure 1 shows that our implicit FSI solver can accurately model how an idealized heart valve opens and closes for a range of leaflet-to-blood density ratios.
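Below is a self-contained toy sketch of that sub-iteration structure, reduced to a single scalar "degree of freedom" with made-up physics. It is meant only to illustrate how the flow and rigid-body solves are repeated within one time-step until the predicted motion stops changing; it is not CONVERGE's actual algorithm.

```python
# Toy illustration of an implicitly coupled FSI time step: the flow and
# rigid-body "solves" below are placeholders with made-up physics.

def solve_flow_with_body(position):
    """Placeholder flow solve: returns the fluid force on the body at this position."""
    return 1.0 - position                      # toy force pulling the body toward 1.0

def solve_rigid_body(position_old, force, dt):
    """Placeholder 6-DOF solve reduced to a single degree of freedom."""
    return position_old + dt * force

def implicit_fsi_step(position_old, dt=0.1, tol=1e-8, max_iters=50):
    candidate = position_old
    for _ in range(max_iters):
        force = solve_flow_with_body(candidate)               # flow solve with latest predicted position
        new_candidate = solve_rigid_body(position_old, force, dt)
        if abs(new_candidate - candidate) < tol:              # sub-iterations converged
            return new_candidate
        candidate = new_candidate
    return candidate

state = 0.0
for _ in range(10):
    state = implicit_fsi_step(state)
```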
To capture the moving geometry of the mechanical heart valve, we used CONVERGE’s Cartesian cut-cell method with autonomous mesh generation. In some CFD solvers, creating an appropriate mesh for an FSI simulation can be challenging because you don’t know the motion profile ahead of time. In CONVERGE, the mesh is automatically regenerated near the FSI object at each time-step, easily accommodating the motion without any additional setup. We also deployed our Adaptive Mesh Refinement (AMR) to refine the grid in areas of high velocity gradient, which allows us to accurately capture the changes in velocity around the valve leaflet.
Figure 2 shows four velocity contour images at different stages of the heart valve opening and closing. CONVERGE’s AMR refines the grid only where the velocity changes the most and leaves the grid coarser where the flow is stagnant, greatly reducing computational expense.
Our results show you can accurately simulate an artificial heart valve with CONVERGE’s implicit FSI solver and autonomous meshing feature. Because CONVERGE allows you to easily modify your geometry, it is an excellent tool for evaluating the performance of different heart valve designs. Interested in finding out what other biomedical applications CONVERGE can be used for? Check out our biomedical webpage here!
[1] Banks, J.W., Henshaw, W.D., Schwendeman, D.W., and Tang, Q., "A Stable Partitioned FSI Algorithm for Rigid Bodies and Incompressible Flow in Three Dimensions," Journal of Computational Physics, 373, 455-492, 2018. DOI: 10.1016/j.jcp.2018.06.072
Author:
Jameil Kolliyil
Engineer, Technical Marketing
From refineries to planes, gas turbines are vital to several industries. In addition to providing thrust to keep planes in the air, gas turbines account for almost a quarter of the world's electricity production.1 Given their prominence in industry, reducing emissions from gas turbines is crucial. Hydrogen has emerged as one of the more attractive alternative fuels for gas turbines and is backed by several nations to replace or supplement conventional fuels. Hydrogen offers numerous advantages: it has a higher calorific value per unit mass, produces no carbon dioxide when combusted, and can be blended with existing fuels without major changes to the combustor.
While the use of hydrogen fuel is desirable, there are a number of design, storage, and operational challenges that come with it. One major challenge in designing new gas turbines or retrofitting old ones is preventing a phenomenon called flashback in the combustor. During flashback, the flame propagates upstream at speeds higher than the incoming gas flow. Sustained upstream propagation can cause substantial thermal damage to the combustor hardware. Hydrogen has faster kinetics and a higher flame speed than conventional fuels, making it more prone to flashback. To mitigate this phenomenon, various studies are being performed to find the limits of safe operation for hydrogen fuel. At Convergent Science, we used CONVERGE to perform one such study to analyze flashback in a swirling combustor.2 We compared our simulation results with experimental work performed at The University of Texas at Austin by D. Ebi.3
Figure 1 shows the geometry of the swirling combustor that was investigated in our study. Premixed fuel and air enter through the bottom, pass the swirler, and ignite in the combustion chamber. To accurately predict flashback, we employed the dynamic structure large eddy simulation (LES) model and a detailed chemistry mechanism4 fully coupled with the flow solver. Because the flame travels upstream during flashback, the mesh in the premixing section and the combustion chamber must be refined enough to capture the flame front. However, such an approach will result in unrealistically long simulation times. To obtain accurate results in a reasonable timeframe, we used CONVERGE’s Adaptive Mesh Refinement (AMR) technology to add mesh resolution along the flame front while maintaining a coarser mesh in other parts of the computational domain.
In Figure 2, we have shown a visual comparison between experimental3 and simulation results for a CH4 + air (equivalence ratio Φ = 0.8) fuel mixture. You can see there is a good resemblance in the flame structure and temporal location. We also analyzed the flashback limit for a CH4 + H2 + air (Φ = 0.4) fuel mixture. For this particular fuel mixture, the experimental value for the onset of flashback is 75% H2 by volume.3 Based on our simulations, we predicted a value of 77% of H2 by volume.
The present study demonstrates an engineering solution for accurately predicting flashback and analyzing flame propagation using CONVERGE. For more details about this research, take a look at our paper here! With a long history of simulating complex geometries and combustion, CONVERGE is the go-to tool for all your gas turbine flow simulations. Check out our gas turbine webpage for more information on how CONVERGE can help you design the gas turbines of the future!
[1] “bp Statistical Review of World Energy, 2022 | 71st Edition”, bp, 2022. https://www.bp.com/content/dam/bp/business-sites/en/global/corporate/pdfs/energy-economics/statistical-review/bp-stats-review-2022-full-report.pdf
[2] Kumar, G., and Attal, N., “Accurate Predictions of Flashback in a Swirling Combustor with Detailed Chemistry and Adaptive Mesh Refinement,” AIAA SciTech Forum, San Diego, CA, United States, Jan 3–7, 2022. DOI: 10.2514/6.2022-1722
[3] Ebi, D.F., “Boundary Layer Flashback of Swirl Flames,” Ph.D. thesis, The University of Texas at Austin, Austin, TX, United States, 2016. https://repositories.lib.utexas.edu/handle/2152/38721
[4] Smith, G.P., Tao, Y., and Wang, H., "Foundational Fuel Chemistry Model Version 1.0 (FFCM-1)," https://web.stanford.edu/group/haiwanglab/FFCM1/pages/download.html, 2016.
Co-Author:
Jameil Kolliyil
Engineer, Technical Marketing
Last year while traveling through the countryside of Tamil Nadu, India, I was struck by the sight of numerous wind turbines dotting the landscape. Those towering machines were not only a testament to the ingenuity of human engineering but also a symbol of the growing importance of wind energy in India. In recent years, wind energy has emerged as a significant source of renewable energy in India, contributing to the country’s efforts to reduce its dependence on fossil fuels and mitigate the effects of climate change. With its vast coastline, ample wind resources, and growing demand for electricity, India has the potential to become a global leader in wind energy.
To promote research and development of wind energy technology, the Indian government is taking steps to support universities and research institutions by providing funding, incentives, and skill development programs. At Convergent Science, we recognize the importance of advancing research through academia and offer exclusive CONVERGE license deals to universities. Kingshuk Mondal is a graduate student working with Professor Niranjan S. Ghaisas at the Indian Institute of Technology Hyderabad (IITH), and he is leveraging CONVERGE to study wind farm wakes on complex terrain. Kingshuk also presented his research at the CONVERGE User Conference–India 2023. I’ll let Kingshuk explain what he’s been working on.
Co-Author:
Kingshuk Mondal
Graduate Student, Indian Institute of Technology Hyderabad (IITH)
The wind energy sector has seen rapid growth in the context of sustainable development, resulting in large installations of onshore and offshore wind farms. Onshore wind turbines are often situated on complex terrain because of the high wind resource potential in hilly regions. Accurate estimations of power output and turbine lifetime are essential aspects of wind turbine and wind farm design and operation. To achieve accurate estimations, you must predict the turbulent flow conditions, the wind turbine wake recovery, and the interactions between wakes of multiple turbines in a wind farm. The wake of a wind turbine evolves differently when sited on complex terrain (e.g., on a hill) compared to a flat surface. Our study aims to optimize the layout of a wind farm over a complex topology for efficient energy extraction and minimal structural stresses.
In this work, we focus on the evolution of an isolated wind turbine’s wake and the wake interactions in a row of wind turbines sited on an idealized cosine-shaped hill. CONVERGE is a useful tool for these simulations because of its ability to simulate flow in complex geometries without time-consuming mesh generation and the flexibility to use a range of turbulence closure models. In addition, CONVERGE’s Adaptive Mesh Refinement feature automatically concentrates grid points in regions with large gradients. For this work, we used large eddy simulations (LES) with the dynamic Smagorinsky model as the sub-grid scale model.
First, we validated a single turbine on a flat surface against the experimental study of Chamorro and Porté-Agel (2009).1 We found fair quantitative and qualitative agreement between the simulation results and the experimental data. We then proceeded to simulate the flow over a cosine-shaped hill. The flow accelerates on the windward slope of the hill and attains the highest velocity at the top of the hill, as shown in Figure 1(a). These areas have low turbulence intensity (TI) and total shear stress (TSS), making them appropriate sites for installing wind turbines. A long wake region is formed on the leeward side of the hill, stretching up to 15 hill heights. This region is characterized by enhanced TI and TSS along with low wind potential, making it unfavorable for wind turbine installation.
Placing a wind turbine in front of and on the top of the hill has a similar effect on the hill wake. The wake recovery behind the hill is faster due to the influence of TI from the turbine wake. Because of this, reasonable wind potential is observed after 5 hill distances on the leeward side of the hill as shown in Figure 1(b).
With these findings in mind, we placed a row of five turbines (T1–T5) along the hill as shown in Figure 2. T3 and T4 are placed on the windward slope and on top of the hill, respectively, to minimize the effect of the wakes from T1 and T2.
Because the flow accelerates as it climbs the slope of the hill, T5 is placed at a distance of approximately 5H after the hill to get reasonable wind potential. In addition to considerable wind input, T5 encounters high TI and TSS—reinforcing the structure of T5 is imperative to reduce fatigue stresses. These results are shown in Figure 3.
This study is a first step toward optimizing the layout of a wind farm over complex topology. Future work will consist of rigorous validation of different cases with multiple turbines and flow over various topologies. We also aim to estimate the power output for the optimized layout.
Thanks, Kingshuk! Analyzing potential wind farm locations to extract maximum energy and ensure smooth operation is crucial to future wind energy projects. Wind energy is expected to play a critical role in the world’s energy transition to help meet our climate goals, and Kingshuk’s work is a promising step toward creating more efficient wind farms. From analyzing renewable sources of energy to assessing battery energy storage systems where the generated electricity is stored, CONVERGE is the go-to tool for designing sustainable technologies!
[1] Chamorro, L.P. and Porté-Agel, F., "A Wind-Tunnel Investigation of Wind-Turbine Wakes: Boundary-Layer Turbulence Effects," Boundary-Layer Meteorology, 132, 129-149, 2009.
Graphcore has used a range of technologies from Mentor, a Siemens business, to successfully design and verify its latest M2000 platform based on the Graphcore Colossus™ GC200 Intelligence Processing Unit (IPU) processor.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly. Watch this short video to learn how.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to add a component into a direct current (DC) electro-thermal calculation by specifying the component's electrical resistance. The corresponding Joule heat is calculated and applied to the body as a heat source. Watch this short video to learn how.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get to the required input parameters faster and more easily. Watch this short video to learn how.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate while still maintaining a high level of accuracy. Watch this short video to learn how.
High semiconductor temperatures may lead to component degradation and ultimately failure. Proper semiconductor thermal management is key for design safety, reliability and mission critical applications.
A common question from Tecplot 360 users centers around the hardware they should buy to achieve the best performance. The answer is, invariably, "it depends." That said, we'll try to demystify how Tecplot 360 utilizes your hardware so you can make an informed decision in your hardware purchase.
Let’s have a look at each of the major hardware components on your machine and show some test results that illustrate the benefits of improved hardware.
Our test data is an OVERFLOW simulation of a wind turbine. The data consists of 5,863 zones totaling 263,075,016 elements, and the file size is 20.9 GB.
The test was performed using 1, 2, 4, 8, 16, and 32 CPU-cores, with the data on a local HDD (spinning hard drive) and a local SSD (solid state disk). The number of CPU cores was limited using Tecplot 360's --max-available-processors command line option.
Data was cleared from the disk cache between runs using RamMap.
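For reference, here is a rough PyTecplot sketch of this kind of batch workload (load a file, compute an isosurface, export an image). The file name, contour variable, and isosurface value below are placeholders rather than the actual benchmark settings.

```python
# Rough sketch of a load / isosurface / export batch job with PyTecplot.
# The file name, contour variable, and isosurface value are placeholders.
import time
import tecplot as tp
from tecplot.constant import PlotType

start = time.perf_counter()

dataset = tp.data.load_tecplot_szl('wind_turbine.szplt')      # placeholder path
frame = tp.active_frame()
frame.plot_type = PlotType.Cartesian3D
plot = frame.plot()

plot.contour(0).variable = dataset.variable('Q')              # placeholder variable
plot.show_isosurfaces = True
plot.isosurface(0).isosurface_values[0] = 0.5                 # placeholder iso-value

tp.export.save_png('isosurface.png', width=1920)
print(f'elapsed: {time.perf_counter() - start:.1f} s')
```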
Advice: Buy the fastest disk you can afford.
In order to generate any plot in Tecplot 360, you need to load data from a disk. Some plots require more data to be loaded off disk than others. Some file formats are also more efficient than others – particularly file formats that summarize the contents of the file in a single header portion at the top or bottom of the file – Tecplot’s SZPLT is a good example of a highly efficient file format.
We found that the SSD was 61% faster than the HDD when using all 32 CPU-cores for this post-processing task.
All this said – if your data are on a remote server (network drive, cloud storage, HPC, etc…), you’ll want to ensure you have a fast disk on the remote resource and a fast network connection.
With Tecplot 360 the SZPLT file format coupled with the SZL Server could help here. With FieldView you could run in client-server mode.
Advice: Buy the fastest CPU, with the most cores, that you can afford. But realize that performance is not always linear with the number of cores.
Most of Tecplot 360’s data compute algorithms are multi-threaded – meaning they’ll use all available CPU-cores during the computation. These include (but are not limited to): Calculation of new variables, slices, iso-surfaces, streamtraces, and interpolations. The performance of these algorithms improves linearly with the number of CPU-cores available.
You’ll also notice that the overall performance improvement is not linear with the number of CPU-cores. This is because loading data off disk becomes a dominant operation, and the slope is bound to asymptote to the disk read speed.
You might notice that the HDD performance actually got worse beyond 8 CPU-cores. We believe this is because the HDD on this machine was just too slow to keep up with 16 and 32 concurrent threads requesting data.
It’s important to note that with data on the SSD the performance improved all the way to 32 CPU-cores. Further reinforcing the earlier advice – buy the fastest disk you can afford.
Advice: Buy as much RAM as you need, but no more.
You might be thinking: “Thanks for nothing – really, how much RAM do I need?”
Well, that’s something you’re going to have to figure out for yourself. The more data Tecplot 360 needs to load to create your plot, the more RAM you’re going to need. Computed iso-surfaces can also be a large consumer of RAM – such as the iso-surface computed in this test case.
If you have transient data, you may want enough RAM to post-process a couple time steps simultaneously – as Tecplot 360 may start loading a new timestep before unloading data from an earlier timestep.
The amount of RAM required is going to be different depending on your file format, cell types, and the post-processing activities you're doing.
When testing the amount of RAM used by Tecplot 360, make sure to set the Load On Demand strategy to Minimize Memory Use (available under Options>Performance).
This will give you an understanding of the minimum amount of RAM required to accomplish your task. When set to Auto Unload (the default), Tecplot 360 will maintain more data in RAM, which improves performance. The amount of data Tecplot 360 holds in RAM is dictated by the Memory threshold (%) field, seen in the image above. So you – the user – have control over how much RAM Tecplot 360 is allowed to consume.
Advice: Most modern graphics cards are adequate, even Intel integrated graphics provide reasonable performance. Just make sure you have up to date graphics drivers. If you have an Nvidia graphics card, favor the “Studio” drivers over the “Game Ready” drivers. The “Studio” drivers are typically more stable and offer better performance for the types of plots produced by Tecplot 360.
Many people ask specifically what type of graphics card they should purchase. This is, interestingly, the least important hardware component (at least for most of the plots our users make). Most of the post-processing pipeline is dominated by the disk and CPU, so the time spent rendering the scene is a small percentage of the total.
That said – some scenes will stress your graphics card more than others.
Note that Tecplot 360’s interactive graphics performance currently (2023) suffers on Apple Silicon (M1 & M2 chips). The Tecplot development team is actively investigating solutions.
As with most things in life, striking a balance is important. You can spend a huge amount of money on CPUs and RAM, but if you have a slow disk or slow network connection, you’re going to be limited in how fast your post-processor can load the data into memory.
So, evaluate your post-processing activities to understand which pieces of hardware may be the bottleneck for your specific workflow.
And again – make sure you have enough RAM for your workflow.
The post What Computer Hardware Should I Buy for Tecplot 360? appeared first on Tecplot Website.
Three years after our merger began, we can report that the combined FieldView and Tecplot team is stronger than ever. Customers continue to receive the highest quality support and new product releases and we have built a solid foundation that will allow us to continue contributing to our customers’ successes long into the future.
This month we have taken another step by merging the FieldView website into www.tecplot.com. Our social media outreach will also be combined. Stay up to date with news and announcements by subscribing and following us on social media.
Members of Tecplot 360 & FieldView teams exhibit together at AIAA SciTech 2023. From left to right: Shane Wagner, Charles Schnake, Scott Imlay, Raja Olimuthu, Jared McGarry and Yves-Marie Lefebvre. Not shown are Scott Fowler and Brandon Markham.
It’s been a pleasure seeing two groups that were once competitors come together as a team, learn from each other and really enjoy working together.
– Yves-Marie Lefebvre, Tecplot CTO & FieldView Product Manager.
Our customers have seen some of the benefits of our merger in the form of streamlined services from the common Customer Portal, simplified licensing, and license renewals. Sharing expertise and assets across teams has already led to the faster implementation of modules such as licensing and CFD data loaders. By sharing our development resources, we’ve been able to invest more in new technology, which will soon translate to increased performance and new features for all products.
Many of the improvements are internal to our organization but will have lasting benefits for our customers. Using common development tools and infrastructure will enable us to be as efficient as possible to ensure we can put more of our energy into improving the products. And with the backing of the larger organization, we have a firm foundation to look long term at what our customers will need in years to come.
We want to thank our customers and partners for their support and continued investment as we endeavor to create better tools that empower engineers and scientists to discover, analyze and understand information in complex data, and effectively communicate their results.
The post FieldView joins Tecplot.com – Merger Update appeared first on Tecplot Website.
One of the most memorable parts of my finite-elements class in graduate school was a comparison of linear elements and higher-order elements for the structural analysis of a dam. As I remember, they were able to duplicate the results obtained with 34 linear elements by using a SINGLE high-order element. This made a big impression on me, but the skills I learned at that time remained largely unused until recently.
You see, my Ph.D. research and later work used finite-volume CFD codes to solve steady-state viscous flows. For steady flows, there didn't seem to be much advantage to using higher than 2nd- or 3rd-order accuracy.
This has changed recently as the analysis of unsteady vortical flows has become more common. The use of higher-order (greater than second-order) computational fluid dynamics (CFD) methods is increasing. Popular government and academic CFD codes such as FUN3D, KESTREL, and SU2 have released, or are planning to release, versions that include higher-order methods. This is because higher-order accurate methods offer the potential for better accuracy and stability, especially for unsteady flows. This trend is likely to continue.
Commercial visual analysis codes are not yet providing full support for higher-order solutions. The CFD 2030 vision states
“…higher-order methods will likely increase in utilization during this time frame, although currently the ability to visualize results from higher order simulations is highly inadequate. Thus, software and hardware methods to handle data input/output (I/O), memory, and storage for these simulations (including higher-order methods) on emerging HPC systems must improve. Likewise, effective CFD visualization software algorithms and innovative information presentation (e.g., virtual reality) are also lacking.”
The isosurface algorithm described in this paper is the first step toward improving higher-order element visualization in the commercial visualization code Tecplot 360.
Higher-order methods can be based on either finite-difference methods or finite-element methods. While some popular codes use higher-order finite-difference methods (OVERFLOW, for example), this paper will focus on higher-order finite-element techniques. Specifically, we will present a memory-efficient recursive subdivision algorithm for visualizing the isosurface of higher-order element solutions.
In previous papers we demonstrated this technique for quadratic tetrahedral, hexahedral, pyramid, and prism elements with Lagrangian polynomial basis functions. In this paper, Optimized Implementation of Recursive Sub-Division Technique for Higher-Order Finite-Element Isosurface and Streamline Visualization, we discuss the integration of these techniques into the engine of the commercial visualization code Tecplot 360 and discuss speed optimizations. We also discuss the extension of the recursive subdivision algorithm to cubic tetrahedral and pyramid elements, and quartic tetrahedral elements. Finally, we discuss the extension of the recursive subdivision algorithm to the computation of streamlines.
Click an image to view the slideshow
The post Faster Visualization of Higher-Order Finite-Element Data appeared first on Tecplot Website.
In this release, we are very excited to offer “Batch-Pack” licensing for the first time. A Batch-Pack license enables a single user access to multiple concurrent batch instances of our Python API (PyTecplot) while consuming only a single license seat. This option will reduce license contention and allow for faster turnaround times by running jobs in parallel across multiple nodes of an HPC. All at a substantially lower cost than buying additional license seats.
Data courtesy of ZJ Wang, University of Kansas, visualization by Tecplot.
The post Webinar: Tecplot 360 2022 R2 appeared first on Tecplot Website.
Batch-mode is a term nearly as old as computers themselves. Despite its age, however, it is representative of a concept that is as relevant today as it ever was, perhaps even more so: headless (scripted, programmatic, automated, etc.) execution of instructions. Lots of engineering is done interactively, of course, but oftentimes the task is a known quantity and there is a ton of efficiency to be gained by automating the computational elements. That efficiency is realized ten times over when batch-mode meets parallelization – and that's why we thought it was high time we offered a batch-mode licensing model for Tecplot 360's Python API, PyTecplot. We call them "batch-packs."
Tecplot 360 batch-packs work by enabling users to run multiple concurrent instances of our Python API (PyTecplot) while consuming only a single license seat. It’s an optional upgrade that any customer can add to their license for a fee. The benefit? The fee for a batch-pack is substantially lower than buying an equivalent number of license seats – which makes it easier to justify outfitting your engineers with the software access they need to reach peak efficiency.
Here is a handy little diagram we drew to help explain it better:
Each network license allows ‘n’ seats. Traditionally, each instance of PyTecplot consumes 1 seat. Prior to the 2022 R2 release of Tecplot 360 EX, licenses only operated using the paradigm illustrated in the first two rows of the diagram above (that is, a user could check out up to ‘n’ seats, or ‘n’ users could check out a single seat). Now customers can elect to purchase batch-packs, which will enable each seat to provide a single user with access to ‘m’ instances of PyTecplot, as shown in the bottom row of the figure.
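As a purely illustrative sketch of that usage pattern, the snippet below launches several independent PyTecplot batch jobs in parallel, one per solution file. The script name "postprocess.py" and the file names are placeholders for whatever batch script you actually run.

```python
# Illustrative only: launch several independent PyTecplot batch jobs in parallel,
# which is the usage pattern batch-packs are designed to license efficiently.
import subprocess
import sys

solution_files = ['run_0001.szplt', 'run_0002.szplt', 'run_0003.szplt']  # placeholders

jobs = [subprocess.Popen([sys.executable, 'postprocess.py', f]) for f in solution_files]
exit_codes = [job.wait() for job in jobs]
print('all jobs finished with exit codes:', exit_codes)
```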
In addition to a cost reduction (vs. purchasing an equivalent number of network seats), batch-pack licensees will enjoy:
We’re excited to offer this new option and hope that our customers can make the most of it.
The post Introducing 360 “Batch-Packs” appeared first on Tecplot Website.
If you care about how you present your data and how people perceive your results, stop reading and watch this talk by Kristen Thyng on YouTube. Seriously, I’ll wait, I’ve got the time.
Which colormap you choose, and which data values are assigned to each color can be vitally important to how you (or your clients) interpret the data being presented. To illustrate the importance of this, consider the image below.
Figure 1. Visualization of the Southeast United States. [4]
Before I explain what a perceptually uniform colormap is, let’s start with everyone’s favorite: the rainbow colormap. We all love the rainbow colormap because it’s pretty and is recognizable. Everyone knows “ROY G BIV” so we think of this color progression as intuitive, but in reality (for scalar values) it’s anything but.
Consider the image below, which represents the “Estimated fraction of precipitation lost to evapotranspiration”. This image makes it appear that there’s a very distinct difference in the scalar value right down the center of the United States. Is there really a sudden change in the values right in the middle of the Great Plains? No – this is an artifact of the colormap, which is misleading you!
Figure 2. This plot illustrates how the rainbow colormap is misleading, giving the perception that there is a distinct difference in the middle of the US, when in fact the values are more continuous. [2]
So let’s dive a little deeper into the rainbow colormap and how it compares to perceptually uniform (or perceptually linear) colormaps.
Consider the six images below, what are we looking at? If you were to only look at the top three images, you might get the impression that the scalar value has non-linear changes – while this value (radius) is actually changing linearly. If presented with the rainbow colormap, you’d be forgiven if you didn’t guess that the object is a cone, colored by radius.
Figure 3. An example of how the rainbow colormap imparts information that does not actually exist in the data.
So why does the rainbow colormap mislead? It’s because the color values are not perceptually uniform. In this image you can see how the perceptual changes in the colormap vary from one end to the other. The gray scale and “cmocean – haline” colormaps shown here are perceptually uniform, while the rainbow colormap adds information that doesn’t actually exist.
Figure 4. Visualization of the perceptual changes of three colormaps. [5]
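You can reproduce this effect outside of Tecplot 360 in a few lines. The sketch below (using matplotlib, not Tecplot) renders the same linear ramp with the rainbow-like "jet" colormap and the perceptually uniform "viridis" colormap; any apparent bands in the jet image are artifacts of the colormap rather than of the data.

```python
# Render the same linear ramp with a rainbow colormap and a perceptually
# uniform colormap; any apparent "structure" in the jet image is an artifact.
import numpy as np
import matplotlib.pyplot as plt

ramp = np.tile(np.linspace(0.0, 1.0, 512), (64, 1))   # linearly varying scalar field

fig, axes = plt.subplots(2, 1, figsize=(6, 2.5))
for ax, cmap in zip(axes, ['jet', 'viridis']):
    ax.imshow(ramp, cmap=cmap, aspect='auto')
    ax.set_title(cmap)
    ax.axis('off')
fig.tight_layout()
fig.savefig('colormap_comparison.png', dpi=150)
```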
Well, that depends. Tecplot 360 and FieldView are typically used to represent scalar data, so Sequential and Diverging colormaps will probably get used the most – but there are others we will discuss as well.
Sequential colormaps are ideal for scalar values in which there’s a continuous range of values. Think pressure, temperature, and velocity magnitude. Here we’re using the ‘cmocean – thermal’ colormap in Tecplot 360 to represent fluid temperature in a Barracuda Virtual Reactor simulation of a cyclone separator.
Diverging colormaps are a great option when you want to highlight a change in values. Think ratios, where the values span from -1 to 1; the diverging colormap helps to highlight the value at zero.
The diverging colormap is also useful for “delta plots” – In the plot below, the bottom frame is showing a delta between the current time step and the time average. Using a diverging colormap, it’s easy to identify where the delta changes from negative to positive.
If you have discrete data that represent things like material properties – say "rock, sand, water, oil" – these data can be represented using integer values and a qualitative colormap. This type of colormap will do a good job of supplying distinct colors for each value. An example of this, from a CONVERGE simulation, can be seen below. Instructions to create this plot can be found in our blog, Creating a Materials Legend in Tecplot 360.
Perhaps infrequently used, but still important to point out, is the "phase" colormap. This is particularly useful for values which are cyclic – such as a theta value used to represent wind direction in this FVCOM simulation result. If we were to use a simple sequential colormap (inset plot below), you would observe what appears to be a large gradient where the wind direction is 360° vs. 0°. Logically these are the same value, and using the "cmocean – phase" colormap allows you to communicate the continuous nature of the data.
There are times when you want to force a break in a continuous colormap. In the image below, the colormap is continuous from green to white but we want to ensure that values at or below zero are represented as blue – to indicate water. In Tecplot 360 this can be done using the “Override band colors” option, in which we override the first color band to be blue. This makes the plot more realistic and therefore easier to interpret.
The post Colormap in Tecplot 360 appeared first on Tecplot Website.
Ansys has announced that it will acquire Zemax, maker of high-performance optical imaging system simulation solutions. The terms of the deal were not announced, but it is expected to close in the fourth quarter of 2021.
Zemax’s OpticStudio is often mentioned when users talk about designing optical, lighting, or laser systems. Ansys says that the addition of Zemax will enable Ansys to offer a “comprehensive solution for simulating the behavior of light in complex, innovative products … from the microscale with the Ansys Lumerical photonics products, to the imaging of the physical world with Zemax, to human vision perception with Ansys Speos [acquired with Optis]”.
This feels a lot like what we’re seeing in other forms of CAE, for example, when we simulate materials from nano-scale all the way to fully-produced-sheet-of-plastic-scale. There is something to be learned at each point, and simulating them all leads, ultimately, to a more fit-for-purpose end result.
Ansys is acquiring Zemax from its current owner, EQT Private Equity. EQT’s announcement of the sale says that “[w]ith the support of EQT, Zemax expanded its management team and focused on broadening the Company’s product portfolio through substantial R&D investment focused on the fastest growing segments in the optics space. Zemax also revamped its go-to-market sales approach and successfully transitioned the business model toward recurring subscription revenue”. EQT had acquired Zemax in 2018 from Arlington Capital Partners, a private equity firm, which had acquired Zemax in 2015. Why does this matter? Because the path each company takes is different — and it’s sometimes not a straight line.
Ansys says the transaction is not expected to have a material impact on its 2021 financial results.
Last year Sandvik acquired CGTech, makers of Vericut. I, like many people, thought “well, that’s interesting” and moved on. Then in July, Sandvik announced it was snapping up the holding company for Cimatron, GibbsCAM (both acquired by Battery Ventures from 3D Systems), and SigmaTEK (acquired by Battery Ventures in 2018). Then, last week, Sandvik said it was adding Mastercam to that list … It’s clearly time to dig a little deeper into Sandvik and why it’s doing this.
First, a little background on Sandvik. Sandvik operates in three main spheres: rocks, machining, and materials. For the rocks part of the business, the company makes mining/rock extraction and rock processing (crushing, screening, and the like) solutions. Very cool stuff but not relevant to the CAM discussion.
The materials part of the business develops and sells industrial materials; Sandvik is in the process of spinning out this business. Also interesting but …
The machining part of the business is where things get more relevant to us. Sandvik Machining & Manufacturing Solutions (SMM) has been supplying cutting tools and inserts for many years, via brands like Sandvik, SECO, Miranda, Walter, and Dormer Pramet, and sees a lot of opportunity in streamlining the processes around the use of specific tools and machines. Lightweighting and sustainability efforts in end-industries are driving interest in new materials and more complex components, as well as tighter integration between design and manufacturing operations. That digitalization across an enterprise's areas of business, Sandvik thinks, plays into its strengths.
According to info from the company's 2020 Capital Markets Day, rocks and materials are steady but slow revenue growers. The company had set a modest 5% revenue growth target but had consistently been delivering closer to 3% — what to do? Like many others, the focus shifted to (1) software and (2) growth by acquisition. Buying CAM companies ticked both of those boxes, bringing repeatable, profitable growth in an area the company already had some experience in.
Back to digitalization. If we think of a manufacturer as having (in-house or with partners) a design function, which sends the concept on to production preparation, then to machining, and, finally, to verification/quality control, Sandvik wants to expand outwards from machining to that entire world. Sandvik wants to help customers optimize the selection of tools, the machining strategy, and the verification and quality workflow.
The Manufacturing Solutions subdivision within SMM was created last year to go after this opportunity. It’s got 3 areas of focus: automating the manufacturing process, industrializing additive manufacturing, and expanding the use of metrology to real-time decision making.
The CGTech acquisition last year was the first step in realizing this vision. Vericut is prized for its ability to work with any CAM, machine tool, and cutting tool for NC code simulation, verification, optimization, and programming. CGTech is a long-time supplier of Vericut software to Sandvik’s Coromant production units, so the companies knew one another well. Vericut helps Sandvik close that digitalization/optimization loop — and, of course, gives it access to the many CAM users out there who do not use Coromant.
But verification is only one part of the overall loop, and in some senses, the last. CAM, on the other hand, is the first (after design). Sandvik saw CAM as "the most important market to enter due to attractive growth rates – and its proximity to Sandvik Manufacturing and Machining Solutions' core business." Adding Cimatron, GibbsCAM, SigmaTEK, and Mastercam gets Sandvik that much closer to offering clients a set of solutions to digitize their complete workflows.
And it makes business sense to add CAM to the bigger offering.
To head off one question: As of last week’s public statements, anyway, Sandvik has no interest in getting into CAD, preferring to leave that battlefield to others, and continue on its path of openness and neutrality.
And because some of you asked: there is some overlap in these acquisitions, but remarkably little, considering how established these companies all are. GibbsCAM is mostly used for production milling and turning; Cimatron is used in mold and die — and with a big presence in automotive, where Sandvik already has a significant interest; and SigmaNEST is for sheet metal fabrication and material requisitioning.
One interesting (to me, anyway) observation: 3D Systems sold Gibbs and Cimatron to Battery in November 2020. Why didn’t Sandvik snap it up then? Why wait until July 2021? A few possible reasons: Sandvik CEO Stefan Widing has been upfront about his company’s relative lack of efficiency in finding/closing/incorporating acquisitions; perhaps it was simply not ready to do a deal of this type and size eight months earlier. Another possible reason: One presumes 3D Systems “cleaned up” Cimatron and GibbsCAM before the sale (meaning, separating business systems and financials from the parent, figuring out HR, etc.) but perhaps there was more to be done, and Sandvik didn’t want to take that on. And, finally, maybe the real prize here for Sandvik was SigmaNEST, which Battery Ventures had acquired in 2018, and Cimatron and GibbsCAM simply became part of the deal. We may never know.
This whole thing is fascinating. A company out of left field, acquiring these premium PLMish assets. Spending major cash (although we don’t know how much because of non-disclosures between buyer and sellers) for a major market presence.
No one has ever asked me about a CAM roll-up, yet I’m constantly asked about how an acquirer could create another Ansys. Perhaps that was the wrong question, and it should have been about CAM all along. It’s possible that the window for another company to duplicate what Sandvik is doing may be closing since there are few assets left to acquire.
Sandvik’s CAM acquisitions haven’t closed yet, but assuming they do, there’s a strong fit between CAM and Sandvik’s other manufacturing-focused business areas. It’s more software, with its happy margins. And, finally, it lets Sandvik address the entire workflow from just after component design to machining and on to verification. Mr. Widing says that Sandvik first innovated in hardware, then in service – and now, in software to optimize the component part manufacturing process. These are where gains will come, he says, in maximizing productivity and tool longevity. Further out, he sees measuring every part to learn how the process can be further optimized. It’s a sound investment in the evolution of both Sandvik and manufacturing.
We all love a good reinvention story, and how Sandvik executes on this vision will, of course, determine if the reinvention was successful. And, of course, there’s always the potential for more news of this sort …
I missed this last month — Sandvik also acquired Cambrio, which is the combined brand for what we might know better as GibbsCAM (milling, turning), Cimatron (mold and die), and SigmaNEST (nesting, obvs). These three were spun out of 3D Systems last year, acquired by Battery Ventures — and now sold on to Sandvik.
This was announced in July, and the acquisition is expected to close in the second half of 2021 — we’ll find out on Friday if it already has.
At that time, Sandvik said its strategic aim is to “provide customers with software solutions enabling automation of the full component manufacturing value chain – from design and planning to preparation, production and verification … By acquiring Cambrio, Sandvik will establish an important position in the CAM market that includes both toolmaking and general-purpose machining. This will complement the existing customer offering in Sandvik Manufacturing Solutions”.
Cambrio has around 375 employees and, in 2020, had revenue of about $68 million.
If we do a bit of math, Cambrio’s $68 million + CNC Software’s $60 million + CGTech’s (Vericut’s maker) $54 million add up to $182 million in acquired CAM revenue. Not bad.
More on Friday.
CNC Software and its Mastercam have been a mainstay among CAM providers for decades, marketing its solutions as independent, focused on the workgroup and individual. That is about to change: Sandvik, which bought CGTech late last year, has announced that it will acquire CNC Software to build out its CAM offerings.
According to Sandvik’s announcement, CNC Software brings a “world-class CAM brand in the Mastercam software suite with an installed base of around 270,000 licenses/users, the largest in the industry, as well as a strong market reseller network and well-established partnerships with leading machine makers and tooling companies”.
We were taken by surprise by the CGTech deal — but shouldn’t be by the Mastercam acquisition. Stefan Widing, Sandvik’s CEO, explains it this way: “[Acquiring Mastercam] is in line with our strategic focus to grow in the digital manufacturing space, with special attention on industrial software close to component manufacturing. The acquisition of CNC Software and the Mastercam portfolio, in combination with our existing offerings and extensive manufacturing capabilities, will make Sandvik a leader in the overall CAM market, measured in installed base. CAM plays a vital role in the digital manufacturing process, enabling new and innovative solutions in automated design for manufacturing.” The announcement goes on to say, “CNC Software has a strong market position in CAM, and particularly for small and medium-sized manufacturing enterprises (SME’s), something that will support Sandvik’s strategic ambitions to develop solutions to automate the manufacturing value chain for SME’s – and deliver competitive point solutions for large original equipment manufacturers (OEM’s).”
Sandvik says that CNC Software has 220 employees, with revenue of $60 million in 2020, and a “historical annual growth rate of approximately 10 percent and is expected to outperform the estimated market growth of 7 percent”.
No purchase price was disclosed, but the deal is expected to close during the fourth quarter.
Sandvik is holding a call about this on Friday — more updates then, if warranted.
Bentley continues to grow its deep expertise in various AEC disciplines — most recently, expanding its focus in underground resource mapping and analysis. This diversity serves it well; read on.
Unlike AspenTech, Bentley’s revenue growth is speeding up (total revenue up 21% in Q2, including a wee bit from Seequent, and up 17% for the first six months of 2021). Why the difference? IMHO, because Bentley has a much broader base, selling into many more end industries as well as to road/bridge/water/wastewater infrastructure projects that keep going, Covid or not. CEO Greg Bentley told investors that some parts of the business are back to — or even better than — pre-pandemic levels, but not yet all. He said that the company continues to struggle in industrial and resources capital expenditure projects, and therefore in the geographies (the Middle East and Southeast Asia) that are the most dependent on this sector. This is balanced against continued success in new accounts, the company’s reinvigorated selling to small and medium enterprises via its Virtuosity subsidiary, and a resurgence in the overall commercial/facilities sector. In general, it appears that sales to contractors such as architects and engineers lag behind those to owners and operators of commercial facilities — which makes sense, as many new projects are still on pause until pandemic-related effects settle down.
One unusual comment from Bentley’s earnings call that we’re going to listen for on others: The government of China is asking companies to explain why they are not using locally-grown software solutions; it appears to be offering preferential tax treatment for buyers of local software. As Greg Bentley told investors, “[d]uring the year to date, we have experienced a rash of unanticipated subscription cancellations within the mid-sized accounts in China that have for years subscribed to our China-specific enterprise program … Because we don’t think there are product issues, we will try to reinstate these accounts through E365 programs, where we can maintain continuous visibility as to their usage and engagement”. So, to recap: the government is using taxation to prefer one set of vendors over another, and all Bentley can do (really) is try to bring these accounts back and then monitor them constantly to keep on top of emerging issues. FWIW, in the pre-pandemic filings for Bentley’s IPO, “greater China, which we define as the Peoples’ Republic of China, Hong Kong and Taiwan … has become one of our largest (among our top five) and fastest-growing regions as measured by revenue, contributing just over 5% of our 2019 revenues”. Something to watch.
The company updated its financial outlook for 2021 to include the recent Seequent acquisition and this moderate level of economic uncertainty. Bentley might actually join the billion-dollar club on a pro forma basis — as if the acquisition of Seequent had occurred at the beginning of 2021. On a reported basis, the company sees total revenue between $945 million and $960 million, or an increase of around 18%, including Seequent. Excluding Seequent, Bentley sees organic revenue growth of 10% to 11%.
Much more here, on Bentley’s investor website.
We still have to hear from Autodesk, but there’s been a lot of AECish earnings news over the last few weeks. This post starts a modest series as we try to catch up on those results.
AspenTech reported results for its fiscal fourth quarter of 2021 last week. Total revenue was $198 million in FQ4, down 2% from a year ago. License revenue was $145 million, down 3%; maintenance revenue was $46 million, basically flat compared to a year earlier; and services and other revenue was $7 million, up 9%.
For the year, total revenue was up 19% to $709 million, license revenue was up 28%, maintenance was up 4% and services and other revenue was down 18%.
Looking ahead, CEO Antonio Pietri said that he is “optimistic about the long-term opportunity for AspenTech. The need for our customers to operate their assets safely, sustainably, reliably and profitably has never been greater … We are confident in our ability to return to double-digit annual spend growth over time as economic conditions and industry budgets normalize.” The company sees fiscal 2022 total revenue of $702 million to $737 million, up just $10 million at the midpoint from fiscal 2021’s total.
Why the slowdown in FQ4 from earlier in the year? And why the modest guidance for fiscal 2022? One word: Covid. And the uncertainty it creates among AspenTech’s customers when it comes to spending precious cash. AspenTech expects its visibility to improve when new budgets are set in the calendar fourth quarter. By then, AspenTech hopes, its customers will have a clearer view of reopening, consumer spending, and the timing of an eventual recovery.
Lots more detail here on AspenTech’s investor website.
Next up, Bentley. Yup. Alphabetical order.
There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently, CFD simulation was focused on existing and future things (think flying cars). Now we see CFD being applied to simulate fluid flow in the distant past (think fossils).
CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation
Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.
Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature
It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long-extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.
CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)
Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs or a mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment, researchers have enlisted the help of Computational Fluid Dynamics (CFD).
CFD Water Flow Simulation over a Parvancorina: Forward Direction (illustration only, not part of the study)
Nature's smallest aerodynamic specialists - insects - have provided a clue to more efficient and robust wind turbine design.
Dragonfly: Yellow-winged Darter (license: CC BY-SA 2.5, André Karwath)
The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.
2 Hour Marathon Attempt
In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:
As you can see, we’ll be simulating the flow over a bump defined by the curve y = 0.1·sin(πx) for 0 ≤ x ≤ 1 (this is the curve that the interpolation points later in the post are sampled from).
First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:
/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
convertToMeters 1;
vertices
(
(-1 0 0) // 0
(0 0 0) // 1
(1 0 0) // 2
(2 0 0) // 3
(-1 2 0) // 4
(0 2 0) // 5
(1 2 0) // 6
(2 2 0) // 7
(-1 0 1) // 8
(0 0 1) // 9
(1 0 1) // 10
(2 0 1) // 11
(-1 2 1) // 12
(0 2 1) // 13
(1 2 1) // 14
(2 2 1) // 15
);
blocks
(
hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);
edges
(
);
boundary
(
inlet
{
type patch;
faces
(
(0 8 12 4)
);
}
outlet
{
type patch;
faces
(
(3 7 15 11)
);
}
lowerWall
{
type wall;
faces
(
(0 1 9 8)
(1 2 10 9)
(2 3 11 10)
);
}
upperWall
{
type patch;
faces
(
(4 12 13 5)
(5 13 14 6)
(6 14 15 7)
);
}
frontAndBack
{
type empty;
faces
(
(8 9 13 12)
(9 10 14 13)
(10 11 15 14)
(1 0 4 5)
(2 1 5 6)
(3 2 6 7)
);
}
);
// ************************************************************************* //
This blockMeshDict produces the following grid:
It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!
So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary that is just a list of interpolation points:
edges
(
polyLine 1 2
(
(0 0 0)
(0.1 0.0309016994 0)
(0.2 0.0587785252 0)
(0.3 0.0809016994 0)
(0.4 0.0951056516 0)
(0.5 0.1 0)
(0.6 0.0951056516 0)
(0.7 0.0809016994 0)
(0.8 0.0587785252 0)
(0.9 0.0309016994 0)
(1 0 0)
)
polyLine 9 10
(
(0 0 1)
(0.1 0.0309016994 1)
(0.2 0.0587785252 1)
(0.3 0.0809016994 1)
(0.4 0.0951056516 1)
(0.5 0.1 1)
(0.6 0.0951056516 1)
(0.7 0.0809016994 1)
(0.8 0.0587785252 1)
(0.9 0.0309016994 1)
(1 0 1)
)
);
The sub-dictionary above is just a list of points on the curve. The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.
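Typing those interpolation points out by hand gets old quickly, so here is a small, optional Python sketch (assuming, as inferred above, that the bump is y = 0.1·sin(πx)) that prints the point lists in blockMesh format; swapping polyLine for spline in the edges block gives a smooth interpolation through the same points:
import numpy as np
# Print interpolation points for the bump y = 0.1*sin(pi*x), 0 <= x <= 1,
# at a given z-plane, formatted for a blockMesh polyLine (or spline) edge.
def bump_points(z, n=11):
    x = np.linspace(0.0, 1.0, n)
    y = 0.1 * np.sin(np.pi * x)
    return ''.join('        ({:.10g} {:.10g} {:g})\n'.format(xi, yi, z)
                   for xi, yi in zip(x, y))
for label, z in [('polyLine 1 2', 0.0), ('polyLine 9 10', 1.0)]:
    print('    ' + label + '\n    (')
    print(bump_points(z), end='')
    print('    )')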
The following mesh is produced:
Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!
Cheers.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.
Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.
In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.
Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).
In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.
For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, Shadowgraph has no direction and shows you the Laplacian of the refractive index field (or density field).
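To put rough equations to that (a summary of the standard relations, not anything specific to this post's setup), the refractive index of a gas is tied to its density by the Gladstone-Dale relation, and the two techniques respond (loosely, ignoring the optics constants) to its derivatives:
n = 1 + K·ρ        (Gladstone-Dale relation; K is a gas-specific constant)
Schlieren signal ∝ ∂ρ/∂y   (for a horizontal knife edge; ∂ρ/∂x for a vertical one)
Shadowgraph signal ∝ ∇²ρ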
In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.
Well, as you might expect from the introduction, we simply do this by visualizing the gradients of the density field.
In ParaView the necessary tool for this is:
Gradient of Unstructured DataSet:
Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:
To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:
There are a few problems with the above image: (1) Schlieren images are directional and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.
To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:
The results look pretty realistic:
The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha, no big deal. Just remember the basic vector calculus identity: ∇²ρ = ∇·(∇ρ).
Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!
To do this, we just have to use the Gradient of Unstructured DataSet tool again:
This time, deselect “Compute Gradient”, then select “Compute Divergence” and change the divergence array name to Shadowgraph.
Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:
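If you would rather script this than click through the GUI, a rough pvpython sketch is below. Treat it as a starting point, not a recipe: the filter is called GradientOfUnstructuredDataSet in older ParaView versions (simply Gradient in newer ones), and the property names may differ slightly between versions.
from paraview.simple import *
src = GetActiveSource()  # assumes your case (with a density field 'rho') is already loaded
# First derivative of density -> synthetic Schlieren (visualize one component, X-ray preset)
schlieren = GradientOfUnstructuredDataSet(Input=src)
schlieren.ScalarArray = ['POINTS', 'rho']
schlieren.ResultArrayName = 'SyntheticSchlieren'
# Divergence of that gradient field = Laplacian of density -> synthetic Shadowgraph
shadowgraph = GradientOfUnstructuredDataSet(Input=schlieren)
shadowgraph.ScalarArray = ['POINTS', 'SyntheticSchlieren']
shadowgraph.ComputeGradient = 0
shadowgraph.ComputeDivergence = 1
shadowgraph.DivergenceArrayName = 'Shadowgraph'
Show(shadowgraph)
Render()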
Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these mean: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.
This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.
Hopefully this post will be helpful to some of you out there. Cheers!
Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/
The law is given by μ(T) = μ_ref (T/T_ref)^(3/2) · (T_ref + S)/(T + S), where μ_ref is the viscosity at a reference temperature T_ref and S is the Sutherland temperature.
It is also often simplified (as it is in OpenFOAM) to μ(T) = A_s·T^(3/2)/(T + T_s), which is the two-coefficient form we will fit below.
In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.
So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find; if you do find them, they can be hard to reference, and you may not know how accurate they are. Second, creating your own coefficients lets you quantify them: in your thesis or paper, you can say that you created them yourself, and you can give an exact number for the error in the temperature range you are investigating.
So let’s say we are looking for a viscosity model of nitrogen (N2) – and we can’t find the coefficients anywhere – or, for the second reason above, you’ve decided it’s best to create your own.
By far the simplest way to achieve this is using Python and the Scipy.optimize package.
Step 1: Get Data
The first step is to find some well-known, and easily cited, source for viscosity data. I usually use the NIST WebBook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:
Temperature (K) | Viscosity (Pa·s)
200 | 0.000012924
400 | 0.000022217
600 | 0.000029602
800 | 0.000035932
1000 | 0.000041597
1200 | 0.000046812
1400 | 0.000051704
1600 | 0.000056357
1800 | 0.000060829
2000 | 0.000065162
This data is the dynamic viscosity of nitrogen (N2) pulled from the NIST database at 0.101 MPa. (Note that in these ranges viscosity should be only temperature dependent.)
Step 2: Use python to fit the data
If you are unfamiliar with Python, this may seem a little foreign to you, but python is extremely simple.
First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
Now we define the sutherland function:
def sutherland(T, As, Ts):
return As*T**(3/2)/(Ts+T)
Next we input the data:
T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]
mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]
Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least squares minimization to solve for the unknown coefficients. The output variable popt is an array that contains our desired variables As and Ts.
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
Now we can just output our data to the screen and plot the results if we so wish:
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
Overall the entire code looks like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def sutherland(T, As, Ts):
return As*T**(3/2)/(Ts+T)
T = [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu = [0.000012924, 0.000022217, 0.000029602, 0.000035932, 0.000041597,
      0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
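Since part of the point of fitting your own coefficients is being able to quantify the error, a quick check against the same NIST points (continuing from the variables defined in the script above) might look like this:
import numpy as np
# Relative error of the fitted Sutherland model at each NIST data point
T_arr = np.array(T)
mu_arr = np.array(mu)
rel_err = np.abs(sutherland(T_arr, As, Ts) - mu_arr) / mu_arr
print('Max relative error:  {:.2%}'.format(rel_err.max()))
print('Mean relative error: {:.2%}'.format(rel_err.mean()))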
In this post, we looked at how we can simply use a database of viscosity-temperature data and the Python package scipy to solve for our unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, and the data was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.
This task could also easily be accomplished using the Matlab curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.
The most common complaint I hear, and the most common problem I observe with OpenFOAM, is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep for any other software.
There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.
While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is just as likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.
Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:
(1) Understand CFD
This may seem obvious… but it’s not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:
(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish
(b) An introduction to computational fluid dynamics – the finite volume method – by H K Versteeg and W Malalasekera
(c) Computational fluid dynamics – the basics with applications – By John D. Anderson
(2) Understand fluid dynamics
Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.
(3) Avoid building cases from scratch
Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!
As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.
(4) Using Ubuntu makes things much easier
This is strictly my opinion, but I have found it to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can’t really speak to how well they work – mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (I guess unless you’re a gamer – and even then, more and more games are now on Linux). Not only that, but the VAST majority of forums and troubleshooting associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.
I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier.
(5) If you’re struggling, simplify
Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.
(6) Familiarize yourself with the cfd-online forum
If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.
(7) The results from checkMesh matter
If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:
http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf
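As a rough illustration of the kind of adjustment that document describes (a partial fvSchemes sketch of my own, not a recommendation for any specific case; older OpenFOAM versions use "limited 0.33" rather than "limited corrected 0.33"), you would relax the orthogonality treatment of the Laplacian and surface-normal gradient schemes:
// Partial fvSchemes sketch for a mesh with significant non-orthogonality
laplacianSchemes
{
    default         Gauss linear limited corrected 0.33;
}
snGradSchemes
{
    default         limited corrected 0.33;
}
Increasing nNonOrthogonalCorrectors in fvSolution is the other knob commonly turned for the same reason.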
(8) CFL Number Matters
If you are running a transient case, the Courant-Friedrichs-Lewy (CFL) number (Co = U·Δt/Δx, roughly how far information travels in one time step relative to the cell size) matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the time step by a factor of 2. More often than not, this solves the problem.
For large time stepping, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:
https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam
For the record, this point falls under point (1), Understanding CFD.
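As a concrete illustration (a partial controlDict sketch, assuming a solver such as pimpleFoam that supports adjustable time stepping), you can let OpenFOAM cap the Courant number for you instead of hand-tuning deltaT:
deltaT          1e-6;       // initial time step
adjustTimeStep  yes;        // let the solver adapt deltaT
maxCo           0.5;        // target maximum Courant number
maxDeltaT       1e-4;       // never exceed this time step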
(9) Work through the OpenFOAM Wiki “3 Week” Series
If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:
https://wiki.openfoam.com/%223_weeks%22_series
If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take 3 weeks. This touches on all the necessary points you need to get started.
(10) OpenFOAM is not a second-tier software – it is top tier
I know some people who have started out with the attitude from the get-go that they should be using a different software. They think somehow open source means that it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open-source package. The number of OpenFOAM citations has grown consistently every year (https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).
In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.
(11) Meshing… Ugh Meshing
For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.
Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.
Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Here I will present something I’ve been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (and who knows if you are), you simulate a lot of airfoils. Partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.
Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain, especially if you are starting from scratch.
The main ways that I have meshed airfoils to date have been:
(a) Mesh it in a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or back in the day when I was a PhD student I could use Pointwise – oh how I miss it.
But getting the mesh to look good was always sort of tedious. So I attempted to come up with a python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.
The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(e) be mostly automatic (few user inputs)
(f) have good mesh quality – pass all checkMesh tests
(g) Quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(h) be able to do both closed and open trailing edges
(i) be able to handle most airfoils (up to high cambers)
(j) automatically handle hinge and flap deflections
In Rev 1 of this script, I believe I have accomplished (a) thru (g). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.
There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!
Hopefully, this is useful to some of you out there!
You can download the script here:
https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher
Here you will also find a template based on the airfoil2D OpenFOAM tutorial.
(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh
PS: You need to run this with Python 3, and you need to have numpy installed.
The inputs for the script are very simple:
ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil .dat file should have a chord length of 1. This variable allows you to scale the domain to a different size.
airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.
DomainHeight: This is the height of the domain in multiples of chords.
WakeLength: Length of the wake domain in multiples of chords
firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator
growthRate: Boundary layer growth rate
MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.
The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.
BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil
LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge
TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge
inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading and can help improve mesh uniformity
trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
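For reference, the inputs are just plain variables near the top of the script. Purely as a hypothetical illustration (these are not the values used for the cases below, which were shown as screenshots), they look something like this:
ChordLength = 1.0            # hypothetical values, for illustration only
airfoilfile = 'naca0012.dat' # hypothetical file name
DomainHeight = 20            # in chords
WakeLength = 20              # in chords
firstLayerHeight = 1e-5
growthRate = 1.1
MaxCellSize = 0.05
BLHeight = 0.05
LeadingEdgeGrading = 5
TrailingEdgeGrading = 5
inletGradingFactor = 1
trailingBlockAngle = 5       # degrees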
Inputs:
With the above inputs, the grid looks like this:
Mesh Quality:
These are some pretty good mesh statistics. We can also view them in paraView:
The Clark-Y has some camber, so I thought it would be a logical next test after the previous symmetric one. The inputs I used are basically the same as for the previous airfoil:
With these inputs, the result looks like this:
Mesh Quality:
Visualizing the mesh quality:
Here is an example of a flying wing airfoil (tested since the trailing edge is tilted upwards).
Inputs:
Again, these are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change the MaxCellSize, firstLayerHeight, and gradings, some modification may be required. However, if you just halve the MaxCellSize and halve the firstLayerHeight, you “should” get a similar grid quality, just much finer.
Grid Quality:
Visualizing the grid quality
Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.
The long-term goal is an automatic mesher with an H-grid in the spanwise direction so that readers of my blog can easily create semi-span wing models extremely quickly!
Comments and bug reporting encouraged!
DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Here is a useful little tool for calculating the properties across a normal shock.
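For those who would rather script it, the underlying normal-shock relations for a calorically perfect gas are standard. A minimal Python sketch of those relations (my own, not the embedded tool's code) is:
def normal_shock(M1, gamma=1.4):
    # Property ratios across a normal shock for a calorically perfect gas
    if M1 <= 1.0:
        raise ValueError('Upstream Mach number must be supersonic (M1 > 1)')
    M2 = ((1 + 0.5*(gamma - 1)*M1**2) / (gamma*M1**2 - 0.5*(gamma - 1)))**0.5
    p_ratio = 1 + 2*gamma/(gamma + 1)*(M1**2 - 1)             # p2/p1
    rho_ratio = (gamma + 1)*M1**2 / ((gamma - 1)*M1**2 + 2)   # rho2/rho1
    T_ratio = p_ratio / rho_ratio                             # T2/T1
    p0_ratio = rho_ratio**(gamma/(gamma - 1)) * (1.0/p_ratio)**(1.0/(gamma - 1))  # p02/p01
    return M2, p_ratio, rho_ratio, T_ratio, p0_ratio
print(normal_shock(2.0))  # approximately (0.577, 4.5, 2.667, 1.687, 0.721)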
If you found this useful, and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros, and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure-loss calculations, heat transfer calculations, 1D flow transients, optimization, and more. Visit STF Solutions at www.stfsol.com for more information!
Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or their suitability or outcome for any given purpose.