
The Ennova Vision by Dr. Michael Hohmeyer

In 2007, when I left Ansys to start Ennova, I wanted to make sure that what we created would be different enough from ICEM-CFD to be compelling to ICEM-CFD users. I thought about how the computing landscape had changed between 1990, when we started ICEM-CFD, and 2007. The main difference was that in 1990 multi-processor machines were rare and expensive, and coding for parallel computing was considered very esoteric. In 1990, developers and users could expect CPU speeds to double every two years. Thus, any problem too big for the day's software and hardware could be tackled by waiting a year or so, which might be quicker than undertaking the software development needed to achieve the same effect. By 2004, however, for a variety of reasons, the decades-long steady increase in CPU speeds had come to a sudden halt. Computers in the future would have more CPU cores, not faster ones.

Rewriting a piece of code that was once serial into a parallel piece of code is probably no faster (and more likely slower) than rewriting it from scratch: all of the data structures have to be reworked, and all of the data flow is different. And most of the time the parallel version of the software cannot produce exactly the same result as the serial version anyway. That made the decision to build a pre/post-processor from scratch easier.


The Cloud

Once you have made the decision to take advantage of multiple cores, it would be crazy to limit yourself to just the cores on your laptop or desktop. While you might have 4 cores (counted as 8 with hyper-threading) in your laptop, you might have thousands of cores available on a Linux cluster, either in your organization or for lease in the cloud. So a client/server model, with a lightweight client on the desktop and a heavyweight server where the compute power is, was essential.
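
To make the split concrete, here is a minimal sketch of what a thin client might look like. The server address, the /jobs endpoint, and the JSON fields are all hypothetical, invented for illustration; the point is only that the desktop side does nothing but upload geometry and poll, while the meshing runs where the cores are.

```python
# Hypothetical thin client for a remote meshing service. The URL, the
# /jobs endpoint, and the JSON fields are illustrative assumptions,
# not a real Ennova API. Only Python's standard library is used.
import json
import urllib.request

SERVER = "http://mesh-server.example.com:8080"  # placeholder address

def submit_job(cad_path: str, cores: int) -> str:
    """Upload a CAD file; the server meshes it on `cores` cluster cores."""
    with open(cad_path, "rb") as f:
        req = urllib.request.Request(
            f"{SERVER}/jobs?cores={cores}", data=f.read(), method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]

def job_status(job_id: str) -> str:
    """Poll the server; the client never touches the heavy data."""
    with urllib.request.urlopen(f"{SERVER}/jobs/{job_id}") as resp:
        return json.load(resp)["status"]    # e.g. "running" or "done"
```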


A New Level of Geometric Complexity

There had also been a change in the kinds of geometry on which people wanted to perform CFD analysis. In 1990, engineers were performing CFD analysis on the furthest-upstream (in the sense of the design cycle) CAD data. So engineers would analyze a model of a car with a few hundred surfaces. Wheels might or might not be present, and things like doors and gas filler caps were not yet separately designed. By 2007, engineers were interested in gathering together all of the production design data for the entire car for CFD analysis. This would include, for example, the gas filler cap and the small strip of plastic that attaches the cap to the car so you don't lose it, including the small plastic pin that attaches the strip to the car body. And this level of detail would exist for the entire car. So now the CAD data for a CFD analysis would comprise hundreds of thousands of CAD surfaces, rather than hundreds. By 2007, it was common to find assemblies so large they could not be loaded all at once on a single computer.

To tackle these really large problems, it is essential that no part of the process require human interaction that scales with the problem size. Slick interactive geometry cleanup tools can be really attractive for smaller problems, but once the geometry becomes complex enough, the process needs to be completely automated.

ICEM HEXA could make beautiful meshes, but it required an expert user to interactively design the multi-block decomposition. ICEM TETRA could generate meshes with no human interaction on extremely complex geometry, but the resulting mesh was less than optimal. With Ennova we set ourselves the following goal: create the best mesh possible with no human interaction.

Many people, including myself, have tried to create automatic multi-block decomposition algorithms or automatic fully hexahedral mesh generation algorithms. This is, so far, an unconquered problem. But it is also harder than it needs to be. What CFD actually needs is a mesh that is appropriate to the solution in each part of the computational domain. Away from the geometry, where gradients are small, a tetrahedral mesh is fine. Near the boundary, where there are large gradients, prism layers are needed. Many existing approaches can create such a mesh. But some geometric areas require a more specialized mesh. Consider the leading edge of an airfoil. In the direction from root to tip, the elements need to be one size. In the direction from leading edge to trailing edge, they need to be an order of magnitude smaller. And in the direction perpendicular to the surface, they need to be an order of magnitude smaller still. So there you really need a fully anisotropic structured mesh. Without this kind of mesh, either you will have an order of magnitude too many elements, or you will not capture the solution accurately, or both.
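
A quick back-of-the-envelope calculation makes the stakes concrete. The dimensions below are invented, chosen only to match the order-of-magnitude ratios described above:

```python
# Element count near a leading edge, anisotropic vs. isotropic.
# All dimensions are hypothetical: a 1 m span, a 1 cm strip of chord,
# a 1 mm layer off the wall, with spanwise size h and the two finer
# directions each an order of magnitude smaller, as in the text.
span, chord_strip, height = 1.0, 0.01, 0.001   # metres
h = 0.01                                       # spanwise element size

aniso = (span / h) * (chord_strip / (h / 10)) * (height / (h / 100))
iso = (span / (h / 100)) * (chord_strip / (h / 100)) * (height / (h / 100))

print(f"anisotropic: {aniso:,.0f} cells")      # 10,000
print(f"isotropic:   {iso:,.0f} cells")        # 10,000,000 (1000x more)
```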

Often, for narrow spaces, a swept mesh is appropriate. Consider, for example, a brake pad separated by a small distance from a brake rotor, the tip of a fan blade separated from a hub, or the space between a valve and its seat. If the mesh is not swept there, the isotropic element size would have to be several times smaller than the gap. By sweeping the mesh we can make the element size in the plane of the gap many times larger than the width of the gap, again reducing the size of the mesh by an order of magnitude.
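
The same arithmetic applies in the gap. With invented brake-pad dimensions, a swept mesh whose in-plane size is set by the pad rather than the gap needs a small fraction of the cells:

```python
# Cell count in a thin gap, isotropic vs. swept (hypothetical dimensions).
gap, pad_w, pad_h = 0.5e-3, 0.1, 0.05   # 0.5 mm gap under a 10 x 5 cm pad
layers = 5                              # cells across the gap either way

h_iso = gap / layers                    # isotropic size must resolve the gap
iso = (pad_w / h_iso) * (pad_h / h_iso) * layers

h_plane = 2e-3                          # swept: in-plane size set by the pad
swept = (pad_w / h_plane) * (pad_h / h_plane) * layers

print(f"isotropic: {iso:,.0f} cells")   # 2,500,000
print(f"swept:     {swept:,.0f} cells") # 6,250 (400x fewer)
```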

Fortunately, most of these cases can be recognized from the underlying CAD geometry. CAD designers use surfaces whose parametric directions line up with their major and minor axes of curvature. So if we can recover a watertight CAD model, then we can automatically manipulate the CAD topology to create a mesh topology that will generate an appropriate mesh.

This leads us to the problem of CAD repair. Many programs exist to repair dirty CAD, but their power is clearly limited, since repairing geometry for meshing can still be a difficult problem. Many of these CAD repair systems work directly on the CAD data, modifying B-Spline control points and manipulating pointers between points, curves, loops, and surfaces. This can be dangerous because it actually changes the XYZ locations of points on the surfaces, which is unacceptable in many industries such as turbomachinery and aerospace.

Ennova takes a different path, building a layer between the CAD and the mesh, which we call the mesh topology. It is similar to the blocking in ICEM HEXA but much more general. The mesh topology has all of the kinds of elements (nodes, edges, faces, volumes) that you have in a CAD part but with none of the geometric properties. It is just the connectivity data together with pointers to the CAD geometry where the actual geometric properties are stored in their unaltered state. An edge in the mesh topology is just the collection of pieces of various CAD curves. A face in the mesh topology is a set of bounding edges together with pointers to whichever CAD surfaces it represents.
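
Here is a sketch of what such a layer might look like in code. The class and field names are my own illustration of the idea, not Ennova's internal data model; the essential property is that every object holds connectivity and pointers only, never copied geometry.

```python
from dataclasses import dataclass, field

@dataclass
class CurveSegment:
    cad_curve_id: int     # pointer to the unmodified CAD curve
    t0: float             # parameter interval of the piece we use
    t1: float

@dataclass
class TopoEdge:
    # An edge is a chain of pieces of CAD curves; no geometry is stored.
    segments: list = field(default_factory=list)   # list of CurveSegment

@dataclass
class TopoFace:
    # A face is its bounding edges plus pointers to the CAD surfaces
    # it represents.
    boundary: list = field(default_factory=list)         # list of TopoEdge
    cad_surface_ids: list = field(default_factory=list)  # CAD pointers
```

In a structure like this, merging two edges is a list concatenation and splitting one is a list partition; neither operation touches the CAD data at all.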

The mesh topology has the advantage that manipulations on it are simpler (since there is no geometric data) and cannot cause the CAD data to become corrupted. Two faces or two edges can be combined into one without any re-approximation. A face or an edge can be split into two in any desired way. Intersections between two surfaces can be created exactly, since the intersection curve is just a symbolic link to the two surfaces that define the intersection. When we mesh an intersection edge we need only solve a simple system of non-linear equations to find each mesh point precisely on the intersection curve. This contrasts with an intersection curve represented as a B-Spline, which must necessarily be an approximation.
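
As an illustration of such a solve, here is a sketch using two stand-in surfaces, a sphere and a plane, in place of real CAD surfaces. Fixing one parameter picks out a single mesh node along the curve, leaving a square three-by-three non-linear system:

```python
import numpy as np
from scipy.optimize import fsolve

def sphere(u, v):                        # stand-in for CAD surface 1
    return np.array([np.cos(u) * np.cos(v),
                     np.sin(u) * np.cos(v),
                     np.sin(v)])

def plane(s, t):                         # stand-in for CAD surface 2: z = 0.3
    return np.array([s, t, 0.3])

def residual(x, u_fixed):
    v, s, t = x
    return sphere(u_fixed, v) - plane(s, t)    # 3 equations, 3 unknowns

# Step the fixed parameter along the edge; each solve places one mesh
# node exactly on the intersection, with no B-Spline approximation.
for u in np.linspace(0.0, 1.0, 5):
    v, s, t = fsolve(residual, x0=[0.3, 1.0, 0.0], args=(u,))
    print(sphere(u, v))                  # a point lying on both surfaces
```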

Ennova's geometry repair algorithm is built on top of the mesh topology and is sophisticated enough not to require any tolerances from the user. Many algorithms require a tolerance that in some places is too small, so that edges and nodes don't get merged together, and in other places is too large, so that features get crushed and corrupted.

The Ennova repair algorithm works differently. The individual operations that stitch the geometry together into a watertight volume include merging nodes, merging edges, merging faces, splitting edges at nodes, and so on. Each of these operations has a distance or tolerance associated with it, for instance the distance between the two points to be merged. All of the potential repair events are ordered by this distance, and the smallest ones are performed first. Once a part of the model has become locally watertight, further repair is stopped there. This allows the repair process to run without requiring the user to select a tolerance that, if not chosen carefully, will cause the repair to fail.
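
In code, the idea reduces to a priority queue of candidate repairs. This is my own simplification of the scheme described above, not Ennova's implementation:

```python
import heapq
import itertools

def repair(events, is_locally_watertight):
    """events: (distance, operation, region) triples, where operation is
    a callable such as merge-nodes, merge-edges or split-edge-at-node."""
    tie = itertools.count()              # breaks ties between equal distances
    heap = [(d, next(tie), op, region) for d, op, region in events]
    heapq.heapify(heap)                  # smallest repair distance on top
    while heap:
        dist, _, op, region = heapq.heappop(heap)
        if is_locally_watertight(region):
            continue                     # this area is closed: stop repairing it
        op()                             # perform the cheapest remaining repair
```

No global tolerance appears anywhere; the distances carried by the events themselves drive the order of the work.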

Some geometry is just too complex and full of inconsistencies to be repaired practicably. For these cases Ennova provides a Cartesian (or cut-cell) mesher similar to Snappy Hex or CCM+'s shrink wrapper. Again, Ennova's solution provides many advantages. The algorithm is highly scalable and has a small memory footprint, with the number of bytes required to store a cell asymptotically approaching zero. The Ennova octree mesher also has the advantage that inside each background cell, the mesher can resolve detail smaller than the cell. This means that you can capture an edge, a gap, or a cooling fin with a far coarser mesh than you would need using other meshing technology.
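
To see how per-cell memory can become so small, consider a generic implicit octree encoding (a toy argument of mine, not a description of Ennova's actual format): if each internal node of a complete octree stores a single byte of child-presence flags and the leaves store no payload at all, the cost per leaf cell tends to one seventh of a byte. A more aggressive implicit encoding would be needed to drive it toward zero, as the text claims.

```python
def bytes_per_leaf(levels: int) -> float:
    """Memory per leaf of a complete octree whose internal nodes each
    store one byte of child flags and whose leaves store nothing."""
    leaves = 8 ** levels
    internal = sum(8 ** k for k in range(levels))   # root ... level-1
    return internal / leaves

for lv in (1, 3, 6, 9):
    print(lv, round(bytes_per_leaf(lv), 4))   # tends to 1/7 of a byte
```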