Our C3D Labs team began working with the JT format in 2016. JT is a CAD exchange format developed by Siemens and approved as an international standard by ISO. It is fully documented, so we assumed that, with the specification in hand, writing a JT import-export module would be mere paperwork: we could learn to read and write JT data straight from the format's arrow diagrams. However, we found that knowing a format is not the same as implementing reading and writing from its documentation.
Firstly, the data read has to be interpreted correctly. In the case of JT, this means converting the geometry model from the JT representation to the C3D one – and back.
Secondly, we needed to keep in mind what JT is used for, its advantages over other formats, and other issues involving its use, such as how third-party applications operate on files in this format. The JT format transfers product shape between software systems in two representations -- boundary and polygonal. (Strictly speaking, the second also describes the boundary, but carries no information about the smoothness of surfaces.) That is the most obvious difference between JT and the other formats supported by our C3D kernel.
When we first took a stab at developing the translator, our plan was to implement support for boundary representation first, then follow with polygonal representation in later iterations. We took this stance from the CAD industry's prevailing attitude, which sees polygonal representation as secondary CAD data, a derivative of solids. However, it soon became clear to us that JT viewers require at least one triangulated mesh to be written to the JT file. We therefore could not release a JT converter without supporting polygonal models.
It should be noted that polygonal representations are designed differently in JT and C3D. In both cases, triangles are grouped into analogs of faces, but in C3D each group refers to its own set of points, whereas JT uses a single set of points shared by the whole body. Thus we faced the task of learning how to form that single set.
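To make the difference concrete, here is a minimal C++ sketch of the transformation: per-face point arrays (the C3D-style layout) are merged into one shared array (the JT-style layout), and each face's triangle indices are remapped accordingly. All names here are illustrative, not the actual C3D or JT API, and the tolerance-based comparison is our assumption about how coincident points are matched.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Point = std::array<double, 3>;

// Result of the merge: one shared point set plus remapped per-face indices.
struct MergedMesh {
    std::vector<Point> points;                    // single set for the whole body
    std::vector<std::vector<size_t>> faceIndices; // per-face triangle indices, remapped
};

// Points closer than `tol` in every coordinate are treated as the same vertex.
static bool samePoint(const Point& a, const Point& b, double tol) {
    return std::fabs(a[0] - b[0]) <= tol &&
           std::fabs(a[1] - b[1]) <= tol &&
           std::fabs(a[2] - b[2]) <= tol;
}

// Merge per-face point arrays into one shared array, remapping triangle indices.
MergedMesh mergeFaces(const std::vector<std::vector<Point>>& facePoints,
                      const std::vector<std::vector<size_t>>& faceTris,
                      double tol) {
    MergedMesh out;
    out.faceIndices.resize(facePoints.size());
    for (size_t f = 0; f < facePoints.size(); ++f) {
        std::vector<size_t> remap(facePoints[f].size());
        for (size_t i = 0; i < facePoints[f].size(); ++i) {
            size_t found = out.points.size();
            for (size_t j = 0; j < out.points.size(); ++j)  // linear search
                if (samePoint(out.points[j], facePoints[f][i], tol)) { found = j; break; }
            if (found == out.points.size())                 // genuinely new point
                out.points.push_back(facePoints[f][i]);
            remap[i] = found;
        }
        for (size_t idx : faceTris[f])
            out.faceIndices[f].push_back(remap[idx]);
    }
    return out;
}
```

For two triangular faces sharing an edge, the two shared corners collapse into one entry each, so six per-face points become four shared points.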
Of course, at the first stage it was easy to sidestep this: the JT standard allows each face to be exported as an individual mesh, and then there is no need to match identical points on adjacent faces. However, this simplification deprived us of valuable information about the problems we would have to solve once exporting entire meshes became necessary. Still, we kept the option of returning to the simple approach if the more correct one could not be implemented before the deadline.
The first thing to decide when forming a unique set of points is how to store them. The natural container for unique objects is a sorted one: it guarantees uniqueness and makes checking whether a point still needs to be added fast.
The real trick with sorted containers is implementing a function that compares two points and orders them consistently. Perhaps there is a way to do this, but we did not look for one; in any case, the obvious implementations turned out to be unsuitable for practical use. Instead of sorted storage, we decided to use ordinary linear arrays. We thus applied a linear search instead of a binary one and, as a consequence, lost search speed. To keep the slowdown from making our customers' lives miserable, we decided to get rid of all unnecessary comparisons.
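The ordering problem is easy to demonstrate. A natural attempt is a comparator that treats values closer than a tolerance as equal, but such a comparator is not a strict weak ordering, which `std::set` requires, so the container behaves inconsistently. The following sketch is our own illustration (not the actual translator code), reduced to one coordinate for clarity:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// "a is less than b only if it is less by more than tol" -- looks plausible,
// but the induced equivalence is not transitive, so this is NOT a strict
// weak ordering and std::set's behavior depends on insertion order.
struct TolerantLess {
    double tol;
    bool operator()(double a, double b) const { return a < b - tol; }
};

// Count "unique" values as seen by a set built on the tolerant comparator.
size_t uniqueCount(const std::vector<double>& pts, double tol) {
    std::set<double, TolerantLess> s(TolerantLess{tol});
    for (double p : pts) s.insert(p);
    return s.size();
}
```

With tolerance 0.1, the values 0.0, 0.08, 0.16 yield two "unique" points when inserted in that order, but only one when 0.08 is inserted first: 0.08 then absorbs both neighbors. The same point set gives different answers depending on order, which is exactly the kind of unreliability that pushed us toward plain linear arrays.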
Our second task was to ensure that the triangles adjacent to a vertex were placed in the correct sequence. In principle, the task turned out to be uncomplicated, except that it also had to handle the case of open shells.
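As an illustration of the idea (our own sketch, not the C3D implementation), suppose each triangle incident to a vertex is reduced to the oriented pair of its two other vertices. Chaining the pairs so that each triangle starts where the previous one ends orders the fan; the open-shell case shows up in the choice of starting triangle, which must be the one whose leading edge has no predecessor:

```cpp
#include <cstddef>
#include <vector>

// One triangle incident to the vertex, reduced to its two other vertices,
// oriented consistently: the triangle spans from `from` to `to` around the vertex.
struct Wedge { int from, to; };

// Return the indices of the wedges in fan order around the vertex.
std::vector<size_t> orderFan(const std::vector<Wedge>& wedges) {
    const size_t n = wedges.size();
    // Pick the start. In an open shell one wedge's `from` vertex is not the
    // `to` vertex of any other wedge; in a closed fan any start will do.
    size_t start = 0;
    for (size_t i = 0; i < n; ++i) {
        bool hasPred = false;
        for (size_t j = 0; j < n; ++j)
            if (i != j && wedges[j].to == wedges[i].from) hasPred = true;
        if (!hasPred) { start = i; break; }
    }
    std::vector<size_t> order{start};
    std::vector<bool> used(n, false);
    used[start] = true;
    while (order.size() < n) {
        int tail = wedges[order.back()].to;
        size_t next = n;
        for (size_t j = 0; j < n; ++j)
            if (!used[j] && wedges[j].from == tail) { next = j; break; }
        if (next == n) break;  // disconnected fan: stop rather than loop forever
        order.push_back(next);
        used[next] = true;
    }
    return order;
}
```

For an open fan over neighbor vertices 1-2-3-4 given in shuffled order, the chain correctly starts at the 1-2 wedge; a closed fan simply chains from whichever wedge is examined first.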
Having debugged the alignment of triangles at vertices, we started testing the export code on large numbers of models. Testing revealed two problems. First, not all models converted to JT successfully. Second, as the shapes of bodies became more complex, conversion times grew so quickly that some bodies could no longer be exported in reasonable time.
We partially overcame the loss of performance by checking only boundary points for coincidence. That was better than nothing, but the slowdown was still significant, so we had to give up exporting some bodies entirely. Worse, the code became difficult to maintain and gave us no diagnostics for controlling the transformation process or the input data.
We were solving practical tasks, but it became clear to us that we needed to further refactor the code.
The unsorted container of points remained the performance bottleneck, and it made no sense to refactor the code until we figured out how to drastically reduce the number of comparisons. The first step down this road had already been taken: all face points were divided into internal and boundary points. We then reduced the number of comparisons further by determining the nearest neighbors of each face, so that comparisons were needed only against the boundary points of those neighbors.
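The internal/boundary split itself can be sketched as follows, under the assumption that each face arrives as a triangulated patch: an edge used by exactly one triangle of the face lies on the face boundary, and its endpoints are boundary points. Only those points can coincide with points of neighboring faces, so only they need to enter the coincidence checks. Names are again illustrative, not the C3D API:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Classify the points of one triangulated face: a point is on the face
// boundary iff it is an endpoint of an edge used by exactly one triangle.
// Internal points can never coincide with points of a neighboring face.
std::vector<bool> boundaryFlags(size_t pointCount,
                                const std::vector<std::array<size_t, 3>>& tris) {
    // Count how many triangles use each undirected edge.
    std::map<std::pair<size_t, size_t>, int> edgeUse;
    for (const auto& t : tris)
        for (int k = 0; k < 3; ++k) {
            size_t a = t[k], b = t[(k + 1) % 3];
            edgeUse[{std::min(a, b), std::max(a, b)}] += 1;
        }
    // Edges used once are boundary edges; mark their endpoints.
    std::vector<bool> onBoundary(pointCount, false);
    for (const auto& [edge, uses] : edgeUse)
        if (uses == 1) {
            onBoundary[edge.first] = true;
            onBoundary[edge.second] = true;
        }
    return onBoundary;
}
```

For a square split into four triangles around a center point, the four corners come out as boundary points and the center as internal, so the center is excluded from all cross-face comparisons.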
The code preparing the input data for JT meshes has now been completely refactored. We were helped by special classes that replaced the previous C++ template constructions, providing not only the basic functionality but also convenient debugging. When the fine-tuning of the new algorithm on simple models ended, those debugging tools came in handy for understanding why some models were not exported, although by all indications they should have been.
The unsuccessful conversions turned out to be caused by functions whose reliability we had never before had reason to doubt, such as those that build boundary polygons. As a result of the fine-tuning, the code can now detect mesh defects and, in some cases, repair them so that entire bodies can be exported to JT.
Nice extras such as code readability and convenient diagnostics became an important addition to solving the main task: transforming meshes to JT format. The final code is significantly faster than the original version. The primary detriment to performance -- linear search -- did not disappear, but we limited its impact on export time significantly.