What is in this article?
- What is in this article?
- What is Component Pre-Computing?
- How to train your component?
- Want to know more about component training?
What is Component Pre-Computing?
Component Training (also called Component Pre-computing or Component Pre-analysis) is a crucial step in the Akselos workflow. It uses Akselos' smart algorithms to generate training datasets that cover many possible model configurations based on the models' parameter settings. The training dataset, which is the output of this training process, is then used to perform RB-FEA analysis.
Akselos Workflow for a simulation model
The methodology applied to boost solution accuracy combines enrichment training with an error indicator, allowing the training datasets to be enriched iteratively to increase their quality. To monitor dataset quality, Akselos provides an RB-FEA Error Indicator that reliably indicates the accuracy of an RB-FEA solve relative to the corresponding FEA solution, without ever having to run an FEA solve. This gives users a way to validate the accuracy of their RB-FEA solves. If the error indicator needs to be reduced further, users can run additional Component Pre-computing via the Dashboard.
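The iterate-until-accurate idea behind enrichment training can be sketched as follows. This is a minimal conceptual illustration, not Akselos code: the function names and the halving "error indicator" are toy stand-ins for the real RB-FEA machinery described above.

```python
def train_with_enrichment(initial_error, threshold, max_iterations):
    """Toy sketch of iterative enrichment training (NOT Akselos API).

    Enrich the datasets until the error indicator drops below the
    threshold or the iteration budget is exhausted. In the real
    workflow each iteration runs enrichment solves; here the "error
    indicator" simply halves per iteration to illustrate the loop.
    """
    error = initial_error
    iterations_used = 0
    for _ in range(max_iterations):
        if error < threshold:
            break  # accuracy target already met; skip further enrichment
        error *= 0.5  # toy stand-in for one dataset-enrichment iteration
        iterations_used += 1
    return error, iterations_used

# Starting from an indicator of 16.0 with a threshold of 1.5, four
# enrichment iterations bring the indicator under the threshold.
print(train_with_enrichment(16.0, threshold=1.5, max_iterations=10))  # → (1.0, 4)
```

Setting `max_iterations=0` skips enrichment entirely, mirroring the "Number of enrichment iterations" option described later in this article.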
In this article, we will discuss the default, basic, and advanced options that you can customize when running a component training job.
How to train your component?
The Akselos Dashboard lets users perform the Component Training process, which is required before running any RB-FEA solves. Every user account type has the right to run Component Training on the Akselos Dashboard.
First, click the name of the collection you want to train, then click the Training tool to open the Components Training interface.
Figure 1. Components Training interface
The Select models box lists the available model files. Users can choose one or more models for training by ticking the boxes beside the models' names, or simply tick All to train every model.
Figure 2. Choosing models for training
After choosing models, the Train [number_of_selected_models] Selected Model(s) button becomes enabled. To train the selected models with the Default settings, click that button and go to the Jobs tab on the left menu to monitor the training progress. To customize the training instead, open the Advanced training options (elasticity) menu and adjust the parameters. The Advanced Training Options include:
(1) Enrichment: chooses the enrichment approach used to obtain accurate training datasets for components. There are three options:
Global submodels: for each AKS file in the input list of AKS models, submodels that cover the entire AKS file will be created and used for component datasets enrichment. If the accuracy indicator of an AKS model is already smaller than the specified accuracy threshold, the enrichment process for that model will be skipped.
Local submodels: an accuracy indicator local to every port in the AKS model is computed, and for each port that exceeds the specified accuracy threshold, a submodel local to that port is created for enrichment. This typically results in more submodels than the Global submodels approach, but each submodel is typically smaller.
Global Model: the global AKS models themselves are used to perform dataset enrichment.
Note: Global Submodels is the recommended approach for large models (e.g. models with 5 million or more FEA degrees of freedom). For smaller models, the Global Model approach is recommended.
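The recommendation in the note above can be captured in a small rule-of-thumb helper. This is a hypothetical sketch, not part of the Akselos Dashboard; only the 5-million-DOF cutoff comes from the note itself.

```python
def recommended_enrichment_approach(fea_dofs):
    """Rule of thumb from the note above: Global Submodels for large
    models (about 5 million FEA DOFs or more), Global Model otherwise.
    Hypothetical helper, NOT an Akselos function."""
    return "Global submodels" if fea_dofs >= 5_000_000 else "Global Model"

print(recommended_enrichment_approach(8_000_000))  # → Global submodels
print(recommended_enrichment_approach(500_000))    # → Global Model
```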
(2) Physics Type: different analyses need different training datasets for an RB-FEA solve. Here users can choose the target physics type to generate the corresponding training dataset. Three types of physics are supported:
- Elasticity
- Elasticity Harmonic analysis
- Eigenvalue Problem
(3) Create visualization datasets: turning on this option creates datasets that make visualization run faster, by storing extra visualization data on disk. Disabling it reduces disk space usage. The option is optional in most cases but required in some, e.g. for elasticity quasi-static solves and fatigue analysis. It is turned ON by default.
(4) Generate 64-bit visualization cache: This option is to generate Visualization Datasets that use double precision values (i.e. 64-bit instead of 32-bit). The Visualization Datasets are used for fast visualization in RB-FEA solves, to reconstruct the solution so that it can be visualized and post-processed. Normally it's sufficient to use 32-bit (which uses less disk space; hence it is the default), but in cases where the stress data is more sensitive, it can be helpful to use double-precision visualization data.
(5) Port Enrichment threshold: this is the accuracy indicator threshold used in the Global submodels and Local submodels approaches mentioned above. This parameter is not used in the Global Model approach.
(6) Number of enrichment iterations: the Component Training process can be iterated multiple times to further enrich the component datasets. This input controls the number of iterations that will be performed. In cases where dataset enrichment is unnecessary, it can be skipped entirely by setting this number to zero. For example, jacket models with 3D joints do not require port enrichment, since all ports have only 6 degrees of freedom.
(7) Target DOFs per submodel: DOFs-based target size of the submodels created in the Global submodels approach. The default target size is 5 million DOFs.
(8) Max. DOFs per processor: the maximum DOFs assigned for each processor when performing FEA computations with submodels. This allows users to ensure fewer DOFs are assigned to each processor to avoid running out of memory. This option can be typically ignored by users, but it may be useful in extreme cases where the enrichment is limited by available memory.
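As a back-of-the-envelope illustration of how this cap interacts with submodel size (a hypothetical sketch, not an Akselos formula), the minimum number of processors needed so that no processor exceeds the cap is just a ceiling division:

```python
import math

def min_processors(submodel_dofs, max_dofs_per_processor):
    """Hypothetical sketch (not an Akselos formula): smallest processor
    count such that each processor stays under the DOF cap."""
    return math.ceil(submodel_dofs / max_dofs_per_processor)

# A 5-million-DOF submodel (the default target size from option (7))
# with a cap of 1 million DOFs per processor needs at least 5 processors.
print(min_processors(5_000_000, 1_000_000))  # → 5
```

Lowering the cap therefore spreads the same submodel over more processors, which is the memory-saving trade-off this option provides.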
(9) Number of cores per “Train component” job: the number of cores used for each component in the "train components" part of the training. The recommended default value is 1. However, if a component is very heavy, for example more than 1 million DOFs, specifying more than one core can help speed up this stage.
(10) Training the components from the given aks file(s) only: If this option is ON, only the components used in the input aks file(s) are trained. If this option is OFF, all components in the collection’s component folder will be trained.
(11) Use Component Datasets in RB-FEA Solves: when “Global Submodels” or “Local Submodels” is used for training, this option controls whether the RB-FEA solves run during that training use RB-FEA datasets. If it is ON, the training runs the Train Components stage before any RB-FEA solves are performed (since the datasets need to be generated first). If it is OFF, the Train Components stage is only run at the end of the entire training job, and the RB-FEA solves used in the enrichment training are slower, since training data must be generated during each solve. In practice, the overall training time is similar in both cases, so most users should not need to change this option. However, if a component contains very many node ports or face ports, turning this option ON can help speed up the training process.
(12) Run Updates Between Each Solve: this option depends on the previous one (Use Component Datasets in RB-FEA Solves): if the previous option is ON, this option must be ON too. It applies to enrichment training with “Global Submodels” or “Local Submodels”, which use RB-FEA solves; when this option is ON, the RB-FEA datasets are updated between each enrichment solve. In most cases, the default OFF is recommended.
(13) Use Hybrid solver for pre-computing: when this option is ON, a Hybrid solver is used to perform the enrichment solves (instead of RB-FEA solves). In most cases, Akselos training auto-detects the appropriate solver strategy for enrichment solves, so this option is OFF by default. It can be useful for AKS model(s) consisting of shell and beam superelements.
Figure 3. Advanced training options
One decision users should always be aware of is whether to keep or delete the collection's old training datasets. Old training datasets should only be deleted when there have been major changes to the geometry of the global model, or when incorrect settings submitted with a training job caused the whole training process to fail. Please contact Akselos support if you are unsure about keeping or deleting old training datasets. To delete old training datasets, navigate to Trained configuration(s), tick the boxes beside the ones to delete, then click the Delete Training Data button.
Figure 4. Deleting old training datasets