This paper is an extended version of a contribution presented
at the GraphiCon 2025 conference.
In
the modern era of digital transformation, artificial intelligence technologies,
and generative neural networks in particular, demonstrate impressive potential
for solving a wide range of problems. However, despite obvious advances in content
generation, data processing, and process automation, a number of critical
limitations exist that hinder their full integration into production and
research processes.
A key
problem with modern neural network technologies is their "black box"
nature, whereby results are generated without a transparent explanation of the
decision-making logic. This significantly complicates the processes of
validating the obtained results, identifying and correcting errors, and making
targeted changes to the final solutions. This problem is particularly acute in
areas requiring high accuracy and reliability of results, such as mathematical
modeling and 3D design.
An
analysis of existing approaches to integrating neural network technologies into
production processes reveals a significant gap between the theoretical
capabilities of artificial intelligence and the practical requirements of
industry. Traditional implementation methods often focus either on fully
automating processes using AI or on using neural networks as an auxiliary tool,
which prevents these technologies from fully realizing their potential.
This
study proposes a hybrid methodological approach designed to overcome these
limitations. The approach is based on the synergy of natural language
processing (NLP) technologies and verified software systems for mathematical
and 3D modeling. The proposed methodology integrates the capabilities of
artificial intelligence systems for natural language processing and the rapid
generation of variable solutions with existing software algorithms.
The
impact of generative neural networks on programming and software development
deserves special attention. Modern language models demonstrate impressive
capabilities in generating program code, automating routine programming tasks,
and assisting with debugging. However, the "black box" problem also
arises: generated code requires careful verification, as neural networks can
create seemingly correct code that contains logical errors or vulnerabilities.
This highlights the need to develop methodological approaches to verifying and
validating generated software solutions.
The
introduction of neural network technologies is significantly transforming the
labor market structure in the technology sector. On the one hand, the barrier
to entry into the profession has been significantly lowered: generative models
provide budding specialists with powerful tools for learning and solving basic
problems. This opens up new opportunities for professional development and
allows for faster mastery of complex technological fields. On the other hand,
the automation of routine operations reduces the need for low-skilled
specialists performing standard tasks.
It is
important to note that the role of highly qualified specialists not only
remains but also grows. This is due to both the need for expert evaluation and
validation of neural network results, as it is currently impossible to fully
guarantee the quality and reliability of automatically generated solutions, and
the need to develop and optimize methodologies for applying AI technologies.
Generative
neural networks are one of the most dynamically developing areas in AI. Their
evolution began with classic generative adversarial networks (GANs) in 2014 and
continues to this day [1]. The current stage is characterized by a transition
to more complex and controllable systems.
Diffusion
models, a breakthrough in recent years, offer a new approach to content
generation based on the gradual refinement of the result through sequential
noise removal [2]. This method has demonstrated exceptional effectiveness in
generating images, 3D models, and other types of data, delivering more stable
and high-quality results than classical approaches.
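To make the idea of "gradual refinement through sequential noise removal" concrete, the following minimal Python sketch illustrates the closed-form forward noising step of a DDPM-style diffusion process [2] and an exact reverse step. The linear noise schedule and the oracle denoiser (which is given the true noise) are didactic simplifications, not a trainable model.

```python
import math
import random

# Toy illustration of the diffusion idea: a 1-D "sample" is gradually
# corrupted with Gaussian noise (forward process), and an oracle
# denoiser that knows the true noise recovers it in closed form.

T = 50                                   # number of diffusion steps
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # linear schedule
alphas = [1.0 - b for b in betas]
alpha_bars = []
prod = 1.0
for a in alphas:
    prod *= a
    alpha_bars.append(prod)             # cumulative product of alphas

def forward_noise(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    ab = alpha_bars[t]
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

def reverse_estimate(xt, t, eps):
    """Recover x_0 exactly when the true noise eps is known (oracle)."""
    ab = alpha_bars[t]
    return (xt - math.sqrt(1.0 - ab) * eps) / math.sqrt(ab)

random.seed(0)
x0 = 1.5                                 # the "clean" data point
eps = random.gauss(0.0, 1.0)             # noise used to corrupt it
xt = forward_noise(x0, T - 1, eps)       # heavily noised sample
x0_hat = reverse_estimate(xt, T - 1, eps)
print(abs(x0_hat - x0) < 1e-9)           # oracle denoising is exact
```

A real diffusion model replaces the oracle with a neural network trained to predict the noise, and applies the reverse step iteratively; the algebra above is what makes each denoising step well defined.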
Parallel to the development of diffusion models, a breakthrough occurred in the
field of transformer architectures, originally developed for natural language
processing but successfully adapted to work with various types of data,
including program code and mathematical formulas. This opened up new
possibilities for creating universal generative systems capable of
simultaneously processing multiple data modalities.
In
the context of training modern generative models, the example of Stable is
illustrative. Diffusion XL, which uses a multi-stage training strategy. The
model is first pre-trained on the massive LAION-5B image dataset, after which
it is fine-tuned using specialized datasets for specific tasks. A similar
approach is used in Meta's CodeLlama-34b, where the base language model is
further trained on specially prepared sets of program code, achieving high
accuracy in generating specific programming constructs and working with various
programming languages.
Self-supervised
learning clearly demonstrates its effectiveness in the GPT-4V (Visual)
architecture, where the model is capable of extracting semantic relationships
between images and text without explicitly annotating these relationships in
the training data. In the context of technical problems, a telling example is
OpenAI's Point-E, which can generate 3D models from text descriptions using
an intermediate point cloud representation.
The
practical application of generative models can be illustrated with specific
examples from various industries. In industrial design, Autodesk uses
generative design in Fusion 360 to create optimized designs. For example, when
designing a bracket for the aerospace industry, the system generated multiple
variants optimized for weight and strength, allowing for a 20-40% reduction in
component weight while maintaining or improving mechanical performance.
In software engineering, Amazon CodeWhisperer and GitHub Copilot demonstrate
the effectiveness of using generative models to automate development. According to
GitHub research, developers using Copilot complete tasks on average 55%
faster, while code quality, measured by the number of successfully passing
tests, increases by 26% [3].
However,
the implementation of such technologies is associated with specific technical
challenges. For example, the use of Stable Diffusion XL for industrial design
requires significant computing resources: at least 16 GB of video memory for
basic operation and up to 24 GB for optimal performance. Integrating GitHub
Copilot into corporate systems introduces security and code-confidentiality
issues, requiring the deployment of local versions of the system and
additional control tools.
The
choice of a specific architecture (Table 1) depends significantly on the
specific problems being solved and the available computing resources. The
current trend in the development of generative models is toward creating hybrid
architectures that combine the advantages of various approaches while
minimizing their drawbacks.
Table 1. Types of neural network architecture
| Architecture | Advantages | Restrictions | Scope of application |
|---|---|---|---|
| Classic GANs | High generation speed; relative simplicity of architecture; low requirements for computing resources | Instability of learning; mode collapse problem; complexity of generation control | Image generation; data augmentation; prototyping |
| Diffusion models | High generation quality; stability of results; good process control | High computational costs; slow generation; complexity of architecture | Professional content generation; 3D modeling; scientific research |
| Transformers | Universality of application; good scalability; working with different types of data | High memory requirements; complexity of training; high development costs | Code generation; multimodal tasks; complex automation |
| Hybrid architectures | Combination of advantages of different approaches; flexibility of configuration; wide optimization possibilities | Complexity of integration; increased infrastructure requirements; need for careful configuration | Industrial applications; complex production tasks; research projects |
Validation
and verification of the obtained results play a key role in the development of
generative technologies. While quality assessment in image or text generation
tasks can be performed subjectively, technical tasks such as generating
software code or 3D models require strict mathematical criteria and
verification methods. This becomes especially relevant when integrating
generative models into production processes, where the cost of error can be critically
high.
In
the context of training modern generative models, there has been a significant
paradigm shift from classical methods to more comprehensive approaches.
Traditional training methods based on direct minimization of the loss function
have given way to multi-stage strategies that include pre-training on large
datasets followed by specialized fine-tuning for specific tasks. The concept of
transfer learning plays a special role in this process, enabling the efficient
adaptation of pre-trained models to solve specific problems with significantly
reduced requirements for computing resources and training data.
Modern
approaches to training generative models also feature extensive use of
self-supervised learning techniques. These methods enable models to extract useful
features and patterns from unlabeled data, which is especially important in the
context of technical problems where obtaining high-quality labels can be
extremely expensive or practically impossible. Particular attention is paid to
regularization and preventing overfitting, which is critical to ensuring the
stability and reliability of the generated results.
Mathematical
modeling, a fundamental tool for scientific research and engineering
development, is undergoing a significant transformation under the influence of
AI. The integration of machine learning methods with classical approaches is
creating a new paradigm in computational science. Traditional methods based on
the numerical solution of differential equations face limitations when working
with complex nonlinear systems.
A study [4] published in Nature Reviews Physics describes this shift as
physics-informed machine learning: embedding physical laws and prior knowledge
into learning algorithms helps overcome the limitations of purely numerical
approaches.
In
this context, hybrid approaches that combine classical numerical methods with
neural network models are of particular interest. For example, a study [5]
demonstrates how the use of neural networks in hydrodynamics problems can
reduce computation time by orders of magnitude while maintaining acceptable
accuracy.
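The surrogate-modeling idea behind such speedups can be sketched in a few lines: an expensive solver is sampled offline, a cheap model is fitted to the samples, and subsequent queries hit the cheap model instead of the solver. The "solver" below is an illustrative stand-in, and the ordinary-least-squares surrogate is the simplest possible choice; real hybrid pipelines use neural networks and far richer physics.

```python
# Minimal sketch of ML-accelerated simulation via a surrogate model.
# The expensive_solver function is a hypothetical stand-in for a
# costly numerical simulation.

def expensive_solver(x):
    # Stand-in for an expensive computation: here simply y = 3x + 1.
    return 3.0 * x + 1.0

# Offline phase: sample the solver on a coarse grid.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [expensive_solver(x) for x in xs]

# Fit y ≈ a*x + b by ordinary least squares (closed form).
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def surrogate(x):
    """Cheap replacement queried instead of the expensive solver."""
    return a * x + b

print(abs(surrogate(2.5) - expensive_solver(2.5)) < 1e-9)
```

The speedup comes from amortization: the solver is called only during the offline sampling phase, while every later query costs a single arithmetic expression.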
Mathematical
modeling in the 2020s is characterized by the active implementation of
high-performance computing systems and new methodological approaches. Leading
research centers such as the US national laboratories (Argonne, Lawrence
Berkeley) and European research institutes are demonstrating a strong trend
towards the use of hybrid computing architectures that combine classical
approaches with elements of artificial intelligence.
The
industrial sector is seeing active use of commercial mathematical modeling
packages, with the following occupying leading positions:
1.
ANSYS, which provides tools for:
• finite element analysis
• computational fluid dynamics
• electromagnetic modeling
2.
COMSOL Multiphysics, which, according to the company's technical reports, has
implemented machine learning support in its solvers, significantly
accelerating the calculation of complex multiphysics problems.
3.
MATLAB from MathWorks, which in recent versions has significantly expanded its
integration capabilities with machine learning tools.
Supercomputer
centers play a special role in scientific computing. According to the TOP500 (a
ranking of the world's most powerful supercomputers), modern systems achieve
performance in the hundreds of petaflops, opening up new possibilities for
solving complex mathematical modeling problems [6].
Key
trends in the development of mathematical modeling, confirmed by numerous
publications in leading scientific journals and practical applications, are:
1.
Development of multiscale modeling methods that allow taking into account
processes on different spatial and temporal scales.
2.
Implementation of machine learning methods for:
• acceleration of calculations;
• optimization of computational grids;
• prediction of the behavior of complex systems.
3.
Creation of digital twins, which is confirmed by successful implementations in
the aerospace industry (Boeing, Airbus) and the energy sector (Siemens,
General Electric).
Modern
3D modeling is characterized by a variety of methodological approaches, each
with its own advantages and applications. In industrial design, parametric,
direct, and hybrid modeling, as well as generative design, are prominent. Cloud
modeling and AI integration are also becoming increasingly popular. Various
industries are developing their own approaches to 3D modeling, such as BIM
modeling in architecture, surface modeling in industrial design, and polygonal
modeling in animation and gaming. Current trends point to the automation of
modeling processes, the integration of various approaches, the increased
availability of tools, and the implementation of AI.
In
recent years, artificial intelligence technologies have been actively
integrated into traditional 3D modeling tools. Autodesk, one of the industry
leaders, has integrated neural network technologies into Fusion 360 to automate
design and optimization processes. The system uses machine learning algorithms
for generative design, enabling the creation of optimized designs based on
specified parameters and constraints. According to the company, this approach
reduces design time by 30-50% while simultaneously improving the performance of
the final product.
Siemens
NX is also actively developing artificial intelligence in its solutions. The
latest software versions implement machine learning algorithms to predict
potential design issues, automatically optimize topology, and assist in design
decision-making. Neural networks have proven particularly effective in
analyzing and optimizing assemblies, where algorithms can suggest more
efficient layout options based on accumulated experience.
Blender, a popular open-source 3D modeling tool, has integrated support for various
neural network plugins. Most notably, it introduced tools for automatic texture
generation, model topology optimization, and machine learning-based animation.
The developer community is actively pursuing these technologies, creating new
tools for automating various aspects of 3D modeling.
In
the context of neural network-based 3D modeling, several main approaches have
emerged. Neural Radiance Fields (NeRF) technology, introduced
by researchers at UC Berkeley, has revolutionized the creation of 3D models
from photographs. This method enables the creation of detailed 3D
reconstructions of objects using a set of 2D images. The main advantage of this
approach is the high accuracy of reproducing the geometry and textures of real
objects. However, a significant limitation remains the need for a large number
of source images and significant computing resources for processing.
NVIDIA's
GET3D represents a different approach to neural network modeling, enabling the
generation of 3D models based on text descriptions or single images. The technology
demonstrates impressive results in creating a variety of 3D objects, but its
accuracy and detail are inferior to those of traditional modeling methods. Its
main advantage is the speed of creating basic models and the ability to quickly
prototype.
OpenAI's Point-E offers an alternative approach based on generating point clouds and
then processing them to create full-fledged 3D models. This method is
characterized by high speed and lower computational requirements compared to
other neural network approaches. However, the quality of the resulting models
may be insufficient for industrial applications, limiting its use to rapid
prototyping and conceptual design.
An
important aspect of the development of neural network 3D modeling is the
integration of various approaches and the creation of hybrid solutions. Current
research aims to combine the advantages of various methods while minimizing
their drawbacks. Particular attention is paid to the development of methods for
validating and verifying the obtained results, which is critical for the
industrial application of these technologies.
In modern
practice, significant potential is being demonstrated for synergy between
neural network technologies and traditional approaches in various fields of
engineering and science. Experience in implementing such solutions at leading
technology companies and research centers allows us to assess the real
possibilities and limitations of this integration.
In the programming
field, large-scale implementation of GitHub Copilot demonstrates the practical
applicability of neural network technologies for automated software
development. According to a 2023 GitHub study [7], the use of neural network
assistants can significantly accelerate the coding process, especially in tasks
related to creating standard software constructs and data processing. It is
important to note that the programmer's role is being transformed: from writing
routine code to higher-level architectural design and validation of generated
solutions.
In mathematical
modeling, the most promising direction is the creation of hybrid systems that
combine classical numerical methods with neural network approaches. Research
[4] demonstrates the possibility of significantly accelerating calculations
while maintaining the physical correctness of the results. Such approaches are
particularly effective in optimization and prediction tasks for complex
systems, where traditional methods require significant computational resources.
3D modeling is
being enriched by the ability to automatically generate and optimize models.
NVIDIA, with its GET3D technology, has demonstrated the ability to create
detailed 3D models based on text descriptions or images [8]. This opens up new
possibilities for rapid prototyping and conceptual design. In industrial
applications, the ability to automatically optimize existing models based on
specified parameters and constraints is becoming especially important.
The integration
of these technologies creates new opportunities for interdisciplinary
collaboration. For example, the results of mathematical modeling can be
automatically converted into 3D models, which are then optimized to accommodate
technological constraints. Software code for controlling such systems can be
automatically generated, taking into account the specifics of a particular task
and performance requirements.
Verifying the
results obtained using neural network technologies deserves special attention.
In programming, this is achieved through automated testing and static code
analysis. In mathematical modeling, comparison methods with classical solutions
and experimental data are used. For 3D modeling, specialized methods are being
developed to verify the geometric and topological correctness of the generated
models.
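As a minimal illustration of what a geometric-topological check on a generated mesh can look like, the sketch below verifies Euler's formula V - E + F = 2, which must hold for any closed, watertight genus-0 mesh. The cube input and the function name are illustrative; real verification pipelines combine many such checks (manifoldness, self-intersection, normal orientation, and so on).

```python
# Topological sanity check for a generated polygonal mesh:
# for a closed genus-0 surface, V - E + F must equal 2.

def euler_check(vertices, faces):
    """Return True if V - E + F == 2 for the given mesh."""
    edges = set()
    for face in faces:
        n = len(face)
        for i in range(n):
            a, b = face[i], face[(i + 1) % n]
            edges.add((min(a, b), max(a, b)))   # undirected edge
    v, e, f = len(vertices), len(edges), len(faces)
    return v - e + f == 2

# Unit cube: 8 vertices, 6 quadrilateral faces, 12 edges.
cube_vertices = list(range(8))
cube_faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),        # bottom, top
    (0, 1, 5, 4), (1, 2, 6, 5),        # sides
    (2, 3, 7, 6), (3, 0, 4, 7),
]
print(euler_check(cube_vertices, cube_faces))  # True: 8 - 12 + 6 = 2
```

Deleting any face makes the surface open and the check fails, which is exactly the kind of defect a generative model can silently introduce.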
Industrial
implementation of such comprehensive solutions requires the creation of an
appropriate infrastructure and methodology. The experience of companies that
have successfully integrated neural network technologies into their processes
demonstrates the need for a phased approach with thorough validation at each
stage. The key to success is correctly defining the applicability limits of
automated solutions and maintaining specialist oversight.
A promising
development direction is the creation of unified platforms that integrate
various aspects of design and modeling. Such systems enable a seamless process
from conceptual design to the finished product, with neural network
technologies acting as an intelligent assistant at every stage of the process.
This is especially important in the context of the development of digital twins
and smart manufacturing.
This study
proposes a hybrid methodological approach [9] designed to overcome these
limitations. The approach is based on the synergy of natural language
processing (NLP) and verified engineering software systems. It is expected that
the combination of these two approaches will minimize the likelihood of errors
and inaccuracies in the design process, while ensuring the necessary level of
oversight by specialists.
The proposed
methodology is based on the integration of the capabilities of artificial
intelligence systems in the field of natural language processing and the rapid
generation of variable solutions with existing algorithms for constructing CAD
models in domestic automated design systems, such as KOMPAS-3D [10] and T-FLEX
CAD [11].
Fig. 1. Scheme of a hybrid methodological approach applied to CAD systems
The methodology
is a hybrid approach to automated 3D modeling, combining natural language
processing (NLP) with the use of proven engineering software packages (CAD),
such as KOMPAS-3D or T-FLEX CAD. This hybrid approach offers
a compromise between automation and controllability of the 3D modeling process,
combining the benefits of AI and proven engineering tools. This approach aims
to minimize errors and improve the accuracy of the modeling process compared to
using generative neural networks exclusively. The key advantage lies in the
validation of the parameters of the AI-generated script, rather than the
validation of the entire generated model.
Instead of
directly using a neural network to generate a 3D model, which is prone to
hidden errors, text-based AI is used to create a control script in a
programming language compatible with the selected CAD system. This shifts the
focus of control from checking the finished model to verifying the parameters
specified in the script, ensuring earlier detection and correction of potential
errors. The iterative nature of the process allows for timely adjustments to
the prompt and script based on analysis of intermediate results, ensuring
flexibility and high accuracy of the final 3D model.
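A minimal sketch of the parameter-validation step described above might look as follows. The schema, parameter names, and limits are hypothetical assumptions introduced for illustration; they are not a real KOMPAS-3D or T-FLEX API.

```python
# Hypothetical validation of parameters in an AI-generated CAD control
# script: each value is checked against engineering constraints BEFORE
# the script is executed, shifting control from the finished model to
# the script inputs. All names and limits below are illustrative.

BRACKET_SCHEMA = {
    # parameter: (min, max, unit)
    "length_mm":    (10.0, 500.0, "mm"),
    "thickness_mm": (1.0, 20.0, "mm"),
    "hole_d_mm":    (2.0, 50.0, "mm"),
}

def validate_params(params):
    """Return a list of human-readable violations (empty list = valid)."""
    errors = []
    for name, (lo, hi, unit) in BRACKET_SCHEMA.items():
        if name not in params:
            errors.append(f"missing parameter: {name}")
            continue
        v = params[name]
        if not isinstance(v, (int, float)):
            errors.append(f"{name}: expected a number, got {type(v).__name__}")
        elif not (lo <= v <= hi):
            errors.append(f"{name}={v} {unit} outside [{lo}, {hi}] {unit}")
    return errors

# Parameters as they might come back from the NLP model's script:
generated = {"length_mm": 120.0, "thickness_mm": 0.5, "hole_d_mm": 8.0}
problems = validate_params(generated)
print(problems)  # the undersized thickness is caught before any geometry is built
```

Only a script that passes such checks would be handed to the CAD system, and any violation is reported back to the specialist (or to the model, for the next iteration of the prompt).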
The practical
implementation of hybrid neural network solutions for engineering and
scientific applications requires a comprehensive approach to ensuring process
reliability, efficiency, and controllability. The experience of leading
technology companies and research centers allows us to formulate key
requirements for such systems.
In
the context of the rapid development of AI and its integration into industrial
and scientific processes, an analysis of the development prospects and socio-economic
consequences of the implementation of hybrid neural network technologies is
particularly relevant.
According to McKinsey Global Institute forecasts [12], the introduction of
neural network technologies into engineering and scientific fields will lead
to a significant
transformation of the labor market in the next 5-10 years. An interesting
paradox is observed: despite the automation of many processes, the demand for
highly qualified specialists is not only not decreasing but actually
increasing. This is due to the need to develop, implement, and monitor new
technological solutions.
The
barrier to entry into programming has been significantly lowered thanks to
tools like GitHub Copilot and similar systems. However, as leading tech
companies demonstrate, this doesn't reduce the skill requirements for
experienced developers. Instead, their focus shifts toward more complex tasks
such as architectural design, optimization, and code quality assurance.
Mathematical
and engineering modeling is also undergoing significant changes. The
introduction of hybrid approaches makes it possible to solve increasingly
complex problems that were previously inaccessible due to computational
limitations. At the same time, the role of specialists is transforming: from
performing routine calculations to defining problems, selecting methodology,
and validating results.
The
ethical aspects of implementing neural network technologies deserve special
attention. Questions of responsibility for decisions arise, especially in
critical areas such as medical modeling or the design of critical engineering
structures. Leading organizations, including IEEE and ACM, are actively working
to develop ethical standards and guidelines for the application of AI
technologies.
Prospects
for further development revolve around several key areas. First, improving
methods for ensuring the reliability and interpretability of neural network
component output. Second, developing technologies for automatically adapting
models to changing conditions and requirements. Third, creating more effective
methods for integrating expert knowledge into the training and operation of
neural networks.
Developing
infrastructure to support hybrid solutions is also crucial. This includes both
hardware improvements and the creation of specialized platforms for the
development and implementation of such systems. Leading technology companies
are actively investing in cloud services and tools that simplify the work with
hybrid systems.
A
hybrid approach to using generative neural networks in mathematical and 3D
modeling represents a promising direction, combining the benefits of AI with
traditional methods, ensuring greater accuracy, reliability, and
controllability of results. This approach not only opens up new opportunities
for automating routine tasks and accelerating design processes, but also
enables the solution of more complex and large-scale problems previously
inaccessible due to computational limitations or the complexity of manual
modeling.
Further
development of this approach requires addressing a number of technological,
methodological, and socioeconomic challenges. Key areas include developing new
methods to ensure the reliability and interpretability of results, advancing
technologies for automatically adapting models to changing conditions and
requirements, and creating more effective methods for integrating expert
knowledge into the training and operation of neural networks. Another important
aspect is the development of infrastructure to support hybrid solutions,
including hardware improvements and the creation of specialized platforms for
the development and implementation of such systems.
The
introduction of hybrid neural network technologies is having a significant
impact on the labor market, requiring new competencies and skills from
specialists. Educational institutions must adapt their programs to prepare
specialists capable of effectively working with hybrid systems and critically
evaluating AI-based results.
Overall,
the hybrid approach to using generative neural networks opens up new prospects
for the development of mathematical and 3D modeling, enabling the creation of
more complex, accurate, and efficient models that can be used in various
industries and sciences. Successful implementation of this approach requires a
comprehensive approach that includes technological innovation, methodological
developments, and socioeconomic transformation.
1. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative Adversarial Networks, 2014, https://doi.org/10.48550/arXiv.1406.2661
2. J. Ho, A. Jain, P. Abbeel, Denoising Diffusion Probabilistic Models, 2020, URL: https://doi.org/10.48550/arXiv.2006.11239 (accessed March 29, 2023)
3. Rodriguez M. Research: Quantifying GitHub Copilot's impact on code quality, GitHub Blog, 10.10.2023, URL: https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-code-quality/
4. Karniadakis, G. E., Kevrekidis, I. G., Lu, L., Perdikaris, P., Wang, S., & Yang, L. (2021). Physics-informed machine learning. Nature Reviews Physics, 3(6), 422-440.
5. Wang H., et al. Recent advances on machine learning for computational fluid dynamics: A survey, 2024, arXiv preprint arXiv:2408.12171
6. TOP500 List - June 2025, URL: https://top500.org/lists/top500/list/2025/06/
7. Daigle K. Octoverse: The state of open source and rise of AI in 2023, GitHub Blog, 11/08/2023, URL: https://github.blog/news-insights/research/the-state-of-open-source-and-ai/
8. Isha Salian, World-Class: NVIDIA Research Builds AI Model to Populate Virtual Worlds With 3D Objects, Characters, 2022, Blogs.nvidia, URL: https://blogs.nvidia.com/blog/3d-generative-ai-research-virtual-worlds/
9. N. A. Bondareva, A. E. Bondarev, S. V. Andreev, I. G. Ryzhova. Development of a Methodology for the Application of Generative Neural Networks in Creating 3D Models (2025). Scientific Visualization 17.3: 25-34, DOI: 10.26583/sv.17.3.034.
10. KOMPAS-3D, Russian import-independent system of three-dimensional design. URL: https://kompas.ru/ (date of access 04/29/2025)
11. T-FLEX CAD, Russian engineering software for 3D design and development of design documentation. URL: https://www.tflexcad.ru/ (date of access 04/29/2025)
12. Ellingrud K., Sanghvi S., Dandona G., Madgavkar A., Chui M., White O., Hasebe P., Generative AI and the future of work in America, 2023, URL: https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america