This paper is an extended version of a contribution presented
at the GraphiCon 2025 conference.
The present work continues and develops our
previous studies [1–7] aimed at creating and improving software for automating
technological processes in domestic high-tech CNC laser equipment. Under modern
conditions of scientific and technological progress, the use of components with
complex spatial geometry, particularly with curved and spatially developed
surfaces, has become increasingly important in mechanical engineering. Such
elements are integral components of high-load, high-performance systems of
various applications. Examples include gas and steam turbine blades, jet engine
combustion chambers, centrifugal pump impellers, compressor impellers, and
other components used in power engineering, aviation, transportation, and
marine industries.
The application of geometrically complex
components substantially expands machine functionality, improves operational
characteristics, and increases the adaptability of technological processes.
Accordingly, requirements for the processing accuracy and positioning of such
parts are growing, which makes the problem of automated control particularly
relevant.
Problems of spatial positioning using
machine vision are actively studied in a number of works. For example, [8]
considers the integration of machine vision into modular equipment positioning.
In [9], a method of automated tool positioning for modular equipment based on
visual analysis is proposed. In [10], the application of computer vision to
correct control programs through boundary analysis is described. Analysis of
such studies shows that existing solutions are generally oriented toward flat
or reference objects and rely on template matching.
The novelty of our proposed approach lies
in using natural height differences on the workpiece surface, visualized as
contrast boundaries in an image, for automatic determination of the drawing’s
zero point. Unlike template- or marker-based methods, our system uses a
modified breadth-first search (BFS) algorithm to extract object boundaries and
determine either the edge coordinate or the center of a hole. The resulting
information is used to calculate the conversion coefficient from image pixels
to real machine coordinates. This approach enables highly accurate alignment of
the control program with the actual workpiece position, with minimal time cost
and without complex preparatory procedures.
The production of workpieces with curved
surfaces generally includes several sequential stages, performed on different
types of equipment. At each transition between stages, it is necessary to reposition
the workpiece to align the control program, which contains tool paths, with the
actual position of the object in the machine’s working space. Failure to ensure
alignment accuracy may result in defective or poor-quality products.
Traditional positioning methods typically
involve the use of contact sensors that probe the workpiece surface step by
step, or the use of high-precision fixtures that fix the workpiece in a
strictly defined position. However, such approaches are not always applicable,
especially when working with objects that have geometric deviations obtained at
previous production stages. In some cases, traditional methods require
considerable time for preparatory operations.
Thus, the main objective of the study was
to reduce positioning time in multi-axis laser processing and to simplify the
preparatory stage. To this end, a method was developed for detecting either the
coordinate of a boundary or the center of a hole (a test specimen is shown in
Fig. 1) using an optical video channel; the method recognizes height
differences and calculates their position by computing the pixel-to-millimeter
conversion coefficient.
Fig. 1. Test specimen: a cylinder with a hole
To solve the problem of locating the
drawing’s zero point, functionality was developed within the existing FlexMV
software module, integrated into the control system, to recognize primitives
that can be interpreted as reference points for multi-axis processing. The
program code was written in C++17 with the Qt 5.15 framework. To simplify
implementation of recognition algorithms and video stream processing from the
industrial camera, the OpenCV library was used. Visualization of recognized
objects and video display were implemented with Qt tools supporting OpenGL. The
use of cross-platform technologies enables effective operation in both Windows
and Linux environments.
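To make this stack concrete, the following minimal sketch shows one way a camera frame can be handed from OpenCV to Qt for display; the function and conversion details are illustrative assumptions rather than FlexMV's actual code.

```cpp
// Illustrative frame-acquisition path: OpenCV capture -> Qt image.
#include <opencv2/opencv.hpp>
#include <QImage>

QImage grabFrame(cv::VideoCapture& cam)
{
    cv::Mat frame;
    cam >> frame;                               // read one frame from the camera
    cv::cvtColor(frame, frame, cv::COLOR_BGR2RGB);
    // Wrap the pixel buffer in a QImage for Qt/OpenGL display; copy()
    // detaches the result from the cv::Mat storage.
    return QImage(frame.data, frame.cols, frame.rows,
                  static_cast<int>(frame.step), QImage::Format_RGB888).copy();
}
```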
As noted earlier, the FlexMV module is
tightly integrated with the FlexCNC control system through interaction over the
TCP/IP protocol. This inter-program communication allows FlexMV to initiate
actions of the kinematic system without dealing with motion control details or
synchronization. Such an approach made it possible to abstract away from
low-level device interaction and focus on implementing the control algorithms.
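For illustration only, such a request might be wrapped as below; the command string and reply text are hypothetical, since the actual message format between the two programs is internal.

```cpp
// Hypothetical FlexMV-side request to FlexCNC over TCP/IP (Qt sockets).
#include <QTcpSocket>

QByteArray requestAction(QTcpSocket& cncSocket, const QByteArray& command)
{
    cncSocket.write(command);           // e.g. a hypothetical "MOVE X -0.5\n"
    cncSocket.waitForBytesWritten();    // block until the request is sent
    cncSocket.waitForReadyRead();       // block until FlexCNC replies
    // FlexMV reacts to the reply (e.g. "movement stopped") without handling
    // low-level motion control or synchronization itself.
    return cncSocket.readAll();
}
```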
Given the variety of laser system operating
conditions and the wide range of processed materials, it is necessary to
provide the operator with fine-tuning capabilities not only for detector parameters
but also for filter types and their application sequence in video frame
processing. To implement this functionality, a software pipeline system was
integrated into FlexMV, first introduced in [4] and further developed in [5–7].
The software pipeline (Fig. 2) operates with two types of primitives (a minimal interface sketch is given after the list):
· Detectors — algorithms for recognizing patterns
in the input image.
· Filters — algorithms for pre-processing frames
to improve subsequent analysis conditions.
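A minimal sketch of these two primitives, under the assumption of simple single-method interfaces (FlexMV's real class structure is not reproduced here):

```cpp
// Illustrative pipeline primitives: filters pre-process each frame,
// detectors then look for patterns in the filtered result.
#include <opencv2/opencv.hpp>
#include <memory>
#include <vector>

struct Filter {                        // pre-processing stage
    virtual ~Filter() = default;
    virtual cv::Mat apply(const cv::Mat& frame) = 0;
};

struct Detector {                      // pattern-recognition stage
    virtual ~Detector() = default;
    virtual bool detect(const cv::Mat& frame) = 0;  // true when activated
};

struct Pipeline {
    std::vector<std::unique_ptr<Filter>> filters;    // operator-defined order
    std::vector<std::unique_ptr<Detector>> detectors;

    bool process(cv::Mat frame) {
        for (auto& f : filters) frame = f->apply(frame);
        for (auto& d : detectors)
            if (d->detect(frame)) return true;       // e.g. boundary found
        return false;
    }
};
```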
Fig. 2. Video stream processing pipeline
To understand the following discussion, it
is necessary to describe a simplified structure of task execution in our
control system. In addition to conventional G-code commands and tool commands,
the control program may contain extended commands representing predefined sets
of actions with deterministic results. These extended commands may include tool
control routines, G-code sequences, or may temporarily delegate execution to
other programs.
Thus, from the operator’s perspective, the
functionality for detecting the drawing’s zero point is implemented as a call
to one such extended command with parameters. When FlexCNC encounters a zero
point search command, it transfers execution to FlexMV; the result is the tool
positioned at the starting point for the main program run. This approach
supports a division of labor between staff: the technologist prepares and
debugs the program once for a batch of parts, while the operator executes it
without having to consider technological details or positioning features.
When FlexMV receives an assignment to
search for a zero point formed by height differences, the program activates the
finite state machine FindEdgeSequence, a simplified scheme of which is shown in
Fig. 3.
At the first stage, preliminary edge recognition is carried out. Parameters of
detectors and filters (stSearchConnect) are loaded into the video stream
pipeline, and when the detector reports activation, movement (stSearchMovement)
begins in the direction of the expected workpiece edge. To define the direction
of motion and correct the position calculations, the extended command specifies
the motion axis (X, Y, Z, A, B) and the frame direction (vertical or
horizontal). The motion axis is required to generate the search trajectory
properly. The frame direction parameter is needed because, in multi-axis tasks,
the direction of axis movement cannot be uniquely tied to the direction of
object movement in the 2D image. Automatic determination of the frame direction
is possible by analyzing frame shifts during positive and negative axis
movement, but this takes extra time and requires bidirectional motion from the
initial position, which may be inadmissible due to workpiece geometry.

During motion, the detector operates asynchronously, implementing recognition
via the modified BFS algorithm (described in the next section). When a boundary
is detected, the detector sends an "objects detected" signal, initiating the
transition of the finite state machine to stStopMovement. In this state, FlexMV
halts motion and, after receiving feedback from FlexCNC (the PC-based CNC
software) confirming the stop ("movement stopped"), transitions to
stCheckBorder, where it checks whether the boundary remains within the field of
view. This check is necessary because the kinematics cannot stop instantly, and
a high initial speed may cause an overshoot beyond the field of view.

If, after stopping, the boundary is still confidently recognized,
FindEdgeSequence calculates its image coordinate. In stSetFirstPos, the current
motion axis coordinate (firstPosMm) and the boundary position (firstPosPx) are
stored. Then, in stSetROI, a region of interest is set at the opposite end of
the field of view, to which the recognized boundary must be moved. Motion is
repeated in stSearchMovement in the same direction but at reduced speed. Upon
detection in the new ROI, motion halts in stStopMovement, and the positions
secondPosMm and secondPosPx are stored in stSetSecondPos. Then, in
stCalcBorderPosition, the conversion coefficient is calculated:
pxToMm = (secondPosMm - firstPosMm) / (secondPosPx - firstPosPx).

This coefficient converts image pixels into millimeters of axis motion.
Finally, in stMoveToEdgeCoordinate the tool is moved to the detected boundary
coordinate, recalculated into millimeters using pxToMm. FlexMV then returns
control to FlexCNC, which assigns the current position as the coordinate system
origin and executes the remainder of the program.
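As a quick numerical illustration (the values are made up for the example): if the axis coordinate changes from firstPosMm = 10.0 mm to secondPosMm = 14.0 mm while the boundary image moves from firstPosPx = 80 px to secondPosPx = 560 px, then pxToMm = 4.0 / 480 ≈ 0.0083 mm/px, so a boundary lying 120 px from the current position corresponds to about 1.0 mm of axis travel.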
Fig. 3. Finite state machine for zero point detection
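The state flow described above can be condensed into the following sketch; the state names follow the text, while the enum and the helper function are illustrative assumptions about how the sequence could be organized.

```cpp
// Condensed view of the FindEdgeSequence states described in the text.
enum class State {
    stSearchConnect,        // load detector/filter parameters into the pipeline
    stSearchMovement,       // move toward the expected workpiece edge
    stStopMovement,         // halt motion, await "movement stopped" from FlexCNC
    stCheckBorder,          // verify the boundary is still in the field of view
    stSetFirstPos,          // store firstPosMm / firstPosPx
    stSetROI,               // place the ROI at the opposite end of the frame
    stSetSecondPos,         // store secondPosMm / secondPosPx
    stCalcBorderPosition,   // compute the pixel-to-millimeter coefficient
    stMoveToEdgeCoordinate  // move the tool to the boundary, return to FlexCNC
};

// pxToMm as defined above: millimeters of axis travel per image pixel.
double pxToMm(double firstPosMm, double secondPosMm,
              double firstPosPx, double secondPosPx)
{
    return (secondPosMm - firstPosMm) / (secondPosPx - firstPosPx);
}
```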
As noted earlier, boundary detection is
based on recognizing surface height differences, which in a 2D image appear as
contrast edges: the darker part corresponds to a surface located further from
the focal plane of the camera lens than the lighter part. A modified
breadth-first search (BFS) algorithm is used. The algorithm (Fig. 4) proceeds
step by step (a code sketch is given below):
1. Input image binarization using an inverse
threshold, resulting in a clear separation into depth zones: lighter pixels are
treated as 1, darker as 0.
2. Initialization of a matrix of the same size as the input image with values
of -1, followed by subtraction of the binary matrix from it, yielding values of
-1 and -2.
3. Distance map construction: each pixel is
assigned a value corresponding to its distance from the starting point,
excluding pixels with -2.
4. Reverse traversal of the distance map: the
algorithm sequentially moves to neighboring pixels with lower depth values,
while evaluating invalid values (-2) in the neighborhood, forming a trajectory
back to the starting point. Coordinates of intermediate nodes are stored in an
array for later visualization and boundary calculation.
5. Boundary coordinate calculation: the pixel
coordinate of the boundary is computed as the arithmetic mean of all boundary
node coordinates.
To improve stability, edge pixels of the
image are checked first, enabling quick detection of contours crossing the
frame. Pre-exclusion of light pixels reduces the search area and noise
influence.
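Assuming the search starts from a seed pixel inside the dark zone, and collecting boundary-adjacent pixels during the forward pass (a simplification of the reverse traversal in step 4), the procedure can be sketched as follows; this is an illustration, not the FlexMV implementation.

```cpp
// Sketch of the boundary search under the stated assumptions.
#include <opencv2/opencv.hpp>
#include <queue>
#include <vector>

cv::Point2d findBoundary(const cv::Mat& gray, cv::Point seed, double thr)
{
    // Step 1: binarization -- lighter pixels become 1, darker become 0.
    cv::Mat bin;
    cv::threshold(gray, bin, thr, 1, cv::THRESH_BINARY);
    bin.convertTo(bin, CV_32S);

    // Step 2: dist = -1 - binary, so dark pixels hold -1 (traversable)
    // and light pixels hold -2 (excluded from the search).
    cv::Mat dist = -1 - bin;
    CV_Assert(dist.at<int>(seed) == -1);    // seed must lie in the dark zone

    // Step 3: BFS distance map over the traversable (-1) pixels.
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    std::queue<cv::Point> q;
    std::vector<cv::Point> boundaryNodes;
    dist.at<int>(seed) = 0;
    q.push(seed);
    while (!q.empty()) {
        cv::Point p = q.front(); q.pop();
        bool touchesLight = false;
        for (int k = 0; k < 4; ++k) {
            cv::Point n(p.x + dx[k], p.y + dy[k]);
            if (n.x < 0 || n.y < 0 || n.x >= dist.cols || n.y >= dist.rows)
                continue;
            int& v = dist.at<int>(n);
            if (v == -2) { touchesLight = true; continue; }  // depth edge
            if (v == -1) { v = dist.at<int>(p) + 1; q.push(n); }
        }
        // Step 4 (simplified): record nodes adjacent to excluded (-2) pixels
        // instead of re-walking the distance map in reverse.
        if (touchesLight) boundaryNodes.push_back(p);
    }

    // Step 5: boundary coordinate = mean of the collected node coordinates.
    cv::Point2d mean(0.0, 0.0);
    for (const cv::Point& p : boundaryNodes) { mean.x += p.x; mean.y += p.y; }
    if (!boundaryNodes.empty()) mean *= 1.0 / boundaryNodes.size();
    return mean;                            // (0, 0) if nothing was found
}
```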
Fig. 4. Boundary recognition algorithm
After applying this algorithm, the output
is an array of boundary points in pixel coordinates, enabling visualization on
the input image (Fig. 5) and calculation of the boundary position.
Fig. 5. Boundary recognition on the test specimen
The developed method for detecting the
drawing’s zero point using an optical video channel and the FlexMV software
module solved the problem of positioning workpieces with curved surfaces in
multi-axis CNC laser systems. Unlike traditional contact-based methods and
expensive fixtures, the proposed approach identifies surface height differences
by interpreting their image boundaries, thereby ensuring precise alignment of
the control program with the actual workpiece position.
Practical application required
configuration of filter, detector, and lighting parameters at the initial
integration stage. However, these settings are determined not for a specific
part but for equipment and material characteristics, simplifying transitions
between production batches and minimizing reconfiguration time.
As a result of implementing the new functionality
into the control system, workpiece positioning time was reduced, the influence
of the human factor was decreased, and process repeatability and reliability in
serial production were improved.
The authors express their gratitude to the
management of the Lasers and Equipment TM group of companies for assistance in
providing material and technical support for conducting experimental studies
and process modeling.
1. Molotkov A.A., Tretiyakova O.N. On possible approaches
to visualizing the process of selective laser melting // Scientific
Visualization. 2019. Vol. 11. No. 4. Pp. 1–12.
2. Molotkov A.A., Tretiyakova O.N. Application of
machine vision in laser technologies // Proceedings of MAI (electronic journal).
2022. No. 127.
3. Molotkov A.A., Tretiyakova O.N., Tuzhilin D.N.
About development and application of a software platform for machine vision for
various laser technologies // Scientific Visualization. 2022. No. 5. Pp.
108–118.
4. Molotkov A.A., Tretiyakova O.N. Application of
machine vision methods and mathematical modeling for developing technologies of
electronic device fabrication // Instruments. 2022. No. 4. Pp. 55–58.
5. Molotkov A.A., Saprykin D.L., Tretiyakova O.N.,
Tuzhilin D.N. Development of a software suite for creating industrial laser
technological equipment // Instruments. 2022. No. 5. Pp. 15–22.
6. Molotkov A.A., Tretiyakova O.N. Testing
technological regimes in the development of SLM technology // Instruments. 2023.
No. 8. Pp. 44–47.
7. Tretyakova O.N., Tuzhilin D.N., Shamordin A.A.
Research and Application of Machine Vision Algorithms for Defect Detection in
Additive Technologies // Scientific Visualization. 2025. Vol. 17. No. 1. Pp.
114–121.
8. Afanasyev M.Ya., Fedosov Yu.V., Krylova A.A.,
Shorokhov S.A. Application of machine vision in problems of automatic tool
positioning in modular equipment // Izvestiya Vysshikh Uchebnykh Zavedeniy. Instrument
Engineering. 2020. Vol. 63. No. 9. Pp. 830–839.
9. Peshkova A.M., Kuts M.S., Petrukhin V.Yu.
Application of computer vision in workpiece positioning on CNC machines //
Young Scientist. 2020. No. 52. Pp. 174–177.
10. Mozhaev
R.K. et al. Application of machine vision technology to improve accuracy of
focused laser action on microelectronic structures in research and
micromachining processes // Information Technology Security. 2023. Vol.
30. No. 4. Pp. 150–161.