This paper is an extended version of a contribution presented
at the GraphiCon 2025 conference.
Working with multidimensional data is one of the most interesting and promising areas of scientific visualization. This includes, in particular, the visual analysis of multidimensional functions: the probability density function in the Boltzmann equation (in the simplest case a function of time, three spatial coordinates, and three velocities), the multiparameter solution (1) of the Navier–Stokes equations obtained by solving a vector problem in the parameter space, a wave function in the configuration space arising in many-particle problems of quantum mechanics, and many others.
However, visualization of multidimensional functions (even in the case of a scalar function) for dimensions greater than three is complicated both by technical problems (the required memory and performance grow catastrophically with the problem dimension: “the curse of dimensionality”) and by geometric problems associated with the difference between the properties of multidimensional spaces and those of the standard two- and three-dimensional spaces [1,2]. First of all, these include the concentration-of-measure effect, in which most of the volume of a body is concentrated near its surface. The volume of an object of characteristic size $a$ (along each coordinate) in $n$-dimensional space scales as $a^n$.
This affects both the representation of known
functions (especially in the form of two-dimensional sections) and the
approximation of unknown multidimensional functions (when searching for their
singularities, maxima, minima, etc.).
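The concentration-of-measure effect is easy to check numerically: in the unit $n$-cube, the fraction of volume lying within distance $\varepsilon$ of the boundary is $1-(1-2\varepsilon)^n$, which tends to 1 as $n$ grows. A minimal sketch (the numbers below are illustrative, not from the paper):

```python
# Fraction of the unit n-cube's volume within distance eps of its boundary.
# It equals 1 - (1 - 2*eps)**n and tends to 1 as the dimension n grows,
# illustrating the concentration of measure near the surface.
def boundary_shell_fraction(n: int, eps: float) -> float:
    return 1.0 - (1.0 - 2.0 * eps) ** n

for n in (2, 3, 6, 20, 100):
    print(n, round(boundary_shell_fraction(n, 0.05), 4))
```

Already at $n = 6$ almost half of the volume lies in a 5%-thick shell, and at $n = 100$ essentially all of it does.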
Multidimensional data is typically defined not on a regular multidimensional grid but on a scattered set of points with a much smaller number of nodes.
A common approach to visualizing and analyzing multidimensional data is dimensionality reduction and the transition to two- and three-dimensional objects [3]. Dimensionality reduction can be accomplished either by intuitively selecting the most important variables or in a more formal way, for example, using principal component analysis (PCA). Also often used is a set of sections by planes along some path (a tour path) [4, 5]; this allows one to find and visualize certain data features, such as holes. We consider data as a set
of function values on a certain ensemble of points in space (hypercube,
hypersphere). Situations are common where a multidimensional function can be
defined by a certain ensemble of points or implicitly, using some algorithm. In
both of these cases, the simplest approach seems to be constructing the
function at the nodes of a regular grid. Unfortunately, in the multidimensional
case, constructing a regular grid is essentially prohibited by the curse of
dimensionality (in the 6-dimensional case, using 100 nodes for each coordinate, we obtain $100^6 = 10^{12}$ grid nodes, which creates serious difficulties for standard computing equipment). A more realistic approach is to construct the function as values on a random ensemble of nodes.
A clear drawback of this approach is the
failure to take into account the specific features of multidimensional spaces.
For example, Figure 1 shows a cross-section of a 6-dimensional cube (with the same side length along each coordinate) by a plane, which is meant to give a visual impression of the significance of this cube in terms of its size. Unfortunately, this representation is completely inadequate. In reality, the ratio of the volume of this cube to the volume of the cube containing it is quite small, corresponding to a point. Figure 2 shows a visualization of a 6-dimensional function taking the value of unity on this cube. This representation is also inadequate (in six-dimensional space, the figure in question would be more successfully illustrated by a needle).
Thus, a naive 2D visualization of a 6-dimensional figure radically
exaggerates its significance in terms of volume. Moreover, the error in the
average calculated over these nodes is independent of the dimensionality of the
space and is determined by the number of nodes in the ensemble, which is the
main advantage of the Monte Carlo method for calculating integrals. However,
when using the Monte Carlo method with uniform sampling, there's a high
probability that a feature with a small volume (like the function described
above, Fig. 2) will not be detected, as it will fall between nodes. This is
especially true for resolution in the vicinity of the hypercube/hypersphere
center.
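The risk of missing such a feature can be quantified: $N$ independent uniform nodes all miss a region of relative volume $v$ with probability $(1-v)^N$. A short sketch (the sub-cube side and node counts below are our illustrative assumptions):

```python
import random

def miss_probability(v: float, n_nodes: int) -> float:
    """Probability that n_nodes uniform samples all miss a region of relative volume v."""
    return (1.0 - v) ** n_nodes

# A sub-cube with side 1/2 in 6 dimensions occupies (1/2)**6 = 1/64 of the unit cube.
v = 0.5 ** 6
print(miss_probability(v, 100))  # roughly 0.21: a hundred nodes often miss the feature entirely

# Monte Carlo check: the hit frequency of uniform nodes matches the small volume.
random.seed(1)
n = 100_000
hits = sum(all(random.random() < 0.5 for _ in range(6)) for _ in range(n))
print(abs(hits / n - v) < 0.01)
```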
Fig. 1
Fig. 2
Figures 3 and 4 present a visualization of this section using
coordinate transformation (10), corresponding to the presence of a priori
information in the form (11). These images much better reflect the importance
of the figure in terms of the volume it occupies in six-dimensional space.
Here, we have an effect opposite to the “huge moon” effect, in which
a priori information increases some volume in the visualization space.
Fig. 3
Fig. 4
It should be noted that the space of human visual perception (perceptual space) is not a linear projection of the observed physical space onto the plane of the retina. The simplest example, which
everyone has encountered, is the “huge moon effect”, due to which the
size of the Moon near the horizon appears significantly larger than near the
zenith. Such a transformation is possible by switching to a curvilinear
coordinate system either in Euclidean space or in non-Euclidean space. There is
a large body of work devoted to the analysis of the geometry of the space of
visual perception. In [8], it is argued that perceptual space is a Lobachevsky space (i.e., has constant negative curvature). In [9], it is argued that the
perceptual space is a Riemann space with positive curvature. In [10] it is
indicated that perceptual space is a Riemannian space with a curvature of
variable sign, smoothly changing with distance from the observer: with negative
curvature at close distances and positive curvature at long distances. In the
work [11], experimental data are presented confirming the assumption of the presence
of a curvature of variable sign; however,
according to these data, the change in sign of the curvature occurs upon
reaching the height of the eye level (it changes not with distance, but with
height).
Overall, there's little doubt that perceptual space is a three-dimensional
Riemannian space of variable curvature and sign. However, there's currently no
consensus on its precise structure. This may be due to both individual
differences between people and the dependence of this space on the object of
observation and a priori information about it.
Nevertheless, it can be considered that, within the
framework of human vision, a certain region of external three-dimensional
Euclidean space is projected onto perceptual space (a certain three-dimensional
Riemannian space of variable curvature). This corresponds to a transformation
of the points of the tangent space and the points of the Riemannian manifold
(exponential and logarithmic maps) [12, 13]. With such a transformation, both the metric tensor and the
curvature of the space change.
Thus, within the framework of human vision, three
spaces can be distinguished: physical, visualization space, and perceptual
space. Scientific visualization (as a technique for data analysis) utilizes a
region in some intermediate space, the visualization space, most often
two-dimensional (a sheet of paper, a screen), but in some cases
three-dimensional (stereo glasses). This allows for a transition to a new
curvilinear coordinate system defined by the Jacobian matrix and the
corresponding metric tensor. The distance between points in Euclidean space with Cartesian coordinates is defined as $ds^2 = \sum_i (dx^i)^2$, in non-Euclidean space as $ds^2 = g_{ij}\,dx^i dx^j$, and in Euclidean space with curvilinear coordinates likewise as $ds^2 = g_{ij}\,dq^i dq^j$, where $g_{ij}$ is the metric tensor, which increases (or decreases) a volume element proportionally to $\sqrt{\det g}$.
A natural attempt is to isolate the features of interest using the corresponding metric tensor and a transition to new curvilinear coordinates. Assuming the metric tensor (in the transition from Cartesian to curvilinear coordinates) proportional to some a priori given positive function $f$, we obtain coordinates in which the spatial step is larger where $f$ is larger. Thus, within the framework of Euclidean space, prior information about the importance of a certain region of space can be encoded by the metric tensor field $g_{ij}$ in curvilinear coordinates.
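In one dimension this idea reduces to equidistribution: choosing the new coordinate $\xi$ with $d\xi \propto f(x)\,dx$, i.e. taking $\xi(x)$ as the normalized cumulative integral of the importance function, stretches the regions where $f$ is large. A minimal sketch; the Gaussian-bump importance function is a hypothetical example, not taken from the paper:

```python
import math

def xi_of_x(f, xs):
    """Map xi(x) with d(xi) proportional to f(x)*dx (trapezoid rule), normalized to [0, 1]."""
    cum = [0.0]
    for a, b in zip(xs, xs[1:]):
        cum.append(cum[-1] + 0.5 * (f(a) + f(b)) * (b - a))
    return [c / cum[-1] for c in cum]

# Hypothetical importance function: a bump around x = 0.5.
f = lambda x: 1.0 + 9.0 * math.exp(-((x - 0.5) / 0.1) ** 2)
xs = [i / 1000 for i in range(1001)]
xi = xi_of_x(f, xs)

# The map is monotone, and uniform steps in xi concentrate nodes near x = 0.5:
print(xi[510] - xi[490] > 3 * (xi[110] - xi[90]))  # True
```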
Currently, the visualization space is generally (with the exception of medieval paintings [10]) assumed to be Euclidean ($g_{ij} = \delta_{ij}$).
However, the question of the advisability of using a Euclidean metric for the
visualization space is nontrivial. On the one hand, to avoid additional
distortions, it would be natural for the visualization space to have the same
metric as the perceptual space. On the other hand, when visualizing
multidimensional spaces, the transition from a Euclidean metric to a
non-Euclidean one in the visualization space may be determined by entirely
different reasons, and it may turn out that the metric of the visual space
should not coincide with the metric of the perceptual space.
But in general, the transition to a non-Euclidean
metric for the visualization space does not seem unnatural, since it is
ultimately already accomplished in the transition from three-dimensional
physical to three-dimensional perceptual.
Next, we will consider the
visualization possibilities provided by the transition to both curvilinear
coordinates in Euclidean space and non-Euclidean space.
In [10] it is proposed to model perceptual space using the Hilbert–Einstein equations. Indeed, in a fairly general case, the metric of non-Euclidean space can be described by Hilbert–Einstein-type equations

$R_{ij} - \frac{1}{2} R\, g_{ij} = \varkappa\, T_{ij}$,   (2)

which relate the Ricci tensor $R_{ij}$ (the contraction of the rank-4 curvature tensor, expressed through the Christoffel symbols $\Gamma^k_{ij}$), the scalar curvature $R$ (the Ricci scalar), the metric tensor $g_{ij}$, and the stress-energy tensor $T_{ij}$.
The Hilbert-Einstein equations can
be used to describe Riemannian
space with both positive and negative curvature.
In general relativity (GR), the Hilbert–Einstein equations are derived from the stationarity conditions of the Hilbert–Einstein action functional

$S = \int R \sqrt{-g}\, d^4 x$.   (3)
At first glance, using the Hilbert–Einstein equations to analyze the geometry of perceptual space and to calculate its metric tensor appears unjustified. Moreover, the four-dimensional space-time and the Minkowski signature of the pseudo-Euclidean (pseudo-Riemannian) space, characteristic of the Hilbert–Einstein equations in general relativity, are not used in our case.
However, there is a circumstance that justifies this approach.
Equations of the Hilbert–Einstein type are quite universal and are used not only in general relativity but also in information geometry, where the metric tensor is the Fisher information matrix (a tensor in the space of parameters $\theta$ of a probability density $p(x|\theta)$), which has the form:

$g_{ij}(\theta) = \int p(x|\theta)\, \frac{\partial \ln p(x|\theta)}{\partial \theta^i}\, \frac{\partial \ln p(x|\theta)}{\partial \theta^j}\, dx$.   (4)
An asymmetric distance between two probability densities in the same parameter space can be introduced using the Kullback–Leibler measure (relative entropy): $D_{KL}(p\,\|\,q) = \int p(x) \ln \frac{p(x)}{q(x)}\, dx$. For small distances between distribution densities, the Kullback–Leibler measure is described by the Fisher information matrix: $D_{KL}(p_\theta\,\|\,p_{\theta + d\theta}) \approx \frac{1}{2}\, g_{ij}(\theta)\, d\theta^i d\theta^j$. As shown in [13], for this metric tensor one can obtain an equation of type (2).
Analogies between the Hilbert-Einstein action and the Fisher
information measure are also considered in [14,15].
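The quadratic Fisher approximation of the Kullback–Leibler measure can be checked in the simplest case: for two univariate Gaussians with equal variance $\sigma^2$, $D_{KL} = (\Delta\mu)^2 / (2\sigma^2)$, while the Fisher information of the mean is $1/\sigma^2$, so $D_{KL} = \frac{1}{2} F (\Delta\mu)^2$ holds exactly. A small numerical check (the parameter values are illustrative):

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    """Kullback-Leibler divergence D(N(mu1, s1^2) || N(mu2, s2^2))."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

sigma, dmu = 2.0, 0.01
fisher = 1.0 / sigma**2                 # Fisher information of the mean parameter
kl = kl_gauss(0.0, sigma, dmu, sigma)
print(kl, 0.5 * fisher * dmu**2)        # the two values coincide
```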
It should be noted that the Fisher
matrix is the inverse covariance matrix and belongs to a Riemannian space with
non-positive curvature, which does not correspond to either general relativity
or perceptual space.
Unfortunately, the nature of the tensor on the right-hand side (analogous to the energy-momentum tensor) is not specified in these works (in [13] it is designated as "some statistical restrictions"). In our opinion, it can be considered as a priori information about the metric. If we assume that the geometry of the perceptual space depends on the objects displayed in it, then the a priori information about them plays a role similar to that of the energy-momentum tensor. Moreover, the a priori information influences the metric tensor and the curvature globally, even where $T_{ij} = 0$ (similar to gravity).
When processing the information that arises during projection onto perceptual space, the use of Fisher information and information geometry appears quite natural [16-18]. In particular, in [17]
it is shown that the information geometry approach
(using the Fisher matrix as a metric tensor) is also applicable to
the analysis of the operation of neural networks in processing
visual information. In this work, perceptual space is denoted as “visual space”
and is considered as a statistical
parametric space whose geometry is determined by the metric tensor defined by
the Gaussian distribution. In this case, the curvature of the space turns out
to be negative. The corresponding expressions for the Christoffel symbols are
given in [19].
It should be noted that the Fisher matrix
is defined in a parametric space of sufficiently high dimensionality.
Therefore, if it corresponds to neural networks modeling the human brain, then
our three-dimensional perceptual space is not a consequence of our brain's
architecture, but rather a consequence of its training with three-dimensional
samples. By training with multidimensional samples, difficulties in our
perception can be reduced or overcome. As an example, consider the computer
games HyperRogue (2011) and Hyperbolica (2022), which train our perception using
images from a Riemannian space with negative curvature.
Unfortunately, when solving Hilbert–Einstein-type equations and subsequently constructing coordinates for the resulting solution, a huge number of both fundamental and technical difficulties are encountered, as shown in works on general relativity [20]. Using a known Jacobian matrix $J_{ij} = \partial x_i / \partial \xi_j$, one can construct coordinates by integration. However, it is impossible to construct coordinates from the metric tensor $g_{ij}$ alone,
and the construction of a metric tensor from the Hilbert–Einstein equations themselves is very nontrivial. This is due to the fact that the 10 Hilbert–Einstein equations in general relativity contain 14 independent quantities, so that 4 additional equations (“coordinate conditions”) are required for closure. Moreover, owing to the Bianchi relations [20], 4 more additional equations are required. It should also be noted that equations of this type can have singularities and contain nontrivial physics, often requiring significant effort to interpret. In the interesting case of two-dimensional space, the Hilbert–Einstein equations degenerate (the Hilbert–Einstein tensor vanishes identically, since in this case $R_{ij} \equiv \frac{1}{2} R\, g_{ij}$).
Because of the difficulties listed above, the hope that B. Rauschenbach's idea [10] of modeling perceptual space using Hilbert–Einstein-type equations will be implemented is quite weak. The main reason for this form of the equations is the choice of the Lagrangian density in the Hilbert–Einstein action. It is determined by the scalar curvature (the Ricci scalar), since by Vermeil's theorem the scalar curvature is the only invariant that is linear in the second derivatives of the metric tensor and suitable for constructing a dynamic metric. Within the framework of a metric defined by dynamic information about an object, solving the Hilbert–Einstein equation is impossible (unless one resorts to more complex Lagrangians).
However, the search for simpler approaches to modeling a Riemannian space of
variable curvature (with simpler Lagrangian densities and simpler equations)
appears promising. In particular, in the practice of computational
aerogasdynamics (when constructing a computational grid), a coordinate
transformation satisfying a certain functional of the metric tensor (without
its derivatives) is often implemented. Instead of solving Hilbert-Einstein-type equations,
Winslow equations (or similar ones) are solved, defining
both the coordinate transformation and the metric coefficients. In the standard
version (problems of aerogasdynamics), these equations define the coordinate
transformation in Euclidean space. However, in the language of general
relativity, the solution of the Winslow equations corresponds to the construction of harmonic coordinates in
Riemannian space.
Winslow-type equations allow one to construct a
curvilinear coordinate system, in some cases corresponding to Riemannian space.
In some works, the coordinatization of Riemannian space is accomplished by solving the Beltrami equations [21, 22].
Here we will consider the possibility of
constructing coordinates in Riemannian space using the
Winslow equations with source terms and the Beltrami
equations.
In works on aerogasdynamics [24, 25, 26], a transformation of coordinates from physical space (two- or three-dimensional Euclidean) to computational space is often used to construct non-uniform computational grids. Here we will consider the two-dimensional case, in which the physical plane $(x, y)$ is transformed into the computational plane $(\xi, \eta)$ by solving Poisson equations with source terms. Let us consider the use of the Winslow functional [26, 27, 28]

$I_W = \iint \left( (\nabla \xi)^2 + (\nabla \eta)^2 \right) dx\, dy$.   (5)

It ensures maximum smoothness of the transformation and at the same time prohibits the vanishing of the Jacobian $J = x_\xi y_\eta - x_\eta y_\xi$ (and of the corresponding metric tensor) and the degeneracy of the transformation. The transformations of the derivatives in (5) [28] are obtained from the relations $\xi_x = y_\eta / J$, $\xi_y = -x_\eta / J$, $\eta_x = -y_\xi / J$, $\eta_y = x_\xi / J$, which are differentiated with respect to $x$, $y$.
Let us add a potential $U(\xi, \eta)$ to the standard form of the functional:

$I = \iint \left( (\nabla \xi)^2 + (\nabla \eta)^2 + U(\xi, \eta) \right) dx\, dy$,   (6)

and consider its variation

$\delta I = \iint \left( 2 \nabla \xi \cdot \nabla \delta \xi + 2 \nabla \eta \cdot \nabla \delta \eta + U_\xi\, \delta \xi + U_\eta\, \delta \eta \right) dx\, dy$.   (7)

Integrating (7) by parts, we obtain the corresponding Euler–Lagrange equations in the form of Poisson equations with nonlinear sources (in the standard version, the sources $P$, $Q$ are added artificially):

$\nabla^2 \xi = \tfrac{1}{2} U_\xi \equiv P, \quad \nabla^2 \eta = \tfrac{1}{2} U_\eta \equiv Q$.   (8)
Next, the variables $\xi$, $\eta$ are taken as independent, and the system is solved on a uniform grid:

$g_{22} x_{\xi\xi} - 2 g_{12} x_{\xi\eta} + g_{11} x_{\eta\eta} = -J^2 (P x_\xi + Q x_\eta)$,
$g_{22} y_{\xi\xi} - 2 g_{12} y_{\xi\eta} + g_{11} y_{\eta\eta} = -J^2 (P y_\xi + Q y_\eta)$,   (9)

where $g_{11} = x_\xi^2 + y_\xi^2$, $g_{12} = x_\xi x_\eta + y_\xi y_\eta$, $g_{22} = x_\eta^2 + y_\eta^2$. The system contains distributed sources and allows for the calculation of the coordinates $x(\xi, \eta)$, $y(\xi, \eta)$ at the nodes of a uniform grid in $(\xi, \eta)$. It uses a metric tensor whose form corresponds to the Euclidean metric.
The source term in (8) and (9) is used in the form

$P(\xi, \eta) = -\sum_i a_i\, \mathrm{sign}(\xi - \xi_i)\, \exp\!\left( -c_i \sqrt{(\xi - \xi_i)^2 + (\eta - \eta_i)^2} \right)$   (10)

(and similarly for $Q$ with $\mathrm{sign}(\eta - \eta_i)$), which allows the grid nodes to be compressed in physical space around the points $(\xi_i, \eta_i)$. In our simplest (two-dimensional) case, the variables $(x, y)$ correspond to the original coordinates obtained by a simple plane section. The variables $(\xi, \eta)$ correspond to the transformed coordinates, but the grid on which the function is defined will no longer be uniform.
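The relaxation solution of the inverted system can be sketched numerically. A minimal illustration under our own simplified assumptions (not the paper's actual computation): system (9) with $P = Q = 0$ (pure Winslow smoothing), solved by point relaxation on a uniform $(\xi, \eta)$ grid for a unit square whose top boundary is curved (a hypothetical domain chosen for illustration):

```python
import math

# Winslow (homogeneous TTM) grid generation: relax the interior nodes of the map
# (xi, eta) -> (x, y) so that g22*x_xixi - 2*g12*x_xieta + g11*x_etaeta = 0
# (and the same for y). Boundary nodes stay fixed.
N = 21
h = 1.0 / (N - 1)
top = lambda s: 1.0 + 0.2 * math.sin(math.pi * s)   # curved top boundary (illustrative)
x = [[i * h for j in range(N)] for i in range(N)]
y = [[j * h * top(i * h) for j in range(N)] for i in range(N)]

for _ in range(500):
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            x_xi, x_eta = (x[i+1][j] - x[i-1][j]) / (2*h), (x[i][j+1] - x[i][j-1]) / (2*h)
            y_xi, y_eta = (y[i+1][j] - y[i-1][j]) / (2*h), (y[i][j+1] - y[i][j-1]) / (2*h)
            g11 = x_xi**2 + y_xi**2                  # metric components of the map
            g12 = x_xi * x_eta + y_xi * y_eta
            g22 = x_eta**2 + y_eta**2
            for u in (x, y):
                cross = (u[i+1][j+1] - u[i+1][j-1] - u[i-1][j+1] + u[i-1][j-1]) / 4.0
                u[i][j] = (g22 * (u[i+1][j] + u[i-1][j]) + g11 * (u[i][j+1] + u[i][j-1])
                           - 2.0 * g12 * cross) / (2.0 * (g11 + g22))

# The resulting grid is smooth and untangled: the Jacobian is positive everywhere.
ok = all((x[i+1][j] - x[i-1][j]) * (y[i][j+1] - y[i][j-1])
         - (x[i][j+1] - x[i][j-1]) * (y[i+1][j] - y[i-1][j]) > 0
         for i in range(1, N - 1) for j in range(1, N - 1))
print(ok)
```

Nonzero sources $P$, $Q$ of the form (10) would enter the right-hand side of the update and pull nodes toward the selected points.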
Also quite popular is the “length” functional

$I_L = \iint \left( x_\xi^2 + y_\xi^2 + x_\eta^2 + y_\eta^2 \right) d\xi\, d\eta$,   (11)

which specifies the Cauchy–Riemann conditions in variational form. The equations expressing the stationarity of the "length" functional have the form

$x_{\xi\xi} + x_{\eta\eta} = 0, \quad y_{\xi\xi} + y_{\eta\eta} = 0$.   (12)
Another form of the “length” functional [23] is

$I = \iint \left( w_1 (x_\xi^2 + y_\xi^2) + w_2 (x_\eta^2 + y_\eta^2) \right) d\xi\, d\eta$.   (13)

Here $w_1, w_2 > 0$ are positive weight functions that ensure grid densification in the selected zones. The corresponding equations take the form

$(w_1 x_\xi)_\xi + (w_2 x_\eta)_\eta = 0, \quad (w_1 y_\xi)_\xi + (w_2 y_\eta)_\eta = 0$.   (14)

In the one-dimensional case, (14) reduces to $(w\, x_\xi)_\xi = 0$, i.e. $x_\xi = C / w$: the grid step in physical space is inversely proportional to the weight.
Besides the Winslow and length functionals, other functionals directly related to the components of the metric tensor are sometimes minimized when constructing a grid [24]. These include the orthogonality functional, the area functional, and the Liao functional.
The orthogonality functional is

$I_O = \iint \left( x_\xi x_\eta + y_\xi y_\eta \right)^2 d\xi\, d\eta$   (15)

(obtained taking into account $g_{12} = x_\xi x_\eta + y_\xi y_\eta$). Minimizing it suppresses the off-diagonal terms of the metric tensor. According to [29], the corresponding Euler–Lagrange equations are quasilinear, coupled, and non-elliptic. The solution often does not converge, and when it does, the resulting meshes are folded.
The area functional is

$I_A = \iint J^2\, d\xi\, d\eta, \quad J = x_\xi y_\eta - x_\eta y_\xi$.   (16)

According to [24, 29], the corresponding Euler–Lagrange equations are quasilinear, coupled, and non-elliptic:

$(J y_\eta)_\xi - (J y_\xi)_\eta = 0, \quad (J x_\xi)_\eta - (J x_\eta)_\xi = 0$.   (17)

According to [29], there are no results regarding the existence and uniqueness of solutions of these equations; the solutions are not smooth and have folds.
The Liao functional (the Frobenius norm of the metric tensor) is

$I_{Liao} = \iint \left( g_{11}^2 + 2 g_{12}^2 + g_{22}^2 \right) d\xi\, d\eta$.   (18)
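The metric quantities entering functionals (15), (16), (18) are cheap to evaluate for any explicit map; a sketch for a simple shear map (the map itself is an illustrative assumption, not from the paper):

```python
# For the map x = xi + 0.3*eta, y = eta (a shear), the derivatives are constant:
x_xi, x_eta, y_xi, y_eta = 1.0, 0.3, 0.0, 1.0
g11 = x_xi**2 + y_xi**2
g12 = x_xi * x_eta + y_xi * y_eta
g22 = x_eta**2 + y_eta**2
J = x_xi * y_eta - x_eta * y_xi

orthogonality = g12**2               # integrand of (15): nonzero, the shear is not orthogonal
area = J**2                          # integrand of (16): J = 1, the shear preserves area
liao = g11**2 + 2 * g12**2 + g22**2  # integrand of (18)
print(orthogonality, area, liao)
```

The shear is penalized by the orthogonality and Liao functionals but not by the area functional, which illustrates why sums of functionals are used in practice.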
The stationarity conditions for these functionals are
second-order partial differential equations, some of which are fairly easily
solved in practice and allow for the transformation
of Cartesian coordinates into curvilinear ones and vice versa. Some functionals
(area, orthogonality, Liao) generate solutions containing folds. Sums of these functionals are sometimes
used as a remedy [29].
In [21, 22], for the two-dimensional case, the construction of a coordinate system realizing a given Riemannian metric by solving the Beltrami system is considered. Writing the system as

$\xi_x = \frac{g_{12}\, \eta_x + g_{22}\, \eta_y}{\sqrt{\det g}}, \quad \xi_y = -\frac{g_{11}\, \eta_x + g_{12}\, \eta_y}{\sqrt{\det g}}$,   (20)

and then differentiating (20) with respect to $y$ and $x$, we get the opportunity to eliminate one variable ($\xi$) by subtracting the expressions for the cross derivatives:

$\frac{\partial}{\partial x}\!\left( \frac{g_{11}\, \eta_x + g_{12}\, \eta_y}{\sqrt{\det g}} \right) + \frac{\partial}{\partial y}\!\left( \frac{g_{12}\, \eta_x + g_{22}\, \eta_y}{\sqrt{\det g}} \right) = 0$.   (21)
The solution of this equation allows us to calculate $\eta(x, y)$; then, having the field $\eta$, we calculate $\xi$ by integration of (20). In the numerical calculations, we used an a priori importance function $f(x, y)$, for which the distance deformation is written as $ds^2 = f(x, y)\,(dx^2 + dy^2)$ and which yields the tensor components $g_{11} = g_{22} = f$, $g_{12} = 0$. In [21, 22], a fairly universal form of the metric is used, suitable for Riemannian spaces with both negative and positive curvature.
Using the Beltrami system of equations allows for the simplest method
of constructing coordinates in Riemannian space, but is limited to the
two-dimensional case. Winslow-type equations are more
universal in this regard, but interpreting the functional
in terms of information is difficult.
If we focus on coordinate transformations of the type (9), (10), our primary interest is in determining the form and meaning of the source terms $P$, $Q$. They can be related purely to the geometry of the space or to the properties of the function. An analog is the adjoint function [30], which allows one to visualize the regions where a function has the greatest impact on a functional of interest. The adjoint function can be associated with the metric tensor, which provides a priori information about the function's zones of influence. In particular, in the huge moon effect, a priori information distorts volumes. In our case, we use the conformal mapping and the function (11) as a priori information. By further solving the Beltrami equations (19), we can construct a coordinate transformation similar to those shown in Figures 3 and 4.
A two-dimensional section of a six-dimensional hypercube is presented in Euclidean coordinates (Figs. 1, 2) and in Riemannian coordinates (Figs. 3, 4) with the a priori metric (11) and coordinates constructed by solving the Winslow equation with sources (10). The latter representation is more informative in terms of displaying the volume occupied by the function than a standard section of a hypercube by a plane. This is achieved through a non-Euclidean transformation of the visualization space.
To a
large extent, visualization comes down to mapping some physical space (usually
three-dimensional, but in some cases multidimensional (for example, the
six-dimensional space of velocities and coordinates in the Boltzmann equation))
onto a two-dimensional (sometimes three-dimensional) intermediate space—the
visualization space. The visualization space is then mapped onto the space of
perception (perceptual space), which is a three-dimensional Riemann space with
variable curvature.
The Riemannian geometry of perceptual space has been confirmed by numerous modern experiments, as well as by numerous examples of medieval painting.
In modern practice, visualization space is typically
Euclidean. In this paper, we examine the possibilities offered by the
Riemannian metric in visualization space and the challenges this approach
poses.
As we have already seen (Figs. 1, 2), a naive
visualization of a multidimensional function using plane sections significantly
distorts the representation of the function's volume. A nonlinear coordinate
transformation using the Riemannian metric allows us to correct these
distortions (Figs. 3, 4).
Ideally, the visualization space should have the same
geometry as the perceptual space, which should reduce distortions when
projecting from one space to the other. In the case of two-dimensional
visualization, complete elimination of distortions is unlikely due to the
difference in dimensionality (artistic attempts at this are presented in [10]).
In the case of three-dimensional visualization, it is unclear how to
technically construct an image in non-Euclidean geometry. Moreover, it is
unclear to what extent the geometry of the perceptual space is universal across
individuals.
To visualize multidimensional functions, one can use a
transformed plane with a Riemannian metric, where the metric tensor can be
determined by the geometry and dimensionality of the space, and a priori
information about the significance of some regions.
The use of Hilbert–Einstein-type equations to model a space with a Riemannian metric is justified by the analogy between Riemannian physical and information geometries. However, solving Hilbert–Einstein-type equations is extremely labor-intensive and nontrivial. Instead of the Hilbert–Einstein equations, it is easier to use Winslow-type equations or Beltrami equations to model the perceptual and visualization spaces.
Given that the Fisher matrix is defined in
a parametric space of fairly high dimensionality, determined by the number of
neurons used, our familiar three-dimensional perceptual space is not a
consequence of our brain's architecture, but rather a consequence of its
learning with three-dimensional samples. A transition to a multidimensional
space is quite possible when learning with multidimensional samples. As an
analogy, consider the computer games HyperRogue and Hyperbolica, which allow us
to train our perception using images from a Riemannian space of negative
curvature.
1. Zorich V.A., Multidimensional geometry, functions of very many variables and probability. TVP 59 :3 (2014), 436–451.
2. Milman V.D., Phenomena arising in high dimensions, Uspekhi Mat. Nauk, Vol. 59, No. 1, pp. 157–168, 2004
3. Chen H. et al, Noisy Data Visualization using Functional Data Analysis, arXiv:2406.03396v1 [cs.LG]5 Jun 2024
4. Laa U., Cook D., Andreas Buja, and German Valencia. 2020. Hole or grain? A Section Pursuit Index for Finding Hidden Structure in Multiple Dimensions. arXiv:2004.13327v3 [stat.CO]10 Mar 2022
5. Laa U., Cook D., and Valencia G. 2020. A slice tour for finding hollowness in high-dimensional data. arXiv:1910.10854v1 [stat.CO] 24 Oct 2019
6. Curtis A. and Lomax A., Prior information, sampling distributions, and the curse of dimensionality. Geophysics, 66(2):372–378, March 2001.
7. Laa U., Cook D., Lee S., Burning sage: Reversing the curse of dimensionality in the visualization of high-dimensional data, arXiv:2009.10979v1 [stat.CO] 23 Sep 2020,
8. Luneburg R.K., Metric methods in binocular visual perception, in Courant Anniversary Volume, Eds K. O. Friedricks et al (New York: Interscience) 1948, pp 215-240
9. Battro A.M., Riemannian geometries of variable curvature in visual space: visual alleys, horopters, and triangles in large open fields, Perception, 1976, volume 5, pages 9-23,
10. Rauschenbach B.V., Spatial constructions in painting, Moscow: Nauka, 1980,
11. Koenderink J.J., Van Doorn A.J., and Lappin J.S., Direct measurement of the curvature of visual space. Perception, 29(1):69–79, 2000
12. Barachant A. et al, Classification of covariance matrices using a Riemannian-based kernel for BCI applications, Neurocomputing, 2014
13. Calmet X., Calmet J. Dynamics of the Fisher information metric, arXiv:cond-mat/0410452v1, 2004
14. Chimento L.P., Pennini F., Plastino A., Einstein's gravitational action and Fisher's information measure, Physics Letters A 293 (2002) 133–140
15. Matsueda H., Emergent general relativity from Fisher information metric. arXiv preprint arXiv:1310.1831, 2013.
16. Wagenaar D.A., Information Geometry for Neural Networks, King's College London, 1998
17. Mazumdar D. et al, Investigation of the neural origin of non-Euclidean visual space and analysis of visual phenomena using information geometry, arXiv:2505.13917v1, 2025,
18. Mazumdar D. Representation of 2d frame less visual space as a neural manifold and its information geometric interpretation. arXiv: 2011.13585, 2020.
19. Costa S.I.R., Santos S.A., Strapasson J.E., Fisher information distance: a geometrical reading, arXiv.1210.2354v3[stat.ME], 10 Jan,2014.
20. Arnold D.N., Numerical Problems in General Relativity, Proceedings of the 3rd European Conference on Numerical Mathematics and Advanced Applications, P. Neittaanmaki, T. Tiihonen, P. Tarvainen eds., WS, Singapore, pp. 3–15.
21. Belinsky P. P., Godunov S. K., Ivanov Yu. B., Yanenko N. N., Application of one class of quasiconformal mappings for constructing difference grids in domains with curvilinear boundaries, Zh. Vychisl. Mat. Mat. Fiz., 1975, v. 15, no. 6, pp. 1499–1511
22. Bers L. et al., Partial Differential Equations, Moscow: Mir, 1966
23. Knupp P.M. and Steinberg. S. Fundamentals of Grid Generation. CRC Press., 1993.
24. Khattri S.K., Grid Generation and Adaptation by Functionals, arXiv:math/0607388v1 [math.NA]17 Jul 2006
25. Charakhchyan A.A. and Ivanenko S.A., A Variational Form of the Winslow Grid Generator, JCP 136, 385–398 (1997)
26. Brackbill J.U. and Saltzman J.S., Adaptive zoning for singular problems in two dimensions, JCP. 46, 342 (1982)
27. Tishkin V. F., Methods for constructing computational grids, Moscow State University, 2016
28. Fortunato M., Persson P. O., High-order unstructured curved mesh generation using the Winslow equations, JCP, V. 303, pp 1-14 2016,
29. Tinoco J.G., Barrera P. and Cortes A., Some Properties of Area Functionals in numerical Grid Generation. X Meshing Round Table, Newport Beach, California, USA, 2001.
30. Alekseev A.K., Bondarev A.E. Application of adjoint equations and visual representation of adjoint parameters in problems of flow identification and control, Preprint of Keldysh Institute of Applied Mathematics of the Russian Academy of Sciences, No. 50, 2011, 24 p.