## Introduction

The notion of matrix determinant is an elementary one which finds its earliest relevance in quantifying changes in volumes of domains under affine transformations. Formally, given an $n \times n$ matrix $A = (a_{ij})$, the determinant of $A$ is defined as

$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)},$$

where $S_n$ denotes the set of permutations of $n$ letters written in a straight line and $\operatorname{sgn}(\sigma)$ denotes the sign of a permutation $\sigma$ (where we note that $\operatorname{sgn}$ returns $+1$ or $-1$ only). Now, suppose the entries of $A$ are real-valued measurable functions defined over a common domain $\Omega \subseteq \mathbb{R}^d$ and they are members of the Lebesgue space $L^1(\Omega)$. Then, define the *integral of the matrix $A$* through the map $I$ whereby

$$I[A] := \left( \int_\Omega a_{ij}\,dx \right)_{1 \le i,j \le n}.$$

We may denote $I[A]$ simply by

$$\int_\Omega A\,dx,$$

because $I$ performs integration entry-wise on the matrix $A$. In this way, linearity of the integral is maintained, and we can give meaning to expressions such as

$$\int_\Omega A\,dx \ge \int_\Omega B\,dx$$

by employing the usual partial ordering of square matrices through the notion of positive definiteness (called the Loewner order; see the *Loewner order* Wikipedia page). As such, we say that $\int_\Omega A\,dx \ge \int_\Omega B\,dx$ provided the matrix

$$\int_\Omega (A - B)\,dx = \int_\Omega A\,dx - \int_\Omega B\,dx$$

is positive semi-definite, which is to say

$$\xi^\top \left( \int_\Omega (A - B)\,dx \right) \xi \ge 0 \quad \text{for all } \xi \in \mathbb{R}^n.$$

Since $I$ maps square matrices to square matrices, we can construct the following notion of the *determinant integral* of a square matrix $A$.

**Definition 1.** (Determinant Integral) *We say the determinant integral of an $n \times n$ matrix $A$ with real-valued entries in $L^1(\Omega)$ is the quantity given by*

$$\det\left( \int_\Omega A\,dx \right),$$

*where $\int_\Omega A\,dx$ is the integral of $A$.*

**Example 1.** *One can show that the following determinant integral of the given matrix returns whenever and . I produced this result on New Year’s Eve 2019:*
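As a quick numerical illustration of Definition 1 (a sketch only; the matrix below is an arbitrary choice, not the example from the text), one can integrate a matrix of functions entry-wise over $\Omega = (0,1)$ and then take the determinant. For comparison we also integrate the determinant itself, which gives a different value in general:

```python
# Numerical sketch of the determinant integral det(∫_Ω A dx) for a
# hypothetical 2x2 matrix of functions on Ω = (0, 1).
import numpy as np

def integrate(y, x):
    """Composite trapezoidal rule along the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

x = np.linspace(0.0, 1.0, 100_001)

# A(x) = [[x, 1], [1, 3x^2]] -- an illustrative choice, not from the text.
A = np.array([[x, np.ones_like(x)],
              [np.ones_like(x), 3.0 * x**2]])

IA = integrate(A, x)                  # entry-wise: [[1/2, 1], [1, 1]]
det_of_integral = np.linalg.det(IA)   # det(∫A) = (1/2)(1) - (1)(1) = -1/2

detA = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # det A(x) = 3x³ - 1
integral_of_det = integrate(detA, x)           # ∫ det A = 3/4 - 1 = -1/4
```

Here $\det\int_\Omega A\,dx = -\tfrac12$ while $\int_\Omega \det A\,dx = -\tfrac14$, so the two operations do not commute in general.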

## Commutativity

A natural question to ask is: when can we interchange the integral and determinant without changing the result? In other words, what are the most general conditions on the matrix $A$ which guarantee that

$$\det\left(\int_\Omega A\,dx\right) = \int_\Omega \det A\,dx\,?$$

### Case $n = 1$

Commutativity trivially holds since

$$\det\left(\int_\Omega A\,dx\right) = \int_\Omega a_{11}\,dx = \int_\Omega \det A\,dx.$$

### General Set Up for $n \ge 2$

In the previous case, our assumption of the entries being in $L^1(\Omega)$ suffices for the integral to converge. However, this may not be sufficient in general, since an arbitrary product of functions in $L^1(\Omega)$ may not be integrable over $\Omega$. Therefore, what conditions on the entries of $A$ ensure that each of the products $\prod_{i=1}^n a_{i\sigma(i)}$ is in $L^1(\Omega)$? We may stipulate that entries in the $i$-th row are in $L^{p_i}(\Omega)$, for some appropriate exponents $p_i$ with $\sum_{i=1}^n \frac{1}{p_i} = 1$ and $p_i > 1$ for each $i$. In this way, we require entries to be in the same Lebesgue space row-wise. We then have, by the generalised Hölder inequality,

$$\left\| \prod_{i=1}^n a_{i\sigma(i)} \right\|_{L^1(\Omega)} \le \prod_{i=1}^n \left\| a_{i\sigma(i)} \right\|_{L^{p_i}(\Omega)} < \infty$$

for each permutation $\sigma \in S_n$, ensuring that the integral

$$\int_\Omega \det A\,dx = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \int_\Omega \prod_{i=1}^n a_{i\sigma(i)}\,dx$$

converges and is finite. But then, we also need to ensure that each entry integral $\int_\Omega a_{ij}\,dx$ converges so that the determinant integral of $A$ is finite. Under the assumption that $a_{ij} \in L^{p_i}(\Omega)$ ($1 \le i, j \le n$) with some exponents for which $\sum_{i=1}^n \frac{1}{p_i} = 1$ and $p_i > 1$ for all $i$, we could assume $\Omega$ is bounded to conclude that each entry of $A$ is in $L^1(\Omega)$, by Hölder’s inequality. However, we would like to keep the domain $\Omega$ as arbitrary as possible. So, instead we will assume the following condition:

(H1): The entries of the matrix $A$ are real-valued measurable functions with $a_{ij} \in L^{p_i}(\Omega) \cap L^1(\Omega)$ for all $1 \le i, j \le n$, where $p_1, \dots, p_n$ are given exponents such that $\sum_{i=1}^n \frac{1}{p_i} = 1$ and $p_i > 1$ for all $i$.

Denote the set of matrices satisfying (H1) by $\mathcal{M}_{\mathbf{p}}(\Omega)$, where $\mathbf{p} = (p_1, \dots, p_n)$ is the vector storing some given exponents that are compatible with (H1).

### Closure of the (H1) Class Under Matrix Inversion

As it will soon become an important issue, let us look at the following question:

*For an invertible matrix $A$ satisfying (H1), does its inverse $A^{-1}$ also satisfy (H1)?*

To start, consider the simplest case $n = 2$ with $p_1 = p_2 = 2$. Suppose we have a matrix $A$:

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

defined a.e. in $\Omega$

that is invertible a.e. with inverse

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$

a.e. in $\Omega$. Note that the entries of $A$ are all in $L^2(\Omega) \cap L^1(\Omega)$. Further, the inverse of $A$ satisfies (H1) provided the following expressions are all in $L^2(\Omega) \cap L^1(\Omega)$:

(1) …… $\dfrac{d}{ad - bc}, \qquad \dfrac{-b}{ad - bc}, \qquad \dfrac{-c}{ad - bc}, \qquad \dfrac{a}{ad - bc}.$

Assuming were arbitrary, and a.e in , we see that and are both in if and only if . Thus, in this scenario the diagonal elements of are necessarily in if . On the other hand, we generally don’t know whether or not

is integrable, even though the corresponding entry of $A$ lies in its assumed Lebesgue space due to the (H1) assumption. Because the determinant of $A$ is nonzero a.e. in $\Omega$, one reasonable assumption to impose on $A$ for our discussion is that zero is not a limit point of the image of $\det A$ over $\Omega$, i.e. there exists a constant $\delta > 0$ such that

$$|\det A| = |ad - bc| \ge \delta$$

a.e. in $\Omega$. Assuming this, we deduce that

$$\left|\frac{a}{ad - bc}\right| \le \frac{|a|}{\delta}, \quad \left|\frac{b}{ad - bc}\right| \le \frac{|b|}{\delta}, \quad \left|\frac{c}{ad - bc}\right| \le \frac{|c|}{\delta}, \quad \left|\frac{d}{ad - bc}\right| \le \frac{|d|}{\delta},$$

which, by (H1), implies the four expressions in (1) are in $L^2(\Omega) \cap L^1(\Omega)$. Hence $A^{-1}$ satisfies (H1) with the same exponents, for whichever $\delta$ is given. In summary we have deduced

**Proposition 1.** *Let $\Omega \subseteq \mathbb{R}^d$ be a domain, with $d \ge 1$ given. Suppose the $2 \times 2$ matrix $A$, with real-valued measurable entries, is such that*

- *$A$ satisfies (H1) with $p_1 = p_2 = 2$, and*
- *there exists a constant $\delta > 0$ for which $|\det A| \ge \delta$ a.e. in $\Omega$.*

*Then, $A$ is invertible a.e. with inverse $A^{-1}$ also satisfying (H1).*
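A numerical sanity check of this proposition (a sketch with a hypothetical matrix, not taken from the text): the determinant below satisfies $\det A = 4 - \sin^2 + \tfrac14\cos^2 \ge 3$, so each entry of the inverse is dominated pointwise by an $L^2$ function divided by $\delta = 3$, and its $L^2$ norm is finite.

```python
# Sketch: 2x2 matrix of functions on (0,1) with det bounded below,
# whose inverse entries therefore have finite L² norms.
import numpy as np

def integrate(y, x):
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

x = np.linspace(0.0, 1.0, 200_001)
s, c = np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)

# Hypothetical matrix A = [[2+s, c/2], [-c/2, 2-s]]:
# det A = 4 - s² + c²/4 >= 3, so δ = 3 works.
a, b = 2.0 + s, 0.5 * c
cc, d = -0.5 * c, 2.0 - s
det = a * d - b * cc

delta = det.min()                          # uniform lower bound on det A
inv_entries = [d / det, -b / det, -cc / det, a / det]
l2_norms = [np.sqrt(integrate(e**2, x)) for e in inv_entries]
```

Each $L^2$ norm is finite (indeed at most $1$ here, since every inverse entry is bounded by $1$ in modulus on this unit-measure domain).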

Now let’s consider the case of general exponents. Let $p, q \in (1, \infty)$ be given such that $\frac{1}{p} + \frac{1}{q} = 1$. We will assume once more that $|\det A| \ge \delta$ a.e. in $\Omega$ for some constant $\delta > 0$, so $A$ is invertible a.e. With $A$ written as

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

the inverse of $A$ is given by

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

One can show that, provided for all , then This follows by visiting each row of and checking that every product term is in regardless of whether is finite or infinite. As an example, let’s consider the first row of . Since are in , we have

As such, , and likewise . Hence, , as Lebesgue spaces are vector spaces. Furthermore, suppose . Then, clearly . If , we already know that , so there is nothing to check in this case. If , we have

Hence, in all cases , and likewise . But

Thus, . By assuming , together with the basic assumption that , we see that

as well. In all, the first row of has the properties required by (H1). Essential boundedness of the other entries of , and the (H1) assumption on , imply that the remaining rows of have the properties required by (H1) also. We may summarise our observations in the following proposition.

**Proposition 2.** *Let $\Omega \subseteq \mathbb{R}^d$ be a domain, and let $p, q \in (1, \infty)$ with $\frac{1}{p} + \frac{1}{q} = 1$. Suppose the $2 \times 2$ matrix $A$, satisfying (H1) with exponents $p, q$, is such that*

- *the entries of $A$ are essentially bounded over $\Omega$ for all indices, and*
- *there exists a constant $\delta > 0$ for which $|\det A| \ge \delta$ a.e. in $\Omega$.*

*Then, $A$ is invertible a.e. with inverse $A^{-1}$ also satisfying (H1).*

We may thus use the assumption of essential boundedness to guarantee membership in the (H1) class when $n \ge 3$. Suppose $A$ is invertible almost everywhere in $\Omega$. It is well-known that the inverse of $A$ is given by the formula

$$A^{-1} = \frac{1}{\det A}\,\operatorname{adj}(A),$$

where the adjugate of $A$ is given by

$$\left(\operatorname{adj}(A)\right)_{ij} = (-1)^{i+j} M_{ji}$$

a.e. in $\Omega$, with $M_{ji}$ given in terms of the $(j,i)$-minor of $A$. As $M_{ji}$ is the determinant of the matrix obtained by deleting the $j$-th row and $i$-th column of $A$, this suggests that care needs to be given to developing conditions on $A$ which ensure $A^{-1}$ stays in the class. Just as in the two-dimensional case, we will assume that there exists a constant $\delta > 0$ such that $|\det A| \ge \delta$ a.e. in $\Omega$, to simplify our analysis. Then, $\left|\left(A^{-1}\right)_{ij}\right| \le \delta^{-1}\left|M_{ji}\right|$, leaving us to determine conditions on $A$ which guarantee the minors lie in the required Lebesgue spaces. Let $i, j$ be arbitrary. Then, $M_{ji}$ is of the form

where is the matrix given by

Let’s check that our proposed conditions on imply

for each , which would yield . Since for each , we see that

(2) ……

Now let’s impose that all entries of $A$ are essentially bounded over $\Omega$. Then, all entries of $\operatorname{adj}(A)$ are essentially bounded over $\Omega$ as well. We now check the required memberships. As we saw in the case $n = 2$, we show first that

is finite for each to get . This is clear because

via (2). Next, we check that

is finite for each , whenever , and that is essentially bounded when . Indeed, is essentially bounded as a finite product of essentially bounded functions. Therefore, , as required in the case . For

where the final upper bound is finite due to the previous cases. Hence, the required membership holds for each $i, j$, as $i$ and $j$ were arbitrary. In all, we have deduced sufficient conditions that ensure the inverse of a matrix satisfying (H1) also satisfies (H1).

**Proposition 3.** *Let $n \ge 2$ be given. Suppose $\Omega \subseteq \mathbb{R}^d$ is a domain, and $p_1, \dots, p_n \in (1, \infty)$ with $\sum_{i=1}^n \frac{1}{p_i} = 1$. If the $n \times n$ matrix $A$, satisfying (H1), is such that*

- *the entries of $A$ are essentially bounded over $\Omega$ for all indices, and*
- *there exists a constant $\delta > 0$ for which $|\det A| \ge \delta$ a.e. in $\Omega$,*

*then $A$ is invertible a.e. with inverse $A^{-1}$ also satisfying (H1).*

**Remark 1.** *We would find it of interest to produce an example of a matrix whose inverse satisfies (H1) but for which not all entries are essentially bounded, or the determinant isn’t uniformly bounded away from zero. This we leave as an investigation for the reader as we resume our analysis of the commutativity of integration and determinant for matrices satisfying (H1).*
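The adjugate formula used above can be sanity-checked numerically (with an illustrative constant matrix, not from the text); the convention is that the $(i,j)$ entry of $\operatorname{adj}(M)$ is $(-1)^{i+j}$ times the minor obtained by deleting row $j$ and column $i$, so that $M^{-1} = \operatorname{adj}(M)/\det M$:

```python
# Sketch of the adjugate/cofactor inverse formula on a 3x3 example.
import numpy as np

def adjugate(M):
    """adj(M)[i, j] = (-1)^{i+j} * minor_{j,i}(M), so M @ adj(M) = det(M) I."""
    n = M.shape[0]
    adj = np.empty_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            # delete row j and column i, then take the determinant
            minor = np.delete(np.delete(M, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])   # det M = 13

inv = adjugate(M) / np.linalg.det(M)
```

The result agrees with `np.linalg.inv(M)`, confirming the sign and index conventions.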

### Non-invertible Matrices

With the (H1) assumption, let’s now consider the case for non-invertible matrices. As a starting example, the equality that we desire in the case $n = 2$ reads

$$\left(\int_\Omega a\,dx\right)\left(\int_\Omega d\,dx\right) - \left(\int_\Omega b\,dx\right)\left(\int_\Omega c\,dx\right) = \int_\Omega (ad - bc)\,dx.$$

Rearrange to get

$$\left(\int_\Omega a\,dx\right)\left(\int_\Omega d\,dx\right) - \int_\Omega ad\,dx = \left(\int_\Omega b\,dx\right)\left(\int_\Omega c\,dx\right) - \int_\Omega bc\,dx.$$

If the matrix $A$ is symmetric with identical diagonal entries (a.e. in $\Omega$), so that $c = b$ and $d = a$, we have

(3) …… $\left(\int_\Omega a\,dx\right)^2 - \int_\Omega a^2\,dx = \left(\int_\Omega b\,dx\right)^2 - \int_\Omega b^2\,dx.$

Notice that this equation implies

$$\left(\int_\Omega a\,dx\right)^2 = \int_\Omega a^2\,dx$$

if and only if

$$\left(\int_\Omega b\,dx\right)^2 = \int_\Omega b^2\,dx.$$

To simplify (3) further, we can assume that the entries of

$$\begin{pmatrix} a & b \\ b & a \end{pmatrix}$$

have zero mean-value over $\Omega$, which is to say

$$\frac{1}{|\Omega|}\int_\Omega a\,dx = \frac{1}{|\Omega|}\int_\Omega b\,dx = 0.$$

Note that interpreting this qualification as *vanishing mean-value* is suitable only for domains with finite $d$-dimensional Lebesgue measure $|\Omega| < \infty$, in which case the integrals of $a$ and $b$ vanish. However, if $|\Omega| = \infty$, we’ll just assume this:

$$\int_\Omega a\,dx = \int_\Omega b\,dx = 0.$$

Consequently, (3) reduces to

$$\int_\Omega a^2\,dx = \int_\Omega b^2\,dx.$$

This can hold provided $a^2 = b^2$ a.e. in $\Omega$, with symmetry and diagonal equality assumed. In fact, without assuming symmetry or equality along the main diagonal of $A$, we find easily that taking the determinant and performing integration are operations that commute for the following set of matrices for $n = 2$:

$$\left\{ A \text{ satisfying (H1)} \;:\; \det A = 0 \text{ a.e. in } \Omega \ \text{ and } \ \det\left(\int_\Omega A\,dx\right) = 0 \right\}.$$

The condition that $\det A = 0$ a.e. in $\Omega$ implies that $A$ has linearly dependent rows or linearly dependent columns necessarily. So, in two dimensions the structure of such matrices is simple.
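A numerical sketch of this set for $n = 2$ (with a hypothetical rank-one example): when one row is a scalar multiple of the other a.e., both $\det A$ and $\det \int_\Omega A\,dx$ vanish, so the two operations trivially agree.

```python
# Rank-one example on Ω = (0,1): second row is 3 times the first,
# so det A = 0 a.e. and the rows of ∫A are also linearly dependent.
import numpy as np

def integrate(y, x):
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

x = np.linspace(0.0, 1.0, 100_001)
f, g = x, x**2                       # illustrative choices, not from the text

A = np.array([[f, g],
              [3 * f, 3 * g]])

IA = integrate(A, x)                 # [[1/2, 1/3], [3/2, 1]]
lhs = np.linalg.det(IA)              # det(∫A) = 1/2 - 1/2 = 0
rhs = integrate(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0], x)   # ∫ det A = 0
```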

**Example 2.** *In the case , consider the matrix*

*defined everywhere in , except at the origin. Then, clearly over . Writing and , we deduce that . Calculating directly, we have*

*which is finite, and likewise*

*Furthermore, *

*We thus conclude for each .*

Returning to the general equality which we desire:

$$\det\left(\int_\Omega A\,dx\right) = \int_\Omega \det A\,dx,$$

we can consider a slightly more general set of conditions that guarantee commutativity. It suffices to assume $\det A = 0$ a.e. in $\Omega$ and $\det\left(\int_\Omega A\,dx\right) = 0$. Thus, both $A$ and its integral contain linearly dependent rows or columns (a.e. in $\Omega$). So, we let

$$\mathcal{N} := \left\{ A \;:\; \det A = 0 \text{ a.e. in } \Omega \ \text{ and } \ \det\left(\int_\Omega A\,dx\right) = 0 \right\}.$$

Therefore, integration and determinant commute for every matrix in $\mathcal{N}$.

**Example 3.** *For and , let for some bounded domain which we’ll later specify. Provided a.e in , either or its transpose has linearly dependent rows. So, without loss of generality, write*

*for some real-valued measurable function defined a.e over for which . Hence,*

*and since , either the rows of , or its transpose, are linearly dependent. For the rest of the discussion, suppose that the rows of are linearly dependent and let’s see where the analysis takes us.*

*This means there exists a real constant such that*

*and*

*or*

(4) ……

*and*

(5) ……

*With given, we have two equations for two unknowns and . Eliminating , we get*

*or*

(6) ……

*where *

*and*

*Suppose further that and are linearly independent in (which we assume is equipped with the standard real inner product). We observe that if is in the orthogonal complement of within , then (6) is satisfied and equations (4), (5) imply that and each have vanishing mean-value over . Consequently, is reduced to the trivial matrix, implying the unknown in equations (4), (5) can be any real number. On the other hand, we can construct an example such that does not annihilate or , with . With such we can see from (4) that*

*whereas is determined by (6). However, this solution is possibly non-unique (outside a set of measure zero) because we can add to it any nontrivial member from the orthogonal complement of in to give another solution to (6). Such a member exists as the ambient space is an infinite-dimensional Hilbert space while the subspace in question is finite-dimensional. This would suggest that to entertain a general discussion on uniqueness, we ought to consider it within the orthogonal complement. Such a discussion will be avoided for the moment.*

*Now, for the example. Let , , and in . Then, we clearly have and *

*Thus (6) is satisfied and *

*So, the following matrix is a non-trivial member of :*

### Invertible Matrices

In the context of invertible matrices, we will analyse commutativity of integration and determinant for orthogonal matrices and triangular matrices.

#### Orthogonal Matrices

Let $Q$ be an $n \times n$ matrix with entries being real-valued measurable functions defined a.e. in $\Omega$. We say $Q$ is an *orthogonal matrix* over $\Omega$ provided $Q^\top Q = I_n$ and $Q Q^\top = I_n$ both hold a.e. in $\Omega$. Therefore, taking the determinant across either orthogonality equation, we find $\det Q$ is $+1$ or $-1$ a.e. in $\Omega$, which is to say there exists a set $\Omega' \subseteq \Omega$ such that $|\Omega \setminus \Omega'| = 0$ and $\det Q(x) = +1$ or $-1$ for given $x \in \Omega'$. In this case, $Q$ is invertible a.e. in $\Omega$ with inverse given by its transpose $Q^\top$. If the subset of $\Omega'$ where $\det Q$ assumes the value $-1$ has zero $d$-dimensional Lebesgue measure, we say that $Q$ is a *special orthogonal matrix* over $\Omega$. Let’s denote the sets of orthogonal matrices and special orthogonal matrices defined a.e. in $\Omega$ by $O(n, \Omega)$ and $SO(n, \Omega)$, respectively. For our discussion, we say that a matrix is in $O(n)$ if it is a real-valued constant matrix that is orthogonal. Further, if $Q \in O(n)$ and $\det Q = 1$, we say $Q$ is in $SO(n)$ as a special orthogonal matrix.

Observe that, when we restrict to constant matrices, the simplest solution to the equation

(7) …… $\det\left(\int_\Omega A\,dx\right) = \int_\Omega \det A\,dx$

is the identity matrix $A \equiv I_n$, which satisfies $\int_\Omega I_n\,dx = |\Omega|\, I_n$ and thus

$$\det\left(\int_\Omega I_n\,dx\right) = |\Omega|^n \quad \text{while} \quad \int_\Omega \det I_n\,dx = |\Omega|.$$

Note that in this case equation (7) reads $|\Omega|^n = |\Omega|$, which implies either $|\Omega| = 1$ if $n \ge 2$, or that $|\Omega|$ is any non-negative number if $n = 1$. Now suppose $Q \in SO(n)$ is such that $A \equiv Q$ solves (7). Then, we find

$$|\Omega|^n = |\Omega|^n \det Q = \det\left(\int_\Omega Q\,dx\right) = \int_\Omega \det Q\,dx = |\Omega|.$$

So, if we search for such solutions to (7), the underlying domain necessarily has unit volume. Thus, define the set

What we just showed implies that no element of $SO(n)$ can satisfy (7) whenever $|\Omega| \ne 1$, but in contrast *all* elements of $SO(n)$ solve (7) when $|\Omega| = 1$. In another case, suppose $Q \in O(n)$ solves (7) but $\det Q = -1$. Then, we find

$$-|\Omega|^n = |\Omega|^n \det Q = \det\left(\int_\Omega Q\,dx\right) = \int_\Omega \det Q\,dx = -|\Omega|.$$

Of course, $|\Omega|$ is non-negative, so we find necessarily $|\Omega|^{n-1} = 1$ and so $|\Omega| = 1$. Therefore, considering the set

we find that if there exists a solution to (7) then

- the domain has unit volume,
- , and
- every element of solves (7).

Notice that since . We’ll soon investigate whether or not this inclusion is strict for arbitrary domains in . It will also be interesting to determine when either set is empty by virtue of choice of domain and matrix dimension .
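For constant special orthogonal matrices the computation above is easy to check numerically (a sketch; the angle is an arbitrary illustrative choice): since $\int_\Omega Q\,dx = |\Omega|\,Q$, equation (7) reduces to $|\Omega|^n = |\Omega|$, which holds exactly when the domain has unit volume.

```python
# Constant Q in SO(2): (7) holds iff |Ω| = 1, since det(∫Q) = |Ω|² det Q.
import numpy as np

theta = 0.7                                       # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # det Q = 1

results = {}
for vol in (1.0, 2.0):                 # |Ω| = 1 versus |Ω| = 2
    lhs = np.linalg.det(vol * Q)       # det(∫Q) = vol² · det Q
    rhs = vol * np.linalg.det(Q)       # ∫ det Q = vol
    results[vol] = (lhs, rhs)
```

With `vol = 1.0` the two sides agree; with `vol = 2.0` they are `4` and `2`, so (7) fails.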

Let’s now consider the following two sets:

Clearly one set contains the other. Now suppose $Q$ is given. Then, $Q$ is an orthogonal matrix and so there exist disjoint sets $\Omega_+, \Omega_- \subseteq \Omega$ such that $|\Omega \setminus (\Omega_+ \cup \Omega_-)| = 0$, and $\det Q(x) = +1$ for all $x \in \Omega_+$ while $\det Q(x) = -1$ for all $x \in \Omega_-$. If $Q$ solves (7), we find

On the other hand, if solves (7), it follows that

which is equivalent to the preceding display. By the above, provided satisfies (7) with , then and so . Moreover, . We have thus shown the following. Given a domain , if there exists a solution to (7) for which , where and are as defined above, then

- the volume of is strictly larger than 1,
- and
- there possibly exists another element of that does not satisfy (7).

In fact, if instead a solution satisfies (7) with the roles of the two sets exchanged, we still have the analogous conclusion. In any case, the volume of the domain must be strictly larger than unity if we seek to establish the existence of such a solution to equation (7), as opposed to assuming unit volume when considering existence over constant matrices. All considered, we wonder how these sets may be characterised.

#### Triangular Matrices

Let $T = (t_{ij})$ be an $n \times n$ matrix of real-valued measurable functions defined almost everywhere in a domain $\Omega$. If $T$ is a triangular matrix that is a solution to equation (7), we arrive at

(7′) …… $\prod_{i=1}^{n} \int_\Omega t_{ii}\,dx = \int_\Omega \prod_{i=1}^{n} t_{ii}\,dx,$

since the determinant of a triangular matrix is the product of its diagonal entries, and the integral of a triangular matrix is again triangular. Notice that none of the off-diagonal entries of $T$ feature in this equation; only its diagonal entries appear. Therefore, without loss of generality, it suffices to assume $T$ is a diagonal matrix in our search for solutions to equation (7′). We will also assume $n \ge 2$, since the above equation trivially holds when $n = 1$. Now, assume $\Omega$ is a domain that is radially symmetric with respect to the origin in $\mathbb{R}^d$ (so $x \in \Omega$ if and only if $-x \in \Omega$). Given such a domain, suppose the set of diagonal entries of $T$ contains an odd number of almost everywhere odd functions while the remaining entries are almost everywhere even functions. It follows that, for some index $i_0$, $t_{i_0 i_0}$ is an odd function which satisfies

$$\int_\Omega t_{i_0 i_0}\,dx = 0,$$

and so

$$\prod_{i=1}^{n} \int_\Omega t_{ii}\,dx = 0.$$

At the same time, the function

$$\prod_{i=1}^{n} t_{ii}$$

is almost everywhere odd over $\Omega$, being a product of an odd number of odd functions with even functions. As such,

$$\int_\Omega \prod_{i=1}^{n} t_{ii}\,dx = 0.$$

In all, equation (7′) holds under these assumptions. But notice that, while our argument shows both sides of (7′) vanish in this case, we may not necessarily have $\det T = 0$ a.e. in $\Omega$. Moreover, we can easily produce examples of solutions to equation (7) that indeed lie outside the sets encountered in our discussion for non-invertible matrices.
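A numerical check of the odd/even argument (a sketch with hypothetical entries): on the symmetric domain $\Omega = (-1, 1)$, take diagonal entries $x$, $x^2$, $1$, so that exactly one is odd. Both sides of (7′) vanish, even though $\det T = x^3$ is nonzero a.e.

```python
# Diagonal T = diag(x, x², 1) on Ω = (-1, 1): both sides of (7′) vanish,
# yet det T = x³ is nonzero almost everywhere.
import numpy as np

def integrate(y, x):
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

x = np.linspace(-1.0, 1.0, 200_001)      # symmetric about the origin
t11, t22, t33 = x, x**2, np.ones_like(x) # one odd entry, two even entries

lhs = integrate(t11, x) * integrate(t22, x) * integrate(t33, x)  # ∫x = 0
rhs = integrate(t11 * t22 * t33, x)                              # ∫x³ = 0
mass = integrate(np.abs(t11 * t22 * t33), x)                     # ∫|x³| = 1/2
```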

**Example 4.** *Let be the open unit ball centred at the origin. Define and over . Then, clearly is radially symmetric with respect to the origin, and in while in It follows that *

*while*

*Then, taking we have an example of an almost everywhere non-degenerate solution in to equation (7′) for any , and for which and for .*

### An Integral-Determinant Equation

In this section we study one consequence of the commutativity of integration and determinant for invertible matrices. When $A$ is a matrix whose determinant integral doesn’t vanish, we know that $\int_\Omega A\,dx$ is invertible. As such, if we also have

(8) …… $\det\left(\int_\Omega A\,dx\right) = \int_\Omega \det A\,dx,$

then

$$\int_\Omega \det A\,dx \ne 0.$$

Provided $A$ satisfies

(9) …… $\int_\Omega A^{-1}\,dx = \left(\int_\Omega A\,dx\right)^{-1}$

and

(10) …… $\det\left(\int_\Omega A^{-1}\,dx\right) = \int_\Omega \det\left(A^{-1}\right)dx,$

we find

(11) …… $\left(\int_\Omega \det A\,dx\right)\left(\int_\Omega \det\left(A^{-1}\right)dx\right) = 1.$

But, we know that

$$\det\left(A^{-1}\right) = \frac{1}{\det A}$$

a.e. in $\Omega$. This implies equation (11) takes the form

(12) …… $\left(\int_\Omega f\,dx\right)\left(\int_\Omega \frac{1}{f}\,dx\right) = 1,$

with $f := \det A \ne 0$ a.e. in $\Omega$. Going forward, we will first study some basic features associated with equation (12). Then, to simplify our discussion on when equations (8), (9) and (10) all hold, we will consider two kinds of matrices, namely

- triangular matrices, and
- orthogonal matrices.

Later on, we will study some interesting examples of functions and domains for which equation (11) holds. It is there that we generate classes of examples with which beautiful loci can be associated.

#### Basic Properties

Firstly, we find that the domain is necessarily bounded by unity whenever satisfying equation (12) is in and positive.

**Proposition 4.** *Let $d \ge 1$ and let $\Omega \subseteq \mathbb{R}^d$ be a domain. Suppose $f$ is a measurable function on $\Omega$ that is positive a.e. in $\Omega$ and satisfies*

$$\left(\int_\Omega f\,dx\right)\left(\int_\Omega \frac{1}{f}\,dx\right) = 1.$$

*Then,*

- *$|\Omega| \le 1$, and*
- *$|\Omega| = 1$ if $f$ is constant a.e. in $\Omega$.*

*Proof.* First of all, observe that if $|\Omega| = \infty$, equation (12) cannot hold since then

$$\left(\int_\Omega f\,dx\right)\left(\int_\Omega \frac{1}{f}\,dx\right) \ge \left(\int_\Omega \sqrt{f}\cdot\frac{1}{\sqrt{f}}\,dx\right)^2 = |\Omega|^2 = \infty.$$

Thus, $|\Omega| < \infty$. Next, for item 1 we find by the Cauchy–Schwarz inequality

$$|\Omega|^2 = \left(\int_\Omega \sqrt{f}\cdot\frac{1}{\sqrt{f}}\,dx\right)^2 \le \left(\int_\Omega f\,dx\right)\left(\int_\Omega \frac{1}{f}\,dx\right) = 1,$$

as required.

For item 2, suppose the contrary holds, that is,

$$f = c \text{ a.e. in } \Omega \text{ for some constant } c > 0, \quad \text{yet } |\Omega| \ne 1.$$

We already know that $|\Omega| \le 1$, so we must have $|\Omega| < 1$. However, we find

$$1 = \left(\int_\Omega f\,dx\right)\left(\int_\Omega \frac{1}{f}\,dx\right) = \left(c\,|\Omega|\right)\left(\frac{|\Omega|}{c}\right) = |\Omega|^2 < 1,$$

giving a contradiction. //
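The Cauchy–Schwarz step can be visualised numerically (a sketch; $f$ below is an arbitrary positive non-constant choice): on a unit-volume domain, $(\int f)(\int 1/f)$ strictly exceeds $1$ unless $f$ is constant.

```python
# On Ω = (0,1): ∫f = 3/2 and ∫1/f = ln 2 for f = 1 + x, so the
# product is ≈ 1.0397 > 1 = |Ω|², illustrating the strict inequality.
import numpy as np

def integrate(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

x = np.linspace(0.0, 1.0, 100_001)   # |Ω| = 1
f = 1.0 + x                          # positive and non-constant

product = integrate(f, x) * integrate(1.0 / f, x)
```

So (12) cannot hold for this $f$ on a unit-volume domain, in line with the proposition.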

**Example 5.** *We find*

*In particular, there holds*

In the upcoming result we unravel what conditions on the domain are required when a function satisfying (12) is allowed to assume negative values. Given a measurable function $f$ over a domain $\Omega$, define its positive and negative parts via $f^+ := \max\{f, 0\}$ and $f^- := \max\{-f, 0\}$, respectively, yielding $f = f^+ - f^-$ a.e. in $\Omega$. Suppose $f^+$ and $f^-$ are positive everywhere in subsets $\Omega_+$ and $\Omega_-$ of $\Omega$, respectively, with the remaining integrability assumptions as in the statement below. After employing a similar argument involving the Cauchy–Schwarz inequality and elementary estimation of integrals, we obtain (somewhat of) a generalisation of Proposition 4 in the following theorem.

**Theorem 1.** *Let be real numbers for which and . Assume is a domain which we partition into three subsets such that , , , , and Suppose is a measurable function over such that*

*and are positive over and , respectively, and**while .*

*If*

*then*

- ,
- ,
*and*

*where*

*Moreover, if over while over , then*

*and*

#### Triangular Matrices

Suppose is upper triangular with

Further, assume satisfies the conditions of Proposition 1, which indicates is invertible with also. The desired equation (8) reads

Provided has unit volume and a.e in , we see that

while

and so (8) is satisfied. Continuing with this case, we investigate (9) which we would like to be satisfied by

and

Then

with inverse

The integral of is given by

From (9) we necessarily have

(13) ……

and

We are then left with (10) which is equivalent to, in the current case,

This holds without having to make any further assumptions on the matrix . Moreover, notice that (13) is actually (11) in disguise since is upper-triangular. If a.e in also, we have the following elementary proposition in which is not identity.

**Proposition 4.** *Let be a domain of unit volume, with given, and let be such that and . Suppose , given by*

*a.e in is in . Then*,

- and

Let’s return to the desired equation (8), namely

(14) ……

and assume it holds. Assuming $A$ satisfies the requirements of Proposition 1, there exists a constant $\delta > 0$ such that $|\det A| \ge \delta$ a.e. in $\Omega$. Hence, neither diagonal entry of $A$ can vanish on a subset of $\Omega$ that has positive $d$-dimensional Lebesgue measure. As such, we calculate the inverse as

a.e in . We again seek to write out the formula given by (9). To this end, we have

with inverse given by

or

due to (14), while

Hence, if and only if

(15) ……

(16) ……

and

(17) ……

Finally, (10) written out reads

(18) ……

Then, by the argument given earlier on, we deduce that

(19) ……

which is precisely

But we observe that with (14), (15), (17), and (18) only, we can derive (19). There is no need for (16) to hold and so there’s no need to satisfy . This leads us to the following result.

**Proposition 5.** *Let be a domain, with , and let be such that and . Suppose is given by*

*a.e in , and satisfies*

- *, and*
- *there exists a constant $\delta > 0$ for which $|\det A| \ge \delta$ a.e. in $\Omega$.*

*If there further hold *

*then*

**Remark 2.** *Since no conditions are placed on the entries off the main diagonal of (apart from those due to membership of in ), we deduce this result also holds in the case of lower triangular matrices. Moreover, we’d like to think of the equality conditions given in the result as pseudo-intuitive relationships, each of which we believe are starting points for interesting numerical analysis. Furthermore, if we allow for inequality, one can ask how easy it is to produce continuous functions which satisfy*

*with a given tolerance . We could also ask functional analytic questions on the above inequality. For example, in what function space does the above inequality describe a closed set?*
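On the numerical question raised in the remark, here is a minimal experiment (the family $f_a = 1 + a\sin(2\pi x)$ on $(0,1)$ is an illustrative choice, not from the text): since $\int_0^1 f_a\,dx = 1$ and $\int_0^1 \frac{dx}{1 + a\sin(2\pi x)} = \frac{1}{\sqrt{1 - a^2}}$, the defect $(\int f)(\int 1/f) - 1$ can be driven below any prescribed tolerance by shrinking $a$.

```python
# Defect (∫f)(∫1/f) - 1 for f_a = 1 + a sin(2πx) on (0,1); analytically
# the defect equals 1/sqrt(1 - a²) - 1 → 0 as a → 0.
import numpy as np

def integrate(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

x = np.linspace(0.0, 1.0, 400_001)
eps = 1e-3                                  # prescribed tolerance

defects = []
for a in (0.3, 0.1, 0.04):
    f = 1.0 + a * np.sin(2 * np.pi * x)     # continuous and positive
    defects.append(integrate(f, x) * integrate(1.0 / f, x) - 1.0)
```

For `a = 0.04` the defect is about `8.0e-4 < eps`, so continuous functions satisfying the inequality within a given tolerance are easy to produce in this family.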

What can be said about upper triangular matrices? If the matrix

is invertible, its inverse is given by

or rather

In this case, (8) reads

(20) ……

Equation (10) for reads

(21) ……

Now,

and if is invertible its inverse is

On the other hand, the integral of is given by

Therefore, (9) implies the following system of equations.

(22) ……

and

Again, we see that some equations aren’t necessary to conclude the relationship we desire. Multiplying (20), (21), and noting equations (22), we deduce

Thus, the equations obtained by equating the non-diagonal entries in (9) aren’t necessary for deriving (11). These calculations, together with Proposition 3, suggest the following generalisation of Proposition 5 to $n \times n$ upper-triangular matrices.

**Theorem 1.*** Let be given. Suppose is a domain, and is such that with . Let be an upper triangular matrix in that satisfies *

- *for all , and*
- *there exists a constant $\delta > 0$ for which $|\det A| \ge \delta$ a.e. in $\Omega$.*

*If *

*and*

*for , then *

#### Orthogonal Matrices

Let’s consider the relation

(23) …… $\left(\int_\Omega \det Q\,dx\right)\left(\int_\Omega \det\left(Q^{-1}\right)dx\right) = 1$

in the context of orthogonal matrices $Q$. Earlier on we described these matrices when we studied the commutativity of integration and determinant (so recall the subsets of $\Omega$ on which $\det Q$ assumes the values $+1$ and $-1$ for an orthogonal matrix $Q$). Suppose $Q$ is a matrix with entries being real-valued measurable functions defined a.e. in $\Omega$.

Provided $Q$ is special orthogonal over $\Omega$, the relation

(24) …… $\det\left(Q^{-1}\right) = \det\left(Q^{\top}\right) = \det Q,$

which holds a.e. in $\Omega$, implies

$$\det Q = \det\left(Q^{-1}\right) = 1$$

a.e. in $\Omega$. Therefore, if (23) holds for such $Q$, $\Omega$ must have unit volume.

**Proposition 6.** *Let $n \ge 2$ be given and suppose $\Omega \subseteq \mathbb{R}^d$ is a domain of unit volume. If $Q$ is a special orthogonal matrix over $\Omega$, then*

$$\left(\int_\Omega \det Q\,dx\right)\left(\int_\Omega \det\left(Q^{-1}\right)dx\right) = 1.$$

Observe that whenever $Q$ is orthogonal over $\Omega$, (24) implies $\det Q$ and $\det\left(Q^{-1}\right)$ assume the same sign a.e. in $\Omega$. Assuming $Q$ satisfies (23), we find

$$\left(\int_\Omega \det Q\,dx\right)^2 = 1,$$

which holds if and only if

$$\int_\Omega \det Q\,dx = \pm 1,$$

which is equivalent to

$$|\Omega_+| - |\Omega_-| = \pm 1,$$

where $\Omega_\pm$ denote the subsets of $\Omega$ on which $\det Q = \pm 1$, or

(25) …… $\big|\, |\Omega_+| - |\Omega_-| \,\big| = 1,$

since $\det Q = +1$ on $\Omega_+$ and $\det Q = -1$ on $\Omega_-$. One may interpret (25) as a condition which says, *for a given domain $\Omega$ and an orthogonal matrix $Q$, there exist two disjoint subsets $\Omega_+$ and $\Omega_-$ of $\Omega$ such that*

- *the union of $\Omega_+$ and $\Omega_-$ exhausts $\Omega$ in measure,*
- *the determinant of $Q$ is $+1$, $-1$ on $\Omega_+$, $\Omega_-$ respectively, and*
- *the volumetric difference between $\Omega_+$ and $\Omega_-$ is unity.*

So, above we could take $|\Omega_+| - |\Omega_-| = 1$ or $|\Omega_-| - |\Omega_+| = 1$, for example. Moreover, when (25) holds, it suggests $\det Q$ is $-1$ more often than $+1$ (in measure) if $|\Omega_-| > |\Omega_+|$, or $+1$ more often than $-1$ if $|\Omega_+| > |\Omega_-|$. In all, we can state

**Theorem 2.** *Let $n \ge 2$ be given and assume $\Omega \subseteq \mathbb{R}^d$ is a domain. Suppose $Q$ is an orthogonal matrix over $\Omega$ such that there exist disjoint sets $\Omega_+, \Omega_- \subseteq \Omega$ for which*

- *the union of $\Omega_+$ and $\Omega_-$ exhausts $\Omega$ with respect to $d$-dimensional Lebesgue measure,*
- *the determinant of $Q$ is $+1$, $-1$ on $\Omega_+$, $\Omega_-$ respectively, and*
- *the volumetric difference between $\Omega_+$ and $\Omega_-$ is unity.*

*Then*

$$\left(\int_\Omega \det Q\,dx\right)\left(\int_\Omega \det\left(Q^{-1}\right)dx\right) = 1.$$
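A numerical sketch of the theorem’s mechanism (with hypothetical data): on $\Omega = (0,3)$ take $\det Q = +1$ on $\Omega_+ = (0,2)$ and $\det Q = -1$ on $\Omega_- = (2,3)$ (e.g. a pointwise rotation and a pointwise reflection). Since $\det(Q^{-1}) = \det(Q^\top) = \det Q$ for orthogonal $Q$, both integrals equal $|\Omega_+| - |\Omega_-| = 1$.

```python
# Piecewise-orthogonal example on Ω = (0, 3): det Q = +1 on (0, 2) and
# -1 on (2, 3), so ∫ det Q = 2 - 1 = 1 and the product in (23) equals 1.
import numpy as np

def integrate(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

x = np.linspace(0.0, 3.0, 300_001)

det_Q = np.where(x < 2.0, 1.0, -1.0)   # +1 on Ω₊, -1 on Ω₋
det_Q_inv = det_Q                      # det(Q⁻¹) = det(Qᵀ) = det Q

product = integrate(det_Q, x) * integrate(det_Q_inv, x)
```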

Copyright © 2020 Yohance Osborne