Determinant Integrals


The determinant is an elementary notion, finding its earliest relevance in quantifying how affine transformations change the volumes of domains. Formally, given an n\times n matrix A=(a_{ij})_{1\leq i,j\leq n}, the determinant of A is defined as

\displaystyle \text{det}(A)=\sum_{\sigma\in S_n}\text{sgn}(\sigma)\prod_{j=1}^{n}a_{j,\sigma(j)}

where S_n denotes the group of all n! permutations of the set \{1,\dots,n\} and \text{sgn}(\sigma) denotes the sign of a permutation \sigma\in S_n (so that \text{sgn}(\sigma)\in\{-1,1\}). Now, suppose the entries of A are real-valued measurable functions defined over a common domain \Omega\subset \mathbb{R}^m and they are members of the Lebesgue space L^1(\Omega). Then, define the integral of A through the map T, taking matrices of integrable functions to constant matrices in \mathbb{R}^{n\times n}, whereby

\displaystyle T(A)=\left(\int_{\Omega} a_{ij}\text{ }\mathrm{d}x\right)_{1\leq i,j\leq n}.

We may denote T(A) simply by

\displaystyle \int_{\Omega} A \text{ }\mathrm{d}x

because T performs integration entry-wise on the matrix A. In this way, linearity of the integral is maintained, and we can give meaning to expressions such as

\displaystyle \int_{\Omega}A\text{ }\mathrm{d}x\leq \int_{\Omega}B\text{ }\mathrm{d}x

by employing the usual partial ordering of square matrices through the notion of positive semi-definiteness (the Loewner order). As such, we say that T(A)\leq T(B) provided the matrix

\displaystyle T(B)-T(A)=\left(\int_{\Omega}(b_{ij}-a_{ij})\text{ }\mathrm{d}x\right)_{1\leq i,j\leq n}

is positive semi-definite, which is to say

\displaystyle x^T(T(B)-T(A))x\geq 0\text{ }\text{for all }x\in\mathbb{R}^n.

Since T maps square matrices to square matrices, we can construct the following notion of determinant integral of a square matrix A.

Definition 1. (Determinant Integral) We say the \emph{determinant integral} of an n\times n matrix A=(a_{ij})_{1\leq i,j\leq n} with real-valued entries in L^1(\Omega) is the quantity given by

\displaystyle \text{det}(T (A)):=\sum_{\sigma\in S_n}\text{sgn}(\sigma)\prod_{j=1}^{n}\left(\int_{\Omega}a_{j,\sigma(j)}\mathrm{d}x\right)

where T(A) is the integral of A.
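Since T and \text{det} need not commute, a quick numerical sketch can make the distinction concrete. The snippet below is an illustration only: the sample entries and the midpoint-rule integrator are our own choices, not part of the definition. It compares \text{det}(T(A)) with \int_{\Omega}\text{det}(A)\,\mathrm{d}x for a simple 2\times 2 matrix over \Omega=(0,1).

```python
# A numerical sketch (illustration only): compare det(T(A)) with the
# integral of det(A) for a sample 2x2 matrix over Omega = (0, 1).

def integrate(f, a=0.0, b=1.0, n=10_000):
    """Composite midpoint rule; accurate enough for smooth integrands."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Hypothetical entries a_ij, all in L^1(0, 1).
a11 = lambda x: x
a12 = lambda x: 1.0
a21 = lambda x: 1.0
a22 = lambda x: x

# T(A): entrywise integration of A.
TA = [[integrate(a) for a in row] for row in [[a11, a12], [a21, a22]]]
det_TA = TA[0][0] * TA[1][1] - TA[0][1] * TA[1][0]   # 0.5*0.5 - 1 = -0.75

# Integral of the pointwise determinant det(A)(x) = x^2 - 1.
int_det = integrate(lambda x: a11(x) * a22(x) - a12(x) * a21(x))  # -2/3

print(det_TA, int_det)
```

Here \text{det}(T(A))=-3/4 while \int_{\Omega}\text{det}(A)\,\mathrm{d}x=-2/3, so the two operations genuinely differ in general.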

Example 1. One can show that the following determinant integral of the given matrix returns happy+new-year! whenever a,n,e,w,p,y,h>0 and r\in\mathbb{N}. I produced this result on New Year's Eve 2019:

\displaystyle \text{det}\int_{0}^{\infty}\begin{pmatrix}\frac{-4ar!new}{p^2y(4x^2+\pi^2)}&\frac{4^{4/3}hap}{2(4^{2/3}x^2+\pi^{2/3})^2}&\frac{16^{6/5}\sqrt{yar!}}{(16^{2/5}x^2+(3\pi)^{2/5})^3}\\ \frac{4^{4/3}hap}{2(4^{2/3}x^2+\pi^{2/3})^2}&\frac{4e}{4x^2+\pi^2} & \frac{32^{8/7}p}{(32^{2/7}x^2+(5\pi)^{2/7})^4}\sqrt{\frac{y}{ar!}}   \\  \frac{16^{6/5}\sqrt{yar!}}{(16^{2/5}x^2+(3\pi)^{2/5})^3} &\frac{32^{8/7}p}{(32^{2/7}x^2+(5\pi)^{2/7})^4}\sqrt{\frac{y}{ar!}} & 0  \end{pmatrix}\mathrm{d}x


A natural question to ask is: when can we interchange the integral and determinant without changing the result? In other words, what are the most general conditions on the matrix A=(a_{ij})_{1\leq i,j \leq n} which guarantee that

\displaystyle \sum_{\sigma\in S_n}\text{sgn}(\sigma)\prod_{j=1}^{n}\left(\int_{\Omega}a_{j,\sigma(j)}\mathrm{d}x\right)=\int_{\Omega}\text{det}(a_{ij})\mathrm{d}x?

Case n=1

Commutativity trivially holds since

\displaystyle \text{det}\int_{\Omega}A\text{ }\mathrm{d}x=\int_{\Omega}a_{11}\mathrm{d}x=\int_{\Omega}\text{det}(A)\text{ }\mathrm{d}x.

General Set Up for n\geq 2

In the previous case, the assumption that the entries are in L^1(\Omega) suffices for the integral \int_{\Omega} \text{det}(A)\text{ }\mathrm{d}x to converge. However, this may not be sufficient in general, since an arbitrary product of functions in L^1(\Omega) need not be integrable over \Omega. What conditions on the entries of A, then, ensure that each of the n! products \prod_{j=1}^{n}a_{j,\sigma(j)} is in L^1(\Omega)? We may stipulate that the entries in the j-th row are in L^{r_j}(\Omega), for some appropriate \{r_j\}_{1\leq j\leq n} with \sum_{j=1}^{n}\frac{1}{r_j}=1 and r_{j}\geq 1 for 1\leq j \leq n. In this way, all entries in a given row belong to the same Lebesgue space. We then have by \text{H\"older's} inequality

\displaystyle \int_{\Omega}\left|\prod_{j=1}^na_{j,\sigma(j)}\right|\mathrm{d}x\leq \prod_{j=1}^{n}\|a_{j,\sigma(j)}\|_{L^{r_j}(\Omega)}<\infty

for each permutation \sigma \in S_n, ensuring that the integral

\displaystyle \int_{\Omega} \text{det}(A)\text{ }\mathrm{d}x

converges and is finite. But then, we also need to ensure that \int_{\Omega} A\text{ }\mathrm{d}x converges, so that the determinant integral of A is finite. Under the assumption that a_{ij}\in L^{r_i}(\Omega) (1\leq i,j\leq n) with exponents \{r_i\}_{1\leq i\leq n} for which \sum_{i=1}^{n}\frac{1}{r_i}=1 and r_{i}\geq 1 for all 1\leq i \leq n, we could assume \Omega is bounded to conclude, by \text{H\"older's} inequality, that each entry of A=(a_{ij})_{1\leq i,j\leq n} is in L^1(\Omega). However, we would like to keep the domain as arbitrary as possible. So, instead we will assume the following condition:

(H1): The entries of the matrix A=(a_{ij})\in \mathbb{R}^{n\times n} are real-valued measurable functions a_{ij}\in L^{r_i}(\Omega)\cap L^{1}(\Omega) with 1\leq i,j\leq n, where \{r_i\}_{1\leq i\leq n} are given exponents such that \sum_{i=1}^{n}\frac{1}{r_i}=1 and r_{i}\geq 1 for all 1\leq i \leq n.

Denote the set of matrices satisfying (H1) by \mathcal{M}_n(\Omega,\textbf{r}), where \textbf{r}=(r_i)_{1\leq i\leq n}\in\mathbb{R}^n is the vector storing some given exponents that are compatible with (H1).
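As a sanity check on (H1), the generalized H\"older bound can be exercised numerically. The snippet below is a sketch: the three sample functions are arbitrary choices of ours satisfying the hypotheses with n=3 and \textbf{r}=(3,3,3), and it confirms \int_{\Omega}|\prod_j f_j|\,\mathrm{d}x\leq\prod_j\|f_j\|_{L^{r_j}(\Omega)} on \Omega=(0,1).

```python
# Numerical sanity check (illustration only) of the generalized Holder
# inequality behind (H1), with n = 3 and r = (3, 3, 3) on Omega = (0, 1).
import math

def integrate(f, n=10_000):
    """Composite midpoint rule on (0, 1)."""
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

def lp_norm(f, p):
    return integrate(lambda x: abs(f(x)) ** p) ** (1.0 / p)

# Hypothetical row representatives a_{1,s(1)}, a_{2,s(2)}, a_{3,s(3)}.
fs = [lambda x: x, lambda x: 1.0 - x, lambda x: math.sin(x) + 1.5]
rs = [3, 3, 3]                      # sum of 1/r_i equals 1

lhs = integrate(lambda x: abs(fs[0](x) * fs[1](x) * fs[2](x)))
rhs = 1.0
for f, r in zip(fs, rs):
    rhs *= lp_norm(f, r)

assert lhs <= rhs   # int |prod| <= product of L^{r_i} norms
```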

Closure of \mathcal{M}_n(\Omega,\textbf{r}) Under Matrix Inversion

As it will soon become an important issue, let us look at the following question:

For an invertible matrix A\in \mathcal{M}_n(\Omega,\textbf{\emph{r}}), is its inverse also in \mathcal{M}_n(\Omega,\textbf{\emph{r}})?

To start, consider the simplest case. Suppose we have a matrix A\in\mathcal{M}_2(\Omega,(2,2)):

\displaystyle A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}

defined and invertible a.e in \Omega, with inverse

\displaystyle A^{-1}=\frac{1}{ad-cb}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}

a.e in \Omega. Note that the entries of A are all in L^2(\Omega)\cap L^1(\Omega). Further, the inverse of A is in \mathcal{M}_2(\Omega,(2,2)) provided the following expressions are all in L^2(\Omega)\cap L^1(\Omega):

(1) …… \displaystyle \frac{d}{ad-cb},\text{ }\frac{-b}{ad-cb},\text{ }\frac{-c}{ad-cb}\text{ and }\frac{a}{ad-cb}.

Now take r_1,r_2 arbitrary (with r_1^{-1}+r_2^{-1}=1) and suppose \text{det}(A)=ad-cb is a.e equal to a nonzero constant in \Omega. Then A and A^{-1} are both in \mathcal{M}_2(\Omega, \textbf{r}) if and only if d,a\in L^{r_1}(\Omega)\cap L^{r_2}(\Omega)\cap L^1(\Omega). Thus, in this scenario the diagonal elements of A necessarily lie in L^{r_1}(\Omega)\cap L^{r_2}(\Omega)\cap L^1(\Omega) if A^{-1}\in \mathcal{M}_2(\Omega,\textbf{r}). On the other hand, we generally don’t know whether or not

\displaystyle \frac{1}{\text{det}(A)}

is integrable, even though \text{det}(A)\in L^1(\Omega) due to the (H1) assumption. Because the determinant of A is nonzero a.e in \Omega, one reasonable assumption to impose for our discussion is that \text{det}(A) is essentially bounded away from zero, i.e. there exists a constant C>0 such that

\displaystyle |\text{det}(A)|\geq C

a.e in \Omega. Assuming this, we deduce that

\displaystyle \frac{1}{\text{det}(A)}\in L^{\infty}(\Omega)

which, by (H1), implies the four expressions in (1) are in L^1(\Omega). Provided the diagonal entries of A are in L^{r_1}(\Omega)\cap L^{r_2}(\Omega)\cap L^1(\Omega), we find that (A^{-1})_{ij}\in L^{r_i}(\Omega)\cap L^1(\Omega) for i,j\in\{1,2\}. Hence A^{-1}\in\mathcal{M}_2(\Omega,\textbf{r}), for whichever \textbf{r} is given. In summary, we have deduced:

Proposition 1. Let \Omega\subset \mathbb{R}^m be a domain, with m\in\mathbb{N} given. Suppose A\in\mathcal{M}_2(\Omega,\textbf{r}), where \textbf{r}=(r_1,r_2), is such that

  • a_{11},a_{22}\in L^{r_1}(\Omega)\cap L^{r_2}(\Omega)\cap L^1(\Omega), and
  • there exists a constant C>0 for which |\text{det}(A)|\geq C a.e in \Omega.

Then, A is invertible with inverse A^{-1}\in \mathcal{M}_2(\Omega,\textbf{r}).

Now let’s consider the case n=3. Let A\in \mathcal{M}_3(\Omega,\textbf{r}) for some given \textbf{r}=(r_1,r_2,r_3) such that r_1^{-1}+r_2^{-1}+r_3^{-1}=1 and r_1,r_2,r_3\geq 1. We will assume once more that |\text{det}(A)|\geq C a.e in \Omega for some constant C>0, so A is invertible a.e. With A=(a_{ij})_{1\leq i,j\leq 3} written as

\displaystyle A=\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{pmatrix}

the inverse of A is given by

\displaystyle A^{-1}=\frac{1}{\text{det}(A)}\begin{pmatrix} a_{22}a_{33}-a_{23}a_{32} & -(a_{12}a_{33}-a_{13}a_{32}) & a_{12}a_{23}-a_{13}a_{22} \\ -(a_{21}a_{33}-a_{23}a_{31}) & a_{11}a_{33}-a_{13}a_{31} & -(a_{11}a_{23}-a_{13}a_{21})\\ a_{21}a_{32}-a_{22}a_{31} & -(a_{11}a_{32}-a_{12}a_{31}) & a_{11}a_{22}-a_{12}a_{21} \end{pmatrix}.

One can show that, provided a_{ij}\in L^{1}(\Omega)\cap L^{\infty}(\Omega) \cap L^{r_i}(\Omega) for all 1\leq i,j\leq 3, then A^{-1}\in\mathcal{M}_3(\Omega,\textbf{r}). This follows by visiting each row of A^{-1} and checking that every product term is in L^1(\Omega)\cap L^{r_i}(\Omega) regardless of whether r_i is finite or infinite. As an example, let’s consider the first row of A^{-1}. Since a_{22}, a_{33} are in L^{1}(\Omega)\cap L^{\infty}(\Omega), we have

\displaystyle \int_{\Omega}|a_{22}a_{33}|\mathrm{d}x\leq \|a_{22}\|_{L^{\infty}(\Omega)}\int_{\Omega}|a_{33}|\mathrm{d}x<\infty.

As such, a_{22}a_{33}\in L^{1}(\Omega), and likewise a_{23}a_{32}\in L^{1}(\Omega). Hence, a_{22}a_{33}-a_{23}a_{32}\in L^{1}(\Omega), as Lebesgue spaces are vector spaces. Furthermore, suppose r_1=\infty. Then, clearly a_{22}a_{33}\in L^{\infty}(\Omega). If r_1=1, we already know that a_{22}a_{33}\in L^{1}(\Omega), so there is nothing to check in this case. If r_1>1, we have

\displaystyle \int_{\Omega}|a_{22}a_{33}|^{r_1}\mathrm{d}x\leq \|a_{22}a_{33}\|_{L^{\infty}(\Omega)}^{r_1-1}\int_{\Omega}|a_{22}a_{33}|\mathrm{d}x<\infty.

Hence, in all cases a_{22}a_{33}\in L^{r_1}(\Omega), and likewise a_{23}a_{32}\in L^{r_1}(\Omega). Recall also that

\displaystyle \frac{1}{\text{det}(A)}\in L^{\infty}(\Omega).

Thus, (A^{-1})_{11}\in L^{r_1}(\Omega)\cap L^{1}(\Omega). By assuming a_{33},a_{32}\in L^{\infty}(\Omega), together with the basic assumption that a_{12},a_{13}\in L^{r_1}(\Omega)\cap L^{1}(\Omega), we see that

\displaystyle (A^{-1})_{12},(A^{-1})_{13}\in L^{r_1}(\Omega)\cap L^{1}(\Omega)

as well. In all, the first row of A^{-1} has the properties required by (H1). Essential boundedness of the other entries of A, and the (H1) assumption on A, imply that the remaining rows of A^{-1} have the properties required by (H1) also. We may summarise our observations in the following proposition.

Proposition 2. Let \Omega\subset \mathbb{R}^m be a domain, with m\in\mathbb{N}, and let \textbf{r}=(r_1,r_2,r_3) with r_1^{-1}+r_2^{-1}+r_3^{-1}=1 and r_1,r_2,r_3\geq 1. Suppose A\in\mathcal{M}_3(\Omega,\textbf{r}) is such that

  • a_{ij}\in L^{\infty}(\Omega) for all i,j\in\{1,2,3\}, and
  • there exists a constant C>0 for which |\text{det}(A)|\geq C a.e in \Omega.

Then, A is invertible with inverse A^{-1}\in \mathcal{M}_3(\Omega,\textbf{r}).

We may thus use the assumption of essential boundedness to guarantee A^{-1}\in\mathcal{M}_n(\Omega,\textbf{r}) if A\in\mathcal{M}_n(\Omega,\textbf{r}). Suppose A=(a_{ij})_{1\leq i,j \leq n}\in\mathcal{M}_n(\Omega,\textbf{r}) is invertible almost everywhere in \Omega. It is well-known that the inverse of A is given by the formula

\displaystyle A^{-1}=\frac{1}{\text{det}(A)}\text{adj}(A)

where the adjugate of A is given by

\displaystyle \text{adj}(A)=(C_{ji})_{1\leq i,j\leq n}

a.e in \Omega with C_{ji}=(-1)^{i+j}M_{ji}, given in terms of the (j,i)-minor M_{ji} of A. As M_{ji} is the determinant of the (n-1) \times (n -1) matrix obtained by deleting the j-th row and i-th column of A, this suggests that care needs to be given to developing conditions on \text{adj}(A) which ensure A^{-1}\in \mathcal{M}_n(\Omega,\textbf{r}). Just as in the two-dimensional case, we will assume that there exists a constant C>0 such that |\text{det}(A)|\geq C a.e in \Omega, to simplify our analysis. Then, \frac{1}{\text{det}(A)}\in L^{\infty}(\Omega), leaving us to determine conditions on A which guarantee (\text{adj}(A))_{ij}=C_{ji}=(-1)^{i+j}M_{ji}\in L^{r_i}(\Omega)\cap L^{1}(\Omega) for 1\leq i,j\leq n. Let i,j\in\{1,\cdots,n\} be arbitrary. Then, M_{ji} is of the form

\displaystyle \sum_{\sigma\in S_{n-1}}\text{sgn}(\sigma)b_{1,\sigma(1)}\cdots b_{n-1,\sigma(n-1)}=\text{det}(B)

where B=(b_{pq})_{1\leq p,q\leq n-1} is the (n-1)\times (n-1) matrix given by

\displaystyle b_{pq}:=\begin{cases} a_{pq} & 1\leq p\leq j-1,\ 1\leq q\leq i-1,\\ a_{p,q+1} & 1\leq p\leq j-1,\ i\leq q\leq n-1,\\ a_{p+1,q} & j\leq p\leq n-1,\ 1\leq q \leq i-1,\\ a_{p+1,q+1} & j\leq p\leq n-1,\ i\leq q\leq n-1. \end{cases}

Let’s check that our proposed conditions on A imply

\displaystyle H(i,j,\sigma):=\prod_{p=1}^{n-1}b_{p,\sigma(p)}\in L^{r_i}(\Omega)\cap L^1(\Omega)

for each \sigma \in S_{n-1}, which would yield C_{ji}\in L^{r_i}(\Omega)\cap L^1(\Omega). Since a_{pq}\in L^{r_p}(\Omega)\cap L^{1}(\Omega) for each p,q\in\{1,\cdots,n\} by (H1), we see that

(2) …… \displaystyle b_{pq}\text{ is in } L^{r_p}(\Omega)\cap L^{1}(\Omega) \text{ for } 1\leq p\leq j-1,1\leq q\leq n-1, \text{ and in }L^{r_{p+1}}(\Omega)\cap L^{1}(\Omega) \text{ for }j\leq p\leq n-1,1\leq q\leq n-1.

Now let’s impose that all entries of A are essentially bounded over \Omega. Then, all entries of B are essentially bounded over \Omega as well. We now check that C_{ji}\in L^{r_i}(\Omega)\cap L^{1}(\Omega). As we saw in the 3\times 3 case, we show first that

\displaystyle \int_{\Omega}|H(i,j,\sigma)|\mathrm{d}x

is finite for each \sigma\in S_{n-1} to get C_{ji}\in L^1(\Omega). This is clear because

\displaystyle \int_{\Omega}|H(i,j,\sigma)|\mathrm{d}x\leq \prod_{p=1}^{n-2}\|b_{p,\sigma(p)}\|_{L^{\infty}(\Omega)}\int_{\Omega}|b_{n-1,\sigma(n-1)}|\mathrm{d}x<\infty

via (2). Next, we check that

\displaystyle \int_{\Omega}|H(i,j,\sigma)|^{r_i}\mathrm{d}x

is finite for each \sigma\in S_{n-1} whenever 1< r_i<\infty, and that H is essentially bounded when r_i=\infty. Indeed, H is essentially bounded as a finite product of essentially bounded functions; therefore H\in L^{\infty}(\Omega), as required in the case r_i=\infty. For 1<r_i<\infty,

\displaystyle  \int_{\Omega}|H(i,j,\sigma)|^{r_i}\mathrm{d}x=\int_{\Omega}|H(i,j,\sigma)|^{r_i-1}|H(i,j,\sigma)|\mathrm{d}x\leq \|H(i,j,\sigma)\|_{L^{\infty}(\Omega)}^{r_i-1}\int_{\Omega}|H(i,j,\sigma)|\mathrm{d}x

where the final upper bound is finite due to the cases r_i=1,\infty. Hence, H(i,j,\sigma)\in L^{r_i}(\Omega)\cap L^1(\Omega) and so, as i and j were arbitrary, (\text{adj}(A))_{ij}\in L^{r_i}(\Omega)\cap L^{1}(\Omega) for each i,j\in\{1,\cdots,n\}. In all, we have deduced sufficient conditions that ensure the inverse of a matrix A\in\mathcal{M}_n(\Omega,\textbf{r}) is also in \mathcal{M}_n(\Omega,\textbf{r}).

Proposition 3. Let n,m\in\mathbb{N} be given. Suppose \Omega\subset \mathbb{R}^m is a domain, and \textbf{r}=(r_1,\cdots,r_n) with r_1^{-1}+\cdots+r_n^{-1}=1 and r_1,\cdots,r_n\geq 1. If A\in\mathcal{M}_n(\Omega,\textbf{r}) is such that

  • a_{ij}\in L^{\infty}(\Omega), for all i,j\in\{1,\cdots, n\}, and
  • there exists a constant C>0 for which |\text{det}(A)|\geq C a.e in \Omega,

then A is invertible with inverse A^{-1}\in \mathcal{M}_n(\Omega,\textbf{r}).
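The adjugate formula underpinning this argument is easy to exercise on a constant matrix. Below is a small sketch (helper code of our own, using exact rational arithmetic; the sample matrix is an arbitrary choice) that builds \text{adj}(A) from the (j,i)-minors exactly as in the text and checks the identity A\,\text{adj}(A)=\text{det}(A)I_3.

```python
# Sketch: construct adj(A) from (j,i)-minors and verify A adj(A) = det(A) I.
from fractions import Fraction as F

def minor(M, i, j):
    """Matrix M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adjugate(M):
    """(adj M)_{ij} = C_{ji} = (-1)^{i+j} M_{ji}, as in the text."""
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)]
            for i in range(n)]

# A sample constant 3x3 matrix with exact rational entries.
A = [[F(2), F(1), F(0)],
     [F(1), F(3), F(1)],
     [F(0), F(1), F(2)]]
d = det(A)                       # = 8
adj = adjugate(A)
prod = [[sum(A[i][k] * adj[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[d if i == j else F(0) for j in range(3)] for i in range(3)]
```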

Remark 1. It would be of interest to produce an example of A\in\mathcal{M}_n(\Omega,\textbf{r}) for which A^{-1}\in \mathcal{M}_n(\Omega,\textbf{r}) but not all entries of A are essentially bounded, or the determinant isn’t uniformly bounded away from zero. This we leave as an investigation for the reader, as we resume our analysis of the commutativity of integration and determinant for matrices in \mathcal{M}_n(\Omega,\textbf{r}).

Non-invertible Matrices

With the (H1) assumption, let’s now consider the case n\geq 2 for non-invertible matrices. As a starting example, the equality that we desire in the case n=2 reads

\displaystyle \left(\int_{\Omega}a_{11}\mathrm{d}x\right)\times \left(\int_{\Omega}a_{22}\mathrm{d}x\right)-\left(\int_{\Omega}a_{12}\mathrm{d}x\right)\times \left(\int_{\Omega}a_{21}\mathrm{d}x\right)=\int_{\Omega}(a_{11}a_{22}-a_{12}a_{21})\mathrm{d}x.

Rearrange to get

\displaystyle \left(\int_{\Omega}a_{11}\text{ }\mathrm{d}x\right) \left(\int_{\Omega}a_{22}\text{ }\mathrm{d}x\right)-\int_{\Omega}a_{11}a_{22}\text{ }\mathrm{d}x=\left(\int_{\Omega}a_{12}\text{ }\mathrm{d}x\right) \left(\int_{\Omega}a_{21}\text{ }\mathrm{d}x\right)-\int_{\Omega}a_{12}a_{21}\text{ }\mathrm{d}x.

If the matrix is symmetric with identical diagonal entries (a.e in \Omega) we have

(3) …… \displaystyle \left(\int_{\Omega}a_{11}\text{ }\mathrm{d}x\right)^2 -\int_{\Omega}a_{11}^2\text{ }\mathrm{d}x=\left(\int_{\Omega}a_{12}\text{ }\mathrm{d}x\right)^2-\int_{\Omega}a_{12}^2\text{ }\mathrm{d}x.

Notice that this equation implies

\displaystyle \left(\int_{\Omega}a_{11}\text{ }\mathrm{d}x\right)^2 \leq\int_{\Omega}a_{11}^2\text{ }\mathrm{d}x

if and only if

\displaystyle \left(\int_{\Omega}a_{12}\text{ }\mathrm{d}x\right)^2 \leq\int_{\Omega}a_{12}^2\text{ }\mathrm{d}x.

To simplify (3) further, we can assume that the entries of A have zero mean-value over \Omega\subset\mathbb{R}^m, which is to say

\displaystyle \frac{1}{|\Omega|}\int_{\Omega}a_{11}\text{ }\mathrm{d}x=0=\frac{1}{|\Omega|}\int_{\Omega}a_{12}\text{ }\mathrm{d}x.

Note that interpreting this qualification as vanishing mean-value is suitable only for domains with finite m-dimensional Lebesgue measure |\Omega|>0, in which case the integrals of a_{11} and a_{12} vanish. If |\Omega|=\infty, we instead assume directly that

\displaystyle \int_{\Omega}a_{11}\text{ }\mathrm{d}x=0=\int_{\Omega}a_{12}\text{ }\mathrm{d}x.

Consequently, (3) reduces to

\displaystyle \int_{\Omega}(a_{11}^2-a_{12}^2)\mathrm{d}x=0.

This holds, in particular, when \text{det}(A)=a_{11}^2-a_{12}^2=0 a.e in \Omega, with symmetry and diagonal equality assumed. In fact, without assuming symmetry or equality along the main diagonal of A, we find easily that taking the determinant and performing integration commute for the following set of matrices for n\geq 2:

\displaystyle \mathcal{K}_n(\Omega,\textbf{r})=\left\{A\in\mathcal{M}_n(\Omega,\textbf{r}):\text{det}(A)=0\text{ a.e in }\Omega \text{ and }\int_{\Omega}a_{ij}\mathrm{d}x=0\text{ for }i,j=1,\cdots,n\right\}.

The condition \text{det}(A)=0 forces the rows of A, or equivalently its columns, to be linearly dependent. So, in two dimensions the structure of A is simple.

Example 2. In the case n=m=2, consider the matrix

\displaystyle A(x,y)=\begin{pmatrix} \frac{y}{\sqrt{x^2+y^2}} & \frac{x}{\sqrt{x^2+y^2}} \\  \frac{3y}{\sqrt{x^2+y^2}} & \frac{3x}{\sqrt{x^2+y^2}} \end{pmatrix}

defined everywhere in \Omega=B_{a}(0)\subset \mathbb{R}^2 (a>0), except at the origin. Then, clearly \text{det}(A)=0 over \Omega \backslash \{0\}. Writing f(x,y)=\frac{y}{\sqrt{x^2+y^2}} and g(x,y)=\frac{x}{\sqrt{x^2+y^2}}, we deduce that f,g\in L^2(\Omega)\subset L^1(\Omega). Calculating directly, we have

\displaystyle \int_{\Omega}f(x,y)^2\mathrm{d}x\mathrm{d}y=\int_{\Omega}\frac{y^2}{x^2+y^2}\mathrm{d}x\mathrm{d}y=\int_0^{a}\int_0^{2\pi}\frac{r^2\sin^2{\theta}}{r^2}r\mathrm{d}\theta\mathrm{d}r=\int_0^{a}\int_0^{2\pi}r\sin^2{\theta}\mathrm{d}\theta\mathrm{d}r

which is finite, and likewise

\displaystyle \int_{\Omega}g(x,y)^2\mathrm{d}x\mathrm{d}y<\infty.


Moreover,

\displaystyle \int_{\Omega}f(x,y)\mathrm{d}x\mathrm{d}y=\int_0^{a}\int_0^{2\pi}r\sin{\theta}\mathrm{d}\theta\mathrm{d}r=0=\int_{\Omega}g(x,y)\mathrm{d}x\mathrm{d}y.

We thus conclude A\in\mathcal{K}_2(B_a(0),(2,2)) for each a>0.
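The membership claimed in Example 2 can also be checked numerically. The sketch below (illustrative only, taking a=1 and using a homemade polar-coordinate midpoint rule) confirms that both entries integrate to zero over B_1(0) and that \|f\|_{L^2}^2=\pi/2 is finite.

```python
# Numerical check (illustration only) of Example 2 with a = 1: both entries
# integrate to zero over B_1(0), f has finite L^2 norm, and det(A) = 0.
import math

def disk_integral(h, nr=200, nt=200):
    """Midpoint rule in polar coordinates over the unit disk."""
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) / nr
        for j in range(nt):
            t = (j + 0.5) * 2.0 * math.pi / nt
            total += h(r * math.cos(t), r * math.sin(t)) * r
    return total * (1.0 / nr) * (2.0 * math.pi / nt)

f = lambda x, y: y / math.hypot(x, y)   # a_11; a_21 = 3f
g = lambda x, y: x / math.hypot(x, y)   # a_12; a_22 = 3g

mean_f, mean_g = disk_integral(f), disk_integral(g)     # both ~ 0
norm_f_sq = disk_integral(lambda x, y: f(x, y) ** 2)    # = pi/2, finite
# det(A) = f*(3g) - g*(3f) = 0 identically away from the origin.
print(mean_f, mean_g, norm_f_sq)
```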

Returning to the general equality which we desire:

\displaystyle \text{det}(T(A))=\int_{\Omega}\text{det}(A)\mathrm{d}x

we can consider a slightly more general set of conditions that guarantee commutativity. It suffices to assume \text{det}(T(A))=0 and \text{det}(A)=0 a.e in \Omega, so that both A and its integral have linearly dependent rows or columns (a.e in \Omega). So, we let

\displaystyle \mathcal{K}_{n}'(\Omega,\textbf{r})=\left\{A\in\mathcal{M}_n(\Omega,\textbf{r}):\text{det}(A)=0\text{ a.e in }\Omega \text{ and }\text{det}(T(A))=0\right\}.

Therefore, \mathcal{K}_n(\Omega,\textbf{r})\subset \mathcal{K}'_n(\Omega,\textbf{r}).

Example 3. For n=m=2 and r_1=r_2=2, let A\in\mathcal{M}_{2}(\Omega,(2,2)) for some bounded domain \Omega\subset \mathbb{R}^2 which we’ll later specify. Provided \text{det}(A)=0 a.e in \Omega, either A or its transpose has linearly dependent rows. So, without loss of generality, write

\displaystyle A(x)=\begin{pmatrix} a_{11}(x) & a_{12}(x) \\ c(x)a_{11}(x) & c(x)a_{12}(x)\end{pmatrix}

for some real-valued measurable function c defined a.e over \Omega for which ca_{11},ca_{12}\in L^2(\Omega). Hence,

\displaystyle T(A)=\begin{pmatrix} \int_{\Omega} a_{11}(x)\mathrm{d}x & \int_{\Omega} a_{12}(x)\mathrm{d}x\\ \int_{\Omega} c(x)a_{11}(x)\mathrm{d}x & \int_{\Omega} c(x)a_{12}(x)\mathrm{d}x \end{pmatrix}

and since \text{det}(T(A))=0, either the rows of T(A), or its transpose, are linearly dependent. For the rest of the discussion, suppose that the rows of T(A) are linearly dependent and let’s see where the analysis takes us.

This means there exists a real constant q such that

\displaystyle \int_{\Omega}a_{11}(x)\mathrm{d}x=q\int_{\Omega}c(x)a_{11}(x)\mathrm{d}x\quad\text{and}\quad\int_{\Omega}a_{12}(x)\mathrm{d}x=q\int_{\Omega}c(x)a_{12}(x)\mathrm{d}x,

which rearrange to

(4) …… \displaystyle \int_{\Omega}(1-qc(x))a_{11}(x)\mathrm{d}x=0

and

(5) …… \displaystyle \int_{\Omega}(1-qc(x))a_{12}(x)\mathrm{d}x=0.

With a_{11},a_{12} given, we have two equations for two unknowns c and q. Eliminating q, we get

\displaystyle \left(\int_{\Omega}a_{11}(x)\mathrm{d}x\right)\left(\int_{\Omega}c(x)a_{12}(x)\mathrm{d}x\right)=\left(\int_{\Omega}a_{12}(x)\mathrm{d}x\right)\left(\int_{\Omega}c(x)a_{11}(x)\mathrm{d}x\right)

or, equivalently,

(6) …… \displaystyle \int_{\Omega}c(x)\left(Aa_{11}(x)+Ba_{12}(x)\right)\mathrm{d}x=0

where, in a slight abuse of notation (A below denotes a constant, not the matrix),

\displaystyle A=\int_{\Omega}a_{12}(x)\mathrm{d}x

and

\displaystyle B=-\int_{\Omega}a_{11}(x)\mathrm{d}x.

Suppose further that a_{11} and a_{12} are linearly independent in L^2(\Omega) (which we assume is equipped with the standard real inner product). If c lies in the orthogonal complement of X:=\text{span}\{a_{11},a_{12}\} within L^2(\Omega), then (6) is satisfied and equations (4), (5) imply that a_{11} and a_{12} each have vanishing mean-value over \Omega. Consequently, T(A) reduces to the zero matrix, and the unknown q in equations (4), (5) can be any real number. On the other hand, we can construct an example, with \Omega=B_1(0), such that c does not annihilate a_{11} or a_{12}. With such c we can see from (4) that

\displaystyle q=\frac{\int_{B_1(0)}a_{11}(x)\mathrm{d}x}{\int_{B_1(0)}c(x)a_{11}(x)\mathrm{d}x},

whereas c is determined by (6). However, this solution is possibly non-unique (outside a set of measure zero) because we can add to c any nontrivial member g from the orthogonal complement of X in L^2(B_1(0)) to give another solution to (6). Such g exists as L^2(B_1(0)) is an infinite-dimensional Hilbert space while X is a finite dimensional subspace. This would suggest that to entertain a general discussion on uniqueness in L^2(B_1(0)), we ought to consider it within X. Such a discussion will be avoided for the moment.

Now, for the example. Let a_{12}(x,y)=x, a_{11}(x,y)=x^2+y^2, and c(x,y)=\sqrt{x^2+y^2} in B_1(0). Then, we clearly have A=0 and

\displaystyle \int_{B_1(0)}c(x,y)a_{12}(x,y)\mathrm{d}x\mathrm{d}y=\int_0^{2\pi}\int_0^{1}r\times r\cos{\theta} \times r \mathrm{d}r\mathrm{d}\theta=0.

Thus (6) is satisfied and

\displaystyle q=\frac{\int_0^1r^3\mathrm{d}r}{\int_0^1r^4\mathrm{d}r}=\frac{5}{4}.

So, the following matrix is a non-trivial member of \mathcal{K}_2'(B_1(0),(2,2)):

\displaystyle  A(x,y)=\begin{pmatrix} x^2+y^2 & x \\ (x^2+y^2)^{3/2} & x\sqrt{x^2+y^2} \end{pmatrix}.
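Example 3 can be confirmed numerically too. In polar coordinates the entries read a_{11}=r^2, a_{12}=r\cos\theta, c=r; the sketch below (illustration only, with a homemade midpoint rule over the unit disk) checks that \text{det}(T(A))=0 while q=5/4, in agreement with the computation above.

```python
# Numerical verification (illustration only) of Example 3, in polar
# coordinates on the unit disk: a11 = r^2, a12 = r cos t, c = r.
import math

def disk_integral(h, nr=200, nt=200):
    """Midpoint rule over B_1(0); h is given in polar coordinates (r, t)."""
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) / nr
        for j in range(nt):
            t = (j + 0.5) * 2.0 * math.pi / nt
            total += h(r, t) * r
    return total * (1.0 / nr) * (2.0 * math.pi / nt)

a11 = lambda r, t: r * r
a12 = lambda r, t: r * math.cos(t)
a21 = lambda r, t: r ** 3                 # c * a11
a22 = lambda r, t: r * r * math.cos(t)    # c * a12

TA = [[disk_integral(a) for a in row] for row in [[a11, a12], [a21, a22]]]
det_TA = TA[0][0] * TA[1][1] - TA[0][1] * TA[1][0]   # ~ 0: det(T(A)) = 0
q = TA[0][0] / TA[1][0]                              # ~ 5/4, as in the text
print(det_TA, q)
```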

Invertible Matrices

In the context of invertible matrices, we will analyse commutativity of integration and determinant for orthogonal matrices and triangular matrices.

Orthogonal Matrices

Let A=(a_{ij})_{1\leq i,j \leq n} be a matrix whose entries are real-valued measurable functions defined a.e in \Omega\subset\mathbb{R}^m. We say A is an orthogonal matrix over \Omega provided A^{T}A=I_n and AA^{T}=I_n both hold a.e in \Omega. Taking determinants in either orthogonality equation, we find \det(A) equals 1 or -1 a.e in \Omega; that is, there exists a set Y\subset \Omega with |\Omega\backslash Y|=0 such that \det(A(x))\in\{-1,1\} for each x\in Y. In this case, A is invertible a.e in \Omega with inverse given by its transpose A^{T}. If the subset Z of Y where \det(A) assumes the value -1 has zero m-dimensional Lebesgue measure, we say that A is a special orthogonal matrix over \Omega. Let’s denote the sets of orthogonal and special orthogonal matrices defined a.e in \Omega by \text{O}_n(\Omega) and \text{SO}_n(\Omega), respectively. For our discussion, we say that a matrix B is in \text{O}_n if B is a real-valued constant matrix that is orthogonal; if, further, \det(B)=1, we say B is in \text{SO}_n.

Observe that, when |\Omega|=1, the simplest solution to the equation

(7) …… \displaystyle \det(T(A))=\int_{\Omega}\det(A)\mathrm{d}x

is the identity matrix I_n which satisfies T(A)=A and thus

\displaystyle \det(A)=\int_{\Omega}\det(A)\mathrm{d}x.

Note that in general the above equation forces \det(A) to equal a constant c almost everywhere in \Omega, with c(1-|\Omega|)=0; hence |\Omega|=1 whenever c\neq 0, while |\Omega| is unconstrained when c=0. Now suppose A\in \text{SO}_n(\Omega) is such that T(A)\in \text{SO}_n and A solves (7). Then, we find

\displaystyle 1=\det(T(A))=\int_{\Omega}\det(A)\mathrm{d}x=\int_{\Omega}1\mathrm{d}x=|\Omega|.

So, if we search for such solutions A to (7), the underlying domain necessarily has unit volume. Thus, define the set

\displaystyle \mathcal{S}_n(\Omega)=\left\{A\in \text{SO}_n(\Omega):T(A)\in \text{SO}_n\right\}.

What we just showed implies that no element of \mathcal{S}_n(\Omega) can satisfy (7) whenever |\Omega|\neq 1, while in contrast every element of \mathcal{S}_n(\Omega) solves (7) when |\Omega|=1. In another case, suppose A\in\text{SO}_n(\Omega) solves (7) but T(A)\in \text{O}_n. Then, we find

\displaystyle \pm1=\text{det}(T(A))=\int_{\Omega}\text{det}(A)\mathrm{d}x=\int_{\Omega}1\mathrm{d}x=|\Omega|.

Of course, |\Omega| is non-negative, so we find necessarily \text{det}(T(A))=1 and so T(A)\in\text{SO}_n. Therefore, considering the set

\displaystyle \mathcal{S}_n'(\Omega)=\left\{A\in \text{SO}_n(\Omega):T(A)\in \text{O}_n\right\},

we find that if there exists a solution \tilde{A}\in \mathcal{S}_n'(\Omega) to (7) then

  • the domain \Omega has unit volume,
  • \tilde{A}\in\mathcal{S}_n(\Omega), and
  • every element of \mathcal{S}_n(\Omega) solves (7).

Notice that \mathcal{S}_n(\Omega)\subset\mathcal{S}_n'(\Omega) since \text{SO}_n\subset \text{O}_n. We’ll soon investigate whether or not this inclusion is strict for arbitrary domains \Omega in \mathbb{R}^m. It will also be interesting to determine when either set is empty by virtue of choice of domain and matrix dimension n.
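For a concrete feel for \mathcal{S}_2(\Omega), take \Omega=(0,1) (so |\Omega|=1) and let A(x) be the rotation through an angle \theta(x). Then T(A) has the form \begin{pmatrix}c&-s\\ s&c\end{pmatrix} with c=\int_0^1\cos\theta\,\mathrm{d}x and s=\int_0^1\sin\theta\,\mathrm{d}x, and T(A)\in\text{SO}_2 exactly when c^2+s^2=1. The sketch below (an illustration of our own, not from the text) shows a constant angle yields a member of \mathcal{S}_2((0,1)), while a genuinely varying angle need not.

```python
# Sketch: when A(x) is a rotation by theta(x) on (0,1), T(A) is orthogonal
# iff c^2 + s^2 = 1, where c = int cos(theta), s = int sin(theta).
import math

def integrate(f, n=10_000):
    """Composite midpoint rule on (0, 1)."""
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

def cs(theta):
    return (integrate(lambda x: math.cos(theta(x))),
            integrate(lambda x: math.sin(theta(x))))

c1, s1 = cs(lambda x: 0.7)               # constant angle
c2, s2 = cs(lambda x: 2 * math.pi * x)   # genuinely varying angle

print(c1 ** 2 + s1 ** 2)  # ~ 1: T(A) is in SO_2, so this A is in S_2((0,1))
print(c2 ** 2 + s2 ** 2)  # ~ 0: here T(A) is not orthogonal
```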

Let’s now consider the following two sets:

\displaystyle \mathcal{H}_n(\Omega)=\left\{A\in\text{O}_n(\Omega):T(A)\in \text{SO}_n\right\},

\displaystyle \mathcal{H}_n'(\Omega)=\left\{A\in\text{O}_n(\Omega):T(A)\in \text{O}_n\right\}.

Clearly \mathcal{H}_n(\Omega)\subset\mathcal{H}_n'(\Omega). Now suppose A\in \mathcal{H}_n(\Omega) is given. Then, A is an orthogonal matrix and so there exist sets Y,Z\subset \Omega such that |\Omega\backslash Y|=0, Z\subset Y and \det(A(x))=-1 for all x\in Z while \det(A(x))=1 for all x\in Y\backslash Z. If A solves (7), we find

\displaystyle 1=\text{det}(T(A))=\int_{\Omega}\text{det}(A)\mathrm{d}x=|Y\backslash Z|-|Z|.

On the other hand, if A\in\mathcal{H}_n'(\Omega) solves (7), it follows that

\displaystyle \pm1=\text{det}(T(A))=\int_{\Omega}\text{det}(A)\mathrm{d}x=|Y\backslash Z|-|Z|

which is equivalent to ||Y\backslash Z|-|Z||=1. By the above, provided A\in\mathcal{H}_n'(\Omega) satisfies (7) with |Y\backslash Z|>|Z|>0 then \det(T(A))=1 and so T(A)\in \text{SO}_n. Moreover, |\Omega|=|Y\backslash Z|+|Z|=2|Z|+1>1. We have thus shown the following. Given a domain \Omega\subset\mathbb{R}^m if there exists a solution \tilde{A}\in \mathcal{H}_n'(\Omega) to (7) for which |Y\backslash Z|>|Z|>0, where Y and Z are as defined above, then

  • the volume of \Omega is strictly larger than 1,
  • \tilde{A}\in \mathcal{H}_n(\Omega), and
  • there possibly exists another element B of \mathcal{H}_n(\Omega) that does not satisfy (7).

In fact, if instead \tilde{A}\in\mathcal{H}_n'(\Omega) satisfies (7) and |Z|>|Y\backslash Z|>0, we still have |\Omega|>1, although now \tilde{A}\notin \mathcal{H}_n(\Omega). In any case, the volume of the domain must be strictly larger than unity if we seek to establish the existence of a solution A\in\mathcal{H}_n'(\Omega) to equation (7), as opposed to assuming unit volume when considering existence over \mathcal{S}_n'(\Omega). All considered, we wonder how the sets \mathcal{S}_n(\Omega),\mathcal{S}_n'(\Omega),\mathcal{H}_n(\Omega), and \mathcal{H}_n'(\Omega) may be characterised.

Triangular Matrices

Let A=(a_{ij})_{1\leq i, j\leq n} be a matrix of real-valued measurable functions defined almost everywhere in a domain \Omega\subset\mathbb{R}^m. If A\in\mathcal{M}_n(\Omega,\textbf{r}) (for some appropriate \textbf{r}\in\mathbb{R}^n) is a triangular matrix that solves equation (7), we arrive at

(7′) …… \displaystyle \prod_{i=1}^{n}\left(\int_{\Omega}a_{ii}\mathrm{d}x\right)=\int_{\Omega}\left(\prod_{i=1}^na_{ii}\right)\mathrm{d}x.

Notice that none of the off-diagonal entries of A feature in this equation, but only its diagonal entries appear. Therefore, without loss of generality, it suffices to assume A\in\mathcal{M}_n(\Omega,\textbf{r}) is a diagonal matrix in our search for solutions to equation (7′). We will also assume n\geq 2, since the above equation trivially holds when n=1. Now, assume \Omega is a domain that is radially symmetric with respect to the origin in \mathbb{R}^m (so 0\in \Omega and x\in \Omega\Longleftrightarrow -x\in\Omega). Given n, suppose the set of diagonal entries of A contains an odd number of almost everywhere odd functions while the remaining entries are almost everywhere even functions. It follows that, for some m\in\{1,\cdots,n\}, a_{mm} is an odd function which satisfies

\displaystyle \int_{\Omega}a_{mm}\mathrm{d}x=0

and so

\displaystyle \prod_{i=1}^n\left(\int_{\Omega}a_{ii}\mathrm{d}x\right)=0.

At the same time, the function

\displaystyle g_n(A):x\mapsto \prod_{i=1}^{n}a_{ii}(x)

is almost everywhere odd over \Omega. As such,

\displaystyle \int_{\Omega}\left(\prod_{i=1}^{n}a_{ii}\right)\mathrm{d}x=0.

In all, equation (7′) holds under these assumptions. But notice that, while our argument shows in this case \det(T(A))=0, we may not necessarily have \det(A)=0 a.e in \Omega. Moreover, we can easily produce examples of solutions to equation (7) that indeed lie outside the sets \mathcal{K}_n(\Omega,\textbf{r}) and \mathcal{K}_n'(\Omega,\textbf{r}) encountered in our discussion for non-invertible matrices.

Example 4. Let \Omega=B_1(0)\subset\mathbb{R}^m be the open unit ball centred at the origin. Define a_{11}(x_1,\cdots,x_m)=|x|^2 and a_{22}(x_1,\cdots,x_m)=\sin(x_m) over \Omega. Then, clearly \Omega is radially symmetric with respect to the origin, and a_{11}(x)=a_{11}(-x) in \Omega while a_{22}(x)=-a_{22}(-x) in \Omega. It follows that

\displaystyle \int_{B_1(0)}|x|^2\sin(x_m)\mathrm{d}x=0

and

\displaystyle \int_{B_1(0)}\sin(x_m)\mathrm{d}x=0.

Then, taking A=\text{diag}(|x|^2,\sin(x_m),1,1,\cdots,1)\in \mathbb{R}^{n\times n} we have an example of an almost everywhere non-degenerate solution in \mathcal{M}_n(B_1(0),\textbf{r}) to equation (7′) for any n\geq 2, m\geq 1 and \textbf{r}=(r_1,\cdots,r_n) for which \sum_{i=1}^n\frac{1}{r_i}=1 and r_i>1 for i=1,\cdots,n.
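As a quick sanity check, the two vanishing integrals of Example 4 can be estimated numerically. The Monte Carlo sketch below is our own addition (the choice m=3 is arbitrary): it samples points uniformly in B_1(0)\subset\mathbb{R}^3 by rejection and confirms that both integrals are approximately zero.

```python
import numpy as np

# Monte Carlo sketch (our addition; the choice m = 3 is arbitrary):
# estimate the two integrals from Example 4 over the unit ball in R^3.
# Points are sampled uniformly in the cube [-1, 1]^3 and kept if they
# fall inside B_1(0) (rejection sampling).
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(1_000_000, 3))
inside = pts[np.sum(pts**2, axis=1) < 1.0]
ball_vol = 4.0 / 3.0 * np.pi                 # |B_1(0)| in R^3

a11 = np.sum(inside**2, axis=1)              # a_11 = |x|^2, an even function
a22 = np.sin(inside[:, 2])                   # a_22 = sin(x_3), an odd function

# |B_1(0)| times the sample mean approximates each integral.
I_odd = ball_vol * np.mean(a22)              # ∫ sin(x_3) dx, should vanish
I_prod = ball_vol * np.mean(a11 * a22)       # ∫ |x|^2 sin(x_3) dx, should vanish

assert abs(I_odd) < 1e-2 and abs(I_prod) < 1e-2
print("both integrals vanish to Monte Carlo accuracy")
```

Both estimates come out on the order of the Monte Carlo error, consistent with the symmetry argument above.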

An Integral-Determinant Equation

In this section we study one consequence of commutativity of integration and determinant for invertible matrices. When A\in \mathcal{M}_n(\Omega,\textbf{r}) is a matrix whose determinant integral doesn’t vanish, we know that T(A) is invertible. As such, if we also have

(8) …… \displaystyle \text{det}(T(A))=\int_{\Omega}\text{det}(A)\mathrm{d}x

then, since \text{det}(T(A)^{-1})=\text{det}(T(A))^{-1}, we deduce

\displaystyle \text{det}(T(A)^{-1})\int_{\Omega}\text{det}(A)\mathrm{d}x=1.

Provided A^{-1}\in \mathcal{M}_n(\Omega,\textbf{r}) satisfies

(9) …… \displaystyle T(A)^{-1}=T(A^{-1})

and

(10) …… \displaystyle \text{det}(T(A^{-1}))=\int_{\Omega}\text{det}(A^{-1})\mathrm{d}x,

we find

(11) …… \displaystyle \left(\int_{\Omega}\text{det}(A)\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\mathrm{d}x\right)=1.

But, we know that

\displaystyle \text{det}(A^{-1})=\frac{1}{\text{det}(A)}

a.e in \Omega. This implies equation (11) takes the form

(12) …… \displaystyle \left(\int_{\Omega}f\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{f}\text{ }\mathrm{d}x\right)=1

with f:=\text{det}(A) a.e in \Omega. Going forward, we will first study some basic features associated with equation (12). Then, to simplify our discussion on when equations (8), (9) and (10) all hold, we will consider two kinds of matrices, namely

  • triangular matrices, and
  • orthogonal matrices.

Later on, we will study some interesting examples of functions and domains for which equation (11) holds. It is there that we generate classes of examples with which beautiful loci can be associated.

Basic Properties

Firstly, we find that the measure of the domain is necessarily bounded above by unity whenever f satisfying equation (12) is in L^1(\Omega) and positive.

Proposition 4. Let \alpha,\beta>0 be such that \alpha<\beta and let \Omega\subset\mathbb{R}^m be a domain. Suppose f is a measurable function in L^1(\Omega) that is positive a.e in \Omega and satisfies

\displaystyle \left(\int_{\Omega}f\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{f}\text{ }\mathrm{d}x\right)=1.

Then,


  1. 0<|\Omega|\leq 1, and
  2. \sqrt{\frac{\alpha}{\beta}}\leq|\Omega|\leq 1 if \alpha\leq f \leq \beta a.e in \Omega.

Proof. First of all, observe that if |\Omega|=0, equation (12) cannot hold since then

\displaystyle \int_{\Omega}f\mathrm{d}x=\int_{\Omega}\frac{1}{f}\mathrm{d}x=0.

Thus, |\Omega|>0. Next, for item 1 we find by the Cauchy-Schwarz inequality

\displaystyle |\Omega|=\int_{\Omega}\sqrt{f}\times \frac{1}{\sqrt{f}}\mathrm{d}x\leq \left(\int_{\Omega}f\mathrm{d}x\right)^{\frac{1}{2}}\left(\int_{\Omega}\frac{1}{f}\mathrm{d}x\right)^{\frac{1}{2}}=1,

as required.
For item 2, suppose the contrary holds, that is,

\displaystyle |\Omega|<\sqrt{\frac{\alpha}{\beta}}.

We already know that 0<|\Omega|\leq 1, so we must have |\Omega|\in\left(0,\sqrt{\alpha/\beta}\right). However, we find

\displaystyle 1=\left(\int_{\Omega}f\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{f}\mathrm{d}x\right)\leq \beta|\Omega|\frac{|\Omega|}{\alpha}=\frac{\beta}{\alpha}|\Omega|^2<\frac{\beta}{\alpha}\times \frac{\alpha}{\beta}=1,

giving a contradiction. //

Example 5. We find

\displaystyle \left(\int_0^{\sqrt{1-\alpha^2}}x^{\alpha}\mathrm{d}x\right)\left(\int_0^{\sqrt{1-\alpha^2}}x^{-\alpha}\mathrm{d}x\right)=1\text{ for all }0\leq \alpha<1.

In particular, there holds

\displaystyle \left(\int_0^{\frac{1}{\sqrt{2}}}x^{\frac{1}{\sqrt{2}}}\mathrm{d}x\right)\left(\int_0^{\frac{1}{\sqrt{2}}}x^{-\frac{1}{\sqrt{2}}}\mathrm{d}x\right)=1.
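The identity of Example 5 follows from the closed-form antiderivatives of x^{\pm\alpha}, and it is easy to verify numerically. The short script below (our own sketch) checks the product for several values of \alpha, and also confirms that the domain length \sqrt{1-\alpha^2} never exceeds 1, consistent with item 1 of Proposition 4.

```python
import numpy as np

# Sanity check (our addition): with L = sqrt(1 - alpha^2), the
# antiderivatives give
#   ∫_0^L x^alpha dx    = L^(alpha+1)/(alpha+1),
#   ∫_0^L x^(-alpha) dx = L^(1-alpha)/(1-alpha),
# so the product is L^2/(1 - alpha^2) = 1. We also confirm L <= 1,
# consistent with item 1 of Proposition 4.
for alpha in [0.0, 0.3, 1 / np.sqrt(2), 0.9, 0.99]:
    L = np.sqrt(1.0 - alpha**2)
    I_plus = L**(alpha + 1) / (alpha + 1)    # ∫_0^L x^alpha dx
    I_minus = L**(1 - alpha) / (1 - alpha)   # ∫_0^L x^(-alpha) dx
    assert abs(I_plus * I_minus - 1.0) < 1e-12
    assert 0.0 < L <= 1.0
print("product equals 1 for every alpha tested")
```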

In the upcoming result we unravel what conditions on the domain are required when a function f satisfying (12) is allowed to assume negative values. Given a measurable function f over a domain \Omega, define its positive and negative parts via f^{+}:=\text{max}(f,0) and f^{-}:=-\text{min}(0,f), respectively, yielding f=f^{+}-f^{-} a.e in \Omega. Suppose f^{+} and f^{-} are positive everywhere in G\subset\Omega and H\subset\Omega, respectively, in such a way that |\Omega \backslash (G\cup H)|=0 and G \cap H=\emptyset, while f^{+},\frac{1}{f^{+}}\in L^1(G) and f^{-},\frac{1}{f^{-}}\in L^1(H). After employing a similar argument involving the Cauchy-Schwarz inequality and elementary estimation of integrals, we obtain a generalisation (of sorts) of Proposition 4 in the following.

Theorem 1. Let \alpha^{+},\alpha^{-},\beta^{+},\beta^{-} be real numbers for which 0<\alpha^{+}<\beta^{+} and 0<\alpha^{-}<\beta^{-}. Assume \Omega\subset\mathbb{R}^m is a domain which we partition into three subsets G,H,Z such that \Omega=G\cup H\cup Z, G\cap H=\emptyset, G\cap Z=\emptyset, H\cap Z=\emptyset, and |\Omega\backslash (G\cup H)|=0. Suppose f=f^{+}-f^{-} is a measurable function over \Omega such that

  • f^{+} and f^{-} are positive over G and H, respectively, and
  • f^{+},\frac{1}{f^{+}}\in L^1(G) while f^{-},\frac{1}{f^{-}}\in L^1(H).

If


\displaystyle \left(\int_{\Omega}f\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{f}\text{ }\mathrm{d}x\right)=1

then


  • \left|\left|G\right|-|H|\right|\leq \sqrt{Q+1-2\left|G\right||H|},
  • |\Omega|\leq \sqrt{Q+1+2\left|G\right||H|}, and
  • |G||H|\leq \frac{Q+1}{2}

where


\displaystyle Q:=\left(\int_{G}f^{+}\mathrm{d}x\right)\left(\int_{H}\frac{1}{f^{-}}\mathrm{d}x\right)+\left(\int_{H}f^{-}\mathrm{d}x\right)\left(\int_{G}\frac{1}{f^{+}}\mathrm{d}x\right).

Moreover, if \alpha^{+}\leq f^{+}\leq \beta^{+} over G while \alpha^{-}\leq f^{-}\leq \beta^{-} over H, then

\displaystyle 1+\left(\frac{\beta^{+}}{\beta^{-}}+\frac{\alpha^{-}}{\alpha^{+}}\right)\left|G\right||H|\leq \frac{\beta^{+}}{\alpha^{+}}\left|G\right|^2+\frac{\alpha^{-}}{\beta^{-}}|H|^2

and


\displaystyle \sqrt{\frac{\alpha^{+}\beta^{-}+(\alpha^{+}\beta^{+}+\alpha^{-}\beta^{-})|G||H|}{\alpha^{+}\alpha^{-}+\beta^{+}\beta^{-}}}\leq |\Omega|\leq\sqrt{ Q+1+2\left|G\right||H|}.
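To see that the bounds of Theorem 1 are attainable, one can test them on a concrete sign-changing function. The sketch below is our own construction (the specific numbers are hypothetical): f is piecewise constant, equal to p>0 on G=(0,2) and -q<0 on H=(2,2.5), with p chosen so that equation (12) holds. For this choice the first two bullet points hold with equality.

```python
import math

# Numerical sketch (our own construction, not from the text): a
# piecewise-constant sign-changing f satisfying equation (12), used to
# test the bounds of Theorem 1. Take Omega = (0, 2.5) with
#   f = p  on G = (0, 2)     (so f^+ = p, |G| = 2)
#   f = -q on H = (2, 2.5)   (so f^- = q, |H| = 0.5)
g, h, q = 2.0, 0.5, 1.0
# Equation (12) reads (pg - qh)(g/p - h/q) = 1, i.e.
# g^2 + h^2 - gh(p/q + q/p) = 1; solve p/q + q/p = (g^2 + h^2 - 1)/(gh).
t = (g**2 + h**2 - 1) / (g * h)
p = q * (t + math.sqrt(t**2 - 4)) / 2

assert abs((p * g - q * h) * (g / p - h / q) - 1) < 1e-12    # equation (12)

# Q as defined in the theorem.
Q = (p * g) * (h / q) + (q * h) * (g / p)
assert abs(g - h) <= math.sqrt(Q + 1 - 2 * g * h) + 1e-12    # first bullet
assert g + h <= math.sqrt(Q + 1 + 2 * g * h) + 1e-12         # second bullet, |Omega| = g + h
assert g * h <= (Q + 1) / 2                                  # third bullet
print("Theorem 1 bounds verified; Q =", round(Q, 4))
```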

Triangular Matrices

Suppose A\in\mathcal{M}_2(\Omega,\textbf{r}) is upper triangular with

\displaystyle A= \begin{pmatrix} a & b \\ 0 & c \end{pmatrix}.

Further, assume A satisfies the conditions of Proposition 1, which indicates A is invertible with A^{-1}\in\mathcal{M}_2(\Omega,\textbf{r}) also. The desired equation (8) reads

\displaystyle \left(\int_{\Omega} a\mathrm{d}x\right)\left(\int_{\Omega}c\mathrm{d}x\right)=\int_{\Omega}ac\mathrm{d}x.

Provided \Omega\subset\mathbb{R}^m has unit volume and c\equiv 1 a.e in \Omega, we see that

\displaystyle \left(\int_{\Omega} a\mathrm{d}x\right)\left(\int_{\Omega}c\mathrm{d}x\right)=\int_{\Omega}a\mathrm{d}x

and


\displaystyle \int_{\Omega}ac\mathrm{d}x=\int_{\Omega}a\mathrm{d}x,

and so (8) is satisfied. Continuing with this case, we investigate (9) which we would like to be satisfied by

\displaystyle A= \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix},

whose inverse is given by


\displaystyle A^{-1}= \begin{pmatrix} \frac{1}{a} & -\frac{b}{a} \\ 0 & 1 \end{pmatrix}.

The integral of A is given by


\displaystyle T(A)= \begin{pmatrix} \int_{\Omega}a\mathrm{d}x & \int_{\Omega}b\mathrm{d}x \\ 0 & 1 \end{pmatrix},

with inverse

\displaystyle T(A)^{-1}= \begin{pmatrix} \frac{1}{\int_{\Omega}a\mathrm{d}x} & -\frac{\int_{\Omega}b\mathrm{d}x}{\int_{\Omega}a\mathrm{d}x} \\ 0 & 1 \end{pmatrix}.

The integral of A^{-1} is given by

\displaystyle T(A^{-1})= \begin{pmatrix} \int_{\Omega}\frac{1}{a}\mathrm{d}x & -\int_{\Omega}\frac{b}{a}\mathrm{d}x \\ 0 & 1 \end{pmatrix}.

From (9) we necessarily have

(13) …… \left(\int_{\Omega}a\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{a}\mathrm{d}x\right)=1

and


\displaystyle \left(\int_{\Omega}a\mathrm{d}x\right)\left(\int_{\Omega}\frac{b}{a}\mathrm{d}x\right)=\int_{\Omega}b\mathrm{d}x.

We are then left with (10) which is equivalent to, in the current case,

\displaystyle \text{det}(T(A^{-1}))=\int_{\Omega}\frac{1}{a}\mathrm{d}x=\int_{\Omega}\text{det}(A^{-1})\mathrm{d}x.

This holds without having to make any further assumptions on the matrix A. Moreover, notice that (13) is actually (11) in disguise since A is upper-triangular. If a\equiv1 a.e in \Omega also, we have the following elementary proposition in which A is not the identity.

Proposition 4. Let \Omega\subset \mathbb{R}^m be a domain of unit volume, with m\in\mathbb{N} given, and let \textbf{\emph{r}}=(r_1,r_2) be such that r_1^{-1}+r_2^{-1}=1 and r_1,r_2\geq1. Suppose A, given by

\displaystyle A=\begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}

a.e in \Omega, is in \mathcal{M}_2(\Omega,\textbf{r}). Then,

  • \displaystyle \text{det}(T(A))=\int_{\Omega}\text{det}(A)\mathrm{d}x,
  • \displaystyle \text{det}(T(A^{-1}))=\int_{\Omega}\text{det}(A^{-1})\mathrm{d}x,
  • \displaystyle T(A)^{-1}=T(A^{-1}), and
  • \displaystyle \left(\int_{\Omega}\text{det}(A)\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\mathrm{d}x\right)=1.
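Each of the four identities in the proposition above can be checked directly by quadrature. In the sketch below (our own, with the hypothetical choices \Omega=(0,1) and b(x)=\sin(2\pi x)) we approximate T(A) and T(A^{-1}) numerically and verify all four bullet points.

```python
import numpy as np

# Numerical sketch: Omega = (0, 1) has unit volume; take the hypothetical
# off-diagonal entry b(x) = sin(2*pi*x). Since det(A) = det(A^{-1}) = 1
# a.e in Omega, both determinant integrals equal |Omega| = 1.
x = np.linspace(0.0, 1.0, 100_001)
b = np.sin(2 * np.pi * x)
int_b = float(np.mean(b))                        # Riemann approximation of ∫ b dx

T_A = np.array([[1.0, int_b], [0.0, 1.0]])       # T(A)
T_A_inv = np.array([[1.0, -int_b], [0.0, 1.0]])  # T(A^{-1}), since A^{-1} = [[1, -b], [0, 1]]

int_detA = 1.0                                   # ∫ det(A) dx = |Omega| = 1
int_detAinv = 1.0                                # ∫ det(A^{-1}) dx = |Omega| = 1

assert abs(np.linalg.det(T_A) - int_detA) < 1e-12          # first bullet
assert abs(np.linalg.det(T_A_inv) - int_detAinv) < 1e-12   # second bullet
assert np.allclose(np.linalg.inv(T_A), T_A_inv)            # third bullet
assert abs(int_detA * int_detAinv - 1.0) < 1e-12           # fourth bullet
print("all four identities verified")
```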

Let’s return to the desired equation (8), namely

(14) …… \displaystyle \left(\int_{\Omega} a\mathrm{d}x\right)\left(\int_{\Omega}c\mathrm{d}x\right)=\int_{\Omega}ac\mathrm{d}x

and assume it holds. If A satisfies the requirements of Proposition 1, then there exists a constant C>0 such that |\text{det}(A)|=|ac|\geq C a.e in \Omega. Hence, neither a nor c can vanish on a subset of \Omega of positive m-dimensional Lebesgue measure. As such, we calculate A^{-1} as

\displaystyle A^{-1}=\begin{pmatrix} \frac{1}{a} & -\frac{b}{ac}\\0 & \frac{1}{c} \end{pmatrix}

a.e in \Omega. We again seek to write out the formula given by (9). To this end, we have

\displaystyle T(A)= \begin{pmatrix} \int_{\Omega}a\mathrm{d}x & \int_{\Omega}b\mathrm{d}x \\ 0 & \int_{\Omega}c\mathrm{d}x \end{pmatrix},

with inverse given by

\displaystyle T(A)^{-1} = \begin{pmatrix} \frac{1}{\int_{\Omega}a\mathrm{d}x} & -\frac{\int_{\Omega}b\mathrm{d}x}{\left(\int_{\Omega}a\mathrm{d}x\right)\left(\int_{\Omega}c\mathrm{d}x\right)} \\ 0 & \frac{1}{\int_{\Omega}c\mathrm{d}x}\end{pmatrix}

or, equivalently,


\displaystyle T(A)^{-1}=\begin{pmatrix} \frac{1}{\int_{\Omega}a\mathrm{d}x} & -\frac{\int_{\Omega}b\mathrm{d}x}{\int_{\Omega}ac\mathrm{d}x} \\ 0 & \frac{1}{\int_{\Omega}c\mathrm{d}x} \end{pmatrix},

due to (14), while

\displaystyle T(A^{-1})=\begin{pmatrix} \int_{\Omega}\frac{1}{a}\mathrm{d}x & -\int_{\Omega}\frac{b}{ac}\mathrm{d}x\\0 & \int_{\Omega}\frac{1}{c}\mathrm{d}x \end{pmatrix}.

Hence, T(A)^{-1}=T(A^{-1}) if and only if

(15) …… \displaystyle \left(\int_{\Omega}a\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{a}\mathrm{d}x\right)=1,

(16) …… \displaystyle \left(\int_{\Omega}\frac{b}{ac}\mathrm{d}x\right)\left(\int_{\Omega}ac\mathrm{d}x\right)=\int_{\Omega}b\mathrm{d}x,

and


(17) …… \displaystyle \left(\int_{\Omega}c\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{c}\mathrm{d}x\right)=1.

Finally, (10) written out reads

(18) …… \displaystyle \left(\int_{\Omega}\frac{1}{a}\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{c}\mathrm{d}x\right)=\int_{\Omega}\frac{1}{ac}\mathrm{d}x.

Then, by the argument given earlier on, we deduce that

(19) …… \displaystyle \left(\int_{\Omega}ac\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{ac}\mathrm{d}x\right)=1

which is precisely

\displaystyle \left(\int_{\Omega}\text{det}(A)\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\mathrm{d}x\right)=1.

But we observe that (19) can be derived from (14), (15), (17), and (18) alone. There is no need for (16) to hold, and so no need for the full identity T(A)^{-1}=T(A^{-1}). This leads us to the following result.

Proposition 5. Let \Omega\subset \mathbb{R}^m be a domain, with m\in\mathbb{N}, and let \textbf{r}=(r_1,r_2) be such that r_1^{-1}+r_2^{-1}=1 and r_1,r_2\geq 1. Suppose A\in\mathcal{M}_2(\Omega,\textbf{r}) is given by

\displaystyle A=\begin{pmatrix} a & b \\ 0 & c \end{pmatrix}

a.e in \Omega, and satisfies

  • a,c\in L^{r_1}(\Omega)\cap L^{r_2}(\Omega)\cap L^1(\Omega), and
  • there exists a constant C>0 for which |\text{det}(A)|\geq C a.e in \Omega.

If there further hold

\displaystyle \left(\int_{\Omega} a\text{ }\mathrm{d}x\right)\left(\int_{\Omega}c\text{ }\mathrm{d}x\right)=\int_{\Omega}ac\text{ }\mathrm{d}x,\text{ } \text{ }\left(\int_{\Omega}a\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{a}\text{ }\mathrm{d}x\right)=1,

\displaystyle \left(\int_{\Omega}c\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{c}\text{ }\mathrm{d}x\right)=1, \text{ }\text{ and } \text{ } \left(\int_{\Omega}\frac{1}{a}\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{c}\text{ }\mathrm{d}x\right)=\int_{\Omega}\frac{1}{ac}\text{ }\mathrm{d}x,

then


\displaystyle \left(\int_{\Omega}\text{det}(A)\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\text{ }\mathrm{d}x\right)=1.

Remark 2. Since no conditions are placed on the entries off the main diagonal of A (apart from those due to membership of A in \mathcal{M}_2(\Omega,\textbf{r})), we deduce this result also holds in the case of lower triangular 2\times 2 matrices. Moreover, we’d like to think of the equality conditions given in the result as pseudo-intuitive relationships, each of which we believe is a starting point for interesting numerical analysis. Furthermore, if we allow for inequality, one can ask how easy it is to produce continuous functions which satisfy

\displaystyle \left|\left(\int_{\Omega}a\text{ }\mathrm{d}x\right)\left(\int_{\Omega} \frac{1}{a}\text{ }\mathrm{d}x\right)-1\right|\leq \epsilon

with a given tolerance 0<\epsilon<\frac{1}{2}. We could also ask functional analytic questions on the above inequality. For example, in what function space does the above inequality describe a closed set?

What can be said about 3\times3 upper triangular matrices? If the matrix

\displaystyle A=\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23}\\ 0 & 0 & a_{33} \end{pmatrix}

is invertible, its inverse is given by

\displaystyle A^{-1}=\frac{1}{a_{11}a_{22}a_{33}}\begin{pmatrix} a_{22}a_{33} & -(a_{12}a_{33}) & a_{12}a_{23}-a_{13}a_{22} \\ 0 & a_{11}a_{33} & -(a_{11}a_{23})\\ 0 & 0 & a_{11}a_{22} \end{pmatrix}

or rather

\displaystyle A^{-1}=\begin{pmatrix} \frac{1}{a_{11}} & -\frac{a_{12}}{a_{11}a_{22}} & \frac{a_{12}a_{23}-a_{13}a_{22}}{a_{11}a_{22}a_{33}} \\ 0 & \frac{1}{a_{22}} & -\frac{a_{23}}{a_{22}a_{33}}\\ 0 & 0 & \frac{1}{a_{33}} \end{pmatrix}.

In this case, (8) reads

(20) …… \displaystyle \left(\int_{\Omega}a_{11}\mathrm{d}x\right)\left(\int_{\Omega}a_{22}\mathrm{d}x\right)\left(\int_{\Omega}a_{33}\mathrm{d}x\right)=\int_{\Omega}a_{11}a_{22}a_{33}\mathrm{d}x.

Equation (10) for A^{-1} reads

(21) …… \displaystyle \left(\int_{\Omega}\frac{1}{a_{11}}\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{a_{22}}\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{a_{33}}\mathrm{d}x\right)=\int_{\Omega}\frac{1}{a_{11}a_{22}a_{33}}\mathrm{d}x.

Meanwhile, we have


\displaystyle T(A)=\begin{pmatrix} \int_{\Omega}a_{11}\mathrm{d}x & \int_{\Omega}a_{12}\mathrm{d}x & \int_{\Omega}a_{13}\mathrm{d}x \\ 0 & \int_{\Omega}a_{22}\mathrm{d}x & \int_{\Omega}a_{23}\mathrm{d}x\\ 0 & 0 & \int_{\Omega}a_{33}\mathrm{d}x \end{pmatrix}

and if T(A) is invertible its inverse is

\displaystyle \begin{pmatrix} \frac{1}{\int_{\Omega}a_{11}\mathrm{d}x} & -\frac{\int_{\Omega}a_{12}\mathrm{d}x}{\left(\int_{\Omega}a_{11}\mathrm{d}x\right)\left(\int_{\Omega}a_{22}\mathrm{d}x\right)} & \frac{\left(\int_{\Omega}a_{12}\mathrm{d}x\right)\left(\int_{\Omega}a_{23}\mathrm{d}x\right)-\left(\int_{\Omega}a_{13}\mathrm{d}x\right)\left(\int_{\Omega}a_{22}\mathrm{d}x\right)}{\left(\int_{\Omega}a_{11}\mathrm{d}x\right)\left(\int_{\Omega}a_{22}\mathrm{d}x\right)\left(\int_{\Omega}a_{33}\mathrm{d}x\right)} \\ 0 & \frac{1}{\int_{\Omega}a_{22}\mathrm{d}x} & -\frac{\int_{\Omega}a_{23}\mathrm{d}x}{\left(\int_{\Omega}a_{22}\mathrm{d}x\right)\left(\int_{\Omega}a_{33}\mathrm{d}x\right)}\\ 0 & 0 & \frac{1}{\int_{\Omega}a_{33}\mathrm{d}x} \end{pmatrix}.

On the other hand, the integral of A^{-1} is given by

\displaystyle T(A^{-1})=\begin{pmatrix} \int_{\Omega}\frac{1}{a_{11}}\mathrm{d}x & -\int_{\Omega}\frac{a_{12}}{a_{11}a_{22}}\mathrm{d}x & \int_{\Omega}\frac{a_{12}a_{23}-a_{13}a_{22}}{a_{11}a_{22}a_{33}}\mathrm{d}x \\ 0 & \int_{\Omega}\frac{1}{a_{22}}\mathrm{d}x & -\int_{\Omega}\frac{a_{23}}{a_{22}a_{33}}\mathrm{d}x\\ 0 & 0 & \int_{\Omega}\frac{1}{a_{33}}\mathrm{d}x \end{pmatrix}.

Therefore, (9) implies the following system of equations.

(22) …… \displaystyle \left(\int_{\Omega}a_{ii}\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{a_{ii}}\mathrm{d}x\right)=1\text{ for }i=1,2,3

\displaystyle \frac{\int_{\Omega}a_{12}\mathrm{d}x}{\left(\int_{\Omega}a_{11}\mathrm{d}x\right)\left(\int_{\Omega}a_{22}\mathrm{d}x\right)}=\int_{\Omega}\frac{a_{12}}{a_{11}a_{22}}\mathrm{d}x,

\displaystyle \frac{\left(\int_{\Omega}a_{12}\mathrm{d}x\right)\left(\int_{\Omega}a_{23}\mathrm{d}x\right)-\left(\int_{\Omega}a_{13}\mathrm{d}x\right)\left(\int_{\Omega}a_{22}\mathrm{d}x\right)}{\left(\int_{\Omega}a_{11}\mathrm{d}x\right)\left(\int_{\Omega}a_{22}\mathrm{d}x\right)\left(\int_{\Omega}a_{33}\mathrm{d}x\right)}= \int_{\Omega}\frac{a_{12}a_{23}-a_{13}a_{22}}{a_{11}a_{22}a_{33}}\mathrm{d}x,

and


\displaystyle \frac{\int_{\Omega}a_{23}\mathrm{d}x}{\left(\int_{\Omega}a_{22}\mathrm{d}x\right)\left(\int_{\Omega}a_{33}\mathrm{d}x\right)}=\int_{\Omega}\frac{a_{23}}{a_{22}a_{33}}\mathrm{d}x.

Again, we see that some equations aren’t necessary to conclude the relationship we desire. Multiplying (20) and (21), and noting equations (22), we deduce

\displaystyle \left(\int_{\Omega}\text{det}(A)\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\mathrm{d}x\right)=\left(\int_{\Omega}a_{11}a_{22}a_{33}\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{a_{11}a_{22}a_{33}}\mathrm{d}x\right)=\prod_{i=1}^{3}\left[\left(\int_{\Omega}a_{ii}\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{a_{ii}}\mathrm{d}x\right)\right]=1.

Thus, the equations obtained by equating the off-diagonal entries in (9) aren’t necessary for deriving (11). These calculations, together with Proposition 3, suggest the following generalisation of Proposition 5 to upper-triangular matrices A\in\mathcal{M}_n(\Omega,\textbf{r}).

Theorem 1. Let n,m\in\mathbb{N} be given. Suppose \Omega\subset \mathbb{R}^m is a domain, and \textbf{r}=(r_1,\cdots,r_n) is such that r_1^{-1}+\cdots+r_n^{-1}=1 with r_1,\cdots, r_n\geq1. Let A=(a_{ij})_{1\leq i,j\leq n} be an upper triangular matrix in \mathcal{M}_n(\Omega,\textbf{r}) that satisfies

  • a_{ij}\in L^{\infty}(\Omega) for all i,j\in\{1,\cdots,n\}, and
  • there exists a constant C>0 for which |\text{det}(A)|\geq C a.e in \Omega.

If


\displaystyle \prod_{i=1}^n\left(\int_{\Omega} a_{ii}\text{ }\mathrm{d}x\right)=\int_{\Omega}\left(\prod_{i=1}^na_{ii}\right)\text{ }\mathrm{d}x,\text{ }\text{ }\prod_{i=1}^n\left(\int_{\Omega}\frac{1}{a_{ii}}\text{ }\mathrm{d}x\right)=\int_{\Omega}\left(\prod_{i=1}^n\frac{1}{a_{ii}}\right)\text{ }\mathrm{d}x,

and


\displaystyle \left(\int_{\Omega}a_{ii}\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\frac{1}{a_{ii}}\text{ }\mathrm{d}x\right)=1

for i=1,\cdots,n, then

\displaystyle \left(\int_{\Omega}\text{det}(A)\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\text{ }\mathrm{d}x\right)=1.
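A minimal numerical instance of this theorem (our own, and deliberately simple) takes \Omega=(0,1) and constant diagonal entries, for which every hypothesis holds trivially; the off-diagonal entries may be arbitrary integrable functions, since an upper-triangular determinant only sees the diagonal.

```python
import numpy as np

# Minimal sketch (our addition, deliberately simple): Omega = (0, 1) has
# unit volume, and the diagonal entries are the hypothetical constants
# c_i below. Every hypothesis of the theorem then holds trivially:
#   (∫ a_ii dx)(∫ 1/a_ii dx) = c_i * (1/c_i) = 1,
# and both product conditions reduce to prod(c_i) = prod(c_i). The
# off-diagonal entries never enter, since an upper-triangular
# determinant is the product of the diagonal.
c = np.array([2.0, 0.5, 3.0])

int_detA = float(np.prod(c))            # ∫ det(A) dx = (c1 c2 c3) |Omega|
int_detAinv = float(np.prod(1.0 / c))   # ∫ det(A^{-1}) dx = |Omega| / (c1 c2 c3)
assert abs(int_detA * int_detAinv - 1.0) < 1e-12
print("conclusion of the theorem holds")
```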

Orthogonal Matrices

Let’s consider the relation

(23) …… \displaystyle \left(\int_{\Omega}\text{det}(A)\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\text{ }\mathrm{d}x\right)=1

in the context of orthogonal matrices A\in \text{O}_n(\Omega). Earlier on we described these matrices when we studied the commutativity of integration and determinant (so recall the set Y is where \det{A}\neq 0 and the set Z is where \det{A}=-1 for an orthogonal matrix A\in \text{O}_n(\Omega)). Let A=(a_{ij})_{1\leq i,j \leq n} be a matrix whose entries are real-valued measurable functions defined a.e in \Omega.

Provided A\in\text{SO}_n(\Omega), we have \text{det}(A)=1 a.e in \Omega, and so the relation

(24) …… \displaystyle \text{det}(A)\text{det}(A^{-1})=1

which holds a.e in \Omega implies

\displaystyle \text{det}(A^{-1})=1

a.e in \Omega. Therefore, if (23) holds for such A, \Omega must have unit volume.

Proposition 6. Let n,m\in\mathbb{N} be given and suppose \Omega\subset\mathbb{R}^m is a domain of unit volume. If A\in \text{SO}_n(\Omega), then

\displaystyle \left(\int_{\Omega}\text{det}(A)\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\text{ }\mathrm{d}x\right)=1.
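A direct check of this proposition (our sketch, with a hypothetical rotation field): on \Omega=(0,1), let A(x) be the 2\times 2 rotation by angle 2\pi x, so that A\in\text{SO}_2(\Omega) and \det(A)\equiv 1.

```python
import numpy as np

# Sketch (our addition): a rotation field on Omega = (0, 1), which has
# unit volume. A(x) rotates by the angle 2*pi*x, so A(x) lies in SO_2
# for every x and det(A) = 1 identically.
x = np.linspace(0.0, 1.0, 10_001)
theta = 2.0 * np.pi * x
detA = np.cos(theta) ** 2 + np.sin(theta) ** 2   # det of a rotation matrix, = 1

int_detA = float(np.mean(detA))                  # ∫ det(A) dx = |Omega| = 1
int_detAinv = float(np.mean(1.0 / detA))         # ∫ det(A^{-1}) dx = 1
assert abs(int_detA * int_detAinv - 1.0) < 1e-9
print("product of determinant integrals equals 1")
```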

Observe that whenever A\in \text{O}_{n}(\Omega), (24) implies \text{det}(A) and \text{det}(A^{-1}) assume the same sign a.e in \Omega. For such A, equation (23), namely

\displaystyle \left(\int_{\Omega}\text{det}(A)\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\text{ }\mathrm{d}x\right)=1,

holds if and only if

\displaystyle \left(\int_{\Omega\backslash Z}\mathrm{d}x+\int_{Z}(-1)\mathrm{d}x\right)\left(\int_{\Omega\backslash Z}\mathrm{d}x+\int_{Z}(-1)\mathrm{d}x\right)=1

which is equivalent to

\displaystyle  (|\Omega\backslash Z|-|Z|)^2=1


(25) …… \displaystyle (|Y\backslash Z|-|Z|)^2=1

since |\Omega\backslash Z|=|Y\backslash Z|. One may interpret (25) as a condition which says, for a given domain \Omega\subset \mathbb{R}^m and a matrix A\in \text{O}_n(\Omega), there exist two disjoint subsets G,H of \Omega such that

  • the union of G,H exhausts \Omega in measure,
  • the determinant of A is 1, -1 on G,H respectively, and
  • the volumetric difference between G and H is unity.

So, above we could take G=Y\backslash Z and H=Z, for example. Moreover, when (25) holds, either |Z|=|Y\backslash Z|+1, in which case \text{det}(A) equals -1 more often than +1, or |Y\backslash Z|=|Z|+1, in which case \text{det}(A) equals +1 more often than -1. In all, we can state

Theorem 2. Let n,m\in\mathbb{N} be given and assume \Omega\subset\mathbb{R}^m is a domain. Suppose A\in\text{O}_n(\Omega) is such that there exist sets G,H\subset \Omega for which

  • G\cap H=\emptyset
  • the union of G,H exhausts \Omega with respect to m-dimensional Lebesgue measure,
  • the determinant of A is 1, -1 on G,H respectively, and
  • the volumetric difference between G and H is unity.

Then,


\displaystyle \left(\int_{\Omega}\text{det}(A)\text{ }\mathrm{d}x\right)\left(\int_{\Omega}\text{det}(A^{-1})\text{ }\mathrm{d}x\right)=1.
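The hypotheses of this theorem are easy to realise numerically. In the sketch below (our own construction; the specific sets are hypothetical) \Omega=(0,2.5) splits into G=(0,1.75), where A(x) is a rotation with \det(A)=1, and H=(1.75,2.5), where A(x) is a reflection with \det(A)=-1, so that \left||G|-|H|\right|=1.

```python
import numpy as np

# Sketch (our construction; the specific sets are hypothetical):
# Omega = (0, 2.5), with A(x) a rotation (det = +1) on G = (0, 1.75)
# and a reflection (det = -1) on H = (1.75, 2.5). Then
# ||G| - |H|| = |1.75 - 0.75| = 1, as the theorem requires.
omega_len, G_len = 2.5, 1.75
x = np.linspace(0.0, omega_len, 250_001)
detA = np.where(x < G_len, 1.0, -1.0)    # det(A); det(A^{-1}) is identical on O_n

int_detA = float(np.mean(detA)) * omega_len   # ≈ |G| - |H| = 1
product = int_detA * int_detA                 # (∫ det A dx)(∫ det A^{-1} dx)
assert abs(product - 1.0) < 1e-3
print("product of determinant integrals is approximately 1")
```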

Copyright © 2021 Yohance Osborne