
Proximal Operator - Scaling by a Matrix

The proximal operator is defined for matrices as a map $\operatorname{prox}_f : \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$, \begin{equation*} \operatorname{prox}_f(X) := \arg \min_{Y \in \mathbb{R}^{m \times n}} f(Y) + \frac12 \|Y - X\|^2. \end{equation*} In the vector case, it is known that if $f(x) = \phi(x/p)$ with $p \in \mathbb{R}$, $p \neq 0$, then $\operatorname{prox}_f(x) = p \operatorname{prox}_{\phi/p^2}(x/p)$. Is there an analogue when we scale by an arbitrary matrix, i.e. one that is not necessarily invertible? For example, take $f(X) = \|PX\|_1$, where $P \in \mathbb{R}^{m \times m}$ and $X \in \mathbb{R}^{m \times n}$ is the variable. It is known that the proximal mapping of $\|X\|_1$ is the soft-thresholding operator, so I need to 'get rid of' $P$ inside the function.
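(For concreteness, here is a small NumPy sketch of the vector scaling rule I mean, specialized to $\phi = \|\cdot\|_1$ and $p > 0$; the helper name `soft_threshold` is my own.)

```python
import numpy as np

def soft_threshold(x, t):
    # Prox of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# For phi = ||.||_1 and p > 0, f(x) = phi(x/p) = (1/p)||x||_1,
# whose prox is soft-thresholding with threshold 1/p.  The scaling
# rule says this must equal p * prox_{phi/p^2}(x/p).
p = 3.0
x = np.random.default_rng(0).standard_normal(5)
direct = soft_threshold(x, 1.0 / p)
via_rule = p * soft_threshold(x / p, 1.0 / p**2)
assert np.allclose(direct, via_rule)
```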

No, there is no way to do this (even if $P$ is invertible), except in some special cases. If we could, it would make it easy to derive very effective methods for a lot of important convex optimization problems.

One special case where we can do this is when the matrix $P$ is orthogonal. To evaluate \begin{equation*} \arg \min_x f(Px) + \frac12 \|x - \hat{x} \|^2 \end{equation*} when $P$ is orthogonal, we can make the substitution $w = Px$, so that $x = P^T w$. Our problem is now to evaluate \begin{equation*} \arg \min_w f(w) + \frac12 \|P^T w- \hat{x} \|^2 = \arg \min_w f(w) + \frac12 \|w - P \hat{x} \|^2, \end{equation*} where the equality holds because multiplication by $P$ preserves norms. The minimizer in $w$ is $\operatorname{prox}_f(P\hat{x})$, and mapping back gives $\operatorname{prox}_{f \circ P}(\hat{x}) = P^T \operatorname{prox}_f(P\hat{x})$. So we need only evaluate the prox operator of $f$.
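Here is a minimal NumPy sketch of this identity for $f = \|\cdot\|_1$ (the function names are mine, and the final loop only checks that no nearby point achieves a lower objective value):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_l1_after_orthogonal(P, x_hat, t=1.0):
    # prox of t*||Px||_1 for orthogonal P: map in, soft-threshold, map back.
    return P.T @ soft_threshold(P @ x_hat, t)

rng = np.random.default_rng(0)
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal matrix
x_hat = rng.standard_normal(4)
x_star = prox_l1_after_orthogonal(P, x_hat)

# x_star is the global minimizer, so no perturbed point should do better.
obj = lambda x: np.sum(np.abs(P @ x)) + 0.5 * np.sum((x - x_hat) ** 2)
for _ in range(100):
    assert obj(x_star) <= obj(x_star + 0.1 * rng.standard_normal(4)) + 1e-12
```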

Another special case is when $f(x) = \frac12 \| x \|^2$ and $P$ is a convolution operator. In this case the prox operator of $g(x) = f(Px)$ can be evaluated efficiently: setting the derivative equal to $0$ gives the linear system $(I + P^T P)x = \hat{x}$, which the FFT diagonalizes because $P$ is a convolution.
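A sketch of what this looks like in practice, assuming circular convolution (the function names are my own): the eigenvalues of $P^T P$ are $|\operatorname{fft}(h)|^2$, so the linear system reduces to elementwise division in the Fourier domain.

```python
import numpy as np

def prox_half_sq_conv(h, x_hat):
    # Prox of g(x) = 0.5 * ||h (*) x||^2, with (*) circular convolution.
    # (I + P^T P) x = x_hat diagonalizes under the FFT: the eigenvalues
    # of P^T P are |fft(h)|^2, so we divide elementwise in Fourier space.
    H = np.fft.fft(h, n=len(x_hat))
    return np.real(np.fft.ifft(np.fft.fft(x_hat) / (1.0 + np.abs(H) ** 2)))

rng = np.random.default_rng(1)
n = 8
h, x_hat = rng.standard_normal(n), rng.standard_normal(n)

# Cross-check against an explicit circulant matrix and a dense solve.
P = np.column_stack([np.roll(h, k) for k in range(n)])
direct = np.linalg.solve(np.eye(n) + P.T @ P, x_hat)
assert np.allclose(prox_half_sq_conv(h, x_hat), direct)
```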
