
What Stopping Criterion to Use in Projected Gradient Descent?

Suppose we want to solve a convex constrained minimization problem using projected gradient descent. If there were no constraint, the stopping condition for a gradient descent algorithm would be that **the gradient of the function is close to zero**. But for a constrained problem, the gradient at the optimal point is not necessarily (close to) zero. What stopping condition should we use for a projected gradient descent algorithm?

Just as in the gradient descent method you stop when the norm of the gradient is sufficiently small, in the projected gradient method you stop when the norm of the **projected gradient** (also called the gradient mapping) is sufficiently small. With step size γ and projection P onto the feasible set, the projected gradient at x is G(x) = (x − P(x − γ∇f(x)))/γ, i.e., the scaled difference between the current iterate and the next one. Suppose the projected gradient is zero. Geometrically, that means the negative gradient lies in the normal cone to the feasible set at x. If you had linear equality constraints only, it would mean that the gradient vector is orthogonal to the feasible set. In other words, it is locally impossible to find a descent direction, and you have first-order optimality.
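As a minimal sketch of this stopping rule, here is projected gradient descent on an illustrative problem (the quadratic objective, the box constraint, and the tolerance are my own choices, not from the question): minimize f(x) = ½‖Ax − b‖² over the box [0, 1]ⁿ, stopping when the norm of the projected gradient falls below a tolerance.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projected_gradient_descent(A, b, step, tol=1e-8, max_iter=10_000):
    """Minimize 0.5*||A x - b||^2 over [0, 1]^n by projected gradient descent.

    Stops when the norm of the projected gradient
        G(x) = (x - P(x - step * grad f(x))) / step
    is at most `tol`.
    """
    x = np.zeros(A.shape[1])
    pg_norm = np.inf
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)
        x_next = project_box(x - step * grad)
        # Projected gradient (gradient mapping): zero iff x is first-order optimal.
        pg_norm = np.linalg.norm((x - x_next) / step)
        x = x_next
        if pg_norm <= tol:
            break
    return x, pg_norm

# Example: unconstrained minimizer of f is (1.5, -1), which is infeasible,
# so the gradient is NOT zero at the constrained optimum (1, 0) --
# but the projected gradient is.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, -1.0])
L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the gradient
x_opt, pg = projected_gradient_descent(A, b, step=1.0 / L)
```

Note that at the solution (1, 0) the gradient itself is (−2, 1), far from zero; the projected gradient is what vanishes, which is exactly why it is the right quantity to test.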
