
What is the difference between projected gradient descent and ordinary gradient descent? I just read about projected gradient descent, but I did not see the intuition for using the projected version instead of normal gradient descent. Could you tell me the reason, and the situations in which projected gradient descent is preferable? What does the projection contribute?

At a basic level, projected gradient descent is just a more general method for solving a more general problem.

Gradient descent minimizes a function by moving in the negative gradient direction at each step. There is no constraint on the variable. $$ \text{Problem 1:} \min_x f(x) $$ $$ x_{k+1} = x_k - t_k \nabla f(x_k) $$
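
To make the update concrete, here is a minimal sketch of unconstrained gradient descent in Python. The objective $f(x) = \|Ax - b\|^2$, the step size, and the iteration count are illustrative assumptions, not part of the original answer.

```python
import numpy as np

def gradient_descent(grad_f, x0, step=0.1, n_iters=100):
    """Plain gradient descent: x_{k+1} = x_k - t_k * grad f(x_k)."""
    x = x0
    for _ in range(n_iters):
        x = x - step * grad_f(x)
    return x

# Toy quadratic f(x) = ||A x - b||^2 with gradient 2 A^T (A x - b)
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
grad_f = lambda x: 2 * A.T @ (A @ x - b)

x_star = gradient_descent(grad_f, x0=np.zeros(2))
print(x_star)  # approaches the unconstrained minimizer [0.5, 1.0]
```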

On the other hand, projected gradient descent minimizes a function subject to a constraint. At each step we move in the direction of the negative gradient, and then "project" onto the feasible set.

$$ \text{Problem 2:} \min_x f(x) \text{ subject to } x \in C $$

$$ y_{k+1} = x_k - t_k \nabla f(x_k) $$

$$ x_{k+1} = \arg\min_{x \in C} \|y_{k+1} - x\| $$
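
The sketch below extends the same toy problem to the constrained case. It assumes a simple box constraint $C = [0, 0.4]^2$, chosen only because its Euclidean projection has a closed form (coordinate-wise clipping); the set, step size, and iteration count are illustrative.

```python
import numpy as np

def projected_gradient_descent(grad_f, project, x0, step=0.1, n_iters=100):
    """Gradient step followed by projection onto the feasible set C."""
    x = x0
    for _ in range(n_iters):
        y = x - step * grad_f(x)  # y_{k+1} = x_k - t_k * grad f(x_k)
        x = project(y)            # x_{k+1} = argmin_{x in C} ||y_{k+1} - x||
    return x

# Same toy quadratic f(x) = ||A x - b||^2 as above
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
grad_f = lambda x: 2 * A.T @ (A @ x - b)

# Euclidean projection onto the box [0, 0.4]^2 is coordinate-wise clipping
project = lambda y: np.clip(y, 0.0, 0.4)

x_star = projected_gradient_descent(grad_f, project, x0=np.zeros(2))
print(x_star)  # every iterate is feasible; here it ends on the boundary [0.4, 0.4]
```

The unconstrained minimizer $[0.5, 1.0]$ lies outside the box, so plain gradient descent would leave the feasible set; the projection step is what keeps every iterate inside $C$ while still making progress on $f$.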
