
Proof of the Cauchy–Schwarz inequality - Why select $s$ so that $\|x-sy\|$ is minimized? I was looking at a number of different proofs of the Cauchy–Schwarz inequality in an inner product space ($\mathbb{R}^n$ or $\mathbb{C}^n$). All of them used the idea of $\|x-sy\|$, where $s$ was selected in a particular fashion; in the real case, $s$ is the value that minimizes the function $f(s) = \|x-sy\|$. What confuses me is that the books say "$s$ was selected so that $\|x-sy\|$ would be minimized". I don't understand how one would know beforehand that minimizing $\|x-sy\|$ is relevant to the inequality. What is the intuitive link between the minimum of $f(s) = \|x-sy\|$ and the Cauchy–Schwarz inequality?

You want to bound $x \cdot y$ in terms of $x \cdot x$ and $y \cdot y$. The only inequality you know about $\cdot$ is that $z \cdot z \ge 0$ for all $z$. So it is reasonable to look at $z \cdot z$ where $z$ is a linear combination of $x$ and $y$, say $z = r x + s y$. By homogeneity, we might as well take $r = 1$. Now expand: $$0 \le (x + s y) \cdot (x + s y) = x \cdot x + 2 s \, x \cdot y + s^2 \, y \cdot y$$ For this to give us an upper bound on $x \cdot y$, we need $s$ to be negative. For convenience, write $s = -t$. Thus for all $t > 0$, $$ x \cdot y \le \frac{x \cdot x + t^2 \, y \cdot y}{2 t}$$ Each positive number $t$ gives us an upper bound on $x \cdot y$. We want the best possible upper bound, so we minimize the right side: by AM–GM (or calculus), the minimum is attained at $t = \sqrt{(x \cdot x)/(y \cdot y)}$, where the bound becomes $\sqrt{(x \cdot x)(y \cdot y)}$, which is exactly Cauchy–Schwarz.
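The family of bounds above is easy to check numerically. The sketch below (random vectors of arbitrary dimension; the grid of $t$ values is an arbitrary choice for illustration) verifies that every $t > 0$ gives a valid upper bound on $x \cdot y$, and that the minimizing $t = \sqrt{(x \cdot x)/(y \cdot y)}$ recovers $\sqrt{(x \cdot x)(y \cdot y)}$:

```python
import math
import random

def dot(u, v):
    """Real dot product of two vectors given as lists."""
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(5)]
y = [random.uniform(-1, 1) for _ in range(5)]

def bound(t):
    """Upper bound on x·y obtained from 0 <= (x - t y)·(x - t y)."""
    return (dot(x, x) + t * t * dot(y, y)) / (2 * t)

# Every t > 0 yields a valid upper bound on x·y ...
assert all(dot(x, y) <= bound(0.1 * k) for k in range(1, 100))

# ... and the minimizing t gives the Cauchy–Schwarz bound sqrt((x·x)(y·y)).
t_star = math.sqrt(dot(x, x) / dot(y, y))
assert math.isclose(bound(t_star), math.sqrt(dot(x, x) * dot(y, y)))
assert dot(x, y) <= bound(t_star)
```

The final assertion is the Cauchy–Schwarz inequality itself for this pair of vectors: the best member of the one-parameter family of bounds is $\sqrt{(x \cdot x)(y \cdot y)}$.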
