
Is convexity the most general dividing line between "easy" and "hard" optimization problems? I just got started with Boyd's _Convex Optimization_. It's great stuff, and I see how it directly subsumes the all-important class of linear programming models. However, it seems that if a problem is non-convex, the only recourse is some form of exhaustive search with smart stopping rules (e.g., branch and bound with fathoming). Is convex optimization really the last line of demarcation between "easy" and "hard" optimization problems? By "last line" I mean that there does not exist a strict superset of convex problems that are also easily solved and well-behaved with respect to a global optimum.

In my opinion, it is sufficient that the objective is quasi-convex. A quasi-convex objective has convex sublevel sets, so the set of global minimizers is convex; and if the objective is in addition strictly quasi-convex, every local minimizer is a global minimizer. Thus, you do not have to fight against local minimizers which are not global minimizers.
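To make this concrete, here is a minimal sketch (not from the original thread) showing that a one-dimensional strictly quasi-convex objective can be minimized by simple bracket shrinking (ternary search), even though it is non-convex. The function `f` below is a hypothetical example chosen for illustration:

```python
import math

def f(x):
    # sqrt(|x - 2|) is NOT convex, but every sublevel set
    # {x : f(x) <= t} = [2 - t^2, 2 + t^2] is an interval (convex),
    # so f is quasi-convex, with unique global minimizer x* = 2.
    return math.sqrt(abs(x - 2.0))

def ternary_search(f, lo, hi, tol=1e-9):
    """Minimize a strictly quasi-convex (unimodal) f on [lo, hi]
    by repeatedly discarding a third of the bracket."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2   # the minimizer must lie in [lo, m2]
        else:
            lo = m1   # the minimizer must lie in [m1, hi]
    return 0.5 * (lo + hi)

x_star = ternary_search(f, -10.0, 10.0)
print(x_star)  # close to 2.0
```

The bracket-discarding step is only valid because strict quasi-convexity rules out flat regions and spurious local minimizers; for a general non-convex objective, `f(m1) < f(m2)` tells you nothing about where the global minimizer lies.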
