
What is a good target peer code review rate? When I googled "best practices peer code review rate", I found plenty of hits on how to conduct a peer code review, but nothing on determining a good review rate. Our team has neither required nor consistently practiced peer code reviews in the past, and we're trying to rectify that. However, we don't want to introduce a process that becomes a bottleneck, so I'm assuming that requiring a code review for 100% of final commits is a bad idea. **The Question:** Is there a best practice or generally accepted percentage of final commits (or other metric) used for selecting commits for peer code review?

I can only tell you what worked best for the teams I've worked with: when a developer finished a piece of code they considered important, they brought it to the next review session to present and discuss it with the other team members.
Rotating pair-programming pairs also helps a lot in communicating the current state of the repository.
In my eyes, having a "target peer code review rate" will have the opposite effect: clinging to a number won't earn the team's buy-in, it will just add bureaucracy.
