
Understanding the statement of the One-Sided Hardy-Littlewood maximal inequality

I am having some trouble understanding the _One-Sided Hardy-Littlewood maximal inequality_. In **[An introduction to measure theory, T. Tao]** it is stated as follows:

> Let $f : \mathbb{R} \rightarrow \mathbb{C}$ be an absolutely integrable function, and let $\lambda > 0$. Then
> $$ m\left(\left\{x \in \mathbb{R} : \sup\limits_{h>0} \frac{1}{h} \int_{[x, x+h]}|f(t)|\,dt \geq \lambda\right\}\right) \leq \frac{1}{\lambda}\int_{\mathbb{R}} |f(t)|\, dt $$

(where $m$ is the Lebesgue measure). What is the theorem actually stating? I'm having some trouble reading the left-hand side, and the way I am reading it (in my head; I'm having a hard time even writing it down) does not give me much intuition. So, informally, what does the theorem say? It seems quite similar to Markov's inequality.

In words, it says that the maximal function is not much larger than $|f|$ (Stein and Shakarchi, *Real Analysis*, p. 101).
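To fix notation for what follows, write the supremum inside the set as the (one-sided) maximal function
$$ f^*(x) = \sup_{h>0} \frac{1}{h} \int_{[x, x+h]} |f(t)|\,dt, $$
the largest average value of $|f|$ over intervals to the right of $x$. (The symbol $f^*$ is just shorthand used below; Tao's statement spells the supremum out.)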

The left-hand side is the measure of the set of points at which the maximal function is at least $\lambda$. Note that as $\lambda$ grows to infinity, the right-hand side goes to zero, so the set on which the maximal function takes infinite values has measure zero. Compare: since $f$ is integrable (i.e. $\int |f| < \infty$), we know that $|f| = \infty$ only on a set of measure zero, and the inequality gives the same kind of control over the maximal function.
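As you suspected, this has exactly the shape of Markov's inequality, which in one common formulation says that for absolutely integrable $f$,
$$ m(\{x \in \mathbb{R} : |f(x)| \geq \lambda\}) \leq \frac{1}{\lambda} \int_{\mathbb{R}} |f(t)|\,dt. $$
The maximal inequality is the same weak-type bound with $|f(x)|$ replaced by $f^*(x)$: the content of the theorem is that the Markov-type bound survives even after taking a supremum of averages.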

Recall that we introduce the maximal function because we hope to see when the averaging property holds: is it true that $\lim_{|I|\to 0,\ x\in I} \frac{1}{|I|} \int_I f(y)\,dy=f(x)$? It would have been nice if $f^*$ were itself integrable; in general it is not, and the weak-type bound you cited turns out to be the next best thing (see Exercises 4 and 5 on p. 146 for a more careful definition).
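To see what the bound says in a concrete case (an illustration of my own, not taken from either book), take $f = 1_{[0,1]}$ and $0 < \lambda \leq 1$. For $x \in [0,1)$ small right-hand intervals give $f^*(x) = 1$; for $x \geq 1$ the intervals $[x, x+h]$ miss $[0,1]$ up to a null set, so $f^*(x) = 0$; and for $x < 0$ the best choice is $h = 1-x$, giving
$$ f^*(x) = \sup_{h>0} \frac{m([x,x+h] \cap [0,1])}{h} = \frac{1}{1-x}. $$
Hence
$$ \{x : f^*(x) \geq \lambda\} = \left[1 - \tfrac{1}{\lambda},\, 1\right), \qquad m\left(\left[1 - \tfrac{1}{\lambda},\, 1\right)\right) = \frac{1}{\lambda} = \frac{1}{\lambda} \int_{\mathbb{R}} |f(t)|\,dt, $$
so the inequality holds with equality here: the constant on the right-hand side cannot be improved.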
