They are connected via the Cramér–Rao lower bound, which states that the variance of any unbiased estimator of $\theta$ is at least $1/I(\theta)$, the reciprocal of the Fisher information for $\theta$.
Perhaps more usefully, the maximum likelihood estimator of $\theta$ is asymptotically normal, with asymptotic variance achieving this bound. That is, for large $n$ we can write:
$$\hat\theta_n \approx N\left(\theta,\frac{1}{I(\theta)}\right),$$
where $\hat\theta_n$ is the maximum likelihood estimator of $\theta$ based on a random sample of size $n$.
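This asymptotic behaviour is easy to check by simulation. The sketch below (a hypothetical illustration, not from the original text) uses the exponential distribution with rate $\theta$, for which the MLE is $1/\bar{x}$ and the Fisher information in a sample of size $n$ is $I(\theta) = n/\theta^2$; the chosen values of $\theta$, $n$, and the number of replicates are arbitrary assumptions.

```python
import numpy as np

# Assumed setup: X ~ Exponential(rate = theta), so the MLE is 1 / xbar
# and the Fisher information in a sample of size n is I(theta) = n / theta**2.
rng = np.random.default_rng(0)
theta = 2.0   # true parameter (assumed value)
n = 5000      # sample size (assumed value)
reps = 2000   # number of replicated samples (assumed value)

# Draw `reps` independent samples of size n and compute the MLE for each.
samples = rng.exponential(scale=1 / theta, size=(reps, n))
mle = 1 / samples.mean(axis=1)

# Cramer-Rao lower bound 1 / I(theta) for a sample of size n.
crlb = theta**2 / n

# For large n the empirical variance of the MLE should be close to the bound.
print(mle.mean())  # close to theta
print(mle.var())   # close to crlb
print(crlb)
```

Running this, the empirical variance of the replicated MLEs sits close to $1/I(\theta)$, and a histogram of the estimates would look approximately normal around $\theta$, as the display above predicts.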