1) Algebraic geometry is indeed vast and difficult.
But don't be discouraged: professors and experts know only parts of it, and you would be surprised to discover how little they know outside their narrow domains of expertise.
This can be a strength: Grothendieck only knew Serre's article FAC and the content of a few Cartan seminars when he began to transform algebraic geometry by the introduction of scheme theory, in accordance with his awesome prophetic vision.
His correspondence with Serre has been published by Leila Schneps and is one of the most exciting documents in the history of mathematics.
His ignorance and his genius are displayed there, to our greatest delight.
2) Yet you should aim at knowing all of it.
There are many approaches to algebraic geometry:
-Classical in the style of the books by Fulton, Harris, Hodge-Pedoe, Kendig, Reid, Seidenberg, Walker, ...
-Complex analytic like in Grauert-Fritzsche, Griffiths-Harris, Huybrechts, Taylor, ...
-Scheme-theoretic like Bosch, Hartshorne, Görtz-Wedhorn, ...
-Especially praiseworthy are books mixing several points of view, the best by far being Shafarevich, but there are others: Danilov-Shokurov, Perrin,...
Ideally you should learn all points of view.
As I wrote, this is the aim: there are many hours in a life, and knowing that this goal is impossible to reach should not prevent you from trying.
Willem van Oranje Nassau said it very well:
Point n'est besoin d'espérer pour entreprendre, ni de réussir pour persévérer.
[One need not have hope to begin an undertaking, nor a guarantee of success to persevere]
3) Solve little problems on a napkin while sipping coffee with a friend.
But actually the books you read are not so important.
The most important advice I can give is to solve little concrete problems, which you can find in books, invent yourself or read on this site.
It is no use spending much time on some equivalence of categories involving affine schemes while being incapable of exhibiting a birational isomorphism between a smooth quadric in projective space and a projective plane.
And for explaining why the codimension-two union of two transverse planes in $\mathbb A^4$ cannot be defined by fewer than four equations, the equivalence of said category with that of commutative rings will not lead you very far ...
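For instance, the quadric problem can be solved by projecting from a point of the quadric; here is a sketch in coordinates (the particular quadric and center of projection are my choice, for concreteness):

```latex
% The smooth quadric Q : x_0 x_3 = x_1 x_2 in P^3, projected from
% the point p = [1:0:0:0] \in Q onto the plane x_0 = 0.
\[
  \varphi \colon Q \dashrightarrow \mathbb{P}^2, \qquad
  [x_0 : x_1 : x_2 : x_3] \longmapsto [x_1 : x_2 : x_3],
\]
\[
  \varphi^{-1} \colon \mathbb{P}^2 \dashrightarrow Q, \qquad
  [y_1 : y_2 : y_3] \longmapsto [\,y_1 y_2 : y_1 y_3 : y_2 y_3 : y_3^2\,].
\]
% Indeed (y_1 y_2)(y_3^2) = (y_1 y_3)(y_2 y_3), so the image lies on Q,
% and where x_3 \neq 0 (resp. y_3 \neq 0) the two maps are mutually
% inverse, using the relation x_1 x_2 = x_0 x_3 on Q.
```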
4) Also, draw doodles on that napkin.
Another important aid to understanding scheme theory is to invent conventions that will enable you to draw schemes so as to follow or invent proofs by visualization.
The best way is to start from Mumford's wonderful sketches in his Red Book: the way he draws spaghetti-like generic points (for example) is priceless!
Vakil's wonderful notes are even more graphic: for example, he explains again and again how the "fuzz" in his numerous drawings is the visual translation of algebraic notions like nilpotents, primary decomposition,...
Geometry has been for more than two thousand years the art of reasoning correctly on incorrect figures.
There is no reason why this should stop now.
5) And finally: you can do it! Good luck!
The inequality that I know under the name isodiametric inequality is
$$ \frac{\text{vol}(K)}{\text{diam}(K)^d} \le \frac{\text{vol}(B)}{\text{diam}(B)^d} $$
for any convex body $K$ in $\mathbb{R}^d$, where $B$ denotes the unit ball.
Proof 1: By Steiner symmetrization (which preserves volume, does not increase diameter, and, along a suitable sequence of symmetrizations, converges to the ball).

Proof 2: If $K$ has diameter at most 2 then $K-K\subseteq 2B$; by the Brunn-Minkowski inequality, $\text{vol}(K)\le\text{vol}(\frac12(K-K))$; thus $\text{diam}(K)\le\text{diam}(B)\implies \text{vol}(K)\le\text{vol}(B)$, which is equivalent to the desired inequality.

Note that Proof 2 doesn't really use the fact that $B$ is the Euclidean ball: it actually proves the more general analogous statement where we take $B$ to be any origin-symmetric convex body and measure diameters in the norm whose unit ball is $B$. (Proof 2 also yields that this isodiametric inequality is actually equivalent to the special case of Brunn-Minkowski that was used.) All of the above is in Gruber's recent book on convex geometry, for example.
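As a numerical illustration (a sketch of my own, not part of the proofs), one can compare the isodiametric ratios of the unit cube and the unit ball across a few dimensions:

```python
import math

def ball_volume(d):
    """Volume of the unit Euclidean ball in R^d: pi^(d/2) / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def ratio_cube(d):
    """vol(K) / diam(K)^d for the unit cube [0,1]^d (volume 1, diameter sqrt(d))."""
    return 1.0 / d ** (d / 2)

def ratio_ball(d):
    """vol(B) / diam(B)^d for the unit ball (diameter 2)."""
    return ball_volume(d) / 2 ** d

# The isodiametric inequality predicts ratio_cube(d) <= ratio_ball(d);
# equality holds at d = 1, where the "cube" is itself a ball.
for d in range(1, 13):
    assert ratio_cube(d) <= ratio_ball(d) + 1e-15
```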
Another proof of the generalization to arbitrary norms was given by M. S. Mel'nikov ("Dependence of volume and diameter of sets in an $n$-dimensional Banach space", Uspekhi Mat. Nauk 18(4) 165–170, 1963, http://mi.mathnet.ru/eng/umn6384): the key fact in that proof is that if the diameter of $K$ (in the sense of $B$) is at most 2 then the diameter of $K_t$ (in the sense of $B_t$) is also at most 2, where $K_t$ denotes the level set of height $t$ of the projection of $K$ (as a density) onto a fixed hyperplane; this allows a proof by induction on the dimension, and it anticipates the proof of the Prékopa-Leindler inequality, a generalization of Brunn-Minkowski. (For Prékopa-Leindler, see lecture 5 in Keith Ball's An Elementary Introduction to Modern Convex Geometry.)
Another inequality of the type you've asked about is Urysohn's inequality:
$$ \frac{\text{vol}(K)}{w(K)^d} \le \frac{\text{vol}(B)}{w(B)^d} $$
for any convex body $K$ in $\mathbb{R}^d$, where $B$ denotes the Euclidean unit ball and $w(\cdot)$ denotes mean width. (This time it really matters that it's the Euclidean ball.) Since $w(K)\le\text{diam}(K)$, this is a strengthening of the isodiametric inequality above.
Proof 1: Steiner symmetrization reduces mean width. Indeed, if $S_u$ denotes Steiner symmetrization wrt the hyperplane orthogonal to a unit vector $u$, and $R_u$ denotes reflection in that hyperplane, then $h_{S_u(K)}(\theta)\le\frac12 h_K(\theta)+\frac12 h_K(R_u(\theta))$ (since $S_u(K)\subseteq\frac12\bigl(K+R_u(K)\bigr)$), where $h_K$ denotes the support function of $K$; now integrate over $\theta\in S^{d-1}$, using the invariance of the sphere's measure under $R_u$. (I got this from some unpublished notes of Giannopoulos.)

Proof 2: See Pisier's book The Volume of Convex Bodies and Banach Space Geometry (Cambridge UP, 1989, p. 6; Pisier writes that he learned this proof from Vitali Milman). In short, you generalize Minkowski addition of sets to Minkowski integration of set-valued functions, and you get an analogue of Brunn-Minkowski:
$$ \int_\Omega \text{vol}(A_t)^{1/n} \,d\mu(t) \le \text{vol}\left(\int_\Omega A_t \,d\mu(t)\right)^{1/n} $$
when $\mu$ is a probability measure and everything is suitably measurable. By symmetry, $\int_{O(d)} TK \,d\mu(T)$ is some multiple of the Euclidean ball (here $O(d)$ is the orthogonal group on $\mathbb{R}^d$, and $\mu$ is its Haar probability measure); a computation shows it's actually $\frac12 w(K)B$, and the Brunn-Minkowski analogue above finishes the proof.
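The last computation can be sanity-checked numerically in the plane (the square $[-1,1]^2$ is my choice of test body): averaging its support function over all rotations should give a constant, namely the support function of a disc of radius $w(K)/2$, and by Cauchy's formula the mean width of the square is its perimeter divided by $\pi$, i.e. $8/\pi$:

```python
import math

def h_square(theta):
    """Support function of the square [-1,1]^2 in direction (cos t, sin t)."""
    return abs(math.cos(theta)) + abs(math.sin(theta))

# Average the support function over all rotations (midpoint rule on [0, 2*pi]).
n = 200000
avg = sum(h_square(2 * math.pi * (k + 0.5) / n) for k in range(n)) / n

# Mean width of the square by Cauchy's formula: perimeter/pi = 8/pi,
# so the rotation average should be the constant w/2 = 4/pi.
assert abs(avg - 4 / math.pi) < 1e-6
```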
As requested in comments, here's a generalization to other intrinsic volumes:
$$ 1\le i\le j\le d\implies
\frac{V_i(B)^{1/i}}{V_j(B)^{1/j}}
\le \frac{V_i(K)^{1/i}}{V_j(K)^{1/j}}$$
(The case $i=1$, $j=d$ is Urysohn's inequality.) Proof: A special case of the Alexandrov-Fenchel inequality is
$$ W_i(K)^2 \ge W_{i-1}(K) W_{i+1}(K) \tag{$\ast$} $$
where $W_i(\cdot)$ denotes quermassintegrals:
$$ W_i(K) = V(\underbrace{K,\dotsc,K}_{d-i},\underbrace{B,\dotsc,B}_i)
= \frac{\kappa_i}{\binom di} V_{d-i}(K) $$
where $\kappa_i$ is the volume of the $i$-dimensional unit Euclidean ball. It follows that
$$ i\mapsto\left(\frac{W_d(K)}{W_{d-i}(K)}\right)^{1/i} \tag{$\dagger$} $$
is an increasing function for $1\le i\le d$. (You can just prove the $i$-vs-$(i+1)$ case by induction on $i$, but what's really going on here is that $i\mapsto\log W_i(K)$ is "concave" — scare quotes because its domain is discrete. The inequality ($\ast$) is the local version of this, analogous to saying that the second derivative is nonpositive; that ($\dagger$) is increasing means that the slopes over $[d-i,d]$ are increasing with $i$.) But $W_d(K) = \text{vol}(B) = W_{d-i}(B)$, so a bit of rearrangement yields the desired inequality.
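As a numerical sanity check (my own sketch, using two standard facts: the intrinsic volumes of a box are the elementary symmetric polynomials of its side lengths, and $V_i(B)=\binom di\kappa_d/\kappa_{d-i}$ for the unit ball), one can verify ($\ast$), the monotonicity of ($\dagger$), and the resulting inequality for a box in $\mathbb{R}^3$:

```python
import math
from itertools import combinations

def kappa(d):
    """Volume of the unit Euclidean ball in R^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def V_box(i, sides):
    """Intrinsic volume V_i of a box: the i-th elementary symmetric
    polynomial of its side lengths (V_0 = 1)."""
    return sum(math.prod(c) for c in combinations(sides, i))

def V_ball(i, d):
    """Intrinsic volume V_i of the unit ball in R^d: C(d,i) kappa_d / kappa_(d-i)."""
    return math.comb(d, i) * kappa(d) / kappa(d - i)

def W_box(i, sides):
    """Quermassintegral W_i of a box via W_i = kappa_i V_{d-i} / C(d,i)."""
    d = len(sides)
    return kappa(i) * V_box(d - i, sides) / math.comb(d, i)

sides = (1.0, 2.0, 0.5)  # an arbitrary box in R^3
d = len(sides)

# (*): log-concavity of the quermassintegrals, W_i^2 >= W_{i-1} W_{i+1}.
for i in range(1, d):
    assert W_box(i, sides) ** 2 >= W_box(i - 1, sides) * W_box(i + 1, sides) - 1e-12

# (dagger): i |-> (W_d / W_{d-i})^(1/i) is increasing in i.
vals = [(W_box(d, sides) / W_box(d - i, sides)) ** (1 / i) for i in range(1, d + 1)]
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))

# The resulting inequality: V_i(B)^(1/i)/V_j(B)^(1/j) <= V_i(K)^(1/i)/V_j(K)^(1/j).
for i in range(1, d + 1):
    for j in range(i, d + 1):
        lhs = V_ball(i, d) ** (1 / i) / V_ball(j, d) ** (1 / j)
        rhs = V_box(i, sides) ** (1 / i) / V_box(j, sides) ** (1 / j)
        assert lhs <= rhs + 1e-12
```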
(Unfortunately I'm not familiar with the literature around Alexandrov-Fenchel, so I can't give good references here.)
You might also want to consider things like the reverse isoperimetric inequality, which asserts that (1) every centrally symmetric convex body $K$ has an affine image $K'$ such that
$$ \frac{V_d(K')^{1/d}}{V_{d-1}(K')^{1/(d-1)}} \ge \frac{V_d(B_\infty^d)^{1/d}}{V_{d-1}(B_\infty^d)^{1/(d-1)}} $$
where $B_\infty^d$ is the cube $[-1,1]^d$ (i.e., the unit ball of the $\ell_\infty^d$ norm), and that (2) every convex body $K$ has an affine image $K'$ such that
$$ \frac{V_d(K')^{1/d}}{V_{d-1}(K')^{1/(d-1)}} \ge \frac{V_d(\Delta)^{1/d}}{V_{d-1}(\Delta)^{1/(d-1)}} $$
where $\Delta$ denotes a regular simplex.
These inequalities are due to Keith Ball (see lecture 6 of his book mentioned above for the proof and references), relying on John's theorem and a normalized version of the Brascamp-Lieb inequality. For a proof of Brascamp-Lieb in the needed form (and much more besides, including equality cases in the above reverse isoperimetric inequalities), see F. Barthe, "On a reverse form of the Brascamp-Lieb inequality", arxiv:math/9705210. (A simplified version for the needed one-dimensional special case appears in K. Ball, "Convex geometry and functional analysis", in volume 1 of Handbook of the Geometry of Banach Spaces, Johnson and Lindenstrauss (eds.), North-Holland, 2001.)
$-1\leq \sin(x)\leq 1$, and the same is true for $\cos$.
The AM-GM inequality is so popular it has its own tag on this site.
This chain of inequalities involving the binomial coefficient is commonly used when binomial coefficients or the exponential function pop up: $$\frac{n^k}{k^k}\leq {n\choose k}\leq\frac{n^k}{k!}\leq \frac{(n\cdot e)^k}{k^k}$$
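These bounds are easy to test numerically (a quick sketch of mine):

```python
import math

# Check  (n/k)^k <= C(n,k) <= n^k / k! <= (n*e/k)^k  over a grid of n, k.
# The first holds since C(n,k) is a product of k factors each >= n/k;
# the last since k! >= (k/e)^k.
for n in range(1, 40):
    for k in range(1, n + 1):
        binom = math.comb(n, k)
        assert (n / k) ** k <= binom
        assert binom <= n ** k / math.factorial(k)
        assert n ** k / math.factorial(k) <= (n * math.e / k) ** k
```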