It's natural that the Fundamental Theorem of Calculus has two parts, since morally it expresses the fact that differentiation and integration are mutually inverse processes, and this amounts to two statements: (i) integrating and then differentiating and (ii) differentiating and then integrating get us (essentially) back where we started.
On the other hand, many people have noticed that the two parts are not completely independent: e.g. if $f$ is continuous, then (ii) follows easily from (i). However, for discontinuous -- but Riemann integrable -- $f$, the theorem still holds, and this is the case that requires a nontrivial additional argument. See page 8 of
http://math.uga.edu/~pete/243integrals1.pdf (Wayback Machine)
for some discussion of this point.
I can't tell from your question how squarely this answer addresses it. If yes, and you have further concerns, please let me know.
I would suggest you have a look at this answer, where I have discussed the Fundamental Theorem of Calculus for functions which are not necessarily continuous.
Next, note that the integrand here has a discontinuity at $x=1$, and as far as the interval of integration is concerned the discontinuity is removable, so we remove it by changing the value of the integrand at $x=1$ to $0$. Doing this has no impact on the value of the integral (the value of a Riemann integral does not depend on the values of the integrand at a finite number of points of the interval under consideration), and thus the desired integral equals $\int_{1/2}^{1}0\,dx=0$.
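To see concretely why the single bad point at $x=1$ doesn't matter, here is a small sketch (my addition, not part of the original answer) that computes right-endpoint Riemann sums of $[x]$ on $[1/2,1]$ in exact rational arithmetic: the endpoint contributes just one term of weight $h=1/(2n)$, which vanishes as $n$ grows.

```python
from fractions import Fraction
import math

def right_riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum of f over [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + (i + 1) * h) for i in range(n)) * h

# The floor function [x] on [1/2, 1]; math.floor works exactly on Fraction.
a, b = Fraction(1, 2), Fraction(1)
for n in (10, 100, 1000):
    # Every sample point except the last lies in (1/2, 1), where [x] = 0;
    # the last point is x = 1 with [1] = 1, contributing exactly h = 1/(2n).
    print(n, right_riemann_sum(math.floor, a, b, n))
```

The sums are $1/20$, $1/200$, $1/2000$, shrinking to the true value $0$: the one misbehaving point is drowned out by its ever-smaller weight.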
In general, if we have to evaluate an integral of the form $\int_{a} ^{b} f(x) \, dx$ where $f$ has a finite number of discontinuities at $x_{1},x_{2},\dots,x_{n}$ in $[a, b] $ with $$a\leq x_{1}\leq x_{2}\leq \dots\leq x_{n} \leq b$$ such that the left- and right-hand limits of $f$ exist at each point of discontinuity, then we split the integral as follows: $$\int_{a} ^{b} f(x) \, dx=\int_{a} ^{x_{1}}f(x)\,dx+\int_{x_{1}}^{x_{2}}f(x)\,dx+\cdots+\int_{x_{n-1}}^{x_{n}}f(x)\,dx+\int_{x_{n}}^{b}f(x)\,dx$$ Each integral on the right can be evaluated by observing that the integrand has a removable discontinuity at the endpoints $x_{i}$, so the integrand can be redefined at these points to make it continuous on that subinterval.
You can use this technique to evaluate $\int_{1}^{2}[x^{2}]\,dx$ as $$\int_{1}^{2}[x^{2}]\,dx=\int_{1}^{\sqrt{2}}1\,dx+\int_{\sqrt{2}}^{\sqrt{3}}2\,dx+\int_{\sqrt{3}}^{2}3\,dx$$
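Carrying this out, each piece is a constant times the length of its subinterval, so the value works out to
$$\int_{1}^{2}[x^{2}]\,dx = 1\,(\sqrt{2}-1) + 2\,(\sqrt{3}-\sqrt{2}) + 3\,(2-\sqrt{3}) = 5 - \sqrt{2} - \sqrt{3}.$$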
Also, to answer your question about evaluating the integral under consideration via the second part of the Fundamental Theorem of Calculus: note that there is no anti-derivative of $[x] $ on the interval $[1/2,1]$ (why? perhaps you should answer this yourself, but let me know if you run into trouble here), and hence we can't use the Fundamental Theorem of Calculus here.
Also note that continuous functions are guaranteed to have an anti-derivative, but continuity is not necessary: there are discontinuous functions which possess an anti-derivative. If we have a function $f$ which is Riemann integrable on $[a, b] $ and which possesses an anti-derivative $F$ (the existence of an anti-derivative is not guaranteed, so we are assuming it here as part of the hypotheses) such that $F'(x) =f(x) $ for all $x\in[a, b] $, then we have $\int_{a}^{b}f(x)\,dx=F(b)-F(a)$.
Best Answer
I like to understand these theorems as kind of a 1-2 punch, where the first theorem sets things up, and the second theorem knocks them down (where "knocking things down" = "evaluating definite integrals".)
So the First Theorem defines a function $g(x)$ more-or-less explicitly: what's, say, $g(7)$? Well (assuming $7$ is between $a$ and $b$), it is $\int_a^7 f(t)dt $. Okay, how do you find that? Well, you've got to construct a bunch of Riemann sums, then prove that they converge to a limit as the mesh gets smaller, and then that limit is the value of $g$ at $7$.
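As a sketch of what that costs (with a made-up integrand, say $f(t)=t^2$ and $a=0$, neither of which comes from the answer): every single value of $g$ needs a whole family of Riemann sums, refined until they settle down.

```python
def g(x, f, a, n):
    """Midpoint Riemann sum approximating g(x) = integral of f from a to x."""
    h = (x - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: t * t           # hypothetical integrand f(t) = t^2
for n in (10, 100, 1000):     # refining the mesh approximates the limit
    print(n, g(7.0, f, 0.0, n))
# The sums creep toward 7**3 / 3 = 114.333..., and computing g at any
# other point means running this whole process all over again.
```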
$g$ is explicitly defined, but it's a real pain in the ass to evaluate $g$ at even one point - a Riemann sum and a limit each time. But the First Theorem does give us some information about how $g$ behaves, and that's going to help us in proving the Second Theorem. Also notice that one of the things that's true about $g$, which appears too obvious to mention, is that $g(a) = 0$.
In the Second Theorem, we have $F(x)$. How is $F$ defined in terms of $f$? It's not, at least not like $g$ was. It can be any wild-ass function, except it does have to pass a test: $F$'s derivative has to be equal to $f$, at each point in $[a,b]$. The Second Theorem tells us that if we have such an $F$, then (and here's where the sun breaks through the clouds and a chorus of angels starts singing), we can evaluate integrals of $f$ by just evaluating $F$ at two points, and subtracting. And at this point we're absolute whizzes at finding the derivatives of functions.
This is amazing - no Riemann sums, no limits, just find an $F$ that passes the test, and you can evaluate definite integrals with two function evaluations and a subtraction. This is what is going to let us go forward and start actually evaluating integrals. If we didn't have the Second Theorem, our calculus class might end right here, with your prof saying "Well, that's how an integral is defined, but there aren't many we can evaluate, and it's a pain to try to evaluate new ones, as we have to look for neat, tricky patterns in the Riemann sums each time." Instead we just throw out a guess, check that it has the right derivative, and we are done.
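Continuing the same made-up example ($f(t)=t^2$ on $[0,7]$, again not from the question): guess $F(x)=x^3/3$, check that it passes the derivative test, and the integral collapses to two evaluations and a subtraction.

```python
def F(x):
    return x ** 3 / 3   # our guess at a function passing the test

def f(t):
    return t * t        # the hypothetical integrand

# Spot-check the test F'(x) = f(x) with central finite differences.
h = 1e-6
for x in (0.5, 2.0, 5.0):
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-5

# No Riemann sums, no limits: just F(b) - F(a).
print(F(7.0) - F(0.0))   # 343/3 = 114.333...
```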
(If the Second Theorem is the meat of the matter, you might ask why we even bothered with the First Theorem. Well, it is used in proving the Second Theorem. As to why it's pulled out separately, and isn't just a step in the Second Theorem, I'm not sure, but it's pretty well established as a separate theorem by now, so it might just be historical tradition.)
As to your question about the relation between the $g$ of the First Theorem and the $F$ of the Second Theorem: Note that $F$ isn't uniquely defined; we know that we can add or subtract a constant to any existing $F$ that passes the 2nd thm's test, and get another function that also passes the same test. So the 2nd thm actually gives us a whole family of $F(x)$ functions that will work, and $g(x)$ is one of them. In fact, it is the unique one that has the value $0$ at $a$.
So if you look back at the equation in the 2nd thm, and imagine that you also know that for your $F$, $F(a) = 0$, notice that the equation
$$\int_a^b f(x) dx = F(b) - F(a)$$
becomes
$$\int_a^b f(x) dx = F(b) - 0$$
which is the same as the equation from the 1st thm
$$g(x) = \int_a^x f(t) dt$$
with just a bit of renaming and re-arranging.
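Spelled out: if $F$ is any function passing the test, then applying the Second Theorem on $[a,x]$ gives
$$g(x) = \int_a^x f(t)\,dt = F(x) - F(a),$$
so $g$ is exactly the member of the family shifted so that $g(a)=0$.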
If you now understand this, and in particular understand the difference between the $g$ of the First Theorem and the $F$ of the Second Theorem,
then you might enjoy my favorite statement of the (two) Fundamental Theorems of Calculus, which is
$$ \int_a^b f(t) dt = \left. \int f(t)dt \, \right|_a^b$$
It looks like almost nothing, just shifting two variables over from a long 'S' onto a vertical line, but the notation hides, and encapsulates, all the details you've just worked through, and is incredibly powerful, as I hope I've convinced you.