Roughly speaking, if you know the sum of the dice is $6$, that gives you some information about what the first die rolled: namely, that it couldn't possibly have rolled a $6$. On the other hand, if you know the sum of the dice is $7$, that doesn't tell you anything about the result of the first die. For any number $n$ it might have rolled, the second die could have rolled the number $7-n$. So events relating to the result of the first die are independent of whether you rolled a $7$, but not of whether you rolled a $6$.
You can make this a little more rigorous by thinking in terms of conditional probabilities: if $A$ and $B$ are events with nonzero probability, then they're independent if and only if $P(A|B)=P(A)$ (or $P(B|A)=P(B)$). For your events, a bit of case analysis will show you that $P(G|F)=1/6=P(G)$, but $P(G|E)=1/5 \neq P(G)$.
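As a sanity check, those conditional probabilities can be computed by brute-force enumeration of the $36$ equally likely outcomes. In this sketch $G$ is taken to be "the first die shows $1$" (any fixed value works the same way); the helper names are mine, not from the question:

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely outcomes (first die, second die) for two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event, given as a predicate on an outcome pair."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def cond_prob(a, b):
    """Conditional probability P(A | B), by direct counting."""
    return prob(lambda o: a(o) and b(o)) / prob(b)

E = lambda o: o[0] + o[1] == 6   # the sum is 6
F = lambda o: o[0] + o[1] == 7   # the sum is 7
G = lambda o: o[0] == 1          # the first die shows 1 (an arbitrary choice)

print(prob(G))          # 1/6
print(cond_prob(G, F))  # 1/6  -- G is independent of F
print(cond_prob(G, E))  # 1/5  -- G is not independent of E
```

Conditioning on the sum being $7$ leaves the probability unchanged; conditioning on the sum being $6$ does not.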
The mathematical definition is very easy. Two events $A$ and $B$ are independent if and only if $$P(A\cap B) = P(A)P(B).$$
In "pure" probability theory there's no interpretation of this; it's just a definition: a purely mathematical statement I can make about two events and a probability distribution.
To explain what it "means" you have to explain what probability itself means, and there's no universally accepted answer to that question. It's a big philosophical problem that mathematicians sidestep by writing down some equations and solving them.
The motivation comes from the idea of conditional probability.
Suppose you throw a die. The probability you throw a six is $\frac 16$ and the probability you throw an even number is $\frac 12$. You can check with the formula above that the two events are not independent.
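That check is a one-liner by enumeration; here is a minimal sketch (the helper `prob` is my own name, not standard):

```python
from fractions import Fraction

outcomes = range(1, 7)  # one fair die

def prob(event):
    """Probability of an event, given as a predicate on the face shown."""
    return Fraction(sum(1 for o in outcomes if event(o)), 6)

six  = lambda o: o == 6
even = lambda o: o % 2 == 0

# P(six and even) = 1/6, but P(six) * P(even) = 1/12, so not independent.
print(prob(lambda o: six(o) and even(o)))  # 1/6
print(prob(six) * prob(even))              # 1/12
```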
To get an idea of why, suppose you throw a die but don't look at it. You want a six. I tell you whether or not the result is even, and you decide whether to keep it or roll again. If I tell you it's odd, then you know it's not a six, so you roll again. If I tell you it's even, then there are only three numbers it could be, and one of them is a six. The probability that you got a six is now one in three, so you'd be crazy to throw again.
In maths we define conditional probability as follows: $$P(A|B) = \frac{P(A\cap B)}{P(B)}.$$
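Applied to the keep-or-reroll story above, the formula gives the same answer as the counting argument. A small sketch, assuming one fair die:

```python
from fractions import Fraction

outcomes = range(1, 7)  # one fair die

def prob(event):
    """Probability of an event, given as a predicate on the face shown."""
    return Fraction(sum(1 for o in outcomes if event(o)), 6)

six  = lambda o: o == 6
even = lambda o: o % 2 == 0

# P(six | even) = P(six and even) / P(even) = (1/6) / (1/2)
p = prob(lambda o: six(o) and even(o)) / prob(even)
print(p)  # 1/3, matching the one-in-three argument above
```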
Again in "pure" maths there's no interpretation of this, it's just a formula.
But in the real world $P(A|B)$ is the probability that $A$ happens if you already know that $B$ happened.
So the interpretation of independence is that $A$ and $B$ are independent if and only if $P(A|B) = P(A)$: knowing that $B$ happened doesn't affect the probability that $A$ happened.
This concept makes intuitive sense to people. If my team is winning at half time it's more likely to win the game than if it wasn't, so not independent. If my team is winning at half time it doesn't make it less likely that it's going to rain tomorrow, so independent.
It's worth noting, though, that independence is an assumption, and one I might be wrong about. If my team happens to play well in the rain, then their winning at half time makes it more likely to be raining during the game, which might in turn make it more likely to rain tomorrow. So the two events might not be independent after all.
So in fact a better description of independence would be: an assumption I make to simplify my model, which is usually wrong, but hopefully not that wrong.
The sum of the two dice is odd if and only if one die is odd and the other is even.
Then the event of the red die being odd and the sum being odd is the same event as the red die being odd and the blue die being even, and similarly with the two dice swapped.
Thus $(A \cap C) = (A\cap B^\complement)$ and $(B\cap C)=(B\cap A^\complement)$.
So...
Hint: $\mathsf P(A) = \mathsf P(A\cap B)+\mathsf P(A\cap B^\complement)$, by the Law of Total Probability.
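The hint can be confirmed by enumerating the $36$ outcomes. This sketch assumes the usual reading of the question, with $A$ = "the red die is odd", $B$ = "the blue die is odd", and $C$ = "the sum is odd":

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))  # (red, blue)

def prob(event):
    """Probability of an event, given as a predicate on an outcome pair."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] % 2 == 1            # red die odd
B = lambda o: o[1] % 2 == 1            # blue die odd
C = lambda o: (o[0] + o[1]) % 2 == 1   # sum odd

# A-and-C coincides with A-and-(not B), and each pair is independent.
assert prob(lambda o: A(o) and C(o)) == prob(lambda o: A(o) and not B(o))
assert prob(lambda o: A(o) and C(o)) == prob(A) * prob(C)
assert prob(lambda o: B(o) and C(o)) == prob(B) * prob(C)
print(prob(A), prob(B), prob(C))  # 1/2 1/2 1/2
```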