I will respond directly to this part of your question.
I would be happy if I could avoid such topics [analysis], but I don't know what type of mathematics is studied at the graduate level, so this leads me to the following questions:
Which topics in Abstract Algebra should I study in depth? Which topics in Abstract Algebra should I know only at a basic level?
Are there any topics in analysis, topology, etc. that are likely to be needed for answering graduate-level questions?
What should be the focus of my work: should I try to do many exercises within the text, or focus on the proofs and the theory?
Are there topics in Abstract Algebra, or in other areas I would need to know (maybe topology?), parts of which I can skip (mainly non-core topics that are hard to learn), since they would probably not help me (and I am short on time)?
I have the book Abstract Algebra by Dummit and Foote to study with, as well as books in other areas of mathematics, such as Topology by Munkres, that might help me with this goal.
Firstly, I want to mention that unless you are absolutely certain that you are going to specialize in pure group or ring theory, you will need some analysis. In fact, you'll probably need a lot of analysis. Explaining why is a bit more complicated. The short version is that almost every area of math relies on, or is at least informed by, analysis, algebra, and topology; this is why most graduate programs (in the US, anyway) require these, whether as graduate classes, entrance exams, qualifying exams, etc.
To expand in a slightly longer way: calculus is pretty interesting, and lets you do a lot of things. A common thing that mathematicians do is put measures on weirder spaces so that you can have some variant of integration. In number theory (even algebraic number theory, which is often the same thing as algebraic geometry, which is often the same thing as commutative algebra, which is just algebra and group theory), we really like having measures called Haar measures on matrix groups like $GL(n)$, $SL(n)$, $Sp(n)$, etc. This lets us do integration on these groups. So we study functions invariant under actions of these groups, or functions on certain cosets of these groups that behave nicely under right translation, or some similar idea. And one way we do this is to integrate them, or to consider a weighted average of a function across the cosets it is invariant over (read: Eisenstein series, for example), to extract largely algebraic information about number fields. Or we consider representations (as in representation theory, which I sometimes clump into the larger algebra domain) and analytic extensions of representations. Everything I've mentioned here requires a certain comfort with topology, analysis, and algebra.
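To make "Haar measure" slightly less abstract, here is the standard explicit example on $GL(n, \mathbb{R})$ (my addition, not part of the original answer; any text on the subject has it):
$$d\mu(g) \;=\; \frac{\prod_{i,j=1}^{n} dg_{ij}}{\lvert \det g \rvert^{\,n}},$$
where the $dg_{ij}$ are Lebesgue measure on the matrix entries. Invariance under $g \mapsto hg$ follows from the change-of-variables formula: the map $g \mapsto hg$ is linear on $\mathbb{R}^{n^2}$ with Jacobian determinant $(\det h)^n$, which exactly cancels the factor picked up by $\lvert \det g \rvert^{-n}$.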
This is to say that algebra mixes quite a bit with analysis, in many ways. You would really benefit from a good understanding of analysis and topology. In particular, don't focus solely on algebra. The other answer says this a little, but I am going to emphasize it a lot: it is very important to understand analysis and topology, unless you are going to limit yourself to pure, remote group theory. And even then, I wouldn't recommend it.
But back to your question at hand about algebra:
I would prescribe a path into algebra. In a comment on the other answer, you mention that you know groups, rings, fields, and Galois theory. Cool! You also say you have Dummit and Foote (by far my preferred introduction to group and ring theory). I then suggest two paths:
Go learn more about whatever parts you liked most. Do the Sylow theorems interest you? Try to work your way through Burnside's theorem. You like Galois theory? Pick up some infinite Galois theory and try your hand. Maybe you already know that? Go pick up an algebraic number theory text - an intro algebraic number theory text builds nicely on basic field theory and Galois theory, and suggests further paths. To be fair, I'm biased - I'm a number theorist. The important thing is that you go and dig deeper into things that interest you.
Pick up Atiyah and MacDonald's Commutative Algebra (hopefully from a library, as it's priced proudly), and do your best at all the exercises. This is the 'natural' next step, and in my opinion it's the real path into a serious interest in algebra. I say you should do all the exercises because this book is famous for putting really important lemmas and theorems in the exercises rather than in the exposition. This will also really set your group theory and ring theory in stone, and you have Dummit and Foote to fall back on if you need it. If you know this already, you should next go to Lang's Algebra (quite a big, scary thing - take a look at it first), Matsumura's Commutative Ring Theory (pitched much, much higher than Atiyah-MacDonald, even though they have essentially the same name), or Eisenbud's Commutative Algebra (also harder than Atiyah-MacDonald, but designed for people interested in algebraic geometry - if you don't know what that is, look it up).
I'd like to add one more thing about your question (3): the problem with learning only the proofs and theory is that there is no reason for them to stick on their own. You might open up Atiyah-MacDonald and understand everything you read, for example, but I wouldn't expect much of it to last unless you use it. So a good general philosophy is to read and try to absorb, but then do exercises to let it solidify. Well-written exercises require you to build on the text, both as review and to build intuition.
A hard problem is knowing how many exercises to do. Too many, and you waste your time; too few, and you'll forget much. But this is sort of moot, as it's hard to know which problems are useful or good to do before you actually do them, and in some texts some problems are much, much better for you than others. For this, I advise you to ask your advisor (or find someone who can provide some sort of guidance) for direction once you have an idea of what sort of things you want to learn about.
This is really just a comment - see my postscript below - but this is way, way too long:
So the main idea is this: we're looking for an idea which generalizes all or many of the already-very-general kinds of mathematical object studied so far. For instance, groups, rings, fields, vector spaces, etc. Whatever definition we settle on should have two properties:
It should be sufficiently general to cover all, or at least a wide variety, of the mathematical objects we've already become interested in.
It should be specific enough that we can prove things about it. Too much generality is not inherently good!
Now, let me point out that this question has been answered in different ways! E.g.
In the context of universal algebra, we're interested in sets equipped with functions - no relation symbols allowed!
In the context of model theory, the definition of structure is as you've given it (a formal version is spelled out just after this list). Note that this generalizes beyond the universal-algebraic setting by allowing relations.
What about times when we care about topology? Topological groups, rings, fields etc. make sense, but aren't captured by the classical notion of "structure" from model theory. If you want to talk about topological structures, you need an even more general definition: a topological structure is a structure (in the usual sense) together with a topology on its underlying set. See e.g. the book Continuous Model Theory by Chang and Keisler. Now, maybe we also want to add some compatibility conditions on how the structure and topology interact; this is the approach taken in continuous logic, which has so far been more effective than the unrestricted version (at least, that's my impression).
And, of course, sometimes we're interested in things that look like structures "from the outside" - e.g., a group object in some category. We can develop categorical versions of model theory or universal algebra (see e.g. Lawvere theories), which are even more general than what I've described so far.
Going in a different direction, we could allow infinitary relations and functions. I believe this was looked at by Addison a long time ago, but I don't have a citation. Note that there are very natural examples of infinitary operations and relations - for example, the $\omega$-ary relation on sequences "$\sum_n x_n$ converges"!
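For reference, here is the classical definition the model-theoretic bullet above refers to (standard material; the notation here is mine). A signature $\sigma$ consists of function symbols $f$ (each with an arity $n_f$), relation symbols $R$ (each with an arity $n_R$), and constant symbols $c$. A $\sigma$-structure is then
$$\mathcal{M} \;=\; \bigl(M;\ (f^{\mathcal{M}})_{f \in \sigma},\ (R^{\mathcal{M}})_{R \in \sigma},\ (c^{\mathcal{M}})_{c \in \sigma}\bigr),$$
where $M$ is a nonempty set, $f^{\mathcal{M}} : M^{n_f} \to M$, $R^{\mathcal{M}} \subseteq M^{n_R}$, and $c^{\mathcal{M}} \in M$. Universal algebra keeps only the $f$'s and $c$'s, model theory allows the $R$'s as well, and the infinitary variant lets $n_f$ or $n_R$ be infinite.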
So that's my pushback against the idea that there's a "right" notion of structure. That said, the classical model-theoretic notion of structure has clearly been extremely useful. So, what made it so?
It's difficult to make a convincing argument here, but let me list a few points in favor of this definition.
It is sufficiently general to capture basically every structure studied in (the non-topological parts of) abstract algebra.
It provides a semantics for set theory, which in turn lets us talk about the more general approaches. Remember that ZFC, despite talking about the set-theoretic universe, is really a first-order theory! Note that we run into trouble here if we want to talk about e.g. class-sized objects, but I don't really think that's fatal to this idea (see e.g. NBG or universes).
This definition is sufficiently narrow that we can prove theorems about it: e.g. compactness, Löwenheim-Skolem, Herbrand's theorem, etc. (the first two are stated below). These theorems in turn have been applied to specific mathematical problems - see e.g. the Ax-Kochen theorem or proof mining. Note that these theorems are really theorems of the underlying logic (first-order logic) rather than of the notion of structure per se; but these aren't really that different: the completeness theorem gives a sense in which the notion of structure is in correspondence with the underlying logic. (Speaking of logic, note that we could also ask why first-order logic is the "right" logic to use, and how that decision was made; see e.g. this paper by Ferreirós. Also see Lindström's theorem.)
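To pin down at least two of these (standard statements; the wording is mine): the compactness theorem says that a set $T$ of first-order sentences has a model if and only if every finite subset $T_0 \subseteq T$ has a model. The downward Löwenheim-Skolem theorem, in its simplest form, says that if a theory in a countable language has an infinite model, then it has a countable model. Both fail badly for most stronger logics, which is part of the content of Lindström's theorem mentioned above.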
Note that this answer completely avoids any real historical discussion. This reflects my lack of knowledge. While I know a little bit about the philosophical arguments which surrounded the adoption of first-order logic, I know nothing about the history of the definition of "structure." That's why everything in this answer is retrospective: knowing what we know now mathematically, what can we say about "structure"? I hope someone with actual knowledge will give a better answer.
Best Answer
Abstract algebra has an interesting way of making a problem more transparent by forgetting about superfluous properties. I'll give some real-world applications to illustrate:
Let's say you're a physicist studying the motion of a particle modeled by some differential equation. To find solutions to such an equation, one typically takes the Fourier transform and solves a corresponding algebraic problem. The Fourier transform essentially allows us to see past the complexity arising from taking derivatives and illuminates an underlying algebraic problem.
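As a concrete illustration (my example, not the answerer's, using the convention $\widehat{u}(\xi) = \int_{\mathbb{R}} u(t)\, e^{-2\pi i \xi t}\, dt$): consider a forced oscillator. Since $\widehat{u''}(\xi) = -4\pi^2 \xi^2\, \widehat{u}(\xi)$, differentiation turns into multiplication, and
$$u''(t) + \omega_0^2\, u(t) = f(t) \qquad\text{becomes}\qquad \bigl(\omega_0^2 - 4\pi^2 \xi^2\bigr)\, \widehat{u}(\xi) = \widehat{f}(\xi),$$
a purely algebraic equation that we solve by division and then invert.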
As another example, let's say you are studying the effects of a gravitational field in a certain region of spacetime. Particles which travel through this region are subject to the curvature of spacetime induced by the gravitational field. Again, this situation is extremely complicated. However, we can reduce the problem to an algebraic one locally: we use the tangent bundle and a smoothly varying metric to describe the motion.
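The "algebraic problem locally" can be made explicit (standard differential geometry, not spelled out in the original): in local coordinates, a freely falling particle satisfies the geodesic equation
$$\frac{d^2 x^{\mu}}{d\tau^2} + \Gamma^{\mu}_{\alpha\beta}\, \frac{dx^{\alpha}}{d\tau} \frac{dx^{\beta}}{d\tau} = 0, \qquad \Gamma^{\mu}_{\alpha\beta} = \tfrac{1}{2}\, g^{\mu\nu} \bigl(\partial_{\alpha} g_{\nu\beta} + \partial_{\beta} g_{\nu\alpha} - \partial_{\nu} g_{\alpha\beta}\bigr),$$
where the Christoffel symbols $\Gamma^{\mu}_{\alpha\beta}$ are built algebraically, at each point, from the metric $g$ and its first derivatives.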
Another example dates all the way back to Descartes. Geometric shapes are hard to understand, but imposing coordinates on such objects allows us to use algebraic techniques to understand the object better.
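A tiny worked example in that spirit (mine, not the answerer's): to find where the unit circle meets the line $y = x$, replace the geometric question with the algebraic system
$$x^2 + y^2 = 1, \quad y = x \;\Longrightarrow\; 2x^2 = 1 \;\Longrightarrow\; (x, y) = \Bigl(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\Bigr) \text{ or } \Bigl(-\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}\Bigr),$$
so an intersection of shapes becomes the solution set of simultaneous equations.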
Anomalies in physics arise as elements of cohomology groups. Without the notion of a group, they would be very hard to calculate; you may not even know where to start.
The bottom line is, we know how to do algebra. It is the ability to translate difficult problems into algebraic ones that makes it so useful.