At first glance it appears that he more or less just gave the first nontrivial example(s) of what were later called Casimir operators.
His obituary says:
On 1 May 1931 he wrote a letter from The Hague to the famous Göttingen mathematician
Hermann Weyl and announced: ‘While studying the quantum-mechanical properties of the
asymmetric rotator I arrived at some ‘results’ (?) concerning the representation of continuous
groups.’ He then sketched his findings on the matrix elements of the irreducible representations
for the three-dimensional rotation group, and a possible extension for semi-simple groups in
general, where he introduced what was later called the ‘Casimir operator’. This operator turned
out to be a multiple of the unit-operator and may be used to characterize in an elegant way
the irreducible representations of a given continuous group. To Casimir’s question, ‘Whether
the case is worth considering?’, Weyl answered definitely ‘Yes’. Hence the Leiden doctoral
candidate published his mathematical results in a paper, communicated by Ehrenfest to the
meeting of 27 June 1931 of the Amsterdam Academy [7], and he also included them as Chapter
IV of his dissertation, which he defended on 2 November 1931 at the University of Leiden [8].
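For concreteness, here is the simplest instance of what the obituary describes (a standard textbook fact, not a quotation from Casimir's letter): the quadratic Casimir of the rotation group, which indeed acts as a multiple of the identity on each irreducible representation.

```latex
% Generators J_x, J_y, J_z of so(3) with commutation relations
% [J_a, J_b] = i\,\epsilon_{abc}\,J_c. The quadratic Casimir operator
C \;=\; J_x^2 + J_y^2 + J_z^2, \qquad [C, J_a] = 0 \quad (a = x, y, z),
% commutes with every generator, so by Schur's lemma it is a scalar on
% each irreducible representation: on the spin-j representation
% (dimension 2j+1) it acts as
C \;=\; j(j+1)\,I .
```

The eigenvalue $j(j+1)$ labels the irreducible representation, which is exactly the "elegant characterization" the obituary refers to.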
My opinion is that physicists moved from the study of "individual objects" to that of "large systems", where order arises from limit probability laws rather than from simple deterministic formulae, and from the study of something "readily observable" to something that is, essentially, "a purely mathematical object" invisible to direct experiment. This brought them into the realm traditionally reserved for pure mathematicians. And, of course, with their eagerness to use whatever tools they have available in any way short of total lunacy, they went on to make predictions, many of which could be confirmed experimentally, leaving a long trail of successes and failures in their wake for mathematicians to explain.
I do not know the situation with string theory and low-dimensional topology, but I have some idea of what's going on in random matrices (thanks to Mark Rudelson and his brilliant series of lectures) and in percolation/random zeroes (thanks to Stas Smirnov and Misha Sodin). The thing that saves physicists from making crude mistakes there is various "universality laws".
Here is a typical physicist's argument (Bogomolny and Schmidt). You want to study the nodal domains of a random Gaussian wave $F$ (the Fourier transform of white noise on the unit sphere times the surface measure). Let's say we are in dimension 2 and just want to know the typical number of nodal lines (components of the set $\{F=0\}$) per unit area. The stationary random function $F$ has only a power decay of correlations. However,
we ignore that and model it with a square lattice that has the same length of nodal lines per unit area as $F$ (this is a computable quantity if you use some standard integral-geometry tricks). Now, at each intersection of lattice lines, we choose one of the two natural ways to separate them (think of the intersection as a saddle point, with the crossing lines being the level lines at the saddle level). Then we get a question (still unresolved on the mathematical level, by the way) about a pure percolation-type model. Thinking by analogy once more, we get a numerical prediction.
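To make the flavor of such a model concrete, here is a toy simulation (my own illustration, not the Bogomolny–Schmidt computation itself). The random independent choices at the lattice crossings behave like critical bond percolation on the square lattice, where each bond is open with probability $1/2$; one then counts clusters per site, a quantity that tends to a universal constant as the lattice grows. The function name `count_clusters` and its parameters are mine.

```python
import random

def count_clusters(n, p=0.5, seed=0):
    """Bond percolation on an n x n square lattice: each bond is open
    with probability p; count connected clusters via union-find."""
    rng = random.Random(seed)
    parent = list(range(n * n))

    def find(x):
        # find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(n):
        for j in range(n):
            v = i * n + j
            if j + 1 < n and rng.random() < p:   # horizontal bond
                union(v, v + 1)
            if i + 1 < n and rng.random() < p:   # vertical bond
                union(v, v + n)

    return len({find(v) for v in range(n * n)})

# At the critical point p = 1/2 the number of clusters per site
# approaches a universal constant as n grows.
n = 200
print(count_clusters(n) / n**2)
```

Running this for increasing $n$ shows the cluster density stabilizing, which is the kind of "universality" that makes the physicists' shortcut work so often.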
From the viewpoint of a mathematician, all this is patent gibberish. There is no way to reduce one process to the other (or, at least, no one has the slightest idea how this could be done at the time of this writing). Still, Nature is kind enough to make the answers the same, or about the same, for all such processes, and Mathematics is powerful enough to provide an answer (or part of an answer) for some models; so the physicists run a simulation, and, voilà, everything is as they predicted, and we are left with 20 years or so worth of work to figure out what is really going on there.
I'm not complaining here, quite the opposite: this story is really quite exciting, and the work mentioned is both real and fascinating. We are essentially back in the days when Newton tried to explain the nature of gravity by looking at Kepler's laws, trying various options and separating what works from what doesn't. I'm only saying that the famous "physicists' intuition", which is so overrated, is actually just the benevolence of Nature. Why Nature should be so benevolent to us remains a mystery, and I know neither a physicist nor a mathematician who could shed any light on that. The best explanation so far is contained in Einstein's words "God is subtle, but not malicious", or, in a slightly less enigmatic form, "Nature conceals her mystery by means of her essential grandeur, not by her cunning".
Best Answer
It seems (as mentioned by Sam Hopkins above) that the Singularity Theorem is the official reason for the Nobel Award.
But that is by no means the only (and perhaps not even the most important) contribution of Sir Roger Penrose to mathematical physics (not to mention his work as a geometer, his research on tilings, and so many other things).
In Physics, his grand idea is Twistor Theory, an ongoing project which is still far from completion but has been incorporated into other areas (see for instance here for its connection to String Theory; there is also another connection with the Bohm-Hiley approach using Clifford Algebras, see here).
But his influence goes even beyond that: Penrose invented Spin Networks in the late sixties as a way to discretize space-time. The core idea was subsequently incorporated into the grand rival of String Theory, Loop Quantum Gravity. As far as I know, all approaches to a background-independent Quantum Theory of Gravity use spin networks, one way or another.
Moral: Congratulations Sir Roger !
ADDENDUM: @TimothyChow mentioned that my answer does not address the OP's question, namely Penrose's contribution to General Relativity. I have mentioned two big ideas of Penrose, namely Spin Networks and Twistor Theory. The first is, as far as I know, not directly related to standard relativity, but rather to "building" a discrete space-time. It is not entirely unrelated, though, because the core idea is that space-time, the main actor of GR, is an emergent phenomenon. The ultimate goal of spin networks, and of all theories which capitalize on them, is to generate a description of the universe which accommodates Quantum Mechanics and at the same time enables the recovery of GR as a limit process.
As for the second theory, Twistors, I am obviously not the right person to speak about them, as they are a quite involved matter, with many ramifications, from multi-dimensional complex manifolds to sheaf cohomology, and a lot more.
But, for this post, I can say this: the core idea is almost childish, and yet absolutely deep. Here it is: Penrose, thinking about Einstein's universe, realized that light rays are fundamental, not space-time points. Think for simplicity of projective space: you reverse the order. Rather than lines being made of points, it is points which are the focal intersections of light rays. The set of light rays, endowed with a suitable topology, makes up twistor space (it is a complex manifold of even dimension).
Now, according to Penrose, relativity should be done inside Twistor Space, and normal space-time can be recovered from it using the "points trick" and the Penrose mapping, which transforms twistor coordinates into Lorentzian ones. What is more, twistor space provides some degrees of freedom for QM as well. How? Well, think of a set of tilting light rays. Rather than a well-defined space-time point, you will get a "fuzzy point". But here I stop.