Burning (and fusion) is "unsustainable" by definition because it converts a finite supply of fuel to "energy" plus "waste products", and at some moment there is no fuel left.
I am not sure whether the word "unsustainable" was used as a joke, a parody of the same nonsensical adjective that is so popular with the low-brow media these days, but I surely laughed (because it almost sounds like you are proposing to extinguish the Sun to be truly environment-friendly). The thermonuclear reaction in the Sun has been "sustained" for about 4.6 billion years and roughly 7.5 billion more years are left before the Sun goes red giant. That's over 10 billion years in total; many other processes are much less sustainable than that. More importantly, there is nothing wrong with processes and activities being "unsustainable". All processes in the real world are unsustainable, and the most pleasant ones are the least sustainable, too.
But back to your specific project.
When it comes to energy, it is possible to blow a star apart without spending energy that exceeds the thermonuclear energy stored in the star. Just make a simple calculation for the Sun. Try to divide it into two "semisuns" whose mass is $10^{30}$ kilograms each. The current distance between the two semisuns is about $700,000$ kilometers, the radius of the Sun. You want to separate them to a distance where the potential energy is small, comparable to that at infinity.
It means that you must "liberate" the semisuns from a potential well. The gravitational potential energy you need to spend is
$$ E = \frac{G\cdot M\cdot M}{R} = \frac{6.67\times 10^{-11}\times 10^{60}}{7\times 10^{8}\,{\rm m}} \approx 10^{41}\,{\rm Joules} $$
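The arithmetic can be sanity-checked in a few lines; the inputs are the same round numbers as above:

```python
# Back-of-the-envelope check of the separation energy (SI units throughout).
G = 6.67e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 1e30              # mass of each "semisun", kg (roughly half the Sun)
R = 7e8               # initial separation ~ one solar radius, m
c = 3e8               # speed of light, m/s

E = G * M * M / R     # energy needed to lift the halves out of the well
m_equiv = E / c**2    # rest mass with the same energy content

print(f"E     ~ {E:.1e} J")         # ~ 1e41 J
print(f"E/c^2 ~ {m_equiv:.1e} kg")  # ~ 1e24 kg
```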
That's the equivalent of about $10^{24}$ kilograms of matter (roughly a sixth of the Earth's mass) completely converted to energy via $E=mc^2$. Since hydrogen fusion releases only about 0.7% of the rest-mass energy, getting it from thermonuclear burning would require fusing roughly $10^{26}$ kilograms of hydrogen, a few dozen Earth masses.
You may force the Sun to undergo something like the "red giant" transition prematurely and save some unburned hydrogen. But to do so, you will have to spend an amount of energy corresponding to many Earth masses of hydrogen burned via fusion.
But of course, the energy accounting above, which came out "favorable", isn't the only problem. To actually tear the Sun apart, you would have to send an object inside the Sun that could survive the rather extreme conditions over there, including temperatures of 15 million degrees Celsius and pressures of hundreds of billions of atmospheres. Needless to say, no solid can survive these conditions: any object based on atoms we know will inevitably become a plasma. A closely related fact is that ordinary matter based on nuclei and electrons doesn't allow for any "higher-pressure" explosion than the thermonuclear one, so there is nothing "stronger" that could be sent into the Sun as an explosive to counteract the huge pressure inside the star.
One must get used to the fact that anything that tries to "intervene" in the Sun becomes plasma; any intruder would be quickly devoured and the Sun would restore its balance. The only possible loophole is to use an amount of material comparable to the Sun itself. So you may think about colliding two stars, which could perhaps tear them apart and stop the fusion. This isn't easy. The energy needed to substantially change the trajectory of another star is very, very large, unless one is lucky and the stars are already on course to "nearly collide", which is extremely unlikely.
Physics will not allow you to do such things. You would need a form of matter more extreme than the plasma in the Sun, e.g. neutron matter, but such an object probably can't be much lighter (or easier to prepare, e.g. in terms of energy) than the star itself. A black hole could only drill a hole (when fast enough) or consume the Sun (which you don't want).
However, if you allow the Sun to be eaten by a black hole, you will actually get a more efficient and more sustainable source of energy. Well, too sustainable. ;-) A black hole of a mass comparable to the solar mass would have a radius of about 3 kilometers. Its Hawking radiation would consist of photons with wavelengths of tens of kilometers, emitted roughly one at a time every fraction of a millisecond, and it would only evaporate after some $10^{67}$ years. It would be so sustainable that no one could possibly observe the energy it is emitting. However, the black hole would ultimately emit all the energy $E=mc^2$ stored in its mass.
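These numbers follow from the standard formulas for the Schwarzschild radius, the Hawking temperature, and the evaporation time; here is a quick sketch using textbook constants:

```python
import math

# Rough numbers for a solar-mass black hole (SI units).
G, c, hbar, k = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M = 1.989e30                                            # one solar mass, kg

R_s = 2 * G * M / c**2                                  # Schwarzschild radius, m
T_H = hbar * c**3 / (8 * math.pi * G * M * k)           # Hawking temperature, K
t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)   # evaporation time, s
t_evap_yr = t_evap / 3.156e7

print(f"R_s    ~ {R_s/1e3:.1f} km")       # ~ 3 km
print(f"T_H    ~ {T_H:.1e} K")            # ~ 6e-8 K
print(f"t_evap ~ {t_evap_yr:.1e} years")  # ~ 2e67 years
```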
If there are powerful civilizations ready to do some "helioengineering", they surely don't suffer from naive and primitive misconceptions about the world, such as the word "sustainable" and many other words that are so popular in the muddled movement known as "environmentalism". These civilizations may do many things artificially, but they surely realize that the thermonuclear reaction in stars is a highly efficient and useful way to extract energy from hydrogen fuel. Even some of us realize that almost all the useful energy that allowed the Earth to evolve and create life came from the Sun.
The Sun may become unsustainable in 7.5 billion years but according to everything we know about Nature, it's the optimum device to provide large enough civilizations – whole planets – with energy.
Low-mass M dwarfs are the only stars that are fully convective, but most stars have at least some convection going on either in the core or in the outer envelope.
Convection occurs where the magnitude of the actual temperature gradient exceeds that of the adiabatic temperature gradient, making the gas susceptible to convective instabilities.
If a star has a temperature gradient exactly equal to the adiabatic temperature gradient, then a parcel of rising gas in pressure equilibrium with its surroundings will change its temperature in exactly the same way as its surroundings, and nothing really happens. However, if the modulus of the (negative) temperature gradient of the surroundings is higher, then a rising parcel, which cools at the adiabatic rate, remains hotter and hence less dense than the gas around it. This makes it buoyant, so it rises further, transporting heat outwards. This is a convective instability.
The key to your question is to examine the conditions under which the temperature gradient in a star becomes large enough to trigger convective instability. There are basically three cases where this happens.

1. The opacity of the gas to radiation becomes large. The temperature gradient then must become larger to carry the same energy flux. Roughly speaking,
$$\frac{dT}{dr} \propto \kappa,$$
where $\kappa$ is the opacity of the gas.

2. The adiabatic temperature gradient becomes smaller due to changes in the adiabatic index, for instance where the ionisation state of the gas changes near the photosphere.

3. The heat generation in the core of a star is very temperature sensitive, which induces a very steep temperature gradient. Main-sequence stars more massive than the Sun generate energy through the CNO cycle, which is more temperature sensitive than the pp chain, and hence have convective cores.
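To put "more temperature sensitive" in numbers: standard textbook approximations give roughly $\epsilon_{\rm pp}\propto T^4$ and $\epsilon_{\rm CNO}\propto T^{17}$ near solar core temperatures (the exact exponents vary with temperature). A short sketch shows how much a modest temperature rise boosts each:

```python
# Energy-generation boost from a 10% temperature rise, using the
# approximate textbook power-law exponents near solar core temperatures.
n_pp, n_cno = 4, 17

def boost(n):
    """Factor by which epsilon ~ T**n grows when T rises by 10%."""
    return 1.10 ** n

print(f"pp chain : x{boost(n_pp):.1f}")   # ~ x1.5
print(f"CNO cycle: x{boost(n_cno):.1f}")  # ~ x5.1
```

The CNO cycle's response is several times stronger, which is why energy release concentrates sharply toward the centre and drives core convection.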
In low-mass M-dwarfs it is mechanism (1) that is in operation. The opacity in the stellar interior is approximated by Kramers' opacity law
$$\kappa \propto \rho T^{-7/2},$$
where $\rho$ is the density and $T$ the temperature.
M-dwarfs are denser than more massive stars and have lower interior temperatures. The opacity of the gas is then so high that convective instability is present throughout the star (except right at the photosphere). In higher mass main sequence stars, the opacity in the deep interior is low enough (because of higher temperatures and lower densities) to avoid convective instability. But convection then happens in the cooler outer layers (e.g. in the outer 20% or so of the Sun).
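As a toy illustration of how strongly the Kramers scaling pushes up the opacity in an M-dwarf interior, here is a sketch; the densities and temperatures are illustrative round numbers (in SI units), not the output of any stellar model:

```python
# Relative Kramers opacity, kappa ~ rho * T**-3.5, for two rough interior
# states. Inputs are illustrative round numbers, not stellar-model output.
def kramers_relative(rho, T):
    """Opacity up to a constant factor, rho in kg/m^3, T in K."""
    return rho * T ** -3.5

sun_core    = kramers_relative(1.5e5, 1.5e7)  # ~solar core conditions
mdwarf_core = kramers_relative(5e5, 5e6)      # ~late M-dwarf core (rough)

print(f"M-dwarf / Sun opacity ratio ~ {mdwarf_core / sun_core:.0f}")
```

Even with these crude inputs the denser, cooler M-dwarf interior comes out more than a hundred times more opaque, which is the sense in which mechanism (1) dominates.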
I work with stellar models, so I thought I'd chip in here. My instant reaction is that you shouldn't worry too much: determining the age of a star is difficult and different models will disagree (sometimes significantly!) on that age.
I can't see an obvious reason to doubt the conclusion.
Basically, one tries to measure as many properties of the star as accurately as possible, and then find the best-fitting stellar model. These models are solutions to a set of differential equations (in time and one spatial dimension) that try to capture all the relevant physics that determines how stars evolve. The bulk physics is a fairly well-defined problem, but there are several potentially important components that are lacking in these models. (I'll expand on this if desired...)
The usual difficulty here is breaking down the degeneracy between brightness and distance. That is, a distant object is fainter, so it's hard to know whether a certain object is intrinsically faint or just further away. The principal result in this paper is the Hubble-based parallax measurement, which makes a big improvement on that distance measurement and, therefore, the brightness of the star. The other things they use are proxies for the surface composition and the effective temperature of the star, as far as I can see.
Incidentally, this is where I would suspect the tension can be resolved. If you look at Fig. 1 of the paper, they show the evolution of different stars for different compositions. What you're looking for, roughly speaking, is lines that go through the observed points. That figure shows that if the oxygen content is underestimated, then the best fit is actually about 13.3 Gyr, which is no longer at odds with the age of the Universe.
Take note of Table 1, where the sources of error (at 1$\sigma$) are listed. It's interesting that, not only is the star's oxygen content the largest source of error, but even the uncertainty of the oxygen content of the Sun is a contributor!
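If it helps to see how such a table is combined into a single uncertainty: independent 1$\sigma$ contributions add in quadrature. The numbers below are placeholders for illustration, not the values from Table 1:

```python
import math

# Combining independent 1-sigma age uncertainties in quadrature.
# These contributions are hypothetical, NOT the values from Table 1.
contributions_gyr = {
    "oxygen abundance":      0.5,  # placeholder
    "distance/parallax":     0.3,  # placeholder
    "effective temperature": 0.2,  # placeholder
}
total = math.sqrt(sum(v ** 2 for v in contributions_gyr.values()))
print(f"total 1-sigma ~ {total:.2f} Gyr")
```

The quadrature sum is dominated by the largest term, which is why shrinking the oxygen-abundance uncertainty would help the age estimate the most.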
The age of Methuselah, definitely. I would describe our estimates of the age of the Universe as in some sense "convergent": different methods point to consistent numbers. Sure, Planck shifted the goalpost by 80 Myr or so, but it'd be a real shock to see that number change by, say, half a billion years.
I have no idea and haven't really thought about it. Since I'm pretty sure this isn't a big problem, I don't think relativistic effects are necessary to explain the discrepancy.