1) Those standard deviations aren't so badly different.
2) Since $n_1=41000$ and the standard deviations aren't very large, even if the variances were very different it wouldn't matter much.
You could even treat the mean of the first sample as fixed (it almost is) and do a one sample t-test.
3) The skewness likely won't matter much either, unless it's quite strong in the smaller sample. (You say 'skews above 10', but that doesn't really say how big they are. If, say, the skewness in the smaller sample is less than 20, the distribution of its mean should still be close to normal; between the CLT for the numerator and Slutsky's theorem for the rest of the statistic, the test statistic should be close to normal.)
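The one-sample shortcut in point 2 is easy to check directly. A minimal sketch in Python (the data here are made up; scipy's `ttest_1samp` and `ttest_ind` are just stand-ins for whatever software you're using):

```python
import numpy as np
from scipy import stats

# Made-up stand-ins for the two samples in the question
rng = np.random.default_rng(0)
big = rng.exponential(scale=2.0, size=41000)   # the very large sample
small = rng.exponential(scale=2.0, size=200)   # the smaller sample

# Treat the large sample's mean as a fixed constant and do a one-sample t-test
t_one, p_one = stats.ttest_1samp(small, popmean=big.mean())

# Compare with the Welch two-sample test
t_welch, p_welch = stats.ttest_ind(small, big, equal_var=False)
```

With $n_1=41000$, the large sample contributes almost nothing to the standard error of the difference, so the two p-values should agree very closely.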
--
The Welch test should be okay.
Another alternative is to consider a permutation test (the standard deviations aren't all that different) or a bootstrap test. They'll likely give very similar results to what you already have.
Edit: (Answering follow-up question from comments)
Well, sure. The way to tell if the difference is not so bad is to see how much impact ignoring it would have.
The relevant measures of impact are the significance level when $H_0$ is true and power when it's false, and more generally the shape of the power function (which can reveal issues like test bias). You can most easily calculate and compare power functions under various assumptions via simulation.
For example, I used simulation in parts of my answer to this related question. I carried out those simulations in R.
So you can assume some population ratios of variances close to the one observed and see how badly it affects significance and power if you treat them as equal, and how close to the nominal significance you get if you use say the Welch approximation instead, as well as any impact on power.
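A bare-bones version of that kind of simulation, in Python rather than R (parameter values here are arbitrary illustrations, not the question's data): simulate under $H_0$ with a chosen variance ratio and see how far each test's rejection rate drifts from the nominal level.

```python
import numpy as np
from scipy import stats

def rejection_rate(n1, n2, sd1, sd2, equal_var, n_sim=2000, alpha=0.05, seed=0):
    """Estimated type I error rate: both populations have mean 0,
    so every rejection is a false positive."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, sd1, n1)
        y = rng.normal(0.0, sd2, n2)
        # equal_var=True is the pooled (equal-variance) t; False is Welch
        rejections += stats.ttest_ind(x, y, equal_var=equal_var)[1] < alpha
    return rejections / n_sim
```

With, say, the smaller sample having the larger spread (`n1=10, sd1=3` vs `n2=100, sd2=1`), the pooled test's rejection rate climbs well above the nominal 5% while Welch stays close to it. To study power instead, give the two populations different means.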
I am not a Matlab expert, but looking at the documentation for ttest2, it appears there is no option to change the null value (the hypothesized difference between the means under the null hypothesis).
But it can return a confidence interval on the difference between the two means, and you can use a confidence interval to do a hypothesis test (the yes/no decision, not the exact p-value). Construct the confidence interval and check whether the difference you are interested in lies within it. If it is inside the interval, the result is not statistically significant; if it is outside, it is statistically significant.
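The same CI-based test is easy to do by hand in any language. A Python stand-in (the `welch_ci` helper is my own, computing the Welch unequal-variance interval that ttest2 would return):

```python
import numpy as np
from scipy import stats

def welch_ci(x, y, conf=0.95):
    """Welch (unequal-variance) confidence interval for mean(x) - mean(y)."""
    n1, n2 = len(x), len(y)
    v1, v2 = x.var(ddof=1), y.var(ddof=1)
    se = np.sqrt(v1 / n1 + v2 / n2)
    # Welch-Satterthwaite approximate degrees of freedom
    df = (v1 / n1 + v2 / n2) ** 2 / (
        (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    half = stats.t.ppf(1 - (1 - conf) / 2, df) * se
    diff = x.mean() - y.mean()
    return diff - half, diff + half
```

Reject $H_0\!: \mu_x - \mu_y = d_0$ at the 5% level exactly when $d_0$ falls outside the 95% interval.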
Another option: if you subtract the null-value difference from each value in one set (the one you expect to be higher), that adjusts the mean appropriately without changing the variances or anything else. So you can pass the new vectors (one with the subtraction done) to the ttest2 function and it will give you the appropriate p-value.
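The shift trick looks like this in Python (illustrative data; `scipy.stats.ttest_ind` plays the role of Matlab's ttest2):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.normal(10.0, 2.0, 500)   # the group expected to be higher
b = rng.normal(9.0, 2.0, 500)

d0 = 1.0  # hypothesized difference under the null
# Shifting `a` down by d0 turns H0: mu_a - mu_b = d0 into H0: equal means,
# which is what the ordinary two-sample test checks. The shift moves the
# mean but leaves the variances untouched.
t_stat, p_val = stats.ttest_ind(a - d0, b, equal_var=False)
```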
Best Answer
While you can do this in a Bayesian way, have you considered whether it would actually be better to estimate the difference in the means rather than test whether they are different? This is what Andrew Gelman frequently recommends. I can imagine some possible reasons for wanting to do hypothesis testing, but I don't think they're that common.
I don't think you need something like a t-test, because you can estimate the standard deviation well: you said the groups have very similar standard deviations.
If that's the case, then I think this link should be what you need. It shows how to estimate a difference in means or do a hypothesis test (though I don't recommend the latter). You could also look at the part they reference in Bolstad's book (you can find electronic copies online). It's possible to incorporate estimating the variances as well, but it's more complex, so I suspect you're better off incorporating the prior information you have about the variances in a naive way (for example, using the unbiased standard deviation estimator on each of the sets, averaging them, and pretending that's your 'known' standard deviation).
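The naive "known variance" approach can be sketched as follows. This is my own illustration, not code from the linked page or from Bolstad's book: with flat priors on the two means and the averaged sample standard deviation plugged in as known, the posterior for the difference in means is normal.

```python
import numpy as np
from scipy import stats

def posterior_diff(x, y):
    """Posterior for mean(x) - mean(y) with flat priors on the means and
    the spread treated as 'known': the two sample standard deviations are
    averaged and plugged in, the naive use of the prior knowledge that
    the groups have very similar standard deviations."""
    sd = (x.std(ddof=1) + y.std(ddof=1)) / 2   # treated as the known sigma
    mean = x.mean() - y.mean()
    se = sd * np.sqrt(1 / len(x) + 1 / len(y))
    return mean, se   # the posterior is Normal(mean, se**2)
```

A 95% credible interval for the difference is then `stats.norm.interval(0.95, loc=mean, scale=se)`.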