If you report the interaction, you need to report the main effects as well, whether pooled (as @Frank suggests) or "plain". I usually report some predicted values as well - often in a graph - as I think these show things intuitively.
I agree with @Frank about significance tests. That's not a good way to build a model.
I think you may have misremembered the advice. It is true that you should not interpret main effects in the usual way when there is an interaction. I have also heard some people say that you should not report an interaction if it is not significant, although I don't agree.
A little niggle
'Now many textbook examples tell me that if there is a significant effect of the interaction, the main effects cannot be interpreted'
I hope that's not true. They should say that if there is an interaction term, say between X and Z called XZ, then the individual coefficients for X and for Z cannot be interpreted in the same way as they could be if XZ were not present. You can definitely still interpret them.
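To make the niggle concrete: in a model Y = b0 + b1·X + b2·Z + b3·XZ, the coefficient b1 is the effect of X when Z = 0, and the marginal effect of X at any value of Z is b1 + b3·Z. Here is a minimal numpy sketch with simulated data (the variable names and true coefficients are invented purely for illustration):

```python
import numpy as np

# Simulate Y = 1 + 2*X + 3*Z + 1.5*X*Z + noise (made-up coefficients)
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=n)
Z = rng.normal(size=n)
Y = 1 + 2 * X + 3 * Z + 1.5 * X * Z + rng.normal(scale=0.1, size=n)

# Fit by ordinary least squares
design = np.column_stack([np.ones(n), X, Z, X * Z])
b0, b1, b2, b3 = np.linalg.lstsq(design, Y, rcond=None)[0]

# b1 is NOT "the effect of X overall": it is the effect of X when Z == 0.
# The marginal effect of X depends on the value of Z:
def marginal_effect_of_X(z):
    return b1 + b3 * z

print(marginal_effect_of_X(0.0))  # close to 2: effect of X at Z = 0
print(marginal_effect_of_X(2.0))  # close to 5: effect of X at Z = 2
```

The point of the sketch is that b1 alone answers a narrower question than "what is the effect of X", but it still answers a perfectly interpretable one.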
Question 2
If the interaction makes theoretical sense, there is no reason not to leave it in, unless concerns about statistical efficiency somehow override concerns about misspecification and about letting your theory and your model diverge.
Given that you have left it in, then interpret your model using marginal effects in the same way as if the interaction were significant. For reference, I include a link to Brambor, Clark and Golder (2006) who explain how to interpret interaction models and how to avoid the common pitfalls.
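One pitfall that paper emphasizes is reporting a marginal effect without its standard error, which itself varies with the moderator. A rough numpy sketch of the standard calculation (simulated data with invented coefficients, so an illustration rather than a recipe):

```python
import numpy as np

# Simulated data: Y = 0.5 + 1.0*X + 0.8*Z + 0.6*X*Z + noise (made-up values)
rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=n)
Z = rng.normal(size=n)
Y = 0.5 + 1.0 * X + 0.8 * Z + 0.6 * X * Z + rng.normal(size=n)

D = np.column_stack([np.ones(n), X, Z, X * Z])
beta = np.linalg.lstsq(D, Y, rcond=None)[0]
resid = Y - D @ beta
sigma2 = resid @ resid / (n - D.shape[1])
cov = sigma2 * np.linalg.inv(D.T @ D)  # OLS covariance of the estimates

# Marginal effect of X at a given z, with its standard error:
# Var(b1 + b3*z) = Var(b1) + z^2*Var(b3) + 2*z*Cov(b1, b3)
def me_and_se(z):
    me = beta[1] + beta[3] * z
    var = cov[1, 1] + z**2 * cov[3, 3] + 2 * z * cov[1, 3]
    return me, np.sqrt(var)

for z in (-1.0, 0.0, 1.0):
    me, se = me_and_se(z)
    print(f"z={z:+.1f}: marginal effect {me:.2f} (se {se:.2f})")
```

Plotting the marginal effect and its confidence band across the observed range of the moderator is the graphical version of this, and is usually far more informative than the raw coefficient table.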
Think of it this way: you often have control variables in a model that turn out not to be significant, but you don't (or shouldn't) go chopping them out at the first sign of missing stars.
Question 1
You ask whether you can 'conclude that the two predictors have an effect on the response?' You can, but you can also do better. For the model with the interaction term, you can report the effect the two predictors actually have on the dependent variable (their marginal effects) in a way that is indifferent to whether the interaction is significant, or even present in the model.
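For instance, you can report the average marginal effect (AME) of a predictor, averaging b1 + b3·Z_i over the sample; that quantity is well defined whether or not the interaction term is in the model. A hedged numpy sketch with simulated data (names and coefficients are made up for illustration):

```python
import numpy as np

# Simulated data: Y = 2*X + 1*Z + 0.5*X*Z + noise (invented coefficients)
rng = np.random.default_rng(2)
n = 800
X = rng.normal(size=n)
Z = rng.normal(size=n)
Y = 2 * X + 1 * Z + 0.5 * X * Z + rng.normal(size=n)

# Model WITH the interaction: average the marginal effect over the sample
D = np.column_stack([np.ones(n), X, Z, X * Z])
b = np.linalg.lstsq(D, Y, rcond=None)[0]
ame_with = np.mean(b[1] + b[3] * Z)  # AME of X: mean of (b1 + b3*Z_i)

# Model WITHOUT the interaction: the coefficient IS the marginal effect
D0 = np.column_stack([np.ones(n), X, Z])
b0 = np.linalg.lstsq(D0, Y, rcond=None)[0]
ame_without = b0[1]

print(ame_with, ame_without)  # comparable quantities from either model
```

Either way you get a single, reportable answer to "what effect does X have on the response", so the interpretability of your results does not hinge on the interaction's stars.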
The Bottom Line
If you remove the interaction you are re-specifying the model. This may be a reasonable thing to do for many reasons, some theoretical and some statistical, but making it easier to interpret the coefficients is not one of them.
Best Answer
Generally, when somebody says that data must satisfy some condition C (e.g., they must produce a significant interaction term in an ANOVA) before you can use a follow-up procedure (e.g., tests of simple main effects), what they mean is that any guarantees that the follow-up procedure will be accurate or useful require C. So, if C does not hold, you have lost your reason to believe anything the follow-up procedure tells you. If doing the follow-up test gets you results you wanted anyway, that's no reason to believe them. Thinking that your analysis is correct because it got the results you wanted is just wishful thinking.
The caveat to the above is that the mechanistic procedures suggested by certain quantitatively weak textbooks in quantitatively weak fields (as a psychologist, I ought to know) where you do significance tests and then make further analytic decisions on that basis (e.g., a lack of significant departure from normality means that ANOVA is appropriate) have, to my knowledge, no good basis in mathematics or in science. They are cargo-cult statistics.