Answering two of your three questions, because I'm not comfortable enough in R to diagnose coding errors - though your code looks right to me.
For your estimates: since you didn't specify a link function, R is using the default log link. For the estimates to make sense, you need to exponentiate them (exp(estimate) in R) - this gives 1.15, 1.51 and 2.94 respectively. These should be close to the HRs from your Cox model. These numbers are "incidence rate ratios" - the name is pretty suggestive of what they are. They can be interpreted very similarly to hazard ratios, and indeed should equal the hazard ratios under certain assumptions.
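As a minimal sketch of this (with simulated person-time data and made-up variable names, not your actual dataset):

```r
# Sketch: fit a Poisson rate model with a log(person-time) offset,
# then exponentiate the coefficients to get incidence rate ratios.
# All names and numbers here are invented for illustration.
set.seed(1)
d <- data.frame(group = rep(c(0, 1), each = 500),
                persontime = runif(1000, 1, 5))
# true rate 0.2 in group 0 and 0.4 in group 1, i.e. a true IRR of 2
d$events <- rpois(1000, lambda = d$persontime * ifelse(d$group == 1, 0.4, 0.2))

fit <- glm(events ~ group + offset(log(persontime)),
           family = poisson, data = d)
exp(coef(fit))["group"]   # the IRR; should land close to the true value of 2
exp(confint(fit))         # confidence intervals on the ratio scale
```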
As for the benefits (and drawbacks) of Poisson regression: Poisson survival analysis is a fully parametric, maximum-likelihood method of estimating differences in survival between groups, which has some nice properties for some uses. It estimates the baseline hazard - something a Cox model expressly does not do - which is useful if you intend to use the baseline hazard in further analysis (I often do). The incidence density (# of cases / time at risk) is also vastly more intuitive than the hazard.
Now the drawbacks, of which there are many. The Poisson model is vulnerable to overdispersion - I'd rerun your model using quasipoisson or a negative binomial model to check whether your results are sensitive to overdispersion. More important, IMO, are the assumptions the Poisson model makes about the underlying survival function. The Cox proportional hazards model, as the name suggests, assumes the hazard functions of the two groups are proportional over time. The Poisson model assumes not only that the hazards are proportional, but that they are constant over time. This is often a pretty strong assumption, and should be checked. Adding a term or several for time to the model can help relax this assumption somewhat.
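A sketch of the overdispersion check (simulated data and made-up variable names again):

```r
# Sketch: refit the same rate model under quasi-Poisson and inspect the
# estimated dispersion parameter; values well above 1 signal
# overdispersion. A negative binomial refit (MASS::glm.nb) is the
# other common check.
set.seed(2)
d <- data.frame(group = rep(c(0, 1), each = 500),
                persontime = runif(1000, 1, 5))
d$events <- rpois(1000, lambda = d$persontime * ifelse(d$group == 1, 0.4, 0.2))

fit_qp <- glm(events ~ group + offset(log(persontime)),
              family = quasipoisson, data = d)
summary(fit_qp)$dispersion   # near 1 here, since the data really are Poisson

# Negative binomial alternative:
# fit_nb <- MASS::glm.nb(events ~ group + offset(log(persontime)), data = d)
```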
As for the similarity of your results: I'm not sure what kind of model your competing-risks package assumes, but I'm guessing it estimates a parametric model of the survival function - the Cox model, being "semi-parametric", doesn't estimate this baseline hazard. If the estimated parametric survival function is close to an exponential model (the survival distribution implied by a Poisson model), you may get very similar results, provided your data aren't sensitive to the assumption of no competing risks. Your results don't seem terribly vulnerable to that assumption.
What you do seem to be sensitive to is the violation of a constant hazard assumption, hence the dramatic difference in your estimates between Poisson and Cox models. If the hazard function was perfectly constant, the two models should actually produce the same estimate. I would try one of two things:
Add one or more time terms to the Poisson model - something like time and time*time - and see if your Poisson estimate moves closer to the Cox result.
You should be able to visualize the hazard function. The most common way to do this is to plot log(-log(survival function)) versus log(survival time) for each of your variables, stratified by group. For the Cox model to be valid, the curves should be parallel; for the Poisson model to be valid, they should be straight and parallel. The survival package in R can produce this plot (it's the "cloglog" transform of a survival curve).
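A sketch of that plot using the survival package's built-in lung data (not your dataset):

```r
# Plot log(-log(S(t))) against log(t) by group. Parallel curves support
# proportional hazards (Cox); straight and parallel lines support the
# constant-hazard (Poisson/exponential) assumption.
library(survival)
sf <- survfit(Surv(time, status) ~ sex, data = lung)
plot(sf, fun = "cloglog",
     xlab = "log(time)", ylab = "log(-log(S(t)))")
```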
I'd find it very odd to see this in a simulated data set introduced essentially by accident, but I'd try looking at it anyway.
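To make suggestion 1 above concrete: one standard way to add time to the Poisson model is to split follow-up into intervals and fit a piecewise-constant hazard. A sketch with survival::survSplit on the built-in lung data (the cutpoints are arbitrary):

```r
# Piecewise exponential (piecewise-constant hazard) Poisson model:
# split follow-up at arbitrary cutpoints, then let an interval term
# shift the baseline hazard between intervals.
library(survival)
d <- lung
d$status <- d$status - 1                     # recode 1/2 -> 0/1
s <- survSplit(Surv(time, status) ~ sex, data = d,
               cut = c(180, 360, 540), episode = "interval")
s$persontime <- s$time - s$tstart            # time at risk in each interval
fit_pw <- glm(status ~ factor(sex) + factor(interval) +
                offset(log(persontime)),
              family = poisson, data = s)
exp(coef(fit_pw))["factor(sex)2"]  # compare with the HR from coxph
```

With enough interval terms, the piecewise Poisson estimate typically lands very close to the Cox estimate.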
It makes sense to use attrition rates so you can compare them with those of other companies, which usually have a different number of employees. The industry benchmark likewise only makes sense as a rate.
If the monthly attrition rates within the year are approximately i.i.d. normal, it makes sense to use a one-sample t-test to compare them to the industry benchmark.
And, finally, if you want to compare the attrition to the previous year, and you presume the attrition to be dependent on the month (e.g. there is more attrition in the winter than in the summer), it makes sense to use the paired t-test.
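A sketch of both tests with made-up monthly attrition rates (all numbers invented):

```r
# Invented monthly attrition rates (% of headcount per month)
rates_this_year <- c(2.1, 1.8, 1.9, 2.4, 2.0, 1.7,
                     1.6, 1.9, 2.2, 2.5, 2.8, 3.0)
rates_last_year <- c(1.9, 1.7, 1.8, 2.1, 1.9, 1.6,
                     1.5, 1.8, 2.0, 2.3, 2.6, 2.9)
benchmark <- 1.8

# One-sample t-test against the industry benchmark
t.test(rates_this_year, mu = benchmark)

# Paired t-test against the previous year, matching month to month
# (which absorbs the seasonal pattern shared by both years)
t.test(rates_this_year, rates_last_year, paired = TRUE)
```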
Let's start there. You can't say "my data are definitely Poisson" - it's more a question of whether the Poisson is a reasonable model.
There are two main approaches.
The first is to investigate whether the requirements that will yield a Poisson distribution for the data are met, or are likely to be met, or are ones you are prepared to assume are met, or are sufficiently closely met in some sense.
The second is to see whether this kind of data appears to be close enough to Poisson that inferences obtained by assuming it for your data would be 'close enough' for your purposes. (How close it needs to be depends on your needs, preferences and so on.)
In case 1, the obvious thing to consider is whether you can treat it like a Poisson process. You need:
(1) constant intensity of events within a single variable
(2) independence
(3) "rare" events (so that, for example, the chance of more than one event occurring in a very small interval of time is correspondingly small) -- they don't actually have to be rare overall, just rare in small intervals of time.
The second approach would seek to identify in what ways data like yours may be non-Poisson - what specific forms of non-Poissonness you might have - and to consider the degree to which that might affect your inference. There are some suitable alternatives to consider (such as the negative binomial, which may be more suitable if the intensity varies from observation to observation).
This would be one potential source of 'non-constant rate' (heterogeneity of people) that might lead you to consider whether the variable is 'overdispersed' (tends to show more variation relative to the mean than you'd expect from a Poisson).
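A quick numerical illustration of that kind of overdispersion, using simulated counts:

```r
# For a Poisson variable the variance-to-mean ratio is 1; mixing rates
# across people (here, gamma-distributed rates, which yields a negative
# binomial marginal) inflates it. Simulated illustration.
set.seed(3)
x_pois  <- rpois(2000, lambda = 4)                              # constant rate
x_mixed <- rpois(2000, lambda = rgamma(2000, shape = 2, rate = 0.5))
# the mixed counts have mean 4 but variance 4 + 8 = 12

var(x_pois)  / mean(x_pois)    # close to 1
var(x_mixed) / mean(x_mixed)   # well above 1 (about 3 here)
```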
Then it may be better to start with that goal. A Poisson may be appropriate, but that goal can be approached even if it isn't.
It's a pretty good place to start.
What's a typical expected count?
There are a number of approaches to comparing two Poisson counts.
Perhaps the most common is to condition on the total count and test whether the counts are in proportion to the ratio of the specific gene to all other genes. The conditioning converts the problem into a test of a binomial proportion.
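A sketch with invented counts - say the gene has 30 reads in condition A and 18 in condition B, with equal total exposure in the two conditions:

```r
# Conditioning on the total: given x_a + x_b events, x_a is binomial
# with p determined by the relative exposures (equal here, so p = 0.5).
x_a <- 30
x_b <- 18
binom.test(x_a, x_a + x_b, p = 0.5)

# With unequal exposures t_a and t_b, test against p = t_a / (t_a + t_b).
```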
There are a number of other ways to approach it even with a simple Poisson comparison.
Some other issues:
If there are any other variables to account for, you might consider a GLM.
You might also consider whether patients should be treated as random effects.
If you're not prepared to assume it's Poisson, you might consider a variety of other possibilities; perhaps a permutation test.
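A minimal permutation-test sketch, making no Poisson assumption (invented counts for two patient groups):

```r
# Permutation test on the difference in mean counts between two groups:
# repeatedly reshuffle group labels and see how often the reshuffled
# difference is as extreme as the observed one. All numbers invented.
set.seed(4)
g1 <- c(5, 7, 3, 8, 6)
g2 <- c(2, 4, 1, 3, 2)
obs <- mean(g1) - mean(g2)
pooled <- c(g1, g2)

perm <- replicate(10000, {
  idx <- sample(length(pooled), length(g1))
  mean(pooled[idx]) - mean(pooled[-idx])
})
p_val <- mean(abs(perm) >= abs(obs))   # two-sided permutation p-value
```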