When trying to code this in R, I'm getting very confused about what to do. Apologies if my terminology is incorrect, but I would be grateful for any advice.
The Problem:
I have been given two normal distributions for the mean and standard deviation of rat weights. So:
- The mean rat weight is itself normally distributed, with a mean of 12.68 and a standard deviation of 1.81 (mean = 12.68, SD = 1.81)
- The standard deviation of rat weight is itself normally distributed, with a mean of 11.19 and a standard deviation of 3.2 (mean = 11.19, SD = 3.2)
Summary
Mean Rat Weight: dnorm(Mean = 12.68, SD = 1.81)
Standard Deviation of Rat Weight: dnorm(Mean = 11.19, SD = 3.2)
The Question:
In R, how do I code this to run a Monte Carlo simulation with 50,000 samples? Is the following example correct?
MC_Runs = 50000
Rat_Weight = rnorm(MC_Runs,
                   mean = rnorm(MC_Runs, mean = 12.68, sd = 1.81),
                   sd = rnorm(MC_Runs, mean = 11.19, sd = 3.2))
Best Answer
Your solution is correct, assuming the two normal random variables are independent. According to the R documentation of rnorm, you can input a vector of means and standard deviations for the mean and sd arguments, respectively. To verify, consider this toy example:
Some output:
Clearly the first variate for each simulation is $N(0,1)$ distributed, the second is $N(10,1)$ distributed, and the third is $N(100,1)$ distributed.
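The same vectorized call drives the rat-weight simulation in the question, so a quick sanity check (a hypothetical sketch, reusing the Rat_Weight vector defined above) is to confirm that the simulated weights average out near 12.68:

# The simulated sd is occasionally negative, for which rnorm returns NaN,
# so those few draws are dropped with na.rm = TRUE
mean(Rat_Weight, na.rm = TRUE)   # should be close to 12.68
sd(Rat_Weight, na.rm = TRUE)     # spread reflects both the sd draws and the uncertainty in the mean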