Try to personalize statistics: show the students why understanding its concepts is useful to them, even if (acknowledge this openly) they will forget the math. For instance, show them how to interpret breast cancer test results. To quote from http://yudkowsky.net/rational/bayes:
Here's a story problem about a situation that doctors often encounter:

1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?

What do you think the answer is? If you haven't encountered this kind of problem before, please take a moment to come up with your own answer before continuing.

Next, suppose I told you that most doctors get the same wrong answer on this problem - usually, only around 15% of doctors get it right. ("Really? 15%? Is that a real number, or an urban legend based on an Internet poll?" It's a real number. See Casscells, Schoenberger, and Grayboys 1978; Eddy 1982; Gigerenzer and Hoffrage 1995; and many other studies. It's a surprising result which is easy to replicate, so it's been extensively replicated.)
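(For the record, plugging the quoted numbers into Bayes' theorem gives

$$P(\text{cancer} \mid +) = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99} = \frac{0.008}{0.10304} \approx 7.8\%,$$

far below the intuitive guess most people give.)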
Since your students will be medical doctors, make it clear: if they don't understand statistics, they will give their patients the wrong interpretation of test results. This is not an academic matter.
Also acknowledge that unless they go into research, they will forget the details you teach them. Don't even hope otherwise. Aim for them to understand the fundamental concepts (Type I and Type II errors, correlation versus causation, and so on), so that when faced with a real situation they will remember: "hey, perhaps I shouldn't rush to a conclusion, but talk to someone who understands stats better." If you prevent cognitive errors and teach them to question results provided by others (especially in an industry where large sums of money are at stake), you will have succeeded.
This is my personal opinion, so I'm not sure it properly qualifies as an answer.
Why should we teach hypothesis testing?
One very big reason, in short, is that, in all likelihood, in the time it takes you to read this sentence, hundreds, if not thousands (or millions) of hypothesis tests have been conducted within a 10ft radius of where you sit.
Your cell phone is definitely using a likelihood ratio test to decide whether or not it is within range of a base station. Your laptop's WiFi hardware is doing the same in communicating with your router.
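To make that concrete, here is a minimal sketch of such a detector (purely illustrative: the pilot signal, noise level, and threshold are all made-up values, and real receivers are far more elaborate). For Gaussian noise, the likelihood ratio test for "pilot present" versus "noise only" reduces to correlating against the known pilot and thresholding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical known pilot signal a base station would transmit.
pilot = np.sin(2 * np.pi * 0.1 * np.arange(100))
sigma = 1.0  # assumed (known) noise standard deviation

def pilot_present(received, threshold=0.5):
    """LRT for H0: noise only vs. H1: pilot + noise.

    With Gaussian noise the log-likelihood ratio is monotone in the
    correlation with the known pilot, so the test reduces to comparing
    that correlation to a threshold.
    """
    stat = received @ pilot / (pilot @ pilot)
    return stat > threshold

noise_only = rng.normal(0.0, sigma, size=pilot.size)
with_pilot = 0.8 * pilot + rng.normal(0.0, sigma, size=pilot.size)

print(pilot_present(noise_only))  # typically False
print(pilot_present(with_pilot))  # typically True
```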
The microwave you used to auto-reheat that two-day-old piece of pizza used a hypothesis test to decide when your pizza was hot enough.
Your car's traction control system kicked in when you gave it too much gas on an icy road, or the tire-pressure warning system let you know that your rear passenger-side tire was abnormally low, and your headlights came on automatically at around 5:19pm as dusk was setting in.
Your iPad is rendering this page in landscape format based on (noisy) accelerometer readings.
Your credit card company shut off your card when "you" purchased a flat-screen TV at a Best Buy in Texas and a $2000 diamond ring at Zales in a Washington-state mall within a couple hours of buying lunch, gas, and a movie near your home in the Pittsburgh suburbs.
The hundreds of thousands of bits that were sent to render this webpage in your browser each individually underwent a hypothesis test to determine whether they were most likely a 0 or a 1 (in addition to some amazing error-correction).
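As a toy version of that per-bit decision (a sketch under the standard textbook assumptions of antipodal signaling in additive Gaussian noise, not a model of any particular modem): with equal priors, the likelihood ratio test reduces to a sign threshold at zero.

```python
import numpy as np

rng = np.random.default_rng(1)

bits = rng.integers(0, 2, size=100_000)                   # transmitted bits
volts = np.where(bits == 1, 1.0, -1.0)                    # antipodal signaling: 1 -> +1, 0 -> -1
received = volts + rng.normal(0.0, 0.5, size=bits.size)   # additive Gaussian noise

# Equal priors + symmetric Gaussian noise: the likelihood ratio test
# for H0: bit = 0 vs. H1: bit = 1 is simply "is the voltage above 0?"
decoded = (received > 0).astype(int)

print(f"bit error rate: {np.mean(decoded != bits):.3%}")  # about 2.3% here
```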
Look just to your right at those "Related" topics. All of these things "happened" due to hypothesis tests. For many of them, an interval estimate of some parameter could have been calculated instead. But, especially for automated industrial processes, the use and understanding of hypothesis testing is crucial.
On a more theoretical statistical level, the important concept of statistical power arises rather naturally from a decision-theoretic / hypothesis-testing framework. Plus, I believe "even" a pure mathematician can appreciate the beauty and simplicity of the Neyman–Pearson lemma and its proof.
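For reference, its simple-vs-simple statement fits in two lines: among all tests of $H_0: X \sim f_0$ against $H_1: X \sim f_1$ with level at most $\alpha$, the likelihood ratio test

$$\text{reject } H_0 \iff \Lambda(x) = \frac{f_1(x)}{f_0(x)} > k, \quad \text{with } k \text{ chosen so that } P_{H_0}\big(\Lambda(X) > k\big) = \alpha,$$

is the most powerful (ignoring the randomization needed when $\Lambda$ has atoms).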
This is not to say that hypothesis testing is taught, or understood, well. By and large, it's not. And while I would agree that, particularly in the medical sciences, reporting interval estimates along with effect sizes, and distinguishing practical from statistical significance, is almost universally preferable to any formal hypothesis test, this does not mean that hypothesis testing and its related concepts are not important and interesting in their own right.
Best Answer
One thing I have done with students that went over well was to take several packages (the small ones) of M&M's candy and have the students count how many of each color there are in a pack (depending on the number of students, they may each get their own or work in groups of 2 or 3). The students can usually figure out an appropriate way to dispose of the candies afterwards. If you want more data, or comparisons, or just the population proportions, I have recorded some values here (if you do this, consider submitting your data to add).
Then you can use the data that they have just collected to show some basic concepts, like variation (they did not all get the same counts/proportions). You can show some basic graphics, like a histogram of the proportion of blue candies, or boxplots comparing the proportions of a color across the different types of M&M's.
I then usually show them the true proportion for one of the colors, and show how their proportions, while not exactly the truth, tend to cluster around the true value, and how close they tend to be to it (a general rule of thumb says that for a sample size of 50, the 95% margin of error will be about 14-15%). Then I show them the proportion of a different color from one of their samples and ask what values of the "truth" would be believable (using the 14-15% rule of thumb again), without telling them what the truth is. This gives a general idea of the concept of a confidence interval.
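That 14-15% figure is just the worst-case normal-approximation margin of error, maximized at $p = 1/2$:

$$\text{ME}_{95\%} = 1.96\sqrt{\frac{p(1-p)}{n}} \;\le\; \frac{1.96 \times 0.5}{\sqrt{n}} \;\approx\; \frac{1}{\sqrt{n}} = \frac{1}{\sqrt{50}} \approx 0.14.$$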
Another option is living graphs: have each of the students know some numeric fact about themselves (height in inches/cm works well). Clear a space on the floor and put down some masking tape with values written on it (like the axis of a plot). Have the students line up next to their values. You can then climb onto a desk or ladder and take a picture of the living histogram (I have seen this done outside with a tall ladder for a really good effect). Then have them count off from each end and put down a strip of tape where they meet in the middle (the median), then do the same for each half and put down tape for the quartiles. Wrap tape around the middle half, have them lower it to the floor, add the whiskers, and have them step away to see the boxplot remaining on the floor. If there are enough students, you could have them do this separately for boys and girls and compare the boxplots.
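If you want to check the tape marks against software afterwards, the five-number summary behind the boxplot is a one-liner (heights here are simulated, purely for illustration; note that software may interpolate quartiles slightly differently than the count-off method):

```python
import numpy as np

rng = np.random.default_rng(2)
heights = rng.normal(170, 10, size=30)  # fake class heights in cm

# The tape marks from the activity: min, quartiles, median, max.
lo, q1, med, q3, hi = np.percentile(heights, [0, 25, 50, 75, 100])
print(f"min={lo:.1f}  Q1={q1:.1f}  median={med:.1f}  Q3={q3:.1f}  max={hi:.1f}")
```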
An activity that shows the need to take good samples and avoid biased sampling: get some regular drinking straws and cut them to lengths of 1 inch, 2 inches, and 4 inches. Put 4 of each length in a paper bag. Give a bag to each group of students and have them take a sample of size 4 by reaching in without looking and pulling out 4 straws "at random." Have each group put their straws back and take a few more samples. Record the means of their samples and create a histogram, then show the real mean on the graph: their sample means tend to be larger on average than the truth, because the longer straws are easier to grab (biased sampling).
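A quick simulation shows why the bias appears, if you assume (plausibly, though it is an assumption) that the chance of grabbing a straw is roughly proportional to its length:

```python
import numpy as np

rng = np.random.default_rng(3)

straws = np.array([1, 1, 1, 1, 2, 2, 2, 2, 4, 4, 4, 4], dtype=float)  # inches
probs = straws / straws.sum()  # assumed grab probability proportional to length

sample_means = [
    rng.choice(straws, size=4, replace=False, p=probs).mean()
    for _ in range(10_000)
]

print(f"true mean of the bag:    {straws.mean():.2f}")           # 2.33 inches
print(f"average of sample means: {np.mean(sample_means):.2f}")   # noticeably larger
```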
You could also introduce some principles of study design by having the students make paper helicopters (you can Google for templates) and vary some options (wing length, body width, paper clip or no paper clip, etc.) to see if they can find the design that takes the longest to fall a set distance. You can discuss replication, randomization of testing order (what if the wind changes during the testing period?), and other concepts.
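For the randomization-of-order piece, handing each group a shuffled run sheet makes the point concrete (the factor levels below are just the ones suggested above):

```python
import itertools
import random

random.seed(4)

# Full factorial over the suggested factors, replicated twice.
designs = list(itertools.product(
    ["short wings", "long wings"],
    ["narrow body", "wide body"],
    ["no clip", "paper clip"],
))
runs = designs * 2      # two replicates of each design
random.shuffle(runs)    # randomize order so drift (e.g., wind) isn't confounded with design

for i, run in enumerate(runs, 1):
    print(f"run {i:2d}: " + ", ".join(run))
```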