One way to do this would be to consider the Hausdorff distance. Given two subsets $A$, $B$ of a metric space $X$ (for example, two intervals in $\mathbb{R}$), the Hausdorff distance between them is the larger of
$$\sup_{a\in A}\, \inf_{b\in B}\, d(a,b)$$
$$\sup_{b\in B}\, \inf_{a\in A}\, d(a,b)$$
That is, first we consider a point $a \in A$, find the least distance from it to a point $b\in B$, and maximise this over $A$. Then we do the same thing with the roles of $A$ and $B$ reversed. Finally we take the larger of these two values.
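For finite sets the infima and suprema are attained, so this definition translates directly into code. A minimal Python sketch (the function name `hausdorff` is my own):

```python
def hausdorff(A, B, d=lambda x, y: abs(x - y)):
    """Hausdorff distance between two non-empty finite sets A and B."""
    # for each a in A, the distance to the nearest b in B, then the worst case
    forward = max(min(d(a, b) for b in B) for a in A)
    # the same with the roles of A and B reversed
    backward = max(min(d(a, b) for a in A) for b in B)
    return max(forward, backward)

print(hausdorff([0, 1, 2], [4, 5]))  # → 4, attained at the point 0
```

Note that the two directed quantities can differ (here they are $4$ and $3$), which is why the definition takes the larger of them.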
Under the Hausdorff metric, the set of non-empty compact subsets of $X$ becomes a metric space in its own right (so, for example, all distances are non-negative, the distance between two such sets is zero only when they coincide, and distances satisfy the triangle inequality).
How does this work in your case? Let's write $A=[a_1,a_2]$ and $B=[b_1,b_2]$ and take a special case of $a_1<a_2<b_1<b_2$, so that the intervals are non-overlapping and every $a\in A$ is less than every $b\in B$. Then the first expression has the value $b_1-a_1$ and the second has the value $b_2-a_2$, so the distance between the intervals is
$$d(A,B)=\max(b_1-a_1, b_2-a_2)$$
How about if $B$ is contained inside $A$, so we have $a_1<b_1<b_2<a_2$? Then the first quantity is $\max(b_1-a_1, a_2-b_2)$ and the second quantity is $0$, so the distance between the intervals is
$$d(A,B)=\max(b_1-a_1, a_2-b_2)$$
You can proceed through all possible cases in this way. In fact, my conjecture is that for any two such intervals, we always have
$$d(A,B) = \max(|a_1-b_1|, |a_2-b_2|)$$
although my enthusiasm doesn't quite stretch to working through all the cases. Shouldn't be too hard to do it yourself though!
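Rather than working through the cases by hand, the conjecture can be spot-checked numerically. The distance from a point $x$ to the interval $[b_1,b_2]$ is $\max(b_1-x,\,x-b_2,\,0)$, which is convex in $x$, so its supremum over $[a_1,a_2]$ is attained at an endpoint. A Python sketch (function names are my own):

```python
import random

def dist_point_interval(x, lo, hi):
    # distance from the point x to the closed interval [lo, hi]
    return max(lo - x, x - hi, 0)

def hausdorff_intervals(a1, a2, b1, b2):
    # dist(x, B) is convex in x, so its sup over [a1, a2] is at an endpoint
    forward = max(dist_point_interval(a1, b1, b2),
                  dist_point_interval(a2, b1, b2))
    backward = max(dist_point_interval(b1, a1, a2),
                   dist_point_interval(b2, a1, a2))
    return max(forward, backward)

# spot-check the conjectured closed form on random intervals
for _ in range(10_000):
    a1, a2 = sorted(random.uniform(-10, 10) for _ in range(2))
    b1, b2 = sorted(random.uniform(-10, 10) for _ in range(2))
    assert hausdorff_intervals(a1, a2, b1, b2) == max(abs(a1 - b1), abs(a2 - b2))
print("conjecture holds on all samples")
```

This doesn't prove the conjecture, of course, but it covers all the orderings of the four endpoints that a case analysis would.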
More information: http://en.wikipedia.org/wiki/Hausdorff_distance
Hierarchical clustering could be made to do this for you. http://en.wikipedia.org/wiki/Hierarchical_clustering
The idea is that when points are close enough together, you merge them into a cluster. Then you measure distances between clusters, and if two clusters are close enough, you merge them too. You just need to be careful about which metric you use to measure the distance between clusters. To guarantee a pre-defined maximum cluster size, you'll want the distance between two clusters to be the maximum distance between any two points, one taken from each cluster (this is known as complete linkage).
As usually defined, the algorithm keeps merging clusters until only one remains, and then you back up to however many clusters you want. You can modify this so that you only merge as long as the distance is smaller than your pre-defined maximum: once all the distances between clusters are too big, you stop merging.
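A sketch of this modified agglomerative procedure in Python (the name and the brute-force pair search are my own; a real implementation would use something like `scipy.cluster.hierarchy`). With complete linkage, the linkage value of a merge equals the diameter of the merged cluster, so stopping at the threshold gives the guarantee described above:

```python
def cluster_with_max_diameter(points, max_diam, d=lambda x, y: abs(x - y)):
    """Agglomerative clustering with complete linkage, stopping once every
    possible merge would create a cluster of diameter above max_diam."""
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        # find the pair of clusters whose merge has the smallest diameter,
        # i.e. the smallest maximum distance between points across the pair
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                linkage = max(d(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or linkage < best[0]:
                    best = (linkage, i, j)
        linkage, i, j = best
        if linkage > max_diam:
            break  # all remaining merges would exceed the allowed diameter
        clusters[i].extend(clusters.pop(j))
    return clusters

print(cluster_with_max_diameter([1, 2, 3, 10, 11, 12], max_diam=3))
# → [[1, 2, 3], [10, 11, 12]]
```

Complete linkage is monotone (merge distances never decrease), so once the smallest available linkage exceeds the threshold, every later merge would too, and it is safe to stop.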
Best Answer
These two clustering rules do not necessarily produce the same results.
Suppose you have the following four points, and you want $k=2$ clusters: $A=(-33,0)$, $B=(33,0)$, $C=(0,56)$, $D=(0,124)$.
What would the two clusters be?
With method $1$ you would cluster $\{A,B\}$ with a circle of diameter $66$ centred at $(0,0)$, and $\{C,D\}$ with a circle of diameter $68$ centred at $(0,90)$, making the maximum diameter $68$. You are not going to do better than this: consider, for example, circles centred at the point $(0,18)$, near the centre of the smallest circle containing $A,B,C$. That centre is more than $35$ away from all four points, so any circle around it containing at least one of them would have a diameter over $70$.
With method $2$ you would cluster $\{A,B,C\}$, since these three points are all $65$ or $66$ away from each other, leaving $\{D\}$ for the other cluster, since $D$ is at least $68$ away from any of the others.
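For concreteness, the numbers above can be checked with one set of planar coordinates that reproduces all the quoted distances (a reconstruction on my part; any congruent configuration behaves identically):

```python
from itertools import combinations
from math import dist

# coordinates chosen to reproduce the quoted distances (a reconstruction):
# A, B, C are pairwise 65 or 66 apart; D is at least 68 from everything
points = {"A": (-33, 0), "B": (33, 0), "C": (0, 56), "D": (0, 124)}

for (n1, p1), (n2, p2) in combinations(points.items(), 2):
    print(f"{n1}{n2}: {dist(p1, p2):.1f}")
# → AB: 66.0, AC: 65.0, AD: 128.3, BC: 65.0, BD: 128.3, CD: 68.0
```

The pairwise distances confirm both claims: complete-linkage merging at a $66$ threshold would happily form $\{A,B,C\}$, while the best pair of covering circles is $\{A,B\}$ and $\{C,D\}$.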