Standard univariate logistic regression of $y$ on $x$ finds the coefficients $\alpha$, $\beta$ that best fit your training data $\{(x_i, y_i), i \in [1, N]\}$ in the following equation:
(model 1): $y_i = (1 + \exp(-(\alpha + \beta x_i)))^{-1}$
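If you want to fit this on a computer rather than a graphic calculator, here is a minimal sketch using `scipy.optimize.curve_fit` to least-squares-fit $\alpha$ and $\beta$ (the data values below are made up purely for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, alpha, beta):
    # model 1: predicted y as a function of x
    return 1.0 / (1.0 + np.exp(-(alpha + beta * x)))

# hypothetical training data, with y already scaled to (0, 1)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.10, 0.18, 0.35, 0.55, 0.70, 0.82])

# least-squares fit of alpha and beta
(alpha_hat, beta_hat), _ = curve_fit(logistic, x, y, p0=[0.0, 1.0])
y_hat = logistic(x, alpha_hat, beta_hat)
```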
Note that the fit will be bad if the $y$ in your data are not in $(0,1)$, so you'll have to transform your data if you want to use logistic regression. One option might be to turn $y$ into a proportion (the number of occurrences of the "phenomenon" divided by the population of the corresponding area, perhaps?).
Also, the fact that "one of the inputs for which I need a predicted output is far larger than the inputs which I used to make a regression" is a problem, because you will be extrapolating the results of the model to unknown regions of the data. A value of $x_i$ much higher than the ones in the training sample will probably give you a $\hat y_i$ very close to 1.
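To make that concrete, here is a toy illustration with assumed (not estimated) coefficients; a prediction at an $x$ far above the training range is essentially 1:

```python
import numpy as np

# illustrative fitted values (assumed for this example, not estimated from data)
alpha_hat, beta_hat = -2.0, 0.8

x_new = 50.0   # far larger than anything in the training sample (max was around 6)
y_new = 1.0 / (1.0 + np.exp(-(alpha_hat + beta_hat * x_new)))
print(y_new)   # essentially 1: the logistic curve has saturated
```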
Directly assessing prediction error
Once the model is fitted and you have your estimated parameters $\hat \alpha$ and $\hat \beta$, you get a predicted output value $\hat y_i$ for each observation $x_i$: $\hat y_i = (1 + \exp(-(\hat \alpha + \hat \beta x_i)))^{-1}$. You can easily assess goodness of fit on your graphic calculator using the observed $y_i$ and their corresponding predicted values $\hat y_i$:
either by plotting one against the other (if the fit was perfect this would give you a straight line, the identity line, because then $y=\hat y$)
or by computing an error measure, for instance the root mean squared error: $rmse = \sqrt{\frac{1}{N} \sum_{i=1}^N (y_i - \hat y_i)^2}$. This tells you the average distance between the observed outcomes $y_i$ and the model-predicted outcomes $\hat y_i$ (the lower the $rmse$, the better the fit). It is not a standardized score like $R^2$, but it is easy to compute and interpret. Both checks are sketched in code just below.
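Here is a minimal Python version of both checks (the observed and predicted values are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

# observed outcomes and model predictions (placeholder values)
y = np.array([0.10, 0.18, 0.35, 0.55, 0.70, 0.82])
y_hat = np.array([0.12, 0.20, 0.33, 0.52, 0.71, 0.84])

# check 1: observed vs. predicted, with the identity line for reference
plt.scatter(y, y_hat)
plt.plot([0, 1], [0, 1], linestyle="--")
plt.xlabel("observed y")
plt.ylabel("predicted y_hat")
plt.show()

# check 2: root mean squared error
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(rmse)
```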
Now to assess the predictive power of the model, it is best to compare $y_i$ and $\hat y_i$ on a validation dataset, i.e. data that were not used in the fit (e.g. by withholding a portion of the data during training; see cross-validation for more info).
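A minimal holdout sketch, using simulated proportion data and a 25% validation split (everything here is illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, alpha, beta):
    return 1.0 / (1.0 + np.exp(-(alpha + beta * x)))

# hypothetical data, y already expressed as proportions in (0, 1)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = np.clip(logistic(x, -3.0, 0.9) + rng.normal(0, 0.03, size=x.size), 0.01, 0.99)

# withhold 10 of the 40 observations as a validation set
idx = rng.permutation(x.size)
test, train = idx[:10], idx[10:]

# fit on the training part only, then evaluate on the withheld part
(a_hat, b_hat), _ = curve_fit(logistic, x[train], y[train], p0=[0.0, 1.0])
y_hat_test = logistic(x[test], a_hat, b_hat)
rmse_val = np.sqrt(np.mean((y[test] - y_hat_test) ** 2))
```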
Pseudo-R²
The usual $R^2$ of linear regression does not apply to logistic regression, for which several alternative measures exist. In all variants, $R^2$ is a real value between 0 and 1, and the closer to 1 the better the model.
One of them uses the likelihood ratio, and is defined as follows:
$R^2_L = 1 - \frac{L_1}{L_0}$, where $L_1$ and $L_0$ are the log-likelihoods of (respectively) model 1 (see above) and the following model 0, which is a logistic regression on a constant only (and does not depend on $x$):
(model 0): $y_i = (1 + \exp(-\alpha))^{-1}$
For any logistic regression model with $y \in \{0,1\}$ the log-likelihood is computed from the observed $y$ and the predicted $\hat y$, using the following formula (but I'm not sure it applies for continuous $y \in [0,1]$):
$L = \sum_{i=1}^N \left( y_i \ln(\hat y_i) + (1 - y_i) \ln(1 - \hat y_i) \right)$
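As a sketch, assuming binary $y \in \{0,1\}$ and some made-up predicted values, $L_1$, $L_0$ and $R^2_L$ could be computed like this (for model 0 the constant prediction is just the mean of the observed $y$):

```python
import numpy as np

def log_likelihood(y, y_hat):
    # log-likelihood for binary y, as in the formula above
    return np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# purely illustrative binary outcomes and model-1 predictions
y01 = np.array([0, 0, 0, 1, 1, 1, 1])
y_hat_1 = np.array([0.1, 0.2, 0.4, 0.6, 0.7, 0.8, 0.9])   # from model 1
y_hat_0 = np.full_like(y_hat_1, y01.mean())                # model 0: constant prediction

L1 = log_likelihood(y01, y_hat_1)
L0 = log_likelihood(y01, y_hat_0)
R2_L = 1 - L1 / L0
```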
Another pseudo-R² is based on the linear correlation of $y$ and $\hat y$, which is easily computed on any graphic calculator with stat functions:
$R^2_{cor} = \left( \widehat{\mathrm{cor}}(y, \hat y) \right)^2$
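On a computer this is a one-liner (placeholder values again):

```python
import numpy as np

# squared linear correlation between observed and predicted values
y = np.array([0.10, 0.18, 0.35, 0.55, 0.70, 0.82])
y_hat = np.array([0.12, 0.20, 0.33, 0.52, 0.71, 0.84])
R2_cor = np.corrcoef(y, y_hat)[0, 1] ** 2
```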
First, over-fitting may not always be a real concern. No variable selection (or any other way of using the response to decide how to specify the predictors), few estimated parameters, many observations, only weakly correlated predictors, & a low error variance might lead someone to suppose that validating the model-fitting procedure isn't worth the candle. Fair enough; though you might ask why, if they're so sure about that, they didn't specify more parameters to allow for non-linear relationships between predictors & response, or for interactions.
Second, it may be that parameter estimation rather than prediction is the aim of the analysis. If you're using regression to estimate the Young's modulus of a material, then the job's done once you have the point estimate & confidence interval.
Third, with ordinary least-squares regressions (& no variable selection) you can calculate estimates of predictive performance analytically: the adjusted coefficient of determination & predicted residual sum of squares statistic (see Does adjusted R-square seek to estimate fixed score or random score population r-squared? & Why not using cross validation for estimating the error of a linear model?).
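For reference, the standard formulas for those two quantities, with $n$ observations, $p$ predictors, OLS residuals $e_i$, and leverages $h_{ii}$ from the hat matrix, are:

$R^2_{\mathrm{adj}} = 1 - (1 - R^2)\,\frac{n-1}{n-p-1}, \qquad \mathrm{PRESS} = \sum_{i=1}^n \left( \frac{e_i}{1 - h_{ii}} \right)^2$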
Best Answer
I'd agree with @Octern that one rarely sees people using train/test splits (or even things like cross-validation) for linear models. Overfitting is (almost) certainly not an issue with a very simple model like this one.
If you wanted to get a sense for your model's "quality", you may want to report confidence intervals (or their Bayesian equivalents) around your regression coefficients. There are several ways to do this. If you know/can assume that your errors are normally distributed, there's a simple formula (and most popular data analysis packages will give you these values). Another popular alternative is to compute them through resampling (e.g., bootstrapping or jackknifing), which makes fewer assumptions about the distribution of errors. In either case, I'd use the complete data set for the computation.
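Here is a minimal bootstrap sketch for a simple linear regression (the data, the number of resamples, and the 95% percentile interval are all just illustrative choices):

```python
import numpy as np

# hypothetical data for a simple linear regression y = a + b*x
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 50)

boot_slopes = []
for _ in range(2000):
    idx = rng.integers(0, x.size, size=x.size)         # resample rows with replacement
    slope, intercept = np.polyfit(x[idx], y[idx], 1)   # refit on the bootstrap sample
    boot_slopes.append(slope)

ci_low, ci_high = np.percentile(boot_slopes, [2.5, 97.5])  # 95% percentile CI for the slope
```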