Yes, there is a similar relationship: for circumstances where it makes sense and where both variables are coded by $0$ and $1$ (the analog of standardization), the "slope" in the logistic regression of $Y$ against $X$ equals the slope in the logistic regression of $X$ against $Y$.
Recall that (univariate) logistic regression models a binary response $Y$ in terms of a variable $x$ and a constant, using two parameters $\beta_0$ and $\beta_1$, by stipulating that the chance of $Y$ equaling one of its values (generically termed "success") can be modeled by
$$\mathbb O(Y=\text{success}) = \beta_0 + \beta_1 x$$
where "$\mathbb O$" refers to the log odds, equal to the logarithm of the odds $\Pr(\text{success}) / \Pr(\text{not success})$.
The only circumstance under which it makes sense to switch the roles of $Y$ and $x$, then, is when $x$ also is binary. That compels us to view its outcomes now as draws from a random variable $X$. The values of $Y$ must be encoded as fixed (nonrandom) values $1$ for "success" and $0$ otherwise. We might as well assume, then, that the encoding $1$="success" and $0$="not success" has been used all along for both variables.
Notice that the data in this situation can be considered a two-by-two contingency table in which the counts of all four possible combinations of $x$ and $y$ are displayed. Let the counts for $x=i$ and $y=j$ be written $n_{ij}$, for $i=0,1$ and $j=0,1$.
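In tabular form, with $x$ indexing the rows and $y$ the columns:
$$\begin{array}{c|cc}
 & y=0 & y=1 \\ \hline
x=0 & n_{00} & n_{01} \\
x=1 & n_{10} & n_{11}
\end{array}$$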
The conventional estimator of the parameters is obtained by maximum likelihood: find the values at which the gradient of the log likelihood equals zero. In the first case, viewing $Y$ as the dependent variable, the likelihood equations are
$$\cases{
0 = n_{01} + n_{11} - \frac{n_{00}+n_{01}}{1+\exp(\beta_0)} - \frac{n_{10}+n_{11}}{1+\exp(\beta_0+\beta_1)} \\
0 = n_{11} - \frac{n_{10} + n_{11}}{1+\exp(\beta_0+\beta_1)}
}$$
When all the $n_{ij}\ne 0$, subtracting the second equation from the first isolates $\beta_0$, and the solution is
$$\cases{
\beta_0 = \log(n_{00}) - \log(n_{01}),\\
\beta_1 = \log(n_{01}) + \log(n_{10}) - \log(n_{00}) - \log(n_{11}).}$$
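Collecting the logarithms shows that the slope is the log of a cross ratio of the counts:
$$\beta_1 = \log \frac{n_{01}\,n_{10}}{n_{00}\,n_{11}}.$$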
Switching the roles of the variables merely permutes the subscripts of the $n$'s (although $\beta_0$ and $\beta_1$ now have different meanings, for they refer to the $y$ values instead of the $x$ values). But the symmetry of the solution for $\beta_1$ shows that it remains unchanged. This is the "slope" term, and it is the perfect analog of the regression coefficient in ordinary least squares regression.
Example
Software will confirm this result. Here, for instance, are the results of the two logistic regressions in R using the following two-way table:
      Y=0  Y=1
X=0:   1    3
X=1:   2    4
Regressing $Y$ against $X$ gives $(\hat\beta_0, \hat\beta_1)$ = $(\log(1/3), \log(3/2))$ = $(-1.0986, 0.4055)$ while regressing $X$ against $Y$ gives $(\hat\beta_0, \hat\beta_1)$ = $(\log(1/2), \log(3/2))$ = $(-0.6931, 0.4055)$.
# Counts from the table: rows are X=0,1; columns are Y=0,1
y <- matrix(c(1,2,3,4), nrow=2)
# Y against X.  With a two-column matrix response, glm treats the first
# column (here the Y=0 column) as "successes" and the second as "failures".
(fit <- glm(y ~ as.factor(0:1), family=binomial))
# X against Y: transposing the table swaps the roles of the variables.
(fit.t <- glm(t(y) ~ as.factor(0:1), family=binomial))
The output confirms that both the slope (0.4055) and the null deviance (0.08043) remain the same when $X$ and $Y$ are switched:
Coefficients:
(Intercept) as.factor(0:1)1
-1.0986 0.4055
Degrees of Freedom: 1 Total (i.e. Null); 0 Residual
Null Deviance: 0.08043
Residual Deviance: 2.22e-16 AIC: 7.948
Coefficients:
(Intercept) as.factor(0:1)1
-0.6931 0.4055
Degrees of Freedom: 1 Total (i.e. Null); 0 Residual
Null Deviance: 0.08043
Residual Deviance: 4.441e-16 AIC: 8.072
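As a further check, the closed-form solution can be evaluated directly from the counts; a small sketch (n00 through n11 are just names for the table entries):

n00 <- 1; n01 <- 3; n10 <- 2; n11 <- 4
# The slope is the same for both regressions
log(n01) + log(n10) - log(n00) - log(n11)  # 0.4055
# The intercepts differ
log(n00) - log(n01)  # -1.0986 for Y against X
log(n00) - log(n10)  # -0.6931 for X against Y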
Best Answer
Logistic regression does not have an "error" term as classical linear regression does. (The exception might be thresholded linear regression with a logistic error term, but that is not a commonly accepted probability model leading to a logistic regression model.) This is because logistic models have an inherent mean-variance relationship. The analogue of "adding an error term" to a linear regression model is actually a quasibinomial model, in which the variance is merely proportional to $p(1-p)$ rather than equal to it.
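To illustrate the mean-variance point, here is a minimal sketch (the data are simulated; the coefficient 0.3 and the sample size are arbitrary choices). The quasibinomial family fits the same mean model but estimates a dispersion $\phi$, so the variance is $\phi\,p(1-p)$:

set.seed(1)                      # reproducibility of this illustration only
x <- rnorm(100)
y <- rbinom(100, 1, plogis(0.3 * x))
summary(glm(y ~ x, family = binomial))$dispersion       # fixed at 1 by the binomial family
summary(glm(y ~ x, family = quasibinomial))$dispersion  # estimated from the data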
A related question is how to obtain regression results that are identical across various designs or replications. This can be done with a "trick" in regression modeling software: you can generate non-integral $Y$ outcomes from the predicted risk, which yield the same logistic regression results regardless of the design of $X$. For instance, take
x1 <- seq(-3, 3, 0.1)

and

x2 <- rnorm(61)

as two different designs. As in your case,

y1 <- plogis(0.3*x1)

and

y2 <- plogis(0.3*x2)

both result in the same logistic regression results, with $0.3$ as the log odds ratio and $0.0$ as the log odds at $x=0$. This relates to your question because the parameter estimates are exactly as defined in your probability model, independent of the design of $x$, and without separation (which would produce infinite log odds ratios, $\hat\beta = \pm \infty$).
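A minimal sketch of this trick in R (using the quasibinomial family as one way to accept the non-integral outcomes without warnings; the value 0.3 is the generating coefficient above):

x1 <- seq(-3, 3, 0.1)
y1 <- plogis(0.3 * x1)  # non-integral outcomes lying exactly on the logistic curve
# The fit recovers the generating parameters, here (0, 0.3), up to convergence tolerance
coef(glm(y1 ~ x1, family = quasibinomial))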
Modeling fractional outcomes in a logistic model is an accepted way of analyzing ecological data, where the outcome may indeed be fractional. Not coincidentally, this is also the setting where quasibinomial models are most useful. Also not coincidentally, I believe the dispersion is proportional to the scale parameter of the logistic error term when doing "latent" logistic regression.