I am given real numbers $x_1,x_2,y_1,y_2$ such that $x_1 > x_2$ and $y_1 < y_2$. The claim is that there exists some $\lambda \in (0,1)$ such that $\lambda (x_1 - x_2) + (1-\lambda)(y_1-y_2) = 0$. In order to prove this, one needs (at least in my opinion) the intermediate value theorem. But the intermediate value theorem does not hold in constructive mathematics (that is, without the law of excluded middle; constructive mathematics works in intuitionistic logic). For a proof of this, cf. this paper.
Is there any constructive way to show the above equation?
Constructive intermediate value theorem
analysis, constructive-mathematics, logic
Related Solutions
In general, there are credible early attempts to study aspects of mathematics in the context of paraconsistent calculi (there are many, just look at the ToC of Priest's textbook).
That said, the answer to the precise question you actually asked, the one regarding type theory, is negative.
In type theory (as in most foundational systems), the statement $0=1$ is naturally explosive, in the sense that you can prove anything using $0=1$ without ever invoking $\bot$-elimination. Why? As in my answer to your other question about type theory, given types $A,B$, you can use $\mathbb{N}$-induction to construct a function $f: \mathbb{N} \rightarrow \mathcal{U}$ such that $f(0)$ reduces to $A$ and $f(Sn)$ reduces to $B$. Applying the congruence rule to $0=1$ yields $A = B$, so all types are equal, and in particular $A \rightarrow B$ holds. In fact, Martin-Löf type theory is maximally explosive: if you can inhabit a type $T$ in the empty context without using $\bot$-elimination, an induction on the structure of the proof shows that you could have inhabited the type $T^{\bot \leftarrow 0=1}$, the type obtained from $T$ by replacing every occurrence of $\bot$ with $0=1$. There is nothing specific about $0=1$ here: Martin-Löf type theory without explosion is really just a world where $\bot$ denotes an arbitrary proposition. In particular, if $A$ does not mention $\bot$, then MLTT without explosion proves $A \rightarrow \bot$ precisely if $A$ is explosive.
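The congruence argument above can be sketched in Lean 4 (the names `pick` and `explode` are my own): from a hypothetical proof of $0=1$ we inhabit $A \rightarrow B$ for arbitrary propositions, using only $\mathbb{N}$-recursion, congruence, and transport, never $\bot$-elimination.

```lean
-- pick A B sends 0 to A and any successor to B, by Nat-recursion.
def pick (A B : Prop) : Nat → Prop
  | 0     => A
  | _ + 1 => B

-- From h : 0 = 1, congruence gives pick A B 0 = pick A B 1, which
-- reduces to A = B; cast transports an inhabitant across it.
theorem explode (A B : Prop) (h : (0 : Nat) = 1) : A → B :=
  fun a => cast (congrArg (pick A B) h) a
```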
This means that, as long as you don't intend to add new, classically inadmissible principles, there is not much point in studying these "minimal" type theories based on minimal logic. The situation is more interesting in e.g. the variant of Heyting arithmetic that uses minimal logic instead of intuitionistic logic as its logical substrate. This theory is almost, but not quite, the same as full intuitionistic Heyting arithmetic, since $0=1$ is explosive.
For example, if you want to prove using $0=1$ that every number is even, you can multiply the inconsistent equality by $2$ to get $0=2$, apply transitivity to get $1=2$, then replace $1$ with $2$ in $\forall x. 1x = x$ to conclude $\forall x. 2x=x$, and then $\forall x. \exists y. 2y=x$. You can do this systematically (by induction on the structure of the formula $A$) to prove that $0 = 1 \rightarrow A$.
This time, the converse does not hold, and Heyting arithmetic formulated in minimal logic is not maximally explosive: there are positive statements that it proves false, but that are not explosive. That's because one of the axioms of Heyting arithmetic says something non-trivial about $\bot$. There are two ways to conclude $\bot$ in Heyting arithmetic:
1. From an induction axiom. This is not a problem: when you replace all occurrences of $\bot$ with $0=1$ in an induction axiom, the result is another induction axiom. (This is the only case in MLTT, which is why it justifies explosion.)
2. By applying the non-logical axiom $\forall x. 0 = Sx \rightarrow \bot$ to some $St = 0$. The problem is that you cannot replace this axiom with $\forall x. 0 = Sx \rightarrow 0 = 1$. It takes some work to show that this really cannot be done: it turns out that $0 = 2$ does not provably imply $0=1$ if you formulate Heyting arithmetic inside minimal logic. For details you can consult Heerkens' MSc thesis.
This is, however, the only obstacle: replace $\forall x. 0 = Sx \rightarrow \bot$ with $\forall x. 0 = Sx \rightarrow 0 = 1$, and you get a minimal arithmetic that coincides exactly with intuitionistic Heyting arithmetic. One can argue that this modified axiom is philosophically justified purely on the basis of what we want arithmetic to be.
First, give us a definition of what it means for a logic to be $n$-valued; then, we'll be able to tell whether intuitionistic logic is 2-valued or not.
Since there is no standard, widely accepted notion or definition of "$n$-valued logic" at all, discussions of whether a given logic is $n$-valued or not tend to be very confused at best. The discussion on Wikipedia is no exception in this regard.
One could define many different notions of $n$-valuedness. Below, I consider four possible such definitions.
In algebraic logic, one has a notion of $n$-valued semantics. This is not the definition we have in mind when we say that classical logic is 2-valued, since, while classical propositional logic has a complete 2-valued semantics, it also has a complete 4-valued semantics, where the four possible truth values are the sets $\emptyset, \{1\}, \{2\}, \{1,2\}$, conjunction is interpreted as intersection, and the negation of $x$ is interpreted as the relative complement $\{1,2\} \setminus x$. Saying that classical logic is therefore a 4-valued logic would be weird and confusing, to say the least. On the other hand, intuitionistic logic cannot have a complete $n$-valued semantics for any natural number $n$.
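The four-valued semantics described above is easy to check by brute force. This is a small Python sketch (my own illustration, not from the source): truth values are subsets of $\{1,2\}$, conjunction is intersection, negation is relative complement, and classical tautologies such as excluded middle take the top value $\{1,2\}$ under every valuation.

```python
# Four truth values: the subsets of {1, 2}.
TOP = frozenset({1, 2})
VALUES = [frozenset(), frozenset({1}), frozenset({2}), TOP]

def neg(x):
    # Negation is relative complement in {1, 2}.
    return TOP - x

def conj(x, y):
    # Conjunction is intersection.
    return x & y

def disj(x, y):
    # Disjunction is union (dual of intersection).
    return x | y

# Excluded middle and non-contradiction are designated (= TOP)
# under every valuation, so this semantics validates them.
assert all(disj(x, neg(x)) == TOP for x in VALUES)
assert all(neg(conj(x, neg(x))) == TOP for x in VALUES)
```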
We could also try to pin down the notion of $n$-valuedness internally, without committing to a particular semantics. E.g. one could say that a subsystem of classical logic has no more than $n$ truth values if $$\bigvee_{0 \leq i \leq n} \bigvee_{0 \leq j < i} (A_i \leftrightarrow A_j)$$ is a tautology, i.e. out of every $n+1$ formulas $A_0,\dots,A_n$, at least two are logically equivalent. Then, as expected, classical logic has no more than 2 truth values since $(A_1 \leftrightarrow A_0) \vee (A_2 \leftrightarrow A_0) \vee (A_2 \leftrightarrow A_1)$ is a classical tautology, and is not $1$-valued since $A_1 \leftrightarrow A_0$ is not a classical tautology. However, using the disjunction property, one can again show that intuitionistic logic is not $n$-valued in this sense for any $n \in \mathbb{N}$.
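For the two-valued case the internal criterion can be verified exhaustively. A quick Python check (my own, for illustration) confirms that $(A_1 \leftrightarrow A_0) \vee (A_2 \leftrightarrow A_0) \vee (A_2 \leftrightarrow A_1)$ holds under all eight classical valuations, while $A_1 \leftrightarrow A_0$ alone does not.

```python
from itertools import product

def iff(a, b):
    # Classical biconditional on Booleans.
    return a == b

# Among any three Boolean values, two must coincide (pigeonhole),
# so the n = 2 disjunction is a classical tautology.
assert all(
    iff(a1, a0) or iff(a2, a0) or iff(a2, a1)
    for a0, a1, a2 in product([False, True], repeat=3)
)

# The n = 1 formula A1 <-> A0 is falsifiable, so classical logic
# is not 1-valued in this sense.
assert not all(iff(a1, a0) for a0, a1 in product([False, True], repeat=2))
```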
The Wikipedia page links a Stanford Encyclopedia of Philosophy article, which implicitly uses another ad-hoc definition of truth value. The basic story is as follows: we can say that a logic "admits $n$ truth values" if we can find $n$ formulas $f_1(P),\dots,f_n(P)$ in one free propositional variable $P$, so that we can prove $f_i(P) \rightarrow \neg f_j(P)$ for all $i \neq j$, and for each $f_i$ we can find some formula $X_i$ for which $f_i(X_i)$ has a proof in our logic. In this sense classical logic admits at least 2 truth values: we can take $f_1(P) = P$, $X_1 = P \vee \neg P$, $f_2(P) = \neg P$ and $X_2 = P \wedge \neg P$. Using a result from Glivenko's 1928 paper, one can show that it's not possible to extend this set $f_1,f_2$ with any third $f_3$ and $X_3$, even in intuitionistic logic: no "third truth value" is ever taken. Using later results of Rieger and Nishimura, one can prove a much stronger statement: for any 3 formulas $A_1,A_2,A_3$ in one free propositional variable $P$, at least one of $A_i \rightarrow A_j$ with $i \neq j$ is always an intuitionistic tautology. In this sense, intuitionistic logic admits only two truth values, as does classical logic.
There are, of course, systems that people traditionally call systems of many-valued logic, such as Łukasiewicz's 3-valued logic, or Belnap's four-valued logic. But these are just traditional names, and do not fall under any general definition of $n$-valued logic. In particular, intuitionistic logic is not usually included at all in the study of "many-valued logic".
Again, none of these definitions are standard or widely used across logic. This should answer your main question: there is a subtlety, namely that since "$n$-valuedness of a logic" doesn't actually have a standard definition, whether intuitionistic logic is two-valued or not will depend on the definition you choose to adopt.
Best Answer
Just solve the equation for $\lambda$. You get $\lambda =\frac {y_2-y_1}{x_1-x_2+y_2-y_1}$. Since $x_1 > x_2$ and $y_1 < y_2$, both $x_1-x_2$ and $y_2-y_1$ are positive, so the denominator is positive and strictly larger than the numerator; hence $\lambda \in (0,1)$, and no intermediate value theorem is needed.
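As a quick numeric sanity check of the closed form (the helper name `solve_lambda` is my own), assuming $x_1 > x_2$ and $y_1 < y_2$ as in the question:

```python
def solve_lambda(x1, x2, y1, y2):
    # lambda = (y2 - y1) / ((x1 - x2) + (y2 - y1)),
    # obtained by solving lambda*(x1-x2) + (1-lambda)*(y1-y2) = 0.
    return (y2 - y1) / ((x1 - x2) + (y2 - y1))

x1, x2, y1, y2 = 5.0, 2.0, 1.0, 4.0
lam = solve_lambda(x1, x2, y1, y2)

assert 0 < lam < 1  # lambda lies strictly between 0 and 1
assert abs(lam * (x1 - x2) + (1 - lam) * (y1 - y2)) < 1e-12
print(lam)  # 0.5 for these inputs, since x1-x2 = y2-y1 = 3
```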