This is almost exactly the same problem as one of the ones you cite. The only interesting difference is that instead of having to determine the camera’s facing from the endpoints of a line segment (robot arm section), you already know it. In addition, the field of view has a limited range, but that’s not a significant complication, nor is the addition of a rear-facing camera.
To review, in the cited question the target point is in the field of view if $$(Q-P_4)\cdot(P_4-P_3)\ge\|Q-P_4\|\,\|P_4-P_3\|\cos\theta,\tag{*}$$ where $Q$ is the target point, $P_4$ is the location of the camera, $P_3$ the other end of the robot arm segment, and $\theta$ is the half-angle of the field of view. Since we don’t have a robot arm holding the camera, let’s just call the camera’s location $P$ and let’s call the green robot’s heading $\phi$. $P_4-P_3$ gives us a vector in the direction of the camera’s facing, so assuming that the camera is facing directly forward, we can use the unit vector in that direction: $(\cos\phi,\sin\phi)$. Using a unit vector also eliminates the factor of $\|P_4-P_3\|$, since that came from normalizing the camera-direction vector. For this camera, $\theta=\pi/4$, so $\cos\theta=1/\sqrt2$. Putting this all together and moving the $\sqrt2$ to the left side, we get $$\sqrt2\,(Q-P)\cdot(\cos\phi,\sin\phi)\ge\|Q-P\|.\tag{1}$$ The range check is, of course, $\|Q-P\|\le100$. To check the rear view, just reverse the unit facing vector to $(-\cos\phi,-\sin\phi)$.
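Test (1) together with the range check can be sketched as follows. This is just an illustration, not your code; the function names and the flat-tuple interface are my own invention:

```python
import math

def in_field_of_view(qx, qy, px, py, phi):
    """Test (1): sqrt(2) * (Q - P) . (cos phi, sin phi) >= ||Q - P||,
    plus the range check ||Q - P|| <= 100.  Returns True when target Q
    is inside the forward camera's field of view."""
    dx, dy = qx - px, qy - py              # Q - P
    dist = math.hypot(dx, dy)              # ||Q - P||
    if dist > 100.0:                       # range check
        return False
    forward = dx * math.cos(phi) + dy * math.sin(phi)   # (Q - P) . facing
    return math.sqrt(2.0) * forward >= dist

def in_rear_view(qx, qy, px, py, phi):
    # Rear camera: reverse the unit facing vector, i.e. use phi + pi.
    return in_field_of_view(qx, qy, px, py, phi + math.pi)
```

With the camera at the origin heading along the positive $x$-axis ($\phi=0$), a target at $(50,0)$ is visible, one at $(-50,0)$ is only in the rear view, and one at $(200,0)$ fails the range check.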
Since you have two conditions that both involve testing against $\|Q-P\|$, it’s more efficient to test the one that requires less work first. I suggest testing the square of the distance first, i.e., $\|Q-P\|^2=(Q-P)\cdot(Q-P)\le100^2$, to avoid computing a square root when the target is too far away. I can’t tell from your code snippet whether or not that’s practical, though. If the target isn’t too far away, then you can go ahead and take the square root and compute the rest of test (1).
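The distance-squared early-out described above can be sketched like this (a hypothetical helper, not your code; it checks both cameras):

```python
import math

MAX_RANGE_SQ = 100.0 * 100.0   # compare squared distances: no sqrt needed

def target_visible(qx, qy, px, py, phi):
    """Cheap test first: ||Q - P||^2 = (Q - P).(Q - P) <= 100^2.
    Only when that passes do we pay for the square root and finish
    test (1), for the front camera and then the reversed rear camera."""
    dx, dy = qx - px, qy - py
    dist_sq = dx * dx + dy * dy            # (Q - P) . (Q - P)
    if dist_sq > MAX_RANGE_SQ:             # out of range: done, no sqrt taken
        return False
    dist = math.sqrt(dist_sq)
    forward = dx * math.cos(phi) + dy * math.sin(phi)
    return (math.sqrt(2.0) * forward >= dist or     # front camera
            math.sqrt(2.0) * -forward >= dist)      # rear camera
```

Note that the expensive trigonometry and the square root are only reached for targets that are already known to be in range.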
Looking at your code snippet, it looks like you’re trying to do something along these lines, but there’s an error: the gray robot’s heading is the wrong thing to use for `inAngle`. The target’s heading is irrelevant to deciding whether or not it’s visible. In the original formula, $(P_4-P_3)/\|P_4-P_3\|$ corresponds to your `greenHeadingVector` instead. Observe, too, that at least for this computation there’s no need to normalize `vectorFromGreen2Grey`, which saves you a division operation. Of course, if you need the normalized value for other things, it’s almost certainly more efficient to normalize once, as you do.
Method 1: The equations of the lines are $$\begin{cases}y_{green}=-\frac{9}{3}x+\frac{27}{3}\\ y_{orange}=-\frac{7}{3}x+\frac{22}{3}\end{cases} \Rightarrow (x_0,y_0)=(5/2,3/2).$$ Method 2: Use similarity of triangles:
$$\triangle BEO \sim \triangle BFC \Rightarrow \frac{BE}{BF}=\frac{EO}{FC} \Rightarrow \frac{BE}7=\frac{EO}3 \\ \triangle AEO \sim \triangle AGD \Rightarrow \frac{AE}{AG}=\frac{EO}{GD} \Rightarrow \frac{AE}9=\frac{EO}3 \\ \begin{cases}7AE=9BE \\ AE=BE+1\end{cases} \Rightarrow BE=\frac72 \Rightarrow y_0=5-\frac72=\frac32.$$
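Both methods can be checked with exact rational arithmetic; here is a quick sketch (the variable names are my own):

```python
from fractions import Fraction as F

# Method 1: intersect y = -(9/3)x + 27/3 (green) with y = -(7/3)x + 22/3 (orange).
a1, b1 = F(-9, 3), F(27, 3)    # green line:  y = a1*x + b1
a2, b2 = F(-7, 3), F(22, 3)    # orange line: y = a2*x + b2
x0 = (b2 - b1) / (a1 - a2)     # solve a1*x + b1 = a2*x + b2
y0 = a1 * x0 + b1              # gives (x0, y0) = (5/2, 3/2)

# Method 2: 7*AE = 9*BE with AE = BE + 1 gives BE = 7/2, so y0 = 5 - 7/2 = 3/2.
BE = F(7, 2)
```

Both routes land on the same point $(5/2,\,3/2)$.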