Big-O calculation for a for loop with a nested while loop

algorithms, asymptotics

Let's say we have an algorithm like the one below:

for i=0 to n
    // some operations with runtime O(1)
    while (condition with runtime O(1))
        // some operations with runtime O(1)

For each iteration of the for loop, the number of iterations of the while loop is unknown. However, in total, i.e. by the time the for loop terminates at $i=n$, the while loop has executed at most $n$ iterations.
I have been told that in this situation, where the total number of iterations of the inner loop doesn't exceed the number of iterations of the outer loop, the algorithm has worst-case runtime $O(n)$. I can't really understand why this is true.
Also, is there a mathematical way to analyze algorithms like this, whose loops don't behave the same way on each iteration?
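
For concreteness, here is one hypothetical instance of the pattern, sketched in Python purely for illustration (the function name, the array argument and the step counter are my own inventions, not part of any specific algorithm). The inner while loop advances a second index j that never resets, so its body runs at most n times in total over the whole for loop:

def scan(a):
    n = len(a)
    j = 0
    inner_steps = 0                       # counts inner-loop iterations, for illustration only
    for i in range(n):
        # some operations with runtime O(1)
        while j < n and a[j] <= a[i]:     # condition check with runtime O(1)
            # some operations with runtime O(1)
            j += 1
            inner_steps += 1
    return inner_steps                    # at most n, because j only ever moves forward

Whatever the input, inner_steps is at most n, even though some individual outer iterations may trigger many inner iterations and others none.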

The question might be a bit vague; that is probably because I don't understand this topic very well. I'll add more details when asked.

Best Answer

What you've been told is just blatantly false. Verifying the condition can take $O(1)$ time even if the condition rarely or never fails, so a cheap condition check by itself tells you nothing about how many times the loop body runs.

Take this as an example:

for i=0 to n
    // some operations with runtime O(1)
    while (0 != 1)
        // some operations with runtime O(1)

This will literally never finish, because $0 \ne 1$, even though verifying that $0 \stackrel{?}{=} 1$ takes $O(1)$ time.

This is another example:

A = 2 ^ n
for i=0 to n
    // some operations with runtime O(1)
    while (i <= A)
        // some operations with runtime O(1)
        i = i + 1    // the O(1) work advances i, so the loop ends after about 2^n steps

The inner while loop runs on the order of $2^n$ times, so the total runtime is $\Omega(2^n)$ and certainly not $O(n)$. Again, this is despite the fact that verifying $i \le 2^n$ takes $O(1)$ time (since $2^n$ has been computed beforehand).
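
To see the exponential growth concretely, here is a small runnable sketch (my own instrumentation, assuming, as in the pseudocode above, that the O(1) body of the inner loop advances i). It counts the inner-loop steps, and the count grows like $2^n$ rather than like $n$:

def exponential_example(n):
    A = 2 ** n                          # computed once, up front
    steps = 0
    i = 0
    while i <= n:                       # plays the role of "for i = 0 to n", since i is modified inside
        # some operations with runtime O(1)
        while i <= A:                   # the comparison itself is O(1)
            # some operations with runtime O(1)
            i += 1
            steps += 1
        i += 1
    return steps

for n in range(1, 8):
    print(n, exponential_example(n))    # roughly 2^n steps for each n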
