[Math] Queueing Theory: How to estimate steady-state queue length for single queue, N servers

dynamical-systems, network-flow, queueing-theory, simulation

I have a real-life situation that can be solved using Queueing Theory.
This should be easy for someone in the field. Any pointers would be appreciated.

Scenario:
There is a single Queue and N Servers.
When a server becomes free, the Task at the front of the queue gets serviced.
The mean service time is T seconds.
The mean inter-Task arrival time is K * T (where K is a fraction < 1).
(Assume Poisson or Gaussian distributions, whichever is easier to analyze.)
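
If I translate this into standard queueing notation, I believe it corresponds to an arrival rate $\lambda = 1/(KT)$ and a per-server service rate $\mu = 1/T$, so that

$$a = \frac{\lambda}{\mu} = \frac{1}{K}, \qquad \rho = \frac{\lambda}{N\mu} = \frac{1}{NK},$$

where $a$ is the offered load and $\rho$ the per-server utilization; the system can only reach steady state if $NK > 1$, i.e. $K > 1/N$.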

Question:
At steady state, what is the expected length of the queue (in terms of N and K)?

Related Question:
What is the expected delay for a Task to be completed?

Here is the real-life situation I am trying to model:
I have an Apache web server with 25 worker processes.
At steady state there are 125 requests in the queue.
I want to have a theoretical basis to help me optimize resources
and understand quantitatively how adding more worker processes
affects the queue length and delay.

I know the single-queue, single-server case with Poisson arrivals ($M/M/1$) is well analyzed.
I don't know the more general solution for N servers.
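
(For reference, the well-known single-server $M/M/1$ formulas in this notation, valid for $\rho = \lambda/\mu < 1$, are

$$L_q = \frac{\rho^2}{1-\rho}, \qquad W = \frac{1}{\mu - \lambda};$$

what I am after is the analogue of these for $N$ servers.)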

thanks in advance,
— David Jones
dxjones@gmail.com

Best Answer

You'll probably find this one very useful. Chapter 5 (the $M/M/c$ queue) corresponds to your model: it assumes there are $c$ servers, service times are exponentially distributed, and so are the interarrival times (with different means, of course). In any case, the key term to look up is the $M/M/c$ (or $M/M/N$) queue.
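
To make that concrete, here is a sketch of the standard $M/M/c$ (Erlang C) steady-state results, assuming the usual notation: arrival rate $\lambda = 1/(KT)$, per-server service rate $\mu = 1/T$, offered load $a = \lambda/\mu = 1/K$, utilization $\rho = a/c$, and stability only when $\rho < 1$. The probability that an arriving Task has to wait is

$$P_W \;=\; \frac{\dfrac{a^c}{c!}\,\dfrac{1}{1-\rho}}{\displaystyle\sum_{k=0}^{c-1}\frac{a^k}{k!} \;+\; \frac{a^c}{c!}\,\frac{1}{1-\rho}},$$

and then

$$L_q = P_W\,\frac{\rho}{1-\rho}, \qquad W_q = \frac{L_q}{\lambda}, \qquad W = W_q + \frac{1}{\mu}.$$

With $c = N$ and $a = 1/K$ this answers both questions directly in terms of $N$ and $K$: $L_q = P_W/(NK-1)$ and $W = T\,(K\,L_q + 1)$.

Here is a small Python sketch that evaluates these formulas numerically; the numbers at the bottom (25 workers, a 1-second mean service time, $K = 0.05$) are hypothetical placeholders, not measurements from your Apache server:

    import math

    def erlang_c(c, a):
        """Erlang C: probability that an arriving task must wait,
        for c servers and offered load a = lambda/mu (requires a < c)."""
        rho = a / c
        if rho >= 1:
            raise ValueError("unstable queue: need offered load a < c")
        waiting_term = (a ** c) / (math.factorial(c) * (1 - rho))
        busy_terms = sum((a ** k) / math.factorial(k) for k in range(c))
        return waiting_term / (busy_terms + waiting_term)

    def mmc_metrics(N, K, T):
        """Steady-state metrics in the question's parameterization:
        N servers, mean service time T, mean inter-arrival time K*T."""
        lam = 1.0 / (K * T)                      # arrival rate
        mu = 1.0 / T                             # per-server service rate
        a = lam / mu                             # offered load, equals 1/K
        p_wait = erlang_c(N, a)
        Lq = p_wait * (a / N) / (1.0 - a / N)    # mean number of tasks waiting
        Wq = Lq / lam                            # mean wait before service (Little's law)
        W = Wq + T                               # mean total time in system
        return Lq, Wq, W

    # Hypothetical inputs: 25 workers, T = 1 s, K = 0.05 (one arrival every
    # 0.05 s); the real K and T would have to be measured on the server.
    Lq, Wq, W = mmc_metrics(N=25, K=0.05, T=1.0)
    print(f"Lq = {Lq:.2f} waiting, Wq = {Wq:.3f} s, W = {W:.3f} s")

Rerunning this with larger $N$ shows quantitatively how adding worker processes shrinks both the queue length and the delay, which is exactly the comparison the question asks for.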