1 General Roadmap
1.1 The goal
Within our model, assuming jobs are malleable, have exponentially distributed sizes, and all follow the same speed-up function \(s \colon C_c \to \mathbb {R}\), we have \(\mathbb {E}[T]^{EQUI} \leq \mathbb {E}[T]^P\) for any scheduling policy \(P\), i.e. the mean response time of EQUI is never worse than that of any other policy \(P\).
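For intuition, consider a purely illustrative sketch (the single core type, the value \(c = 8\) and the speed-up \(s(x) = \sqrt{x}\) are assumptions made for this example only, not part of the model): with \(i\) jobs present, EQUI splits the cores evenly, giving each job \(c/i\) cores, so the total departure rate is
\[
i\, s\!\left(\frac{c}{i}\right)\mu = i\sqrt{\frac{c}{i}}\,\mu = \sqrt{ic}\,\mu,
\]
which for \(c = 8\) and \(i = 2\) equals \(4\mu\). The argument below shows that no policy can sustain a larger departure rate in any state.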
Note: a formal proof of this statement will follow later on, so we do not prove it completely rigorously here. See ??.
The following proof is generalised from [?]. Let \(P\) be a mapping policy that processes malleable jobs, which can be switched between different numbers of allocated cores without overhead, and suppose \(P\) currently has \(i\) active jobs. Recall that the rate of departures from state \(i\) under policy \(P\) is of the form
\[
\mu \sum_{m=1}^{i} s\big(P(i,m)\big),
\]
where \(P(i,m)\) denotes the vector of cores that \(P\) allocates to job \(m\) in state \(i\).
If, furthermore, there are indices \(m\) such that \(P(i,m) = \vec{0}\), i.e. job \(m\) receives no cores, then we can simplify this policy to an equivalent policy \(P'\) with departure rate
\[
\mu \sum_{m=1}^{j} s\big(P'(i,m)\big),
\]
with \(j\) the number of non-zero allocation vectors of policy \(P\) in state \(i\).
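As a purely hypothetical illustration (the two core types, their counts and the particular allocations are assumptions for this example only): suppose there are \(n = 2\) core types with \(c_1 = 4\) and \(c_2 = 2\), and in state \(i = 3\) the policy sets \(P(i,1) = (4,0)\), \(P(i,2) = (0,2)\) and \(P(i,3) = \vec{0}\). Then \(j = 2\) and the departure rate of the equivalent policy \(P'\) is
\[
\mu\Big(s\big((4,0)\big) + s\big((0,2)\big)\Big).
\]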
However, if \(\sum _{m=1}^{j} \alpha _{m,k} < c_k\) for some \(1 \leq k \leq n\), where \(\alpha _{m,k}\) denotes the number of cores of type \(k\) allocated to job \(m\) by \(P'(i,m)\) and \(c_k\) is the total number of cores of type \(k\), then our allocation is clearly suboptimal, since some compute capacity is left idle. Therefore we will assume that \(\sum _{m=1}^{j}\alpha _{m,k} = c_k\) for all \(k \in \{ 1,\dots ,n\} \).
Now we will upper bound the previous summation using the concavity of \(s\). By Jensen's inequality,
\[
\frac{1}{j}\sum_{m=1}^{j} s\big(P'(i,m)\big) \leq s\!\left(\frac{1}{j}\sum_{m=1}^{j} P'(i,m)\right) = s\!\left(\frac{\vec{c}}{j}\right),
\]
where \(\vec{c} = (c_1,\dots,c_n)\) and the equality uses the assumption that all cores are allocated. Therefore we conclude that
\[
\sum_{m=1}^{j} s\big(P'(i,m)\big) \leq j\, s\!\left(\frac{\vec{c}}{j}\right).
\]
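To see this bound in a concrete (assumed) setting: take a single core type with \(c = 8\), the concave speed-up \(s(x) = \sqrt{x}\), \(j = 2\) and the allocations \(2\) and \(6\). Then
\[
s(2) + s(6) \approx 1.41 + 2.45 = 3.86 \leq 2\, s(4) = 4,
\]
in line with the inequality above.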
Thus an upper bound on \(P\)'s rate of departures is
\[
j\, s\!\left(\frac{\vec{c}}{j}\right)\mu.
\]
From ?? it follows that \(j\, s\!\left(\frac{\vec{c}}{j}\right)\mu\) is non-decreasing in \(j\). Thus, to obtain the optimal rate, we might as well use \(i\) to get an upper bound of the form
\[
i\, s\!\left(\frac{\vec{c}}{i}\right)\mu,
\]
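As a quick sanity check in the same assumed single-core-type example with \(s(x) = \sqrt{x}\):
\[
j\, s\!\left(\frac{c}{j}\right)\mu = j\sqrt{\frac{c}{j}}\,\mu = \sqrt{jc}\,\mu,
\]
which is indeed non-decreasing in \(j\), so the bound is largest when all \(i\) active jobs receive a non-zero allocation.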
which is exactly what is achieved by the policy EQUI. Since both Markov chains only differ in their departure rates, we can conclude that
\[
\mathbb{E}[N]^{EQUI} \leq \mathbb{E}[N]^{P}.
\]
Now recall that Little's law states \(\mathbb{E}[N] = \Lambda\,\mathbb{E}[T]\), i.e. \(\mathbb{E}[T] = \frac{\mathbb{E}[N]}{\Lambda}\). Since the arrival rate \(\Lambda\) is the same under both policies, we can conclude that
\[
\mathbb{E}[T]^{EQUI} = \frac{\mathbb{E}[N]^{EQUI}}{\Lambda} \leq \frac{\mathbb{E}[N]^{P}}{\Lambda} = \mathbb{E}[T]^{P},
\]
as desired.