BscThesisFormalisation

1 General Roadmap

1.1 The goal

Theorem 1

Within our model, assuming jobs are malleable, have exponentially distributed sizes, and all follow the same speed-up function \(s \colon C_c \to \mathbb {R}\), we have \(\mathbb {E}[T]^{EQUI} \leq \mathbb {E}[T]^P\) for any scheduling policy \(P\); i.e. the mean response time of EQUI is at least as good as that of any other policy \(P\).

Note: A formal proof of this statement will follow later on, so we do not aim for complete rigour here. See: ??

Proof

The following proof is generalised from [?]. Let \(P\) be a mapping policy that processes malleable jobs, which can be switched between numbers of allocated cores without overhead, and that currently has \(i\) active jobs. Recall that the rate of departures from state \(i\) under policy \(P\) is of the form:

\begin{equation*} \mu \sum _{m=1}^i s(P(i,m)). \label{eq:rate_of_departures} \end{equation*}
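As a concrete running example (purely illustrative, not part of the thesis model), take a single core type (\(n = 1\)), \(c = 8\) cores and the concave, non-decreasing speed-up function \(s(x) = \sqrt{x}\) with \(s(0) = 0\). If \(P\) has \(i = 2\) active jobs and allocates \(6\) cores to the first and \(2\) to the second, its rate of departures is

\[ \mu\bigl(s(6) + s(2)\bigr) = \mu\bigl(\sqrt{6} + \sqrt{2}\bigr) \approx 3.86\,\mu , \]

while EQUI in the same state runs at rate \(2\mu\, s(4) = 4\mu\); we will refer back to these numbers below.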

If there are indices \(m\) such that \(P(i,m) = \vec{0}\), i.e. jobs that are allocated no cores at all, then we can simplify this policy to an equivalent policy \(P'\) with the same departure rate:

\[ \mu \sum _{m=1}^i s(P(i,m)) = \mu \sum _{m=1}^j s(P'(i,m)), \]

with \(j\) the number of non-zero allocation vectors of policy \(P\) in state \(i\), and where we write \(\alpha _m := P'(i,m)\), with components \(\alpha _{m,k}\), for the remaining allocations (using that \(s(\vec{0}) = 0\): a job without cores makes no progress).
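For instance, if in the running example a policy with \(i = 3\) active jobs allocates \(6\), \(2\) and \(0\) cores to its three jobs, then \(P'\) simply drops the empty allocation, \(j = 2\), and the departure rate is unchanged:

\[ \mu\bigl(s(6) + s(2) + s(0)\bigr) = \mu\bigl(s(6) + s(2)\bigr). \]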

However, if \(\sum _{m=1}^j \alpha _{m,k} {\lt} c_k\) for some \(1 \leq k \leq n\), with \(c_k\) describing how many cores of type \(k\) we have, then our allocation is clearly suboptimal: since \(s\) is non-decreasing, the leftover cores could be handed to any job without decreasing the departure rate. Therefore we will assume that \(\sum _{m=1}^j\alpha _{m,k} = c_k\) for all \(k \in \{ 1,\dots ,n\} \).
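To see this in the running example, suppose two jobs receive \(3\) cores each, leaving \(2\) of the \(8\) cores idle. Handing the idle cores to the first job can only increase the rate, since \(s\) is non-decreasing:

\[ \mu\bigl(s(3) + s(3)\bigr) = 2\sqrt{3}\,\mu \approx 3.46\,\mu \;\leq \; \mu\bigl(s(5) + s(3)\bigr) \approx 3.97\,\mu . \]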

Now we will upper bound the previous summation using the concavity of \(s\). 1

\begin{align*} \frac{1}{j}\sum _{m=1}^j s (\alpha _m) & = \frac{1}{j} s(\alpha _1) + \frac{1}{j} s(\alpha _2) + \frac{1}{j}\sum _{m=3}^j s (\alpha _m)\\ & = \frac{2}{j}\left(\frac{1}{2} s(\alpha _1) + \frac{1}{2} s(\alpha _2)\right) + \frac{1}{j}\sum _{m=3}^j s (\alpha _m)\\ & \leq \frac{2}{j}s\left(\frac{1}{2} (\alpha _1 + \alpha _2)\right) + \frac{1}{j}\sum _{m=3}^j s (\alpha _m)\\ & \leq \dots \leq \frac{j}{j} s(\frac{1}{j} \sum _{m=1}^j\alpha _m) = s(\frac{c}{j}) \end{align*}

Therefore we conclude that

\[ \sum _{m=1}^j s (\alpha _m) \leq j\cdot s\!\left(\frac{c}{j}\right) \qquad \forall \, 0 {\lt} j \leq i,\ \forall \{ \alpha _m\} _{m=1}^j \text{ with } \sum _{m=1}^j \alpha _m = c. \]
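In the running example with \(j = 2\), \(\alpha _1 = 6\) and \(\alpha _2 = 2\), this bound reads

\[ s(6) + s(2) \approx 3.86 \;\leq \; 2\cdot s\!\left(\tfrac{8}{2}\right) = 2\sqrt{4} = 4, \]

which matches the comparison of the two departure rates computed earlier.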

Thus an upper bound on \(P\)'s rate of departures is

\[ j\cdot s(\frac{c}{j}) \mu . \]
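In the running example this upper bound can be evaluated explicitly,

\[ j\cdot s\!\left(\frac{8}{j}\right)\mu = j\sqrt{\frac{8}{j}}\,\mu = \sqrt{8j}\,\mu , \]

which is increasing in \(j\); the next step asserts this monotonicity in general.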

From ?? it follows that \(j\cdot s(\frac{c}{j}) \mu \) is non-decreasing in \(j\). Thus, to obtain the best possible rate, we may as well take \(j = i\), which gives an upper bound of the form:

\[ i\cdot s(\frac{c}{i}) \mu , \]

which is exactly the rate achieved by the policy EQUI. Since the two Markov chains share the same arrival rates and differ only in their departure rates, with the departure rate of EQUI in every state at least as large as that of \(P\), we can conclude that

\[ \mathbb{E}[N]^{EQUI} \leq \mathbb{E}[N]^P. \]
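One standard way to justify this step (a birth–death comparison; not necessarily the argument of the later formal proof, and assuming Poisson arrivals at rate \(\Lambda \)): both chains have the same birth rate \(\Lambda \) in every state, while the death rate \(\mu _k\) of EQUI in state \(k\) is at least that of \(P\), so the stationary distributions satisfy

\[ \pi _i \;\propto \; \prod _{k=1}^{i} \frac{\Lambda }{\mu _k}, \qquad \frac{\pi _i^{EQUI}}{\pi _i^{P}} \;\propto \; \prod _{k=1}^{i} \frac{\mu _k^{P}}{\mu _k^{EQUI}}, \]

and the latter ratio is non-increasing in \(i\). A non-increasing likelihood ratio implies that \(N^{EQUI}\) is stochastically dominated by \(N^{P}\), and hence the inequality between the means.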

Now recall that Little's law states \(\mathbb{E}[N] = \Lambda \, \mathbb{E}[T]\), with \(\Lambda \) the arrival rate. Since \(\Lambda \) is the same under both policies, we can conclude that

\[ \mathbb{E}[T]^{EQUI} \leq \mathbb{E}[T]^P, \]

as desired.

  1. Formally this should be an inductive argument. However, in this case the idea of the base case, proven here, is exactly the same as that of the induction step.