Because the point of a cache is to save time, not waste it. And like most naïve delay-based responses to timing attacks, it doesn’t solve the tracking problem either: if there is any detectable difference under any circumstances (consider a cross-site tracking server that serves content with a controllable delay, or variation in network and disk load), the mitigation is defeated.
Sites don’t share that many resources byte-for-byte anyway. The current solution is fine.
Random delays don’t stop timing attacks either. The attacker just gathers more samples until their estimate of the underlying, un-randomized timing is good enough to draw conclusions — the noise averages out, the signal doesn’t.
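A toy simulation makes the point concrete. The numbers here are invented for illustration: assume a cache hit takes ~5 ms, a miss ~50 ms, and the “mitigation” adds a uniform random delay of 0–100 ms to every response. Because the jitter has the same mean in both cases, averaging enough samples recovers the original 45 ms gap anyway:

```python
import random
import statistics

def measure(is_hit: bool) -> float:
    """One timing sample: true cost plus the random-delay 'mitigation'."""
    base = 5.0 if is_hit else 50.0       # hypothetical hit/miss costs (ms)
    jitter = random.uniform(0.0, 100.0)  # the added random delay
    return base + jitter

def estimate(is_hit: bool, samples: int) -> float:
    """Average many samples; the jitter's mean cancels out of the comparison."""
    return statistics.fmean(measure(is_hit) for _ in range(samples))

random.seed(0)
# With only a few samples the jitter dominates...
few_gap = estimate(False, 10) - estimate(True, 10)
# ...but with many samples the averages converge and the ~45 ms
# hit/miss gap reappears, jitter or no jitter.
many_gap = estimate(False, 10_000) - estimate(True, 10_000)
print(f"10 samples: {few_gap:.1f} ms, 10000 samples: {many_gap:.1f} ms")
```

The attacker’s cost is only more requests; the information leak itself is untouched.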