On Wed, Jun 2, 2010 at 7:08 PM, Richard Weait <richard@weait.com> wrote:
> Benchmarking a server application finds that performance is six times
> better when restricted to a single core.
>
> http://mailinator.blogspot.com/2010/02/how-i-sped-up-my-server-by-factor-of-6.html
For long-running processes, losing the warm CPU cache every time a task is migrated from one core (or socket) to another exacts a high toll, and this looks like exactly such a case. A long-running compile would be another example.
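To make that concrete (not from the blog post, just a sketch assuming a Linux host): you can pin a long-running process to a single core from the shell with "taskset -c 0 ./server", or programmatically with sched_setaffinity(2), so the scheduler never migrates it and its cache stays warm. Core 0 below is an arbitrary choice for illustration.

/* Sketch: pin the calling process to one core so it is never migrated. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);   /* allow only core 0 (arbitrary choice) */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    /* ... long-running server loop or compile-like workload here ... */
    return EXIT_SUCCESS;
}

Launching under taskset gives the same effect without touching the code, which makes it an easy experiment to reproduce.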
For other workloads made up of short tasks, quick in, quick out (e.g. serving a web page), the penalty is nowhere near as high, since web server processes do not stick around for long on a server anyway (they get recycled to avoid leaks and such).
--
Khalid M. Baheyeldin
2bits.com, Inc.
http://2bits.com
Drupal optimization, development, customization and consulting.
Simplicity is prerequisite for reliability. -- Edsger W. Dijkstra
Simplicity is the ultimate sophistication. -- Leonardo da Vinci