What is this SharePoint Output Cache and why should I use it?
In ASP.NET there is an output cache that manages how page content is served. It allows IIS to cache static elements, such as images and pages, so that subsequent requests do not need to go looking for these items, similar to how your browser cache keeps you from downloading the same images over and over again. The big advantage with SharePoint is that when output caching is enabled it caches fully rendered versions of pages.
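To make the underlying ASP.NET mechanism concrete, here is a minimal sketch (not SharePoint-specific; the page class name is a made-up example) of switching on output caching for a single page from code-behind, the programmatic equivalent of the page-level OutputCache directive:

```csharp
using System;
using System.Web;
using System.Web.UI;

// Illustration of the plain ASP.NET mechanism SharePoint builds on; this is
// not a SharePoint-specific API. It caches the rendered response on the
// server for 60 seconds, roughly the programmatic equivalent of
// <%@ OutputCache Duration="60" VaryByParam="None" %>.
public class ProductsPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Cache.SetCacheability(HttpCacheability.Server);
        Response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(60));
        Response.Cache.SetValidUntilExpires(true);

        // Don't vary the cached copy by query string parameters.
        Response.Cache.VaryByParams.IgnoreParams = true;
    }
}
```

SharePoint's output cache applies the same idea to fully rendered publishing pages, but it is configured through cache profiles rather than per-page code.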
When SharePoint loads a page it is a really big process. It needs to get the Master Page from the file system or the database, the Page Layout from the file system or, again, the database, and then all of the CSS, images and JavaScript. From there it can start rendering the page, but there is more. For every security-trimmed or audience-trimmed control it needs to make a call back to the database to determine whether it should render it or not. BTW: you read that correctly, it doesn't make one giant call to figure out what to do with every security-trimmed object, it checks them one at a time. This can take a bit of time when you think about all the buttons in the ribbon that may or may not appear, on top of the web parts the page requires.
As you can see, that is a lot of calls just to render one page. With output caching enabled, this fully rendered page is cached and every subsequent request does not have to go through the whole process above. It simply makes one call and gets one page back. As you can imagine, this makes a huge difference to throughput and performance.
To demonstrate just how much of an improvement this makes, I load tested a customized SharePoint 2013 site. It isn't a very heavy site, but it does have a lot of JavaScript for responsive design on top of all the other SharePoint scripts that are required. I ran the same set of tests, with the same number of users, against the same site, and you can see the results in the table below:
Counter                     | Before | After
Average Page Load Time (s)  | 23.4   | 1.4
%Processor Time App Server  | 24.8   | 16.8
%Processor Time WFE Server  | 81.1   | 41.5
As you can see it made quite an improvement in Page Load
Time and took a significant amount of stress off both servers.
Now you are probably wondering why this wouldn't be enabled by default, since it provides such significant performance gains. Well, there is a bit of a catch. Because it is caching a version of the page, more memory is used on the server to store that page. On a publishing site this is a bigger consideration, as it will cache two versions of the page (published and draft). Also, as you may have guessed, this can lead to some inconsistencies. The cached page is removed when the page is updated, but there is a bit of a delay, and each WFE has its own timer for when it refreshes the page. In a farm with multiple load-balanced WFE servers it is possible that the first request goes to a WFE that has updated the page in its cache and a second request goes to a WFE that has not. The refresh interval is 60 seconds by default, so this is a small window, but it is still possible.
To address some of these issues you have the ability to create caching rules. You can target rules at a Site Collection, a Site, and Page Layouts. You can also create caching rules based on a user's access rights, so that readers see cached versions while contributors do not. These rules can also be extended programmatically through the VaryByCustom handler, as sketched below.
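As a rough illustration of that extension point, the sketch below overrides GetVaryByCustomString, which ASP.NET calls with the cache profile's "Vary by Custom Parameter" value; the returned string becomes part of the output cache key. The class name, the "ByAudience" parameter and the cookie are my own hypothetical examples, and in SharePoint this override typically lives on a class that the web application's global.asax inherits from.

```csharp
using System;
using System.Web;
using Microsoft.SharePoint.ApplicationRuntime;

// Hypothetical sketch of a VaryByCustom handler. SharePoint's global.asax
// would need to inherit from this class instead of SPHttpApplication
// directly, and a cache profile would set "Vary by Custom Parameter" to
// "ByAudience". Requests that return different strings here get their own
// cached copies of the page.
public class VaryByCustomHttpApplication : SPHttpApplication
{
    public override string GetVaryByCustomString(HttpContext context, string custom)
    {
        if (string.Equals(custom, "ByAudience", StringComparison.OrdinalIgnoreCase))
        {
            // Example only: vary the cached page by a cookie that identifies
            // the visitor's audience, so each audience gets its own copy.
            HttpCookie audienceCookie = context.Request.Cookies["AudienceId"];
            return audienceCookie != null ? audienceCookie.Value : "no-audience";
        }

        // Any other parameter falls back to the default behaviour.
        return base.GetVaryByCustomString(context, custom);
    }
}
```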
Overall, in my opinion, Output Caching should always be leveraged for low-write, high-read sites. Also, considering that you can have rules set for different page layouts, you could set the caching time higher for a low-write page, like the home page of an intranet, and lower on high-write pages within the site. With careful planning this feature can really help scale out the farm to handle more users while saving server resources.
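If you would rather script the configuration than click through Site Collection Administration, the publishing object model exposes the output cache settings. The following is only a rough sketch assuming the Microsoft.SharePoint.Publishing SiteCacheSettingsWriter API; the URL and profile IDs are placeholders, so verify these members against your SharePoint version before relying on it.

```csharp
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

// Hedged sketch: enable the output cache on a site collection from code
// rather than through Site Collection Administration. The URL and profile
// IDs are placeholders; check SiteCacheSettingsWriter's members against
// your SharePoint version.
class EnableOutputCache
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://intranet.contoso.com"))
        {
            SiteCacheSettingsWriter cacheSettings = new SiteCacheSettingsWriter(site);

            // Turn the output cache on for the whole site collection.
            cacheSettings.EnableCache = true;

            // Point anonymous and authenticated requests at cache profiles;
            // the IDs refer to items in the site collection's cache profiles list.
            cacheSettings.SetAnonymousPageCacheProfileId(site, 1);
            cacheSettings.SetAuthenticatedPageCacheProfileId(site, 2);

            cacheSettings.Update();
        }
    }
}
```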