Thursday, November 27, 2008

What resolution should your website be designed for?

Varying user monitor resolutions can eclipse your site

Kudos for making a fascinating website and for all the hard work you put into it. But as a matter of fact, you deserve rewards in terms of what you expect your website to yield, not just eulogies from other quarters.

The beauty of your website should percolate down to its intended end users. What is critical on this front is that your website must appear to your users precisely the way you designed it -- intelligently and laboriously -- to appear.

Your desire may top your wish list, but it can be marred by some hard facts. PC usage is still evolving and is characterized by marked disparities in hardware. Note that the majority of PC users run older equipment, and their willingness to opt for the latest accessories is discouragingly low.

A glimpse at the differing resolutions in use

Right at the outset, be aware that monitor size and monitor resolution are two altogether different things, and confusing them will not help you make your web design adaptive. What really matters is the monitor's resolution, which can be either a hurdle or a great help in showcasing your website's prized attributes.

In the early phase of PC popularity, 640 by 480 ruled the roost, but it gave way to other resolutions as they hit the market. The commonest resolutions today are 640 by 480 (still used in significant numbers), 800 by 600, and 1,024 by 768. What makes the matter tough is that, beyond these, a good number of odd dimensions are in use. Though uncommon, they still matter, because by ignoring them you miss out on those users as beneficiaries of your website. The reason? They may never see your website the way you intended it, because their resolutions cannot display it in its entirety, with all its beauty and elegance.

You would be right to feel a little confused: it is a genuinely quirky situation when it comes to deciding which resolution to design for, both for better results and, of course, for better reach.

Let's find some workable solutions!

One size does not fit all. So, what are the workarounds?

Which workarounds you should pursue depends largely on the kind of coverage your website has been conceived and designed for. For example, if you intend your website to be classy and artistic, go with the resolutions that make it look as desired, regardless of its reach.

Due consideration cannot be done away with where optimal exposure is the aim, especially when the website supports and promotes business interests. Though the website as a whole matters greatly to a business, some parts of a web page are more important than others: the part that exhibits the navigation bar, ad banners (your own or your clients'), new product launches, or special offers.

This important material should be visible to everyone who browses your site, no matter what screen resolution they use. When designing for resolutions, keep all such crucial elements within a 640-by-480 display area, simply because this is the most fundamental dimension in use. The advantage of this approach is that you do not design exclusively for 640-by-480 users, yet you do not deprive them of seeing what is important to you -- and to them, too. Users on higher resolutions are served nicely in the process.

There is a useful step in your preparatory work toward making your website properly visible to users at different resolutions: see for yourself how your web page looks at each resolution. Is the crucial content catching the user's attention at every resolution? If not, work out where on the page that important content can be strategically placed to maximize its visibility across as many resolutions as possible.

But how do you begin? There are shareware programs, available for both Windows and Macintosh, that exist for this purpose. With them, you can carry out all the checks above and make the resulting adjustments.

There is yet another approach that can help in your visibility drive. Its benefits are galore, so, expectedly, it demands a good amount of time and effort on your part; after all, what ultimately rewards you is what tests your nerves. To begin with, you need different versions of your website, one for each target resolution. You then use JavaScript to find out the resolution of your user's screen, and redirect the user to the version best suited to that resolution, giving your website the utmost exposure with vividness and clarity.
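As a minimal sketch of that detection-and-redirect idea (the version paths below are hypothetical placeholders; in the browser, `screen.width` reports the horizontal resolution):

```javascript
// Pick the site version best suited to a given screen width.
// The paths are hypothetical; adjust them to your own layout.
function versionFor(width) {
  if (width >= 1024) return '/high/index.html';
  if (width >= 800) return '/medium/index.html';
  return '/low/index.html'; // 640x480 and anything smaller
}

// In the browser, redirect on page load:
// window.location.href = versionFor(screen.width);
```

Keeping the redirect line separate makes the selection logic easy to test outside a browser.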

Coming out of this intrigue on the safer side

You must by this point have realized that there is no simple answer to the simple question: what resolution should you design your website for?

Well, it depends on a lot of things. It varies with the nature of the website and its intended purpose, as outlined in the workarounds section of this article.

Depending on your needs, you may well go for occupying the full browser window at lower resolutions, or you may act cautiously so that your web content prints correctly on commonly used paper in a standard laser printer. Designing for 640 by 480 is a safe resort, but designing for a 750-pixel width will be especially good at higher resolutions, and it will harmonize with lower ones as well, provided the decisive content is placed thoughtfully with an eye on visibility at those lower resolutions.

Another careful consideration for your website is the use of frames. Frames consume a good deal of space and may leave other significant things on your website wanting for room. Follow a rule of thumb: use the minimum number of frames, and only for convincing reasons. Simply put, use a frame when, and only when, you cannot think of a substitute and a frame alone is the answer. Sensible use of frames will keep the other important things on the site visually prominent.

To cut the story short, you have no direct control over what resolutions users' monitors will be set to while browsing. But accommodating most users across varying resolutions, so that they see the most vital aspects of your web page, is something you cannot afford to miss. Technologies keep progressing and usage patterns change, albeit slowly, which very aptly describes the way users are opting for higher resolutions. Against this backdrop, make sure that the significant number of low-end users is not left unattended and that your business does not pay an opportunity cost.

About the author
Deepak Sharma is a Web Designer at BlueApple, a Web Design and Development Company in India with a well-connected development infrastructure, a strong portfolio of global clientele, and superior web services and solutions at competitive costs.

Thursday, July 31, 2008

FTE Outsourcing - The Offshore Story

The way we do business has changed rapidly in the last few years; the technologies of telecommunications, information technology, and media have of course been major catalysts. The most recent, and in some sense the oldest, trend has been outsourcing and contract-based work (FTE). Though both ideas have been around since the industrial revolution, their combination and organization, plus the new technologies, have led to a revolution in the way work is done.

To increase their flexibility, economy, and creativity, many companies, large and small, have developed a strategy of focusing on their core business and offshore-outsourcing work or hiring FTEs.

Frequently, work is offshored in order to reduce labor expenses, to enter new markets, to tap talent currently unavailable domestically, or to overcome regulations that prevent specific activities domestically.

Outsourcing Vs Offshoring

Outsourcing is the practice of using outside firms to handle work normally performed within the company. Offshoring is also a type of outsourcing; it involves having the outsourced business functions of the company done in another country.

Offshoring is sometimes contrasted with outsourcing.

  • Companies subcontracting in the same country would be outsourcing, but not offshoring.

  • On the other hand a company moving an internal business unit from one country to another would be offshoring, but not outsourcing.

  • A company subcontracting business to a different company in another country would be both outsourcing and offshoring.

Offshoring Vs Full-Time Equivalent (FTE)

FTE hiring is a system of hiring employees who work on a contract basis. An FTE of 1.0 means that the person is equivalent to a full-time worker, while an FTE of 0.5 signals that the worker is only half-time.

Again, when FTE is combined with outsourcing or offshoring, it simply means that, using the FTE standard, you are subcontracting business to a different company in another country.
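As a quick illustration of the arithmetic (assuming a 40-hour full-time week, which varies by organization), FTE is simply hours worked divided by full-time hours:

```javascript
// Convert weekly hours to an FTE figure.
// The 40-hour full-time week is an assumption; adjust as needed.
function toFTE(hoursPerWeek, fullTimeHours = 40) {
  return hoursPerWeek / fullTimeHours;
}

// A team of one full-timer and two half-timers:
const teamFTE = [40, 20, 20].map(h => toFTE(h)).reduce((a, b) => a + b, 0);
// teamFTE is 2.0
```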

Reasons & Benefits

There are numerous reasons why offshoring, FTE hiring, and outsourcing are considered:

Control capital costs- Cost-cutting may not be the only reason to outsource, but it's certainly a major factor. Offshoring releases capital for investment elsewhere in your business, and allows you to avoid large expenditures in the early stages of your business.

Increase efficiency - Companies that do everything themselves have much higher research, development, marketing, and distribution expenses. An outside provider's cost structure and economies of scale can give your firm a boost and an advantage.

Reduce labor costs - This is another big incentive, as hiring and training staff for short-term or peripheral projects can be very expensive. Offshoring enables you to hire expert, trained employees at a fraction of the cost.

Take on new projects quickly - An offshoring firm has the resources to start a project right away. Handling the same project in-house might involve months spent hiring people, training them, and providing the support they need. And if a project requires major capital investment, the startup process can be even more complicated.

Focus on your core business-Offshoring can help your business to shift its focus from peripheral activities toward work that serves the customer, and it can help managers set their priorities more clearly.

Give you competitive edge-Outsourcing can help small firms by giving them access to the same economies of scale, efficiency, and expertise that large companies enjoy.

Reduce risk - Outsourcing providers assume and manage this risk for you, and they are generally much better at deciding how to avoid risk in their areas of expertise.

Not many businesses really understand the benefits of outsourcing. It's true that outsourcing can save money, but that's not the only reason to do it:

  • Staffing flexibility
  • Acceleration of projects and quicker time to market
  • High caliber professionals
  • Ability to tap into best practices
  • Knowledge transfer to permanent staff
  • Cost-effective and predictable expenditures
  • Access to the flexibility and creativity of experienced problem solvers
  • Resource and core competency focus
  • Reduce overheads, free up resources
  • Avoid capital expenditure
  • Offload non-core functions
  • Enhance tactical and strategic advantages
  • Spread your risks
  • Focus scarce resources on time-critical projects


Offshoring not only reduces costs but can also make you a global player in a very short time and within limited resources. There is an almost never-ending list of services and tasks that could be offshored; some are fairly rare, but others, such as IT and Human Resources, are becoming very common indeed.

Friday, February 8, 2008

How To Research The Right Keywords For Your Website

When you set about boosting your ranking on the search engines, you will want to pick keywords that will get you the best results. Selecting the right keywords for your website is a balancing act, but it is essential to get it right for good search engine optimization (SEO).

You want to select keywords that are popular and relevant to your site. However, you do not want keywords that return thousands of results, as it will be very difficult to get to the top of the rankings. Put simply, keywords are the words people type in to search engines when they search for something on the internet.

Ideally, you should do your keyword research before you set up your website as you can then build your site around your targeted keywords. You can employ an e-marketing professional to research the best keywords for you, or you can do it yourself.

There are some great free tools on the internet. Try out the Overture (Yahoo!) Keyword Suggestion Tool. Here you can type in keywords and the keyword suggestion tool will return an estimation of the monthly search volume for that phrase in Google, Yahoo, MSN and AOL. It will also show results for similar keywords.

You should select about 15 keywords and key phrases. Perform searches using these words to find competitors' websites, and see what keywords they use and how they are employed.

Another excellent tool to help you select the best keywords is Wordtracker. It can tell you how often people search for the keywords you chose and can offer some excellent alternative suggestions. You can try it out for free on the Wordtracker website. Google also offers a free keyword suggestion tool.

Once you have selected the right keywords for your website, you will need to use them wisely to achieve the best search engine optimization. The search engines do not like the overuse of keywords, and you may even get banned if there are too many keywords and key phrases on your pages.

Keep keyword density in articles to about 2% and make sure that all content reads well and is informative. Articles written with the sole purpose of packing in as many keywords as possible will quickly drive visitors to your site away.
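That 2% figure is easy to check yourself. A rough sketch (an illustration only; real SEO tools handle phrases, punctuation, and stemming more carefully):

```javascript
// Naive keyword density: occurrences of the keyword divided by
// total words, expressed as a percentage.
function keywordDensity(text, keyword) {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const hits = words.filter(w => w === keyword.toLowerCase()).length;
  return (hits / words.length) * 100;
}
```

A 500-word article should therefore contain a given keyword roughly ten times to sit near 2%.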

Also, be sure to include your keywords in page titles and META data, and try to work them into links on your site when possible. Following these tips should help you strike the right balance when choosing your keywords and boost your search engine page ranking.


Thursday, February 7, 2008

How to Design Web Accessible Pages for the Colorblind


Have you ever thought about how many people visit your website and can't use it for one reason or another? Well, this number might be higher than you think. If you are truly looking to create a web accessible site, then you need to take color impaired visitors into consideration. For colorblind individuals, the wrong color combinations on a website can make navigation and interaction impossible. However, don't panic; there are a few simple rules you can follow to design a website that is functional for the colorblind without giving up any of your favorite design aspects.


I know what some of you might be thinking. Why should I create a website for a small group of people? You might be surprised to find out that colorblindness isn't as rare as you think: roughly one in twelve of your visitors might be coming to your site with some sort of color disability. Just think how many visitors and customer conversions you might lose if your website is not accessible and usable by the colorblind.


However, if this doesn't sway you, here are a few more reasons why you might want to consider designing your website with the colorblind in mind:

(1) An accessible website is more likely to be ranked well with the search engines than an inaccessible website.

(2) By designing a colorblind accessible website, you are also targeting PDAs, 3G phones, and similar technological devices that are used for web access.

(3) It is seen as more professional to have a website that doesn’t exclude the impaired or disabled.

(4) Equal access for everyone, regardless of their abilities, is always a nice thing to provide.


Unfortunately, there isn't only one kind of colorblindness to take into consideration when designing; it would be much easier if that were true. There are three different color vision impairments, and they are explained below alongside normal vision.

(1) Trichromat Vision
“Normal” color vision uses red/green/blue color receptors; this is the kind of vision that 11 out of 12 visitors have.

(2) Anomalous Trichromat Vision
Anomalous Trichromat vision uses three color receptors, but one pigment is misaligned:

(a) Protanomaly Vision: reduced color red sensitivity
(b) Deuteranomaly Vision (most common): reduced color green sensitivity
(c) Tritanomaly Vision: reduced color blue sensitivity

(3) Dichromat Vision
Dichromat vision uses only two of the three visual pigments; red, green, or blue is missing:

(a) Protanopia Vision: unable to receive color red.
(b) Deuteranopia Vision: unable to receive color green.
(c) Tritanopia Vision: unable to receive color blue.

(4) Monochromat Vision (can see only one color)


As web designers, we are all used to having the entire palette of colors to choose from. Designing a website for the colorblind won't limit your color palette at all; however, you will need to watch out for the color combinations that you do use. Learning which color combinations are “no-no's” is a great place to start, because without this you will get nowhere. Basically, you need to stay away from red and green combinations. Although most people see red and green as contrasting, those with Anomalous Trichromat vision (the most common type of colorblindness) will not be able to tell these colors apart. This also goes for combinations of the variations of green and red, including colors such as purple and orange.


It is necessary to prioritize your website's content to find what matters most: the more important the content, the more necessary it is to make those items colorblind-safe. The most important aspects of a website are navigational text (including image and button text), menus, headers, and subheaders. Make sure these items are very high in contrast; that means making them either black and white or opposite ends of the color saturation scale. I suggest black and white as the best possible contrasting colors for these critical page elements.

Also, for articles and other large pieces of copy, using dark text on a white background is, in my opinion, essential. Maybe I'm getting old, but I am sure we have all read an article online and ended up with a huge migraine because the yellow text on a blue background was too much for our eyes to take. If you don't want to use black and white for text, then after laying out the page, ask yourself, “Does this text contrast well with the background?” Use as much color as you want in the surrounding parts of the page, as long as it doesn't take away from the contrast of the text.
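One objective way to answer “does this text contrast well with the background?” is the relative-luminance contrast ratio defined in the WCAG guidelines. A minimal sketch of that formula:

```javascript
// WCAG relative luminance for one sRGB channel (0-255).
function channel(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color.
function luminance([r, g, b]) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio between two colors; ranges from 1 (none) to 21 (black on white).
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

WCAG recommends a ratio of at least 4.5:1 for normal body text; black on white scores the maximum of 21:1.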

One website that uses a monochromatic look splashed with color is Adobe's; it is very clean, professional, and, most importantly, beautiful.

If you aren't sure whether a page has enough contrast, one good tip is to desaturate your website (save first) in Fireworks or Adobe Photoshop and see if the images still have an impact. Desaturating removes all the color from an image, so you can tell whether it has enough contrast to be readable without color.

However, an easier way is to use these tools that I found on the net. The first one is the Web Design Evaluation Tool, a free online utility that lets you see the three different ways your page can look depending on the viewer's vision and color disability. Another tool that I found useful was the Colorblind Web Page Filter: you type in the URL, choose some options that describe different types of color blindness, and the filter shows you what the page will look like to a colorblind viewer. Of course I had to try out these cool tools, and I had a lot of fun playing around with the different ways my website looks under the different filters. Take a look at what All Web Design Resources looks like through different colorblind filters. You can click on the images to see the pictures bigger or to play around with the colors yourself.

[Screenshots: the All Web Design Resources page as it appears with Deuteranomaly vision (rendered in monochromatic black, white, and grey) and as designed for Anomalous Protanomaly vision.]

I already know what type of comments I will get about this article. Can you guess what I am going to say? “Rachel, why haven't you followed the rules of this article?” Well guys, I am having a hard time figuring out this blog software at the moment. I am used to using HTML to publish articles and I haven't quite figured this bad boy out yet (I only downloaded it last night). As soon as I can, I am going to switch from this template to a more reader-friendly version. Does anyone have an opinion on a WordPress Theme / Skin / Template?

Here are Some Other Colorblind Design Links that Might Help You:

Consider The Colorblind
Article describing some of the problems colorblind web viewers have when
viewing web sites and why the web designer should care.

Colorblind Web Page Filter
A colorblind web page filter. You type in the URL and choose some
options that describe different types of color blindness and the filter
shows you what the page will look like to the colorblind viewer.

Web Design Evaluation Tool
Make sure that when you are designing for the web that you take into
account the color disabled. This online utility allows the web designer
to see 3 different ways that your page can look depending on the viewer’s
vision & color disability.

Color Theory for the Color Blind
Article for the colorblind web designer. Starts with some general
information about color blindness, and then continues by providing
information on color theory and advice for color blind people who want
to do web design.

Colorblindness Information and Online Tests
A great site providing a great deal of information about color blindness.

Colormaps for Checking The Readability for Dichromats
This site provides a comparative color palette showing how the 256-color
palette looks to people with normal vision and to those with two forms of
dichromat color blindness.

What is Colorblindness and The Different Types
Article describing the different types of colorblindness

Are Your Web Pages Color Sensitive?
About’s article on colorblind friendly website design. This article will help
you understand what is necessary to design a colorblind-friendly
webpage design.

Vischeck
Vischeck is a way of showing you what things look like to someone
who is color blind. You can try Vischeck online: either run Vischeck
on your own image files or run Vischeck on a web page. You can also
download programs to let you run it on your own computer.

How Do Things Appear to Colorblind People?
Many people might be surprised to find out that being colorblind doesn’t
mean that you just see black and white. Take a look at this article to
find out what it is like to be colorblind.


Tips On How To Create A Traffic Generating Website

Building a website may initially seem like a major undertaking. There is so much information out there that one can become overwhelmed. There is so much to think about: what to write, what graphics to use, what domain name to pick, and so on.

What you really need to do is sit back and start small. Remember the saying "keep it simple, stupid". If you are even considering building a website, you must at this juncture have some notion in your head of what you want to do.

This notion could be broad or specific. In other words you may just want to make money as you have heard that many people are making money online with their websites.

On the other hand, you may have a specific interest that you want to share with the rest of the world. Whatever the case, there is something going on between your ears that is leading you to want to build a website.

So you need to write this down. Get it on paper. Create a list. No matter what your notion is, there will be one goal common to everyone who builds a website: getting traffic to it. You will need eyeballs on your web pages, whether to share information or to generate income.

So your number one objective is to understand the main purpose of building a website. Once you understand this you can now specifically research to achieve your goal. This will keep you focused and you will be less inclined to just jump from one idea to the next. Then you will be able to build an effective website.

Watch these free videos at Site Build It Customer Reviews (please allow time for the videos to load).

Learn how to publish a real website that works at Site Build It Review. Sign up for the free Affiliate Masters Course.

Tuesday, February 5, 2008

Dynamics of Dynamic Site Mapping

Dynamic site mapping technology boosts your search engine visibility by indexing your hard-to-reach dynamic content. DSM provides you with a deep-crawled, searchable index of dynamic pages. This enables you to obtain web page access information (page view event tracking) and to compile website statistics reports that provide page visitation data and client marketing information.

What Is Dynamic Site Mapping?

In the early days, website content was almost entirely static HTML or text documents. By definition, a static document is a web page that is saved to disk and passed back to a requesting browser without changes, whereas a dynamic web page is one whose content is generated by a program or script at the time the page is requested. Trivial examples of dynamic content are the current date and time. A dynamic site is easily recognized by the "?" or other special characters located in the page's URL.

Optimizing Dynamic Pages

For ages, search engine spiders were unable to index dynamic pages dependably. Search engine technology has since advanced, and even complex dynamic URLs are now appearing in the SERPs. There are certain basics that a search engine requires to successfully index your dynamic pages:

URL Processing Ability - Even though search engine technology is improving by leaps and bounds, search engine experts still recommend restricting dynamic URLs to two parameters or fewer.

Content Accessing Ability - Search engine spiders cannot enter values into forms, so any content that is accessible only through a form on your site is just one more part of the invisible web.

Ability to Return to Your Page - Spiders encounter problems if they cache a dynamic URL with a session ID. If that session ID times out, the indexed page will most likely point any search engine referrer to an error page, and the spider will be unable to return to your page for further spidering. For that reason, most search engine spiders do not cache dynamic URLs with session IDs.

Dynamic sites require highly specialized search engine marketing strategies that differ from those used for static sites. To this day it is hard to get dynamic sites indexed unless they are properly optimized.
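The two-parameter guideline mentioned above is easy to check programmatically. A rough sketch (the URLs below are hypothetical examples):

```javascript
// Count the query parameters in a URL, as a quick check
// against the "two parameters or fewer" guideline.
function paramCount(url) {
  const query = url.split('?')[1] || '';
  return query ? query.split('&').length : 0;
}
```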

The Problem

Dynamic pages are created on the fly with technologies such as ASP, ColdFusion, and Perl. Such pages work well for users who visit the site, but not for search engine crawlers, because dynamically generated pages don't actually exist until a user selects the variable(s) that generate them. A search engine spider can't select variables; as a result the pages never get generated, and they can't be indexed.

Crawlers such as Google and Yahoo can't read an entire dynamic database of URLs, which contain either a query string (?) or other database characters (#&*!%) that act as spider traps. Because search crawlers have problems reading deep into a dynamic database, they are programmed to detect and ignore many dynamic URLs.

The Solution

There are a few dynamic-page optimization techniques that can be used for the indexing of dynamic sites:

1. Converting dynamic URLs to search engine-friendly URLs, for example by using tools like the Google site crawler or the DSM tool developed by Bruce Clay to convert dynamic Active Server Pages (ASP) pages into search engine-compatible formats.

2. Placing links to dynamic pages on static pages, and submitting the static pages to the search engines manually according to each search engine's guidelines, is another way out. This is easily done with a Table of Contents page that displays links to the dynamic pages. While the crawlers can't index every dynamic page, they will index most of the content.

3. Another way to achieve wider visibility is to use paid inclusion and trusted feed programs that guarantee the indexing of dynamic sites, or a specific number of click-throughs.

4. The best way to get a dynamic site fully indexed is to first fix the URLs by having them rewritten into static-looking URLs.
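On Apache, that kind of rewriting is commonly done with mod_rewrite. A minimal sketch, assuming a hypothetical product.asp script (your actual paths and parameter names will differ):

```apache
# Serve the static-looking URL /products/123 from the real
# dynamic script product.asp?id=123 (hypothetical names).
RewriteEngine On
RewriteRule ^products/([0-9]+)$ /product.asp?id=$1 [L,QSA]
```

Spiders then see only the clean /products/123 path, with no query string to trip over.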

The Importance of Dynamic Site Mapping For SEO

Dynamic site mapping is important because search engine optimizers need as much content as possible to be found and indexed. Indexing is important because it has a direct impact on how the site will perform on the search engines.

Every single page that is indexed increases the chance of visitors finding the site, since each page can potentially be ranked for any combination of keywords found on that page.

The second most important reason for indexing is link popularity. The search engines today are driven by link popularity: a site with more links can rank higher than a site with fewer links.


Dynamic Site Mapping simplifies content management, streamlines website generation, and provides personalization features that cannot be replicated with purely static web pages. Accurately utilized by SEOs, it can double a site's popularity and increase business opportunities and revenue.

Sunday, January 20, 2008

Using a Cache to Solve PHP Performance Problems

In the good old days when building web sites was as easy as knocking up a few HTML pages, the delivery of a web page to a browser was a simple matter of having the web server fetch a file. A site's visitors would see its small, text-only pages almost immediately, unless they were using particularly slow modems. Once the page was downloaded, the browser would cache it somewhere on the local computer so that, should the page be requested again, after performing a quick check with the server to ensure the page hadn't been updated, the browser could display the locally cached version. Pages were served as quickly and efficiently as possible, and everyone was happy.

Then dynamic web pages came along and spoiled the party by introducing two problems:

  • When a request for a dynamic web page is received by the server, some intermediate processing must be completed, such as the execution of scripts by the PHP engine. This processing introduces a delay before the web server begins to deliver the output to the browser. This may not be a significant delay where simple PHP scripts are concerned, but for a more complex application, the PHP engine may have a lot of work to do before the page is finally ready for delivery. This extra work results in a noticeable time lag between the user's requests and the actual display of pages in the browser.

  • A typical web server, such as Apache, uses the time of file modification to inform a web browser of a requested page's age, allowing the browser to take appropriate caching action. With dynamic web pages, the actual PHP script may change only occasionally; meanwhile, the content it displays, which is often fetched from a database, will change frequently. The web server has no way of discerning updates to the database, so it doesn't send a last modified date. If the client (that is, the user's browser) has no indication of how long the data will remain valid, it will take a guess. This is problematic if the browser decides to use a locally cached version of the page which is now out of date, or if the browser decides to request from the server a fresh copy of the page, which actually has no new content, making the request redundant. The web server will always respond with a freshly constructed version of the page, regardless of whether or not the data in the database has actually changed.

To avoid the possibility of a web site visitor viewing out-of-date content, most web developers use a meta tag or HTTP headers to tell the browser never to use a cached version of the page. However, this negates the web browser's natural ability to cache web pages, and entails some serious disadvantages. For example, the content delivered by a dynamic page may only change once a day, so there's certainly a benefit to be gained by having the browser cache a page--even if only for 24 hours.

If you're working with a small PHP application, it's usually possible to live with both issues. But as your site increases in complexity--and attracts more traffic--you'll begin to run into performance problems. Both these issues can be solved, however: the first with server-side caching; the second, by taking control of client-side caching from within your application. The exact approach you use to solve these problems will depend on your application, but in this chapter, we'll consider both PHP and a number of class libraries from PEAR as possible panaceas for your web page woes.

Note that in this chapter's discussions of caching, we'll look at only those solutions that can be implemented in PHP. For a more general introduction, the definitive discussion of web caching is Mark Nottingham's tutorial.

Furthermore, the solutions in this chapter should not be confused with some of the script caching solutions that work on the basis of optimizing and caching compiled PHP scripts, such as Zend Accelerator and ionCube PHP Accelerator.

This chapter is excerpted from The PHP Anthology: 101 Essential Tips, Tricks & Hacks, 2nd Edition.

How do I prevent web browsers from caching a page?

If timely information is crucial to your web site and you wish to prevent out-of-date content from ever being visible, you need to understand how to prevent web browsers--and proxy servers--from caching pages in the first place.


There are two possible approaches we could take to solving this problem: using HTML meta tags, and using HTTP headers.

Using HTML Meta Tags

The most basic approach to the prevention of page caching is one that utilizes HTML meta tags:
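The tags themselves were lost from this excerpt; based on the description that follows, they would look something like this (the exact date is arbitrary, so long as it lies in the past):

```html
<meta http-equiv="Expires" content="Mon, 26 Jul 1997 05:00:00 GMT" />
<meta http-equiv="Pragma" content="no-cache" />
```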

The insertion of a date that's already passed into the Expires meta tag tells the browser that the cached copy of the page is always out of date. Upon encountering this tag, the browser usually won't cache the page. Although the Pragma: no-cache meta tag isn't guaranteed to work, it's a fairly well-supported convention that most web browsers follow. However, the two issues associated with this approach, which we'll discuss below, may prompt you to look at the alternative solution.

Using HTTP Headers

A better approach is to use the HTTP protocol itself, with the help of PHP's header function, to produce the equivalent of the two HTML meta tags above:
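The header calls were stripped from this excerpt; a minimal reconstruction, equivalent to the two meta tags just described, might be:

```php
<?php
// A date in the past marks any cached copy as already expired
$expires = 'Expires: Mon, 26 Jul 1997 05:00:00 GMT';
header($expires);
header('Pragma: no-cache');
```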

We can go one step further than this, using the Cache-Control header that's supported by HTTP 1.1-capable browsers:
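Again, the code itself is missing from the excerpt; one plausible set of directives (the exact list varies with your needs) is:

```php
<?php
// HTTP 1.1 clients get explicit instructions not to store the page
$cacheControl = 'Cache-Control: no-store, no-cache, must-revalidate';
header($cacheControl);
```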

For a precise description of HTTP 1.1 Cache-Control headers, have a look at the W3C's HTTP 1.1 RFC. Another great source of information about HTTP headers, which can be applied readily to PHP, is mod_perl's documentation on issuing correct headers.


Using the Expires meta tag sounds like a good approach, but two problems are associated with it:

  • The browser first has to download the page in order to read the meta tags. If a tag wasn't present when the page was first requested by a browser, the browser will remain blissfully ignorant and keep its cached copy of the original.
  • Proxy servers that cache web pages, such as those common to ISPs, generally won't read the HTML documents themselves. A web browser might know that it shouldn't cache the page, but the proxy server between the browser and the web server probably doesn't--it will continue to deliver the same out-of-date page to the client.

On the other hand, using the HTTP protocol to prevent page caching essentially guarantees that no web browser or intervening proxy server will cache the page, so visitors will always receive the latest content. In fact, the first header should accomplish this on its own; this is the best way to ensure a page is not cached. The Cache-Control and Pragma headers are added for some degree of insurance. Although they don't work on all browsers or proxies, the Cache-Control and Pragma headers will catch some cases in which the Expires header doesn't work as intended--if the client computer's date is set incorrectly, for example.

Of course, to disallow caching entirely introduces the problems we discussed at the start of this chapter: it negates the web browser's natural ability to cache pages, and can create unnecessary overhead, as new versions of pages are always requested, even though those pages may not have been updated since the browser's last request. We'll look at the solution to these issues in just a moment.

How do I control client-side caching?

We addressed the task of disabling client-side caching in "How do I prevent web browsers from caching a page?", but disabling the cache is rarely the only (or best) option.

Here we'll look at a mechanism that allows us to take advantage of client-side caches in a way that can be controlled from within a PHP script.

Apache Required!
This approach will only work if you're running PHP as an Apache web server module, because it requires use of the function getallheaders--which only works with Apache--to fetch the HTTP headers sent by a web browser.


In controlling client-side caching you have two alternatives. You can set a date on which the page will expire, or respond to the browser's request headers. Let's see how each of these tactics is executed.

Setting a Page Expiry Header

The header that's easiest to implement is the Expires header--we use it to set a date on which the page will expire, and until that time, web browsers are allowed to use a cached version of the page. Here's an example of this header at work:

expires.php (excerpt)

<?php
function setExpires($expires)
{
  header('Expires: ' .
      gmdate('D, d M Y H:i:s', time() + $expires) . ' GMT');
}
// Allow browsers to cache this page for ten seconds
setExpires(10);
echo ( 'The GMT is now ' . gmdate('H:i:s') . '<br />' );
echo ( '<a href="' . $_SERVER['PHP_SELF'] . '">View Again</a><br />' );
?>

In this example, we created a custom function called setExpires that sets the HTTP Expires header to a point in the future, defined in seconds. The output of the above example shows the current time in GMT, and provides a link that allows us to view the page again. If we follow this link, we'll notice the time updates only once every ten seconds. If you like, you can also experiment by using your browser's Refresh button to tell the browser to refresh the cache, and watching what happens to the displayed date.

Acting on the Browser's Request Headers

A more useful approach to client-side cache control is to make use of the Last-Modified and If-Modified-Since headers, both of which are available in HTTP 1.0. This action is known technically as performing a conditional GET request; whether your script returns any content depends on the value of the incoming If-Modified-Since request header.

If you're using PHP 4.3.0 or later on Apache, the HTTP headers are accessible with the functions apache_request_headers and apache_response_headers. Note that the function getallheaders has become an alias for the new apache_request_headers function.

This approach requires that you send a Last-Modified header every time your PHP script is accessed. The next time the browser requests the page, it sends an If-Modified-Since header containing a time; your script can then identify whether the page has been updated since that time. If it hasn't, your script sends an HTTP 304 status code to indicate that the page hasn't been modified, and exits before sending the body of the page.

Let's see these headers in action. The example below uses the modification date of a text file. To simulate updates, we first need to create a way to randomly write to the file:

ifmodified.php (excerpt)

<?php
$file = './test.txt';
// One-in-three chance that the file is updated on each request
$random = array(0, 1, 1);
shuffle($random);
if ($random[0] == 0) {
  $fp = fopen($file, 'w');
  fwrite($fp, 'x');
  fclose($fp);
}
$lastModified = filemtime($file);

Our simple randomizer provides a one-in-three chance that the file will be updated each time the page is requested. We also use the filemtime function to obtain the last modified time of the file.

Next, we send a Last-Modified header that uses the modification time of the text file. We need to send this header for every page we render, to cause visiting browsers to send us the If-Modified-Since header upon every request:

ifmodified.php (excerpt)

header('Last-Modified: ' .
    gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');

Our use of the getallheaders function ensures that PHP gives us all the incoming request headers as an array. We then need to check that the If-Modified-Since header actually exists; if it does, we have to deal with a special case caused by older Mozilla browsers (earlier than version 6), which appended an illegal extra field to their If-Modified-Since headers. We use PHP's strtotime function to generate a timestamp from the date the browser sent us. If there's no such header, we set this timestamp to zero, which forces PHP to give the visitor an up-to-date copy of the page:

ifmodified.php (excerpt)

$request = getallheaders();
if (isset($request['If-Modified-Since'])) {
  // Strip the illegal extra field appended by older Mozilla browsers
  $modifiedSince = explode(';', $request['If-Modified-Since']);
  $modifiedSince = strtotime($modifiedSince[0]);
} else {
  $modifiedSince = 0;
}

Finally, we check to see whether or not the cache has been modified since the last time the visitor received this page. If it hasn't, we simply send a 304 Not Modified response header and exit the script, saving bandwidth and processing time by prompting the browser to display its cached copy of the page:

ifmodified.php (excerpt)

if ($lastModified <= $modifiedSince) {
  header('HTTP/1.1 304 Not Modified');
  exit();
}
echo ( 'The GMT is now ' . gmdate('H:i:s') . '<br />' );
echo ( '<a href="' . $_SERVER['PHP_SELF'] . '">View Again</a><br />' );

Remember to use the "View Again" link when you run this example (clicking the Refresh button usually clears your browser's cache). If you click on the link repeatedly, the cache will eventually be updated; your browser will throw out its cached version and fetch a new page from the server.

If you combine the Last-Modified header approach with time values that are already available in your application--for example, the time of the most recent news article--you should be able to take advantage of web browser caches, saving bandwidth and improving your application's perceived performance in the process.

Be very careful to test any caching performed in this manner, though; if you get it wrong, you may cause your visitors to consistently see out-of-date copies of your site.


HTTP dates are always calculated relative to Greenwich Mean Time (GMT). The PHP function gmdate works exactly like the date function, except that it returns the time in GMT, regardless of the time zone configured on your server.

When a browser encounters an Expires header, it caches the page. All further requests for the page that are made before the specified expiry time use the cached version of the page--no request is sent to the web server. Of course, client-side caching is only truly effective if the system time on the computer is accurate. If the computer's time is out of sync with that of the web server, you run the risk of pages either being cached improperly, or never being updated.

The Expires header has the advantage that it's easy to implement; in most cases, however, unless you're a highly organized person, you won't know exactly when a given page on your site will be updated. Since the browser will only contact the server after the page has expired, there's no way to tell browsers that the page they've cached is out of date. In addition, you also lose some knowledge of the traffic visiting your web site, since the browser will not make contact with the server when it requests a page that's been cached.

How do I examine HTTP headers in my browser?

How can you actually check that your application is running as expected, or debug your code, if you can't actually see the HTTP headers? It's worth knowing exactly which headers your script is sending, particularly when you're dealing with HTTP cache headers.


Several worthy tools are available to help you get a closer look at your HTTP headers:

LiveHTTPHeaders
This add-on to the Firefox browser is a simple but very handy tool for examining request and response headers while you're browsing.

Firebug
Another useful Firefox add-on, Firebug is a tool whose interface offers a dedicated tab for examining HTTP request information.

ieHTTPHeaders
This add-on to Internet Explorer for HTTP viewing and debugging is similar to LiveHTTPHeaders above.

Charles Web Debugging Proxy
Available for Windows, Mac OS X, and Linux or Unix, the Charles Web Debugging Proxy is a proxy server that allows developers to see all the HTTP traffic between their browsers and the web servers to which they connect.

Any of these tools will allow you to inspect the communication between the server and browser.

How do I cache file downloads with Internet Explorer?

If you're developing file download scripts for Internet Explorer users, you might notice a few issues with the download process. In particular, when you're serving a file download through a PHP script that uses headers such as Content-Disposition: attachment; filename=myFile.pdf or Content-Disposition: inline; filename=myFile.pdf, and that tells the browser not to cache pages, Internet Explorer won't deliver that file to the user.


Internet Explorer handles downloads in a rather unusual manner: it makes two requests to the web site. The first request downloads the file and stores it in the cache before making a second request, the response to which is not stored. The second request invokes the process of delivering the file to the end user in accordance with the file's type--for instance, it starts Acrobat Reader if the file is a PDF document. Therefore, if you send the cache headers that instruct the browser not to cache the page, Internet Explorer will delete the file between the first and second requests, with the unfortunate result that the end user receives nothing!

If the file you're serving through the PHP script won't change, one solution to this problem is simply to disable the "don't cache" headers, Pragma and Cache-Control, which we discussed in "How do I prevent web browsers from caching a page?", for the download script.

If the file download will change regularly, and you want the browser to download an up-to-date version of it, you'll need to use the Last-Modified header that we met in "How do I control client-side caching?", and ensure that the time of modification remains the same across the two consecutive requests. You should be able to achieve this goal without affecting users of browsers that handle downloads correctly.

One final solution is to write the file to the file system of your web server and simply provide a link to it, leaving it to the web server to report the cache headers for you. Of course, this may not be a viable option if the file is supposed to be secured.

How do I use output buffering for server-side caching?

Server-side processing delay is one of the biggest bugbears of dynamic web pages. We can reduce server-side delay by caching output. The page is generated normally, performing database queries and so on with PHP; however, before sending it to the browser, we capture and store the finished page somewhere--in a file, for instance. The next time the page is requested, the PHP script first checks to see whether a cached version of the page exists. If it does, the script sends the cached version straight to the browser, avoiding the delay involved in rebuilding the page.


Here, we'll look at PHP's in-built caching mechanism, the output buffer, which can be used with whatever page rendering system you prefer (templates or no templates). Consider situations in which your script displays results using, for example, echo or print, rather than sending the data directly to the browser. In such cases, you can use PHP's output control functions to store the data in an in-memory buffer, which your PHP script has both access to and control over.

Here's a simple example that demonstrates how the output buffer works:

buffer.php (excerpt)

<?php
ob_start();
echo '1. Place this in the buffer<br />';
$buffer = ob_get_contents();
ob_end_clean();
echo '2. A normal echo<br />';
echo $buffer;
?>

The buffer itself stores the output as a string. So, in the above script, we commence buffering with the ob_start function, and use echo to display a piece of text which is stored in the output buffer automatically. We then use the ob_get_contents function to fetch the data the echo statement placed in the buffer, and store it in the $buffer variable. The ob_end_clean function stops the output buffer and empties the contents; the alternative approach is to use the ob_end_flush function, which displays the contents of the buffer.

The above script displays the following output:

2. A normal echo
1. Place this in the buffer

In other words, we captured the output of the first echo, then sent it to the browser after the second echo. As this simple example suggests, output buffering can be a very powerful tool when it comes to building your site; it provides a solution for caching, as we'll see in a moment, and is also an excellent way to hide errors from your site's visitors, as is discussed in Chapter 9. Output buffering even provides a possible alternative to browser redirection in situations such as user authentication.

In order to improve the performance of our site, we can store the output buffer contents in a file. We can then call on this file for the next request, rather than having to rebuild the output from scratch again. Let's look at a quick example of this technique. First, our example script checks for the presence of a cache file:

sscache.php (excerpt)
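The opening lines of this excerpt were lost; a reconstruction of the cache check, assuming the rendered page is stored at ./cache/page.cache, might be:

```php
<?php
// If a cached copy exists, send it and stop; otherwise start
// buffering so we can capture the page as we build it
if (file_exists('./cache/page.cache')) {
  readfile('./cache/page.cache');
  exit();
}
ob_start();
```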

If the script finds the cache file, we simply output its contents and we're done! If the cache file is not found, we proceed to output the page using the output buffer:

sscache.php (excerpt)

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
  <head>
    <title>Cached Page</title>
  </head>
  <body>
    <p>This page was cached with PHP's
      <a href="http://www.php.net/outcontrol">Output Control Functions</a>
    </p>
  </body>
</html>
<?php
$buffer = ob_get_contents();
ob_end_flush();

Before we flush the output buffer to display our page, we make sure to store the buffer contents in the $buffer variable.

The final step is to store the saved buffer contents in a text file:

sscache.php (excerpt)

$fp = fopen('./cache/page.cache', 'w');
fwrite($fp, $buffer);
fclose($fp);

The page.cache file contents are exactly the same as the HTML that was rendered by the script:

cache/page.cache (excerpt)

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
  <head>
    <title>Cached Page</title>
  </head>
  <body>
    <p>This page was cached with PHP's
      <a href="http://www.php.net/outcontrol">Output Control Functions</a>
    </p>
  </body>
</html>


For an example that shows how to use PHP's output buffering capabilities to handle errors more elegantly, have a look at the PHP Freaks article Introduction to Output Buffering, by Derek Ford.

Template engines often include template caching features--Smarty is a case in point. Usually, these engines offer a built-in mechanism for storing a compiled version of a template (that is, the native PHP generated from the template), which prevents us developers from having to recompile the template every time a page is requested.

This process should not be confused with output--or content--caching, which refers to the caching of the rendered HTML (or other output) that PHP sends to the browser. In addition to the content cache mechanisms discussed in this chapter, Smarty can cache the contents of the HTML page. Whether you use Smarty's content cache or one of the alternatives discussed in this chapter, you can successfully use both template and content caching together on the same site.

HTTP Headers and Output Buffering

Output buffering can help solve the most common problem associated with the header function, not to mention the issues surrounding session_start and set_cookie. Normally, if you call any of these functions after page output has begun, you'll get a nasty error message. When output buffering's turned on, the only output types that can escape the buffer are HTTP headers. If you use ob_start at the very beginning of your application's execution, you can send headers at whichever point you like, without encountering the usual errors. You can then write out the buffered page content all at once, when you're sure that no more HTTP headers are required.
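As a quick illustration of the idea (the page content and header here are hypothetical):

```php
<?php
ob_start();                  // buffer everything from here on
echo 'Welcome back!';        // output goes into the buffer...
header('X-Example: hello');  // ...so headers can still be sent safely
$page = ob_get_clean();      // stop buffering and grab the content
echo $page;                  // now write the page out in one go
```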

Use Output Buffering Responsibly
While output buffering can helpfully solve all our header problems, it should not be used solely for that reason. By ensuring that all output is generated after all the headers are sent, you'll save the time and resource overheads involved in using output buffers.

How do I cache just the parts of a page that change infrequently?

Caching an entire page is a simplistic approach to output buffering. While it's easy to implement, caching whole pages squanders the real benefit of PHP's output control functions: the ability to cache different parts of a page according to the varying lifetimes of their content.

No doubt, some parts of the page that you send to visitors will change very rarely, such as the page's header, menus, and footer. But other parts--for example, the list of comments on your blog posts--may change quite often. Fortunately, PHP allows you to cache sections of the page separately.


Output buffering can be used to cache sections of a page in separate files. The page can then be rebuilt for output from these files.

This technique eliminates the need to repeat database queries, while loops, and so on. You might consider assigning each block of the page an expiry date after which the cache file is recreated; alternatively, you may build into your application a mechanism that deletes the cache file every time the content it stores is changed.

Let's work through an example that demonstrates the principle. Firstly, we'll create two helper functions, writeCache and readCache. Here's the writeCache function:

smartcache.php (excerpt)

The writeCache function is quite simple; it just writes the content of the first argument to a file with the name specified in the second argument, and saves that file to a location in the cache directory. We'll use this function to write our HTML to the cache files.
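The function body is missing from the excerpt; given that description, a minimal version would be:

```php
<?php
// Write $content to a cache file under ./cache/
function writeCache($content, $filename)
{
  $fp = fopen('./cache/' . $filename, 'w');
  fwrite($fp, $content);
  fclose($fp);
}
```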

The readCache function will return the contents of the cache file specified in the first argument if it has not expired--that is, the file's last modified time is not older than the current time minus the number of seconds specified in the second argument. If it has expired or the file does not exist, the function returns false:

smartcache.php (excerpt)

function readCache($filename, $expiry)
{
  if (file_exists('./cache/' . $filename)) {
    if ((time() - $expiry) > filemtime('./cache/' . $filename)) {
      return false;
    }
    $cache = file('./cache/' . $filename);
    return implode('', $cache);
  }
  return false;
}

For the purposes of demonstrating this concept, I've used a procedural approach. However, I wouldn't recommend doing this in practice, as it will result in very messy code and is likely to cause issues with file locking. For example, what happens when someone accesses the cache at the exact moment it's being updated? Better solutions will be explained later on in the chapter.

Let's continue this example. After the output buffer is started, processing begins. First, the script calls readCache to see whether the file header.cache exists; this contains the top of the page--the HTML tag and the start tag. We've used PHP's date function to display the time at which the page was actually rendered, so you'll be able to see the different cache files at work when the page is displayed:

smartcache.php (excerpt)

if (!$header = readCache('header.cache', 604800)) {
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
  <head>
    <title>Chunked Cached Page</title>
  </head>
  <body>
    <p>The header time is now: <?php echo date('H:i:s'); ?></p>
<?php
  $header = ob_get_contents();
  ob_clean();
  writeCache($header, 'header.cache');
}
Note what happens when a cache file isn't found: the header content is output and assigned to a variable, $header, with ob_get_contents, after which the ob_clean function is called to empty the buffer. This allows us to capture the output in "chunks" and assign them to individual cache files with the writeCache function. The header of the page is now stored as a file, which can be reused without our needing to rerender the page. Look back to the start of the if condition for a moment. When we called readCache, we gave it an expiry time of 604800 seconds (one week); readCache uses the file modification time of the cache file to determine whether the cache is still valid.

For the body of the page, we'll use the same process as before. However, this time, when we call readCache, we'll use an expiry time of five seconds; the cache file will be updated whenever it's more than five seconds old:

smartcache.php (excerpt)

if (!$body = readCache('body.cache', 5)) {
  echo 'The body time is now: ' . date('H:i:s') . '<br />';
  $body = ob_get_contents();
  ob_clean();
  writeCache($body, 'body.cache');
}

The page footer is effectively the same as the header. After the footer, the output buffering is stopped and the contents of the three variables that hold the page data are displayed:

smartcache.php (excerpt)

if (!$footer = readCache('footer.cache', 604800)) {
?>
    <p>The footer time is now: <?php echo date('H:i:s'); ?></p>
  </body>
</html>
<?php
  $footer = ob_get_contents();
  ob_end_clean();
  writeCache($footer, 'footer.cache');
}
echo $header . $body . $footer;

The end result looks like this:

The header time is now: 17:10:42
The body time is now: 18:07:40
The footer time is now: 17:10:42

The header and footer are updated on a weekly basis, while the body is updated whenever it is more than five seconds old. If you keep refreshing the page, you'll see the body time updating.


Note that if you have a page that builds content dynamically, based on a number of variables, you'll need to make adjustments to the way you handle your cache files. For example, you might have an online shopping catalog whose listing pages are defined by a URL such as:

catalogue.php?category=1&page=2

This URL should show page two of all items in category one; let's say this is the category for socks. But if we were to use the caching code above, the results of the first page of the first category we looked at would be cached, and shown for any request for any other page or category, until the cache expiry time elapsed. This would certainly confuse the next visitor who wanted to browse the category for shoes--that person would see the cached content for socks!

To avoid this issue, you'll need to incorporate the category ID and page number into the cache file name, like so:

$cache_filename = 'catalogue_' . $category_id . '_' .
    $page . '.cache';
if (!$catalogue = readCache($cache_filename, 604800)) {
  // ...display the category HTML...
}

This way, the correct cached content can be retrieved for every request.

Nesting Buffers
You can nest one buffer within another practically ad infinitum simply by calling ob_start more than once. This can be useful if you have multiple operations that use the output buffer, such as one that catches the PHP error messages, and another that deals with caching. Care needs to be taken to make sure that ob_end_flush or ob_end_clean is called every time ob_start is used.
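For instance (a minimal sketch, assuming no other buffers are active):

```php
<?php
ob_start();               // outer buffer
echo 'outer ';
ob_start();               // inner buffer, nested within the first
echo 'inner';
$inner = ob_get_clean();  // ends only the inner buffer
echo strtoupper($inner);  // this lands in the outer buffer
$outer = ob_get_clean();  // ends the outer buffer
echo $outer;              // prints 'outer INNER'
```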


How do I use PEAR::Cache_Lite for server-side caching?

The previous solution explored the ideas behind output buffering using the PHP ob_* functions. As we mentioned at the time, though, that approach probably isn't the best way to meet the dual goals of keeping your code maintainable and having a reliable caching mechanism. It's time to see how we can put a caching system into action in a manner that will be reliable and easy to maintain.


In the interests of keeping your code maintainable and having a reliable caching mechanism, it's a good idea to delegate the responsibility of caching logic to classes you trust. In this case, we'll use a little help from PEAR::Cache_Lite (version 1.7.2 is used in the examples here). Cache_Lite provides a solid yet easy-to-use library for caching, and handles issues such as: file locking; creating, checking for, and deleting cache files; controlling the output buffer; and directly caching the results from function and class method calls. More to the point, Cache_Lite should be relatively easy to apply to an existing application, requiring only minor code modifications.

Cache_Lite has four main classes. First is the base class, Cache_Lite, which deals purely with creating and fetching cache files, but makes no use of output buffering. This class can be used alone for caching operations in which you have no need for output buffering, such as storing the contents of a template you've parsed with PHP.

The examples here will not use Cache_Lite directly, but will instead focus on the three subclasses. Cache_Lite_Function can be used to call a function or class method and cache the result, which might prove useful for storing a MySQL query result set, for example. The Cache_Lite_Output class uses PHP's output control functions to catch the output generated by your script and store it in cache files; it allows you to perform tasks such as those we completed in "How do I cache just the parts of a page that change infrequently?". The Cache_Lite_File class bases cache expiry on the timestamp of a master file, with any cache file being deemed to have expired if it is older than the timestamp.

Let's work through an example that shows how you might use Cache_Lite to create a simple caching solution. When we're instantiating any child classes of Cache_Lite, we must first provide an array of options that determine the behavior of Cache_Lite itself. We'll look at these options in detail in a moment. Note that the cacheDir directory we specify must be one to which the script has read and write access:

cachelite.php (excerpt)

require_once 'Cache/Lite/Output.php';
$options = array(
  'cacheDir' => './cache/',
  'writeControl' => true,
  'readControl' => true,
  'fileNameProtection' => false,
  'readControlType' => 'md5'
);
$cache = new Cache_Lite_Output($options);

For each chunk of content that we want to cache, we need to set a lifetime (in seconds) for which the cache should live before it's refreshed. Next, we use the start method, available only in the Cache_Lite_Output class, to turn on output buffering. The two arguments passed to the start method are an identifying value for this particular cache file, and a cache group. The group is an identifier that allows a collection of cache files to be acted upon; it's possible to delete all cache files in a given group, for example (more on this in a moment). The start method will check to see if a valid cache file is available and, if so, it will begin outputting the cache contents. If a cache file is not available, start will return false and begin caching the following output.

Once the output for this chunk has finished, we use the end method to stop buffering and store the content as a file:

cachelite.php (excerpt)

$cache->setLifeTime(604800);
if (!$cache->start('header', 'Static')) {
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
  <head>
    <title>PEAR::Cache_Lite example</title>
  </head>
  <body>
    <h1>PEAR::Cache_Lite example</h1>
    <p>The header time is now: <?php echo date('H:i:s'); ?></p>
<?php
  $cache->end();
}

To cache the body and footer, we follow the same procedure we used for the header. Note that, again, we specify a five-second lifetime when caching the body:

cachelite.php (excerpt)

$cache->setLifeTime(5);
if (!$cache->start('body', 'Dynamic')) {
  echo 'The body time is now: ' . date('H:i:s') . '<br />';
  $cache->end();
}

$cache->setLifeTime(604800);
if (!$cache->start('footer', 'Static')) {
?>
    <p>The footer time is now: <?php echo date('H:i:s'); ?></p>
  </body>
</html>
<?php
  $cache->end();
}

On viewing the page, Cache_Lite creates cache files in the cache directory. Because we've set the fileNameProtection option to false, Cache_Lite creates the files with these names:

- ./cache/cache_Static_header
- ./cache/cache_Dynamic_body
- ./cache/cache_Static_footer

You can read about the fileNameProtection option--and many more--in "What configuration options does Cache_Lite support?". When the same page is requested later, the code above will use the cached file if it is valid and has not expired.

Protect your Cache Files
Make sure that the directory in which you place the cache files is not publicly available, or you may be offering your site's visitors access to more than you realize.

What configuration options does Cache_Lite support?

When instantiating Cache_Lite (or any of its subclasses, such as Cache_Lite_Output), you can use any of a number of approaches to controlling its behavior. These options should be placed in an array and passed to the constructor as shown below (and in the previous section):

$options = array(
    'cacheDir' => './cache/',
    'writeControl' => true,
    'readControl' => true,
    'fileNameProtection' => false,
    'readControlType' => 'md5'
);
$cache = new Cache_Lite_Output($options);


The options available in the current version of Cache_Lite (1.7.2) are:

cacheDir
This is the directory in which the cache files will be placed. It defaults to /tmp/.

caching
This option switches the caching behavior of Cache_Lite on and off. If you have numerous Cache_Lite calls in your code and want to disable the cache for debugging, for example, this option will be important. The default value is true (caching enabled).

lifeTime
This option represents the default lifetime (in seconds) of cache files. It can be changed using the setLifeTime method. The default value is 3600 (one hour); if it's set to null, the cache files will never expire.
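The expiry test behind this lifetime can be sketched in a few lines; this is an illustration of the concept, not Cache_Lite's actual internals:

```php
<?php
// Sketch of the lifetime test: a cache file is valid while its age
// (now minus its modification time) is under the lifetime in seconds.
function isValid($mtime, $now, $lifeTime)
{
    if ($lifeTime === null) {
        return true;                    // null lifetime: never expires
    }
    return ($now - $mtime) < $lifeTime;
}

var_dump(isValid(1000, 1000 + 3599, 3600)); // bool(true)  -- still fresh
var_dump(isValid(1000, 1000 + 3601, 3600)); // bool(false) -- expired
```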

fileNameProtection
With this option activated, Cache_Lite uses an MD5 hash to generate the filename for the cache file. This option protects you from errors when you try to use IDs or group names containing characters that aren't valid for filenames; fileNameProtection must be turned on when you use Cache_Lite_Function. The default is true (enabled).

fileLocking
This option switches the file locking mechanisms on and off. The default is true (enabled).

writeControl
With this option enabled, Cache_Lite checks that a cache file has been written correctly immediately after it has been created, and throws a PEAR::Error if it finds a problem. This facility allows your code to attempt to rewrite a cache file that was created incorrectly, but it comes at a cost in terms of performance. The default value is true (enabled).
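The write-then-verify idea can be illustrated directly with PHP's file functions; the file name here is hypothetical:

```php
<?php
// Sketch of the write-control idea: write the cache file, read it
// straight back, and compare. The file name is hypothetical.
$file = sys_get_temp_dir() . '/wc_demo_cache';
$data = 'fragment to cache';

file_put_contents($file, $data);
$ok = (file_get_contents($file) === $data); // re-read and verify

var_dump($ok); // bool(true) -- the write succeeded
unlink($file); // clean up the demo file
```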

readControl
This option checks any cache files that are being read to ensure they're not corrupt. Cache_Lite is able to place inside the file a value, such as the string length of the file's content, which can be used to confirm that the cache file isn't corrupt. There are three alternative mechanisms for checking that a file is valid, and they're specified using the readControlType option. These mechanisms come at the cost of performance, but should help to guarantee that your visitors aren't seeing scrambled pages. The default value is true (enabled).

readControlType
This option lets you specify the type of read control mechanism you want to use. The available mechanisms are a cyclic redundancy check (crc32, the default value) using PHP's crc32 function, an MD5 hash using PHP's md5 function (md5), and a simple and fast string length check (strlen). Note that this mechanism is not intended to provide security against people tampering with your cache files; it's just a way to spot corrupt files.
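To make the three mechanisms concrete, here's a small sketch that computes each check value directly with the PHP functions named above:

```php
<?php
// The three read-control values Cache_Lite can store in a cache file,
// computed with the same built-in PHP functions named above.
$content = '<h1>Cached page fragment</h1>';

$crc = crc32($content);    // cyclic redundancy check (the default)
$md5 = md5($content);      // strongest corruption check, but slowest
$len = strlen($content);   // fastest: a simple length comparison

// A truncated file fails even the cheapest check:
$corrupt = substr($content, 0, 10);
var_dump($len === strlen($corrupt)); // bool(false)
```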

pearErrorMode
This option tells Cache_Lite how it should return PEAR errors to the calling script. The default is CACHE_LITE_ERROR_RETURN, which means Cache_Lite will return a PEAR::Error object.

memoryCaching
With memory caching enabled, every time a file is written to the cache, it is also stored in an array in Cache_Lite. The saveMemoryCachingState and getMemoryCachingState methods can be used to store and access the memory cache data between requests. The advantage of this facility is that the complete set of cache files can be stored in a single file, reducing the number of disk read/write operations by reconstructing the cache files straight into an array to which your code has access. The memoryCaching option may be worth further investigation if you run a large site. The default value is false (disabled).

onlyMemoryCaching
If this option is enabled, only the memory caching mechanism will be used. The default value is false (disabled).

memoryCachingLimit
This option places a limit on the number of cache files that will be stored in the memory caching array. The more cache files you have, the more memory will be used up by memory caching, so it may be a good idea to enforce a limit that prevents your server from having to work too hard. Of course, this option places no restriction on the size of each cache file, so just one or two massive files may cause a problem. The default value is 1000.

automaticSerialization
If enabled, this option will automatically serialize all data types. While this approach will slow down the caching system, it is useful for caching nonscalar data types such as objects and arrays. For higher performance, you might consider serializing nonscalar data types yourself. The default value is false (disabled).
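For instance, serializing an array yourself before it goes into the cache (and unserializing it on the way out) looks like this; the array contents are invented for illustration:

```php
<?php
// Sketch: doing the serialization manually instead of enabling
// automaticSerialization. The array contents are invented.
$results = array('books' => array('The PHP Anthology', 'Simply JavaScript'));

$stored   = serialize($results);   // a string safe to write to a cache file
$restored = unserialize($stored);  // the original array again

var_dump($restored === $results);  // bool(true)
```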

automaticCleaningFactor
This option will automatically clean old cache entries--on average, one in x cache writes, where x is the value set for this option. Setting this value to 0 disables automatic cleaning, while a value of 1 will cause cache cleaning on every cache write. A value of 20 to 200 is the recommended starting point if you wish to enable this facility; it causes cache cleaning to happen, on average, 0.5% to 5% of the time. The default value is 0 (disabled).

hashedDirectoryLevel
When set to a nonzero value, this option will enable a hashed directory structure, which will improve the performance of sites that have thousands of cache files. If you choose to use hashed directories, start by setting this value to 1, and increase it as you test for performance improvements. The default value is 0 (disabled).
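The idea of a hashed layout can be sketched as follows; the exact directory-naming scheme here (an md5 of the file name, one more hex character per level) is our assumption for illustration, not Cache_Lite's documented behavior:

```php
<?php
// Sketch of a hashed directory layout; the exact scheme is an
// assumption made for illustration.
function hashedPath($cacheDir, $file, $level)
{
    $hash = md5($file);
    $path = $cacheDir;
    for ($i = 1; $i <= $level; $i++) {
        // each level adds a directory named after a longer hash prefix,
        // spreading thousands of files across many small directories
        $path .= 'cache_' . substr($hash, 0, $i) . '/';
    }
    return $path . $file;
}

// With level 0 the layout is flat, exactly as in the listing earlier:
echo hashedPath('./cache/', 'cache_Static_header', 0);
// ./cache/cache_Static_header
```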

errorHandlingAPIBreak
This option was added to enable backwards compatibility with code that uses the old API. When the old API was run in CACHE_LITE_ERROR_RETURN mode (see the pearErrorMode option earlier in this list), some functions would return a Boolean value to indicate success, rather than returning a PEAR::Error object. By setting this value to true, you can have the PEAR::Error object returned instead. The default value is false (disabled).

How do I purge the Cache_Lite cache?

The built-in lifetime mechanism for Cache_Lite cache files provides a good foundation for keeping your cache files up to date, but there will be some circumstances in which you need the files to be updated immediately.


In cases in which you need immediate updates, the methods remove and clean come in handy. The remove method is designed to delete a specific cache file; it takes as arguments the cache ID and group name of the file. To delete the page body cache file we created in "How do I use PEAR::Cache_Lite for server-side caching?", we'd use this code:

$cache->remove('body', 'Dynamic');

If we use the clean method, we can delete all the files in our cache directory simply by calling the method with no arguments; alternatively, we can specify a group of cache files to delete. The header and footer cache files we created in "How do I use PEAR::Cache_Lite for server-side caching?" both belong to the Static group, so we can delete them both like this:

$cache->clean('Static');

The remove and clean methods should obviously be called in response to events that arise within an application. For example, if you have a discussion forum application, you probably want to remove the relevant cache files when a visitor posts a new message.

Although it may seem like this solution entails a lot of code modifications, with some care it can be applied to your application in a global manner. If you have a central script that's included in every page, your script can simply watch for incoming events--for example, a variable like $_GET['newPost']--and respond by deleting the required cache files. This keeps the cache file removal mechanism central and easier to maintain. You might also consider using the php.ini setting auto_prepend_file to include this code in every PHP script.
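One way to sketch that central watcher is a simple map from incoming event parameters to the cache groups they invalidate; the event names and group names below are hypothetical examples:

```php
<?php
// Hedged sketch of a central cache-invalidation watcher. The event
// names and group names are hypothetical examples.
$eventsToGroups = array(
    'newPost'   => array('Dynamic'),           // forum message listings
    'newMember' => array('Dynamic', 'Stats'),  // member counts and lists
);

function groupsToClean(array $get, array $map)
{
    $groups = array();
    foreach ($map as $event => $eventGroups) {
        if (isset($get[$event])) {
            $groups = array_merge($groups, $eventGroups);
        }
    }
    // each group returned here would then be passed to $cache->clean()
    return array_values(array_unique($groups));
}

print_r(groupsToClean(array('newPost' => '1'), $eventsToGroups));
// outputs an array containing 'Dynamic'
```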

How do I cache function calls?

Many web sites provide access to their data via web services such as SOAP and XML-RPC. (You can read all about web services in Chapter 12.) As web services are accessed over a network, it's often a very good idea to cache results so that they can be fetched locally, rather than repeating the same slow request to the server multiple times. A simple approach might be to use PHP sessions, but as that solution operates on a per-visitor basis, the opening requests for each visitor will still be slow.


Let's assume you wish to create a web page that lists all the SitePoint books available on Amazon. The actual list is not likely to change from moment to moment, so why would we make the request to the Amazon web service every time the web page is displayed? We won't! Instead, we can take advantage of Cache_Lite by caching the results of the XML-RPC request.

Requires PEAR::SOAP Version 0.11.0
The following solution uses the PEAR::SOAP library version 0.11.0 to access the Amazon web service. You can find this package on the PEAR web site.

Here's some hypothetical code that fetches the data from the remote Amazon server:

$results = $amazonClient->ManufacturerSearchRequest($params);

Using Cache_Lite_Function, we can cache the results so the data returned from the service can be reused; this will avoid unnecessary network calls and significantly improve performance.

The following example code focuses on the caching aspect to prevent us from getting bogged down in the details of using the Amazon web service. You can see the complete script if you download this book's code archive from the SitePoint web site.

The Cache_Lite_Function requires the inclusion of the following file:

cachefunction.php (excerpt)

require_once 'Cache/Lite/Function.php';

We instantiate the Cache_Lite_Function class with some options:

cachefunction.php (excerpt)

$options = array(
    'cacheDir' => './cache/',
    'fileNameProtection' => true,
    'writeControl' => true,
    'readControl' => true,
    'readControlType' => 'strlen',
    'defaultGroup' => 'SOAP'
);
$cache = new Cache_Lite_Function($options);

It's important that the fileNameProtection option is set to true (this is in fact the default value, but in this case I've set it manually to emphasize the point). If it were set to false, the filename would be invalid, so the data would not be cached.

Here's how we make the calls to our SOAP client class:

cachefunction.php (excerpt)

$results = $cache->call('amazonClient->ManufacturerSearchRequest',
    $params);

If the request is being made for the first time, Cache_Lite_Function will store the results as a serialized array or object in a cache file (not that you need to worry about this), and this file will be used for future requests until it expires. The setLifeTime method can again be used to specify how long the cache files should survive before they're refreshed; currently, the default value of 3600 seconds (one hour) is being used. You can then use the $results variable exactly as if you were calling the web service method directly. The output of our example script can be seen in Figure 11.1.
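Conceptually, the call wrapper works like a lookup table keyed on the function name and arguments. Here's a simplified sketch of that idea (not the real Cache_Lite_Function implementation; it caches in memory rather than in files, and the helper name is invented):

```php
<?php
// Simplified sketch of the idea behind a function-call cache: key the
// result on the function name plus its serialized arguments.
$callCache = array();

function cachedCall($fn, array $args, array &$cache)
{
    $key = md5($fn . serialize($args));
    if (!isset($cache[$key])) {
        // slow path: actually perform the call
        $cache[$key] = call_user_func_array($fn, $args);
    }
    // fast path: repeat calls with the same arguments hit the cache
    return $cache[$key];
}

echo cachedCall('strtoupper', array('sitepoint'), $callCache);
// SITEPOINT
```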


Caching is an important and often overlooked aspect of web site development. Many factors that affect the performance of today's web sites weren't a problem for their predecessors--from complex, dynamic page generation, to a reliance on third-party data over the network. In this chapter, we've examined HTML meta tags, HTTP headers, PHP output buffering and PEAR::Cache_Lite, and we've seen how you can use them to control the caching of your web site content and improve the site's reliability and performance.

Implementing a caching system for your site might be simple, but ultimately, it depends on your requirements. If you have a busy and predominantly static web site--such as a blog--that's managed through a content management system, it will likely require little alteration, yet may benefit from huge performance improvements resulting from a small investment of your time. Setting up caching for a more complex site that generates content on a per-user basis, such as a portal or shopping cart system, will prove a little more tricky and time consuming, but the benefits are still clear.

Regardless, I hope the information in this chapter has given you a good grasp of the options available, and will help you determine which techniques are most suitable for your application. Don't forget to download this chapter, plus two others -- PDO and Databases, and Access Control -- to enjoy offline. For information on the contents of the book's other chapters, check out the full Table of Contents.