Landing page optimization

Landing Page Optimization (LPO, also known as web page optimization) is the process of improving a visitor’s perception of a website by optimizing its content and appearance in order to make it more appealing to the target audiences, as measured by target goals such as conversion rate.

Multivariate Landing Page Optimization (MVLPO) is Landing Page Optimization based on an experimental design.
There are three major types of LPO based on targeting:
Associative Content Targeting (also called ‘rules-based optimization’ or ‘passive targeting’). Modifies the content with information relevant to the visitors, based on search criteria, traffic source, geo-information of the source traffic, or other known generic parameters that can be used for explicit, non-research-based consumer segmentation.
Predictive Content Targeting (also called ‘active targeting’). Adjusts the content by correlating any known information about the visitors (e.g., prior purchase behavior, personal demographic information, browsing patterns, etc.) to anticipated (desired) future actions based on predictive analytics.
Consumer-Directed Targeting (also called ‘social’). The content of the pages could be created using the relevance of publicly available information through a mechanism based on reviews, ratings, tagging, referrals, etc.

LPO based on experimentation
There are two major types of LPO based on experimentation:
Close-Ended Experimentation exposes consumers to various executions of landing pages and observes their behavior. At the end of the test, an optimal page is selected that permanently replaces the experimental pages. This page is usually the most efficient one at achieving target goals such as conversion rate. It may be one of the tested pages or a page synthesized from individual elements never tested together. The methods include simple A/B split tests, multivariate (conjoint-based) testing, Taguchi methods, Total Experience Testing, etc.
Open-Ended Experimentation is similar to Close-Ended Experimentation with ongoing dynamic adjustment of the page based on continuing experimentation.
This article covers in detail only the approaches based on experimentation. Experimentation-based LPO can be achieved using the following most frequently used methodologies: A/B split testing, multivariate LPO (MVLPO), and Total Experience Testing. These methodologies are applicable to both close-ended and open-ended experimentation.
A/B Testing
A/B Testing (also called ‘A/B split testing’): a generic name for testing a limited set (usually two or three) of pre-created executions of a web page without the use of an experimental design. The typical goal is to try, for example, three versions of the home page, product page, or support FAQ page and see which version works better. The outcome of A/B testing is usually measured as click-through to the next page, conversion, etc. The testing can be conducted sequentially or concurrently. In sequential execution (the easiest to implement), the page executions are placed online one at a time for a specified period. Parallel execution (‘split testing’) divides the traffic between the executions.
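As a rough sketch of how a parallel split test might divide traffic, the TypeScript below assigns each visitor to one of two page versions at random and tallies conversions per version. The variant names and in-memory counters are illustrative only and not tied to any particular tool.

    // Minimal sketch of parallel A/B assignment and conversion tracking.
    // The variants and in-memory tallies are illustrative, not a real tool's API.
    type Variant = "A" | "B";

    const tallies: Record<Variant, { visitors: number; conversions: number }> = {
      A: { visitors: 0, conversions: 0 },
      B: { visitors: 0, conversions: 0 },
    };

    // Assign each incoming visitor to a variant at random (a 50/50 split).
    function assignVariant(): Variant {
      return Math.random() < 0.5 ? "A" : "B";
    }

    function recordVisit(variant: Variant): void {
      tallies[variant].visitors += 1;
    }

    function recordConversion(variant: Variant): void {
      tallies[variant].conversions += 1;
    }

    // Conversion rate per variant once enough traffic has accumulated.
    function conversionRate(variant: Variant): number {
      const { visitors, conversions } = tallies[variant];
      return visitors === 0 ? 0 : conversions / visitors;
    }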
Pros of A/B Testing:
Inexpensive, since it uses existing resources and tools
Simple: no heavy statistics involved
Cons of A/B Testing:
It is difficult to control all the external factors (campaigns, search traffic, press releases, seasonality) in sequential execution.
The approach is very limited and cannot give reliable answers for pages that combine multiple elements.
MVLPO
MVLPO structurally handles a combination of multiple groups of elements (graphics, text, etc.) on the page. Each group comprises multiple executions (options). For example, a landing page may have n different options of the title, m variations of the featured picture, k options of the company logo, etc.
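For example, three titles, two pictures, and two logos give 3 × 2 × 2 = 12 distinct page executions. The TypeScript sketch below, with hypothetical element groups and option labels, enumerates such a full set of combinations.

    // Sketch: enumerating all combinations of page element options.
    // The element groups and option labels are hypothetical.
    const elementGroups: Record<string, string[]> = {
      title: ["Title 1", "Title 2", "Title 3"], // n = 3
      picture: ["Picture 1", "Picture 2"],      // m = 2
      logo: ["Logo 1", "Logo 2"],               // k = 2
    };

    // Cartesian product of the option lists: n * m * k = 12 page executions.
    function combinations(groups: Record<string, string[]>): Record<string, string>[] {
      return Object.entries(groups).reduce<Record<string, string>[]>(
        (acc, [group, options]) =>
          acc.flatMap(partial => options.map(option => ({ ...partial, [group]: option }))),
        [{}],
      );
    }

    console.log(combinations(elementGroups).length); // 12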
Pros of Multivariate Testing:
The most reliable, science-based approach to understanding the customers’ minds and using that understanding to optimize their experience.
It has evolved into a quite easy-to-use approach in which not much IT involvement is needed. In many cases, a few lines of JavaScript on the page allow the vendors’ remote servers to control the changes, collect the data, and analyze the results.
It provides a foundation for a continuous learning experience.
Cons of Multivariate Testing:
As with any quantitative consumer research, there is a danger of GIGO (‘garbage in, garbage out’). You still need a clean pool of ideas sourced from known customer points or strategic business objectives.
With MVLPO, you usually optimize one page at a time, whereas the website experience for most sites is a complex, multi-page affair. For an e-commerce website, it is typical for the path from entry to a successful purchase to span around 12 to 18 pages; for a support site, even more.
Total Experience Testing
Total Experience Testing (also called 'Experience Testing') is a new and evolving type of experiment-based testing in which the entire site experience of the visitor is examined using the technical capabilities of the site platform (e.g., ATG, Blue Martini, etc.).[1]
Instead of actually creating multiple websites, the methodology uses the site platform to create several persistent experiences and monitors which one is preferred by the customers.
Pros of Experience Testing:
The experiments reflect the customer’s total experience, not just one page at a time.
Cons of Experience Testing:
You need a website platform that supports experience testing (for example, ATG supports this).
It takes longer than the other two methodologies.

Multivariate Landing Page Optimization (MVLPO)
The first application of an experimental design to website optimization was done by Moskowitz Jacobs Inc. in 1998 in a simulation demo project for the Lego website (Denmark). MVLPO did not become a mainstream approach until 2003-2004.

Execution Modes
MVLPO can be executed in a Live (production) Environment (e.g., Google Website Optimizer,[2] Optimost.com, etc.) or through a Market Research Survey / Simulation (e.g., StyleMap.NET).

Live Environment MVLPO Execution
In Live Environment MVLPO Execution, a special tool makes dynamic changes to the website, so the visitors are directed to different executions of landing pages created according to an experimental design. The system keeps track of the visitors and their behavior (including their conversion rate, time spent on the page, etc.) and, with sufficient data accumulated, estimates the impact of individual components on the target measurement (e.g., conversion rate).
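A minimal sketch of the visitor-assignment step is shown below; hashing a visitor ID keeps the assignment stable across repeat visits. The hashing scheme, visitor ID, and cell count are assumptions for illustration, not how any specific tool works.

    // Sketch: assigning a visitor to one experimental page execution (a design cell).
    // The cell count would come from the experimental design in use.
    function assignCell(visitorId: string, cellCount: number): number {
      let hash = 0;
      for (const ch of visitorId) {
        hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
      }
      return hash % cellCount;
    }

    console.log(assignCell("visitor-1234", 12)); // index of the page execution to serve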
Pros of Live Environment MVLPO Execution:
This approach is very reliable because it tests the effect of variations as a real-life experience, generally transparent to the visitors.
It has evolved into a relatively simple and inexpensive approach to execute (e.g., Google Website Optimizer).
Cons of Live Environment MVLPO Execution (applicable mostly to the tools prior to Google Website Optimizer):
High cost
Complexity involved in modifying a production-level website
The long time it may take to achieve statistically reliable data, caused by variations in the amount of traffic that generates the data necessary for the decision
This approach may not be appropriate for low-traffic, high-importance websites whose administrators do not want to risk losing any potential customers.
Many of these drawbacks are reduced or eliminated with the introduction of the Google Website Optimizer, a free DIY MVLPO tool that made the process more democratic and available directly to website administrators.

Simulation (survey) based MVLPO
Simulation (survey) based MVLPO is built on advanced market research techniques. In the research phase, the respondents are directed to a survey, which presents them with a set of experimentally designed combinations of the landing page executions. The respondents rate each execution (screen) on a rating question (e.g., purchase intent). At the end of the study, regression model(s) are created (either individual or for the total panel). The outcome relates the presence/absence of the elements in the different landing page executions to the respondents’ ratings and can be used to synthesize new pages as combinations of the top-scored elements optimized for subgroups, segments, with or without interactions.
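A minimal sketch of that final synthesis step follows, assuming element-level utilities have already been estimated by the regression; all element names and utility values below are invented for illustration.

    // Sketch: using element-level utilities (regression coefficients) to
    // synthesize the top-scoring page. All values are invented.
    const utilities: Record<string, Record<string, number>> = {
      title: { "Title 1": 0.4, "Title 2": 1.1, "Title 3": -0.2 },
      picture: { "Picture 1": 0.7, "Picture 2": 0.3 },
      logo: { "Logo 1": -0.1, "Logo 2": 0.5 },
    };

    // For each element group, pick the option with the highest estimated utility.
    function synthesizeTopPage(u: typeof utilities): Record<string, string> {
      const page: Record<string, string> = {};
      for (const [group, options] of Object.entries(u)) {
        page[group] = Object.entries(options).sort((a, b) => b[1] - a[1])[0][0];
      }
      return page;
    }

    console.log(synthesizeTopPage(utilities));
    // { title: "Title 2", picture: "Picture 1", logo: "Logo 2" }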
Pros of the Simulation approach:
Much faster and easier to prepare and execute (in many cases) compared to the live environment optimization
It works for low traffic websites
Usually produces richer and more robust data because of greater control over the design.
Cons of the Simulation approach:
Possible bias of a simulated environment as opposed to a live one
A necessity to recruit and optionally incentivise the respondents.
The MVLPO paradigm is based on an experimental design (e.g., conjoint analysis, Taguchi methods, etc.) that tests structured combinations of elements. Some vendors use a full factorial approach (e.g., Google Website Optimizer, which tests all possible combinations of elements). This approach requires very large sample sizes (typically many thousands) to achieve statistical significance. Fractional designs, typically used in simulation environments, require the testing of small subsets of possible combinations. Some critics of the approach raise the question of possible interactions between the elements of the web pages and the inability of most fractional designs to address this issue.
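A rough, purely illustrative calculation shows why full factorial designs demand large samples; the per-cell visitor count below is an assumption, not a standard.

    // Illustrative only: four element groups with three options each give 3^4 = 81 cells.
    const cells = Math.pow(3, 4);  // 81 page executions in a full factorial
    const visitorsPerCell = 200;   // assumed traffic needed per cell for a stable estimate
    console.log(cells * visitorsPerCell); // 16200 visitors overall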
To resolve these limitations, an advanced simulation method based on the Rule Developing Experimentation paradigm (RDE)[3] has been introduced. RDE creates individual models for each respondent, discovers any and all synergies and suppressions between the elements, uncovers attitudinal segmentation, and allows for databasing across tests and over time.

How Web Search Engines Work

Search engines are the key to finding specific information on the vast expanse of the World Wide Web. Without sophisticated search engines, it would be virtually impossible to locate anything on the Web without knowing a specific URL. But do you know how search engines work? And do you know what makes some search engines more effective than others?

When people use the term search engine in relation to the Web, they are usually referring to the actual search forms that search through databases of HTML documents initially gathered by a robot.

There are basically three types of search engines: those that are powered by robots (called crawlers, ants, or spiders), those that are powered by human submissions, and those that are a hybrid of the two.

Crawler-based search engines are those that use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags, and also follow the links that the site connects to, performing indexing on all linked Web sites as well. The crawler returns all that information to a central depository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed. The frequency with which this happens is determined by the administrators of the search engine.
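A minimal sketch of that crawl-and-index loop is shown below, assuming a modern JavaScript runtime with a global fetch; the page limit and regex-based link extraction are deliberate simplifications of what real crawlers do.

    // Minimal sketch of a crawler: fetch a page, store it, follow its links.
    async function crawl(seed: string, maxPages = 10): Promise<Map<string, string>> {
      const index = new Map<string, string>(); // URL -> page HTML (the central depository)
      const queue: string[] = [seed];
      while (queue.length > 0 && index.size < maxPages) {
        const url = queue.shift()!;
        if (index.has(url)) continue;          // skip pages already visited
        const html = await (await fetch(url)).text();
        index.set(url, html);
        // Follow the links the page connects to (very crude href extraction).
        for (const match of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
          queue.push(match[1]);
        }
      }
      return index;
    }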


Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.
Key Terms for Understanding Web Search Engines

spider trap
A condition of dynamic Web sites in which a search engine’s spider becomes trapped in an endless loop of code.

search engine
A program that searches documents for specified keywords and returns a list of the documents where the keywords were found.

meta tag
A special HTML tag that provides information about a Web page.

deep link
A hyperlink either on a Web page or in the results of a search engine query to a page on a Web site other than the site’s home page.

robot
A program that runs automatically without human intervention.

In both cases, when you query a search engine to locate information, you're actually searching through the index that the search engine has created; you are not actually searching the Web. These indices are giant databases of information that is collected and stored and subsequently searched. This explains why sometimes a search on a commercial search engine, such as Yahoo! or Google, will return results that are, in fact, dead links. Since the search results are based on the index, if the index hasn't been updated since a Web page became invalid, the search engine treats the page as still an active link even though it no longer is. It will remain that way until the index is updated.
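A toy inverted index makes this concrete: queries are answered from what was stored at crawl time, so a URL can keep appearing in results after the page itself has gone dead. The structure below is a simplification, not any particular engine's implementation.

    // Sketch of an inverted index: keyword -> set of URLs containing it.
    const invertedIndex = new Map<string, Set<string>>();

    function indexPage(url: string, text: string): void {
      for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
        if (!invertedIndex.has(word)) invertedIndex.set(word, new Set());
        invertedIndex.get(word)!.add(url);
      }
    }

    // A query is answered entirely from the index built at crawl time, not the live Web.
    function search(query: string): string[] {
      return [...(invertedIndex.get(query.toLowerCase()) ?? [])];
    }

    indexPage("https://example.com/a", "landing page optimization basics");
    console.log(search("optimization")); // ["https://example.com/a"]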

So why will the same search on different search engines produce different results? Part of the answer is that not all indices are going to be exactly the same; it depends on what the spiders find or what the humans submitted. But more importantly, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engines use to determine the relevance of the information in the index to what the user is searching for.

One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a Web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming increasingly sophisticated in its attempt to discourage what is known as keyword stuffing, or spamdexing.
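A minimal sketch of keyword-frequency scoring, which is just one signal among many; real engines also weigh where the keyword appears on the page and penalize stuffing.

    // Fraction of the page's words that match the keyword (higher = nominally more relevant).
    function keywordFrequency(text: string, keyword: string): number {
      const words = text.toLowerCase().split(/\W+/).filter(Boolean);
      const hits = words.filter(w => w === keyword.toLowerCase()).length;
      return words.length === 0 ? 0 : hits / words.length;
    }

    console.log(keywordFrequency("Landing page optimization improves the landing page", "landing")); // ~0.29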

Another common element that algorithms analyze is the way that pages link to other pages on the Web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking. Just as the technology is becoming increasingly sophisticated in ignoring keyword stuffing, it is also becoming more savvy to webmasters who build artificial links into their sites in order to build an artificial ranking.
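A toy version of such link analysis, in the spirit of PageRank, is sketched below; the damping factor, iteration count, and tiny link graph are conventional illustrative choices, not a description of any specific engine's algorithm.

    // Sketch of link-based "importance" scoring: pages linked to by many
    // (or by important) pages end up with higher scores.
    function linkScores(
      links: Record<string, string[]>,
      damping = 0.85,
      iterations = 20,
    ): Record<string, number> {
      const pages = Object.keys(links);
      const n = pages.length;
      let scores: Record<string, number> = {};
      for (const p of pages) scores[p] = 1 / n;             // start from a uniform score
      for (let i = 0; i < iterations; i++) {
        const next: Record<string, number> = {};
        for (const p of pages) next[p] = (1 - damping) / n; // base score for every page
        for (const page of pages) {
          const outLinks = links[page];
          if (outLinks.length === 0) continue;              // ignore dangling pages in this sketch
          for (const target of outLinks) {
            if (target in next) next[target] += (damping * scores[page]) / outLinks.length;
          }
        }
        scores = next;
      }
      return scores;
    }

    console.log(linkScores({ a: ["b"], b: ["a", "c"], c: ["a"] }));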