A/B testing

In web analytics, A/B testing (bucket testing or split-run testing) is a controlled experiment with two variants, A and B. It is a form of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare two versions of a single variable, typically by testing a subject's response to variant A against variant B and determining which of the two variants is more effective.

As the name implies, two versions (A and B) are compared which are identical except for one variation that might affect a user's behavior. Version A might be the currently used version (control), while version B is modified in some respect (treatment). For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even a marginal decrease in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements like copy text, layouts, images, and colors, but not always.

Multivariate testing or multinomial testing is similar to A/B testing, but may test more than two versions at the same time or use more controls. Simple A/B tests are not valid for observational, quasi-experimental, or other non-experimental situations, as is common with survey data, offline data, and other, more complex phenomena.

A/B testing has been marketed by some as a change in philosophy and business strategy in certain niches, although the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions. A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice. The benefits of A/B testing are held to be that it can be performed continuously on almost anything, especially since most marketing automation software now typically comes with the ability to run A/B tests on an ongoing basis. This allows for updating websites and other tools, using current resources, to keep pace with changing trends.


General test statistics

The "two-sample hypothesis test" is suitable for comparing two samples in which the sample is divided by two control cases in the experiment. Z-tests are appropriate to compare means under strict conditions of normality and known standard deviations. T-test students are appropriate to compare means in relaxed conditions when less assumed. The Welch t test assumes the fewest and therefore the most commonly used test in a two sample hypothesis test where the mean of the metric should be optimized. While the average variable to be optimized is the most common estimator option, others are regularly used.

For a comparison of two binomial distributions, such as click-through rates, one would use Fisher's exact test.
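
A minimal sketch of such a comparison in Python with SciPy (the click counts below are hypothetical):

```python
from scipy import stats

# Hypothetical 2x2 contingency table of clicks vs. non-clicks:
#              clicked   did not click
# variant A        120            4880
# variant B        150            4850
table = [[120, 4880], [150, 4850]]

odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.4f}")
```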


History

As with most fields, setting a date for the emergence of a new method is difficult because of the continuous evolution of a topic. Where a difference can be delineated is at the point when the switch was made from using assumed information about a population to tests performed on a sample alone. This work was done in 1908 by William Sealy Gosset when he altered the Z-test to create Student's t-test.

Google engineers ran their first A/B test in 2000, in an attempt to determine the optimal number of results to display on the search engine results page. The first test was unsuccessful due to glitches that resulted from slow loading times. A/B testing has since grown more sophisticated, but the underlying foundations and principles generally remain the same; in 2011, 11 years after Google's first test, Google ran more than 7,000 different A/B tests.



Sample email campaign

A company with a customer database of 2,000 people decides to create an email campaign with a discount code in order to generate sales through its website. It creates two versions of the email with different calls to action (the part of the copy that encourages customers to do something - in the case of a sales campaign, to make a purchase) and an identifying promotional code.

  • To 1,000 people it sends the email with the call to action stating, "Offer ends this Saturday! Use code A1",
  • and to another 1,000 people it sends the email with the call to action stating, "Offer ends soon! Use code B1".

All other elements of the emails' copy and layout are identical. The company then monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy the product), and the email using code B1 has a 3% response rate (30 of the recipients used the code to buy the product). The company therefore determines that in this instance, the first call to action is more effective and will use it in future sales. A more nuanced approach would involve applying statistical testing to determine whether the difference in response rates between A1 and B1 is statistically significant (i.e., whether the difference is likely to be real and repeatable rather than due to random chance).
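
One way to sketch such a check is with Fisher's exact test in Python with SciPy, applied to the counts above (using this particular test here is an illustrative assumption, not part of the original example):

```python
from scipy import stats

# Responses vs. non-responses for each email variant.
#            responded   did not respond
# code A1           50               950
# code B1           30               970
table = [[50, 950], [30, 970]]

odds_ratio, p_value = stats.fisher_exact(table)
# A small p-value (conventionally below 0.05) suggests the gap between
# the 5% and 3% response rates is unlikely to be due to chance alone.
print(f"p = {p_value:.4f}")
```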

In the example above, the purpose of the test is to determine which is the more effective way of encouraging customers to make a purchase. If, however, the aim of the test had been to see which email would generate the higher click-through rate - that is, the number of people who actually click through to the website after receiving the email - then the results might have been different.

For example, even if more of the customers receiving code B1 accessed the website, because the call to action does not state an end date for the campaign, many of them may feel no urgency to make an immediate purchase. Consequently, if the purpose of the test had simply been to see which email would bring more traffic to the website, then the email containing code B1 might have been more successful. An A/B test should have a defined, measurable outcome, such as the number of sales made, the click-through conversion rate, or the number of people signing up or registering.



Segmentation and targeting

A/B tests most commonly apply the same variant (e.g., a user interface element) with equal probability to all users. However, in some circumstances, responses to variants may be heterogeneous. That is, while variant A might have a higher response rate overall, variant B may have an even higher response rate within a specific segment of the customer base.

For example, the breakdown of response rates by gender might be:

  • Variant A: 5% overall (50 of 1,000); 2% among men (10 of 500); 8% among women (40 of 500)
  • Variant B: 4% overall (40 of 1,000); 5% among men (25 of 500); 3% among women (15 of 500)

In this case, we can see that while variant A has a higher response rate overall, variant B actually has a higher response rate among men.

As a result, the company might choose a segmented strategy based on the A/B test, sending variant B to men and variant A to women in the future. In this example, a segmented strategy would yield an increase in the expected response rate from 5% = (40 + 10) / (500 + 500) to 6.5% = (40 + 25) / (500 + 500), representing a 30% improvement.

It is important to note that if segmented results are expected from the A/B test, the test should be properly designed at the outset to be evenly distributed across key customer attributes, such as gender. That is, the test should both (a) contain a representative sample of men versus women, and (b) assign men and women randomly to each variant (variant A vs. variant B). Failure to do so could lead to experimental bias and inaccurate conclusions being drawn from the test.

This segmentation and targeting approach can be further generalized to include multiple customer attributes rather than a single customer attribute - for example, customers' age and gender - to identify more nuanced patterns that may exist in the test results.
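
As a minimal sketch (in Python, reusing the hypothetical counts from the gender breakdown above), a segmented strategy can be derived by picking the better-performing variant within each segment and computing the expected blended response rate:

```python
# Hypothetical (responses, sends) per segment and variant,
# taken from the gender breakdown above.
results = {
    "men":   {"A": (10, 500), "B": (25, 500)},
    "women": {"A": (40, 500), "B": (15, 500)},
}

total_responses = 0
total_sends = 0
for segment, variants in results.items():
    # Choose the variant with the higher response rate in this segment.
    best = max(variants, key=lambda v: variants[v][0] / variants[v][1])
    responses, sends = variants[best]
    total_responses += responses
    total_sends += sends
    print(f"{segment}: send variant {best} ({responses / sends:.1%})")

# Expected blended rate: (25 + 40) / (500 + 500) = 6.5%
print(f"expected response rate: {total_responses / total_sends:.1%}")
```

In practice, such post-hoc segment choices should themselves be validated on fresh data, since testing many segments inflates the chance of false positives.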



Reception

Many companies use the "designed experiment" approach to making marketing decisions, in the hope that relevant sample results can improve positive conversion outcomes. It is an increasingly common practice as the tools and expertise in this area grow. There are many A/B testing case studies showing that the practice of testing is increasingly popular with small and medium-sized businesses as well.



See also

  • Adaptive control
  • Choice modelling
  • Multi-armed bandit
  • Multivariate testing
  • Test statistic



