Why A/B testing of web design fails

by Urs E. Gattiker on 2009/12/17 · 59 comments 22,316 views

in analytics taking action, why benchmark failures, marketing 101 style matters

Image - tweet by ComMetrics - social networks: who you follow matters. Build on similarities, benefit from differences + connect across borders #metrics

2009-12-24 – more about A/B testing and measurement: Using great visuals = Failing your customers

How do you know when you have a good website design? Is it what you believe is good design, or is it based on your customers’ needs? If it is the latter, did you use A/B testing or ask your clients for feedback to make sure?

An A/B test compares two versions of a web page, with version A usually being the existing (control) page and version B the alternative.

The winning page, based on visitor responses, becomes the control page in a follow-up test against yet another alternative.

Here we share some tips and tricks for A/B testing and how you can more effectively leverage this approach for your own needs. Also, keep an eye out for next week’s post, about a case study for applying the concepts below.

1. A/B testing does not guarantee the best solution for design and copy

A/B tests can tell you how the bounce rate of visitors shown design A compares to that of design B (the bounce rate is the percentage of visitors who leave your website after viewing only the landing page; a 100 percent bounce rate means every visitor looked at the first page and then left your site). The page with the lower bounce rate then becomes the control page in the follow-up A/B test.
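As a minimal sketch, the bounce-rate comparison just described might be computed from per-session pageview counts like this (the function name and sample data are illustrative, not from the post):

```python
# Sketch: compare bounce rates for two design variants.
# A "bounce" is a session that viewed only the landing page.

def bounce_rate(session_pageviews):
    """Fraction of sessions with exactly one pageview."""
    if not session_pageviews:
        return 0.0
    bounces = sum(1 for views in session_pageviews if views == 1)
    return bounces / len(session_pageviews)

# Hypothetical per-session pageview counts for each variant.
sessions_a = [1, 3, 1, 2, 1, 5, 1, 1]   # design A: 5 of 8 sessions bounced
sessions_b = [2, 4, 1, 3, 2, 1, 6, 2]   # design B: 2 of 8 sessions bounced

rate_a, rate_b = bounce_rate(sessions_a), bounce_rate(sessions_b)
control = "B" if rate_b < rate_a else "A"
print(f"A: {rate_a:.1%}, B: {rate_b:.1%} -> next control: design {control}")
```

Real session data would of course come from your analytics tool; the point is only that the variant with the lower rate becomes the next control.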

A/B testing works best for projects with an all-important KPI (Key Performance Indicator) that can be measured by counting simple user actions. Examples include registering for a product or making a purchase on an e-commerce site. Unfortunately, things are rarely that simple.
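For such a single, countable KPI, one common way to check that the difference between versions is more than chance is a two-proportion z-test. A rough, self-contained sketch (the signup counts are invented for illustration):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in two conversion rates
    (normal approximation with a pooled proportion)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 120 signups out of 2,000 visitors on A, 156 out of 2,000 on B.
z, p = two_proportion_z(120, 2000, 156, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests a real difference
```

With small samples or rare conversions the normal approximation is shaky; an exact test (e.g. Fisher's) is then the safer choice.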

Fallacy: Believing that A/B testing alone will assure that you end up with the best design. In fact, you may simply end up with the less bad of the two.

2. Permission-based exit surveys suffer from bias
A/B tests often do not give a complete picture. More information can be gained by asking a few questions of groups that look at your design options.

For instance, one can use a permission-based exit survey. Pose three to five questions, such as:

    - What did you like about this design (please explain)?
    - What did you not like about this design (please explain)?
    - What made navigating the site more difficult (please explain)?

Fallacy: Believing that people who really know the answers will take the time to respond.

Unfortunately, this non-response bias may lower data validity and, therefore, cause one to make decisions that may be wrong in the long term.

3. Qualitative assessment = talk to clients about design
It seems obvious that, to address the weaknesses mentioned in points 1 and 2, one should speak with clients about the design.

For instance, does your headline tell the average non-geek what the site is really about or are you using too much jargon? An A/B test might suggest that design A is preferred, when in fact it is simply B’s confusing headline that causes people to reject it. If you fix the headline, design B might suddenly be preferred by most users.

If about 30 percent of the test group suggest a change that will improve usability for them, take this FREE advice seriously. It could be that another 20 percent of the sample mention this small change as having improved usability during the next round of tests, without being prompted.

Image - tweet by ComMetrics - Next blog post: http://ComMetrics.com Why A/B tests fail to bring home the beacon? Care to comment? #abtests #measure #SMM

Of course, this is an exercise in checks and balances. To illustrate, I tweeted about possible titles for this blog post (see image at right). Unfortunately, what I felt was a great headline for stirring interest and getting more traffic failed the acid test. I got several messages suggesting that the title may not be appreciated by vegetarians…

Fallacy: Believing that talking to customers or other knowledgeable resources takes less time and money than A/B testing. Interviewing people and evaluating notes/data carefully takes time. Sometimes reaching people by phone can be difficult, further extending the time period needed to move from draft to final design. Be patient!

Bottom line
A/B testing is important and works best for clearly defined problems and tasks. However, the Web 2.0 environment requires that such tests, which use simple metrics, be supplemented by qualitative data. The latter provides the all-important insight needed to adequately respond to clients’ usability issues.

Take-aways
There are some crucial things to remember when applying A/B testing in conjunction with qualitative assessment.

    1. 30 percent rule. If that many interviewed clients or experts want something, do it. Every 30 percent you satisfy with a key element that is integrated into your design will increase relevant traffic – guaranteed.
    2. Designers do not automatically know what clients crave. They have an agenda, if not a preference, but one needs to make sure that YOUR clients are happy with what the designer believes looks best.
    3. The US is not the world – LOL (laughing out loud). We tend to forget that we need to provide good usability to people from vastly different backgrounds and cultures. Hence, make sure that your usability and/or A/B testing includes customers representing your key markets based on country, age group, education level, occupation and gender.

Please, leave a comment! We love to hear your thoughts: how do you use A/B testing, and when does it fail for you? Here is a chance for anyone with first-hand knowledge (this means you!) to share your insights.

Special thanks to Deni Kasrel, who got me to write this post.

P.S. – You can get updates on this blog through Twitter, by following @ComMetrics, or through a free RSS subscription.


  • arunpoojari

    The biggest advantage, and unfortunately also the disadvantage, I see is the ease of data analysis: the data is clearly laid out in two parts, but it does not have the depth to address the functionality of evolved web technologies. This kind of testing has limited usability and is a good starting point rather than an end.

    The web has evolved into a richer medium and also spans different platforms. In the late 1800s, if people had been asked what they wanted to move from one place to another, most would have said faster horses; not many would have suggested a car. Given the progress of technology and the usage of the web, A/B testing solves a fraction of the problem.

    • http://My.ComMetrics.com Urs E. Gattiker

      Thanks @arunpoojari

      Agreed, A/B testing solves a fraction of the problem(s), but besides qualitative data there is not much else we can do, unless we don't mind running down the wrong alley.

      Also, I may not necessarily like what the designer comes up with. However, if my clients do, this is the way I have to go.

      Finally, you suggest more data is key. My experience is slightly different, in that I find too much data available, but the wrong kind instead of actionable metrics.

      Thanks for sharing.

      • arunpoojari

        Dear Urs

        completely agree with your point .. this is a good reference point to start.

        i will try and share some work we have done around A/B testing

        cheers / Arun

  • http://www.authenticresponse.com/ Jonathan Tice

    I've employed A/B testing in a number of different marketing and product management roles, and it is helpful, but, as your article points out, it is only as good as the two options presented. One thing I have found is that your own perceptions going into an A/B test are sometimes your worst enemy, since it's hard to have perspective on ideas you have authored, and your own biases are likely to be played out in the options you're testing. Though similar to your findings, it's an important nuance, because you mention supplemental data rather than research before you begin testing.

    The best option I think is to let qualitative and directional research be your compass from the beginning. When doing directional research it's important to cast a wide net so that you don't eliminate potentially successful ideas. So, for instance, I discussed a new portable laundry product being released by a leading cleaning brand recently at a research conference. Their target audience was “on the go soccer moms”, but they later found that travelling businessmen and the elderly were both potential customer segments. If they had gone to market without this information, whatever tests they did would have been made assuming a smaller market, and they would have limited the potential revenue of their product offering. During a qualitative exercise, if possible, it's a good idea to ask the participants to brainstorm themselves about the various options of the upcoming test with as many variables as is feasible (using online qualitative tools, the number of variables can be quite large). A card sort or collage-type exercise would be one example of this.

    If you have limited time or resources and are not able to conduct upfront research as described, it's ideal to again cast a wide net and expand on the number of variables (i.e. doing an A/B/C/D test).

    At the end of the day, there are limits to A/B testing but it is an effective way to decide between limited options (another example would be Max Diff). If one knows this and can answer other business questions using different research and testing methods, that is really the best course of action.

    Jonathan

    Jonathan Tice
    VP, Regional Sales and Marketing
    Authentic Response, Inc.

    • http://My.ComMetrics.com Urs E. Gattiker

      Jonathan

      Thanks so much for this thoughtful comment. I especially like your point that a market segment could be larger than the brand might originally have considered it to be (e.g., “soccer moms on the go”, travelling sales reps and the elderly).

      I had a similar case recently when I discussed A/B testing with a luxury brand. The C-suite had thought that their users were all in a certain age bracket. But we went out and informally discussed this with some potential clients — much younger ones than those currently being targeted.

      We were all surprised to learn that teenagers were prepared to spend some money to get a certain kind of bag …. if …

      This is, of course, an interesting market segment that the luxury brand had ignored so far. Again A/B testing would not have told us about this new opportunity nor how social media could be used to reach this much younger audience. Only qualitative data and focus interviews provided us with this valuable bit of information.

      Agreed, with all this budget is a big issue and can never be ignored when testing designs.

      Jonathan, thanks for sharing this important information with me.

      Urs

  • http://www.catseyemarketingblog.com/ Judy Dunn

    Urs,

    Good, actionable stuff here. A/B testing has been used forever in direct mail copywriting and guides many of the decisions in headlines, graphics, etc. But designers in particular (and sometimes writers) live in right-brain worlds and don't always put on that scientist/tester's hat.

    I agree with you that the problem of any testing—qualitative or quantitative—can be the population you end up getting the data from. So who chooses to respond can actually skew the results. Your tweeting of various headlines and the response about one being offensive to vegetarians is a perfect example.

    I like the “30 percent” rule. That does seem that it would be a change worth considering if that many people suggest it.

    I like to think that A/B testing is just one of many measurement tools in one's arsenal. Thanks for giving us a focus and perspective on this important tool.

    • http://My.ComMetrics.com Urs E. Gattiker

      Judy

      Thanks for this comment – yes, you are right.

      A/B testing is one of many tools we have, including design reviews, use of focus groups and so forth.

      Naturally, all in the hope that we will hit the mark.

      Thanks so much for sharing.

  • http://rexduffdixon.com/ RexDixon

    Have you shared some of these examples of A/B testing failures with http://www.abtests.com ? Testing isn't all about success; the failures can help as well and create a great discussion.

    • http://My.ComMetrics.com Urs E. Gattiker

      Rex thanks for leaving a comment and replying to my e-mail to you.

      I wanted to point out that while I find the abtests.com site interesting it suffers from some of the usual weaknesses. For instance, A/B tests are not just based on quantitative data…. In fact, as we point out in the above post, the telephone and person-to-person interviews we did revealed more insights than the classical A/B tests (i.e. show two designs and let people choose which one they like the most).

      In fact, in the global marketplace this is especially important. For instance, most of our clients like our light background but in some countries white is not the ideal color (e.g., China).

      Also, if your group of users dislikes a design because of a small thing, the best way might be just removing that obstacle and seeing what happens. I outlined above that in several instances what we thought was superior failed miserably after one or two small tweaks were made to the 'less liked design.'

      Often these things cannot be revealed by a simple yes-or-no question; it requires some probing by the interviewer to get at the root of the problem.

      Hence, while A/B design is wonderful, it requires a more careful research design than I see used often. For instance, control groups may have to be used as well including interviewing and observation methods. These techniques might reveal things one never would have discovered otherwise.

      Hence, creating a perfect landing page should go beyond a simple A/B test, would you not agree?

      Rex, thanks for sharing.

  • Pingback: Apel Mjausson

  • John Hodson

    In my opinion you have listed other forms of research as supplements to stand alongside A/B testing; however, I believe that these are all tools to inform the designers and the decision-makers and to drive what to test.

    Of course a design that was developed with strong research should perform well; but that is what A/B testing is for. The test verifies that the research has been properly applied to the design and that it works for the masses.

    Please don't muddy the waters and make it sound as if A/B testing needs to be supplemented, because it doesn't if it was properly designed with the right metrics assigned.

    It's the simple scientific method of verifying the hypothesis, and you need all of the other research that you described to feed the hypothesis; but in the end the results of the test will speak for themselves.

    • http://My.ComMetrics.com Urs E. Gattiker

      John

      Thanks for your comment. The one statement that got me thinking for sure was:

      “The test verifies that the research has been properly applied to the design and that it works for the masses.”

      I agree wholeheartedly with what you are saying but I must also point out that it must work for my targeted audience…. and that may not be the masses but those 1000 or 10,000 people I want to reach.

      Unfortunately, I still feel that the greatest weakness with A/B testing is that it will not give us insights about _why people dislike something_ more than another design.

      Without understanding the why, however, it is difficult to improve the design. To illustrate, based on our kind of clients it is obvious that they prefer a more conservative design. As well, they appear not to want to be confused by too many bells and whistles.

      But some of these things we found out only by interviewing them and asking these people to explain to us what made a design less useful for them than another they had chosen.

      But I agree, without A/B testing we would never have gotten that far in the first place. But I hope you can also agree that A/B testing by itself may send you down a dead end. In other words, it is better to use a multi-method approach (e.g., A/B testing and interviews) than a single-method approach to find the best design for your webpage or blog.

      John, thanks so much for sharing, and I look forward to your next comment.

  • dancj

    Hi Urs,

    Thanks for an interesting and thought-provoking post, especially as I thought you almost asserted that A/B testing causes bad web design. It got my attention and made me read on.

    I think you, and the comments on your post, make some useful points. A/B testing is a tool and in itself won't necessarily lead to better design, just less sub-optimal design.

    However, I think where we would agree is that A/B testing is good for testing hypotheses as to whether or not a particular page can be improved by another variation(s). In my view, web analytics taken in its broadest sense – including qualitative research – should encourage the development of hypotheses.

    A/B testing then provides a means to quantitatively prove that A beat B or vice versa. It is the quality of one's hypotheses, and the richness of the research used to generate them, that will ultimately determine how useful an A/B programme is.

    Dan Croxen-John
    Applied Web Analytics

    • http://My.ComMetrics.com Urs E. Gattiker

      Dan

      Thanks so much for sharing this comment with our readers and me.

      I certainly agree that the quality of your hypotheses is an important matter and will help one figure things out with A/B testing. In fact, what I hear you pointing out is that if you start off with a bad research design, using A/B testing will not be as helpful as it could be if one had done one's homework better.

      But even then, A/B testing can only confirm or disconfirm one's hypotheses. I still found that with our clients (corporate suits) coming from different countries (many of whose mother tongues are not English), A/B testing was not sufficient by itself.

      And yes, maybe it just means we don't know enough about A/B testing… but what the phone and person-to-person interviews told us is that in some cases, throwing the baby out with the bathwater was not needed. Instead, a small tweak here or there made a design that had lost against another in A/B testing win the contest.

      Dan, I hope you comment again soon because your thoughts surely enrich the discussion and help me sharpen my own ideas.

  • paraschopra

    Agreed that A/B testing alone won't take you far – it is not a magical potion that you can drink and see your conversion rates going through the roof. What A/B testing provides is a scientific methodology to check if your hypotheses are correct. As an example, you interview your customers and they say do *this* particular change and your website may be much better.

    You have this interesting piece of information which you can directly apply or you can A/B test against your existing version.

    I prefer the latter case because no proof is stronger than an idea succeeding in the wild (and not just in the customer's mind).

    Again, A/B testing is a methodology – you have to wiggle your grey cells in order to come up with hypotheses and usability tests, feedback, expert opinion can be a great source for those hypotheses.

    • http://My.ComMetrics.com Urs E. Gattiker

      Thanks so much for the reply.

      But maybe I was not clear in my explanation, we used both A/B testing and then interviews to find out more.

      A/B testing is not an exact science, so I am not sure the word “proof” is correct in this context. Maybe what we can say is that one failed to reject the null hypothesis (Design 1 and Design 2 are the same) or one was able to reject it.
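      As a sketch of this framing, the null hypothesis “Design 1 and Design 2 are the same” can be checked with a simple chi-square goodness-of-fit test on preference counts (the counts below are invented for illustration):

```python
def chi_square_equal_preference(pref_1, pref_2):
    """Chi-square statistic vs. H0: both designs are equally preferred."""
    expected = (pref_1 + pref_2) / 2
    return sum((obs - expected) ** 2 / expected for obs in (pref_1, pref_2))

stat = chi_square_equal_preference(64, 36)  # 64 vs 36 stated preferences
reject_h0 = stat > 3.841                    # critical value: alpha 0.05, 1 df
print(f"chi2 = {stat:.2f}, reject H0: {reject_h0}")
```

      Even when H0 is rejected, the test only says the designs differ, not why; that is the gap the interviews are meant to fill.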

      In that regard, I describe how we use additional information to get a better handle on the reason why Design 1 was favored over Design 2.

      What we found was that sometimes it is a little thing, such as the font, that leads to the rejection or acceptance of a design. Re-testing with a different font may then result in Design 2 being favored over Design 1. Put differently, just because the font is not ideal in Design 2 does not mean it is weaker than Design 1. A classical A/B test will not tell us why people preferred Design 1; after changing the font, things may be different.

      In our case, why throw the baby out with the bathwater when a small tweak or adjustment can allow us to use a possibly better design that was rejected in a classical A/B test?

      I hope that helps clarify my post above and I appreciate you taking the time to share your experiences and thoughts. Look forward to your next comment on our blog. Thanks again and have a nice weekend.

  • Pingback: Jason Cohen

  • Pingback: Seth Simonds

  • Pingback: Smashing Magazine

  • Pingback: Stoyan Shishev

  • Pingback: Stoyan Shishev

  • Pingback: Vincent Nguyen

  • Pingback: Pedro Bachiega

  • Pingback: Gee Ranasinha

  • Pingback: Jane Oxley

  • edgaruy

    I believe A/B tests can be useful only if you limit the variables that you test and balance them with what you have suggested above. Or maybe measure the performance of a page based on a single KPI?

    • http://My.ComMetrics.com Urs E. Gattiker

      Dear Edgaruy

      Thanks so much for your comment.  I agree with what you state and would add that successful A/B testing requires that we control for:

      - mediating AND
      - moderating

      variables.  Failure to do so will cause us a lot of problems because we may no longer be sure what caused what.

      Another concern I have with much of the A/B testing is that whilst there may be a correlation between factor X and factor Y, it does not mean that X causes Y. In other words, just because people dislike the color yellow does not mean this causes them to dislike design A and thereby choose B instead. It is possible that something much more important triggered this decision.

      Accordingly, unless we dig deeper we will not find out what causal relationships might exist. In turn, A/B testing is not a cure-all, as many people might want us to believe. It takes a bit more to get the design or web shop that serves your clients' needs best (PS: consider cross-cultural differences as well).

      Thanks so much for sharing. Please have a look at this blog post where I try to further explore these important issues you raised.

      Using great visuals = Failing your customers

  • Pingback: Alice L

  • Pingback: irfaan

  • Pingback: Kurren

  • Pingback: Angie McKaig

  • Pingback: John Roy

  • http://www.WiderFunnel.com/ Raquel Hirsch

    Glad to see vigorous debate on the topic!

    However, with respect, I think you are totally missing the point of A/B/n (and multivariate, for that matter) testing.

    If you define a conversion objective, testing will allow you to find a variation that improves your conversion rate, no doubt about that. It may not “guarantee the best solution for design and copy” – but you are very likely to get an improvement on what you currently have – before you run your next test and improve on it! More info here http://www.widerfunnel.com/our-process/ab-split

    If you “ask your clients for feedback” only outliers will respond. Do you want to hear nice things or do you want to convert more web visitors and increase revenues?

    “A/B tests can tell you how the bounce rate of those shown design A compares to design B” – the bounce rate is a poor metric of success. Infinitely better to use a tool like GWO and measure actual conversion actions.

    To succeed in a Conversion Optimization strategy you need an air-tight process. Here is an example of one: http://www.widerfunnel.com/our-process/website-

    Interested in case studies? Go here: http://www.widerfunnel.com/proof/case-studies

    Thanks,

    Raquel Hirsch
    WiderFunnel Marketing Optimization

    • http://My.ComMetrics.com Urs E. Gattiker

      Raquel

      Thanks for your comment, I really appreciate the feedback. I am not sure I am missing the point of A/B testing totally, although if I am, I have done it wrong for the last 20 years, and that would be terrible for our clients. I think some of your comments may have been triggered by my lack of communication skills.

      Hence, let me make an attempt to clarify some issues. You state:
      “If you define a conversion objective, testing will allow you to find a variation that improves your conversion rate, no doubt about that. It may not “guarantee the best solution for design and copy” – but you are very likely to get an improvement on what you currently have – before you run your next test and improve on it! “

      My response would be that it is not always possible to assess things as a simple challenge of improving the conversion rate. For instance, designing a website like ours does not mean that all we want is for people to sign up for http://My.ComMetrics.com (upper right button); we also want them to read the material and do a few more things.

      “If you “ask your clients for feedback” only outliers will respond. Do you want to hear nice things or do you want to convert more web visitors and increase revenues? “

      I disagree that only outliers will respond. If you have created a level of trust with your clients and associates, call them up, visit them and do a structured interview to find out why they made the choices they did.

      The above structured interviewing will reveal a ton of information that simple A/B testing (multivariate or not) will never give you. This does not, of course, as you point out as well, absolve you from designing a methodologically sound A/B test in the first place. We surely agree on this one.

      “”A/B tests can tell you how the bounce rate of those shown design A compares to design B” – the bounce rate is a poor metric of success. Infinitely better to use a tool like GWO and measure actual conversion actions.”

      Totally agree with your statement above…. I apologise if my comment might have suggested that I rely on the bounce rate only.

      I especially recommend your case studies

      ===> http://www.widerfunnel.com/proof/case-studies

      Thanks, Raquel, for sharing your important insights and giving me a chance to clarify my points despite my lack of communication skills.

      • http://www.WiderFunnel.com/ Raquel Hirsch

        Urs, you make some excellent points to clarify. Thank you.

        Usability is a very broad concept, and we stay very focused on conversion optimization… so we use usability concepts to generate hypotheses worth testing. Your clarification has helped me see this more clearly.

        Best regards,

        Raquel Hirsch

  • Pingback: Barbara Nowacka

  • Pingback: CodeMyDesigns.com

  • Pingback: Sherry Holub

  • Pingback: IS5103 Web Tech

  • http://My.ComMetrics.com Urs E. Gattiker

    Raquel

    Thanks and I look forward to your next insightful comment on our blog. Always helps to get people like you to ask the 'real' questions and point out some weaknesses in one's reasoning. Helps my learning a great deal. Thank you.

  • Pingback: ComMetrics weekly review: Hollyoaks excels as Google stumbles

  • Pingback: 4 strategies to leverage usability tests

  • Pingback: Measuring social media to boost ROI
