
7 Ways to Make Your Google Search Result Stand Out


Getting listed on the first page of Google is an incredible accomplishment. Your efforts shouldn't stop there, however.

A plain listing without all the bells and whistles won't drive the traffic it used to, because users tend to click the result that stands apart from the rest. The search engine competition is fierce, so don't get left behind while everyone else makes the changes necessary to get noticed.

Consider how you would approach building the perfect web page. You might fine-tune your headings and messaging, include images, and add resources to increase conversions. Conversion rate optimization (CRO) is just as important in the search engine results pages (SERPs).

Here are a few tips that should help your Google search result attract the most attention.


1. Unique, Relevant Page Titles

Use as many characters as possible. That's prime real estate!

A good character range to work with is 50-65 characters; Google's search results truncate titles right around the 65-character mark.

The pixel width of your title tag should also be considered, because titles with wide letters such as A and W won't fit as many characters. Google will completely ignore anything past the 70-character mark and may report a "Long title tag" error in Google Webmaster Tools.

Your page title should be unique and relevant to the topic of your page. Include your targeted key phrases as close to the beginning as possible and, if it applies, localize your page title.
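As a quick illustration, here's a minimal sketch of a title tag that follows these rules; the key phrase, city, and business name are invented for the example:

```html
<head>
  <!-- ~40 characters: targeted key phrase first, localized, brand last -->
  <title>Web Design Boise, ID | Acme Design Co.</title>
</head>
```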

For more tips on titles, see "How to Write Title Tags for Search Engine Optimization" and "Ecommerce Title Tags: Top 5 Ways to Increase Clicks".

2. Search Engine Friendly URLs

If your domain name has your targeted keyword(s) in it, that's a bonus. However, don't go out of your way to buy an exact-match domain and get caught up in the exact-match domain (EMD) algorithm.

Use search engine friendly file names and permalink structures, such as /awesome-web-page/, to make your listing more relevant. Query-string URLs such as ?pageid=246 say nothing to the user and won't help your rankings.
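If your CMS only produces query-string URLs, one common fix is a server rewrite so the friendly permalink serves the same page. A minimal sketch for Apache's mod_rewrite, assuming a hypothetical pageid parameter like the one above:

```apache
# .htaccess sketch: serve /awesome-web-page/ from the real query-string URL
RewriteEngine On
RewriteRule ^awesome-web-page/?$ index.php?pageid=246 [L,QSA]
```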

3. Enticing Meta Descriptions With a Call-to-Action

Your meta descriptions won't have an effect on your rankings; however, this is your chance to encourage searchers to click your link over your competition's.
Your descriptions should range from 100 to 150 characters, so use them wisely. Anything shorter or longer might result in an error reported in Google Webmaster Tools.

Describe what the page or service is about and then ask the user to do something with a call-to-action (CTA). In my example, I include my phone number and mention that we pick up the phone (you'd be surprised how many companies don't).
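For instance, here's a hedged sketch of a description with a CTA baked in; the copy and phone number are made up:

```html
<!-- ~110 characters: within the 100-150 range, ends with a CTA -->
<meta name="description" content="Custom web design that converts. Call (208) 555-0100 - we actually pick up the phone. Get a free quote today!">
```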

Have you ever wondered what would happen if you had multiple page title tags in the head of your webpage? How about multiple meta description tags? Check out this Google SERP experiment.

4. Google Authorship

Google authorship is one of the most effective ways to make your search result stand out from the others, especially if nobody else in your niche is doing it. If you Google anything SEO related, the SERPs are peppered with faces; however, if you're in an industry that is less tech savvy, like construction or real estate, you could likely be the only one with your face in the search results.

Authorship is pretty easy to implement. If you haven't already claimed your content with Google authorship, follow this tutorial to find out how.
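The markup itself is a single link element pointing at your Google+ profile, per Google's documentation at the time; the profile ID below is a placeholder, and your Google+ profile must also link back to the site in its "Contributor to" section:

```html
<!-- In the <head>: tie the page's content to your Google+ profile -->
<link rel="author" href="https://plus.google.com/[your-profile-id]"/>
```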

5. Google Local

Optimizing your website and brand for local search can be very beneficial, whether you're a huge company or a small mom-and-pop shop. You can review the accuracy of your business listings with tools like GetListed.org.

Claiming your business listings and filling out your profiles completely helps Google understand more about your business. Businesses with more citations from high-quality websites will rank higher.

By helping Google understand the link between your business listings and your website, you are more likely to have the map listing shown just below your search result.
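One way to reinforce that link is to mark up your name, address, and phone number with structured data Google can read. A hedged sketch using schema.org's LocalBusiness microdata; the business details are invented:

```html
<div itemscope itemtype="http://schema.org/LocalBusiness">
  <span itemprop="name">Acme Design Co.</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Main St</span>,
    <span itemprop="addressLocality">Boise</span>,
    <span itemprop="addressRegion">ID</span>
  </div>
  <span itemprop="telephone">(208) 555-0100</span>
</div>
```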

6. Personalized Annotations

Google tries to make recommendations more discoverable to those who would find them most useful. Because people trust their friends' recommendations, personalized annotations display which of your connections have already +1'd a webpage or blog post.
You can increase the likelihood of these annotations showing up in search results by adding a +1 button and a Google+ badge to your page.
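Adding the button takes one script include plus a placeholder element; this sketch follows the snippet Google documented at the time:

```html
<!-- Load the +1 script once per page -->
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
<!-- Render a +1 button wherever you want it to appear -->
<div class="g-plusone" data-size="medium"></div>
```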

Inline annotations aren't currently supported in mobile browsers.

7. Google Reviews

A great way to increase your local search rankings is to simply ask your customers and clients for Google reviews.

Make asking for a Google review part of your process. About 10 percent of my company's clients actually take the time to submit a website design review. While I'd like to get a review from 100 percent, the ones who do add up over time, and those gold stars really stand out in the SERPs!
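Review stars can also appear on regular organic listings when a page carries review markup Google can parse – a separate mechanism from the Google+ Local reviews above, but the same gold stars. A hedged microdata sketch using schema.org's AggregateRating, with invented numbers:

```html
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Website Design Service</span>
  <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
    Rated <span itemprop="ratingValue">4.8</span>/5
    based on <span itemprop="reviewCount">32</span> reviews
  </div>
</div>
```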

Most service industries have the luxury of hand-picking who they ask to submit a review. Restaurants and hotels, on the other hand, have to take a different approach: monitoring their reviews and finding ways to address unhappy customers with resolutions that will hopefully turn those poor reviews into great ones.

Conclusion

Don't be satisfied with a Google listing that doesn't highlight you against the competition. Implementing these guidelines should put you well on your way to an optimized search result.


Original Article Post by Matt Morgan @ Search Engine Watch

Validity of Bing It On Challenge Results Put Bing on Defense


If you live in the U.S., you're probably aware of the "Bing It On" challenge, in which Bing claims that in blind testing, users prefer Bing over Google 2-to-1. Considering Google holds the dominant market share, currently clocking in at 67 percent in the latest comScore data while Bing sits at a mere 18 percent, there has been plenty of speculation over how accurate that claim really is. Bing has confirmed around 5 million people have taken the challenge online.

But Ian Ayres, on the Freakonomics blog, questions the validity of the Bing It On challenge and how Bing decided to run it, concerns he and his collaborators have also detailed in a much longer paper.

First off, the statistic stating that users chose Bing over Google was based on a mere 1,000 users – not a very large pool at all. And Bing doesn't specify how these participants were selected, other than that they were 18 and up and from across the U.S. Were they people signed up to take paid surveys – there is quite a market for getting paid to do surveys online – or were they selected in some other way?
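For context, here is a back-of-the-envelope margin-of-error calculation (my addition, not from either study) for a sample of 1,000, using the standard formula at 95 percent confidence:

$$\mathrm{ME} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 0.031$$

In other words, a properly random sample of 1,000 carries roughly a ±3-point margin of error: small enough that a genuine 2-to-1 (67 vs. 33 percent) preference couldn't be statistical noise, but large enough that the near-50/50 splits reported below count as ties. The bigger question is not the sample size but how the participants were selected.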

Ayres, along with four Yale Law students, decided to do their own survey. But what they found was rather interesting.

When users selected search terms from the ones Bing suggests, the results shown seemed much more favorable to Bing, leading to speculation that these results were either hand-curated or just happened to be queries for which previous challenges had shown a marked preference for Bing's results.
When Bing-suggested search terms were used the two engines statistically tied (47% preferring Bing vs. 48% preferring Google). But when the subjects in the study suggested their own searches or used the web's most popular searches, a sizable gap appeared: 55-57% preferred Google while only 35-39% preferred Bing. These secondary tests indicate that Microsoft selected suggested search words that it knew were more likely to produce Bing-preferring results.
Ayres wasn't the only one with concerns about how Bing It On was conducted. Matt Cutts, distinguished engineer at Google, raised similar concerns in a Google+ post regarding Bing's claims.
Freakonomics looked into Microsoft's "Bing It On" challenge. From the blog post: "tests indicate that Microsoft selected suggested search words that it knew were more likely to produce Bing-preferring results. .... The upshot: Several of Microsoft's claims are a little fishy. Or to put the conclusion more formally, we think that Google has a colorable deceptive advertising claim."

I have to admit that I never bothered to debunk the Bing It On challenge, because the flaws (small sample size; bias in query selection; stripping out features of Google like geolocation, personalization, and Knowledge Graph; wording of the site; selective rematches) were pretty obvious.
After the Freakonomics post appeared, Bing's first response was a comment to Slate from Bing's behavioral scientist, Matt Wallaert:
The professor's analysis is flawed and based on an incomplete understanding of both the claims and the Challenge. The Bing It On claim is 100% accurate and we're glad to see we've nudged Google into improving their results. Bing it On is intended to be a lightweight way to challenge peoples' assumptions about which search engine actually provides the best results. Given our share gains, it's clear that people are recognizing our quality and unique approach to what has been a relatively static space dominated by a single service.
It is an impressive claim that Bing is suggesting Google improved its results due to this challenge. As for Bing's share gains: recent comScore data shows Bing increased its share by 2 percentage points between June 2012 and June 2013, while in the same period Google increased its share by 0.2 percentage points.

However, Wallaert also commented on the Freakonomics post to remind people that Bing's search reach extends well beyond Bing.com itself.
There is just one more clarifying point worth making: you noted that only 18% of the world's searches go through Bing. This is actually untrue; because Bing powers search for Facebook, Siri, Yahoo, and other partners, almost 30% of the world's searches go through Bing. And that number is higher now than it was a year ago. So despite your assertions, I'm happy to stand by Bing It On, both the site and the sentiment.
Wallaert also brings up some valid points on why Bing now suggests keywords for people taking the Bing It On challenge, instead of having them enter their own searches; his full explanation appears in Bing's official response, quoted later in this article.
It is worth noting that, ironically, Bing It On uses Google's 2012 Zeitgeist list for its "web's top queries" suggestions.

Later, Michael Archambault interviewed Wallaert about the Freakonomics piece and how he viewed many of the issues Ayres raised.

The first issue was how Ayres found his 1,000 test subjects – through the website Mechanical Turk – and whether those types of users skew toward Google. (Wallaert, for his part, doesn't reveal exactly how the participants in Bing's own study were found, other than that it was conducted by a third-party firm.)
"Ayres' used Mechanical Turk to recruit subjects, a site that is known to very few people on the web. While he measured things like gender, age, and race, and showed that his sample was representative of the internet-using population, one strong possibility is that those aren't the relevant variables along which people pick search. For example, it may be that the more likely you are to use Mechanical Turk, the more technology-inclined you are, and that being technology-inclined is correlated with a preference for Google results over Bing results."
If you have used or seen Mechanical Turk, you know it has many highly technical members, and it's quite plausible that a search engine bias exists there to a higher degree than would occur in a random selection of people across the U.S. Those users could also be more likely to recognize the slight nuances between Google and Bing results that the average searcher might not notice in a Bing It On challenge.

In comments Wallaert made on The Verge (where he also made sure to include a disclaimer that he works at Microsoft), he brings up another way the Mechanical Turk audience is more technical, and how it is possible they also search differently from the average searcher.
Let's pretend, and I have no idea if this is true, but let's pretend Bing does better at head queries than Google does and Google does better at tail queries than Bing does. The average MTurk person could be more likely to enter a tail query than the average American.
There is also some confusion arising from the fact that Bing ran two studies. The initial study produced the 2-to-1 claim, while the subsequent study showed a preference for Bing, albeit not 2-to-1.
The first issue is Ayres challenging Microsoft's older "2 to 1" study. If you visit the campaign's website today, you will notice that Microsoft has changed its headline to "people prefer Bing over Google for the web's top searches." Wallaert explained that Microsoft started with a study in which users could pick any search query they wished – this study is the basis of the "2 to 1" claim, and it was reported back in September of 2012.

Microsoft then performed a new study using Google's top queries instead of user-dictated ones. You might expect Bing not to perform as well, since these are the searches Google handles most often. The results were surprising: while Google did gain some edge, Bing still handled Google's top searches better.
He also states that all of Bing's claims go through Bing's lawyers, while Ayres' claims have not undergone any validation at all.

Bing's response tried to devalue many of the points Ayres brought up – including the fact that Ayres criticized Bing for using such a small sample size, yet used the same sample size of 1,000 people in his own testing.

Ayres also took issue with the fact that Bing has not released any data from its online "taste test" at BingItOn.com. Bing responded by saying it doesn't keep that data at all, because retaining the information would be unethical.
Next, Ayres is bothered that we don't release the data from the Bing It On site on how many times people choose Bing over Google. The answer here is pretty simple: we don't release it because we don't track it. Microsoft takes a pretty strong stance on privacy and unlike in an experiment, where people give informed consent to having their results tracked and used, people who come to BingItOn.com are not agreeing to participate in research; they're coming for a fun challenge.

It isn't conducted in a controlled environment, people are free to try and game it one way or another, and it has Bing branding all over it.
So we simply don't track their results, because the tracking itself would be incredibly unethical. And we aren't basing the claim on the results of a wildly uncontrolled website, because that would also be incredibly unethical (and entirely unscientific).
Many find it rather astounding that Bing isn't tracking this information – even stripped of identifiable information, and even without the specific search queries – simply so it could report the percentage of people choosing Bing over Google. It's understandable that people speculated Bing isn't releasing the information because it isn't favorable to Bing.

As for the claim of Bing-favorable suggested queries, Bing falls back on the "we don't track this" line as well, citing privacy considerations, so unfortunately we cannot get further data from the online tests at the Bing It On website.
First, I think it is important to note: I have no idea if he is right. Because as noted in the previous answer, we don't track the results from the Bing It On challenge. So I have no idea if people are more likely to select Bing when they use the suggested queries or not.

Here is what I can tell you. We have the suggested queries because a blank search box, when you're not actually trying to use it to find something, can be quite hard to fill. If you've ever watched anyone do the Bing It On challenge at a Seahawks game, there is a noted pause as people try to figure out what to search for. So we give them suggestions, which we source from topics that are trending now on Bing, on the assumption that trending topics are things that people are likely to have heard of and be able to evaluate results about.

Which means that if Ayres is right and those topics are in fact biasing the results, it may be because we provide better results for current news topics than Google does. This is supported somewhat by the second claim; "the web's top queries" are pulled from Google's 2012 Zeitgeist report, which reflects a lot of timely news that occurred throughout that year.
In comments on a Verge article about the situation, Wallaert responds to many readers' comments, but he also makes some interesting claims about Ayres, the author of the Freakonomics post – claims understandably omitted from the official Bing post, but which raise questions about Ayres' credibility.
Also, not to cast stones, but this is an academic who has admitted to plagerism (sic). If his motive was entirely about making sure the truth was known, he could have easily just asked me for the supporting data first, which is what academics generally do to other academics. As a matter of fact, it is a rule for many publications in my field (psychology) that once a paper is published in a peer-reviewed journal, anyone can request the data and it must be provided.

I can tell you, I'm a pretty easy guy to find, and Ayers never asked me about anything.
(Note: I work at Microsoft)
Does plagiarism make Ayres' study less valid? Not necessarily, but I can see how his previously admitted plagiarism could raise suspicions.

It does make the entire Bing It On challenge something that should be looked into more, ideally with both sides releasing all their data from the testing, including the search terms used. That would show the differences between the specific queries used in each party's testing and allow a comparison of the types of searches done for each – most interesting for the Mechanical Turk participants, to see if they really do search differently from most users.

Ayres has not responded to Bing's rebuttal of the claims he made in his Freakonomics post, or to any of the comments made on Freakonomics. I suspect we haven't seen the last of this yet.


Original Article Post by Jennifer Slegg @ Search Engine Watch

The 10 Best Shopping Engines

Each quarter, CPC Strategy releases its Comparison Shopping Report, a compilation of data from more than 4 million clicks and more than $8 million in revenue. The report ranks the industry's top comparison shopping engines (CSEs) for online merchants on significant metrics like overall traffic, revenue, conversion rate, cost of sale, average CPC, and merchant response ratings.

Ecommerce merchants can use this data to tailor their marketing budget to make more sales and increase product exposure on the sites that online shoppers frequently use to find great deals on products. Here are the 10 best comparison shopping engines.

1. Google Shopping (CPC)

Since Google Product Search became the paid Google Product Listing Ads / Google Shopping program, it has been the top-performing shopping engine in nearly every significant ecommerce KPI. It's only getting better, too, with overall traffic growing 82 percent from Q4 2012 to Q1 2013 and another 40 percent from Q1 to Q2 2013.

Compared to every shopping engine over the last three quarters, Google Product Listing Ads has generated the most traffic and revenue while maintaining the lowest cost of sale for ecommerce merchants.

Merchants can manually upload feeds or use an FTP to upload in bulk.
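A product feed is just a structured file of product attributes. Here's a minimal tab-delimited sketch with a few of the commonly required Google Shopping fields – the product data is invented, and the current feed specification (linked below) has the authoritative field list:

```
id	title	description	link	price	condition	availability
1001	Acme Widget	Durable blue widget	http://www.example.com/widget	9.99 USD	new	in stock
```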
Get an Account | Sample Data Feed | Data Feed Specifications

2. Nextag (CPC)

Nextag is a paid comparison shopping site that allows businesses to list physical products, event tickets, real estate, travel plans, sales, and more.

Ranked in second place for the last two quarters, Nextag was the top paid shopping engine before Google PLAs kicked into gear. It generated the second-highest traffic and revenue in Q2 2013 and achieved the third-lowest cost of sale for merchants.

Nextag has a nice CPC bidding model too, allowing merchants to bid at the product, brand, and category-level. Nextag also supports bulk product imports.
Get an Account | Sample Data Feed | Data Feed Specifications

3. PriceGrabber (CPC)

PriceGrabber is a paid comparison shopping site that features many deals, coupons, and weekly specials. PriceGrabber has a no-minimum CPC bidding model, which allows retailers to penny-bid, or bid as low as $0.01 on a product.

Last quarter, PriceGrabber had the second lowest cost of sale for merchants. Merchants using PriceGrabber also have the added advantage of sending their product listings to Yahoo Shopping.

The shopping engine also allows for a bulk data feed upload through an FTP.
Get an Account | Sample Data Feed | Data Feed Specifications

4. Google (Free)

This is the lingering presence of Google Product Search. No one should count on Google's free clicks heading into the near future, because they are decreasing significantly.

Nevertheless, Google Shopping and Product Listing Ads, Product Search's replacement, is a highly profitable marketing channel that now makes advertisers pay to play.

Get an Account | Sample Data Feed | Data Feed Specifications

5. Amazon Product Ads (CPC)

Amazon Product Ads (APA) benefits from the huge traffic pool of Amazon's Marketplace program. Unlike Amazon Marketplace, Amazon Product Ads links shoppers to a merchant's external web store to make a transaction.

This is a great option for retailers who already have listings on the Marketplace or even just for retailers who want to list on Amazon without subscribing to the Marketplace program.

APA had the fourth lowest COS, sent the fourth most traffic, and generated the third most revenue in Q2 2013.

Get an Account | Sample Data Feed | Data Feed Specifications (Requires account login to view.)

6. Shopping.com (CPC)

Also known as the eBay Commerce Network, Shopping.com is a paid comparison shopping site. Its CPC bidding model allows for only category-level bids.

Shopping.com finished third in overall traffic in Q2 2013, and this is partly due to the fact that merchant listings on Shopping.com also receive exposure on Google Shopping.

The shopping engine allows for bulk uploads via an FTP.
Get an Account | Sample Data Feed | Data Feed Specifications

7. Bing (Free)

Bing Shopping is a free comparison shopping site that works much like Google Product Search did when it was free. Merchants need a Bing Ads account to list products on Bing.

Bing had the second highest conversion rate in Q2 2013. A paid Bing Shopping program isn't an unrealistic prospect in the next couple of years.
Get an Account | Sample Data Feed | Data Feed Specifications

8. Shopzilla (CPC)

Shopzilla is a paid comparison shopping site that, like Shopping.com, also offers high product exposure through listing on Google Shopping. For merchants, Shopzilla features one of the easiest bidding tools and it allows retailers to zero-bid, or bid nothing on a poor-converting product.

It is partnered with BizRate, so merchant ratings are easily distinguished for shoppers to look at when considering purchases. Shopzilla featured the fifth highest conversion rate among studied shopping engines in Q2 2013.
Shopzilla also supports bulk product feed uploads via an FTP.

Get an Account | Sample Data Feed | Data Feed Specifications (Requires account login to view.)

9. Become (CPC)

Become is unique because it aggregates research on products (expert reviews, consumer reviews, articles, buying guides, forums, etc.) to give shoppers all the relevant information they need to make a purchase.

For merchants, Become has the highest engine responsiveness ratings among all the shopping engines, so problems with your feed or campaign will be addressed quickly.

Become supports bulk product feed uploads via an FTP.
Get an Account | Sample Data Feed | Data Feed Specifications

10. Pronto (CPC)

Pronto makes it easy for shoppers to find current sales and facilitates buying popular products (e.g., HDTVs) with buying guides.

For merchants, Pronto was third in conversion rate among studied shopping engines in Q2 2013.

The comparison shopping site allows for whole feed processing via an FTP.
Get an Account | Sample Data Feed | Data Feed Specifications

Original Article Post by Mary Weinstein @ Search Engine Watch

Bing Gets Trendy With Video, News Redesigns

Two of Bing's search products got makeovers this week. Wednesday, Bing Video streamlined its look with a redesign from the ground up. Yesterday, Bing News also got an updated look, with a more natural feel for reading headlines and trending stories.

Bing Video


Starting from scratch, the new Bing Video design shows video results in a four-column grid. Each cell contains a larger preview in a higher resolution along with the title, the name of the site on which the video is hosted, and its upload date.

Search results are pulled from all over the web. Aside from customary content from YouTube and Vimeo, Bing also serves up results from Hulu, Dailymotion, Vevo, MTV, CBS, and MSN.

New features include:
  • Video Search Filters: Similar to image search filters, Bing Video now lets you search videos by specific properties including length, date posted, resolution and video source.
  • Pop-out hover previews: You've always been able to preview videos on Bing. Now you can control the volume and even flag videos.
  • Improved Video Overlay: Once you've seen the preview and click to view, the video will load in a lightbox atop the search results. Bing has included their carousel navigation to easily view other videos from your search result.

 

Bing News


Bing News also got a fresh new look. In an effort to help people access news faster and more easily, the more modern layout of Bing News makes it simpler to stay on top of the news you search for. The Bing News redesign includes:
  • Responsive design for all devices.
  • A revamped Trending Topics navigation.
  • A new Trending Now section.

With an emphasis on touch, the new design gives a proper layout for desktop, tablet, or phone users. Headlines were sparse with awkward navigation in the old, two-column design. The redesigned Bing News utilizes the full width of the viewport, serving up nearly a dozen headlines, each with their own description and image.

Trending topics appear under the headlines, again using Bing's carousel layout. This layout contains wide images with a caption on a scrolling navigation.

Other news categories appear in a three-column layout beneath the Trending Topics carousel. The third column also includes news trending on Facebook and Twitter, based on sharing volume and likes.

The newest feature to Bing News search is called Trending Now. Powered by Satori, the same back-end that powers Bing's new instant snapshot search, Trending Now shows people, locations and other items related to the latest headlines in your news search.


Article Post @ Search Engine Watch

New Data Mining Tool Will Let You Make Your Own Private Search Engine

The Minerazzi project is a platform that allows you to build topic-specific search engines without programming knowledge. The brainchild of Dr. Edel Garcia, the Minerazzi project aims to allow anyone to build small, on-topic search indexes. His hope is that anyone, regardless of technical background, can be involved in data mining and learning through discovery by building these search indexes.

The Minerazzi project was initially intended as an indexing project. When first conceived by Dr. Garcia, it was hosted at the Microsoft Innovation Center of Inter American University of Puerto Rico. However, the project was diluted and changed numerous times. A few weeks after initially presenting the project concept at SES New York 2012, Dr. Garcia moved the project out of the MIC and redesigned it as a self-service search platform.

A little over a year later, the Minerazzi project is in beta testing, which Dr. Garcia is conducting with the help of local librarians and developers.

Once an index is built, users can start mining email addresses, phone numbers, and other keywords straight from search result pages. Minerazzi also allows you to identify sets of keywords with common features, such as number of occurrences, byte size, etc.

For businesses, Minerazzi allows an organization to build a small, searchable index relevant to any specific set of data. Indexes of products and services, market information, or even competitors can be built quickly for employees to search and mine. Such a unique, topic-specific index can be ideal for researchers to store, share, and search information.

When released to the public, the service will require users to sign up and open an account. Once that account is open, you can start crawling.

Using it is relatively simple. Pick your vertical – news, sports, etc. – or something more specific, like the local music scene or internal departmental resources, and Minerazzi helps you search and index documents on that topic. Minerazzi then crawls the web in search of your documents; when it finds matches, it adds them to your index. That data can then be searched by friends, clients, co-workers, or anyone else with whom you share access.

Minerazzi uses 11 different interactive search modes to help control the data that is retrieved. Some modes are familiar, like AND, which requires all terms in your search, and OR, which matches documents containing any specified term. There are other search modes like NOT AND, NOR, EXCLUSIVE OR, and even PROXIMITY, which lets you specify a number and two terms, in any order, that must be separated by no more than that number of words.

The science behind these modes is sound. Two metrics in particular – the ratio of AND/OR search results and of EXACT/AND results – provide important signals. In addition to helping with mining content from your index, these ratios provide important clues about the nature of a search engine index and its content.
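To make the intuition concrete (my gloss, not a formula from the project): for two terms whose matching document sets are $D_1$ and $D_2$, the AND/OR ratio is exactly the Jaccard similarity of the terms,

$$\frac{n_{\mathrm{AND}}}{n_{\mathrm{OR}}} = \frac{|D_1 \cap D_2|}{|D_1 \cup D_2|},$$

which approaches 1 when the terms nearly always co-occur in the index and 0 when they rarely appear together. The EXACT/AND ratio tightens this further, measuring how often co-occurring terms appear as an exact phrase.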

"In general, we can compute other types of search mode results ratios to extract very useful information," Garcia said. "With some of these ratios we can estimate the organic/inorganic incompatibility of keywords in a collection."

Garcia emphasizes that Minerazzi places users at the center of the search experience. Instead of limiting users to a list of results, Minerazzi allows users to interact more with the returned data beyond simply staring down a list of links and clicking.

"In my book, that is a technology waste. It is like sending your eyes to 'window shopping' across an oversized digital mall. Boring!" Garcia told Search Engine Watch. "With Minerazzi, users interact at query time with search result pages, extracting information that matter to them, and doing something with that information."

Minerazzi is still in beta testing, with no official public launch date at this time. Garcia and his team hope to have it available within the coming weeks.


Article Post @ Search Engine Watch

5 Important Link Removal Facts Post Penguin 2.0

Penguin 2.0 launched May 22nd, causing many sites to lose vital rankings, visibility, and traffic. This will without a doubt lead to yet another wave of link removal projects, which have been prevalent since Penguin 1.0.

Before diving into your backlink portfolio and attempting a Penguin recovery, here are 5 important link removal facts of which you should be aware.

1)    Matt Cutts recently stated that link removal/disavowal needs to be done with a “machete”, not a “scalpel” or “fine-toothed comb”.




Cutts is fairly direct and straightforward. When a site is hit by Penguin there’s no sugar-coating it. Anything under suspicion needs to be removed if there’s to be hope of forward progress. His actual statement:

“Hmm. One common issue we see with disavow requests is people going through with a fine-toothed comb when they really need to do something more like a machete on the bad backlinks. For example, often it would help to use the “domain:” operator to disavow all bad backlinks from an entire domain rather than trying to use a scalpel to pick out the individual bad links. That’s one reason why we sometimes see it take a while to clean up those old, not-very-good links.”

Personally, I can attest to this. Anyone who has spent time working on link removal, disavowals, and reconsideration requests knows that Google’s not going to reward half-efforts. There needs to be considerable work done, and a true mending of ways. Even a hint of spam will earn nothing more than a vague “At this time…”
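For reference, here's what the machete approach looks like in a disavow file. Google's documented format is plain text, one entry per line; the domains and URL below are made up:

```
# Lines starting with # are comments Google ignores
# Machete: the domain: operator disavows every link from a site at once
domain:spammy-directory-example.com
domain:paid-links-example.net
# Scalpel: a single URL - per Cutts, usually not enough on its own
http://www.example.org/blogroll/links.html
```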

2)    There are no guarantees with Google
The first mantra of every SEO’s life should be ‘There are no guarantees with Google’. Before you launch a project, especially link removal, it’s important to stare this statement in the face. Think about it, understand it, and truly accept it.

We’ve had successful link removal campaigns, and we’ve seen recovery from both manual actions and algorithmic penalties. Given enough time, energy, and resources I have no doubt that virtually all recovery campaigns are possible. But, at the end of the day, there are no guarantees with Google.

3)    Link Removal isn’t a small undertaking
Link removal is an exhausting task. To meet Google’s standards there are basically four steps to any link removal campaign:

A)     Backlink portfolio analysis
Here you’ll be running a complete analysis of your backlink portfolio using Open Site Explorer, Majestic, or Ahrefs. These portfolios can be quite large, potentially with tens of thousands of links or more, and need to be properly categorized.

Again, it’s important to ensure you’re not skimming the worst off the top. You’ll have to dive deep and ensure you’re getting as close to every offender as possible. Specifically:
  • Paid links
  • Link directories
  • Irrelevant links
  • Bad link neighborhoods
  • Site-wide links on low quality sites
  • Spammy blog comments
  • Article directories
  • Link exchanges
  • Etc. etc.

Basically, if there’s any link you wouldn’t want Google to take a look at, or one you’d have to explain with a conditional statement (“that link is actually good because…”), get rid of it.

B)      Find contact information
You’ll need a way to contact all these sites in order to request to have the link removed – a very important part of the link removal process. Google will potentially ignore any disavowed links if there’s been no effort to have the link removed.

C)      Outreach
Such a simple word for an exhausting process. Here, you’ll be contacting every site from which you want your link removed – typically thousands of sites, depending upon the project.

It’s important to note that contacting them all once isn’t enough. You should contact all the sites at least three times, over the course of a month, in order to prove you’ve made every possible effort.

D)     Disavow
You’ll never be able to have every spammy, low quality, irrelevant link removed. There will be sites that are abandoned, sites with no contact information, webmasters that refuse or ask for money, etc. etc.

Once again, make every effort to have the link removed. Once you’ve had as many links as possible removed, go ahead and disavow the rest, including notes as necessary.

E)      Rinse and repeat
The hidden step: you’ll often have to rinse and repeat the whole process depending upon Google’s response. Once again, think machete, not scalpel.

4)    New links are vital
Link building is generally overlooked, or put on pause, during a link removal campaign. And while it logically makes sense to focus all of your energy on link removal, it’s actually better and more effective to build quality links in conjunction with link removal.

This is true for a variety of reasons.
First of all, building quality links signals to Google that you’ve changed your ways and mended your tune. These newly built, high-authority links will be a point of proof that you’re moving in a new, better direction, which is very important when wrangling with Google.

Secondly, these new links will help lessen the blow of your link removal. You should be removing a large number of links from your backlink portfolio, and no matter how careful you are (you shouldn’t be overly cautious), you’ll be removing links that were passing value. Having new links of higher quality should ensure a quick recovery from any dip you see as you remove the old ones.

5)    Link removal is extremely difficult without tools
Tools are absolutely vital to a successful, effective & efficient link removal campaign. Often these projects have hundreds of hours invested into them, and any tool that can help provide an edge is important.

At the bare minimum, you’ll need help from a tool that can run a backlink analysis on your site. Some of the top rated (all mentioned above):
  • Open Site Explorer
  • Majestic
  • Ahrefs

Going beyond that, there are tools specifically developed to help ease the pain of link removal. Some of the top rated:
  • Remove’em – A very comprehensive tool, also the most expensive. Helps keep track of the project and emails, as well as suspicious link discovery.
  • rmoov – Helps identify contact information, create and manage outreach, complete with reminders.
  • SEO Gadget – Automatically rates whether a link is ‘safe’ or not. Can process 200 links at a time, and will help find contact information as well.

No matter which tools you use, make sure you’re documenting your work. Documentation, documentation, documentation! Not only will it keep the project flowing smoothly and efficiently, but Google’s unlikely to revoke manual actions without proof of effort and change.

Here’s a video from Cutts himself which discusses the unnatural link detection warning as well as a few changes Google’s currently working on:


 

These 5 link removal facts will hopefully prove useful as webmasters gear up for lengthy link removal projects, especially since the release of Penguin 2.0. I wish everyone the best moving forward and a speedy recovery.

If you have any questions, comments, or insights, please leave them in the comments below.


Article Post @ Search Engine Journal

Organic vs. Paid Listings: Do Users Know the Difference?

Last week, we reported on a recent calculation that showed paid search was taking over the search engine results pages (SERPs). This was experimental math by a local business owner who was concerned that organic search was dead.

While organic search isn't dead, there is another facet of the SERP worth exploring, and one we hear about every so often: when presented with multiple choices, where do users click on the SERP, and do they even know what they're clicking on?

Where Users Click and Why

A 2012 study of 1.4 billion searches in the UK by GroupM UK and Nielsen showed the organic listing won 94 percent of the clicks.

[Chart: paid vs. natural click distribution, 2012]

But earlier this year, research by UK-based Bunnyfoot showed 81 out of 100 customers clicked on AdWords over the organic listings in Google.

Because the methodologies and samples were so different, it's of course hard to compare the two studies, but the feedback from respondents in Bunnyfoot's research was intriguing.

Bunnyfoot's research showed 40 percent had no clue the ads were actually paid listings, citing the following reactions, to name a few:
  • "…the first 3 that meet my search criteria, presumably the best."
  • "…best match for what you have put in the search. They have got the words that you have put in or are the most popular."
  • "…the most searched I guess."

But surely the younger generation is able to decipher paid from organic, right? According to the eConsultancy data, only 5 percent of those ages 2 to 17 click on paid listings, and the tendency to click on paid ads grows with age.
However, this post by Glenn Gabe painted a different picture of our youth's insight into the SERP. Which begs the question …

Even the Next Generation Has No Clue?

As savvy as the next generation is with technology, that doesn't mean they understand the intricacies of search. Gabe presented on two different occasions to U.S. middle schoolers, and on both occasions, asked the groups to point out which listings on a SERP were paid and which were organic.
Not one student was able to do so.

"Think about this for a minute," Gabe wrote in his post. "How many paid search clicks are occurring right now from younger web users that have no idea that those listings are advertisements? Let's face it, there can't be a trust issue with paid search if you don't know they are paid ads in the first place."

The FTC Tries to Make Sense of it All


On the matter of trust, this is the FTC's current issue with search engines and paid listings. More than 10 years and countless changes to the SERP finally prompted new FTC guidelines for paid search listings last month.

In a letter sent out to numerous search engines, the FTC voiced concerns over a SERP that may not be understood by users:

"We have observed that search engines have reduced the font size of some text labels to identify top ads and other advertising and often locate these labels in the top right-hand corner of the shaded area or ‘ad block,' as is the case with top ads. Consumers may not as readily notice the labels when placed in the top right-hand corner, especially when the labels are presented in small print and relate to more than one result."

The FTC also recommended ways to remedy any doubt:
"We recommend that in distinguishing any top ads or other advertising results integrated into the natural search results, search engines should use: (1) more prominent shading that has a clear outline; (2) a prominent border that distinctly sets off advertising from the natural search results; or (3) both prominent shading and a border."

Regardless of which listings get more clicks, the jury is still out on how much the average user actually knows about the SERP. In last week's post on organic search, I made a point of saying that I automatically glaze over the ads at the top of the SERP, and I assumed others did that, too. But just how many users know enough to make that choice is the question.


Article Post @ Search Engine Watch
 
Support : Creating Website | Johny Template | Mas Template
Copyright © 2013. Google Updates - All Rights Reserved
Template Created by Creating Website Published by Mas Template
Proudly powered by Blogger