
9 Ways to Prepare for a Future Without Cookie Tracking

It was over a year ago that I first wrote about do-not-track legislation, and luckily for most organizations the browser-provided preference is only loosely supported and regulated today, with very few sites actually interpreting and complying with it.

For the most part, do-not-track legislation is misunderstood by the general public, and even by regulators, in its definition, its usage, and, most importantly, the privacy implications and confidence it is meant to instill.

From a digital practitioner's standpoint, "do not track" is the least of my worries. But recent news about Microsoft and Google pursuing cookie-less tracking capabilities tells me that education on how digital information is collected and shared will become even more important in the near future.

Rather than panic, there is a lot we can do today to adopt guiding principles that will ease the transition into tighter privacy controls in the future.

Education

One of the biggest problems facing digital marketing and analytics practitioners will be education. The industry has evolved so quickly that much of the technology that we rely on every day is likely taken for granted.

Personalization is one such area: it relies on tracking, profiling, and delivering a great deal of information about visitor preferences and behavior.

One might argue that personalization is a byproduct of contextual advertising and, without the underlying tracking technologies, wouldn't be possible to deliver.
Teasing apart a key delivery mechanism such as a session or persistent cookie will be very challenging; explaining the importance of cookies and their usage to visitors and customers, even more so.

What can you do to prepare?

1. Ensure your privacy policy is up to date and fully transparent.
2. Explain what tracking technologies are used (savvy users will know how to check for this themselves anyway; a minimal sketch of how a visitor might do so follows this list).
3. Describe which cookies are employed and for what reason.
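
As a simple illustration of point 2, here is a minimal browser-side sketch (TypeScript; the "cookie-inventory" element id is a hypothetical placeholder) that lists the names of first-party cookies currently set, which a site could surface alongside its privacy policy so visitors can see exactly what is stored:

```typescript
// Minimal sketch: enumerate first-party cookie names so they can be shown
// alongside the privacy policy. Runs in the browser; document.cookie only
// exposes cookies that are not HttpOnly, so server-only cookies won't appear.
function listCookieNames(): string[] {
  return document.cookie
    .split(";")
    .map((pair) => pair.split("=")[0].trim())
    .filter((name) => name.length > 0);
}

// Example usage: render the list into a hypothetical element on the privacy page.
const target = document.getElementById("cookie-inventory"); // assumed element id
if (target) {
  target.textContent =
    listCookieNames().join(", ") || "No cookies are currently set.";
}
```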

Usage

It’s probably safe to say that, aside from a few highly regulated industries and regions, most digital marketing practitioners don’t spend much time or due diligence reviewing data usage models with third-party vendors and their technology.

Regulators focus on both the collection and the usage of data in these scenarios, particularly when third parties are involved, because in many cases these partners assume ownership of the data collected on your digital properties. This is the same reason many browsers automatically block third-party cookies: to ensure data collection services and the usage of visitor information are entrusted to the right recipients.

What can you do to prepare?

4. Explain how data collected is used.
5. Explain how disabling these technologies may affect the user experience or site functionality.
6. Ensure the wording of your privacy policy and your acceptable use policy is consistent and complementary.

Consent

In my opinion, this is where most of the opportunity is for much of North America. Very few companies actually gather consent in a clear and concise manner.

To be brutally honest, most of us think that relying on a single checkbox at the bottom of a registration page, with a link to a hundred-page disclosure, is acceptable. From a legal standpoint, it will probably cover you against litigation, but from a customer experience perspective, hundreds of pages of disclosure tend to make the average Joe either uninterested or a little paranoid.

What can you do to prepare?

7. Humanize your terms and conditions. Less legalese and more transparency.
8. Separate your service level agreement and delivery conditions from your data consent.
9. Introduce ways visitors and customers can quickly and easily opt in to, and out of, the technologies that enable digital marketing and personalization (a minimal sketch of such a toggle follows this list).
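
As a rough illustration of point 9, here is a minimal sketch (TypeScript; the cookie name, the script path, and the loadAnalytics function are hypothetical, not a real API) of a preference toggle that records the visitor's choice in a first-party cookie and only loads marketing or personalization scripts after an explicit opt-in:

```typescript
// Minimal sketch of an opt-in/opt-out preference stored in a first-party cookie.
// "marketing_opt_in" and loadAnalytics() are illustrative names only.
const PREF_COOKIE = "marketing_opt_in";

function setOptIn(optedIn: boolean): void {
  const oneYear = 60 * 60 * 24 * 365; // seconds
  document.cookie = `${PREF_COOKIE}=${optedIn ? "1" : "0"}; path=/; max-age=${oneYear}`;
}

function hasOptedIn(): boolean {
  return document.cookie
    .split(";")
    .some((pair) => pair.trim() === `${PREF_COOKIE}=1`);
}

// Only load tracking/personalization code after the visitor has opted in.
function loadAnalytics(): void {
  const script = document.createElement("script");
  script.src = "/assets/analytics.js"; // placeholder path for your analytics bundle
  document.head.appendChild(script);
}

if (hasOptedIn()) {
  loadAnalytics();
}
```

A toggle like this also makes point 5 easier to honor: when the preference is off, the page simply never loads the personalization layer, and you can explain that trade-off plainly.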

Conclusion

Think about the steps you can take today to instill greater confidence in your digital business and marketing efforts. Sometimes the little things go a long way toward earning the respect and trust of visitors and customers, making the transition to future tracking capabilities or regulatory guidelines that much easier.

Have you done anything to prepare your website and visitors for the future of tracking and digital marketing personalization?


Original Article Post by Garry Przyklenk @ Search Engine Watch

How Search Engines Rank Web Pages

Once upon a time search engines looked at words. Over the years the movement has been towards concepts. Or as the new mantra goes, "things not strings."

Yes, indeed, the words on the page do come into play, but more often than not it is about identifying the concepts of a page or a website. This helps search engines deliver better results in an increasingly personalized world.

The Search Black Box

The goal of this article is to give you a sense of the core concepts used in modern search. This is not a guide to Google. Nor Bing. It is a starting point to better understand the landscape so that you might venture out to discover more.

Corralling the Wild, Wild Web

While this journey is more about the elements of ranking a page or a site, one really can't get to that point without the page actually being found. The two obvious elements are:
  • Crawling: The ability for the search engine to get around the site
  • Indexing: Actually getting pages into the search engine's index.
These days most of this can be handled by automated tools provided by the major search engines. At least to let them know you exist.
What's not quite as evident is the level and depth of the crawling and indexation given to a website. This can often be attributed to various on-site and off-site factors we'll be looking at shortly.

Google's Matt Cutts has a nice video on the subject that is worth watching.

Understanding Signals

First things first: signals. It's a term that is bandied about constantly in the world of SEO, but often misunderstood.

A search engine can use signals for many things, including categorization, geo-localization, behavioral and demographic profiling, and more, not just ranking. Some might be used as signals of quality (task completion), while others feed display elements in the search results.

Where things get interesting is in the various page-level and site-wide signals: how a search engine "views" your site on the web. In the strictest understanding of "ranking factors," these might not always be considered, but they are important concepts nonetheless.

In simplest terms these can include:

Site-Level Signals:
  • Authority/Trust
  • Classifications
  • Internal link ratios
  • Localization
  • Entities
  • Domain history
  • Thin content
Page Level Signals:
  • Meta data
  • Classifications (and Localization)
  • Entities
  • Authority/trust (external links)
  • Temporal signals
  • Semantic signals
  • Linguistic indicators (language and nuances)
  • Prominence factors (bold, headings, italics, lists, etc.)
Off-site Signals:
  • Link related signals
  • Temporal signals
  • Trust elements (known by the company you keep)
  • Entity/Authority; citations, co-citation, etc.
  • Social graph signals
  • Spam signals (that might incur dampening)
  • Semantic relevance (of the other signals)
Please do bear in mind that 'link related' doesn't mean PageRank. Links can send a variety of signals; methods like PageRank are just one.

While we'll avoid the specifics, you can get a sense that there are a variety of signals a search engine might use to understand what a site and/or a web page is about. And of course what types of elements might be used in scoring search results.

The Land of Graphs

Next stop in our journey is to start understanding some of the various classifications, categorizations, relations, and correlations that a search engine might employ. These days we tend to think of them as "graphs".

Some of these include:
  • Link graph: The one most commonly known to SEO professionals; all the links to and between sites that determine relevance, authority, and trust.
  • Social graph: Connections, topicality, behavioural data, etc.
  • Entity graph: People, places, things, events, etc. (named entities).
  • Knowledge graph: Information related to entities.
  • Term and taxonomy graphs.
Where this can play into rankings is that Google categorizes the relations and can score and re-rank web pages through these graph relations. Consider things such as co-citation, topical link graphs, social associations toward authority, and much more.

[Image: Adapting to the Social Graph]

Get the idea? Yes, I know it can start to get confusing. But for the purposes of this article, we're keeping it simple and moving fast here. Just fire up the desire to learn more. That's the goal here.

What I hope is emerging is that there's a whole lot going on under the hood. There's an entire article that could dig into each of the points above. Never fall for the simple answers when trying to understand what's happening to your site and your target query spaces.

Understanding Ranking Mechanisms

So now let's move along to why we're here. To get a better sense of how a search engine might rank web pages in today's environment. Again, I am trying to get you moving down the tracks to wanting to learn more. This is the tip of the proverbial iceberg.

Simple concepts:
  • Scoring: The major search engines use hundreds of factors nestled into many algorithms. Think about it like an onion and its layers. All too often, people say things like "the Google algorithm" when in fact there are many. The scoring across all of them makes up the initial rankings.
  • Boosting: This is another element or signal that might raise a page's position in the rankings. One example is a statement Google made that fast mobile sites are given a boost in mobile search. Various forms of personalization also use a boosting element to re-rank results.
  • Dampening: Not to be confused with penalties, a dampening factor is an element that would lower the rankings of a web page after the initial scoring process. One example is the now infamous Google Penguin or Panda algorithms. While it may seem like a penalty, it is in fact a dampening element.
Now we're getting somewhere, right? We consider the various on-site signals, off-site signals, and graphs (link, social, entity, knowledge). Within each of those, there are ranking and scoring mechanisms at play that affect where a website appears in the results.
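
To make the scoring/boosting/dampening distinction concrete, here is a deliberately simplified sketch (TypeScript; the signal names, weights, and multipliers are invented for illustration and bear no relation to any real search engine's factors) of how an initial score might be raised by boosts and lowered by dampening factors before results are ordered:

```typescript
// Toy illustration of score -> boost -> dampen -> rank. All numbers are invented.
interface PageSignals {
  url: string;
  relevance: number;      // 0..1, stand-in for many on-page signals
  authority: number;      // 0..1, stand-in for link/trust signals
  fastOnMobile: boolean;  // example boost signal
  thinContent: boolean;   // example dampening signal
}

function scorePage(p: PageSignals): number {
  // Initial scoring: a weighted blend of (stand-in) signals.
  let score = 0.7 * p.relevance + 0.3 * p.authority;

  // Boosting: raise the score for favorable signals.
  if (p.fastOnMobile) score *= 1.1;

  // Dampening: lower the score after initial scoring (not a penalty or manual action).
  if (p.thinContent) score *= 0.6;

  return score;
}

const pages: PageSignals[] = [
  { url: "/a", relevance: 0.9, authority: 0.4, fastOnMobile: true, thinContent: false },
  { url: "/b", relevance: 0.8, authority: 0.7, fastOnMobile: false, thinContent: true },
];

const ranked = [...pages].sort((x, y) => scorePage(y) - scorePage(x));
console.log(ranked.map((p) => p.url)); // highest-scoring page first
```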

Personalization and the World of Flux

While the path so far may seem daunting, it's about to get a little more convoluted.

Once upon a time, life was simple. We had 10 blue links, and those links were by and large stable no matter who searched or where they were at the time. No longer.

The shift started most notably with Google back in 2004. At the time it was in Google Labs and users could identify preferences. By 2009, they unleashed it on the world for all users. That flavor used search history and behaviour to adapt the results.

[Images: Google SERP in 2010 vs. 2013]

Today, we have a few different types of re-rankings and results based on the end user including:
  • Behavioral: Based on search history, query reformations, last query, etc.
  • Social: The rise of the social graph has led to logged-in personalization.
  • Geographic: Rankings can change based on the location of the user.
  • Demographic: Somewhat linked to the social, but categorizes users.
  • Temporal: Results based on user activities (daily, weekly, annual, etc.).
Some elements are tied to logged-in user accounts (social, behavioral), while others apply in any state (such as geographic). We've seen instances where searching the term [hockey] produced personalized results tailored to the region, regardless of whether the user was logged in.

This means that at any given time, the results and rankings can be drastically different from one searcher to another. It is, in large part, the reason we've seen ranking reports become less important in the world of SEO.
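
As a small, purely illustrative sketch (TypeScript; the region field, boost value, and example URLs are invented), personalization can be thought of as a re-ranking pass applied on top of the base scores, for example nudging up results that match the searcher's region so two users issuing the same query see different orderings:

```typescript
// Toy re-ranking pass: results already have a base score; user context (here,
// region) nudges some of them up. Field names and values are illustrative only.
interface Result { url: string; baseScore: number; region?: string; }
interface UserContext { region: string; }

function adjusted(r: Result, user: UserContext): number {
  const geoBoost = r.region === user.region ? 1.2 : 1.0;
  return r.baseScore * geoBoost;
}

function personalize(results: Result[], user: UserContext): Result[] {
  return [...results].sort((a, b) => adjusted(b, user) - adjusted(a, user));
}

// The same [hockey] query, two searchers, two orderings:
const results: Result[] = [
  { url: "national-hockey-news.example", baseScore: 0.9 },
  { url: "local-rink-league.example", baseScore: 0.8, region: "CA-ON" },
];
console.log(personalize(results, { region: "CA-ON" }).map((r) => r.url));
console.log(personalize(results, { region: "US-NY" }).map((r) => r.url));
```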

[Image: The Land of Search Personalization]

The Web Spam Connection

While we might not ultimately consider it a ranking element, web spam does bear a mention along the ride. It may not be something that can increase your rankings, but it sure can lower or demolish them. So, for the sake of this article, we might consider these factors "ways a search engine un-ranks a web page."

Web spam is the term used for attempts to manipulate the search results. As with all things in information retrieval and SEO, there are a multitude of methods spammers might use, and associated scoring that we don't really know. But at least Google gives us this handy page to get a sense of some things that might be in play.

Some web spam elements might include:
  • Cloaking and/or sneaky redirects
  • Hacked site
  • Hidden text and/or keyword stuffing
  • Parked domains
  • Pure spam
  • Spammy free hosts and dynamic DNS providers
  • Thin content with little or no added value
  • Unnatural links from a site
  • Unnatural links to a site
  • User-generated spam
Again, for more read this article. And if you really want to learn more, you can also dig into this guide to web spam over here.

The point is that we should always consider what affects the ranking of a web page from a negative as well as a positive standpoint. I prefer to leverage the many factors we looked at here today rather than manipulate them. If you do this, you probably won't ever have to worry about the web spam issue.

No Pretty Bow, I'm Afraid

And so where does this all leave us? I know you might have come to this page looking for a definitive guide as to how you can go and start making bucket loads of cash ranking your pages. The reality is that it is never that simple. Indeed, SEO isn't dead, in fact, it becomes more complicated all the time.

Any one section of this article could be broken up into a post of its own. In fact, many. What we tried to do for you here today was to give a sense of what goes into the process.

There is no one way to attack SEO. Given what's involved in how modern search engines rank pages, you need to think on your feet as to how you can appeal to them and the strategy that will work for your situation.

This is your starter guide... now go forth and learn even more. Then, and only then, will you become a kickass SEO pro.

And there you have it. This is but a beginning towards learning how a search engine actually works, from the SEO perspective. Now go forth and continue to learn more. And by all means, play safe.


Original Article Post by David Harry @ Search Engine Watch

After Being Crushed By Google Panda, Voucher and Car Classified Sites Recover

After unprecedented numbers of sites recovered from Panda as a result of the last confirmed update, many people still have questions about what sites need to do to escape the wrath of the algorithm. The answer: the same things as before, regardless of how many other sites are seeing a return to the SERPs. The 10-day update is still in full flow, and with so little data around Panda recoveries, it can be difficult to know where to start.

By now, most everyone has seen a graph of what Google's Panda algorithm update can do to a site, but not a lot of people have seen what a recovery looks like. That's probably because there haven't been many documented recoveries, at least not full, 100 percent traffic-returning recoveries, until July's update.

The good news is that partially recovering from Panda should become more straightforward, as Google reportedly moves to add Panda to the real-time algorithm, rolling it out continually for up to a third of each month, instead of on a more occasional basis. Sadly this also means that it's going to become more difficult to diagnose Panda issues – at least from the go-to resources such as Moz's Google Algorithm Change History.

Is it Panda? Where to Start Looking

Internet marketers, just like users (and Google), should be able to tell at a glance whether content hosted on your site is worth reading or not. If you instinctively ignore your own images (unless it's because they're so small you have to squint to work out what they are) and your star ratings are stuck on 0, it should be time to think about improving your content, regardless of whether your site has been hit by Panda.

A completely separate (though related) issue is the state of content not on your site. Many sites have experienced Panda problems related to content that has been scraped and hosted elsewhere on the web; as well as the more embarrassing problem of hosting content that has been stolen from elsewhere.

Traffic Returns to a Voucher Site

Voucher and car classifieds sites are two of the industries hit hardest by Panda. It's not unheard of that Google might take action across a particular vertical, but the idea that an algorithm might be affecting voucher sites, car classifieds sites, and the like is worrisome for people working in those industries. Scarier still for site owners is the fact that this isn't a manual action, but an algorithmic update that just doesn't like their payday loans website or others like it.

One voucher site my agency has been working with for a number of years was badly affected by the Panda algorithm on April 11, 2011:
[Graph: voucher site traffic drop after the April 2011 Panda update]

The site runs an affiliate program, which at the time hosted a section linking out more generally to great deals on products from across the web, in addition to the core product. These pages typically drove around 20,000 visits per month; however, each deal's description was (very helpfully) provided by the manufacturer.

This meant that those small blocks of vaguely useful text could also be found in many other locations on the Internet, and there was no way our affiliate program could be the source of that information, hence the impressive drop illustrated in the graph above.

This issue was dealt with by killing this section of the site once it had been determined that it was the likely cause of the loss in traffic and visibility. These pages were irrelevant to the core purpose of the site and, in comparison, were badly maintained.

The poor quality pages in question were redirected to another, less important site owned by the same company, and the traffic very quickly returned…and grew, well beyond the 20,000 that on paper were guaranteed losses as a result of redirecting the pages.

One of the most significant contributors to the low number of documented Panda recoveries is that while Searchmetrics is a great tool for diagnosing huge drops in visibility, it won't register massive gains in traffic once the site cleans up. You need Google Analytics for that, which means that Panda casualties are generally well-noted and Panda recoveries are not.

In this instance, many of the site's better rankings were related to the poorly converting terms on poorly constructed pages that were killed off – even though traffic improved beyond "normal", visibility is still pretty static.

Can Car Classifieds Sites Recover?

Car classifieds sites have struggled massively since the first Panda update. The content that users are interested in, for the most part, is the selection of cars themselves.

One problem is that users will typically upload their car ad to as many classified sites as they can find in order to get maximum exposure and hopefully sell their car more quickly; the other is that in the UK, this vertical has a runaway market leader in AutoTrader, which gets two or three times as much inventory as its competitors.

Obviously from a seller's perspective, the former isn't a problem at all; it's the sensible thing to do. From Google's perspective we're left with an entire (extremely competitive) vertical with the same content across nearly every site. These sites are invariably crushed by Panda, and take desperate measures in order to escape, such as below:
[Graph: car classifieds site traffic after switching domains]

A prominent car site recently purchased a new (dropped) domain, and redirected its previous one, in order to escape from an old Panda problem that had been plaguing the site for months.

Visually the company had created a wholly new site, with new branding to go along with it; however the same content issues lingered because the same content was hosted on the site, and the visibility plummeted once the algorithm established what had happened. Switching domains is no way to escape Panda – bad content is bad no matter where it's hosted.

The way my agency tackled the drop for the voucher site involved cutting the bad content pages out of the overall strategy, despite being painfully aware of how much traffic the site could lose. The difference for many car classifieds sites is that there is little or no good content on the site to begin with, and what good content there is can be found elsewhere on the web in a more easily digestible format (on AutoTrader).
[Graph: several car classifieds sites recovering after the latest Panda update]

The good news is that several companies in this vertical have seen a recovery thanks to this update, as illustrated in the graph above. Many car classifieds sites had been rapidly improving their pages, adding useful content such as videos and reviews, and still saw no result for many months.

The better news is that car sites naturally have access to a large yet niche audience by leveraging their authority on two subjects: which car to buy and how to buy it. Large libraries of content sit naturally on car classifieds sites that many users will find helpful, and with users visiting 11 pieces of content before they convert, a large inventory of content, as well as a large inventory of cars, can be extremely helpful.

How One Travel Site Recovered from Panda

"Cloaking" isn't an issue that often crops up on marketing blogs these days, and few SEO professionals will still try to display content to Google that isn't easily visible to users. But showing content to users that search engines can't see is sometimes necessary to escape Panda's clutches.

A travel site my agency had been working with for years had experienced a big loss in search traffic and visibility due to a Panda update. The site would collate holidays from other travel operators' sites, which inevitably meant that large portions of the descriptions could be found elsewhere.
[Graph: travel site traffic recovering after duplicate content was moved into iframes]

The content consisted of itineraries and general resort information provided by the holiday operators; nobody would suggest that it belonged to the site to begin with, but nobody could argue that it wasn't useful for people looking to book a holiday through the website. Rewriting the descriptions would take months, or even years, and with Panda strangling the traffic there probably wouldn't be years to work on this.

To combat the problem, the duplicated content was placed in iframes so visitors could still read it, more original content was added to the pages, and after a few months the site did recover.

The same tactic could be employed across voucher and car classifieds sites, and in a recent Webmaster video Matt Cutts said that content that is not visible to search engines is not necessarily bad. The issue we often face in those industries is that all of the content can be found elsewhere. If we put it all in iframes and add more content, such as buying guides and general information, then Google is going to think our site is intended to do something completely different from what it actually does.
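
As a rough sketch of the iframe approach described above (TypeScript; the element id and the URL of the separately served description are hypothetical placeholders), the operator-supplied text is loaded into an iframe so visitors can still read it while the duplicated copy stays out of the indexed body of the landing page:

```typescript
// Minimal sketch: inject operator-provided descriptions via an iframe so that
// visitors see them but the duplicated text is not part of the page's own HTML.
// "#operator-description" and the src URL are placeholders for illustration.
const holder = document.getElementById("operator-description");
if (holder) {
  const frame = document.createElement("iframe");
  frame.src = "/partials/operator-itinerary.html"; // duplicate content served separately
  frame.title = "Operator-provided itinerary and resort information";
  frame.style.width = "100%";
  frame.style.border = "0";
  frame.height = "420";
  holder.appendChild(frame);
}
```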

 

Should You Still Worry About Google Panda?

Google's softening of the Panda algorithm is at odds with (or perhaps a result of) content becoming an integral part of most SEO strategies. As Panda changes, so do the tactics we employ to beat it, and we need to start thinking about content differently when it comes to commercial landing pages.

You can employ a content marketing agency or become a content marketer yourself, but unless you look at it objectively you'll end up with a blog full of awesome frickin' content while your business fails to sell itself effectively on the pages that you need people to land on and convert. The last thing you want is a blog post that convinces a visitor to buy your product, and a product page that convinces them to buy it from someone else.

Take a look at your landing pages and start asking the tough questions.
  • Is Your Content Only There for Google? Again, we're not talking about cloaking. Users can see it, but they also won't read it. Your "150 words of unique copy" might as well be white font on white background. If the intention of your landing page is to sell your product then you need to write copy that will do that for you; your content should be your sales assistant.
  • Does Your Content Answer All the Questions? Find out what people are searching for – if you think people are looking for your product when they're asking a question, you need to make sure your content answers it effectively. Not many people enter "where do I buy this from?" or "should I buy this product?" into Google, and yet those are the questions SEO professionals instinctively answer in the copy: "we sell this" and "you should definitely buy it".

 

Summary

People are asking "why should I buy this instead of that?" and "what's in it for me?" If your content can't answer those questions, then you should link to someone who can.

Gone are the days when you needed to hoard your link equity. If your content doesn't answer those questions, Google will know because people will probably go back and search again.


Article Post @ Search Engine Watch
 