Just came across an interesting post by Bill Slawski about a patent granted to Google last week titled ‘Search result ranking based on trust’. There’s been a lot of talk about trustworthiness being the next big factor in search engine algorithms, but it makes me wonder how useful it would be in reality.

When I use Google to search for something, what I’m really looking for is the most relevant result that will answer my query. To be honest, I often couldn’t care less how much I personally trust the source, as long as it’s reputable (maybe reputation is a better factor?).

Where I can see a trust (or rep) factor being useful is in weeding out spam results that have achieved a really good natural search ranking. Beyond that I’m not convinced it’s required for normal web search. It might be useful as an option, or in a search through social streams like Twitter, but here I’m referring to the kind of search you do through http://www.google.com.

You see, to some extent trust is in the eye of the beholder, and that makes it a very difficult thing to turn into an algorithm. A source can be trustworthy to one person and not at all to another, depending on many factors, so putting an accurate trust rating on web pages is going to be extremely difficult (I think).
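To make the subjectivity point concrete, here’s a toy sketch (emphatically not Google’s patented method; all names and weights are mine) of blending a per-user trust weight into a relevance score. The same pages end up ranked differently for two users with different trust profiles:

```python
# Toy illustration: mix relevance with each user's subjective trust in a
# source. Because trust differs per user, no single global "trust score"
# reproduces everyone's preferred ordering.

def score(page, user_trust, alpha=0.7):
    """Weighted blend of core relevance and the user's trust in the source."""
    relevance = page["relevance"]                 # assume 0..1 from the ranker
    trust = user_trust.get(page["source"], 0.5)   # default: neutral trust
    return alpha * relevance + (1 - alpha) * trust

pages = [
    {"url": "a.example/post", "source": "blog-a", "relevance": 0.9},
    {"url": "b.example/page", "source": "site-b", "relevance": 0.8},
]

alice = {"blog-a": 0.1, "site-b": 0.9}   # Alice distrusts blog-a
bob   = {"blog-a": 0.9, "site-b": 0.2}   # Bob trusts it

def rank(user):
    return [p["url"] for p in sorted(pages, key=lambda p: -score(p, user))]

print(rank(alice))  # site-b first
print(rank(bob))    # blog-a first
```

With identical relevance inputs, Alice and Bob get opposite orderings, which is exactly why a single algorithmic trust rating is so hard to pin down.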

Would Google’s time be better spent working on relevancy ranking and using trust sparingly as a factor to filter out spam results? Or do you think trust has a bigger place in the future of search algorithms (maybe trust/rep on news/blog search)? Interested to hear your thoughts…


There’s an interesting post over on the Nielsen Wire blog from Jon Gibs (their VP of Media Analytics) about the methods people are now using to discover content. Their findings show that a lot of surfers now make social media their starting point when trying to find information online. 18% of the people Nielsen quizzed said they start a search for new information using Wikipedia, blogs or social networks (such as Facebook or Twitter). That’s a pretty big percentage, especially when you consider that only 37% said they started their hunt on a search engine (traditionally the obvious starting point).

It’s not really a surprise to see web users learning to trust information recommended by their peers, or info that comes from within their sphere of influence on social networks. Recommendation is a powerful thing online, and crowdsourced recommendations are a great way to answer certain queries. However, as networks are inherently distributed, are users as likely to get an accurate answer to their more complex information needs as they would on a search engine?

In the same report, 26% of respondents who identified themselves as socializers (the ones likely to start on a social network) said they feel ‘there is too much information online’. Perhaps this is what draws them to social networks for their information searches: the fact that someone has recommended the information gives them confidence that they can drill through the dross and find the nugget they are looking for. However, if they feel overloaded now, that is only going to get worse as networks grow.

Social network contacts are never going to be able to answer all the questions that intelligent use of a search engine can (unless your network includes some of the finest minds). If we, as web users, rely too heavily on social networks for finding information, are we going to lose the ability to really search when we need to? Search can be a bit of an art form: some people struggle to think laterally when searching for a niche topic, while others get there in one or two searches by cleverly using word combinations and booleans. Could reliance on social networks for answers destroy the art of searching and data mining that the web has historically been so good at encouraging? Social networks are great for recommendation and serendipitous discovery of content, but how do they cater for people who just want their questions answered? Do they need to branch more deeply into indexing and search, or perhaps encourage search to begin on their networks by providing the tools to do so?

I’m not offering any answers, as this is a shift in habits that has been underway for a long time. But as more users become socializers (inevitable), and information becomes more and more realtime, someone needs to provide a way for them to divine the realtime stream for the information they seek. Either that, or search engines need to provide new, more social ways to search their indexes. What do you think?

PS. Just in case you think I mean divine in the Biblical sense, have a look at this.

Search engine marketers aren’t having an easy time with Google these days.

Google have made a change to the search results interface which aims to make it more Digg/wiki-like, allowing users to move results up and down the list, delete sites from a results set and even add sites into a set of results.

For the user this is actually a really nice piece of functionality, as it allows you to tailor search results to make them more relevant to you. I’m assuming it’s all stored in your web history, so future searches keep the customisation.

What this does is make it really hard for a search engine marketer to know whether what they see as the top ten results on Google is what users are seeing. In most cases the answer is now probably no.

I think it’s a great move though, and it could actually help focus the search engine optimisation industry on making pages more relevant by improving content and engaging users, as that is what will encourage people to keep a result high up their list!

Full details on the Google Blog.

It seems the mobile web is beginning to come of age at last. Could it be down to the iPhone? The growing number of phones with wireless built in? Better mobile web apps? An increasing addiction to the internet, so you just have to get at it wherever you are? Personally, I think it’s down to our increasing need for data, connections with our networks and access to email on the go. The iPhone has definitely made a big difference, but I believe it’s social networks which will really kick-start the mobile phenomenon. Just look at Japan and the i-mode surge a few years ago: the majority of apps used were related to social uses and email.

Anyway, ZDNet are reporting that Google have seen a 20% increase in mobile search usage.

According to SEMPO (the Search Engine Marketing Professional Organization) it does.

Apparently money is shifting into search and away from print and classified at an increasing rate. The reason for this, I’d surmise, is that search is being seen as a way to follow consumers rather than just putting an ad in front of them. It’s now widely accepted that most markets need to be active in search, so it’s natural for spend to shift towards it.

Key findings from the SEMPO study are:

  • The North American SEM industry grew from $9.4 billion in 2006 to $12.2 billion in 2007, exceeding earlier projections of $11.5 billion for 2007.
  • North American SEM spending is now projected to grow to $25.2 billion in 2011, up significantly from the $18.6 billion forecast a year ago.
  • Marketers are finding more search dollars by poaching budget from print magazine spending, website development, direct mail and other marketing programs.
  • Paid placement captures 87.4 percent of 2007 spending; organic SEO, 10.5 percent; paid inclusion, 0.07 percent, and technology investment, 1.4 percent.
  • Google AdWords remains the most popular search advertising program, but spending on both Google and Yahoo sponsored search has decreased from a year ago.


The shift to search is great for all the SEO agencies out there, but it’s also going to make their jobs a lot harder, as they will have to work for their money to get clients to the top of the listings. As competition grows, it becomes more difficult to achieve dramatic improvements in position; some SEO agencies have had an easy ride in recent years, and that’s going to change.

Other developments will also affect SEO, such as the introduction of semantic search technology (as announced by Yahoo recently). Developments like this could change the rankings entirely, and again will mean agencies have to stay ahead of the game and work hard (not a bad thing).

So says Tim Berners-Lee in this article on the future of the web, search and semantic technologies over on the Times website.

I tend to agree with him, unless Google move into the semantic search space pretty quickly. With Yahoo announcing support for semantic mark-up within their search index, Google will surely not want to be left behind.

I’d like to think the future of Google will embrace semantic technologies and make it a real ‘discovery engine’, surfacing links of high relevance to searchers through a much stronger understanding of the content within.

As an aside, one thing I’ve been thinking would make a nice app is a semantic web robot which you could set off to scour the web for content. With its added semantic features (rather than the more usual boolean, profile-based matching), it could learn as it went by letting you score results for relevance to you. The first really intelligent agent?
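The learning-robot idea above can be sketched very roughly. This is a hypothetical toy, not an implementation of any real agent: it scores pages by weighted keyword overlap and nudges the weights when the user rates a result, which is the “learn as it goes” loop in miniature:

```python
# Toy "learning agent" sketch: per-keyword weights are reinforced or
# dampened by user feedback, so later results drift towards what the
# user actually found relevant.

class LearningAgent:
    def __init__(self, keywords):
        self.weights = {k: 1.0 for k in keywords}

    def score(self, text):
        """Sum the weights of the tracked keywords present in the text."""
        words = text.lower().split()
        return sum(w for k, w in self.weights.items() if k in words)

    def feedback(self, text, liked, step=0.5):
        """Reinforce (or dampen) the keywords present in a rated page."""
        words = text.lower().split()
        for k in self.weights:
            if k in words:
                self.weights[k] += step if liked else -step

agent = LearningAgent(["semantic", "search", "robot"])
agent.feedback("a post about semantic markup", liked=True)
agent.feedback("generic search engine news", liked=False)
# "semantic" now outweighs "search", so semantic pages score higher
print(agent.weights)
```

A real agent would obviously need crawling, richer features and a proper learning rule, but the feedback loop is the essential ingredient.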

So Yahoo recently announced their Open Search platform. Now more details are emerging: Yahoo have announced they will support semantic mark-up, making use of the structured, meaningful data that can be applied to web pages to help them index content better and serve up more relevant results.

This is a big step forwards and if released into the main Yahoo Search will surely help them in their fight for users with Google and Microsoft.

Relevance is king in the search engine world. Being able to interpret pages by more than just the standard signals of content density and link equity has the potential to deliver a much more relevant result set for every search. As usage of semantic mark-up and web standards increases, this could give Yahoo an edge they badly need.
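To show why structured mark-up beats guessing from raw text, here’s a rough stdlib-only sketch. The class names come from hCard, a real microformat, but the extractor itself is purely illustrative and not tied to anything Yahoo have described:

```python
# Sketch of why semantic mark-up helps an indexer: with hCard-style class
# names, structured fields can be lifted straight out of the HTML rather
# than inferred from surrounding prose.
from html.parser import HTMLParser

class HCardExtractor(HTMLParser):
    FIELDS = {"fn", "org", "locality"}   # a few hCard property names

    def __init__(self):
        super().__init__()
        self.current = None   # field name whose text we're inside, if any
        self.data = {}

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        hits = self.FIELDS.intersection(classes)
        if hits:
            self.current = hits.pop()

    def handle_data(self, data):
        if self.current:
            self.data[self.current] = data.strip()
            self.current = None

html = ('<div class="vcard"><span class="fn">Jane Doe</span>'
        ' works at <span class="org">Acme</span></div>')
p = HCardExtractor()
p.feed(html)
print(p.data)  # {'fn': 'Jane Doe', 'org': 'Acme'}
```

A name and an organisation come out as labelled fields, which is exactly the kind of meaning an engine can use to rank by relevance rather than keyword density.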

There hasn’t been a major move to improve relevance in search results for years, so this could give SEOs something to keep them busy. Rather than following the usual tactics of copy optimisation and ensuring pages are well formed, developers will now need to use the relevant semantic tags to add meaning to their pages.

The one thing that will bring users flooding in is an engine that finds a way to deliver highly relevant results. Returning three truly relevant links is far more useful than delivering one thousand arbitrarily ordered ones. I for one would immediately switch to an engine that gets semantic search right.

I hope to see this implemented asap; Yahoo need to move quickly if they’re to have any chance of capitalising on this. Google will be hot on their heels otherwise…