October 20, 2009
Just came across an interesting post by Bill Slawski on the subject of a patent granted to Google last week titled ‘Search result ranking based on trust’. There’s been a lot of talk about trustworthiness being the next big factor in search engine algorithms, but it makes me wonder how useful it would be in reality.
When I use Google to search for something what I’m really looking for is the most relevant result I can find which will answer my query. To be honest, I often couldn’t care less how much I trust the source as long as it’s reputable (maybe rep is a better factor?).
Where I can see a trust (or rep) factor being useful is in weeding out spam results that have managed to earn themselves a really good natural search ranking. Beyond that I’m not convinced it’s required for normal web search (though it could be useful as an option, or in search through social streams like Twitter; here I’m referring to the kind of search you do through http://www.google.com).
You see, to some extent trust is in the eye of the beholder and a very difficult thing to turn into a scientific algorithm. A source could be trustworthy to one person and not at all to another depending on many factors. So to really put an accurate trust rating on web pages is going to be extremely difficult (I think).
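Just to illustrate how subjective that is, here’s a toy sketch (all names, fields and weights here are hypothetical and have nothing to do with Google’s actual patent) of a per-user trust rating blended into a relevance score. The same two results order differently depending on whose trust ratings you apply:

```python
# Illustrative only: blend a subjective, per-user trust rating into ranking.
# Every name and weight below is made up for the sake of the example.

def rerank(results, user_trust, trust_weight=0.2):
    """Re-order results by mixing relevance with this user's trust rating
    for each source (unknown sources default to a neutral 0.5)."""
    def score(r):
        trust = user_trust.get(r["source"], 0.5)  # subjective, varies per user
        return (1 - trust_weight) * r["relevance"] + trust_weight * trust
    return sorted(results, key=score, reverse=True)

results = [
    {"source": "example-a.com", "relevance": 0.9},
    {"source": "example-b.com", "relevance": 0.8},
]
# One user distrusts example-a.com entirely; another trusts it fully.
print(rerank(results, {"example-a.com": 0.0})[0]["source"])  # example-b.com
print(rerank(results, {"example-a.com": 1.0})[0]["source"])  # example-a.com
```

The point being: any fixed, global trust score throws away exactly the per-person variation that makes trust meaningful.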
Would Google’s time be better spent working on relevancy ranking and using trust sparingly as a factor to filter out spam results? Or do you think trust has a bigger place in the future of search algorithms (maybe trust/rep on news/blog search)? Interested to hear your thoughts…
September 16, 2009
There’s so much talk about news at the moment with the moves to bring back paywalls (as I’ve written about previously here) and the general nervousness among old media houses that the likes of Google are stealing their thunder (and their ad dollars). Everyone is musing about how this will play out, who will be the winners and what is actually the best way to deliver news to web users. A couple of things have struck me in the last couple of days that deserved a blog post.
Firstly, Google has launched a new product from its Labs department. Google Fast Flip is being touted as a new way to read the news that brings the web experience more in line with newspapers. Basically, Google has taken screenshots of web pages containing news stories and put them into a neat user interface that allows you to flip between stories and zoom in on them before deciding whether to click through to the original article.
It’s a really nice way to browse and a great UX for finding serendipitous content, as you might stumble across something you’re interested in. And of course Google’s search power is under the hood so you can narrow down the content available, isn’t it?

Apparently not. It seems Google hasn’t put its search power to good use by letting you search the full content of the articles available (I’ve tested it by searching for specific content within the stories). It really is just a nice UI for flipping and browsing through news stories; it doesn’t put that UI on top of the power of Google News.
Secondly, Microsoft recently unveiled a vision of a ‘Next-Generation Newspaper’ as part of a response to a request from the Newspaper Association of America asking for ideas about ‘monetizing digital content’. The next-generation newspaper delivered as part of their response is remarkably similar to Tweetdeck, as noted by Nieman Journalism Lab today.
The news-deck delivers content in an RSS-reader kind of way using a stack of Microsoft technology, content and advertising products. It looks good, and the promised semantic search, personalisation and contextual awareness are appealing.
So, is either of these a ground-breaking new way to consume news, and does either hold enough promise to herald the future of web-based news reading? I don’t think so. In my opinion they both offer nice solutions, but to different types of user scenario.
Google Fast Flip is a great UI that allows for casual browsing of the news in much the same way as I read the Sunday newspapers. It’s a coffee-and-bacon-sandwich type situation, not a serious trawl-for-information experience for me. It’s too much detail upfront with not enough ability to refine for anyone serious about (or used to) discovering content online, or even using RSS readers. At least with an RSS reader you can choose who to subscribe to and read articles based on headlines or quick glances at the content. Yes, this is a quick way to thumb the pages, but it’s a little gimmicky for me and doesn’t fulfil my needs for consuming information. Google partnered with top newspapers to create this, but I don’t think it helps them. Digital news content is consumed in a different way to paper content, and I think this needs more work before it could ever become the norm for online news reading. Must admit it’s nice on an iPhone though!
Microsoft’s news-deck approach suits my needs much better. It has the search-and-refine features I’d demand and a familiar UI that works for dashboard-type experiences. Is it the future? I don’t think so. I actually think it’s a little lame of Microsoft to propose a solution like this to the challenge from the NAA; it’s too similar to RSS readers and looks too much like a cross between Netvibes and Tweetdeck. In fact, if you read the details of their response (available from the Nieman Labs post) it really does just sound like a next-gen Netvibes to me. It’s not really the future, more what we have now, mk2.
Now, I’m not sure what the future of news consumption is, but I’m pretty sure this isn’t it. If you could combine the two you might be getting somewhere: if Fast Flip allowed me to search and read all the content without clicking off to the news source it might be better, and if Microsoft’s dashboard wasn’t just a rehashed Netdeck/Tweetvibe it might get me more excited. Personally I’m seeking something more intelligent, with better data-mining possibilities, to satisfy my search for news (and I wouldn’t limit it to news; any solution should be my medium for information consumption as a whole).
One thing this does make apparent, though, is that as much as the newspapers and old media want to pull their content in and keep it close, they are going to have to relinquish their hold and set their content free, because whatever the user experience of the future is, it will demand that.
November 21, 2008
Search engine marketers aren’t having an easy time with Google these days.
Google have made a change to the search results interface which aims to make it more Digg/Wiki like by allowing users to move results up and down the list, delete sites from a results set and even add sites into a set of results.
For the user this is actually a really nice piece of functionality, as it allows you to tailor search results to make them more relevant to you. I’m assuming it’s all stored in your web history so future searches keep the customisation.
What this does do is make it really hard for a search engine marketer to know whether what they see as the top ten results on Google is what users are seeing. In most cases, the answer is now probably no.
I think it’s a great move though and could actually help to focus the search engine optimisation industry on making pages more relevant through improving content and engaging users as that is what will encourage them to keep a result high up their list!
Full details on the Google Blog.
November 6, 2008
Sometimes even a mega-company like Google can get beaten to the mark with a new piece of functionality that they should really be providing themselves. The reasons for this? Perhaps they overstretch themselves with their range of products and can’t focus enough to add the bells and whistles we’d all like? Or maybe they get a product to the point where it gains traction and keeps acquiring users, then leave it open for the rest of us to add the bells-and-whistles functional pieces?
Whatever the reason, there are occasions when great additions are made to their services for which they aren’t responsible. The latest of these I’ve come across is something called Glync, created by a company called Virante.
It’s a Firefox plugin which, once granted access to your data from Google Webmaster Tools, shows you a graph of the incoming links to your site and how they change over time. An extremely useful tool, but in my opinion one which should be a standard feature of Google’s webmaster tool set.
How long will this plugin be useful? Until Google decides to offer it themselves I’d say. That said, it is a very nice piece of functionality and the free version is most useful.
Troogle? TravGoogle? GooTravel? Whatever you imagine the name might be, the thought of Google jumping into the travel arena has operators and agents either salivating at the thought of the traffic and sales it could drive or quaking with fear at the thought of Google owning the customer experience. Rumours of Google’s intentions keep appearing, but up till now there hasn’t been any obvious functionality leaking out of the Googleplex which could support a serious move into travel. That is, until now (or at least, I’ve only just found it)…
There’s a new(ish) beta on Google for a Merchant search (only appears when you’re in the right place searching the right thing). This is basically a price comparison engine, currently only serving the loan market. My question is, could this power a travel price comparison engine and so switch users to this type of interface and functionality if they search for the right travel related keywords? Could Google then become an affiliate to travel companies? I’d imagine TravelSupermarket would hope not!!
March 19, 2008
The Google homepage rarely changes in any way, except for their tradition of adding themed logos to go with the season, holiday or event. Japan, though, has had a treat with a new look for the homepage at http://www.google.co.jp.
The new design is not available in every location (I can’t see it) but it looks much better with the addition of tabs (see below).
It really would be nice to see a redesign of the main Google homepage. With all the services they offer, I’m sure it’s about time they provided a better way for them to be accessed from Google.com. The tabbed design would allow them to make their services more prominent while keeping them easy to access.
March 19, 2008
It seems the mobile web is beginning to come of age at last. Could it be down to the iPhone? The growing number of phones with wireless built in? Better mobile web apps? An increasing addiction to the internet, so you just have to get at it wherever you are? Personally, I think it’s down to our increasing need for data, for connections with our networks and for access to email on the go. The iPhone has definitely made a big difference, but I believe it’s social networks which will really kick-start the mobile phenomenon. Just look at Japan and the iMode surge a few years ago: the majority of apps used were related to social uses and email.
Anyway, ZDNet are reporting that Google have seen a 20% increase in mobile search usage.
March 18, 2008
So says Tim Berners-Lee in this article on the future of the web, search and semantic technologies over on the Times website.
I tend to agree with him unless Google move into the semantic search space pretty quickly. With Yahoo announcing support for semantic mark-up within their search index Google will surely not want to be left behind.
I’d like to think the future of Google will embrace semantic technologies and make it a real ‘discovery engine’, surfacing links of high relevance to searchers through much stronger understanding of the content within.
As an aside, one thing I’ve been thinking would be a nice app is a semantic web robot which you could set off to scour the web for content. With the added semantic features (rather than the more usual boolean, profile-based robot) it could learn as it went, by allowing you to score results for relevance to you. The first really intelligent agent?
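A toy sketch of the learning part of that idea, with bag-of-words term weights standing in for real semantic features (so purely illustrative, not how a semantic agent would actually represent meaning): the user rates results, and the agent nudges its weights so similar content ranks higher or lower next time.

```python
# Illustrative relevance-feedback loop: user ratings adjust per-term weights.
from collections import defaultdict

class LearningAgent:
    def __init__(self):
        self.weights = defaultdict(float)  # learned per-term preferences

    def score(self, text):
        # Sum the learned weight of each word in the document.
        return sum(self.weights[t] for t in text.lower().split())

    def feedback(self, text, rating, rate=0.1):
        # rating in [-1, 1]: positive reinforces the document's terms,
        # negative penalises them.
        for t in text.lower().split():
            self.weights[t] += rate * rating

agent = LearningAgent()
agent.feedback("semantic web search", +1.0)    # user liked this result
agent.feedback("celebrity gossip news", -1.0)  # user disliked this one

docs = ["semantic search tips", "gossip news roundup"]
print(max(docs, key=agent.score))  # the semantic doc now scores higher
```

A real agent would learn over structured semantic data rather than raw words, but the feedback loop would be the same shape.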
March 17, 2008
It was announced a couple of weeks ago that leading ISPs were planning to use Phorm as a platform to serve targeted adverts to their customers. It’s been touted as a great way to provide more relevant ads to users, and all the initial talk seemed like PR spin designed to mask any potential privacy issues.
Now at last the privacy issues are getting a good airing!
Personally I’m against my ISP using the data of my surfing habits for advertising purposes. I use my ISP for access to the internet, I do not expect them to share my data on surfing habits with anyone (unless asked to by the authorities…).
Other blogs are asking what the fuss is about and comparing Phorm to the behavioural targeting technologies in use on retail websites. I disagree with this completely: Phorm collects data at the ISP level and shares it with any website which serves adverts through Phorm, which makes it far more pervasive.
An interesting question has to be asked, though: how does this differ from Google / Doubleclick? If Google starts to share behavioural search data with Doubleclick’s ad-serving platform, isn’t that going to be similarly invasive of users’ privacy? Potentially; although at least we expect that from Google as an ad-revenue-based business…
Interestingly, the BBC has just published a story that states that the Foundation for Information Policy Research has claimed that Phorm could well be illegal. They believe Phorm contravenes the Regulation of Investigatory Powers Act 2000 (RIPA), which protects users from unlawful interception of information.
This has the potential to get very interesting and could open up other networks and ad serving technologies to scrutiny.
So Yahoo recently announced their Open Search platform. Now more details are emerging and Yahoo have announced they will be supporting semantic mark-up and making use of the structured, meaningful data that can be applied to web pages to help them index better and serve up more relevant results.
This is a big step forwards and if released into the main Yahoo Search will surely help them in their fight for users with Google and Microsoft.
Relevance is king in the search engine world. Being able to interpret results by more than just the standard search algorithm signals of content density and link equity has the potential to deliver a much more relevant result set for every search. As the use of semantic mark-up and web standards increases, this could give Yahoo an edge they badly need.
There hasn’t been a major move to optimise relevance in search results for years, so this could give SEOs something to keep them busy. Rather than following the usual tactics of copy optimisation and ensuring pages are well formed, developers will now need to use the relevant semantic tags to add meaning to their pages.
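To illustrate what that mark-up buys an indexer, here’s a toy Python sketch that pulls structured fields out of hCard-style mark-up. The class names (`fn`, `org`, `locality`) are just an example of the microformats of the day; the details of what Yahoo actually indexes are not something I’m claiming here.

```python
# Illustrative only: extract microformat-style fields an engine could index
# directly, instead of guessing meaning from raw text.
from html.parser import HTMLParser

class MicroformatExtractor(HTMLParser):
    FIELDS = {"fn", "org", "locality"}  # example hCard-style class names

    def __init__(self):
        super().__init__()
        self.current = None  # field name we're currently inside, if any
        self.fields = {}     # extracted structured data

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        hits = self.FIELDS.intersection(classes)
        if hits:
            self.current = hits.pop()

    def handle_data(self, data):
        if self.current:
            self.fields[self.current] = data.strip()
            self.current = None

page = '<div class="vcard"><span class="fn">Jane Doe</span> works at <span class="org">Acme</span></div>'
p = MicroformatExtractor()
p.feed(page)
print(p.fields)  # structured name/organisation fields, no guesswork needed
```

The same page parsed as plain text is just a sentence; parsed with its semantic tags, it’s data.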
The one thing that will bring users flooding in is an engine finding a way to deliver highly relevant results. Returning three truly relevant links is far more useful than delivering one thousand arbitrarily ordered links. I for one would immediately switch to an engine that gets semantic search right.
I hope to see this implemented as soon as possible if Yahoo are to have any chance of capitalising on this move. Google will be hot on their heels otherwise…