Search has changed a lot, and not just in the way that search engines work (with Panda & Penguin) but in the way that we use them.
We used to type specific keywords and would often get a lot of false positives, especially when the same word has multiple meanings and relies on context; a search for the word “orange” might bring up the fruit, the colour or the mobile phone company. The verb “run”, with 606 different meanings, is the largest single entry in the Oxford English Dictionary, just ahead of “set”, at 546 meanings. Ambiguity has been a fly in the ointment of the computer’s understanding of natural language for a while.
But increasingly, connected consumers are able to ask questions of Google (and other search engines) in a more colloquial, conversational way and be understood.
“When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.” Through the Looking-Glass : Lewis Carroll
Under the old methods, a search for the term “that film with Harrison Ford and the androids” shouldn’t work, but it does, and it gets this result:
It works because the search engine is trying to understand not just the specific query that was made but the context and the implicit query. It’s not 100% perfect of course, but it’s getting there (very quickly) and the reason it’s so important and exciting (yes, it is, honestly) is this: Voice Search.
While Apple’s Siri is the reality TV star of voice recognition software, Google have been quietly busy, polishing and integrating voice search into Chrome, Android phones and Google Glass at a much deeper and more practical level. Many iPhone users tried Siri and stopped using it because it didn’t work very well and Apple hadn’t bothered to integrate it in any useful way (you know, like the ability to launch an app, for example). Meanwhile Google’s approach to voice recognition had been focussed on search, as this video illustrates.
Clever, eh? But it doesn’t stop there.
Contextual Search & Session-Based Matching
Google recently added the ability to handle more colloquial, natural-language search terms, as shown above, but the next move is to understand each individual and what that person is looking for.
Let’s say you are out shopping in a department store, and an assistant is helping you look for baby clothes & baby items. You ask the assistant “where can I buy a bottle”; what would you expect to happen next? You’d expect them to assume you meant a baby bottle and not to send you down to the off-licence, right?
Well, that’s what session-based matching does: based on your most recent search history, Google’s Voice Search uses Google’s Knowledge Graph to relate successive searches to the same subject. In my example above, it will act like the shop assistant and assume you’re probably interested in baby products, look for bottles related to the previously searched term “baby” and show relevant results, even though you didn’t actually use the keyword “baby” that particular time.
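To make the idea concrete, here is a toy sketch of session-based disambiguation. This is purely illustrative — the mini “knowledge graph”, the topic labels and the function names are all my own inventions, not Google’s actual algorithm — but it shows the principle: an ambiguous query inherits topical context from earlier searches in the same session.

```python
# Toy sketch of session-based query disambiguation (illustrative only;
# not Google's real implementation). Terms map to the topics they can
# relate to, and an ambiguous query is biased toward topics the user
# has already searched for in the current session.

# Hypothetical mini "knowledge graph": term -> set of related topics.
TOPIC_GRAPH = {
    "baby": {"baby"},
    "bottle": {"baby", "drink"},   # ambiguous: feeding bottle or wine bottle
    "pram": {"baby"},
    "wine": {"drink"},
}


def expand_query(query, session_history):
    """Disambiguate a query using topics from earlier searches in the session."""
    query_topics = TOPIC_GRAPH.get(query, set())
    # Collect the topics the user has already shown interest in.
    session_topics = set()
    for past_query in session_history:
        session_topics |= TOPIC_GRAPH.get(past_query, set())
    shared = query_topics & session_topics
    if shared:
        # Bias the ambiguous query toward the topic shared with the session.
        return f"{query} ({'/'.join(sorted(shared))})"
    return query


print(expand_query("bottle", ["baby"]))   # bottle (baby)
print(expand_query("bottle", []))         # bottle
```

With a prior search for “baby” in the session, “bottle” is interpreted as a baby bottle; with an empty session it stays ambiguous — the same behaviour as the helpful shop assistant above.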
Contextual search results are similar and may change according to the device used, location, time (and possibly the weather, time of year, etc.). Different results might appear for the same search term, depending on the individual’s specific circumstances. If a person is on a mobile in London and they search for “Tubes” they may get different results to when they’re at a PC in Milton Keynes. This ability to target context potentially enables a business wanting to promote, for example, a commuter bike to target its advertising at people when they are likely to be commuting on a sweaty train and idly surfing on their mobiles (by filtering by time, location, device, etc.). The same adverts would not show on a Saturday afternoon, when the same target audience are likely at home or socialising and less open to the advert.
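The commuter-bike scenario above can be sketched as a simple set of context filters. Again, this is a hypothetical illustration — the campaign structure and field names are invented for this example, not a real advertising API — but it shows how device, location, time and day of week combine to decide whether an advert is shown.

```python
# Illustrative sketch of context-based ad targeting (hypothetical rules,
# not a real advertising API). An advert shows only when the searcher's
# context matches every filter the campaign specifies.

from datetime import datetime


def matches_context(campaign, context):
    """Return True if the searcher's context satisfies all campaign filters."""
    if campaign.get("devices") and context["device"] not in campaign["devices"]:
        return False
    if campaign.get("locations") and context["location"] not in campaign["locations"]:
        return False
    start_hour, end_hour = campaign.get("hours", (0, 24))
    if not (start_hour <= context["time"].hour < end_hour):
        return False
    # weekday() returns 0-4 for Monday-Friday, 5-6 for the weekend.
    if campaign.get("weekdays_only") and context["time"].weekday() >= 5:
        return False
    return True


# A commuter-bike campaign aimed at mobile users in London during rush hour.
campaign = {
    "devices": {"mobile"},
    "locations": {"London"},
    "hours": (7, 10),
    "weekdays_only": True,
}

rush_hour = {"device": "mobile", "location": "London",
             "time": datetime(2013, 6, 3, 8, 15)}   # a Monday, 08:15
saturday = {"device": "mobile", "location": "London",
            "time": datetime(2013, 6, 8, 15, 0)}    # a Saturday afternoon

print(matches_context(campaign, rush_hour))  # True
print(matches_context(campaign, saturday))   # False
```

The same searcher on the same phone sees the advert on a weekday morning but not on a Saturday afternoon — exactly the filtering by time, location and device described above.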
You Said This Was Exciting
Yes, I did, didn’t I? OK.
Once upon a time we interacted with computers through a “command line”, typing instructions one letter at a time. Using a computer required specialist knowledge back then, until drag-and-drop GUI interfaces were adopted in Windows & OSX (et al), reducing the learning curve. Today young children and OAPs happily use touch-screens to browse the web, play games and perform various tasks because the touch-screen interfaces of tablets like the iPad and smartphones have become so focused & refined.
“Voice” is the touch screen of search. It makes everyone, from a child to an OAP, a power-user. There’s no need to use esoteric “+” symbols or type “define:” or “calculate:” to force the search engine to provide the specific answer you seek because, over time, the search engine understands what is meant and what is required. This is, potentially, a pivotal moment in human-computer interaction and the start of a whole new, and much more natural, way of using technology.
To be honest, it’s not as accurate or reliable as we’d like just yet, but get used to it now (especially if you use Adwords) and familiarise yourself with this shift in users’ search behaviour. It’s unclear how much of an impact this will have on businesses’ visibility in SERPs, SEO and paid search but, what is clear is that the speed of change is increasing.