It was a masterstroke by Google.
In any given quarter, Google commands between 82% and 87% of global desktop and laptop search engine traffic.
The numbers are even more striking for mobile search.
At more than 95% market share, Google’s dominant position in mobile search would be called a monopoly by almost anybody. It’s not just one country, either. It’s global dominance.
How did Google achieve such dominance in such a competitive market?
It got an early start — indexing the world wide web — and built a superior search engine for the early internet.
Then Google mastered the algorithms that match consumers’ data with advertisers trying to reach consumers — those who might be good buyers of products and services.
The advertising revenues generated Google’s free cash flow, which the company invested in the search engine, the advertising technology, and creating an operating system for mobile devices — Android OS.
And this was Google’s masterstroke…
Google spent billions in research and development to create a smartphone operating system.
And then, it gave it away for free to smartphone manufacturers.
Google maintains and updates Android OS throughout the year with improvements and bug fixes. And it doesn’t cost the smartphone manufacturers a thing.
For smartphone manufacturers — faced with the decision to incur massive expenses to write their own operating system and maintain it, or just take Google’s Android OS — it was an easy decision.
Google’s strategy was to become the path of least resistance when it came to a mobile operating system. And it worked, resulting in a 95% global market share for mobile search engines, and about 70% of the world’s smartphone operating system market.
This is why I always chuckle when someone says that Apple is a monopoly.
In order to get Google’s Android operating system (OS), the manufacturers only had to agree to two key terms. Google would be the default search engine. And Google could collect data from the phones. That was clearly an acceptable deal.
Some of us might be wondering how Google has 95% of the mobile search market, with only 70% of the smartphone OS market. The answer is money. Google pays Apple around $18 billion every year to be the default search engine on iPhones, which have only 28% global market share.
That might sound like a huge sum, but consider this. Alphabet (Google) generated about $307 billion in revenue last year, resulting in about $70 billion in free cash flow.
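To put those figures in perspective, here is a quick back-of-the-envelope calculation using the approximate numbers cited above (all figures are rough, as in the text):

```python
# Back-of-the-envelope math using the approximate figures cited above.
revenue = 307e9        # Alphabet's annual revenue (~$307 billion)
free_cash_flow = 70e9  # Alphabet's annual free cash flow (~$70 billion)
apple_payment = 18e9   # Estimated annual payment to Apple (~$18 billion)

pct_of_revenue = apple_payment / revenue * 100
pct_of_fcf = apple_payment / free_cash_flow * 100

print(f"{pct_of_revenue:.1f}% of revenue")     # about 5.9% of revenue
print(f"{pct_of_fcf:.1f}% of free cash flow")  # about 25.7% of free cash flow
```

In other words, the payment to Apple runs under 6% of Alphabet’s annual revenue.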
And almost 80% of total revenue comes from Google’s advertising related to search, which includes ads on YouTube as well. (Google earns money whenever people click on ads in Google search results, or on the more graphic “display” ads splashed across thousands of websites online. Each click means advertisers pay Google a small fee.)
Paying Apple $18 billion a year to protect its monopoly is nothing for a company like Alphabet (Google).
It has been hard to imagine unseating Google from its monopoly position. It controls the search experience for almost every smartphone in use today. “Just Google it” is the common expression, and “Googling” has become an active verb to represent what millions of us do every day.
But what most people don’t see is the other side.
Google collects data on us every minute of the day. How we use our apps, which websites we browse, our behavior on those websites, location history, search history, engagement patterns at various times of the day, and so much more.
It then sells access to that data.
It’s a common misconception that Google directly sells our data. But actually, Google sells access to the data it collects from us… to help advertisers target more effectively.
For example, if someone searches for "running shoes," Google uses this data to allow shoe companies to show ads for their running shoes to that person… as they browse around online. The advertisers don't know who the person is. But they know someone interested in running shoes is seeing their ad.
By analyzing what users search for, watch, and interact with, Google can help advertisers reach specific groups of people who are more likely to be interested in their products or services. This targeted advertising is valuable to advertisers, because it increases the chances that users will engage with their ads.
This may seem quite innocuous, until we understand that Google even uses its dominant position in an effort to influence us and the way that we think. It does this by presenting us with information that is consistent with its desired political ideology. For example, if advertisers don’t play by Google’s rules, they aren’t allowed to run their ads on Google’s advertising networks.
But what about artificial intelligence (AI)? Is that the key? Will that be the technology that knocks Google off its throne, which it’s now held for more than 20 years?
That was the big rumor over the weekend in high tech.
Just yesterday, OpenAI announced a new product related to its large language model (LLM) technology.
The rumor leading up to the event had been that OpenAI would be announcing a new search engine, which would be a threat to Google’s core advertising business.
But that wasn’t exactly what OpenAI announced.
Not only did OpenAI not announce a search engine product, it didn’t even announce the much-anticipated GPT-5.
It announced something arguably bigger: a new product called GPT-4o.
The “o” stands for “omni.” And omni basically means all.
Think of omnichannel marketing, which spans all marketing channels, both online and offline.
And in the case of OpenAI’s new AI, the omni stands for omni-input.
GPT-4o will accept inputs of voice, video (including real-time video), images, data, and audio, with the ability to generate text, audio, or image outputs.
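As a concrete illustration of that omni-input model, here is a sketch of what a multimodal request to GPT-4o might look like. The field names follow OpenAI’s published chat-completions format, but the image URL is a placeholder and no actual API call is made, so treat this as illustrative rather than authoritative:

```python
import json

# Illustrative shape of a multimodal chat request to a GPT-4o-style API.
# Field names follow OpenAI's published chat-completions format; the
# image URL is a placeholder, and no network call is made here.
request_body = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": [
                # One message can mix modalities: text plus an image.
                {"type": "text", "text": "What is happening in this picture?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
}

print(json.dumps(request_body, indent=2))
```

The key point is that a single user turn can carry several input types at once, which is what makes the “omni” branding more than a marketing flourish.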
This latest version is much faster too, enabling responses in as little as 232 milliseconds with an average of 320 milliseconds. This is just about on par with normal human conversation.
These technical descriptions may not sound like much, but what they equate to is the “humanization” of OpenAI’s AI.
It’s remarkable — and to understand it, I recommend checking it out on OpenAI’s blog post here.
Just scroll down to the section on “Model capabilities,” where you’ll find a series of demonstration videos.
Enabling the AI to “see” the user and the surroundings allows it to have a very real and contextually relevant discussion with whoever is using the technology.
GPT-4o can describe an individual, what they’re wearing, their surroundings, and whatever else is within the camera’s view.
Seeing and hearing is believing, which is why checking out the demonstrations is so worthwhile.
All of these types of tasks can be done simply by speaking with GPT-4o in “voice mode,” with very low latency, giving the feel of very human-like interactions.
There’s no longer any need for a chat box; just have a normal conversation.
The media seemed disappointed in the absence of a new search engine product. But they missed the point entirely.
Their framework for search is traditional search, as we use it today. Pull up Google, enter a query, and view the results.
Voice, as a user interface, replaces the need for almost all traditional searches. And it goes further. It can provide friction-free companionship (as we explored in Outer Limits — Are You AIsexual?), serve as a personalized digital assistant, relieve boredom and loneliness, and act as an agent for disintermediation.
For those who haven’t yet seen the movie Her with Joaquin Phoenix, I highly recommend it. It may be uncomfortable for some to watch, but it was an accurate view of the future for a movie that was made in 2013… and it speaks to the future of human-to-AI interaction.
The reality is that GPT-4o is just one step away from an AI capable of interacting for us in the real world.
And, this will be possible all through a voice interface.
And my last point about disintermediation is a critical one...
The desire to disintermediate transactions and communication is a deep cultural shift, driven by software applications and social media.
Younger generations have strongly preferred to disintermediate transactions of any kind. The preference is not to speak with a human in person, or to call customer service by phone. The preference is to text or press a button on an application and be done with it.
Think about Uber, DoorDash, Postmates, etc. Want food? Press a button on an app and it will show up at your front door. You don’t even have to see the delivery person.
Upset with a flight on United? Post on social media how bad @United is (#UnitedSucks) and get contacted by a United customer service agent to address the issue, rather than calling into customer service to discuss the issue.
Most people strongly prefer that perceived conflict and transactional friction be disintermediated, and that goes doubly for those who grew up with a smartphone in their hands.
GPT-4o, and what it will evolve into, will be the ultimate in disintermediation.
Our “agents” (personal digital assistants) will be tasked with handling conflict on our behalf. They will remove the stress of friction and conflict from human interactions.
The downside, of course, is that people will get worse at dealing with conflict and conflict resolution.
But just like water flows to the path of least resistance — downhill — humans will do the same with this newfound technology.
Human-like voice interaction makes search simple and easy. There’s no need for a keyboard any longer. And it happens nearly instantaneously.
So is Google in trouble? Is this the beginning of the end?
Not at all. Things are just getting exciting.
In fact, OpenAI is employing a business strategy not unlike what Google did with smartphones. It’s developing the equivalent of an operating system for AI. The goal is to allow any application developer to gain access to GPT-4o and beyond, and to pay for accessing the technology.
OpenAI is incentivized to continue making its GPT technology faster and cheaper to use. Declining costs come from making the AI more efficient, thus requiring less compute… and by offloading some compute and inference to local devices on the edge of a network, like a smartphone or a tablet.
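To see why falling per-token prices matter for developer adoption, here is a toy cost model for metered API access. The prices below are illustrative assumptions for the sake of the arithmetic, not OpenAI’s actual rate card:

```python
# Hypothetical cost model for metered API access. The per-token prices
# are assumptions chosen for illustration, not OpenAI's actual pricing.
PRICE_PER_MILLION_INPUT = 5.00    # $ per 1M input tokens (assumed)
PRICE_PER_MILLION_OUTPUT = 15.00  # $ per 1M output tokens (assumed)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call under the assumed pricing."""
    return (input_tokens / 1e6 * PRICE_PER_MILLION_INPUT
            + output_tokens / 1e6 * PRICE_PER_MILLION_OUTPUT)

# A typical chat turn: ~1,000 tokens in, ~500 tokens out.
print(f"${request_cost(1_000, 500):.4f}")  # $0.0125 per turn
```

At a fraction of a cent per chat turn, even small improvements in model efficiency compound across billions of calls, which is exactly the dynamic that pulls developers onto the platform.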
Lower costs increase adoption by other developers, just as free distribution did for Google with Android OS. And is this such a surprise, considering that Microsoft — the king of computer operating systems — holds a major stake in OpenAI?
OpenAI’s GPT-4o is too good not to be used. Google knows this, and we can be sure that it’s racing to improve its own Gemini software to have just as much utility as OpenAI’s GPT-4o.
And how about Apple, which has been woefully behind in AI technology? The rumors have been swirling that a deal with OpenAI is pending. In the short term, that would make perfect sense. Apple can integrate with GPT-4o’s APIs and still call its voice assistant Siri; it will just have a major upgrade.
Most consumers will think that Siri has just massively improved, and they’ll love it. Apple can take a portion of the $18 billion a year it receives from Google related to search… and use that to pay for the licensing of OpenAI’s technology.
And when it finally has something good enough developed in-house, it can simply make the change on the backend.
Time is of the essence though.
These new personalized AIs will be very sticky for consumers. And in many cases, humans will develop emotional ties to their new artificial companions. Changing AI assistants will be difficult for many.
“Owning” consumers with a personalized AI is how money will be made with search-related advertising. Trillions of dollars are at stake.
Adoption will be the fastest in history, which is why getting in early is so critical.
We always welcome your feedback. We read every email and address the most common comments and questions in the Friday AMA. Please write to us here.