The Business of Adapting — How Google Plans To Stay Relevant Amidst An AI Revolution

Ben King lays out how Google is navigating the next phase of its growth and what career upskilling looks like in the age of AI.
by Zat Astha

Photo: Google Singapore

“What does a perfect Google look like for you?” I pose to Ben King, managing director of Google Singapore, APAC Agency. My question comes in the wake of Google rolling out a slew of controversial changes to its Search algorithm and its embrace of Artificial Intelligence (AI) in displaying search results. The latter has allegedly caused formerly top-performing websites to be demoted in favour of Reddit and Quora.

“I don’t think perfect exists in this world,” King replies, almost too diplomatically. Perhaps sensing my scepticism, he quickly adds, “We’re living in a world where many people don’t necessarily live by the same ethics and values, and there are many bad actors. Perfect is hard because you’re constantly navigating an evolution of technology and combating bad actors.”

This, I soon learned, would be typical of how King looks at life. There’s King, the Google employee of more than a decade. Then there’s King, the father of two young daughters growing up in a world that, thus far, has not shown itself to be a bastion of all things good and positive. This dichotomy is the lens through which King observes his world: he will staunchly defend the decisions his workplace makes, even as he worries about how those very decisions will shape the values his daughters inherit.

“If a government comes to us and asks for content to be taken down in line with the legal frameworks of the market we’re operating within, we will go ahead and do that.”

Navigating the imperfect

Putting his Google hat back on, King explains that when it comes to managing misinformation and disinformation, Google faces significant challenges. It has safeguards in place that ensure its products are ‘secure by default, private by design’. However, while AI or technology-driven solutions can help augment this, King believes that human intervention is also crucial, underscoring the complexity of the task.

“That can take many different forms, such as trusted flagger programs, where we have organisations or industry bodies that can flag information that might cause concern in a particular country,” he shares, reiterating that Google always works within the legal frameworks of a country. “If a government comes to us and asks for content to be taken down in line with the legal frameworks of the market we’re operating within, we will go ahead and do that.”

Recent reports affirm King’s explanation. In March this year, Thailand’s government asked Google to remove 149 YouTube videos that allegedly defamed the monarchy, a move compliant with the country’s strict Lèse-majesté laws. Google, in response, blocked local access to about 70 per cent of these videos. A month later, Indonesian authorities focused on removing content violating local laws on blasphemy and defamation, leading to multiple takedowns.

Even more recently, last month, UK police authorities requested the removal of around 640 YouTube videos from five users, accusing them of promoting terrorism and violating user terms. Google complied, removing the specified videos, underscoring the diverse and often complex legal frameworks influencing government takedown requests worldwide.

“I don’t think there is a perfect Google,” King reiterates, returning to my initial premise, “because we’re navigating so much change and technological evolution, and also dealing with bad actors. What we can do is have a first line of defence, a combination of technology and human intervention, and work with trusted bodies, whether independent or governmental, to create a high-fidelity content ecosystem.”

A balancing act

Still, despite all the guardrails, precautions, and careful navigation, Google inevitably stumbles due to the sheer scale of its operation. The March 2024 core algorithm update is a prime example. Designed to elevate search quality by purging low-quality content, it inadvertently pushed numerous websites out of search results without warning.

Website owners, blindsided by this purge, saw their traffic and visibility plummet overnight. The update’s severity, targeting even long-standing sites, underscored the precarious nature of digital presence under Google’s ever-changing algorithm.

Meanwhile, Google’s foray into AI hit a notable snag with Gemini — an incident I took pains to mention to King. In early 2024, the chatbot’s image generation model began producing biased and inappropriate results. This misstep highlighted the ethical quandaries and oversight challenges in AI development, stirring public and industry concern about Google’s ability to manage its cutting-edge technologies responsibly.

“I think it’s a very realistic expectation to expect our intent to always be in line with our ethics and values,” says King. “Still, there’s a difference between rolling out new technology at a very early stage and doing something (wrong) intentionally. But coming back to what happened with the Gemini issue, the reality is that we missed the mark. We’re very open about that. Following user feedback, we had a clear set of actions that allowed us to respond.”

The actions King refers to include revamped product guidelines, improvements to launch processes, more robust evaluations, and a range of Red Team exercises, in which security specialists probe the platform for weaknesses within a controlled environment so that vulnerabilities can be fixed before real-world attackers find them.

“These sorts of things will happen because AI, in general, is at an early stage,” King admits. “So there will be some missteps and errors. It’s important to remember that Gen AI is a creativity tool, particularly Gemini. It’s not always going to be 100 per cent reliable or accurate; it’s going to make some mistakes.”

Almost as if anticipating what I may ask next, King immediately adds that Google has always maintained a high bar when it comes to its product offerings. “We’ve sort of created this situation based on the fidelity of our existing products and platforms. We’re very proud of that, and we need to continue to respond in kind. So, yeah, I think there’s a difference between intent and new technology rolling out and not necessarily being perfect, and being able to respond in real-time and in a meaningful way.”

“I don’t think there is a perfect Google because we’re navigating so much change and technological evolution, and also dealing with bad actors.”

The human touch

I press King to elaborate on the steps Google has taken at the back end in light of the Gemini incident. In my view, it is easier to make a big show of responding to a public relations disaster than to fix problems that are detected quietly and never get discussed or panned on social media.

“First of all, we have double-check features,” King says. “Gemini will produce results and then allow a double-check feature, where you can see the sources of that information. This maps back to Google search, enabling people to check the veracity and authenticity of the information they see in real-time.”

He adds that in the realm of “AI-generated photorealistic synthetic audio and images” — deepfakes, as King puts it simply — Google is taking steps to ensure its products carry metadata labelling and embedded watermarking with SynthID. “This helps people identify images, text, and videos that are AI-generated versus authentic.”

Missteps and challenges

Still, while Google is making strides in technological innovation and ethical AI practices, the company is grappling with internal challenges. Adding to its woes, Google’s handling of workforce reductions (a common occurrence in 2024, I might add) has not been without controversy.

Early 2024 saw a second round of layoffs, impacting hundreds despite strong financial performance. This followed Google’s January 2023 retrenchment exercise, when it laid off approximately 12,000 employees, one of its largest cuts to date. The second round sparked backlash, painting a picture of instability and raising questions about the company’s commitment to its employees.

This atmosphere of discontent within Google was further exacerbated by its response to employee activism. In a stark illustration of the intersection between corporate policy and political activism, Google recently found itself at the centre of controversy after terminating several employees who protested against Project Nimbus, a significant contract with the Israeli government and military.

This contract, valued at $1.2 billion, has been criticised for potentially enabling surveillance and military operations against Palestinians. During a high-profile tech conference in New York, a Google Cloud engineer publicly denounced the project, declaring, “I refuse to build technology that powers genocide, apartheid, or surveillance,” before being escorted out and subsequently fired for violating company policies.

This action and the dismissal of nearly 50 other employees involved in similar protests sparked a backlash from activist groups like No Tech for Apartheid. These groups accused Google of retaliating against employees for exercising their right to protest, highlighting the fraught balance between corporate directives and employee advocacy within tech giants.

In response, a Google spokesperson, in a statement reported by Time, said: “We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education. Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”

“All Google Cloud customers,” the spokesperson said, “must abide by the company’s terms of service and acceptable use policy. That policy forbids the use of Google services to violate the legal rights of others or engage in ‘violence that can cause death, serious harm, or injury’.”

Gemini moon, cautious rising

The policy mentioned above refers to Google’s usage philosophy, namely its “do no harm” principle, an eloquent testament to the company’s commitment to ethical technology development and responsible innovation. This principle, deeply embedded in Google’s DNA, is a cornerstone of its broader AI principles, publicly articulated in June 2018, which provide a framework to ensure Google’s technologies advance societal good while mitigating potential harms.

“We were one of the first companies to develop a core set of principles around which we would evolve our AI technology,” King reminds me, adding that this was not something that emerged last year when AI became part of the public discourse in a way we’ve never seen before. “There are seven core principles around which we think about AI development. I’ll come back to your question directly, but it’s important to start there because these principles dictate how we build products and evolve them over time.”

  • Be socially beneficial: AI should benefit society and contribute to the public good, addressing significant societal challenges and improving people’s lives.
  • Avoid creating or reinforcing unfair bias: AI systems should be designed to avoid reinforcing existing biases and should be inclusive, equitable, and fair.
  • Be built and tested for safety: AI technologies should undergo rigorous testing to ensure safety and reliability, preventing unintended harm.
  • Be accountable to people: AI systems should be accountable to humans, maintaining appropriate human control and oversight to ensure ethical use.
  • Incorporate privacy design principles: AI should protect users’ privacy and data integrity, adhering to best practices in data protection and privacy.
  • Uphold high standards of scientific excellence: AI research and development should adhere to high standards of scientific rigour and technical excellence, ensuring the advancement of the field.
  • Be made available for uses that accord with these principles: AI technologies should be used in ways that are consistent with these principles, avoiding applications that could cause harm or violate ethical standards.

“What this means is that we want to make sure we’re not utilising or deploying AI in ways that support things like mass surveillance, human rights violations, or weapons development,” King explains. “This is clearly excluded from how we think about supporting governments, organisations, or any third party. We’re a company that has been thinking about responsibility and ethical deployment from the very beginning.”

Still, while King is proud that Google established seven forward-thinking guidelines for AI back in 2018, ordinary users have since grown familiar with the power of large language models and text-to-image creation and manipulation (and familiarity, of course, breeds contempt). Six years on, I wonder if these guidelines deserve a relook. King agrees.

“I think it would be very silly for any company to ever say something should never change. That would be an irresponsible way of thinking about it,” says King, adding that Google should have an evolutionary approach to developing its principles. “So should governments and various organisations.”

He draws fitting parallels to Singapore’s regulatory approach, which, he opines, is “very pro-business, consultative, and incremental in their approach to issues like this”. Guided by those three working principles, King observes, Singapore has been able to evolve over time and carefully weigh opportunities and challenges as they arise.

“As a company, we would be similar. I don’t think anyone can say they know 100 per cent where all this is going in five years — I certainly can’t. But what we can say is that we will have a firm grounding in the principles we have. If we need to evolve over time, we will, but the principles laid out in 2018 hold pretty true and cover a lot of the things that are on people’s minds at the moment.”

“I think it would be very silly for any company to ever say something should never change. That would be an irresponsible way of thinking about it.”

Responsible innovation

Today, AI has transformed from a futuristic fantasy into an indispensable component of modern life, yet its journey is still unfolding. It already powers search engines, recommends our next favourite shows, and helps diagnose diseases.

Still, while AI algorithms can write poetry and drive cars, numerous seemingly simple problems remain unaddressed. As Sundar Pichai, CEO of Google, aptly said, “AI is one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity.” This profound impact is evident in how AI transforms industries, from healthcare to finance. Yet it also highlights the gaps that still need bridging.

In education, AI can personalise learning experiences, yet millions of students worldwide still lack access to tailored educational resources. A 2023 UNESCO report noted that AI-driven tools could revolutionise education by offering personalised feedback and support, but their implementation remains patchy and inconsistent.

The irony is that the technology exists, but the will and infrastructure to deploy it universally lag behind, highlighting a solvable problem whose resolution could dramatically improve global education equity.

The ongoing climate crisis is another area where AI’s potential is vast but underutilised. Google’s DeepMind has demonstrated AI’s ability to optimise energy use in data centres, significantly reducing emissions. Despite this, as Greta Thunberg pointed out at the 2023 Climate Action Summit, “We are still talking about what can be done, rather than doing it.”

Optimising local recycling programs or improving public transport schedules using AI are relatively simple problems that remain largely unaddressed — despite their potential for significant environmental impact.

In the social sector, AI’s ability to address accessibility issues stands out. Real-time translation and transcription services powered by AI could break down language barriers and improve communication for millions. Yet, the rollout of such services is often limited to major languages, leaving many communities underserved.

Unrealised potential and gaps

Asked what the easiest problem is that AI could solve today but hasn’t, King admits he doesn’t have a definitive answer either. “What I would say is that the unlocking of AI’s potential is at very, very surface levels at the moment. So, have we unlocked all of the potential that AI can bring as it relates to driving productivity for businesses? No,” King says, clarifying that by “productivity” he means less the sharing of information and more how people actually work together and collaborate.

He shares that he’s seen some of his customers adopting AI pretty well but posits that small businesses today are still in the early stages of adoption. “I don’t think we’ve seen anywhere near the full scale of digital evolution, even in Singapore, a very digitally minded and evolved country.”

To that end, Singapore is driving AI adoption among SMEs through initiatives like the GenAI Sandbox, enabling businesses to experiment with generative AI solutions to enhance productivity and customer engagement. Substantial government investments, including a $1 billion allocation over five years, aim to triple the number of AI practitioners and provide extensive training programs.

“The last piece (in AI adoption at work) is how Singapore thinks about regulation. They are very conservative and consultative in their approach,” says King, expressing appreciation that Singapore is committed to deploying regulation that’s implementable by tech companies.

While Singapore’s pragmatic and consultative approach to AI regulation is commendable, not all countries have managed to strike the same balance. For instance, the European Union’s General Data Protection Regulation (GDPR) has often been criticised for its stringent and complex requirements, which can be burdensome for tech companies.

In China, the AI landscape presents another example of regulation that can hinder the tech industry. The Cyberspace Administration of China’s (CAC) regulations are part of a broader, more assertive approach to controlling AI and data technologies. Even in the United States, the patchwork of state-level AI and data privacy regulations makes for anything but a cohesive and consistent regulatory environment.

The California Consumer Privacy Act (CCPA), while pioneering in data protection, introduces complex compliance requirements that can be particularly challenging for smaller tech firms without the resources to manage them effectively.

“It’s not just about imposing regulations,” King reminds me. “It’s about making sure it works in real, pragmatic, and practical terms. They do what’s right for the Singaporean people and in line with the national agenda, but in a way that works for the tech industry.”

“So, have we unlocked all of the potential that AI can bring as it relates to driving productivity for businesses? No.”

The workforce of tomorrow

While the challenges of stringent AI regulation in regions like the EU, China, and the US highlight the complexities of fostering innovation under heavy compliance burdens, another pressing issue that accompanies the rise of AI is the fear of job displacement.

As AI technologies become more advanced, there is growing concern that automation and AI-driven processes will replace human jobs across various industries. This anxiety is not unfounded; studies have shown that while AI can create new job opportunities, it is also poised to disrupt traditional roles, leading to significant shifts in the workforce.

For King, such job displacement is inevitable. “The first thing to be mindful of is that this is not something new. With every technological shift, there’s been displacement, but there’s also been the creation of new job opportunities,” he explains, adding that the jobs most likely to be displaced are typically highly repetitive ones that AI can perform more efficiently and productively.

For King, socialising the need to upskill requires businesses, governments, and various organisations to come together to keep promoting the need, and the opportunities, to update skills. It’s about continually advertising the options available for people to improve their skill sets. “People don’t know what they don’t know. Interviews like this can help socialise the need to update skills and point people in the right direction regarding available resources.”

To my point about privilege, King acknowledges that living in Singapore and working at Google gives him access to benefits and technological advancements that others may not have. “However,” he tells me, “I don’t see myself as different from others regarding the need to update my skills. My skills and understanding could become redundant if I don’t have an approach to continuous learning.”

“So, I hear what you’re saying about privilege, but I wouldn’t disconnect myself from what everyone else is going through. A lot is changing under our feet, and we all need to update ourselves accordingly.”

Optimism amidst challenges

I can tell that King has given much thought to how he envisions his future and his place in it. But now, with two young daughters, King’s worry extends beyond himself. “I worry as a parent about the information they get access to and the influences they encounter, whether that be within the home through digital access or within their schooling environment. In the future, I hope they will be equipped to navigate the world safely and find purpose in their lives.”

“I also worry about the geopolitics of the world. I’m definitely not an expert, but I’m a citizen of the world, and I worry about potential conflicts. I have an ancient history major and a modern history major, and humankind tends to work in cycles, with conflict being a part of human existence.” It’s a sobering but realistic observation that makes me think that maybe, just maybe, King is as much a realist as I am.

As my one-hour conversation with King comes to a close, I ask what gives him hope looking at the state of the world today. “If you think about where the world sits today, there are several factors indicating that this is the best the world has ever been,” King offers. He then references a UN report stating that 1 billion people have moved out of poverty in the last 30 years, with access to clean drinking water and basic sanitation improving significantly.

And then, there’s the marked improvement in another area of access too: information. “Even countries like Thailand, which I used to look after, moved from 40 million to 60 million internet users in the space of four or five years. That’s 20 million more people in that country alone getting access to information in ways they never had before, not over a lifetime, but within just a few years.”

It’s a phenomenon that genuinely excites King and keeps him feeling positive that the world is not about to sink into the doldrums. Sure, there are still concerns and real problems we need to overcome, such as the risks of AI or even the risk of another outbreak. “But there’s also access to information and technology to combat these problems in ways we’ve never seen before. There’s a lot to be encouraged by — I’m pretty optimistic.”

This story was first published on The Peak Singapore.

