What Did Google Do This Week?
GOOGLE’S AI HUSTLE: LOOSEN THE LEASH, TIGHTEN THE GRIP
A quietly big week for Google: major moves that show exactly how serious the company is about shaping the future of AI seemingly slipped by unnoticed. Two big announcements highlight their approach to influencing AI regulation and infrastructure, both in the UK and on a global scale.
First, in the UK, Google is pushing the government to relax copyright laws and invest more in AI infrastructure. They’re warning that if the UK doesn’t step up, it could be left behind in the global AI race. According to Google, strict copyright laws are stifling innovation, especially around training AI models on copyrighted material, which underpins systems like ChatGPT and Google’s own Gemini. At the same time, the UK is lagging in data centre investment, and Google is pressuring the government to back more infrastructure projects and set up a national research cloud. Debbie Weinstein, Google’s UK Managing Director, highlighted the need for “pro-innovation” regulation to make sure the UK stays competitive in AI.
Meanwhile, Sundar Pichai made a splash at the UN Summit of the Future by announcing a $120 million Global AI Opportunity Fund. The goal? Expand AI education and training in underserved communities. Pichai was clear: AI is the most transformative technology of our time, and Google wants to ensure its benefits are spread equitably across the globe. Pichai also cautioned that poor or protectionist regulation could create a divide between those who have access to AI and those who don’t—something that could worsen global inequality.
What’s really striking is Google’s two-sided strategy: pushing for deregulation in markets like the UK while advocating for the global benefits of AI at the UN. In the UK, they’re trying to remove legal barriers to training AI on massive data sets, which would allow them to monetise other people’s intellectual property to fuel their own AI ambitions. They’re also calling for more investment in AI infrastructure, which aligns with their broader push to dominate the hardware side of AI as well.
Globally, Google is presenting itself as a leader in AI ethics and education, contrasting itself with companies like OpenAI that are often seen as focusing on innovation at any cost. The $120 million fund for AI education might seem small for a company that made $24 billion last quarter, but it sends a clear signal. Google is not just preparing future users and developers for its AI tools—it’s also trying to shape the global narrative around responsible AI development, on its own terms.
For businesses that rely on Google’s products and services, these moves are a sign that AI is the future of Google’s platform. If you’re integrating with Google’s upcoming Gemini model, you’re going to benefit from more advanced tools and services. But at the same time, you need to be cautious about becoming too dependent on Google’s ecosystem. With Google controlling both the data infrastructure and the AI models, you could find yourself vulnerable if their priorities or pricing structures shift.
For society, Google’s actions this week raise important questions about who controls the future of AI. In the UK, loosening copyright laws to help tech giants could put content creators and smaller players at a disadvantage. Globally, Google’s focus on equitable AI development through education sounds noble, but it also reinforces the company’s role as the central player in how AI is taught, understood, and deployed.
This week has shown that Google is doing more than just building AI models—it’s actively working to shape the legal, educational, and infrastructural frameworks that will determine how AI evolves. Whether this leads to a more equitable distribution of AI’s benefits or a further concentration of corporate power remains the big question.
This big story in WDGDTW is free thanks to…
eight&four have developed platform12, an AI-powered workspace that is securely built and totally customisable for your brand. Harnessing a wide range of carefully vetted proprietary and partner tools, it allows your team to work in a collaborative, safe, creative space, rocket-fuelled with AI power. Find out more.
SO WHAT?
Google’s actions point to a rapidly changing landscape for AI, and businesses, governments, and individuals all need to reconsider their strategies. For businesses, especially those using Google’s products, AI is no longer just an option—it’s becoming essential to stay competitive. Companies that don’t move quickly to integrate AI risk being overtaken by more agile competitors. However, it’s crucial to avoid becoming too reliant on Google’s ecosystem. With Google controlling both the data and the AI models that shape everything from search to enterprise solutions, companies could find themselves vulnerable if Google changes its terms.
For governments, Google’s lobbying shows that tech giants will continue pushing for lighter regulation in the name of innovation. Policymakers need to find a balance between fostering AI growth and protecting intellectual property, while also addressing concerns around privacy, security, and fairness. The UK, in particular, faces a tough choice: ease up on regulation to attract AI investment or impose stricter controls to protect creators and maintain a fair competitive landscape.
For society, there are bigger questions about the long-term implications of AI becoming embedded in everyday life. Google’s push to train AI models on copyrighted material, along with its role in shaping global AI education, raises concerns about who gets to decide what AI learns, how it’s used, and what data it accesses. Transparency and accountability will be key to ensuring that tech companies don’t gain unchecked power over information, culture, and personal data.
Pichai’s warning about an “AI divide” points to a real threat. As AI becomes more integrated into global systems, the gap between those with access to AI tools and those without could grow rapidly. Developing nations, in particular, need to make sure they aren’t just passive consumers of AI developed elsewhere but active participants in its development and use. Partnerships with global organizations and initiatives like Google’s AI education fund will be crucial to avoiding long-term dependence on AI systems controlled by a few dominant players.
In the end, the future of AI isn’t just about building the most powerful models—it’s about who controls the infrastructure, the data, and the rules that will guide AI’s use. Those who prepare now, whether they’re businesses, governments, or individuals, will be better positioned to navigate the massive changes AI will bring to economies and societies.
Also in this week’s edition: more Monopoly woe, Chrome gets more secure, and Waymo is changing its focus... Get all of this plus 50 additional stories you need to know about. Subscribe to stay up to date.