Google's bold shift in AI principles
Plus, DeepSeek security, OpenAI Deep Research, and more.
WELCOME, EXECUTIVES AND PROFESSIONALS.
Google takes on humanity’s biggest challenges, OpenAI pioneers new frontiers, and the implications of DeepSeek continue to reverberate. But is security being overlooked?
Since the previous edition, we've reviewed hundreds of the latest insights on best practices, case studies, and innovation. Here’s the top 1%...
In today’s edition:
Google’s bold shift in AI principles.
OpenAI debuts Deep Research.
Bain explores DeepSeek implications.
Cisco evaluates DeepSeek security.
Transformation and technology in the news.
Career opportunities & events.
Read time: 4 minutes.

CULTURE & CASE STUDIES

Image source: Google
Brief: SVP James Manyika and DeepMind CEO Demis Hassabis refined Google's AI principles, adopting a bolder approach to innovation. They noted how AI frameworks from democratic nations have deepened Google’s “understanding of AI’s potential and risks.”
Breakdown:
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” they wrote.
“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”
The post continued: “Companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
With that backdrop, Google’s AI Principles now center on three tenets: Bold Innovation, Responsible Development and Deployment, and Collaborative Progress, Together.
An accompanying 17-page Responsible AI progress report outlines Google’s approach to governance throughout the AI lifecycle.
The report includes four case studies on how Google operationalizes its AI principles, such as in managing the safe deployment of NotebookLM.
Why it’s important: Google is setting the tone for "building transformative technology to solve humanity’s biggest challenges while ensuring proper safeguards and governance." First published in 2018, its AI Principles have evolved to address the dynamic, global nature of its products, environment, and user needs.
INNOVATION INSIGHT

Image source: OpenAI
Brief: OpenAI introduced Deep Research, a new agentic capability that conducts multi-step research on the internet for complex tasks. For certain tasks, it completes in minutes what would take a human many hours.
Breakdown:
Deep Research works independently with the ability to find, analyze, and synthesize hundreds of online sources into a research report.
Built for intensive knowledge work in areas like finance, science, policy, and engineering, it provides citations and a summary of its thinking.
On Humanity’s Last Exam, an expert-level AI test, Deep Research set a new high of 26.6% accuracy, far surpassing DeepSeek R1’s 9.4%.
OpenAI launched a version for Pro users with up to 100 queries per month. Plus and Team users will get access next, followed by Enterprise.
Sam Altman estimates “that it can do a single-digit percentage [1-9%] of all economically valuable tasks in the world, which is a wild milestone.”
Siqi Chen tweeted that Deep Research outperforms his $150K private research team, calling the $200/month Pro subscription an “insane ROI.”
Why it’s important: Once deployed in enterprises, Deep Research can support activities such as market intelligence, legal research, and investment analysis. OpenAI envisions agentic experiences, where Deep Research conducts investigations, and OpenAI Operator executes actions.
MARKET INSIGHT

Image source: Bain & Company
Brief: Bain & Company examines if DeepSeek’s open-source R1 model is a game-changer in cutting inference costs, improving training efficiency, and sustaining performance. It explores market scenarios and actions executives can take now.
Breakdown:
Model architecture innovations include a mixture-of-experts (MoE) design that activates only 37B of 671B parameters per token, and multi-head latent attention (MLA), which reduces memory usage to 5%-13% of prior methods.
Data handling innovations include using PTX over CUDA for better GPU control, and an optimized reward function prioritizing high-value data.
So far, these innovations align with broader trends in AI efficiency, showing consistent advancements, as depicted in the image above.
The article highlights uncertainties, including the true cost of training, GPU mix (high-end vs. lower-tier), and intellectual property concerns.
AI scenarios are outlined: bullish, moderate, and bearish. In the bullish scenario, efficiency gains cut inference costs, driving broader AI adoption (Jevons paradox).
Bain contextualizes the events: while significant, they are not surprising. Adoption will continue, though the pace and shape of investment may shift, with an intensified race between open-source and proprietary models.
Why it’s important: Executives can take key steps: avoid overreaction, but prepare for cost disruption that drives broader adoption; monitor capex trends and GPU demand; and think beyond productivity, using AI to redefine core offerings, from product development to customer personalization or entirely new services.
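As a back-of-the-envelope check on the efficiency figures cited above (a sketch: the 37B/671B and 5%-13% numbers come from the article; the arithmetic itself is purely illustrative):

```python
# Illustrative check of the DeepSeek R1 efficiency figures cited above.
# The parameter counts and memory range are from the article; nothing else
# here is an official DeepSeek number.

total_params_b = 671    # total parameters, in billions
active_params_b = 37    # parameters activated per token via MoE routing

active_fraction = active_params_b / total_params_b
print(f"Active parameters per token: {active_fraction:.1%}")  # ~5.5%

# Memory usage under MLA relative to prior attention methods (article's range):
mla_memory_range = (0.05, 0.13)
print(f"MLA memory vs. prior methods: {mla_memory_range[0]:.0%}-{mla_memory_range[1]:.0%}")
```

In other words, only about one parameter in eighteen is exercised for any given token, which is where much of the claimed inference-cost saving comes from.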
SECURITY INSIGHT

Image source: Cisco
Brief: Cisco, in collaboration with the University of Pennsylvania, examined DeepSeek R1 security. While its performance rivals top reasoning models, Cisco's security assessment reveals critical flaws in safety.
Breakdown:
Using algorithmic jailbreaking, Cisco applied an automated attack methodology that tested models against 50 randomly selected prompts.
These prompts came from the HarmBench dataset, which covers harmful behaviors including cybercrime, illegal activities, and general harm.
DeepSeek R1 exhibited a 100% attack success rate, meaning it failed to block a single harmful prompt.
In contrast, other leading models demonstrated at least partial resistance (e.g., Gemini-1.5-pro at a 64% attack success rate, with o1-preview performing best at 26%).
DeepSeek R1 lacks robust guardrails, making it susceptible to jailbreaking and misuse; the findings suggest its cost-efficient training may have come at the expense of safety.
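The evaluation described above can be sketched as a simple loop: send each harmful prompt to a model and count the share that are not refused. This is a hedged illustration, not Cisco's methodology: `query_model` stands in for a real model API, and the keyword-based `is_refusal` is a naive placeholder (real evaluations use trained classifiers).

```python
# Sketch of an attack-success-rate (ASR) measurement over harmful prompts.
# `query_model` and `is_refusal` are hypothetical stand-ins; only the
# 50-prompt sample size and the ASR metric come from the article.

def is_refusal(response: str) -> bool:
    # Placeholder heuristic; production evaluations use trained judges.
    return any(kw in response.lower() for kw in ("i can't", "i cannot", "i won't"))

def attack_success_rate(prompts, query_model) -> float:
    # Fraction of prompts for which the model did NOT refuse.
    successes = sum(1 for p in prompts if not is_refusal(query_model(p)))
    return successes / len(prompts)

# A mock model that never refuses mirrors R1's reported 100% ASR:
mock_model = lambda prompt: "Sure, here is how..."
print(attack_success_rate(["prompt"] * 50, mock_model))  # 1.0
```

A 100% ASR under this metric means every one of the 50 harmful prompts elicited a compliant response, which is the headline finding for DeepSeek R1.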
Why it’s important: Cisco's research underscores the need for security evaluations and guardrails to ensure AI advances don’t compromise safety. DeepSeek R1 has triggered global regulatory responses, including data privacy investigations and usage restrictions across various countries and regions.

TRANSFORMATION IN THE NEWS
Databricks VP of AI, Naveen Rao, in collaboration with Deloitte, shared insights on developing transformative gen AI applications. The key? Understand the technology’s limitations before scaling its impact.
OWASP published a 77-page guide on gen AI red teaming, covering model evaluation, implementation testing, infrastructure assessment, and runtime behavior analysis, with best practices and enterprise examples.
Weaviate released a 13-page advanced RAG guide on techniques to improve retrieval quality and response accuracy at various stages of the RAG pipeline, featuring compelling visuals and explanations.
An ex-McKinsey consultant launched AI startup Perceptis, which raised $3.6 million to automate tedious consulting tasks with gen AI, aiming to help smaller firms compete with industry giants.
Writer shared a case study on Salesforce's deployment of Writer AI Studio and Knowledge Graph for 3,000 users, including 50 champions building new apps.

TECHNOLOGY IN THE NEWS
The EU activated the first phase of its AI Act, banning AI systems deemed ‘unacceptably risky’ and imposing penalties of up to €35 million.
Meta published its Frontier AI Framework, reaffirming its commitment to open-source development while focusing on mitigating cybersecurity and weapon risks.
Hugging Face released open-Deep-Research, an open-source alternative to OpenAI's Deep Research, achieving 55% accuracy on the GAIA benchmark with autonomous web navigation capabilities.
Anthropic introduced Constitutional Classifiers, a new AI safety system that withstood over 3,000 hours of bug bounty jailbreak attempts, and is inviting the public to help stress-test it.
The EU revealed a $56M investment to develop OpenEuroLLM, an open-source large language model designed to work across all 30 European languages.

CAREER OPPORTUNITIES
Anthropic - Applied AI Partner Solutions
EY - Director AI
Visa - Director AI Consulting
EVENTS
Section - AI Value Creation - February 18, 2025
MIT - Gen AI Search Roundtable - February 18, 2025
Gartner - Gen AI Business Case - March 10, 2025

Previous edition: McKinsey: Now leaders must step up
Complete this survey to get more value.
All the best,

Lewis Walker
Found this valuable? Share with a colleague.
Received this email from someone else? Sign up here.
Let's connect on LinkedIn.