How Databricks and Glean Reveal AI Automation’s Hidden Complexity
Deploying AI for automation is far more expensive and slower than many anticipate, even for giants like Databricks and Glean. Databricks recently raised over $4 billion at a $134 billion valuation, while Glean secured $150 million at a $7.2 billion valuation.
Yet both CEOs, Ali Ghodsi of Databricks and Arvind Jain of Glean, emphasize that unleashing AI agents does not instantly translate into successful automation inside organizations.
They reveal a critical leverage mechanism: AI deployment is “an engineering art” demanding deep contextual understanding and human oversight, not just launching code and expecting magic.
"AI automation success takes much longer than you know—it requires sustained human-in-the-loop systems," said Jain.
Why AI Automation’s Hype Misses The Critical Constraint
Conventional wisdom suggests AI agents can rapidly replace human workflows, cutting costs and accelerating productivity. This view treats AI as a plug-and-play engine.
But this ignores the true bottleneck: integrating AI into complex organizational systems requires continuous iteration and expert supervision. That’s constraint repositioning, not mere cost-cutting. For more on this dynamic, see how the 2024 tech layoffs revealed structural leverage weaknesses missed by many.
Glean tried custom AI models tuned to internal workflows but pivoted back to easily deployable foundation models after costly failures. Unlike rivals chasing bespoke AI silver bullets, it opted to reuse robust public models and lower complexity.
Humans in the Loop: The Leverage Pivot Behind Scalable AI
Databricks CEO Ali Ghodsi calls AI deployment an “engineering art” because success depends on production-quality work—extensive testing, monitoring, and governance.
This insight aligns with why AI forces workers to evolve, not vanish. Human oversight remains essential as companies deploy agents widely across tasks.
Ghodsi predicts that many future AI agents will require a human to approve every step, so that accountability stays with a person. This hybrid model shifts leverage from automation alone toward trust and safety systems.
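To make that pattern concrete, here is a minimal sketch of what an approval gate around an agent action might look like. The names and the example action are hypothetical and are not drawn from Databricks' or Glean's systems; the point is simply that execution happens only after an explicit human decision, and every decision is logged.

```python
# Illustrative only: a minimal human-in-the-loop approval gate around an agent action.
# All names (ProposedAction, run_with_approval, the example refund) are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ProposedAction:
    description: str   # what the agent intends to do
    payload: dict      # parameters for the action


def run_with_approval(
    propose: Callable[[], ProposedAction],
    approve: Callable[[ProposedAction], bool],
    execute: Callable[[ProposedAction], None],
    audit_log: List[dict],
) -> None:
    """The agent proposes, a human approves or rejects, and every decision is logged."""
    action = propose()
    approved = approve(action)  # blocking human review step
    audit_log.append({"action": action.description, "approved": approved})
    if approved:
        execute(action)         # side effects happen only after explicit sign-off


# Example wiring with stand-in functions:
log: List[dict] = []
run_with_approval(
    propose=lambda: ProposedAction("Refund a customer order", {"amount": 49.0}),
    approve=lambda a: input(f"Approve '{a.description}'? [y/N] ").lower() == "y",
    execute=lambda a: print(f"Executed: {a.description}"),
    audit_log=log,
)
```

Even in a sketch this small, the leverage point is visible: the approval and audit functions, not the agent itself, are where trust and accountability live.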
Repositioning Constraints for Real AI Leverage
The key constraint in AI automation is no longer raw model capability but integrating, supervising, and iterating AI systems inside complex enterprises.
Glean and Databricks show that successful AI leverage requires teams and infrastructure that solve this integration puzzle reliably. That filters out attempts to cut human involvement prematurely.
Operators should recognize that the leverage opportunity scales through feedback loops with strong human supervision embedded in them.
With these systems, AI becomes a force multiplier, amplifying human expertise rather than replacing it outright—unlocking compounding gains others miss.
"Without human oversight, AI automation isn’t automation—it’s risk," Ghodsi warns.
Related Tools & Resources
As organizations navigate the complexities of AI automation, leveraging tools like Blackbox AI can provide the necessary support for developers in creating robust applications. This platform empowers teams with AI-driven code generation, making the integration and iteration process smoother while enhancing overall productivity. Learn more about Blackbox AI →
Full Transparency: Some links in this article are affiliate partnerships. If you find value in the tools we recommend and decide to try them, we may earn a commission at no extra cost to you. We only recommend tools that align with the strategic thinking we share here. Think of it as supporting independent business analysis while discovering leverage in your own operations.
Frequently Asked Questions
Why is AI automation more expensive and slower than expected?
AI automation requires continuous iteration, deep contextual understanding, and human oversight rather than just deploying code. Both Databricks and Glean emphasize that integrating AI into complex organizational systems is an engineering art demanding extensive testing, monitoring, and governance.
What roles do human workers play in AI automation?
Human workers are essential in overseeing AI agents to ensure accountability, trust, and safety. As Databricks CEO Ali Ghodsi states, many AI agents will require humans to approve each step, maintaining a hybrid model rather than fully replacing humans.
How much funding have Databricks and Glean raised for AI automation?
Databricks recently raised over $4 billion, valuing the company at $134 billion, while Glean secured $150 million, reaching a valuation of $7.2 billion. These investments underscore the high stakes and complexity involved in AI deployment.
Why did Glean pivot from custom AI models to foundation models?
Glean initially invested in custom AI models tailored to internal workflows but switched back to publicly available foundation models after costly failures. This shift aimed to reduce complexity and increase deployment ease.
What is the main constraint in successful AI automation?
The key constraint is not the AI model’s raw capability but the ability to integrate, supervise, and iteratively improve AI systems within complex enterprise environments, requiring strong human feedback loops.
How does AI automation impact workforce evolution?
Instead of replacing workers, AI forces them to evolve by adapting to new human-in-the-loop systems. This transformation focuses on leveraging human expertise alongside AI rather than complete automation.
What tools can support the development and integration of AI automation?
Tools like Blackbox AI provide AI-driven code generation that helps developers create robust applications, smoothing the integration and iteration process and enhancing productivity in AI system deployment.
What risks exist without proper human oversight in AI automation?
Without continuous human supervision, AI automation becomes a risk rather than an advantage. Databricks’ CEO Ali Ghodsi warns that lacking human oversight can lead to failures and unintended consequences in deploying AI agents.