How to Build a Data Stack for Global Growth, Databricks at $100B Valuation, Scale AI infrastructure with Hammerspace
Hi Data & AI Pros,
As a premium skincare brand with a global footprint, Medik8 needed more than beautiful packaging. It required a modern data stack that could deliver clean, timely, and trusted insights. But with data spread across Shopify, Odoo, and other key platforms, business reporting was anything but smooth.
In this session on August 27 at 11am ET, hear how Medik8’s data team streamlined analytics by moving critical data into Google BigQuery using CData Sync. The result? Near real-time reporting, simplified access to insights, and a scalable foundation for data-led growth.
Join Manou Campbell, Head of Data and Information Systems, and Daryl Collins, Sr. BI Engineer at Medik8, along with Bruce Sandell, Partner Solutions Architect at Google Cloud, as they share:
- How Medik8 unified ecommerce and operational data
- Why BigQuery and CData were the right fit for fast-moving teams
- How a modern data stack helped Medik8 support rapid global expansion
Whether you’re centralizing operations, scaling analytics, or navigating fragmented systems, this session offers practical takeaways for building a data stack that grows with your business.
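To make the destination side of a stack like this concrete, here is a minimal sketch of what reporting against ecommerce data synced into BigQuery might look like. The project, dataset, table, and column names are assumptions for illustration only; the session itself covers Medik8's actual setup.

```python
# A minimal sketch (hypothetical project/dataset/table names) of querying
# ecommerce data that a sync tool has replicated into BigQuery.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project ID

# Example: daily order volume and revenue from a replicated orders table.
sql = """
    SELECT DATE(created_at) AS order_date,
           COUNT(*)          AS orders,
           SUM(total_price)  AS revenue
    FROM `my-analytics-project.shopify.orders`
    WHERE created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY order_date
    ORDER BY order_date
"""

for row in client.query(sql).result():
    print(row.order_date, row.orders, row.revenue)
```

Once the raw tables land in one warehouse, reporting reduces to plain SQL like this instead of per-platform exports, which is where the "near real-time" and "simplified access" claims come from.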

Databricks is closing in on a funding round that could value it at over $100 billion, a big jump from $62 billion last year. For me, this highlights just how quickly enterprise AI infrastructure is becoming a central pillar of the tech ecosystem. Databricks is no longer just a “data company”; it is being positioned alongside the biggest names in cloud and analytics. The valuation shows confidence, but it also raises the stakes: can they deliver enterprise-scale AI outcomes that match the hype?
Firecrawl raised $14.5 million to grow its AI-ready web data infrastructure. Compared to the billions moving around, this looks small, but I find it one of the most meaningful stories. Every AI model and every application depends on clean, structured, real-time data.
Most enterprises I talk to still struggle here, and solving it is what separates experiments from production. Firecrawl’s approach is a reminder that while models get the attention, the real moat often lies in the data pipelines.
I had the chance to speak with Sam Newnam, Senior Director of AI Solutions at Hammerspace, on The Ravit Show about the challenges and opportunities enterprises face as they scale AI infrastructure.
We began with the name itself: Hammerspace, a concept borrowed from comic books that now represents a very real approach to how data can be accessed, unified, and delivered without limits.
From there, we explored the pressures shaping enterprise AI today. The rise of AI agents and enterprise-scale deployments is creating new challenges around data gravity and data friction.
Sam shared why this is no longer just about storage but about what he calls “GPU gravity”: moving data efficiently to where the models and GPUs actually are.
Hammerspace’s view is clear: enterprises do not need to forklift data into proprietary silos or rewrite workflows just to enable AI. Instead, an open, Linux-native, standards-based approach allows data to remain where it lives, whether on-prem or across clouds, while streaming on demand to the infrastructure that needs it.
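To illustrate what that standards-based model implies in practice, here is a minimal sketch under one assumption: a global namespace exposed over a standard Linux protocol such as NFS. In that case, application code reads ordinary files with no proprietary SDK. The mount point and file layout below are hypothetical, not Hammerspace's actual paths.

```python
# A sketch of the "data stays where it lives" idea: if a global namespace
# is mounted over a standard protocol (e.g., NFS), training or ETL code
# needs no vendor SDK -- it just reads POSIX paths.
# The mount point and file pattern are hypothetical.
from pathlib import Path

DATA_ROOT = Path("/mnt/global-namespace/training")  # hypothetical NFS mount

def iter_shards(pattern: str = "*.parquet"):
    """Yield data shards from the shared namespace as ordinary files.
    The bytes may physically live on-prem or in a cloud region; the
    filesystem layer fetches them on demand."""
    yield from sorted(DATA_ROOT.glob(pattern))

for shard in iter_shards():
    with shard.open("rb") as f:
        header = f.read(4)  # plain POSIX reads, no special client
        print(shard.name, header)
```

The design point of the sketch: because access goes through the standard filesystem layer, existing pipelines keep working unchanged while the platform handles data placement and movement underneath.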
🔍 Stay Ahead in AI & Data! Join 137K+ Data & AI professionals who stay updated with the latest trends, insights, and innovations.
📢 Want to sponsor or support this newsletter? Reach out and let's collaborate! 🚀
Best,
Ravit Jain
Founder & Host of The Ravit Show