Represent Kafka topics and associated schemas as open table formats such as Apache Iceberg® (GA) or Delta Lake (OP) in a few clicks to feed any data warehouse, data lake, or analytics engine
Simplify the process of representing Kafka topics as Iceberg or Delta tables to form bronze and silver tables, reducing engineering effort and compute costs
Store your fresh, up-to-date Iceberg or Delta tables once and reuse them many times with your own compatible storage, ensuring flexibility, cost savings, and security
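To make the "bronze and silver tables" idea above concrete, here is a minimal sketch of what the bronze-to-silver refinement step amounts to: dropping malformed records and keeping only the latest version of each business key. All names (`refine_to_silver`, `order_id`, the `_offset` lineage column) are illustrative assumptions, not Tableflow's actual API or schema.

```python
def refine_to_silver(bronze_rows, key="order_id"):
    """Sketch of a bronze -> silver refinement: discard rows missing the
    business key, then deduplicate by keeping the record with the highest
    _offset (i.e., the latest update) for each key."""
    latest = {}
    for row in bronze_rows:
        if key not in row:  # malformed record: no business key, drop it
            continue
        k = row[key]
        if k not in latest or row["_offset"] > latest[k]["_offset"]:
            latest[k] = row
    # return rows in arrival order of their surviving versions
    return sorted(latest.values(), key=lambda r: r["_offset"])
```

For example, given two updates to order "A", only the later one survives into the silver table; a row with no `order_id` is filtered out entirely.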
Tim Berglund is back at the lightboard to show you the most exciting thing he’s seen in analytics in the last 40 years (nope, that’s not an exaggeration): Tableflow, a new feature in Confluent Cloud that lets you completely skip the ETL and make your Apache Kafka® topic data available as tables in your data lake.
In this video, Tim walks through how Tableflow works and why it’s a big deal: no data copying, no transformation steps, just native open table formats like Apache Iceberg™ and Delta Lake right on top of your Kafka data.
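Conceptually, "Kafka topic data available as tables" means projecting each record in a topic onto a table row, with the record's offset preserved for lineage. The sketch below simulates that mapping in plain Python; `Record` and `materialize_topic` are hypothetical names for illustration, not part of Kafka's or Confluent's APIs.

```python
from dataclasses import dataclass


@dataclass
class Record:
    """A stand-in for a Kafka record: offset, key, and a structured value."""
    offset: int
    key: str
    value: dict


def materialize_topic(records):
    """Project topic records into table rows, one row per record, ordered
    by offset and carrying the offset as a lineage column."""
    return [
        {"_offset": r.offset, "key": r.key, **r.value}
        for r in sorted(records, key=lambda r: r.offset)
    ]
```

In a real Tableflow table the rows land in Iceberg or Delta data files rather than a Python list, but the row-per-record shape is the same idea.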
Leverage our commercial and ecosystem partners to transform bronze or silver tables into gold standard tables for a wide range of AI and analytics use cases
Continuously optimize read performance with file compaction, maintaining efficient data storage and retrieval by consolidating small files into larger, more manageable ones
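The compaction described above, consolidating many small files into fewer larger ones, can be sketched as a simple bin-packing pass over file sizes. This is an illustrative simplification (real table-format compaction also rewrites manifests and respects partitioning); `compact_files` and `target_size` are assumed names.

```python
def compact_files(file_sizes, target_size):
    """Greedily group small files into batches whose combined size stays
    at or under target_size, reducing file count without losing bytes.
    Each returned batch represents one larger output file."""
    batches, current, current_size = [], [], 0
    for size in sorted(file_sizes):
        if current and current_size + size > target_size:
            batches.append(current)       # close the full batch
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        batches.append(current)
    return batches
```

Fewer, larger files mean fewer open/seek operations per query, which is where the read-performance gain comes from.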
“For companies to maximize returns on their AI investments, they need their data, AI, analytics and governance all in one place. As we support a growing number of organizations in building data intelligence, trustworthy enterprise data is paramount, achievable only by eliminating silos between operational and analytical data. We are excited that Tableflow has embraced Delta Lake and Unity Catalog as an open table and governance solution, and we look forward to working together to deliver long-term value for our customers.”
“Access to real-time, trustworthy data is a key factor in scaling AI initiatives faster, and a crucial requirement for analytical platforms. With Tableflow, users can seamlessly represent Kafka topics and their associated schemas as Apache Iceberg™ tables in just a few clicks to deliver the AI-powered insights that today’s businesses need. Through a native integration with Snowflake Open Catalog, a managed service for Apache Polaris™ (incubating), our joint customers can access real-time operational data for AI, analytics, and collaboration more easily, and with more unified governance, than ever before. We are thrilled to help customers bridge the gap between operational and analytical estates with a solution that leverages open standards.”
“Bus companies love us because we make sales, operations and dispatching super simple, but that’s only possible with real-time data analytics. Tableflow offers a promising path for our analytics engine to seamlessly consume operational data in Kafka in real-time as Apache Iceberg tables, eliminating the need for additional pre-processing. By integrating Tableflow with our data warehouse, we could prevent raw, unclean data from being ingested while reducing complexity and storage costs. This approach would simplify workflows and accelerate time to insight while ensuring a more efficient and cost-effective data architecture.”
“In the current era of AI, it is more important than ever to ensure that real-time, trustworthy data is unified across both operational and analytical systems. Enterprises should seek solutions that provide their lakehouses, AI models, and analytical engines with real-time streaming data without extensive pre-processing or manual intervention. Offerings that automate the preparation of bronze and silver standard tables with a push-button approach can enable faster real-time decision-making, significantly accelerate AI initiatives, and enhance data reliability, integrity, and trust.”