Workflow Wednesday
Do You Still Sketch Your Data Pipelines? ✏️➡️🚀
Code-Driven Workflows Scale, Diagrams Don’t

A photo of a crowded event: how many people are in it? A phone-call transcript: who spoke when? Not long ago, these were “unstructured” data points. Today, AI extracts faces from images and timestamps from conversations, turning them into structured datasets.

The line between structured and unstructured data has blurred, yet many teams still draw their data transformations in visual tools, then jump to a “free-hand SQL” step when complexity strikes. Instead of drawing workflows, modern platforms treat transformations as code, often generated dynamically to handle ever-changing data. Turning photos into lists of people or transcripts into structured records isn’t a one-off task; it’s ongoing processing of all incoming data, so insights flow continuously into analytics and AI.

Ten years ago, we saw that dependency management was key to scaling. With no off-the-shelf solution, we built our own code-based pipelines. Today, tools like dbt provide these features out of the box: transformations as code, a clear dependency graph, and support for both SQL and Python.

Code-driven pipelines also integrate seamlessly with CI/CD, data testing, and lineage tracking, ensuring every change is tested, documented, and deployed. Minibatches keep data fresh without the overhead of full streaming, while automated tests guard against silent failures. It all adds up to a unified, AI-ready data environment that’s engineered rather than hand-sketched.

So, are you drawing your data workflows, or engineering them? Drop a comment and share what excites, or worries, you most about this shift. 👇

#WorkflowWednesday #DataEngineering #AI #CICD #DataTesting #DataLineage #MakingSenseOfData

Art by @basilonmypizza: https://lnkd.in/eF8FkWzN
https://basilhefti.ch/
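What “transformations as code with a clear dependency graph” means can be sketched in a few lines. This is a minimal illustration, not dbt’s actual API: the transformation names and the graph are hypothetical, and only the idea of declaring dependencies in code and deriving the run order from them is taken from the post.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each transformation declares its upstream
# dependencies, much like models referencing each other in a dbt project.
transformations = {
    "raw_events": [],
    "clean_events": ["raw_events"],
    "daily_stats": ["clean_events"],
    "dashboard": ["daily_stats", "clean_events"],
}

def run_order(deps):
    """Return an execution order that respects the dependency graph."""
    return list(TopologicalSorter(deps).static_order())

print(run_order(transformations))
```

Because the graph lives in code, a scheduler (or CI job) can always compute a valid execution order, which is exactly what gets lost when the pipeline exists only as a drawing.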
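The minibatch-plus-tests pattern mentioned above can also be sketched. This is an assumed, simplified example (the field names, watermark mechanism, and test are all illustrative, not from any specific tool): only records newer than a watermark are processed, and a data test fails loudly instead of letting bad rows slip through silently.

```python
# Minimal sketch of a minibatch load with a built-in data test.

def load_minibatch(events, watermark):
    """Process only records that arrived after the last watermark."""
    batch = [e for e in events if e["ts"] > watermark]
    # Automated data test: abort loudly rather than ingest bad rows silently.
    assert all(e.get("user_id") is not None for e in batch), "null user_id in batch"
    # Advance the watermark so the next run picks up where this one stopped.
    new_watermark = max((e["ts"] for e in batch), default=watermark)
    return batch, new_watermark

events = [
    {"ts": 1, "user_id": "a"},
    {"ts": 2, "user_id": "b"},
    {"ts": 3, "user_id": "c"},
]
batch, wm = load_minibatch(events, watermark=1)
print(len(batch), wm)
```

Run on a schedule, this keeps data fresh without a full streaming stack: each run touches only the new slice, and every slice is tested before it lands.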