Ixchel Ruiz, Senior Software Developer at Karakun, and Gunnar Morling, Software Engineer at Decodable, joined InfoQ podcast host Michael Redlich to unpack the latest InfoQ Java Trends Report. Here’s what they covered:
➡️ The benefits of Java’s six-month release cadence
➡️ Project Lilliput & compact object headers
➡️ Nullability in Java and how it’s evolving
➡️ Python's influence on the Java ecosystem
➡️ Tackling the One Billion Row Challenge
Whether you're a Java enthusiast or simply love keeping up with tech trends, this episode is packed with insights you don’t want to miss! 🎧 https://lnkd.in/g69FuTXM
Decodable
Software Development
San Francisco, CA 4,756 followers
Decodable is a serverless real-time data platform. No clusters to set up. No code to write. No PhD required.
About us
Decodable’s mission is to make streaming data engineering easy. Decodable delivers the first self-service real-time data platform that anyone can run. As a serverless platform for real-time data ingestion, integration, analysis, and event-driven service development, Decodable eliminates the need for a large data team, cluster setup, or custom code. Engineers and developers can easily build real-time applications and services with SQL, using clear and simple semantics, error handling, and operational tools. Decodable gives the right people access to the right data — fast. For more information, visit www.decodable.co
- Website: https://decodable.co/
- Industry: Software Development
- Company size: 11-50 employees
- Headquarters: San Francisco, CA
- Type: Privately Held
- Founded: 2021
Locations
- Primary: San Francisco, CA 94107, US
Updates
-
Scalability is one of the most challenging aspects of stream processing platforms. As data volumes grow and workloads evolve, the infrastructure behind a platform must be able to scale up or down in response. Traditionally, this requires constant tuning of memory, I/O, and timeouts, which can be both time-consuming and complex.
At Decodable, we’ve taken a different approach, abstracting away the complexity of infrastructure scaling and reducing it to two key concepts: task size and task count. These parameters define how resources are allocated for your connections and data pipelines:
❇️ Task Size sets the maximum resource utilization per task. If an individual task needs more capacity, this is the knob to turn.
❇️ Task Count sets how many tasks run concurrently, letting you scale your jobs horizontally to handle higher throughput.
This simplicity means that as your workload grows, scaling becomes a seamless process—whether you’re running a larger-scale production system or testing a small-scale proof of concept. In just a few minutes, you can scale your pipelines up or down, consuming only the resources you need, when you need them.
Even better, Decodable’s platform monitors and optimizes the infrastructure automatically, so your systems stay performant without constant manual tuning, freeing up your team to focus on business logic rather than infrastructure.
Scalability shouldn't be a hurdle to growth—at Decodable, we've made it a strategic advantage that lets you focus on what matters most: delivering value through data, with the flexibility to scale as your business needs evolve.
Read more in our technical guide 📖 https://dcdbl.co/3ZvmBs1
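Decodable’s platform is built on Apache Flink (see the failover slots post further down), where task count corresponds roughly to Flink’s operator parallelism. Here is a minimal, illustrative Flink sketch of that horizontal-scaling knob; it is an analogy only, not Decodable’s actual configuration surface:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Horizontal scaling: run four parallel instances of each operator,
        // conceptually similar to setting a task count of 4.
        env.setParallelism(4);

        env.fromElements("orders", "payments", "shipments")
                .map(String::toUpperCase) // stand-in for real per-record logic
                .print();

        env.execute("parallelism-sketch");
    }
}
```

Task size has no single-line Flink equivalent; it roughly corresponds to the memory and CPU granted to each task.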
-
Download now for key insights 🔑 https://dcdbl.co/4eNScej
Managed CDC solutions offer automatic scaling, high availability, and seamless performance optimization as your data grows. With built-in redundancy, load balancing, and multi-region support, managed solutions can handle expanding data volumes without manual intervention.
Want to learn more about scaling your CDC solution? Our Buyer’s Guide breaks down how to align your goals, resources, and constraints to make the best decision on whether to build or buy a CDC solution that works with your existing data architecture and supports your future growth.
-
At Decodable, our focus is on enabling users to concentrate on the business logic of their data jobs—whether in SQL, Java, or Python—while leveraging the full flexibility of Flink APIs. The real challenge lies not in the individual components, but in integrating those pieces seamlessly: each component may be complex on its own, but the true value comes from how well they work together as a cohesive data processing ecosystem. By prioritizing integration, we empower teams to innovate and drive insights without getting bogged down by the underlying engineering challenges.
In this on-demand talk, CEO and founder Eric Sammer breaks down the differences between Amazon MSF and Decodable. Watch it here 🎥 https://dcdbl.co/3MAl1Pz
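For a flavor of what keeping the business logic front and center looks like, here is a minimal, self-contained Flink Table API sketch in Java. The table and field names are made up, and Flink’s built-in datagen connector stands in for a real source:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SqlJobSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Self-contained source: the datagen connector produces random rows,
        // standing in for a real stream.
        tableEnv.executeSql(
                "CREATE TABLE orders (order_id BIGINT, amount DOUBLE) " +
                "WITH ('connector' = 'datagen', 'rows-per-second' = '5')");

        // The business logic itself stays in plain SQL; Java only wires
        // the job together.
        tableEnv.executeSql(
                "SELECT order_id, amount * 1.1 AS amount_with_tax FROM orders")
                .print();
    }
}
```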
-
Data and AI and Real-time - Oh my! Join this roundtable of industry experts from Redpanda Data, Decodable, and Old Mutual as they share their thoughts on what's to come for real-time data streaming and AI in 2025.
📆 January 15th @ 11am PT / 2pm ET
Hosted by Eric Kavanagh, this webinar offers valuable insights to help you stay ahead of the curve. Don't lag behind -- register now 💥 https://dcdbl.co/3ZGZSth
-
Decodable reposted this
🔮 Predictions for 2025: The Future of Real-time Data Streaming and AI
Want to peek into the future without leaving your desk? Join this virtual roundtable with formidable founders and experts from Decodable, Old Mutual, and Redpanda! Hosted by The Bloor Group 🔥
🗓️ January 15, 2025
🕚 11 am PST / 2 pm EST
📍 Virtual
Perfect for #data practitioners, #AI enthusiasts, and #technology strategists looking to skip the queue and sharpen their #streamingdata edge 🎯
Register here👇 https://lnkd.in/gBdx-yxD
-
Building and maintaining real-time data pipelines is no small task. For organizations looking to leverage real-time data, the road is lined with challenges that require specialized skills and a deep technical foundation to navigate successfully.
One of the biggest hurdles? The scope and complexity of stream processing technologies. Platforms like Apache Flink and Apache Spark are powerful, but they come with a steep learning curve. Configuring and optimizing these frameworks to run efficiently demands expertise including:
- Distributed systems
- Data partitioning
- Fault tolerance
For those ready to take on the challenge, the payoff is huge. Real-time pipelines can transform data into insights at the speed of business—if you have the know-how to keep them running smoothly.
Want to learn more about building resilient, real-time pipelines? 📈 Download the guide 👉 https://dcdbl.co/4fsbqa4
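To make two of those areas concrete, here is a small, illustrative Flink DataStream sketch in Java: checkpointing for fault tolerance and keyBy for data partitioning. The sample records are made up:

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ResilientPipelineSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Fault tolerance: snapshot operator state every 10 seconds so the
        // job can recover after a failure without losing progress.
        env.enableCheckpointing(10_000);

        env.fromElements(
                Tuple2.of("user-1", 1), Tuple2.of("user-2", 1), Tuple2.of("user-1", 1))
                // Data partitioning: hash-partition by key so each parallel
                // instance owns a disjoint slice of the key space.
                .keyBy(t -> t.f0)
                .sum(1) // running count of events per key
                .print();

        env.execute("resilient-pipeline-sketch");
    }
}
```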
-
As data volumes grow, ensuring that your Change Data Capture (CDC) system scales efficiently becomes a top priority. Managed CDC solutions excel in this area, typically offering built-in scaling and high availability features that ensure seamless performance as your needs evolve.
1. Automated Scaling
Managed solutions can automatically adjust resources based on data volume fluctuations, ensuring your CDC system keeps pace without manual intervention and making scaling simple and efficient.
2. High Availability & Redundancy
With built-in redundancy and failover mechanisms, managed services minimize downtime and ensure continuous data availability. Load balancing ensures efficient workload distribution, while multi-region support enables global data replication for disaster recovery.
3. Performance Optimization
Managed solutions come with performance monitoring and automatic tuning, adjusting system parameters as data patterns evolve. This proactive approach ensures consistent, reliable performance with minimal effort from your team.
Managed CDC solutions allow your business to focus on growth while the system adapts seamlessly to your data needs. Learn more about the pros and cons of build vs buy 👉 https://dcdbl.co/4eNScej
#ChangeDataCapture #Debezium #DataStreaming
-
Failover replication slots are an essential part of using #Postgres and logical replication in #HighAvailability scenarios. Added in Postgres 17, they allow logical replication clients such as #Debezium to seamlessly continue streaming change events after a database fail-over, ensuring no events are lost in the process. This renders previous workarounds obsolete, such as manually synchronizing slots on a standby server or external tools like pg_failover_slots (though the latter still comes in handy if you are on an older Postgres version and can’t upgrade to 17 just yet). In his latest blog post, Gunnar Morling provides an in-depth walkthrough of how to use failover slots, both via Postgres’ SQL interface to logical replication and together with Decodable’s fully managed real-time data platform based on Apache Flink. Read it here 👉 https://dcdbl.co/3D0ezjg
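For a taste of the SQL-level mechanics covered in the walkthrough, here is a minimal JDBC sketch in Java; the connection details are placeholders. In Postgres 17, the fifth argument to pg_create_logical_replication_slot marks the slot as a failover slot:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FailoverSlotSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a Postgres 17 primary.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://primary:5432/postgres", "postgres", "secret");
             Statement stmt = conn.createStatement()) {

            // Arguments: slot name, output plugin, temporary, two-phase, failover.
            // failover = true makes the slot eligible for synchronization to
            // standbys, so clients can resume streaming after a fail-over.
            stmt.execute(
                    "SELECT pg_create_logical_replication_slot(" +
                    "'demo_slot', 'pgoutput', false, false, true)");
        }
    }
}
```

On the standby side, enabling Postgres 17’s sync_replication_slots setting keeps such slots synchronized automatically; the details are in the post.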
-
In fast-moving organizations like Drata, keeping data fresh and consistent while scaling can be a challenge. As Drata scaled its operations, the team realized:
- Their lean team couldn’t dedicate resources to building a custom solution.
- They needed a managed service that seamlessly integrated with their CI/CD process.
- Partnering with the experts at Decodable would help them meet their goals faster.
Hear how Drata leveraged Decodable’s real-time data platform to overcome the challenges of building in-house solutions, and why this approach helped them stay focused on what matters most—innovation and growth.
📺 Watch on-demand to learn how to achieve the same results at your org. https://dcdbl.co/3TXXFYm