
Session + Live Q&A

Resilient Real-Time Data Streaming Across the Edge and Hybrid Cloud

Hybrid cloud architectures are the new black for most companies. A cloud-first strategy is evident for many new enterprise architectures, but some use cases require resiliency across edge sites and multiple cloud regions. Data streaming with the Apache Kafka ecosystem is a perfect technology for building resilient and hybrid real-time applications at any scale. This talk explores different architectures and their trade-offs for transactional and analytical workloads. Real-world examples include financial services, retail, and the automotive industry.

Main Takeaways

1. Hear what it means to do real-time data streaming across different cloud environments.

2. Learn how to create resilient architectures, with real-world examples across cloud, hybrid cloud, and edge.

What is the focus of your work these days?

At Confluent, I work a lot around data in motion. This means we process data continuously in real time, and that can happen everywhere: we deploy in the cloud, across multi-cloud, but also in hybrid and edge environments. A key piece of this is that we often run not just analytical workloads but also transactional workloads. With that, it gets very critical and it has to be resilient. This is the main topic I talk about these days regarding architecture, deployments, implementations, and best practices, and in the end it is what I want to share at this conference.

And what was the motivation behind your talk?

The motivation is that it's not that easy to do real-time data streaming in a resilient way, especially across hybrid environments, data centers, or multiple clouds. But as I said, most of the use cases are mission-critical. We talk about transactions like payments, fraud detection, and predictive maintenance that have to run 24/7 without data loss and include disaster recovery, and this is hard to do. So the motivation for this talk is to share lessons learned and real-world use cases from deployments across the globe and across industries, so that the audience can learn what the different options are and then evaluate the right option for their use case, depending on their SLAs and requirements.
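To make the "24/7 without data loss" point concrete: in the Apache Kafka ecosystem, durability is largely a matter of configuration. A minimal sketch of producer-side settings for a no-data-loss posture might look like the following (the specific values are illustrative assumptions, not settings from the talk):

```properties
# Wait for acknowledgment from all in-sync replicas before a write counts as committed
acks=all
# Retry transient broker failures instead of dropping records
retries=2147483647
# Make retries safe: duplicates are deduplicated by the broker
enable.idempotence=true
# Upper bound (ms) on how long a send may take, including retries, before failing
delivery.timeout.ms=120000
# Topic-side counterparts (set on the topic, not the producer):
# replication.factor=3 and min.insync.replicas=2 tolerate one broker loss
# without losing committed data
```

With these settings, a write only succeeds once it is replicated to a quorum of brokers, which is the foundation that cross-region disaster recovery builds on.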

What would you describe the persona and level of the target audience for this session?

The persona is really a broad spectrum, because I cover real-world use cases, very interesting examples across industries, but then also go deep into the architectures, how you can deploy this, and what the trade-offs and limitations are. For example, it's very different for a shop-floor manufacturing use case, where you need to deploy with low latency closer to the edge and also keep security concerns and cost efficiency in mind, compared to some analytics that might run in the cloud only. Therefore, the persona is really everything from the decision makers and lead architects that make these decisions and design the architecture in the enterprise, to the developers and project managers that do the actual implementation and need to understand how to do it and why. So it's really a broad spectrum, and everybody can learn from a different perspective by taking a look at these resilient deployments of data streaming across edge and hybrid cloud.

And what is it that you want these persona types to walk away with at the end of your presentation?

I think the key thing is really an understanding of how you can deploy data streaming for real-time data processing across different environments, including different clouds, multi-cloud, different regions, hybrid, and edge deployments, and also an understanding of the trade-offs. With that in mind, and by seeing these real-world use cases from different deployments, the audience can evaluate the right choice for their own problems and start going in the right direction. That's really a key point, because I've seen this fail with many of our customers: they just started, for example, in the cloud only, but for some use cases this didn't work because they needed low latency and cybersecurity at the edge. And this is really the key to what the audience will learn from this talk.
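One common building block for the multi-region deployments described above is asynchronous cluster-to-cluster replication with Kafka's MirrorMaker 2. As a rough sketch of an active-passive disaster-recovery setup (the cluster aliases and hostnames here are hypothetical), the MirrorMaker 2 configuration could look like:

```properties
# Hypothetical cluster aliases and bootstrap servers
clusters = primary, dr
primary.bootstrap.servers = kafka-primary:9092
dr.bootstrap.servers = kafka-dr:9092

# Replicate all topics from the primary region into the DR region
primary->dr.enabled = true
primary->dr.topics = .*

# Also replicate consumer group offsets so consumers can fail over
# and resume near where they left off
primary->dr.sync.group.offsets.enabled = true
```

The trade-off mentioned in the talk applies here as well: replication across regions is asynchronous, so failover to the DR cluster is fast but may lose the last in-flight records, whereas a single stretched cluster across nearby zones can be synchronous at the cost of latency.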


Kai Waehner

Field CTO @Confluentinc

Kai Waehner is Field CTO at Confluent. He works with customers across the globe and with internal teams like engineering and marketing. Kai’s main area of expertise lies within the fields of Data Streaming, Analytics, Hybrid Cloud Architectures, Internet of Things, and Blockchain. Kai is a...



Wednesday May 18 / 10:10AM EDT (50 minutes)


Resilient Architectures


Architecture, Hybrid Cloud, Cloud Computing, Kafka, Streaming Data, Real Time, Resilience, Resiliency



From the same track

Session + Live Q&A Architecture

Resiliency Superpowers with eBPF

Wednesday May 18 / 09:00AM EDT

eBPF is a powerful technology that allows us to run custom programs in the kernel. It’s enabling a whole new generation of tools for networking, security and observability. Let’s explore how it can help us build resilient architectures. This talk - with demos - considers...

Liz Rice

Chief Open Source Officer @Isovalent

Session + Live Q&A Architecture

The Scientific Method for Testing System Resilience

Wednesday May 18 / 12:30PM EDT

Do you remember the Scientific Method from elementary school science class? It's time to dust off that knowledge and use it to your advantage to test your IT systems! In this session, you'll be re-introduced to the Scientific Method, and learn how Vanguard's software engineers and IT...

Christina Yakomin

Senior Site Reliability Engineering Specialist @Vanguard_Group

Session + Live Q&A Fault Tolerance

How to Test Your Fault Isolation Boundaries in the Cloud

Wednesday May 18 / 11:20AM EDT

Will my system keep working when a server fails? When a data center goes offline? When a service dependency is unavailable? Availability calculations for redundant components require that those components are independent and autonomous of each other. But modern day systems are complex, exhibiting...

Jason Barto

Principal Solutions Architect @AWS
