He founded and led Hibernate Search, Validator and OGM, and participated in the Bean Validation spec (as lead) and the JPA spec (as an expert).
Nowadays his focus revolves around NoSQL, analytics, data streams, and how microservices will survive contact with data problems.
He is the founder and co-host of Les Cast Codeurs, a French-language podcast.
See also https://emmanuelbernard.com/blog/
Microservices are great; problems arise when you start to have two of them and when you want to deal with data :)
Pun aside, data and state are a big subject that is largely ignored when discussing microservices.
- Conundrum #1: What is the target data architecture in an ideal microservices architecture?
- Conundrum #2: How do you share state between instances of a given microservice in a stateless, 12-factor approach?
- Conundrum #3: How do you exchange state between microservices that must remain independent?
- Conundrum #4: How do I go from my brownfield database to a fleet of microservices IRL, without a big bang?
- Conundrum #5: With many microservices touching many data sets, how do I guarantee uniform security (GDPR, anyone)?
And the list goes on. This presentation is an opinionated answer to these questions. And yes, we do demo these concepts.
This two-part, Java-based workshop explores practices for defining the right boundaries between microservices, followed by ways to exchange data across these boundaries.
Defining Service Boundaries With DDD
The first part of the workshop focuses on defining the borders between microservices: how to split your big problem into clearly defined services. In real life everything is related, and seeing the individual trees in the big picture of the sprawling forest is challenging. This is where Domain-Driven Design (DDD) comes to the rescue. After a short presentation about DDD, we’ll get hands-on with an actual problem to end up with a working program.
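To give a flavor of the DDD idea of bounded contexts, here is a minimal Java sketch (the `SalesCustomer` and `ShippingCustomer` names and fields are hypothetical, not from the workshop material): the same real-world customer is modeled differently in two contexts, and the services share only an identifier rather than one entangled model.

```java
// Hypothetical sketch: two bounded contexts model "customer" independently.
// The Sales context cares about credit; the Shipping context cares only
// about delivery. Each microservice owns its own model; they share the id.
public class BoundedContextSketch {

    // Sales context: can this customer place an order of a given amount?
    record SalesCustomer(String id, long creditLimitCents) {
        boolean canOrder(long amountCents) {
            return amountCents <= creditLimitCents;
        }
    }

    // Shipping context: where do we deliver? No notion of credit here.
    record ShippingCustomer(String id, String deliveryAddress) {}

    public static void main(String[] args) {
        SalesCustomer buyer = new SalesCustomer("c-42", 10_000);
        ShippingCustomer recipient =
                new ShippingCustomer("c-42", "1 Rue de Rivoli, Paris");

        System.out.println(buyer.canOrder(5_000));       // true
        System.out.println(recipient.deliveryAddress()); // 1 Rue de Rivoli, Paris
    }
}
```

The point of the split is that a change to credit rules never forces a redeploy of the shipping service, which is exactly the kind of boundary the workshop helps you find.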
Data Streams to Cross Boundaries
In the second part we’ll discuss why microservices must avoid tight coupling and how they can still share data. Based on Kafka, Debezium and Kubernetes, our microservices will produce and consume data streams. We’ll also use change data capture to stream data changes directly out of a database, without any application changes needed. We’ll touch on how to set up Kafka clusters on OpenShift via the Strimzi project and how to monitor and tune them for performance and resilience.
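As a taste of the change-data-capture setup, a Debezium source connector is registered with Kafka Connect via a small JSON document. This is only a sketch: the hostnames, credentials and database names below are placeholders, and the property names follow recent Debezium releases (older versions used `database.server.name` and `database.history.*` instead of `topic.prefix` and `schema.history.internal.*`).

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "topic.prefix": "inventory",
    "database.include.list": "inventory",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.inventory"
  }
}
```

Once posted to the Kafka Connect REST API, the connector reads the database’s transaction log and publishes every row change as an event on a Kafka topic, with no change to the application writing to the database.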