AxonIQ

Software Development

Empower your business with AxonIQ—event-driven microservices, real-time insights, and seamless scalability.

About us

AxonIQ delivers a complete platform for evolving event-driven microservices using CQRS and Event Sourcing. With Axon Framework, Axon Server, and the AxonIQ Console for simplified system monitoring, developers can transition Java applications from monoliths to scalable, event-driven microservices without major refactoring. AxonIQ powers mission-critical systems in industries like healthcare, finance, logistics, and government. Our enterprise-grade solutions offer advanced scaling, big data handling, compliance support, and real-time operational insights, ensuring smooth operations for medium to large-scale deployments. Founded in 2017 and based in Utrecht, The Netherlands, AxonIQ also provides extensive tooling, professional support, and education for growing teams.

Industry
Software Development
Company size
11–50 employees
Headquarters
Utrecht
Type
Privately held
Founded
2017
Specialties
Microservices, Event-driven architecture, Axon Framework, Event sourcing, CQRS, Axon, Domain-Driven Design, Axon Server, AxonIQ Console, DDD, and Distributed Systems

Locations

Employees at AxonIQ

Updates

  • AxonIQ reposted this

    🔔 Event Sourcing: Your Boost for NIS2, DORA & AI!

    Do you want data that stands out? 🌟 The EU is demanding more security, traceability, and transparency with NIS2 and DORA. But hey – why just meet the requirements when you can also make your data AI-ready?

    👉 The Magic Formula: Event Sourcing. Unlike "old" systems that overwrite data, Event Sourcing stores every change as an Event. This brings:
    ✅ Maximum Traceability: Who changed what, and when?
    ✅ Perfect Compliance: NIS2 and DORA requirements? Check.
    ✅ AI-ready: Data that AI will love.

    💡 Our Gamechanger: The Axon Framework! For over 8 years, we’ve relied on AxonIQ’s framework – because it just works and is fun to use. And this year, we were even sponsors at the AxonIQ Conference in Amsterdam. What did we bring back? Insider tips, best practices, and exciting success stories!

    🚀 So, what are you waiting for? Event Sourcing is the key to starting the future securely, efficiently, and AI-ready.

    📩 Let’s talk! Send us a message here or schedule a meeting directly. We’ll show you how Event Sourcing can transform your business.

    #EventSourcing #softwareengineering #Innovation

  • LAST CHANCE TO JOIN TODAY’S FREE WEBINAR — click the link below for details


    Building smarter, resilient systems doesn’t happen by accident. Get inside the minds of AxonIQ’s experts and learn to turn theory into real-world impact. Register below to join the AxonIQ Playbook Webinar.


  • AxonIQ reposted this

    Allard Buijze

    CTO & Founder at AxonIQ

    Event Sourcing "Friday FUD" - Event Modeling is an important technique, but we tried it before, and it didn't work for us. In the past few years, I've noticed a correlation between the success of a project implementing an event-sourced system and its design process. Teams using an event-driven design process are much more successful with event sourcing than teams that use more traditional techniques. It's not a secret that I've recently become a big fan of Event Modeling. But it wasn't love at first sight. In this analogy, the first date was rather... awkward. The problem was the environment. I've also made a few mistakes that I've seen others make. Once you're aware of them, they're easy to avoid. Alberto Brandolini, creator of Event Storming, a technique that shares some similarities with Event Modeling, once put it very nicely: You need two types of people in the room: people with questions and people with answers. Developers are people with questions. Finding people with answers is the key to successful event modeling. "But these sessions are way too technical" is a warning/complaint/excuse that I've heard multiple times. That's another common mistake that also makes it harder for the "people with answers" to participate in these sessions. Event Modeling focuses on a system's behavior, not the internal, technical intricacies of how it displays that behavior. When done right, Event Modeling involves nothing that a domain expert doesn't understand. Stick to the strict terminology of Event Modeling. Don't talk about Aggregates, Projections, and Sagas, but context, information, and automation. This helps keep the discussion of technical details out of the way. Understanding these pitfalls is one thing. Not falling into them during a session is quite a bit harder. Especially for the very first session, it's important to invite not only the people with questions and those with answers but also someone who can facilitate the session. Preferably, this is someone who is not too familiar with the domain. They should definitely not be a stakeholder in the end result. Fortunately, the Event Modeling community is quite large and still growing. There certainly is an experienced practitioner in the area who can help you get your first session off the ground. Once the team gains more experience working together, they can self-facilitate the sessions. I've been positively surprised several times when facilitating sessions for our customers. The development teams realized that they had uncovered many "unknown unknowns." The domain experts became more aware of the information the development teams required to be more productive. Event Modeling gave them the means to have a valuable, meaningful discussion that helped the project move forward. Have you tried Event Modeling? Was it successful? What's your "secret sauce"?

  • AxonIQ reposted this

    Allard Buijze

    CTO & Founder at AxonIQ

    Monoliths versus microservices. The debate should not be one versus the other. It's about finding the balance.

    Microservices come with a lot of additional complexity. The distributed nature of the system makes certain changes very difficult. The network alone is a place to be worried about. The less time you(r program) spend there, the better. There is a reason why the first rule of distributed systems is: don't!

    But monoliths... They are hard to maintain. They tend to grow large and become difficult to deploy. They don't scale. And ultimately, they turn into a Big Ball of Mud. Really? I disagree.

    You might think that separating our system into smaller, independently deployable units makes software more maintainable. That's partly true. Breaking it into smaller units is the way to go. But deploying them separately is not.

    The biggest challenge we face in separating our system into modules is that we don't know what the right module boundaries are. Based on experience, some boundaries may look absolutely obvious. But we need to face the fact that for most of the systems we build, we don't exactly know the specifications by the time they reach production. As we build our system, we learn more about the domain. Meanwhile, the domain may even evolve, requiring our software to adapt.

    All these changes put tension on our boundaries. We may need to shift them slightly: certain components that were originally seen as "separate" may now need to be joined, or vice versa. When our software is split into separate deployable units, moving components from one to another is no longer a simple refactoring. It becomes a major change that needs a complete deployment and deprecation lifecycle of API changes. A lot of effort for a conceptually simple change.

    How can we benefit from the flexibility of the monolith and the scalability and "fast flow" of microservices? Look for the middle ground. Between "one" and "thousands," there is a specific number that works well for you in your specific environment. Don't try to guess that number upfront. You'll get it wrong. And even if you get it right, the right number will change over time.

    Instead, allow your software to adapt. Build it so that components can be split into separate deployable units and combined back into one. Impossible? No. These components need to be "Location Transparent." Components are location transparent when they aren't aware of the relative location of the components they interact with.

    To achieve location transparency, define explicit messages. Define the commands, events, and queries (including their replies) that flow between these components. Transport these messages using a bus abstraction. When needed, the bus implementation will carry the messages over the network. When not, it will deliver them locally to the right destination.

    In the diagram depicted below, the sweet spot is as far up as you can get and as far left as you can get away with.

    (A minimal location-transparency sketch follows after the Updates section.)

  • AxonIQ reposted this

    Allard Buijze

    CTO & Founder at AxonIQ

    Event Sourcing "Friday FUD" - How can I know if capturing history is useful? Short answer: you can't. You can't predict the future. But you can prepare for some of what may possibly happen. Recently, I was looking into a particular piece of code, to find out why it was behaving differently from what I expected. It was clear that the code had been modified to show this particular behavior. I had no idea why this had happened. What do you do? Simple. You check the git history. The change was easily found, and it became clear that it was made for good reasons. This wasn't a bug. My expectations of the code at this detailed level were simply outdated. Then, I realized that I've had similar problems with data. Sometimes, the combination of data in fields in your system doesn't seem to make sense. Wouldn't it be nice if you could have a git history for this data? The solution was simple: history tables. They were great. Every change of data would be accompanied by a new entry in the history table. We could trace all the changes. We were proven wrong. It wasn't until long that some of the changes didn't appear in the history tables. Some of the associations didn't seem important. Some of the entries didn't require (we thought) a history table. But more importantly, we lost the "why" of the changes. The original business intent was lost. At the core, the application state results from a series of changes made in an application. The state is required to make the application work in its current form. The challenge we face is that this "form" changes in unforeseen ways. We need to be able to adapt the application's functionality. The state as we store it may be, but is often not, suitable for this new form. Event Sourcing approaches application state in a fundamentally different way. It values change more than state. The primary storage contains these changes. All the rest is derived from that. Developers new to event sourcing often fear the consequences of change in an Event-Sourced system. Funny enough, those who have gained experience value the increased ability to change in Event Sourcing compared to state-driven approaches. It allows them to change their reasoning about the state without compromise. And when the state doesn't tell the story they expect? They can rely on the change history to explain why thing are the way they are. Next time you use git commit, think about your colleagues who work with that system's data. Have you given them the same ability that you value so much?

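The monoliths-versus-microservices repost above hinges on location transparency: components exchange explicit commands, events, and queries over a bus abstraction, so the same code can run in one deployable unit or be spread across several. The sketch below illustrates that idea with Axon Framework 4.x gateways and handler annotations; the shipping domain, message types, and class names are invented for illustration, and Java 17 (for records) is assumed.

// Illustrative sketch only: message types and classes are made up; the gateway and
// handler annotation APIs are Axon Framework 4.x.
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.queryhandling.QueryGateway;
import org.axonframework.queryhandling.QueryHandler;

import java.util.concurrent.CompletableFuture;

// Explicit messages: a command, an event, and a query with its reply type.
record ShipOrder(String orderId) {}
record OrderShipped(String orderId) {}
record FindOrderStatus(String orderId) {}
record OrderStatus(String orderId, String status) {}

// The sender only knows the messages and the gateways, never the handler's location.
class ShippingClient {
    private final CommandGateway commands;
    private final QueryGateway queries;

    ShippingClient(CommandGateway commands, QueryGateway queries) {
        this.commands = commands;
        this.queries = queries;
    }

    void shipAndCheck(String orderId) {
        // Dispatched over the command bus: delivered in-JVM when the handler is local,
        // over the network (e.g. via Axon Server) when it is not, without changing this code.
        commands.sendAndWait(new ShipOrder(orderId));

        CompletableFuture<OrderStatus> status =
                queries.query(new FindOrderStatus(orderId), OrderStatus.class);
        status.thenAccept(s -> System.out.println(s.orderId() + " -> " + s.status()));
    }
}

// The handlers can live in the same deployable unit as the client, or in a separate one.
class ShippingHandler {

    @CommandHandler
    public void handle(ShipOrder command) {
        // ... perform the change, then publish OrderShipped on the event bus
    }

    @EventHandler
    public void on(OrderShipped event) {
        // ... update a projection that the query handler reads from
    }

    @QueryHandler
    public OrderStatus handle(FindOrderStatus query) {
        return new OrderStatus(query.orderId(), "SHIPPED");
    }
}

Because the boundary between ShippingClient and ShippingHandler is only a set of messages, moving the handler into its own deployable unit later is a deployment decision rather than a refactoring.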

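The "capturing history" repost above describes the core of event sourcing: every change is appended as an event that carries the business intent, and current state is derived by replaying those events. As a rough sketch of that idea using Axon Framework 4.x annotations (the bank-account domain and all names are invented; Java 17 is assumed, and the class would still need to be registered as an event-sourced aggregate, for example with the Spring Boot starter's @Aggregate annotation):

// Illustrative sketch only: the domain and names are made up; the annotations and
// AggregateLifecycle.apply(...) are Axon Framework 4.x APIs.
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.modelling.command.TargetAggregateIdentifier;

// Commands carry the business intent; events record what happened and why.
record OpenAccount(String accountId, int initialBalance) {}
record AccountOpened(String accountId, int initialBalance) {}
record WithdrawCash(@TargetAggregateIdentifier String accountId, int amount) {}
record CashWithdrawn(String accountId, int amount) {}

class Account {

    @AggregateIdentifier
    private String accountId;
    private int balance;

    protected Account() {
        // Axon uses this no-arg constructor when rebuilding the aggregate from its events.
    }

    @CommandHandler
    public Account(OpenAccount command) {
        // Nothing is overwritten: the decision is appended as an event.
        AggregateLifecycle.apply(new AccountOpened(command.accountId(), command.initialBalance()));
    }

    @CommandHandler
    public void handle(WithdrawCash command) {
        if (command.amount() > balance) {
            throw new IllegalStateException("Insufficient balance");
        }
        AggregateLifecycle.apply(new CashWithdrawn(command.accountId(), command.amount()));
    }

    @EventSourcingHandler
    public void on(AccountOpened event) {
        this.accountId = event.accountId();
        this.balance = event.initialBalance();
    }

    @EventSourcingHandler
    public void on(CashWithdrawn event) {
        // Current state is derived by replaying stored events; the events keep the "why".
        this.balance -= event.amount();
    }
}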
Similar pages

Browse jobs

Funding

AxonIQ: 3 rounds in total

Last round

Series A

US$7,331,853

See more funding details on Crunchbase