Evolution of Event-Driven Architecture


As I explained in my previous posts (here and here), event-driven architecture is key to digital transformation. Here I will look at how it has evolved.

Today's event-driven architectures have moved well beyond the basic pipe that simply connects one system to another. They combine elements of distributed database systems and streaming systems, letting users join, aggregate, modify, and store data. Modern implementations of these practices can be grouped into four general patterns.

1.   Global Event Streaming Platform

This pattern resembles traditional enterprise messaging architectures. With this event-driven approach, a company maintains core datasets such as customers, payments, accounts, orders, and trades, and shares them between application modules over an event streaming platform (e.g. Apache Kafka).

This approach replaces the point-to-point communication technologies used in legacy systems, connecting applications across multiple separate locations in real time.

For example, a business running a legacy system in New York can also operate international branches in Stockholm and Berlin, connected to microservices on Amazon Web Services, all on the back of the same event-driven approach. A more complex deployment might connect retail outlets across several regions.
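As a rough sketch of what publishing to such a shared core dataset looks like, the snippet below sends an order event to a common Kafka topic that any site or microservice can subscribe to. The broker address, topic name, and JSON payload are illustrative assumptions, not details of any specific deployment.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address; in practice this points at the shared cluster.
        props.put("bootstrap.servers", "kafka.example.internal:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish an order event to a shared "orders" topic instead of
            // calling each downstream system point-to-point.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("orders", "order-1001",
                    "{\"orderId\":\"1001\",\"amount\":42.50}");
            producer.send(record);
        }
    }
}
```

Downstream consumers in Stockholm, Berlin, or on AWS would simply subscribe to the same topic rather than being wired up to the producer directly.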

Renowned companies that have adopted this approach include Audi, Netflix, Salesforce, and ING, among others.

2.   Central Event Store

Streaming platforms can retain events for a finite or indefinite period, creating an event store: a kind of organizational ledger.

Companies use this approach for retrospective analysis; for instance, to train machine learning (ML) models that detect fraud on an e-commerce website.

This approach lets developers create new applications without requiring source systems to republish previous events. As a result, it becomes easier to replay datasets from their original source, which may be a legacy, external, or mainframe system.
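To make the replay idea concrete, here is a minimal consumer sketch that reads an orders topic from the earliest retained offset, the way a new application might bootstrap itself from the event store. The broker address, consumer group, and topic name are assumptions for illustration.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderReplay {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.internal:9092"); // assumed address
        props.put("group.id", "fraud-model-training");                 // hypothetical group
        props.put("auto.offset.reset", "earliest");                    // start from the oldest retained event
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // Replay historical order events, e.g. to feed a fraud-detection model.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.printf("replayed %s -> %s%n", r.key(), r.value()));
            }
        }
    }
}
```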

Some companies store their complete data in Apache Kafka. This approach goes by names such as event sourcing, forward event cache, or event streaming.

Event storage is necessary for stateful stream processing, which generates self-sufficient, enriched events from several distinct data sources. For example, it can be used to enrich orders with data from a customer module.

FaaS implementations or microservices can easily consume enriched events because these events carry all the data the service requires. Enriched events can also serve as denormalized input for a database. These enrichments are executed by stream processors, which need event storage to hold the data used in tabular operations.
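A minimal Kafka Streams sketch of such an enrichment is shown below: orders arrive as a stream, customer records are materialized as a table backed by the event store, and the join emits self-contained enriched events. The topic names, plain string payloads, and the assumption that orders are keyed by customer id are illustrative; a real pipeline would typically use Avro or JSON serdes.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class OrderEnricher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enricher");                  // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.internal:9092");  // assumed address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Orders arrive as an event stream; customers are materialized as a table,
        // so the join is served from local state built from the event store.
        KStream<String, String> orders = builder.stream("orders");     // assumed keyed by customer id
        KTable<String, String> customers = builder.table("customers");

        // Enrich each order with the matching customer record and publish a
        // self-contained event that downstream consumers can use directly.
        orders.join(customers, (order, customer) -> order + " | " + customer)
              .to("orders-enriched");

        new KafkaStreams(builder.build(), props).start();
    }
}
```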

3.   Event-First and Event Streaming Applications

In conventional setups, applications usually gather data from several locations, import the datasets into a database, and only then filter, join, aggregate, and clean them. This is an effective strategy for applications that build dashboards and reports or operate as online services. In business processing, however, efficiency can be gained by skipping the database step and instead sending real-time events directly into a serverless function or microservice.

For such approaches, stream processors like KSQL and Kafka Streams perform operations such as joining, filtering, and aggregating event streams to manipulate the data.

For instance, suppose there is a limit-checking service that joins payments and orders using KSQL or a similar stream processor. It extracts the required fields and records and sends them to a FaaS (function as a service) or a microservice, where the limit check is executed; no database is involved.
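Below is a sketch of the feed for such a limit check, written with Kafka Streams rather than KSQL: payments are joined to their orders within a short window and the result is written to a topic that triggers the FaaS or microservice. The topic names, window size, and the assumption that both streams are keyed by order id are illustrative only.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class LimitCheckFeed {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "limit-check-feed");                // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.internal:9092");  // assumed address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");      // assumed keyed by order id
        KStream<String, String> payments = builder.stream("payments");  // assumed keyed by order id

        // Join payments to their orders within a short window, keep only what the
        // limit check needs, and emit to a topic that triggers the FaaS/microservice.
        // No database sits in the path.
        orders.join(payments,
                    (order, payment) -> "{\"order\":" + order + ",\"payment\":" + payment + "}",
                    JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)))
              .to("limit-check-requests");

        new KafkaStreams(builder.build(), props).start();
    }
}
```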

This event-driven approach makes systems more responsive: they can be built quickly and conveniently because they require less data and less infrastructure.

4.   Automated Data Provisioning

This approach mixes the practices above with serverless/PaaS implementations, with the aim of making data provisioning self-service.

In automated data provisioning, a user specifies the type of data they require, including its form and where it should land: a microservice, FaaS, database, or distributed cache.

The system provisions the infrastructure, pre-populates it accordingly, and supervises the flow of events. Stream processing buffers and manipulates the data streams. For instance, a retail business might join payments, customers, and real-time orders, then send the result into a FaaS or a microservice.
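There is no standard API for this pattern, so the sketch below is purely hypothetical: a declarative request describing which dataset a team needs, how it should be filtered, and where it should land. A self-service platform would turn such a request into topic subscriptions, connectors, and target schemas.

```java
// Purely illustrative; no such provisioning API exists in Kafka itself.
public class ProvisioningExample {

    // Declarative description of the data a team needs and where it should land.
    record DataRequest(String sourceDataset, String filter, String targetType, String targetName) {}

    public static void main(String[] args) {
        DataRequest request = new DataRequest(
            "orders-enriched",          // dataset already held in the streaming platform
            "region = 'EU'",            // only the slice this service needs right now
            "postgres",                 // destination: database, cache, FaaS, microservice...
            "eu-orders-service-db");    // hypothetical target name

        // In a real platform, submitting this request would create the connectors
        // and target schema, pre-populate the destination, and keep it in sync.
        System.out.println("Would provision: " + request);
    }
}
```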

The migration of businesses to private and public clouds has driven adoption of this pattern, since it helps bootstrap new project environments. Because datasets are stored or cached in the messaging system, users only pull the data they need at a given time. In traditional messaging, by contrast, it was common practice to hoard and consume complete datasets so they could be used later.

 

Shedding Light on the Evolution

Over the last few years, event-driven architectures have evolved considerably. In the beginning they were used only to pass messages: notification and state transfer over standard messaging systems.

Later, these architectures were enhanced with better centralized control and improved out-of-the-box connectivity. Centralized control proved tricky, however; the standardization it imposed made it difficult for teams to progress. More recently, storage patterns like CQRS and event sourcing have gained popularity, and the approaches described above build on them.

Modern event streaming systems have taken the next step and unified storage, processing, and events on the same platform. This combination matters because it sets them apart from databases, which store data in one place, and from messaging systems, in which data is transient. They are, in effect, a blend of both.

Used well, these patterns let organizations span multiple regions and clouds for global connectivity, and data becomes one of their most prized commodities, provisioned as a service: it can be pushed into a cache, database, machine learning model, serverless function, or microservice.
