EDA Large Payload Pattern on AWS

In this blog post, we look at a pattern that allows services to send large payloads through AWS EDA message and event-based workloads.

Jason Conway-Williams
18 min read · Jan 31, 2024

What is EDA?

Event-Driven Architecture (EDA) is an architectural pattern that emphasises the production, detection, consumption, and reaction to events in a system. In an event-driven system, events represent significant occurrences or state changes that can trigger a response from one or more components within the system. The key characteristics of Event-Driven Architecture include:

1. Event Generation: Events are generated by different components or services in the system. These events represent changes in state, occurrences of interest, or requests for specific actions.

2. Event Detection and Notification: There is a mechanism for detecting and notifying interested parties (components or services) about the occurrence of events. This can involve event brokers, message queues, or other communication channels.

3. Decoupled Components: Components within the system are loosely coupled. They interact through events, allowing them to operate independently and without direct dependencies on each other.

4. Asynchronous Communication: Communication between components is often asynchronous. Components publish events without waiting for immediate responses, enhancing scalability and flexibility.

5. Event Handlers: Components interested in specific types of events subscribe to them and act as event handlers. When an event occurs, the associated handlers are notified and can take appropriate actions.

6. Scalability and Flexibility: Event-Driven Architecture promotes scalability and flexibility by allowing the addition or removal of components without disrupting the overall system. New features can be introduced by handling new types of events.

7. Real-time Responsiveness: Event-Driven systems can be highly responsive in real-time, reacting to events as they occur. This is particularly valuable in scenarios where timely responses are crucial.

Examples of Event-Driven Architecture implementations include message-driven systems, publish-subscribe systems, and systems using event brokers like AWS EventBridge.

EDA on AWS with Serverless Managed Services

Event-Driven Architecture (EDA) in AWS with Serverless services involves leveraging AWS’s suite of serverless offerings to build scalable, flexible, and responsive applications that react to events in real-time. Here’s a brief overview:

1. AWS Lambda:
Function as Event Handlers: AWS Lambda is a key Serverless compute service. It enables you to write functions that act as event handlers. These functions are triggered by various AWS events or custom events.

2. Amazon EventBridge:
Event Bus and Rules: Amazon EventBridge is a fully managed event bus service that simplifies the building of event-driven architectures. It allows you to create event buses and rules to route events to Lambda functions, AWS Step Functions, or other targets.

3. AWS S3 Event Triggers:
Object-Based Events: AWS S3, the object storage service, supports event triggers for new object creation, deletion, or modification. You can configure Lambda functions to be invoked automatically when such events occur.

4. Amazon API Gateway:
HTTP Event Sources: Amazon API Gateway enables you to create RESTful APIs with HTTP event sources. Events generated from API requests can trigger Lambda functions, allowing you to build serverless APIs or storage-first APIs where an API Gateway endpoint is integrated with an SNS Topic.

5. AWS Step Functions:
Orchestrating Workflows: AWS Step Functions provide a way to orchestrate multiple AWS services and Lambda functions into serverless workflows. Events can trigger state transitions within a Step Function.

6. Amazon DynamoDB Streams:
Database-Driven Events: DynamoDB, AWS’s managed NoSQL database, supports streams that capture changes to the data. Lambda functions can be connected to these streams to respond to changes in real-time.

7. Amazon SNS and SQS:
Messaging Services: Amazon Simple Notification Service (SNS) and Amazon Simple Queue Service (SQS) can be used to decouple components and distribute events. Lambda functions can subscribe to SNS topics or poll SQS queues for processing events.

8. AWS IoT:
IoT-Driven Events: For Internet of Things (IoT) scenarios, AWS IoT services allow you to handle events generated by IoT devices. Lambda functions can respond to events triggered by device activities.

Event-Driven Architecture in AWS with Serverless services empowers developers to design systems that react to events dynamically, scale seamlessly, and focus on business logic without managing infrastructure. It supports a wide range of use cases, from real-time data processing to building responsive APIs and applications.

AWS Service Payload Limits

Many AWS services that generate events or messages have limits on the payload size they can accept or send. For example:

— Amazon S3 allows object uploads of up to 5TB, so workloads can involve payloads far larger than any messaging service will accept.

— Amazon SNS, SQS, EventBridge and Step Functions allow message payloads of up to 256KB by default.

— AWS Lambda functions have invocation payload size limits of 6MB for synchronous invocations and 256KB for asynchronous invocations, regardless of the memory allocated to the function.

This can cause issues when a service that accepts very large payloads generates events that should be processed by a service with a smaller limit.

For instance, when an Amazon S3 object-created event triggers a workflow that forwards the uploaded object's contents in an SNS notification, the publish will fail if the object is over 256KB because it exceeds the SNS payload limit. Similarly, a Lambda function triggered by that event cannot receive the full contents of a large object directly in its invocation payload because of its own payload size cap. The same is true for a Lambda function that receives a payload of several megabytes, for example through API Gateway, and sends the received payload in a message to SNS, SQS or EventBridge, which can only accept payloads of up to 256KB.

To ensure our workload is robust and can tolerate payloads of varying sizes across services with different payload size limits, we must implement mechanisms to avoid the mismatch in payload size limits across AWS services.

Techniques for Handling Variable Payload Limits

Two possible techniques for handling variable payload limits are data compression and chunking.

Data Compression

Implement Lambda functions to compress the payload data (e.g. with gzip) before forwarding to destinations with lower size limits.
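
As a rough sketch of this approach (the helper names below are illustrative and not part of the example project), a payload could be gzip-compressed and base64-encoded with Node's built-in zlib module before publishing, and reversed on the consumer side:

import { gzipSync, gunzipSync } from 'zlib';

// Producer side: compress the payload and base64-encode it so it can travel as a string.
export const compressPayload = (payload: string): string =>
  gzipSync(Buffer.from(payload, 'utf-8')).toString('base64');

// Consumer side: decode and decompress the received message body.
export const decompressPayload = (compressed: string): string =>
  gunzipSync(Buffer.from(compressed, 'base64')).toString('utf-8');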

Pros of Compression:

  • Reduces payload data volume so a large payload may fit within size limits.
  • More efficient to transmit across networks.
  • Can be decompressed automatically on the receiving end.

Cons of Compression:

  • Additional computing overhead to compress and decompress.
  • May not sufficiently reduce payload size to fit limits.
  • Compressed formats less readable for debugging.

Chunking

Use an AWS Lambda function to listen to events from services with large limits. In the Lambda, check the payload size and split it programmatically into chunks that fit within the payload size limit of the destination service, then send the chunks to the destination.
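
A minimal sketch of the splitting step is shown below; the chunk size, envelope shape and helper names are illustrative assumptions rather than part of the example project, and a consumer would buffer chunks by payloadId and reassemble them once all have arrived:

import { v4 as uuid } from 'uuid';

type PayloadChunk = {
  payloadId: string; // identifies which payload a chunk belongs to
  index: number;     // position of the chunk for reassembly
  total: number;     // total number of chunks for this payload
  data: string;
};

// Note: slicing by character count is a simplification; a production implementation
// should measure UTF-8 byte length so multi-byte characters do not break the size limit.
export const chunkPayload = (payload: string, chunkSize = 200 * 1024): PayloadChunk[] => {
  const payloadId = uuid();
  const total = Math.ceil(payload.length / chunkSize);
  const chunks: PayloadChunk[] = [];
  for (let offset = 0; offset < payload.length; offset += chunkSize) {
    chunks.push({
      payloadId,
      index: chunks.length,
      total,
      data: payload.slice(offset, offset + chunkSize),
    });
  }
  return chunks;
};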

Pros of Chunking:

  • Divides an extremely large payload into segments that meet destination size limits.
  • Each segment has a better chance of being processed.
  • Failed chunks can be re-sent without resending the entire payload.

Cons of Chunking:

  • Additional logic needed to segment, track and reassemble chunks.
  • Out-of-order arrival must be handled.
  • Overhead from per-chunk metadata and reassembly state.

Store and Notify Pattern

An alternative, simpler approach to handling varied payload sizes is the Store and Notify pattern, a commonly used approach in event-driven architectures for handling events with large payloads. It works as follows:

  1. Store Payload: Instead of sending the entire large payload data directly in an event, the payload is persisted in durable object storage, like Amazon S3.
  2. Notify with Reference: A lightweight notification event is then published to downstream consumers containing reference information or a pointer to retrieve the stored payload, rather than the payload itself. This reference metadata is small in size.
  3. Retrieve on Demand: Event consumers use the reference data to retrieve or download only the specific parts of the large payload they need from the storage at the time of processing.

Key benefits of the Store and Notify pattern include avoiding restrictive payload size limits when publishing events, optimised network usage and costs since only metadata is transmitted, and durable retention of payload data for replay. Consumers can also scale retrieval of the payload in a managed way.

Pros:

  • Small event notifications avoid payload size limits.
  • Reduced network usage and costs.
  • Durable retention of payload in storage.
  • Consumers retrieve only required payload data.

Cons:

  • Added latency of separate payload retrieval.
  • Storage costs for large payloads/data volumes.
  • Consumers must handle eventual consistency of storage.

Full State Carried Transfer

The opposite of the Store and Notify pattern is Full State Carried Transfer, which refers to an approach where events published by a producer service carry the entire state or payload needed by all potential consumers to process the event. For example:

  • An e-commerce order service publishes an OrderCreated event containing the full order details like customer name, shipping address, items purchased, prices, totals, etc.
  • A customer analytics service listening to events can consume the full order details from the event payload to update reports and metrics.
  • A fulfilment service can directly process the order details like shipping address from the event payload to arrange delivery.

In this approach, events carry the entire state rather than just a reference or identifier that needs to be enriched. This removes the need for consuming services to retrieve data from other data stores. However, carrying full state can result in large payloads to publish, and to retry if processing fails.
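
As an illustration (the field values are hypothetical, and the metaData/data envelope follows the convention introduced later in this post), a full-state OrderCreated event might look like this:

// Hypothetical full-state event: every consumer receives the complete order details.
const orderCreatedEvent = {
  metaData: {
    source: 'order-service',
    detailType: 'OrderCreated',
  },
  data: {
    orderId: '12345',
    customerName: 'Jane Doe',
    shippingAddress: '1 Example Street, London',
    items: [{ sku: 'ABC-1', quantity: 2, price: 19.99 }],
    total: 39.98,
  },
};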

Pros:

  • No additional lookup, full state directly in event.
  • Atomicity if the state update and event publish occur in one transaction.
  • Reduced consumer coupling to specific data store.

Cons:

  • Events may exceed payload size limits.
  • Repeated transfer of same state on retries.
  • State info bloats all events even if not always needed.

Hybrid Approach

In instances where an organisation adopts the Store and Notify pattern as the norm, variability in payload sizes poses no challenge, as consumers consistently retrieve event data upon consumption. The issue arises when an organisation opts for the Full State Carried Transfer pattern as its standard. In such cases, managing varied payload sizes becomes imperative. A hybrid approach can be implemented, wherein event publishers assess whether the payload surpasses the smallest size limit and, in those instances, employ the Store and Notify pattern. A convention can then be established, allowing event consumers to identify the pattern used for each event and handle it accordingly.

This approach becomes easier to adopt if the functionality required to determine which pattern to use and, subsequently, how to access the event data as a consumer is provided as common code through an organisation's Software Development Kit (SDK).

The code for the example project implementing the hybrid approach can be found below. The project does not contain production-ready code or adequate tests and is provided purely as an example.

Implementation

The following code provides an example of the hybrid approach, in which event producers assess whether the payload exceeds the standard size limit, typically 256KB for most AWS services. If the payload falls below this threshold, the Full State Carried Transfer pattern is employed. Conversely, if the payload surpasses the threshold, the Store and Notify pattern is used.

The image below shows the architecture of the sample project.

Large Payload Sample Project Architecture

The above architecture is used to show examples of a Lambda service receiving small and large payloads and sending events to a number of AWS services, each with a payload size limit of 256KB. In a real scenario it is unlikely that a single Lambda service would send messages and events to multiple event/message-based services; it has been implemented in this way to keep the project small.

Storage

As highlighted earlier in this post, and as commonly demonstrated in scenarios involving the Store and Notify pattern on AWS, S3 is recommended as the storage mechanism. Producers store payloads in S3 for event consumers to retrieve. This example follows the same approach but leverages the pre-signed URL feature of S3, offering event consumers restricted access to the stored payload.

An Amazon S3 pre-signed URL provides temporary access to share S3 object data with others, without requiring AWS credentials or permissions. When you create a pre-signed URL for an S3 object, you specify:

  • The bucket name.
  • The object key.
  • An expiration date and time.

Amazon S3 will then generate a special URL that encodes all that information including access permissions. Anyone you share the pre-signed URL with will be able to access the object through the URL, within the expiration period using the permissions you specified. S3 pre-signed URLs provide secure, temporary links to share S3 objects without requiring AWS account permissions or access keys. The set expiration and granular permissions keep access appropriate and limited. Benefits of using S3 pre-signed URLs include:

  • Share objects without granting AWS credentials.
  • Set an expiration so links stop working after a certain time.
  • Specify read-only or write access as needed.
  • No need to create temporary accounts for sharing objects.

Event and Message Schema Convention

EventBridge events require the event creator to supply an event source and a detail-type value, denoting the entity or system dispatching the event and the event name respectively. In an Event-Driven Architecture (EDA) landscape where services need both ordered and unordered events, with a centralised Simple Notification Service (SNS) Topic handling ordered events and EventBridge managing unordered ones, it is advisable to establish a unified event and message routing pattern. This pattern enables event/message consumers creating event rules and SNS/SQS subscriptions to adhere to a common convention when subscribing to events.

The examples presented in this blog post, along with the accompanying code project, employ such an event/message convention. Each event and message is expected to include a "source" property, indicating the origin of the event/message, and a "detailType" property, specifying the event/message name. Although EventBridge inherently provides a mechanism for this through its API when posting events, the same convention has been applied to SNS and SQS messages by structuring the message so that the root contains both a "metaData" and a "data" object. The "metaData" object encapsulates the required "source" and "detailType" properties.

Adopting a convention of this nature empowers all developers and teams to establish standardised event/message rules and subscriptions, equipped with filters based on the "metaData" object's "source" and "detailType" values; an indicative example of such filters is shown after the message sample below.

{
  "metaData": {
    "source": "event-source",
    "detailType": "MyEventName"
  },
  "data": {
    ...
  }
}
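
As an indicative example of how consumers might subscribe using this convention (these patterns are not taken from the example project), an EventBridge rule can filter on the native source and detail-type fields, while an SNS subscription can apply a filter policy scoped to the message body to match the metaData object:

// EventBridge rule event pattern, matching on the native source and detail-type fields.
const eventBridgeRulePattern = {
  source: ['event-source'],
  'detail-type': ['MyEventName'],
};

// SNS subscription filter policy with FilterPolicyScope set to 'MessageBody',
// matching the metaData convention described above.
const snsMessageBodyFilterPolicy = {
  metaData: {
    source: ['event-source'],
    detailType: ['MyEventName'],
  },
};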

Event Producer

The example project contains a Lambda function that acts as an event producer. The function is fronted by AWS API Gateway for ease of use, allowing us to send payloads of varying sizes to the Lambda function through an API Gateway POST request. On receipt of an event, the Lambda handler calls custom SQS, SNS and EventBridge clients.

import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { wrapper } from '../../utils/wrapper';
import { logger } from '../../utils/utils';
import { sendSNSMessage, sendSQSMessage, sendEvent } from '../../clients';
import { generatePayload } from '../../payload-generator';

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  try {
    const payload = await generatePayload(event.body!);
    await sendSQSMessage(
      'ProducerLambda',
      'TestSQSCreateEvent',
      payload,
      'TestSQSCreateEvent',
      false
    );
    await sendSNSMessage(
      'ProducerLambda',
      'TestSNSCreateEvent',
      payload,
      'TestSNSCreateEvent',
      false
    );
    await sendEvent(
      'ProducerLambda',
      'TestEventBusCreateEvent',
      payload,
      false
    );
    return {
      statusCode: 200,
      body: JSON.stringify({
        message: 'Message Sent',
      }),
    };
  } catch (error) {
    logger.error((error as Error).message, { error });
    return {
      statusCode: 500,
      body: (error as Error).message,
    };
  }
};

export const main = wrapper({
  handler,
});

The custom clients abstract the functionality provided by the corresponding clients in version 3 of the AWS SDK. All custom clients implement the same function contract, with the exception of the SNS and SQS clients requiring an additional message group ID as the fourth parameter. The last parameter required by all custom clients is a boolean value called formatPayload, which each client uses to decide whether it needs to evaluate the payload to determine which payload pattern to use. This has been provided in case a Lambda handler wants to call multiple custom clients, where the handler can perform the evaluation once outside of the clients so that the payload is only written to S3 once and the same reference is provided to all clients.

Custom SQS Client

import config from '../../config/config';
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';
import { logger } from '../../utils/utils';
import { generatePayload } from '../../payload-generator';

const sqs = new SQSClient({});
const queue = config.get('queue');

export const sendSQSMessage = async (
  source: string,
  detailType: string,
  message: string,
  messageGroupId: string,
  formatPayload = true
) => {
  try {
    let sqsMessage = message;
    if (formatPayload) {
      sqsMessage = await generatePayload(message);
    }
    const command = new SendMessageCommand({
      QueueUrl: queue,
      MessageGroupId: messageGroupId,
      MessageBody: JSON.stringify({
        metaData: {
          source,
          detailType,
        },
        data: sqsMessage,
      }),
    });

    const response = await sqs.send(command);

    if (response.$metadata.httpStatusCode !== 200) {
      const errorMessage = 'Error sending message to SQS';
      logger.error(errorMessage, { response });
      throw new Error(errorMessage);
    }
  } catch (error) {
    logger.error((error as Error).message);
    throw error;
  }
};

The above code shows the custom SQS client, which requires the message source, a detailType specifying the name of the message, the message as the actual payload itself, a messageGroupId and the optional formatPayload boolean value that specifies whether the client should determine which event payload pattern to use. If formatPayload is true, the generatePayload function is called to perform the evaluation and set the payload value.

Custom SNS Client

import config from '../../config/config';
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';
import { logger } from '../../utils/utils';
import { generatePayload } from '../../payload-generator';

const snsClient = new SNSClient({});
const topic = config.get('topic');

export const sendSNSMessage = async (
  source: string,
  detailType: string,
  message: string,
  messageGroupId: string,
  formatPayload = true
) => {
  try {
    let snsMessage = message;
    if (formatPayload) {
      snsMessage = await generatePayload(message);
    }
    const command = new PublishCommand({
      TopicArn: topic,
      MessageGroupId: messageGroupId,
      Message: JSON.stringify({
        metaData: {
          source,
          detailType,
        },
        data: snsMessage,
      }),
    });

    const response = await snsClient.send(command);

    if (response.$metadata.httpStatusCode !== 200) {
      const errorMessage = 'Error sending message to SNS';
      logger.error(errorMessage, { response });
      throw new Error(errorMessage);
    }
  } catch (error) {
    logger.error((error as Error).message);
    throw error;
  }
};

The above code shows the custom SNS client, which requires the message source, a detailType specifying the name of the message, the message as the actual payload itself, a messageGroupId and the optional formatPayload boolean value that specifies whether the client should determine which event payload pattern to use. If formatPayload is true, the generatePayload function is called to perform the evaluation and set the payload value.

Custom EventBridge Client

import config from '../../config/config';
import {
  EventBridgeClient,
  PutEventsCommand,
} from '@aws-sdk/client-eventbridge';
import { logger } from '../../utils/utils';
import { generatePayload } from '../../payload-generator';

const eventbridge = new EventBridgeClient({});
const eventBus = config.get('eventBus');

export const sendEvent = async (
  source: string,
  detailType: string,
  eventData: string,
  formatPayload = true
) => {
  try {
    let formattedEvent = eventData;
    if (formatPayload) {
      formattedEvent = await generatePayload(eventData);
    }
    const command = new PutEventsCommand({
      Entries: [
        {
          Source: source,
          EventBusName: eventBus,
          DetailType: detailType,
          Detail: formattedEvent,
        },
      ],
    });

    const response = await eventbridge.send(command);

    if (response.$metadata.httpStatusCode !== 200) {
      const errorMessage = 'Error sending event to the Event Bus';
      logger.error(errorMessage, { response });
      throw new Error(errorMessage);
    }
  } catch (error) {
    logger.error((error as Error).message);
    throw error;
  }
};

The above code shows the custom EventBridge client, which requires the event source, a detailType specifying the name of the event, eventData as the actual payload itself and the optional formatPayload boolean value that specifies whether the client should determine which event payload pattern to use. Unlike the SNS and SQS clients, it does not require a messageGroupId. If formatPayload is true, the generatePayload function is called to perform the evaluation and set the payload value.

Payload Evaluation

import { createS3Payload, stringIsGreaterThan250KB } from '../utils';

export const generatePayload = async (payload: string): Promise<string> => {
  let generatedPayload = payload;
  if (stringIsGreaterThan250KB(payload)) {
    generatedPayload = JSON.stringify(await createS3Payload(payload));
  }
  return generatedPayload;
};

The code above shows the generatePayload function used by the handler and the three custom clients. The generatePayload function receives a payload as a string, which is evaluated via a call to the stringIsGreaterThan250KB function to determine whether the payload is greater than 250KB in size. If it is, generatedPayload is set via a call to the createS3Payload function.
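
The stringIsGreaterThan250KB utility is not shown in this post; a minimal sketch, assuming it simply compares the UTF-8 byte length of the serialised payload against a 250KB threshold (leaving headroom below the 256KB limit for the metaData envelope), might look like this:

// Minimal sketch of the size check assumed by generatePayload.
const SIZE_THRESHOLD_BYTES = 250 * 1024;

export const stringIsGreaterThan250KB = (payload: string): boolean =>
  Buffer.byteLength(payload, 'utf-8') > SIZE_THRESHOLD_BYTES;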

Create S3 Pre-Signed URL and Event Payload

import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
} from '@aws-sdk/client-s3';
import config from '../config/config';
import { logger } from './utils';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { v4 as uuid } from 'uuid';

const s3 = new S3Client({});
const bucketName = config.get('s3Bucket');

export type S3Payload = {
  s3Url: string;
};

const saveToS3 = async (
  objectName: string,
  body: string
): Promise<void> => {
  try {
    const command = new PutObjectCommand({
      Bucket: bucketName,
      Key: objectName,
      Body: body,
    });
    await s3.send(command);
  } catch (error) {
    logger.error((error as Error).message);
    throw error;
  }
};

const generateUrl = async (objectName: string) => {
  try {
    const command = new GetObjectCommand({
      Bucket: bucketName,
      Key: objectName,
    });

    // Await here so any error from getSignedUrl is caught and logged below.
    const url = await getSignedUrl(s3, command, {
      expiresIn: 3600,
    });

    return url;
  } catch (error) {
    logger.error((error as Error).message);
    throw error;
  }
};

export const createS3Payload = async (body: string): Promise<S3Payload> => {
  try {
    const objectName = uuid();
    await saveToS3(objectName, body);
    const s3Url = await generateUrl(objectName);
    return { s3Url };
  } catch (error) {
    logger.error((error as Error).message);
    throw error;
  }
};

The above code shows the contents of the s3-utils.ts file, which exposes the createS3Payload function. The function calls the internal saveToS3 function to persist the payload to S3 and the generateUrl function to generate the S3 pre-signed URL. The returned event payload is created with a property named s3Url containing the generated pre-signed URL. Consumers of events within the organisation would expect any event created using the Store and Notify pattern to contain the s3Url property.
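
For illustration, an SQS message produced with the Store and Notify pattern would then carry a body along the following lines, with the data property holding the stringified S3Payload rather than the original payload (the URL is a placeholder):

// Illustrative message body when the payload has been offloaded to S3.
const exampleMessageBody = JSON.stringify({
  metaData: {
    source: 'ProducerLambda',
    detailType: 'TestSQSCreateEvent',
  },
  // data holds the stringified S3Payload, which is why the consumer middleware
  // parses it before checking for the s3Url property.
  data: JSON.stringify({ s3Url: 'https://<bucket>.s3.<region>.amazonaws.com/<key>?X-Amz-Expires=3600&...' }),
});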

Event Consumers

The project consumes events from three sources: SNS, SQS and EventBridge. The SQS queue has been set as the event source of an event consumer Lambda handler. A subscription has been added to the SNS Topic with the SQS queue as its target, and a rule has been added to the EventBridge event bus with the SQS queue as its target. This results in all events and messages sent by the producer Lambda function being consumed by a single event consumer Lambda handler. The code below shows the event consumer Lambda handler.

import { SQSEvent, SQSHandler } from 'aws-lambda';
import { wrapper } from '../../utils/wrapper';
import { logger } from '../../utils/utils';
import { payloadProcessorMiddleware } from '../../middleware/index';

export const handler: SQSHandler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    const { body } = record;
    logger.debug('Received SQS message:', { body });
  }
};

export const main = wrapper({
  handler,
}).use(payloadProcessorMiddleware());

The event consumer handler is extremely simple. It receives the SQSEvent, iterates through each record and logs the record body. The event payload is not evaluated in the handler itself; that is done within a custom Middy middleware. A custom wrapper function is called at the bottom of the handler file, which wraps the handler as a Middy handler implementing a number of custom Middy middleware. The Middy handler is finally instructed to use the custom payload processor middleware by calling the payloadProcessorMiddleware function.
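
The wrapper utility itself is not shown in this post. A minimal sketch, assuming it does little more than wrap the handler with Middy so that shared middleware can be attached with .use(), might be the following; the payload processor middleware it uses is shown after it:

import middy from '@middy/core';
import { Handler } from 'aws-lambda';

type WrapperOptions = {
  handler: Handler;
};

// Minimal sketch: wrap the Lambda handler with Middy so middleware such as the
// payload processor below can be attached with .use().
export const wrapper = ({ handler }: WrapperOptions) => middy(handler);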

import middy from '@middy/core';
import { SQSBatchResponse, SQSEvent, SQSRecord } from 'aws-lambda';

type ProcessRecordResult = {
  successful: boolean;
  messageId: string;
};

const processRequest = async (record: SQSRecord): Promise<ProcessRecordResult> => {
  try {
    const { body } = record;
    const parsedBody = JSON.parse(body);
    let data;

    if (parsedBody.data) {
      // SQS
      data = JSON.parse(parsedBody.data);
    } else if (parsedBody.detail) {
      // EventBridge
      data = parsedBody.detail;
    } else if (parsedBody.Message) {
      // SNS
      const message = JSON.parse(parsedBody.Message);
      data = JSON.parse(message.data);
    } else {
      throw Error('Could not identify message type.');
    }

    if (!data.s3Url) {
      // Full state carried transfer: the payload is already in the event.
      // record.body is typed as a string, so re-serialise the extracted data.
      record.body = JSON.stringify(data);
      return {
        successful: true,
        messageId: record.messageId,
      };
    }

    // Store and Notify: retrieve the payload from the S3 pre-signed URL.
    const response = await fetch(data.s3Url);

    if (response.status !== 200) {
      const errorMessage = 'Unable to retrieve request data from S3.';
      throw Error(errorMessage);
    }

    record.body = await response.text();
    return {
      successful: true,
      messageId: record.messageId,
    };
  } catch (error) {
    return {
      successful: false,
      messageId: record.messageId,
    };
  }
};

export const payloadProcessorMiddleware = (): middy.MiddlewareObj<SQSEvent, SQSBatchResponse> => {
  const before: middy.MiddlewareFn<SQSEvent, SQSBatchResponse> = async (request): Promise<void> => {
    const failedItems: { itemIdentifier: string }[] = [];
    for (const record of request.event.Records) {
      try {
        const processResult = await processRequest(record);
        if (!processResult.successful) {
          failedItems.push({ itemIdentifier: record.messageId });
          continue;
        }
      } catch (error) {
        failedItems.push({ itemIdentifier: record.messageId });
      }
    }
    if (failedItems.length > 0) {
      request.response = {
        batchItemFailures: failedItems,
      };
    }
  };

  return {
    before,
  };
};

The above code block shows the payload processor middleware file, where the middleware is configured within the payloadProcessorMiddleware function. For each SQS record, the processRequest function is called, which determines the source of the message/event (SQS, EventBridge or SNS) and whether the payload contains the s3Url property signifying that the Store and Notify pattern has been used. If it has, a GET request is made to the S3 pre-signed URL to retrieve the event payload. The returned payload is then set as the record's body value, overwriting the original payload with the payload retrieved from S3.
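
One point worth noting: the batchItemFailures returned by the middleware are only honoured if partial batch responses are enabled on the Lambda's SQS event source mapping. The example project's infrastructure code is not shown in this post, but with the AWS CDK the wiring would look roughly like the following sketch (the construct names and paths are assumptions):

import { Stack } from 'aws-cdk-lib';
import { Queue } from 'aws-cdk-lib/aws-sqs';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

// Hypothetical wiring: reportBatchItemFailures enables partial batch responses so
// only the failed records identified by the middleware are retried.
const stack = new Stack();
const queue = new Queue(stack, 'ConsumerQueue');
const consumer = new NodejsFunction(stack, 'ConsumerFunction', {
  entry: 'src/handlers/consumer/index.ts', // hypothetical path
  handler: 'main',
  runtime: Runtime.NODEJS_18_X,
});
consumer.addEventSource(
  new SqsEventSource(queue, {
    batchSize: 10,
    reportBatchItemFailures: true,
  })
);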

Internal SDK

Custom AWS clients, Middy middleware and utility code have been used within the event producer and consumer Lambda handlers to provide the common functionality needed to implement the hybrid event payload pattern. The way in which this functionality has been packaged would allow it to be made available to developers and teams as common code through an internal SDK. This would allow developers and teams not only to implement the pattern easily, but also to ensure the pattern is implemented correctly in a repeatable fashion, reducing cognitive load.

Out of Scope Considerations

This blog post covered a number of EDA topics at a high level and mainly focussed on implementing an automated mechanism for dealing with large payload sizes when utilising a Full State Carried Transfer pattern. There are a number of subjects not covered in this post that should be considered when implementing an EDA strategy.

  • Event Storage: The example shown in this blog post utilises decentralised S3 storage, where each producer account contains an S3 bucket to store its event payloads. An alternative approach would be to use a centralised S3 bucket to contain all event payloads, usable by all organisation accounts. I prefer the decentralised approach, which ensures that an account's event payloads remain in its account and are only retrieved by consumers of its events.
  • Event Payload Expiration: The example shown in this blog post sets an S3 pre-signed URL expiration value of 3600 seconds. This value may need to be configurable based on the category of event, for example whether the event payload contains critical or sensitive data.
  • Event Visibility: The EDA strategy implemented in the example project leverages a central Simple Notification Service (SNS) Topic and Event Bus for efficiently routing events across the organisation. The central resources are configured to facilitate bi-directional messaging, enabling all accounts within the organisation to both send and receive messages through these central resources. While this approach suffices for events/messages containing non-sensitive data, it becomes impractical when handling sensitive information. In situations involving sensitive data, where access to events/messages must be restricted, particularly due to compliance constraints, this strategy falls short.
    In such scenarios, organisations may consider adopting an event traffic management system, categorising events based on the sensitivity or value of the data. This entails classifying events/messages into three categories: green events/messages, visible to anyone; amber events/messages, visible to everyone pending approval; and red events/messages, published to a separate event bus or SNS Topic for restricted access and consumption. This nuanced approach addresses the complexities associated with managing sensitive data within the organisational event flow.
