Aggregates delegating to other Aggregates #169
-
hi there! From my reading, it seems to be a reasonably common pattern to have one aggregate (the 'root aggregate') delegate to multiple other aggregates. Or, to put it another way, a single aggregate is the public entrypoint, but is in practice a 'facade' over multiple other aggregates. I'm not sure how this pattern is implemented in the wild. It certainly doesn't seem very 'rusty' for each aggregate to manage its own access to the data layer. How would one implement this pattern using this crate?

One thing I thought of was using an enum to multiplex/coalesce multiple event streams, and then decomposing that again in the aggregate to delegate to other aggregates. Seems a little convoluted though. In pseudocode (don't try and compile this!):

```rust
let event_stream1: EventStream<Event1> = ...;
let event_stream2: EventStream<Event2> = ...;

enum Event {
    Event1(Event1),
    Event2(Event2),
}

let multiplex_stream: EventStream<Event> = stream::select(
    event_stream1.map(Into::into),
    event_stream2.map(Into::into),
);
```

Then the root aggregate would match on the event types and delegate each to 'child' aggregates (possibly recursively). Am I making any sense?
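To make the idea above concrete, here is a minimal, compilable sketch of the enum-multiplexing approach. All the types here (`Event1`, `Event2`, `multiplex`, `delegate`) are hypothetical placeholders, and plain iterators stand in for the async event streams; in a real async setup you would merge the streams with something like `futures::stream::select` instead of `chain`:

```rust
// Hypothetical child-aggregate event types.
#[derive(Debug, PartialEq)]
struct Event1(u32);
#[derive(Debug, PartialEq)]
struct Event2(String);

// The multiplexing wrapper enum from the sketch above.
#[derive(Debug, PartialEq)]
enum Event {
    Event1(Event1),
    Event2(Event2),
}

impl From<Event1> for Event {
    fn from(e: Event1) -> Self {
        Event::Event1(e)
    }
}

impl From<Event2> for Event {
    fn from(e: Event2) -> Self {
        Event::Event2(e)
    }
}

// Coalesce two typed streams into one stream of the wrapper enum.
// (Iterators stand in for async streams to keep the sketch std-only.)
fn multiplex(s1: Vec<Event1>, s2: Vec<Event2>) -> Vec<Event> {
    s1.into_iter()
        .map(Event::from)
        .chain(s2.into_iter().map(Event::from))
        .collect()
}

// The root aggregate matches on the variant and delegates to a child.
fn delegate(event: &Event) -> &'static str {
    match event {
        Event::Event1(_) => "child aggregate 1",
        Event::Event2(_) => "child aggregate 2",
    }
}

fn main() {
    let merged = multiplex(vec![Event1(1), Event1(2)], vec![Event2("paid".into())]);
    for e in &merged {
        println!("{:?} -> {}", e, delegate(e));
    }
}
```

The `From` impls are what make the `.map(Into::into)` / `.map(Event::from)` step work: each child stream's events convert into the shared wrapper type before the streams are merged.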
-
Hey! Can you clarify what you are trying to do specifically? 😄

In general, Aggregates are supposed to be used only to maintain business invariants and represent transactional boundaries. A great resource on Aggregate modeling is here: https://www.dddcommunity.org/library/vernon_2011/

One common thing is for Aggregates to depend on changes from other Aggregates. Example: an Order (aggregate 1) gets paid, so an Invoice (aggregate 2) needs to be created. In Event Sourcing (not just in this crate), one handles these cases using Process Managers. A Process Manager is a component that listens to Domain Events and emits Commands. From the case above:

Then the command is either handled synchronously inside the Process Manager (it might have the Command Handler injected, so it's accessible from the Process Manager) or asynchronously:

Process Managers are basically Stream Processors (a big thing in event-driven architectures), and they are a rather broad concept. Process Managers can have state (stateful stream processors) or be stateless (typically coordinators, like Sagas). Other interesting reading about Process Managers: https://event-driven.io/en/saga_process_manager_distributed_transactions/

Does this answer your question to some extent? (If so, I can explain how you could implement one with this library ✌🏻)
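As an illustration of the Order/Invoice case, here is a minimal sketch of a stateless Process Manager. All types and names here (`OrderEvent`, `InvoiceCommand`, `OrderInvoiceProcessManager`) are hypothetical, not this crate's API; the point is only the shape: observe a Domain Event, emit zero or more Commands:

```rust
// Domain Events published by the Order aggregate (hypothetical).
#[derive(Debug)]
enum OrderEvent {
    Placed { order_id: u64 },
    Paid { order_id: u64, amount_cents: u64 },
}

// Commands targeting the Invoice aggregate (hypothetical).
#[derive(Debug, PartialEq)]
enum InvoiceCommand {
    CreateInvoice { order_id: u64, amount_cents: u64 },
}

// A stateless Process Manager: maps observed events to commands.
struct OrderInvoiceProcessManager;

impl OrderInvoiceProcessManager {
    fn handle(&self, event: &OrderEvent) -> Vec<InvoiceCommand> {
        match event {
            // An Order gets paid, so an Invoice needs to be created.
            OrderEvent::Paid { order_id, amount_cents } => {
                vec![InvoiceCommand::CreateInvoice {
                    order_id: *order_id,
                    amount_cents: *amount_cents,
                }]
            }
            // Other events produce no commands.
            _ => Vec::new(),
        }
    }
}

fn main() {
    let pm = OrderInvoiceProcessManager;
    let commands = pm.handle(&OrderEvent::Paid { order_id: 42, amount_cents: 1999 });
    println!("{:?}", commands);
}
```

A stateful variant would additionally keep track of what it has seen (e.g. which orders already have invoices) before deciding which commands to emit, and the emitted commands would then be dispatched to the Invoice aggregate's Command Handler, either synchronously or over some messaging channel.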