You place a basket order, 4 legs of an iron condor, and your phone buzzes four times with individual alerts before a summary arrives. You close a position in the Analyzer and get no confirmation at all. You check the order log and find 21 entries for what should have been a single basket.
These are not random bugs. They are symptoms of an architectural pattern that every trading platform eventually outgrows.

This is the story of how OpenAlgo adopted event driven architecture, what it is, why algorithmic trading platforms need it, and what it unlocks for strategy level position tracking, risk management, and beyond.
What Is Event Driven Architecture?
Let us start with an analogy every trader understands.
The Trading Floor Analogy
Imagine a trading floor in the 1980s. A floor trader executes a buy order, then personally walks to the risk desk to report it, then walks to the settlement desk, then calls the back office, then updates the position board.
If the risk desk is on a coffee break, the trader stands there waiting. The settlement desk does not get the information until the risk desk conversation is done. If the trader forgets to update the position board, which happens when things get hectic, nobody notices until reconciliation.
Now imagine a modern electronic exchange. The trader submits the order. The exchange publishes a fill message to the wire. The risk system reads it. The settlement system reads it. The position board reads it. The compliance system reads it. Each one independently, simultaneously, without knowing about the others.
The trader does not walk to five desks. The trader announces “this happened” once. Everyone who cares is listening.
That is event driven architecture.
The Two Models
Direct calls, request response:
Order Service -> calls Logger
              -> calls Dashboard
              -> calls Telegram
              -> calls Risk Monitor
The order service knows about every consumer. Adding a new one means editing the order service. If one consumer is slow, it slows down everyone after it. If one crashes, the chain breaks.
Event driven, publish subscribe:
Order Service -> publishes "order.placed" event
                      |
      |----------|----------|----------|
      v          v          v          v
   Logger   Dashboard   Telegram   Risk Monitor
The order service does not know who is listening. Each consumer subscribes independently. Adding a new one requires zero changes to the order service. If Telegram is down, the logger and dashboard still work. If you add a risk monitor next month, you do not touch a single line of order code.
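The pattern itself fits in a few lines. Here is a minimal, self-contained sketch, with hypothetical subscribe and publish helpers rather than OpenAlgo's actual implementation, showing that the publisher never learns who is listening:

```python
# Minimal publish/subscribe sketch (illustrative names, not OpenAlgo's code).
subscribers = {}  # topic -> list of callbacks

def subscribe(topic, callback):
    subscribers.setdefault(topic, []).append(callback)

def publish(topic, event):
    # The publisher only knows the topic, never the callbacks behind it.
    for callback in subscribers.get(topic, []):
        callback(event)

# Each consumer registers independently; the publisher is never edited.
received = []
subscribe("order.placed", lambda e: received.append(("logger", e["symbol"])))
subscribe("order.placed", lambda e: received.append(("dashboard", e["symbol"])))

publish("order.placed", {"symbol": "SBIN"})
# received now holds one entry per subscriber
```

Adding a third consumer is one more subscribe call; the publish line never changes.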
The technical term for this is decoupling, and for trading systems it is not a nice-to-have. It is essential.
Why Algo Trading Platforms Need Event Driven Architecture
Trading platforms are not typical web apps. The order pipeline has unique properties that make direct function calls progressively more painful as the system grows.
1. The Order Pipeline Touches Everything
When an order is placed, the entire system needs to know:
- The database needs to log it, for audit trails and compliance
- The dashboard needs to refresh, so you see the result instantly
- Your phone needs to buzz, Telegram or WhatsApp alert
- The position tracker needs to update, so you know what you hold
- The risk engine needs to check, are you exceeding exposure limits
- The P&L calculator needs to recalculate, real time profit and loss
- The analytics engine needs to record, win rate, average trade, drawdown
That is seven consumers for a single order event. Wire them directly, and your order function becomes a 200 line monster that imports half the codebase. Event driven architecture lets the order function stay clean: place the order, announce what happened, return the result. Seven consumers handle the rest.
2. Your Broker Drops Your Strategy Identity
This is a problem unique to Indian markets and algo platforms like OpenAlgo.
When you send an order to Zerodha, Angel One, Fyers, Dhan, or any Indian broker, you include a strategy field, "Iron Condor", "Momentum Scanner", "Mean Revert". The broker ignores it. It does not store it. It does not return it.
When you later ask the broker for your positions:
NIFTY: +65 lots
SBIN: +100 shares
Which strategy holds those? If you are running “Momentum” with +100 SBIN and “Mean Revert” with 50 SBIN short, the broker shows you +50 SBIN. The per strategy breakdown is gone.
The only moment you can capture which strategy owns which order is at placement time. After that, the strategy tag disappears at the broker boundary forever.
Event driven architecture captures that moment. Every order event carries the strategy name. Any system that subscribes, position tracker, risk manager, analytics, gets the strategy identity preserved.
3. Live Trading and Sandbox Trading Must Behave Identically
Every serious algo trader tests strategies in a sandbox before going live. OpenAlgo’s Analyzer mode provides sandbox trading with sandbox capital.
But here is the challenge: when the same order function handles both live and sandbox mode, the side effects need to differ:
Live mode
- Log destination: order_logs table
- Dashboard event: order_event
- Telegram prefix: LIVE MODE, Real Order
Sandbox mode
- Log destination: analyzer_logs table
- Dashboard event: analyzer_update
- Telegram prefix: ANALYZE MODE, No Real Order
With direct calls, you need if branches for every side effect in every order service. With event driven architecture, the event carries a mode field, and each subscriber knows what to do:
if event.mode == "analyze":
    ...  # sandbox path
else:
    ...  # live path
One check per subscriber, not per service. The order service simply sets the mode and publishes.
4. Batch Orders Need Different Notification Semantics
A single order needs one notification. A basket of 20 orders needs one summary notification, not 20 individual ones. A split order breaking 1000 shares into 50 share chunks needs a summary, not 20 alerts.
With direct calls, every batch service has to manually suppress per order notifications and emit its own summary, a pattern that is easy to get wrong. With events, the batch service publishes one BasketCompletedEvent at the end. Individual sub orders publish nothing. The notification logic lives in one place.
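The summary-event idea can be sketched as follows. The event fields and helper names here are illustrative assumptions, not OpenAlgo's actual BasketCompletedEvent schema:

```python
from dataclasses import dataclass, field

# Hypothetical summary event; the real BasketCompletedEvent may differ.
@dataclass
class BasketCompletedEvent:
    topic: str = "basket.completed"
    total_legs: int = 0
    succeeded: int = 0
    failed: int = 0
    results: list = field(default_factory=list)

def place_basket(legs, place_one, publish):
    """Place each leg quietly, then publish a single summary event."""
    results = [place_one(leg) for leg in legs]  # sub-orders publish nothing
    ok = sum(1 for r in results if r.get("status") == "success")
    publish(BasketCompletedEvent(
        total_legs=len(legs),
        succeeded=ok,
        failed=len(legs) - ok,
        results=results,
    ))
    return results

# Usage: one summary event for the whole basket, not one alert per leg.
published = []
place_basket(
    legs=[{"symbol": "NIFTY", "action": "BUY"}] * 4,
    place_one=lambda leg: {"status": "success", **leg},
    publish=published.append,
)
```

The notification subscriber only ever sees the one summary event, so the "suppress per-order alerts" logic cannot be forgotten anywhere.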
5. Failure Isolation Is Critical When Money Is Involved
If Telegram’s API is slow and your notification call is inline with order placement, one of two things happens:
- The order response is delayed while waiting for Telegram, bad for latency
- The Telegram call fails, and depending on your error handling, the error might propagate up and make it look like the order failed, catastrophic for trust
Event driven architecture isolates failures by design. Each subscriber runs independently. Telegram is down? The Telegram subscriber logs an error. Your order still gets logged. Your dashboard still updates. Your position tracker still records the trade. No single subscriber failure can affect the order pipeline or any other subscriber.
How OpenAlgo Adopted Event Driven Architecture
What We Started With
OpenAlgo supports ten order types: place order, smart order, basket order, split order, options order, multi leg options, modify, cancel, cancel all, and close position.

Each service had three hardcoded side effects after every broker call:
# After the broker confirmed the order:
executor.submit(async_log_order, "placeorder", request_data, response) # Log it
socketio.start_background_task(socketio.emit, "order_event", {...}) # Update dashboard
socketio.start_background_task(telegram_alert_service.send_order_alert, ...) # Alert phone
These three lines, with variations, appeared in every order service, in every success path, every failure path, and every sandbox path. That is 50 plus dispatch points across 18 files.
The Bugs We Found
- Close Position in Analyzer mode: the side effect code sat inside a dead if False: block. Positions closed silently, with no log, no Telegram alert, and no dashboard update
- Modify and Cancel in Analyzer mode: Telegram alerts were skipped entirely, while live mode sent them
- Basket order with 20 legs: 21 Telegram alerts and 21 log entries instead of 1
- Options multi order in Analyzer: Emitted order_event, live mode event, instead of analyzer_update, confusing the Analyzer UI
- UI Close Position button: Called the broker directly, bypassing the service layer. No order was ever logged
- API key in error logs: On validation failures, the raw API key was written to the log database
Every one of these bugs existed because side effects were scattered across files instead of centralized. No single file was “wrong”; the bugs emerged from inconsistencies between files.
The Event Bus We Built
We built a lightweight, in process event bus in about 60 lines of Python. No Redis. No Kafka. No external infrastructure.
How it works:
- Order services publish typed events: OrderPlacedEvent, BasketCompletedEvent, OrderCancelledEvent, and others. Each event carries the mode, live or analyze, the strategy name, the request and response data, and the API key for notifications.
- The event bus routes by topic: Topics like "order.placed", "basket.completed", "order.cancelled" determine which subscribers receive the event.
- Subscribers handle one concern each:
  - Log subscriber: writes to order_logs in live mode or analyzer_logs in analyze mode
  - SocketIO subscriber: emits the correct dashboard event, one of 8 event names depending on operation and mode
  - Telegram subscriber: sends formatted alerts with detailed order information
- Everything is async and isolated: A shared thread pool dispatches callbacks. One subscriber crashing does not affect the others.
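Put together, a bus with those properties might look something like this sketch. The names are simplified assumptions; OpenAlgo's actual implementation differs in detail:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
import logging

class EventBus:
    """Sketch: topic routing, thread-pool dispatch, error isolation."""

    def __init__(self, max_workers=10):
        self._subscribers = defaultdict(list)
        self._pool = ThreadPoolExecutor(max_workers=max_workers)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, event):
        # Route by the event's topic; each subscriber runs on the pool,
        # so publishing never blocks the order response.
        for callback in self._subscribers[event.topic]:
            self._pool.submit(self._dispatch, callback, event)

    @staticmethod
    def _dispatch(callback, event):
        try:
            callback(event)
        except Exception:
            # One subscriber crashing never affects the others.
            logging.exception("subscriber failed for topic %s", event.topic)

    def shutdown(self):
        self._pool.shutdown(wait=True)

# Usage: the failing subscriber is logged and ignored; the other still runs.
@dataclass
class DemoEvent:
    topic: str = "order.placed"
    symbol: str = ""

bus = EventBus()
seen = []
bus.subscribe("order.placed", lambda e: 1 / 0)                 # always crashes
bus.subscribe("order.placed", lambda e: seen.append(e.symbol))
bus.publish(DemoEvent(symbol="SBIN"))
bus.shutdown()  # wait for both dispatches to finish
```

The try/except around each callback is what makes the Telegram-is-down scenario a non-event for every other subscriber.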
Before and After
Before, the order service imported and called three systems directly:
# place_order_service.py (old)
from database.apilog_db import async_log_order, executor
from extensions import socketio
from services.telegram_alert_service import telegram_alert_service

def place_order_with_auth(order_data, auth_token, broker, original_data):
    # ... broker call ...
    if res.status == 200:
        executor.submit(async_log_order, "placeorder", request_data, response)
        socketio.start_background_task(socketio.emit, "order_event", {...})
        socketio.start_background_task(telegram_alert_service.send_order_alert, ...)
After, one line, one event:
# place_order_service.py (new)
from events import OrderPlacedEvent
from utils.event_bus import bus

def place_order_with_auth(order_data, auth_token, broker, original_data):
    # ... broker call ...
    if res.status == 200:
        bus.publish(OrderPlacedEvent(
            mode="live",
            api_type="placeorder",
            strategy=order_data.get("strategy", ""),
            symbol=order_data["symbol"],
            orderid=str(order_id),
            request_data=cleaned_request,
            response_data=response,
            api_key=api_key,
        ))
The service does not know who listens. It does not import logging, SocketIO, or Telegram. It announces what happened and moves on.
The Impact
- Side effect dispatch points: Before, 50 plus across 18 files. After, 15 bus.publish() calls across 10 services
- Files that know about logging: Before, 18. After, 1, log_subscriber.py
- Files that know about Telegram: Before, 12. After, 1, telegram_subscriber.py
- Files that know about SocketIO events: Before, 14. After, 1, socketio_subscriber.py
- Thread pools for side effects: Before, 3 pools, 25 threads. After, 1 pool, 10 threads
- Bugs from inconsistent side effects: Before, 6 known. After, 0
How Modularity Is Maintained
The event bus enforces a clean separation of concerns through a simple rule: publishers do not know about subscribers, and subscribers do not know about each other.
The File Structure
Order Services (publishers)             Event Bus              Subscribers (consumers)
──────────────────────────────     ──────────────────     ─────────────────────────────
services/place_order_service.py    |                 |    subscribers/log_subscriber.py
services/basket_order_service.py   |                 |    subscribers/socketio_subscriber.py
services/split_order_service.py    | -> bus.publish() -> | subscribers/telegram_subscriber.py
services/options_multiorder_...    |                 |    subscribers/strategy_store.py (future)
services/cancel_order_service.py   |                 |    subscribers/risk_manager.py (future)
services/close_position_service.py |                 |
Each column is independent. You can modify a subscriber without touching any service. You can add a service without touching any subscriber. The event bus in the middle is the only shared contract, and it is a 60 line class that never needs to change.
The Contract: Typed Events
Publishers and subscribers agree on the event schema, nothing else:
from dataclasses import dataclass, field

@dataclass
class OrderPlacedEvent(OrderEvent):
    topic: str = "order.placed"
    mode: str = "live"       # "live" or "analyze"
    api_type: str = ""       # "placeorder", "basketorder", etc.
    strategy: str = ""
    symbol: str = ""
    exchange: str = ""
    action: str = ""         # "BUY" or "SELL"
    quantity: int = 0
    orderid: str = ""
    request_data: dict = field(default_factory=dict)   # for logging, apikey stripped
    response_data: dict = field(default_factory=dict)  # for logging
    api_key: str = ""        # for Telegram username resolution
This is the contract. The publisher fills it. The subscriber reads it. They never import each other.
Adding a New Consumer: One File, One Line
Want to add a Discord notification alongside Telegram? Write a file, register it:
# subscribers/discord_subscriber.py
def on_order_placed(event):
    send_discord_webhook(f"Order placed: {event.symbol} {event.action}")

# subscribers/__init__.py, add one line:
bus.subscribe("order.placed", discord_subscriber.on_order_placed)
No order service changes. No existing subscriber changes. No deployment coordination. This is the power of decoupling.
The Future: Strategy Level Intelligence
The event bus is not just a code cleanup. It is the foundation for features that were architecturally impossible with hardcoded side effects.
Strategy Level Positions
Today, OpenAlgo shows positions from the broker, account level, no strategy breakdown.
Tomorrow, a new subscriber will listen to order events and maintain per strategy positions:
Strategy: Iron Condor
  NIFTY30MAR2623650CE  BUY  +65 @ 120.50
  NIFTY30MAR2622650PE  BUY  +65 @  95.30
  NIFTY30MAR2623400CE  SELL -65 @ 155.20
  NIFTY30MAR2622900PE  SELL -65 @ 110.80
  Net P&L: +2,450

Strategy: Momentum Scanner
  SBIN      BUY +100 @  420.50   P&L: +1,250
  RELIANCE  BUY  +50 @ 1305.00   P&L: -430
This is just a new subscriber reading order.placed events. The order services do not change.
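As a sketch of what such a subscriber could look like (hypothetical class and method names, not a committed design):

```python
from collections import defaultdict
from types import SimpleNamespace as Event  # stand-in for a real order event

# Hypothetical position-tracking subscriber: folds order.placed events
# into per-strategy net positions.
class StrategyPositionStore:
    def __init__(self):
        # (strategy, symbol) -> net quantity
        self._positions = defaultdict(int)

    def on_order_placed(self, event):
        signed = event.quantity if event.action == "BUY" else -event.quantity
        self._positions[(event.strategy, event.symbol)] += signed

    def positions_for(self, strategy):
        return {sym: qty for (strat, sym), qty in self._positions.items()
                if strat == strategy and qty != 0}

# Usage: the broker would net these to +50 SBIN at the account level,
# but the per-strategy breakdown is preserved here.
store = StrategyPositionStore()
store.on_order_placed(Event(strategy="Momentum Scanner", symbol="SBIN",
                            action="BUY", quantity=100))
store.on_order_placed(Event(strategy="Mean Revert", symbol="SBIN",
                            action="SELL", quantity=50))
```

Registering it would be the usual one line: bus.subscribe("order.placed", store.on_order_placed).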
Strategy Level Orderbook and Tradebook
The broker’s orderbook has no strategy column. But every order event carries the strategy field. A subscriber can write these to an internal table, giving you:
- Filter orders by strategy
- See trade history per strategy
- Know which strategy generated which fills
Strategy Level Risk Management
With per strategy positions built from events, risk management becomes possible per strategy:
- Stoploss per strategy: “Exit all Iron Condor legs if net P&L drops below a 5,000 loss”
- Target per strategy: “Close Momentum positions at +2 percent return”
- Trailing stoploss per strategy: “Trail SBIN stop by 10 points as price rises”
- Max position size per strategy: “Momentum Scanner cannot hold more than 500 shares of any symbol”
- Daily loss limit per strategy: “If Mean Revert loses 10,000 in a day, stop trading”
The risk manager subscribes to order events, to know what each strategy holds, and market data ticks, to monitor prices. When a limit is breached, it publishes a RiskExitEvent, which triggers an exit order through the same event bus.
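A per-strategy stoploss check along those lines might be sketched like this. The names are hypothetical, and a real risk manager would also consume market data ticks to compute P&L:

```python
from dataclasses import dataclass

# Hypothetical exit event; an exit service would subscribe to "risk.exit".
@dataclass
class RiskExitEvent:
    topic: str = "risk.exit"
    strategy: str = ""
    reason: str = ""

class StrategyRiskManager:
    def __init__(self, publish, stoploss_by_strategy):
        self._publish = publish
        # strategy -> maximum tolerated loss, as a positive number
        self._limits = stoploss_by_strategy

    def on_pnl_update(self, strategy, pnl):
        limit = self._limits.get(strategy)
        if limit is not None and pnl <= -limit:
            # Breach: announce the exit; the exit order flows back
            # through the same event bus.
            self._publish(RiskExitEvent(
                strategy=strategy,
                reason=f"P&L {pnl} breached stoploss -{limit}",
            ))

# Usage: only the breaching update produces an exit event.
exits = []
rm = StrategyRiskManager(exits.append, {"Iron Condor": 5000})
rm.on_pnl_update("Iron Condor", -2400)   # within limits, no event
rm.on_pnl_update("Iron Condor", -5200)   # breach -> one RiskExitEvent
```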
The Extensibility Pattern
Every future feature follows the same three steps:
- Create a new file in subscribers/
- Write handler functions that receive events
- Register them at startup, one line per topic
No order service modified. No existing subscriber touched. No testing of unrelated code. This is modularity in practice, not as a principle on a whiteboard, but as a property of the running system.
Why 60 Lines and Not Redis?
OpenAlgo is designed for a single trader running on a laptop or a small VPS. It uses SQLite. It is a single process Python application.
Redis Streams gives you persistence, consumer groups, and multi instance coordination. Kafka gives you distributed processing across data centers. ZeroMQ, which OpenAlgo already uses for market data streaming, gives you cross process messaging.
None of these are needed for a single user platform with three subscribers.
The 60 line event bus does exactly three things:
- Topic based routing: events go to the right subscribers
- Async dispatch: subscribers run in a thread pool, never blocking the order response
- Error isolation: one subscriber crashing does not affect others
When OpenAlgo’s requirements grow (multi user support, multi process deployment, event replay for debugging), the EventBus internals can swap to ZeroMQ or Redis. The event types, the subscribers, and the order services remain unchanged. The right amount of infrastructure today. The right interface for tomorrow.
What This Means for You
If you are a trader using OpenAlgo: your notifications are now reliable. Basket orders send one clean summary. Sandbox mode behaves identically to live mode. Close position always logs. Every order type, every mode, every path, consistent behavior.
If you are building strategies on OpenAlgo’s API: the groundwork is laid for strategy level positions, per strategy risk management, and analytics. These features become possible because the event bus captures what the broker throws away: which strategy owns which order.
If you are a developer contributing to OpenAlgo: the order pipeline is now decoupled. Adding a new consumer is one file and one registration line. The ten order services never need to change for new side effects. And the code is simpler, 15 publish calls replaced 50 plus scattered dispatch points.
If you are building your own trading platform: consider this lesson. The order execution path is the one place where everything connects. Log it, display it, alert it, track it, analyze it, risk manage it. Wire these directly, and you build a system where every new feature requires editing every existing service. Put an event bus in the middle, and you build a system where new features are new files, clean, isolated, and independently testable.
Event driven architecture is not about fancy technology or enterprise patterns. It is about a simple idea: announce what happened, and let everyone who cares decide what to do about it. For trading platforms, where reliability, consistency, and extensibility are not optional, it is the right foundation.