[System Design Tech Case Study Pulse #13] 20 Billion Messages Daily: How Facebook Messenger Actually Works
With a detailed explanation and flow chart.
Hi All,
Facebook Messenger's choice of Apache HBase as its backend storage system enables it to handle an astounding 20 billion messages daily, letting Facebook deliver real-time messaging to billions of users worldwide with high reliability and low latency.
In this post, let's dive deep into how Facebook engineered this system, exploring the key architectural decisions, scaling strategies, and optimizations that enable HBase to manage this massive volume of messages for Messenger.
System Overview
Before delving into the HBase architecture behind Facebook Messenger, let's look at some key metrics that highlight the scale of its operations:
- Messages processed daily: 20 billion+
- Active users: 1.3 billion+
- Peak messages per second: Millions
- Data stored: Petabytes
- Latency: Low milliseconds for message delivery
- Availability: 99.99%+
- Global deployment: Multiple data centers worldwide
- Supported message types: Text, images, videos, voice messages, etc.
- Devices supported: Mobile apps, web browsers, IoT devices
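A big part of serving messages at this scale with low latency comes down to HBase's row-key design, since HBase stores rows in lexicographic key order and scans are fastest over contiguous keys. The sketch below illustrates one common pattern for chat workloads: a key of user ID followed by a *reversed* timestamp, so a short forward scan returns a user's newest messages first. This is a minimal illustrative sketch of the general technique, not Messenger's actual (non-public) schema; the function name and key layout are assumptions.

```python
import struct

# Mirrors Java's Long.MAX_VALUE; subtracting the timestamp from it
# "reverses" time so newer messages get lexicographically smaller keys.
MAX_TS = 2**63 - 1

def make_row_key(user_id: int, timestamp_ms: int) -> bytes:
    """Hypothetical row key: 8-byte big-endian user ID, then an
    8-byte reversed timestamp. All of one user's messages are
    contiguous, and a scan from the user-ID prefix yields the
    newest message first."""
    return struct.pack(">QQ", user_id, MAX_TS - timestamp_ms)

# Newest message (largest timestamp) sorts first:
keys = sorted(make_row_key(42, ts) for ts in (1_000, 3_000, 2_000))
```

Because HBase has no secondary indexes, packing the dominant query ("latest N messages for this user") directly into the key order like this avoids expensive full-table filtering.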
Master Data Structures and Algorithms with a curated questions list (company-wise)
Build Projects and master the most important topics
Ignito System Design Youtube Channel
System Design Github - Link
Learn from earlier System Design Pulses:
[System Design Pulse #3] THE theorem of System Design and why you MUST know it - Brewer theorem
[System Design Pulse #4] How Distributed Message Queues Work?
[System Design Pulse #5] Breaking It Down: The Magic Behind Microservices Architecture
[System Design Pulse #6] Why Availability Patterns Are So Crucial in System Design?
[System Design Pulse #7] How Consistency Patterns Help Design Robust and Efficient Systems?
[System Design Pulse #9] Why these Key Components are Crucial for System Design.