temporary repository for messages that are waiting for processing
one component in the application queues messages to be consumed by another component in the application
can be used to decouple the components of an application
messages can contain up to 256 KB of text
you can store up to 2 GB per message; in this case, however, the payload is stored in and retrieved from S3
messages can be retrieved using the SQS API
auto-scaling events can be configured based on queue sizes as well
pull-based, not pushed based
messages should be consumed from the queues, SQS will not push the messages
messages can be kept in the queue from 1 minute to 14 days
the default retention period is 4 days
visibility timeout - the amount of time that a message is invisible in the SQS queue after a reader picks it up. If the job is not processed within the visibility timeout, the message becomes visible again and can be processed by another reader.
maximum visibility timeout is 12 hours
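The visibility-timeout behavior above can be sketched with a toy in-memory queue (this is a simulation of the semantics, not the real SQS API; class and variable names are made up for illustration):

```python
import time

class VisibilityQueue:
    """Toy in-memory queue illustrating SQS visibility-timeout semantics."""
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = []            # each entry: [body, invisible_until]

    def send(self, body):
        self.messages.append([body, 0.0])

    def receive(self):
        now = time.time()
        for msg in self.messages:
            if msg[1] <= now:                      # message is visible
                msg[1] = now + self.visibility_timeout
                return msg[0]
        return None                                # queue looks empty

    def delete(self, body):
        self.messages = [m for m in self.messages if m[0] != body]

q = VisibilityQueue(visibility_timeout=0.2)
q.send("job-1")
first = q.receive()        # reader A picks it up -> message hidden
hidden = q.receive()       # reader B sees nothing while A works
time.sleep(0.25)           # A never deleted it; the timeout expires
second = q.receive()       # the message is visible again for reprocessing
```

Note that because reader A never called `delete`, the same message came back after the timeout — which is why consumers must delete messages once processing succeeds.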
short polling - returns immediately even if the message queue being polled is empty
long polling - doesn't return a response until a message arrives in the message queue or the long poll times out
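The two polling modes can be mimicked with Python's standard `queue.Queue` (a local stand-in for SQS; the function names are invented for this sketch):

```python
import queue
import threading

q = queue.Queue()

def short_poll(q):
    # short polling: returns immediately, even if the queue is empty
    try:
        return q.get_nowait()
    except queue.Empty:
        return None

def long_poll(q, wait_seconds):
    # long polling: blocks until a message arrives or the wait times out
    try:
        return q.get(timeout=wait_seconds)
    except queue.Empty:
        return None

empty_result = short_poll(q)                          # immediate None
threading.Timer(0.1, q.put, args=("hello",)).start()  # producer arrives later
late_result = long_poll(q, wait_seconds=1.0)          # waits, then receives
```

In real SQS the same trade-off is controlled by the `WaitTimeSeconds` parameter on `ReceiveMessage`; long polling reduces empty responses and cost.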
Standard queues:
nearly unlimited number of transactions per second
a message is delivered at least once
occasionally more than one copy of a message might be delivered out of order
allows high throughput
provides best-effort ordering
ensures that messages are generally delivered in the same order as they are sent
FIFO queues:
the order is strictly preserved and guaranteed
the message remains available until a consumer processes and deletes it; duplicates are not introduced into the queue
support message groups that allow multiple ordered message groups within a single queue
limited to 300 transactions per second (TPS)
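Per-group ordering in FIFO queues can be sketched with a toy class (a simulation of the semantics, not the SQS API; names are illustrative):

```python
from collections import defaultdict, deque

class FifoQueue:
    """Toy FIFO queue: strict ordering within each message group ID."""
    def __init__(self):
        self.groups = defaultdict(deque)

    def send(self, group_id, body):
        self.groups[group_id].append(body)     # group ID is required

    def receive(self, group_id):
        group = self.groups[group_id]
        return group.popleft() if group else None

q = FifoQueue()
for i in range(3):
    q.send("orders", f"order-{i}")
q.send("payments", "pay-0")

orders = [q.receive("orders") for _ in range(3)]   # strict order per group
payment = q.receive("payments")                     # independent group
```

Messages in different groups can be processed in parallel; ordering is only guaranteed within a group.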
SWF (Simple Workflow Service)
makes it easy to coordinate work across distributed application components
workflow executions can last up to 1 year
presents a task-oriented API
ensures that the task is assigned only once
keeps track of all the tasks and events in an application
Workflow Starters - an application initiating a workflow
(e.g. an e-commerce website following the placement of an order)
Deciders - control the flow of activity tasks in a workflow execution. If something has finished (or failed) in a workflow, a Decider decides what to do next.
Activity Workers - carry out the activity tasks
SNS (Simple Notification Service)
Service for sending notifications from the cloud
Can deliver notifications to devices (via push notifications), SMS, email, SQS queues, or any HTTP endpoint
Push notifications to Apple, Google, FireOS, Windows devices and Android devices in China with Baidu Cloud Push
Recipients are grouped using Topics
One topic can support deliveries to multiple endpoint types
The message published to a topic will be delivered to each subscriber
Messages published to SNS are stored across different AZs
Push-based delivery (no polling)
Simple APIs and easy integration with applications
Inexpensive, pay-as-you-go model with no up-front costs
SNS vs SQS
Both Messaging Services in AWS
SNS - Push
SQS - Polls (Pulls)
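The push-based fanout model of SNS can be sketched as a toy topic that delivers to every subscriber (a local simulation, not the SNS API; the class and endpoints are invented for illustration):

```python
class Topic:
    """Toy SNS-style topic: push delivery to every subscriber."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, endpoint):
        self.subscribers.append(endpoint)

    def publish(self, message):
        for endpoint in self.subscribers:      # pushed to all, not polled
            endpoint(message)

email_inbox, sqs_queue = [], []
topic = Topic()
topic.subscribe(email_inbox.append)   # e.g. an email endpoint
topic.subscribe(sqs_queue.append)     # e.g. a subscribed SQS queue
topic.publish("order placed")
```

Subscribing an SQS queue to an SNS topic is a common fanout pattern: SNS pushes one copy to each queue, and each consumer then pulls from its own queue at its own pace.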
Elastic Transcoder - media transcoder in the cloud
Convert media files from their original source format into different formats that will play on smartphones, tablets, PCs, etc.
Provides transcoding presets for popular output formats
Pay based on the minutes that you transcode and the resolution at which you transcode
API Gateway - fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale
Expose HTTPS endpoints to define a RESTful API
Connect serverlessly to services like Lambda & DynamoDB
Send each API endpoint to a different target
Run efficiently with low cost
Track and control usage by API key
Throttle requests to prevent attacks
Connect to CloudWatch to log all requests for monitoring
Maintain multiple versions of your API
Define an API
Define Resources and nested Resources (URL Paths)
For each resource:
Select supported HTTP methods (verbs)
Choose target (such as EC2, Lambda, DynamoDB, etc.)
Set request and response transformations
Uses API Gateway domain, by default
Can use a custom domain
Now supports AWS Certificate Manager: free SSL/TLS certs
API Caching (TTL specified)
Will help you reduce the number of requests made to your endpoint, improving the latency of the requests to your API.
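The effect of a TTL-based response cache can be sketched with a small local simulation (this models the idea, not API Gateway's actual implementation; all names are illustrative):

```python
import time

backend_calls = 0

def backend(path):
    """Stand-in for the API's backend integration (e.g. a Lambda)."""
    global backend_calls
    backend_calls += 1
    return f"response for {path}"

class TtlCache:
    """Toy response cache with a TTL, like API Gateway caching."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.entries = {}          # path -> (response, expires_at)

    def get(self, path):
        hit = self.entries.get(path)
        if hit and hit[1] > time.time():
            return hit[0]                      # served from cache
        response = backend(path)               # cache miss -> hit endpoint
        self.entries[path] = (response, time.time() + self.ttl)
        return response

cache = TtlCache(ttl=60)
first = cache.get("/orders")
second = cache.get("/orders")     # within TTL: backend not called again
```

The second request is answered from the cache, so the backend sees only one call — fewer endpoint requests and lower latency, at the cost of potentially stale responses until the TTL expires.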
Same Origin Policy
a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin. This helps prevent Cross-Site Scripting (XSS) attacks.
CORS can be enabled on the API Gateway
CORS (Cross-Origin Resource Sharing)
allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served.
The browser makes an HTTP OPTIONS call (a preflight request)
The server returns a response listing the domains that are approved to GET this URL
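The preflight exchange can be sketched as a function that answers an OPTIONS request (a simplified model of CORS; the header set and function name are illustrative, not a complete implementation):

```python
def preflight_response(request_origin, allowed_origins):
    """Answer an HTTP OPTIONS preflight with CORS-style headers."""
    if request_origin in allowed_origins:
        return {
            "status": 200,
            "Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Methods": "GET, OPTIONS",
        }
    return {"status": 403}   # browser will block the cross-origin call

allowed = {"https://app.example.com"}
ok = preflight_response("https://app.example.com", allowed)
blocked = preflight_response("https://evil.example.net", allowed)
```

The browser inspects the `Access-Control-Allow-*` headers in the response and only then issues the real request; this is what enabling CORS on API Gateway configures for you.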
Kinesis - service for working with streaming data
Makes it easy to load and analyze streaming data
Streaming Data is data that is generated continuously by thousands of data sources, which typically send in the data records simultaneously, and in small sizes (order of Kilobytes).
Kinesis Streams - receive data from producers and retain it until consumed
Producers - Produce streaming data and stream it to Kinesis streams
Consumers - receive data from Kinesis Streams and act on it
Retention: 24 hours - 7 days
Consists of Shards; each shard supports:
5 transactions per second for reads
a maximum total data read rate of 2 MB per second
up to 1,000 records per second for writes
a maximum total data write rate of 1 MB per second
Data capacity of the stream is determined by the number of shards and their capacities
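Given the per-shard limits above (1 MB/s and 1,000 records/s in, 2 MB/s out), a stream's shard count can be estimated by taking the largest of the three requirements (a back-of-the-envelope sketch; the function name and sample numbers are made up):

```python
import math

def shards_needed(write_mb_per_sec, records_per_sec, read_mb_per_sec):
    """Estimate shard count from per-shard limits:
    1 MB/s write, 1,000 records/s write, 2 MB/s read."""
    return max(
        math.ceil(write_mb_per_sec / 1.0),    # write bandwidth
        math.ceil(records_per_sec / 1000.0),  # write record rate
        math.ceil(read_mb_per_sec / 2.0),     # read bandwidth
    )

# e.g. 3 MB/s in, 2,500 records/s, 7 MB/s out -> read bandwidth dominates
n = shards_needed(write_mb_per_sec=3, records_per_sec=2500, read_mb_per_sec=7)
```

Whichever dimension is the bottleneck determines the shard count, which in turn sets the stream's total capacity.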
Kinesis Firehose - data received can optionally be run through a Lambda function and is then output to its destination, which could be S3 or Elasticsearch
There’s no data persistence
Data will be forwarded immediately to the target destination
Kinesis Analytics - analyzes the data inside both Kinesis Firehose and Kinesis Streams, with on-the-fly analysis capability
SQS Billing is calculated per request, plus data transfer charges for data transferred out of Amazon SQS
1 million requests per month - fall under the free tier
Batch operations cost the same as other SQS requests. By grouping messages into batches, you can reduce your SQS costs.
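The savings from batching can be shown with simple arithmetic (the per-million-request price below is a placeholder, not a quoted AWS rate; batch size 10 is the SQS maximum per SendMessageBatch call):

```python
def monthly_request_cost(messages, batch_size, price_per_million=0.40):
    """Requests are billed per API call, so batching up to 10 messages
    per call divides the request count. Price is a placeholder figure."""
    requests = -(-messages // batch_size)      # ceiling division
    return requests / 1_000_000 * price_per_million

unbatched = monthly_request_cost(50_000_000, batch_size=1)   # 50M requests
batched = monthly_request_cost(50_000_000, batch_size=10)    # 5M requests
```

Full batches of 10 cut the request count, and therefore the request cost, by a factor of 10.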
All the messages in SQS have a globally unique ID that SQS returns when the message is delivered to the message queue. This ID is useful for tracking the receipt of a particular message in the message queue.
Dead-letter queues receive messages from source queues after the maximum number of processing attempts has been exceeded.
SQS message can contain up to 10 metadata attributes - applications can determine how to process the message based on the metadata instead of inspecting the entire message. (attributes: name-type-value triples)
SentTimestamp attribute - contains the time when the message was sent by a producer and queued by SQS.
SenderId attribute - contains either the AWS account ID or the IP address for the sender.
Amazon SQS APIs provide deduplication functionality for FIFO queues that prevents your message producer from sending duplicates. Any duplicates introduced by the message producer are removed within a 5-minute deduplication interval.
If using standard queues you may experience duplicates - the application must be designed to be idempotent - not affected when processing the same message more than once).
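Idempotency is usually achieved by remembering which message IDs have already been handled (a minimal sketch; the handler, IDs, and inventory data are invented for illustration):

```python
processed_ids = set()
inventory = {"widget": 10}

def handle(message_id, item):
    """Idempotent consumer: a redelivered duplicate has no extra effect."""
    if message_id in processed_ids:
        return                       # already handled this message, skip
    processed_ids.add(message_id)
    inventory[item] -= 1             # the actual side effect, applied once

handle("msg-42", "widget")
handle("msg-42", "widget")           # duplicate delivery from a standard queue
```

In production the seen-ID set would live in durable shared storage (e.g. a DynamoDB conditional write) rather than in process memory, so duplicates are suppressed across consumers and restarts.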
Queue type can be chosen only on creation. If you want to convert from one type to another you will have to recreate the queue.
In FIFO queues, messages are ordered based on the message group ID. It’s a required field when sending a message to the queue.
After you’re done processing the message, you are responsible for deleting it.
Server-side encryption (SSE) - SSE encrypts messages as soon as Amazon SQS receives them, using keys in AWS KMS (AWS Key Management Service). The messages are stored in encrypted form and are decrypted only when sent to an authorized consumer.
SSE encrypts the body of the message. The queue metadata, message metadata, and per-queue metrics are not encrypted.
SSE uses AES-GCM 256 algorithm.
Amazon SQS is PCI DSS Level 1 certified and HIPAA Eligible Service.
SQS messages can contain text data, including XML, JSON and unformatted text.
The number of inflight messages is limited to 120,000 for standard queues and 20,000 for FIFO queues.
Inflight messages are those messages that have been received by a consuming component but have not yet been deleted.