What is RTMP and why shouldn’t I use it?
It’s no surprise that live online video streaming has become increasingly popular. Recent studies find that consumers spend an average of 83 minutes per day watching online video. By next year, Cisco estimates that over 82% of IP traffic will be video. Unfortunately, while demand for online video has grown, streaming techniques have largely been left in the ’90s.
We’ve all faced the consequences of poor video streaming. Buffering wheels, jitters, and quality loss are common, even on “fast” connections. The limitations of archaic video streaming techniques have held back progress toward reliable, secure, and flexible video transmission. Proprietary standards have tried to fill the gap, but they have not allowed for proper interoperability between servers and devices, creating fragmentation that further compounds the problem.
What is RTMP?
First and Last Mile Video Delivery
RTMP, or Real-time Messaging Protocol, was originally created by Macromedia (now Adobe) in 2002 as a method for streaming video and audio over the internet. RTMP uses a persistent (continuous) TCP connection to stream fragments of audio and video from a source to a single destination.
There are two main use cases for RTMP. First is using RTMP to transmit video between an encoder and server. This is known as first-mile delivery, or video contribution.
The second use case for RTMP is between a server and a viewer’s device utilizing Flash Player. This is known as last-mile delivery. This works best for streaming to a small audience from a dedicated media server. However, receiving and playing video with RTMP is no longer supported on many endpoints (e.g. iOS devices), and it will soon be deprecated as Adobe has announced the end of Flash Player.
Solving for the Last Mile
Instead, for last-mile delivery, the industry turned to modern streaming methods which offer HTTP delivery such as:
- Apple’s HTTP Live Streaming (HLS)
- Microsoft’s Smooth Streaming (MSS)
- Adobe’s HTTP Dynamic Streaming (HDS)
- MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH)
HTTP-based streaming allows audio/video to be delivered using regular web servers (as opposed to the dedicated streaming servers required for RTMP). Since HTTP carries most of the content on the internet, distribution and scaling are widely available, and no special firewall modifications are required in most cases.
HTTP streaming also paved the way for Adaptive Bitrate Streaming (ABS). ABS allows viewers to dynamically receive the highest quality video available based on their device’s capabilities and potentially fluctuating network connection. With ABS, video is made available in multiple bitrates (quality/file sizes) so that the content is viewable in high quality by users with large bandwidth and in lower quality by those with restricted or limited bandwidth. On the viewer (client) side, an Adaptive Bitrate (ABR) video player uses an algorithm to automatically select the highest quality version that can be downloaded without causing stalls or buffering.
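The core of that player-side decision can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical bitrate ladder and a 0.8 safety factor; it is not taken from any particular player’s implementation.

```python
# Sketch of the core ABR decision: pick the highest rendition whose
# bitrate fits the measured throughput, leaving some headroom so that
# short throughput dips don't cause a stall.

RENDITIONS_KBPS = [400, 800, 1600, 3200, 6400]  # hypothetical bitrate ladder

def select_bitrate(measured_throughput_kbps, safety_factor=0.8):
    """Return the highest bitrate expected to play without stalling."""
    budget = measured_throughput_kbps * safety_factor
    playable = [r for r in RENDITIONS_KBPS if r <= budget]
    # Fall back to the lowest rendition if even that exceeds the budget.
    return max(playable) if playable else min(RENDITIONS_KBPS)

print(select_bitrate(5000))  # ample bandwidth  -> a high rendition
print(select_bitrate(900))   # constrained link -> a low rendition
```

Real players refine this with buffer-occupancy signals and smoothed throughput estimates, but the trade-off is the same: quality versus stall risk.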
What is the difference between HTTP Streaming Protocols? (MPEG-DASH vs HLS, etc.)
Most HTTP streaming protocols were developed as proprietary transmission methods, with unique differences between them. Microsoft’s Smooth Streaming (MSS) was an early contender and was used in the 2008 Olympics, but it has since been discontinued. Despite its name, Adobe’s HDS cannot be used with ordinary HTTP servers, so it has not been widely used. Apple’s HLS has been developed for several years and has been widely adopted by the industry. Apple published a public specification for its proprietary HLS protocol, but the protocol is still not open to improvement by the industry, and some broadcasters find its manifest files overly complex and inefficient (a manifest describes the ABS content in a way that ABR video players can understand).
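For context, an HLS master manifest is a small text playlist that lists each available rendition. The fragment below is an illustrative minimum (bitrates, resolutions, and filenames are made up), not a production manifest:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3200000,RESOLUTION=1280x720
720p.m3u8
```

Each `#EXT-X-STREAM-INF` entry points the ABR player at a per-rendition playlist; the player switches between them as bandwidth changes.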
In comparison with the proprietary protocols, MPEG-DASH is the only HTTP streaming method which is an open standard (ISO/IEC 23009). MPEG-DASH is also codec-agnostic, allowing content encoding with any format such as H.265, H.264, VP9, etc. It also supports DRM, low-latency streaming, ad insertion, and a number of other features. Because of this, MPEG-DASH is believed to be the best option for a future unified, vendor-neutral adaptive streaming technology, and it is quickly being adopted in the industry.
Revisiting the First Mile – RTMP Alternatives
Why RSP is more resilient than forward error correction protocols like SRT and Zixi
Today, even though RTMP is no longer implemented for last-mile delivery (having been replaced by HTTP streaming), using an RTMP encoder for contribution (first mile) is still the go-to approach for many content producers, because many platforms accept RTMP streams and convert them to ABR HTTP streaming. However, because of its age, RTMP is not a forward-looking technology: it lacks support for high-resolution video and next-generation compression methods such as H.265/HEVC, VP9, and AV1.
RTMP is also highly vulnerable to audio/video quality loss due to bandwidth or network issues. Although RTMP runs over TCP, which retransmits lost packets, TCP has only a limited (and constantly adjusting) window of time in which it can resend lost data. The protocol includes no error-correction method for recovering or resending video data lost outside the TCP window. This means that even minor packet loss over a period of a few seconds can disrupt the stream.
A number of other proprietary protocols have been developed since RTMP in an attempt to solve the content loss from network interruptions that plagues it. For example, the Zixi and Secure Reliable Transport (SRT) protocols use Forward Error Correction (FEC) techniques to buffer and send redundant data to overcome potential packet loss. While these protocols work well in applications where low latency is required, they still suffer from the same windowing problem: if packet loss persists beyond a set amount of time (typically 8–12 seconds at most), content is lost forever because the buffer runs out and content is dropped.
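The FEC idea is simple to show in miniature: send one redundant parity packet per group so that any single lost packet in the group can be rebuilt without a retransmission. This is only a toy sketch of the concept; real FEC schemes used by protocols like SRT and Zixi are far more sophisticated.

```python
# Toy XOR-parity FEC: one parity packet per group lets the receiver
# rebuild any single lost packet in that group locally, with no
# round-trip back to the sender.

def make_parity(group):
    """XOR all packets in a group into one parity packet."""
    parity = bytes(len(group[0]))
    for pkt in group:
        parity = bytes(a ^ b for a, b in zip(parity, pkt))
    return parity

def recover(received, parity):
    """Rebuild the single missing packet (the None entry) from parity."""
    missing = received.index(None)
    rebuilt = parity
    for i, pkt in enumerate(received):
        if i != missing:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, pkt))
    return rebuilt

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)
lost = [b"pkt1", None, b"pkt3"]   # pkt2 was dropped on the wire
print(recover(lost, parity))      # -> b'pkt2'
```

The cost is the redundant data itself, and the limit is exactly the one described above: FEC can only repair losses that fall within its fixed redundancy budget and time window.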
Other protocols may use additional error-correction techniques such as Selective Repeat Automatic Repeat Request (ARQ), which retransmits only the content that was actually lost (saving the data overhead of redundantly retransmitting all audio/video). However, the effectiveness of these protocols is limited by the proprietary implementation of their storage buffers and retry logic.
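The Selective Repeat idea can be sketched as follows: the receiver reports exactly which sequence numbers are missing, and the sender retransmits only those segments instead of everything after the first gap. Names and structure here are illustrative, not taken from any particular protocol implementation.

```python
# Minimal Selective Repeat ARQ sketch: only the segments the receiver
# reports as missing are sent again.

def missing_seqs(received, highest_sent):
    """Sequence numbers the receiver has not yet seen."""
    return [s for s in range(highest_sent + 1) if s not in received]

def retransmit(sent_buffer, nack_list):
    """Sender resends only the segments the receiver asked for."""
    return {seq: sent_buffer[seq] for seq in nack_list}

sent_buffer = {0: b"s0", 1: b"s1", 2: b"s2", 3: b"s3"}
received = {0, 2}                      # segments 1 and 3 were lost
nacks = missing_seqs(received, 3)      # -> [1, 3]
print(retransmit(sent_buffer, nacks))  # only the lost segments go again
```

Note that this only works as long as the sender still holds the lost segments in `sent_buffer`; a time-limited buffer reintroduces the windowing problem.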
An End-to-End Streaming Solution
To solve the first-mile problem, we turn our attention back to HTTP streaming, such as MPEG-DASH. First-mile HTTP-based streaming can use HTTP over TCP to deliver content from an encoder to distribution servers, but again, TCP on its own cannot guarantee delivery of all content (that same windowing problem).
Resilient Streaming Protocol
In 2014, Resi developed the Resilient Streaming Protocol (RSP) which works in conjunction with MPEG-DASH to manage end-to-end content transmission. RSP uses a number of error correction techniques including unlimited buffering and Selective Repeat ARQ in order to provide a smooth and complete stream end-to-end. Unlike all other streaming methods, RSP is not limited by a certain amount of time in transmitting or receiving audio/video data at any stage. From the encoder to the distribution servers (first mile), and from the distribution servers to the end user (last mile), RSP ensures 100% content transmission.
While Resi RSP encoders produce MPEG-DASH content which can be directly used by end-user video players without any modification, Resi’s Cloud Transcoder can also convert a single quality video into multiple bitrate (ABR) content in the cloud. This provides end users with an Adaptive Bitrate Streaming experience from a single bitrate encode – reducing upload bandwidth requirements at the broadcasting location.
Additionally, since the Cloud Transcoder uses RSP, ABR transcoded content is guaranteed to have been created from a perfect source. This is unlike other streaming methods which could have encountered loss during transmission in the first mile. If an RSP encoder temporarily loses network connectivity and cannot transmit data, the RSP transcoder simply waits for the network to be restored. When the network is restored, the encoder will transmit all information which has not been received, and the cloud transcoder will resume from exactly where it left off. This ensures that viewers will have a perfect, gapless adaptive bitrate viewing experience even in the case of severe packet loss or total transmission interruption.
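The wait-and-resume behavior described above can be sketched with an unbounded sender-side buffer: segments queue during an outage and replay, in order, once connectivity returns. The class and method names below are hypothetical, not Resi’s actual API; this is only a sketch of the buffering principle.

```python
# Sketch of wait-and-resume: an encoder-side queue with no time limit
# holds every undelivered segment during an outage, then replays the
# backlog in order when the network comes back, so nothing is dropped.

from collections import deque

class PersistentSender:
    def __init__(self):
        self.backlog = deque()   # grows without a time limit
        self.online = True

    def submit(self, segment):
        """Queue a segment; deliver immediately if the network is up."""
        self.backlog.append(segment)
        if self.online:
            self.flush()

    def flush(self):
        """Deliver every queued segment, oldest first."""
        delivered = []
        while self.backlog:
            delivered.append(self.backlog.popleft())
        return delivered

sender = PersistentSender()
sender.online = False
sender.submit(b"seg1")    # buffered during the outage
sender.submit(b"seg2")
sender.online = True
print(sender.flush())     # backlog replays in order: seg1, then seg2
```

The trade-off relative to FEC-based protocols is latency: delivery may fall behind real time during an outage, but no content is ever discarded.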
In this day and age, with online streaming playing an ever more pervasive role in our culture, it is critical to pay attention to the quality and consistency of video streams in order to keep viewers engaged. Viewers have a short tolerance for spotty streams, with most abandoning after one or two buffering wheels; by 90 seconds of poor content, most users will have left. Additionally, the quality of a video stream reflects on the brand or message behind it. Simply put, the quality of a video streaming experience is critical for building and retaining a successful online audience. We are here to help you represent your organization in the best way possible through high-quality, glitch-free streaming for every event.
See what it's all about.
Resilient streaming starts here.