From entertainment to education, corporate culture, social media, religion, and even healthcare, video streaming is now nearly ubiquitous. It’s not only used to entertain and inform but also to connect, educate, and analyze. The technology behind streaming continues to advance at full throttle, unlocking more engaging audience experiences across the board.
Given the dynamic nature of streaming and its broad range of consumer and prosumer applications, streaming setups vary in complexity. Whereas a video podcaster may simply tap a camera and a microphone to facilitate a stream, a live sporting event or concert production might establish a multi-camera streaming workflow equipped with remote production capabilities. Explore this page for a high-level overview of streaming and key related terms and protocols, formats, and codecs, as well as a few pointers for getting up and running with a live stream.
Streaming uses IP transport to deliver video to various devices. Why has it become so pervasive, and how is it being applied across industries? Uncover the answers to these questions and more.
ABR dynamically adjusts video stream quality based on the viewer's device type and bandwidth. How does it work, and what impact does it have on the viewer experience? Find out.
One way to think of streaming is that it’s the broadcast of video material via nontraditional technology to nontraditional receivers. “Nontraditional technology” here refers to transport via IP (Internet Protocol). “Traditional technology” encompasses the over-the-air broadcast of television signals – either from a transmitter site to an aerial or satellite receiver at a viewer’s home, or over coaxial or fiber optic cables to a set-top box in a cable subscriber’s home.
In other words, streaming describes the transmission of time-linear video via IP to IP-capable receivers, and its use has exploded in popularity. This is largely because those IP-capable receivers are not tied to a physical location. Unlike broadcast, which is generally viewed on a non-“smart” home television, streamed video content can be viewed on any screen that can receive an IP signal, whether via a Wi-Fi or cellular network. Video can also be streamed within a local area network (LAN), such as in corporate environments, educational institutions, or houses of worship, without accessing the broader internet. Additionally, satellite internet services may be used to stream video in remote areas where traditional broadband is unavailable.
Unlike downloading, where the viewer transfers a file to a computer or device before playback can begin, streaming allows the transmission and playback of media to occur simultaneously. When the streaming process begins, media data is received, and a limited amount of that data is stored in a temporary memory called a buffer. Once sufficient video has been received into the buffer, playback of the stream can begin. As the media data is displayed, it is typically discarded from the buffer to make room for the continued reception of the stream. In effect, the buffer acts as a time delay and helps to ensure continuous playback of the stream in the event of momentary connection interruption. This process of filling and emptying the buffer is repeated until the end of the stream is reached.
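To make that fill-and-drain cycle concrete, here is a minimal sketch of the buffering logic in Python. It’s an illustration only, not any particular player’s implementation, and the segment length and startup threshold are hypothetical values:

```python
from collections import deque

# Hypothetical numbers: 2-second segments, start playback once 6 seconds are buffered.
SEGMENT_SECONDS = 2
STARTUP_THRESHOLD = 6

buffer = deque()        # the playback buffer: segments received but not yet displayed
buffered_seconds = 0
playing = False

def on_segment_received(segment):
    """Called whenever a new chunk of media arrives from the network."""
    global buffered_seconds, playing
    buffer.append(segment)
    buffered_seconds += SEGMENT_SECONDS
    # Playback begins only once the buffer holds enough media to ride out
    # momentary network interruptions.
    if not playing and buffered_seconds >= STARTUP_THRESHOLD:
        playing = True

def on_display_tick():
    """Called when the player needs the next segment to display."""
    global buffered_seconds, playing
    if buffer:
        segment = buffer.popleft()       # display, then discard to free memory
        buffered_seconds -= SEGMENT_SECONDS
        return segment
    playing = False                      # buffer underrun: stall and rebuffer
    return None
```

The key design point is the startup threshold: a larger buffer tolerates longer network hiccups at the cost of a longer wait before playback begins.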
To distribute a video stream from the media server, a content delivery network (CDN) is required. A CDN is a geographically distributed network of servers (examples include Akamai, Cloudflare, Facebook Live, Twitch, and YouTube Live) that deliver content to the viewer. By having multiple geographically distributed access points, a CDN is able to connect viewers to the server closest to their actual location to reduce network variability. A CDN designed specifically for live streaming further reduces latency and provides scalability by distributing the demand across hundreds or thousands of servers. This allows streaming video content to be delivered to multiple viewers simultaneously without interruption, which is essential for professional applications. So, why stream? Let’s take a closer look at its advantages.
Instant playback: Media playback begins almost immediately after a stream reaches a viewing device, dramatically reducing wait times. Current streaming media servers also allow viewers to jump to any moment of an on-demand stream without downloading the entire file.
Live transmission: Streaming allows for live transmission of events such as sports, concerts, house of worship services, presentations, performances, and more. While there is some nominal delay in the encoding, transmission, and buffering process, the playback appears near instantaneous to the viewer.
Storage requirements: Since a small portion of the streaming media file is stored at any one time, and it’s stored temporarily in the device’s buffer, the amount of physical memory required is greatly reduced.
Adaptive quality: Depending on connection speed between the media server and the playback device, the quality of the media stream can be increased or decreased to ensure smooth playback. Slower connections can still access content, albeit at a lower quality. If the connection speed is variable, that quality can adapt dynamically.
Video on Demand (VOD): Along with the streaming file, depending on the media server and CDN configuration, it is also possible to provide a video file download for on-demand or offline use.
Given the many benefits streaming provides, it’s no surprise that it has permeated such a broad range of industries, spanning entertainment, finance, healthcare, and education, among many others.
Video streaming has diverse applications across markets, enhancing accessibility, engagement, and educational opportunities in each.
Live streamed broadcasts of professional sports are now common, and increasingly so at the high school and collegiate levels, providing visibility and engagement for schools and communities. With more accessible technology today, even niche sports are being streamed. The esports industry also relies heavily on live streaming to reach audiences, with platforms dedicated to broadcasting competitive gaming events. Concerts and live performances are streamed to reach wider audiences, allowing fans to experience events they cannot attend in person. And an entire community of creators and influencers has emerged to entertain audiences with targeted live streamed content.
The global pandemic dramatically accelerated the adoption of streaming technologies for corporate communications, particularly internal meetings. Companies rapidly embraced video streaming as a critical tool for enabling remote employee participation in a work-from-home model. They also use streaming to host webinars and virtual tradeshows, or even to communicate earnings reports, reaching a wider audience without geographical constraints. Furthermore, streaming serves as a valuable tool for training programs, enabling consistent delivery across multiple locations.
Live streaming enhances medical training by allowing professionals to participate in sessions remotely, facilitating immediate feedback and access to recorded content for later review. Surgical procedures can be streamed live to provide educational opportunities for medical students and professionals globally, offering a practical learning experience beyond traditional methods.
Video streaming enables real-time interaction in virtual classrooms, making education accessible to students regardless of location; it’s also used to support staff meetings and training needs. Recorded sessions of all the above allow for flexible review and learning opportunities. Many primary schools and universities are also now live streaming sports, graduations, drama productions, and other school events for family members who may not be able to attend in person.
While some megachurches were live streaming prior to the pandemic, many other congregations began live streaming when they could no longer host services in person. Religious institutions live stream weekly services not only to audiences tuning in from afar, but also to screens in overflow rooms.
Local and national government agencies use streaming to make public meetings accessible to constituents, promoting transparency and community engagement. Streaming also facilitates training and communication within government agencies, allowing efficient information sharing across departments and regions.
An important characteristic of streaming that differentiates it from traditional broadcast is that it often (but not always) involves a direct connection between the content provider and the viewer. In this context, streaming may be described as a one-to-one activity, while broadcast is one-to-many. A television station broadcasts a signal that will be received by any receiver within its service area; there is no difference in cost or broadcast infrastructure whether 100 or 100,000 people are watching. Similarly, satellite footprints can encompass hundreds of thousands of potential receivers within a geographic area.
In streaming’s one-to-one communication, each stream is tailored to an individual user or device rather than being broadcast to a large audience simultaneously. This allows for personalized experiences, where content (and advertising) can be targeted based on the viewer’s preferences and interests. In a similar vein, the way content is streamed can be fine-tuned based on the capabilities of a user’s device or available bandwidth. ABR (Adaptive Bitrate Streaming) is a sophisticated video streaming technique that dynamically adjusts video quality in real time based on available network bandwidth.
Once a piece of video content is ready for distribution to viewers, it’s encoded into multiple variants at different bitrates, frame sizes, and frame rates. While the content is the same in each variant, the bandwidth required to stream them varies, from very low to high. An ABR package is a collection of multiple encoded versions of the same video content; it includes various bitrates and resolutions of the video. The ABR ladder is a structured representation of the different quality levels within the ABR package; it describes the specific bitrates and resolutions available for this piece of content. Each rung in the ladder represents a different quality tier.
The ladder informs how the ABR package is created and used during streaming. For example, an ABR package might contain multiple MP4 versions of a video encoded at different qualities, while the ABR ladder would specify that the package includes 1080p at 5 Mbps, 720p at 3 Mbps, and 480p at 1 Mbps versions. When a viewer watches streaming content, their video player continuously monitors network throughput, device processing capacity, and available bandwidth, and dynamically selects the most appropriate video segment to receive – which rung on the ABR ladder to pull from – moving up or down the ladder as conditions change to ensure smooth playback without buffering.
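To make the ladder concrete, here’s a minimal sketch in Python of how a player might pick a rung. The rungs mirror the example figures above; the selection rule and its 1.5x headroom factor are illustrative assumptions, not part of any standard:

```python
# The ABR ladder from the example above: each rung is a quality tier,
# ordered highest quality first. Real ladders often have more rungs.
LADDER = [
    {"resolution": "1080p", "bitrate_mbps": 5.0},
    {"resolution": "720p",  "bitrate_mbps": 3.0},
    {"resolution": "480p",  "bitrate_mbps": 1.0},
]

def pick_rung(measured_throughput_mbps, headroom=1.5):
    """Pick the highest-quality rung the network can sustain.

    The player wants measured throughput comfortably above the rung's
    bitrate; the 1.5x headroom factor is an illustrative assumption.
    """
    for rung in LADDER:
        if measured_throughput_mbps >= rung["bitrate_mbps"] * headroom:
            return rung
    return LADDER[-1]    # fall back to the lowest rung rather than stall

# Example: on a ~5 Mbps connection the player selects the 720p/3 Mbps rung,
# since the 1080p rung would require at least 7.5 Mbps with headroom.
print(pick_rung(5.0))    # {'resolution': '720p', 'bitrate_mbps': 3.0}
```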
IP video devices like AJA’s BRIDGE LIVE can encode content using various codecs, including H.265 (HEVC), H.264 (AVC) and H.262 (MPEG-2), with support for different bitrates and resolutions, enabling users to create optimized ABR ladders for efficient content delivery across different streaming platforms and devices. To illustrate, let’s use the example of someone streaming a news program during their commute to work in another city. For safety’s sake, we’ll imagine this commuter taking the train rather than watching the news while driving. In a single commute, their smartphone will pass through several different networks, each with varying bandwidth limits. They might leave the house and walk to the commuter train station bathed in ultra-wideband 5G from their wireless service provider, allowing them to watch the variant of the news program encoded at the highest bitrate, frame size, and frame rate. Their device will monitor network conditions and report an all clear to the CDN, meaning it receives video at least as fast as is necessary to display it.
Once our commuter boards the train and it pulls out of the station on its way to the city, their device will probably keep a wireless signal, though likely a less robust one than the ultra-wideband 5G they had before. ABR systems adapt to real-time changes in network conditions to maintain uninterrupted playback; in this case, the player will request program segments encoded at a slightly lower quality. Taking the example a step further, let’s imagine the commuter arrives in the city and switches to the subway, the final leg of their commute to the office. While underground, their device may not be able to access cellular signals; just a few years ago, this commuter would have had to endure a subway ride with only a book or recorded content to occupy them.
As technology has improved, subway operators have installed networks that deliver wireless connectivity along train routes, allowing commuters to continue streaming content as they travel underground. Though bandwidth varies, the throughput via the on-board Wi-Fi will likely be less than it was on the commuter train. Their cellphone will recognize that network conditions have changed again, and it can no longer maintain the current high-quality stream without buffering. Their device will request a lower bitrate version of the video from the CDN to match the available bandwidth, and the video keeps on playing.
Even though the version of the content they’re viewing on the subway is encoded at a lower bitrate and resolution, the video quality is still fairly good. Today’s more efficient video codecs, which are able to maintain visual quality at lower bitrates, combined with advanced encoding techniques, ensure that even when streaming at a lower bitrate, the video quality is as high as possible given the available network conditions. The transition between different quality levels in the ABR ladder is designed to be seamless, often occurring without the viewer noticing.
With so many available streaming protocols, formats, and codecs, it can be challenging to determine the best fit for the job. We unpack some of the most popular options today and their unique advantages.
Every streaming setup varies depending on the stream quality and reach you want to achieve. Here are a few considerations to ensure a smoother end-to-end experience.
Protocols manage data transmission, while formats handle data storage and packaging. Although often confused, video streaming protocols are independent of compression codecs and file formats. Protocols are the method by which the data is transmitted and have no bearing on how the data is compressed or the container the data is wrapped in.
Let’s use the analogy of a railroad to illustrate the difference between streaming media protocols, formats, and codecs. In this example, we’ll be transporting water on our rail line instead of people. As a stream is made up of data, our train is made up of cars. We know that trains consist of multiple connected cars with wheels that allow them to travel on railroad tracks. Further, we know that liquids can be transported via rail using specialized tank cars with varying capacities. Tank cars are capable of transporting liquids at specific temperatures or pressure levels as necessary.
Let’s say you need to transport 150 million gallons of water on your train. It would be unwise to build a single 150-million-gallon tank for this job; not only would it be impossibly large, but if anything went wrong with that railway car, you’d lose all your cargo. Instead, you’d likely divide your 150 million gallons of water into smaller volumes and store each portion in a separate tank car.
Streaming video operates via a similar premise. We might describe each tank car as a container that holds a small, self-contained unit of video data (the water). A container in video streaming is a file format that acts like a digital envelope holding and organizing (encapsulating) video and audio data, along with metadata such as subtitles and streaming information. The container provides a standardized way to store and transport the media. On our railway, the containers are standardized as having a tubular tank, the ability to connect to other containers, and eight wheels that can travel over railroad tracks.
MPEG-TS (MPEG Transport Stream) is a standard digital container format used for the transmission of audio, video, and metadata. Designed to maintain streaming integrity over even unreliable networks, MPEG-TS is resilient to packet loss and network congestion, making it suitable for live broadcasts and streaming applications. MPEG-TS can carry multiple programs in a single stream, allowing for efficient bundling of different types of content. It is commonly employed in broadcasting systems such as DVB (Digital Video Broadcasting), ATSC (Advanced Television Systems Committee), and IPTV (Internet Protocol Television).
MP4 is the standard MPEG-4 container. It supports multiple video/audio streams, is widely used for web and mobile streaming, and can contain H.264/HEVC video and AAC audio.
WebM is an open, royalty-free container format designed for web streaming. It supports VP8/VP9/AV1 video codecs.
Matroska (MKV) is an open standard container that supports nearly any codec. While MKV is popular for high-quality video storage, it can be used for streaming, particularly over HTTP-based protocols. However, specialized streaming formats like HLS or DASH are often preferred for professional streaming applications.
AVI is a container format developed by Microsoft. It’s older but still widely supported.
QuickTime (MOV) is Apple’s standard video container and the basis for MP4 development.
3GP is a mobile phone video container format based on the ISO base media file format.
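Of these containers, MPEG-TS is the easiest to examine in code: a transport stream is simply a sequence of fixed 188-byte packets, each beginning with a 0x47 sync byte and carrying a 13-bit packet identifier (PID) that marks which program or elementary stream the payload belongs to. Here’s a minimal header-parsing sketch in Python:

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes) -> dict:
    """Decode the 4-byte MPEG-TS packet header."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not an MPEG-TS packet")
    return {
        # Set by demodulators to flag an uncorrectable transmission error.
        "transport_error": bool(packet[1] & 0x80),
        # Marks the start of a new PES packet or section in the payload.
        "payload_unit_start": bool(packet[1] & 0x40),
        # 13-bit packet identifier: which stream this packet belongs to.
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],
        # 4-bit counter; a gap reveals packets lost on an unreliable network.
        "continuity_counter": packet[3] & 0x0F,
    }

def iter_packets(ts_bytes: bytes):
    """Walk a transport stream 188 bytes at a time."""
    for i in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        yield parse_ts_header(ts_bytes[i:i + TS_PACKET_SIZE])
```

That small, fixed packet size is a big part of the format’s resilience: a corrupted packet costs only 188 bytes, and a gap in the continuity counter tells the receiver exactly where data was lost.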
It's important to understand that a container is not the same as a codec. While the container is the “package” or “wrapper,” the codec determines how the video and audio data are compressed and decompressed within that package. The codec (compression method) describes how the cargo is packed. Each codec offers unique advantages in compression, quality, and device compatibility.
In our rail transport example, the codec might indicate whether the water needs to be pressurized or held at a specific temperature during transport. We encode the media in a way that allows more efficient transport, and then we put the media in containers, the equivalent of the railroad car.
H.262/MPEG-2 describes a combination of lossy video compression and lossy audio data compression methods. While MPEG-2 is not as efficient as newer standards such as H.264/AVC and H.265/HEVC, backwards compatibility with existing hardware and software means it is still widely used, for example in over-the-air digital television broadcasting.
H.264/AVC (Advanced Video Coding) is a widely used video compression standard designed for efficient, high-quality video transmission. Developed by the ITU-T Video Coding Experts Group and ISO/IEC MPEG, it reduces bitrate by about half compared to previous standards. It supports multiple compression profiles and is compatible with various container formats, including MP4, TS, and 3GP. H.264/AVC is used by streaming services like Netflix and YouTube, as well as for professional video production, video conferencing, and corporate IT infrastructure.
H.265/HEVC (High Efficiency Video Coding), developed as the successor to H.264, delivers the same video quality at half the bitrate of H.264, reducing streaming bandwidth requirements. It supports high dynamic range (HDR) video and resolutions up to 8K UHD. It’s used in live event broadcasting, OTT services, video conferencing, and 4K content streaming. Platforms including Netflix, YouTube, Amazon Prime, and Apple TV+ use H.265 to deliver content to viewers.
H.266/VVC (Versatile Video Coding) is a recent compression standard developed by the Joint Video Experts Team (JVET) as a successor to H.265/HEVC that aims to reduce data requirements by about 50 percent compared to HEVC. H.266 supports 4K and 8K and enables 360° and HDR video streaming. It’s not yet widely adopted.
AV1 is an open-source, royalty-free codec for internet video transmission developed by the Alliance for Open Media as a successor to VP9. It’s optimized for high-resolution content (4K, 8K), supports HDR and wide color gamuts, and is compatible with MP4, 3GP, and MKV containers.
VP9 is a royalty-free codec developed by Google to improve video compression and streaming efficiency. It delivers the same video quality at half the bitrate of VP8 and supports 4K and 8K. VP9 is used by YouTube for 4K video streaming and is supported by most major browsers.
The railroad tracks and signals that determine the train’s progress are analogous to the streaming protocols that dictate how streaming data is transmitted. The protocol essentially provides a comprehensive blueprint for how video content will be packaged, secured, and delivered from a server to a client device across different network conditions.
The protocol will dictate elements including segmentation rules (we’re going to chop the video into chunks of a specific length), encryption method (we’ll apply this kind of protection to the media), and metadata handling (we’ll allow this amount and type of associated non-media data to travel with the media). It also handles communication between the server and the client.
HLS (HTTP Live Streaming) offers adaptive bitrate streaming, allowing for quality adjustment based on the viewer's internet speed. Created by Apple, HLS is a very common streaming protocol compatible with nearly every device. It uses HTTP for content delivery, segments video into small chunks, supports ABR streaming, uses H.264 or H.265 codecs, and supports MPEG-TS and fragmented MP4 containers.
MPEG-DASH (Dynamic Adaptive Streaming over HTTP) is an open standard for ABR streaming used by major services like YouTube and Netflix. Sometimes referred to simply as DASH, it is an alternative to Apple’s proprietary HLS protocol. MPEG-DASH is codec-agnostic, breaks video into small segments, and supports multiple bitrates and resolutions.
RTMP (Real-Time Messaging Protocol) is an older, TCP-based protocol for low-latency, high-performance streaming of audio, video, and metadata. While originally proprietary, RTMP is now an open specification. RTMP is widely supported by streaming platforms.
RTSP (Real-Time Streaming Protocol) is an application-layer network protocol designed for controlling multimedia streaming between servers and clients. RTSP provides a flexible, interactive method for streaming real-time multimedia content, allowing precise control over media playback across various platforms and devices. It may be used in live streaming, video-on-demand services, and interactive media applications.
UDP (User Datagram Protocol) is a core communications protocol in the Internet Protocol Suite designed for fast, low-latency data transmission. It sends data as datagrams (packets), relies on underlying IP protocol for routing, and sends packets without confirming receipt. It’s primarily used for real-time applications requiring speed like Voice over IP (VoIP), online gaming, and live video streaming. UDP prioritizes speed over reliability, making it ideal for applications where occasional data loss is acceptable and timely transmission is critical.
SRT (Secure Reliable Transport) is an open-source protocol designed for reliable, low-latency streaming over unpredictable networks. It offers high security and compatibility.
RIST (Reliable Internet Stream Transport) is an open-source, UDP-based transport protocol designed for reliable video transmission over unreliable networks. Its goal is to provide an interoperable, open standard for transporting live video content over public internet networks. It provides low-latency, high-quality video streaming appropriate for professional media workflows like news, sports and remote production.
NDI (Network Device Interface) is a royalty-free video over IP transmission protocol developed by NewTek in 2015 for high-quality video communication over computer networks. It supports low-latency, high-resolution video transmission over IP networks, allows frame-accurate switching, and supports bi-directional video and audio transmission. NDI simplifies video production workflows by replacing specialized cabling with standard Ethernet connections, making it increasingly popular in broadcasting and live streaming environments.
Streaming protocols may transmit data in a unicast or multicast model. Unicast refers to a one-to-one communication model where data is sent from a single sender to a single receiver, whereas multicast is a method of sending data from a single source to multiple recipients simultaneously. A unicast protocol – like RTMP, HLS, or DASH – delivers individualized content streams to each viewer. A multicast protocol, like UDP, may be used in live video streaming, video conferencing, enterprise video streaming, or other environments where content does not need to be individualized per receiver. By sending a single stream instead of multiple unicast copies, multicast can reach a large number of receivers without increasing network traffic or server load.
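The difference between the two models is visible at the socket level. Below is a minimal sketch of a multicast receiver using only Python’s standard library; the group address and port are arbitrary illustrations (239.0.0.0/8 is the administratively scoped multicast range):

```python
import socket
import struct

MCAST_GROUP = "239.1.1.1"   # illustrative address in the administratively scoped range
PORT = 5004                 # illustrative port, commonly associated with RTP media

# A unicast receiver would simply bind and read. For multicast, the receiver
# must additionally join the group so the network forwards it a copy of the
# single stream the sender transmits.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
membership = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

while True:
    data, sender = sock.recvfrom(2048)   # one datagram per read, per UDP semantics
    print(f"{len(data)} bytes from {sender}")
```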
To wrap up the analogy, the collection of cars that make up our train represents a segment of the video, which denotes a specific duration of video content. Segments can be requested and delivered independently, similar to how train cars can be coupled or uncoupled.
The streaming client can request these segments or chunks adaptively based on network conditions, much like how a train might add or remove cars based on capacity needs. This segmented approach allows for ABR streaming, where the quality of the video can be adjusted by switching to different “trains” (quality levels) at segment boundaries, ensuring smooth playback across varying network conditions.
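HLS makes those “trains” explicit in its master playlist, a plain-text index that advertises each variant’s bandwidth and resolution so the player can switch rungs at segment boundaries. Here’s a short Python sketch that writes one for the example ladder used earlier; the variant URIs are hypothetical:

```python
# Variants mirror the example ABR ladder; the URIs are hypothetical paths
# to each variant's media playlist, which in turn lists its segments.
VARIANTS = [
    (5_000_000, "1920x1080", "1080p/index.m3u8"),
    (3_000_000, "1280x720",  "720p/index.m3u8"),
    (1_000_000, "854x480",   "480p/index.m3u8"),
]

lines = ["#EXTM3U"]
for bandwidth, resolution, uri in VARIANTS:
    # Each EXT-X-STREAM-INF tag advertises one rung of the ladder;
    # the line after it is the URI of that variant's media playlist.
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
    lines.append(uri)

print("\n".join(lines))
```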
Many live streams today are single-source, produced by houses of worship, universities, sporting events, local government offices, or creators. For these, the easiest place to start is a single camera and a microphone. This bare-bones setup might include a fixed wide shot of the event and a couple of microphones. You’d send the output of these devices into a mixing desk, then into an encoder appliance, like HELO Plus, that is capable of streaming the captured video and audio over the internet to a CDN.
On the other hand, more sophisticated applications must fuse multiple camera feeds with graphics and encode and package the feed for live delivery to a range of platforms and devices; here, a solution like BRIDGE LIVE or BRIDGE LIVE 3G-8 can be incredibly powerful. Whichever path you take, here are a few key elements that will determine how you shape your streaming setup:
The quality of streamed video content will generally be determined not by the camera but by your internet bandwidth – specifically, the bandwidth between you and the CDN. Modern cameras, with their sophisticated image sensors and powerful onboard processing, can capture very high quality footage, but the higher the resolution and frame rate of your video, the more data you’ll need to transmit it.
Video shot at a higher bitrate generally results in a higher quality stream with more detail, better color accuracy, and smoother motion, but also larger file sizes, requiring more storage space or, in the case of streaming, more bandwidth to get the signal out.
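The arithmetic behind that tradeoff is straightforward: bitrate multiplied by duration gives the amount of data you must store or transmit. A quick sketch, reusing the 5 Mbps 1080p figure from the earlier ABR example:

```python
def stream_data_gb(bitrate_mbps: float, duration_minutes: float) -> float:
    """Data transferred (or stored) by a stream: bitrate x time, bits to bytes."""
    megabits = bitrate_mbps * duration_minutes * 60
    return megabits / 8 / 1000   # megabits -> megabytes -> gigabytes

# A one-hour event streamed at 5 Mbps moves 2.25 GB...
print(stream_data_gb(5.0, 60))   # 2.25
# ...while the same hour at a 1 Mbps rung is only 0.45 GB.
print(stream_data_gb(1.0, 60))   # 0.45
```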
Frame Rate
When making equipment purchases, consider what kind of events you’ll be shooting and the level of movement involved. If you’re recording a city council session with speakers at lecterns, you don’t need to be terribly concerned with frame rate. For relatively static events like conferences, sermons or interviews, a standard rate will suit you well.
However, high-movement events – especially sports – will benefit from 60 frames per second (FPS) capture. If you’re producing a stream of a surgical procedure, you’d probably also benefit from a high frame rate. The tradeoff with motion is equivalent to that with video quality in general: content with fast motion looks better when it’s captured at a higher frame rate, but the higher the frame rate, the larger the file size.
To keep viewers engaged, you want to deliver the highest quality stream that you can in the bandwidth that's available to you. It may not be possible to increase your bandwidth without spending a lot of money, but you can essentially stuff more information into the bandwidth you have by upgrading the encoder at the front end.
Nearly any encoder can handle H.264/AVC, but a more sophisticated encoder (AJA’s BRIDGE LIVE or BRIDGE LIVE 3G-8, for example) will give you access to more efficient codecs like H.265/HEVC, which provides much higher quality video than H.264 at the same bitrate. Encoders like this help producers get a much better, clearer picture, increasing the production value of the outbound product.
When designing a modern video streaming workflow, it’s important to consider the type and number of cameras you’ll use and how you’ll connect that video signal to the media server delivering the stream. Streaming directly from mobile devices has recently increased in popularity, but to achieve professional quality and production flexibility, dedicated camera systems are preferred.
Higher-quality cameras offer SDI or HDMI outputs that can be connected to a media server through hardware devices such as AJA’s U-TAP devices, which offer USB 3.0 connectivity; the Io 4K Plus, with its Thunderbolt 3 interface; or direct plug-in PCIe cards like the AJA KONA line. There are also standalone appliances, such as HELO Plus, that will encode and stream H.264 from an HDMI or SDI video signal, with simultaneous recording to a network share or removable media.
More productions today are designing switched multi-camera systems to increase live stream production values. Switching between multiple SDI or HDMI camera angles and content sources has major advantages over a single camera or mobile device stream. The ability to switch between multiple sources provides a more dynamic visual experience.
With video switching applications like vMix or Telestream® Wirecast, adding graphics and effects is simple. Multiple SDI and HDMI sources can be connected through a Thunderbolt 3 video interface like the AJA Io 4K Plus, or via direct plug-in PCIe cards from the AJA KONA product line. Together, the switching application software and multi-input hardware act as a production switcher.
Although streaming is the primary function of a streaming video pipeline, there is also often a requirement for the simultaneous recording of the stream for archiving. Recorded files must often be made available for archiving or publishing as downloadable files for on-demand or offline playback. While the streaming file may be compressed at a low bitrate to accommodate bandwidth limitations, the recorded file is often stored at the highest possible quality for archiving.
The recording capability can be designed into the media server, or, for portable applications, a standalone appliance like the AJA HELO Plus can provide H.264 (MPEG-4/AVC) streaming and simultaneous recording, with independent output and quality settings, to either internal removable storage or external and network-attached storage. Digital video recorders like those in the AJA Ki Pro line can also be integrated into streaming pipelines to ensure immediate high-quality recordings for handoff.
When building a streaming pipeline from the ground up, there’s a lot that can be learned from examining pipelines others have built and the results they’ve achieved, as well as available workflow diagrams. To this end, we’ve assembled a page dedicated to sharing a collection of both.
Streaming enthusiasts and professionals around the world continue to build impressive pipelines for delivering live content to their audiences. Uncover the challenges they’ve faced along the way and the tools they’ve embraced to solve them.
Esports’ rapid rise to popularity can be attributed to many factors, from technological advancements that have enhanced the quality of modern gaming experiences to the proliferation of streaming platforms, which have paved the way to an era of global esports influencers with massive fan followings.
Boulder, Colorado-based BCC Live may have gotten its start in information technology (IT) but has quickly evolved into a live production powerhouse. Committed to ensuring customers’ success, the company thrives on solving the most difficult broadcast and live stream challenges for customers across the globe.
From single camera to CDN setups to major multicamera productions, AJA develops a broad range of reliable, high-performance streaming and encoding solutions designed to fit into any streaming environment. Explore AJA’s latest solutions made for streaming.