Our Knowledge Base (2021-02-15T15:21:46+00:00)

Broadcast Tech Knowledge Base

A guide to the broadcast technology terminology we work with


Web Real-Time Communications (WebRTC) is an open source project created by Google to enable peer-to-peer communication in web browsers and mobile applications through application programming interfaces. This includes audio, video, and data transfers.

WebRTC is a free, open-source project that provides web browsers and mobile applications with real-time communication (RTC) via simple application programming interfaces (APIs). It allows audio and video communication to work inside web pages through direct peer-to-peer communication, eliminating the need to install plugins or download native apps.[3] Supported by Apple, Google, Microsoft, Mozilla, and Opera, WebRTC is being standardized through the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF).[4]

Its mission is to “enable rich, high-quality RTC applications to be developed for the browser, mobile platforms, and IoT devices, and allow them all to communicate via a common set of protocols”.[4]
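The peer-to-peer negotiation WebRTC standardizes follows an offer/answer pattern: one peer proposes a session description, the other replies with the capabilities both sides share. The sketch below illustrates only that conceptual flow in Python; it is not the browser's RTCPeerConnection API, and the peer names and codec lists are hypothetical.

```python
# Conceptual sketch of WebRTC's offer/answer signaling handshake.
# NOT the browser API (RTCPeerConnection); it only illustrates the
# session-negotiation flow the protocol standardizes.

class Peer:
    """A hypothetical peer exchanging SDP-like session descriptions."""

    def __init__(self, name, codecs):
        self.name = name
        self.codecs = codecs          # media formats this peer supports
        self.state = "new"
        self.agreed_codecs = None

    def create_offer(self):
        self.state = "have-local-offer"
        return {"type": "offer", "from": self.name, "codecs": self.codecs}

    def create_answer(self, offer):
        # Answer with the intersection of both peers' capabilities.
        common = [c for c in offer["codecs"] if c in self.codecs]
        self.agreed_codecs = common
        self.state = "connected"
        return {"type": "answer", "from": self.name, "codecs": common}

    def accept_answer(self, answer):
        self.agreed_codecs = answer["codecs"]
        self.state = "connected"


# The "signaling channel" (which WebRTC deliberately leaves out of scope)
# here just passes dictionaries between the two peers.
alice = Peer("alice", ["VP8", "H264", "opus"])
bob = Peer("bob", ["VP8", "opus"])

offer = alice.create_offer()
answer = bob.create_answer(offer)
alice.accept_answer(answer)

print(alice.agreed_codecs)  # codecs both peers support
```

In a real deployment the signaling channel is any transport the application chooses (often WebSockets), and the descriptions exchanged are SDP documents rather than dictionaries.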

See the related products:


Ultra-high-definition television (also known as Ultra HD television, Ultra HD, UHDTV, UHD, and Super Hi-Vision) today includes 4K UHD and 8K UHD, which are two digital video formats with an aspect ratio of 16:9. These were first proposed by NHK Science & Technology Research Laboratories and later defined and approved by the International Telecommunication Union (ITU).[1][2][3][4] It is a digital television (DTV) standard, and the successor to high-definition television (HDTV), which in turn was the successor to standard-definition television (SDTV).

The Consumer Electronics Association announced on October 17, 2012, that “Ultra High Definition”, or “Ultra HD”, would be used for displays that have an aspect ratio of 16:9 or wider and at least one digital input capable of carrying and presenting native video at a minimum resolution of 3840×2160 pixels.[5][6] In 2015, the Ultra HD Forum was created to bring together the end-to-end video production ecosystem to ensure interoperability and produce industry guidelines so that adoption of ultra-high-definition television could accelerate. The forum’s list of commercial services around the world offering 4K resolution grew from just 30 in Q3 2015 to 55.[7]
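The CEA criteria quoted above reduce to two checks: the aspect ratio must be 16:9 or wider, and the native resolution must be at least 3840×2160. A minimal sketch of that test, with the function name chosen for illustration:

```python
# Sketch of the CEA "Ultra HD" display criteria described above:
# aspect ratio of 16:9 or wider, native resolution >= 3840x2160.
from fractions import Fraction

def qualifies_as_ultra_hd(width, height):
    wide_enough = Fraction(width, height) >= Fraction(16, 9)
    enough_pixels = width >= 3840 and height >= 2160
    return wide_enough and enough_pixels

print(qualifies_as_ultra_hd(3840, 2160))  # 4K UHD -> True
print(qualifies_as_ultra_hd(7680, 4320))  # 8K UHD -> True
print(qualifies_as_ultra_hd(1920, 1080))  # Full HD -> False
```

Using exact fractions avoids the floating-point comparison issues that 16/9 would otherwise introduce.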

See the related products:


The NewsWrap Newsroom Computer System (NRCS) encompasses the ingestion of wires; the logging and sorting of media elements; and the scripting, editing, and approval of stories. The NRCS covers all the key areas of the broadcast chain: collaborating, exchanging information, and getting on air quickly.

The NewsWrap NRCS has been designed with journalists in mind, and its user-friendly, easy-to-use interface makes it the preferred solution for news management. The system brings together all the media used in today’s fast-moving news presentations. It also organizes live or recorded news content such as video, text, stills, news agency stories, CG, graphics, etc. for inclusion in the run order.

The NRCS is web based and the same platform can also be extended to publish content to radio, online and print.

See the related products:



NDI 4.5 (the current version) adds iOS support for real-time, full-frame-rate, full-resolution capture of the display over wireless with NDI®|HX Capture for iOS. In addition, the NDI®|HX Camera for iOS app turns any iPhone into a full 4K wireless camera, giving it the same capabilities as a high-end video camera.

Network Device Interface (NDI) is a royalty-free software standard developed by NewTek to enable video-compatible products to communicate, deliver, and receive high-definition video over a computer network in a high-quality, low-latency manner that is frame-accurate and suitable for switching in a live production …

See the related products:


Dynamic Adaptive Streaming over HTTP (DASH), also known as MPEG-DASH, is an adaptive bitrate streaming technique that enables high-quality streaming of media content over the Internet delivered from conventional HTTP web servers. Similar to Apple’s HTTP Live Streaming (HLS) solution, MPEG-DASH works by breaking the content into a sequence of small segments, which are served over HTTP. Each segment contains a short interval of playback time of content that is potentially many hours in duration, such as a movie or the live broadcast of a sports event. The content is made available at a variety of different bit rates, i.e., alternative segments encoded at different bit rates covering aligned short intervals of playback time. While the content is being played back by an MPEG-DASH client, the client uses an adaptive bitrate (ABR) algorithm[1] to automatically select the segment with the highest bit rate possible that can be downloaded in time for playback without causing stalls or re-buffering events.[2] The current MPEG-DASH reference client dash.js[3] offers both buffer-based (BOLA[4]) and hybrid (DYNAMIC[2]) bit rate adaptation algorithms. Thus, an MPEG-DASH client can seamlessly adapt to changing network conditions and provide high-quality playback with few stalls or re-buffering events.
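The core ABR decision described above can be sketched as a simple throughput-based selection: pick the highest rendition that fits under the currently measured bandwidth, with some safety headroom. This is a deliberately simplified illustration; dash.js's BOLA and DYNAMIC algorithms are considerably more sophisticated, and the bitrate ladder below is hypothetical.

```python
# Simplified throughput-based ABR selection, in the spirit of the rate
# adaptation described above (real algorithms like BOLA also consider
# buffer occupancy; this shows only the core idea).

def select_bitrate(available_bitrates_kbps, measured_throughput_kbps,
                   safety_factor=0.9):
    """Pick the highest rendition that fits under measured throughput.

    safety_factor leaves headroom so each segment downloads faster than
    it plays back, reducing stalls when throughput fluctuates.
    """
    budget = measured_throughput_kbps * safety_factor
    candidates = [b for b in sorted(available_bitrates_kbps) if b <= budget]
    # Fall back to the lowest rendition if even that exceeds the budget.
    return candidates[-1] if candidates else min(available_bitrates_kbps)

ladder = [400, 1200, 2500, 5000, 8000]   # hypothetical bitrate ladder (kbps)
print(select_bitrate(ladder, 3000))      # -> 2500
print(select_bitrate(ladder, 300))       # -> 400 (lowest rendition)
```

A purely throughput-driven rule like this tends to oscillate on variable networks, which is exactly why buffer-based and hybrid schemes such as BOLA and DYNAMIC exist.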

See the related products:

MOS Protocol

The Media Object Server (MOS) protocol allows newsroom computer systems (NCS) to communicate using a standard protocol with video servers, audio servers, still stores, and character generators for broadcast production.[1][2]

The MOS protocol is based on XML.[3] It enables the exchange of the following types of messages:[4]

Descriptive Data for Media Objects.
The MOS “pushes” descriptive information and pointers to the NCS as objects are created, modified, or deleted in the MOS. This allows the NCS to be “aware” of the contents of the MOS and enables the NCS to perform searches on and manipulate the data the MOS has sent.
Playlist Exchange.
The NCS can build and transfer playlist information to the MOS. This allows the NCS to control the sequence that media objects are played or presented by the MOS.
Status Exchange.
The MOS can inform the NCS of the status of specific clips or the MOS system in general. The NCS can notify the MOS of the status of specific playlist items or running orders.
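Since MOS messages are XML, the playlist exchange above can be illustrated by constructing and parsing a small running-order message. The sketch below follows the general shape of MOS messages (a `mos` envelope carrying `mosID`/`ncsID` identifiers and a message body), but it is simplified and not a complete, spec-accurate payload; the IDs and slug are made up.

```python
# Sketch of a MOS-style XML message (simplified; not a spec-complete
# payload — element names follow the general shape of MOS messages).
import xml.etree.ElementTree as ET

def build_ro_create(ncs_id, mos_id, ro_id, slug):
    """Build a minimal running-order creation message as XML text."""
    mos = ET.Element("mos")
    ET.SubElement(mos, "mosID").text = mos_id
    ET.SubElement(mos, "ncsID").text = ncs_id
    ro = ET.SubElement(mos, "roCreate")
    ET.SubElement(ro, "roID").text = ro_id
    ET.SubElement(ro, "roSlug").text = slug
    return ET.tostring(mos, encoding="unicode")

xml_text = build_ro_create("ncs.example", "mos.example", "RO-1",
                           "6PM Bulletin")

# The receiving device parses the same XML back out.
parsed = ET.fromstring(xml_text)
print(parsed.find("roCreate/roSlug").text)  # -> 6PM Bulletin
```

In production the XML would travel over the socket connections the MOS specification defines, with both sides validating message structure against the published schema.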

MOS was developed to reduce the need for device-specific drivers. By allowing developers to embed functionality and handle events, vendors were relieved of the burden of developing device drivers; interfacing with newsroom computer systems was left to the manufacturers. This approach gives broadcasters the flexibility to purchase equipment from multiple vendors.[5] It also limits the need to have operators in multiple locations throughout the studio: for example, multiple character generators (CG) can be fired from a single control workstation, without needing an operator at each CG console.[6]

MOS enables journalists to see, use, and control media devices inside Associated Press’s ENPS system so that individual pieces of newsroom production technology speak a common XML-based language.[7]

See the related products:


The term “media asset management” (MAM) may be used in reference to DAM (digital asset management) applied to the subset of digital objects commonly considered “media”, namely audio recordings, photos, and videos. Any editing process that involves media, especially video, can use a MAM to access media components to be edited together, or to be combined with a live feed, in a fluent manner. A MAM typically offers at least one searchable index of the images, audio, and videos it contains, constructed from metadata harvested from the assets using pattern recognition, or input manually.[4]
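The searchable metadata index at the heart of a MAM can be sketched as a small keyword index over typed assets. This is a toy illustration only: real systems harvest metadata automatically (e.g. via pattern recognition), while here it is supplied by hand, and the asset IDs are invented.

```python
# Minimal sketch of the searchable metadata index a MAM maintains.

class MediaAssetIndex:
    def __init__(self):
        self._assets = []

    def add(self, asset_id, media_type, keywords):
        """Register an asset with manually supplied keyword metadata."""
        self._assets.append({
            "id": asset_id,
            "type": media_type,          # "audio", "photo", or "video"
            "keywords": {k.lower() for k in keywords},
        })

    def search(self, keyword, media_type=None):
        """Return IDs of assets matching a keyword, optionally by type."""
        k = keyword.lower()
        return [a["id"] for a in self._assets
                if k in a["keywords"]
                and (media_type is None or a["type"] == media_type)]

index = MediaAssetIndex()
index.add("clip-001", "video", ["election", "studio"])
index.add("img-042", "photo", ["election", "rally"])
print(index.search("election"))           # -> ['clip-001', 'img-042']
print(index.search("election", "video"))  # -> ['clip-001']
```

The optional `media_type` filter mirrors how an editor might restrict a query to video when assembling a run order from a live feed.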

The term “Visual Asset Management” (VAM) is also used in reference to DAM. It denotes a DAM system that visually presents digital-twin assets and visually connects all assets in the DAM with URLs or hotspots. Virtual assets in a VAM are not limited to media files and documents; they also include digital twins of physical subjects, such as 3D models, HDR virtual panorama tours, 360-degree videos, audio, and more. VAM (or VAM2) was originally developed for the oil & gas, aviation, and construction industries. In the past 10 years, VAM2 has also come to be commonly used by a wider variety of industries for virtual asset management, virtual training, virtual project management, virtual maintenance/inspection, virtual curation, virtual planning, and more.[5]

See the related products: