Voices of Video

Rethinking Live Publishing: Templates, Roles, and Hybrid Control

NETINT Technologies Season 3 Episode 36

What does it take to run hundreds of live events without chaos? In this episode, we open up the architecture behind G&L’s Playout Hub - a hybrid publishing engine designed for broadcasters, public institutions, and distributed editorial teams that need broadcast precision at scale. Built on decades of systems integration experience at G&L Geißendörfer & Leschinsky GmbH, the platform mixes live inputs, VOD interstitials, graphics, and multi-target outputs into a unified, dependable workflow.

We trace the evolution from bespoke integrations to a productized, composable platform grounded in three pillars:

  1. Custom solutions,
  2. Reusable components, and
  3. Opinionated products that reflect real-world broadcast workflows.

Inputs span SDI, SRT, RTMP, MPEG-TS, and VOD assets, with ST 2110 on the horizon. All sources feed into a playlist-driven orchestration layer, where editorial teams trigger transitions, mix live with pre-produced clips, and overlay graphics in real time. Outputs include HLS, DASH, RTMP, SRT, and CMAF, enabling consistent publishing to OTT platforms, social media, and syndication partners simultaneously.
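
To make the input/playlist/output model concrete, here is a minimal sketch of what such a channel definition could look like as data. The names and structure are illustrative assumptions, not G&L's actual API:

```python
# Hypothetical sketch of a playlist-driven channel definition, loosely
# modeled on the workflow described above; these names are illustrative,
# not G&L's actual API.
from dataclasses import dataclass, field

@dataclass
class Source:
    kind: str  # "SDI" | "SRT" | "RTMP" | "MPEG-TS" | "VOD"
    uri: str

@dataclass
class Channel:
    name: str
    playlist: list[Source] = field(default_factory=list)  # played in order
    outputs: list[str] = field(default_factory=list)      # "HLS", "DASH", ...

channel = Channel(
    name="plenary-01",
    playlist=[
        Source("SRT", "srt://ingest.example.com:9000"),  # live feed
        Source("VOD", "s3://assets/interstitial.mp4"),   # pre-produced clip
    ],
    outputs=["HLS", "DASH", "RTMP"],
)
```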

At scale, repeatability becomes everything. A channel manager with powerful templates, parameters, and reusable configurations lets operators spin up channels quickly while maintaining standards across hundreds of events and dozens of concurrent streams, such as the European Parliament’s 30 parallel events or ARTE’s 600 concerts per year.
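
A hedged sketch of how such template-driven channel creation might work: a template fixes the shared configuration, and per-event parameters fill in the rest. All names here are assumptions for illustration, not the product's channel-manager API:

```python
# Hedged sketch of template-driven channel creation: a template fixes the
# shared configuration, per-event parameters fill in the rest. All names
# are assumptions for illustration, not the product's channel-manager API.
TEMPLATE = {
    "video_codec": "h264",
    "outputs": ["HLS", "DASH"],
    "input": {"kind": "SRT", "uri": None},  # uri is supplied per event
}

def channel_from_template(template: dict, name: str, ingest_uri: str) -> dict:
    """Instantiate one channel by overlaying event parameters on a template."""
    channel = {**template, "input": {**template["input"]}}  # copy, don't mutate
    channel["name"] = name
    channel["input"]["uri"] = ingest_uri
    return channel

# Spin up 30 consistent parallel channels, as in the European Parliament case:
channels = [
    channel_from_template(TEMPLATE, name=f"event-{i:02d}",
                          ingest_uri=f"srt://ingest.example.com:{9000 + i}")
    for i in range(30)
]
```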

Download the full presentation: https://info.netint.com/hubfs/downloads/GnL-Beyond-live.pdf

Governance is treated as a first-class requirement. G&L’s independent access manager delivers SSO and granular role-based access control, down to individual actions like source switching or overlay triggering. This clean separation of concerns allows engineers to define codecs and I/O while editors manage timing, rundowns, and branding—preventing workflow collisions in large production teams.
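
Action-level RBAC of this kind can be pictured as a permission matrix keyed by role. A minimal sketch, with role and action names assumed for illustration:

```python
# Minimal sketch of action-level RBAC: every operation is a named
# permission checked against the caller's roles. Role and action names
# are assumed here for illustration.
ROLE_PERMISSIONS = {
    "engineer": {"channel.create", "codec.configure", "io.configure"},
    "editor":   {"source.switch", "overlay.trigger", "rundown.edit"},
}

def is_allowed(user_roles: set[str], action: str) -> bool:
    """True if any of the user's roles grants the named action."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

assert is_allowed({"editor"}, "overlay.trigger")      # editors handle timing/branding
assert not is_allowed({"editor"}, "codec.configure")  # codecs stay with engineering
```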

The architecture is hybrid by design, deployable on Kubernetes or k3s across cloud and on-prem environments, and integrates cleanly with external encoders, CDNs, and players. A built-in studio module supports lower-thirds, logos, and rundown-based overlays, while still allowing integration with external tools like Singular Live.

Under the hood, the platform uses hardware acceleration wherever possible—including NETINT VPUs (https://netint.com/products/) for efficient high-density encoding, while also supporting GPU and CPU environments. For teams handling hundreds of events, this efficiency is not optional; it’s the difference between smooth operation and system overload.
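
The backend preference implied here might look like the following sketch; the probe functions are placeholders, since real device detection is vendor-specific:

```python
# Sketch of the backend preference implied above: use a VPU when present,
# otherwise fall back to GPU, then CPU. The probe functions are
# placeholders; real device detection is vendor-specific.
def vpu_available() -> bool:
    return False  # placeholder: probe for VPU devices here

def gpu_available() -> bool:
    return False  # placeholder: probe for GPU devices here

def pick_encoder_backend() -> str:
    """Choose the densest available encoding backend."""
    if vpu_available():
        return "vpu"  # highest density per watt and per rack unit
    if gpu_available():
        return "gpu"
    return "cpu"      # universal fallback, lowest density

print(pick_encoder_backend())  # -> "cpu" with the placeholder probes
```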

Topics Covered in This Episode

• The three pillars: custom work → productized components → full products
• Hybrid inputs across SDI, SRT, RTMP, MPEG-TS, VOD, and future ST 2110
• Playlist-based orchestration with real-time graphics overlays
• Multi-target outputs: HLS, DASH, RTMP, SRT, CMAF
• Scaling challenges across hundreds of events and 30+ concurrent channels
• Channel manager with templates, parameters, reusability
• Hardware acceleration with NETINT VPUs, plus GPU and CPU support
• RBAC with SSO and granular, action-level permissions
• Separation of concerns for engineers vs. editors
• Kubernetes-based composable architecture for cloud + on-prem
• Lifecycle flow: reservation → templates → policies → scheduling → monitoring
• Studio module for overlays and rundowns, plus optional external graphics integration

Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.


Alexander Leschinsky, G&L:

Thank you, Anita. Good morning, everybody, and thank you for showing up so early on this second day of IBC. I'm talking about the G&L Playout Hub today: beyond live, programmed events at broadcast precision. G&L is a systems integrator and managed service provider. We integrate things, we build things, we maintain things. That's where we come from and what we've been doing for the last 25 years.

The first pillar is custom solutions. We design architectures, we build them, we implement them, we maintain them for customers, whatever the task is that they bring to us in a tender or some other discussion. We are often confronted with complex media challenges where the customer cannot find a ready-made solution in the market and needs something bespoke, customized for their specific needs. That's where we come from and what we've done for most of our time.

But as we implemented all these custom solutions, we saw that there is a need for something more productized, more standardized. We can't start from scratch for every project. So we started to look at what the atoms are that we build, and what we can best reuse. There are things like access management, which are pretty much the same across any application we build. There is encoding, there is CDN. Those are some of the atoms we can reuse. So we started to productize these components so that we can more easily assemble them into a full solution that is still customized, but built from productized components.

And with these productized components, we were able to start building actual products: taking out a part of the functionality in a very opinionated way, building on our experience from all the projects we've done for customers, and packaging the functionality that we think will serve most of our customers well. So: custom solutions, productized components, products. Those are the three pillars we build on.

Today we are talking about the G&L Playout Hub, which is our hybrid live streaming publishing workflow engine. So what does it do? The input to the system is one or more live signals. It doesn't necessarily have to be a stream; it can be an SDI signal. It can be SRT, RTMP, MPEG-TS. We are even working on ST 2110, of course, so that we have a full input setup to work with. Multiple inputs can be on-prem or in the cloud. We can also work with prepared video-on-demand assets that we integrate into the whole application. So it's not only live; it's also video-on-demand clips that we can introduce, a bit like in a FAST channel. We have graphical overlays, with an easy-to-manage engine that we built and operate ourselves. You can still integrate external graphics management; if you have something like Singular Live, you can integrate that. But the system comes with a ready-made graphics engine that we can use.

The transformation after the input is that we treat all the inputs together like a playlist. We can go through the different live sources that we have, we can mix in the video-on-demand clips, we can overlay graphics on the fly, and we have editorial teams who can trigger the transition from one playlist item to the next. So it's a bit like a real professional broadcast playout.
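
A minimal sketch of that operator-triggered playlist model, with hypothetical names rather than the actual Playout Hub API:

```python
# Hypothetical sketch of operator-triggered playout: the channel walks a
# playlist of live sources and VOD clips, and an editor cuts to the next
# item. Not the actual Playout Hub API.
class Playout:
    def __init__(self, playlist: list[str]):
        self.playlist = playlist
        self.index = 0  # position of the item currently on air

    @property
    def on_air(self) -> str:
        return self.playlist[self.index]

    def take_next(self) -> str:
        """Editor-triggered transition to the next playlist item."""
        if self.index + 1 < len(self.playlist):
            self.index += 1
        return self.on_air

rundown = Playout(["live:stage-a", "vod:interview.mp4", "live:stage-b"])
print(rundown.on_air)       # live:stage-a
print(rundown.take_next())  # cut to the pre-produced interview
```
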
The output can also take many forms. We have OTT output to HLS and DASH, social media output via RTMP, SRT, and syndication partners where you just send a single-bitrate stream via SRT. We can output other things as well, but those are the ones most typically used in these setups. Use cases: sports, news, entertainment, anything where you have a ready-made set of live streams and on-demand assets that you want to mix and mingle.

Now the challenges, where what we do differs from your standard live streaming setup. Many of the things I mentioned you could also do with OBS; nothing so far has been really specific to us. But the setups we work on in our day-to-day custom projects usually involve a large number of events. We are talking hundreds of events per year, many events per day. Customers like the European Parliament, for instance, where we have 30 concurrent event streams at the same time. We have the public broadcasters who do a lot of that: for the Franco-German broadcaster ARTE, for instance, we stream 600 concerts per year. So our customers usually have a large number of events.

We also have a large number of operators and editors. These are usually distributed teams, sometimes working from a home office, spread over multiple locations. It's not a single person, not a YouTuber or live streamer sitting in front of their OBS; it's many more people who need to work over the public internet through web interfaces.

We also need flexibility and an easy UI and API. Flexibility because many of the use cases and applications we work on have very different, very specific circumstances. The architecture is different, the tools the customer is using are different. Not everybody wants a fully integrated solution: they may already have their encoders, they may already have their CDNs, they may have their players. They just want a certain functionality, not everything. So we saw that we have to be very flexible in how we adapt to each customer's specific world.

And then there is the reality of hybrid and cloud. Everybody is talking about cloud-native live production, but a lot of productions are still on-prem, and there are good reasons why you have to mix and mingle the two. So our applications are prepared to be adapted to any cloud provider and to run on-prem as well.

Challenge one: we have a dedicated channel manager component whose only task is to manage these large numbers of channels. We introduced some abstraction layers. We sat down and asked: what are the typical challenges if you want to manage a lot of channels? Typically it comes down to aspects that you want to use repeatedly: some configuration, some parameters, some templates. Not every stream is equal, so you have one setup for one group of streams and another set of parameters for a different group. So we need a level of abstraction with reusability, and that abstraction layer allows you to modify the system in a very easy way. The data structure is completely optimized for reusability.
We worked out what the structure of parameters and templates should be, and how to combine them in a way that is easy to understand but still flexible enough to be adapted to a lot of use cases. And then we use hardware acceleration where possible. If we are talking about many, many concurrent streams, it's important to be efficient in terms of encoding capacity. So wherever we can use VPUs from NETINT, we use them. The system can also work with GPUs and with CPUs, but for efficiency we prefer to run in a setup where we can use VPUs.

The second challenge is the large number of operators and editors. Look at a typical setup, for instance the European Parliament, with a group of more than 100 people working on that platform for different tasks. To support that, we created an access manager application that is pretty much independent of what is actually being managed. It could be used for any media processing workflow, because it's abstracted from the specific encoding task. It comes with RBAC, role-based access control. So we have management of users, we can put users in groups, we can define roles, and we have single sign-on so that you can attach it to whatever single sign-on solution you have. For a public broadcaster in Germany, for instance, we made an integration so that our application can be used with their Microsoft Active Directory domain. That's just an example; we can integrate with pretty much any SSO application or IdP that you use in your company.

The third challenge: flexibility and an easy UI and API. We created a very granular access rights system. That means any action you can take on the platform, be it creating a channel or switching between sources, is defined as a property you can manage access to. Imagine a huge matrix where you can say: this person, this role, this user can do X, Y, Z. This allows us to create a separation of concerns. In a huge broadcasting environment with multiple operators and editors, there are people who are more on the technical side and define which codecs we are using and what the input and output interfaces are. And then you have editorial teams who decide when to switch from content A to content B and when to bring a certain overlay over the graphics. So there are different user groups you have to accommodate, and this separation of concerns is built into our platform.

And the last one is hybrid. We knew we had to build a solution that can run both on-prem and in the cloud. So we built a composable architecture; it's completely component-based. Under the hood, all we need is Kubernetes, either a fully fledged Kubernetes or k3s single-server Kubernetes. We bring that with the application, you don't have to care about it. But it allows us to adapt the software to run on pretty much any cloud provider and on-prem as well.

What does it look like? Finally, some graphics. This is a channel, and a channel has a number of states. In this case, this is a channel in the started state. We have a finite state machine for each channel that shows you where in the lifecycle of that channel you are.
You see a graphical interface where you see the content, you have audio level meters, the targets you are sending to, and the different outputs. This is what you would expect from a channel manager.

How do you get there? The first thing is that a channel has a certain lifetime: it has a start, it has an end. In event streaming, you plan something for an event, at the IBC show or a press release. You know this is an event that will take place at some point in time. You define that time range so that you reserve the necessary resources in the system and make sure the resources you need are not used by any other project at the same time. It's like a reservation, like a hotel room reservation for the resources you need.

Then you select a template. We created a very powerful template system so that you can look at your different use cases. Do you have setups where you use the same infrastructure again and again? Do you have something you send to a certain set of partners? Something that comes with a certain set of rights? You can unify all of that into different templates, and the number of templates is up to you. We have setups with just one template because everything is very similar, and you can have hundreds of templates depending on the complexity of the editorial workflows.

Inside a template, you have a lot of parameters you can define. In this example, we have a template with two Elemental encoders. The customer this example is taken from is not using our system for the actual encoding, just for the preparation of the content. So in the template we've defined that, via the parameters, you can simply choose which of the Elemental encoders to use. In this case we know we have two Elemental encoders, one A, one B, with certain IP addresses. We can select the SDI source that we capture the signals from locally on our own machine, and we can select the SRT setups that we use. So in this case, the target will be the two Elemental encoders we send to. It could be something else: we can change it to encode ourselves, or to use a different encoder setup. Via the template language, we can practically interface with whatever infrastructure you have.

After that, we can define policies. The policies define which group can do what. This is the place where you can say: I only want this group of editors to access this channel, work on the schedule, and handle the actual editorial tasks, and not some other team. It can be one person who is responsible for the live stream, so that only this person can change the input or set up a certain overlay. But you can also say: I want a backup person to have access too, and I want an engineer to have access to the underlying technology and parameters so they can help if something goes wrong. This is where you attach these policies to certain groups or to users directly.

Then you set a channel name, and afterwards you have a list of the channels. You can sort the channels and see the status of each. So you can manage many, many channels here, each with an individual setup, through a templatized system.
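
That lifecycle, from reservation through started to stopped, can be modeled as a small finite state machine. A sketch with assumed state names, not the product's actual ones:

```python
# Sketch of the channel lifecycle as a finite state machine, following the
# flow just described: reserve, start, stop. State names are assumptions.
ALLOWED = {
    "reserved": {"started"},             # resources booked for a time range
    "started":  {"stopped"},             # channel is on air
    "stopped":  {"started", "deleted"},  # restart, or tear down
}

class ChannelLifecycle:
    def __init__(self, name: str):
        self.name, self.state = name, "reserved"

    def transition(self, target: str) -> None:
        if target not in ALLOWED.get(self.state, set()):
            raise ValueError(f"{self.state} -> {target} is not allowed")
        self.state = target

ch = ChannelLifecycle("ibc-press-briefing")
ch.transition("started")  # goes live within the reserved window
ch.transition("stopped")
```
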
We have a concept of applications. The applications are the access manager, where you define users and groups; the channel manager; the compositor manager; different applications that encapsulate different tasks for better separation.

Role-based access control is very important. What does it look like? We create users like you would in a normal user management system. They have an email address, they are part of one or more groups, and they can be attached to certain roles. A group simply gathers people together so you can assign things collectively: if you have five people, you don't want to attach a policy to each individual user, you'd rather attach that policy to the group. We definitely encourage everybody to use groups, because that's much easier than attaching policies to individuals. And then we can define roles, and these roles can again be attached to groups or users. So you can be as complex as you want, and you can be as simple as you want. In the simplest setup, you have one group, one user, one role, and that's perfectly fine. But you can also have ten roles, multiple groups, and hundreds of users. That's perfectly fine too.

For instance, the European Parliament has these 30 live events in parallel per day, each with 32 audio languages. There is a plenary session that runs for four hours, and all the members of parliament who speak have an editorial team, assistants, who work for them and for them only. These editorial teams, often students, often assistants, get access to the plenary streams, and they have the right to stream a subset, just the few minutes their member of parliament is talking, to a social media target: to the Facebook page in Spain or the X page in Poland, whatever it may be. So you have very dedicated rules, and of course nobody else has access to that live stream. That allows a lot of people to work in parallel without stepping on each other's feet.

This is the stream distribution, which comes into play when we use the system not just to record but also to send to different targets, where we can define the different targets we want to stream to. What you see here: we have RTMP, we send CMAF to an output, we have HLS, and we have an RTMP failover. So we can do most of what you would expect in such a system.
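
A sketch of how such a target list with an RTMP failover might be represented; the URLs and field names are invented for illustration:

```python
# Sketch of multi-target distribution with an RTMP failover, mirroring
# the targets shown above (RTMP, CMAF, HLS, RTMP failover). The URLs
# and field names are invented for illustration.
targets = [
    {"protocol": "RTMP", "uri": "rtmp://social.example.com/live",  "role": "primary"},
    {"protocol": "RTMP", "uri": "rtmp://backup.example.com/live",  "role": "failover"},
    {"protocol": "CMAF", "uri": "https://origin.example.com/cmaf", "role": "primary"},
    {"protocol": "HLS",  "uri": "https://origin.example.com/hls",  "role": "primary"},
]

def rtmp_target(primary_ok: bool) -> dict:
    """Pick the RTMP endpoint: the primary normally, the failover on outage."""
    role = "primary" if primary_ok else "failover"
    return next(t for t in targets if t["protocol"] == "RTMP" and t["role"] == role)

print(rtmp_target(primary_ok=False)["uri"])  # rtmp://backup.example.com/live
```
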
This is a view of the studio. The studio is the application where we do what you might know from Singular Live: a graphical overlay system that we implemented ourselves. It's much simpler than Singular Live. If you have Singular Live and are comfortable with it, use it, no problem, we can integrate it. But if you don't have Singular Live, or you're not happy with their licensing scheme, you can use our own studio application. We can add video files, we can add graphics, we can have overlays and playlists, and then we can integrate everything into a long rundown for production. You can have input A, start with a live stream, switch over to an interstitial with an interview you prepared while there is action going on on stage, then switch over to the backup stream, then to another stage. So you can switch between these different sources and always add a graphical overlay, a lower third, a logo, or some other element on top.

So: G&L Playout Hub, beyond live, broadcast precision. We are building it for OTT engineers, broadcasters, and editorial teams. It's cloud-native, composable, and it works with hundreds of users and hundreds of streams. Thank you very much.

Voices Of Video:

You can find more details at netint.com.