Streaming personalisation has traditionally been treated as a discovery problem: recommendation engines analyse viewing history and suggest what to watch next. That approach still carries weight across most platforms and remains familiar to users, but it now explains only a small part of how platforms shape what audiences actually watch.
As AI tools move deeper into streaming operations, their influence is shifting earlier in the workflow rather than appearing only at the interface. AI is beginning to shape how content is tagged, clipped, reformatted, surfaced and distributed before it reaches the viewer. Personalisation is evolving from reacting to behaviour to determining how content is made available in the first place.
The shift is not straightforward, because in many media organisations the workflow itself creates constraints. Data pipelines, media processing systems and delivery layers have traditionally been designed to operate independently of one another, with handoffs rather than feedback loops. This makes it difficult to apply audience signals while they are still relevant, especially in live or near-live environments where timing determines value. As a result, insight often arrives too late to influence what is actually published.
Why richer metadata is changing the meaning of ‘discovery’
When those systems don’t connect, the limitations show in how content is understood from the outset. Most platforms still organise libraries around programmes or titles, because that is how content is commissioned and stored. But engagement rarely follows that structure for long: viewers respond to moments within a stream, not whole assets.
With more granular metadata, content can be described in detail as it unfolds, rather than being assigned a fixed set of labels at the asset level. Speech, visual content and event detection can be used to build a timeline of what is happening within a stream, making it possible to locate and extract sections of content without relying on manual review, which has traditionally been a bottleneck. This changes what ‘discovery’ actually means. Richer metadata goes beyond search, helping to support automated highlights, faster clipping and more precise content surfacing, giving platforms more ways to match content to viewer interest while that interest is still live.
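As a rough illustration of the idea, time-coded metadata can be modelled as a list of detected events against a stream timeline, which then makes segment extraction a simple query rather than a manual review task. This is a minimal sketch with an invented schema (`TimelineEvent`, `find_segments` and the labels shown are hypothetical, not any particular platform's API):

```python
from dataclasses import dataclass

@dataclass
class TimelineEvent:
    start: float        # seconds from stream start
    end: float
    kind: str           # detector type, e.g. "speech" or "event"
    labels: list[str]   # what the detector saw, e.g. ["goal", "replay"]

def find_segments(timeline: list[TimelineEvent], label: str) -> list[tuple[float, float]]:
    """Return (start, end) windows for every event carrying the label."""
    return [(e.start, e.end) for e in timeline if label in e.labels]

timeline = [
    TimelineEvent(12.0, 18.5, "speech", ["kickoff"]),
    TimelineEvent(304.2, 311.0, "event", ["goal", "replay"]),
    TimelineEvent(2710.4, 2716.9, "event", ["goal"]),
]

print(find_segments(timeline, "goal"))
# → [(304.2, 311.0), (2710.4, 2716.9)]
```

The point of the structure is that clipping and highlight generation become lookups over the timeline, so sections can be located the moment the detectors emit them rather than after an editor has scrubbed through the asset.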
A similar gap appears when content is prepared for different platforms. The expectation that video should move easily between broadcast, OTT and social environments is now well established, but the underlying processes have not always been able to keep up. Producing vertical or short-form versions often requires additional editing steps, which slows things down and limits scale.
Automated reframing and format conversion are starting to close that gap by working directly from the source feed. Instead of creating separate edits, systems can track focal points within a scene and adjust the framing accordingly. That makes it possible to generate alternative versions much closer to the point of distribution, which directly increases how widely and quickly that content can be used. For broadcasters and platforms, that means the same content can travel further across apps, social feeds and mobile-first environments that reward speed and native presentation.
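The core of focal-point reframing can be sketched in a few lines: given the detected subject position in a widescreen frame, compute a vertical crop window centred on it and clamped to the frame edges. This is a simplified illustration (the function name and the assumption of a single focal x-coordinate are hypothetical; real systems track subjects across frames and smooth the window over time):

```python
def vertical_crop(frame_w: int, frame_h: int, focal_x: float) -> tuple[int, int]:
    """Left/right edges of a 9:16 crop window at full frame height,
    centred on the detected focal point and clamped to the frame."""
    crop_w = int(frame_h * 9 / 16)               # 9:16 window width at full height
    left = int(focal_x - crop_w / 2)             # centre the window on the subject
    left = max(0, min(left, frame_w - crop_w))   # keep the window inside the frame
    return left, left + crop_w

# 1920x1080 source, subject detected near the right edge
print(vertical_crop(1920, 1080, 1800.0))
# → (1313, 1920)
```

Applied per shot from the source feed, this is what lets a vertical version be generated close to the point of distribution instead of as a separate edit.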
Balancing automation and human judgement
Of course, there are limits to how far this can be taken without oversight. Reframing based on motion or subject detection doesn’t always reflect the editorial intent behind a shot, particularly in scripted or highly produced content. In live scenarios, the priority tends to sit with speed and accessibility, which makes those trade-offs easier to accept, while in other areas, control matters more than automation.
That distinction becomes more visible as content moves further through the workflow, particularly when considering monetisation. Clips, highlights and alternative formats all create additional points of engagement, each with its own potential value. That expands available inventory, but it also introduces more complexity into how that inventory is monetised, particularly in advertising environments.
Systems that identify where attention is highest aren’t always connected to the systems that decide when and how ads are inserted. Without that key link, placement defaults to fixed patterns that ignore what is happening within the content. So while the data exists, it doesn’t travel far enough. This is one of the clearest examples of where the gap between insight and execution becomes commercially visible. Personalisation can only be useful commercially when audience insight moves quickly enough through the workflow to influence packaging, promotion and ad decisions. Otherwise, it remains descriptive rather than actionable, limiting its impact on yield and reducing the ability to monetise attention at the point it is most valuable.
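To make the missing link concrete, here is a minimal sketch of what connecting the two systems might look like: attention scores feed directly into the choice of ad break, so placement avoids the moments viewers care about most. The scoring scale, the `choose_ad_breaks` function and the threshold are all illustrative assumptions, not a real ad server's API:

```python
def choose_ad_breaks(attention: dict[int, float], candidates: list[int],
                     max_breaks: int, threshold: float = 0.5) -> list[int]:
    """Pick candidate timestamps (seconds) where measured attention is
    below the threshold, preferring the lowest-attention moments first."""
    eligible = [t for t in candidates if attention.get(t, 1.0) < threshold]
    eligible.sort(key=lambda t: attention[t])        # quietest moments first
    return sorted(eligible[:max_breaks])             # return in playback order

# attention score per candidate break point (1.0 = peak engagement)
attention = {300: 0.9, 900: 0.3, 1500: 0.8, 2100: 0.2, 2700: 0.45}
print(choose_ad_breaks(attention, [300, 900, 1500, 2100, 2700], max_breaks=2))
# → [900, 2100]
```

With fixed patterns, the breaks at 300 and 1500 seconds would fire regardless of what is on screen; with the signal connected, the same inventory lands where it costs the least engagement.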
Timing is the real test for live content
For sports broadcasters and rights holders, these constraints are already visible in day-to-day operations. Engagement moves quickly between live coverage, near-live clips and follow-on content. Each stage depends on how efficiently material can be identified and delivered, with even small delays becoming costly, especially when competing for attention on platforms where content is continually refreshed. As personalisation becomes more embedded in these processes, the question of control again becomes more pronounced. Without some level of editorial input, there’s a risk of narrowing the range of content reaching the audience.
In areas where context carries weight, such as news or premium sport, that balance tends to be managed deliberately. The role of AI therefore sits closer to execution, helping to apply decisions at scale, rather than replacing them. What emerges is a different form of personalisation which is less about refining recommendations and more about giving platforms a faster, more accurate way to respond to what is happening within their own content.
Its effectiveness depends on how information moves through the workflow, and whether systems can act on it in time. Where that alignment exists, audience insight stops being something platforms collect for recommendation and instead starts shaping how content is packaged and monetised. That way, personalisation completes its evolution from a discovery feature to a core part of how streaming services operate and generate value.