Akamai, the content delivery network helping to deliver the on-demand and live TV and other video content fueling OTT services, this month opened a 2,000-square-foot broadcast operations control center (BOCC) at its headquarters in Cambridge, Mass., to gain greater visibility into every step of the path from content ingest to consumer playback.
That’s no small task for a content delivery network with global reach and some 200,000 edge servers strategically positioned around the world to deliver an OTT viewing experience on par with traditional media delivery channels.
To build and run the company’s new BOCC, Akamai recruited former NBCUniversal Director of On-Air Engineering Matt Azzarto. Working together with an outside consultant, Azzarto, who is now Akamai’s director of media operations, put in place a master control operation to monitor the health of the content streams in about 100 days.
In this interview, Azzarto discusses how his 10 years at NBCUniversal helped him tackle the demands of creating and running the new BOCC, the custom software tools Akamai has developed for the project, the tougher demands of monitoring live OTT content delivery and what lessons over-the-air broadcasters can learn from the BOCC as they may soon begin preparing to support Internet content delivery as part of ATSC 3.0 operations.
Coming from TV, you’ve seen firsthand how important monitoring by exception is to master control operations, especially in a multichannel environment. Is that the strategy being employed in the Akamai BOCC?
Exactly. That is the monitoring-by-exception model, and it is exactly what we are doing.
In order to be effective in a high-density, multichannel environment — and we are talking about hundreds of channels here; we are designing for and targeting 400 to 500 channels in this one facility — your call to action comes from exceptions, the alerts that prompt the operator to take action.
So, quite literally, on our monitoring wall we have an Evertz MVIP system, which has the penalty box approach.
Internally, the BOCC Dashboard [an Akamai software tool] has a similar approach where you can see multichannel views in one screen, and on a component level [you] will get a tally status — red, yellow, green, orange with different priority levels associated with those colors and alerts behind them. We take action based upon the priority level of those alerts on the associated channels.
It’s literally calling your attention to the problem rather than requiring somebody to look at every single channel all of the time.
So, we have this polling methodology that we are applying so we can effectively do quality monitoring at scale in the operations center.
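The penalty-box approach described above can be sketched in a few lines: given per-channel alert colors, surface only the channels that are not healthy, most urgent first, so the operator's attention goes where the exceptions are. The colors, priority ordering and channel names here are illustrative assumptions, not Akamai's actual scheme.

```python
# Minimal sketch of monitoring by exception: filter a wall of channel
# statuses down to the "penalty box" of channels needing action.
# Priority order: lower number = more urgent. Illustrative only.
PRIORITY = {"red": 0, "orange": 1, "yellow": 2, "green": 3}

def channels_needing_action(statuses):
    """Return (channel, color) pairs that are not healthy ('green'),
    ordered most urgent first."""
    exceptions = [(ch, color) for ch, color in statuses.items()
                  if color != "green"]
    return sorted(exceptions, key=lambda pair: PRIORITY[pair[1]])

statuses = {"ch-101": "green", "ch-102": "red",
            "ch-103": "yellow", "ch-104": "green"}
print(channels_needing_action(statuses))
# [('ch-102', 'red'), ('ch-103', 'yellow')]
```

With hundreds of channels, only the handful in the returned list ever reach the operator's eyes; the healthy ones stay silent.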
How did you approach the design of the new Broadcast Operations Control Center?
What we are doing is taking a master control mentality into the CDN space.
I come from 10 years of working within a BOC [broadcast operations center] and NOC [network operations center] environment at NBCUniversal, both at 30 Rock and at their cable NOC in Englewood Cliffs, where I supported linear broadcast, the on-air systems and associated infrastructure.
That was everything upstream of Akamai before you hand off to the CDN — what we would call traditionally a channel chain or the system components that lead up to the HD encoder that then feeds your satellite distribution, or in this case IP distribution.
That is like the hinge point of where we meet from a systems engineering perspective. So our visibility starts there, and we take that telemetry at each link in the chain through the CDN to the end player, or the end user, which is done with our QoS Mon data product, which exists today.
Akamai also developed a set of software tools called Broadcast Operations Support Systems for the new BOCC. Tell me about the BOSS tools.
We are very excited about the tools. The BOSS dashboard provides component-level monitoring of the CDN. In addition to the BOSS dashboard, there are our media analytics and our QoS Mon tools.
In the middle of the workflow, or the core of the network, we have our Pivot Analysis tool and Stream Health Monitor tool.
Those are Akamai tools we use to collect data, to send alerts that we define specifically for these workflows, and to look at everything from encoder to end point — through streaming mid-gear, through ingress, through egress, through edge delivery and on to the end user.
So everything from first mile through the last mile.
To the last mile. How does that work?
We use beacons, which you can think of as smart agents at each link in the workflow.
What we are doing is polling [data] based upon CP code and Stream ID. Those are Akamai terms that stand for customer provision and stream identification, which is exactly what it sounds like. It’s the channel that constitutes the mezzanine level feed and all of the ABR [adaptive bit rate] derivatives.
At each link in the chain, we’re polling out [data] as we walk through from source to end user. Those beacons come back to the dashboard and refresh every minute. That gives us what we are calling real-time status — as close to real time as we can get, knowing that we are polling a large number of machines across a global network that totals 200,000 Akamai servers. Obviously, not every workflow touches all of them, but it gives you a feel for the sea of data we are polling from.
What gives us the ability to do that in a short interval is that, with CP codes and Stream IDs, we have a really good idea of the path of that media through Akamai — quite literally the machines it is traversing — and we can do a more targeted poll of data.
Historically, that data poll — in Akamai terms — was what we would call “Query Two Data.” It can take hours to get that data.
So we are doing this in a more targeted manner. It runs as parallel processes across each component simultaneously, and that allows us to get to the shorter polling rates and shorter status refresh in the Dashboard.
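The targeted, parallel poll described above can be sketched as follows: because the (CP code, Stream ID) pair identifies which machines a stream traverses, each link in the chain can be polled concurrently and the results merged into one dashboard snapshot per refresh interval. The component names and the `poll_component` placeholder are hypothetical; the real beacon queries are Akamai-internal.

```python
# Illustrative sketch: poll every link in a stream's delivery chain in
# parallel, rather than serially, to keep the status refresh short.
from concurrent.futures import ThreadPoolExecutor

COMPONENTS = ["encoder", "ingress", "mid-tier", "egress", "edge"]

def poll_component(component, cp_code, stream_id):
    # Placeholder for a real beacon query scoped to one component
    # of one stream (identified by CP code and Stream ID).
    return {"component": component, "status": "green"}

def dashboard_snapshot(cp_code, stream_id):
    """Poll all components concurrently; return one merged status view."""
    with ThreadPoolExecutor(max_workers=len(COMPONENTS)) as pool:
        results = list(pool.map(
            lambda c: poll_component(c, cp_code, stream_id), COMPONENTS))
    return {r["component"]: r["status"] for r in results}

print(dashboard_snapshot(cp_code=12345, stream_id="stream-1"))
```

Because each poll is scoped to the machines that stream actually traverses, the wall-clock cost per refresh is roughly that of the slowest single component query, not the sum of all of them.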
That’s how we get to the last mile; the only caveat is that you need to use our SDK in the player.
OTA broadcasters are on the brink of a major standards change to ATSC 3.0. The next-gen standard includes support for two-way IP connectivity to Smart TVs. What advice do you have for broadcasters who one day in the not too distant future may need to begin monitoring data flows delivered over the Internet to televisions?
That’s a great question.
The components in a CDN are different than the components in a master control, and there’s the rub. There are different components you have to alert on. You need to have a different course of action to fix things.
If you can employ a service such as the BOCC or a streaming monitoring service that has some expertise built in and some experienced staff, I would absolutely go that route — at least initially, until you can get your own folks up to speed.
That may sound a little like a sales pitch, but part of what we are doing here is offsetting some of the OPEX for broadcasters that may not want to build out their own OTT monitoring team, and providing 24/7, proactive eyes-on-glass monitoring so that we can get out in front of these things.
This [OTT content delivery] is a very real business growth opportunity for media providers. I would say embrace the change, but employ the experts when you can.
I noticed Akamai’s BOCC is supporting live content delivery. What challenges does that present?
We’ve done three major live events this year, all within the sports genre.
The challenge there is the fast ramp-up. It’s a more intense environment where real-time communication is key.
So we are providing that for our customers. We do status updates throughout the events. We create a war room scenario that operates in tandem with our operations center.
It’s really about being able to handle the burstiness of those events, both from an operational perspective and a network perspective.
With the concurrent user base, bandwidth spikes into the terabit range, and we need to be prepared for that across Akamai. It’s really about maintaining throughput from a bandwidth perspective and making sure you have your backups in place.
How did your experience at NBCUniversal prepare you for the demands at Akamai?
At the highest level, it’s about creating the best quality experience for your viewer and maintaining the integrity of the content as it flows through Akamai, through the CDN.
My experience in the broadcast world taught me a lot about maintaining quality, a healthy sense of urgency in response to issues, and preparedness for a lot of scenarios or big events like the Olympics.
We are trying to do the same sort of thing here at Akamai through the CDN.
I am trying to take what historically was [done in] hours and days and compress that down to minutes and seconds.