The tough challenges that over-the-top service providers pose to communication service providers (CSPs) create a pressing need for fast-paced innovation, at a pace that operators may not be used to.
Compared to mobile service providers, ISPs and cable operators have fewer applications in their arsenal to fight the “dumb pipe” threat. IPTV and interactive content delivery are perhaps their best bet in that fight.
Operators seeking to increase the value of their IPTV offering have worked with technology vendors to transform it from a video-only streaming solution into an all-device interactive experience platform.
However, OTT players are still ahead of CSPs in managing customer experience and consequently monetizing it.
Big Data is one of the key technologies that OTT players have embraced to do so: among other things, it lets them predict user behaviour and preferences in order to create attractive offerings with minimal investment.
What Big Data can bring to IPTV
Big Data for IPTV can provide value to CSPs in at least two areas, which are related to user experience and quality of experience. First, CSPs can improve IPTV user experience by understanding subscriber preferences and proactively recommending content, creating a “positive surprise” experience that will ultimately lead to increased loyalty and revenues.
Second, CSPs can use Big Data to simplify cross-domain network planning, system performance monitoring and root cause analysis, in order to offer the required quality of experience while minimising capex/opex.
We are all familiar with Amazon’s recommendation engine, which is based on item-similarity algorithms and rich metadata. IPTV recommendation engines share certain traits with such typical web content recommendation engines, but they are not the same.
In cases such as video-on-demand (VoD), metadata and user data are sparse and typically not enough for a content recommendation experience that would “pleasantly surprise” users.
IPTV recommendation engines, in order to live up to their promise, must take advantage of Big Data and Massively Parallel Processing technologies, so as to be in a position to create intelligence from a mixture of data sources.
First, they need to create “relevance & commonality” relationships by gathering data that may not be directly relevant to the target content. For example, the Electronic Program Guide (EPG) can be used to create metadata commonalities with VoD content.
Remote-control click-stream analysis is also a pivotal data source that can be used in both batch and real-time analysis. External data sources should also be used to enrich content metadata: in our example, VoD metadata can be further enriched with data from the Internet Movie Database (IMDb).
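As a rough illustration of the metadata-commonality idea, the sketch below scores VoD titles by the overlap of their tag sets, as they might look after EPG and IMDb enrichment. All titles, tags and function names are invented for the example; real engines use far richer features and models.

```python
# Minimal metadata-based similarity sketch: each title carries a tag set
# merged from EPG data and external enrichment (illustrative data only).

def jaccard(a, b):
    """Tag-set similarity: |intersection| / |union|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical VoD catalogue after metadata enrichment.
catalogue = {
    "movie_a": {"thriller", "heist", "1990s"},
    "movie_b": {"thriller", "noir"},
    "movie_c": {"comedy", "romance"},
}

def most_similar(title, catalogue):
    """Return the other catalogue item whose tags best overlap the target's."""
    target = catalogue[title]
    others = (t for t in catalogue if t != title)
    return max(others, key=lambda t: jaccard(target, catalogue[t]))

print(most_similar("movie_a", catalogue))  # → movie_b
```

The principle is the same at scale: shared metadata, wherever it was gathered from, is turned into a relevance score between items.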
Last but not least, IPTV and cable providers are in a position to utilise additional data sources that are either unavailable to OTT providers or hard for them to correlate. One of them is based on the fact that users nowadays keep their smartphone or tablet close to hand while they watch live or on-demand content.
They use these personal devices to search for information, send messages and post likes about what they are watching at any given moment.
Consequently, IPTV and cable providers offering TV and Internet service bundles are in a unique position to identify users who are using their device in a context related to what is being shown on their TV at any time. Provided there is user consent, this “Out of Band” data source can be an invaluable information stream to an IPTV recommendation engine.
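As a toy illustration of such second-screen correlation, the sketch below matches a subscriber’s message against what their set-top box was showing at the same moment. All subscriber IDs, timestamps and log formats are invented for the example.

```python
# Hypothetical "out of band" correlation: match a subscriber's
# second-screen message against what their set-top box was showing
# at that moment. All identifiers and data are illustrative.

from datetime import datetime

# What each subscriber's STB reported, as (start, end, programme title).
viewing_log = {
    "sub-42": [
        (datetime(2015, 3, 1, 20, 0), datetime(2015, 3, 1, 21, 0), "Cup Final"),
    ],
}

def correlate(subscriber, event_time, text):
    """Return the programme on air if the message mentions it, else None."""
    for start, end, title in viewing_log.get(subscriber, []):
        if start <= event_time <= end and title.lower() in text.lower():
            return title
    return None

print(correlate("sub-42", datetime(2015, 3, 1, 20, 30),
                "Watching the Cup Final, what a goal!"))  # → Cup Final
```

A match like this tells the recommendation engine that the subscriber is actively engaged with the programme, a signal an OTT-only provider cannot easily obtain.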
The choreography of massive data ingestion
Acquiring massive amounts of structured and unstructured data in real time as described above is one of two key functionalities of an intelligent IPTV recommendation engine.
Profiling and recommendation algorithms are the other critical aspect. Three types of algorithms need to work in cooperation for interesting, experience-augmenting recommendations to emerge:
- The first type are traditional data-mining algorithms that run in batch mode at regular intervals, generating classification and profiling data that is placed in fast NoSQL or in-memory stores.
- The second type of algorithms run in real time and are capable of processing streaming data typically generated by user activity. For every incoming subscriber event ingested into the system, these algorithms need to calculate a number of KPIs that, once correlated with the batch processed data, will affect the recommendation list.
- Finally, a set of “sanity rules” based on heuristics, online subscriber feedback and promotional/marketing directives are executed, making sure that no “inappropriate” or undesirable content is recommended.
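The cooperation of the three stages above can be sketched as follows. The profiles, scores, rules and names are all illustrative placeholders, not a real engine:

```python
# Simplified sketch of the three cooperating stages: batch-built
# profiles, per-event real-time scoring, and sanity rules.

# 1. Batch stage: profiles precomputed offline, kept in a fast store.
profiles = {"sub-1": {"drama": 0.8, "sport": 0.2}}

# 2. Real-time stage: each incoming event nudges the subscriber's KPIs.
def on_event(subscriber, genre, weight=0.1):
    prefs = profiles.setdefault(subscriber, {})
    prefs[genre] = prefs.get(genre, 0.0) + weight

# 3. Sanity rules: drop anything flagged as inappropriate or undesirable.
blocked = {"sub-1": {"movie_x"}}

def recommend(subscriber, catalogue):
    """Rank allowed titles by the subscriber's current genre preferences."""
    prefs = profiles.get(subscriber, {})
    return sorted(
        (t for t in catalogue if t not in blocked.get(subscriber, set())),
        key=lambda t: prefs.get(catalogue[t], 0.0),
        reverse=True,
    )

on_event("sub-1", "sport", 0.7)  # live click-stream nudges "sport" up
catalogue = {"movie_x": "drama", "movie_y": "sport", "movie_z": "drama"}
print(recommend("sub-1", catalogue))  # sport title first, movie_x filtered out
```

In a production system the batch store would be NoSQL or in-memory, the event stage a stream processor, and the rules a policy engine; the division of labour, however, is the one described above.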
This complex choreography of massive data ingestions from dispersed sources and combinations of batch and real-time calculations was made possible only with the advent of Big Data technologies.
IPTV service providers are now in a position to augment their user experience by recommending live and VoD content to their subscribers, sparing them cumbersome remote-control searches and channel zapping while up-selling long-tail and sponsored/promoted content.
Unfortunately, though, all the benefits of improving user experience with an intelligent recommendation engine, as described above, are immediately diminished if the IPTV platform or the underlying network does not deliver the required quality of experience.
To avoid mishaps, operators typically over-provision their platforms and networks to cope with unforeseen demand and faults. However, combined with ever-diminishing ARPUs and high-priced content, the cost of such over-provisioning makes the return on investment of IPTV deployments a difficult equation.
In theory, Network Management platforms ought to offer the tools needed to minimise network costs and predict problems. Nevertheless, legacy technologies confine current solutions to technology silos, requiring multiple platforms to monitor and manage the entire network. Furthermore, different systems are used to manage faults, performance, QoS and customer service level agreements.
Fragmentation and event consolidation across network domains
This fragmentation makes it hard, if not impossible, to correlate events that originate from different network domains. Yet real-world networks and applications do create such complex interdependencies, where events from seemingly unrelated systems may cause issues in any other part of the network or in any application.
Big Data technologies offer the foundation layer for a NOC (Network Operation Centre) platform which consolidates all events coming from all network elements and all relevant systems.
For example, in the case of an IPTV provider, the desired NOC platform is able to collect events in real time from all the elements in the access network (including even CPEs and STBs), all the elements in the core network and the IPTV platform itself, as well as data from Billing, CRM and other back office systems.
A Big Data-powered NOC offers a unified view of the operator’s network and all deployed applications as a whole. As a result, operations staff are able to track down the root cause of a problem faster and more easily, while also optimising and planning network and platform capacity, based not only on historic data, but also on predicted user behaviour.
Such a Big Data NOC is capable of calculating in real-time KPIs for every individual subscriber, network element and platform, predicting performance bottlenecks based on cross-domain analysis, while at the same time executing rules that proactively prevent such faults from happening.
In this manner, over-provisioning can be avoided, leading to substantial cost savings while maintaining top-class service levels and Quality of Experience.
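As a toy illustration of such cross-domain consolidation, the sketch below feeds events from different network elements into one stream and maintains a per-element error-rate KPI, so that a simple rule can raise an alarm before a fault escalates. The element names, event shapes and threshold are assumptions made for the example.

```python
# Toy sketch of event consolidation in a Big Data-backed NOC: events
# from all domains land in one stream, and a per-element error-rate
# KPI is kept up to date so a rule can fire proactively.

from collections import defaultdict

counters = defaultdict(lambda: {"ok": 0, "err": 0})
ERROR_RATE_THRESHOLD = 0.2  # assumed alarm threshold
MIN_EVENTS = 5              # avoid alarming on too few samples

def ingest(event):
    """Consolidate one event; return an alarm string if the KPI breaches."""
    c = counters[event["element"]]
    c["err" if event["severity"] == "error" else "ok"] += 1
    total = c["ok"] + c["err"]
    rate = c["err"] / total
    if total >= MIN_EVENTS and rate > ERROR_RATE_THRESHOLD:
        return f"ALARM {event['element']}: error rate {rate:.0%}"
    return None

# Events from the access network, STBs and the platform in one stream.
stream = (
    [{"element": "dslam-7", "severity": "info"}] * 4
    + [{"element": "dslam-7", "severity": "error"}] * 2
)
alarms = [a for a in map(ingest, stream) if a]
print(alarms)  # one alarm, raised as soon as the KPI crosses the threshold
```

A real NOC would run this logic over a distributed stream processor and correlate KPIs across domains, but the pattern of continuous per-element KPIs driving proactive rules is the one described above.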