
Straightening out QoE for the connected audience

Simen K. Frostad, Chairman, Bridge Technologies



Conditions in the world of technology change at an incredible rate, and manufacturers are always developing huge numbers of new ‘species’ to populate any new technological landscape. And of course, not all of these can survive.

Look at any area of the technology market, and you can see products or standards competing for dominance, just like species of animals and plants in the natural world. Some of these, inevitably, are not going to make it, and most people can think of at least one or two recent high-profile technology extinctions.

In the digital media services market, there’s a technology which has migrated from another area of industry with very different conditions, and in its new environment this technology is something of an evolutionary dead-end. The technology in question underpins Quality of Experience monitoring.

The roots of QoE are in the telecoms industry. Telcos wanted a way of testing how users felt about the quality of sound they heard on the line. There’s no absolute technical measure for the quality of sound in this context, so telcos were looking for the subjective response of subscribers: they were looking for the opinion of the ‘average user’.

But of course there is no such thing as ‘the average user’. So what the telcos did was to assemble panels of expert listeners (not average users) and ask them to provide a subjective assessment of service quality. The experts had to rank what they heard over the phone on a scale from 1 to 5, with problems graded in subjective terms. The data from these panels were then correlated, weighted and statistically manipulated to concoct an idea of what the ‘average user’ might be thinking about the service.
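At its core, the aggregation step is nothing more exotic than averaging the panel's ratings. The following is a minimal sketch, assuming the classic 1–5 absolute category scale; the function name and validation rules are illustrative, not any standardised implementation:

```python
# Minimal sketch of Mean Opinion Score (MOS) aggregation.
# Panel ratings use the classic 1-5 absolute category scale:
# 5 = excellent, 4 = good, 3 = fair, 2 = poor, 1 = bad.
def mean_opinion_score(ratings):
    """Average a panel's subjective 1-5 ratings into a single MOS."""
    if not ratings:
        raise ValueError("at least one rating is required")
    if any(r < 1 or r > 5 for r in ratings):
        raise ValueError("ratings must be on the 1-5 scale")
    return sum(ratings) / len(ratings)

panel = [4, 5, 3, 4, 4]
print(mean_opinion_score(panel))  # 4.0
```

The statistical machinery in real MOS pipelines is more elaborate (weighting, outlier rejection), but the essential move is the same: many subjective opinions are collapsed into one synthetic number.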

This methodology was codified as MOS (Mean Opinion Scoring), and in the telecoms industry it was an appropriate enough response to the prevailing conditions and QoE requirements. But then MOS got transplanted to different conditions altogether. And in the digital media landscape of the 21st century, conventional MOS-based QoE systems are not at all appropriate or effective, and certainly do not represent a logical approach to the current conditions.

In place of a panel of expert evaluators, QoE systems for digital media substituted an algorithmic simulation of the experts (who had themselves been simulating the opinion of the mythical ‘average user’). These algorithms checked for conditions like ‘jerkiness’, ‘blockiness’ and ‘blurriness’ – all nicely subjective terms. So on the face of it, the old telco-derived MOS methodology appears to have acquired modernity by becoming robotised.

But let’s imagine a channel screening two movies, one after the other. The first is a feature made in 2013, and state of the art in terms of audio and video quality. The other is a classic from the 1940s, in black and white, with some scratches and the other artefacts that occur in the transfer from an old film print. The algorithms will approve Captain Phillips, but they will score The Maltese Falcon down on the grounds of image and sound deficiencies. Is the data from that score useful to the service operator? It certainly is not, and actually it’s harmful, because false data from instances like this dilute monitoring accuracy and erode confidence in the system.

If the digital media industry were starting out now – in 2014 – to create a way of monitoring what the user is experiencing, would we seriously give any consideration to using a methodology designed to cook up a simulated opinion of the mythical ‘average user’?

Digital media is about hard data. Digital media providers deal in real, accurate data. These data are instantly accessible to them, given the right monitoring tools. There is no need to fabricate an imagined ‘user reaction’ to a digital media service. We know exactly what errors cause QoE problems, and we can track them empirically and with total accuracy. We know that dropped packets, jitter, freezes and phenomena like these result in loss of quality as experienced by the user, and we can construct a responsible, accurate and logical way of monitoring QoE based on our ubiquitous access to the real data.
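To make the contrast concrete, here is a minimal sketch of what an objective check looks like: it counts real, measurable transport errors rather than synthesising an opinion. The field names, sequence-number scheme and jitter threshold are assumptions for illustration, not any specific product's design:

```python
# Hedged sketch of an "objective QoE" check: flag measurable transport
# errors (lost packets, excessive jitter) instead of simulating a
# viewer's opinion. Threshold and data shape are illustrative only.
JITTER_LIMIT_MS = 50.0  # assumed alarm threshold

def check_stream(samples):
    """samples: list of (sequence_number, arrival_jitter_ms) tuples,
    in arrival order. Returns counts of objectively measured errors."""
    errors = {"dropped_packets": 0, "jitter_alarms": 0}
    prev_seq = None
    for seq, jitter_ms in samples:
        # A gap in consecutive sequence numbers means packets were lost.
        if prev_seq is not None and seq != prev_seq + 1:
            errors["dropped_packets"] += seq - prev_seq - 1
        # Jitter beyond the threshold is a hard, measured fact.
        if jitter_ms > JITTER_LIMIT_MS:
            errors["jitter_alarms"] += 1
        prev_seq = seq
    return errors

report = check_stream([(1, 5.0), (2, 12.0), (5, 80.0), (6, 4.0)])
print(report)  # {'dropped_packets': 2, 'jitter_alarms': 1}
```

Every number in that report corresponds to something that verifiably happened on the wire; nothing is inferred about what a hypothetical viewer might have felt.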

This approach is called Objective QoE. ‘Objective’ because it does away with the misguided attempt to simulate the subjective opinion of the average user. Objective QoE provides a completely factual evaluation of QoE, detecting operational content errors that affect the quality of the experience. The system generates analysis, alarms and reporting on this basis – real data from real services, as experienced by real viewers. There is nothing complex about this: you track what’s really happening, and you see what’s really happening.

There is simply no need for the complexity of conventional MOS-based QoE systems. In the case of QoE, the answer to complexity is simple: do away with a convoluted, error-prone opinion-simulation methodology derived from the entirely different needs of the telecoms industry. Stick to real data, tracked empirically from real services that the users are experiencing. The result is a completely dependable, accurate evaluation of QoE. And with that, providers can deliver a better service, and ultimately a more commercially successful service.

Simen K. Frostad will be appearing at TV Connect 2014 (March 18th-20th).

