
WineSOFT: Scale-out and Caching

Source: WineSOFT


Pre-Content Delivery World guest post from Jonathan Jaeha Ahn (Strategic Director, WineSOFT)

1. Principles for Designing a Successful Service

A successful service combines availability, speed and scalability. Kate Matsudaira emphasizes the same three principles in her article ‘Scalable Web Architecture and Distributed Systems’.

Availability

A service must always be available: ninety percent of users move on to a competitor when a failure occurs. Since a completely flawless system is not realistic, recovery has to be swift.

Speed

In business, time correlates with revenue, and high latency in e-commerce means a drop in sales. Every 0.1 seconds of added latency cuts revenue by roughly one percent, and forty-seven percent of Amazon.com customers expect a page to load on their screen within two seconds.

Scalability

A service has to remain reliable regardless of the number of users. Scalability covers scaling up, maintaining the service, expanding storage easily and increasing transaction processing capacity. Manageability is also an important factor: how easily problems can be diagnosed and understood, and how easily updates or modifications can be made.

Ideally, these principles are upheld with the least possible resources: time, training and money. As a successful service grows, it must handle more users and more content while still keeping to them, which can be difficult. A test or pilot service may start with a couple of servers. As the service begins to grow, the number of servers increases accordingly, and content renewal must be carried out meticulously, one server at a time. It is laborious, but up to this point managing the system is still feasible.

2. Growing Service and Content Delivery

As the service expands with even more users and data, managing each server individually becomes more difficult, so high-cost centralised storage (NAS, SAN, DAS, etc.) is introduced to hold the data in one system. Expensive but reliable storage makes content renewal easier, because servers can automatically pull updated content from it. But what happens when the service scale explodes? More servers pull more data from the storage and overload its delivery capacity. To resolve the overload, a new storage system with higher bandwidth is often considered, which can be extremely expensive, and investing an excessive share of the budget in storage is hard to justify.
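To make the overload concrete, here is a minimal back-of-envelope sketch in Python. The numbers (server counts, the per-server pull rate gbps_per_server) are purely illustrative assumptions, not figures from WineSOFT; the point is only that the bandwidth demanded from a single shared storage system grows linearly with the number of application servers pulling content from it.

    # Back-of-envelope sketch: aggregate bandwidth demanded from one shared
    # storage system as the number of servers pulling content from it grows.
    # All numbers are illustrative assumptions.

    def storage_load_gbps(servers: int, gbps_per_server: float) -> float:
        """Total bandwidth the storage must serve if every server pulls at the same rate."""
        return servers * gbps_per_server

    if __name__ == "__main__":
        for servers in (10, 50, 200, 1000):
            load = storage_load_gbps(servers, gbps_per_server=0.5)
            print(f"{servers:>5} servers -> {load:,.0f} Gbps demanded from storage")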

Data synchronization is often proposed as another solution. Copying the whole data set to every server is impractical, so the content to distribute has to be selected, and careful management is needed to keep that selection under precise control. Synchronizing a handful of servers is easy, but the more servers and files there are to sync, the harder it becomes. As the system grows, synchronization becomes slower, harder and even unstable. Content changes constantly, and every file to add or delete makes the process take longer. A larger service therefore inevitably requires a complicated synchronization management system, and a failure of that system can bring down the whole service. For bigger services, a simpler, quicker and more flexible way to deliver content to the servers is preferable.
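A similarly rough sketch, again with purely assumed numbers (seconds_per_copy, the file count), shows why push-style synchronization degrades: the work grows with the product of changed files and target servers, so doubling either one roughly doubles the time needed to bring every server up to date.

    # Rough model of push-style synchronization: every changed file must be
    # copied to every server, so total work ~ changed_files * servers.
    # seconds_per_copy is an assumed constant for illustration only.

    def full_sync_hours(changed_files: int, servers: int,
                        seconds_per_copy: float = 0.05) -> float:
        return changed_files * servers * seconds_per_copy / 3600

    if __name__ == "__main__":
        for servers in (5, 50, 500):
            hours = full_sync_hours(changed_files=10_000, servers=servers)
            print(f"{servers:>4} servers: ~{hours:.1f} hours to push 10,000 changed files")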

A service may be broken down into application and storage layers, as shown in the figure below.

The storage layer manages data at the core, with the application layer on top of it. The service logic is implemented in the application layer, which can also handle content delivery for a small number of customers. Together, the storage and application layers can make a decent early-stage service.

As the service expands, the shape of the budget changes. In the early stage, logic development consumes most of it; in the growth period, data management does, as the number of users increases. By the time the service matures, content delivery has become the main concern and the biggest obstacle to scaling out. How can the exploding bandwidth be covered?

3. The Edge: the Delivery Layer

Content delivery can become an enormous burden when the service reaches maturity. Online shopping malls already serve tens of billions of content items, and video services passed the terabyte scale long ago. The scalability of content delivery must therefore be considered in order to expand the service.

The edge is the outermost layer of the service, where users experience its speed and availability. Whatever the cost, requests from users must be answered: broken images or unavailable web pages on a user’s screen fatally damage the service’s reputation. If the edge layer can deliver content itself, the burden of content delivery on the application and storage layers is reduced.
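To illustrate the idea, here is a minimal sketch of an edge cache in Python. It is not WineSOFT’s product, just a toy built on assumptions: a hypothetical origin at ORIGIN_URL stands in for the application and storage layers, the port number is arbitrary, and repeated requests for the same path are answered from the edge’s in-memory cache, so the origin sees each object roughly once instead of once per user.

    # Toy edge cache: serve content from a local in-memory cache and fall back
    # to a hypothetical origin (the application/storage layers) only on a miss.
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ORIGIN_URL = "http://origin.example.com"   # hypothetical origin for illustration
    CACHE = {}                                 # path -> cached response body

    class EdgeCacheHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = CACHE.get(self.path)
            if body is None:
                # Cache miss: fetch the object from the origin once, then keep it at the edge.
                with urllib.request.urlopen(ORIGIN_URL + self.path) as resp:
                    body = resp.read()
                CACHE[self.path] = body
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Every later request for the same path is served here at the edge,
        # so the origin's delivery load no longer grows with the number of users.
        HTTPServer(("", 8080), EdgeCacheHandler).serve_forever()

A production edge would of course add expiry, eviction and consistency handling; the sketch only shows how cached delivery at the edge decouples user traffic from storage bandwidth.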

An efficient, easily expandable edge layer removes the need to expand the other, more expensive layers; expanding the storage and application layers instead is a poor solution because of its high cost and low efficiency.

WineSOFT will be exhibiting at next week’s Content Delivery World (5th–7th October 2015, Radisson Blu Portman Hotel, London).
