
Akamai etc.



From: Gordon Cook <cook@cookreport.com>
Subject: June 2000 Cook Report published

Akamai Pushes Web Content to the Edge, pp. 1-12

Rapid and reliable delivery of web-based content anywhere in the
world has become one of the most critical issues in enabling the
continued scaling of the Internet.  Web caching started out in
1996 as an attempt by many ISPs to store locally as much of the
content of the web as possible. Each ISP would make its own decisions 
about what content to fetch and how often to do it. This system 
created many problems for web content providers because they had no 
knowledge about what was cached where, by whom and with what 
frequency.  Furthermore, since caching distributed their content to 
many sites, they had no reliable way of reporting to their 
advertisers how many people had seen the material.  It was a hit or 
miss system that no one was happy with and one that created a major 
opportunity for others to fill.  A year ago Sandpiper and Akamai were 
the most talked about competitors. We note that since then Sandpiper 
has been acquired by Digital Island and has been focusing on the 
rapidly growing field of business-to-business e-commerce, leaving 
Akamai as the acknowledged leader in general content delivery.

In late 1999 Avi Freedman left his position as Vice President of 
Engineering for AboveNet to become Vice President of Network 
Architecture for Akamai. We publish a long interview where Avi 
explains in detail Akamai's extremely interesting business model. 
What Akamai does is enabled by a very significant new use for DNS 
that it has developed.

Akamai has its own network of DNS servers that keep in contact
with each other globally.  Akamai's other servers take the web
content of Akamai's customers and store it in hundreds and then
thousands of copies at the edge of the network as Akamai's global
network of servers continues to grow.  Akamai solves the problem of
the world wide wait by pushing content as close to the end user as
possible.
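
As an illustration of that new use for DNS, here is a minimal
Python sketch of the URL-rewriting idea behind this kind of
content delivery (the CDN hostname, the function name and the
sample HTML are all invented): a customer's embedded objects are
pointed at a hostname the delivery network's own DNS controls, so
that the network, not the origin site, decides which edge server
answers each request.

    import re

    CDN_HOST = "a1.g.example-cdn.net"  # hypothetical edge hostname

    def akamaize(html: str, origin_host: str) -> str:
        """Rewrite embedded-object URLs on origin_host to the CDN
        hostname, keeping the origin host in the path so an edge
        server knows where to fetch a copy on a cache miss."""
        pattern = re.compile(r'src="http://%s/([^"]+)"'
                             % re.escape(origin_host))
        return pattern.sub(r'src="http://%s/%s/\1"'
                           % (CDN_HOST, origin_host), html)

    page = '<img src="http://www.example.com/logo.gif">'
    print(akamaize(page, "www.example.com"))
    # <img src="http://a1.g.example-cdn.net/www.example.com/logo.gif">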

Akamai's DNS servers then perform a kind of global air traffic
control task, communicating network traffic conditions among
themselves in real time to determine which local server a user's
request should be sent to or, when regional traffic problems are
interfering with local reachability, how to retrieve the data from
a more distant server.
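
A toy Python sketch of that selection logic (all server names and
measurements invented): send the user to the nearest edge server
unless real-time measurements say it is unreachable or overloaded,
in which case fall back to the next-best, more distant one.

    EDGE_SERVERS = [
        # (name, round-trip ms from the user, load 0..1, reachable)
        ("edge-local",    12, 0.55, True),
        ("edge-regional", 40, 0.30, True),
        ("edge-distant", 110, 0.10, True),
    ]

    def pick_edge(servers, max_load=0.9):
        usable = [s for s in servers if s[3] and s[2] < max_load]
        if not usable:
            raise RuntimeError("no reachable edge server")
        # Nearest usable server wins; ties broken by lighter load.
        return min(usable, key=lambda s: (s[1], s[2]))

    print(pick_edge(EDGE_SERVERS)[0])  # -> edge-local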

Within a site Akamai figures out what data is not constantly updated. 
That data is migrated to Akamai's edge servers on a regular basis. 
The minimum amount of data possible is pumped from the host web sites 
to the edges, while each edge web server is kept constantly informed 
of the best path to get to the fresh host data it needs. Akamai 
charges each web site owner for the aggregate amount of its data 
delivered to end users anywhere in the Internet.  The table (at the 
end of the interview on page 12) shows how many networks receive what 
percent of Akamai's total aggregate of content traffic.  Its
intelligent overlay network of DNS servers that directs web content
lookups must keep very good statistics so that Akamai knows what to
bill each of the customers who pay to have their web sites included
in its distribution network.
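
A minimal Python sketch of the bookkeeping that billing model
implies (the log format and price are invented): every edge server
logs bytes delivered per customer site, and the totals are rolled
up so each site owner is billed on aggregate bytes served anywhere
in the network.

    from collections import defaultdict

    edge_logs = [
        # (edge server, customer site, bytes delivered)
        ("edge-nyc", "www.example.com", 1_200_000),
        ("edge-lon", "www.example.com", 800_000),
        ("edge-nyc", "shop.example.org", 450_000),
    ]

    def bill_per_customer(logs, dollars_per_gb=2.0):  # hypothetical price
        totals = defaultdict(int)
        for _edge, customer, nbytes in logs:
            totals[customer] += nbytes
        return {c: round(b / 1e9 * dollars_per_gb, 4)
                for c, b in totals.items()}

    print(bill_per_customer(edge_logs))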

Akamai has, in effect, created a virtual private overlay of the
Internet where, as much as possible, it keeps packets on a single
network and minimizes their having to flow upstream to transit from
one backbone to another (where most packet loss occurs) and then
move to the downstream customers of the other backbone.  This means
that Akamai can go to an ISP and ask to place its servers in the
ISP's key POPs for no co-location charge and no charge for
bandwidth used.  Why?  Because it can generally show every ISP how,
with Akamai servers locally, its customers will pull far less web
traffic across the ISP's backbone than they would if the ISP tried
to do its own caching or simply sent the packets back and forth to
the content provider's central servers.  In addition Akamai can
demonstrate how, in return for nothing more than some co-lo space
and bandwidth, the ISP will save bandwidth and give its customers
better service.
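
The arithmetic behind that pitch is easy to sketch in Python (all
numbers invented): with an edge server in the POP, only cache
misses and content refreshes cross the backbone, so backbone
traffic for that content shrinks roughly in proportion to the
cache hit rate.

    demand_gb_per_day = 500.0   # hypothetical demand for cached content
    hit_rate = 0.9              # hypothetical fraction served locally
    refresh_gb_per_day = 5.0    # hypothetical trickle of fresh content

    without_edge = demand_gb_per_day
    with_edge = demand_gb_per_day * (1 - hit_rate) + refresh_gb_per_day

    print(f"without edge server: {without_edge:.0f} GB/day on the backbone")
    print(f"with edge server:    {with_edge:.0f} GB/day on the backbone")
    print(f"savings: {1 - with_edge / without_edge:.0%}")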

Freedman also describes how Akamai must deal with the needs of its
customers' central servers, which are most often located at large
web hosting centers at major backbone sites.  In these cases he may
act as an advocate for the Akamai customer in procuring, if
necessary, some Akamai-owned and -operated short-haul links to
ensure that the customer has enough burstable bandwidth to meet
peak traffic periods.  Given his experience at AboveNet, which ran
this type of operation, he is well equipped to deal with the
web-based content provider, the web farm backbone operator and the
large number of downstream networks where delivery-oriented servers
can be placed as close to customers as possible.

Akamai has taken advantage of a narrow window of opportunity to 
become, in contrast to the older generation vertically integrated 
backbones, one of a small but growing number of content distribution 
networks. Such a network hopes to solve problems like the peering
dispute of the summer of 1998, when BBN balked at granting Exodus
free peering because Exodus dumped more traffic into BBN's network
than it took out.

Commoditizing Bandwidth, an interview with Andersen Consulting's
Lin Franks, pp. 13-18

Focusing on her role in ongoing efforts to develop a commodity
market in bandwidth, we interview Lin Franks of Andersen
Consulting.  Lin brings a non-Internet-protocol perspective to the
issue of bandwidth commoditization by explaining in some detail her
role in the commoditization of oil, natural gas and electricity.
In the mid-90s she went to work for Portland General Electric.
When Enron acquired Portland in early 1998 she met Stan Hanks.  The
interview recounts how she worked with Hanks to learn the
technology issues involved in trading bandwidth while teaching the
Internet technical people what skills were necessary for successful
commodities trading.

For the past year Franks has been working with Andersen Consulting
to develop a training program that will acquaint executives at the
large carriers with the issues behind bandwidth commoditization,
make certain that they understand the staffing that must be done to
get their companies ready to participate in bandwidth trading, help
them form an industry group that can agree on a benchmark and
standard contract, and coach them through the process of carrying
out the first trades.  She notes that a nascent industry trading
association had its first meeting in Washington, DC on March 23.

Although the price of bandwidth is declining, Franks is bullish on 
its future. She states that "the real disruptive event is the 
realization by those in the industry that, regardless of 
technological advances, regardless of their fever to lay fiber across 
the world, there is really no way that supply is going to 
consistently be able to keep up with the ungodly increase in demand 
at which we are looking. Whereas there may appear to be a supply glut 
right now, bandwidth demand will rise to fill it and will exceed the 
available supply. Then there will be another technological advance 
that will increase the available bandwidth supply. Then demand will 
rise again and so on and so forth."

<snip>