Akamai Technologies, Incorporated

Wikipedia 🌐 Akamai Technologies

Sources todo -

2002 book by Scot Hull - "Content Delivery Networks: Web Switching for Security, Availability, and Speed"

Source - [HB005H][GDrive]

Genuity, Akamai,

Saved Wikipedia (Dec 28 2020) - Akamai Technologies

[HK004L][GDrive]

Type: Public

Traded as: NASDAQ: AKAM

Industry: Internet / Cloud computing

Founded: 1998

Founders: Daniel Lewin, F. Thomson Leighton, Jonathan Seelig, Randall Kaplan

Headquarters: Cambridge, Massachusetts, U.S.

Key people: F. Thomson Leighton (CEO)

Revenue: $2.894 billion (2019)

Operating income: $549 million (2019)

Net income: $478 million (2019)

Total assets: $7.007 billion (2019)

Total equity: $3.658 billion (2019)

Number of employees: 7,650 (2018)

Website: akamai.com

Akamai Technologies, Inc. is a global content delivery network (CDN), cybersecurity, and cloud service company, providing web and Internet security services.[2][3] Akamai's Intelligent Edge Platform is one of the world's largest distributed computing platforms. The company operates a network of servers around the world and rents out capacity on these servers to customers who want their websites to work faster by distributing content from locations near the user. When a user navigates to the URL of an Akamai customer, their browser is directed to one of Akamai's copies of the website.

History

Akamai's founders entered the 1998 MIT $50K competition with a business proposition based on their research on consistent hashing, and were selected as one of the finalists.[4] By August 1998 they had developed a working prototype, and with the help of Jonathan Seelig and Randall Kaplan, they began taking steps to incorporate the company.[5]

In late 1998 and early 1999, a group of business professionals joined the founding team, most notably [Paul Lewis Sagan (born 1959)], former president of New Media for Time Inc., and [George Henry Conrades (born 1939)], former chairman and chief executive officer of BBN Corp. and senior vice president of US operations for IBM. Conrades became chief executive officer of Akamai in April 1999 and led the company from start-up to sustained profitability and positive free cash flow before turning the reins over to Sagan in 2005.[6][7][8] The company launched its commercial service in April 1999 and was listed on the NASDAQ Stock Market on October 29, 1999.[9]

On July 1, 2001, Akamai was added to the Russell 3000 Index and Russell 2000 Index.[10]

On September 11, 2001, co-founder [Daniel Mark Lewin (born 1970)] died in the September 11 attacks at the age of 31 when he was stabbed during his flight aboard American Airlines Flight 11, the first plane to crash into the World Trade Center.[11]

In 2005, [Paul Lewis Sagan (born 1959)] was named chief executive officer of Akamai. Sagan worked to differentiate Akamai from its competitors by expanding the company's breadth of services.[8] Under his leadership the company grew to $1.37 billion in revenues.[12] Sagan served as chief executive officer until co-founder and current CEO, [Frank Thomson Leighton (born 1956)], was elected to the position in 2013.[13] In July 2007, Akamai was added to the S&P 500 Index.[14]

Akamai's headquarters are in Kendall Square. Akamai started out in Technology Square and later expanded to multiple buildings in Cambridge Center. They consolidated their offices in a purpose-built building at 145 Broadway in December 2019.[15]

Technologies

Akamai Intelligent Edge Platform

The Akamai Intelligent Platform is a distributed cloud computing platform that operates worldwide, a network of approximately 275,000 servers deployed in more than 136 countries.[16] These servers reside in roughly 1,500 of the world's networks, gathering real-time information about traffic, congestion, and trouble spots.[16] Each Akamai server is equipped with proprietary software that uses complex algorithms to process requests from nearby users and serve the requested content.[17]

Content delivery process

Akamai content delivery to a user

The content delivery process begins when a user enters a URL into a browser. The URL triggers a DNS request, and an IP address is retrieved. With that IP address, the browser can contact the Akamai edge server directly for subsequent requests.[18] In a content delivery network structure, the mapping system translates the domain name of the URL into the IP address of an edge server that serves the content to the user.[17]
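You can observe this mapping step from any machine. Below is a minimal sketch in Python, using only the standard library: resolving a hostname served by Akamai typically walks a CNAME chain into one of the content-delivery domains listed later on this page (for example *.edgesuite.net or *.akamaiedge.net) before yielding the IP of a nearby edge server. The hostname below is a placeholder, not necessarily an Akamai customer; substitute any site known to use Akamai.

```python
import socket

# gethostbyname_ex returns (canonical_name, alias_list, ip_list).
# For an Akamai-served site, the canonical name is often something
# like a1234.dsce9.akamaiedge.net, and the alias list shows the
# CNAME chain through domains such as edgekey.net or edgesuite.net.
host, aliases, addresses = socket.gethostbyname_ex("www.example.com")
print("canonical name:", host)
print("alias chain:   ", aliases)
print("edge server IPs:", addresses)
```

Because the mapping system answers DNS queries based on where the resolver sits on the network, running the same lookup from two different locations will generally return two different sets of edge-server IPs.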

Akamai delivers web content over its Intelligent Platform by transparently mirroring elements such as HTML, CSS, software downloads, and media objects from customers' servers. The Akamai server is automatically chosen depending on the type of content and the user's network location. Receiving content from an Akamai server near the user allows faster downloads and less vulnerability to network congestion. Akamai claims to provide better scalability by delivering the content over the last-mile from servers close to end-users, avoiding the middle-mile bottleneck of the Internet.[19] The Download Delivery product line includes HTTP downloads for large downloadable objects, a customizable application for consumers, and analytics tools with metrics that monitor and report on the download process.[20]

Peer-to-peer networking

In addition to using Akamai's own servers, Akamai delivers certain content from other end-users' computers, in a form of peer-to-peer networking.[21][22]

Network Operations Command Center

Akamai's Network Operations Command Center (NOCC) is used for proactive monitoring and troubleshooting of all servers in the global Akamai network.[23] The NOCC provides real time statistics of Akamai's web traffic. The traffic metrics update automatically and provide a view of the Internet traffic conditions on Akamai's servers and customer websites.[24]

State of the Internet

The State of the Internet report is a quarterly report Akamai releases based on data gathered from its Intelligent Platform, which provides global Internet statistics such as connection speed, broadband adoption, attack traffic, network connectivity, and mobile connectivity.[25][26]

Visualizing the Internet

Akamai's data visualization tools display how data is moving across the Internet in real-time. Viewers are able to see global web conditions, malicious attack traffic, and Internet connectivity.[27] In addition, the net usage indices monitor global news consumption, industry specific traffic, and mobile trends.[28] Akamai also offers the Internet Visualization application, which allows users to view real-time data on their mobile device.[29]

OPEN Initiative

On October 9, 2013, Akamai announced its OPEN Initiative at the 2013 Akamai Edge Conference. OPEN allows customers and partners to develop and customize the way they interact with the Akamai Intelligent Platform. Key components of OPEN include system and development operations integration, real-time big data integration, and a single-point user interface.[30]
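As a rough illustration of what interacting with the platform through an API looks like, here is a minimal sketch of a signed OPEN API call using the edgegrid-python library. The credential values and the per-account base host are placeholders, not working values, and the endpoint path is given only as an assumed example of the Diagnostic Tools API.

```python
import requests
from akamai.edgegrid import EdgeGridAuth  # pip install edgegrid-python

# OPEN API requests are signed with Akamai's EdgeGrid scheme.
# Every credential value below is a placeholder for illustration.
session = requests.Session()
session.auth = EdgeGridAuth(
    client_token="akab-client-token-xxx",
    client_secret="client-secret-xxx",
    access_token="akab-access-token-xxx",
)
# Each API client is issued its own per-account base host (placeholder).
base_url = "https://akab-host-xxx.luna.akamaiapis.net"
resp = session.get(base_url + "/diagnostic-tools/v2/ghost-locations/available")
print(resp.status_code)
```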

Primary domains

Akamai Technologies owns about 60 other domains, but the primary domains it uses include:

Corporate

  • akamai.com – Akamai's domain

Content (delivery) domains

  • akamai.net

  • akamaiedge.net

  • akamaized.net

  • akamaihd.net

  • edgesuite.net

  • edgekey.net

  • srip.net[31][32]

  • akamaitechnologies.com

  • akamaitechnologies.fr

DNS servers

  • akamaitech.net

  • akadns.net

  • akagtm.org

  • akam.net

  • akamaistream.net

  • akamaiedge.net

  • akamaihd.net

  • akamai.com

Customers

On July 21, 1999, at Macworld Expo New York, Apple and Akamai announced a strategic partnership to build Apple's new media network, QuickTime TV (QTV), based on QuickTime Streaming Server.[33] Both companies later announced that Apple had made a $12.5 million investment in the company the previous month.[34] Apple continues to use Akamai as their primary content delivery network[35] for a wide range of applications including software downloads from Apple's Website, QuickTime movie trailers, and the iTunes Store.[36]

In September 1999, Microsoft and Akamai formed a strategic relationship to incorporate Windows Media technology in Akamai's FreeFlow service, as well as to facilitate the porting of the FreeFlow product to the Windows platform; this relationship exists to this day.[37] Microsoft Azure offers Akamai (along with Verizon) as options for its "standard" CDN service.[38]

Arabic news network Al-Jazeera was a customer from March 28, 2003, until April 2, 2003, when Akamai decided to end the relationship.[39] The network's English-language managing editor claimed this was due to political pressure.[40]

In June 2008, The NewsMarket teamed with Akamai to accelerate dynamic content and applications to global media ahead of the Beijing Olympics.[41]

The BBC iPlayer uses Akamai to stream its recorded and live programs, focused through an XML playlist.

The entire China Central Television website, including its streaming video, has been hosted on Akamai's edge servers since late 2009.[42][43] Hulu uses Akamai for hosting video.[44] MIT OpenCourseWare utilizes Akamai's EdgeSuite for its content delivery network.[45]

Trend Micro uses Akamai for their Housecall antivirus application.

Valve's Steam service uses Akamai's Content Delivery Network for storing screenshots and icons uploaded by users.

Akamai provided streaming services to ESPN Star (India) during the course of the ICC Cricket World Cup 2011.[46]

Rackspace's Cloud Files use Akamai's Content Delivery Network (CDN) for storing its customer's files.

Other customers include Adobe Systems, Airbnb, AMD, AutoTrader.com, COS, ESPN, The Great Courses, Hewlett-Packard, Hilton Worldwide, IBM, J. C. Penney, Jehovah's Witnesses, MTV Networks, NASA, National Academy of Recording Arts and Sciences, NBC Sports, Pearson Education, Red Bull GmbH, Red Hat, Sony PlayStation and Yahoo!.[47]

In August 2017, Nintendo launched its new game console and mobile gaming application simultaneously around the world via Akamai's Media Delivery Solutions.[48]

Acquisitions

  • On February 10, 2000, Akamai acquired Network24 Communications[49] for an aggregate purchase price of $203,600,000.[50]

  • On April 20, 2000,[50] Akamai acquired InterVU Inc.[51] for an aggregate purchase price of $2,800,000,000.

  • On July 25, 2000, Akamai acquired CallTheShots, Inc., for an aggregate purchase price of $3,700,000.[50]

  • On June 10, 2005, Akamai acquired Speedera Networks, Inc. for an aggregate purchase price of $142,200,000.[52]

  • On December 13, 2006, Akamai acquired Nine Systems, Inc.,[53] for an aggregate purchase price of $157,500,000.[54]

  • On March 13, 2007, Akamai acquired Netli Inc. (Netli),[55] for an aggregate purchase price of $154,400,000.

  • On April 12, 2007, Akamai acquired Red Swoosh Inc.[56] for an aggregate purchase price of $18,700,000.[57]

  • On November 3, 2008, Akamai acquired aCerno Inc.,[57] for an aggregate purchase price of $90,800,000.[58]

  • On June 10, 2010, Akamai acquired Velocitude LLC,[59] for an aggregate purchase price of $12,000,000.[60]

  • On February 7, 2012, Akamai acquired Blaze Software, Inc.,[61] for an aggregate purchase price of $19,300,000.[62]

  • On March 6, 2012, Akamai acquired Cotendo, Inc.,[61] for an aggregate purchase price of $278,900,000.[63]

  • On September 13, 2012, Akamai acquired FastSoft, Inc.,[61] for an aggregate purchase price of $14,400,000.[64]

  • On December 4, 2012, Akamai acquired Verivue, Inc.,[61] for an aggregate purchase price of $30,900,000.[65]

  • On November 8, 2013, Akamai acquired Velocius Networks[66] for an aggregate purchase price of $4,300,000.[67]

  • In February 2014, Akamai acquired cyber security provider, Prolexic Technologies[66] for an aggregate purchase price of $390,000,000.[68]

  • In February 2015, Akamai acquired Xerocole Inc., a domain name system technology company.[69]

  • On April 6, 2015, Akamai acquired cloud OTT IPTV service provider Octoshape,[70] for an undisclosed amount.[71]

  • On November 2, 2015, Akamai acquired Bloxx, a provider of Secure Web Gateway (SWG) technology,[72] for an undisclosed amount.[73]

  • On September 28, 2016, Akamai acquired Concord Systems, a provider of technology for the high performance processing of data at scale,[74] for an undisclosed amount.[75]

  • On October 4, 2016, Akamai acquired Soha Systems, an enterprise secure access delivered as a service provider,[76] for an undisclosed amount.[77]

  • On December 19, 2016, Akamai acquired Cyberfend, a bot and automation detection solutions provider,[78] for an undisclosed amount.[79]

  • On March 29, 2017, Akamai acquired SOASTA, a digital performance management company based in Mountain View, CA, for an undisclosed all-cash amount.[80]

  • On October 11, 2017, Akamai acquired Nominum, a carrier-grade DNS and DHCP provider and one of the major players in the creation of the modern DNS system, for an undisclosed all-cash amount.[81]

  • On January 24, 2019, Akamai acquired CIAM provider Janrain.[84]

  • In October 2019, Akamai acquired security software provider ChameleonX for $20 million.[82][83]

  • On October 27, 2020, Akamai acquired IoT and mobile security provider Asavie.[85]

Litigation

One of Akamai's patents covers a method of delivering electronic data using a content delivery network. Internet Web site proprietors (content providers) contract with Akamai to deliver their Web sites' content to individual Internet users. The patented method permits large files, such as video or music files, to be stored on Akamai's servers and accessed from those servers by Internet users. This increases the speed with which Internet users access the content from Web sites.

Unfortunately for Akamai, its patent was written in a way that called for or permitted actions by multiple persons or entities—such as the content provider customer and the company providing the CDN service.[86] Akamai's competitor Limelight chose to operate its allegedly infringing service in that manner—it performed most steps of the patented process and its customers performed a so-called tagging step. Under the interpretation of patent law at the time when Akamai decided to sue Limelight for patent infringement, a method patent could be held infringed only when a single actor performed all of the steps. The court therefore overturned a $40 million jury verdict in Akamai's favor.

Akamai initially lost the case, even taking it to the Supreme Court. The Supreme Court returned the case to the United States Court of Appeals for the Federal Circuit, however, with an invitation to re-evaluate its rule, if it chose to do so, that all the steps of a method had to be performed by a single actor for there to be infringement. On remand, the Federal Circuit considered the matter en banc (all active judges of the circuit) and modified its rule. It now held that a patent could also be directly infringed if "an alleged infringer conditions participation in an activity or receipt of a benefit upon performance of a step or steps of a patented method and establishes the manner or timing of that performance." On that basis, the Federal Circuit reinstated the $40 million jury verdict. It said that "Akamai presented substantial evidence demonstrating that Limelight conditions its customers' use of its content delivery network upon its customers' performance of" the steps that Limelight does not itself perform. This has been considered a substantial change in patent law. The case is known as Akamai Techs., Inc. v. Limelight Networks, Inc.

Controversies

The National Security Agency and Federal Bureau of Investigation have reportedly used Facebook's Akamai content delivery network (CDN) to collect information on Facebook users.[87] The report appears to show intelligence analysts intercepting communications between Facebook and its CDN provider, but does not indicate that Akamai was complicit in this process.

According to researchers from the Universities of Cambridge and California-Berkeley, University College London, and International Computer Science Institute-Berkeley, Akamai has been blocking access to web sites for visitors using Tor.[88][89]



Akamai overcomes the Internet's hot spot problem.

[Paul Lewis Sagan (born 1959)] says that Danny could leave the company to finish his PhD and publish his thesis, but then they'd have to kill him. Everyone else at [Akamai Technologies, Incorporated] is encouraged to complete their academic work, a slew of them at MIT, but Danny - him they'd have to off. He knows too much.

[Daniel Mark Lewin (born 1970)] is an algorithms guy, and at Akamai Technologies, algorithms rule. After years of research, he and his adviser, professor [Frank Thomson Leighton (born 1956)], have designed a few that solve one of the direst problems holding back growth of the Internet. This spring, Tom and Danny's seven-month-old company launched a service built on these secret formulas.

The Akamai solution recalls great historical shifts - discoveries of better, faster ways - like the invention of Arabic numerals, or the development of seafaring. Take the latter: For most of prehistory, people traveled exclusively over land. Then, around 5,000 years ago, they discovered that floating cargo on water was easier than lugging it across terrain - no mountains to climb, no roads to negotiate. Seafaring transformed most of the world's surface from unusable space into a vast ubiquitous shortcut, a portal to faraway lands and great riches. Natural harbors blossomed into sophisticated cities. And although societies continued exchanging goods with their neighbors over land, the first world powers, the first empires, commanded the seas.

In some ways, sending information around the traditional Internet resembles human transport, pre-Phoenicia. The Net was originally designed like a series of roads connecting distinct sources of content. Different servers, physical hardware, specialized in their own individual data domains. As first conceived, an address like nasa.gov would always correspond to dedicated servers located at a NASA facility. When you visited www.ksc.nasa.gov to see a shuttle launch, you connected to NASA's servers at Kennedy Space Center, just as you traveled to Tivoli for travertine marble instead of picking it up at your local port. When you ran a site, your servers and only your servers delivered its content.

This routing system worked fine for years, but as users move to fatter pipes, like DSL and broadband cable, and as event-driven supersites emerge, the protocols tying information to location cause a bottleneck. Back when The Starr Report was posted, Congress' servers couldn't keep up with hungry surfers. When Victoria's Secret ran its Super Bowl ad last February, similar lusts went unsated. The Heaven's Gate site in 1997 quickly followed its cult members into oblivion. And when The Phantom Menace trailers hit the Web this spring, a couple of sites distributing them went down.

This is the "hot spot" problem: When too many people visit a site, the excessive load heats it up like an overloaded circuit and causes a meltdown. Just as something on the Net gets interesting, access to it fails.

For more time-critical applications, the stakes are higher. When the stock market lurches and online traders go berserk, brokerage sites can hardly afford to buckle. In retail, slow responses will send impatient customers clicking over to the competition. Users may have Pentium IIIs and ISDN lines, but when a site can't keep up with demand, they feel like they're on a slow dialup. And users on relatively remote parts of the network - even tech hubs like Singapore - often suffer slow responses, not just during peak traffic.

ISPs address this problem by adding connections, expanding capacity, and running server farms to host client sites on many machines, but this still leaves content clustered in one place on the network. Publishers can mirror sites at multiple hosting companies, helping to spread out traffic, but this means duplicating everything everywhere, even the files no one wants. A third remedy, caching, temporarily stores copies of popular files on servers closer to the user, but out of the original site's control. Naturally, site publishers don't like this - it delivers stale content, preserves errors, and skews usage stats. In other words, massive landlock.

So in 1998, with their new algorithms in hand, [Frank Thomson Leighton (born 1956)] and [Daniel Mark Lewin (born 1970)] found themselves facing a sort of manifest destiny. The Web's largest sites were straining to meet demand - and frequently failing. Most needed better traffic handling, a way to cool down hot spots and speed content delivery overall. And Tom and Danny had conceived a solution, a grand-scale alternative to the Net's routing system.

In September, a California company called Sandpiper Networks introduced a service perilously similar to what they'd envisioned, but Tom and Danny's load-balancing solution was one step more radical, and the problem was plenty big for two contenders. [Paul Lewis Sagan (born 1959)], a content guy from Time Warner's Pathfinder, signed up to lead them, and the Cambridge, Massachusetts-based startup began building its own globe-spanning network of servers that would handle Web content in a brand-new way.

It worked. With Akamai's FreeFlow service, all content pours through the entire network, instantly responding to demand, ebbing and flowing as needed, changing routes and locations in response to current conditions. Its ocean of servers connects to the terra firma of the rest of the Net at scores of ports, all of which move data more quickly as conditions continually change.

No Fixed Address

In January, Akamai began running beta versions of FreeFlow, serving content for ESPN.com, Paramount Pictures, Apple, and other high-volume clients. (Akamai withholds the names of the others, but you can tell if a site is using the service by viewing the page source and looking for akamaitech.net in the URLs. A cursory test reveals "Akamaized" content at Yahoo! and GeoCities.)

ESPN.com and Paramount have been good beta testers - ESPN.com because it requires frequent updates and is sensitive to region as well as time, and Paramount because it delivers a lot of pipe-hogging video. On March 11, while ESPN was covering the first day of NCAA hoops' March Madness, Paramount's Entertainment Tonight Online posted the second Phantom Menace trailer. FreeFlow handled up to 3,000 hits per second for the two sites - 250 million in total, many of them 25-Mbyte downloads of the trailer. But the system never exceeded even 1 percent of its capacity. In fact, as the download frenzy overwhelmed other sites, Akamai picked up the slack. Before long, Akamai became the exclusive distributor of all Phantom Menace QuickTimes, serving both of the official sites, starwars.com and apple.com.

So how does it work? Companies sign up for Akamai's FreeFlow, agreeing to pay according to the amount of their traffic. Then they run a simple utility to modify tags, and the Akamai network takes over. Throughout the site, the system rewrites the URLs of files, changing the links into variables to break the connection between domain and location. On www.apple.com, for example, the link www.apple.com/home/media/menace_640qt4.mov, specifying the 640 x 288 Phantom Menace QuickTime trailer, might be rewritten as a941.akamaitech.net/7/941/51/256097340036aa/www.apple.com/home/media/menace_640qt4.mov. Under standard protocols, a941.akamaitech.net would refer to a particular machine. But with Akamai's system, the address can resolve to any one of hundreds of servers, depending on current conditions and where you are on the Net. And it can resolve a different way for someone else - or even for you, a few seconds later. (The /7/941/51/256097340036aa in the URL is a fingerprint string used for authentication.) This new method is more complicated, but like modern navigation, it opens new vistas of capacity and commerce.
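A minimal sketch of that rewriting step, in Python. The real FreeFlow utility and its fingerprint format are proprietary; the regular expression, the serial number handling, and the stand-in hash below are assumptions for illustration only.

```python
import hashlib
import re

def akamaize(html, customer_host, serial=941):
    """Rewrite embedded-object links so the hostname no longer pins
    the content to one machine (illustrative sketch; not FreeFlow)."""
    def rewrite(match):
        path = match.group(1)
        # Stand-in fingerprint; the real string is an authentication token.
        fingerprint = hashlib.md5(path.encode()).hexdigest()[:16]
        return (f'src="http://a{serial}.akamaitech.net/7/{serial}/51/'
                f'{fingerprint}/{customer_host}{path}"')
    # Rewrite only embedded objects (images, movies); leave HTML links alone,
    # matching the article's point that plain HTML stays on the home server.
    return re.sub(r'src="(/[^"]+\.(?:gif|jpg|mov))"', rewrite, html)

page = '<embed src="/home/media/menace_640qt4.mov">'
print(akamaize(page, "www.apple.com"))
```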

Sandpiper remains Akamai's only direct competitor. In April it signed a deal with AOL and Inktomi to begin serving their sites and incorporating their servers into the Sandpiper network. But a month out of the starting gate, Akamai was running neck and neck with its rival, both companies promising more than 1,000 servers by the end of the year.

Academic Hot Spot

Partly because it arrived second, Akamai has had to differentiate its product. The company has done this not only by focusing on fine points of the technology, but also by positioning itself as the intelligent solution. FreeFlow is the masterwork of leading scientists from MIT.

What's more, the scientists of Akamai are algorithms people. Whereas network hackers tend to be masters of improvisation, spotting local problems and using intuition and quick experimentation to fire off fixes, algorithms people tend to be slower and more rigorous, examining and proving everything along the way. They start with the most pared-down problems - sorting numbers, stacking rectangles, connecting dots - and build up to more complicated situations. They study how efficiently computer programs run under all conditions - the best, average, and worst-possible cases - as the mass of processors, connections, and information becomes infinitely large. It may take them a while to find a solution they like, but when they do, they know it will work, both on paper and in any reality.


So network growth doesn't scare algorithms people; they always push things to infinity anyway.

You can trace Akamai's genesis to LCS, MIT's Laboratory for Computer Science, where Tim Berners-Lee's World Wide Web Consortium and [Frank Thomson Leighton (born 1956)]'s algorithms group both have their offices. In 1995 Tim asked Tom if he thought distributed algorithms could reliably solve the hot spot problem, and Tom's algorithms posse was intrigued. The problem raised interesting theoretical issues, the group's forte, while at the same time the graphs of nodes and edges they drew on whiteboards represented actual machines and Net connections. Real-world relevance! Several semesters and ideas later - some published, some still proprietary - Tom and Danny had blueprints for server software that would cool down traffic everywhere. But better, they had formally proven that most of their algorithms were optimal. In other words, different solutions might give equally good results, but none could ever possibly do better.

For their nonoptimal algorithms, Tom and Danny demonstrated that the problems were, in computer-science shorthand, "hard" problems. This means there are no major shortcuts to finding the best solution; the problem is inherently difficult, and you just have to do the best you can with finite time and computational resources.

Lastly, Tom and Danny knew with total certainty that, given their descriptions of the hot spot problem and the workings of the Net, the larger the network grew, the better their solution would perform. They not only had a solution, they had a solution that was literally - demonstrably - unbeatable.

La Sauce Est Tout

The name Akamai means "clever" in Hawaiian, or more colloquially, "cool." At Danny's suggestion, the team found it by trying out words in an online English-Hawaiian dictionary. In a sense, FreeFlow is an attempt to put smartness into the Web page, the URL, and the network itself.

Although at first Akamai's founders imagined selling their traffic-calming solution as server software - ISPs would buy it and install it themselves to boost performance - they soon realized they should be a service for publishers, content providers, and ecommerce companies of all kinds, not a software company. With Akamai's service, publishers could forget about servers and ISPs, and concentrate on content. Akamai would run its software on its own broadly deployed server network and sell guaranteed fast delivery, subscription style. The idea was to work with as many ISPs as possible to create a new layer of infrastructure on top of the Net, a fluid system that would run everywhere and reach out to remote populations.

Small ISPs operate at a single location; the big ones have their own subnetworks encompassing multiple POPs in different locations. Both are basically facilities where a bunch of servers share one or more network connections. Akamai calls these facilities "regions," and to get close to all users, the company tries to locate its servers in as many regions as it can. Singapore's SingNet has Akamai servers in its single region, while Teleglobe has them all over. Partnering ISPs small and large benefit from improved capacity, while their users get faster delivery of FreeFlow sites. (The company targets content providers and ecommerce sites, but hopes to cultivate an Intel-style status among consumers as well, who would look for ISPs with "Akamai Inside.")

So Akamai servers are in diaspora, but they remain clannish. They keep in constant contact with each other all over the map, speaking their own special dialect of Linux. Each region has one mapping server and one or more content servers. All content servers, no matter where they are, are eligible to serve any content. The mapping servers monitor the local state of the network: How fast are the current connections to neighboring regions? Which connections seem to have gone down completely? They figure out which servers should carry which files, and then how to evenly distribute the hits for a requested file among the servers that carry it.

Web pages themselves break down into units: the HTML page plus each embedded file it contains - images, animations, sounds, video. Akamai's system wisely leaves the HTML alone and scatters only the embedded elements, the rationale being that these larger files cause the traffic jams, while plain HTML is fast and cheap. In a big cafeteria, FreeFlow might pick up your meat loaf, green beans, and rice pudding at three different counters, calculating which steam tables look hotter and where the lines are long. But your tray, the HTML, always comes from the same place. Keeping the HTML on the home server also keeps the user database and customization scripts in one place, where publishers want them.

When a user requests a file, the mapping servers decide on a content server in two stages: They choose first the best region and then the content server within that region. In computer-science terms, the first stage represents a classic "min-cost" flow problem, where the cost associated with each hop between neighboring regions - or how easy it is for traffic to flow between them - is weighted. As traffic conditions change, the mapping servers update these weights and continually find a low-cost route based on a user's place on the network. As Akamai and Sandpiper both know, this is an expensive, "hard" calculation. The mapping servers have to be fast.
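To make the first stage concrete, here is a minimal sketch of choosing the cheapest region over a weighted graph of inter-region hop costs. It uses plain Dijkstra shortest paths as a stand-in; Akamai's actual mapping is proprietary and, as the article says, closer to a true min-cost flow computation over live measurements. All region names and weights are invented for illustration.

```python
import heapq

def cheapest_region(costs, user_region, candidate_regions):
    """Pick the candidate region with the lowest total hop cost from
    the user's region. `costs` maps region -> {neighbor: hop_cost},
    where hop costs are the weights the mapping servers keep updated
    from current traffic conditions."""
    dist = {user_region: 0}
    heap = [(0, user_region)]
    while heap:
        d, region = heapq.heappop(heap)
        if d > dist.get(region, float("inf")):
            continue  # stale heap entry
        for nbr, w in costs.get(region, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return min((r for r in candidate_regions if r in dist),
               key=lambda r: dist[r], default=None)

# Invented example: a Singapore user, with two regions holding the file.
costs = {"sg": {"jp": 3, "us-west": 8},
         "jp": {"us-west": 4},
         "us-west": {"us-east": 5}}
print(cheapest_region(costs, "sg", {"us-west", "us-east"}))  # -> 'us-west'
```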

Within a region, for the second stage of routing, the system divides the traffic evenly among all the servers using "consistent hashing," a wonderful double-randomized hashing algorithm that Danny invented, earning him MIT's 1998 Morris Joseph Lewin Award (no relation) for best master's thesis. Simple hashing algorithms, which assign objects to locations the way a card dealer deals out hands, break down completely when players drop out or come in; the original formula for who will receive what card, which relies on knowing the number of players, no longer works. But consistent hashing splits up and mixes the assignments so thoroughly, while still using a fairly simple formula, that if locations drop out and throw things off, the correct location will still be close by. As more server problems and network glitches arise, the algorithm has to do more second-guessing, but it achieves the best results possible in an unsteady environment.
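The published form of consistent hashing is easy to sketch. Below is a minimal ring with virtual nodes in Python; the hash choice, replica count, and API are illustrative assumptions, not Akamai's proprietary variant. The key property the article describes survives: when a server drops out, only the objects that hashed nearest to it move.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Servers and object names hash onto the same ring; an object is
    served by the first server clockwise from its hash point."""

    def __init__(self, servers, replicas=100):
        self.replicas = replicas      # virtual nodes smooth out the load
        self.ring = []                # sorted list of (point, server)
        for s in servers:
            self.add(s)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, server):
        for i in range(self.replicas):
            self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()

    def remove(self, server):
        self.ring = [(p, s) for p, s in self.ring if s != server]

    def lookup(self, obj):
        idx = bisect_right(self.ring, (self._hash(obj), "")) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["srv-a", "srv-b", "srv-c"])
print(ring.lookup("menace_640qt4.mov"))
ring.remove("srv-b")                  # only srv-b's objects get remapped
print(ring.lookup("menace_640qt4.mov"))
```

Contrast this with naive modulo hashing (hash(obj) % number_of_servers), which remaps almost every object whenever the server count changes; the ring keeps reassignments local, which is exactly the property that matters when servers and network links are constantly dropping in and out.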

Routing and spreading the hits intelligently is important, but it isn't the whole solution. The real hot spot cooler is balancing the content load - determining which servers should have which files in the first place, before they fulfill requests routed to them. Akamai's solution replicates the popular files on multiple servers, spreads total loads evenly, and minimizes copying files around. The ingenious algorithm underlying this process is Akamai's secret sauce, and as the French say, the sauce is everything.


Cambridge Dons

Underlying algorithms aside, a private network service business is a different beast entirely from a software business - for one thing, it requires far more money.

Discovered by Battery Ventures through MIT's student-run "$50K" Entrepreneurship Competition (the Akamai team failed to win the competition's whopping $50,000 grant for starting a new business, though the winner turned out virtuous and offered to split the take with the four other finalists), Akamai later secured funding from several angel investors. Among them were Gil Friesen (formerly of Classic Sports) and Art Bilger (formerly of Apollo Fund and New World Communications). Polaris Venture Partners of Boston and Seattle also joined, and by the end of 1998 the startup had more than $8 million in first-round funding.

The talent snowballed. Battery Ventures' Todd Dagres recruited [Paul Lewis Sagan (born 1959)] as chief operating officer last January. In March David Goodtree joined as head of marketing, after years of studying the IT industry at Forrester Research. In April [George Henry Conrades (born 1939)] became Akamai's CEO and chair. At [BBN Technologies, Incorporated] George had overseen the acquisition of [Genuity Incorporated], authors of the traffic-management software tool Hopscotch, and when he learned about Akamai, it seemed the perfect big-thinking, out-of-the-box idea: a Hopscotch for the entire Internet. Everyone believed.

Today Akamai employs about 130 people. Many are MIT students, both graduate and undergrad. But alongside them in the trenches is a surprising number of actual faculty on temporary leave - full professors and associate professors from places like MIT, Carnegie Mellon, and UC Berkeley.

The Sandpiper Approach

Akamai and Sandpiper are not the first in their field of distributed traffic management. Older systems, many still available and useful, perform sophisticated routing and load balancing over groups of servers installed on a single subnetwork. Companies like Cisco (DistributedDirector), GTE Internetworking (which acquired [BBN Technologies, Incorporated] and with it [Genuity Incorporated]'s Hopscotch), and Resonate (Central Dispatch) have been selling such solutions as installable software or hardware. Digex and GTE Internetworking (Web Advantage) offer hosting that uses intelligent load balancing and routing within a single ISP. These work like Akamai's and Sandpiper's services, but with a narrower focus.

But only Akamai and Sandpiper are selling the service of whole server networks spanning numerous ISPs, and only Akamai and Sandpiper use the trick of rewriting URLs as a hook into the alternative system.

Both companies' services are powerful, but they're not identical. Customers of Sandpiper's Footprint service specify which part of their content the system should handle by defining rules through a user interface, rather than by adding tag lines to page source. Sites project expected traffic levels ahead of time and pay for levels of service, rather than paying by the meter.

Footprint users can choose from many content distribution options - some simple, some advanced - for different parts of a site, while FreeFlow optimizes everything automatically. Footprint also distributes HTML, not just embedded files, spreading database information through the network. Akamai leaves HTML at home.

Under unusual circumstances, Footprint may be less bulletproof than FreeFlow, but it has proven itself well. It kept The Starr Report available on the Los Angeles Times site, when many others buckled, and served Intuit's site reliably all through tax season. As long as speed is scarce on the Net, the two companies are going to find fans.

Both companies, if they continue to grow, will route traffic more efficiently over the Net as a whole, increasing delivery speeds for subscribers and nonsubscribers as well. But there's a downside. Content delivered via these subscription-based networks will make content routed by the old, free Internet seem slow. A page that loads in six seconds seems fine until you visit one that loads in four. This effect will be magnified as people upgrade their connections to DSL and broadband cable. With fatter pipes, users will demand more information-intensive experiences, and the newly available last-mile bandwidth will be filled up with fast, dazzling content from supersites, served from networks like Akamai. ESPN.com will appear instantly, but you'll have to wait an age for anything homegrown or poorly financed.

Others argue, however, that the Internet is already a tiered environment, where cash-rich content providers can add more and more hardware to improve delivery, while your cousin's homepage, with no traffic management resources behind it, is already slow. Akamai and Sandpiper just allow more publishers to tap into premium services.

Either way you look at it, the stakes are high. The winner, if there is one, will have its hand in the major revenue-generating sites on the Web. More than any other company in the medium's short history, the winner will own the Net - or at least the parts of it that pay.

Sea Change

Akamaians compare their company to FedEx, delivering content faster and more reliably than the old USPS. It's not a bad comparison - as good, at least, as the glorious advent of sea travel thousands of years before Christ - but it raises the specter of huge capital needs and about 10 more years till ubiquity.


Yet in less than a year, the little-known company is well on its way toward global domination. Akamai wants to be universal, as widespread as the Net itself, with, in Paul's terms, "a server in every POP." If the company has its way, every computer on the Net will connect to Akamai servers, which will push and pull content around with the tides, constantly running calculations on the turbulence and fluid dynamics of information. As each new site and ISP signs on, Akamai's ocean will swell, carrying ships ever closer to their final goals. For now at least, the ordinary user ought to be glad that the company in charge of the Earth's oceans wants only to give each of us beachfront property.


2000 (April) - whitepaper on FreeFlow

http://www.theether.org/cs199/papers/freeflow.pdf



2000 (April 17) - NYTimes - "TECHNOLOGY; 2 Companies Take Separate Paths To Speed Delivery of Web Pages"

By Lawrence M. Fisher / April 17, 2000 - Source - [HN01F7][GDrive]

Here's news: Some of the Internet's uncertainties have nothing to do with stock prices. Take the vagaries of surfing the Web, for example.

Anyone who has ever watched a complex Web page paint itself onto the computer screen, image by grudgingly slow image, can attest that the World Wide Web should be faster. On that simple proposition there is virtually unanimous agreement.

Where there is disagreement is over how to accomplish this worthy goal -- and how to get paid for doing it. On that, the Web-performance industry splits into two factions, whose leaders are divided by technologies and an entire continent: Inktomi, based in Foster City, Calif., and Akamai, in Cambridge, Mass.

Inktomi, with America Online as a trophy customer, aims to make Web traffic flow faster by storing or "caching" the most frequently requested material on network server computers nearest to the people who request it. Last year, for example, when a bazillion or so AOL users clicked on the Starr report, there were no delays or crashes. That is because the readers were not linking to the government server, but to a battalion of Inktomi caches that duplicated and delivered the document.

Akamai, which counts Yahoo among its big clients, takes a different approach to the same problem, with a technique called content distribution. When a user in Singapore, say, clicks on a popular page in Yahoo, only the first request goes to Yahoo's server in Palo Alto, Calif.; the balance of the page is then delivered from an Akamai server with the shortest, fastest connection to the person in Singapore.

Inktomi and Akamai each have a pack of competitors, snapping at their heels and hoping to bite off bigger and bigger chunks of the Web-performance market, which is expected to grow from about $4.5 billion last year to $25 billion in 2003, according to Andrew C. Brosseau, an analyst with S. G. Cowen.

Officially, Inktomi and Akamai executives assert that they really do not compete with each other and have little in common -- which of course means they compete vigorously and have more in common than either will let on.

"Akamai is just doing one of a multitude of things you would want to do to interact with content," said David C. Peterschmidt, president and chief executive of Inktomi, whose stock price is up nearly 14 percent so far this year, despite the market's recent turmoil. "Caching allows a whole set of stuff, of which content distribution is just one thing."

But as Akamai sees things, "there's a lot of content that's uncacheable, because caching doesn't ensure freshness," said [Paul Lewis Sagan (born 1959)], Akamai's president and chief operating officer. "In entertainment or news you have to make sure the content is fresh for each user." Akamai's stock price, despite a plunge from the stratospheric heights it occupied in January, is still up nearly 150 percent from the offering price last October.

Whatever their technical differences, Akamai and Inktomi in some ways have followed remarkably parallel paths. Each, for instance, has an odd name derived from a Native American language: Inktomi, pronounced INK-tuh-mee, is the name of a clever spider in Lakota legend; Akamai (AK-uh-migh) is Hawaiian for intelligent.

Each was founded by a pair of math whizzes from a leading university, but is now managed by a computer industry graybeard.

Each company, moreover, received venture funding from a leading technology company, Intel in Inktomi's case, Apple Computer and Cisco Systems in Akamai's. And each has ridden an initial public offering into a high stock market valuation for the company -- $100.8125 a share for Inktomi and $64.875 a share for Akamai.

And each, of course, tries to address perhaps the most vexing quirk of the Web. Some sites sprint while others crawl. But sometimes a site will sprint on one visit only to crawl on the next. In such cases the disparity is more likely caused not by the Web site itself, but by the underlying architecture of the Internet, which was originally intended to deliver e-mail reliably -- not to broadcast complex media at high speed.

For a user, calling on any Web page requires downloading multiple "objects" -- a banner ad, a photo or a set of activity buttons, for example. Any or all such objects must be broken down into packets of digital bits, sent across the Internet and reassembled at the end user's computer.

During this downloading, even for the delivery of a single Web page, there is a tremendous amount of dialogue between the user's browser software and the source computer server. All the communicating bits must travel -- often across the country, or even around the globe -- hopping across multiple networks and traffic routers. At each hop, a few packets may get lost and have to be resent.

Inktomi's approach, caching, and Akamai's mainstay, content distribution, both seek to streamline things by taking the material on the original server and moving it closer to the user. But the approaches differ, as do the trade-offs.

The concept of caching comes from the semiconductor industry; chip makers commonly put a small amount of memory, called cache, directly on a microprocessor. Frequently used bits of data can be stored in this cache memory, to be easily summoned when needed, speeding the computer's operation.

Inktomi has a software product, called Traffic Server, that runs on server computers around the Internet, working essentially the same way as cache memory on a computer chip. Traffic Server monitors network traffic to identify the most frequently requested objects, then stores, or caches, them closer to the requesting users.

"Networks were very dumb," Mr. Peterschmidt said. "They would send the same packet over and over again to the same location. Traffic Server was the first step at making the network smarter. It would recognize that you'd already requested that information, and store it closer to the user."
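That "recognize and store" behavior is, at its core, a cache with an eviction policy. Here is a minimal LRU sketch in Python, purely illustrative; Traffic Server's real replacement and freshness logic is far more sophisticated.

```python
from collections import OrderedDict

class EdgeCache:
    """Serve repeated requests from local storage instead of
    re-fetching them from the origin server each time."""

    def __init__(self, capacity, fetch_from_origin):
        self.capacity = capacity
        self.fetch = fetch_from_origin     # callable: url -> content
        self.store = OrderedDict()         # url -> content, in LRU order

    def get(self, url):
        if url in self.store:              # cache hit: no origin round trip
            self.store.move_to_end(url)
            return self.store[url]
        content = self.fetch(url)          # cache miss: go to the origin
        self.store[url] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False) # evict the least recently used
        return content

cache = EdgeCache(2, fetch_from_origin=lambda u: f"<content of {u}>")
cache.get("starr-report.html")   # miss: fetched from the origin
cache.get("starr-report.html")   # hit: served from the cache
```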

At age 52, Mr. Peterschmidt, formerly president of Sybase, the database software company, is Inktomi's graybeard. The company was founded in 1996 by two researchers at the University of California at Berkeley: Eric A. Brewer, now 33 and Inktomi's chief scientist, and Paul Gauthier, who is 27 and currently on sabbatical from the company. Their first product was a search engine, which Inktomi still sells as a service to portals like Yahoo and Hotbot.

Traffic Server, introduced in 1998, was created as a stand-alone software package to run on Sun Microsystems' Solaris version of the Unix operating system. The product now also runs on other major versions of Unix and on Microsoft's Windows NT network operating system. Inktomi also publishes Traffic Server's application programming interfaces, or A.P.I.'s -- software specifications that enable other companies to add functions, like network security or so-called streaming audio and video.

Excite@Home, which became Inktomi's second customer for Traffic Server after AOL, uses Traffic Server's programming capability to customize the Excite@Home Web service for different devices -- one version for PC's, for example, another for TV set-top boxes. But while this flexibility was nice, the real draw was the improved performance, said Robert Keller, an engineering manager at Excite@Home.

Inktomi, which counts 200 customers, sells Traffic Server for as much as $24,000 a copy. And it requires a substantial piece of hardware, like a Sun Sparc server, which is likely to cost as much or more. These requisites have left an opening at the budget-minded end of the market.

Rushing to fill this opening are companies like CacheFlow and Network Appliance, which sell caching appliances -- small computers preloaded with caching software. Novell, the office-network software company, is also a player in this end of the cache market. The company has deals with 13 major server manufacturers, including I.B.M., Compaq and Dell, to sell hardware bundled with software. Prices start as low as $4,000.

CacheFlow, whose stock went public last November at $24 and now trades at $34.25, says it will do for caching what Cisco Systems did for routing -- even though Cisco has its own cache products. CacheFlow offers a simple hardware box with its own operating system and software that is designed to require little of the Web operator's time or effort.

"We go out, do the installation and the customer never has to think about us," said Brian M. NeSmith, president and chief executive of CacheFlow, based in Sunnyvale, Calif.

All of the cache vendors sell a product, be it hardware, software or a mixture of both. Akamai, on the other hand, sells a service, typically priced on a megabit-per-month basis.

Akamai customers -- which include the Web sites of Apple Computer, CBS, Microsoft and The New York Times -- must first run a small program that selectively "akamaizes" portions of their sites for delivering material over Akamai's servers. This process, one that the individual user normally would not be aware of, takes the U.R.L., or address, that normally directs a browser request to the server housing the information and converts the address to an A.R.L., the designator for an Akamai server.

The 2,750 or so Akamai servers worldwide are run for the company by network operators like [UUNET Technologies, Incorporated] and Sprint. Akamai's software determines which server is best suited for use by the person making the request and then sends the "akamaized" content to that server. Akamai officials contend that the process is much more sophisticated than caching.

"Caching is 2 percent of the complexity of content delivery," said [Daniel Mark Lewin (born 1970)], who was a graduate student at the Massachusetts Institute of Technology when he co-founded Akamai in 1998 with [Frank Thomson Leighton (born 1956)], a professor of applied mathematics there. "The hard part of content delivery is figuring out what request should go to what machine," said Mr. Lewin, 29, who is now Akamai's chief technology officer.

In addition to Akamai's president, Mr. Sagan, who is 41 and is a former president of Time Inc. New Media, Akamai's executive suite includes [George Henry Conrades (born 1939)], chairman and chief executive. Mr. Conrades, 61, is the former chief executive of the networking company BBN, now Genuity, and a former senior executive at I.B.M.

Some analysts say Akamai overstates the distinctions between content distribution and caching, which they say is also a big part of Akamai's service. "Akamai deployed $4 million worth of caches and accelerated Yahoo's performance," said Peter Christy, an analyst with Jupiter Communications. "Any of the caching companies could have done that."

Although he concedes that Akamai has some truly innovative technology, Mr. Christy said the company's real brilliance lay in its business model, which might be described as caching plus network management, all sold as a pay-as-you-go service.

Mr. Christy says Akamai's pitch, that it can improve performance for content providers, has more sizzle than Inktomi's claim that it can reduce costs for Internet service providers. "None of Inktomi's customers can figure out how to get a consumer to pay more for higher performance," he said. Akamai asks content providers, "Are you willing to pay to have your site perceived better by the consumer?" he said, "and the answer is yes."

Akamai's service is now used by more than 400 Web customers. But the company is challenged by a pack of players, like Adero, Digital Island and Exodus, that act as network hosts for companies that operate Web sites and that have all formed partnerships with Inktomi.

Some of Akamai's potential rivals are even more daunting, like mighty Intel, which started its own Web-hosting business last year and has begun selling Inktomi's software with an Intel cache server.

What is more, many of these hosting companies own their own networks. Akamai does not, instead piggybacking on network carrying capacity from UUNet, Sprint and others.

"Akamai has content distribution, which is good, but that's just one piece," said Ruann Ernst, chairman and chief executive of Digital Island, a company in San Francisco that went public last June at $10 and whose stock closed on Friday at $25.875. Its notable customers include CNBC.com, Intuit and The Los Angeles Times.

"We are especially good at putting it all together in an integrated way so they don't all look like different solutions," Ms. Ernst said. "It's a huge market, and there will be room for a lot of different business models because different customers will want to buy in different ways."



2010 (Nov 30) - Silicon Angle - "Level 3 outbid Akamai on Netflix by reselling stolen bandwidth"

Source - [HW005J][GDrive]

2010-11-30-siliconangle-com-level-3-outbid-akamai-on-netflix-by-reselling-stolen-bandwidth

BY GEORGE OU

Level 3 Communications raised a lot of eyebrows earlier this month when it outbid rival Content Delivery Network (CDN) provider Akamai to deliver Netflix content to large parts of the United States. The announcement was a huge win for Level 3's CDN business and it meant leasing around 2.9 terabits of distributed capacity to Netflix. But there's just one little problem: Level 3 won that bid because it intends to break its contractual obligations on peering with Comcast and essentially resell stolen bandwidth to Netflix. Now it makes perfect sense how Level 3 managed to outbid Akamai, since no CDN provider operating legally could outbid hot goods.

So how does Level 3 intend to get away with this? Just put it under the "Net Neutrality" violation banner and invent a story that Comcast is blocking content when it merely wants to enforce existing contractual agreements with Level 3. Then, as expected, we get ignorant groups like Media Access Project and bloggers like Stacey Higginbotham claiming that "Comcast might break the web" despite the fact that the kind of contractual agreement between Comcast and Level 3 is typical of how the Internet has always worked. On the contrary, allowing Level 3 to violate its peering agreement under the banner of "Net Neutrality" is what would actually break the Internet and turn peering and Internet investment economics on its head.

What makes the Internet work

The Internet is comprised of privately built and privately operated networks that connect on a contractual basis. These contracts are mutually agreed upon and there are generally two types of contractual agreements.

  1. Transit agreements

  2. Peering agreements

A transit agreement is when a network carrier delivers data between two other networks, and both endpoint networks have to pay the central carrier to carry that traffic at a metered rate. So if Comcast and Level 3 Communications had no direct method of connecting to each other, they must each pay a third-party network carrier to connect their two networks. Transit agreements are usually more expensive than peering agreements for every packet delivered, and they're generally slower because of distance and congestion. According to DrPeering.net, typical Internet transit rates are $3 to $10 per Megabit per second (Mbps) per month for server bandwidth.
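To put those rates in perspective against the roughly 2.9 terabits per second of Netflix capacity mentioned above, here is a back-of-the-envelope sketch, purely illustrative, using the article's quoted price range rather than any actual contract pricing:

```python
# 2.9 Tbps expressed in Mbps, priced at the quoted $3-$10/Mbps/month range.
capacity_mbps = 2.9e6
for rate_usd in (3, 10):
    monthly = capacity_mbps * rate_usd
    print(f"${rate_usd}/Mbps/month -> ${monthly / 1e6:.1f}M per month")
# $3/Mbps/month -> $8.7M per month
# $10/Mbps/month -> $29.0M per month
```

Numbers of that magnitude are why the difference between paid transit, paid peering, and settlement-free peering dominates the economics of a CDN bid.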

Peering agreements occur when two networks have a direct physical connection to each other. A physical connection exists when it is financially feasible and when there is a business agreement in place. Physical connections are often practical to construct, since many endpoint networks meet at Internet Exchange Points within the same physical building, where connecting them merely involves a few in-building Ethernet cables on gigabit or multi-gigabit switched ports. The business agreement requires that both parties come to mutually agreed terms on a financial settlement scheme.

The settlement scheme can involve a fee-based agreement, or it might involve no money changing hands. When one network sends more traffic to another network than it receives, it usually has to pay a metered fee on the extra traffic that it sends. When the two networks are more or less sending the same amounts of traffic to each other, they can agree on settlement-free terms where no money changes hands. These settlement terms might sound arbitrary and unfair to outside observers, smacking of "might makes right", but they are based on sound economics that ensures continued investment in the Internet.

The network sending more traffic does so because it has cheaper-to-serve customers than the network receiving more traffic. The receiving network has more "eyeballs", i.e., broadband consumers, who are hundreds of times more expensive to connect than the sending network's customers, the Netflixes and YouTubes of the world. Providing a gigabit per second (Gbps) of capacity to a server in a data center might cost a few hundred dollars of capital expenditures, but an additional gigabit of last-mile broadband capacity could cost millions. Another case where one network might charge another to peer is when it operates more long-distance lines, which are more expensive to build and maintain, especially those running under the oceans to connect continents.

A network that didn’t build the expensive infrastructure can’t just expect to use those networks without paying to use them. The network that spent all the money up front in capital expenditures provides value to the network that spent little money building the network and it needs to recoup its costs by charging the network that didn’t build the most expensive part of the network. Settlement based peering is the mechanism than ensures that all players on the Internet pay their share of expanding the Internet. This is the only way to ensure continued private investment and growth on the Internet.

How Level 3 intends to steal bandwidth from Comcast

Before Level 3 Communications landed the Netflix CDN deal, it sent approximately the same amount of traffic to Comcast as it received. Because of that, the two networks exchanged services of roughly equal value and they agreed to a settlement free peering contract. The two companies agreed not to charge each other for peering connectivity so long as the traffic exchange levels remained more or less the same.

But with the massive new Netflix CDN deal, where Netflix is currently the largest source of traffic in North America, Level 3 will likely start sending 5 times more traffic to Comcast than it receives. That would violate its current settlement-free peering agreement and would require a new fee-based agreement where Level 3 has to pay Comcast for the extra traffic it sends. That makes sense, as Netflix's old CDN provider Akamai paid to peer with Comcast, but Level 3 decided that it would simply insist on violating its existing free peering agreement with Comcast, which would allow it to outbid Akamai on the lucrative Netflix CDN service. Level 3 would essentially steal bandwidth from Comcast to outbid Akamai, which pays Comcast for bandwidth.

From Comcast’s perspective, this impacts them severely as they’re losing bandwidth revenue from Akamai since Akamai lost Netflix business to Level 3, but Level 3 will refuse to pay for the bandwidth they use from Comcast by free riding on their existing peering connection. Even without the bogus Net Neutrality claims, Level 3 can cause major problems for Comcast if a peering dispute turns ugly. If Comcast insists on enforcing its contractual agreement and charges Level 3 and Level 3 refuses to pay, there’s little Comcast can do about it other than to sever the peering connection and send Level 3 traffic to its Internet transit provider that will eventually reach Level 3. But with most peering disputes, the contract violator will simply refuse to return to the Internet transit path and they’ll keep sending traffic down a broken unpaid pipe to Comcast which breaks network connectivity between Level 3 and Comcast completely.

Each will point the finger while customers on the two networks are cut off from each other and it makes both providers look bad. The reputation of both companies will be harmed and it will essentially be a game of “chicken” to see who blinks first. For smaller peering disputes that don’t garner much attention, the dispute is settled in a court or arbitration while the network is cut off for days or weeks. For larger peering disputes that receive media attention and congressional attention because fuming voters are calling their representatives and senators, it turns ugly and may lead to calls for additional Internet regulations which no one on the Internet wants.

With the added confusion of "Net Neutrality", the hopelessly pro-anything-labeled-Net-Neutrality biased blogosphere, and the FCC proceedings on the Comcast NBC merger, Level 3 figured that it had a good chance of getting away with bandwidth theft, or at least of forcing much more favorable rates from Comcast. But not only is Level 3 trying to steal bandwidth, it's being hypocritical, because it has fought just as vigorously against other bandwidth thieves that tried to violate Level 3's own peering agreements. Joe Waz of Comcast pointed this out by citing Level 3's own words on the sanctity of peering agreements.

“To be lasting, business relationships should be mutually beneficial. In cases where the benefit we receive is in line with the benefit we deliver, we will exchange traffic on a settlement-free basis. Contrary to [other ISPs] public statements, reasonable, balanced, and mutually beneficial agreements for the exchange of traffic do not represent a threat to the Internet. They don’t represent a threat to anyone other than those trying to get a free ride on someone else’s network.”

Of course Level 3 wasn’t actually being genuine when they said that. They only meant it when someone else tried to free ride on Level 3′s network but they think it’s just fine if they free ride and steal bandwidth from Comcast.

A possible solution to free riders

I think there might be a better solution than severing a peering connection that goes unpaid. What Comcast or any other network encountering a free rider should do is simply enforce a symmetric rate on its peering connection. If Comcast sends 1 Gbps of traffic to Level 3 Communications, then it should enforce a 1 Gbps settlement-free receive rate from Level 3, as sketched below. This might result in Level 3 failing to deliver the performance it promised to Netflix. If Level 3 can't deliver an adequate service level to Netflix because it refuses to pay for the extra traffic going to Comcast, then Netflix is well within its rights to terminate the new agreement with Level 3 and return to honest CDN providers like Akamai. From a Net Neutrality standpoint, Comcast would not be discriminating against any particular source or application type; it would only be enforcing existing contractual agreements with Level 3.
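A minimal sketch of that symmetric-rate policy in Python; the function name and the Gbps figures are illustrative assumptions on the author's proposal, not anything Comcast actually implements:

```python
def settlement_free_allowance(sent_to_peer_gbps, received_from_peer_gbps):
    """Cap settlement-free inbound traffic at the outbound rate; the
    excess is what would be billed or shaped under the proposal above."""
    free = min(received_from_peer_gbps, sent_to_peer_gbps)
    excess = max(0.0, received_from_peer_gbps - sent_to_peer_gbps)
    return free, excess

# The article's example: Comcast sends 1 Gbps but receives 5 Gbps back.
free, excess = settlement_free_allowance(1.0, 5.0)
print(f"settlement-free: {free} Gbps, billable/shaped excess: {excess} Gbps")
# settlement-free: 1.0 Gbps, billable/shaped excess: 4.0 Gbps
```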