In my previous post, I examined the evolution of Internet systems' complexity and business models. We looked at how commonly used software applications have grown more and more complex, and how the things people and companies are doing with the Internet today are qualitatively different from what they were a decade ago.
This time – how the Internet backplane model is changing to support these new usage and business models.
Because the business model itself is virtualized, there is increased interest in and demand for cloud hosting as a business service to which new “virtual teams” inside organizations subscribe. “Virtual teams” often don’t have the technical knowledge or political power to put applications and services in the corporate datacenters, don’t want to run their own datacenters, and probably don’t have the enterprise IT and engineering skills to do so. So they turn to cloud hosting partners to reduce time to market, provide ready-made IT services, and allow flexible spend on allocated capacity to better match periods of peak demand. The value of these benefits can best be seen in the tremendous growth of “cloud” hosting services such as Amazon’s EC2.
Other critical drivers for cloud hosting are the qualitative nature of modern applications and their frequency of change. Applications being built and delivered into the app economy are typically conceived as the union of data and services from a number of places across the Internet, not all hosted in the same datacenter. Additionally, the services aggregated within an application may come from multiple vendors. To meet these new needs, cloud hosting has evolved from first-generation hosting of managed processes (like using a virtualized hardware server in Amazon to run a web server hosting a company’s website) to today’s hosting of routing itself via a cloud vendor such as Apigee.
So in the Internet necessary to enable the app economy, switching itself runs in the cloud, as do the services that applications surface.
The software that implements these services, and switching, has several new and interesting traits.
- Change Frequency: First, cloud hosted services, typically running as SaaS, PaaS, or BaaS, need to change at very high frequency to meet the demands of evolving businesses. In response, PaaS software is typically released approximately once per month - sometimes more frequently.
- Runtime Profile: Another difference is in the physical runtime profile of the software implementing the application or routing service. Unlike traditional OS and switching software, which was hosted on a “contained” or “dedicated” device, cloud hosting most often implies that systems are multi-user or multi-tenant, introducing new challenges for the software in the modern Internet backplane.
- Log Management: Cloud computing is changing all aspects of IT. Log management for server software, for example, was typically done with tools provided by vendors. Today, cloud logging and cloud delivery of log data is big business for many companies.
- Complex Routing: The nature of new business models necessitates more complexity in routing decisions, and routing software has to do more processing of messages than was needed when Cisco routers ran the Internet.
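To make the last point concrete, here is a minimal sketch of content-based routing, where the backplane inspects a message's payload rather than only its destination address. The rule predicates, field names, and route targets are invented for illustration, not taken from any particular product:

```python
# Hypothetical content-based routing rules: unlike classic packet
# routing, the backplane examines the message body itself.
ROUTES = [
    (lambda msg: msg.get("apikey") is None,               "reject.unauthenticated"),
    (lambda msg: msg.get("payload_bytes", 0) > 1_000_000, "queue.large-messages"),
    (lambda msg: msg.get("partner") == "mobile",          "backend.mobile-optimized"),
]
DEFAULT_ROUTE = "backend.default"

def route(message: dict) -> str:
    """Return the upstream target for a message, applying rules in order."""
    for predicate, target in ROUTES:
        if predicate(message):
            return target
    return DEFAULT_ROUTE

print(route({"apikey": "abc", "partner": "mobile"}))  # backend.mobile-optimized
print(route({}))                                      # reject.unauthenticated
```

Even this toy version shows why modern routing is more compute-intensive than destination-based forwarding: every rule may require parsing and evaluating message content.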
Handling Large Volume, Velocity, and Variety of Data
We've looked at how a large volume of data from new apps, APIs, and a myriad other new and traditional sources is the currency of the new economy. Gathering, evaluating, analyzing, and operationalizing big data from your internal systems-of-record as well as from diverse new sources outside of the walls of the enterprise becomes critical for IT operations as well as for predictions about your business.
While revenue may be the primary metric businesses use to gauge the success of their new app economy endeavors, they need second-order metrics that help them make crucial decisions about which partners, developers, and API programs to invest in.
Given this huge revolution that is big data in all areas of the “visible Internet”, it’s no surprise that the backplane – the “invisible Internet” – needs to handle large volume, velocity, and variety of data, and needs to make it trivial to correlate one API ‘message’ (one routing call) with data.
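One common pattern for making that correlation trivial is to tag each routing call with a correlation ID at the edge, so any analytics record emitted later can be joined back to the originating message. This is a generic sketch under that assumption; the function and field names are hypothetical:

```python
import uuid
from collections import defaultdict

# Analytics records keyed by correlation ID, so any event emitted
# anywhere in the backplane can be traced back to one API message.
analytics_log = defaultdict(list)

def handle_api_call(path: str) -> str:
    """Process one API message, tagging all emitted events with one ID."""
    correlation_id = str(uuid.uuid4())
    analytics_log[correlation_id].append({"event": "received", "path": path})
    # ... routing, transformation, and the backend call would happen here ...
    analytics_log[correlation_id].append({"event": "completed", "status": 200})
    return correlation_id

cid = handle_api_call("/v1/orders")
print(len(analytics_log[cid]))  # 2
```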
So as enterprises think beyond “simple” storage and the “bigness” of data to extracting meaning and business insights - to predicting behavior and identifying risks and opportunities in ever-more sophisticated and intelligent ways - a more complex backplane is evolving to support sophisticated data flow and processing.
Today’s backplane must enable data flow at high speeds and in more complex ways than ever, but must couple this with -
- capabilities for signal extraction and analysis
- storage and processing of analytics data
- data synchronization and concurrency
. . . and more.
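As one small illustration of signal extraction over a high-velocity stream, here is a sliding-window detector that flags latency spikes relative to the recent average. The window size and threshold factor are illustrative assumptions, not a prescription:

```python
from collections import deque

class SpikeDetector:
    """Flag samples that spike above a multiple of the recent average."""

    def __init__(self, window: int = 5, factor: float = 2.0):
        self.samples = deque(maxlen=window)  # sliding window of recent values
        self.factor = factor                 # spike threshold multiplier

    def observe(self, latency_ms: float) -> bool:
        """Record a sample; return True if it spikes above the window average."""
        is_spike = (
            len(self.samples) == self.samples.maxlen
            and latency_ms > self.factor * (sum(self.samples) / len(self.samples))
        )
        self.samples.append(latency_ms)
        return is_spike

d = SpikeDetector()
flags = [d.observe(x) for x in [100, 110, 90, 105, 95, 400]]
print(flags)  # [False, False, False, False, False, True]
```

In a real backplane this kind of logic runs continuously over the message stream, feeding the analytics storage and alerting described above.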
One pitfall in all this, of course, is ending up with too many silos of data. So along with being able to record and store, there is also a need to make data easy to access and join, so that, when desired, insights can be gleaned across all sources of data, and performance factors in new business initiatives can be joined with reports on more traditional aspects of business operations.
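The kind of cross-silo join described above can be sketched in a few lines: joining API traffic metrics with traditional sales records on a shared partner key. The datasets and field names here are invented for illustration:

```python
# Two "silos": API traffic metrics and traditional sales records.
api_traffic = [
    {"partner": "acme",   "api_calls": 120_000},
    {"partner": "globex", "api_calls": 45_000},
]
sales = [
    {"partner": "acme",   "revenue": 250_000},
    {"partner": "globex", "revenue": 80_000},
]

# Index one silo by the shared key, then join the other against it.
revenue_by_partner = {row["partner"]: row["revenue"] for row in sales}
joined = [
    {**row, "revenue": revenue_by_partner.get(row["partner"])}
    for row in api_traffic
]
for row in joined:
    print(row["partner"], row["api_calls"], row["revenue"])
```

Trivial at this scale, but the point stands at any scale: the join is only possible if both silos were recorded with a common, correlatable key.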
The Apigee Platform is replacing switching software from Cisco, Juniper, and the like as the backbone and traffic cop for the modern WAN, and in some ways also acting as an “application server in the cloud”. Next time, we'll take a quick look at the traits of this modern “application” switching system.