"Cachiness factor" is the degree to which your API design supports caching of responses. Low cachiness means that more requests than necessary are forwarded to the back end to retrieve data; a high cachiness factor means that more requests are serviced from the cache layer, so the number that reach the back end is reduced.
Every time a request is sent to the API provider endpoint, the provider incurs the cost of servicing the request. Investing in a good caching mechanism...
Thanks to all who participated in last week's strategy webinar - HUGE: Running an API at Scale.
And thanks to our speakers @sramji, @brianpagano, and @edanuff. The slides and video are here.
Facebook and Twitter are cache assembly lines -- every web page and API request is served up by many calls to various caches at different levels, assembling the final result from many different chunks. At this scale there is almost no other way to deliver reasonable performance.
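The "assembly line" idea can be sketched in a few lines: a page built from several independently cached fragments, each with its own key. The fragment names and cache interface here are hypothetical, purely to show the shape of the technique.

```python
def render_profile_page(user_id, cache):
    # Each fragment is cached (and invalidated) independently; the
    # final response is assembled from whatever chunks are available.
    fragments = [
        cache.get(f"user:{user_id}:header"),
        cache.get(f"user:{user_id}:timeline"),
        cache.get("global:trending"),
    ]
    # In a real system a missing fragment would be recomputed and
    # re-cached on its own; here we simply skip it.
    return "".join(f or "" for f in fragments)
```

Because the chunks have different lifetimes (a trending list changes far faster than a profile header), caching them separately keeps most of the page cheap even when one piece goes stale.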
For APIs - what's the largest chunk of all? The entire API response.
APIs lend themselves nicely to caching responses because it is often easy to identify the cache...
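One reason API responses cache so well is that the cache key is usually derivable from the request line itself: method, path, and query parameters. A minimal sketch (the key format is an assumption; real deployments often also fold in auth scope or relevant headers):

```python
from urllib.parse import urlencode

def cache_key(method, path, params=None):
    # Normalize the request into a stable key: uppercase the method and
    # sort query parameters so equivalent requests map to the same entry.
    query = urlencode(sorted((params or {}).items()))
    return f"{method.upper()} {path}?{query}"
```

With this normalization, `GET /v1/users?a=1&b=2` and `get /v1/users?b=2&a=1` share one cache entry instead of wastefully occupying two.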
We use this blog to talk about issues we see around securing, managing and scaling APIs and web services. We also see many of these same issues and requirements with feeds. Arguably, feeds - specifically RSS and Atom feeds - might just be the most common type of XML API.
Feeds are growing beyond being a great way to keep people abreast of changes to news sites or blogs, and becoming a great way to aggregate or syndicate content to partners, customers, or applications in general.
Our media customers, like MTV Networks, use RSS feeds management...
(Following from Tuesday's blog entry on API Scalability and Caching.)
Last time we wrote about 3 things to think about when planning how to scale your API:
- Rate limiting and threat protection
- Offloading expensive processing
- Caching
We talked about caching at length, so let's finish up with:
Rate Limiting and Threat Protection
Another aspect of scaling is just keeping unnecessary traffic away from your application servers and databases. Some of the techniques that we've discussed previously, such as rate limits and threat protection, apply here...
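A common way to keep that unnecessary traffic off your application servers is a token-bucket rate limiter at the edge. The sketch below is a minimal single-process version; the rate and capacity values are illustrative, and production limiters are typically distributed and keyed per API consumer.

```python
import time

class TokenBucket:
    """Minimal token-bucket sketch: tokens refill at a steady rate,
    and each request spends one token or is rejected."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)   # start full: allow an initial burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request proceeds to the application tier
        return False      # rejected here, before it costs you anything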
(Part 7 in our blog series: "Is your API Naked? 10 API Roadmap considerations".)
So far our discussion of APIs has focused on aspects like security, visibility, and data protection. But how do you make your API scale?
"Scale" means different things to different people, so let's narrow it down to one question: what do you do as your traffic increases? Do you have a plan to handle 10, 100, or 10,000 times more traffic than your API is receiving today?
The truth is that solving this problem at the high end can...
Last week, Scott Metzger of Truecredit.com gave a great case study presentation at the Burton Group Catalyst conference on how they opened their internal SOA as APIs for partners, and specifically on the policy and governance patterns they used.
Scott talks about the factors that drove them to identify and implement a separate, application-agnostic layer for 5 major policy patterns: service access, routing, caching, transformations, and operations. (More details of their implementation are in this video.)