Welcome to the blog that looks at Application Programming Interfaces (APIs). Here we'll scope out the vast API landscape and dive deep into the design considerations and quality assessment requirements for building and testing great APIs. In this post, we'll cover:
- What are APIs and what makes them so useful.
- Catalysts that made APIs ubiquitous.
- Design considerations for choosing API technology.
- Quality Assurance (QA) requirements for APIs.
- Some famous API protocols and backend technologies/architectures, and their salient features.
Understanding the current API landscape
Let's start with the definition of an API: An acronym for Application Programming Interface—it's a software intermediary that lets two applications (a client and a server) talk to each other.
Think of it as a 'contract' between the client and the server. "A request made by the client to the server, if made in the right format, will get a response in a specific format or initiate a pre-defined action."
It's an instruction booklet describing how you can talk to one system through another. Here, the system can be a web server or a library that interacts with an operating system, a database, etc. APIs are a part of modern operating systems, database systems, computer hardware, as well as software libraries.
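The contract idea can be sketched with a toy in-process handler. All the names here are hypothetical, and a real API would sit behind HTTP or another transport, but the shape of the agreement is the same: a well-formed request gets a well-defined response.

```python
# A toy in-process "API" illustrating the contract: a request in the right
# format gets a response in a specific format; anything else is rejected.

def pokedex_api(request: dict) -> dict:
    """Hypothetical server-side handler. The 'contract' it honors: a request
    must carry action == 'get_pokemon' and a 'name' field."""
    if request.get("action") != "get_pokemon" or "name" not in request:
        return {"status": 400, "error": "malformed request"}
    # A real server would look this up in a database.
    data = {"pikachu": {"type": "electric"}, "bulbasaur": {"type": "grass"}}
    entry = data.get(request["name"])
    if entry is None:
        return {"status": 404, "error": "not found"}
    return {"status": 200, "body": entry}

# The client never sees the database; it only knows the request/response shapes.
print(pokedex_api({"action": "get_pokemon", "name": "pikachu"}))
# A request that violates the contract gets a predictable error instead.
print(pokedex_api({"action": "launch"}))
```

The client's only dependency is the contract itself, which is what lets the server swap out its internals without breaking anyone.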
So, what makes APIs so useful?
30 years ago, if you wanted to become a programmer, you'd have to
- learn various endian formats,
- identify which format is being used by the system you're trying to wrangle, and
- write your program according to that format.
But now, APIs provide abstraction, which flattens a beginner's learning curve for any technology. Most of that low-level functionality has been abstracted away by libraries; to use a library, we simply call the API it provides.
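The endianness example above shows this well. Python's standard struct module hides byte-order concerns behind a small API: you declare the layout you want and never have to know, or care, what the host machine uses natively.

```python
import struct

# The library's API abstracts byte order away: '<' means little-endian,
# '>' means big-endian. No manual byte shuffling required.
little = struct.pack("<I", 1)  # 4-byte unsigned int, little-endian
big = struct.pack(">I", 1)     # same value, big-endian

print(little)  # b'\x01\x00\x00\x00'
print(big)     # b'\x00\x00\x00\x01'

# Decoding is just as declarative.
assert struct.unpack(">I", big)[0] == 1
```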
APIs provide security. They can limit how much of a system's functionality is exposed to other users. For example, imagine that you have a supercomputer that hosts a country's nuclear codes — and a web server for your Pokédex. You don't want users to activate the nuclear missiles while admiring Pikachu's cuteness. An API lets you pick and choose the functionality users can access, increasing the security of your system. So, instead of begging someone NOT to press a button, you simply don't expose that button at all.
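A minimal sketch of this selective exposure, with hypothetical class names: the public API layer wraps the backend and only ever forwards the calls we choose to expose.

```python
class Supercomputer:
    """Internal system with both harmless and dangerous capabilities."""
    def read_pokedex_entry(self, name: str) -> str:
        return {"pikachu": "electric mouse"}.get(name, "unknown")

    def launch_missiles(self, code: str):
        raise RuntimeError("this must never be reachable from the web!")


class PublicPokedexAPI:
    """The API layer exposes only the functionality we choose to expose."""
    def __init__(self, backend: Supercomputer):
        self._backend = backend

    def get_pokemon(self, name: str) -> dict:
        return {"name": name, "entry": self._backend.read_pokedex_entry(name)}
    # No method here ever touches launch_missiles, so no request can reach it.


api = PublicPokedexAPI(Supercomputer())
print(api.get_pokemon("pikachu"))
```

The dangerous capability still exists internally, but the API surface simply has no path to it.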
How did APIs become mainstream?
APIs have been around for a very long time, but their widespread usage today can be traced back to the dot-com boom of the late 1990s.
This era saw the emergence of tech and e-commerce websites. These websites made products and services available to customers via a single website—and also allowed partners and third-party resellers to extend the reach of their platforms. This forced organizations to automate much of the commerce powering the web.
Then, Salesforce introduced the first enterprise-class API on Feb 7, 2000, with eBay, Amazon, Flickr and others following suit.
With the growth of these organizations, developers and budding companies also started providing public APIs to catch up with the competition. From there, APIs became an industry standard, even for private use. And then the cloud came along and made APIs ubiquitous.
Rise of the cloud
The core idea of cloud computing has been around since the Mainframe Computing era of the 1950s. The concept was to link computing systems over great distances for scientific (and military) purposes. In 1961, in his speech at MIT, John McCarthy even suggested that computing could be sold like a utility, just like water or electricity.
In August 2006, Amazon Web Services (AWS) launched and propelled the cloud to the ubiquity it enjoys today.
By outsourcing computing to cloud vendors, organizations free up resources for their engineering goals instead of diverting them to acquiring, maintaining, updating, and recycling hardware. It also lets them spend on an as-needed basis, in line with their traffic trends.
Movement of APIs to the cloud
Today, the cloud is the preferred choice for deploying APIs. It makes APIs easier to deploy and scale across the globe through:
- High availability. Cloud eliminates the single points of failure that are a common characteristic of on-premises deployments.
- High scalability with automated creation and release of API service instances, based on usage.
- Easy API monitoring and management, with built-in dashboards furnished by the cloud vendor.
- Easy deployments nearer to end-user locations for a smoother service experience.
Modern API design considerations
The architecture behind an API shapes the way it can be used—and the reaction to that use. Customers may stop using a product if the APIs powering it don't perform well.
Modern APIs may service millions of requests every single minute. To deliver good functionality, usability and user experience, modern APIs have to be designed to interface with the right systems at the right time, using the right languages. Theoretically, you could build an entire operating system by writing everything in assembly language, but is it feasible? Definitely not. Likewise, the choice of API architecture should depend on your use-cases instead of boarding a hype train. Inefficient architectures, questionable technology choices and rushed development will require a combination of band-aids, magical spells and incantations to keep your systems running throughout the day.
While designing an API, you need to consider the following to pick the right architecture:
- Who are the end-users? Is it a frontend-to-backend or a backend-to-backend API?
- What is your hardware capacity?
- What is your optimal latency?
- What is your expected throughput?
- Are there any limitations on bandwidth?
- How much time can you and/or your developers spend on development using a particular architecture?
- What is the learning curve for your development team and does it align with your deadlines?
- What is your architecture? Is it monolithic or microservices?
- How big is the community behind an API architecture/library?
- Does your preferred language have an official library?
Quality assessment needs for APIs
If you can break it, anyone can break it. If you can't break it, someone will manage to break it.
Teams involved in designing, implementing and delivering APIs to production need to ensure that the APIs not only deliver the intended functionality but are also compatible with service consumers, resistant to misuse, secure by design and able to scale.
Some of the most common scenarios (and their testing aspects) for assessing quality of APIs are listed below.
- Unit: Do the service methods respond with correct values?
- Functional: Does the service behavior match the user’s intended requirements?
- Validation: Is the service accessing the correct data in a defined manner? Is the service using the most accurate and efficient method of doing what is required?
- Performance: How quickly does the service send responses to the user?
- Performance and functional: How does the service respond to requests from different locations?
- Performance: How does the service respond to distributed load?
- Load: Can the service handle expected and unexpected user loads?
- Runtime Error Detection, Fuzz Testing: Can the service handle invalid values and exceptions caused by bad data?
- Interoperability, WS-Compliance: Does the service support common practices, standards and guidelines?
- Security: Is the service secure against common attacks, and is access control implemented for different user types?
- Security: Are you able to reach the service from all locations? Should the service be reachable from all locations?
- Compatibility: Is the new version of the service backward-compatible, avoiding breaking changes?
- Throttling: Does the service handle throttling gracefully when service limits are reached per user, per group or per service-region?
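A few of the scenarios above can be sketched as plain assertions against a service method. The method here is entirely hypothetical, a stand-in for a real API handler, but the unit, functional and runtime-error-detection checks mirror the list.

```python
# Hypothetical service method standing in for a real API handler.
def get_discounted_price(price: float, user_type: str) -> float:
    if price < 0:
        raise ValueError("price cannot be negative")
    discounts = {"member": 0.10, "guest": 0.0}
    if user_type not in discounts:
        raise ValueError("unknown user type: " + user_type)
    return round(price * (1 - discounts[user_type]), 2)

# Unit: does the service method respond with correct values?
assert get_discounted_price(100.0, "member") == 90.0

# Functional: does the behavior match the intended requirement
# that guests pay full price?
assert get_discounted_price(50.0, "guest") == 50.0

# Runtime error detection / fuzz testing: are invalid values rejected
# with a predictable error instead of corrupting state?
for bad_call in (lambda: get_discounted_price(-1.0, "member"),
                 lambda: get_discounted_price(10.0, "admin")):
    try:
        bad_call()
        raise AssertionError("invalid input was accepted")
    except ValueError:
        pass  # expected: bad data is rejected cleanly
```

Load, security and throttling checks need real infrastructure behind them, but they follow the same pattern: a scenario, an expected behavior, and an assertion.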
Additional quality considerations for APIs deployed in the cloud
Cloud deployment of APIs comes with additional quality considerations alongside its numerous benefits. Quality assessment teams should keep a careful eye on compliance, as well as accurate internal logging and alerting, for a successful release and a great customer experience.
- Timezone and timestamps: Does the API adhere to a uniform time zone throughout (e.g. UTC) or perform logging with local time zone details?
- User locales: Does the API work as expected when installed on cloud machines with a different locale?
- Performance: Does the service experience performance downgrade as it grows or provides more functionality? How are shared resources handled?
- Security: How are sensitive transactions of the cloud APIs with the on-premise backend handled?
- Data regulations across borders: Does the service's data handling comply with the law of the land (in both source and destination countries)?
- Transaction costs / load testing limits: How are cloud API resources being consumed when subjected to different loads or spikes in usage? Are teams aware of transaction costs per unit use, to keep them in limit with appropriate alerting?
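On the time zone point, a small illustration: once instances run in several regions, local-time log lines become ambiguous, while timezone-aware UTC timestamps keep records comparable. The helper name below is just for illustration.

```python
from datetime import datetime, timezone

def log_line(message: str) -> str:
    """Prefix a log message with a timezone-aware UTC timestamp."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"{stamp} {message}"

line = log_line("payment accepted")
print(line)  # e.g. "2024-01-01T12:00:00+00:00 payment accepted"
# The explicit +00:00 offset removes any ambiguity about the zone.
```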
Advanced API protocols and libraries
The API protocols/libraries and the backend architecture behind an API influence the testing approach, the API test suite configuration, and the guidelines service consumers follow for seamless communication with the API.
Based on specific business needs and use cases, various companies have developed their own APIs, backend protocols and libraries. Over time, these protocols matured within those companies—with many of them making their way into open source, like Apache Thrift, gRPC and GraphQL. Widely used open-source backend technologies such as Apache Kafka, Falcor and Apache Avro also affect how API clients consume services.
These emerging protocols and backend technologies differ in the way they consume or respond to a request, the underlying data representation (e.g. a raw binary format versus a text format like JSON), streaming and messaging support, etc. In principle, most of them strive to support event-driven, asynchronous, distributed application architectures that require rapid interprocess communication.
Below is a list of prominent API protocols and backend technologies / architectures with their salient features.
- Apache Thrift: It generates the client and server code, including the data structures. It supports binary and JSON serialization over raw TCP transports (faster and more compact than text over HTTP) and can be extended easily with encryption, compression, non-blocking IO, etc. It also cuts down communication barriers between different languages and configurations.
Used by: Facebook, Last.fm, Cloudera, EverNote, Uber, and Quora.
- gRPC / Google Protocol Buffers: gRPC is type-safe, with built-in streaming RPC, TLS support, and optimized performance through binary serialization. The gRPC interceptor API enables you to add common functionality across multiple service endpoints. Client apps can directly call methods on server apps as if calling a local object, which makes it easier to create distributed applications and services.
Used by: Google internal APIs, Netflix, Cisco, and Juniper Networks.
- GraphQL: The ultimate data aggregator, GraphQL allows API consumers to control the data they receive, and allows API providers to aggregate resources on the server side. It also standardizes and simplifies complex APIs.
Used by: Facebook, GitHub, Coursera, Shopify, and Pinterest.
- Apache Kafka: Kafka is a scalable messaging system. It offers high durability, low latency, data sharing, replication and high service autonomy. It works well in asynchronous use-cases like publish/subscribe and subscriber notification, and it can replay previous messages sequentially.
Used by: LinkedIn and Netflix.
- Falcor: This API data platform powers the Netflix UIs. It models backend data as one virtual JSON object on a Node server — so if you know your data, you know your API. On the client, you can work with remote JSON objects using familiar JavaScript operations like get, set, and call.
Used by: Netflix
- Apache Avro: It has a compact binary format useful for data serialization, with dynamic schemas that require no compilation or field ID declarations. It is built into the Hadoop ecosystem.
Used by: Apache Hadoop.
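To make GraphQL's "the client controls the data it receives" concrete without pulling in a real GraphQL server, here is a toy field-selection resolver in Python. This is not GraphQL itself, just the core idea: the client names the fields it wants and the server returns exactly those.

```python
# Full record held by the server; a plain REST endpoint would typically
# return all of it on every request.
POKEMON = {"name": "pikachu", "type": "electric", "hp": 35,
           "moves": ["thunderbolt"]}

def resolve(record: dict, requested_fields: list) -> dict:
    """Toy GraphQL-style resolver: return only the fields the client asked for."""
    return {field: record[field] for field in requested_fields
            if field in record}

# The client controls the shape of the response -- no over-fetching.
print(resolve(POKEMON, ["name", "hp"]))  # {'name': 'pikachu', 'hp': 35}
```

Real GraphQL adds a typed schema, nested resolvers and aggregation across backends on top of this, but the field-selection contract is the heart of it.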
Each API development methodology has its own unique features. Will we have a single leading API development mechanism that overpowers all others and becomes the next big thing in the API world? Unlikely. Different API technologies will continue to deliver superlative performance depending on the nature of the business problem, the use cases and the scaling requirements.
As API development technologies evolve further, engineering teams involved in the design, development, QA and DevOps of APIs must understand the capabilities and nuances of all major API protocols. A better understanding of an API's request-response model, error handling, sync or async nature, scaling needs, performance and security will enable teams not only to choose the right API development mechanism but also to design, implement and configure appropriate quality testing suites.
Read more posts by BrowserStack Engineers here.
Interested in joining us? Apply here.