API Design for Frontend Developers



Web APIs for Frontend Developers: Guidelines

APIs are at the core of modern software development. They are not only the job of backend developers: as frontend developers, we sometimes need to discuss with backend engineers how flows should run from front to back when working with a complicated system, and the ability to choose between different protocols such as REST, RPC, and GraphQL is crucial. In this blog, we will cover a wide range of topics, such as API paradigms, API security, best practices, and scaling APIs

REST vs RPC vs GraphQL

| Operation (Human) | HTTP verb | URL: /users | URL: /users/1 |
|---|---|---|---|
| READ | GET | List all users | Retrieve user 1 |
| CREATE | POST | Create a new user | Not applicable |
| UPDATE | PUT, PATCH | Batch-update users | Update user 1 |
| DELETE | DELETE | Delete all users | Delete user 1 |

A table demonstrating REST API conventions

💡
REST APIs depend heavily on HTTP methods, and HTTP methods have limitations of their own: for example, they lack the actions a complex application might require, such as deactivating a user, downloading, or archiving a file. How can we represent those with HTTP methods alone?

RPC (remote procedure call)

💡
RPC URL convention: noun/noun.verb, with the verb in camelCase

For a complex application where almost every interaction is an action performed against the server, RPC is the more suitable choice

💡
Nouns are separated by - and actions are followed by a dot after the noun

💡
Instead of banging our heads against the wall trying to come up with tricky ways to express these actions with regular HTTP verbs, we can simplify the process and let the server handle it by parsing the URL instead
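To make that concrete, here is a minimal sketch of a server-side helper parsing an RPC-style path into a resource noun and a camelCase verb. The path format and the default action are illustrative assumptions, not taken from any specific framework:

```javascript
// Hypothetical sketch: parse an RPC-style path such as "/users.deactivate"
// into a resource noun and an action verb, instead of forcing every action
// into an HTTP method.
function parseRpcPath(path) {
  const [noun, verb] = path.replace(/^\//, "").split(".");
  return { noun, verb: verb || "get" }; // default action when no verb is given
}

console.log(parseRpcPath("/users.deactivate")); // { noun: 'users', verb: 'deactivate' }
```

The server can then dispatch on `verb` instead of on the HTTP method, which is how actions like "deactivate" become expressible.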

GraphQL

GraphQL is a query language for APIs that has gained significant traction recently. It was developed internally by Facebook and has been adopted by API providers like GitHub, Yelp, and Pinterest

GraphQL has a few key advantages over REST and RPC

  • Saves multiple round trips

  • Avoids versioning

  • Smaller payload size

  • Strongly typed

  • Introspection

But all of this comes at a price: the complexity it adds for the API provider. The server needs to do additional processing to parse complex queries and verify parameters, and when working with external developers, those use cases become difficult to understand and optimize for
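As an illustration of the "save multiple round trips" and "smaller payload" points, here is a hedged sketch of a single GraphQL request fetching a user and their repositories at once. The endpoint and field names are invented for the example and do not belong to a real schema:

```javascript
// Hedged sketch: one GraphQL request fetching a user and their repositories
// in a single round trip, asking only for the fields the client needs.
const query = `
  query ($login: String!) {
    user(login: $login) {
      name
      repositories(first: 5) { nodes { name stargazerCount } }
    }
  }
`;

// The client POSTs a JSON body containing the query and its variables, e.g.
// fetch("https://example.com/graphql", { method: "POST", body })
const body = JSON.stringify({ query, variables: { login: "octocat" } });
```

A REST equivalent would typically need one request for the user and another for the repositories, and would return every field of each resource.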

Event-Driven APIs

We're living in a world where fresh data becomes stale in a matter of seconds. Developers who want to stay up to date with changes in data often end up polling the API. However, polling at a high frequency wastes resources, as most API calls will not return any new data

To share events in real-time, there are three common mechanisms: WebHooks, WebSockets, and HTTP Streaming

WebHooks

Polling is a technique FE developers implement with the browser's timer APIs (setTimeout, setInterval, or even requestAnimationFrame) to send requests to a server continuously at short intervals
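For contrast with webhooks, here is a minimal polling sketch. `fetchUpdates` and the 5-second default interval are hypothetical placeholders:

```javascript
// Illustrative polling loop: ask the server for updates on a fixed timer.
// Most calls will come back empty, which is exactly the waste described above.
function startPolling(fetchUpdates, intervalMs = 5000) {
  const timer = setInterval(async () => {
    const updates = await fetchUpdates();
    if (updates.length > 0) {
      console.log("new data:", updates);
    }
  }, intervalMs);
  return () => clearInterval(timer); // call the returned function to stop
}
```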

A webhook only requires creating a simple new HTTP endpoint to receive events, which means you can reuse existing infrastructure. But there are a few things I'd like you to keep in mind when implementing event-driven APIs

  • Failure and retries. Slack retries failed deliveries to an endpoint up to three times (once immediately, then one minute later, and finally five minutes later). Further, if a webhook endpoint has a 95% failure rate, Slack will stop sending events to it and notify the developers

  • Security

  • Firewalls

  • Noise

Configure GitHub WebHook

WebSockets (event-based)

WebSocket is a protocol used to establish a two-way streaming communication channel over a single TCP connection

For a long time, developers relied on hacks on servers and clients alike to keep connections open longer and fake a persistent connection. Today, the state of browser support for WebSocket is much brighter for the end user

// Initialize the browser's native Web-Socket object to local server
const ws = new WebSocket("ws://localhost:8181");

Now we have a WebSocket object on the client to listen for events

  • open

  • message

  • error

  • close

With JS, we can attach listeners for those events

// Send a stock request to the server after a successful connection
ws.onopen = (e) => {
    console.log("Connection established");
    ws.send(JSON.stringify(stock_request));
};

// Receive messages from the server
ws.onmessage = (e) => {
    const parsedData = JSON.parse(e.data);
    updateUI(parsedData);
};

// Handle errors
ws.onerror = (e) => {
    handleErrors(e);
};

// The close event fires when the WebSocket connection closes
ws.onclose = (e) => {
    console.log(`Connection closed: ${e.code} ${e.reason}`);
};

WebSocket close status codes

WebSocket Methods

Alongside the event listeners, WebSocket also provides methods for taking action

  • send

  • close

// send method
ws.send(JSON.stringify(stock_request));
// close method
ws.close();

Testing for WebSocket support

if (window.WebSocket) {
    // WebSocket is supported; open the connection
    const ws = new WebSocket("ws://localhost:8181");
} else {
    // Fall back to polling for older browsers
}

Closing thoughts

There's no one-size-fits-all solution when it comes to selecting an API paradigm. Our job is to discuss with our teammates and choose what works best for us. Some companies, like Slack, support RPC-style APIs, WebSockets, and WebHooks

API security

There are many security best practices to apply to your software application. Beyond those, there is one additional topic we'll face when we expose our API to external developers outside the company

Authentication and Authorization

  • Authentication is the set of credentials we present when the application asks us to log in

  • Authorization is the power users have within the application, based on their specific role

The simplest form, from the early days of the web, is basic authentication. The client sends the HTTP request with an Authorization header containing the word Basic, followed by a space and the Base64 encoding of username:password

Authorization: Basic dXNlcjpwYXNzd29yZA==

But this is not a good way to protect our API: the encoding is trivial to reverse, so anyone who gets hold of the header value can easily decode it and recover our real username and password.
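To see why it is so easy to reverse, note that the header value is plain Base64, as this small sketch shows:

```javascript
// Sketch: the Basic header value is just Base64 over "username:password" -
// an encoding, not encryption, which is why it is trivially reversible.
function basicAuthHeader(username, password) {
  const encoded = Buffer.from(`${username}:${password}`).toString("base64");
  return `Basic ${encoded}`;
}

console.log(basicAuthHeader("user", "password")); // Basic dXNlcjpwYXNzd29yZA==
```

Decoding is just as simple (`Buffer.from(encoded, "base64").toString()`), so intercepting a single request reveals the credentials.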

OAuth

To address the issues with basic authentication, OAuth was introduced in 2007 and has been adopted by many large tech companies, such as Facebook, Twitter, and Google. The technique is simple and straightforward, allowing users to grant permissions without sharing their username and password

The biggest benefit of OAuth is that users do not need to share passwords with applications. For example, say TripAdvisor wants to build an application that will use a user’s Facebook identity, profile, friend list, and other Facebook data. With OAuth, TripAdvisor can redirect that user to Facebook, where they can authorize TripAdvisor to access their information. After the user authorizes the sharing of data, TripAdvisor can then call the Facebook API to fetch this information.

Facebook will expose this data to TripAdvisor based on what the user agreed to:

  • your public profile

  • friend list

  • hometown

  • current city

  • photos

  • email addresses

Finally, if at some point a user would like to revoke TripAdvisor’s access to their Facebook data, they can simply go to their Facebook settings and revoke it without changing their password.

Token generation

With OAuth, applications use an access token to call APIs on behalf of a user. The generation of this token happens in a multistep flow

💡
After a user has successfully logged in with the third-party provider, the application receives an access token and can use that token to request data from protected resources

Scope

OAuth scopes are used to limit an application’s access to user data. For instance, an application might only need to identify a user. Rather than requesting access to all of the user’s data, the application can request access to only the user’s profile information by means of a granular OAuth scope. During authorization, the API provider will display all the requested scopes to the user. This way, users will know what permissions they are granting to an application.

Many APIs offer

  • Read only

  • Read and write

  • Read, write and access direct messages

But many companies go beyond simple read and write scopes

Token and scope

After developers have received an access token, they can begin making API requests with it by setting the HTTP Authorization header

POST /api/chat.postMessage
Host: slack.com
Content-Type: application/json
Authorization: Bearer xoxp-16501860-a24afg234
{
 "channel":"C0GEV71UG",
 "text":"This is a message text",
 "attachments":[{"text":"attachment text"}]
}

When it receives the access token, the API verifies that the token is valid and then checks that it carries the scope the requested action requires. If either check fails, the server returns an error. Many APIs, like GitHub and Slack, return these two headers

  • X-OAuth-Scopes lists the scopes for which the token has been authorized

  • X-Accepted-OAuth-Scopes lists the scopes that the action requires.


curl -H "Authorization: token OAUTH-TOKEN"\
  https://api.github.com/users/saurabhsahni -I
HTTP/1.1 200 OK
X-OAuth-Scopes: repo, user
X-Accepted-OAuth-Scopes: user

//API response when the request is missing a scope
{
    "ok": false,
    "error": "missing_scope",
    "needed": "chat:write:user",
    "provided": "identify,bot,users:read"
}

Token expired and refreshed token

The OAuth protocol allows limiting the validity of the access token issued in the OAuth flow. Many APIs choose to issue tokens that expire in a few hours or days. This way, if a token is compromised, the impact can be contained. If you issue access tokens with limited validity, you need to provide a way for applications to obtain a new token, typically without intervention from the end user. One way to do this is by issuing refresh tokens.

A refresh token is a special type of token used to obtain a new access token when the current access token expires. Applications need to provide the client ID, client secret, and refresh token to generate a new access token. Refresh tokens are a standard way of renewing expired access tokens. API providers, like Google, Salesforce, Asana, Stripe, and Amazon, support refresh tokens.
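A hedged sketch of what a refresh-token exchange might look like. The token endpoint URL and credentials below are placeholders, and real providers vary in the exact parameters they accept:

```javascript
// Hypothetical sketch: build the request that trades a refresh token plus
// client credentials for a fresh access token. Endpoint URL is a placeholder.
function buildRefreshRequest({ clientId, clientSecret, refreshToken }) {
  return {
    url: "https://auth.example.com/oauth/token", // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: clientId,
      client_secret: clientSecret,
    }).toString(),
  };
}
```

A client would typically run this exchange automatically when an API call fails with an expired-token error, then retry the original request with the new access token.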

💡
Short-lived access tokens are more secure

Listing and Revoking Authorizations

For various reasons, a user might want to know which applications can access their data and might want to revoke access to one or more of them. To support this use case, most API providers typically offer a page that lists the applications that a user has authorized, along with the ability to revoke access

As well as providing the ability to revoke authorizations in the UI, it’s a good idea to provide APIs that give users the ability to revoke access tokens and refresh tokens. This way, a developer who wants to revoke a token, due to a compromise or for other reasons, can do so programmatically.

Closing thoughts

Security is difficult, and securing your APIs is difficult. Once you apply a certain security mechanism to your API, it becomes hard to change later, so think it through carefully, and if you're not sure, consult with experts in the security field. If you rely on a well-designed, tested, and open security standard that has been examined by hackers and experts over the years, your chances of running into a major security vulnerability will be far lower.

Design best practices

Make troubleshooting easy

Returning meaningful errors

Examples of error messages for different situations

| Situation | Recommended | Not recommended |
|---|---|---|
| Authentication failed because the token is revoked | token_revoked | invalid_auth |
| Value passed for name exceeded max length | name_too_long | invalid_name |
| The credit card has expired | expired_card | invalid_card |

To begin designing your system of errors, you might map out your backend architecture along the code path of an API request. The goal of this is not to expose your backend architecture but to categorize the errors that happen and identify which ones to expose to developers

After grouping your system errors, think about what level of communication is meaningful for those errors. Some options include HTTP status codes and headers, as well as machine-readable codes or more verbose human-readable error messages returned in the response payload

💡
Keep in mind that you want to return an error that is consistent with a non-error format. For example, if you return a JSON response on a successful request, you should do the same with the error

My rule of thumb: for general HTTP status codes, I translate them to a generic error message, and for specific errors, I translate the message the server returned

if (error.response.status === 403) {
    reject(translate('common_notAuthorized'));
} else if (error.response.status === 409) {
    reject(translate('common_operationFailedPleaseTryAgain'));
} else if (error.response.status === 502 || error.response.status === 504) {
    reject(translate('common_operation_time_out'));
} else {
    // Fall back to the specific error message returned by the server
    reject(translate(error.response.data.error.message));
}
💡
Different companies may assign their own meanings to statuses. In my company, if the internal status differs from 0, we treat the response as incorrect and narrow the status code down from a general error message to a custom one

For even more structured and detailed recommendations on meaningful errors and problem details for HTTP APIs, see RFC 7807.

Design in Practice

Write an API draft

| URI | Inputs | Output | Scope |
|---|---|---|---|
| GET /files | Required: N/A. Optional: include_deleted (bool, default false), limit (int, default 100, max 1000), cursor (string, default null), last_updated_after (timestamp, default null) | 200 OK. Array of $file resources: [{"id": $id, "name": string, "date_added": $timestamp, "last_updated": $timestamp, "size": int, "permalink": $uri, "is_deleted": bool}] | read |
| GET /files/:id | N/A | 200 OK | read |
| PATCH /files/:id | Updatable fields: name (string) and notes (string) | 202 Accepted | write |
| POST /files | Required: name (string). Optional: notes (string) | 201 Created | write |

Section describing the HTTP status code for errors

| Status code | Description | Error response body |
|---|---|---|
| 200 OK | The request succeeded | |
| 201 Created | The request succeeded and a new file was created | |
| 202 Accepted | The file was updated successfully | |
| 400 Bad Request | The request cannot be accepted, often because of a missing parameter or an error such as too large a file | {"error": "missing_parameter", "message": "The following parameters are missing from your request: <parameter1>, <parameter2>"} |
| 401 Unauthorized | No valid access token was provided | {"error": "unauthorized", "message": "The provided token is not valid"} |
| 403 Forbidden | The user may not have permission to view the file | {"error": "forbidden", "message": "You do not have permission to access the requested file"} |
| 404 Not Found | The requested file was not found | {"error": "file_not_found", "message": "The requested file <id> was not found"} |
| 500 Server Error | Something went wrong on the server side | |
💡
Besides this, we can also include additional information about scaling, performance, logging, and security

Manage changes within API

Consistency

Good APIs need to be able to adapt and change along with the evolution of your product or business. Consistency is key to ensuring backward compatibility as your API evolves

💡
An API should be consistent, clear, and well documented. Small inconsistencies around things like naming and URLs add up to a lot of confusion as your API ages.

Inconsistency hurts developers

| It takes a channel name | It takes a channel ID |
|---|---|
| channels.join({channel: "channel-name"}) | channels.invite({channel: "C12345", user: "123456"}) |

Inconsistency in payloads

| repositories.fetch() | repositories.fetchSingle(12345) |
|---|---|
| {"repositories": [{"id": 12345}, {"id": 23456}]} | [{"12345": {...}}] |

Notice the inconsistencies when comparing the payloads of these two very similar endpoints: repositories.fetch returns a response keyed by "repositories", while repositories.fetchSingle(id) returns a response with the id as the key

Solution

  • Automated Testing

  • API description languages (JSON Schema or OpenAPI)

Adding

In the case of adding response fields, adding a new JSON key-value pair is almost always backward compatible and won’t affect developers.

  • Was the field set before?

  • Will everyone want a new field? Think about adding a new endpoint or including a parameter for the users to send the request to receive a different response

Adding a new parameter to an API

Remove

Given that you need to continue to evolve your API, there will be endpoints and fields that you may want to deprecate completely. So, let's communicate with developers and tell them what benefits they can get if they opt in to the new API endpoints

Versioning Scheme

Pagination for APIs

In addition to scaling throughput and evolving your API design, pagination helps your API scale by keeping response sizes bounded as your data grows

Offset-pagination

Advantages and disadvantages

{
  "startRow": 0,
  "endRow": 300,
  "operationType": "fetch"
}

Cursor-Based Pagination

/* Consider this example: a developer wants to obtain the list of a
   user's follower IDs. To fetch the first page of results, the
   developer makes an API request */
https://api.twitter.com/1.1/followers/ids.json?screen_name=vincenguyeh&count=50

/* The Twitter API response */
{
  "ids": [1, 2, 3, 4, 5, 6, 7, 8],
  "next_cursor": 12345
}

/* Using next_cursor for the next request */
https://api.twitter.com/1.1/followers/ids.json?screen_name=vincenguyeh&count=50&cursor=12345
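A client typically loops until the cursor is exhausted. In this sketch, `fetchPage` is a placeholder for the HTTP call, and a next_cursor of 0 ends the loop, following Twitter's convention:

```javascript
// Hedged sketch of a client consuming cursor-based pages. "fetchPage" is
// assumed to return { ids: [...], next_cursor: number }, with next_cursor
// of 0 on the last page.
async function fetchAllFollowerIds(fetchPage) {
  const all = [];
  let cursor; // undefined = first page
  do {
    const page = await fetchPage(cursor);
    all.push(...page.ids);
    cursor = page.next_cursor;
  } while (cursor); // 0 / null / undefined ends the loop
  return all;
}
```

Because the cursor encodes "where the last page ended" rather than a row offset, results stay consistent even if new followers are added between requests.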

Rate-Limiting APIs

Rate limits help to handle surges in traffic or spam by making your application more reliable. By safeguarding your infrastructure and products, rate limits are also protecting developers. There is no API or data for anyone if it’s possible to bring down the entire system via the API. So, let’s dive into what rate-limiting is and how you can implement it for your API

What is a rate limit?

A rate-limiting system controls the rate of traffic sent or received on a network interface. For web APIs, rate-limiting systems are used to control how many times an application or a client is allowed to call an API during a given time interval. Traffic is allowed up to the specified rate, whereas traffic that exceeds that rate might be denied. For example, GitHub’s API allows developers to make up to 5,000 requests per hour

Here are a few tips to decide the rate limit:

  • Define rate-limit per endpoint

  • Define rate-limit based on authentication (APIs requiring user authentication generally apply rate-limiting on a per-user basis, whereas APIs requiring application authentication typically rate-limit on a per-app basis). For unauthenticated API calls, API providers often choose to rate-limit by IP address.

  • Allowing exceptions. You can always make exceptions for specific developers if they request additional quota

Implementation Strategies

There are a few common algorithms used to implement rate limiting

  • Token bucket

  • Fixed-window counter

  • Sliding-window counter
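As an example of the first algorithm, here is a minimal token-bucket sketch; the capacity and refill rate are illustrative values, not a recommendation:

```javascript
// Minimal token-bucket sketch: each request removes a token, and tokens
// refill continuously up to the bucket's capacity. When the bucket is
// empty, the request is rate-limited.
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  tryRemoveToken() {
    const now = Date.now();
    // Refill proportionally to the elapsed time, capped at capacity
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false; // rate limit exceeded
  }
}
```

The bucket's capacity sets the burst size a client may send at once, while the refill rate sets the sustained request rate, which is why this algorithm is popular for APIs that want to tolerate short bursts.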