Where We Come From: An Honest Introduction to GraphQL

March 01, 2019 | 15 min read

This post is a preview of my upcoming book on GraphQL schema design. It introduces GraphQL by going back in time to understand what problems GraphQL is really trying to solve, and why it was designed the way it was. If you’re interested in learning more about the book, I would really appreciate it if you signed up for the newsletter: https://book.productionreadygraphql.com

Just a few years ago, way before anyone had heard of GraphQL, another API architecture was dominating the field of web APIs: endpoint-based APIs. I call an endpoint-based API any API using a technology or architecture that revolves around HTTP endpoints. These may be a JSON API over HTTP, RPC-style endpoints, REST, etc.

These APIs have several advantages and, in fact, are still dominating the field when it comes to web APIs. There is a reason why that’s the case: these endpoints are usually quite simple to implement and do one thing very well. With careful design, endpoint-based APIs can be very well optimized for a particular use case, are easily cacheable, and are discoverable and simple for clients to use.

In more recent years, the number of different types of consumers of web APIs has exploded. While web browsers used to be the main client for web APIs, we now have to make our APIs respond to mobile apps, other servers that are part of our distributed architectures, gaming consoles, and, hell, even your fridge might be calling a web API when you open the door.

Endpoint-based APIs are great when it comes to optimizing an exchange between a client and a server for one functionality or use case. The tricky thing is that, because of that explosion in client types, building a good endpoint became more complex for APIs that need to serve that many use cases. For example, if you were working on an e-commerce platform and had to provide a use case of fetching products for a product page, you would have to consider web browsers, which may be rendering a detailed view of products; a mobile app, which may only display the product images on that page; and your fridge, which may need a very minimal version of the data to avoid sending too much on the wire. What ends up happening in these cases is that we try to build a one-size-fits-all API.

One-Size-Fits-All APIs

What is a One-Size-Fits-All API? It’s an API that tries to answer too many use cases: an API that started out optimized the way we wanted and became very generic because it had to adapt to a lot of different ways of consuming a common use case. These APIs are hard to manage for API developers because of how coupled they are to different clients, and because of how messy they usually become to maintain on the server.

This became quite a common problem with endpoint-based APIs, one sometimes blamed solely on REST APIs (in reality, REST specifically is not to blame, and it provides ways to avoid this problem). Web APIs facing that problem reacted in a number of different ways. We saw some APIs respond with the simplest solution: adding more endpoints, one endpoint per variation. Take for example an endpoint-based API that provides a way to fetch products:

GET /products

To provide the gaming console or mobile version of this use case, certain APIs solved the problem this way:

GET api/playstation/products

GET api/mobile/products

With a sufficiently large web API, you can maybe guess what happened with this approach. The number of endpoints used to answer variations on the same use cases exploded, which made the API extremely hard to reason about for developers, very brittle to changes, and generally a pain to maintain and evolve.

Not everybody chose this approach. Some chose to keep one endpoint per use case, but allow certain query parameters to be used. At the simplest level, this could be a very specific query parameter to select the client version we require:

GET api/products?version=gaming

GET api/products?version=mobile

Some other approaches were more generic, for example partials:

GET api/products?partial=full

GET api/products?partial=minimal

And then some others chose a really generic approach, letting clients basically select what they wanted back from the server. The JSON:API specification calls these sparse fieldsets:

GET api/products?include=author&fields[products]=name,price

Some even went as far as creating a query language in a query parameter. Take a look at this example, inspired by Google’s Drive API:

GET api/products?fields=name,photos(title,metadata/height)

All the approaches we covered make tradeoffs of their own. Most of these tradeoffs are found between optimization (how optimized for a single use case the endpoint is) and customization (how much an endpoint can adapt to different use cases or variations). We’ll cover this tradeoff more in *Chapter X: Optimization vs Customization.*

While most of these approaches can make clients happy, they’re not necessarily the best to maintain as an API developer, and they usually end up being hard to understand for both client and server developers. Around 2012, different companies were hitting this issue, and lots of them started thinking of ways to build a more customizable API with a great developer experience. This was the case for Netflix, when they redesigned their API a few years ago.

Netflix’s Server Adapters

In 2012, Netflix announced that they had completely redesigned their API. In a blog post about that change, here’s the reason they stated:

Netflix has found substantial limitations in the traditional one-size-fits-all (OSFA) REST API approach. As a result, we have moved to a new, fully customizable API.

Knowing that they had to support more than 800 different devices, and keeping in mind the downsides of some of the approaches you’ve just read about, it is not so surprising that they were looking for a better solution to this problem.

The post also mentions something that is really key to understanding where we come from:

While effective, the problem with the OSFA approach is that its emphasis is to make it convenient for the API provider, not the API consumer.

Netflix’s solution involved a new conceptual layer between the typical client and server layers, where client-specific code is hosted on the server. The diagram in their blog post illustrates it well:

[https://medium.com/netflix-techblog/embracing-the-differences-inside-the-netflix-api-redesign-15fd8b3dc49d](https://medium.com/netflix-techblog/embracing-the-differences-inside-the-netflix-api-redesign-15fd8b3dc49d)

While this might sound like just writing many custom endpoints, this architecture makes doing so much more manageable on the server. In their approach, the server code takes care of “gathering content” (fetching data, calling the necessary services) while the adapter layer takes care of formatting this data in the client-specific way. In terms of developer experience, this lets the API team give some control back to client developers, letting them build their own client adapters on the server.

They liked their approach so much that they filed a patent for it, with the pretty general name of “Api platform that includes server-executed client-based code”.

To learn more about that approach, I highly suggest you read the whole blog post.

Soundcloud’s “Backend for Frontend”

Another company was struggling with similar concerns back then: Soundcloud. While migrating from a monolithic architecture to a more service-oriented one, they started running into problems with their existing API:

After a while, it started to get problematic, both in regards to the time needed for adding new features, and also due to the different needs of the platforms. For a mobile API, it’s sensible to have a smaller payload footprint and request frequency than a web API, for example. The existing monolith API didn’t take this into consideration and was developed by another team, unaware of the mobile needs. So every time the apps needed a new endpoint, first the frontend team needed to convince the backend team that this was truly the case, then a story needed to be written, prioritized, picked, developed and communicated to the frontend team.

Rings a bell, doesn’t it? This is very similar to the problems Netflix was trying to solve, and to the problems that can be caused by implementing the customization solutions we talked about earlier in this chapter.

Their solution to this was quite interesting: instead of adding advanced customization options to their main API, they decided that each use case would get its own API server. When you think about it, it makes a lot of sense: this allows developers to optimize each use case very well without needing to worry about other use cases, which is exactly what an endpoint-based API is good at.

They called this pattern “Backends for Frontends”, or BFF. A case study from Thoughtworks includes a great visualization of the pattern:

[https://www.thoughtworks.com/insights/blog/bff-soundcloud](https://www.thoughtworks.com/insights/blog/bff-soundcloud)

As you can see, each BFF handles one use case, or a few very similar ones, which allows developers to write manageable APIs for one use case and avoid falling into the trap of writing a generic “One-Size-Fits-All” API.

Fast Forward to 2015

In September 2015, Facebook officially announced the release of GraphQL, which has since skyrocketed in popularity.

It might not be a surprise to you, after the other solutions we’ve covered so far, that it was also in 2012 that Facebook started rethinking the way they worked with APIs. They were frustrated by very similar problems:

We were frustrated with the differences between the data we wanted to use in our apps and the server queries they required. We don’t think of data in terms of resource URLs, secondary keys, or join tables; we think about it in terms of a graph of objects and the models we ultimately use in our apps like NSObjects or JSON. There was also a considerable amount of code to write on both the server to prepare the data and on the client to parse it.

Once again, frustration with the difference between the data clients wanted and what servers returned for different use cases. However, they also add that they had a different mental model for how Facebook’s data should be represented. The last sentence above once again shows the difficulty of maintaining and evolving an API that has to be generic enough to handle so many use cases and variations.

Enter GraphQL

If you’re reading this book, chances are you’re already familiar with GraphQL’s basic concepts. Whether you are or not, it’s interesting to look at the components of GraphQL with that history in mind.

GraphQL is a specification: not a library, not a product, not a database. The specification defines, under the larger GraphQL name, a few interesting parts:

  1. The GraphQL Query Language (Graph Query Language Query Language, sounds weird, I know)
  2. The GraphQL Schema Language and type system
  3. An algorithm for how a GraphQL implementation should execute queries against a given type system (a small sketch of all three parts follows below).
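
To make these three parts more concrete, here is a minimal sketch. The type, fields, and response below are made up for illustration; they are not taken from any particular API. The schema language describes what a server can do, the query language lets a client select a subset of it, and the execution algorithm resolves the query against that type system:

```graphql
# Schema language: the server declares its capabilities as a type system.
type Query {
  product(id: ID!): Product
}

type Product {
  name: String
  price: Float
}

# Query language: a client asks for exactly the fields it needs.
query {
  product(id: "1") {
    name
    price
  }
}

# Executing the query against the type system produces data that mirrors the
# query's shape, e.g. {"data": {"product": {"name": "...", "price": 10.0}}}
```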

What is interesting about these three specification items is to look at how they’re designed to answer some of the problems we’ve covered in this chapter so far. To me, there are three main things APIs had to get better at in these contexts:

  1. Embracing different types of clients, different use cases
  2. Allowing clients to take more ownership of how the API server responds to their use cases.
  3. Keeping a manageable system and offering a solid developer experience even with items 1 and 2 in place.

We’ll cover each of those, and how the GraphQL specification tries to answer them.

Embracing different types of clients and use cases

We’ve already seen different approaches endpoint-based APIs use to provide alternative versions of use cases and client-based modifications. GraphQL chose to go to the far end of the spectrum when it comes to customization. Remember this style:

GET api/products?fields=name,price,photos/url,size

GET api/shop/1?fields=name,location/lat,long

This is already quite a customizable API. If we push that concept even further, we can create a more complete query language and drop the need for multiple endpoints entirely:

GET api/graphql?query="{ shop { name location { lat long } products { name price photos { url size } } } }"

GraphQL decided to push the customizability so far that it doesn’t even need dedicated endpoints per use case, because the world of capabilities, of use cases, is represented in a single “Graph”, which we can query using a query language.

This query, in an easier-to-read format, is a good example of GraphQL’s power:

query {
  shop {
    name
    location {
      lat
      long
    }
    products {
      name
      price
      photos {
        url
        size
      }
    }
  }
}

In a single query, we’ve traversed many relationships and selected our own carefully crafted use case, specific to our client’s needs. The reason our client knows this specific query is within the realm of capabilities of our API server is GraphQL’s type system, the schema. We’ll focus on this one in the next chapter.

Any GraphQL API exposes a strongly typed schema to clients, describing the entire set of possibilities the API offers. Compared to REST, it is a bit like starting from the root endpoint and discovering links to resources. It is machine readable and human discoverable.
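
For instance, the query we saw earlier could only be written against a schema that declares those exact capabilities. Here is a sketch of what such a schema might look like in the GraphQL schema language; the type and field definitions are assumptions made to match the example query, not a schema taken from a real API:

```graphql
# A sketch of the type system our example query assumes.
type Query {
  shop: Shop
}

type Shop {
  name: String
  location: Location
  products: [Product]
}

type Location {
  lat: Float
  long: Float
}

type Product {
  name: String
  price: Float
  photos: [Photo]
}

type Photo {
  url: String
  size: Int
}
```

Because this type system is exposed to clients, tooling can validate a query like the one above before it is ever sent, and client developers can explore the API’s capabilities without reading server code.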

Many articles written on GraphQL focus on the fact that GraphQL solves the under-fetching and over-fetching problems. This is totally true, but to me it is a side effect of something much more crucial: enabling any number of clients to coexist with, and survive the evolution of, a single API server.

GraphQL definitely embraces the difference in clients; in fact, it puts a lot more power in the hands of client developers than what we were used to with endpoint-based APIs. By giving clients total control over the shape of the data they want to receive, the usual coupling between endpoint results and client requests is softened by a lot.

We talked about allowing customization earlier in this chapter, and we saw that it often resulted in a system that’s very hard to manage and offers a poor experience for server developers. Is that the case with GraphQL?

Keeping a manageable system and offering a solid developer experience

Netflix’s client-based server adapters and Soundcloud’s BFFs were able to give customization to clients without losing a good developer experience on the backend. GraphQL is no different. This is due to several attributes of the GraphQL specification that make the coexistence of clients quite an enjoyable experience for both client and server developers.

First, it’s very important to note that the GraphQL query language only allows selecting fields declaratively and explicitly. There is no SELECT *, no way of selecting all fields on a particular type, and this is for a very good reason. It allows server developers to add functionality, new fields, and new use cases without existing clients ever needing to care about these changes, or ever being impacted by them. Each individual GraphQL query selects a subset of our graph of possibilities. You can imagine a GraphQL query as a way to craft your own custom endpoint, but one that doesn’t put the same burden on API developers as creating an actual custom endpoint would. That’s because the GraphQL schema exposes the “graph” of capabilities, but does not care how exactly it will be used. This, of course, is a design decision that makes a tradeoff: a GraphQL API will rarely be as optimized as an endpoint answering the needs of a specific client. However, this was a conscious decision, made to allow our API to handle different use cases while keeping a sane development experience.
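
To illustrate, here is a small, hypothetical example building on the schema sketched earlier (the inventoryCount field is an invented addition, not part of any real schema). The server grows its graph of capabilities, and an existing query is completely unaffected because it only ever receives the fields it explicitly asked for:

```graphql
# The Product type gains a new capability...
type Product {
  name: String
  price: Float
  photos: [Photo]
  inventoryCount: Int   # newly added field
}

# ...but an existing client query keeps returning exactly what it selected.
# There is no "select all fields" syntax that would silently pull in the
# new field, so this client does not need to change or even know about it.
query {
  shop {
    products {
      name
      price
    }
  }
}
```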

What this property of GraphQL gives us is that we can provide as many ways of enabling use cases as we want in a GraphQL schema. Clients can then select what they require, but don’t need to pay the cost of supporting other use cases. Even the server developers don’t need to pay that cost, because of how a GraphQL schema is usually developed, with each field or use case implemented in isolation. Of course, the power we give to clients comes with great responsibility. Oftentimes this leads to an experience that requires a bit more work from clients to get what they want and to discover how to consume their use cases. We’ll talk about this tradeoff, and how we can help with it, in Chapter X: Documenting a GraphQL API.

Think about Netflix’s solution and Soundcloud’s solution. When we compare them to what GraphQL offers, even if the final solutions look quite different, they can be quite similar. GraphQL’s “engine”, the algorithm that can execute a client’s query against a GraphQL API schema, is in fact kind of our own powerful, yet complex, “client-specific server adapter”. A GraphQL API can act as these “BFFs”, but within just one service, because of how decoupled clients can be from how the schema evolves, and how schema developers can add functionality without worrying about adding weight to existing use cases.

With this context in mind, we can make better decisions when it comes to designing a great GraphQL API. We will frequently go back to GraphQL’s raison d’être as we look for best practices throughout the book. As you can see, GraphQL was born in a very specific context to solve a particular problem. It is an excellent way to write APIs, but other ways exist as well, as we explored in this chapter. This book is not here to convince you to use GraphQL for everything, but rather to teach you how to do it correctly if you do choose to use it. Enjoy!

Thanks for reading this preview / draft! I hope you enjoyed it; I think there’s value in introducing GraphQL this way. Let me know what you thought: https://twitter.com/xuorig

If you've enjoyed this post, you might like the Production Ready GraphQL book, which I have just released!

Thanks for reading 💚
