GraphQL Server in the Browser using WebAssembly

· 5 min read
Shadaj Laddad
Co-founder @ Exograph

On the heels of our last feature release, we are excited to announce a new feature: Exograph Playground in the browser! Thanks to the magic of WebAssembly, you can now run Exograph servers entirely in your browser, so you can try Exograph without even installing it. That's right, we run a Tree Sitter parser, typechecker, GraphQL runtime, and Postgres all in your browser.

See it in action

Head over to the Exograph Playground to try it out. It shows a few pre-made models and populates data for you to explore. It also shows sample queries to help you get started.

A few models require authentication; you can click the "Key" icon in the center of the GraphiQL component to simulate logging in.

When you open the playground, the top-left portion shows the model. The top-right portion shows tabs to give insight into Exograph's inner workings. The bottom portion shows the GraphiQL interface to run queries and mutations.


You can also create your own model by replacing the existing one in the playground. The playground supports sharing your project as a gist.

How it works

The Exograph Playground runs entirely in the browser. Besides the initial loading of static assets (like the WebAssembly binary and JavaScript code), you don't need to be connected to the internet. This is possible because we compiled Exograph, written in Rust, to WebAssembly.

Playground Architecture

Builder

The builder plays a role equivalent to the exo build command. It reads the source code, parses and typechecks it, and produces an intermediate representation (equivalent to the exo_ir file). The builder also includes elements equivalent to the exo schema command to compute the SQL schema and migrations.

On every change to the source code, the builder validates the model, reports any errors, and produces an updated intermediate representation. You can see the errors in the "Problems" tab.

The builder also produces the initial SQL schema and migrations for the model as you change it. The playground will automatically apply migrations as needed. You can see the schema in the "Schema" tab.

Runtime

The runtime is equivalent to the exo-server command. It processes the intermediate representation the builder produced and serves the GraphQL API. When you run a query, the playground sends it to the runtime, which computes the SQL query, runs it against the database, and returns the results.

The runtime also sends logs to the playground, which you can see in the "Traces" tab. This aids in understanding what Exograph is doing under the hood, including the SQL queries it executes.

Postgres

The playground uses pglite, which is Postgres compiled to WebAssembly. Currently, we store Postgres data in memory, so you will lose the data when you refresh the page. We plan to add support for saving the data to local storage.

GraphiQL

GraphiQL is the standard GraphQL query interface. You can use it to run queries and mutations against the Exograph server. The playground populates the initial query for you to get started.

Sharing Playground Project as a Gist

The playground supports sharing the content as a gist. You can load such a gist using the gist query parameter. For example, to load the gist with ID abcd, you can use the URL https://exograph.dev/playground?gist=abcd.

You can create a gist to share your model, the seed data, and the initial query populated in GraphiQL. The files in the gist follow the same layout as an Exograph project directory, except that :: is used as the directory separator.

  • src::index.exo: The model
  • tests::init.gql: The seed data. See Initializing seed data for more details.
  • playground::query.graphql: The initial query in GraphiQL
  • README.md: The README to show in the playground
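
For illustration, mapping a gist file name back to its project path is a simple separator substitution (a hypothetical helper for illustration, not the playground's actual code):

fn gist_name_to_path(name: &str) -> String {
    // "src::index.exo" -> "src/index.exo"
    name.replace("::", "/")
}

fn main() {
    assert_eq!(gist_name_to_path("src::index.exo"), "src/index.exo");
    assert_eq!(gist_name_to_path("tests::init.gql"), "tests/init.gql");
}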

We will keep improving the sharing experience in the future.

What's next

This is just the beginning of making Exograph easier to explore. Here are a few planned features to make the playground even better (your feedback is welcome!):

  • Support JavaScript Modules: In non-browser environments, Exograph supports Deno Modules. Deno cannot be compiled to WebAssembly, so we cannot run it in the browser. However, browsers already have a JavaScript runtime 🙂, which we will support in the playground.
  • Persistent Data: We plan to add support for saving the data to local storage so you can continue working on your data model across sessions.
  • Improved Sharing: We will add a simple way to create gists for your playground content and share it with others.

Try it out and let us know what you think. If you develop a cool model, publish it as a gist and share it with us on Twitter or Discord. We would appreciate a star on GitHub!


Exograph at the Edge with Cloudflare Workers

· 5 min read
Ramnivas Laddad
Co-founder @ Exograph

We are excited to announce that Exograph can now run as a Cloudflare Worker! This new capability allows deploying Exograph servers at the edge, closer to your users, and with lower latency.

Cloudflare Workers is a good choice for deploying APIs due to the following characteristics:

  • They scale automatically to handle changing traffic patterns, including scaling down to zero.
  • They have an excellent cold start time (in milliseconds).
  • They are deployed to Cloudflare's global network, so workers can be placed optimally for better latency and performance.
  • They have generous free tier limits that can be sufficient for many applications.

With Cloudflare as a deployment option, a question remains: How do we develop backends? Typical backend development can be complex, time-consuming, and expensive, requiring specialized teams to ensure secure and efficient execution. This is where Exograph shines. With Exograph, developers:

  • Focus only on defining the domain model and authorization rules.
  • Get inferred APIs (currently GraphQL, with REST and RPC coming soon) that execute securely and efficiently.
  • Use the provided tools to develop locally, deploy to the cloud, migrate database schemas, etc.
  • Use telemetry to monitor production usage.

Combine Cloudflare Workers with Exograph, and you get cost-effective development and deployment.

In this blog, we will show you how to deploy Exograph backends on Cloudflare Workers and how to use Hyperdrive to reduce latency.

A taste of Exograph on Cloudflare Workers

Exograph provides a CLI command to create a WebAssembly distribution suitable for Cloudflare. It also creates starter configuration files to develop locally and deploy to the cloud.

exo deploy cf-worker

The command will provide instructions for setting up the database connection. You can create a new database or use an existing one and add its URL as the EXO_POSTGRES_URL secret. Cloudflare Workers also integrates with database providers such as Neon, which lets you add this secret through Cloudflare's dashboard.

To run the worker locally, you can use the following command:

npx wrangler dev
...
Using vars defined in .dev.vars
Your worker has access to the following bindings:
- Vars:
  - EXO_POSTGRES_URL: "(hidden)"
  - EXO_JWT_SECRET: "(hidden)"
⎔ Starting local server...
[wrangler:inf] Ready on http://localhost:8787

And when you are ready to deploy to the cloud, run:

npx wrangler deploy
...
Uploaded todo (2.19 sec)
Published todo (0.20 sec)
https://todo.<domain>.workers.dev
...

Please see the Exograph documentation for more details.

Using Hyperdrive to reduce latency

Let's measure the latency of a request with a query that fetches all todos. Here, we have deployed a worker that connects to a Postgres database managed by Neon.

oha -c 1 -n 10 -m POST -d '{ "query": "{todos { id }}"}' <worker-url>
Slowest: 0.5357 secs
Fastest: 0.2436 secs
Average: 0.2872 secs

The mean response time of 287ms is good but not stellar. The main reason for the increased latency is that the worker has to open a new connection to the Postgres database for every request. If connection establishment time is the problem, connection pooling is a solution. For Cloudflare Workers, connection pooling comes in the form of Hyperdrive.

Behind the scenes

To realize these latency benefits, we had to overcome a few challenges. You can read more about them in our previous blog posts on "Latency at the Edge with Rust/WebAssembly and Postgres": Part 1 and Part 2.

To use this connection pooling option, you create a Hyperdrive using either the npx wrangler hyperdrive create command or the Cloudflare dashboard. Then add the following to your wrangler.toml:

EXO_HYPERDRIVE_BINDING = "<binding-name>"

[[hyperdrive]]
binding = "<binding-name>"
id = "..."

The worker will now use Hyperdrive to manage the database connections, significantly reducing the latency of the requests. Let's measure the latency again:

oha -c 1 -n 10 -m POST -d '{ "query": "{todos { id }}"}' <worker-url>

Slowest: 0.3588 secs
Fastest: 0.0879 secs
Average: 0.0967 secs

Much better! We have reduced the mean response time to 97ms, significantly faster than the previous 287ms.

How does it work?

A Cloudflare Worker is, at its core, a V8 runtime capable of running JavaScript code. V8 also allows JavaScript to load and execute WebAssembly modules. To make Exograph run on Cloudflare Workers, we compiled Exograph to WebAssembly. Currently, Rust has the best tooling to target WebAssembly, so here our decision to implement Exograph in Rust paid off!

How it works

For developers using Exograph, we ship a WebAssembly binary distribution of exo-server, which provides bindings to the Cloudflare Workers runtime and implements a few optimizations (discussed in the latency posts linked above). It also creates JavaScript scaffolding to interact with the WebAssembly binary.

Roadmap

Our current Cloudflare Worker support is a preview. We plan to add more features and improvements in upcoming releases. Here is a high-level roadmap:

  • Improved performance: While the performance in the current release is already pretty good, especially with Hyperdrive, Exograph's ahead-of-time compilation offers more opportunities to improve it, and we will explore them.
  • JS Integration: Exograph embeds Deno as the JavaScript engine, but that won't work in Cloudflare Workers. However, Cloudflare Workers' primary runtime is JavaScript (WebAssembly runs as a guest), so we will support integrating Exograph with the host system's JavaScript runtime.
  • Trusted documents: The current release doesn't yet support trusted documents, but we are working on it.

What's Next?

Exograph's WebAssembly target is a significant milestone in our journey to bring new possibilities to the Exograph ecosystem. But this is just the beginning. The next blog post will showcase another exciting feature due to this new capability. Stay tuned!

We are eager to know how you plan to use Exograph in Cloudflare Workers. You can reach us on Twitter or Discord with your feedback. We would appreciate a star on GitHub!


Latency at the edge with Rust/WebAssembly and Postgres: Part 2

· 9 min read
Ramnivas Laddad
Co-founder @ Exograph

In the previous post, we implemented a simple Cloudflare Worker in Rust/WebAssembly connecting to a Neon Postgres database and measured end-to-end latency. Without any pooling, we got a mean response time of 345ms.

The two issues we suspected for the high latency were:

Establishing a connection to the database: The worker creates a new connection for each request. Since establishing a secure connection takes 7+ round trips, it is not surprising that latency is high.

Executing the query: The query method in our code causes the Rust Postgres driver to make two round trips: to prepare the statement and to bind/execute the query. It also sends a one-way message to close the prepared statement.

In this part, we will deal with connection establishment time by introducing a pool. We will fork the driver to deal with multiple round trips (which incidentally also helps with connection pooling). We will also learn a few things about Postgres's query protocol.

Source code

The source code with all the examples explored in this post is available on GitHub. With it, you can perform measurements and experiments on your own.

Introducing connection pooling

If the problem is establishing a connection, the solution could be a pool. This way, we can reuse the existing connections instead of creating a new one for each request.

Application-level pooling

Could we use a pooling crate such as deadpool?

While that would be a good option in a typical Rust environment (and Exograph uses it), it is not an option in the Cloudflare Worker environment. A worker is considered stateless and should not maintain any state between requests. Since a pool is a stateful object (holding the connections), it can't be used in a worker. If you try to use it, you will get the following runtime error on every other request:

Error: Cannot perform I/O on behalf of a different request. I/O objects (such as streams, request/response bodies, and others) created in the context of one request handler cannot be accessed from a different request's handler. This is a limitation of Cloudflare Workers which allows us to improve overall performance. (I/O type: WritableStreamSink)

When the client makes the first request, the worker creates a pool and successfully executes the query. For the second request, the worker tries to reuse the pool, but it fails due to the error above, leading to the eviction of the worker by the Cloudflare runtime. For the third request, a fresh worker creates another pool, and the cycle continues.

The error is clear: we cannot use application-level pooling in this environment.

External pooling

Since application-level pooling won't work in this environment, could we try an external pool? Cloudflare provides Hyperdrive for connection pooling (and more, such as query caching). Let's try that.

#[event(fetch)]
async fn main(_req: Request, env: Env, _ctx: Context) -> Result<Response> {
    let hyperdrive = env.hyperdrive("todo-db-hyperdrive")?;

    let config = hyperdrive
        .connection_string()
        .parse::<tokio_postgres::Config>()
        .map_err(|e| worker::Error::RustError(format!("Failed to parse configuration: {:?}", e)))?;

    let host = hyperdrive.host();
    let port = hyperdrive.port();

    // Same as before
}

Besides how we get the host and port, the rest of the code (to connect to the database and execute the query) remains the same as the one in part 1.

You will need to create a Hyperdrive instance using the following command (replace the connection string with your own):

npx wrangler hyperdrive create todo-db-hyperdrive --caching-disabled --connection-string "postgres://..."

We disable query caching since it would cause most database calls to be skipped. With caching enabled, only the first request would hit the database (due to the empty cache); Hyperdrive would likely serve subsequent requests (which execute the same SQL query in our setup) from its cache. Since we want our latency measurements to include database calls, a comparison to the baseline with caching turned on would be apples-to-oranges.

note

In a real-world scenario, you may enable caching to balance database load and data freshness.

Next, you will need to put the Hyperdrive information in wrangler.toml:

[[hyperdrive]]
binding = "todo-db-hyperdrive"
id = "<your-hyperdrive-id>"

Let's test this worker.

curl <worker-url>
INTERNAL SERVER ERROR

Hmm... that failed.

Fast moving ground

This is due to an issue with the current Hyperdrive implementation. The support for prepared statements is still new and (currently) works only with caching enabled. I have made the Cloudflare team aware of it. I think this will be fixed soon 🤞.

As things change, I will add updates here.

What's going on? Postgres has two kinds of query protocols:

  1. Simple query protocol: With this protocol, you must supply the SQL as a string and include any parameter values in the query (for example, SELECT * FROM todos WHERE id = 1). The driver makes one round trip to execute such a query.
  2. Extended query protocol: With this protocol, you may have the SQL query with placeholders for parameters (for example, SELECT * FROM todos WHERE id = $1), and its execution requires a preparation step. We will go into detail in the next section.

Let's explore both protocols.

Hyperdrive with the simple query protocol

To explore the simple query protocol, we will use the simple_query method. Since it doesn't allow specifying parameters, we inline them.

#[event(fetch)]
async fn main(_req: Request, env: Env, _ctx: Context) -> Result<Response> {

    // Hyperdrive setup as before

    let rows = client
        .simple_query("SELECT id, title, completed FROM todos WHERE completed = true")
        .await
        .map_err(|e| worker::Error::RustError(format!("Failed to query: {:?}", e)))?;

    ...
}

Does it work, and how does it perform?

oha -c 1 -n 100 <worker-url>
Slowest: 0.2871 secs
Fastest: 0.0476 secs
Average: 0.0633 secs

That's more like it! The mean response time is 63ms, a significant improvement over the previous 345ms.

Since the simple query protocol needed only one round trip, Hyperdrive was able to use an existing connection without too much additional logic, so it worked without any issues and performed well.

But... the simple query protocol forces us to use string interpolation to inline the parameters in the query, which is a big no-no in the world of databases due to the risk of SQL injection attacks. So let's not do that!
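
To see why inlining is dangerous, here is a minimal sketch (illustrative names, not from the driver) of how an attacker-controlled parameter rewrites a query's meaning:

fn build_query(title: &str) -> String {
    // Inlining the parameter into the SQL text via string interpolation
    format!("SELECT id, title FROM todos WHERE title = '{}'", title)
}

fn main() {
    // A crafted input turns the WHERE clause into a tautology:
    // SELECT id, title FROM todos WHERE title = '' OR '1'='1'
    println!("{}", build_query("' OR '1'='1"));
}

Parameterized execution (as with query or query_with_param_types) sends values separately from the SQL text, so no input can change the statement's structure.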

Hyperdrive with the extended query protocol

Let's go back to the extended query protocol and figure out why Hyperdrive might be struggling with it. As it happens, all external pooling services deal with the same issue; for example, only recently did pgBouncer start to support it.

When using the extended query protocol through query, the driver executes the following steps:

  1. Prepare: Sends a prepare request. This contains a name for the statement (for example, "s1") and a query with the placeholders for parameters to be provided later (for example, $1, $2 etc.). The server sends back the expected parameter types.
  2. Bind/execute: Sends the name of the prepared statement and the parameters serialized in the format appropriate for the types. The server looks up the prepared statement by name and executes it with the provided parameters. It sends back the rows.
  3. Close: Closes the prepared statement to free up the resources on the server. In tokio-postgres, this is a fire-and-forget operation (doesn't wait for a response).

When you add a connection pool, the driver must invoke the "bind/execute" and "close" steps on the same connection it used for "prepare". This requires some bookkeeping and is a source of complexity.

What if we combine all three steps into a single network package? This is what Exograph's fork of tokio-postgres (fork, PR) does. The client must specify the parameter values and their types (we no longer perform a round trip to discover parameter types). This way, the driver can serialize the parameters in the correct format in the same network package.

#[event(fetch)]
async fn main(_req: Request, env: Env, _ctx: Context) -> Result<Response> {

    // Hyperdrive setup as before

    let rows: Vec<tokio_postgres::Row> = client
        .query_with_param_types(
            "SELECT id, title, completed FROM todos where completed <> $1",
            &[(&true, Type::BOOL)],
        )
        .await
        .map_err(|e| worker::Error::RustError(format!("query_with_param_types: {:?}", e)))?;

    ...
}

How does this perform?

oha -c 1 -n 100 <worker-url>
Slowest: 0.2883 secs
Fastest: 0.0466 secs
Average: 0.0620 secs

Nice! The mean response time is now 62ms, which matches the simple query protocol (63ms).

Summary

Let's summarize the mean response times for the various configurations:

Pooling ⬇️ / Method ➡️    query        simple_query    query_with_param_types
None                      345          312             312
Hyperdrive                see above    63              62

(All timings in milliseconds.)

With connection pooling through Hyperdrive, we have brought the mean response time down by a factor of 5.5 (from 345ms to 62ms)!

Round trip cost

The 33ms improvement between query (345ms) and query_with_param_types (312ms) without pooling is likely due to saving the extra round trip for the "prepare" step, but this needs further investigation.

The source code is available on GitHub, so you can check this yourself. If you find any improvements or issues, please let me know.

So what should you use? Assuming that the issue with Hyperdrive and query method has been fixed:

  • If you don't want to use Hyperdrive, use the query_with_param_types method with the forked driver. It does the job in one round trip and gives you the best performance without any risk of SQL injection attacks.
  • If you want to use Hyperdrive:
    • If you frequently make the same queries, the query method will likely do better. Hyperdrive may cache the "prepare" part of the step, making subsequent queries faster.
    • If you make a variety of queries, you can use the query_with_param_types method. Since you won't execute the same query frequently, Hyperdrive's prepare statement caching is unlikely to help. Instead, this method's fewer round trips will be beneficial.

Watch Exograph's blog for more explorations and insights as we ship its Cloudflare Worker support. You can reach us on Twitter or Discord. We would appreciate a star on GitHub!


Latency at the Edge with Rust/WebAssembly and Postgres: Part 1

· 6 min read
Ramnivas Laddad
Co-founder @ Exograph

We have been working on enabling Exograph on WebAssembly. Since we implemented Exograph in Rust, it was natural to target WebAssembly. Soon, you will be able to build secure, flexible, and efficient GraphQL backends with Exograph and run them at the edge.

During our journey towards WebAssembly support, we learned a few things to improve the latency of Rust-based programs targeting WebAssembly in Cloudflare Workers connecting to Postgres. This two-part series shares those learnings. In this first post, we will set up a simple Cloudflare Worker connecting to a Postgres database and get baseline latency measurements. In the next post, we will explore various ways to improve it.

Even though we experimented in the context of Exograph, the learnings should apply to anyone using WebAssembly in Cloudflare Workers (or other platforms that support WebAssembly) to connect to Postgres.

Second Part

Read Part 2, which improves latency by a factor of 6!

Rust Cloudflare Workers

Cloudflare Workers is a serverless platform that allows you to run code at the edge. The V8 engine forms the underpinning of the Cloudflare Worker platform. Since V8 supports JavaScript, it is the primary language for writing Cloudflare Workers. However, JavaScript running in V8 can load WebAssembly modules. Therefore, you can write some parts of a worker in other languages, such as Rust, compile it to WebAssembly, and load that from JavaScript.

Cloudflare Workers' Rust tooling enables writing workers entirely in Rust. Behind the scenes, the tooling compiles the Rust code to WebAssembly and loads it in a JavaScript host. The Rust code you write must compile to the wasm32-unknown-unknown target. Consequently, it must follow the restrictions of WebAssembly: for example, it cannot access the filesystem or network directly. Instead, it must rely on host-provided capabilities. Cloudflare provides such capabilities through the worker-rs crate. This crate, in turn, uses wasm-bindgen to export a few JavaScript functions to the Rust code. For example, it allows opening network sockets; we will use this capability later to integrate Postgres.

Here is a minimal Cloudflare Worker in Rust:

use worker::*;

#[event(fetch)]
async fn main(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {
    Ok(Response::ok("Hello, Cloudflare!")?)
}

To deploy, you can use the npx wrangler deploy command, which compiles the Rust code to WebAssembly, generates the necessary JavaScript code, and deploys it to the Cloudflare network.

Before moving on, let's measure the latency of this worker. We will use Ohayou (oha), an HTTP load generator written in Rust. We measure latency using a single concurrent client (-c 1) and one hundred requests (-n 100).

oha -c 1 -n 100 <worker-url>
...
Slowest: 0.2806 secs
Fastest: 0.0127 secs
Average: 0.0214 secs
...

It takes an average of 21ms to respond to a request. This is a good baseline to compare when we add Postgres to the mix.

Focusing on latency

We will focus on measuring the lower bound for the latency of the round trip for a request to a worker that queries a Postgres database before responding. Here is our setup:

  • Use a Neon Postgres database with the following table and no rows to focus on network latency (and not database processing time).

    CREATE TABLE todos (
      id SERIAL PRIMARY KEY,
      title TEXT NOT NULL,
      completed BOOLEAN NOT NULL
    );
  • Implement a Cloudflare Worker that responds to GET by fetching all completed todos from the table and returning them as a JSON response (of course, since there is no data, the response will be an empty array, but the use of a predicate will allow us to explore some practical considerations where the queries will have a few parameters).

  • Place the worker, database, and client in the same region. While we can't control worker placement, Cloudflare will place the worker close to either the client or the database (which we've put in the same region).

All right, let's get started!

Connecting to Postgres

Let's implement a simple worker that fetches all completed todos from the Neon Postgres database. We will use the tokio-postgres crate to connect to the database.

use std::str::FromStr;

use tokio_postgres::config::Host;
use worker::*;

#[event(fetch)]
async fn main(_req: Request, env: Env, _ctx: Context) -> Result<Response> {
    let config = tokio_postgres::config::Config::from_str(&env.secret("DATABASE_URL")?.to_string())
        .map_err(|e| worker::Error::RustError(format!("Failed to parse configuration: {:?}", e)))?;

    let host = match &config.get_hosts()[0] {
        Host::Tcp(host) => host,
        _ => {
            return Err(worker::Error::RustError("Could not parse host".to_string()));
        }
    };
    let port = config.get_ports()[0];

    // Open a raw socket through the Cloudflare runtime; TLS is negotiated
    // on this socket (StartTls), not by the Postgres driver.
    let socket = Socket::builder()
        .secure_transport(SecureTransport::StartTls)
        .connect(host, port)?;

    // PassthroughTls (defined in the accompanying source code) is a
    // passthrough TlsConnect implementation, since the socket above
    // already handles encryption.
    let (client, connection) = config
        .connect_raw(socket, PassthroughTls)
        .await
        .map_err(|e| worker::Error::RustError(format!("Failed to connect: {:?}", e)))?;

    // Drive the connection on JavaScript's event loop.
    wasm_bindgen_futures::spawn_local(async move {
        if let Err(error) = connection.await {
            console_log!("connection error: {:?}", error);
        }
    });

    let rows: Vec<tokio_postgres::Row> = client
        .query(
            "SELECT id, title, completed FROM todos WHERE completed = $1",
            &[&true],
        )
        .await
        .map_err(|e| worker::Error::RustError(format!("Failed to query: {:?}", e)))?;

    Ok(Response::ok(format!("{:?}", rows))?)
}

There are several notable things (especially if you are new to WebAssembly):

  • In a non-WebAssembly platform, you would get the client and connection directly using the database URL, which opens a socket to the database. For example, you would have done something like this:

    let (client, connection) = config.connect(tls).await?;

    However, that won't work in a WebAssembly environment since there is no way to connect to a server (or, for that matter, access any other resource such as the filesystem). This is the core characteristic of WebAssembly: it is a sandboxed environment that cannot access resources unless they are explicitly provided (through functions exported to the WebAssembly module). Therefore, we use Socket::builder().connect() to create a socket (which, in turn, uses the TCP Socket API provided by the Cloudflare runtime). Then, we use config.connect_raw() to layer the Postgres protocol over that socket.

  • We would have marked the main function with, for example, #[tokio::main] to bring in an async executor. However, here too, WebAssembly is different: we must rely on the host to provide the async runtime. In our case, the Cloudflare Workers runtime provides one (based on JavaScript's event loop).

  • In a typical Rust program, we would have used tokio::spawn to spawn a task. However, in WebAssembly, we use wasm_bindgen_futures::spawn_local, which runs in the context of JavaScript's event loop.

We will deploy it using npx wrangler deploy. You will need to create a database and add the DATABASE_URL secret to the worker.

You can test the worker using curl:

curl https://<worker-url>

And measure the latency:

oha -c 1 -n 100 <worker-url>
...
Slowest: 0.8975 secs
Fastest: 0.2795 secs
Average: 0.3441 secs

So, our worker takes an average of 345ms to respond to a request. Depending on the use case, this can be between okay-ish and unacceptable. But why is it so slow?

We are dealing with two issues here:

  1. Establishing a connection to the database: The worker creates a new connection for each request. Since establishing a secure connection takes 7+ round trips, it is not surprising that latency is high.
  2. Executing the query: The query method in our code causes the Rust Postgres driver to make two round trips: to prepare the statement and to bind/execute the query. It also sends a one-way message to close the prepared statement.

How can we improve? We will address that in the next post by exploring connection pooling and possible changes to the driver. Stay tuned!


GraphQL needs new thinking

· 6 min read
Ramnivas Laddad
Co-founder @ Exograph

Backend developers often find implementing GraphQL backends challenging. I believe the issue is not GraphQL but the current implementation techniques.

This post is a response to Why, after 6 years, I'm over GraphQL by Matt Bessey. I recommend that you read the original blog before reading this. The sections in this blog mirror the original blog, so you can follow them side-by-side.

Before starting Exograph, we implemented several GraphQL backends and faced many of the same issues mentioned in the blog. Exograph is our attempt to address these issues and provide a better way to implement GraphQL. Let's dive into the issues raised in the original blog and how Exograph addresses them.

Attack surface

The expressive nature of GraphQL queries makes it attractive to frontend developers. However, this exposes a large attack surface, where a client can craft a query that can overwhelm the server.

Authorization

Problem: GraphQL authorization is a nightmare to implement.

Solution: A declarative way to define entity or field-level authorization rules.

While authorization is an issue with any API, it is particularly challenging with GraphQL. A typical REST API implementation considers access to only one resource at a time. In contrast, GraphQL implementations need to process queries that can ask for multiple resources. Processing such a query while enforcing authorization rules for each query field is a backend developer's nightmare.

Exograph provides a declarative way to define authorization rules at the entity or field level. Exograph's runtime will enforce them. And by default, Exograph assumes that no one can access an entity unless explicitly allowed, forcing an explicit decision on the developer.

For example, the following rule ensures that a non-admin user can query any department but only published products. An admin user, in contrast, can query or mutate any product or department. With this structure in place, no matter how the query is structured (get products along with their department, or departments along with their products), the client can only access the resources they are authorized to access.

context AuthContext {
  @jwt role: String
}

@postgres
module EcommerceDatabase {
  @access(query=self.published || AuthContext.role=="admin",
          mutation=AuthContext.role=="admin")
  type Product {
    // ...
    published: Boolean
    department: Department
  }

  @access(query=true, mutation=AuthContext.role=="admin")
  type Department {
    // ...
    products: Set<Product>?
  }
}

In a way, Exograph allows a REST-like thought process, where you consider authorization of each resource in isolation. Exograph's query engine ensures that no matter how the query is structured, the client will only get access to the authorized resources.

Rate limiting

Problem: GraphQL queries can ask for multiple resources in a single query, overwhelming the backend.

Solution: Trusted documents, query depth limiting, and rate limiting.

Since GraphQL can ask for multiple resources in a single query, it is easy to overwhelm the backend. For example, a client can issue a single deeply nested query, which will cause many trips to the database.

Exograph addresses this issue in multiple ways: trusted documents (so clients can execute only pre-approved operations), query depth limiting, and rate limiting.

Query Parsing

Problem: GraphQL queries can overwhelm the parser.

Solution: The same solution as above.

GraphQL queries express the client's intent at a finer level than REST APIs, which requires the backend to parse the query and thus presents an attack surface.

Exograph's solution for rate limiting also addresses this issue. For example, by allowing only trusted documents, you can ensure clients cannot make arbitrary queries and overwhelm the backend.

Performance

In a traditional GraphQL implementation, the backend developer writes a resolver for each entity to fetch the data, which can lead to the N+1 problem, where the backend makes multiple queries to fetch the data. Balancing authorization and the N+1 problem can be tricky.

Data fetching and the N+1 problem

Problem: GraphQL queries can lead to the N+1 problem.

Solution: Defer data fetching to a query planner.

This is a classic issue with GraphQL implementations, where a client can ask for multiple resources in a single query. A common solution is the data loader pattern, which batches the queries and fetches the data in fewer round trips but requires careful implementation (especially when considering authorization).

Exograph relieves this burden through its query engine. Exograph's query planner maps a GraphQL query to (typically) a single SQL query.

Authorization and the N+1 problem

Problem: Balancing authorization and the N+1 problem can be tricky.

Solution: Make authorization rules a part of the query planning process.

Even if you were to use a data loader to solve the N+1 problem, once you also consider authorization, you end up fighting modularity (see the next point) against performance.

Exograph's query engine incorporates authorization rules while planning the query, so the same query planning process solves this problem.

Coupling

Problem: GraphQL makes it quite hard to keep code modular.

Solution: Reduce the amount of code by leaning on a declarative approach.

GraphQL makes it quite hard to keep code modular, especially when dealing with authorization and performance, which leads to code-tangling and code-scattering to enforce authorization rules.

First, in Exograph, you write very little code. You define your data model and authorization rules, and Exograph plans and executes queries for that model.

Second, Exograph's declarative nature allows you to express authorization rules alongside the data model. Exograph query engine—and not your code—enforces these rules.

Complexity

Problem: GraphQL makes balancing authorization, performance, and modularity tricky.

Solution: Simplify implementing GraphQL through a declarative way to define data models and authorization rules.

GraphQL can be complex to implement, balancing authorization, performance, and modularity. Left unchecked, this can lead to a complex and hard-to-maintain codebase with possible security issues.

Exograph simplifies implementing GraphQL through a declarative way to define data models and authorization rules. Exograph query engine takes care of planning and executing queries efficiently. It also provides tools for dealing with various aspects, from quick development feedback to production concerns.

Conclusion

We believe that with the right level of abstraction, you can get secure, performant, and modular backend implementation. With Exograph, we provide a declarative approach to get all the benefits of GraphQL without the downsides.

What do you think? Reach out to us on Twitter or Discord with your feedback.


Exograph supports pgvector for embeddings

· 9 min read
Ramnivas Laddad
Co-founder @ Exograph

We are happy to introduce Exograph's first integration with AI: embedding support through the pgvector extension. This new feature allows storing and querying vector representations of unstructured data directly in your Postgres database. It complements other Exograph features like access control, indexing, interceptors, and migrations, simplifying the development of AI applications.

Embeddings generated by AI models from OpenAI, Claude, Mistral, and others condense the semantic essence of unstructured data into a small-size vector. The distance between two vectors indicates the similarity of the corresponding data. This capability opens up many possibilities, such as building recommendation systems, classifying data, and implementing AI techniques like RAG (Retrieval Augmented Generation).

Exograph's access control mechanism ensures that the search results are filtered based on the user's access rights. For example, when implementing a RAG application, Exograph ensures that it retrieves and feeds the AI model only the documents the user can access. Access control at the data model level eliminates a source of privacy issues in AI applications.

Creating RAG applications with Exograph and pgvector

In this blog post, we will explore the concept of embeddings and how Exograph supports it.

Overview

Exograph's embedding support comes through a new Vector type, which uses the pgvector extension internally. This extension enables storing vectors in the database alongside regular data, thus simplifying the integration of embeddings into your application.

Here is an example of a document module with embedding support for the content field:

@postgres
module DocumentModule {
  @access(true)
  type Document {
    @pk id: Int = autoIncrement()
    title: String
    content: String
    contentVector: Vector
  }
}

Once you add a field of the Vector type, Exograph takes care of many aspects:

  • Creating and migrating the database schema.
  • Supporting mutation APIs to store the embeddings in that field.
  • Extending retrieval and ordering APIs to use the distance from the given vector.

Let's explore the concept of embeddings and Exograph's support for it.

What is an Embedding?

Picture this: You aim to compile and search through your organization's documents efficiently. Traditional methods fall short; they merely compare strings, missing nuanced connections. For instance, a search for "401k" won't reveal documents mentioning just "Roth"—even though they both deal with the concept of "retirement savings". Enter embeddings.

Embeddings transform data into a numeric vector, capturing its semantic essence.

A mental model behind embeddings is that each dimension represents a semantic aspect (also referred to as a feature or a concept), and the value in the vector represents the data's closeness to that aspect. Consider words like "car", "motorcycle", "dog", and "elephant". If you were to create a vector representation manually, you might define a few features and assign each word a value based on how well it fits them. For instance, you might define the features "Transportation", "Heavy", and "Animal" and then assign a value to each word for each feature. The table below illustrates this concept:

Word          Transportation   Heavy   Animal
Car           0.9              0.8     0.1
Motorcycle    0.8              0.5     0.1
Dog           0.1              0.1     0.9
Elephant      0.6              0.9     0.9

Here, you created a three-dimensional vector for each word.

Car:        [0.9, 0.8, 0.1]
Motorcycle: [0.8, 0.5, 0.1]
Dog:        [0.1, 0.1, 0.9]
Elephant:   [0.6, 0.9, 0.9]

In practice, you would use an AI model like OpenAI's text-embedding-3-small, which generates the vector based on its training data. The dimensions of the resulting vector are opaque and lack human-interpretable labels such as "Transportation"; instead, you only have an index and its corresponding value.

Finding similar documents through embeddings involves computing the distance between vectors using metrics like the Euclidean distance or cosine similarity and selecting the closest vectors. For example, if you are looking for documents similar to "Truck", you would compute its vector representation (say, [0.95, 0.9, 0.1]) and find documents with vectors close to it (probably "Car" and "Motorcycle" in the example above).
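
To make "distance" concrete, here is a minimal Rust sketch of cosine distance over the toy three-dimensional vectors above (real embeddings have hundreds of dimensions, and pgvector performs this computation inside the database):

fn cosine_distance(a: &[f32], b: &[f32]) -> f32 {
    // 1 - (a · b) / (|a| |b|): smaller means more similar
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm = |v: &[f32]| v.iter().map(|x| x * x).sum::<f32>().sqrt();
    1.0 - dot / (norm(a) * norm(b))
}

fn main() {
    let truck = [0.95, 0.9, 0.1];
    let car = [0.9, 0.8, 0.1];
    let dog = [0.1, 0.1, 0.9];
    // "Truck" is far closer to "Car" (~0.001) than to "Dog" (~0.77)
    println!("truck-car: {:.3}", cosine_distance(&truck, &car));
    println!("truck-dog: {:.3}", cosine_distance(&truck, &dog));
}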

Using embeddings in your application requires two steps:

  1. When adding or updating a document, compute its vector representation, store the vector representation, and link it to the document.
  2. When searching for similar documents, compute the vector representation of the query, find the closest vectors, and retrieve the associated documents. Typically, you'd sort by vector proximity and select the top matches.

Exograph helps with these steps, simplifying AI integration into your applications.

Embeddings in Exograph

Exograph introduces a new type Vector. Fields of this type provide the ability to:

  • Store and update the vector representation.
  • Filter and order based on the distance from the provided value.
  • Specify parameters such as vector size, indexing, and the distance function to compute similarity.

The Vector type feature plays well with the rest of Exograph's capabilities. For example, you can apply access control to your entities, so searching and sorting automatically consider the user's access rights, thus eliminating a source of privacy issues. You can even specify field-level access control to, for example, expose the vector representation only to privileged users.

Let's use the Document model shown earlier but with a few annotations to control a few key aspects:

@postgres
module DocumentModule {
  @access(true)
  type Document {
    @pk id: Int = autoIncrement()
    title: String
    content: String

    @size(1536)
    @index
    @distanceFunction("l2")
    contentVector: Vector?
  }
}

First, note that the contentVector field of the Vector type is marked optional. This supports the typical pattern of initially adding documents without an embedding and adding the vector representation asynchronously.

Next, note the annotations for the contentVector field to specify a few key aspects:

  • Size: By default, Exograph sets up the vector size to 1536, but you can specify a different size using the @size annotation. Exograph's schema creation and migration will factor in the vector size.

  • Indexing: Creating indexes speeds up the search and ordering. When you annotate a Vector field with the @index annotation, during schema creation (and migration), Exograph sets up a Hierarchical Navigable Small World (HNSW) index.

  • Distance function: The core motivation for using vectors is to find vectors similar to a target. There are multiple ways to compute similarity, and based on the field's characteristics, one may be more suitable than others. Since it is a field's characteristic, you can annotate Vector fields using the @distanceFunction annotation to specify the distance function. By default, Exograph uses the "cosine" distance function, but you can specify the "l2" distance function (L2 or Euclidean distance) or "ip" (inner product). Exograph will automatically use this function when filtering and ordering. It will also automatically factor in the distance function while setting up the index.

Access control

We use a wide-open access control policy (@access(true)) to keep things simple. In practice, you would use a more restrictive access control policy to ensure only authorized users can access the document's content and vector representation. For example, you could introduce the notion of a "document owner" and allow access only to the owner or users with specific roles (see Evolving Access Control with Exograph for more details). This way, you can ensure that the search results are filtered based on the user's access rights, and when used as context in AI applications, the generated content is based on the user's access rights.

Let's see how to use the Vector type in Exograph from the GraphQL API.

Embedding in GraphQL

Once the model is defined, you can use the Exograph GraphQL API to interact with the model.

To insert a document and its vector representation, you can use the following mutation (updating the vector representation is similar):

mutation ($title: String!, $content: String!, $contentVector: [Float!]!) {
  createDocument(
    data: { title: $title, content: $content, contentVector: $contentVector }
  ) {
    id
  }
}

Note how the Vector field surfaces as a list of floats in the APIs (and not as an opaque custom scalar). This design choice simplifies the integration with the AI models that produce embedding and client code that uses vectors.

Now, we can query our documents. A common query with embedding is to retrieve the top matching documents. You can do it in Exograph with the following query:

query topThreeSimilar($searchVector: [Float!]!) {
  documents(
    orderBy: { contentVector: { distanceTo: $searchVector, order: ASC } }
    limit: 3
  ) {
    id
    title
    content
  }
}

Limiting the number of documents is often sufficient for a typical search or RAG application. However, you can also use the similar operator to filter documents based on the distance from the search vector:

query similar($searchVector: [Float!]!) {
  documents(
    where: {
      contentVector: {
        similar: { distanceTo: $searchVector, distance: { lt: 0.5 } }
      }
    }
  ) {
    id
    title
    content
  }
}

You can combine the orderBy and where clauses to return the top three similar documents only if they are within a certain distance:

query topThreeSimilarDocumentsWithThreshold(
  $searchVector: [Float!]!
  $threshold: Float!
) {
  documents(
    where: {
      contentVector: {
        similar: { distanceTo: $searchVector, distance: { lt: $threshold } }
      }
    }
    orderBy: { contentVector: { distanceTo: $searchVector, order: ASC } }
    limit: 3
  ) {
    id
    title
    content
  }
}

You can combine vector-based queries with other fields to filter and order based on other criteria. For example, you can filter based on the document's title along with a similarity filter and order based on the distance from the search vector:

query topThreeSimilarDocumentsWithTitle(
  $searchVector: [Float!]!
  $title: String!
  $threshold: Float!
) {
  documents(
    where: {
      title: { eq: $title }
      contentVector: {
        similar: { distanceTo: $searchVector, distance: { lt: $threshold } }
      }
    }
    orderBy: { contentVector: { distanceTo: $searchVector, order: ASC } }
    limit: 3
  ) {
    id
    title
    content
  }
}

These filtering and ordering capabilities make it easy to focus on the business logic of your application and let Exograph handle the details of querying and sorting based on vector similarity.

What's Next?

This is just the beginning of empowering Exograph applications to leverage the power of AI with minimal effort. We will continue to enhance this feature to support more AI models.

For more detailed documentation, please see the embeddings documentation.

We are excited to see what you build with this new capability. You can reach us on Twitter or Discord with your feedback.


Exograph supports trusted documents

· 7 min read
Ramnivas Laddad
Co-founder @ Exograph

Exograph 0.7 introduces support for trusted documents, also known as "persisted documents" or "persisted queries". This new feature makes Exograph even more secure by allowing only specific queries and mutations, thus shrinking the API surface. It also offers other benefits, such as improving performance by reducing the payload size.

Trusted documents are a pre-arranged way for clients to convey "executable documents" (roughly queries and mutations, but see the GraphQL spec and Exograph's documentation for a distinction) they would be using. The server then allows executing only those documents.

tip

For a good introduction to trusted documents, including the reason to prefer the term "trusted documents", please see GraphQL Trusted Documents. Exograph's trusted documents support follows the same general principles outlined there. We hope that someday a GraphQL specification will standardize this concept.

This blog post explains what trusted documents are and how to use them in Exograph.

Conceptual Overview

Why Trusted Documents?

Consider the todo application we developed in an earlier blog (source code). This single-screen application allows creating, updating, and deleting todos, as well as viewing them by completion status. The application needs only the following queries and mutations (the application uses fragments, but we will keep it simple here).

The client application needs two queries to fetch todos:

Query to get todos by completion status
query getTodosByCompletion($completed: Boolean!) {
  todos(where: { completed: { eq: $completed } }, orderBy: { id: ASC }) {
    id
    completed
    title
  }
}
Query to get all todos
query getTodos {
  todos(orderBy: { id: ASC }) {
    id
    completed
    title
  }
}

The application also needs mutations to create, update, and delete a todo:

Mutation to create a todo
mutation createTodo($completed: Boolean!, $title: String!) {
  createTodo(data: { title: $title, completed: $completed }) {
    id
  }
}
Mutation to update a todo
mutation updateTodo($id: Int!, $completed: Boolean!, $title: String!) {
  updateTodo(id: $id, data: { title: $title, completed: $completed }) {
    id
  }
}
Mutation to delete a todo
mutation deleteTodo($id: Int!) {
  deleteTodo(id: $id) {
    id
  }
}

Contrast this with what the server offers. For example, the server provides:

  • Querying a todo by its ID, but the client application doesn't need it, since it doesn't display a single todo.
  • Flexible filtering, but the client application needs only by completion status.
  • Sorting by other fields or a combination, but the client application only sorts by the ID in ascending order.
  • Creating, updating, and deleting in bulk, but the client application only mutates one todo at a time.

While Exograph (and other GraphQL servers) offer flexible field selection for all queries and mutations, each client query or mutation only needs a specific set of fields.

What are Trusted Documents?

Because the client app requires only specific queries and mutations, it can inform the server using trusted documents. These documents are files with all the executable queries and mutations the client app needs. For instance, in the case of the todo app, a tool would create a file like this:

{
  "2a...": "query getTodosByCompletion($completed: Boolean!) ...",
  "3b...": "query getTodos ...",
  "4c...": "mutation createTodo($completed: Boolean!, $title: String!) ...",
  "5d...": "mutation updateTodo($id: Int!, $completed: Boolean!, $title: String!) ...",
  "6e...": "mutation deleteTodo($id: Int!) ..."
}

Here, the key is a hash (SHA256, for example) of the query or mutation, and the value is the query or mutation itself.
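
As an illustration, computing such a key with the sha2 crate might look like the following sketch (assuming the hash is taken over the exact document text; the precise canonicalization is up to the tool that generates the manifest):

use sha2::{Digest, Sha256};

/// Hex-encoded SHA-256 of a GraphQL document's text (illustrative)
fn document_hash(document: &str) -> String {
    Sha256::digest(document.as_bytes())
        .iter()
        .map(|byte| format!("{:02x}", byte))
        .collect()
}

fn main() {
    let doc = "query getTodos { todos(orderBy: { id: ASC }) { id completed title } }";
    println!("{} -> {}", document_hash(doc), doc);
}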

Benefits of using Trusted Documents

Using trusted documents reduces the effective API surface area and offers several benefits, such as:

  • Preventing attackers from executing arbitrary queries and mutations.
  • Reducing the bandwidth by sending only the hash of the query instead of the query itself.
  • Providing a focus for testing the actual queries and mutations used by the client application.
  • Allowing server-side optimizations such as caching query parsing and pre-planning query execution.

You may wonder if there is a different way to achieve the same benefits. Specifically, couldn't the server provide a narrower set of queries and mutations? For example, could it simply not offer the "get a todo by its ID" API? While feasible, this approach undermines one of GraphQL's strengths: enabling client-side development without continuous server team involvement. For example, the client application may want to offer a route such as todos/:id to show a particular todo. It will then need to fetch a todo by its ID. Without this query, the client must request the server team to add it, consuming time and resources. Trusted documents streamline this process: they allow the client to effortlessly express requirements, like fetching todos by ID. Thus, the server should maintain a reasonably flexible API while clients communicate their needs through trusted documents.

Using Trusted Documents in Exograph

Like any other feature in Exograph, using trusted documents is straightforward. All you need to do is put the trusted documents in the trusted-documents directory! Exograph will automatically use them.

Workflow

A typical workflow to use trusted documents in Exograph is as follows:

  1. Generate trusted documents: In this (typically automated) step, a tool such as graphql-codegen or generate-persisted-query-manifest examines the client application and generates the trusted documents.
  2. Convey them to the server: Exograph expects trusted documents in either supported format in the trusted-documents directory. The server will then accept only the trusted documents.
  3. Set up the client: The client may now send the hashes of the executable documents instead of the full text. To do this, the client can use persistedExchange with URQL, @apollo/persisted-query-lists with Apollo, or some custom code.

For a concrete implementation of this workflow, please see the updated todo application: Apollo version and URQL version.

Organizing Trusted Documents

Typically, you will have more than one client, each with a different set of trusted documents, connected to an Exograph server. Exograph supports this by allowing them to be placed anywhere in the trusted-documents directory. For example, you could support multiple clients and their versions by placing trusted documents in the following structure:

todo-app
├── src
│   └── ...
└── trusted-documents
    ├── web
    │   ├── user-facing.json
    │   └── admin-facing.json
    ├── ios
    │   ├── core.json
    │   └── admin.json
    └── android
        ├── version1.json
        └── version2.json

The server will allow trusted documents in any of the files.

Using Trusted Documents in Development

If you have added the trusted-documents directory, Exograph enforces trusted documents in production mode without any way to opt out, extending its general secure by default philosophy.

In development mode, however, where exploring the API is common, Exograph makes a few exceptions. When you start the server in development mode (either exo dev or exo yolo), it allows untrusted documents under the following conditions:

  • If the GraphQL playground executes an operation.
  • If the development server is started with the --enforce-trusted-documents=false option.

In either case, it will implicitly trust any introspection query, thus allowing GraphQL Playground and tools such as GraphQL Code Generator to work.

Try it out

You can see this support in action through the updated todo application: Apollo version and URQL version.

For more detailed documentation, including how to create trusted documents and set up the client, please see the trusted documents documentation.

Let us know what you think of trusted documents support in Exograph. You can reach us on Twitter or Discord.


Riding on Railway

· 5 min read
Ramnivas Laddad
Co-founder @ Exograph

The driving principle behind Exograph is to make it easy for developers to build their backends by letting them focus only on inherent—not incidental—complexity; Exograph should handle everything else. With Exograph, you can create a GraphQL server with a rich domain model with authorization logic, Postgres persistence, and JavaScript/TypeScript business logic in just a few lines of code. But what about deploying to the cloud?

The new platform-as-a-service offerings make deploying Exograph apps easy, and we make it easier by providing specific integrations. In this blog, I will focus on Railway. With its Postgres support and GitHub integration, you can create an Exograph server from scratch and deploy it in under three minutes!

A quick overview of Exograph

Let's take a quick look at Exograph from a deployment perspective, which will make the Railway integration easy to understand. Exograph separates the build and execution steps and ships two corresponding binaries: exo and exo-server.

The exo binary, Exograph's CLI tool, takes care of everything from building the app, migrating the schema, and running integration tests to acting as a development server. It also simplifies deployment to cloud platforms, as we will see shortly.

Running an Exograph application requires that you build the app using exo build and then run the server using exo-server. The exo build command processes the index.exo file (and any files imported from it) and bundles JS/TS files. The result is an intermediate representation file: index.exo_ir.

Building an Exograph app
exo build

The exo-server binary is the runtime, whose sole purpose is to run Exograph apps. It processes the index.exo_ir file and runs the server.

Running an Exograph app
exo-server

Please see the Exograph Architecture for more details.

These tools are also available as Docker images, which makes them easy to integrate with cloud platforms; they form the core of our Railway integration.

Railway Integration

The exo deploy railway command generates a Dockerfile (Dockerfile.railway) and a railway.toml that points to it. The generated Dockerfile contains two stages (sketched below):

  1. Build stage: Uses the exo Docker image to build the app and copies the result to the runtime stage. This stage also runs database migrations.
  2. Runtime stage: Runs the server using the exo-server Docker image.
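Here is a simplified sketch of that two-stage structure. This is illustrative only: the image names, paths, and the placement of the migration step are assumptions, not the exact file exo deploy railway generates.

# Build stage (assumed name for the published cli image)
FROM ghcr.io/exograph/cli:latest AS builder
WORKDIR /app
COPY . .
# Build the intermediate representation; the generated file also runs
# database migrations around this step (elided here)
RUN exo build

# Runtime stage (assumed name for the published server image)
FROM ghcr.io/exograph/server:latest
# Copy the build output (path assumed)
COPY --from=builder /app/target ./target
CMD ["exo-server"]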

Deploying to Railway then involves:

  • Pushing all the code to GitHub (or using railway up to push the code directly to Railway)
  • Creating a Railway project
  • Creating a Postgres service (unless you use an external Postgres)
  • Creating a new service pointing to the GitHub repo (unless you use railway up)
  • Binding environment variables for the Postgres database to the service

Let's see how this works in practice.

Creating a new Exograph project

This is easy!

exo new todo
cd todo

This creates a new project with a simple todo app and initializes Git. If you are wondering, this doesn't generate much code. Here is the entire model!

@postgres
module TodoDatabase {
  @access(true)
  type Todo {
    @pk id: Int = autoIncrement()
    title: String
    completed: Boolean
  }
}

If you would like, you can run exo yolo to try it out locally.
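Once the server is up, this model yields a complete GraphQL API. For example, you can create and list todos (the same operation shapes appear later in this blog):

mutation {
  createTodo(data: { title: "Deploy to Railway", completed: false }) {
    id
  }
}

query {
  todos {
    id
    title
    completed
  }
}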

Deploying to Railway

You can use the Postgres offered by Railway or an external one. Let's look at both options.

Using Railway's Postgres

Railway offers a Postgres database service, which has the advantage of keeping both the Exograph app and the database on the same platform.

exo deploy railway --use-railway-db=true

This will generate files necessary to deploy to Railway and provide step-by-step instructions. Here is a video of the entire process: we create an Exograph project from scratch and deploy it to Railway in under three minutes.

note

Currently, the process involves using the dashboard to create the Postgres database and bind it to the service. We will keep an eye on Railway's tooling to make this easier. For example, if Railway supports Infrastructure-as-Code (a requested feature), we can lean on it to reduce the whole process to a single command.

Using Neon's Postgres

Using an external Postgres database is also easy and has the advantage of specialization offered by database providers. We will use Neon for this example.

Passing --use-railway-db=false to exo deploy railway will generate files for use with an external Postgres database.

exo deploy railway --use-railway-db=false

Other than using an external Postgres, the process is the same as using Railway's Postgres.

Using the playground

Following best practice, we do not enable introspection and the playground in production. But that makes working with the GraphQL server challenging. Exograph's playground solves this problem by using the schema from the local model (the app's source code) while sending requests to the remote server. This way, you can explore the API without enabling introspection and the playground in production.
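For example, assuming your Railway deployment is reachable at https://<your-app>.up.railway.app (URL shape illustrative), you can run:

exo playground --endpoint https://<your-app>.up.railway.app/graphql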

Let us know what you think of our Railway integration. You can reach us on Twitter or Discord.


What's new in Exograph 0.4

· 5 min read
Ramnivas Laddad
Co-founder @ Exograph
Shadaj Laddad
Co-founder @ Exograph

We are excited to announce the release of Exograph 0.4! This release introduces several new features and enhancements, many in response to our users' feedback (thank you!). While we will explore some of these features in future blogs, here is a quick summary of what's new since the 0.3 version.

NPM modules support

Exograph offers a Deno integration to write custom business logic in TypeScript or JavaScript. Until now, you could only use Deno modules. While the Deno module ecosystem continues to expand, it is not as rich as the Node ecosystem. Recognizing this, Deno added support for NPM modules to tap into the vast range of NPM packages, and Exograph 0.4 extends this support through our integration. You can now harness NPM modules to check out with Stripe, send messages with Slack, interact with AWS, and so on.

As an illustration, here is how you would send emails using Resend. First, you would define a mutation:

@deno("email.ts")
module Email {
@access(true)
mutation sendAnnouncement(): Boolean
}

And implement it using the resend NPM module:

import { Resend } from "npm:resend";

const resend = new Resend("re_...");

export async function sendAnnouncement(): Promise<boolean> {
  await resend.emails.send({
    from: "...",
    to: "...",
    subject: "Exograph 0.4 is out!",
    html: "<p>Exograph <strong>0.4</strong> is out with support for npm modules, playground mode, and more!</p>",
  });

  return true;
}

Compared to the example shown on the Resend site, the difference is the npm: prefix in the import statement. This prefix tells Exograph to look for the module in the npm registry.

Railway integration

Exograph's deployment has always been easy since the server is just a simple binary with everything needed to run. However, we strive to make it easier for specific platforms. Exograph 0.4 now supports Railway as a deployment target! Here is a quick video where we create an Exograph project from scratch and deploy it to Railway in under three minutes.

To support this kind of integration, where the cloud platform can also run the build step, we now publish two Docker images: cli with the exo binary and server with the exo-server binary.

exo playground

A recommended practice for GraphQL deployments is to turn off introspection in production, which is the default in Exograph. However, exploring the GraphQL API with such a server becomes difficult without code completion and other goodies. Exograph 0.4 introduces a new exo playground command that uses the schema from the local server and executes GraphQL operations against the specified remote endpoint.

exo playground --endpoint https://<server-url>/graphql
Starting playground server connected to the endpoint at: https://<server-url>/graphql
- Playground hosted at:
http://localhost:9876/playground

You will see a UI element in the playground showing the specified endpoint. Besides this difference, the playground will behave identically to the local playground, including autocomplete, schema documentation, query history, and integrated authentication.

note

This doesn't bypass the recommended practice of turning off introspection in production. The exo playground command is useful only if you have access to the server's source code, in which case you already know the schema anyway!

See the video with the Railway integration above for a quick demo of the exo playground command.

Access control improvements

Exograph lets you express access control rules precisely. In version 0.4, we enhanced this expressive power with higher-order functions. Consider a document management system where users can read documents only if they have read permission and mutate them only if they have write permission. The following access control rules express this requirement.

context AuthContext {
  @jwt("sub") id: Int
}

@postgres
module DocsDatabase {
  @access(
    query = self.permissions.some(permission => permission.user.id == AuthContext.id && permission.read),
    mutation = self.permissions.some(permission => permission.user.id == AuthContext.id && permission.write)
  )
  type Document {
    @pk id: Int = autoIncrement()
    ...
    permissions: Set<Permission>
  }

  @access(...)
  type Permission {
    @pk id: Int = autoIncrement()
    document: Document
    user: User
    read: Boolean
    write: Boolean
  }

  @access(...)
  type User {
    @pk id: Int = autoIncrement()
    ...
    permissions: Set<Permission>
  }
}

With this setup, no user can read or write a document without the appropriate permission. The some function lets you express this requirement in a single line. Internally, Exograph lowers it to an SQL predicate for efficient execution.

Currently, we support the some higher-order function, which matches JavaScript's Array.prototype.some. It takes a predicate and returns true if the predicate returns true for any element in the collection.
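To give a feel for the lowering, the query rule above conceptually becomes an EXISTS predicate along the following lines. This is an illustrative sketch with assumed table and column names, not the exact SQL Exograph emits:

-- Lowering of: self.permissions.some(p => p.user.id == AuthContext.id && p.read)
SELECT d.*
FROM documents d
WHERE EXISTS (
  SELECT 1
  FROM permissions p
  WHERE p.document_id = d.id
    AND p.user_id = $1 -- AuthContext.id from the JWT
    AND p.read
);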

Other improvements

Besides these major features, we continue to improve Exograph to fit more use cases and simplify the developer experience.

For the Postgres plugin, for example, you can now place tables in a non-public schema and express deeply nested creates and updates in a single mutation. It also allows clients to supply the primary key value for UUID fields when creating an entity.

In version 0.3, we introduced a friction-free integration with Clerk for authentication. Since then, we have extended this support to Auth0 as an authentication provider! While Exograph generically supports all compliant OIDC providers, the playground integration for Clerk and Auth0 makes it easy to try out APIs that require authentication. Along the way, we also improved key rotation support for the OIDC authentication mechanism.

Let us know what you think of these new features and what you would like to see in the future. You can reach us on Twitter or Discord.


Authentication in Exograph with Auth0

· 4 min read
Ramnivas Laddad
Co-founder @ Exograph

On the heels of Clerk integration, we are excited to announce that Exograph now supports Auth0 as an authentication provider! Exograph's JWT support seamlessly works with Auth0 out of the box. Additionally, Exograph's playground integrates Auth0's authentication UI to simplify the exploration of access control rules.

Our code will be the same as in the previous blog: since both Clerk and Auth0 support OpenID Connect (OIDC), nothing needs to change.

context AuthContext {
  @jwt("sub") id: String
}

@postgres
module TodoDatabase {
  @access(self.userId == AuthContext.id)
  type Todo {
    @pk id: Int = autoIncrement()
    title: String
    completed: Boolean
    userId: String = AuthContext.id
  }
}

This is all the code you need for a multi-user todo app! With the rule specified in the @access annotation, each user can only query or mutate their todos.

note

Exograph's @jwt annotation works with any compliant OIDC provider, so you may use it with any other provider. However, the playground integration is currently only available with Clerk and Auth0.

To try it out, create an Auth0 project following their instructions. Pay particular attention to configuring "Allowed Callback URLs" (for this blog, you may set it to http://localhost:9876/playground, http://localhost:3000).

Then you can start the server using exo yolo with the EXO_OIDC_URL environment variable set to your Auth0 URL:

EXO_OIDC_URL=https://<your-auth0-host>.auth0.com exo yolo

This will start the server with Auth0 as the authentication provider.

Auth0 integration with Exograph Playground

A typical workflow for building an app is to try out the queries and mutations the frontend needs in the GraphQL playground and then copy them into the frontend code. However, in the presence of authentication, grabbing a JWT token (typically from a partially built UI) and passing it in the Authorization header can be cumbersome. Exograph Playground makes it easy to try out APIs that require authentication by integrating Auth0's UI into the playground.

Try it out. For example, you can execute the following query to get all todos:

query {
  todos {
    id
    title
    completed
  }
}

If you do so without logging in, you will get an error:

{
  "errors": [
    {
      "message": "Not authorized"
    }
  ]
}

You can log in by clicking the "Authenticate" button in the playground. This will open Auth0's login page in the playground. You can log in using any configured provider (such as Google or Facebook). Once logged in, you can try the query again, and you will see the todos.

Similarly, you can execute mutations. For example, the following mutation will create a new todo:

mutation {
  createTodo(data: { title: "Buy milk", completed: false }) {
    id
  }
}

Due to access control rules, you can create a todo only for the logged-in user.

Lastly, try out the query above by logging in as another user. You will see that the todos created by the first user are not visible to the second user.

Using the frontend

With the confidence that the API works as expected, building a frontend using the same queries and mutations is easy. The accompanying source code is a Next.js app that allows creating todos, marking them as completed, and updating or deleting them.

Clone the examples repository and try it out!

Summary

Combining the declarative backend of Exograph with the authentication of Auth0 is a simple matter of setting the EXO_OIDC_URL environment variable. The playground support makes it a breeze to try out various authentication scenarios to ensure that users access only the data they are supposed to.

We would love to hear your feedback on this integration on our Discord.
