
2 posts tagged with "Rust"


GraphQL Server in the Browser using WebAssembly

· 5 min read
Shadaj Laddad
Co-founder @ Exograph

On the heels of our last feature release, we are excited to announce a new feature: Exograph Playground in the browser! Thanks to the magic of WebAssembly, you can now run Exograph servers entirely in your browser, so you can try Exograph without even installing it. That's right, we run a Tree Sitter parser, typechecker, GraphQL runtime, and Postgres all in your browser.

See it in action

Head over to the Exograph Playground to try it out. It shows a few pre-made models and populates data for you to explore. It also shows sample queries to help you get started.

A few models require authentication; to simulate logging in, click the "Key" icon at the center of the GraphiQL component.

When you open the playground, the top-left portion shows the model. The top-right portion shows tabs to give insight into Exograph's inner workings. The bottom portion shows the GraphiQL interface to run queries and mutations.


You can also create your own model by replacing the existing one in the playground. The playground also supports sharing your project as a gist.

How it works

The Exograph Playground runs entirely in the browser. Besides the initial loading of static assets (like the WebAssembly binary and JavaScript code), you don't need to be connected to the internet. This is possible because we compiled Exograph, written in Rust, to WebAssembly.

Playground Architecture

Builder

The builder plays a role equivalent to the exo build command. It reads the source code, parses and typechecks it, and produces an intermediate representation (equivalent to the exo_ir file). The builder also includes functionality equivalent to the exo schema command to compute the SQL schema and migrations.

On every change to the source code, the builder validates the model, reports any errors, and produces an updated intermediate representation. You can see the errors in the "Problems" tab.

The builder also produces the initial SQL schema and migrations for the model as you change it. The playground will automatically apply migrations as needed. You can see the schema in the "Schema" tab.

Runtime

The runtime is equivalent to the exo-server command. It processes the intermediate representation the builder produced and serves the GraphQL API. When you run a query, the playground sends it to the runtime, which computes the SQL query, runs it against the database, and returns the results.

The runtime also sends logs to the playground, which you can see in the "Traces" tab. This aids in understanding what Exograph is doing under the hood, including the SQL queries it executes.

Postgres

The playground uses PGlite, which is Postgres compiled to WebAssembly. Currently, we store Postgres data in memory, so you will lose it when you refresh the page. We plan to add support for saving the data to local storage.

GraphiQL

GraphiQL is a standard GraphQL query interface that lets you run queries and mutations against the Exograph server. The playground populates the initial query for you to get started.

Sharing Playground Project as a Gist

The playground supports sharing the content as a gist. You can load such a gist using the gist query parameter. For example, to load the gist with ID abcd, you can use the URL https://exograph.dev/playground?gist=abcd.

You can create a gist to share your model, the seed data, and the initial query populated in GraphiQL. The files in the gist follow the same layout as an Exograph project directory, except that :: is used as the directory separator.

  • src::index.exo: The model
  • tests::init.gql: The seed data. See Initializing seed data for more details.
  • playground::query.graphql: The initial query in GraphiQL
  • README.md: The README to show in the playground

We will keep improving the sharing experience in the future.

What's next

This is just the beginning of making Exograph easier to explore. Here are a few planned features to make the playground even better (your feedback is welcome!):

  • Support JavaScript Modules: In non-browser environments, Exograph supports Deno Modules. Deno cannot be compiled to WebAssembly, so we cannot run it in the browser. However, browsers already have a JavaScript runtime 🙂, which we will support in the playground.
  • Persistent Data: We plan to add support for saving the data to local storage so you can continue working on your data model across sessions.
  • Improved Sharing: We will add a simple way to create gists for your playground content and share it with others.

Try it out and let us know what you think. If you develop a cool model, publish it as a gist and share it with us on Twitter or Discord. We would appreciate a star on GitHub!


Latency at the Edge with Rust/WebAssembly and Postgres: Part 1

· 6 min read
Ramnivas Laddad
Co-founder @ Exograph

We have been working on enabling Exograph on WebAssembly. Since we implemented Exograph in Rust, targeting WebAssembly was natural. Soon, you will be able to build secure, flexible, and efficient GraphQL backends with Exograph and run them at the edge.

During our journey toward WebAssembly support, we learned a few things about improving the latency of Rust programs that target WebAssembly in Cloudflare Workers and connect to Postgres. This two-part series shares those learnings. In this first post, we will set up a simple Cloudflare Worker connecting to a Postgres database and get baseline latency measurements. In the next post, we will explore various ways to improve it.

Even though we experimented in the context of Exograph, the learnings should apply to anyone using WebAssembly in Cloudflare Workers (or other platforms that support WebAssembly) to connect to Postgres.

Second Part

Read Part 2, which improves latency by a factor of 6!

Rust Cloudflare Workers

Cloudflare Workers is a serverless platform that allows you to run code at the edge. The V8 engine underpins the Cloudflare Workers platform. Since V8 runs JavaScript, JavaScript is the primary language for writing workers. However, JavaScript running in V8 can load WebAssembly modules, so you can write parts of a worker in another language, such as Rust, compile them to WebAssembly, and load the result from JavaScript.

Cloudflare Workers' Rust tooling enables writing workers entirely in Rust. Behind the scenes, the tooling compiles the Rust code to WebAssembly and loads it in a JavaScript host. The Rust code you write must compile to the wasm32-unknown-unknown target and therefore follow WebAssembly's restrictions. For example, it cannot access the filesystem or network directly; instead, it must rely on host-provided capabilities. Cloudflare provides such capabilities through the worker-rs crate, which in turn uses wasm-bindgen to expose a few JavaScript functions to the Rust code. For example, it allows opening network sockets, a capability we will use later to integrate Postgres.

Here is a minimal Cloudflare Worker in Rust:

use worker::*;

#[event(fetch)]
async fn main(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {
    Ok(Response::ok("Hello, Cloudflare!")?)
}

To deploy, you can use the npx wrangler deploy command, which compiles the Rust code to WebAssembly, generates the necessary JavaScript code, and deploys it to the Cloudflare network.

Before moving on, let's measure the latency of this worker. We will use Ohayou (oha), an HTTP load generator written in Rust. We measure latency using a single concurrent client (-c 1) and one hundred requests (-n 100).

oha -c 1 -n 100 <worker-url>
...
Slowest: 0.2806 secs
Fastest: 0.0127 secs
Average: 0.0214 secs
...

It takes an average of 21ms to respond to a request, which gives us a good baseline to compare against when we add Postgres to the mix.

Focusing on latency

We will focus on measuring the lower bound of latency for the roundtrip of a request to a worker that queries a Postgres database before responding. Here is our setup:

  • Use a Neon Postgres database with the following table and no rows to focus on network latency (and not database processing time).

    CREATE TABLE todos (
      id SERIAL PRIMARY KEY,
      title TEXT NOT NULL,
      completed BOOLEAN NOT NULL
    );
  • Implement a Cloudflare Worker that responds to GET requests by fetching all completed todos from the table and returning them as a JSON response. Since there is no data, the response will be an empty array, but the predicate lets us explore the practical case where queries carry a few parameters.

  • Place the worker, database, and client in the same region. While we can't control worker placement, Cloudflare will place the worker close to either the client or the database (which we've put in the same region).

All right, let's get started!

Connecting to Postgres

Let's implement a simple worker that fetches all completed todos from the Neon Postgres database. We will use the tokio-postgres crate to connect to the database.

use std::str::FromStr;

use tokio_postgres::config::Host;
use worker::*;

// Note: `PassthroughTls` (import elided) is a pass-through TLS implementation handed to
// `connect_raw`; the TLS handshake itself is performed by the Worker-provided socket
// (see `SecureTransport::StartTls` below), so the driver does not negotiate TLS again.

#[event(fetch)]
async fn main(_req: Request, env: Env, _ctx: Context) -> Result<Response> {
    // Parse the connection string stored as a worker secret.
    let config = tokio_postgres::config::Config::from_str(&env.secret("DATABASE_URL")?.to_string())
        .map_err(|e| worker::Error::RustError(format!("Failed to parse configuration: {:?}", e)))?;

    let host = match &config.get_hosts()[0] {
        Host::Tcp(host) => host,
        _ => {
            return Err(worker::Error::RustError("Could not parse host".to_string()));
        }
    };
    let port = config.get_ports()[0];

    // Open a TCP socket through the Cloudflare runtime and upgrade it with StartTLS.
    let socket = Socket::builder()
        .secure_transport(SecureTransport::StartTls)
        .connect(host, port)?;

    // Run the Postgres wire protocol over the socket we just opened.
    let (client, connection) = config
        .connect_raw(socket, PassthroughTls)
        .await
        .map_err(|e| worker::Error::RustError(format!("Failed to connect: {:?}", e)))?;

    // Drive the connection on the JavaScript event loop.
    wasm_bindgen_futures::spawn_local(async move {
        if let Err(error) = connection.await {
            console_log!("connection error: {:?}", error);
        }
    });

    let rows: Vec<tokio_postgres::Row> = client
        .query(
            "SELECT id, title, completed FROM todos WHERE completed = $1",
            &[&true],
        )
        .await
        .map_err(|e| worker::Error::RustError(format!("Failed to query: {:?}", e)))?;

    Ok(Response::ok(format!("{:?}", rows))?)
}

There are several notable things (especially if you are new to WebAssembly):

  • On a non-WebAssembly platform, you would obtain the client and connection directly from the database URL, which opens a socket to the database. For example, you would write something like this:

    let (client, connection) = config.connect(tls).await?;

    However, that won't work in a WebAssembly environment since there is no way to connect to a server (or, for that matter, to access any other resource such as the filesystem). This is the core characteristic of WebAssembly: it is a sandboxed environment that cannot access resources unless they are explicitly provided (through functions exported to the WebAssembly module). Therefore, we use Socket::builder().connect() to create a socket (which, in turn, uses the TCP socket API provided by the Cloudflare runtime). Then, we use config.connect_raw() to layer the Postgres protocol over that socket.

  • Ordinarily, we would have marked the main function with, for example, #[tokio::main] to bring in an async executor. Here, too, WebAssembly is different: we must rely on the host to provide the async runtime. In our case, the Cloudflare Workers runtime provides it (backed by JavaScript's event loop).

  • In a typical Rust program, we would have used tokio::spawn to spawn a task. However, in WebAssembly, we use wasm_bindgen_futures::spawn_local, which runs in the context of JavaScript's event loop.
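
For contrast, here is a minimal sketch of the same plumbing on a native (non-WebAssembly) target, where tokio provides the executor and the driver opens its own socket. It assumes tokio and tokio-postgres as dependencies and, purely to keep the sketch short, a DATABASE_URL that is reachable without TLS:

use std::str::FromStr;

use tokio_postgres::{Config, NoTls};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The driver parses the URL and opens its own TCP connection; no host-provided socket needed.
    let config = Config::from_str(&std::env::var("DATABASE_URL")?)?;
    let (client, connection) = config.connect(NoTls).await?;

    // On native targets, tokio drives the connection task instead of the JavaScript event loop.
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {e}");
        }
    });

    let rows = client
        .query(
            "SELECT id, title, completed FROM todos WHERE completed = $1",
            &[&true],
        )
        .await?;
    println!("{rows:?}");
    Ok(())
}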

We will deploy it using npx wrangler deploy. You will need to create a database and add the DATABASE_URL secret to the worker.

You can test the worker using curl:

curl https://<worker-url>

And measure the latency:

oha -c 1 -n 100 <worker-url>
...
Slowest: 0.8975 secs
Fastest: 0.2795 secs
Average: 0.3441 secs

So, our worker takes an average of about 344ms to respond to a request. Depending on the use case, this can be between okay-ish and unacceptable. But why is it so slow?

We are dealing with two issues here:

  1. Establishing a connection to the database: The worker creates a new connection for each request. Since it is a secure connection, establishing it takes 7+ round trips. Not surprisingly, latency is high.
  2. Executing the query: The query method in our code causes the Rust Postgres driver to make two round trips: one to prepare the statement and one to bind and execute it. It also sends a one-way message to close the prepared statement.
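
To make the second point concrete, the single query call with a SQL string is roughly equivalent to the following two explicit steps. This is a sketch that could drop into the worker above in place of the query call, not the driver's exact implementation:

// Round trip 1: parse and describe the statement on the server.
let statement = client
    .prepare("SELECT id, title, completed FROM todos WHERE completed = $1")
    .await
    .map_err(|e| worker::Error::RustError(format!("Failed to prepare: {:?}", e)))?;

// Round trip 2: bind the parameters and execute the prepared statement.
let rows: Vec<tokio_postgres::Row> = client
    .query(&statement, &[&true])
    .await
    .map_err(|e| worker::Error::RustError(format!("Failed to query: {:?}", e)))?;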

How can we improve? We will address that in the next post by exploring connection pooling and possible changes to the driver. Stay tuned!
