Introduction
zero-postgres is a high-performance PostgreSQL client library for Rust. It provides both synchronous and asynchronous APIs with a focus on speed and minimal allocations.
Add it to your `Cargo.toml`:

```toml
[dependencies]
zero-postgres = "*"
```
Quick Start
```rust
use zero_postgres::sync::Conn;

let mut conn = Conn::new("postgres://user:password@localhost/dbname")?;

// Simple query
let rows: Vec<(i32, String)> = conn.query_collect("SELECT id, name FROM users")?;

// Parameterized query
let rows: Vec<(i32, String)> = conn.exec_collect("SELECT * FROM users WHERE id = $1", (42i32,))?;

// Transaction
conn.transaction(|conn, tx| {
    conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",))?;
    conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Bob",))?;
    tx.commit(conn)
})?;
```
Features
- High Performance: Minimal allocations and copies
- Sync and Async: Both synchronous and async (tokio) APIs
- Pipelining: Batch multiple queries in a single round trip
- Type-safe: Rust’s type system for compile-time safety
Feature Flags
- `sync` (default): Synchronous API
- `tokio` (default): Asynchronous API using tokio
- `sync-tls`: TLS support for the synchronous API (experimental)
- `tokio-tls`: TLS support for the asynchronous API (experimental)
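For example, a Cargo entry for an async-only build with TLS might look like this (a sketch; pin the version you actually want):

```toml
[dependencies]
zero-postgres = { version = "*", default-features = false, features = ["tokio", "tokio-tls"] }
```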
Benchmark
Inserting 10,000 rows using prepared statements (average of 10 iterations):
| Library | Avg Time (ms) |
|---|---|
| zero-postgres (sync) | 250.35 |
| zero-postgres (async) | 269.32 |
| tokio-postgres | 398.90 |
| postgres (sync) | 422.61 |
Connection
A connection can be made with a URL string or `Opts`. A URL can start with `pg://`, `postgres://`, or `postgresql://`.

The URL `pg://{USER}:{PASSWORD}@{HOST}:{PORT}/{DATABASE}?PARAM1=VALUE1&PARAM2=VALUE2` is equivalent to:
```rust
let mut opts = Opts::default();
opts.host = HOST;
opts.port = PORT;
opts.user = USER;
opts.password = Some(PASSWORD);
opts.database = Some(DATABASE);
opts.params.push((PARAM1, VALUE1));
opts.params.push((PARAM2, VALUE2));
```
See `Opts` for all available options.
Sync
```rust
use zero_postgres::sync::Conn;

let mut conn = Conn::new("postgres://test:1234@localhost/test_db")?;
```
Async
```rust
use zero_postgres::tokio::Conn;

let mut conn = Conn::new("postgres://test:1234@localhost/test_db").await?;
```
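A minimal end-to-end async sketch, assuming a `#[tokio::main]` entry point and that the async `Conn` mirrors the sync query API with `.await` (as the async transaction example later shows):

```rust
use zero_postgres::tokio::Conn;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut conn = Conn::new("postgres://test:1234@localhost/test_db").await?;

    // Same query surface as the sync API, awaited
    let users: Vec<(i32, String)> = conn
        .exec_collect("SELECT id, name FROM users WHERE id = $1", (42i32,))
        .await?;

    for (id, name) in users {
        println!("{id}: {name}");
    }
    Ok(())
}
```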
Unix Socket
```rust
use zero_postgres::sync::Conn;
use zero_postgres::Opts;

let mut opts = Opts::default();
opts.socket = Some("/var/run/postgresql/.s.PGSQL.5432".to_string());
opts.database = Some("test".to_string());

let mut conn = Conn::new(opts)?;
```
Upgrade to Unix Socket
By default, `upgrade_to_unix_socket` is true. If the flag is set and the TCP peer IP is local, the driver queries `SHOW unix_socket_directories` to find the Unix socket path, then reconnects over the socket for better performance.

For production, disable this flag and connect explicitly over TCP or a Unix socket:
```rust
let mut opts: Opts = "postgres://test:1234@localhost".try_into()?;
opts.upgrade_to_unix_socket = false;
let mut conn = Conn::new(opts)?;
```
Query
There are two sets of query APIs: Simple Query Protocol and Extended Query Protocol.
Simple Query Protocol
Simple Protocol supports multiple statements (separated by `;`), but does not support parameter binding. Use Extended Protocol if you need to send parameters or read typed binary results.
```rust
impl Conn {
    fn query<H: SimpleHandler>(&mut self, sql: &str, handler: &mut H) -> Result<()>;
    fn query_drop(&mut self, sql: &str) -> Result<Option<u64>>;
    fn query_first<Row>(&mut self, sql: &str) -> Result<Option<Row>>;
    fn query_collect<Row>(&mut self, sql: &str) -> Result<Vec<Row>>;
    fn query_foreach<Row, F>(&mut self, sql: &str, f: F) -> Result<()>;
}
```
- `query`: execute SQL and process results with a handler
- `query_drop`: execute SQL and discard results, returning rows affected
- `query_first`: execute and return `Option<Row>` for the first row
- `query_collect`: execute and collect all rows into a `Vec`
- `query_foreach`: execute and call a closure for each row
Example
```rust
// Execute and discard results
conn.query_drop("INSERT INTO users (name) VALUES ('Alice')")?;

// Collect all rows
let users: Vec<(i32, String)> = conn.query_collect("SELECT id, name FROM users")?;

// Get first row only
let user: Option<(i32, String)> = conn.query_first("SELECT id, name FROM users LIMIT 1")?;

// Process rows one by one
conn.query_foreach("SELECT id, name FROM users", |row: (i32, String)| {
    println!("{}: {}", row.0, row.1);
    Ok(())
})?;
```
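Because the simple protocol accepts multiple `;`-separated statements, a small setup script can run in a single call (a sketch, assuming a throwaway `users` table):

```rust
// Several statements, one round trip; results are discarded
conn.query_drop(
    "CREATE TABLE IF NOT EXISTS users (id SERIAL PRIMARY KEY, name TEXT); \
     DELETE FROM users; \
     INSERT INTO users (name) VALUES ('Alice');",
)?;
```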
Extended Query Protocol
Extended Protocol uses prepared statements and parameter binding. Use `$1`, `$2`, …, `$N` as placeholders. PostgreSQL does not support multiple statements in a single prepared statement.
```rust
impl Conn {
    fn prepare(&mut self, sql: &str) -> Result<PreparedStatement>;
    fn prepare_typed(&mut self, sql: &str, param_oids: &[u32]) -> Result<PreparedStatement>;
    fn prepare_batch(&mut self, queries: &[&str]) -> Result<Vec<PreparedStatement>>;
    fn exec<S, P, H>(&mut self, statement: S, params: P, handler: &mut H) -> Result<()>;
    fn exec_drop<S, P>(&mut self, statement: S, params: P) -> Result<Option<u64>>;
    fn exec_first<Row, S, P>(&mut self, statement: S, params: P) -> Result<Option<Row>>;
    fn exec_collect<Row, S, P>(&mut self, statement: S, params: P) -> Result<Vec<Row>>;
    fn exec_foreach<Row, S, P, F>(&mut self, statement: S, params: P, f: F) -> Result<()>;
    fn exec_batch<S, P>(&mut self, statement: S, params_list: &[P]) -> Result<()>;
    fn exec_portal<S, P, F, T>(&mut self, statement: S, params: P, f: F) -> Result<T>;
    fn close_statement(&mut self, stmt: &PreparedStatement) -> Result<()>;
}

impl Transaction<'_> {
    fn exec_portal_named<S, P>(&self, conn: &mut Conn, statement: S, params: P) -> Result<NamedPortal<'_>>;
}
```
- `prepare`: prepare a statement for execution
- `prepare_typed`: prepare with explicit parameter OIDs
- `prepare_batch`: prepare multiple statements in one round trip
- `exec`: execute with a custom handler
- `exec_drop`: execute and discard results, returning rows affected
- `exec_first`: execute and return `Option<Row>` for the first row
- `exec_collect`: execute and collect all rows into a `Vec`
- `exec_foreach`: execute and call a closure for each row
- `exec_batch`: execute with multiple parameter sets efficiently
- `exec_portal`: execute to create an unnamed portal that fetches rows incrementally
- `close_statement`: close a prepared statement
- `tx.exec_portal_named`: create a named portal within a transaction for incremental fetching
The `statement` parameter can be either:
- A `&PreparedStatement` returned from `prepare()`
- A raw SQL `&str` for one-shot execution (parsed once per call)
Example: Basic
```rust
// Using a prepared statement
let stmt = conn.prepare("SELECT * FROM users WHERE id = $1")?;
let user: Option<(i32, String)> = conn.exec_first(&stmt, (42,))?;

// Using raw SQL
let user: Option<(i32, String)> = conn.exec_first(
    "SELECT * FROM users WHERE id = $1",
    (42,),
)?;

// Process rows one by one
conn.exec_foreach(&stmt, (42,), |row: (i32, String)| {
    println!("{}: {}", row.0, row.1);
    Ok(())
})?;
```
Example: Batch Execution
`exec_batch` sends all parameter sets to the server without waiting for a response between them:
let stmt = conn.prepare("INSERT INTO users (name, age) VALUES ($1, $2)")?;
conn.exec_batch(&stmt, &[
("Alice", 30),
("Bob", 25),
("Charlie", 35),
])?;
Example: Portal-based Iteration
For large result sets, use `exec_portal` to fetch rows incrementally with a handler.
let stmt = conn.prepare("SELECT * FROM large_table")?;
conn.exec_portal(&stmt, (), |portal| {
let mut handler = RowCollector::new();
while portal.exec(100, &mut handler)? {
// process handler.rows and clear for next batch
}
Ok(())
})?;
You can also use `exec_foreach` on the portal to process rows with a closure:
```rust
conn.exec_portal(&stmt, (), |portal| {
    while portal.exec_foreach(100, |row: (i32, String)| {
        println!("{}: {}", row.0, row.1);
        Ok(())
    })? {
        // continues until all rows are processed
    }
    Ok(())
})?;
```
Example: Interleaving Two Row Streams
Use `tx.exec_portal_named` to create named portals that can be interleaved within a transaction:
```rust
let stmt1 = conn.prepare("SELECT * FROM table1")?;
let stmt2 = conn.prepare("SELECT * FROM table2")?;

conn.transaction(|conn, tx| {
    let mut portal1 = tx.exec_portal_named(conn, &stmt1, ())?;
    let mut portal2 = tx.exec_portal_named(conn, &stmt2, ())?;

    loop {
        let rows1: Vec<(i32,)> = portal1.exec_collect(conn, 100)?;
        let rows2: Vec<(i32,)> = portal2.exec_collect(conn, 100)?;
        process(rows1, rows2);

        if portal1.is_complete() && portal2.is_complete() {
            break;
        }
    }

    portal1.close(conn)?;
    portal2.close(conn)?;
    tx.commit(conn)
})?;
```
Named portals also support `exec_foreach` for processing rows with a closure:
```rust
conn.transaction(|conn, tx| {
    let mut portal = tx.exec_portal_named(conn, &stmt, ())?;

    while !portal.is_complete() {
        portal.exec_foreach(conn, 100, |row: (i32, String)| {
            println!("{}: {}", row.0, row.1);
            Ok(())
        })?;
    }

    portal.close(conn)?;
    tx.commit(conn)
})?;
```
Parameters
A parameter set (`Params`) is a tuple of primitives.
```rust
conn.exec_drop(sql, ())?;                                 // no parameters
conn.exec_drop("SELECT $1, $2, $3", (1, 2.0, "String"))?; // bind 3 parameters
```
Struct Mapping
There are two ways to map database rows to Rust structs.
Using #[derive(FromRow)]
The `FromRow` derive macro automatically maps columns to struct fields by name.
```rust
use zero_postgres::r#macro::FromRow;

#[derive(FromRow)]
struct User {
    id: i32,
    name: String,
    email: Option<String>,
}

let stmt = conn.prepare("SELECT name, id, email_address AS email FROM users")?;

// Collect all rows
let users: Vec<User> = conn.exec_collect(&stmt, ())?;

// Get first row only
let user: Option<User> = conn.exec_first(&stmt, ())?;

// Process rows one by one
conn.exec_foreach(&stmt, (), |user: User| {
    println!("{}: {}", user.id, user.name);
    Ok(())
})?;
```
Use `#[from_row(strict)]` to error on unknown columns:
```rust
#[derive(FromRow)]
#[from_row(strict)]
struct User {
    id: i32,
    name: String,
}

// Returns an error: the `email` column has no matching struct field
let result: Result<Option<User>> = conn.exec_first("SELECT id, name, email FROM users", ());
```
Manual Construction with exec_foreach
This approach allows reusing a `Vec` and is slightly faster because it does not match column names against struct field names.
```rust
struct User {
    id: i32,
    name: String,
    display_name: String, // computed field
}

let stmt = conn.prepare("SELECT id, name FROM users")?;
let mut users = Vec::new();

conn.exec_foreach(&stmt, (), |row: (i32, String)| {
    users.push(User {
        id: row.0,
        display_name: format!("User: {}", row.1),
        name: row.1,
    });
    Ok(())
})?;
```
Data Types
The library intentionally rejects conversions that could silently lose data. For example, reading a BIGINT column as i8 will return an error rather than truncating the value. This ensures data integrity and makes bugs easier to catch.
Widening conversions (e.g., reading SMALLINT as i64) are allowed.
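A sketch of both behaviors, assuming a table `t` with a BIGINT column `big`:

```rust
// Widening: BIGINT reads cleanly into i64
let widened: Option<(i64,)> = conn.query_first("SELECT big FROM t LIMIT 1")?;

// Narrowing: reading the same BIGINT as i8 is an error,
// not a silent truncation
let narrowed = conn.query_first::<(i8,)>("SELECT big FROM t LIMIT 1");
assert!(narrowed.is_err());
```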
Parameter Types (Rust to PostgreSQL)
| Rust Type | PostgreSQL Type | Notes |
|---|---|---|
| `bool` | BOOL (BOOLEAN) | |
| `i8` | INT2 (SMALLINT) | PostgreSQL has no 1-byte integer |
| `i16` | INT2 (SMALLINT) | |
| `i32` | INT4 (INTEGER) | |
| `i64` | INT8 (BIGINT) | |
| `u8` | INT2 (SMALLINT) | PostgreSQL has no unsigned types |
| `u16` | INT4 (INTEGER) | `u16` max exceeds INT2 max |
| `u32` | INT8 (BIGINT) | `u32` max exceeds INT4 max |
| `u64` | INT8 (BIGINT) | Overflow checked at runtime |
| `f32` | FLOAT4 (REAL) | |
| `f64` | FLOAT8 (DOUBLE PRECISION) | |
| `&str` | TEXT | Also works with VARCHAR, CHAR, JSON, JSONB |
| `String` | TEXT | |
| `&[u8]` | BYTEA | |
| `Vec<u8>` | BYTEA | |
| `Option<T>` | Same as `T` | `None` encodes as NULL |
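The runtime check on `u64` means a value above `i64::MAX` fails to encode instead of wrapping. A sketch, assuming a table `t` with an INT8 column `n`:

```rust
// Fits in INT8: encodes fine
conn.exec_drop("INSERT INTO t (n) VALUES ($1)", (42u64,))?;

// Exceeds i64::MAX: rejected at encode time
let res = conn.exec_drop("INSERT INTO t (n) VALUES ($1)", (u64::MAX,));
assert!(res.is_err());
```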
Result Types (PostgreSQL to Rust)
PostgreSQL only has signed integer types. Decoding to unsigned Rust types (`u8`, `u16`, `u32`, `u64`) is not supported.
| PostgreSQL Type | Rust Types |
|---|---|
| BOOL (BOOLEAN) | `bool` |
| INT2 (SMALLINT) | `i16`, `i32`, `i64` |
| INT4 (INTEGER) | `i32`, `i64` |
| INT8 (BIGINT) | `i64` |
| FLOAT4 (REAL) | `f32`, `f64` |
| FLOAT8 (DOUBLE PRECISION) | `f64` |
| NUMERIC | `f32`, `f64` |
| TEXT, VARCHAR, CHAR(n), NAME | `&str`, `String` |
| BYTEA | `&[u8]`, `Vec<u8>` |
| NULL | `Option<T>` |
Feature-Gated Types
Additional type support is available through feature flags.
with-chrono (chrono crate)
| Rust Type | PostgreSQL Type |
|---|---|
| `chrono::NaiveDate` | DATE |
| `chrono::NaiveTime` | TIME |
| `chrono::NaiveDateTime` | TIMESTAMP, TIMESTAMPTZ |
| `chrono::DateTime<Utc>` | TIMESTAMPTZ |

| PostgreSQL Type | Rust Types |
|---|---|
| DATE | `chrono::NaiveDate` |
| TIME | `chrono::NaiveTime` |
| TIMESTAMP | `chrono::NaiveDateTime` |
| TIMESTAMPTZ | `chrono::NaiveDateTime`, `chrono::DateTime<Utc>` |
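With `with-chrono` enabled, timestamps round-trip directly. A sketch, assuming an `events` table with a TIMESTAMPTZ column `created_at`:

```rust
use chrono::{DateTime, Utc};

let now: DateTime<Utc> = Utc::now();
conn.exec_drop("INSERT INTO events (created_at) VALUES ($1)", (now,))?;

// TIMESTAMPTZ decodes to DateTime<Utc> (or NaiveDateTime, per the table above)
let first: Option<(DateTime<Utc>,)> =
    conn.query_first("SELECT created_at FROM events LIMIT 1")?;
```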
with-time (time crate)
| Rust Type | PostgreSQL Type |
|---|---|
| `time::Date` | DATE |
| `time::Time` | TIME |
| `time::PrimitiveDateTime` | TIMESTAMP, TIMESTAMPTZ |
| `time::OffsetDateTime` | TIMESTAMPTZ |

| PostgreSQL Type | Rust Types |
|---|---|
| DATE | `time::Date` |
| TIME | `time::Time` |
| TIMESTAMP | `time::PrimitiveDateTime` |
| TIMESTAMPTZ | `time::PrimitiveDateTime`, `time::OffsetDateTime` |
with-uuid (uuid crate)
| Rust Type | PostgreSQL Type |
|---|---|
| `uuid::Uuid` | UUID |

| PostgreSQL Type | Rust Types |
|---|---|
| UUID | `uuid::Uuid` |
with-rust-decimal (rust_decimal crate)
`rust_decimal::Decimal` uses 96-bit precision, not arbitrary precision like PostgreSQL's NUMERIC.
| Rust Type | PostgreSQL Type |
|---|---|
| `rust_decimal::Decimal` | NUMERIC |

| PostgreSQL Type | Rust Types |
|---|---|
| NUMERIC | `rust_decimal::Decimal` |
Transaction
Transactions ensure a group of operations either all succeed (commit) or all fail (rollback).
Using Transactions
```rust
use zero_postgres::sync::Conn;

conn.transaction(|conn, _tx| {
    conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",))?;
    conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Bob",))?;
    Ok(())
})?;
```
If the closure returns `Ok`, the transaction is automatically committed. If the closure returns `Err`, the transaction is automatically rolled back.
Automatic Rollback on Error
```rust
let result: Result<()> = conn.transaction(|conn, _tx| {
    conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",))?;
    // Returning an error rolls the transaction back
    Err(Error::InvalidUsage("oops".to_string()))
});
// result is Err and no data was inserted
```
Explicit Commit/Rollback
Use `tx.commit(conn)` or `tx.rollback(conn)` for explicit control:
```rust
conn.transaction(|conn, tx| {
    conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",))?;
    if some_condition {
        tx.commit(conn)
    } else {
        tx.rollback(conn)
    }
})?;
```
Nested Transactions
Nested transactions are not supported. Calling `transaction` while already in a transaction returns `Error::NestedTransaction`.
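A nested call fails fast rather than emulating savepoints. A sketch, assuming `Error::NestedTransaction` is a unit variant:

```rust
conn.transaction(|conn, _tx| {
    // Starting another transaction on the same connection is rejected
    let nested: Result<()> = conn.transaction(|_conn, _tx| Ok(()));
    assert!(matches!(nested, Err(Error::NestedTransaction)));
    Ok(())
})?;
```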
Async Transactions
For async connections, use async closures:
```rust
use zero_postgres::tokio::Conn;

conn.transaction(async |conn, _tx| {
    conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",)).await?;
    Ok(())
}).await?;
```
Pipelining
Pipelining allows you to send multiple queries to the server without waiting for the response of each query. This reduces round-trip latency and improves throughput.
Basic Pipelining
```rust
// Prepare statements outside the pipeline
let stmts = conn.prepare_batch(&[
    "SELECT id, name FROM users WHERE active = $1",
    "SELECT COUNT(*) FROM users",
])?;

let (active_users, count) = conn.pipeline(|p| {
    // Queue executions
    let t1 = p.exec(&stmts[0], (true,))?;
    let t2 = p.exec(&stmts[1], ())?;

    // Sync sends all queued operations
    p.sync()?;

    // Claim results in order
    let active_users: Vec<(i32, String)> = p.claim_collect(t1)?;
    let count: Vec<(i64,)> = p.claim_collect(t2)?;
    Ok((active_users, count))
})?;
```
Pipeline Benefits
- Reduced Latency: Multiple queries are sent in a single network round trip
- Higher Throughput: Server can process queries while client sends more
- Batch Operations: Efficient for bulk inserts or updates (see the sketch below)
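For example, bulk inserts can share one round trip. The sketch below sticks to the documented `pipeline`, `exec`, `sync`, and `claim_collect` calls by using `RETURNING` so each insert yields a row:

```rust
let stmt = conn.prepare("INSERT INTO users (name) VALUES ($1) RETURNING id")?;

let ids: Vec<(i32,)> = conn.pipeline(|p| {
    // Queue three inserts, flush them together, then claim in order
    let t1 = p.exec(&stmt, ("Alice",))?;
    let t2 = p.exec(&stmt, ("Bob",))?;
    let t3 = p.exec(&stmt, ("Charlie",))?;
    p.sync()?;

    let id1: Vec<(i32,)> = p.claim_collect(t1)?;
    let id2: Vec<(i32,)> = p.claim_collect(t2)?;
    let id3: Vec<(i32,)> = p.claim_collect(t3)?;
    Ok([id1, id2, id3].concat())
})?;
```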