Introduction
zero-postgres is a high-performance PostgreSQL client library for Rust. It provides both synchronous and asynchronous APIs with a focus on speed and minimal allocations.
Add it to your Cargo.toml:
[dependencies]
zero-postgres = "*"
Quick Start
use zero_postgres::sync::Conn;
let mut conn = Conn::new("postgres://user:password@localhost/dbname")?;
// Simple query
let rows: Vec<(i32, String)> = conn.query_collect("SELECT id, name FROM users")?;
// Parameterized query
let rows: Vec<(i32, String)> = conn.exec_collect("SELECT * FROM users WHERE id = $1", (42i32,))?;
// Transaction
conn.transaction(|conn, tx| {
conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",))?;
conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Bob",))?;
tx.commit(conn)
})?;
Connection
A connection can be made with a URL string or Opts.
A URL can start with one of the following schemes:
- pg://
- postgres://
- postgresql://
The URL pg://{USER}:{PASSWORD}@{HOST}:{PORT}/{DATABASE}?PARAM1=VALUE1&PARAM2=VALUE2 is equivalent to:
let mut opts = Opts::default();
opts.host = HOST;
opts.port = PORT;
opts.user = USER;
opts.password = Some(PASSWORD);
opts.database = Some(DATABASE);
opts.params.push((PARAM1, VALUE1));
opts.params.push((PARAM2, VALUE2));
See Opts for all available options.
Sync
use zero_postgres::sync::Conn;
let mut conn = Conn::new("postgres://test:1234@localhost/test_db")?;
Async
use zero_postgres::tokio::Conn;
let mut conn = Conn::new("postgres://test:1234@localhost/test_db").await?;
Unix Socket
use zero_postgres::sync::Conn;
use zero_postgres::Opts;
let mut opts = Opts::default();
opts.socket = Some("/var/run/postgresql/.s.PGSQL.5432".to_string());
opts.database = Some("test".to_string());
let mut conn = Conn::new(opts)?;
Upgrade to Unix Socket
By default, upgrade_to_unix_socket is true.
If upgrade_to_unix_socket is true and the TCP peer IP is local, the driver queries SHOW unix_socket_directories to get the Unix socket path, then reconnects using the socket for better performance.
For production, disable this flag and use explicit TCP or Unix socket:
let mut opts: Opts = "postgres://test:1234@localhost".try_into()?;
opts.upgrade_to_unix_socket = false;
let mut conn = Conn::new(opts)?;
Query
There are two sets of query APIs: Simple Query Protocol and Extended Query Protocol.
Simple Query Protocol
Simple Protocol supports multiple statements (separated by ;), but does not support parameter binding.
Use Extended Protocol if you need to send parameters or read typed binary results.
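Since the Simple Protocol accepts several semicolon-separated statements in one call, setup scripts can be sent in a single round trip. A minimal sketch, assuming a reachable database and the query_drop API shown below (the table schema is invented for illustration):

```rust
use zero_postgres::sync::Conn;

fn setup(conn: &mut Conn) -> zero_postgres::Result<()> {
    // Three statements, one round trip; no parameter binding is possible here.
    conn.query_drop(
        "CREATE TABLE IF NOT EXISTS users (id SERIAL PRIMARY KEY, name TEXT); \
         DELETE FROM users; \
         INSERT INTO users (name) VALUES ('Alice')",
    )?;
    Ok(())
}
```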
impl Conn {
fn query<H: SimpleHandler>(&mut self, sql: &str, handler: &mut H) -> Result<()>;
fn query_drop(&mut self, sql: &str) -> Result<Option<u64>>;
fn query_first<Row>(&mut self, sql: &str) -> Result<Option<Row>>;
fn query_collect<Row>(&mut self, sql: &str) -> Result<Vec<Row>>;
fn query_foreach<Row, F>(&mut self, sql: &str, f: F) -> Result<()>;
}
- query: execute SQL and process results with a handler
- query_drop: execute SQL and discard results, returning rows affected
- query_first: execute and return Option<Row> for the first row
- query_collect: execute and collect all rows into a Vec
- query_foreach: execute and call a closure for each row
Example
// Execute and discard results
conn.query_drop("INSERT INTO users (name) VALUES ('Alice')")?;
// Collect all rows
let users: Vec<(i32, String)> = conn.query_collect("SELECT id, name FROM users")?;
// Get first row only
let user: Option<(i32, String)> = conn.query_first("SELECT id, name FROM users LIMIT 1")?;
// Process rows one by one
conn.query_foreach("SELECT id, name FROM users", |row: (i32, String)| {
println!("{}: {}", row.0, row.1);
Ok(())
})?;
Extended Query Protocol
The Extended Protocol uses prepared statements and parameter binding. Use $1, $2, ..., $N as placeholders.
PostgreSQL does not support multiple statements in a prepared statement.
impl Conn {
fn prepare(&mut self, sql: &str) -> Result<PreparedStatement>;
fn prepare_typed(&mut self, sql: &str, param_oids: &[u32]) -> Result<PreparedStatement>;
fn prepare_batch(&mut self, queries: &[&str]) -> Result<Vec<PreparedStatement>>;
fn exec<S, P, H>(&mut self, statement: S, params: P, handler: &mut H) -> Result<()>;
fn exec_drop<S, P>(&mut self, statement: S, params: P) -> Result<Option<u64>>;
fn exec_first<Row, S, P>(&mut self, statement: S, params: P) -> Result<Option<Row>>;
fn exec_collect<Row, S, P>(&mut self, statement: S, params: P) -> Result<Vec<Row>>;
fn exec_foreach<Row, S, P, F>(&mut self, statement: S, params: P, f: F) -> Result<()>;
fn exec_batch<S, P>(&mut self, statement: S, params_list: &[P]) -> Result<()>;
fn exec_portal<S, P, F, T>(&mut self, statement: S, params: P, f: F) -> Result<T>;
fn close_statement(&mut self, stmt: &PreparedStatement) -> Result<()>;
}
impl Transaction<'_> {
fn exec_portal_named<S, P>(&self, conn: &mut Conn, statement: S, params: P) -> Result<NamedPortal<'_>>;
}
- prepare: prepare a statement for execution
- prepare_typed: prepare with explicit parameter OIDs
- prepare_batch: prepare multiple statements in one round trip
- exec: execute with a custom handler
- exec_drop: execute and discard results, returning rows affected
- exec_first: execute and return Option<Row> for the first row
- exec_collect: execute and collect all rows into a Vec
- exec_foreach: execute and call a closure for each row
- exec_batch: execute with multiple parameter sets efficiently
- exec_portal: execute to create an unnamed portal that fetches rows incrementally
- close_statement: close a prepared statement
- tx.exec_portal_named: create a named portal within a transaction for incremental fetching
The statement parameter can be either:
- A &PreparedStatement returned from prepare()
- A raw SQL &str for one-shot execution (parsed once per call)
Example: Basic
// Using prepared statement
let stmt = conn.prepare("SELECT * FROM users WHERE id = $1")?;
let user: Option<(i32, String)> = conn.exec_first(&stmt, (42,))?;
// Using raw SQL
let user: Option<(i32, String)> = conn.exec_first(
"SELECT * FROM users WHERE id = $1",
(42,)
)?;
// Process rows one by one
conn.exec_foreach(&stmt, (42,), |row: (i32, String)| {
println!("{}: {}", row.0, row.1);
Ok(())
})?;
Example: Batch Execution
exec_batch sends all parameter sets back-to-back, without waiting for a response between sets:
let stmt = conn.prepare("INSERT INTO users (name, age) VALUES ($1, $2)")?;
conn.exec_batch(&stmt, &[
("Alice", 30),
("Bob", 25),
("Charlie", 35),
])?;
Example: Portal-based Iteration
For large result sets, use exec_portal to fetch rows incrementally with a handler.
let stmt = conn.prepare("SELECT * FROM large_table")?;
conn.exec_portal(&stmt, (), |portal| {
let mut handler = RowCollector::new();
while portal.exec(100, &mut handler)? {
// process handler.rows and clear for next batch
}
Ok(())
})?;
You can also use exec_foreach on the portal to process rows with a closure:
conn.exec_portal(&stmt, (), |portal| {
while portal.exec_foreach(100, |row: (i32, String)| {
println!("{}: {}", row.0, row.1);
Ok(())
})? {
// continues until all rows processed
}
Ok(())
})?;
Example: Interleaving Two Row Streams
Use tx.exec_portal_named to create named portals that can be interleaved within a transaction:
let stmt1 = conn.prepare("SELECT * FROM table1")?;
let stmt2 = conn.prepare("SELECT * FROM table2")?;
conn.transaction(|conn, tx| {
let mut portal1 = tx.exec_portal_named(conn, &stmt1, ())?;
let mut portal2 = tx.exec_portal_named(conn, &stmt2, ())?;
loop {
let rows1: Vec<(i32,)> = portal1.exec_collect(conn, 100)?;
let rows2: Vec<(i32,)> = portal2.exec_collect(conn, 100)?;
process(rows1, rows2);
if portal1.is_complete() && portal2.is_complete() {
break;
}
}
portal1.close(conn)?;
portal2.close(conn)?;
tx.commit(conn)
})?;
Named portals also support exec_foreach for processing rows with a closure:
conn.transaction(|conn, tx| {
let mut portal = tx.exec_portal_named(conn, &stmt, ())?;
while !portal.is_complete() {
portal.exec_foreach(conn, 100, |row: (i32, String)| {
println!("{}: {}", row.0, row.1);
Ok(())
})?;
}
portal.close(conn)?;
tx.commit(conn)
})?;
Parameters
A parameter set (Params) is a tuple of primitives.
exec_drop(sql, ()) // no parameter
exec_drop("SELECT $1, $2, $3", (1, 2.0, "String")) // bind 3 parameters
Struct Mapping
There are two ways to map database rows to Rust structs.
Using #[derive(FromRow)]
The FromRow derive macro automatically maps columns to struct fields by name.
use zero_postgres::r#macro::FromRow;
#[derive(FromRow)]
struct User {
id: i32,
name: String,
email: Option<String>,
}
let stmt = conn.prepare("SELECT name, id, email_address as email FROM users")?;
// Collect all rows
let users: Vec<User> = conn.exec_collect(&stmt, ())?;
// Get first row only
let user: Option<User> = conn.exec_first(&stmt, ())?;
// Process rows one by one
conn.exec_foreach(&stmt, (), |user: User| {
println!("{}: {}", user.id, user.name);
Ok(())
})?;
Use #[from_row(strict)] to error on unknown columns:
#[derive(FromRow)]
#[from_row(strict)]
struct User {
id: i32,
name: String,
}
// Returns an error because the `email` column has no matching struct field
let result: Result<Option<User>> = conn.exec_first("SELECT id, name, email FROM users", ());
Manual Construction with exec_foreach
This approach has the advantage of reusing the Vec, and it is slightly faster because column names are not matched against struct field names.
struct User {
id: i32,
name: String,
display_name: String, // computed field
}
let stmt = conn.prepare("SELECT id, name FROM users")?;
let mut users = Vec::new();
conn.exec_foreach(&stmt, (), |row: (i32, String)| {
users.push(User {
id: row.0,
display_name: format!("User: {}", row.1),
name: row.1,
});
Ok(())
})?;
Data Type
The library intentionally rejects conversions that could silently lose data. For example, reading a BIGINT column as i8 will return an error rather than truncating the value. This ensures data integrity and makes bugs easier to catch.
Widening conversions (e.g., reading SMALLINT as i64) are allowed.
Parameter Types (Rust to PostgreSQL)
| Rust Type | PostgreSQL Type | Notes |
|---|---|---|
| bool | BOOL (BOOLEAN) | |
| i8 | INT2 (SMALLINT) | PostgreSQL has no 1-byte integer |
| i16 | INT2 (SMALLINT) | |
| i32 | INT4 (INTEGER) | |
| i64 | INT8 (BIGINT) | |
| u8 | INT2 (SMALLINT) | PostgreSQL has no unsigned types |
| u16 | INT4 (INTEGER) | u16 max exceeds INT2 max |
| u32 | INT8 (BIGINT) | u32 max exceeds INT4 max |
| u64 | INT8 (BIGINT) | Overflow checked at runtime |
| f32 | FLOAT4 (REAL) | |
| f64 | FLOAT8 (DOUBLE PRECISION) | |
| &str | TEXT | Also works with VARCHAR, CHAR, JSON, JSONB |
| String | TEXT | |
| &[u8] | BYTEA | |
| Vec<u8> | BYTEA | |
| Option<T> | Same as T | None encodes as NULL |
Result Types (PostgreSQL to Rust)
PostgreSQL only has signed integer types. Decoding to unsigned Rust types (u8, u16, u32, u64) is not supported.
| PostgreSQL Type | Rust Types |
|---|---|
| BOOL (BOOLEAN) | bool |
| INT2 (SMALLINT) | i16, i32, i64 |
| INT4 (INTEGER) | i32, i64 |
| INT8 (BIGINT) | i64 |
| FLOAT4 (REAL) | f32, f64 |
| FLOAT8 (DOUBLE PRECISION) | f64 |
| NUMERIC | f32, f64 |
| TEXT, VARCHAR, CHAR(n), NAME | &str, String |
| BYTEA | &[u8], Vec<u8> |
| NULL | Option<T> |
Feature-Gated Types
Additional type support is available through feature flags.
with-chrono (chrono crate)
| Rust Type | PostgreSQL Type |
|---|---|
| chrono::NaiveDate | DATE |
| chrono::NaiveTime | TIME |
| chrono::NaiveDateTime | TIMESTAMP, TIMESTAMPTZ |
| chrono::DateTime<Utc> | TIMESTAMPTZ |
| PostgreSQL Type | Rust Types |
|---|---|
| DATE | chrono::NaiveDate |
| TIME | chrono::NaiveTime |
| TIMESTAMP | chrono::NaiveDateTime |
| TIMESTAMPTZ | chrono::NaiveDateTime, chrono::DateTime<Utc> |
with-time (time crate)
| Rust Type | PostgreSQL Type |
|---|---|
| time::Date | DATE |
| time::Time | TIME |
| time::PrimitiveDateTime | TIMESTAMP, TIMESTAMPTZ |
| time::OffsetDateTime | TIMESTAMPTZ |
| PostgreSQL Type | Rust Types |
|---|---|
| DATE | time::Date |
| TIME | time::Time |
| TIMESTAMP | time::PrimitiveDateTime |
| TIMESTAMPTZ | time::PrimitiveDateTime, time::OffsetDateTime |
with-uuid (uuid crate)
| Rust Type | PostgreSQL Type |
|---|---|
| uuid::Uuid | UUID |
| PostgreSQL Type | Rust Types |
|---|---|
| UUID | uuid::Uuid |
with-rust-decimal (rust_decimal crate)
rust_decimal::Decimal uses 96-bit precision, not arbitrary precision like PostgreSQL’s NUMERIC.
| Rust Type | PostgreSQL Type |
|---|---|
| rust_decimal::Decimal | NUMERIC |
| PostgreSQL Type | Rust Types |
|---|---|
| NUMERIC | rust_decimal::Decimal |
Transaction
Transactions ensure a group of operations either all succeed (commit) or all fail (rollback).
Using Transactions
use zero_postgres::sync::Conn;
conn.transaction(|conn, _tx| {
conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",))?;
conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Bob",))?;
Ok(())
})?;
If the closure returns Ok, the transaction is automatically committed. If the closure returns Err, the transaction is automatically rolled back.
Automatic Rollback on Error
conn.transaction(|conn, _tx| {
conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",))?;
// Returns error - transaction will be rolled back
Err(Error::InvalidUsage("oops".to_string()))
})?;
// No data inserted
Explicit Commit/Rollback
Use tx.commit() or tx.rollback() for explicit control:
conn.transaction(|conn, tx| {
conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",))?;
if some_condition {
tx.commit(conn)
} else {
tx.rollback(conn)
}
})?;
Nested Transactions
Nested transactions are not supported. Calling transaction while already in a transaction returns Error::NestedTransaction.
Async Transactions
For async connections, use async closures:
use zero_postgres::tokio::Conn;
conn.transaction(async |conn, _tx| {
conn.exec_drop("INSERT INTO users (name) VALUES ($1)", ("Alice",)).await?;
Ok(())
}).await?;
Pool
Pipelining
Pipelining allows you to send multiple queries to the server without waiting for the response of each query. This reduces round-trip latency and improves throughput.
Basic Pipelining
// Prepare statements outside the pipeline
let stmts = conn.prepare_batch(&[
"SELECT id, name FROM users WHERE active = $1",
"SELECT COUNT(*) FROM users",
])?;
let (active_users, count) = conn.pipeline(|p| {
// Queue executions
let t1 = p.exec(&stmts[0], (true,))?;
let t2 = p.exec(&stmts[1], ())?;
// Sync sends all queued operations
p.sync()?;
// Claim results in order
let active_users: Vec<(i32, String)> = p.claim_collect(t1)?;
let count: Vec<(i64,)> = p.claim_collect(t2)?;
Ok((active_users, count))
})?;
Pipeline Benefits
- Reduced Latency: Multiple queries are sent in a single network round trip
- Higher Throughput: Server can process queries while client sends more
- Batch Operations: Efficient for bulk inserts or updates
Zero-Copy Decoding
For maximum performance, zero-postgres provides zero-copy row decoding through the RefFromRow trait. This allows you to decode rows as references directly into the read buffer, avoiding any memory allocation or copying.
When to Use
Zero-copy decoding is useful when:
- Processing large result sets where allocation overhead matters
- All columns are fixed-size types (integers, floats)
- All columns are NOT NULL
- You don't need to store the decoded rows (processing in a callback)
PostgreSQL Wire Format
PostgreSQL’s binary protocol includes a 4-byte length prefix before each column value. To enable zero-copy struct casting, use the LengthPrefixed<T> wrapper which includes this prefix in its layout.
Requirements
To use zero-copy decoding, your struct must:
- Derive RefFromRow
- Have the #[repr(C, packed)] attribute
- Use LengthPrefixed<T> with big-endian types (PostgreSQL uses network byte order)
Example
use zero_postgres::conversion::ref_row::{RefFromRow, LengthPrefixed, I64BE, I32BE};
use zero_postgres_derive::RefFromRow;
#[derive(RefFromRow)]
#[repr(C, packed)]
struct UserStats {
user_id: LengthPrefixed<I64BE>, // 4 + 8 = 12 bytes
login_count: LengthPrefixed<I32BE>, // 4 + 4 = 8 bytes
}
// Process rows without allocation
conn.exec_foreach_ref::<UserStats, _, _, _>(&stmt, (), |row| {
// row is &UserStats - a reference into the buffer
println!("user_id: {}", row.user_id.get().get());
println!("login_count: {}", row.login_count.get().get());
Ok(())
})?;
Available Types
PostgreSQL uses big-endian (network byte order) encoding. Use these types wrapped in LengthPrefixed:
| Rust Type | Wire Size | Total with Prefix |
|---|---|---|
| i8 / u8 | 1 byte | 5 bytes |
| I16BE / U16BE | 2 bytes | 6 bytes |
| I32BE / U32BE | 4 bytes | 8 bytes |
| I64BE / U64BE | 8 bytes | 12 bytes |
These are re-exported from zero_postgres::conversion::ref_row for convenience.
Accessing Values
LengthPrefixed<T> provides these methods:
// Get the length prefix (should match type size)
let len: i32 = row.user_id.len();
// Check if NULL (length == -1)
let is_null: bool = row.user_id.is_null();
// Get the inner value (returns T, e.g., I64BE)
let inner: I64BE = row.user_id.get();
// Get the native value (chain .get() calls)
let user_id: i64 = row.user_id.get().get();
Limitations
- No NULL support: All columns must be NOT NULL. Use FromRow for nullable columns.
- Fixed-size types only: Variable-length types like TEXT, BYTEA, NUMERIC are not supported.
- No column name matching: Columns must match struct field order exactly.
- Binary format only: Only works with the extended query protocol (not simple queries).
- Callback-based only: Returns references into the buffer, so can only be used with exec_foreach_ref.
Comparison with FromRow
| Feature | FromRow | RefFromRow |
|---|---|---|
| Allocation | Yes (per row) | No |
| NULL support | Yes (Option<T>) | No |
| Variable-length types | Yes | No |
| Column name matching | Yes | No |
| Return type | Owned T | Reference &T |
| API | exec_first, exec_collect, exec_foreach | exec_foreach_ref |
How It Works
- The derive macro generates zerocopy trait implementations (FromBytes, KnownLayout, Immutable)
- At compile time, it verifies all fields implement FixedWireSize
- At runtime, the row buffer is cast directly to &YourStruct using zerocopy
- No parsing, no allocation: just a pointer cast with size validation
Wire Format Diagram
PostgreSQL DataRow for (INT8, INT4):
┌─────────────────────────────┬─────────────────────────┐
│ LengthPrefixed<I64BE> │ LengthPrefixed<I32BE> │
├──────────┬──────────────────┼──────────┬──────────────┤
│ len (4B) │ value (8B) │ len (4B) │ value (4B) │
│ 00000008 │ 0102030405060708 │ 00000004 │ 12345678 │
└──────────┴──────────────────┴──────────┴──────────────┘
The struct layout matches this wire format exactly, enabling direct casting.
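To make the byte layout concrete, here is a plain-Rust decode of the example row above using from_be_bytes. This is an illustration only; the library replaces this per-field parsing with a single cast:

```rust
fn main() {
    // The two column values from the diagram: length prefix + value, twice.
    let buf: [u8; 20] = [
        0x00, 0x00, 0x00, 0x08, // length prefix: 8
        0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, // INT8 value
        0x00, 0x00, 0x00, 0x04, // length prefix: 4
        0x12, 0x34, 0x56, 0x78, // INT4 value
    ];
    let len1 = i32::from_be_bytes(buf[0..4].try_into().unwrap());
    let v1 = i64::from_be_bytes(buf[4..12].try_into().unwrap());
    let len2 = i32::from_be_bytes(buf[12..16].try_into().unwrap());
    let v2 = i32::from_be_bytes(buf[16..20].try_into().unwrap());
    assert_eq!((len1, len2), (8, 4));
    assert_eq!(v1, 0x0102030405060708);
    assert_eq!(v2, 0x12345678);
}
```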