FRE-600: Fix code review blockers

- Consolidated duplicate UndoManagers to single instance
- Fixed connection promise to only resolve on 'connected' status
- Fixed WebSocketProvider import (WebsocketProvider)
- Added proper doc.destroy() cleanup
- Renamed isPresenceInitialized property to avoid conflict

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-25 00:08:01 -04:00
parent 65b552bb08
commit 7c684a42cc
48450 changed files with 5679671 additions and 383 deletions

node_modules/@solana/buffer-layout/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2015-2018 Peter A. Bigot

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

node_modules/@solana/buffer-layout/README.md generated vendored Normal file

@@ -0,0 +1,403 @@
# @solana/buffer-layout
`@solana/buffer-layout` is a TypeScript fork of `buffer-layout`. Same API, just adds types and TypeScript docs.
## Installation
Install with `npm install @solana/buffer-layout`.
Development and testing are done using Node.js; versions 5.10 and later
are supported.
# buffer-layout
[![NPM version](https://img.shields.io/npm/v/buffer-layout.svg)](https://www.npmjs.com/package/buffer-layout "View this project on NPM")
[![Build Status](https://travis-ci.org/pabigot/buffer-layout.svg?branch=master)](https://travis-ci.org/pabigot/buffer-layout "Check build status on TravisCI")
[![Coverage Status](https://coveralls.io/repos/pabigot/buffer-layout/badge.svg?branch=master&service=github)](https://coveralls.io/github/pabigot/buffer-layout?branch=master "Check coverage status on Coveralls")
buffer-layout is a utility module implemented in pure JavaScript that
supports translations between JavaScript values and Buffers. It is made
available through [github](https://github.com/pabigot/buffer-layout) and
released under the MIT license.
Layout support is provided for these types of data:
* Signed and unsigned integral values from 1 to 6 bytes in length, in
little-endian or big-endian format;
* Signed and unsigned 64-bit integral values decoded as integral
Numbers;
* Float and double values (also little-endian or big-endian);
* Sequences of instances of an arbitrary layout, with constant or
data-dependent length;
* Structures with named fields containing arbitrary layouts;
* Unions of variant layouts where the type of data is recorded in a
prefix value, another layout element, or provided externally;
* Bit fields within 8, 16, 24, or 32-bit unsigned integers, numbering
from the least or most significant bit;
* NUL-terminated C strings;
* Blobs of fixed or variable-length raw data.
## Examples
All examples are from the `test/examples.js` unit test and assume the
following context:
```js
const assert = require('assert');
const util = require('util');
const lo = require('buffer-layout');
```
The examples give only a taste of what can be done. Structures, unions,
and sequences can nest; [union
discriminators](http://pabigot.github.io/buffer-layout/module-Layout-UnionDiscriminator.html)
can be within the union or external to it; sequence and blob lengths may
be fixed or read from the buffer.
For full details see the [documentation](http://pabigot.github.io/buffer-layout/).
### Four-element array of 16-bit signed little-endian integers
The C definition:
```c
int16_t arr[4] = { 1, -1, 3, -3 };
```
The buffer-layout way:
```js
const ds = lo.seq(lo.s16(), 4);
const b = Buffer.alloc(8);
assert.equal(ds.encode([1, -1, 3, -3], b), 4 * 2);
assert.equal(Buffer.from('0100ffff0300fdff', 'hex').compare(b), 0);
assert.deepEqual(ds.decode(b), [1, -1, 3, -3]);
```
See [Int](http://pabigot.github.io/buffer-layout/module-Layout-Int.html)
and [Sequence](http://pabigot.github.io/buffer-layout/module-Layout-Sequence.html).
### A native C `struct` on a 32-bit little-endian machine
The C definition:
```c
struct ds {
  uint8_t v;
  uint32_t u32;
} st;
```
The buffer-layout way:
```js
const ds = lo.struct([lo.u8('v'),
                      lo.seq(lo.u8(), 3), // alignment padding
                      lo.u32('u32')]);
assert.equal(ds.offsetOf('u32'), 4);
const b = Buffer.alloc(8);
b.fill(0xbd);
assert.equal(ds.encode({v: 1, u32: 0x12345678}, b), 1 + 3 + 4);
assert.equal(Buffer.from('01bdbdbd78563412', 'hex').compare(b), 0);
assert.deepEqual(ds.decode(b), {v: 1, u32: 0x12345678});
```
Note that the C language requires padding which must be explicitly added
in the buffer-layout structure definition. Since the padding is not
accessible, the corresponding layout has no
[property](http://pabigot.github.io/buffer-layout/module-Layout-Layout.html#property).
See [Structure](http://pabigot.github.io/buffer-layout/module-Layout-Structure.html).
### A packed C `struct` on a 32-bit little-endian machine
The C definition:
```c
struct ds {
  uint8_t v;
  uint32_t u32;
} __attribute__((__packed__)) st;
```
The buffer-layout way:
```js
const ds = lo.struct([lo.u8('v'),
                      lo.u32('u32')]);
assert.equal(ds.offsetOf('u32'), 1);
const b = Buffer.alloc(5);
b.fill(0xbd);
assert.equal(ds.encode({v: 1, u32: 0x12345678}, b), 1 + 4);
assert.equal(Buffer.from('0178563412', 'hex').compare(b), 0);
assert.deepEqual(ds.decode(b), {v: 1, u32: 0x12345678});
```
### A tagged union of 4-byte values
Assume a 5-byte packed structure where the interpretation of the last
four bytes depends on the first byte. The C definition:
```c
struct {
  uint8_t t;
  union ds {
    uint8_t u8[4];  // default interpretation
    int16_t s16[2]; // when t is 'h'
    uint32_t u32;   // when t is 'w'
    float f32;      // when t is 'f'
  } u;
} __attribute__((__packed__)) un;
```
The buffer-layout way:
```js
const t = lo.u8('t');
const un = lo.union(t, lo.seq(lo.u8(), 4, 'u8'));
const nul = un.addVariant('n'.charCodeAt(0), 'nul');
const u32 = un.addVariant('w'.charCodeAt(0), lo.u32(), 'u32');
const s16 = un.addVariant('h'.charCodeAt(0), lo.seq(lo.s16(), 2), 's16');
const f32 = un.addVariant('f'.charCodeAt(0), lo.f32(), 'f32');
const b = Buffer.alloc(un.span);
assert.deepEqual(un.decode(b), {t: 0, u8: [0, 0, 0, 0]});
assert.deepEqual(un.decode(Buffer.from('6e01020304', 'hex')),
                 {nul: true});
assert.deepEqual(un.decode(Buffer.from('7778563412', 'hex')),
                 {u32: 0x12345678});
assert.deepEqual(un.decode(Buffer.from('660000bd41', 'hex')),
                 {f32: 23.625});
assert.deepEqual(un.decode(Buffer.from('a5a5a5a5a5', 'hex')),
                 {t: 0xa5, u8: [0xa5, 0xa5, 0xa5, 0xa5]});
assert.equal(s16.encode({s16: [123, -123]}, b), 1 + 2 * 2);
assert.equal(Buffer.from('687b0085ff', 'hex').compare(b), 0);
```
See [Union](http://pabigot.github.io/buffer-layout/module-Layout-Union.html).
### Decoding into class instances
Using the same 5-byte packet structure but with JavaScript classes
representing the union and the variants:
```js
function Union() { }
lo.bindConstructorLayout(Union,
  lo.union(lo.u8('t'), lo.seq(lo.u8(), 4, 'u8')));

function Vn() {}
util.inherits(Vn, Union);
lo.bindConstructorLayout(Vn,
  Union.layout_.addVariant('n'.charCodeAt(0), 'nul'));

function Vu32(v) { this.u32 = v; }
util.inherits(Vu32, Union);
lo.bindConstructorLayout(Vu32,
  Union.layout_.addVariant('w'.charCodeAt(0), lo.u32(), 'u32'));

function Vs16(v) { this.s16 = v; }
util.inherits(Vs16, Union);
lo.bindConstructorLayout(Vs16,
  Union.layout_.addVariant('h'.charCodeAt(0), lo.seq(lo.s16(), 2), 's16'));

function Vf32(v) { this.f32 = v; }
util.inherits(Vf32, Union);
lo.bindConstructorLayout(Vf32,
  Union.layout_.addVariant('f'.charCodeAt(0), lo.f32(), 'f32'));

let v = Union.decode(Buffer.from('7778563412', 'hex'));
assert(v instanceof Vu32);
assert(v instanceof Union);
assert.equal(v.u32, 0x12345678);

v = Union.decode(Buffer.from('a5a5a5a5a5', 'hex'));
assert(v instanceof Union);
assert.equal(v.t, 0xa5);
assert.deepEqual(v.u8, [0xa5, 0xa5, 0xa5, 0xa5]);

const b = Buffer.alloc(Union.layout_.span);
v = new Vf32(23.625);
v.encode(b);
assert.equal(Buffer.from('660000bd41', 'hex').compare(b), 0);

b.fill(0xFF);
v = new Vn();
v.encode(b);
assert.equal(Buffer.from('6effffffff', 'hex').compare(b), 0);
```
Note that one variant (`'n'`) carries no data, leaving the remainder of
the buffer unchanged when stored.
See
[Layout.makeDestinationObject()](http://pabigot.github.io/buffer-layout/module-Layout-Layout.html#makeDestinationObject)
and
[bindConstructorLayout](http://pabigot.github.io/buffer-layout/module-Layout.html#.bindConstructorLayout).
### Packed bit fields on a little-endian machine
The C definition:
```c
struct ds {
  unsigned int b00l03: 3;
  unsigned int flg03: 1;
  unsigned int b04l18: 24;
  unsigned int b1Cl04: 4;
} st;
```
The buffer-layout way:
```js
const ds = lo.bits(lo.u32());
const b = Buffer.alloc(4);
ds.addField(3, 'b00l03');
ds.addBoolean('flg03');
ds.addField(24, 'b04l18');
ds.addField(4, 'b1Cl04');
b.fill(0xff);
assert.equal(ds.encode({b00l03: 3, b04l18: 24, b1Cl04: 4}, b), 4);
assert.equal(Buffer.from('8b010040', 'hex').compare(b), 0);
assert.deepEqual(ds.decode(b),
                 {b00l03: 3, flg03: true, b04l18: 24, b1Cl04: 4});
```
See [BitStructure](http://pabigot.github.io/buffer-layout/module-Layout-BitStructure.html).
### 64-bit values as Numbers
The C definition:
```c
uint64_t v = 0x0102030405060708ULL;
```
The buffer-layout way:
```js
const ds = lo.nu64be();
const b = Buffer.from('0102030405060708', 'hex');
const v = 72623859790382856;
const nv = v - 6;
assert.equal(v, nv);
assert.equal(ds.decode(b), nv);
```
Note that because the exact value is not less than 2^53 it cannot be
represented as a JavaScript Number, and is instead approximated by a
nearby representable integer that is equivalent within Numbers.
See [NearUInt64](http://pabigot.github.io/buffer-layout/module-Layout-NearUInt64.html).
### A NUL-terminated C string
The C definition:
```c
const char str[] = "hi!";
```
The buffer-layout way:
```js
const ds = lo.cstr();
const b = Buffer.alloc(8);
assert.equal(ds.encode('hi!', b), 3 + 1);
const slen = ds.getSpan(b);
assert.equal(slen, 4);
assert.equal(Buffer.from('68692100', 'hex').compare(b.slice(0, slen)), 0);
assert.equal(ds.decode(b), 'hi!');
```
See [CString](http://pabigot.github.io/buffer-layout/module-Layout-CString.html).
### A fixed-length block of data offset within a buffer
The buffer-layout way:
```js
const ds = lo.blob(4);
const b = Buffer.from('0102030405060708', 'hex');
assert.equal(Buffer.from('03040506', 'hex').compare(ds.decode(b, 2)), 0);
```
See [Blob](http://pabigot.github.io/buffer-layout/module-Layout-Blob.html).
### A variable-length array of pairs of C strings
The buffer-layout way:
```js
const pr = lo.seq(lo.cstr(), 2);
const n = lo.u8('n');
const vla = lo.seq(pr, lo.offset(n, -1), 'a');
const st = lo.struct([n, vla], 'st');
const b = Buffer.alloc(32);
const arr = [['k1', 'v1'], ['k2', 'v2'], ['k3', 'etc']];
b.fill(0);
assert.equal(st.encode({a: arr}, b),
             1 + (2 * ((2 + 1) + (2 + 1)) + (2 + 1) + (3 + 1)));
const span = st.getSpan(b);
assert.equal(span, 20);
assert.equal(Buffer.from('036b31007631006b32007632006b330065746300', 'hex')
             .compare(b.slice(0, span)), 0);
assert.deepEqual(st.decode(b), {n: 3, a: arr});
```
See [OffsetLayout](http://pabigot.github.io/buffer-layout/module-Layout-OffsetLayout.html).
### A C flexible array member with implicit length
When data is obtained over a packetized interface the length of the
packet can provide implicit limits on the last field.
The C definition:
```c
struct ds {
  uint8_t prop;
  uint16_t data[];
};
```
The buffer-layout way:
```js
const st = lo.struct([lo.u8('prop'),
                      lo.seq(lo.u16(),
                             lo.greedy(lo.u16().span),
                             'data')],
                     'ds');
const b = Buffer.from('21010002030405', 'hex');
assert.deepEqual(st.decode(b), {prop: 33, data: [0x0001, 0x0302, 0x0504]});
b.fill(0xFF);
assert.equal(st.encode({prop: 9, data: [5, 6]}, b), 1 + 2 * 2);
assert.equal(Buffer.from('0905000600FFFF', 'hex').compare(b), 0);
```
### Tagged values, or variable-length unions
Storing arbitrary data using a leading byte to identify the content,
followed by a value that takes up only as much room as is necessary.
The example also shows how to extend the variant recognition API to
support arbitrary constants without consuming space for them in the
encoded union. This could be used to make something similar to
[BSON](http://bsonspec.org/spec.html).
Here's the code that defines the union, the variants, and the
recognition of `true` and `false` values for `b` as distinct variants:
```js
const un = lo.union(lo.u8('t'));
const u8 = un.addVariant('B'.charCodeAt(0), lo.u8(), 'u8');
const s16 = un.addVariant('h'.charCodeAt(0), lo.s16(), 's16');
const s48 = un.addVariant('Q'.charCodeAt(0), lo.s48(), 's48');
const cstr = un.addVariant('s'.charCodeAt(0), lo.cstr(), 'str');
const tr = un.addVariant('T'.charCodeAt(0), lo.const(true), 'b');
const fa = un.addVariant('F'.charCodeAt(0), lo.const(false), 'b');
const b = Buffer.alloc(1 + 6);
un.configGetSourceVariant(function(src) {
  if (src.hasOwnProperty('b')) {
    return src.b ? tr : fa;
  }
  return this.defaultGetSourceVariant(src);
});
```
And here are examples of encoding, checking the encoded length, and
decoding each of the alternatives:
```js
b.fill(0xff);
assert.equal(un.encode({u8: 1}, b), 1 + 1);
assert.equal(un.getSpan(b), 2);
assert.equal(Buffer.from('4201ffffffffff', 'hex').compare(b), 0);
assert.equal(un.decode(b).u8, 1);

b.fill(0xff);
assert.equal(un.encode({s16: -32000}, b), 1 + 2);
assert.equal(un.getSpan(b), 3);
assert.equal(Buffer.from('680083ffffffff', 'hex').compare(b), 0);
assert.equal(un.decode(b).s16, -32000);

b.fill(0xff);
const v48 = Math.pow(2, 47) - 1;
assert.equal(un.encode({s48: v48}, b), 1 + 6);
assert.equal(un.getSpan(b), 7);
assert.equal(Buffer.from('51ffffffffff7f', 'hex').compare(b), 0);
assert.equal(un.decode(b).s48, v48);

b.fill(0xff);
assert.equal(un.encode({b: true}, b), 1);
assert.equal(un.getSpan(b), 1);
assert.equal(Buffer.from('54ffffffffffff', 'hex').compare(b), 0);
assert.strictEqual(un.decode(b).b, true);

b.fill(0xff);
assert.equal(un.encode({b: false}, b), 1);
assert.equal(un.getSpan(b), 1);
assert.equal(Buffer.from('46ffffffffffff', 'hex').compare(b), 0);
assert.strictEqual(un.decode(b).b, false);
```
**NOTE** This code tickles a long-standing [bug in
Buffer.writeInt{L,B}E](https://github.com/nodejs/node/pull/3994); if you
are using Node prior to 4.2.4 or 5.2.0 you should update.

node_modules/@solana/buffer-layout/lib/Layout.d.ts generated vendored Normal file

File diff suppressed because it is too large

node_modules/@solana/buffer-layout/lib/Layout.js generated vendored Normal file

File diff suppressed because it is too large

node_modules/@solana/buffer-layout/lib/Layout.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long


@@ -0,0 +1,73 @@
# Authors
#### Ordered by first contribution.
- Romain Beauxis (toots@rastageeks.org)
- Tobias Koppers (tobias.koppers@googlemail.com)
- Janus (ysangkok@gmail.com)
- Rainer Dreyer (rdrey1@gmail.com)
- Tõnis Tiigi (tonistiigi@gmail.com)
- James Halliday (mail@substack.net)
- Michael Williamson (mike@zwobble.org)
- elliottcable (github@elliottcable.name)
- rafael (rvalle@livelens.net)
- Andrew Kelley (superjoe30@gmail.com)
- Andreas Madsen (amwebdk@gmail.com)
- Mike Brevoort (mike.brevoort@pearson.com)
- Brian White (mscdex@mscdex.net)
- Feross Aboukhadijeh (feross@feross.org)
- Ruben Verborgh (ruben@verborgh.org)
- eliang (eliang.cs@gmail.com)
- Jesse Tane (jesse.tane@gmail.com)
- Alfonso Boza (alfonso@cloud.com)
- Mathias Buus (mathiasbuus@gmail.com)
- Devon Govett (devongovett@gmail.com)
- Daniel Cousens (github@dcousens.com)
- Joseph Dykstra (josephdykstra@gmail.com)
- Parsha Pourkhomami (parshap+git@gmail.com)
- Damjan Košir (damjan.kosir@gmail.com)
- daverayment (dave.rayment@gmail.com)
- kawanet (u-suke@kawa.net)
- Linus Unnebäck (linus@folkdatorn.se)
- Nolan Lawson (nolan.lawson@gmail.com)
- Calvin Metcalf (calvin.metcalf@gmail.com)
- Koki Takahashi (hakatasiloving@gmail.com)
- Guy Bedford (guybedford@gmail.com)
- Jan Schär (jscissr@gmail.com)
- RaulTsc (tomescu.raul@gmail.com)
- Matthieu Monsch (monsch@alum.mit.edu)
- Dan Ehrenberg (littledan@chromium.org)
- Kirill Fomichev (fanatid@ya.ru)
- Yusuke Kawasaki (u-suke@kawa.net)
- DC (dcposch@dcpos.ch)
- John-David Dalton (john.david.dalton@gmail.com)
- adventure-yunfei (adventure030@gmail.com)
- Emil Bay (github@tixz.dk)
- Sam Sudar (sudar.sam@gmail.com)
- Volker Mische (volker.mische@gmail.com)
- David Walton (support@geekstocks.com)
- Сковорода Никита Андреевич (chalkerx@gmail.com)
- greenkeeper[bot] (greenkeeper[bot]@users.noreply.github.com)
- ukstv (sergey.ukustov@machinomy.com)
- Renée Kooi (renee@kooi.me)
- ranbochen (ranbochen@qq.com)
- Vladimir Borovik (bobahbdb@gmail.com)
- greenkeeper[bot] (23040076+greenkeeper[bot]@users.noreply.github.com)
- kumavis (aaron@kumavis.me)
- Sergey Ukustov (sergey.ukustov@machinomy.com)
- Fei Liu (liu.feiwood@gmail.com)
- Blaine Bublitz (blaine.bublitz@gmail.com)
- clement (clement@seald.io)
- Koushik Dutta (koushd@gmail.com)
- Jordan Harband (ljharb@gmail.com)
- Niklas Mischkulnig (mischnic@users.noreply.github.com)
- Nikolai Vavilov (vvnicholas@gmail.com)
- Fedor Nezhivoi (gyzerok@users.noreply.github.com)
- shuse2 (shus.toda@gmail.com)
- Peter Newman (peternewman@users.noreply.github.com)
- mathmakgakpak (44949126+mathmakgakpak@users.noreply.github.com)
- jkkang (jkkang@smartauth.kr)
- Deklan Webster (deklanw@gmail.com)
- Martin Heidegger (martin.heidegger@gmail.com)
#### Generated by bin/update-authors.sh.


@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) Feross Aboukhadijeh, and other contributors.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -0,0 +1,410 @@
# buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url]
[travis-image]: https://img.shields.io/travis/feross/buffer/master.svg
[travis-url]: https://travis-ci.org/feross/buffer
[npm-image]: https://img.shields.io/npm/v/buffer.svg
[npm-url]: https://npmjs.org/package/buffer
[downloads-image]: https://img.shields.io/npm/dm/buffer.svg
[downloads-url]: https://npmjs.org/package/buffer
[standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg
[standard-url]: https://standardjs.com
#### The buffer module from [node.js](https://nodejs.org/), for the browser.
[![saucelabs][saucelabs-image]][saucelabs-url]
[saucelabs-image]: https://saucelabs.com/browser-matrix/buffer.svg
[saucelabs-url]: https://saucelabs.com/u/buffer
With [browserify](http://browserify.org), simply `require('buffer')` or use the `Buffer` global and you will get this module.
The goal is to provide an API that is 100% identical to
[node's Buffer API](https://nodejs.org/api/buffer.html). Read the
[official docs](https://nodejs.org/api/buffer.html) for the full list of properties,
instance methods, and class methods that are supported.
## features
- Manipulate binary data like a boss, in all browsers!
- Super fast. Backed by Typed Arrays (`Uint8Array`/`ArrayBuffer`, not `Object`)
- Extremely small bundle size (**6.75KB minified + gzipped**, 51.9KB with comments)
- Excellent browser support (Chrome, Firefox, Edge, Safari 11+, iOS 11+, Android, etc.)
- Preserves Node API exactly, with one minor difference (see below)
- Square-bracket `buf[4]` notation works!
- Does not modify any browser prototypes or put anything on `window`
- Comprehensive test suite (including all buffer tests from node.js core)
## install
To use this module directly (without browserify), install it:
```bash
npm install buffer
```
This module was previously called **native-buffer-browserify**, but please use **buffer**
from now on.
If you do not use a bundler, you can use the [standalone script](https://bundle.run/buffer).
## usage
The module's API is identical to node's `Buffer` API. Read the
[official docs](https://nodejs.org/api/buffer.html) for the full list of properties,
instance methods, and class methods that are supported.
As mentioned above, `require('buffer')` or use the `Buffer` global with
[browserify](http://browserify.org) and this module will automatically be included
in your bundle. Almost any npm module will work in the browser, even if it assumes that
the node `Buffer` API will be available.
To depend on this module explicitly (without browserify), require it like this:
```js
var Buffer = require('buffer/').Buffer // note: the trailing slash is important!
```
To require this module explicitly, use `require('buffer/')` which tells the node.js module
lookup algorithm (also used by browserify) to use the **npm module** named `buffer`
instead of the **node.js core** module named `buffer`!
## how does it work?
The Buffer constructor returns instances of `Uint8Array` that have their prototype
changed to `Buffer.prototype`. Furthermore, `Buffer` is a subclass of `Uint8Array`,
so the returned instances will have all the node `Buffer` methods and the
`Uint8Array` methods. Square bracket notation works as expected -- it returns a
single octet.
The `Uint8Array` prototype remains unmodified.
## tracking the latest node api
This module tracks the Buffer API in the latest (unstable) version of node.js. The Buffer
API is considered **stable** in the
[node stability index](https://nodejs.org/docs/latest/api/documentation.html#documentation_stability_index),
so it is unlikely that there will ever be breaking changes.
Nonetheless, when/if the Buffer API changes in node, this module's API will change
accordingly.
## related packages
- [`buffer-reverse`](https://www.npmjs.com/package/buffer-reverse) - Reverse a buffer
- [`buffer-xor`](https://www.npmjs.com/package/buffer-xor) - Bitwise xor a buffer
- [`is-buffer`](https://www.npmjs.com/package/is-buffer) - Determine if an object is a Buffer without including the whole `Buffer` package
## conversion packages
### convert typed array to buffer
Use [`typedarray-to-buffer`](https://www.npmjs.com/package/typedarray-to-buffer) to convert any kind of typed array to a `Buffer`. Does not perform a copy, so it's super fast.
### convert buffer to typed array
`Buffer` is a subclass of `Uint8Array` (which is a typed array). So there is no need to explicitly convert to typed array. Just use the buffer as a `Uint8Array`.
### convert blob to buffer
Use [`blob-to-buffer`](https://www.npmjs.com/package/blob-to-buffer) to convert a `Blob` to a `Buffer`.
### convert buffer to blob
To convert a `Buffer` to a `Blob`, use the `Blob` constructor:
```js
var blob = new Blob([ buffer ])
```
Optionally, specify a mimetype:
```js
var blob = new Blob([ buffer ], { type: 'text/html' })
```
### convert arraybuffer to buffer
To convert an `ArrayBuffer` to a `Buffer`, use the `Buffer.from` function. Does not perform a copy, so it's super fast.
```js
var buffer = Buffer.from(arrayBuffer)
```
### convert buffer to arraybuffer
To convert a `Buffer` to an `ArrayBuffer`, use the `.buffer` property (which is present on all `Uint8Array` objects):
```js
var arrayBuffer = buffer.buffer.slice(
buffer.byteOffset, buffer.byteOffset + buffer.byteLength
)
```
Alternatively, use the [`to-arraybuffer`](https://www.npmjs.com/package/to-arraybuffer) module.
## performance
See perf tests in `/perf`.
`BrowserBuffer` is the browser `buffer` module (this repo). `Uint8Array` is included as a
sanity check (since `BrowserBuffer` uses `Uint8Array` under the hood, `Uint8Array` will
always be at least a bit faster). Finally, `NodeBuffer` is the node.js buffer module,
which is included to compare against.
NOTE: Performance has improved since these benchmarks were taken. PR welcome to update the README.
### Chrome 38
| Method | Operations | Accuracy | Sampled | Fastest |
|:-------|:-----------|:---------|:--------|:-------:|
| BrowserBuffer#bracket-notation | 11,457,464 ops/sec | ±0.86% | 66 | ✓ |
| Uint8Array#bracket-notation | 10,824,332 ops/sec | ±0.74% | 65 | |
| | | | |
| BrowserBuffer#concat | 450,532 ops/sec | ±0.76% | 68 | |
| Uint8Array#concat | 1,368,911 ops/sec | ±1.50% | 62 | ✓ |
| | | | |
| BrowserBuffer#copy(16000) | 903,001 ops/sec | ±0.96% | 67 | |
| Uint8Array#copy(16000) | 1,422,441 ops/sec | ±1.04% | 66 | ✓ |
| | | | |
| BrowserBuffer#copy(16) | 11,431,358 ops/sec | ±0.46% | 69 | |
| Uint8Array#copy(16) | 13,944,163 ops/sec | ±1.12% | 68 | ✓ |
| | | | |
| BrowserBuffer#new(16000) | 106,329 ops/sec | ±6.70% | 44 | |
| Uint8Array#new(16000) | 131,001 ops/sec | ±2.85% | 31 | ✓ |
| | | | |
| BrowserBuffer#new(16) | 1,554,491 ops/sec | ±1.60% | 65 | |
| Uint8Array#new(16) | 6,623,930 ops/sec | ±1.66% | 65 | ✓ |
| | | | |
| BrowserBuffer#readDoubleBE | 112,830 ops/sec | ±0.51% | 69 | ✓ |
| DataView#getFloat64 | 93,500 ops/sec | ±0.57% | 68 | |
| | | | |
| BrowserBuffer#readFloatBE | 146,678 ops/sec | ±0.95% | 68 | ✓ |
| DataView#getFloat32 | 99,311 ops/sec | ±0.41% | 67 | |
| | | | |
| BrowserBuffer#readUInt32LE | 843,214 ops/sec | ±0.70% | 69 | ✓ |
| DataView#getUint32 | 103,024 ops/sec | ±0.64% | 67 | |
| | | | |
| BrowserBuffer#slice | 1,013,941 ops/sec | ±0.75% | 67 | |
| Uint8Array#subarray | 1,903,928 ops/sec | ±0.53% | 67 | ✓ |
| | | | |
| BrowserBuffer#writeFloatBE | 61,387 ops/sec | ±0.90% | 67 | |
| DataView#setFloat32 | 141,249 ops/sec | ±0.40% | 66 | ✓ |
### Firefox 33
| Method | Operations | Accuracy | Sampled | Fastest |
|:-------|:-----------|:---------|:--------|:-------:|
| BrowserBuffer#bracket-notation | 20,800,421 ops/sec | ±1.84% | 60 | |
| Uint8Array#bracket-notation | 20,826,235 ops/sec | ±2.02% | 61 | ✓ |
| | | | |
| BrowserBuffer#concat | 153,076 ops/sec | ±2.32% | 61 | |
| Uint8Array#concat | 1,255,674 ops/sec | ±8.65% | 52 | ✓ |
| | | | |
| BrowserBuffer#copy(16000) | 1,105,312 ops/sec | ±1.16% | 63 | |
| Uint8Array#copy(16000) | 1,615,911 ops/sec | ±0.55% | 66 | ✓ |
| | | | |
| BrowserBuffer#copy(16) | 16,357,599 ops/sec | ±0.73% | 68 | |
| Uint8Array#copy(16) | 31,436,281 ops/sec | ±1.05% | 68 | ✓ |
| | | | |
| BrowserBuffer#new(16000) | 52,995 ops/sec | ±6.01% | 35 | |
| Uint8Array#new(16000) | 87,686 ops/sec | ±5.68% | 45 | ✓ |
| | | | |
| BrowserBuffer#new(16) | 252,031 ops/sec | ±1.61% | 66 | |
| Uint8Array#new(16) | 8,477,026 ops/sec | ±0.49% | 68 | ✓ |
| | | | |
| BrowserBuffer#readDoubleBE | 99,871 ops/sec | ±0.41% | 69 | |
| DataView#getFloat64 | 285,663 ops/sec | ±0.70% | 68 | ✓ |
| | | | |
| BrowserBuffer#readFloatBE | 115,540 ops/sec | ±0.42% | 69 | |
| DataView#getFloat32 | 288,722 ops/sec | ±0.82% | 68 | ✓ |
| | | | |
| BrowserBuffer#readUInt32LE | 633,926 ops/sec | ±1.08% | 67 | ✓ |
| DataView#getUint32 | 294,808 ops/sec | ±0.79% | 64 | |
| | | | |
| BrowserBuffer#slice | 349,425 ops/sec | ±0.46% | 69 | |
| Uint8Array#subarray | 5,965,819 ops/sec | ±0.60% | 65 | ✓ |
| | | | |
| BrowserBuffer#writeFloatBE | 59,980 ops/sec | ±0.41% | 67 | |
| DataView#setFloat32 | 317,634 ops/sec | ±0.63% | 68 | ✓ |
### Safari 8
| Method | Operations | Accuracy | Sampled | Fastest |
|:-------|:-----------|:---------|:--------|:-------:|
| BrowserBuffer#bracket-notation | 10,279,729 ops/sec | ±2.25% | 56 | ✓ |
| Uint8Array#bracket-notation | 10,030,767 ops/sec | ±2.23% | 59 | |
| | | | |
| BrowserBuffer#concat | 144,138 ops/sec | ±1.38% | 65 | |
| Uint8Array#concat | 4,950,764 ops/sec | ±1.70% | 63 | ✓ |
| | | | |
| BrowserBuffer#copy(16000) | 1,058,548 ops/sec | ±1.51% | 64 | |
| Uint8Array#copy(16000) | 1,409,666 ops/sec | ±1.17% | 65 | ✓ |
| | | | |
| BrowserBuffer#copy(16) | 6,282,529 ops/sec | ±1.88% | 58 | |
| Uint8Array#copy(16) | 11,907,128 ops/sec | ±2.87% | 58 | ✓ |
| | | | |
| BrowserBuffer#new(16000) | 101,663 ops/sec | ±3.89% | 57 | |
| Uint8Array#new(16000) | 22,050,818 ops/sec | ±6.51% | 46 | ✓ |
| | | | |
| BrowserBuffer#new(16) | 176,072 ops/sec | ±2.13% | 64 | |
| Uint8Array#new(16) | 24,385,731 ops/sec | ±5.01% | 51 | ✓ |
| | | | |
| BrowserBuffer#readDoubleBE | 41,341 ops/sec | ±1.06% | 67 | |
| DataView#getFloat64 | 322,280 ops/sec | ±0.84% | 68 | ✓ |
| | | | |
| BrowserBuffer#readFloatBE | 46,141 ops/sec | ±1.06% | 65 | |
| DataView#getFloat32 | 337,025 ops/sec | ±0.43% | 69 | ✓ |
| | | | |
| BrowserBuffer#readUInt32LE | 151,551 ops/sec | ±1.02% | 66 | |
| DataView#getUint32 | 308,278 ops/sec | ±0.94% | 67 | ✓ |
| | | | |
| BrowserBuffer#slice | 197,365 ops/sec | ±0.95% | 66 | |
| Uint8Array#subarray | 9,558,024 ops/sec | ±3.08% | 58 | ✓ |
| | | | |
| BrowserBuffer#writeFloatBE | 17,518 ops/sec | ±1.03% | 63 | |
| DataView#setFloat32 | 319,751 ops/sec | ±0.48% | 68 | ✓ |
### Node 0.11.14
| Method | Operations | Accuracy | Sampled | Fastest |
|:-------|:-----------|:---------|:--------|:-------:|
| BrowserBuffer#bracket-notation | 10,489,828 ops/sec | ±3.25% | 90 | |
| Uint8Array#bracket-notation | 10,534,884 ops/sec | ±0.81% | 92 | ✓ |
| NodeBuffer#bracket-notation | 10,389,910 ops/sec | ±0.97% | 87 | |
| | | | |
| BrowserBuffer#concat | 487,830 ops/sec | ±2.58% | 88 | |
| Uint8Array#concat | 1,814,327 ops/sec | ±1.28% | 88 | ✓ |
| NodeBuffer#concat | 1,636,523 ops/sec | ±1.88% | 73 | |
| | | | |
| BrowserBuffer#copy(16000) | 1,073,665 ops/sec | ±0.77% | 90 | |
| Uint8Array#copy(16000) | 1,348,517 ops/sec | ±0.84% | 89 | ✓ |
| NodeBuffer#copy(16000) | 1,289,533 ops/sec | ±0.82% | 93 | |
| | | | |
| BrowserBuffer#copy(16) | 12,782,706 ops/sec | ±0.74% | 85 | |
| Uint8Array#copy(16) | 14,180,427 ops/sec | ±0.93% | 92 | ✓ |
| NodeBuffer#copy(16) | 11,083,134 ops/sec | ±1.06% | 89 | |
| | | | |
| BrowserBuffer#new(16000) | 141,678 ops/sec | ±3.30% | 67 | |
| Uint8Array#new(16000) | 161,491 ops/sec | ±2.96% | 60 | |
| NodeBuffer#new(16000) | 292,699 ops/sec | ±3.20% | 55 | ✓ |
| | | | |
| BrowserBuffer#new(16) | 1,655,466 ops/sec | ±2.41% | 82 | |
| Uint8Array#new(16) | 14,399,926 ops/sec | ±0.91% | 94 | ✓ |
| NodeBuffer#new(16) | 3,894,696 ops/sec | ±0.88% | 92 | |
| | | | |
| BrowserBuffer#readDoubleBE | 109,582 ops/sec | ±0.75% | 93 | ✓ |
| DataView#getFloat64 | 91,235 ops/sec | ±0.81% | 90 | |
| NodeBuffer#readDoubleBE | 88,593 ops/sec | ±0.96% | 81 | |
| | | | |
| BrowserBuffer#readFloatBE | 139,854 ops/sec | ±1.03% | 85 | ✓ |
| DataView#getFloat32 | 98,744 ops/sec | ±0.80% | 89 | |
| NodeBuffer#readFloatBE | 92,769 ops/sec | ±0.94% | 93 | |
| | | | |
| BrowserBuffer#readUInt32LE | 710,861 ops/sec | ±0.82% | 92 | |
| DataView#getUint32 | 117,893 ops/sec | ±0.84% | 91 | |
| NodeBuffer#readUInt32LE | 851,412 ops/sec | ±0.72% | 93 | ✓ |
| | | | |
| BrowserBuffer#slice | 1,673,877 ops/sec | ±0.73% | 94 | |
| Uint8Array#subarray | 6,919,243 ops/sec | ±0.67% | 90 | ✓ |
| NodeBuffer#slice | 4,617,604 ops/sec | ±0.79% | 93 | |
| | | | |
| BrowserBuffer#writeFloatBE | 66,011 ops/sec | ±0.75% | 93 | |
| DataView#setFloat32 | 127,760 ops/sec | ±0.72% | 93 | ✓ |
| NodeBuffer#writeFloatBE | 103,352 ops/sec | ±0.83% | 93 | |
### iojs 1.8.1
| Method | Operations | Accuracy | Sampled | Fastest |
|:-------|:-----------|:---------|:--------|:-------:|
| BrowserBuffer#bracket-notation | 10,990,488 ops/sec | ±1.11% | 91 | |
| Uint8Array#bracket-notation | 11,268,757 ops/sec | ±0.65% | 97 | |
| NodeBuffer#bracket-notation | 11,353,260 ops/sec | ±0.83% | 94 | ✓ |
| | | | |
| BrowserBuffer#concat | 378,954 ops/sec | ±0.74% | 94 | |
| Uint8Array#concat | 1,358,288 ops/sec | ±0.97% | 87 | |
| NodeBuffer#concat | 1,934,050 ops/sec | ±1.11% | 78 | ✓ |
| | | | |
| BrowserBuffer#copy(16000) | 894,538 ops/sec | ±0.56% | 84 | |
| Uint8Array#copy(16000) | 1,442,656 ops/sec | ±0.71% | 96 | |
| NodeBuffer#copy(16000) | 1,457,898 ops/sec | ±0.53% | 92 | ✓ |
| | | | |
| BrowserBuffer#copy(16) | 12,870,457 ops/sec | ±0.67% | 95 | |
| Uint8Array#copy(16) | 16,643,989 ops/sec | ±0.61% | 93 | ✓ |
| NodeBuffer#copy(16) | 14,885,848 ops/sec | ±0.74% | 94 | |
| | | | |
| BrowserBuffer#new(16000) | 109,264 ops/sec | ±4.21% | 63 | |
| Uint8Array#new(16000) | 138,916 ops/sec | ±1.87% | 61 | |
| NodeBuffer#new(16000) | 281,449 ops/sec | ±3.58% | 51 | ✓ |
| | | | |
| BrowserBuffer#new(16) | 1,362,935 ops/sec | ±0.56% | 99 | |
| Uint8Array#new(16) | 6,193,090 ops/sec | ±0.64% | 95 | ✓ |
| NodeBuffer#new(16) | 4,745,425 ops/sec | ±1.56% | 90 | |
| | | | |
| BrowserBuffer#readDoubleBE | 118,127 ops/sec | ±0.59% | 93 | ✓ |
| DataView#getFloat64 | 107,332 ops/sec | ±0.65% | 91 | |
| NodeBuffer#readDoubleBE | 116,274 ops/sec | ±0.94% | 95 | |
| | | | |
| BrowserBuffer#readFloatBE | 150,326 ops/sec | ±0.58% | 95 | ✓ |
| DataView#getFloat32 | 110,541 ops/sec | ±0.57% | 98 | |
| NodeBuffer#readFloatBE | 121,599 ops/sec | ±0.60% | 87 | |
| | | | |
| BrowserBuffer#readUInt32LE | 814,147 ops/sec | ±0.62% | 93 | |
| DataView#getUint32 | 137,592 ops/sec | ±0.64% | 90 | |
| NodeBuffer#readUInt32LE | 931,650 ops/sec | ±0.71% | 96 | ✓ |
| | | | |
| BrowserBuffer#slice | 878,590 ops/sec | ±0.68% | 93 | |
| Uint8Array#subarray | 2,843,308 ops/sec | ±1.02% | 90 | |
| NodeBuffer#slice | 4,998,316 ops/sec | ±0.68% | 90 | ✓ |
| | | | |
| BrowserBuffer#writeFloatBE | 65,927 ops/sec | ±0.74% | 93 | |
| DataView#setFloat32 | 139,823 ops/sec | ±0.97% | 89 | ✓ |
| NodeBuffer#writeFloatBE | 135,763 ops/sec | ±0.65% | 96 | |
| | | | |
## Testing the project
First, install the project:

    npm install

Then, to run tests in Node.js, run:

    npm run test-node

To test locally in a browser, you can run:

    npm run test-browser-es5-local # For ES5 browsers that don't support ES6
    npm run test-browser-es6-local # For ES6 compliant browsers

This will print out a URL that you can then open in a browser to run the tests, using [airtap](https://www.npmjs.com/package/airtap).

To run automated browser tests using Saucelabs, ensure that your `SAUCE_USERNAME` and `SAUCE_ACCESS_KEY` environment variables are set, then run:

    npm test
This is what's run in Travis, to check against various browsers. The list of browsers is kept in the `bin/airtap-es5.yml` and `bin/airtap-es6.yml` files.
## JavaScript Standard Style
This module uses [JavaScript Standard Style](https://github.com/feross/standard).
[![JavaScript Style Guide](https://cdn.rawgit.com/feross/standard/master/badge.svg)](https://github.com/feross/standard)
To test that the code conforms to the style, `npm install` and run:

    ./node_modules/.bin/standard
## credit
This was originally forked from [buffer-browserify](https://github.com/toots/buffer-browserify).
## Security Policies and Procedures
The `buffer` team and community take all security bugs in `buffer` seriously. Please see our [security policies and procedures](https://github.com/feross/security) document to learn how to report issues.
## license
MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org), and other contributors. Originally forked from an MIT-licensed module by Romain Beauxis.


@@ -0,0 +1,194 @@
export class Buffer extends Uint8Array {
length: number
write(string: string, offset?: number, length?: number, encoding?: string): number;
toString(encoding?: string, start?: number, end?: number): string;
toJSON(): { type: 'Buffer', data: any[] };
equals(otherBuffer: Buffer): boolean;
compare(otherBuffer: Uint8Array, targetStart?: number, targetEnd?: number, sourceStart?: number, sourceEnd?: number): number;
copy(targetBuffer: Buffer, targetStart?: number, sourceStart?: number, sourceEnd?: number): number;
slice(start?: number, end?: number): Buffer;
writeUIntLE(value: number, offset: number, byteLength: number, noAssert?: boolean): number;
writeUIntBE(value: number, offset: number, byteLength: number, noAssert?: boolean): number;
writeIntLE(value: number, offset: number, byteLength: number, noAssert?: boolean): number;
writeIntBE(value: number, offset: number, byteLength: number, noAssert?: boolean): number;
readUIntLE(offset: number, byteLength: number, noAssert?: boolean): number;
readUIntBE(offset: number, byteLength: number, noAssert?: boolean): number;
readIntLE(offset: number, byteLength: number, noAssert?: boolean): number;
readIntBE(offset: number, byteLength: number, noAssert?: boolean): number;
readUInt8(offset: number, noAssert?: boolean): number;
readUInt16LE(offset: number, noAssert?: boolean): number;
readUInt16BE(offset: number, noAssert?: boolean): number;
readUInt32LE(offset: number, noAssert?: boolean): number;
readUInt32BE(offset: number, noAssert?: boolean): number;
readBigUInt64LE(offset: number): BigInt;
readBigUInt64BE(offset: number): BigInt;
readInt8(offset: number, noAssert?: boolean): number;
readInt16LE(offset: number, noAssert?: boolean): number;
readInt16BE(offset: number, noAssert?: boolean): number;
readInt32LE(offset: number, noAssert?: boolean): number;
readInt32BE(offset: number, noAssert?: boolean): number;
readBigInt64LE(offset: number): BigInt;
readBigInt64BE(offset: number): BigInt;
readFloatLE(offset: number, noAssert?: boolean): number;
readFloatBE(offset: number, noAssert?: boolean): number;
readDoubleLE(offset: number, noAssert?: boolean): number;
readDoubleBE(offset: number, noAssert?: boolean): number;
reverse(): this;
swap16(): Buffer;
swap32(): Buffer;
swap64(): Buffer;
writeUInt8(value: number, offset: number, noAssert?: boolean): number;
writeUInt16LE(value: number, offset: number, noAssert?: boolean): number;
writeUInt16BE(value: number, offset: number, noAssert?: boolean): number;
writeUInt32LE(value: number, offset: number, noAssert?: boolean): number;
writeUInt32BE(value: number, offset: number, noAssert?: boolean): number;
writeBigUInt64LE(value: number, offset: number): BigInt;
writeBigUInt64BE(value: number, offset: number): BigInt;
writeInt8(value: number, offset: number, noAssert?: boolean): number;
writeInt16LE(value: number, offset: number, noAssert?: boolean): number;
writeInt16BE(value: number, offset: number, noAssert?: boolean): number;
writeInt32LE(value: number, offset: number, noAssert?: boolean): number;
writeInt32BE(value: number, offset: number, noAssert?: boolean): number;
writeBigInt64LE(value: number, offset: number): BigInt;
writeBigInt64BE(value: number, offset: number): BigInt;
writeFloatLE(value: number, offset: number, noAssert?: boolean): number;
writeFloatBE(value: number, offset: number, noAssert?: boolean): number;
writeDoubleLE(value: number, offset: number, noAssert?: boolean): number;
writeDoubleBE(value: number, offset: number, noAssert?: boolean): number;
fill(value: any, offset?: number, end?: number): this;
indexOf(value: string | number | Buffer, byteOffset?: number, encoding?: string): number;
lastIndexOf(value: string | number | Buffer, byteOffset?: number, encoding?: string): number;
includes(value: string | number | Buffer, byteOffset?: number, encoding?: string): boolean;
/**
* Allocates a new buffer containing the given {str}.
*
* @param str String to store in buffer.
* @param encoding encoding to use, optional. Default is 'utf8'
*/
constructor (str: string, encoding?: string);
/**
* Allocates a new buffer of {size} octets.
*
* @param size count of octets to allocate.
*/
constructor (size: number);
/**
* Allocates a new buffer containing the given {array} of octets.
*
* @param array The octets to store.
*/
constructor (array: Uint8Array);
/**
* Produces a Buffer backed by the same allocated memory as
* the given {ArrayBuffer}.
*
*
* @param arrayBuffer The ArrayBuffer with which to share memory.
*/
constructor (arrayBuffer: ArrayBuffer);
/**
* Allocates a new buffer containing the given {array} of octets.
*
* @param array The octets to store.
*/
constructor (array: any[]);
/**
* Copies the passed {buffer} data onto a new {Buffer} instance.
*
* @param buffer The buffer to copy.
*/
constructor (buffer: Buffer);
prototype: Buffer;
/**
* Allocates a new Buffer using an {array} of octets.
*
* @param array
*/
static from(array: any[]): Buffer;
/**
* When passed a reference to the .buffer property of a TypedArray instance,
* the newly created Buffer will share the same allocated memory as the TypedArray.
* The optional {byteOffset} and {length} arguments specify a memory range
* within the {arrayBuffer} that will be shared by the Buffer.
*
* @param arrayBuffer The .buffer property of a TypedArray or a new ArrayBuffer()
* @param byteOffset
* @param length
*/
static from(arrayBuffer: ArrayBuffer, byteOffset?: number, length?: number): Buffer;
/**
* Copies the passed {buffer} data onto a new Buffer instance.
*
* @param buffer
*/
static from(buffer: Buffer | Uint8Array): Buffer;
/**
* Creates a new Buffer containing the given JavaScript string {str}.
* If provided, the {encoding} parameter identifies the character encoding.
* If not provided, {encoding} defaults to 'utf8'.
*
* @param str
*/
static from(str: string, encoding?: string): Buffer;
/**
* Returns true if {obj} is a Buffer
*
* @param obj object to test.
*/
static isBuffer(obj: any): obj is Buffer;
/**
* Returns true if {encoding} is a valid encoding argument.
* Valid string encodings in Node 0.12: 'ascii'|'utf8'|'utf16le'|'ucs2'(alias of 'utf16le')|'base64'|'binary'(deprecated)|'hex'
*
* @param encoding string to test.
*/
static isEncoding(encoding: string): boolean;
/**
* Gives the actual byte length of a string. encoding defaults to 'utf8'.
* This is not the same as String.prototype.length since that returns the number of characters in a string.
*
* @param string string to test.
* @param encoding encoding used to evaluate (defaults to 'utf8')
*/
static byteLength(string: string, encoding?: string): number;
/**
* Returns a buffer which is the result of concatenating all the buffers in the list together.
*
* If the list has no items, or if the totalLength is 0, then it returns a zero-length buffer.
* If the list has exactly one item, then the first item of the list is returned.
* If the list has more than one item, then a new Buffer is created.
*
* @param list An array of Buffer objects to concatenate
* @param totalLength Total length of the buffers when concatenated.
* If totalLength is not provided, it is read from the buffers in the list. However, this adds an additional loop to the function, so it is faster to provide the length explicitly.
*/
static concat(list: Uint8Array[], totalLength?: number): Buffer;
/**
* The same as buf1.compare(buf2).
*/
static compare(buf1: Uint8Array, buf2: Uint8Array): number;
/**
* Allocates a new buffer of {size} octets.
*
* @param size count of octets to allocate.
* @param fill if specified, buffer will be initialized by calling buf.fill(fill).
* If parameter is omitted, buffer will be filled with zeros.
* @param encoding encoding used for call to buf.fill while initializing
*/
static alloc(size: number, fill?: string | Buffer | number, encoding?: string): Buffer;
/**
* Allocates a new buffer of {size} octets, leaving memory not initialized, so the contents
* of the newly created Buffer are unknown and may contain sensitive data.
*
* @param size count of octets to allocate
*/
static allocUnsafe(size: number): Buffer;
/**
* Allocates a new non-pooled buffer of {size} octets, leaving memory not initialized, so the contents
* of the newly created Buffer are unknown and may contain sensitive data.
*
* @param size count of octets to allocate
*/
static allocUnsafeSlow(size: number): Buffer;
}

File diff suppressed because it is too large.


@@ -0,0 +1,93 @@
{
"name": "buffer",
"description": "Node.js Buffer API, for the browser",
"version": "6.0.3",
"author": {
"name": "Feross Aboukhadijeh",
"email": "feross@feross.org",
"url": "https://feross.org"
},
"bugs": {
"url": "https://github.com/feross/buffer/issues"
},
"contributors": [
"Romain Beauxis <toots@rastageeks.org>",
"James Halliday <mail@substack.net>"
],
"dependencies": {
"base64-js": "^1.3.1",
"ieee754": "^1.2.1"
},
"devDependencies": {
"airtap": "^3.0.0",
"benchmark": "^2.1.4",
"browserify": "^17.0.0",
"concat-stream": "^2.0.0",
"hyperquest": "^2.1.3",
"is-buffer": "^2.0.5",
"is-nan": "^1.3.0",
"split": "^1.0.1",
"standard": "*",
"tape": "^5.0.1",
"through2": "^4.0.2",
"uglify-js": "^3.11.5"
},
"homepage": "https://github.com/feross/buffer",
"jspm": {
"map": {
"./index.js": {
"node": "@node/buffer"
}
}
},
"keywords": [
"arraybuffer",
"browser",
"browserify",
"buffer",
"compatible",
"dataview",
"uint8array"
],
"license": "MIT",
"main": "index.js",
"types": "index.d.ts",
"repository": {
"type": "git",
"url": "git://github.com/feross/buffer.git"
},
"scripts": {
"perf": "browserify --debug perf/bracket-notation.js > perf/bundle.js && open perf/index.html",
"perf-node": "node perf/bracket-notation.js && node perf/concat.js && node perf/copy-big.js && node perf/copy.js && node perf/new-big.js && node perf/new.js && node perf/readDoubleBE.js && node perf/readFloatBE.js && node perf/readUInt32LE.js && node perf/slice.js && node perf/writeFloatBE.js",
"size": "browserify -r ./ | uglifyjs -c -m | gzip | wc -c",
"test": "standard && node ./bin/test.js",
"test-browser-old": "airtap -- test/*.js",
"test-browser-old-local": "airtap --local -- test/*.js",
"test-browser-new": "airtap -- test/*.js test/node/*.js",
"test-browser-new-local": "airtap --local -- test/*.js test/node/*.js",
"test-node": "tape test/*.js test/node/*.js",
"update-authors": "./bin/update-authors.sh"
},
"standard": {
"ignore": [
"test/node/**/*.js",
"test/common.js",
"test/_polyfill.js",
"perf/**/*.js"
]
},
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
]
}

node_modules/@solana/buffer-layout/package.json generated vendored Normal file

@@ -0,0 +1,55 @@
{
"name": "@solana/buffer-layout",
"version": "4.0.1",
"description": "Translation between JavaScript values and Buffers",
"keywords": [
"Buffer",
"struct",
"endian",
"pack data"
],
"homepage": "https://github.com/solana-labs/buffer-layout",
"bugs": "https://github.com/solana-labs/buffer-layout/issues",
"repository": {
"type": "git",
"url": "https://github.com/solana-labs/buffer-layout.git"
},
"license": "MIT",
"author": "Peter A. Bigot <pab@pabigot.com>",
"main": "./lib/Layout.js",
"types": "./lib/Layout.d.ts",
"files": [
"/lib"
],
"dependencies": {
"buffer": "~6.0.3"
},
"devDependencies": {
"@typescript-eslint/eslint-plugin": "^4.28.2",
"@typescript-eslint/parser": "^4.28.2",
"coveralls": "^3.0.0",
"eslint": "~7.30.0",
"gh-pages": "^3.2.3",
"istanbul": "~0.4.5",
"jsdoc": "~3.5.5",
"lodash": "~4.17.5",
"mocha": "~5.0.4",
"shx": "^0.3.3",
"typedoc": "^0.22.10",
"typescript": "^4.4.4"
},
"engines": {
"node": ">=5.10"
},
"scripts": {
"build": "tsc",
"coverage": "npm run build && istanbul cover _mocha -- -u tdd",
"coveralls": "npm run build && istanbul cover _mocha --report lcovonly -- -u tdd && cat ./coverage/lcov.info | coveralls",
"docs": "shx rm -rf docs && typedoc && shx cp .nojekyll docs/",
"eslint": "eslint src/ --ext .ts",
"jsdoc": "jsdoc -c jsdoc/conf.json",
"pages": "gh-pages --dist docs --dotfiles",
"prepare": "npm run build",
"test": "npm run build && mocha -u tdd"
}
}

node_modules/@solana/codecs-core/LICENSE generated vendored Normal file

@@ -0,0 +1,20 @@
Copyright (c) 2023 Solana Labs, Inc
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

node_modules/@solana/codecs-core/README.md generated vendored Normal file

@@ -0,0 +1,662 @@
[![npm][npm-image]][npm-url]
[![npm-downloads][npm-downloads-image]][npm-url]
<br />
[![code-style-prettier][code-style-prettier-image]][code-style-prettier-url]
[code-style-prettier-image]: https://img.shields.io/badge/code_style-prettier-ff69b4.svg?style=flat-square
[code-style-prettier-url]: https://github.com/prettier/prettier
[npm-downloads-image]: https://img.shields.io/npm/dm/@solana/codecs-core?style=flat
[npm-image]: https://img.shields.io/npm/v/@solana/codecs-core?style=flat
[npm-url]: https://www.npmjs.com/package/@solana/codecs-core
# @solana/codecs-core
This package contains the core types and functions for encoding and decoding data structures on Solana. It can be used standalone, but it is also exported as part of Kit [`@solana/kit`](https://github.com/anza-xyz/kit/tree/main/packages/kit).
This package is also part of the [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs) which acts as an entry point for all codec packages as well as for their documentation.
## Composing codecs
The easiest way to create your own codecs is to compose the [various codecs](https://github.com/anza-xyz/kit/tree/main/packages/codecs) offered by this library. For instance, heres how you would define a codec for a `Person` object that contains a `name` string attribute and an `age` number stored in 4 bytes.
```ts
type Person = { name: string; age: number };
const getPersonCodec = (): Codec<Person> =>
getStructCodec([
['name', addCodecSizePrefix(getUtf8Codec(), getU32Codec())],
['age', getU32Codec()],
]);
```
This function returns a `Codec` object which contains both an `encode` and `decode` function that can be used to convert a `Person` type to and from a `Uint8Array`.
```ts
const personCodec = getPersonCodec();
const bytes = personCodec.encode({ name: 'John', age: 42 });
const person = personCodec.decode(bytes);
```
There is a significant library of composable codecs at your disposal, enabling you to compose complex types. You may be interested in the documentation of these other packages to learn more about them:
- [`@solana/codecs-numbers`](https://github.com/anza-xyz/kit/tree/main/packages/codecs-numbers) for number codecs.
- [`@solana/codecs-strings`](https://github.com/anza-xyz/kit/tree/main/packages/codecs-strings) for string codecs.
- [`@solana/codecs-data-structures`](https://github.com/anza-xyz/kit/tree/main/packages/codecs-data-structures) for many data structure codecs such as objects, arrays, tuples, sets, maps, enums, discriminated unions, booleans, etc.
- [`@solana/options`](https://github.com/anza-xyz/kit/tree/main/packages/options) for a Rust-like `Option` type and associated codec.
You may also be interested in some of the helpers of this `@solana/codecs-core` library such as `transformCodec`, `fixCodecSize` or `reverseCodec` that create new codecs from existing ones.
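Of these helpers, `reverseCodec` is the only one not demonstrated later in this README. Conceptually, it flips the byte order of a fixed-size codec — for instance, turning a little-endian number codec into a big-endian one. Here is a rough, self-contained sketch of that idea (an illustration only, not the library's implementation):

```typescript
// Illustrative sketch only -- not the @solana/codecs-core implementation.
// Reversing the bytes of a fixed-size little-endian u32 writer yields
// the big-endian encoding of the same value.
function writeU32LE(value: number): Uint8Array {
    const bytes = new Uint8Array(4);
    new DataView(bytes.buffer).setUint32(0, value, true); // true = little-endian
    return bytes;
}

function reverseBytes(bytes: Uint8Array): Uint8Array {
    // Copy first so the source array is left untouched.
    return Uint8Array.from(bytes).reverse();
}

const littleEndian = writeU32LE(1); // [1, 0, 0, 0]
const bigEndian = reverseBytes(littleEndian); // [0, 0, 0, 1]
```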
Note that all of these libraries are included in the [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs) as well as the main `@solana/kit` package for your convenience.
## Composing encoders and decoders
Whilst Codecs can both encode and decode, it is possible to only focus on encoding or decoding data, enabling the unused logic to be tree-shaken. For instance, heres our previous example using Encoders only to encode a `Person` type.
```ts
const getPersonEncoder = (): Encoder<Person> =>
getStructEncoder([
['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
['age', getU32Encoder()],
]);
const bytes = getPersonEncoder().encode({ name: 'John', age: 42 });
```
The same can be done for decoding the `Person` type by using Decoders like so.
```ts
const getPersonDecoder = (): Decoder<Person> =>
getStructDecoder([
['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
['age', getU32Decoder()],
]);
const person = getPersonDecoder().decode(bytes);
```
## Combining encoders and decoders
Separating Codecs into Encoders and Decoders is particularly good practice for library maintainers as it allows their users to tree-shake any of the encoders and/or decoders they dont need. However, we may still want to offer a codec helper for users who need both for convenience.
Thats why this library offers a `combineCodec` helper that creates a `Codec` instance from a matching `Encoder` and `Decoder`.
```ts
const getPersonCodec = (): Codec<Person> => combineCodec(getPersonEncoder(), getPersonDecoder());
```
This means library maintainers can offer Encoders, Decoders and Codecs for all their types whilst staying efficient and tree-shakeable. In summary, we recommend the following pattern when creating codecs for library types.
```ts
type MyType = /* ... */;
const getMyTypeEncoder = (): Encoder<MyType> => { /* ... */ };
const getMyTypeDecoder = (): Decoder<MyType> => { /* ... */ };
const getMyTypeCodec = (): Codec<MyType> =>
combineCodec(getMyTypeEncoder(), getMyTypeDecoder());
```
## Different From and To types
When creating codecs, the encoded type is allowed to be looser than the decoded type. A good example of that is the u64 number codec:
```ts
const u64Codec: Codec<number | bigint, bigint> = getU64Codec();
```
As you can see, the first type parameter is looser since it accepts numbers or big integers, whereas the second type parameter only accepts big integers. Thats because when _encoding_ a u64 number, you may provide either a `bigint` or a `number` for convenience. However, when you decode a u64 number, you will always get a `bigint` because not all u64 values can fit in a JavaScript `number` type.
```ts
const bytes = u64Codec.encode(42);
const value = u64Codec.decode(bytes); // BigInt(42)
```
This relationship between the type we encode “From” and decode “To” can be generalized in TypeScript as `To extends From`.
Heres another example using an object with default values. You can read more about the `transformEncoder` helper below.
```ts
type Person = { name: string, age: number };
type PersonInput = { name: string, age?: number };
const getPersonEncoder = (): Encoder<PersonInput> =>
transformEncoder(
getStructEncoder([
['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
['age', getU32Encoder()],
]),
        input => ({ ...input, age: input.age ?? 42 })
);
const getPersonDecoder = (): Decoder<Person> =>
getStructDecoder([
['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
['age', getU32Decoder()],
]);
const getPersonCodec = (): Codec<PersonInput, Person> =>
combineCodec(getPersonEncoder(), getPersonDecoder())
```
## Fixed-size and variable-size codecs
It is also worth noting that Codecs can either be of fixed size or variable size.
`FixedSizeCodecs` have a `fixedSize` number attribute that tells us exactly how big their encoded data is in bytes.
```ts
const myCodec: FixedSizeCodec<number> = getU32Codec();
myCodec.fixedSize; // 4 bytes.
```
On the other hand, `VariableSizeCodecs` do not know the size of their encoded data in advance. Instead, they will grab that information either from the provided encoded data or from the value to encode. For the former, we can simply access the length of the `Uint8Array`. For the latter, it provides a `getSizeFromValue` that tells us the encoded byte size of the provided value.
```ts
const myCodec: VariableSizeCodec<string> = addCodecSizePrefix(getUtf8Codec(), getU32Codec());
myCodec.getSizeFromValue('hello world'); // 4 + 11 bytes.
```
Also note that, if the `VariableSizeCodec` is bounded by a maximum size, it can be provided as a `maxSize` number attribute.
The following type guards are available to identify and/or assert the size of codecs: `isFixedSize`, `isVariableSize`, `assertIsFixedSize` and `assertIsVariableSize`.
Finally, note that the same is true for `Encoders` and `Decoders`.
- A `FixedSizeEncoder` has a `fixedSize` number attribute.
- A `VariableSizeEncoder` has a `getSizeFromValue` function and an optional `maxSize` number attribute.
- A `FixedSizeDecoder` has a `fixedSize` number attribute.
- A `VariableSizeDecoder` has an optional `maxSize` number attribute.
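As an illustration, the fixed-size check can be sketched as a plain structural type guard. The `SizeInfo` type below is a simplified model for this example, not the library's actual type definitions:

```typescript
// Simplified model of a codec's size information -- for illustration only.
type SizeInfo =
    | { fixedSize: number }
    | { getSizeFromValue: (value: unknown) => number; maxSize?: number };

// A codec-like object is fixed-size when it carries a numeric `fixedSize`.
function isFixedSize(codec: SizeInfo): codec is { fixedSize: number } {
    return 'fixedSize' in codec;
}

const u32Like: SizeInfo = { fixedSize: 4 };
const utf8Like: SizeInfo = { getSizeFromValue: value => String(value).length };

isFixedSize(u32Like); // true
isFixedSize(utf8Like); // false
```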
## Creating custom codecs
If composing codecs isnt enough for you, you may implement your own codec logic by using the `createCodec` function. This function requires an object with a `read` and a `write` function telling us how to read from and write to an existing byte array.
The `read` function accepts the `bytes` to decode from and the `offset` at each we should start reading. It returns an array with two items:
- The first item should be the decoded value.
- The second item should be the next offset to read from.
```ts
createCodec({
read(bytes, offset) {
const value = bytes[offset];
return [value, offset + 1];
},
// ...
});
```
Reciprocally, the `write` function accepts the `value` to encode, the array of `bytes` to write the encoded value to and the `offset` at which it should be written. It should encode the given value, insert it in the byte array, and provide the next offset to write to as the return value.
```ts
createCodec({
write(value, bytes, offset) {
        bytes.set([value], offset);
return offset + 1;
},
// ...
});
```
Additionally, we must specify the size of the codec. If we are defining a `FixedSizeCodec`, we must simply provide the `fixedSize` number attribute. For `VariableSizeCodecs`, we must provide the `getSizeFromValue` function as described in the previous section.
```ts
// FixedSizeCodec.
createCodec({
fixedSize: 1,
// ...
});
// VariableSizeCodec.
createCodec({
getSizeFromValue: (value: string) => value.length,
// ...
});
```
Heres a concrete example of a custom codec that encodes any unsigned integer in a single byte. Since a single byte can only store integers from 0 to 255, if any other integer is provided it will take its modulo 256 to ensure it fits in a single byte. Because it always requires a single byte, that codec is a `FixedSizeCodec` of size `1`.
```ts
const getModuloU8Codec = () =>
createCodec<number>({
fixedSize: 1,
read(bytes, offset) {
const value = bytes[offset];
return [value, offset + 1];
},
write(value, bytes, offset) {
            bytes.set([value % 256], offset);
return offset + 1;
},
});
```
Note that, it is also possible to create custom encoders and decoders separately by using the `createEncoder` and `createDecoder` functions respectively and then use the `combineCodec` function on them just like we were doing with composed codecs.
This approach is recommended to library maintainers as it allows their users to tree-shake any of the encoders and/or decoders they dont need.
Heres our previous modulo u8 example but split into separate `Encoder`, `Decoder` and `Codec` instances.
```ts
const getModuloU8Encoder = () =>
createEncoder<number>({
fixedSize: 1,
write(value, bytes, offset) {
            bytes.set([value % 256], offset);
return offset + 1;
},
});
const getModuloU8Decoder = () =>
createDecoder<number>({
fixedSize: 1,
read(bytes, offset) {
const value = bytes[offset];
return [value, offset + 1];
},
});
const getModuloU8Codec = () => combineCodec(getModuloU8Encoder(), getModuloU8Decoder());
```
Heres another example returning a `VariableSizeCodec`. This one transforms a simple string composed of characters from `a` to `z` to a buffer of numbers from `1` to `26` where `0` bytes are spaces.
```ts
const alphabet = ' abcdefghijklmnopqrstuvwxyz';
const getCipherEncoder = () =>
createEncoder<string>({
getSizeFromValue: value => value.length,
write(value, bytes, offset) {
const bytesToAdd = [...value].map(char => alphabet.indexOf(char));
bytes.set(bytesToAdd, offset);
return offset + bytesToAdd.length;
},
});
const getCipherDecoder = () =>
createDecoder<string>({
read(bytes, offset) {
const value = [...bytes.slice(offset)].map(byte => alphabet.charAt(byte)).join('');
return [value, bytes.length];
},
});
const getCipherCodec = () => combineCodec(getCipherEncoder(), getCipherDecoder());
```
## Transforming codecs
It is possible to transform a `Codec<T>` to a `Codec<U>` by providing two mapping functions: one that goes from `T` to `U` and one that does the opposite.
For instance, heres how you would map a `u32` integer into a `string` representation of that number.
```ts
const getStringU32Codec = () =>
transformCodec(
getU32Codec(),
(integerAsString: string): number => parseInt(integerAsString),
(integer: number): string => integer.toString(),
);
getStringU32Codec().encode('42'); // new Uint8Array([42, 0, 0, 0])
getStringU32Codec().decode(new Uint8Array([42, 0, 0, 0])); // "42"
```
If a `Codec` has [different From and To types](#different-from-and-to-types), say `Codec<OldFrom, OldTo>`, and we want to map it to `Codec<NewFrom, NewTo>`, we must provide functions that map from `NewFrom` to `OldFrom` and from `OldTo` to `NewTo`.
To illustrate that, lets take our previous `getStringU32Codec` example but make it use a `getU64Codec` codec instead as it returns a `Codec<number | bigint, bigint>`. Additionally, lets make it so our `getStringU64Codec` function returns a `Codec<number | string, string>` so that it also accepts numbers when encoding values. Heres what our mapping functions look like:
```ts
const getStringU64Codec = () =>
transformCodec(
getU64Codec(),
(integerInput: number | string): number | bigint =>
            typeof integerInput === 'string' ? BigInt(integerInput) : integerInput,
(integer: bigint): string => integer.toString(),
);
```
Note that the second function that maps the decoded type is optional. That means, you can omit it to simply update or loosen the type to encode whilst keeping the decoded type the same.
This is particularly useful to provide default values to object structures. For instance, heres how we can map our `Person` codec to give a default value to its `age` attribute.
```ts
type Person = { name: string; age: number; }
const getPersonCodec = (): Codec<Person> => { /*...*/ }
type PersonInput = { name: string; age?: number; }
const getPersonWithDefaultValueCodec = (): Codec<PersonInput, Person> =>
transformCodec(
getPersonCodec(),
        (person: PersonInput): Person => ({ ...person, age: person.age ?? 42 })
)
```
Similar helpers exist to map `Encoder` and `Decoder` instances allowing you to separate your codec logic into tree-shakeable functions. Heres our `getStringU32Codec` written that way.
```ts
const getStringU32Encoder = () =>
transformEncoder(getU32Encoder(), (integerAsString: string): number => parseInt(integerAsString));
const getStringU32Decoder = () => transformDecoder(getU32Decoder(), (integer: number): string => integer.toString());
const getStringU32Codec = () => combineCodec(getStringU32Encoder(), getStringU32Decoder());
```
## Fixing the size of codecs
The `fixCodecSize` function allows you to bind the size of a given codec to the given fixed size.
For instance, say you want to represent a base-58 string that uses exactly 32 bytes when decoded. Here's how you can use the `fixCodecSize` helper to achieve that.
```ts
const get32BytesBase58Codec = () => fixCodecSize(getBase58Codec(), 32);
```
You may also use the `fixEncoderSize` and `fixDecoderSize` functions to separate your codec logic like so:
```ts
const get32BytesBase58Encoder = () => fixEncoderSize(getBase58Encoder(), 32);
const get32BytesBase58Decoder = () => fixDecoderSize(getBase58Decoder(), 32);
const get32BytesBase58Codec = () => combineCodec(get32BytesBase58Encoder(), get32BytesBase58Decoder());
```
## Prefixing codecs with their size
The `addCodecSizePrefix` function allows you to store the byte size of any codec as a number prefix. This allows you to constrain variable-size codecs to their actual size.
When encoding, the size of the encoded data is stored before the encoded data itself. When decoding, the size is read first to know how many bytes to read next.
For example, say we want to represent a variable-size base-58 string using a `u32` size prefix. Here's how you can use the `addCodecSizePrefix` function to achieve that.
```ts
const getU32Base58Codec = () => addCodecSizePrefix(getBase58Codec(), getU32Codec());
getU32Base58Codec().encode('hello world');
// 0x0b00000068656c6c6f20776f726c64
// | └-- Our encoded base-58 string.
// └-- Our encoded u32 size prefix.
```
You may also use the `addEncoderSizePrefix` and `addDecoderSizePrefix` functions to separate your codec logic like so:
```ts
const getU32Base58Encoder = () => addEncoderSizePrefix(getBase58Encoder(), getU32Encoder());
const getU32Base58Decoder = () => addDecoderSizePrefix(getBase58Decoder(), getU32Decoder());
const getU32Base58Codec = () => combineCodec(getU32Base58Encoder(), getU32Base58Decoder());
```
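Conceptually, a size-prefixed codec just writes the payload's byte length first, then reads it back to know how many bytes to slice. Here is a minimal standalone sketch of that round trip using a little-endian `u32` prefix; the helper names are illustrative, not the library's API.

```typescript
// Encode: write a little-endian u32 length prefix, then the payload.
function encodeWithU32Prefix(payload: Uint8Array): Uint8Array {
    const out = new Uint8Array(4 + payload.length);
    new DataView(out.buffer).setUint32(0, payload.length, true); // size prefix
    out.set(payload, 4);
    return out;
}

// Decode: read the u32 length prefix, then slice exactly that many bytes.
function decodeWithU32Prefix(bytes: Uint8Array): Uint8Array {
    const size = new DataView(bytes.buffer, bytes.byteOffset).getUint32(0, true);
    return bytes.slice(4, 4 + size);
}

const encoded = encodeWithU32Prefix(new Uint8Array([104, 105])); // "hi" in UTF-8
// encoded is 0x020000006869
const decoded = decodeWithU32Prefix(encoded);
// decoded is Uint8Array([104, 105])
```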
## Adding sentinels to codecs
Another way of delimiting the size of a codec is to use sentinels. The `addCodecSentinel` function allows us to add a sentinel to the end of the encoded data and to read until that sentinel is found when decoding. It accepts any codec and a `Uint8Array` sentinel responsible for delimiting the encoded data.
```ts
const codec = addCodecSentinel(getUtf8Codec(), new Uint8Array([255, 255]));
codec.encode('hello');
// 0x68656c6c6fffff
// | └-- Our sentinel.
// └-- Our encoded string.
```
Note that the sentinel _must not_ be present in the encoded data and _must_ be present in the decoded data for this to work. If this is not the case, dedicated errors will be thrown.
```ts
const sentinel = new Uint8Array([108, 108]); // 'll'
const codec = addCodecSentinel(getUtf8Codec(), sentinel);
codec.encode('hello'); // Throws: sentinel is in encoded data.
codec.decode(new Uint8Array([1, 2, 3])); // Throws: sentinel missing in decoded data.
```
Separate `addEncoderSentinel` and `addDecoderSentinel` functions are also available.
```ts
const bytes = addEncoderSentinel(getUtf8Encoder(), sentinel).encode('hello');
const value = addDecoderSentinel(getUtf8Decoder(), sentinel).decode(bytes);
```
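Conceptually, decoding with a sentinel is a search for the delimiter followed by a slice. Here is a minimal standalone sketch of that search step; the function name mirrors the concept but is illustrative, not the library's public API.

```typescript
// Return the index of the first occurrence of `sentinel` in `bytes`, or -1.
function findSentinelIndex(bytes: Uint8Array, sentinel: Uint8Array): number {
    return bytes.findIndex((_byte, index) =>
        sentinel.every((expected, i) => bytes[index + i] === expected),
    );
}

// "hello" followed by the 0xffff sentinel.
const data = new Uint8Array([0x68, 0x65, 0x6c, 0x6c, 0x6f, 0xff, 0xff]);
const sentinelIndex = findSentinelIndex(data, new Uint8Array([0xff, 0xff]));
// sentinelIndex is 5; the payload to decode is data.slice(0, sentinelIndex).
```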
## Adjusting the size of codecs
The `resizeCodec` helper re-defines the size of a given codec by accepting a function that takes the current size of the codec and returns a new size. This works for both fixed-size and variable-size codecs.
```ts
// Fixed-size codec.
const getBiggerU32Codec = () => resizeCodec(getU32Codec(), size => size + 4);
getBiggerU32Codec().encode(42);
// 0x2a00000000000000
// | └-- Empty buffer space caused by the resizeCodec function.
// └-- Our encoded u32 number.
// Variable-size codec.
const getBiggerUtf8Codec = () => resizeCodec(getUtf8Codec(), size => size + 4);
getBiggerUtf8Codec().encode('ABC');
// 0x41424300000000
// | └-- Empty buffer space caused by the resizeCodec function.
// └-- Our encoded string.
```
Note that the `resizeCodec` function doesn't change any encoded or decoded bytes, it merely tells the `encode` and `decode` functions how big the `Uint8Array` should be before delegating to their respective `write` and `read` functions. In fact, this is completely bypassed when using the `write` and `read` functions directly. For instance:
```ts
const getBiggerU32Codec = () => resizeCodec(getU32Codec(), size => size + 4);
// Using the encode function.
getBiggerU32Codec().encode(42);
// 0x2a00000000000000
// Using the lower-level write function.
const myCustomBytes = new Uint8Array(4);
getBiggerU32Codec().write(42, myCustomBytes, 0);
// 0x2a000000
```
So when would it make sense to use the `resizeCodec` function? This function is particularly useful when combined with the `offsetCodec` function described below. Whilst the `offsetCodec` may help us push the offset forward — e.g. to skip some padding — it won't change the size of the encoded data which means the last bytes will be truncated by how much we pushed the offset forward. The `resizeCodec` function can be used to fix that. For instance, here's how we can use the `resizeCodec` and the `offsetCodec` functions together to create a struct codec that includes some padding.
```ts
const personCodec = getStructCodec([
['name', fixCodecSize(getUtf8Codec(), 8)],
// There is a 4-byte padding between name and age.
[
'age',
offsetCodec(
resizeCodec(getU32Codec(), size => size + 4),
{ preOffset: ({ preOffset }) => preOffset + 4 },
),
],
]);
personCodec.encode({ name: 'Alice', age: 42 });
// 0x416c696365000000000000002a000000
// | | └-- Our encoded u32 (42).
// | └-- The 4-bytes of padding we are skipping.
// └-- Our 8-byte encoded string ("Alice").
```
As usual, the `resizeEncoder` and `resizeDecoder` functions can also be used to achieve that.
```ts
const getBiggerU32Encoder = () => resizeEncoder(getU32Codec(), size => size + 4);
const getBiggerU32Decoder = () => resizeDecoder(getU32Codec(), size => size + 4);
const getBiggerU32Codec = () => combineCodec(getBiggerU32Encoder(), getBiggerU32Decoder());
```
## Offsetting codecs
The `offsetCodec` function is a powerful codec primitive that allows you to move the offset of a given codec forwards or backwards. It accepts one or two functions that take the current offset and return a new offset.
To understand how this works, let's take our previous `biggerU32Codec` example, which encodes a `u32` number inside an 8-byte buffer.
```ts
const biggerU32Codec = resizeCodec(getU32Codec(), size => size + 4);
biggerU32Codec.encode(0xffffffff);
// 0xffffffff00000000
// | └-- Empty buffer space caused by the resizeCodec function.
// └-- Our encoded u32 number.
```
Now, let's say we want to move the offset of that codec 2 bytes forward so that the encoded number sits in the middle of the buffer. To achieve this, we can use the `offsetCodec` helper and provide a `preOffset` function that moves the "pre-offset" of the codec 2 bytes forward.
```ts
const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
preOffset: ({ preOffset }) => preOffset + 2,
});
u32InTheMiddleCodec.encode(0xffffffff);
// 0x0000ffffffff0000
// └-- Our encoded u32 number is now in the middle of the buffer.
```
We refer to this offset as the "pre-offset" because, once the inner codec is encoded or decoded, an additional offset will be returned which we refer to as the "post-offset". That "post-offset" is important as, unless we are reaching the end of our codec, it will be used by any further codecs to continue encoding or decoding data.
By default, that "post-offset" is simply the addition of the "pre-offset" and the size of the encoded or decoded inner data.
```ts
const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
preOffset: ({ preOffset }) => preOffset + 2,
});
u32InTheMiddleCodec.encode(0xffffffff);
// 0x0000ffffffff0000
// | | └-- Post-offset.
// | └-- New pre-offset: The original pre-offset + 2.
// └-- Pre-offset: The original pre-offset before we adjusted it.
```
However, you may also provide a `postOffset` function to adjust the "post-offset". For instance, let's push the "post-offset" 2 bytes forward as well such that any further codecs will start doing their job at the end of our 8-byte `u32` number.
```ts
const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
preOffset: ({ preOffset }) => preOffset + 2,
postOffset: ({ postOffset }) => postOffset + 2,
});
u32InTheMiddleCodec.encode(0xffffffff);
// 0x0000ffffffff0000
// | | | └-- New post-offset: The original post-offset + 2.
// | | └-- Post-offset: The original post-offset before we adjusted it.
// | └-- New pre-offset: The original pre-offset + 2.
// └-- Pre-offset: The original pre-offset before we adjusted it.
```
Both the `preOffset` and `postOffset` functions offer the following attributes:
- `bytes`: The entire byte array being encoded or decoded.
- `preOffset`: The original and unaltered pre-offset.
- `wrapBytes`: A helper function that wraps the given offset around the byte array length. E.g. `wrapBytes(-1)` will refer to the last byte of the byte array.
Additionally, the post-offset function also provides the following attributes:
- `newPreOffset`: The new pre-offset after the pre-offset function has been applied.
- `postOffset`: The original and unaltered post-offset.
Note that you may also decide to ignore these attributes to achieve absolute offsets. However, relative offsets are usually recommended as they won't break your codecs when composed with other codecs.
```ts
const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
preOffset: () => 2,
postOffset: () => 8,
});
u32InTheMiddleCodec.encode(0xffffffff);
// 0x0000ffffffff0000
```
Also note that any negative offset or offset that exceeds the size of the byte array will throw a `SolanaError` of code `SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE`.
```ts
const u32InTheEndCodec = offsetCodec(biggerU32Codec, { preOffset: () => -4 });
u32InTheEndCodec.encode(0xffffffff);
// throws new SolanaError(SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE)
```
To avoid this, you may use the `wrapBytes` function to wrap the offset around the byte array length. For instance, here's how we can use the `wrapBytes` function to set the pre-offset to 4 bytes before the end of the byte array.
```ts
const u32InTheEndCodec = offsetCodec(biggerU32Codec, {
preOffset: ({ wrapBytes }) => wrapBytes(-4),
});
u32InTheEndCodec.encode(0xffffffff);
// 0x00000000ffffffff
```
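The wrapping behaviour is essentially a Euclidean modulo over the byte-array length, so negative offsets count back from the end. Here is a minimal standalone sketch of that arithmetic (an assumption about how `wrapBytes` behaves, shown for illustration):

```typescript
// Euclidean modulo: the result is always in the range [0, bytesLength).
function wrapOffset(offset: number, bytesLength: number): number {
    if (bytesLength === 0) return 0;
    return ((offset % bytesLength) + bytesLength) % bytesLength;
}

const fromEnd = wrapOffset(-4, 8); // 4, i.e. four bytes before the end of an 8-byte array.
const wrapped = wrapOffset(10, 8); // 2, offsets past the end wrap around as well.
```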
As you can see, the `offsetCodec` helper allows you to jump all over the place with your codecs. This non-linear approach to encoding and decoding data allows you to achieve complex serialization strategies that would otherwise be impossible.
As usual, the `offsetEncoder` and `offsetDecoder` functions can also be used to split your codec logic into tree-shakeable functions.
```ts
const getU32InTheMiddleEncoder = () => offsetEncoder(biggerU32Encoder, { preOffset: ({ preOffset }) => preOffset + 2 });
const getU32InTheMiddleDecoder = () => offsetDecoder(biggerU32Decoder, { preOffset: ({ preOffset }) => preOffset + 2 });
const getU32InTheMiddleCodec = () => combineCodec(getU32InTheMiddleEncoder(), getU32InTheMiddleDecoder());
```
## Padding codecs
The `padLeftCodec` and `padRightCodec` helpers can be used to add padding to the left or right of a given codec. They accept an `offset` number that tells us how big the padding should be.
```ts
const getLeftPaddedCodec = () => padLeftCodec(getU16Codec(), 4);
getLeftPaddedCodec().encode(0xffff);
// 0x00000000ffff
// | └-- Our encoded u16 number.
// └-- Our 4-byte padding.
const getRightPaddedCodec = () => padRightCodec(getU16Codec(), 4);
getRightPaddedCodec().encode(0xffff);
// 0xffff00000000
// | └-- Our 4-byte padding.
// └-- Our encoded u16 number.
```
Note that both the `padLeftCodec` and `padRightCodec` functions are simple wrappers around the `offsetCodec` and `resizeCodec` functions. For more complex padding strategies, you may want to use the `offsetCodec` and `resizeCodec` functions directly instead.
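To see how these two primitives combine, here is a minimal standalone sketch of the pad-left idea in plain TypeScript: grow the byte array by the padding size (the resize step) and shift the write position forward by the same amount (the pre-offset step). The `writeU16` helper is hypothetical and only stands in for a real fixed-size encoder.

```typescript
// A toy fixed-size writer shaped like the library's `write` functions:
// it writes `value` at `offset` and returns the new offset.
type Write = (value: number, bytes: Uint8Array, offset: number) => number;

// Pad-left as a combination of "resize" (a bigger fixedSize) and
// "pre-offset" (writing further into the buffer).
function padLeftWrite(write: Write, fixedSize: number, padding: number) {
    return {
        fixedSize: fixedSize + padding, // resize step
        write: (value: number, bytes: Uint8Array, offset: number) =>
            write(value, bytes, offset + padding), // pre-offset step
    };
}

// Hypothetical little-endian u16 writer, for illustration only.
const writeU16: Write = (value, bytes, offset) => {
    bytes[offset] = value & 0xff;
    bytes[offset + 1] = (value >> 8) & 0xff;
    return offset + 2;
};

const padded = padLeftWrite(writeU16, 2, 4);
const paddedBytes = new Uint8Array(padded.fixedSize);
padded.write(0xffff, paddedBytes, 0);
// paddedBytes is 0x00000000ffff, matching the left-padded output above.
```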
As usual, encoder-only and decoder-only helpers are available for these padding functions. Namely, `padLeftEncoder`, `padRightEncoder`, `padLeftDecoder` and `padRightDecoder`.
```ts
const getMyPaddedEncoder = () => padLeftEncoder(getU16Encoder(), 4);
const getMyPaddedDecoder = () => padLeftDecoder(getU16Decoder(), 4);
const getMyPaddedCodec = () => combineCodec(getMyPaddedEncoder(), getMyPaddedDecoder());
```
## Reversing codecs
The `reverseCodec` helper reverses the bytes of the provided `FixedSizeCodec`.
```ts
const getBigEndianU64Codec = () => reverseCodec(getU64Codec());
```
Note that number codecs can already do that for you via their `endian` option.
```ts
const getBigEndianU64Codec = () => getU64Codec({ endian: Endian.Big });
```
As usual, the `reverseEncoder` and `reverseDecoder` functions can also be used to achieve that.
```ts
const getBigEndianU64Encoder = () => reverseEncoder(getU64Encoder());
const getBigEndianU64Decoder = () => reverseDecoder(getU64Decoder());
const getBigEndianU64Codec = () => combineCodec(getBigEndianU64Encoder(), getBigEndianU64Decoder());
```
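To illustrate why reversing bytes flips endianness, here is a minimal standalone sketch: the little-endian encoding of a number, once its bytes are reversed, reads as the big-endian encoding of the same number. The helper below is illustrative only, not the library's implementation.

```typescript
// Reverse a fixed-size slice of a byte array in place.
function reverseBytes(bytes: Uint8Array, offset: number, size: number): void {
    const reversed = bytes.slice(offset, offset + size).reverse();
    bytes.set(reversed, offset);
}

// The number 1 encoded as a little-endian u64.
const leBytes = new Uint8Array([1, 0, 0, 0, 0, 0, 0, 0]);
reverseBytes(leBytes, 0, 8);
// leBytes is now [0, 0, 0, 0, 0, 0, 0, 1]: the big-endian encoding of 1.
```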
## Byte helpers
This package also provides utility functions for managing bytes such as:
- `mergeBytes`: Concatenates an array of `Uint8Arrays` into a single `Uint8Array`.
- `padBytes`: Pads a `Uint8Array` with zeroes (to the right) to the specified length.
- `fixBytes`: Pads or truncates a `Uint8Array` so it has the specified length.
- `containsBytes`: Checks if a `Uint8Array` contains another `Uint8Array` at a given offset.
```ts
// Merge multiple Uint8Array buffers into one.
mergeBytes([new Uint8Array([1, 2]), new Uint8Array([3, 4])]); // Uint8Array([1, 2, 3, 4])
// Pad a Uint8Array buffer to the given size.
padBytes(new Uint8Array([1, 2]), 4); // Uint8Array([1, 2, 0, 0])
padBytes(new Uint8Array([1, 2, 3, 4]), 2); // Uint8Array([1, 2, 3, 4])
// Pad and truncate a Uint8Array buffer to the given size.
fixBytes(new Uint8Array([1, 2]), 4); // Uint8Array([1, 2, 0, 0])
fixBytes(new Uint8Array([1, 2, 3, 4]), 2); // Uint8Array([1, 2])
// Check if a Uint8Array contains another Uint8Array at a given offset.
containsBytes(new Uint8Array([1, 2, 3, 4]), new Uint8Array([2, 3]), 1); // true
containsBytes(new Uint8Array([1, 2, 3, 4]), new Uint8Array([2, 3]), 2); // false
```
---
To read more about the available codecs and how to use them, check out the documentation of the main [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs).

540
node_modules/@solana/codecs-core/dist/index.browser.cjs generated vendored Normal file
View File

@@ -0,0 +1,540 @@
'use strict';
var errors = require('@solana/errors');
// src/add-codec-sentinel.ts
// src/bytes.ts
var mergeBytes = (byteArrays) => {
const nonEmptyByteArrays = byteArrays.filter((arr) => arr.length);
if (nonEmptyByteArrays.length === 0) {
return byteArrays.length ? byteArrays[0] : new Uint8Array();
}
if (nonEmptyByteArrays.length === 1) {
return nonEmptyByteArrays[0];
}
const totalLength = nonEmptyByteArrays.reduce((total, arr) => total + arr.length, 0);
const result = new Uint8Array(totalLength);
let offset = 0;
nonEmptyByteArrays.forEach((arr) => {
result.set(arr, offset);
offset += arr.length;
});
return result;
};
function padBytes(bytes, length) {
if (bytes.length >= length) return bytes;
const paddedBytes = new Uint8Array(length).fill(0);
paddedBytes.set(bytes);
return paddedBytes;
}
var fixBytes = (bytes, length) => padBytes(bytes.length <= length ? bytes : bytes.slice(0, length), length);
function containsBytes(data, bytes, offset) {
const slice = (offset === 0 || offset <= -data.byteLength) && data.length === bytes.length ? data : data.slice(offset, offset + bytes.length);
return bytesEqual(slice, bytes);
}
function bytesEqual(bytes1, bytes2) {
return bytes1.length === bytes2.length && bytes1.every((value, index) => value === bytes2[index]);
}
function getEncodedSize(value, encoder) {
return "fixedSize" in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
}
function createEncoder(encoder) {
return Object.freeze({
...encoder,
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, encoder));
encoder.write(value, bytes, 0);
return bytes;
}
});
}
function createDecoder(decoder) {
return Object.freeze({
...decoder,
decode: (bytes, offset = 0) => decoder.read(bytes, offset)[0]
});
}
function createCodec(codec) {
return Object.freeze({
...codec,
decode: (bytes, offset = 0) => codec.read(bytes, offset)[0],
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, codec));
codec.write(value, bytes, 0);
return bytes;
}
});
}
function isFixedSize(codec) {
return "fixedSize" in codec && typeof codec.fixedSize === "number";
}
function assertIsFixedSize(codec) {
if (!isFixedSize(codec)) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH);
}
}
function isVariableSize(codec) {
return !isFixedSize(codec);
}
function assertIsVariableSize(codec) {
if (!isVariableSize(codec)) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH);
}
}
function combineCodec(encoder, decoder) {
if (isFixedSize(encoder) !== isFixedSize(decoder)) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH);
}
if (isFixedSize(encoder) && isFixedSize(decoder) && encoder.fixedSize !== decoder.fixedSize) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH, {
decoderFixedSize: decoder.fixedSize,
encoderFixedSize: encoder.fixedSize
});
}
if (!isFixedSize(encoder) && !isFixedSize(decoder) && encoder.maxSize !== decoder.maxSize) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH, {
decoderMaxSize: decoder.maxSize,
encoderMaxSize: encoder.maxSize
});
}
return {
...decoder,
...encoder,
decode: decoder.decode,
encode: encoder.encode,
read: decoder.read,
write: encoder.write
};
}
// src/add-codec-sentinel.ts
function addEncoderSentinel(encoder, sentinel) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
if (findSentinelIndex(encoderBytes, sentinel) >= 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL, {
encodedBytes: encoderBytes,
hexEncodedBytes: hexBytes(encoderBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
bytes.set(encoderBytes, offset);
offset += encoderBytes.length;
bytes.set(sentinel, offset);
offset += sentinel.length;
return offset;
});
if (isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: encoder.fixedSize + sentinel.length, write });
}
return createEncoder({
...encoder,
...encoder.maxSize != null ? { maxSize: encoder.maxSize + sentinel.length } : {},
getSizeFromValue: (value) => encoder.getSizeFromValue(value) + sentinel.length,
write
});
}
function addDecoderSentinel(decoder, sentinel) {
const read = ((bytes, offset) => {
const candidateBytes = offset === 0 || offset <= -bytes.byteLength ? bytes : bytes.slice(offset);
const sentinelIndex = findSentinelIndex(candidateBytes, sentinel);
if (sentinelIndex === -1) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES, {
decodedBytes: candidateBytes,
hexDecodedBytes: hexBytes(candidateBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
const preSentinelBytes = candidateBytes.slice(0, sentinelIndex);
return [decoder.decode(preSentinelBytes), offset + preSentinelBytes.length + sentinel.length];
});
if (isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: decoder.fixedSize + sentinel.length, read });
}
return createDecoder({
...decoder,
...decoder.maxSize != null ? { maxSize: decoder.maxSize + sentinel.length } : {},
read
});
}
function addCodecSentinel(codec, sentinel) {
return combineCodec(addEncoderSentinel(codec, sentinel), addDecoderSentinel(codec, sentinel));
}
function findSentinelIndex(bytes, sentinel) {
return bytes.findIndex((byte, index, arr) => {
if (sentinel.length === 1) return byte === sentinel[0];
return containsBytes(arr, sentinel, index);
});
}
function hexBytes(bytes) {
return bytes.reduce((str, byte) => str + byte.toString(16).padStart(2, "0"), "");
}
function assertByteArrayIsNotEmptyForCodec(codecDescription, bytes, offset = 0) {
if (bytes.length - offset <= 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY, {
codecDescription
});
}
}
function assertByteArrayHasEnoughBytesForCodec(codecDescription, expected, bytes, offset = 0) {
const bytesLength = bytes.length - offset;
if (bytesLength < expected) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH, {
bytesLength,
codecDescription,
expected
});
}
}
function assertByteArrayOffsetIsNotOutOfRange(codecDescription, offset, bytesLength) {
if (offset < 0 || offset > bytesLength) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE, {
bytesLength,
codecDescription,
offset
});
}
}
// src/add-codec-size-prefix.ts
function addEncoderSizePrefix(encoder, prefix) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
offset = prefix.write(encoderBytes.length, bytes, offset);
bytes.set(encoderBytes, offset);
return offset + encoderBytes.length;
});
if (isFixedSize(prefix) && isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: prefix.fixedSize + encoder.fixedSize, write });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const encoderMaxSize = isFixedSize(encoder) ? encoder.fixedSize : encoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && encoderMaxSize !== null ? prefixMaxSize + encoderMaxSize : null;
return createEncoder({
...encoder,
...maxSize !== null ? { maxSize } : {},
getSizeFromValue: (value) => {
const encoderSize = getEncodedSize(value, encoder);
return getEncodedSize(encoderSize, prefix) + encoderSize;
},
write
});
}
function addDecoderSizePrefix(decoder, prefix) {
const read = ((bytes, offset) => {
const [bigintSize, decoderOffset] = prefix.read(bytes, offset);
const size = Number(bigintSize);
offset = decoderOffset;
if (offset > 0 || bytes.length > size) {
bytes = bytes.slice(offset, offset + size);
}
assertByteArrayHasEnoughBytesForCodec("addDecoderSizePrefix", size, bytes);
return [decoder.decode(bytes), offset + size];
});
if (isFixedSize(prefix) && isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: prefix.fixedSize + decoder.fixedSize, read });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const decoderMaxSize = isFixedSize(decoder) ? decoder.fixedSize : decoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && decoderMaxSize !== null ? prefixMaxSize + decoderMaxSize : null;
return createDecoder({ ...decoder, ...maxSize !== null ? { maxSize } : {}, read });
}
function addCodecSizePrefix(codec, prefix) {
return combineCodec(addEncoderSizePrefix(codec, prefix), addDecoderSizePrefix(codec, prefix));
}
// src/array-buffers.ts
function toArrayBuffer(bytes, offset, length) {
const bytesOffset = bytes.byteOffset + (offset ?? 0);
const bytesLength = length ?? bytes.byteLength;
let buffer;
if (typeof SharedArrayBuffer === "undefined") {
buffer = bytes.buffer;
} else if (bytes.buffer instanceof SharedArrayBuffer) {
buffer = new ArrayBuffer(bytes.length);
new Uint8Array(buffer).set(new Uint8Array(bytes));
} else {
buffer = bytes.buffer;
}
return (bytesOffset === 0 || bytesOffset === -bytes.byteLength) && bytesLength === bytes.byteLength ? buffer : buffer.slice(bytesOffset, bytesOffset + bytesLength);
}
function createDecoderThatConsumesEntireByteArray(decoder) {
return createDecoder({
...decoder,
read(bytes, offset) {
const [value, newOffset] = decoder.read(bytes, offset);
if (bytes.length > newOffset) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, {
expectedLength: newOffset,
numExcessBytes: bytes.length - newOffset
});
}
return [value, newOffset];
}
});
}
// src/fix-codec-size.ts
function fixEncoderSize(encoder, fixedBytes) {
return createEncoder({
fixedSize: fixedBytes,
write: (value, bytes, offset) => {
const variableByteArray = encoder.encode(value);
const fixedByteArray = variableByteArray.length > fixedBytes ? variableByteArray.slice(0, fixedBytes) : variableByteArray;
bytes.set(fixedByteArray, offset);
return offset + fixedBytes;
}
});
}
function fixDecoderSize(decoder, fixedBytes) {
return createDecoder({
fixedSize: fixedBytes,
read: (bytes, offset) => {
assertByteArrayHasEnoughBytesForCodec("fixCodecSize", fixedBytes, bytes, offset);
if (offset > 0 || bytes.length > fixedBytes) {
bytes = bytes.slice(offset, offset + fixedBytes);
}
if (isFixedSize(decoder)) {
bytes = fixBytes(bytes, decoder.fixedSize);
}
const [value] = decoder.read(bytes, 0);
return [value, offset + fixedBytes];
}
});
}
function fixCodecSize(codec, fixedBytes) {
return combineCodec(fixEncoderSize(codec, fixedBytes), fixDecoderSize(codec, fixedBytes));
}
// src/offset-codec.ts
function offsetEncoder(encoder, config) {
return createEncoder({
...encoder,
write: (value, bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPreOffset, bytes.length);
const postOffset = encoder.write(value, bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPostOffset, bytes.length);
return newPostOffset;
}
});
}
function offsetDecoder(decoder, config) {
return createDecoder({
...decoder,
read: (bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPreOffset, bytes.length);
const [value, postOffset] = decoder.read(bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPostOffset, bytes.length);
return [value, newPostOffset];
}
});
}
function offsetCodec(codec, config) {
return combineCodec(offsetEncoder(codec, config), offsetDecoder(codec, config));
}
function modulo(dividend, divisor) {
if (divisor === 0) return 0;
return (dividend % divisor + divisor) % divisor;
}
function resizeEncoder(encoder, resize) {
if (isFixedSize(encoder)) {
const fixedSize = resize(encoder.fixedSize);
if (fixedSize < 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeEncoder"
});
}
return createEncoder({ ...encoder, fixedSize });
}
return createEncoder({
...encoder,
getSizeFromValue: (value) => {
const newSize = resize(encoder.getSizeFromValue(value));
if (newSize < 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: newSize,
codecDescription: "resizeEncoder"
});
}
return newSize;
}
});
}
function resizeDecoder(decoder, resize) {
if (isFixedSize(decoder)) {
const fixedSize = resize(decoder.fixedSize);
if (fixedSize < 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeDecoder"
});
}
return createDecoder({ ...decoder, fixedSize });
}
return decoder;
}
function resizeCodec(codec, resize) {
return combineCodec(resizeEncoder(codec, resize), resizeDecoder(codec, resize));
}
// src/pad-codec.ts
function padLeftEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftCodec(codec, offset) {
return combineCodec(padLeftEncoder(codec, offset), padLeftDecoder(codec, offset));
}
function padRightCodec(codec, offset) {
return combineCodec(padRightEncoder(codec, offset), padRightDecoder(codec, offset));
}
// src/reverse-codec.ts
function copySourceToTargetInReverse(source, target_WILL_MUTATE, sourceOffset, sourceLength, targetOffset = 0) {
while (sourceOffset < --sourceLength) {
const leftValue = source[sourceOffset];
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceLength];
target_WILL_MUTATE[sourceLength + targetOffset] = leftValue;
sourceOffset++;
}
if (sourceOffset === sourceLength) {
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceOffset];
}
}
function reverseEncoder(encoder) {
assertIsFixedSize(encoder);
return createEncoder({
...encoder,
write: (value, bytes, offset) => {
const newOffset = encoder.write(value, bytes, offset);
copySourceToTargetInReverse(
bytes,
bytes,
offset,
offset + encoder.fixedSize
);
return newOffset;
}
});
}
function reverseDecoder(decoder) {
assertIsFixedSize(decoder);
return createDecoder({
...decoder,
read: (bytes, offset) => {
const reversedBytes = bytes.slice();
copySourceToTargetInReverse(
bytes,
reversedBytes,
offset,
offset + decoder.fixedSize
);
return decoder.read(reversedBytes, offset);
}
});
}
function reverseCodec(codec) {
return combineCodec(reverseEncoder(codec), reverseDecoder(codec));
}
// src/transform-codec.ts
function transformEncoder(encoder, unmap) {
return createEncoder({
...isVariableSize(encoder) ? { ...encoder, getSizeFromValue: (value) => encoder.getSizeFromValue(unmap(value)) } : encoder,
write: (value, bytes, offset) => encoder.write(unmap(value), bytes, offset)
});
}
function transformDecoder(decoder, map) {
return createDecoder({
...decoder,
read: (bytes, offset) => {
const [value, newOffset] = decoder.read(bytes, offset);
return [map(value, bytes, offset), newOffset];
}
});
}
function transformCodec(codec, unmap, map) {
return createCodec({
...transformEncoder(codec, unmap),
read: map ? transformDecoder(codec, map).read : codec.read
});
}
exports.addCodecSentinel = addCodecSentinel;
exports.addCodecSizePrefix = addCodecSizePrefix;
exports.addDecoderSentinel = addDecoderSentinel;
exports.addDecoderSizePrefix = addDecoderSizePrefix;
exports.addEncoderSentinel = addEncoderSentinel;
exports.addEncoderSizePrefix = addEncoderSizePrefix;
exports.assertByteArrayHasEnoughBytesForCodec = assertByteArrayHasEnoughBytesForCodec;
exports.assertByteArrayIsNotEmptyForCodec = assertByteArrayIsNotEmptyForCodec;
exports.assertByteArrayOffsetIsNotOutOfRange = assertByteArrayOffsetIsNotOutOfRange;
exports.assertIsFixedSize = assertIsFixedSize;
exports.assertIsVariableSize = assertIsVariableSize;
exports.bytesEqual = bytesEqual;
exports.combineCodec = combineCodec;
exports.containsBytes = containsBytes;
exports.createCodec = createCodec;
exports.createDecoder = createDecoder;
exports.createDecoderThatConsumesEntireByteArray = createDecoderThatConsumesEntireByteArray;
exports.createEncoder = createEncoder;
exports.fixBytes = fixBytes;
exports.fixCodecSize = fixCodecSize;
exports.fixDecoderSize = fixDecoderSize;
exports.fixEncoderSize = fixEncoderSize;
exports.getEncodedSize = getEncodedSize;
exports.isFixedSize = isFixedSize;
exports.isVariableSize = isVariableSize;
exports.mergeBytes = mergeBytes;
exports.offsetCodec = offsetCodec;
exports.offsetDecoder = offsetDecoder;
exports.offsetEncoder = offsetEncoder;
exports.padBytes = padBytes;
exports.padLeftCodec = padLeftCodec;
exports.padLeftDecoder = padLeftDecoder;
exports.padLeftEncoder = padLeftEncoder;
exports.padRightCodec = padRightCodec;
exports.padRightDecoder = padRightDecoder;
exports.padRightEncoder = padRightEncoder;
exports.resizeCodec = resizeCodec;
exports.resizeDecoder = resizeDecoder;
exports.resizeEncoder = resizeEncoder;
exports.reverseCodec = reverseCodec;
exports.reverseDecoder = reverseDecoder;
exports.reverseEncoder = reverseEncoder;
exports.toArrayBuffer = toArrayBuffer;
exports.transformCodec = transformCodec;
exports.transformDecoder = transformDecoder;
exports.transformEncoder = transformEncoder;
//# sourceMappingURL=index.browser.cjs.map

File diff suppressed because one or more lines are too long
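For orientation, here is a small standalone sketch of how the `transformEncoder` helper exported by this vendored file behaves. The `getEncodedSize`, `createEncoder`, and size-check helpers below are trimmed copies of the dist source, while `u8Encoder` and `boolEncoder` are hypothetical example codecs introduced only for illustration; nothing here requires installing `@solana/codecs-core`.

```javascript
// Trimmed copies of the dist helpers, so the sketch runs standalone.
function getEncodedSize(value, encoder) {
  return "fixedSize" in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
}
function createEncoder(encoder) {
  return Object.freeze({
    ...encoder,
    encode: (value) => {
      const bytes = new Uint8Array(getEncodedSize(value, encoder));
      encoder.write(value, bytes, 0);
      return bytes;
    }
  });
}
function isFixedSize(codec) {
  return "fixedSize" in codec && typeof codec.fixedSize === "number";
}
function isVariableSize(codec) {
  return !isFixedSize(codec);
}
function transformEncoder(encoder, unmap) {
  return createEncoder({
    ...(isVariableSize(encoder)
      ? { ...encoder, getSizeFromValue: (value) => encoder.getSizeFromValue(unmap(value)) }
      : encoder),
    write: (value, bytes, offset) => encoder.write(unmap(value), bytes, offset)
  });
}

// A hypothetical one-byte u8 encoder, used only as an example input.
const u8Encoder = createEncoder({
  fixedSize: 1,
  write: (value, bytes, offset) => {
    bytes[offset] = value;
    return offset + 1;
  }
});

// transformEncoder maps a richer input type (boolean) onto the inner
// encoder's type (number) before encoding.
const boolEncoder = transformEncoder(u8Encoder, (b) => (b ? 1 : 0));
console.log(Array.from(boolEncoder.encode(true)));  // [ 1 ]
console.log(Array.from(boolEncoder.encode(false))); // [ 0 ]
```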

493
node_modules/@solana/codecs-core/dist/index.browser.mjs generated vendored Normal file

@@ -0,0 +1,493 @@
import { SolanaError, SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH, SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH, SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH, SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH, SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH, SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY, SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH, SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE, SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES, SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL } from '@solana/errors';
// src/add-codec-sentinel.ts
// src/bytes.ts
var mergeBytes = (byteArrays) => {
const nonEmptyByteArrays = byteArrays.filter((arr) => arr.length);
if (nonEmptyByteArrays.length === 0) {
return byteArrays.length ? byteArrays[0] : new Uint8Array();
}
if (nonEmptyByteArrays.length === 1) {
return nonEmptyByteArrays[0];
}
const totalLength = nonEmptyByteArrays.reduce((total, arr) => total + arr.length, 0);
const result = new Uint8Array(totalLength);
let offset = 0;
nonEmptyByteArrays.forEach((arr) => {
result.set(arr, offset);
offset += arr.length;
});
return result;
};
function padBytes(bytes, length) {
if (bytes.length >= length) return bytes;
const paddedBytes = new Uint8Array(length).fill(0);
paddedBytes.set(bytes);
return paddedBytes;
}
var fixBytes = (bytes, length) => padBytes(bytes.length <= length ? bytes : bytes.slice(0, length), length);
function containsBytes(data, bytes, offset) {
const slice = (offset === 0 || offset <= -data.byteLength) && data.length === bytes.length ? data : data.slice(offset, offset + bytes.length);
return bytesEqual(slice, bytes);
}
function bytesEqual(bytes1, bytes2) {
return bytes1.length === bytes2.length && bytes1.every((value, index) => value === bytes2[index]);
}
function getEncodedSize(value, encoder) {
return "fixedSize" in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
}
function createEncoder(encoder) {
return Object.freeze({
...encoder,
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, encoder));
encoder.write(value, bytes, 0);
return bytes;
}
});
}
function createDecoder(decoder) {
return Object.freeze({
...decoder,
decode: (bytes, offset = 0) => decoder.read(bytes, offset)[0]
});
}
function createCodec(codec) {
return Object.freeze({
...codec,
decode: (bytes, offset = 0) => codec.read(bytes, offset)[0],
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, codec));
codec.write(value, bytes, 0);
return bytes;
}
});
}
function isFixedSize(codec) {
return "fixedSize" in codec && typeof codec.fixedSize === "number";
}
function assertIsFixedSize(codec) {
if (!isFixedSize(codec)) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH);
}
}
function isVariableSize(codec) {
return !isFixedSize(codec);
}
function assertIsVariableSize(codec) {
if (!isVariableSize(codec)) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH);
}
}
function combineCodec(encoder, decoder) {
if (isFixedSize(encoder) !== isFixedSize(decoder)) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH);
}
if (isFixedSize(encoder) && isFixedSize(decoder) && encoder.fixedSize !== decoder.fixedSize) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH, {
decoderFixedSize: decoder.fixedSize,
encoderFixedSize: encoder.fixedSize
});
}
if (!isFixedSize(encoder) && !isFixedSize(decoder) && encoder.maxSize !== decoder.maxSize) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH, {
decoderMaxSize: decoder.maxSize,
encoderMaxSize: encoder.maxSize
});
}
return {
...decoder,
...encoder,
decode: decoder.decode,
encode: encoder.encode,
read: decoder.read,
write: encoder.write
};
}
// src/add-codec-sentinel.ts
function addEncoderSentinel(encoder, sentinel) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
if (findSentinelIndex(encoderBytes, sentinel) >= 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL, {
encodedBytes: encoderBytes,
hexEncodedBytes: hexBytes(encoderBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
bytes.set(encoderBytes, offset);
offset += encoderBytes.length;
bytes.set(sentinel, offset);
offset += sentinel.length;
return offset;
});
if (isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: encoder.fixedSize + sentinel.length, write });
}
return createEncoder({
...encoder,
...encoder.maxSize != null ? { maxSize: encoder.maxSize + sentinel.length } : {},
getSizeFromValue: (value) => encoder.getSizeFromValue(value) + sentinel.length,
write
});
}
function addDecoderSentinel(decoder, sentinel) {
const read = ((bytes, offset) => {
const candidateBytes = offset === 0 || offset <= -bytes.byteLength ? bytes : bytes.slice(offset);
const sentinelIndex = findSentinelIndex(candidateBytes, sentinel);
if (sentinelIndex === -1) {
throw new SolanaError(SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES, {
decodedBytes: candidateBytes,
hexDecodedBytes: hexBytes(candidateBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
const preSentinelBytes = candidateBytes.slice(0, sentinelIndex);
return [decoder.decode(preSentinelBytes), offset + preSentinelBytes.length + sentinel.length];
});
if (isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: decoder.fixedSize + sentinel.length, read });
}
return createDecoder({
...decoder,
...decoder.maxSize != null ? { maxSize: decoder.maxSize + sentinel.length } : {},
read
});
}
function addCodecSentinel(codec, sentinel) {
return combineCodec(addEncoderSentinel(codec, sentinel), addDecoderSentinel(codec, sentinel));
}
function findSentinelIndex(bytes, sentinel) {
return bytes.findIndex((byte, index, arr) => {
if (sentinel.length === 1) return byte === sentinel[0];
return containsBytes(arr, sentinel, index);
});
}
function hexBytes(bytes) {
return bytes.reduce((str, byte) => str + byte.toString(16).padStart(2, "0"), "");
}
function assertByteArrayIsNotEmptyForCodec(codecDescription, bytes, offset = 0) {
if (bytes.length - offset <= 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY, {
codecDescription
});
}
}
function assertByteArrayHasEnoughBytesForCodec(codecDescription, expected, bytes, offset = 0) {
const bytesLength = bytes.length - offset;
if (bytesLength < expected) {
throw new SolanaError(SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH, {
bytesLength,
codecDescription,
expected
});
}
}
function assertByteArrayOffsetIsNotOutOfRange(codecDescription, offset, bytesLength) {
if (offset < 0 || offset > bytesLength) {
throw new SolanaError(SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE, {
bytesLength,
codecDescription,
offset
});
}
}
// src/add-codec-size-prefix.ts
function addEncoderSizePrefix(encoder, prefix) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
offset = prefix.write(encoderBytes.length, bytes, offset);
bytes.set(encoderBytes, offset);
return offset + encoderBytes.length;
});
if (isFixedSize(prefix) && isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: prefix.fixedSize + encoder.fixedSize, write });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const encoderMaxSize = isFixedSize(encoder) ? encoder.fixedSize : encoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && encoderMaxSize !== null ? prefixMaxSize + encoderMaxSize : null;
return createEncoder({
...encoder,
...maxSize !== null ? { maxSize } : {},
getSizeFromValue: (value) => {
const encoderSize = getEncodedSize(value, encoder);
return getEncodedSize(encoderSize, prefix) + encoderSize;
},
write
});
}
function addDecoderSizePrefix(decoder, prefix) {
const read = ((bytes, offset) => {
const [bigintSize, decoderOffset] = prefix.read(bytes, offset);
const size = Number(bigintSize);
offset = decoderOffset;
if (offset > 0 || bytes.length > size) {
bytes = bytes.slice(offset, offset + size);
}
assertByteArrayHasEnoughBytesForCodec("addDecoderSizePrefix", size, bytes);
return [decoder.decode(bytes), offset + size];
});
if (isFixedSize(prefix) && isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: prefix.fixedSize + decoder.fixedSize, read });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const decoderMaxSize = isFixedSize(decoder) ? decoder.fixedSize : decoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && decoderMaxSize !== null ? prefixMaxSize + decoderMaxSize : null;
return createDecoder({ ...decoder, ...maxSize !== null ? { maxSize } : {}, read });
}
function addCodecSizePrefix(codec, prefix) {
return combineCodec(addEncoderSizePrefix(codec, prefix), addDecoderSizePrefix(codec, prefix));
}
// src/array-buffers.ts
function toArrayBuffer(bytes, offset, length) {
const bytesOffset = bytes.byteOffset + (offset ?? 0);
const bytesLength = length ?? bytes.byteLength;
let buffer;
if (typeof SharedArrayBuffer === "undefined") {
buffer = bytes.buffer;
} else if (bytes.buffer instanceof SharedArrayBuffer) {
buffer = new ArrayBuffer(bytes.length);
new Uint8Array(buffer).set(new Uint8Array(bytes));
} else {
buffer = bytes.buffer;
}
return (bytesOffset === 0 || bytesOffset === -bytes.byteLength) && bytesLength === bytes.byteLength ? buffer : buffer.slice(bytesOffset, bytesOffset + bytesLength);
}
function createDecoderThatConsumesEntireByteArray(decoder) {
return createDecoder({
...decoder,
read(bytes, offset) {
const [value, newOffset] = decoder.read(bytes, offset);
if (bytes.length > newOffset) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, {
expectedLength: newOffset,
numExcessBytes: bytes.length - newOffset
});
}
return [value, newOffset];
}
});
}
// src/fix-codec-size.ts
function fixEncoderSize(encoder, fixedBytes) {
return createEncoder({
fixedSize: fixedBytes,
write: (value, bytes, offset) => {
const variableByteArray = encoder.encode(value);
const fixedByteArray = variableByteArray.length > fixedBytes ? variableByteArray.slice(0, fixedBytes) : variableByteArray;
bytes.set(fixedByteArray, offset);
return offset + fixedBytes;
}
});
}
function fixDecoderSize(decoder, fixedBytes) {
return createDecoder({
fixedSize: fixedBytes,
read: (bytes, offset) => {
assertByteArrayHasEnoughBytesForCodec("fixCodecSize", fixedBytes, bytes, offset);
if (offset > 0 || bytes.length > fixedBytes) {
bytes = bytes.slice(offset, offset + fixedBytes);
}
if (isFixedSize(decoder)) {
bytes = fixBytes(bytes, decoder.fixedSize);
}
const [value] = decoder.read(bytes, 0);
return [value, offset + fixedBytes];
}
});
}
function fixCodecSize(codec, fixedBytes) {
return combineCodec(fixEncoderSize(codec, fixedBytes), fixDecoderSize(codec, fixedBytes));
}
// src/offset-codec.ts
function offsetEncoder(encoder, config) {
return createEncoder({
...encoder,
write: (value, bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPreOffset, bytes.length);
const postOffset = encoder.write(value, bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPostOffset, bytes.length);
return newPostOffset;
}
});
}
function offsetDecoder(decoder, config) {
return createDecoder({
...decoder,
read: (bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPreOffset, bytes.length);
const [value, postOffset] = decoder.read(bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPostOffset, bytes.length);
return [value, newPostOffset];
}
});
}
function offsetCodec(codec, config) {
return combineCodec(offsetEncoder(codec, config), offsetDecoder(codec, config));
}
function modulo(dividend, divisor) {
if (divisor === 0) return 0;
return (dividend % divisor + divisor) % divisor;
}
function resizeEncoder(encoder, resize) {
if (isFixedSize(encoder)) {
const fixedSize = resize(encoder.fixedSize);
if (fixedSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeEncoder"
});
}
return createEncoder({ ...encoder, fixedSize });
}
return createEncoder({
...encoder,
getSizeFromValue: (value) => {
const newSize = resize(encoder.getSizeFromValue(value));
if (newSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: newSize,
codecDescription: "resizeEncoder"
});
}
return newSize;
}
});
}
function resizeDecoder(decoder, resize) {
if (isFixedSize(decoder)) {
const fixedSize = resize(decoder.fixedSize);
if (fixedSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeDecoder"
});
}
return createDecoder({ ...decoder, fixedSize });
}
return decoder;
}
function resizeCodec(codec, resize) {
return combineCodec(resizeEncoder(codec, resize), resizeDecoder(codec, resize));
}
// src/pad-codec.ts
function padLeftEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftCodec(codec, offset) {
return combineCodec(padLeftEncoder(codec, offset), padLeftDecoder(codec, offset));
}
function padRightCodec(codec, offset) {
return combineCodec(padRightEncoder(codec, offset), padRightDecoder(codec, offset));
}
// src/reverse-codec.ts
function copySourceToTargetInReverse(source, target_WILL_MUTATE, sourceOffset, sourceLength, targetOffset = 0) {
while (sourceOffset < --sourceLength) {
const leftValue = source[sourceOffset];
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceLength];
target_WILL_MUTATE[sourceLength + targetOffset] = leftValue;
sourceOffset++;
}
if (sourceOffset === sourceLength) {
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceOffset];
}
}
function reverseEncoder(encoder) {
assertIsFixedSize(encoder);
return createEncoder({
...encoder,
write: (value, bytes, offset) => {
const newOffset = encoder.write(value, bytes, offset);
copySourceToTargetInReverse(
bytes,
bytes,
offset,
offset + encoder.fixedSize
);
return newOffset;
}
});
}
function reverseDecoder(decoder) {
assertIsFixedSize(decoder);
return createDecoder({
...decoder,
read: (bytes, offset) => {
const reversedBytes = bytes.slice();
copySourceToTargetInReverse(
bytes,
reversedBytes,
offset,
offset + decoder.fixedSize
);
return decoder.read(reversedBytes, offset);
}
});
}
function reverseCodec(codec) {
return combineCodec(reverseEncoder(codec), reverseDecoder(codec));
}
// src/transform-codec.ts
function transformEncoder(encoder, unmap) {
return createEncoder({
...isVariableSize(encoder) ? { ...encoder, getSizeFromValue: (value) => encoder.getSizeFromValue(unmap(value)) } : encoder,
write: (value, bytes, offset) => encoder.write(unmap(value), bytes, offset)
});
}
function transformDecoder(decoder, map) {
return createDecoder({
...decoder,
read: (bytes, offset) => {
const [value, newOffset] = decoder.read(bytes, offset);
return [map(value, bytes, offset), newOffset];
}
});
}
function transformCodec(codec, unmap, map) {
return createCodec({
...transformEncoder(codec, unmap),
read: map ? transformDecoder(codec, map).read : codec.read
});
}
export { addCodecSentinel, addCodecSizePrefix, addDecoderSentinel, addDecoderSizePrefix, addEncoderSentinel, addEncoderSizePrefix, assertByteArrayHasEnoughBytesForCodec, assertByteArrayIsNotEmptyForCodec, assertByteArrayOffsetIsNotOutOfRange, assertIsFixedSize, assertIsVariableSize, bytesEqual, combineCodec, containsBytes, createCodec, createDecoder, createDecoderThatConsumesEntireByteArray, createEncoder, fixBytes, fixCodecSize, fixDecoderSize, fixEncoderSize, getEncodedSize, isFixedSize, isVariableSize, mergeBytes, offsetCodec, offsetDecoder, offsetEncoder, padBytes, padLeftCodec, padLeftDecoder, padLeftEncoder, padRightCodec, padRightDecoder, padRightEncoder, resizeCodec, resizeDecoder, resizeEncoder, reverseCodec, reverseDecoder, reverseEncoder, toArrayBuffer, transformCodec, transformDecoder, transformEncoder };
//# sourceMappingURL=index.browser.mjs.map

File diff suppressed because one or more lines are too long
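As a quick sanity check of the byte helpers at the top of the file above, the following sketch copies `mergeBytes`, `padBytes`, and `fixBytes` verbatim from the dist source and exercises them; it runs standalone, without installing the package.

```javascript
// Verbatim copies of the byte helpers from src/bytes.ts in the dist above.
const mergeBytes = (byteArrays) => {
  const nonEmptyByteArrays = byteArrays.filter((arr) => arr.length);
  if (nonEmptyByteArrays.length === 0) {
    return byteArrays.length ? byteArrays[0] : new Uint8Array();
  }
  if (nonEmptyByteArrays.length === 1) {
    return nonEmptyByteArrays[0];
  }
  const totalLength = nonEmptyByteArrays.reduce((total, arr) => total + arr.length, 0);
  const result = new Uint8Array(totalLength);
  let offset = 0;
  nonEmptyByteArrays.forEach((arr) => {
    result.set(arr, offset);
    offset += arr.length;
  });
  return result;
};
function padBytes(bytes, length) {
  if (bytes.length >= length) return bytes;
  const paddedBytes = new Uint8Array(length).fill(0);
  paddedBytes.set(bytes);
  return paddedBytes;
}
const fixBytes = (bytes, length) =>
  padBytes(bytes.length <= length ? bytes : bytes.slice(0, length), length);

// mergeBytes concatenates its inputs, skipping empty arrays.
const merged = mergeBytes([new Uint8Array([1, 2]), new Uint8Array(), new Uint8Array([3])]);
console.log(Array.from(merged)); // [ 1, 2, 3 ]

// padBytes zero-fills on the right up to the requested length.
console.log(Array.from(padBytes(new Uint8Array([7]), 3))); // [ 7, 0, 0 ]

// fixBytes pads short inputs and truncates long ones to exactly `length`.
console.log(Array.from(fixBytes(new Uint8Array([1, 2, 3, 4]), 2))); // [ 1, 2 ]
```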

493
node_modules/@solana/codecs-core/dist/index.native.mjs generated vendored Normal file

@@ -0,0 +1,493 @@
import { SolanaError, SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH, SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH, SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH, SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH, SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH, SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY, SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH, SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE, SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES, SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL } from '@solana/errors';
// src/add-codec-sentinel.ts
// src/bytes.ts
var mergeBytes = (byteArrays) => {
const nonEmptyByteArrays = byteArrays.filter((arr) => arr.length);
if (nonEmptyByteArrays.length === 0) {
return byteArrays.length ? byteArrays[0] : new Uint8Array();
}
if (nonEmptyByteArrays.length === 1) {
return nonEmptyByteArrays[0];
}
const totalLength = nonEmptyByteArrays.reduce((total, arr) => total + arr.length, 0);
const result = new Uint8Array(totalLength);
let offset = 0;
nonEmptyByteArrays.forEach((arr) => {
result.set(arr, offset);
offset += arr.length;
});
return result;
};
function padBytes(bytes, length) {
if (bytes.length >= length) return bytes;
const paddedBytes = new Uint8Array(length).fill(0);
paddedBytes.set(bytes);
return paddedBytes;
}
var fixBytes = (bytes, length) => padBytes(bytes.length <= length ? bytes : bytes.slice(0, length), length);
function containsBytes(data, bytes, offset) {
const slice = (offset === 0 || offset <= -data.byteLength) && data.length === bytes.length ? data : data.slice(offset, offset + bytes.length);
return bytesEqual(slice, bytes);
}
function bytesEqual(bytes1, bytes2) {
return bytes1.length === bytes2.length && bytes1.every((value, index) => value === bytes2[index]);
}
function getEncodedSize(value, encoder) {
return "fixedSize" in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
}
function createEncoder(encoder) {
return Object.freeze({
...encoder,
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, encoder));
encoder.write(value, bytes, 0);
return bytes;
}
});
}
function createDecoder(decoder) {
return Object.freeze({
...decoder,
decode: (bytes, offset = 0) => decoder.read(bytes, offset)[0]
});
}
function createCodec(codec) {
return Object.freeze({
...codec,
decode: (bytes, offset = 0) => codec.read(bytes, offset)[0],
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, codec));
codec.write(value, bytes, 0);
return bytes;
}
});
}
function isFixedSize(codec) {
return "fixedSize" in codec && typeof codec.fixedSize === "number";
}
function assertIsFixedSize(codec) {
if (!isFixedSize(codec)) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH);
}
}
function isVariableSize(codec) {
return !isFixedSize(codec);
}
function assertIsVariableSize(codec) {
if (!isVariableSize(codec)) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH);
}
}
function combineCodec(encoder, decoder) {
if (isFixedSize(encoder) !== isFixedSize(decoder)) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH);
}
if (isFixedSize(encoder) && isFixedSize(decoder) && encoder.fixedSize !== decoder.fixedSize) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH, {
decoderFixedSize: decoder.fixedSize,
encoderFixedSize: encoder.fixedSize
});
}
if (!isFixedSize(encoder) && !isFixedSize(decoder) && encoder.maxSize !== decoder.maxSize) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH, {
decoderMaxSize: decoder.maxSize,
encoderMaxSize: encoder.maxSize
});
}
return {
...decoder,
...encoder,
decode: decoder.decode,
encode: encoder.encode,
read: decoder.read,
write: encoder.write
};
}
// src/add-codec-sentinel.ts
function addEncoderSentinel(encoder, sentinel) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
if (findSentinelIndex(encoderBytes, sentinel) >= 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL, {
encodedBytes: encoderBytes,
hexEncodedBytes: hexBytes(encoderBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
bytes.set(encoderBytes, offset);
offset += encoderBytes.length;
bytes.set(sentinel, offset);
offset += sentinel.length;
return offset;
});
if (isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: encoder.fixedSize + sentinel.length, write });
}
return createEncoder({
...encoder,
...encoder.maxSize != null ? { maxSize: encoder.maxSize + sentinel.length } : {},
getSizeFromValue: (value) => encoder.getSizeFromValue(value) + sentinel.length,
write
});
}
function addDecoderSentinel(decoder, sentinel) {
const read = ((bytes, offset) => {
const candidateBytes = offset === 0 || offset <= -bytes.byteLength ? bytes : bytes.slice(offset);
const sentinelIndex = findSentinelIndex(candidateBytes, sentinel);
if (sentinelIndex === -1) {
throw new SolanaError(SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES, {
decodedBytes: candidateBytes,
hexDecodedBytes: hexBytes(candidateBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
const preSentinelBytes = candidateBytes.slice(0, sentinelIndex);
return [decoder.decode(preSentinelBytes), offset + preSentinelBytes.length + sentinel.length];
});
if (isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: decoder.fixedSize + sentinel.length, read });
}
return createDecoder({
...decoder,
...decoder.maxSize != null ? { maxSize: decoder.maxSize + sentinel.length } : {},
read
});
}
function addCodecSentinel(codec, sentinel) {
return combineCodec(addEncoderSentinel(codec, sentinel), addDecoderSentinel(codec, sentinel));
}
function findSentinelIndex(bytes, sentinel) {
return bytes.findIndex((byte, index, arr) => {
if (sentinel.length === 1) return byte === sentinel[0];
return containsBytes(arr, sentinel, index);
});
}
function hexBytes(bytes) {
return bytes.reduce((str, byte) => str + byte.toString(16).padStart(2, "0"), "");
}
function assertByteArrayIsNotEmptyForCodec(codecDescription, bytes, offset = 0) {
if (bytes.length - offset <= 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY, {
codecDescription
});
}
}
function assertByteArrayHasEnoughBytesForCodec(codecDescription, expected, bytes, offset = 0) {
const bytesLength = bytes.length - offset;
if (bytesLength < expected) {
throw new SolanaError(SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH, {
bytesLength,
codecDescription,
expected
});
}
}
function assertByteArrayOffsetIsNotOutOfRange(codecDescription, offset, bytesLength) {
if (offset < 0 || offset > bytesLength) {
throw new SolanaError(SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE, {
bytesLength,
codecDescription,
offset
});
}
}
// src/add-codec-size-prefix.ts
function addEncoderSizePrefix(encoder, prefix) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
offset = prefix.write(encoderBytes.length, bytes, offset);
bytes.set(encoderBytes, offset);
return offset + encoderBytes.length;
});
if (isFixedSize(prefix) && isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: prefix.fixedSize + encoder.fixedSize, write });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const encoderMaxSize = isFixedSize(encoder) ? encoder.fixedSize : encoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && encoderMaxSize !== null ? prefixMaxSize + encoderMaxSize : null;
return createEncoder({
...encoder,
...maxSize !== null ? { maxSize } : {},
getSizeFromValue: (value) => {
const encoderSize = getEncodedSize(value, encoder);
return getEncodedSize(encoderSize, prefix) + encoderSize;
},
write
});
}
function addDecoderSizePrefix(decoder, prefix) {
const read = ((bytes, offset) => {
const [bigintSize, decoderOffset] = prefix.read(bytes, offset);
const size = Number(bigintSize);
offset = decoderOffset;
if (offset > 0 || bytes.length > size) {
bytes = bytes.slice(offset, offset + size);
}
assertByteArrayHasEnoughBytesForCodec("addDecoderSizePrefix", size, bytes);
return [decoder.decode(bytes), offset + size];
});
if (isFixedSize(prefix) && isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: prefix.fixedSize + decoder.fixedSize, read });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const decoderMaxSize = isFixedSize(decoder) ? decoder.fixedSize : decoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && decoderMaxSize !== null ? prefixMaxSize + decoderMaxSize : null;
return createDecoder({ ...decoder, ...maxSize !== null ? { maxSize } : {}, read });
}
function addCodecSizePrefix(codec, prefix) {
return combineCodec(addEncoderSizePrefix(codec, prefix), addDecoderSizePrefix(codec, prefix));
}
// src/array-buffers.ts
function toArrayBuffer(bytes, offset, length) {
const bytesOffset = bytes.byteOffset + (offset ?? 0);
const bytesLength = length ?? bytes.byteLength;
let buffer;
if (typeof SharedArrayBuffer === "undefined") {
buffer = bytes.buffer;
} else if (bytes.buffer instanceof SharedArrayBuffer) {
buffer = new ArrayBuffer(bytes.length);
new Uint8Array(buffer).set(new Uint8Array(bytes));
} else {
buffer = bytes.buffer;
}
return (bytesOffset === 0 || bytesOffset === -bytes.byteLength) && bytesLength === bytes.byteLength ? buffer : buffer.slice(bytesOffset, bytesOffset + bytesLength);
}
function createDecoderThatConsumesEntireByteArray(decoder) {
return createDecoder({
...decoder,
read(bytes, offset) {
const [value, newOffset] = decoder.read(bytes, offset);
if (bytes.length > newOffset) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, {
expectedLength: newOffset,
numExcessBytes: bytes.length - newOffset
});
}
return [value, newOffset];
}
});
}
// src/fix-codec-size.ts
function fixEncoderSize(encoder, fixedBytes) {
return createEncoder({
fixedSize: fixedBytes,
write: (value, bytes, offset) => {
const variableByteArray = encoder.encode(value);
const fixedByteArray = variableByteArray.length > fixedBytes ? variableByteArray.slice(0, fixedBytes) : variableByteArray;
bytes.set(fixedByteArray, offset);
return offset + fixedBytes;
}
});
}
function fixDecoderSize(decoder, fixedBytes) {
return createDecoder({
fixedSize: fixedBytes,
read: (bytes, offset) => {
assertByteArrayHasEnoughBytesForCodec("fixCodecSize", fixedBytes, bytes, offset);
if (offset > 0 || bytes.length > fixedBytes) {
bytes = bytes.slice(offset, offset + fixedBytes);
}
if (isFixedSize(decoder)) {
bytes = fixBytes(bytes, decoder.fixedSize);
}
const [value] = decoder.read(bytes, 0);
return [value, offset + fixedBytes];
}
});
}
function fixCodecSize(codec, fixedBytes) {
return combineCodec(fixEncoderSize(codec, fixedBytes), fixDecoderSize(codec, fixedBytes));
}
// src/offset-codec.ts
function offsetEncoder(encoder, config) {
return createEncoder({
...encoder,
write: (value, bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPreOffset, bytes.length);
const postOffset = encoder.write(value, bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPostOffset, bytes.length);
return newPostOffset;
}
});
}
function offsetDecoder(decoder, config) {
return createDecoder({
...decoder,
read: (bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPreOffset, bytes.length);
const [value, postOffset] = decoder.read(bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPostOffset, bytes.length);
return [value, newPostOffset];
}
});
}
function offsetCodec(codec, config) {
return combineCodec(offsetEncoder(codec, config), offsetDecoder(codec, config));
}
function modulo(dividend, divisor) {
if (divisor === 0) return 0;
return (dividend % divisor + divisor) % divisor;
}
function resizeEncoder(encoder, resize) {
if (isFixedSize(encoder)) {
const fixedSize = resize(encoder.fixedSize);
if (fixedSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeEncoder"
});
}
return createEncoder({ ...encoder, fixedSize });
}
return createEncoder({
...encoder,
getSizeFromValue: (value) => {
const newSize = resize(encoder.getSizeFromValue(value));
if (newSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: newSize,
codecDescription: "resizeEncoder"
});
}
return newSize;
}
});
}
function resizeDecoder(decoder, resize) {
if (isFixedSize(decoder)) {
const fixedSize = resize(decoder.fixedSize);
if (fixedSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeDecoder"
});
}
return createDecoder({ ...decoder, fixedSize });
}
return decoder;
}
function resizeCodec(codec, resize) {
return combineCodec(resizeEncoder(codec, resize), resizeDecoder(codec, resize));
}
// src/pad-codec.ts
function padLeftEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftCodec(codec, offset) {
return combineCodec(padLeftEncoder(codec, offset), padLeftDecoder(codec, offset));
}
function padRightCodec(codec, offset) {
return combineCodec(padRightEncoder(codec, offset), padRightDecoder(codec, offset));
}
// src/reverse-codec.ts
function copySourceToTargetInReverse(source, target_WILL_MUTATE, sourceOffset, sourceLength, targetOffset = 0) {
while (sourceOffset < --sourceLength) {
const leftValue = source[sourceOffset];
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceLength];
target_WILL_MUTATE[sourceLength + targetOffset] = leftValue;
sourceOffset++;
}
if (sourceOffset === sourceLength) {
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceOffset];
}
}
function reverseEncoder(encoder) {
assertIsFixedSize(encoder);
return createEncoder({
...encoder,
write: (value, bytes, offset) => {
const newOffset = encoder.write(value, bytes, offset);
copySourceToTargetInReverse(
bytes,
bytes,
offset,
offset + encoder.fixedSize
);
return newOffset;
}
});
}
function reverseDecoder(decoder) {
assertIsFixedSize(decoder);
return createDecoder({
...decoder,
read: (bytes, offset) => {
const reversedBytes = bytes.slice();
copySourceToTargetInReverse(
bytes,
reversedBytes,
offset,
offset + decoder.fixedSize
);
return decoder.read(reversedBytes, offset);
}
});
}
function reverseCodec(codec) {
return combineCodec(reverseEncoder(codec), reverseDecoder(codec));
}
// src/transform-codec.ts
function transformEncoder(encoder, unmap) {
return createEncoder({
...isVariableSize(encoder) ? { ...encoder, getSizeFromValue: (value) => encoder.getSizeFromValue(unmap(value)) } : encoder,
write: (value, bytes, offset) => encoder.write(unmap(value), bytes, offset)
});
}
function transformDecoder(decoder, map) {
return createDecoder({
...decoder,
read: (bytes, offset) => {
const [value, newOffset] = decoder.read(bytes, offset);
return [map(value, bytes, offset), newOffset];
}
});
}
function transformCodec(codec, unmap, map) {
return createCodec({
...transformEncoder(codec, unmap),
read: map ? transformDecoder(codec, map).read : codec.read
});
}
export { addCodecSentinel, addCodecSizePrefix, addDecoderSentinel, addDecoderSizePrefix, addEncoderSentinel, addEncoderSizePrefix, assertByteArrayHasEnoughBytesForCodec, assertByteArrayIsNotEmptyForCodec, assertByteArrayOffsetIsNotOutOfRange, assertIsFixedSize, assertIsVariableSize, bytesEqual, combineCodec, containsBytes, createCodec, createDecoder, createDecoderThatConsumesEntireByteArray, createEncoder, fixBytes, fixCodecSize, fixDecoderSize, fixEncoderSize, getEncodedSize, isFixedSize, isVariableSize, mergeBytes, offsetCodec, offsetDecoder, offsetEncoder, padBytes, padLeftCodec, padLeftDecoder, padLeftEncoder, padRightCodec, padRightDecoder, padRightEncoder, resizeCodec, resizeDecoder, resizeEncoder, reverseCodec, reverseDecoder, reverseEncoder, toArrayBuffer, transformCodec, transformDecoder, transformEncoder };
//# sourceMappingURL=index.native.mjs.map

File diff suppressed because one or more lines are too long

540
node_modules/@solana/codecs-core/dist/index.node.cjs generated vendored Normal file

@@ -0,0 +1,540 @@
'use strict';
var errors = require('@solana/errors');
// src/add-codec-sentinel.ts
// src/bytes.ts
var mergeBytes = (byteArrays) => {
const nonEmptyByteArrays = byteArrays.filter((arr) => arr.length);
if (nonEmptyByteArrays.length === 0) {
return byteArrays.length ? byteArrays[0] : new Uint8Array();
}
if (nonEmptyByteArrays.length === 1) {
return nonEmptyByteArrays[0];
}
const totalLength = nonEmptyByteArrays.reduce((total, arr) => total + arr.length, 0);
const result = new Uint8Array(totalLength);
let offset = 0;
nonEmptyByteArrays.forEach((arr) => {
result.set(arr, offset);
offset += arr.length;
});
return result;
};
function padBytes(bytes, length) {
if (bytes.length >= length) return bytes;
const paddedBytes = new Uint8Array(length).fill(0);
paddedBytes.set(bytes);
return paddedBytes;
}
var fixBytes = (bytes, length) => padBytes(bytes.length <= length ? bytes : bytes.slice(0, length), length);
function containsBytes(data, bytes, offset) {
const slice = (offset === 0 || offset <= -data.byteLength) && data.length === bytes.length ? data : data.slice(offset, offset + bytes.length);
return bytesEqual(slice, bytes);
}
function bytesEqual(bytes1, bytes2) {
return bytes1.length === bytes2.length && bytes1.every((value, index) => value === bytes2[index]);
}
function getEncodedSize(value, encoder) {
return "fixedSize" in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
}
function createEncoder(encoder) {
return Object.freeze({
...encoder,
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, encoder));
encoder.write(value, bytes, 0);
return bytes;
}
});
}
function createDecoder(decoder) {
return Object.freeze({
...decoder,
decode: (bytes, offset = 0) => decoder.read(bytes, offset)[0]
});
}
function createCodec(codec) {
return Object.freeze({
...codec,
decode: (bytes, offset = 0) => codec.read(bytes, offset)[0],
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, codec));
codec.write(value, bytes, 0);
return bytes;
}
});
}
function isFixedSize(codec) {
return "fixedSize" in codec && typeof codec.fixedSize === "number";
}
function assertIsFixedSize(codec) {
if (!isFixedSize(codec)) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH);
}
}
function isVariableSize(codec) {
return !isFixedSize(codec);
}
function assertIsVariableSize(codec) {
if (!isVariableSize(codec)) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH);
}
}
function combineCodec(encoder, decoder) {
if (isFixedSize(encoder) !== isFixedSize(decoder)) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH);
}
if (isFixedSize(encoder) && isFixedSize(decoder) && encoder.fixedSize !== decoder.fixedSize) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH, {
decoderFixedSize: decoder.fixedSize,
encoderFixedSize: encoder.fixedSize
});
}
if (!isFixedSize(encoder) && !isFixedSize(decoder) && encoder.maxSize !== decoder.maxSize) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH, {
decoderMaxSize: decoder.maxSize,
encoderMaxSize: encoder.maxSize
});
}
return {
...decoder,
...encoder,
decode: decoder.decode,
encode: encoder.encode,
read: decoder.read,
write: encoder.write
};
}
// src/add-codec-sentinel.ts
function addEncoderSentinel(encoder, sentinel) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
if (findSentinelIndex(encoderBytes, sentinel) >= 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL, {
encodedBytes: encoderBytes,
hexEncodedBytes: hexBytes(encoderBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
bytes.set(encoderBytes, offset);
offset += encoderBytes.length;
bytes.set(sentinel, offset);
offset += sentinel.length;
return offset;
});
if (isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: encoder.fixedSize + sentinel.length, write });
}
return createEncoder({
...encoder,
...encoder.maxSize != null ? { maxSize: encoder.maxSize + sentinel.length } : {},
getSizeFromValue: (value) => encoder.getSizeFromValue(value) + sentinel.length,
write
});
}
function addDecoderSentinel(decoder, sentinel) {
const read = ((bytes, offset) => {
const candidateBytes = offset === 0 || offset <= -bytes.byteLength ? bytes : bytes.slice(offset);
const sentinelIndex = findSentinelIndex(candidateBytes, sentinel);
if (sentinelIndex === -1) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES, {
decodedBytes: candidateBytes,
hexDecodedBytes: hexBytes(candidateBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
const preSentinelBytes = candidateBytes.slice(0, sentinelIndex);
return [decoder.decode(preSentinelBytes), offset + preSentinelBytes.length + sentinel.length];
});
if (isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: decoder.fixedSize + sentinel.length, read });
}
return createDecoder({
...decoder,
...decoder.maxSize != null ? { maxSize: decoder.maxSize + sentinel.length } : {},
read
});
}
function addCodecSentinel(codec, sentinel) {
return combineCodec(addEncoderSentinel(codec, sentinel), addDecoderSentinel(codec, sentinel));
}
function findSentinelIndex(bytes, sentinel) {
return bytes.findIndex((byte, index, arr) => {
if (sentinel.length === 1) return byte === sentinel[0];
return containsBytes(arr, sentinel, index);
});
}
function hexBytes(bytes) {
return bytes.reduce((str, byte) => str + byte.toString(16).padStart(2, "0"), "");
}
function assertByteArrayIsNotEmptyForCodec(codecDescription, bytes, offset = 0) {
if (bytes.length - offset <= 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY, {
codecDescription
});
}
}
function assertByteArrayHasEnoughBytesForCodec(codecDescription, expected, bytes, offset = 0) {
const bytesLength = bytes.length - offset;
if (bytesLength < expected) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH, {
bytesLength,
codecDescription,
expected
});
}
}
function assertByteArrayOffsetIsNotOutOfRange(codecDescription, offset, bytesLength) {
if (offset < 0 || offset > bytesLength) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE, {
bytesLength,
codecDescription,
offset
});
}
}
// src/add-codec-size-prefix.ts
function addEncoderSizePrefix(encoder, prefix) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
offset = prefix.write(encoderBytes.length, bytes, offset);
bytes.set(encoderBytes, offset);
return offset + encoderBytes.length;
});
if (isFixedSize(prefix) && isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: prefix.fixedSize + encoder.fixedSize, write });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const encoderMaxSize = isFixedSize(encoder) ? encoder.fixedSize : encoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && encoderMaxSize !== null ? prefixMaxSize + encoderMaxSize : null;
return createEncoder({
...encoder,
...maxSize !== null ? { maxSize } : {},
getSizeFromValue: (value) => {
const encoderSize = getEncodedSize(value, encoder);
return getEncodedSize(encoderSize, prefix) + encoderSize;
},
write
});
}
function addDecoderSizePrefix(decoder, prefix) {
const read = ((bytes, offset) => {
const [bigintSize, decoderOffset] = prefix.read(bytes, offset);
const size = Number(bigintSize);
offset = decoderOffset;
if (offset > 0 || bytes.length > size) {
bytes = bytes.slice(offset, offset + size);
}
assertByteArrayHasEnoughBytesForCodec("addDecoderSizePrefix", size, bytes);
return [decoder.decode(bytes), offset + size];
});
if (isFixedSize(prefix) && isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: prefix.fixedSize + decoder.fixedSize, read });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const decoderMaxSize = isFixedSize(decoder) ? decoder.fixedSize : decoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && decoderMaxSize !== null ? prefixMaxSize + decoderMaxSize : null;
return createDecoder({ ...decoder, ...maxSize !== null ? { maxSize } : {}, read });
}
function addCodecSizePrefix(codec, prefix) {
return combineCodec(addEncoderSizePrefix(codec, prefix), addDecoderSizePrefix(codec, prefix));
}
// src/array-buffers.ts
function toArrayBuffer(bytes, offset, length) {
const bytesOffset = bytes.byteOffset + (offset ?? 0);
const bytesLength = length ?? bytes.byteLength;
let buffer;
if (typeof SharedArrayBuffer === "undefined") {
buffer = bytes.buffer;
} else if (bytes.buffer instanceof SharedArrayBuffer) {
buffer = new ArrayBuffer(bytes.length);
new Uint8Array(buffer).set(new Uint8Array(bytes));
} else {
buffer = bytes.buffer;
}
return (bytesOffset === 0 || bytesOffset === -bytes.byteLength) && bytesLength === bytes.byteLength ? buffer : buffer.slice(bytesOffset, bytesOffset + bytesLength);
}
function createDecoderThatConsumesEntireByteArray(decoder) {
return createDecoder({
...decoder,
read(bytes, offset) {
const [value, newOffset] = decoder.read(bytes, offset);
if (bytes.length > newOffset) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, {
expectedLength: newOffset,
numExcessBytes: bytes.length - newOffset
});
}
return [value, newOffset];
}
});
}
// src/fix-codec-size.ts
function fixEncoderSize(encoder, fixedBytes) {
return createEncoder({
fixedSize: fixedBytes,
write: (value, bytes, offset) => {
const variableByteArray = encoder.encode(value);
const fixedByteArray = variableByteArray.length > fixedBytes ? variableByteArray.slice(0, fixedBytes) : variableByteArray;
bytes.set(fixedByteArray, offset);
return offset + fixedBytes;
}
});
}
function fixDecoderSize(decoder, fixedBytes) {
return createDecoder({
fixedSize: fixedBytes,
read: (bytes, offset) => {
assertByteArrayHasEnoughBytesForCodec("fixCodecSize", fixedBytes, bytes, offset);
if (offset > 0 || bytes.length > fixedBytes) {
bytes = bytes.slice(offset, offset + fixedBytes);
}
if (isFixedSize(decoder)) {
bytes = fixBytes(bytes, decoder.fixedSize);
}
const [value] = decoder.read(bytes, 0);
return [value, offset + fixedBytes];
}
});
}
function fixCodecSize(codec, fixedBytes) {
return combineCodec(fixEncoderSize(codec, fixedBytes), fixDecoderSize(codec, fixedBytes));
}
// src/offset-codec.ts
function offsetEncoder(encoder, config) {
return createEncoder({
...encoder,
write: (value, bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPreOffset, bytes.length);
const postOffset = encoder.write(value, bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPostOffset, bytes.length);
return newPostOffset;
}
});
}
function offsetDecoder(decoder, config) {
return createDecoder({
...decoder,
read: (bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPreOffset, bytes.length);
const [value, postOffset] = decoder.read(bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPostOffset, bytes.length);
return [value, newPostOffset];
}
});
}
function offsetCodec(codec, config) {
return combineCodec(offsetEncoder(codec, config), offsetDecoder(codec, config));
}
function modulo(dividend, divisor) {
if (divisor === 0) return 0;
return (dividend % divisor + divisor) % divisor;
}
function resizeEncoder(encoder, resize) {
if (isFixedSize(encoder)) {
const fixedSize = resize(encoder.fixedSize);
if (fixedSize < 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeEncoder"
});
}
return createEncoder({ ...encoder, fixedSize });
}
return createEncoder({
...encoder,
getSizeFromValue: (value) => {
const newSize = resize(encoder.getSizeFromValue(value));
if (newSize < 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: newSize,
codecDescription: "resizeEncoder"
});
}
return newSize;
}
});
}
function resizeDecoder(decoder, resize) {
if (isFixedSize(decoder)) {
const fixedSize = resize(decoder.fixedSize);
if (fixedSize < 0) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeDecoder"
});
}
return createDecoder({ ...decoder, fixedSize });
}
return decoder;
}
function resizeCodec(codec, resize) {
return combineCodec(resizeEncoder(codec, resize), resizeDecoder(codec, resize));
}
// src/pad-codec.ts
function padLeftEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftCodec(codec, offset) {
return combineCodec(padLeftEncoder(codec, offset), padLeftDecoder(codec, offset));
}
function padRightCodec(codec, offset) {
return combineCodec(padRightEncoder(codec, offset), padRightDecoder(codec, offset));
}
// src/reverse-codec.ts
function copySourceToTargetInReverse(source, target_WILL_MUTATE, sourceOffset, sourceLength, targetOffset = 0) {
while (sourceOffset < --sourceLength) {
const leftValue = source[sourceOffset];
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceLength];
target_WILL_MUTATE[sourceLength + targetOffset] = leftValue;
sourceOffset++;
}
if (sourceOffset === sourceLength) {
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceOffset];
}
}
function reverseEncoder(encoder) {
assertIsFixedSize(encoder);
return createEncoder({
...encoder,
write: (value, bytes, offset) => {
const newOffset = encoder.write(value, bytes, offset);
copySourceToTargetInReverse(
bytes,
bytes,
offset,
offset + encoder.fixedSize
);
return newOffset;
}
});
}
function reverseDecoder(decoder) {
assertIsFixedSize(decoder);
return createDecoder({
...decoder,
read: (bytes, offset) => {
const reversedBytes = bytes.slice();
copySourceToTargetInReverse(
bytes,
reversedBytes,
offset,
offset + decoder.fixedSize
);
return decoder.read(reversedBytes, offset);
}
});
}
function reverseCodec(codec) {
return combineCodec(reverseEncoder(codec), reverseDecoder(codec));
}
// src/transform-codec.ts
function transformEncoder(encoder, unmap) {
return createEncoder({
...isVariableSize(encoder) ? { ...encoder, getSizeFromValue: (value) => encoder.getSizeFromValue(unmap(value)) } : encoder,
write: (value, bytes, offset) => encoder.write(unmap(value), bytes, offset)
});
}
function transformDecoder(decoder, map) {
return createDecoder({
...decoder,
read: (bytes, offset) => {
const [value, newOffset] = decoder.read(bytes, offset);
return [map(value, bytes, offset), newOffset];
}
});
}
function transformCodec(codec, unmap, map) {
return createCodec({
...transformEncoder(codec, unmap),
read: map ? transformDecoder(codec, map).read : codec.read
});
}
exports.addCodecSentinel = addCodecSentinel;
exports.addCodecSizePrefix = addCodecSizePrefix;
exports.addDecoderSentinel = addDecoderSentinel;
exports.addDecoderSizePrefix = addDecoderSizePrefix;
exports.addEncoderSentinel = addEncoderSentinel;
exports.addEncoderSizePrefix = addEncoderSizePrefix;
exports.assertByteArrayHasEnoughBytesForCodec = assertByteArrayHasEnoughBytesForCodec;
exports.assertByteArrayIsNotEmptyForCodec = assertByteArrayIsNotEmptyForCodec;
exports.assertByteArrayOffsetIsNotOutOfRange = assertByteArrayOffsetIsNotOutOfRange;
exports.assertIsFixedSize = assertIsFixedSize;
exports.assertIsVariableSize = assertIsVariableSize;
exports.bytesEqual = bytesEqual;
exports.combineCodec = combineCodec;
exports.containsBytes = containsBytes;
exports.createCodec = createCodec;
exports.createDecoder = createDecoder;
exports.createDecoderThatConsumesEntireByteArray = createDecoderThatConsumesEntireByteArray;
exports.createEncoder = createEncoder;
exports.fixBytes = fixBytes;
exports.fixCodecSize = fixCodecSize;
exports.fixDecoderSize = fixDecoderSize;
exports.fixEncoderSize = fixEncoderSize;
exports.getEncodedSize = getEncodedSize;
exports.isFixedSize = isFixedSize;
exports.isVariableSize = isVariableSize;
exports.mergeBytes = mergeBytes;
exports.offsetCodec = offsetCodec;
exports.offsetDecoder = offsetDecoder;
exports.offsetEncoder = offsetEncoder;
exports.padBytes = padBytes;
exports.padLeftCodec = padLeftCodec;
exports.padLeftDecoder = padLeftDecoder;
exports.padLeftEncoder = padLeftEncoder;
exports.padRightCodec = padRightCodec;
exports.padRightDecoder = padRightDecoder;
exports.padRightEncoder = padRightEncoder;
exports.resizeCodec = resizeCodec;
exports.resizeDecoder = resizeDecoder;
exports.resizeEncoder = resizeEncoder;
exports.reverseCodec = reverseCodec;
exports.reverseDecoder = reverseDecoder;
exports.reverseEncoder = reverseEncoder;
exports.toArrayBuffer = toArrayBuffer;
exports.transformCodec = transformCodec;
exports.transformDecoder = transformDecoder;
exports.transformEncoder = transformEncoder;
//# sourceMappingURL=index.node.cjs.map

File diff suppressed because one or more lines are too long

493
node_modules/@solana/codecs-core/dist/index.node.mjs generated vendored Normal file

@@ -0,0 +1,493 @@
import { SolanaError, SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH, SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH, SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH, SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH, SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH, SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY, SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH, SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE, SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES, SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL } from '@solana/errors';
// src/add-codec-sentinel.ts
// src/bytes.ts
var mergeBytes = (byteArrays) => {
const nonEmptyByteArrays = byteArrays.filter((arr) => arr.length);
if (nonEmptyByteArrays.length === 0) {
return byteArrays.length ? byteArrays[0] : new Uint8Array();
}
if (nonEmptyByteArrays.length === 1) {
return nonEmptyByteArrays[0];
}
const totalLength = nonEmptyByteArrays.reduce((total, arr) => total + arr.length, 0);
const result = new Uint8Array(totalLength);
let offset = 0;
nonEmptyByteArrays.forEach((arr) => {
result.set(arr, offset);
offset += arr.length;
});
return result;
};
function padBytes(bytes, length) {
if (bytes.length >= length) return bytes;
const paddedBytes = new Uint8Array(length).fill(0);
paddedBytes.set(bytes);
return paddedBytes;
}
var fixBytes = (bytes, length) => padBytes(bytes.length <= length ? bytes : bytes.slice(0, length), length);
function containsBytes(data, bytes, offset) {
const slice = (offset === 0 || offset <= -data.byteLength) && data.length === bytes.length ? data : data.slice(offset, offset + bytes.length);
return bytesEqual(slice, bytes);
}
function bytesEqual(bytes1, bytes2) {
return bytes1.length === bytes2.length && bytes1.every((value, index) => value === bytes2[index]);
}
function getEncodedSize(value, encoder) {
return "fixedSize" in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
}
function createEncoder(encoder) {
return Object.freeze({
...encoder,
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, encoder));
encoder.write(value, bytes, 0);
return bytes;
}
});
}
function createDecoder(decoder) {
return Object.freeze({
...decoder,
decode: (bytes, offset = 0) => decoder.read(bytes, offset)[0]
});
}
function createCodec(codec) {
return Object.freeze({
...codec,
decode: (bytes, offset = 0) => codec.read(bytes, offset)[0],
encode: (value) => {
const bytes = new Uint8Array(getEncodedSize(value, codec));
codec.write(value, bytes, 0);
return bytes;
}
});
}
function isFixedSize(codec) {
return "fixedSize" in codec && typeof codec.fixedSize === "number";
}
function assertIsFixedSize(codec) {
if (!isFixedSize(codec)) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH);
}
}
function isVariableSize(codec) {
return !isFixedSize(codec);
}
function assertIsVariableSize(codec) {
if (!isVariableSize(codec)) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH);
}
}
function combineCodec(encoder, decoder) {
if (isFixedSize(encoder) !== isFixedSize(decoder)) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH);
}
if (isFixedSize(encoder) && isFixedSize(decoder) && encoder.fixedSize !== decoder.fixedSize) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH, {
decoderFixedSize: decoder.fixedSize,
encoderFixedSize: encoder.fixedSize
});
}
if (!isFixedSize(encoder) && !isFixedSize(decoder) && encoder.maxSize !== decoder.maxSize) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH, {
decoderMaxSize: decoder.maxSize,
encoderMaxSize: encoder.maxSize
});
}
return {
...decoder,
...encoder,
decode: decoder.decode,
encode: encoder.encode,
read: decoder.read,
write: encoder.write
};
}
// src/add-codec-sentinel.ts
function addEncoderSentinel(encoder, sentinel) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
if (findSentinelIndex(encoderBytes, sentinel) >= 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL, {
encodedBytes: encoderBytes,
hexEncodedBytes: hexBytes(encoderBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
bytes.set(encoderBytes, offset);
offset += encoderBytes.length;
bytes.set(sentinel, offset);
offset += sentinel.length;
return offset;
});
if (isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: encoder.fixedSize + sentinel.length, write });
}
return createEncoder({
...encoder,
...encoder.maxSize != null ? { maxSize: encoder.maxSize + sentinel.length } : {},
getSizeFromValue: (value) => encoder.getSizeFromValue(value) + sentinel.length,
write
});
}
function addDecoderSentinel(decoder, sentinel) {
const read = ((bytes, offset) => {
const candidateBytes = offset === 0 || offset <= -bytes.byteLength ? bytes : bytes.slice(offset);
const sentinelIndex = findSentinelIndex(candidateBytes, sentinel);
if (sentinelIndex === -1) {
throw new SolanaError(SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES, {
decodedBytes: candidateBytes,
hexDecodedBytes: hexBytes(candidateBytes),
hexSentinel: hexBytes(sentinel),
sentinel
});
}
const preSentinelBytes = candidateBytes.slice(0, sentinelIndex);
return [decoder.decode(preSentinelBytes), offset + preSentinelBytes.length + sentinel.length];
});
if (isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: decoder.fixedSize + sentinel.length, read });
}
return createDecoder({
...decoder,
...decoder.maxSize != null ? { maxSize: decoder.maxSize + sentinel.length } : {},
read
});
}
function addCodecSentinel(codec, sentinel) {
return combineCodec(addEncoderSentinel(codec, sentinel), addDecoderSentinel(codec, sentinel));
}
function findSentinelIndex(bytes, sentinel) {
return bytes.findIndex((byte, index, arr) => {
if (sentinel.length === 1) return byte === sentinel[0];
return containsBytes(arr, sentinel, index);
});
}
function hexBytes(bytes) {
return bytes.reduce((str, byte) => str + byte.toString(16).padStart(2, "0"), "");
}
function assertByteArrayIsNotEmptyForCodec(codecDescription, bytes, offset = 0) {
if (bytes.length - offset <= 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY, {
codecDescription
});
}
}
function assertByteArrayHasEnoughBytesForCodec(codecDescription, expected, bytes, offset = 0) {
const bytesLength = bytes.length - offset;
if (bytesLength < expected) {
throw new SolanaError(SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH, {
bytesLength,
codecDescription,
expected
});
}
}
function assertByteArrayOffsetIsNotOutOfRange(codecDescription, offset, bytesLength) {
if (offset < 0 || offset > bytesLength) {
throw new SolanaError(SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE, {
bytesLength,
codecDescription,
offset
});
}
}
// src/add-codec-size-prefix.ts
function addEncoderSizePrefix(encoder, prefix) {
const write = ((value, bytes, offset) => {
const encoderBytes = encoder.encode(value);
offset = prefix.write(encoderBytes.length, bytes, offset);
bytes.set(encoderBytes, offset);
return offset + encoderBytes.length;
});
if (isFixedSize(prefix) && isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: prefix.fixedSize + encoder.fixedSize, write });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const encoderMaxSize = isFixedSize(encoder) ? encoder.fixedSize : encoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && encoderMaxSize !== null ? prefixMaxSize + encoderMaxSize : null;
return createEncoder({
...encoder,
...maxSize !== null ? { maxSize } : {},
getSizeFromValue: (value) => {
const encoderSize = getEncodedSize(value, encoder);
return getEncodedSize(encoderSize, prefix) + encoderSize;
},
write
});
}
function addDecoderSizePrefix(decoder, prefix) {
const read = ((bytes, offset) => {
const [bigintSize, decoderOffset] = prefix.read(bytes, offset);
const size = Number(bigintSize);
offset = decoderOffset;
if (offset > 0 || bytes.length > size) {
bytes = bytes.slice(offset, offset + size);
}
assertByteArrayHasEnoughBytesForCodec("addDecoderSizePrefix", size, bytes);
return [decoder.decode(bytes), offset + size];
});
if (isFixedSize(prefix) && isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: prefix.fixedSize + decoder.fixedSize, read });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : prefix.maxSize ?? null;
const decoderMaxSize = isFixedSize(decoder) ? decoder.fixedSize : decoder.maxSize ?? null;
const maxSize = prefixMaxSize !== null && decoderMaxSize !== null ? prefixMaxSize + decoderMaxSize : null;
return createDecoder({ ...decoder, ...maxSize !== null ? { maxSize } : {}, read });
}
function addCodecSizePrefix(codec, prefix) {
return combineCodec(addEncoderSizePrefix(codec, prefix), addDecoderSizePrefix(codec, prefix));
}
// src/array-buffers.ts
function toArrayBuffer(bytes, offset, length) {
const bytesOffset = bytes.byteOffset + (offset ?? 0);
const bytesLength = length ?? bytes.byteLength;
let buffer;
if (typeof SharedArrayBuffer === "undefined") {
buffer = bytes.buffer;
} else if (bytes.buffer instanceof SharedArrayBuffer) {
buffer = new ArrayBuffer(bytes.length);
new Uint8Array(buffer).set(new Uint8Array(bytes));
} else {
buffer = bytes.buffer;
}
return (bytesOffset === 0 || bytesOffset === -bytes.byteLength) && bytesLength === bytes.byteLength ? buffer : buffer.slice(bytesOffset, bytesOffset + bytesLength);
}
function createDecoderThatConsumesEntireByteArray(decoder) {
return createDecoder({
...decoder,
read(bytes, offset) {
const [value, newOffset] = decoder.read(bytes, offset);
if (bytes.length > newOffset) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, {
expectedLength: newOffset,
numExcessBytes: bytes.length - newOffset
});
}
return [value, newOffset];
}
});
}
// src/fix-codec-size.ts
function fixEncoderSize(encoder, fixedBytes) {
return createEncoder({
fixedSize: fixedBytes,
write: (value, bytes, offset) => {
const variableByteArray = encoder.encode(value);
const fixedByteArray = variableByteArray.length > fixedBytes ? variableByteArray.slice(0, fixedBytes) : variableByteArray;
bytes.set(fixedByteArray, offset);
return offset + fixedBytes;
}
});
}
function fixDecoderSize(decoder, fixedBytes) {
return createDecoder({
fixedSize: fixedBytes,
read: (bytes, offset) => {
assertByteArrayHasEnoughBytesForCodec("fixCodecSize", fixedBytes, bytes, offset);
if (offset > 0 || bytes.length > fixedBytes) {
bytes = bytes.slice(offset, offset + fixedBytes);
}
if (isFixedSize(decoder)) {
bytes = fixBytes(bytes, decoder.fixedSize);
}
const [value] = decoder.read(bytes, 0);
return [value, offset + fixedBytes];
}
});
}
function fixCodecSize(codec, fixedBytes) {
return combineCodec(fixEncoderSize(codec, fixedBytes), fixDecoderSize(codec, fixedBytes));
}
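`fixEncoderSize` and `fixDecoderSize` above force a codec into an exact byte width by truncating oversized output and zero-padding undersized output. A minimal standalone sketch of that truncate-or-pad rule (mirroring the `fixBytes` helper the decoder calls):

```javascript
// Illustrative sketch: returns `bytes` at exactly `length` bytes,
// truncating when too long and zero-padding when too short.
function fixBytesSketch(bytes, length) {
  if (bytes.length === length) return bytes;
  if (bytes.length > length) return bytes.slice(0, length);
  const fixed = new Uint8Array(length); // zero-filled by default
  fixed.set(bytes);
  return fixed;
}

const truncated = fixBytesSketch(new Uint8Array([1, 2, 3, 4]), 2); // [1, 2]
const padded = fixBytesSketch(new Uint8Array([1, 2]), 4); // [1, 2, 0, 0]
```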
// src/offset-codec.ts
function offsetEncoder(encoder, config) {
return createEncoder({
...encoder,
write: (value, bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPreOffset, bytes.length);
const postOffset = encoder.write(value, bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetEncoder", newPostOffset, bytes.length);
return newPostOffset;
}
});
}
function offsetDecoder(decoder, config) {
return createDecoder({
...decoder,
read: (bytes, preOffset) => {
const wrapBytes = (offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPreOffset, bytes.length);
const [value, postOffset] = decoder.read(bytes, newPreOffset);
const newPostOffset = config.postOffset ? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes }) : postOffset;
assertByteArrayOffsetIsNotOutOfRange("offsetDecoder", newPostOffset, bytes.length);
return [value, newPostOffset];
}
});
}
function offsetCodec(codec, config) {
return combineCodec(offsetEncoder(codec, config), offsetDecoder(codec, config));
}
function modulo(dividend, divisor) {
if (divisor === 0) return 0;
return (dividend % divisor + divisor) % divisor;
}
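The `wrapBytes` helper handed to the `preOffset`/`postOffset` callbacks is built on the Euclidean `modulo` above, so negative offsets wrap around from the end of the byte array instead of staying negative:

```javascript
// Same Euclidean modulo as above: the result is always in [0, divisor),
// unlike JavaScript's `%` operator, which preserves the dividend's sign.
function moduloSketch(dividend, divisor) {
  if (divisor === 0) return 0;
  return ((dividend % divisor) + divisor) % divisor;
}

moduloSketch(-1, 10); // 9: offset -1 wraps to the last byte of a 10-byte array
moduloSketch(12, 10); // 2: offsets past the end wrap back to the start
moduloSketch(5, 0); // 0: guarded so an empty byte array cannot divide by zero
```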
function resizeEncoder(encoder, resize) {
if (isFixedSize(encoder)) {
const fixedSize = resize(encoder.fixedSize);
if (fixedSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeEncoder"
});
}
return createEncoder({ ...encoder, fixedSize });
}
return createEncoder({
...encoder,
getSizeFromValue: (value) => {
const newSize = resize(encoder.getSizeFromValue(value));
if (newSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: newSize,
codecDescription: "resizeEncoder"
});
}
return newSize;
}
});
}
function resizeDecoder(decoder, resize) {
if (isFixedSize(decoder)) {
const fixedSize = resize(decoder.fixedSize);
if (fixedSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: "resizeDecoder"
});
}
return createDecoder({ ...decoder, fixedSize });
}
return decoder;
}
function resizeCodec(codec, resize) {
return combineCodec(resizeEncoder(codec, resize), resizeDecoder(codec, resize));
}
// src/pad-codec.ts
function padLeftEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightEncoder(encoder, offset) {
return offsetEncoder(
resizeEncoder(encoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset }
);
}
function padRightDecoder(decoder, offset) {
return offsetDecoder(
resizeDecoder(decoder, (size) => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset }
);
}
function padLeftCodec(codec, offset) {
return combineCodec(padLeftEncoder(codec, offset), padLeftDecoder(codec, offset));
}
function padRightCodec(codec, offset) {
return combineCodec(padRightEncoder(codec, offset), padRightDecoder(codec, offset));
}
// src/reverse-codec.ts
function copySourceToTargetInReverse(source, target_WILL_MUTATE, sourceOffset, sourceLength, targetOffset = 0) {
while (sourceOffset < --sourceLength) {
const leftValue = source[sourceOffset];
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceLength];
target_WILL_MUTATE[sourceLength + targetOffset] = leftValue;
sourceOffset++;
}
if (sourceOffset === sourceLength) {
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceOffset];
}
}
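`copySourceToTargetInReverse` swaps bytes from both ends of the range toward the middle, with the trailing `if` copying the untouched middle byte of odd-length ranges. A standalone trace of the same loop over a subrange:

```javascript
// Illustrative sketch of the same swap loop: reverses
// source[sourceOffset..sourceLength) into target (which may alias source),
// shifted by targetOffset.
function copyInReverseSketch(source, target, sourceOffset, sourceLength, targetOffset = 0) {
  while (sourceOffset < --sourceLength) {
    const leftValue = source[sourceOffset];
    target[sourceOffset + targetOffset] = source[sourceLength];
    target[sourceLength + targetOffset] = leftValue;
    sourceOffset++;
  }
  // Odd-length ranges leave one middle byte that still needs copying.
  if (sourceOffset === sourceLength) {
    target[sourceOffset + targetOffset] = source[sourceOffset];
  }
}

const bytes = new Uint8Array([1, 2, 3, 4, 5]);
copyInReverseSketch(bytes, bytes, 1, 4); // reverse indices 1 through 3 in place
// bytes is now [1, 4, 3, 2, 5]
```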
function reverseEncoder(encoder) {
assertIsFixedSize(encoder);
return createEncoder({
...encoder,
write: (value, bytes, offset) => {
const newOffset = encoder.write(value, bytes, offset);
copySourceToTargetInReverse(
bytes,
bytes,
offset,
offset + encoder.fixedSize
);
return newOffset;
}
});
}
function reverseDecoder(decoder) {
assertIsFixedSize(decoder);
return createDecoder({
...decoder,
read: (bytes, offset) => {
const reversedBytes = bytes.slice();
copySourceToTargetInReverse(
bytes,
reversedBytes,
offset,
offset + decoder.fixedSize
);
return decoder.read(reversedBytes, offset);
}
});
}
function reverseCodec(codec) {
return combineCodec(reverseEncoder(codec), reverseDecoder(codec));
}
// src/transform-codec.ts
function transformEncoder(encoder, unmap) {
return createEncoder({
...isVariableSize(encoder) ? { ...encoder, getSizeFromValue: (value) => encoder.getSizeFromValue(unmap(value)) } : encoder,
write: (value, bytes, offset) => encoder.write(unmap(value), bytes, offset)
});
}
function transformDecoder(decoder, map) {
return createDecoder({
...decoder,
read: (bytes, offset) => {
const [value, newOffset] = decoder.read(bytes, offset);
return [map(value, bytes, offset), newOffset];
}
});
}
function transformCodec(codec, unmap, map) {
return createCodec({
...transformEncoder(codec, unmap),
read: map ? transformDecoder(codec, map).read : codec.read
});
}
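`transformDecoder` above reads with the wrapped decoder first, then passes the raw value through `map` without changing how many bytes are consumed. A standalone sketch using a hypothetical one-byte decoder (not the library's own codec):

```javascript
// Illustrative sketch: a hypothetical one-byte decoder plus the same
// read-then-map wrapper that transformDecoder applies above.
const u8DecoderSketch = {
  fixedSize: 1,
  read: (bytes, offset) => [bytes[offset], offset + 1],
};

const transformDecoderSketch = (decoder, map) => ({
  ...decoder,
  read: (bytes, offset) => {
    const [value, newOffset] = decoder.read(bytes, offset);
    return [map(value, bytes, offset), newOffset];
  },
});

// Map the raw number onto a boolean without changing byte consumption.
const boolDecoderSketch = transformDecoderSketch(u8DecoderSketch, (n) => n === 1);
const [value, nextOffset] = boolDecoderSketch.read(new Uint8Array([1, 0]), 0);
// value === true, nextOffset === 1
```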
export { addCodecSentinel, addCodecSizePrefix, addDecoderSentinel, addDecoderSizePrefix, addEncoderSentinel, addEncoderSizePrefix, assertByteArrayHasEnoughBytesForCodec, assertByteArrayIsNotEmptyForCodec, assertByteArrayOffsetIsNotOutOfRange, assertIsFixedSize, assertIsVariableSize, bytesEqual, combineCodec, containsBytes, createCodec, createDecoder, createDecoderThatConsumesEntireByteArray, createEncoder, fixBytes, fixCodecSize, fixDecoderSize, fixEncoderSize, getEncodedSize, isFixedSize, isVariableSize, mergeBytes, offsetCodec, offsetDecoder, offsetEncoder, padBytes, padLeftCodec, padLeftDecoder, padLeftEncoder, padRightCodec, padRightDecoder, padRightEncoder, resizeCodec, resizeDecoder, resizeEncoder, reverseCodec, reverseDecoder, reverseEncoder, toArrayBuffer, transformCodec, transformDecoder, transformEncoder };
//# sourceMappingURL=index.node.mjs.map

@@ -0,0 +1,70 @@
import { Codec, Decoder, Encoder, FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder, VariableSizeCodec, VariableSizeDecoder, VariableSizeEncoder } from './codec';
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Creates an encoder that writes a `Uint8Array` sentinel after the encoded value.
* This is useful to delimit the encoded value when being read by a decoder.
*
* See {@link addCodecSentinel} for more information.
*
* @typeParam TFrom - The type of the value to encode.
*
* @see {@link addCodecSentinel}
*/
export declare function addEncoderSentinel<TFrom>(encoder: FixedSizeEncoder<TFrom>, sentinel: ReadonlyUint8Array): FixedSizeEncoder<TFrom>;
export declare function addEncoderSentinel<TFrom>(encoder: Encoder<TFrom>, sentinel: ReadonlyUint8Array): VariableSizeEncoder<TFrom>;
/**
* Creates a decoder that continues reading until
* a given `Uint8Array` sentinel is found.
*
* See {@link addCodecSentinel} for more information.
*
* @typeParam TTo - The type of the decoded value.
*
* @see {@link addCodecSentinel}
*/
export declare function addDecoderSentinel<TTo>(decoder: FixedSizeDecoder<TTo>, sentinel: ReadonlyUint8Array): FixedSizeDecoder<TTo>;
export declare function addDecoderSentinel<TTo>(decoder: Decoder<TTo>, sentinel: ReadonlyUint8Array): VariableSizeDecoder<TTo>;
/**
* Creates a Codec that writes a given `Uint8Array` sentinel after the encoded
* value and, when decoding, continues reading until the sentinel is found.
*
* This sets a limit on variable-size codecs and tells us when to stop decoding.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @example
* ```ts
* const codec = addCodecSentinel(getUtf8Codec(), new Uint8Array([255, 255]));
* codec.encode('hello');
* // 0x68656c6c6fffff
* // | └-- Our sentinel.
* // └-- Our encoded string.
* ```
*
* @remarks
* Note that the sentinel _must not_ be present in the encoded data and
* _must_ be present in the decoded data for this to work.
* If this is not the case, dedicated errors will be thrown.
*
* ```ts
* const sentinel = new Uint8Array([108, 108]); // 'll'
* const codec = addCodecSentinel(getUtf8Codec(), sentinel);
*
* codec.encode('hello'); // Throws: sentinel is in encoded data.
* codec.decode(new Uint8Array([1, 2, 3])); // Throws: sentinel missing in decoded data.
* ```
*
* Separate {@link addEncoderSentinel} and {@link addDecoderSentinel} functions are also available.
*
* ```ts
* const bytes = addEncoderSentinel(getUtf8Encoder(), sentinel).encode('hello');
* const value = addDecoderSentinel(getUtf8Decoder(), sentinel).decode(bytes);
* ```
*
* @see {@link addEncoderSentinel}
* @see {@link addDecoderSentinel}
*/
export declare function addCodecSentinel<TFrom, TTo extends TFrom>(codec: FixedSizeCodec<TFrom, TTo>, sentinel: ReadonlyUint8Array): FixedSizeCodec<TFrom, TTo>;
export declare function addCodecSentinel<TFrom, TTo extends TFrom>(codec: Codec<TFrom, TTo>, sentinel: ReadonlyUint8Array): VariableSizeCodec<TFrom, TTo>;
//# sourceMappingURL=add-codec-sentinel.d.ts.map

@@ -0,0 +1 @@
{"version":3,"file":"add-codec-sentinel.d.ts","sourceRoot":"","sources":["../../src/add-codec-sentinel.ts"],"names":[],"mappings":"AAOA,OAAO,EACH,KAAK,EAGL,OAAO,EACP,OAAO,EACP,cAAc,EACd,gBAAgB,EAChB,gBAAgB,EAEhB,iBAAiB,EACjB,mBAAmB,EACnB,mBAAmB,EACtB,MAAM,SAAS,CAAC;AAEjB,OAAO,EAAE,kBAAkB,EAAE,MAAM,uBAAuB,CAAC;AAE3D;;;;;;;;;GASG;AACH,wBAAgB,kBAAkB,CAAC,KAAK,EACpC,OAAO,EAAE,gBAAgB,CAAC,KAAK,CAAC,EAChC,QAAQ,EAAE,kBAAkB,GAC7B,gBAAgB,CAAC,KAAK,CAAC,CAAC;AAC3B,wBAAgB,kBAAkB,CAAC,KAAK,EACpC,OAAO,EAAE,OAAO,CAAC,KAAK,CAAC,EACvB,QAAQ,EAAE,kBAAkB,GAC7B,mBAAmB,CAAC,KAAK,CAAC,CAAC;AAkC9B;;;;;;;;;GASG;AACH,wBAAgB,kBAAkB,CAAC,GAAG,EAClC,OAAO,EAAE,gBAAgB,CAAC,GAAG,CAAC,EAC9B,QAAQ,EAAE,kBAAkB,GAC7B,gBAAgB,CAAC,GAAG,CAAC,CAAC;AACzB,wBAAgB,kBAAkB,CAAC,GAAG,EAAE,OAAO,EAAE,OAAO,CAAC,GAAG,CAAC,EAAE,QAAQ,EAAE,kBAAkB,GAAG,mBAAmB,CAAC,GAAG,CAAC,CAAC;AA+BvH;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAwCG;AACH,wBAAgB,gBAAgB,CAAC,KAAK,EAAE,GAAG,SAAS,KAAK,EACrD,KAAK,EAAE,cAAc,CAAC,KAAK,EAAE,GAAG,CAAC,EACjC,QAAQ,EAAE,kBAAkB,GAC7B,cAAc,CAAC,KAAK,EAAE,GAAG,CAAC,CAAC;AAC9B,wBAAgB,gBAAgB,CAAC,KAAK,EAAE,GAAG,SAAS,KAAK,EACrD,KAAK,EAAE,KAAK,CAAC,KAAK,EAAE,GAAG,CAAC,EACxB,QAAQ,EAAE,kBAAkB,GAC7B,iBAAiB,CAAC,KAAK,EAAE,GAAG,CAAC,CAAC"}

@@ -0,0 +1,67 @@
import { Codec, Decoder, Encoder, FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder, VariableSizeCodec, VariableSizeDecoder, VariableSizeEncoder } from './codec';
type NumberEncoder = Encoder<bigint | number> | Encoder<number>;
type FixedSizeNumberEncoder<TSize extends number = number> = FixedSizeEncoder<bigint | number, TSize> | FixedSizeEncoder<number, TSize>;
type NumberDecoder = Decoder<bigint> | Decoder<number>;
type FixedSizeNumberDecoder<TSize extends number = number> = FixedSizeDecoder<bigint, TSize> | FixedSizeDecoder<number, TSize>;
type NumberCodec = Codec<bigint | number, bigint> | Codec<number>;
type FixedSizeNumberCodec<TSize extends number = number> = FixedSizeCodec<bigint | number, bigint, TSize> | FixedSizeCodec<number, number, TSize>;
/**
* Stores the size of the `encoder` in bytes as a prefix using the `prefix` encoder.
*
* See {@link addCodecSizePrefix} for more information.
*
* @typeParam TFrom - The type of the value to encode.
*
* @see {@link addCodecSizePrefix}
*/
export declare function addEncoderSizePrefix<TFrom>(encoder: FixedSizeEncoder<TFrom>, prefix: FixedSizeNumberEncoder): FixedSizeEncoder<TFrom>;
export declare function addEncoderSizePrefix<TFrom>(encoder: Encoder<TFrom>, prefix: NumberEncoder): VariableSizeEncoder<TFrom>;
/**
* Bounds the size of the nested `decoder` by reading its encoded `prefix`.
*
* See {@link addCodecSizePrefix} for more information.
*
* @typeParam TTo - The type of the decoded value.
*
* @see {@link addCodecSizePrefix}
*/
export declare function addDecoderSizePrefix<TTo>(decoder: FixedSizeDecoder<TTo>, prefix: FixedSizeNumberDecoder): FixedSizeDecoder<TTo>;
export declare function addDecoderSizePrefix<TTo>(decoder: Decoder<TTo>, prefix: NumberDecoder): VariableSizeDecoder<TTo>;
/**
* Stores the byte size of any given codec as an encoded number prefix.
*
* This sets a limit on variable-size codecs and tells us when to stop decoding.
* When encoding, the size of the encoded data is stored before the encoded data itself.
* When decoding, the size is read first to know how many bytes to read next.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @example
* For example, say we want to bound a variable-size base-58 string using a `u32` size prefix.
 * Here's how you can use the `addCodecSizePrefix` function to achieve that.
*
* ```ts
* const getU32Base58Codec = () => addCodecSizePrefix(getBase58Codec(), getU32Codec());
*
* getU32Base58Codec().encode('hello world');
* // 0x0b00000068656c6c6f20776f726c64
* // | └-- Our encoded base-58 string.
* // └-- Our encoded u32 size prefix.
* ```
*
* @remarks
* Separate {@link addEncoderSizePrefix} and {@link addDecoderSizePrefix} functions are also available.
*
* ```ts
* const bytes = addEncoderSizePrefix(getBase58Encoder(), getU32Encoder()).encode('hello');
* const value = addDecoderSizePrefix(getBase58Decoder(), getU32Decoder()).decode(bytes);
* ```
*
* @see {@link addEncoderSizePrefix}
* @see {@link addDecoderSizePrefix}
*/
export declare function addCodecSizePrefix<TFrom, TTo extends TFrom>(codec: FixedSizeCodec<TFrom, TTo>, prefix: FixedSizeNumberCodec): FixedSizeCodec<TFrom, TTo>;
export declare function addCodecSizePrefix<TFrom, TTo extends TFrom>(codec: Codec<TFrom, TTo>, prefix: NumberCodec): VariableSizeCodec<TFrom, TTo>;
export {};
//# sourceMappingURL=add-codec-size-prefix.d.ts.map

@@ -0,0 +1 @@
{"version":3,"file":"add-codec-size-prefix.d.ts","sourceRoot":"","sources":["../../src/add-codec-size-prefix.ts"],"names":[],"mappings":"AACA,OAAO,EACH,KAAK,EAGL,OAAO,EACP,OAAO,EACP,cAAc,EACd,gBAAgB,EAChB,gBAAgB,EAGhB,iBAAiB,EACjB,mBAAmB,EACnB,mBAAmB,EACtB,MAAM,SAAS,CAAC;AAGjB,KAAK,aAAa,GAAG,OAAO,CAAC,MAAM,GAAG,MAAM,CAAC,GAAG,OAAO,CAAC,MAAM,CAAC,CAAC;AAChE,KAAK,sBAAsB,CAAC,KAAK,SAAS,MAAM,GAAG,MAAM,IACnD,gBAAgB,CAAC,MAAM,GAAG,MAAM,EAAE,KAAK,CAAC,GACxC,gBAAgB,CAAC,MAAM,EAAE,KAAK,CAAC,CAAC;AACtC,KAAK,aAAa,GAAG,OAAO,CAAC,MAAM,CAAC,GAAG,OAAO,CAAC,MAAM,CAAC,CAAC;AACvD,KAAK,sBAAsB,CAAC,KAAK,SAAS,MAAM,GAAG,MAAM,IACnD,gBAAgB,CAAC,MAAM,EAAE,KAAK,CAAC,GAC/B,gBAAgB,CAAC,MAAM,EAAE,KAAK,CAAC,CAAC;AACtC,KAAK,WAAW,GAAG,KAAK,CAAC,MAAM,GAAG,MAAM,EAAE,MAAM,CAAC,GAAG,KAAK,CAAC,MAAM,CAAC,CAAC;AAClE,KAAK,oBAAoB,CAAC,KAAK,SAAS,MAAM,GAAG,MAAM,IACjD,cAAc,CAAC,MAAM,GAAG,MAAM,EAAE,MAAM,EAAE,KAAK,CAAC,GAC9C,cAAc,CAAC,MAAM,EAAE,MAAM,EAAE,KAAK,CAAC,CAAC;AAE5C;;;;;;;;GAQG;AACH,wBAAgB,oBAAoB,CAAC,KAAK,EACtC,OAAO,EAAE,gBAAgB,CAAC,KAAK,CAAC,EAChC,MAAM,EAAE,sBAAsB,GAC/B,gBAAgB,CAAC,KAAK,CAAC,CAAC;AAC3B,wBAAgB,oBAAoB,CAAC,KAAK,EAAE,OAAO,EAAE,OAAO,CAAC,KAAK,CAAC,EAAE,MAAM,EAAE,aAAa,GAAG,mBAAmB,CAAC,KAAK,CAAC,CAAC;AA8BxH;;;;;;;;GAQG;AACH,wBAAgB,oBAAoB,CAAC,GAAG,EACpC,OAAO,EAAE,gBAAgB,CAAC,GAAG,CAAC,EAC9B,MAAM,EAAE,sBAAsB,GAC/B,gBAAgB,CAAC,GAAG,CAAC,CAAC;AACzB,wBAAgB,oBAAoB,CAAC,GAAG,EAAE,OAAO,EAAE,OAAO,CAAC,GAAG,CAAC,EAAE,MAAM,EAAE,aAAa,GAAG,mBAAmB,CAAC,GAAG,CAAC,CAAC;AA0BlH;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAiCG;AACH,wBAAgB,kBAAkB,CAAC,KAAK,EAAE,GAAG,SAAS,KAAK,EACvD,KAAK,EAAE,cAAc,CAAC,KAAK,EAAE,GAAG,CAAC,EACjC,MAAM,EAAE,oBAAoB,GAC7B,cAAc,CAAC,KAAK,EAAE,GAAG,CAAC,CAAC;AAC9B,wBAAgB,kBAAkB,CAAC,KAAK,EAAE,GAAG,SAAS,KAAK,EACvD,KAAK,EAAE,KAAK,CAAC,KAAK,EAAE,GAAG,CAAC,EACxB,MAAM,EAAE,WAAW,GACpB,iBAAiB,CAAC,KAAK,EAAE,GAAG,CAAC,CAAC"}

@@ -0,0 +1,10 @@
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Converts a `Uint8Array` to an `ArrayBuffer`. If the underlying buffer is a `SharedArrayBuffer`,
* it will be copied to a non-shared buffer, for safety.
*
* @remarks
* Source: https://stackoverflow.com/questions/37228285/uint8array-to-arraybuffer
*/
export declare function toArrayBuffer(bytes: ReadonlyUint8Array | Uint8Array, offset?: number, length?: number): ArrayBuffer;
//# sourceMappingURL=array-buffers.d.ts.map

@@ -0,0 +1 @@
{"version":3,"file":"array-buffers.d.ts","sourceRoot":"","sources":["../../src/array-buffers.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,kBAAkB,EAAE,MAAM,uBAAuB,CAAC;AAE3D;;;;;;GAMG;AACH,wBAAgB,aAAa,CAAC,KAAK,EAAE,kBAAkB,GAAG,UAAU,EAAE,MAAM,CAAC,EAAE,MAAM,EAAE,MAAM,CAAC,EAAE,MAAM,GAAG,WAAW,CAenH"}

@@ -0,0 +1,62 @@
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Asserts that a given byte array is not empty (after the optional provided offset).
*
* Returns void if the byte array is not empty but throws a {@link SolanaError} otherwise.
*
* @param codecDescription - A description of the codec used by the assertion error.
* @param bytes - The byte array to check.
* @param offset - The offset from which to start checking the byte array.
* If provided, the byte array is considered empty if it has no bytes after the offset.
*
* @example
* ```ts
* const bytes = new Uint8Array([0x01, 0x02, 0x03]);
* assertByteArrayIsNotEmptyForCodec('myCodec', bytes); // OK
* assertByteArrayIsNotEmptyForCodec('myCodec', bytes, 1); // OK
* assertByteArrayIsNotEmptyForCodec('myCodec', bytes, 3); // Throws
* ```
*/
export declare function assertByteArrayIsNotEmptyForCodec(codecDescription: string, bytes: ReadonlyUint8Array | Uint8Array, offset?: number): void;
/**
* Asserts that a given byte array has enough bytes to decode
* (after the optional provided offset).
*
* Returns void if the byte array has at least the expected number
* of bytes but throws a {@link SolanaError} otherwise.
*
* @param codecDescription - A description of the codec used by the assertion error.
* @param expected - The minimum number of bytes expected in the byte array.
* @param bytes - The byte array to check.
* @param offset - The offset from which to start checking the byte array.
*
* @example
* ```ts
* const bytes = new Uint8Array([0x01, 0x02, 0x03]);
* assertByteArrayHasEnoughBytesForCodec('myCodec', 3, bytes); // OK
* assertByteArrayHasEnoughBytesForCodec('myCodec', 4, bytes); // Throws
* assertByteArrayHasEnoughBytesForCodec('myCodec', 2, bytes, 1); // OK
* assertByteArrayHasEnoughBytesForCodec('myCodec', 3, bytes, 1); // Throws
* ```
*/
export declare function assertByteArrayHasEnoughBytesForCodec(codecDescription: string, expected: number, bytes: ReadonlyUint8Array | Uint8Array, offset?: number): void;
/**
* Asserts that a given offset is within the byte array bounds.
* This range is between 0 and the byte array length and is inclusive.
* An offset equals to the byte array length is considered a valid offset
* as it allows the post-offset of codecs to signal the end of the byte array.
*
* @param codecDescription - A description of the codec used by the assertion error.
* @param offset - The offset to check.
* @param bytesLength - The length of the byte array from which the offset should be within bounds.
*
* @example
* ```ts
* const bytes = new Uint8Array([0x01, 0x02, 0x03]);
* assertByteArrayOffsetIsNotOutOfRange('myCodec', 0, bytes.length); // OK
* assertByteArrayOffsetIsNotOutOfRange('myCodec', 3, bytes.length); // OK
* assertByteArrayOffsetIsNotOutOfRange('myCodec', 4, bytes.length); // Throws
* ```
*/
export declare function assertByteArrayOffsetIsNotOutOfRange(codecDescription: string, offset: number, bytesLength: number): void;
//# sourceMappingURL=assertions.d.ts.map

@@ -0,0 +1 @@
{"version":3,"file":"assertions.d.ts","sourceRoot":"","sources":["../../src/assertions.ts"],"names":[],"mappings":"AAOA,OAAO,EAAE,kBAAkB,EAAE,MAAM,uBAAuB,CAAC;AAE3D;;;;;;;;;;;;;;;;;GAiBG;AACH,wBAAgB,iCAAiC,CAC7C,gBAAgB,EAAE,MAAM,EACxB,KAAK,EAAE,kBAAkB,GAAG,UAAU,EACtC,MAAM,SAAI,QAOb;AAED;;;;;;;;;;;;;;;;;;;;GAoBG;AACH,wBAAgB,qCAAqC,CACjD,gBAAgB,EAAE,MAAM,EACxB,QAAQ,EAAE,MAAM,EAChB,KAAK,EAAE,kBAAkB,GAAG,UAAU,EACtC,MAAM,SAAI,QAUb;AAED;;;;;;;;;;;;;;;;;GAiBG;AACH,wBAAgB,oCAAoC,CAAC,gBAAgB,EAAE,MAAM,EAAE,MAAM,EAAE,MAAM,EAAE,WAAW,EAAE,MAAM,QAQjH"}

node_modules/@solana/codecs-core/dist/types/bytes.d.ts generated vendored Normal file

@@ -0,0 +1,107 @@
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Concatenates an array of `Uint8Array`s into a single `Uint8Array`.
* Reuses the original byte array when applicable.
*
* @param byteArrays - The array of byte arrays to concatenate.
*
* @example
* ```ts
* const bytes1 = new Uint8Array([0x01, 0x02]);
* const bytes2 = new Uint8Array([]);
* const bytes3 = new Uint8Array([0x03, 0x04]);
* const bytes = mergeBytes([bytes1, bytes2, bytes3]);
* // ^ [0x01, 0x02, 0x03, 0x04]
* ```
*/
export declare const mergeBytes: (byteArrays: Uint8Array[]) => Uint8Array;
/**
* Pads a `Uint8Array` with zeroes to the specified length.
* If the array is longer than the specified length, it is returned as-is.
*
* @param bytes - The byte array to pad.
* @param length - The desired length of the byte array.
*
* @example
* Adds zeroes to the end of the byte array to reach the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02]);
* const paddedBytes = padBytes(bytes, 4);
* // ^ [0x01, 0x02, 0x00, 0x00]
* ```
*
* @example
* Returns the original byte array if it is already at the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02]);
* const paddedBytes = padBytes(bytes, 2);
* // bytes === paddedBytes
* ```
*/
export declare function padBytes(bytes: Uint8Array, length: number): Uint8Array;
export declare function padBytes(bytes: ReadonlyUint8Array, length: number): ReadonlyUint8Array;
/**
* Fixes a `Uint8Array` to the specified length.
* If the array is longer than the specified length, it is truncated.
* If the array is shorter than the specified length, it is padded with zeroes.
*
* @param bytes - The byte array to truncate or pad.
* @param length - The desired length of the byte array.
*
* @example
* Truncates the byte array to the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
* const fixedBytes = fixBytes(bytes, 2);
* // ^ [0x01, 0x02]
* ```
*
* @example
* Adds zeroes to the end of the byte array to reach the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02]);
* const fixedBytes = fixBytes(bytes, 4);
* // ^ [0x01, 0x02, 0x00, 0x00]
* ```
*
* @example
* Returns the original byte array if it is already at the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02]);
* const fixedBytes = fixBytes(bytes, 2);
* // bytes === fixedBytes
* ```
*/
export declare const fixBytes: (bytes: ReadonlyUint8Array | Uint8Array, length: number) => ReadonlyUint8Array | Uint8Array;
/**
* Returns true if and only if the provided `data` byte array contains
* the provided `bytes` byte array at the specified `offset`.
*
* @param data - The byte array in which to search for `bytes`.
* @param bytes - The byte sequence to search for.
* @param offset - The position in `data` where the search begins.
*
* @example
* ```ts
* const data = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
* const bytes = new Uint8Array([0x02, 0x03]);
* containsBytes(data, bytes, 1); // true
* containsBytes(data, bytes, 2); // false
* ```
*/
export declare function containsBytes(data: ReadonlyUint8Array | Uint8Array, bytes: ReadonlyUint8Array | Uint8Array, offset: number): boolean;
/**
* Returns true if and only if the provided `bytes1` and `bytes2` byte arrays are equal.
*
* @param bytes1 - The first byte array to compare.
* @param bytes2 - The second byte array to compare.
*
* @example
* ```ts
* const bytes1 = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
* const bytes2 = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
* bytesEqual(bytes1, bytes2); // true
* ```
*/
export declare function bytesEqual(bytes1: ReadonlyUint8Array | Uint8Array, bytes2: ReadonlyUint8Array | Uint8Array): boolean;
//# sourceMappingURL=bytes.d.ts.map

@@ -0,0 +1 @@
{"version":3,"file":"bytes.d.ts","sourceRoot":"","sources":["../../src/bytes.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,kBAAkB,EAAE,MAAM,uBAAuB,CAAC;AAE3D;;;;;;;;;;;;;;GAcG;AACH,eAAO,MAAM,UAAU,GAAI,YAAY,UAAU,EAAE,KAAG,UAkBrD,CAAC;AAEF;;;;;;;;;;;;;;;;;;;;;;GAsBG;AACH,wBAAgB,QAAQ,CAAC,KAAK,EAAE,UAAU,EAAE,MAAM,EAAE,MAAM,GAAG,UAAU,CAAC;AACxE,wBAAgB,QAAQ,CAAC,KAAK,EAAE,kBAAkB,EAAE,MAAM,EAAE,MAAM,GAAG,kBAAkB,CAAC;AAQxF;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GA+BG;AACH,eAAO,MAAM,QAAQ,GAAI,OAAO,kBAAkB,GAAG,UAAU,EAAE,QAAQ,MAAM,KAAG,kBAAkB,GAAG,UAC1B,CAAC;AAE9E;;;;;;;;;;;;;;;GAeG;AACH,wBAAgB,aAAa,CACzB,IAAI,EAAE,kBAAkB,GAAG,UAAU,EACrC,KAAK,EAAE,kBAAkB,GAAG,UAAU,EACtC,MAAM,EAAE,MAAM,GACf,OAAO,CAMT;AAED;;;;;;;;;;;;GAYG;AACH,wBAAgB,UAAU,CAAC,MAAM,EAAE,kBAAkB,GAAG,UAAU,EAAE,MAAM,EAAE,kBAAkB,GAAG,UAAU,GAAG,OAAO,CAEpH"}

node_modules/@solana/codecs-core/dist/types/codec.d.ts generated vendored Normal file

@@ -0,0 +1,827 @@
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Defines an offset in bytes.
*/
export type Offset = number;
/**
* An object that can encode a value of type {@link TFrom} into a {@link ReadonlyUint8Array}.
*
* This is a common interface for {@link FixedSizeEncoder} and {@link VariableSizeEncoder}.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
*
* @see {@link FixedSizeEncoder}
* @see {@link VariableSizeEncoder}
*/
type BaseEncoder<TFrom> = {
/** Encode the provided value and return the encoded bytes directly. */
readonly encode: (value: TFrom) => ReadonlyUint8Array<ArrayBuffer>;
/**
* Writes the encoded value into the provided byte array at the given offset.
* Returns the offset of the next byte after the encoded value.
*/
readonly write: (value: TFrom, bytes: Uint8Array, offset: Offset) => Offset;
};
/**
* An object that can encode a value of type {@link TFrom} into a fixed-size {@link ReadonlyUint8Array}.
*
* See {@link Encoder} to learn more about creating and composing encoders.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @example
* ```ts
* const encoder: FixedSizeEncoder<number, 4>;
* const bytes = encoder.encode(42);
* const size = encoder.fixedSize; // 4
* ```
*
* @see {@link Encoder}
* @see {@link VariableSizeEncoder}
*/
export type FixedSizeEncoder<TFrom, TSize extends number = number> = BaseEncoder<TFrom> & {
/** The fixed size of the encoded value in bytes. */
readonly fixedSize: TSize;
};
/**
* An object that can encode a value of type {@link TFrom} into a variable-size {@link ReadonlyUint8Array}.
*
* See {@link Encoder} to learn more about creating and composing encoders.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
*
* @example
* ```ts
* const encoder: VariableSizeEncoder<string>;
* const bytes = encoder.encode('hello');
* const size = encoder.getSizeFromValue('hello');
* ```
*
* @see {@link Encoder}
* @see {@link FixedSizeEncoder}
*/
export type VariableSizeEncoder<TFrom> = BaseEncoder<TFrom> & {
/** Returns the size of the encoded value in bytes for a given input. */
readonly getSizeFromValue: (value: TFrom) => number;
/** The maximum possible size of an encoded value in bytes, if applicable. */
readonly maxSize?: number;
};
/**
* An object that can encode a value of type {@link TFrom} into a {@link ReadonlyUint8Array}.
*
* An `Encoder` can be either:
* - A {@link FixedSizeEncoder}, where all encoded values have the same fixed size.
* - A {@link VariableSizeEncoder}, where encoded values can vary in size.
*
* @typeParam TFrom - The type of the value to encode.
*
* @example
* Encoding a value into a new byte array.
* ```ts
* const encoder: Encoder<string>;
* const bytes = encoder.encode('hello');
* ```
*
* @example
* Writing the encoded value into an existing byte array.
* ```ts
* const encoder: Encoder<string>;
* const bytes = new Uint8Array(100);
* const nextOffset = encoder.write('hello', bytes, 20);
* ```
*
* @remarks
* You may create `Encoders` manually using the {@link createEncoder} function but it is more common
* to compose multiple `Encoders` together using the various helpers of the `@solana/codecs` package.
*
* For instance, here's how you might create an `Encoder` for a `Person` object type that contains
* a `name` string and an `age` number:
*
* ```ts
* import { getStructEncoder, addEncoderSizePrefix, getUtf8Encoder, getU32Encoder } from '@solana/codecs';
*
* type Person = { name: string; age: number };
* const getPersonEncoder = (): Encoder<Person> =>
* getStructEncoder([
* ['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
* ['age', getU32Encoder()],
* ]);
* ```
*
* Note that composed `Encoder` types are clever enough to understand whether
* they are fixed-size or variable-size. In the example above, `getU32Encoder()` is
* a fixed-size encoder, while `addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())`
* is a variable-size encoder. This makes the final `Person` encoder a variable-size encoder.
*
* @see {@link FixedSizeEncoder}
* @see {@link VariableSizeEncoder}
* @see {@link createEncoder}
*/
export type Encoder<TFrom> = FixedSizeEncoder<TFrom> | VariableSizeEncoder<TFrom>;
/**
* An object that can decode a byte array into a value of type {@link TTo}.
*
* This is a common interface for {@link FixedSizeDecoder} and {@link VariableSizeDecoder}.
*
* @interface
* @typeParam TTo - The type of the decoded value.
*
* @see {@link FixedSizeDecoder}
* @see {@link VariableSizeDecoder}
*/
type BaseDecoder<TTo> = {
/** Decodes the provided byte array at the given offset (or zero) and returns the value directly. */
readonly decode: (bytes: ReadonlyUint8Array | Uint8Array, offset?: Offset) => TTo;
/**
* Reads the encoded value from the provided byte array at the given offset.
* Returns the decoded value and the offset of the next byte after the encoded value.
*/
readonly read: (bytes: ReadonlyUint8Array | Uint8Array, offset: Offset) => [TTo, Offset];
};
/**
* An object that can decode a fixed-size byte array into a value of type {@link TTo}.
*
* See {@link Decoder} to learn more about creating and composing decoders.
*
* @interface
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @example
* ```ts
* const decoder: FixedSizeDecoder<number, 4>;
* const value = decoder.decode(bytes);
* const size = decoder.fixedSize; // 4
* ```
*
* @see {@link Decoder}
* @see {@link VariableSizeDecoder}
*/
export type FixedSizeDecoder<TTo, TSize extends number = number> = BaseDecoder<TTo> & {
/** The fixed size of the encoded value in bytes. */
readonly fixedSize: TSize;
};
/**
* An object that can decode a variable-size byte array into a value of type {@link TTo}.
*
* See {@link Decoder} to learn more about creating and composing decoders.
*
* @interface
* @typeParam TTo - The type of the decoded value.
*
* @example
* ```ts
* const decoder: VariableSizeDecoder<number>;
* const value = decoder.decode(bytes);
* ```
*
* @see {@link Decoder}
 * @see {@link FixedSizeDecoder}
*/
export type VariableSizeDecoder<TTo> = BaseDecoder<TTo> & {
/** The maximum possible size of an encoded value in bytes, if applicable. */
readonly maxSize?: number;
};
/**
* An object that can decode a byte array into a value of type {@link TTo}.
*
 * A `Decoder` can be either:
* - A {@link FixedSizeDecoder}, where all byte arrays have the same fixed size.
* - A {@link VariableSizeDecoder}, where byte arrays can vary in size.
*
* @typeParam TTo - The type of the decoded value.
*
* @example
* Getting the decoded value from a byte array.
* ```ts
* const decoder: Decoder<string>;
* const value = decoder.decode(bytes);
* ```
*
* @example
* Reading the decoded value from a byte array at a specific offset
* and getting the offset of the next byte to read.
* ```ts
* const decoder: Decoder<string>;
 * const [value, nextOffset] = decoder.read(bytes, 20);
* ```
*
* @remarks
 * You may create `Decoders` manually using the {@link createDecoder} function, but it is more common
* to compose multiple `Decoders` together using the various helpers of the `@solana/codecs` package.
*
 * For instance, here's how you might create a `Decoder` for a `Person` object type that contains
* a `name` string and an `age` number:
*
* ```ts
* import { getStructDecoder, addDecoderSizePrefix, getUtf8Decoder, getU32Decoder } from '@solana/codecs';
*
* type Person = { name: string; age: number };
* const getPersonDecoder = (): Decoder<Person> =>
* getStructDecoder([
* ['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
* ['age', getU32Decoder()],
* ]);
* ```
*
* Note that composed `Decoder` types are clever enough to understand whether
* they are fixed-size or variable-size. In the example above, `getU32Decoder()` is
* a fixed-size decoder, while `addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())`
* is a variable-size decoder. This makes the final `Person` decoder a variable-size decoder.
*
* @see {@link FixedSizeDecoder}
* @see {@link VariableSizeDecoder}
* @see {@link createDecoder}
*/
export type Decoder<TTo> = FixedSizeDecoder<TTo> | VariableSizeDecoder<TTo>;
/**
* An object that can encode and decode a value to and from a fixed-size byte array.
*
* See {@link Codec} to learn more about creating and composing codecs.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @example
* ```ts
* const codec: FixedSizeCodec<number | bigint, bigint, 8>;
* const bytes = codec.encode(42);
* const value = codec.decode(bytes); // 42n
* const size = codec.fixedSize; // 8
* ```
*
* @see {@link Codec}
* @see {@link VariableSizeCodec}
*/
export type FixedSizeCodec<TFrom, TTo extends TFrom = TFrom, TSize extends number = number> = FixedSizeDecoder<TTo, TSize> & FixedSizeEncoder<TFrom, TSize>;
/**
* An object that can encode and decode a value to and from a variable-size byte array.
*
* See {@link Codec} to learn more about creating and composing codecs.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @example
* ```ts
* const codec: VariableSizeCodec<number | bigint, bigint>;
* const bytes = codec.encode(42);
* const value = codec.decode(bytes); // 42n
* const size = codec.getSizeFromValue(42);
* ```
*
* @see {@link Codec}
* @see {@link FixedSizeCodec}
*/
export type VariableSizeCodec<TFrom, TTo extends TFrom = TFrom> = VariableSizeDecoder<TTo> & VariableSizeEncoder<TFrom>;
/**
* An object that can encode and decode a value to and from a byte array.
*
* A `Codec` can be either:
* - A {@link FixedSizeCodec}, where all encoded values have the same fixed size.
* - A {@link VariableSizeCodec}, where encoded values can vary in size.
*
* @example
* ```ts
* const codec: Codec<string>;
* const bytes = codec.encode('hello');
* const value = codec.decode(bytes); // 'hello'
* ```
*
* @remarks
* For convenience, codecs can encode looser types than they decode.
* That is, type {@link TFrom} can be a superset of type {@link TTo}.
* For instance, a `Codec<bigint | number, bigint>` can encode both
* `bigint` and `number` values, but will always decode to a `bigint`.
*
* ```ts
* const codec: Codec<bigint | number, bigint>;
* const bytes = codec.encode(42);
* const value = codec.decode(bytes); // 42n
* ```
*
* It is worth noting that codecs are the union of encoders and decoders.
* This means that a `Codec<TFrom, TTo>` can be combined from an `Encoder<TFrom>`
* and a `Decoder<TTo>` using the {@link combineCodec} function. This is particularly
* useful for library authors who want to expose all three types of objects to their users.
*
* ```ts
* const encoder: Encoder<bigint | number>;
* const decoder: Decoder<bigint>;
* const codec: Codec<bigint | number, bigint> = combineCodec(encoder, decoder);
* ```
*
* Aside from combining encoders and decoders, codecs can also be created from scratch using
* the {@link createCodec} function but it is more common to compose multiple codecs together
* using the various helpers of the `@solana/codecs` package.
*
* For instance, here's how you might create a `Codec` for a `Person` object type that contains
* a `name` string and an `age` number:
*
* ```ts
* import { getStructCodec, addCodecSizePrefix, getUtf8Codec, getU32Codec } from '@solana/codecs';
*
* type Person = { name: string; age: number };
* const getPersonCodec = (): Codec<Person> =>
* getStructCodec([
* ['name', addCodecSizePrefix(getUtf8Codec(), getU32Codec())],
* ['age', getU32Codec()],
* ]);
* ```
*
* Note that composed `Codec` types are clever enough to understand whether
* they are fixed-size or variable-size. In the example above, `getU32Codec()` is
* a fixed-size codec, while `addCodecSizePrefix(getUtf8Codec(), getU32Codec())`
* is a variable-size codec. This makes the final `Person` codec a variable-size codec.
*
* @see {@link FixedSizeCodec}
* @see {@link VariableSizeCodec}
* @see {@link combineCodec}
* @see {@link createCodec}
*/
export type Codec<TFrom, TTo extends TFrom = TFrom> = FixedSizeCodec<TFrom, TTo> | VariableSizeCodec<TFrom, TTo>;
/**
* Gets the encoded size of a given value in bytes using the provided encoder.
*
* @typeParam TFrom - The type of the value to encode.
* @param value - The value to be encoded.
* @param encoder - The encoder used to determine the encoded size.
* @returns The size of the encoded value in bytes.
*
* @example
* ```ts
* const fixedSizeEncoder = { fixedSize: 4 };
* getEncodedSize(123, fixedSizeEncoder); // Returns 4.
*
* const variableSizeEncoder = { getSizeFromValue: (value: string) => value.length };
* getEncodedSize("hello", variableSizeEncoder); // Returns 5.
* ```
*
* @see {@link Encoder}
*/
export declare function getEncodedSize<TFrom>(value: TFrom, encoder: {
fixedSize: number;
} | {
getSizeFromValue: (value: TFrom) => number;
}): number;
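
The dispatch described above can be sketched in a few lines. This is a hypothetical re-implementation for illustration only (the name `getEncodedSizeSketch` and the `SizeSource` type are not part of the library): fixed-size encoders report their constant size, while variable-size encoders compute it from the value.

```typescript
// Hypothetical sketch of getEncodedSize's dispatch, for illustration only.
type SizeSource<TFrom> =
    | { fixedSize: number }
    | { getSizeFromValue: (value: TFrom) => number };

function getEncodedSizeSketch<TFrom>(value: TFrom, encoder: SizeSource<TFrom>): number {
    // A fixed-size encoder ignores the value entirely.
    return 'fixedSize' in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
}

console.log(getEncodedSizeSketch(123, { fixedSize: 4 })); // 4
console.log(getEncodedSizeSketch('hello', { getSizeFromValue: (v: string) => v.length })); // 5
```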
/**
* Creates an `Encoder` by filling in the missing `encode` function using the provided `write` function and
* either the `fixedSize` property (for {@link FixedSizeEncoder | FixedSizeEncoders}) or
* the `getSizeFromValue` function (for {@link VariableSizeEncoder | VariableSizeEncoders}).
*
* Instead of manually implementing `encode`, this utility leverages the existing `write` function
* and the size helpers to generate a complete encoder. The provided `encode` method will allocate
* a new `Uint8Array` of the correct size and use `write` to populate it.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The fixed size of the encoded value in bytes (for fixed-size encoders).
*
* @param encoder - An encoder object that implements `write`, but not `encode`.
* - If the encoder has a `fixedSize` property, it is treated as a {@link FixedSizeEncoder}.
* - Otherwise, it is treated as a {@link VariableSizeEncoder}.
*
* @returns A fully functional `Encoder` with both `write` and `encode` methods.
*
* @example
* Creating a custom fixed-size encoder.
* ```ts
* const encoder = createEncoder({
* fixedSize: 4,
* write: (value: number, bytes, offset) => {
* bytes.set(new Uint8Array([value]), offset);
* return offset + 4;
* },
* });
*
* const bytes = encoder.encode(42);
* // 0x2a000000
* ```
*
* @example
* Creating a custom variable-size encoder:
* ```ts
* const encoder = createEncoder({
* getSizeFromValue: (value: string) => value.length,
* write: (value: string, bytes, offset) => {
* const encodedValue = new TextEncoder().encode(value);
* bytes.set(encodedValue, offset);
* return offset + encodedValue.length;
* },
* });
*
* const bytes = encoder.encode("hello");
* // 0x68656c6c6f
* ```
*
* @remarks
* Note that, while `createEncoder` is useful for defining more complex encoders, it is more common to compose
* encoders together using the various helpers and primitives of the `@solana/codecs` package.
*
* Here are some alternative examples using codec primitives instead of `createEncoder`.
*
* ```ts
* // Fixed-size encoder for unsigned 32-bit integers.
* const encoder = getU32Encoder();
* const bytes = encoder.encode(42);
* // 0x2a000000
*
 * // Variable-size encoder for u32 size-prefixed UTF-8 strings.
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* const bytes = encoder.encode("hello");
* // 0x0500000068656c6c6f
*
* // Variable-size encoder for custom objects.
* type Person = { name: string; age: number };
* const encoder: Encoder<Person> = getStructEncoder([
* ['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
* ['age', getU32Encoder()],
* ]);
* const bytes = encoder.encode({ name: "Bob", age: 42 });
* // 0x03000000426f622a000000
* ```
*
* @see {@link Encoder}
* @see {@link FixedSizeEncoder}
* @see {@link VariableSizeEncoder}
* @see {@link getStructEncoder}
* @see {@link getU32Encoder}
* @see {@link getUtf8Encoder}
* @see {@link addEncoderSizePrefix}
*/
export declare function createEncoder<TFrom, TSize extends number>(encoder: Omit<FixedSizeEncoder<TFrom, TSize>, 'encode'>): FixedSizeEncoder<TFrom, TSize>;
export declare function createEncoder<TFrom>(encoder: Omit<VariableSizeEncoder<TFrom>, 'encode'>): VariableSizeEncoder<TFrom>;
export declare function createEncoder<TFrom>(encoder: Omit<FixedSizeEncoder<TFrom>, 'encode'> | Omit<VariableSizeEncoder<TFrom>, 'encode'>): Encoder<TFrom>;
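
A minimal sketch of what these overloads promise (hypothetical code, not the actual library implementation): `encode` allocates a zero-filled buffer of the right size and delegates to `write`.

```typescript
// Sketch only: how an `encode` method can be derived from `write` plus size info.
type WriteFn<TFrom> = (value: TFrom, bytes: Uint8Array, offset: number) => number;
type EncoderInput<TFrom> =
    | { fixedSize: number; write: WriteFn<TFrom> }
    | { getSizeFromValue: (value: TFrom) => number; write: WriteFn<TFrom> };

function createEncoderSketch<TFrom>(encoder: EncoderInput<TFrom>) {
    return {
        ...encoder,
        encode(value: TFrom): Uint8Array {
            const size = 'fixedSize' in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
            const bytes = new Uint8Array(size); // zero-initialized, so padding comes for free
            encoder.write(value, bytes, 0);
            return bytes;
        },
    };
}

// Little-endian u32 encoder built from the sketch.
const u32Encoder = createEncoderSketch<number>({
    fixedSize: 4,
    write: (value, bytes, offset) => {
        new DataView(bytes.buffer, bytes.byteOffset).setUint32(offset, value, true);
        return offset + 4;
    },
});

console.log(u32Encoder.encode(42)); // Uint8Array [ 42, 0, 0, 0 ]
```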
/**
* Creates a `Decoder` by filling in the missing `decode` function using the provided `read` function.
*
* Instead of manually implementing `decode`, this utility leverages the existing `read` function
* and the size properties to generate a complete decoder. The provided `decode` method will read
* from a `Uint8Array` at the given offset and return the decoded value.
*
* If the `fixedSize` property is provided, a {@link FixedSizeDecoder} will be created, otherwise
* a {@link VariableSizeDecoder} will be created.
*
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes (for fixed-size decoders).
*
* @param decoder - A decoder object that implements `read`, but not `decode`.
* - If the decoder has a `fixedSize` property, it is treated as a {@link FixedSizeDecoder}.
* - Otherwise, it is treated as a {@link VariableSizeDecoder}.
*
* @returns A fully functional `Decoder` with both `read` and `decode` methods.
*
* @example
* Creating a custom fixed-size decoder.
* ```ts
* const decoder = createDecoder({
* fixedSize: 4,
* read: (bytes, offset) => {
* const value = bytes[offset];
* return [value, offset + 4];
* },
* });
*
* const value = decoder.decode(new Uint8Array([42, 0, 0, 0]));
* // 42
* ```
*
* @example
* Creating a custom variable-size decoder:
* ```ts
* const decoder = createDecoder({
* read: (bytes, offset) => {
* const decodedValue = new TextDecoder().decode(bytes.subarray(offset));
* return [decodedValue, bytes.length];
* },
* });
*
* const value = decoder.decode(new Uint8Array([104, 101, 108, 108, 111]));
* // "hello"
* ```
*
* @remarks
* Note that, while `createDecoder` is useful for defining more complex decoders, it is more common to compose
* decoders together using the various helpers and primitives of the `@solana/codecs` package.
*
* Here are some alternative examples using codec primitives instead of `createDecoder`.
*
* ```ts
* // Fixed-size decoder for unsigned 32-bit integers.
* const decoder = getU32Decoder();
* const value = decoder.decode(new Uint8Array([42, 0, 0, 0]));
* // 42
*
 * // Variable-size decoder for u32 size-prefixed UTF-8 strings.
* const decoder = addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder());
* const value = decoder.decode(new Uint8Array([5, 0, 0, 0, 104, 101, 108, 108, 111]));
* // "hello"
*
* // Variable-size decoder for custom objects.
* type Person = { name: string; age: number };
* const decoder: Decoder<Person> = getStructDecoder([
* ['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
* ['age', getU32Decoder()],
* ]);
* const value = decoder.decode(new Uint8Array([3, 0, 0, 0, 66, 111, 98, 42, 0, 0, 0]));
* // { name: "Bob", age: 42 }
* ```
*
* @see {@link Decoder}
* @see {@link FixedSizeDecoder}
* @see {@link VariableSizeDecoder}
* @see {@link getStructDecoder}
* @see {@link getU32Decoder}
* @see {@link getUtf8Decoder}
* @see {@link addDecoderSizePrefix}
*/
export declare function createDecoder<TTo, TSize extends number>(decoder: Omit<FixedSizeDecoder<TTo, TSize>, 'decode'>): FixedSizeDecoder<TTo, TSize>;
export declare function createDecoder<TTo>(decoder: Omit<VariableSizeDecoder<TTo>, 'decode'>): VariableSizeDecoder<TTo>;
export declare function createDecoder<TTo>(decoder: Omit<FixedSizeDecoder<TTo>, 'decode'> | Omit<VariableSizeDecoder<TTo>, 'decode'>): Decoder<TTo>;
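
The decoder side mirrors the encoder sketch above: in this hypothetical illustration (not the library source), `decode` is simply `read` with the next-offset discarded.

```typescript
// Sketch only: deriving `decode` from `read`.
function createDecoderSketch<TTo>(decoder: {
    fixedSize?: number;
    read: (bytes: Uint8Array, offset: number) => [TTo, number];
}) {
    return {
        ...decoder,
        decode(bytes: Uint8Array, offset = 0): TTo {
            // Read at the given offset and keep only the value.
            return decoder.read(bytes, offset)[0];
        },
    };
}

// Little-endian u32 decoder built from the sketch.
const u32Decoder = createDecoderSketch<number>({
    fixedSize: 4,
    read: (bytes, offset) => [
        new DataView(bytes.buffer, bytes.byteOffset).getUint32(offset, true),
        offset + 4,
    ],
});

console.log(u32Decoder.decode(new Uint8Array([42, 0, 0, 0]))); // 42
console.log(u32Decoder.decode(new Uint8Array([0, 42, 0, 0, 0]), 1)); // 42
```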
/**
* Creates a `Codec` by filling in the missing `encode` and `decode` functions using the provided `write` and `read` functions.
*
* This utility combines the behavior of {@link createEncoder} and {@link createDecoder} to produce a fully functional `Codec`.
* The `encode` method is derived from the `write` function, while the `decode` method is derived from the `read` function.
*
* If the `fixedSize` property is provided, a {@link FixedSizeCodec} will be created, otherwise
* a {@link VariableSizeCodec} will be created.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes (for fixed-size codecs).
*
* @param codec - A codec object that implements `write` and `read`, but not `encode` or `decode`.
* - If the codec has a `fixedSize` property, it is treated as a {@link FixedSizeCodec}.
* - Otherwise, it is treated as a {@link VariableSizeCodec}.
*
* @returns A fully functional `Codec` with `write`, `read`, `encode`, and `decode` methods.
*
* @example
* Creating a custom fixed-size codec.
* ```ts
* const codec = createCodec({
* fixedSize: 4,
* read: (bytes, offset) => {
* const value = bytes[offset];
* return [value, offset + 4];
* },
* write: (value: number, bytes, offset) => {
* bytes.set(new Uint8Array([value]), offset);
* return offset + 4;
* },
* });
*
* const bytes = codec.encode(42);
* // 0x2a000000
* const value = codec.decode(bytes);
* // 42
* ```
*
* @example
* Creating a custom variable-size codec:
* ```ts
* const codec = createCodec({
* getSizeFromValue: (value: string) => value.length,
* read: (bytes, offset) => {
* const decodedValue = new TextDecoder().decode(bytes.subarray(offset));
* return [decodedValue, bytes.length];
* },
* write: (value: string, bytes, offset) => {
* const encodedValue = new TextEncoder().encode(value);
* bytes.set(encodedValue, offset);
* return offset + encodedValue.length;
* },
* });
*
* const bytes = codec.encode("hello");
* // 0x68656c6c6f
* const value = codec.decode(bytes);
* // "hello"
* ```
*
* @remarks
* This function effectively combines the behavior of {@link createEncoder} and {@link createDecoder}.
* If you only need to encode or decode (but not both), consider using those functions instead.
*
* Here are some alternative examples using codec primitives instead of `createCodec`.
*
* ```ts
* // Fixed-size codec for unsigned 32-bit integers.
* const codec = getU32Codec();
* const bytes = codec.encode(42);
* // 0x2a000000
* const value = codec.decode(bytes);
* // 42
*
 * // Variable-size codec for u32 size-prefixed UTF-8 strings.
* const codec = addCodecSizePrefix(getUtf8Codec(), getU32Codec());
* const bytes = codec.encode("hello");
* // 0x0500000068656c6c6f
* const value = codec.decode(bytes);
* // "hello"
*
* // Variable-size codec for custom objects.
* type Person = { name: string; age: number };
 * const codec: Codec<Person> = getStructCodec([
* ['name', addCodecSizePrefix(getUtf8Codec(), getU32Codec())],
* ['age', getU32Codec()],
* ]);
* const bytes = codec.encode({ name: "Bob", age: 42 });
* // 0x03000000426f622a000000
* const value = codec.decode(bytes);
* // { name: "Bob", age: 42 }
* ```
*
* @see {@link Codec}
* @see {@link FixedSizeCodec}
* @see {@link VariableSizeCodec}
* @see {@link createEncoder}
* @see {@link createDecoder}
* @see {@link getStructCodec}
* @see {@link getU32Codec}
* @see {@link getUtf8Codec}
* @see {@link addCodecSizePrefix}
*/
export declare function createCodec<TFrom, TTo extends TFrom = TFrom, TSize extends number = number>(codec: Omit<FixedSizeCodec<TFrom, TTo, TSize>, 'decode' | 'encode'>): FixedSizeCodec<TFrom, TTo, TSize>;
export declare function createCodec<TFrom, TTo extends TFrom = TFrom>(codec: Omit<VariableSizeCodec<TFrom, TTo>, 'decode' | 'encode'>): VariableSizeCodec<TFrom, TTo>;
export declare function createCodec<TFrom, TTo extends TFrom = TFrom>(codec: Omit<FixedSizeCodec<TFrom, TTo>, 'decode' | 'encode'> | Omit<VariableSizeCodec<TFrom, TTo>, 'decode' | 'encode'>): Codec<TFrom, TTo>;
/**
* Determines whether the given codec, encoder, or decoder is fixed-size.
*
* A fixed-size object is identified by the presence of a `fixedSize` property.
* If this property exists, the object is considered a {@link FixedSizeCodec},
* {@link FixedSizeEncoder}, or {@link FixedSizeDecoder}.
* Otherwise, it is assumed to be a {@link VariableSizeCodec},
* {@link VariableSizeEncoder}, or {@link VariableSizeDecoder}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
* @returns `true` if the object is fixed-size, `false` otherwise.
*
* @example
* Checking a fixed-size encoder.
* ```ts
* const encoder = getU32Encoder();
* isFixedSize(encoder); // true
* ```
*
* @example
* Checking a variable-size encoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* isFixedSize(encoder); // false
* ```
*
* @remarks
* This function is commonly used to distinguish between fixed-size and variable-size objects at runtime.
* If you need to enforce this distinction with type assertions, consider using {@link assertIsFixedSize}.
*
* @see {@link assertIsFixedSize}
*/
export declare function isFixedSize<TFrom, TSize extends number>(encoder: FixedSizeEncoder<TFrom, TSize> | VariableSizeEncoder<TFrom>): encoder is FixedSizeEncoder<TFrom, TSize>;
export declare function isFixedSize<TTo, TSize extends number>(decoder: FixedSizeDecoder<TTo, TSize> | VariableSizeDecoder<TTo>): decoder is FixedSizeDecoder<TTo, TSize>;
export declare function isFixedSize<TFrom, TTo extends TFrom, TSize extends number>(codec: FixedSizeCodec<TFrom, TTo, TSize> | VariableSizeCodec<TFrom, TTo>): codec is FixedSizeCodec<TFrom, TTo, TSize>;
export declare function isFixedSize<TSize extends number>(codec: {
fixedSize: TSize;
} | {
maxSize?: number;
}): codec is {
fixedSize: TSize;
};
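
At runtime the distinction boils down to a property probe, as the description above suggests. This is a sketch under that assumption, not the library source:

```typescript
// Sketch: fixed-size objects are exactly those that expose `fixedSize`.
function isFixedSizeSketch(codec: { fixedSize: number } | { maxSize?: number }): boolean {
    return 'fixedSize' in codec;
}

console.log(isFixedSizeSketch({ fixedSize: 4 })); // true
console.log(isFixedSizeSketch({ maxSize: 10 })); // false
console.log(isFixedSizeSketch({})); // false
```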
/**
* Asserts that the given codec, encoder, or decoder is fixed-size.
*
* If the object is not fixed-size (i.e., it lacks a `fixedSize` property),
* this function throws a {@link SolanaError} with the code `SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH`.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
* @throws {SolanaError} If the object is not fixed-size.
*
* @example
* Asserting a fixed-size encoder.
* ```ts
* const encoder = getU32Encoder();
* assertIsFixedSize(encoder); // Passes
* ```
*
* @example
* Attempting to assert a variable-size encoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* assertIsFixedSize(encoder); // Throws SolanaError
* ```
*
* @remarks
* This function is the assertion-based counterpart of {@link isFixedSize}.
* If you only need to check whether an object is fixed-size without throwing an error, use {@link isFixedSize} instead.
*
* @see {@link isFixedSize}
*/
export declare function assertIsFixedSize<TFrom, TSize extends number>(encoder: FixedSizeEncoder<TFrom, TSize> | VariableSizeEncoder<TFrom>): asserts encoder is FixedSizeEncoder<TFrom, TSize>;
export declare function assertIsFixedSize<TTo, TSize extends number>(decoder: FixedSizeDecoder<TTo, TSize> | VariableSizeDecoder<TTo>): asserts decoder is FixedSizeDecoder<TTo, TSize>;
export declare function assertIsFixedSize<TFrom, TTo extends TFrom, TSize extends number>(codec: FixedSizeCodec<TFrom, TTo, TSize> | VariableSizeCodec<TFrom, TTo>): asserts codec is FixedSizeCodec<TFrom, TTo, TSize>;
export declare function assertIsFixedSize<TSize extends number>(codec: {
fixedSize: TSize;
} | {
maxSize?: number;
}): asserts codec is {
fixedSize: TSize;
};
/**
* Determines whether the given codec, encoder, or decoder is variable-size.
*
* A variable-size object is identified by the absence of a `fixedSize` property.
* If this property is missing, the object is considered a {@link VariableSizeCodec},
* {@link VariableSizeEncoder}, or {@link VariableSizeDecoder}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @returns `true` if the object is variable-size, `false` otherwise.
*
* @example
* Checking a variable-size encoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* isVariableSize(encoder); // true
* ```
*
* @example
* Checking a fixed-size encoder.
* ```ts
* const encoder = getU32Encoder();
* isVariableSize(encoder); // false
* ```
*
* @remarks
* This function is the inverse of {@link isFixedSize}.
*
* @see {@link isFixedSize}
* @see {@link assertIsVariableSize}
*/
export declare function isVariableSize<TFrom>(encoder: Encoder<TFrom>): encoder is VariableSizeEncoder<TFrom>;
export declare function isVariableSize<TTo>(decoder: Decoder<TTo>): decoder is VariableSizeDecoder<TTo>;
export declare function isVariableSize<TFrom, TTo extends TFrom>(codec: Codec<TFrom, TTo>): codec is VariableSizeCodec<TFrom, TTo>;
export declare function isVariableSize(codec: {
fixedSize: number;
} | {
maxSize?: number;
}): codec is {
maxSize?: number;
};
/**
* Asserts that the given codec, encoder, or decoder is variable-size.
*
* If the object is not variable-size (i.e., it has a `fixedSize` property),
* this function throws a {@link SolanaError} with the code `SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH`.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @throws {SolanaError} If the object is not variable-size.
*
* @example
* Asserting a variable-size encoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* assertIsVariableSize(encoder); // Passes
* ```
*
* @example
* Attempting to assert a fixed-size encoder.
* ```ts
* const encoder = getU32Encoder();
* assertIsVariableSize(encoder); // Throws SolanaError
* ```
*
* @remarks
* This function is the assertion-based counterpart of {@link isVariableSize}.
* If you only need to check whether an object is variable-size without throwing an error, use {@link isVariableSize} instead.
*
* Also note that this function is the inverse of {@link assertIsFixedSize}.
*
* @see {@link isVariableSize}
* @see {@link assertIsFixedSize}
*/
export declare function assertIsVariableSize<TFrom>(encoder: Encoder<TFrom>): asserts encoder is VariableSizeEncoder<TFrom>;
export declare function assertIsVariableSize<TTo>(decoder: Decoder<TTo>): asserts decoder is VariableSizeDecoder<TTo>;
export declare function assertIsVariableSize<TFrom, TTo extends TFrom>(codec: Codec<TFrom, TTo>): asserts codec is VariableSizeCodec<TFrom, TTo>;
export declare function assertIsVariableSize(codec: {
fixedSize: number;
} | {
maxSize?: number;
}): asserts codec is {
maxSize?: number;
};
export {};
//# sourceMappingURL=codec.d.ts.map



@@ -0,0 +1,75 @@
import { Codec, Decoder, Encoder, FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder, VariableSizeCodec, VariableSizeDecoder, VariableSizeEncoder } from './codec';
/**
* Combines an `Encoder` and a `Decoder` into a `Codec`.
*
* That is, given a `Encoder<TFrom>` and a `Decoder<TTo>`, this function returns a `Codec<TFrom, TTo>`.
*
* This allows for modular composition by keeping encoding and decoding logic separate
* while still offering a convenient way to bundle them into a single `Codec`.
* This is particularly useful for library maintainers who want to expose `Encoders`,
* `Decoders`, and `Codecs` separately, enabling tree-shaking of unused logic.
*
* The provided `Encoder` and `Decoder` must be compatible in terms of:
* - **Fixed Size:** If both are fixed-size, they must have the same `fixedSize` value.
* - **Variable Size:** If either has a `maxSize` attribute, it must match the other.
*
* If these conditions are not met, a {@link SolanaError} will be thrown.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes (for fixed-size codecs).
*
* @param encoder - The `Encoder` to combine.
* @param decoder - The `Decoder` to combine.
* @returns A `Codec` that provides both `encode` and `decode` methods.
*
* @throws {SolanaError}
* - `SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH`
* Thrown if the encoder and decoder have mismatched size types (fixed vs. variable).
* - `SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH`
* Thrown if both are fixed-size but have different `fixedSize` values.
* - `SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH`
* Thrown if the `maxSize` attributes do not match.
*
* @example
* Creating a fixed-size `Codec` from an encoder and a decoder.
* ```ts
* const encoder = getU32Encoder();
* const decoder = getU32Decoder();
* const codec = combineCodec(encoder, decoder);
*
* const bytes = codec.encode(42); // 0x2a000000
* const value = codec.decode(bytes); // 42
* ```
*
* @example
* Creating a variable-size `Codec` from an encoder and a decoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* const decoder = addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder());
* const codec = combineCodec(encoder, decoder);
*
* const bytes = codec.encode("hello"); // 0x0500000068656c6c6f
* const value = codec.decode(bytes); // "hello"
* ```
*
* @remarks
* The recommended pattern for defining codecs in libraries is to expose separate functions for the encoder, decoder, and codec.
* This allows users to import only what they need, improving tree-shaking efficiency.
*
* ```ts
* type MyType = \/* ... *\/;
* const getMyTypeEncoder = (): Encoder<MyType> => { \/* ... *\/ };
* const getMyTypeDecoder = (): Decoder<MyType> => { \/* ... *\/ };
* const getMyTypeCodec = (): Codec<MyType> =>
* combineCodec(getMyTypeEncoder(), getMyTypeDecoder());
* ```
*
* @see {@link Codec}
* @see {@link Encoder}
* @see {@link Decoder}
*/
export declare function combineCodec<TFrom, TTo extends TFrom, TSize extends number>(encoder: FixedSizeEncoder<TFrom, TSize>, decoder: FixedSizeDecoder<TTo, TSize>): FixedSizeCodec<TFrom, TTo, TSize>;
export declare function combineCodec<TFrom, TTo extends TFrom>(encoder: VariableSizeEncoder<TFrom>, decoder: VariableSizeDecoder<TTo>): VariableSizeCodec<TFrom, TTo>;
export declare function combineCodec<TFrom, TTo extends TFrom>(encoder: Encoder<TFrom>, decoder: Decoder<TTo>): Codec<TFrom, TTo>;
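
To make the compatibility rules concrete, here is a hedged sketch of the merge. The types are simplified and a plain `Error` stands in for the `SolanaError` codes listed above:

```typescript
// Sketch: merge an encoder and a decoder after checking size compatibility.
type SizedEncoder<TFrom> = { fixedSize?: number; maxSize?: number; encode: (value: TFrom) => Uint8Array };
type SizedDecoder<TTo> = { fixedSize?: number; maxSize?: number; decode: (bytes: Uint8Array) => TTo };

function combineCodecSketch<TFrom, TTo>(encoder: SizedEncoder<TFrom>, decoder: SizedDecoder<TTo>) {
    // Both sides must agree on being fixed-size, and on the size itself.
    if (encoder.fixedSize !== decoder.fixedSize) {
        throw new Error('combineCodec: encoder/decoder fixedSize mismatch'); // stand-in for SolanaError
    }
    if (encoder.maxSize !== decoder.maxSize) {
        throw new Error('combineCodec: encoder/decoder maxSize mismatch');
    }
    return { ...encoder, ...decoder }; // `encode` from the encoder, `decode` from the decoder
}

const u8Encoder = { fixedSize: 1, encode: (value: number) => new Uint8Array([value]) };
const u8Decoder = { fixedSize: 1, decode: (bytes: Uint8Array) => bytes[0] };
const u8Codec = combineCodecSketch(u8Encoder, u8Decoder);
console.log(u8Codec.decode(u8Codec.encode(42))); // 42
```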
//# sourceMappingURL=combine-codec.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"combine-codec.d.ts","sourceRoot":"","sources":["../../src/combine-codec.ts"],"names":[],"mappings":"AAOA,OAAO,EACH,KAAK,EACL,OAAO,EACP,OAAO,EACP,cAAc,EACd,gBAAgB,EAChB,gBAAgB,EAEhB,iBAAiB,EACjB,mBAAmB,EACnB,mBAAmB,EACtB,MAAM,SAAS,CAAC;AAEjB;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAqEG;AACH,wBAAgB,YAAY,CAAC,KAAK,EAAE,GAAG,SAAS,KAAK,EAAE,KAAK,SAAS,MAAM,EACvE,OAAO,EAAE,gBAAgB,CAAC,KAAK,EAAE,KAAK,CAAC,EACvC,OAAO,EAAE,gBAAgB,CAAC,GAAG,EAAE,KAAK,CAAC,GACtC,cAAc,CAAC,KAAK,EAAE,GAAG,EAAE,KAAK,CAAC,CAAC;AACrC,wBAAgB,YAAY,CAAC,KAAK,EAAE,GAAG,SAAS,KAAK,EACjD,OAAO,EAAE,mBAAmB,CAAC,KAAK,CAAC,EACnC,OAAO,EAAE,mBAAmB,CAAC,GAAG,CAAC,GAClC,iBAAiB,CAAC,KAAK,EAAE,GAAG,CAAC,CAAC;AACjC,wBAAgB,YAAY,CAAC,KAAK,EAAE,GAAG,SAAS,KAAK,EACjD,OAAO,EAAE,OAAO,CAAC,KAAK,CAAC,EACvB,OAAO,EAAE,OAAO,CAAC,GAAG,CAAC,GACtB,KAAK,CAAC,KAAK,EAAE,GAAG,CAAC,CAAC"}


@@ -0,0 +1,29 @@
import { Decoder } from './codec';
/**
* Create a {@link Decoder} that asserts that the bytes provided to `decode` or `read` are fully consumed by the inner decoder
* @param decoder A decoder to wrap
* @returns A new decoder that will throw if provided with a byte array that it does not fully consume
*
* @typeParam T - The type of the decoder
*
* @remarks
 * Note that this compares the offset after decoding to the length of the input byte array
*
* The `offset` parameter to `decode` and `read` is still considered, and will affect the new offset that is compared to the byte array length
*
* The error that is thrown by the returned decoder is a {@link SolanaError} with the code `SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY`
*
* @example
* Create a decoder that decodes a `u32` (4 bytes) and ensures the entire byte array is consumed
* ```ts
 * const decoder = createDecoderThatConsumesEntireByteArray(getU32Decoder());
* decoder.decode(new Uint8Array([0, 0, 0, 0])); // 0
* decoder.decode(new Uint8Array([0, 0, 0, 0, 0])); // throws
*
* // with an offset
* decoder.decode(new Uint8Array([0, 0, 0, 0, 0]), 1); // 0
* decoder.decode(new Uint8Array([0, 0, 0, 0, 0, 0]), 1); // throws
* ```
*/
export declare function createDecoderThatConsumesEntireByteArray<T>(decoder: Decoder<T>): Decoder<T>;
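
A hedged sketch of the wrapping described above, with simplified types and a plain `Error` standing in for the `SolanaError` the real function throws:

```typescript
// Sketch: wrap a decoder so that any leftover bytes cause a throw.
type SimpleDecoder<T> = { read: (bytes: Uint8Array, offset: number) => [T, number] };

function consumeEntireByteArraySketch<T>(decoder: SimpleDecoder<T>) {
    const read = (bytes: Uint8Array, offset: number): [T, number] => {
        const [value, nextOffset] = decoder.read(bytes, offset);
        if (nextOffset !== bytes.length) {
            throw new Error('Decoder did not consume the entire byte array'); // stand-in for SolanaError
        }
        return [value, nextOffset];
    };
    return { read, decode: (bytes: Uint8Array, offset = 0): T => read(bytes, offset)[0] };
}

// A trivial u8 decoder to exercise the wrapper.
const u8 = { read: (bytes: Uint8Array, offset: number): [number, number] => [bytes[offset], offset + 1] };
const strict = consumeEntireByteArraySketch(u8);
console.log(strict.decode(new Uint8Array([7]))); // 7
console.log(strict.decode(new Uint8Array([9, 7]), 1)); // 7 (offset shifts the comparison point)
// strict.decode(new Uint8Array([7, 8])); // would throw: one byte left over
```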
//# sourceMappingURL=decoder-entire-byte-array.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"decoder-entire-byte-array.d.ts","sourceRoot":"","sources":["../../src/decoder-entire-byte-array.ts"],"names":[],"mappings":"AAEA,OAAO,EAAiB,OAAO,EAAE,MAAM,SAAS,CAAC;AAEjD;;;;;;;;;;;;;;;;;;;;;;;;;GAyBG;AACH,wBAAgB,wCAAwC,CAAC,CAAC,EAAE,OAAO,EAAE,OAAO,CAAC,CAAC,CAAC,GAAG,OAAO,CAAC,CAAC,CAAC,CAc3F"}


@@ -0,0 +1,111 @@
import { Codec, Decoder, Encoder, FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from './codec';
/**
* Creates a fixed-size encoder from a given encoder.
*
* The resulting encoder ensures that encoded values always have the specified number of bytes.
* If the original encoded value is larger than `fixedBytes`, it is truncated.
* If it is smaller, it is padded with trailing zeroes.
*
* For more details, see {@link fixCodecSize}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @param encoder - The encoder to wrap into a fixed-size encoder.
* @param fixedBytes - The fixed number of bytes to write.
* @returns A `FixedSizeEncoder` that ensures a consistent output size.
*
* @example
* ```ts
* const encoder = fixEncoderSize(getUtf8Encoder(), 4);
* encoder.encode("Hello"); // 0x48656c6c (truncated)
* encoder.encode("Hi"); // 0x48690000 (padded)
* encoder.encode("Hiya"); // 0x48697961 (same length)
* ```
*
* @remarks
* If you need a full codec with both encoding and decoding, use {@link fixCodecSize}.
*
* @see {@link fixCodecSize}
* @see {@link fixDecoderSize}
*/
export declare function fixEncoderSize<TFrom, TSize extends number>(encoder: Encoder<TFrom>, fixedBytes: TSize): FixedSizeEncoder<TFrom, TSize>;
/**
* Creates a fixed-size decoder from a given decoder.
*
* The resulting decoder always reads exactly `fixedBytes` bytes from the input.
* If the nested decoder is also fixed-size, the bytes are truncated or padded as needed.
*
* For more details, see {@link fixCodecSize}.
*
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @param decoder - The decoder to wrap into a fixed-size decoder.
* @param fixedBytes - The fixed number of bytes to read.
* @returns A `FixedSizeDecoder` that ensures a consistent input size.
*
* @example
* ```ts
* const decoder = fixDecoderSize(getUtf8Decoder(), 4);
* decoder.decode(new Uint8Array([72, 101, 108, 108, 111])); // "Hell" (truncated)
* decoder.decode(new Uint8Array([72, 105, 0, 0])); // "Hi" (zeroes ignored)
* decoder.decode(new Uint8Array([72, 105, 121, 97])); // "Hiya" (same length)
* ```
*
* @remarks
* If you need a full codec with both encoding and decoding, use {@link fixCodecSize}.
*
* @see {@link fixCodecSize}
* @see {@link fixEncoderSize}
*/
export declare function fixDecoderSize<TTo, TSize extends number>(decoder: Decoder<TTo>, fixedBytes: TSize): FixedSizeDecoder<TTo, TSize>;
/**
* Creates a fixed-size codec from a given codec.
*
* The resulting codec ensures that both encoding and decoding operate on a fixed number of bytes.
* When encoding:
* - If the encoded value is larger than `fixedBytes`, it is truncated.
* - If it is smaller, it is padded with trailing zeroes.
* - If it is exactly `fixedBytes`, it remains unchanged.
*
* When decoding:
* - Exactly `fixedBytes` bytes are read from the input.
* - If the nested decoder has a smaller fixed size, bytes are truncated or padded as necessary.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @param codec - The codec to wrap into a fixed-size codec.
* @param fixedBytes - The fixed number of bytes to read/write.
* @returns A `FixedSizeCodec` that ensures both encoding and decoding conform to a fixed size.
*
* @example
* ```ts
* const codec = fixCodecSize(getUtf8Codec(), 4);
*
* const bytes1 = codec.encode("Hello"); // 0x48656c6c (truncated)
* const value1 = codec.decode(bytes1); // "Hell"
*
* const bytes2 = codec.encode("Hi"); // 0x48690000 (padded)
* const value2 = codec.decode(bytes2); // "Hi"
*
* const bytes3 = codec.encode("Hiya"); // 0x48697961 (same length)
* const value3 = codec.decode(bytes3); // "Hiya"
* ```
*
* @remarks
* If you only need to enforce a fixed size for encoding, use {@link fixEncoderSize}.
* If you only need to enforce a fixed size for decoding, use {@link fixDecoderSize}.
*
* ```ts
* const bytes = fixEncoderSize(getUtf8Encoder(), 4).encode("Hiya");
* const value = fixDecoderSize(getUtf8Decoder(), 4).decode(bytes);
* ```
*
* @see {@link fixEncoderSize}
* @see {@link fixDecoderSize}
*/
export declare function fixCodecSize<TFrom, TTo extends TFrom, TSize extends number>(codec: Codec<TFrom, TTo>, fixedBytes: TSize): FixedSizeCodec<TFrom, TTo, TSize>;
//# sourceMappingURL=fix-codec-size.d.ts.map

node_modules/@solana/codecs-core/dist/types/index.d.ts
/**
* This package contains the core types and functions for encoding and decoding data structures on Solana. It can be used standalone, but it is also exported as part of Kit [`@solana/kit`](https://github.com/anza-xyz/kit/tree/main/packages/kit).
*
* This package is also part of the [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs) which acts as an entry point for all codec packages as well as for their documentation.
*
* ## Composing codecs
*
* The easiest way to create your own codecs is to compose the [various codecs](https://github.com/anza-xyz/kit/tree/main/packages/codecs) offered by this library. For instance, heres how you would define a codec for a `Person` object that contains a `name` string attribute and an `age` number stored in 4 bytes.
*
* ```ts
* type Person = { name: string; age: number };
* const getPersonCodec = (): Codec<Person> =>
* getStructCodec([
* ['name', addCodecSizePrefix(getUtf8Codec(), getU32Codec())],
* ['age', getU32Codec()],
* ]);
* ```
*
* This function returns a `Codec` object which contains both an `encode` and `decode` function that can be used to convert a `Person` type to and from a `Uint8Array`.
*
* ```ts
* const personCodec = getPersonCodec();
* const bytes = personCodec.encode({ name: 'John', age: 42 });
* const person = personCodec.decode(bytes);
* ```
*
* There is a significant library of composable codecs at your disposal, enabling you to compose complex types. You may be interested in the documentation of these other packages to learn more about them:
*
* - [`@solana/codecs-numbers`](https://github.com/anza-xyz/kit/tree/main/packages/codecs-numbers) for number codecs.
* - [`@solana/codecs-strings`](https://github.com/anza-xyz/kit/tree/main/packages/codecs-strings) for string codecs.
* - [`@solana/codecs-data-structures`](https://github.com/anza-xyz/kit/tree/main/packages/codecs-data-structures) for many data structure codecs such as objects, arrays, tuples, sets, maps, enums, discriminated unions, booleans, etc.
* - [`@solana/options`](https://github.com/anza-xyz/kit/tree/main/packages/options) for a Rust-like `Option` type and associated codec.
*
* You may also be interested in some of the helpers of this `@solana/codecs-core` library such as `transformCodec`, `fixCodecSize` or `reverseCodec` that create new codecs from existing ones.
*
* Note that all of these libraries are included in the [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs) as well as the main `@solana/kit` package for your convenience.
*
* ## Composing encoders and decoders
*
* Whilst Codecs can both encode and decode, it is possible to only focus on encoding or decoding data, enabling the unused logic to be tree-shaken. For instance, heres our previous example using Encoders only to encode a `Person` type.
*
* ```ts
* const getPersonEncoder = (): Encoder<Person> =>
* getStructEncoder([
* ['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
* ['age', getU32Encoder()],
* ]);
*
* const bytes = getPersonEncoder().encode({ name: 'John', age: 42 });
* ```
*
* The same can be done for decoding the `Person` type by using Decoders like so.
*
* ```ts
* const getPersonDecoder = (): Decoder<Person> =>
* getStructDecoder([
* ['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
* ['age', getU32Decoder()],
* ]);
*
* const person = getPersonDecoder().decode(bytes);
* ```
*
* ## Combining encoders and decoders
*
* Separating Codecs into Encoders and Decoders is particularly good practice for library maintainers as it allows their users to tree-shake any of the encoders and/or decoders they dont need. However, we may still want to offer a codec helper for users who need both for convenience.
*
* Thats why this library offers a `combineCodec` helper that creates a `Codec` instance from a matching `Encoder` and `Decoder`.
*
* ```ts
* const getPersonCodec = (): Codec<Person> => combineCodec(getPersonEncoder(), getPersonDecoder());
* ```
*
* This means library maintainers can offer Encoders, Decoders and Codecs for all their types whilst staying efficient and tree-shakeable. In summary, we recommend the following pattern when creating codecs for library types.
*
* ```ts
* type MyType = \/* ... *\/;
* const getMyTypeEncoder = (): Encoder<MyType> => { \/* ... *\/ };
* const getMyTypeDecoder = (): Decoder<MyType> => { \/* ... *\/ };
* const getMyTypeCodec = (): Codec<MyType> =>
* combineCodec(getMyTypeEncoder(), getMyTypeDecoder());
* ```
*
* ## Different From and To types
*
* When creating codecs, the encoded type is allowed to be looser than the decoded type. A good example of that is the u64 number codec:
*
* ```ts
* const u64Codec: Codec<number | bigint, bigint> = getU64Codec();
* ```
*
* As you can see, the first type parameter is looser since it accepts numbers or big integers, whereas the second type parameter only accepts big integers. Thats because when _encoding_ a u64 number, you may provide either a `bigint` or a `number` for convenience. However, when you decode a u64 number, you will always get a `bigint` because not all u64 values can fit in a JavaScript `number` type.
*
* ```ts
* const bytes = u64Codec.encode(42);
* const value = u64Codec.decode(bytes); // BigInt(42)
* ```
*
* This relationship between the type we encode “From” and decode “To” can be generalized in TypeScript as `To extends From`.
*
* Heres another example using an object with default values. You can read more about the `transformEncoder` helper below.
*
* ```ts
* type Person = { name: string, age: number };
* type PersonInput = { name: string, age?: number };
*
* const getPersonEncoder = (): Encoder<PersonInput> =>
* transformEncoder(
* getStructEncoder([
* ['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
* ['age', getU32Encoder()],
* ]),
 *         input => ({ ...input, age: input.age ?? 42 })
* );
*
* const getPersonDecoder = (): Decoder<Person> =>
* getStructDecoder([
* ['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
* ['age', getU32Decoder()],
* ]);
*
* const getPersonCodec = (): Codec<PersonInput, Person> =>
* combineCodec(getPersonEncoder(), getPersonDecoder())
* ```
*
* ## Fixed-size and variable-size codecs
*
* It is also worth noting that Codecs can either be of fixed size or variable size.
*
* `FixedSizeCodecs` have a `fixedSize` number attribute that tells us exactly how big their encoded data is in bytes.
*
* ```ts
* const myCodec: FixedSizeCodec<number> = getU32Codec();
* myCodec.fixedSize; // 4 bytes.
* ```
*
 * On the other hand, `VariableSizeCodecs` do not know the size of their encoded data in advance. Instead, they will grab that information either from the provided encoded data or from the value to encode. For the former, we can simply access the length of the `Uint8Array`. For the latter, they provide a `getSizeFromValue` function that tells us the encoded byte size of the provided value.
*
* ```ts
* const myCodec: VariableSizeCodec<string> = addCodecSizePrefix(getUtf8Codec(), getU32Codec());
* myCodec.getSizeFromValue('hello world'); // 4 + 11 bytes.
* ```
*
* Also note that, if the `VariableSizeCodec` is bounded by a maximum size, it can be provided as a `maxSize` number attribute.
*
* The following type guards are available to identify and/or assert the size of codecs: `isFixedSize`, `isVariableSize`, `assertIsFixedSize` and `assertIsVariableSize`.
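 *
 * For instance, here is a minimal sketch of how these guards can narrow a codec's type (assuming codecs from this package):
 *
 * ```ts
 * const codec = addCodecSizePrefix(getUtf8Codec(), getU32Codec());
 * if (isFixedSize(codec)) {
 *     codec.fixedSize; // Narrowed to a fixed-size codec.
 * } else {
 *     codec.getSizeFromValue('hello'); // Narrowed to a variable-size codec.
 * }
 * assertIsFixedSize(getU32Codec()); // Passes, since u32 is fixed-size.
 * ```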
*
* Finally, note that the same is true for `Encoders` and `Decoders`.
*
* - A `FixedSizeEncoder` has a `fixedSize` number attribute.
* - A `VariableSizeEncoder` has a `getSizeFromValue` function and an optional `maxSize` number attribute.
* - A `FixedSizeDecoder` has a `fixedSize` number attribute.
* - A `VariableSizeDecoder` has an optional `maxSize` number attribute.
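 *
 * To illustrate, a size-prefixed UTF-8 encoder is a `VariableSizeEncoder` whilst a `u32` decoder is a `FixedSizeDecoder` (a sketch using codecs from this package):
 *
 * ```ts
 * const encoder: VariableSizeEncoder<string> = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
 * encoder.getSizeFromValue('hello'); // 4 + 5 bytes.
 *
 * const decoder: FixedSizeDecoder<number> = getU32Decoder();
 * decoder.fixedSize; // 4 bytes.
 * ```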
*
* ## Creating custom codecs
*
* If composing codecs isnt enough for you, you may implement your own codec logic by using the `createCodec` function. This function requires an object with a `read` and a `write` function telling us how to read from and write to an existing byte array.
*
 * The `read` function accepts the `bytes` to decode from and the `offset` at which we should start reading. It returns an array with two items:
*
* - The first item should be the decoded value.
* - The second item should be the next offset to read from.
*
* ```ts
* createCodec({
* read(bytes, offset) {
* const value = bytes[offset];
* return [value, offset + 1];
* },
* // ...
* });
* ```
*
* Reciprocally, the `write` function accepts the `value` to encode, the array of `bytes` to write the encoded value to and the `offset` at which it should be written. It should encode the given value, insert it in the byte array, and provide the next offset to write to as the return value.
*
* ```ts
* createCodec({
* write(value, bytes, offset) {
* bytes.set(value, offset);
* return offset + 1;
* },
* // ...
* });
* ```
*
* Additionally, we must specify the size of the codec. If we are defining a `FixedSizeCodec`, we must simply provide the `fixedSize` number attribute. For `VariableSizeCodecs`, we must provide the `getSizeFromValue` function as described in the previous section.
*
* ```ts
* // FixedSizeCodec.
* createCodec({
* fixedSize: 1,
* // ...
* });
*
* // VariableSizeCodec.
* createCodec({
* getSizeFromValue: (value: string) => value.length,
* // ...
* });
* ```
*
* Heres a concrete example of a custom codec that encodes any unsigned integer in a single byte. Since a single byte can only store integers from 0 to 255, if any other integer is provided it will take its modulo 256 to ensure it fits in a single byte. Because it always requires a single byte, that codec is a `FixedSizeCodec` of size `1`.
*
* ```ts
* const getModuloU8Codec = () =>
* createCodec<number>({
* fixedSize: 1,
* read(bytes, offset) {
* const value = bytes[offset];
* return [value, offset + 1];
* },
* write(value, bytes, offset) {
 *             bytes.set([value % 256], offset);
* return offset + 1;
* },
* });
* ```
*
* Note that, it is also possible to create custom encoders and decoders separately by using the `createEncoder` and `createDecoder` functions respectively and then use the `combineCodec` function on them just like we were doing with composed codecs.
*
* This approach is recommended to library maintainers as it allows their users to tree-shake any of the encoders and/or decoders they dont need.
*
* Heres our previous modulo u8 example but split into separate `Encoder`, `Decoder` and `Codec` instances.
*
* ```ts
* const getModuloU8Encoder = () =>
* createEncoder<number>({
* fixedSize: 1,
* write(value, bytes, offset) {
 *             bytes.set([value % 256], offset);
* return offset + 1;
* },
* });
*
* const getModuloU8Decoder = () =>
* createDecoder<number>({
* fixedSize: 1,
* read(bytes, offset) {
* const value = bytes[offset];
* return [value, offset + 1];
* },
* });
*
* const getModuloU8Codec = () => combineCodec(getModuloU8Encoder(), getModuloU8Decoder());
* ```
*
* Heres another example returning a `VariableSizeCodec`. This one transforms a simple string composed of characters from `a` to `z` to a buffer of numbers from `1` to `26` where `0` bytes are spaces.
*
* ```ts
* const alphabet = ' abcdefghijklmnopqrstuvwxyz';
*
* const getCipherEncoder = () =>
* createEncoder<string>({
* getSizeFromValue: value => value.length,
* write(value, bytes, offset) {
* const bytesToAdd = [...value].map(char => alphabet.indexOf(char));
* bytes.set(bytesToAdd, offset);
* return offset + bytesToAdd.length;
* },
* });
*
* const getCipherDecoder = () =>
* createDecoder<string>({
* read(bytes, offset) {
* const value = [...bytes.slice(offset)].map(byte => alphabet.charAt(byte)).join('');
* return [value, bytes.length];
* },
* });
*
* const getCipherCodec = () => combineCodec(getCipherEncoder(), getCipherDecoder());
* ```
*
* ## Transforming codecs
*
* It is possible to transform a `Codec<T>` to a `Codec<U>` by providing two mapping functions: one that goes from `T` to `U` and one that does the opposite.
*
* For instance, heres how you would map a `u32` integer into a `string` representation of that number.
*
* ```ts
* const getStringU32Codec = () =>
* transformCodec(
* getU32Codec(),
* (integerAsString: string): number => parseInt(integerAsString),
* (integer: number): string => integer.toString(),
* );
*
* getStringU32Codec().encode('42'); // new Uint8Array([42])
* getStringU32Codec().decode(new Uint8Array([42])); // "42"
* ```
*
* If a `Codec` has [different From and To types](#different-from-and-to-types), say `Codec<OldFrom, OldTo>`, and we want to map it to `Codec<NewFrom, NewTo>`, we must provide functions that map from `NewFrom` to `OldFrom` and from `OldTo` to `NewTo`.
*
* To illustrate that, lets take our previous `getStringU32Codec` example but make it use a `getU64Codec` codec instead as it returns a `Codec<number | bigint, bigint>`. Additionally, lets make it so our `getStringU64Codec` function returns a `Codec<number | string, string>` so that it also accepts numbers when encoding values. Heres what our mapping functions look like:
*
* ```ts
* const getStringU64Codec = () =>
* transformCodec(
* getU64Codec(),
* (integerInput: number | string): number | bigint =>
 *             typeof integerInput === 'string' ? BigInt(integerInput) : integerInput,
* (integer: bigint): string => integer.toString(),
* );
* ```
*
* Note that the second function that maps the decoded type is optional. That means, you can omit it to simply update or loosen the type to encode whilst keeping the decoded type the same.
*
* This is particularly useful to provide default values to object structures. For instance, heres how we can map our `Person` codec to give a default value to its `age` attribute.
*
* ```ts
* type Person = { name: string; age: number; }
* const getPersonCodec = (): Codec<Person> => { \/* ... *\/ }
*
* type PersonInput = { name: string; age?: number; }
* const getPersonWithDefaultValueCodec = (): Codec<PersonInput, Person> =>
* transformCodec(
* getPersonCodec(),
 *         (person: PersonInput): Person => ({ ...person, age: person.age ?? 42 })
* )
* ```
*
* Similar helpers exist to map `Encoder` and `Decoder` instances allowing you to separate your codec logic into tree-shakeable functions. Heres our `getStringU32Codec` written that way.
*
* ```ts
* const getStringU32Encoder = () =>
* transformEncoder(getU32Encoder(), (integerAsString: string): number => parseInt(integerAsString));
* const getStringU32Decoder = () => transformDecoder(getU32Decoder(), (integer: number): string => integer.toString());
* const getStringU32Codec = () => combineCodec(getStringU32Encoder(), getStringU32Decoder());
* ```
*
* ## Fixing the size of codecs
*
* The `fixCodecSize` function allows you to bind the size of a given codec to the given fixed size.
*
* For instance, say you want to represent a base-58 string that uses exactly 32 bytes when decoded. Heres how you can use the `fixCodecSize` helper to achieve that.
*
* ```ts
* const get32BytesBase58Codec = () => fixCodecSize(getBase58Codec(), 32);
* ```
*
* You may also use the `fixEncoderSize` and `fixDecoderSize` functions to separate your codec logic like so:
*
* ```ts
* const get32BytesBase58Encoder = () => fixEncoderSize(getBase58Encoder(), 32);
* const get32BytesBase58Decoder = () => fixDecoderSize(getBase58Decoder(), 32);
* const get32BytesBase58Codec = () => combineCodec(get32BytesBase58Encoder(), get32BytesBase58Decoder());
* ```
*
* ## Prefixing codecs with their size
*
 * The `addCodecSizePrefix` function allows you to store the byte size of any codec as a number prefix. This makes it possible to know how many bytes to read when decoding variable-size data.
*
* When encoding, the size of the encoded data is stored before the encoded data itself. When decoding, the size is read first to know how many bytes to read next.
*
* For example, say we want to represent a variable-size base-58 string using a `u32` size prefix. Heres how you can use the `addCodecSizePrefix` function to achieve that.
*
* ```ts
* const getU32Base58Codec = () => addCodecSizePrefix(getBase58Codec(), getU32Codec());
*
* getU32Base58Codec().encode('hello world');
* // 0x0b00000068656c6c6f20776f726c64
* // | └-- Our encoded base-58 string.
* // └-- Our encoded u32 size prefix.
* ```
*
* You may also use the `addEncoderSizePrefix` and `addDecoderSizePrefix` functions to separate your codec logic like so:
*
* ```ts
* const getU32Base58Encoder = () => addEncoderSizePrefix(getBase58Encoder(), getU32Encoder());
* const getU32Base58Decoder = () => addDecoderSizePrefix(getBase58Decoder(), getU32Decoder());
* const getU32Base58Codec = () => combineCodec(getU32Base58Encoder(), getU32Base58Decoder());
* ```
*
* ## Adding sentinels to codecs
*
* Another way of delimiting the size of a codec is to use sentinels. The `addCodecSentinel` function allows us to add a sentinel to the end of the encoded data and to read until that sentinel is found when decoding. It accepts any codec and a `Uint8Array` sentinel responsible for delimiting the encoded data.
*
* ```ts
* const codec = addCodecSentinel(getUtf8Codec(), new Uint8Array([255, 255]));
* codec.encode('hello');
* // 0x68656c6c6fffff
* // | └-- Our sentinel.
* // └-- Our encoded string.
* ```
*
* Note that the sentinel _must not_ be present in the encoded data and _must_ be present in the decoded data for this to work. If this is not the case, dedicated errors will be thrown.
*
* ```ts
* const sentinel = new Uint8Array([108, 108]); // 'll'
* const codec = addCodecSentinel(getUtf8Codec(), sentinel);
*
* codec.encode('hello'); // Throws: sentinel is in encoded data.
* codec.decode(new Uint8Array([1, 2, 3])); // Throws: sentinel missing in decoded data.
* ```
*
* Separate `addEncoderSentinel` and `addDecoderSentinel` functions are also available.
*
* ```ts
* const bytes = addEncoderSentinel(getUtf8Encoder(), sentinel).encode('hello');
* const value = addDecoderSentinel(getUtf8Decoder(), sentinel).decode(bytes);
* ```
*
* ## Adjusting the size of codecs
*
* The `resizeCodec` helper re-defines the size of a given codec by accepting a function that takes the current size of the codec and returns a new size. This works for both fixed-size and variable-size codecs.
*
* ```ts
* // Fixed-size codec.
* const getBiggerU32Codec = () => resizeCodec(getU32Codec(), size => size + 4);
* getBiggerU32Codec().encode(42);
* // 0x2a00000000000000
* // | └-- Empty buffer space caused by the resizeCodec function.
* // └-- Our encoded u32 number.
*
* // Variable-size codec.
* const getBiggerUtf8Codec = () => resizeCodec(getUtf8Codec(), size => size + 4);
* getBiggerUtf8Codec().encode('ABC');
* // 0x41424300000000
* // | └-- Empty buffer space caused by the resizeCodec function.
* // └-- Our encoded string.
* ```
*
* Note that the `resizeCodec` function doesn't change any encoded or decoded bytes, it merely tells the `encode` and `decode` functions how big the `Uint8Array` should be before delegating to their respective `write` and `read` functions. In fact, this is completely bypassed when using the `write` and `read` functions directly. For instance:
*
* ```ts
* const getBiggerU32Codec = () => resizeCodec(getU32Codec(), size => size + 4);
*
* // Using the encode function.
* getBiggerU32Codec().encode(42);
* // 0x2a00000000000000
*
* // Using the lower-level write function.
* const myCustomBytes = new Uint8Array(4);
* getBiggerU32Codec().write(42, myCustomBytes, 0);
* // 0x2a000000
* ```
*
* So when would it make sense to use the `resizeCodec` function? This function is particularly useful when combined with the `offsetCodec` function described below. Whilst the `offsetCodec` may help us push the offset forward — e.g. to skip some padding — it won't change the size of the encoded data which means the last bytes will be truncated by how much we pushed the offset forward. The `resizeCodec` function can be used to fix that. For instance, here's how we can use the `resizeCodec` and the `offsetCodec` functions together to create a struct codec that includes some padding.
*
* ```ts
* const personCodec = getStructCodec([
* ['name', fixCodecSize(getUtf8Codec(), 8)],
* // There is a 4-byte padding between name and age.
* [
* 'age',
* offsetCodec(
* resizeCodec(getU32Codec(), size => size + 4),
* { preOffset: ({ preOffset }) => preOffset + 4 },
* ),
* ],
* ]);
*
* personCodec.encode({ name: 'Alice', age: 42 });
* // 0x416c696365000000000000002a000000
* // | | └-- Our encoded u32 (42).
* // | └-- The 4-bytes of padding we are skipping.
* // └-- Our 8-byte encoded string ("Alice").
* ```
*
* As usual, the `resizeEncoder` and `resizeDecoder` functions can also be used to achieve that.
*
* ```ts
 * const getBiggerU32Encoder = () => resizeEncoder(getU32Encoder(), size => size + 4);
 * const getBiggerU32Decoder = () => resizeDecoder(getU32Decoder(), size => size + 4);
* const getBiggerU32Codec = () => combineCodec(getBiggerU32Encoder(), getBiggerU32Decoder());
* ```
*
* ## Offsetting codecs
*
 * The `offsetCodec` function is a powerful codec primitive that allows you to move the offset of a given codec forward or backward. It accepts one or two functions that take the current offset and return a new offset.
*
* To understand how this works, let's take our previous `biggerU32Codec` example which encodes a `u32` number inside an 8-byte buffer.
*
* ```ts
* const biggerU32Codec = resizeCodec(getU32Codec(), size => size + 4);
* biggerU32Codec.encode(0xffffffff);
* // 0xffffffff00000000
* // | └-- Empty buffer space caused by the resizeCodec function.
* // └-- Our encoded u32 number.
* ```
*
 * Now, let's say we want to move the offset of that codec 2 bytes forward so that the encoded number sits in the middle of the buffer. To achieve this, we can use the `offsetCodec` helper and provide a `preOffset` function that moves the "pre-offset" of the codec 2 bytes forward.
*
* ```ts
* const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
* u32InTheMiddleCodec.encode(0xffffffff);
* // 0x0000ffffffff0000
* // └-- Our encoded u32 number is now in the middle of the buffer.
* ```
*
* We refer to this offset as the "pre-offset" because, once the inner codec is encoded or decoded, an additional offset will be returned which we refer to as the "post-offset". That "post-offset" is important as, unless we are reaching the end of our codec, it will be used by any further codecs to continue encoding or decoding data.
*
* By default, that "post-offset" is simply the addition of the "pre-offset" and the size of the encoded or decoded inner data.
*
* ```ts
* const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
* u32InTheMiddleCodec.encode(0xffffffff);
* // 0x0000ffffffff0000
* // | | └-- Post-offset.
* // | └-- New pre-offset: The original pre-offset + 2.
* // └-- Pre-offset: The original pre-offset before we adjusted it.
* ```
*
* However, you may also provide a `postOffset` function to adjust the "post-offset". For instance, let's push the "post-offset" 2 bytes forward as well such that any further codecs will start doing their job at the end of our 8-byte `u32` number.
*
* ```ts
* const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
* preOffset: ({ preOffset }) => preOffset + 2,
* postOffset: ({ postOffset }) => postOffset + 2,
* });
* u32InTheMiddleCodec.encode(0xffffffff);
* // 0x0000ffffffff0000
* // | | | └-- New post-offset: The original post-offset + 2.
* // | | └-- Post-offset: The original post-offset before we adjusted it.
* // | └-- New pre-offset: The original pre-offset + 2.
* // └-- Pre-offset: The original pre-offset before we adjusted it.
* ```
*
* Both the `preOffset` and `postOffset` functions offer the following attributes:
*
* - `bytes`: The entire byte array being encoded or decoded.
* - `preOffset`: The original and unaltered pre-offset.
* - `wrapBytes`: A helper function that wraps the given offset around the byte array length. E.g. `wrapBytes(-1)` will refer to the last byte of the byte array.
*
* Additionally, the post-offset function also provides the following attributes:
*
* - `newPreOffset`: The new pre-offset after the pre-offset function has been applied.
* - `postOffset`: The original and unaltered post-offset.
*
* Note that you may also decide to ignore these attributes to achieve absolute offsets. However, relative offsets are usually recommended as they won't break your codecs when composed with other codecs.
*
* ```ts
* const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
* preOffset: () => 2,
* postOffset: () => 8,
* });
* u32InTheMiddleCodec.encode(0xffffffff);
* // 0x0000ffffffff0000
* ```
*
* Also note that any negative offset or offset that exceeds the size of the byte array will throw a `SolanaError` of code `SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE`.
*
* ```ts
* const u32InTheEndCodec = offsetCodec(biggerU32Codec, { preOffset: () => -4 });
* u32InTheEndCodec.encode(0xffffffff);
* // throws new SolanaError(SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE)
* ```
*
* To avoid this, you may use the `wrapBytes` function to wrap the offset around the byte array length. For instance, here's how we can use the `wrapBytes` function to move the pre-offset 4 bytes from the end of the byte array.
*
* ```ts
* const u32InTheEndCodec = offsetCodec(biggerU32Codec, {
* preOffset: ({ wrapBytes }) => wrapBytes(-4),
* });
* u32InTheEndCodec.encode(0xffffffff);
* // 0x00000000ffffffff
* ```
*
* As you can see, the `offsetCodec` helper allows you to jump all over the place with your codecs. This non-linear approach to encoding and decoding data allows you to achieve complex serialization strategies that would otherwise be impossible.
*
* As usual, the `offsetEncoder` and `offsetDecoder` functions can also be used to split your codec logic into tree-shakeable functions.
*
* ```ts
* const getU32InTheMiddleEncoder = () => offsetEncoder(biggerU32Encoder, { preOffset: ({ preOffset }) => preOffset + 2 });
* const getU32InTheMiddleDecoder = () => offsetDecoder(biggerU32Decoder, { preOffset: ({ preOffset }) => preOffset + 2 });
* const getU32InTheMiddleCodec = () => combineCodec(getU32InTheMiddleEncoder(), getU32InTheMiddleDecoder());
* ```
*
* ## Padding codecs
*
* The `padLeftCodec` and `padRightCodec` helpers can be used to add padding to the left or right of a given codec. They accept an `offset` number that tells us how big the padding should be.
*
* ```ts
* const getLeftPaddedCodec = () => padLeftCodec(getU16Codec(), 4);
* getLeftPaddedCodec().encode(0xffff);
* // 0x00000000ffff
* // | └-- Our encoded u16 number.
* // └-- Our 4-byte padding.
*
* const getRightPaddedCodec = () => padRightCodec(getU16Codec(), 4);
* getRightPaddedCodec().encode(0xffff);
* // 0xffff00000000
* // | └-- Our 4-byte padding.
* // └-- Our encoded u16 number.
* ```
*
* Note that both the `padLeftCodec` and `padRightCodec` functions are simple wrappers around the `offsetCodec` and `resizeCodec` functions. For more complex padding strategies, you may want to use the `offsetCodec` and `resizeCodec` functions directly instead.
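 *
 * For instance, `padLeftCodec(getU16Codec(), 4)` behaves roughly like the following sketch (an illustration, not the exact implementation):
 *
 * ```ts
 * const leftPaddedU16Codec = offsetCodec(
 *     resizeCodec(getU16Codec(), size => size + 4),
 *     { preOffset: ({ preOffset }) => preOffset + 4 },
 * );
 * leftPaddedU16Codec.encode(0xffff); // 0x00000000ffff
 * ```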
*
* As usual, encoder-only and decoder-only helpers are available for these padding functions. Namely, `padLeftEncoder`, `padRightEncoder`, `padLeftDecoder` and `padRightDecoder`.
*
* ```ts
 * const getMyPaddedEncoder = () => padLeftEncoder(getU16Encoder(), 4);
 * const getMyPaddedDecoder = () => padLeftDecoder(getU16Decoder(), 4);
* const getMyPaddedCodec = () => combineCodec(getMyPaddedEncoder(), getMyPaddedDecoder());
* ```
*
* ## Reversing codecs
*
* The `reverseCodec` helper reverses the bytes of the provided `FixedSizeCodec`.
*
* ```ts
* const getBigEndianU64Codec = () => reverseCodec(getU64Codec());
* ```
*
* Note that number codecs can already do that for you via their `endian` option.
*
* ```ts
* const getBigEndianU64Codec = () => getU64Codec({ endian: Endian.Big });
* ```
*
* As usual, the `reverseEncoder` and `reverseDecoder` functions can also be used to achieve that.
*
* ```ts
* const getBigEndianU64Encoder = () => reverseEncoder(getU64Encoder());
* const getBigEndianU64Decoder = () => reverseDecoder(getU64Decoder());
* const getBigEndianU64Codec = () => combineCodec(getBigEndianU64Encoder(), getBigEndianU64Decoder());
* ```
*
* ## Byte helpers
*
* This package also provides utility functions for managing bytes such as:
*
* - `mergeBytes`: Concatenates an array of `Uint8Arrays` into a single `Uint8Array`.
* - `padBytes`: Pads a `Uint8Array` with zeroes (to the right) to the specified length.
* - `fixBytes`: Pads or truncates a `Uint8Array` so it has the specified length.
* - `containsBytes`: Checks if a `Uint8Array` contains another `Uint8Array` at a given offset.
*
* ```ts
* // Merge multiple Uint8Array buffers into one.
* mergeBytes([new Uint8Array([1, 2]), new Uint8Array([3, 4])]); // Uint8Array([1, 2, 3, 4])
*
* // Pad a Uint8Array buffer to the given size.
* padBytes(new Uint8Array([1, 2]), 4); // Uint8Array([1, 2, 0, 0])
* padBytes(new Uint8Array([1, 2, 3, 4]), 2); // Uint8Array([1, 2, 3, 4])
*
* // Pad and truncate a Uint8Array buffer to the given size.
* fixBytes(new Uint8Array([1, 2]), 4); // Uint8Array([1, 2, 0, 0])
* fixBytes(new Uint8Array([1, 2, 3, 4]), 2); // Uint8Array([1, 2])
*
* // Check if a Uint8Array contains another Uint8Array at a given offset.
* containsBytes(new Uint8Array([1, 2, 3, 4]), new Uint8Array([2, 3]), 1); // true
* containsBytes(new Uint8Array([1, 2, 3, 4]), new Uint8Array([2, 3]), 2); // false
* ```
*
* ---
*
* To read more about the available codecs and how to use them, check out the documentation of the main [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs).
*
* @packageDocumentation
*/
export * from './add-codec-sentinel';
export * from './add-codec-size-prefix';
export * from './array-buffers';
export * from './assertions';
export * from './bytes';
export * from './codec';
export * from './combine-codec';
export * from './decoder-entire-byte-array';
export * from './fix-codec-size';
export * from './offset-codec';
export * from './pad-codec';
export * from './readonly-uint8array';
export * from './resize-codec';
export * from './reverse-codec';
export * from './transform-codec';
//# sourceMappingURL=index.d.ts.map
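The `wrapBytes` helper used throughout these examples can be pictured as plain modular arithmetic over the byte array length. The sketch below is an illustration of that behaviour only, not the package's actual implementation:

```typescript
// Hypothetical sketch of the wrap-around arithmetic `wrapBytes` performs.
// Illustration only — not the library's actual implementation.
function wrapBytes(offset: number, byteLength: number): number {
    if (byteLength === 0) return 0;
    // Negative offsets count back from the end; out-of-range offsets wrap around.
    return ((offset % byteLength) + byteLength) % byteLength;
}

console.log(wrapBytes(-4, 8)); // 4 — i.e. the last 4 bytes of an 8-byte array
console.log(wrapBytes(10, 8)); // 2 — overflow wraps back to the start
```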


@@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../src/index.ts"],"names":[],"mappings":"AAAA;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GA4oBG;AACH,cAAc,sBAAsB,CAAC;AACrC,cAAc,yBAAyB,CAAC;AACxC,cAAc,iBAAiB,CAAC;AAChC,cAAc,cAAc,CAAC;AAC7B,cAAc,SAAS,CAAC;AACxB,cAAc,SAAS,CAAC;AACxB,cAAc,iBAAiB,CAAC;AAChC,cAAc,6BAA6B,CAAC;AAC5C,cAAc,kBAAkB,CAAC;AACjC,cAAc,gBAAgB,CAAC;AAC/B,cAAc,aAAa,CAAC;AAC5B,cAAc,uBAAuB,CAAC;AACtC,cAAc,gBAAgB,CAAC;AAC/B,cAAc,iBAAiB,CAAC;AAChC,cAAc,mBAAmB,CAAC"}


@@ -0,0 +1,328 @@
import { Codec, Decoder, Encoder, Offset } from './codec';
import { ReadonlyUint8Array } from './readonly-uint8array';
type AnyEncoder = Encoder<any>;
type AnyDecoder = Decoder<any>;
type AnyCodec = Codec<any>;
/**
* Configuration object for modifying the offset of an encoder, decoder, or codec.
*
* This type defines optional functions for adjusting the **pre-offset** (before encoding/decoding)
* and the **post-offset** (after encoding/decoding). These functions allow precise control
* over where data is written or read within a byte array.
*
* @property preOffset - A function that modifies the offset before encoding or decoding.
* @property postOffset - A function that modifies the offset after encoding or decoding.
*
* @example
* Moving the pre-offset forward by 2 bytes.
* ```ts
* const config: OffsetConfig = {
* preOffset: ({ preOffset }) => preOffset + 2,
* };
* ```
*
* @example
* Moving the post-offset forward by 2 bytes.
* ```ts
* const config: OffsetConfig = {
* postOffset: ({ postOffset }) => postOffset + 2,
* };
* ```
*
* @example
* Using both pre-offset and post-offset together.
* ```ts
* const config: OffsetConfig = {
* preOffset: ({ preOffset }) => preOffset + 2,
* postOffset: ({ postOffset }) => postOffset + 4,
* };
* ```
*
* @see {@link offsetEncoder}
* @see {@link offsetDecoder}
* @see {@link offsetCodec}
*/
type OffsetConfig = {
postOffset?: PostOffsetFunction;
preOffset?: PreOffsetFunction;
};
/**
* Scope provided to the `preOffset` and `postOffset` functions,
* containing contextual information about the current encoding or decoding process.
*
* The pre-offset function modifies where encoding or decoding begins,
* while the post-offset function modifies where the next operation continues.
*
* @property bytes - The entire byte array being encoded or decoded.
* @property preOffset - The original offset before encoding or decoding starts.
* @property wrapBytes - A helper function that wraps offsets around the byte array length.
*
* @example
* Using `wrapBytes` to wrap a negative offset to the end of the byte array.
* ```ts
* const config: OffsetConfig = {
* preOffset: ({ wrapBytes }) => wrapBytes(-4), // Moves to last 4 bytes
* };
* ```
*
* @example
* Adjusting the offset dynamically based on the byte array size.
* ```ts
* const config: OffsetConfig = {
* preOffset: ({ bytes }) => bytes.length > 10 ? 4 : 2,
* };
* ```
*
* @see {@link PreOffsetFunction}
* @see {@link PostOffsetFunction}
*/
type PreOffsetFunctionScope = {
/** The entire byte array. */
bytes: ReadonlyUint8Array | Uint8Array;
/** The original offset prior to encode or decode. */
preOffset: Offset;
/** Wraps the offset to the byte array length. */
wrapBytes: (offset: Offset) => Offset;
};
/**
* A function that modifies the pre-offset before encoding or decoding.
*
* This function is used to adjust the starting position before writing
* or reading data in a byte array.
*
* @param scope - The current encoding or decoding context.
* @returns The new offset at which encoding or decoding should start.
*
* @example
* Skipping the first 2 bytes before writing or reading.
* ```ts
* const preOffset: PreOffsetFunction = ({ preOffset }) => preOffset + 2;
* ```
*
* @example
* Wrapping the offset to ensure it stays within bounds.
* ```ts
* const preOffset: PreOffsetFunction = ({ wrapBytes, preOffset }) => wrapBytes(preOffset + 10);
* ```
*
* @see {@link OffsetConfig}
* @see {@link PreOffsetFunctionScope}
*/
type PreOffsetFunction = (scope: PreOffsetFunctionScope) => Offset;
/**
* A function that modifies the post-offset after encoding or decoding.
*
* This function adjusts where the next encoder or decoder should start
* after the current operation has completed.
*
* @param scope - The current encoding or decoding context, including the modified pre-offset
* and the original post-offset.
* @returns The new offset at which the next operation should begin.
*
* @example
* Moving the post-offset forward by 4 bytes.
* ```ts
* const postOffset: PostOffsetFunction = ({ postOffset }) => postOffset + 4;
* ```
*
* @example
* Wrapping the post-offset within the byte array length.
* ```ts
* const postOffset: PostOffsetFunction = ({ wrapBytes, postOffset }) => wrapBytes(postOffset);
* ```
*
* @example
* Ensuring a minimum spacing of 8 bytes between values.
* ```ts
* const postOffset: PostOffsetFunction = ({ postOffset, newPreOffset }) =>
* Math.max(postOffset, newPreOffset + 8);
* ```
*
* @see {@link OffsetConfig}
* @see {@link PreOffsetFunctionScope}
*/
type PostOffsetFunction = (scope: PreOffsetFunctionScope & {
/** The modified offset used to encode or decode. */
newPreOffset: Offset;
/** The original offset returned by the encoder or decoder. */
postOffset: Offset;
}) => Offset;
/**
* Moves the offset of a given encoder before and/or after encoding.
*
* This function allows an encoder to write its encoded value at a different offset
* than the one originally provided. It supports both pre-offset adjustments
* (before encoding) and post-offset adjustments (after encoding).
*
* The pre-offset function determines where encoding should start, while the
* post-offset function adjusts where the next encoder should continue writing.
*
* For more details, see {@link offsetCodec}.
*
* @typeParam TFrom - The type of the value to encode.
*
* @param encoder - The encoder to adjust.
* @param config - An object specifying how the offset should be modified.
* @returns A new encoder with adjusted offsets.
*
* @example
* Moving the pre-offset forward by 2 bytes.
* ```ts
* const encoder = offsetEncoder(getU32Encoder(), {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
* const bytes = new Uint8Array(10);
* encoder.write(42, bytes, 0); // Actually written at offset 2
* ```
*
* @example
* Moving the post-offset forward by 2 bytes.
* ```ts
* const encoder = offsetEncoder(getU32Encoder(), {
* postOffset: ({ postOffset }) => postOffset + 2,
* });
* const bytes = new Uint8Array(10);
* const nextOffset = encoder.write(42, bytes, 0); // Next encoder starts at offset 6 instead of 4
* ```
*
* @example
* Using `wrapBytes` to ensure an offset wraps around the byte array length.
* ```ts
* const encoder = offsetEncoder(getU32Encoder(), {
* preOffset: ({ wrapBytes }) => wrapBytes(-4), // Moves offset to last 4 bytes of the array
* });
* const bytes = new Uint8Array(10);
* encoder.write(42, bytes, 0); // Writes at bytes.length - 4
* ```
*
* @remarks
* If you need both encoding and decoding offsets to be adjusted, use {@link offsetCodec}.
*
* @see {@link offsetCodec}
* @see {@link offsetDecoder}
*/
export declare function offsetEncoder<TEncoder extends AnyEncoder>(encoder: TEncoder, config: OffsetConfig): TEncoder;
/**
* Moves the offset of a given decoder before and/or after decoding.
*
* This function allows a decoder to read its input from a different offset
* than the one originally provided. It supports both pre-offset adjustments
* (before decoding) and post-offset adjustments (after decoding).
*
* The pre-offset function determines where decoding should start, while the
* post-offset function adjusts where the next decoder should continue reading.
*
* For more details, see {@link offsetCodec}.
*
* @typeParam TTo - The type of the decoded value.
*
* @param decoder - The decoder to adjust.
* @param config - An object specifying how the offset should be modified.
* @returns A new decoder with adjusted offsets.
*
* @example
* Moving the pre-offset forward by 2 bytes.
* ```ts
* const decoder = offsetDecoder(getU32Decoder(), {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
 * const bytes = new Uint8Array([0, 0, 42, 0, 0, 0]); // Value starts at offset 2
* decoder.read(bytes, 0); // Actually reads from offset 2
* ```
*
* @example
* Moving the post-offset forward by 2 bytes.
* ```ts
* const decoder = offsetDecoder(getU32Decoder(), {
* postOffset: ({ postOffset }) => postOffset + 2,
* });
* const bytes = new Uint8Array([42, 0, 0, 0]);
* const [value, nextOffset] = decoder.read(bytes, 0); // Next decoder starts at offset 6 instead of 4
* ```
*
* @example
* Using `wrapBytes` to read from the last 4 bytes of an array.
* ```ts
* const decoder = offsetDecoder(getU32Decoder(), {
* preOffset: ({ wrapBytes }) => wrapBytes(-4), // Moves offset to last 4 bytes of the array
* });
* const bytes = new Uint8Array([0, 0, 0, 0, 0, 0, 0, 42]); // Value stored at the last 4 bytes
* decoder.read(bytes, 0); // Reads from bytes.length - 4
* ```
*
* @remarks
* If you need both encoding and decoding offsets to be adjusted, use {@link offsetCodec}.
*
* @see {@link offsetCodec}
* @see {@link offsetEncoder}
*/
export declare function offsetDecoder<TDecoder extends AnyDecoder>(decoder: TDecoder, config: OffsetConfig): TDecoder;
/**
* Moves the offset of a given codec before and/or after encoding and decoding.
*
* This function allows a codec to encode and decode values at custom offsets
* within a byte array. It modifies both the **pre-offset** (where encoding/decoding starts)
* and the **post-offset** (where the next operation should continue).
*
* This is particularly useful when working with structured binary formats
* that require skipping reserved bytes, inserting padding, or aligning fields at
* specific locations.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @param codec - The codec to adjust.
* @param config - An object specifying how the offset should be modified.
* @returns A new codec with adjusted offsets.
*
* @example
* Moving the pre-offset forward by 2 bytes when encoding and decoding.
* ```ts
* const codec = offsetCodec(getU32Codec(), {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
* const bytes = new Uint8Array(10);
* codec.write(42, bytes, 0); // Actually written at offset 2
* codec.read(bytes, 0); // Actually read from offset 2
* ```
*
* @example
* Moving the post-offset forward by 2 bytes when encoding and decoding.
* ```ts
* const codec = offsetCodec(getU32Codec(), {
* postOffset: ({ postOffset }) => postOffset + 2,
* });
* const bytes = new Uint8Array(10);
* codec.write(42, bytes, 0);
* // Next encoding starts at offset 6 instead of 4
* codec.read(bytes, 0);
* // Next decoding starts at offset 6 instead of 4
* ```
*
* @example
* Using `wrapBytes` to loop around negative offsets.
* ```ts
* const codec = offsetCodec(getU32Codec(), {
* preOffset: ({ wrapBytes }) => wrapBytes(-4), // Moves offset to last 4 bytes
* });
* const bytes = new Uint8Array(10);
* codec.write(42, bytes, 0); // Writes at bytes.length - 4
* codec.read(bytes, 0); // Reads from bytes.length - 4
* ```
*
* @remarks
* If you only need to adjust offsets for encoding, use {@link offsetEncoder}.
* If you only need to adjust offsets for decoding, use {@link offsetDecoder}.
*
* ```ts
* const bytes = new Uint8Array(10);
* offsetEncoder(getU32Encoder(), { preOffset: ({ preOffset }) => preOffset + 2 }).write(42, bytes, 0);
* const [value] = offsetDecoder(getU32Decoder(), { preOffset: ({ preOffset }) => preOffset + 2 }).read(bytes, 0);
* ```
*
* @see {@link offsetEncoder}
* @see {@link offsetDecoder}
*/
export declare function offsetCodec<TCodec extends AnyCodec>(codec: TCodec, config: OffsetConfig): TCodec;
export {};
//# sourceMappingURL=offset-codec.d.ts.map
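Concretely, the pre/post-offset mechanics documented above can be pictured with a miniature encoder. Everything below (`MiniEncoder`, `offsetMiniEncoder`, the inline `u16`) is a hypothetical sketch for illustration — it mirrors the shape of the real `offsetEncoder` but is not the package's implementation:

```typescript
// A simplified stand-in for the package's Encoder shape (hypothetical):
// `write` returns the offset at which the next encoder should continue.
type MiniEncoder = { write(value: number, bytes: Uint8Array, offset: number): number };

// A little-endian u16 encoder, fixed at 2 bytes.
const u16: MiniEncoder = {
    write(value, bytes, offset) {
        bytes[offset] = value & 0xff;
        bytes[offset + 1] = (value >> 8) & 0xff;
        return offset + 2;
    },
};

// The offsetEncoder idea in miniature: adjust where writing starts
// (pre-offset) and where the next encoder resumes (post-offset).
function offsetMiniEncoder(
    encoder: MiniEncoder,
    config: { preOffset?: (offset: number) => number; postOffset?: (offset: number) => number },
): MiniEncoder {
    return {
        write(value, bytes, offset) {
            const pre = config.preOffset ? config.preOffset(offset) : offset;
            const post = encoder.write(value, bytes, pre);
            return config.postOffset ? config.postOffset(post) : post;
        },
    };
}

const shifted = offsetMiniEncoder(u16, { preOffset: offset => offset + 2 });
const bytes = new Uint8Array(6);
shifted.write(0xffff, bytes, 0); // 0xffff lands at indices 2 and 3; next offset is 4
```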


@@ -0,0 +1 @@
{"version":3,"file":"offset-codec.d.ts","sourceRoot":"","sources":["../../src/offset-codec.ts"],"names":[],"mappings":"AACA,OAAO,EAAE,KAAK,EAAgC,OAAO,EAAE,OAAO,EAAE,MAAM,EAAE,MAAM,SAAS,CAAC;AAExF,OAAO,EAAE,kBAAkB,EAAE,MAAM,uBAAuB,CAAC;AAG3D,KAAK,UAAU,GAAG,OAAO,CAAC,GAAG,CAAC,CAAC;AAE/B,KAAK,UAAU,GAAG,OAAO,CAAC,GAAG,CAAC,CAAC;AAE/B,KAAK,QAAQ,GAAG,KAAK,CAAC,GAAG,CAAC,CAAC;AAE3B;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAsCG;AACH,KAAK,YAAY,GAAG;IAChB,UAAU,CAAC,EAAE,kBAAkB,CAAC;IAChC,SAAS,CAAC,EAAE,iBAAiB,CAAC;CACjC,CAAC;AAEF;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GA6BG;AACH,KAAK,sBAAsB,GAAG;IAC1B,6BAA6B;IAC7B,KAAK,EAAE,kBAAkB,GAAG,UAAU,CAAC;IACvC,qDAAqD;IACrD,SAAS,EAAE,MAAM,CAAC;IAClB,iDAAiD;IACjD,SAAS,EAAE,CAAC,MAAM,EAAE,MAAM,KAAK,MAAM,CAAC;CACzC,CAAC;AAEF;;;;;;;;;;;;;;;;;;;;;;;GAuBG;AACH,KAAK,iBAAiB,GAAG,CAAC,KAAK,EAAE,sBAAsB,KAAK,MAAM,CAAC;AAEnE;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GA+BG;AACH,KAAK,kBAAkB,GAAG,CACtB,KAAK,EAAE,sBAAsB,GAAG;IAC5B,oDAAoD;IACpD,YAAY,EAAE,MAAM,CAAC;IACrB,8DAA8D;IAC9D,UAAU,EAAE,MAAM,CAAC;CACtB,KACA,MAAM,CAAC;AAEZ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAqDG;AACH,wBAAgB,aAAa,CAAC,QAAQ,SAAS,UAAU,EAAE,OAAO,EAAE,QAAQ,EAAE,MAAM,EAAE,YAAY,GAAG,QAAQ,CAe5G;AAED;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAqDG;AACH,wBAAgB,aAAa,CAAC,QAAQ,SAAS,UAAU,EAAE,OAAO,EAAE,QAAQ,EAAE,MAAM,EAAE,YAAY,GAAG,QAAQ,CAe5G;AAED;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAiEG;AACH,wBAAgB,WAAW,CAAC,MAAM,SAAS,QAAQ,EAAE,KAAK,EAAE,MAAM,EAAE,MAAM,EAAE,YAAY,GAAG,MAAM,CAEhG"}


@@ -0,0 +1,162 @@
import { Codec, Decoder, Encoder, Offset } from './codec';
type AnyEncoder = Encoder<any>;
type AnyDecoder = Decoder<any>;
type AnyCodec = Codec<any>;
/**
* Adds left padding to the given encoder, shifting the encoded value forward
* by `offset` bytes whilst increasing the size of the encoder accordingly.
*
* For more details, see {@link padLeftCodec}.
*
* @typeParam TFrom - The type of the value to encode.
*
* @param encoder - The encoder to pad.
* @param offset - The number of padding bytes to add before encoding.
* @returns A new encoder with left padding applied.
*
* @example
* ```ts
* const encoder = padLeftEncoder(getU16Encoder(), 2);
* const bytes = encoder.encode(0xffff); // 0x0000ffff (0xffff written at offset 2)
* ```
*
* @see {@link padLeftCodec}
* @see {@link padLeftDecoder}
*/
export declare function padLeftEncoder<TEncoder extends AnyEncoder>(encoder: TEncoder, offset: Offset): TEncoder;
/**
* Adds right padding to the given encoder, extending the encoded value by `offset`
* bytes whilst increasing the size of the encoder accordingly.
*
* For more details, see {@link padRightCodec}.
*
* @typeParam TFrom - The type of the value to encode.
*
* @param encoder - The encoder to pad.
* @param offset - The number of padding bytes to add after encoding.
* @returns A new encoder with right padding applied.
*
* @example
* ```ts
* const encoder = padRightEncoder(getU16Encoder(), 2);
* const bytes = encoder.encode(0xffff); // 0xffff0000 (two extra bytes added at the end)
* ```
*
* @see {@link padRightCodec}
* @see {@link padRightDecoder}
*/
export declare function padRightEncoder<TEncoder extends AnyEncoder>(encoder: TEncoder, offset: Offset): TEncoder;
/**
* Adds left padding to the given decoder, shifting the decoding position forward
* by `offset` bytes whilst increasing the size of the decoder accordingly.
*
* For more details, see {@link padLeftCodec}.
*
* @typeParam TTo - The type of the decoded value.
*
* @param decoder - The decoder to pad.
* @param offset - The number of padding bytes to skip before decoding.
* @returns A new decoder with left padding applied.
*
* @example
* ```ts
* const decoder = padLeftDecoder(getU16Decoder(), 2);
 * const value = decoder.decode(new Uint8Array([0, 0, 0xff, 0xff])); // 0xffff (reads from offset 2)
* ```
*
* @see {@link padLeftCodec}
* @see {@link padLeftEncoder}
*/
export declare function padLeftDecoder<TDecoder extends AnyDecoder>(decoder: TDecoder, offset: Offset): TDecoder;
/**
* Adds right padding to the given decoder, extending the post-offset by `offset`
* bytes whilst increasing the size of the decoder accordingly.
*
* For more details, see {@link padRightCodec}.
*
* @typeParam TTo - The type of the decoded value.
*
* @param decoder - The decoder to pad.
* @param offset - The number of padding bytes to skip after decoding.
* @returns A new decoder with right padding applied.
*
* @example
* ```ts
* const decoder = padRightDecoder(getU16Decoder(), 2);
 * const value = decoder.decode(new Uint8Array([0xff, 0xff, 0, 0])); // 0xffff (ignores trailing bytes)
* ```
*
* @see {@link padRightCodec}
* @see {@link padRightEncoder}
*/
export declare function padRightDecoder<TDecoder extends AnyDecoder>(decoder: TDecoder, offset: Offset): TDecoder;
/**
* Adds left padding to the given codec, shifting the encoding and decoding positions
* forward by `offset` bytes whilst increasing the size of the codec accordingly.
*
* This ensures that values are read and written at a later position in the byte array,
* while the padding bytes remain unused.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @param codec - The codec to pad.
* @param offset - The number of padding bytes to add before encoding and decoding.
* @returns A new codec with left padding applied.
*
* @example
* ```ts
* const codec = padLeftCodec(getU16Codec(), 2);
* const bytes = codec.encode(0xffff); // 0x0000ffff (0xffff written at offset 2)
* const value = codec.decode(bytes); // 0xffff (reads from offset 2)
* ```
*
* @remarks
* If you only need to apply padding for encoding, use {@link padLeftEncoder}.
* If you only need to apply padding for decoding, use {@link padLeftDecoder}.
*
* ```ts
* const bytes = padLeftEncoder(getU16Encoder(), 2).encode(0xffff);
* const value = padLeftDecoder(getU16Decoder(), 2).decode(bytes);
* ```
*
* @see {@link padLeftEncoder}
* @see {@link padLeftDecoder}
*/
export declare function padLeftCodec<TCodec extends AnyCodec>(codec: TCodec, offset: Offset): TCodec;
/**
* Adds right padding to the given codec, extending the encoded and decoded value
* by `offset` bytes whilst increasing the size of the codec accordingly.
*
* The extra bytes remain unused, ensuring that the next operation starts further
* along the byte array.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @param codec - The codec to pad.
* @param offset - The number of padding bytes to add after encoding and decoding.
* @returns A new codec with right padding applied.
*
* @example
* ```ts
* const codec = padRightCodec(getU16Codec(), 2);
* const bytes = codec.encode(0xffff); // 0xffff0000 (two extra bytes added)
* const value = codec.decode(bytes); // 0xffff (ignores padding bytes)
* ```
*
* @remarks
* If you only need to apply padding for encoding, use {@link padRightEncoder}.
* If you only need to apply padding for decoding, use {@link padRightDecoder}.
*
* ```ts
* const bytes = padRightEncoder(getU16Encoder(), 2).encode(0xffff);
* const value = padRightDecoder(getU16Decoder(), 2).decode(bytes);
* ```
*
* @see {@link padRightEncoder}
* @see {@link padRightDecoder}
*/
export declare function padRightCodec<TCodec extends AnyCodec>(codec: TCodec, offset: Offset): TCodec;
export {};
//# sourceMappingURL=pad-codec.d.ts.map
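The effect of left padding described above boils down to writing the payload `offset` bytes further in, behind zero-initialized padding. A hedged sketch — the `padLeftBytes` helper is hypothetical, not part of the package:

```typescript
// Hypothetical sketch of left padding as "resize, then shift": the encoded
// payload is written `pad` bytes in, leaving zeroed padding at the front.
function padLeftBytes(encoded: Uint8Array, pad: number): Uint8Array {
    const out = new Uint8Array(encoded.length + pad); // zero-initialized
    out.set(encoded, pad); // shift the payload right by `pad` bytes
    return out;
}

padLeftBytes(new Uint8Array([0xff, 0xff]), 2); // Uint8Array [0, 0, 255, 255]
```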

View File

@@ -0,0 +1 @@
{"version":3,"file":"pad-codec.d.ts","sourceRoot":"","sources":["../../src/pad-codec.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,KAAK,EAAE,OAAO,EAAE,OAAO,EAAE,MAAM,EAAE,MAAM,SAAS,CAAC;AAM1D,KAAK,UAAU,GAAG,OAAO,CAAC,GAAG,CAAC,CAAC;AAE/B,KAAK,UAAU,GAAG,OAAO,CAAC,GAAG,CAAC,CAAC;AAE/B,KAAK,QAAQ,GAAG,KAAK,CAAC,GAAG,CAAC,CAAC;AAE3B;;;;;;;;;;;;;;;;;;;;GAoBG;AACH,wBAAgB,cAAc,CAAC,QAAQ,SAAS,UAAU,EAAE,OAAO,EAAE,QAAQ,EAAE,MAAM,EAAE,MAAM,GAAG,QAAQ,CAKvG;AAED;;;;;;;;;;;;;;;;;;;;GAoBG;AACH,wBAAgB,eAAe,CAAC,QAAQ,SAAS,UAAU,EAAE,OAAO,EAAE,QAAQ,EAAE,MAAM,EAAE,MAAM,GAAG,QAAQ,CAKxG;AAED;;;;;;;;;;;;;;;;;;;;GAoBG;AACH,wBAAgB,cAAc,CAAC,QAAQ,SAAS,UAAU,EAAE,OAAO,EAAE,QAAQ,EAAE,MAAM,EAAE,MAAM,GAAG,QAAQ,CAKvG;AAED;;;;;;;;;;;;;;;;;;;;GAoBG;AACH,wBAAgB,eAAe,CAAC,QAAQ,SAAS,UAAU,EAAE,OAAO,EAAE,QAAQ,EAAE,MAAM,EAAE,MAAM,GAAG,QAAQ,CAKxG;AAED;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAgCG;AACH,wBAAgB,YAAY,CAAC,MAAM,SAAS,QAAQ,EAAE,KAAK,EAAE,MAAM,EAAE,MAAM,EAAE,MAAM,GAAG,MAAM,CAE3F;AAED;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAgCG;AACH,wBAAgB,aAAa,CAAC,MAAM,SAAS,QAAQ,EAAE,KAAK,EAAE,MAAM,EAAE,MAAM,EAAE,MAAM,GAAG,MAAM,CAE5F"}


@@ -0,0 +1,19 @@
/**
* A read-only variant of `Uint8Array`.
*
* This type prevents modifications to the array by omitting mutable methods such as `copyWithin`,
* `fill`, `reverse`, `set`, and `sort`, while still allowing indexed access to elements.
*
* @example
* ```ts
* const bytes: ReadonlyUint8Array = new Uint8Array([1, 2, 3]);
* console.log(bytes[0]); // 1
* bytes[0] = 42; // Type error: Cannot assign to '0' because it is a read-only property.
* ```
*/
export interface ReadonlyUint8Array<TArrayBuffer extends ArrayBufferLike = ArrayBufferLike> extends Omit<Uint8Array<TArrayBuffer>, TypedArrayMutableProperties> {
readonly [n: number]: number;
}
type TypedArrayMutableProperties = 'copyWithin' | 'fill' | 'reverse' | 'set' | 'sort';
export {};
//# sourceMappingURL=readonly-uint8array.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"readonly-uint8array.d.ts","sourceRoot":"","sources":["../../src/readonly-uint8array.ts"],"names":[],"mappings":"AAAA;;;;;;;;;;;;GAYG;AACH,MAAM,WAAW,kBAAkB,CAAC,YAAY,SAAS,eAAe,GAAG,eAAe,CAAE,SAAQ,IAAI,CACpG,UAAU,CAAC,YAAY,CAAC,EACxB,2BAA2B,CAC9B;IACG,QAAQ,EAAE,CAAC,EAAE,MAAM,GAAG,MAAM,CAAC;CAChC;AAED,KAAK,2BAA2B,GAAG,YAAY,GAAG,MAAM,GAAG,SAAS,GAAG,KAAK,GAAG,MAAM,CAAC"}


@@ -0,0 +1,129 @@
import { Codec, Decoder, Encoder, FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from './codec';
type AnyEncoder = Encoder<any>;
type AnyDecoder = Decoder<any>;
type AnyCodec = Codec<any>;
/**
* Updates the size of a given encoder.
*
* This function modifies the size of an encoder using a provided transformation function.
* For fixed-size encoders, it updates the `fixedSize` property, and for variable-size
* encoders, it adjusts the size calculation based on the encoded value.
*
* If the new size is negative, an error will be thrown.
*
* For more details, see {@link resizeCodec}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The original fixed size of the encoded value.
* @typeParam TNewSize - The new fixed size after resizing.
*
* @param encoder - The encoder whose size will be updated.
* @param resize - A function that takes the current size and returns the new size.
* @returns A new encoder with the updated size.
*
* @example
* Increasing the size of a `u16` encoder by 2 bytes.
* ```ts
* const encoder = resizeEncoder(getU16Encoder(), size => size + 2);
* encoder.encode(0xffff); // 0xffff0000 (two extra bytes added)
* ```
*
* @example
* Shrinking a `u32` encoder to only use 2 bytes.
* ```ts
* const encoder = resizeEncoder(getU32Encoder(), () => 2);
* encoder.fixedSize; // 2
* ```
*
* @see {@link resizeCodec}
* @see {@link resizeDecoder}
*/
export declare function resizeEncoder<TFrom, TSize extends number, TNewSize extends number>(encoder: FixedSizeEncoder<TFrom, TSize>, resize: (size: TSize) => TNewSize): FixedSizeEncoder<TFrom, TNewSize>;
export declare function resizeEncoder<TEncoder extends AnyEncoder>(encoder: TEncoder, resize: (size: number) => number): TEncoder;
/**
* Updates the size of a given decoder.
*
* This function modifies the size of a decoder using a provided transformation function.
* For fixed-size decoders, it updates the `fixedSize` property to reflect the new size.
* Variable-size decoders remain unchanged, as their size is determined dynamically.
*
* If the new size is negative, an error will be thrown.
*
* For more details, see {@link resizeCodec}.
*
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The original fixed size of the decoded value.
* @typeParam TNewSize - The new fixed size after resizing.
*
* @param decoder - The decoder whose size will be updated.
* @param resize - A function that takes the current size and returns the new size.
* @returns A new decoder with the updated size.
*
* @example
* Expanding a `u16` decoder to read 4 bytes instead of 2.
* ```ts
* const decoder = resizeDecoder(getU16Decoder(), size => size + 2);
* decoder.fixedSize; // 4
* ```
*
* @example
* Shrinking a `u32` decoder to only read 2 bytes.
* ```ts
* const decoder = resizeDecoder(getU32Decoder(), () => 2);
* decoder.fixedSize; // 2
* ```
*
* @see {@link resizeCodec}
* @see {@link resizeEncoder}
*/
export declare function resizeDecoder<TFrom, TSize extends number, TNewSize extends number>(decoder: FixedSizeDecoder<TFrom, TSize>, resize: (size: TSize) => TNewSize): FixedSizeDecoder<TFrom, TNewSize>;
export declare function resizeDecoder<TDecoder extends AnyDecoder>(decoder: TDecoder, resize: (size: number) => number): TDecoder;
/**
* Updates the size of a given codec.
*
 * This function modifies the size of both the encoding and decoding sides of a codec using a provided
* transformation function. It is useful for adjusting the allocated byte size for
* encoding and decoding without altering the underlying data structure.
*
* If the new size is negative, an error will be thrown.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The original fixed size of the encoded/decoded value (for fixed-size codecs).
* @typeParam TNewSize - The new fixed size after resizing (for fixed-size codecs).
*
* @param codec - The codec whose size will be updated.
* @param resize - A function that takes the current size and returns the new size.
* @returns A new codec with the updated size.
*
* @example
* Expanding a `u16` codec from 2 to 4 bytes.
* ```ts
* const codec = resizeCodec(getU16Codec(), size => size + 2);
* const bytes = codec.encode(0xffff); // 0xffff0000 (two extra bytes added)
* const value = codec.decode(bytes); // 0xffff (reads original two bytes)
* ```
*
* @example
* Shrinking a `u32` codec to only use 2 bytes.
* ```ts
* const codec = resizeCodec(getU32Codec(), () => 2);
* codec.fixedSize; // 2
* ```
*
* @remarks
* If you only need to resize an encoder, use {@link resizeEncoder}.
* If you only need to resize a decoder, use {@link resizeDecoder}.
*
* ```ts
* const bytes = resizeEncoder(getU32Encoder(), (size) => size + 2).encode(0xffff);
* const value = resizeDecoder(getU32Decoder(), (size) => size + 2).decode(bytes);
* ```
*
* @see {@link resizeEncoder}
* @see {@link resizeDecoder}
*/
export declare function resizeCodec<TFrom, TTo extends TFrom, TSize extends number, TNewSize extends number>(codec: FixedSizeCodec<TFrom, TTo, TSize>, resize: (size: TSize) => TNewSize): FixedSizeCodec<TFrom, TTo, TNewSize>;
export declare function resizeCodec<TCodec extends AnyCodec>(codec: TCodec, resize: (size: number) => number): TCodec;
export {};
//# sourceMappingURL=resize-codec.d.ts.map
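The resize behaviour documented above can be pictured as reallocating the fixed-size slot an encoded value occupies: growing pads with zeroes, shrinking truncates. The `resizeSlot` helper below is a hypothetical illustration, not the package's implementation:

```typescript
// Hypothetical sketch of resizing a fixed-size encoded slot. The payload is
// preserved where it fits; a grown slot is zero-filled, a shrunk one truncated.
function resizeSlot(encoded: Uint8Array, newSize: number): Uint8Array {
    if (newSize < 0) throw new Error("Codec size cannot be negative");
    const out = new Uint8Array(newSize); // zero-filled
    out.set(encoded.subarray(0, Math.min(encoded.length, newSize)));
    return out;
}

resizeSlot(new Uint8Array([0xff, 0xff]), 4); // [255, 255, 0, 0] — u16 slot grown to 4 bytes
resizeSlot(new Uint8Array([1, 2, 3, 4]), 2); // [1, 2] — u32 slot shrunk to 2 bytes
```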


@@ -0,0 +1 @@
{"version":3,"file":"resize-codec.d.ts","sourceRoot":"","sources":["../../src/resize-codec.ts"],"names":[],"mappings":"AAEA,OAAO,EACH,KAAK,EAGL,OAAO,EACP,OAAO,EACP,cAAc,EACd,gBAAgB,EAChB,gBAAgB,EAEnB,MAAM,SAAS,CAAC;AAIjB,KAAK,UAAU,GAAG,OAAO,CAAC,GAAG,CAAC,CAAC;AAE/B,KAAK,UAAU,GAAG,OAAO,CAAC,GAAG,CAAC,CAAC;AAE/B,KAAK,QAAQ,GAAG,KAAK,CAAC,GAAG,CAAC,CAAC;AAE3B;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAmCG;AACH,wBAAgB,aAAa,CAAC,KAAK,EAAE,KAAK,SAAS,MAAM,EAAE,QAAQ,SAAS,MAAM,EAC9E,OAAO,EAAE,gBAAgB,CAAC,KAAK,EAAE,KAAK,CAAC,EACvC,MAAM,EAAE,CAAC,IAAI,EAAE,KAAK,KAAK,QAAQ,GAClC,gBAAgB,CAAC,KAAK,EAAE,QAAQ,CAAC,CAAC;AACrC,wBAAgB,aAAa,CAAC,QAAQ,SAAS,UAAU,EACrD,OAAO,EAAE,QAAQ,EACjB,MAAM,EAAE,CAAC,IAAI,EAAE,MAAM,KAAK,MAAM,GACjC,QAAQ,CAAC;AA8BZ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAmCG;AACH,wBAAgB,aAAa,CAAC,KAAK,EAAE,KAAK,SAAS,MAAM,EAAE,QAAQ,SAAS,MAAM,EAC9E,OAAO,EAAE,gBAAgB,CAAC,KAAK,EAAE,KAAK,CAAC,EACvC,MAAM,EAAE,CAAC,IAAI,EAAE,KAAK,KAAK,QAAQ,GAClC,gBAAgB,CAAC,KAAK,EAAE,QAAQ,CAAC,CAAC;AACrC,wBAAgB,aAAa,CAAC,QAAQ,SAAS,UAAU,EACrD,OAAO,EAAE,QAAQ,EACjB,MAAM,EAAE,CAAC,IAAI,EAAE,MAAM,KAAK,MAAM,GACjC,QAAQ,CAAC;AAkBZ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GA4CG;AACH,wBAAgB,WAAW,CAAC,KAAK,EAAE,GAAG,SAAS,KAAK,EAAE,KAAK,SAAS,MAAM,EAAE,QAAQ,SAAS,MAAM,EAC/F,KAAK,EAAE,cAAc,CAAC,KAAK,EAAE,GAAG,EAAE,KAAK,CAAC,EACxC,MAAM,EAAE,CAAC,IAAI,EAAE,KAAK,KAAK,QAAQ,GAClC,cAAc,CAAC,KAAK,EAAE,GAAG,EAAE,QAAQ,CAAC,CAAC;AACxC,wBAAgB,WAAW,CAAC,MAAM,SAAS,QAAQ,EAAE,KAAK,EAAE,MAAM,EAAE,MAAM,EAAE,CAAC,IAAI,EAAE,MAAM,KAAK,MAAM,GAAG,MAAM,CAAC"}


@@ -0,0 +1,92 @@
import { FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from './codec';
/**
* Reverses the bytes of a fixed-size encoder.
*
* Given a `FixedSizeEncoder`, this function returns a new `FixedSizeEncoder` that
* reverses the bytes within the fixed-size byte array when encoding.
*
* This can be useful to modify endianness or for other byte-order transformations.
*
* For more details, see {@link reverseCodec}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @param encoder - The fixed-size encoder to reverse.
* @returns A new encoder that writes bytes in reverse order.
*
* @example
* Encoding a `u16` value in reverse order.
* ```ts
* const encoder = reverseEncoder(getU16Encoder({ endian: Endian.Big }));
* const bytes = encoder.encode(0x1234); // 0x3412 (bytes are flipped)
* ```
*
* @see {@link reverseCodec}
* @see {@link reverseDecoder}
*/
export declare function reverseEncoder<TFrom, TSize extends number>(encoder: FixedSizeEncoder<TFrom, TSize>): FixedSizeEncoder<TFrom, TSize>;
/**
* Reverses the bytes of a fixed-size decoder.
*
* Given a `FixedSizeDecoder`, this function returns a new `FixedSizeDecoder` that
* reverses the bytes within the fixed-size byte array before decoding.
*
* This can be useful to modify endianness or for other byte-order transformations.
*
* For more details, see {@link reverseCodec}.
*
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the decoded value in bytes.
*
* @param decoder - The fixed-size decoder to reverse.
* @returns A new decoder that reads bytes in reverse order.
*
* @example
* Decoding a reversed `u16` value.
* ```ts
* const decoder = reverseDecoder(getU16Decoder({ endian: Endian.Big }));
* const value = decoder.decode(new Uint8Array([0x34, 0x12])); // 0x1234 (bytes are flipped back)
* ```
*
* @see {@link reverseCodec}
* @see {@link reverseEncoder}
*/
export declare function reverseDecoder<TTo, TSize extends number>(decoder: FixedSizeDecoder<TTo, TSize>): FixedSizeDecoder<TTo, TSize>;
/**
* Reverses the bytes of a fixed-size codec.
*
* Given a `FixedSizeCodec`, this function returns a new `FixedSizeCodec` that
* reverses the bytes within the fixed-size byte array during encoding and decoding.
*
* This can be useful to modify endianness or for other byte-order transformations.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded/decoded value in bytes.
*
* @param codec - The fixed-size codec to reverse.
* @returns A new codec that encodes and decodes bytes in reverse order.
*
* @example
* Reversing a `u16` codec.
* ```ts
* const codec = reverseCodec(getU16Codec({ endian: Endian.Big }));
* const bytes = codec.encode(0x1234); // 0x3412 (bytes are flipped)
* const value = codec.decode(bytes); // 0x1234 (bytes are flipped back)
* ```
*
* @remarks
* If you only need to reverse an encoder, use {@link reverseEncoder}.
* If you only need to reverse a decoder, use {@link reverseDecoder}.
*
* ```ts
* const bytes = reverseEncoder(getU16Encoder()).encode(0x1234);
* const value = reverseDecoder(getU16Decoder()).decode(bytes);
* ```
*
* @see {@link reverseEncoder}
* @see {@link reverseDecoder}
*/
export declare function reverseCodec<TFrom, TTo extends TFrom, TSize extends number>(codec: FixedSizeCodec<TFrom, TTo, TSize>): FixedSizeCodec<TFrom, TTo, TSize>;
//# sourceMappingURL=reverse-codec.d.ts.map
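The byte-reversal idea these declarations document can be sketched without the library types. This is a minimal illustration (the helper names are invented for the example and are not part of `@solana/codecs-core`): reversing the bytes of a fixed-size value flips its endianness.

```typescript
// Copy, then reverse in place, so the input array is not mutated.
function reverseBytes(bytes: Uint8Array): Uint8Array {
    return new Uint8Array(bytes).reverse();
}

// Encode 0x1234 as a big-endian u16: high byte first, then low byte.
function u16BigEndian(value: number): Uint8Array {
    return new Uint8Array([(value >> 8) & 0xff, value & 0xff]);
}

const bigEndian = u16BigEndian(0x1234); // [0x12, 0x34]
const littleEndian = reverseBytes(bigEndian); // [0x34, 0x12]
```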


@@ -0,0 +1 @@
{"version":3,"file":"reverse-codec.d.ts","sourceRoot":"","sources":["../../src/reverse-codec.ts"],"names":[],"mappings":"AAAA,OAAO,EAIH,cAAc,EACd,gBAAgB,EAChB,gBAAgB,EACnB,MAAM,SAAS,CAAC;AAsBjB;;;;;;;;;;;;;;;;;;;;;;;;;GAyBG;AACH,wBAAgB,cAAc,CAAC,KAAK,EAAE,KAAK,SAAS,MAAM,EACtD,OAAO,EAAE,gBAAgB,CAAC,KAAK,EAAE,KAAK,CAAC,GACxC,gBAAgB,CAAC,KAAK,EAAE,KAAK,CAAC,CAehC;AAED;;;;;;;;;;;;;;;;;;;;;;;;;GAyBG;AACH,wBAAgB,cAAc,CAAC,GAAG,EAAE,KAAK,SAAS,MAAM,EACpD,OAAO,EAAE,gBAAgB,CAAC,GAAG,EAAE,KAAK,CAAC,GACtC,gBAAgB,CAAC,GAAG,EAAE,KAAK,CAAC,CAe9B;AAED;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAkCG;AACH,wBAAgB,YAAY,CAAC,KAAK,EAAE,GAAG,SAAS,KAAK,EAAE,KAAK,SAAS,MAAM,EACvE,KAAK,EAAE,cAAc,CAAC,KAAK,EAAE,GAAG,EAAE,KAAK,CAAC,GACzC,cAAc,CAAC,KAAK,EAAE,GAAG,EAAE,KAAK,CAAC,CAEnC"}


@@ -0,0 +1,114 @@
import { Codec, Decoder, Encoder, FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder, VariableSizeCodec, VariableSizeDecoder, VariableSizeEncoder } from './codec';
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Transforms an encoder by mapping its input values.
*
* This function takes an existing `Encoder<A>` and returns an `Encoder<B>`, allowing values of type `B`
* to be converted into values of type `A` before encoding. The transformation is applied via the `unmap` function.
*
* This is useful for handling type conversions, applying default values, or structuring data before encoding.
*
* For more details, see {@link transformCodec}.
*
* @typeParam TOldFrom - The original type expected by the encoder.
* @typeParam TNewFrom - The new type that will be transformed before encoding.
*
* @param encoder - The encoder to transform.
* @param unmap - A function that converts values of `TNewFrom` into `TOldFrom` before encoding.
* @returns A new encoder that accepts `TNewFrom` values and transforms them before encoding.
*
* @example
* Encoding a string by counting its characters and storing the length as a `u32`.
* ```ts
* const encoder = transformEncoder(getU32Encoder(), (value: string) => value.length);
* encoder.encode("hello"); // 0x05000000 (stores length 5)
* ```
*
* @see {@link transformCodec}
* @see {@link transformDecoder}
*/
export declare function transformEncoder<TOldFrom, TNewFrom, TSize extends number>(encoder: FixedSizeEncoder<TOldFrom, TSize>, unmap: (value: TNewFrom) => TOldFrom): FixedSizeEncoder<TNewFrom, TSize>;
export declare function transformEncoder<TOldFrom, TNewFrom>(encoder: VariableSizeEncoder<TOldFrom>, unmap: (value: TNewFrom) => TOldFrom): VariableSizeEncoder<TNewFrom>;
export declare function transformEncoder<TOldFrom, TNewFrom>(encoder: Encoder<TOldFrom>, unmap: (value: TNewFrom) => TOldFrom): Encoder<TNewFrom>;
/**
* Transforms a decoder by mapping its output values.
*
* This function takes an existing `Decoder<A>` and returns a `Decoder<B>`, allowing values of type `A`
* to be converted into values of type `B` after decoding. The transformation is applied via the `map` function.
*
* This is useful for post-processing, type conversions, or enriching decoded data.
*
* For more details, see {@link transformCodec}.
*
* @typeParam TOldTo - The original type returned by the decoder.
* @typeParam TNewTo - The new type that will be transformed after decoding.
*
* @param decoder - The decoder to transform.
* @param map - A function that converts values of `TOldTo` into `TNewTo` after decoding.
* @returns A new decoder that decodes into `TNewTo`.
*
* @example
* Decoding a stored `u32` length into a string of `'x'` characters.
* ```ts
* const decoder = transformDecoder(getU32Decoder(), (length) => 'x'.repeat(length));
* decoder.decode(new Uint8Array([0x05, 0x00, 0x00, 0x00])); // "xxxxx"
* ```
*
* @see {@link transformCodec}
* @see {@link transformEncoder}
*/
export declare function transformDecoder<TOldTo, TNewTo, TSize extends number>(decoder: FixedSizeDecoder<TOldTo, TSize>, map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo): FixedSizeDecoder<TNewTo, TSize>;
export declare function transformDecoder<TOldTo, TNewTo>(decoder: VariableSizeDecoder<TOldTo>, map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo): VariableSizeDecoder<TNewTo>;
export declare function transformDecoder<TOldTo, TNewTo>(decoder: Decoder<TOldTo>, map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo): Decoder<TNewTo>;
/**
* Transforms a codec by mapping its input and output values.
*
* This function takes an existing `Codec<A, B>` and returns a `Codec<C, D>`, allowing:
* - Values of type `C` to be transformed into `A` before encoding.
* - Values of type `B` to be transformed into `D` after decoding.
*
* This is useful for adapting codecs to work with different representations, handling default values, or
* converting between primitive and structured types.
*
* @typeParam TOldFrom - The original type expected by the codec.
* @typeParam TNewFrom - The new type that will be transformed before encoding.
* @typeParam TOldTo - The original type returned by the codec.
* @typeParam TNewTo - The new type that will be transformed after decoding.
*
* @param codec - The codec to transform.
* @param unmap - A function that converts values of `TNewFrom` into `TOldFrom` before encoding.
* @param map - A function that converts values of `TOldTo` into `TNewTo` after decoding (optional).
* @returns A new codec that encodes `TNewFrom` and decodes into `TNewTo`.
*
* @example
* Mapping a `u32` codec to encode string lengths and decode them into `'x'` characters.
* ```ts
* const codec = transformCodec(
* getU32Codec(),
* (value: string) => value.length, // Encode string length
* (length) => 'x'.repeat(length) // Decode length into a string of 'x's
* );
*
* const bytes = codec.encode("hello"); // 0x05000000 (stores length 5)
* const value = codec.decode(bytes); // "xxxxx"
* ```
*
* @remarks
* If only input transformation is needed, use {@link transformEncoder}.
* If only output transformation is needed, use {@link transformDecoder}.
*
* ```ts
* const bytes = transformEncoder(getU32Encoder(), (value: string) => value.length).encode("hello");
* const value = transformDecoder(getU32Decoder(), (length) => 'x'.repeat(length)).decode(bytes);
* ```
*
* @see {@link transformEncoder}
* @see {@link transformDecoder}
*/
export declare function transformCodec<TOldFrom, TNewFrom, TTo extends TNewFrom & TOldFrom, TSize extends number>(codec: FixedSizeCodec<TOldFrom, TTo, TSize>, unmap: (value: TNewFrom) => TOldFrom): FixedSizeCodec<TNewFrom, TTo, TSize>;
export declare function transformCodec<TOldFrom, TNewFrom, TTo extends TNewFrom & TOldFrom>(codec: VariableSizeCodec<TOldFrom, TTo>, unmap: (value: TNewFrom) => TOldFrom): VariableSizeCodec<TNewFrom, TTo>;
export declare function transformCodec<TOldFrom, TNewFrom, TTo extends TNewFrom & TOldFrom>(codec: Codec<TOldFrom, TTo>, unmap: (value: TNewFrom) => TOldFrom): Codec<TNewFrom, TTo>;
export declare function transformCodec<TOldFrom, TNewFrom, TOldTo extends TOldFrom, TNewTo extends TNewFrom, TSize extends number>(codec: FixedSizeCodec<TOldFrom, TOldTo, TSize>, unmap: (value: TNewFrom) => TOldFrom, map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo): FixedSizeCodec<TNewFrom, TNewTo, TSize>;
export declare function transformCodec<TOldFrom, TNewFrom, TOldTo extends TOldFrom, TNewTo extends TNewFrom>(codec: VariableSizeCodec<TOldFrom, TOldTo>, unmap: (value: TNewFrom) => TOldFrom, map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo): VariableSizeCodec<TNewFrom, TNewTo>;
export declare function transformCodec<TOldFrom, TNewFrom, TOldTo extends TOldFrom, TNewTo extends TNewFrom>(codec: Codec<TOldFrom, TOldTo>, unmap: (value: TNewFrom) => TOldFrom, map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo): Codec<TNewFrom, TNewTo>;
//# sourceMappingURL=transform-codec.d.ts.map
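As a standalone illustration of the transform pattern these overloads describe, here is a simplified sketch (the `SimpleCodec` shape and helper names are invented for the example and are not the library's `Codec` interface): `unmap` converts the new input type before encoding, and `map` converts the decoded value afterwards.

```typescript
type SimpleCodec<TFrom, TTo> = {
    decode: (bytes: Uint8Array) => TTo;
    encode: (value: TFrom) => Uint8Array;
};

// A little-endian u32 codec, hand-rolled so the example is self-contained.
const u32Codec: SimpleCodec<number, number> = {
    decode: b => b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24),
    encode: n => new Uint8Array([n & 0xff, (n >> 8) & 0xff, (n >> 16) & 0xff, (n >> 24) & 0xff]),
};

function transform<TOldFrom, TNewFrom, TOldTo, TNewTo>(
    codec: SimpleCodec<TOldFrom, TOldTo>,
    unmap: (value: TNewFrom) => TOldFrom,
    map: (value: TOldTo) => TNewTo,
): SimpleCodec<TNewFrom, TNewTo> {
    return {
        decode: bytes => map(codec.decode(bytes)),
        encode: value => codec.encode(unmap(value)),
    };
}

const stringLengthCodec = transform(
    u32Codec,
    (value: string) => value.length, // unmap: string -> length before encoding
    (length: number) => 'x'.repeat(length), // map: length -> placeholder string after decoding
);

const bytes = stringLengthCodec.encode('hello'); // [0x05, 0x00, 0x00, 0x00]
const value = stringLengthCodec.decode(bytes); // "xxxxx"
```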


@@ -0,0 +1 @@
{"version":3,"file":"transform-codec.d.ts","sourceRoot":"","sources":["../../src/transform-codec.ts"],"names":[],"mappings":"AAAA,OAAO,EACH,KAAK,EAIL,OAAO,EACP,OAAO,EACP,cAAc,EACd,gBAAgB,EAChB,gBAAgB,EAEhB,iBAAiB,EACjB,mBAAmB,EACnB,mBAAmB,EACtB,MAAM,SAAS,CAAC;AACjB,OAAO,EAAE,kBAAkB,EAAE,MAAM,uBAAuB,CAAC;AAE3D;;;;;;;;;;;;;;;;;;;;;;;;;;GA0BG;AACH,wBAAgB,gBAAgB,CAAC,QAAQ,EAAE,QAAQ,EAAE,KAAK,SAAS,MAAM,EACrE,OAAO,EAAE,gBAAgB,CAAC,QAAQ,EAAE,KAAK,CAAC,EAC1C,KAAK,EAAE,CAAC,KAAK,EAAE,QAAQ,KAAK,QAAQ,GACrC,gBAAgB,CAAC,QAAQ,EAAE,KAAK,CAAC,CAAC;AACrC,wBAAgB,gBAAgB,CAAC,QAAQ,EAAE,QAAQ,EAC/C,OAAO,EAAE,mBAAmB,CAAC,QAAQ,CAAC,EACtC,KAAK,EAAE,CAAC,KAAK,EAAE,QAAQ,KAAK,QAAQ,GACrC,mBAAmB,CAAC,QAAQ,CAAC,CAAC;AACjC,wBAAgB,gBAAgB,CAAC,QAAQ,EAAE,QAAQ,EAC/C,OAAO,EAAE,OAAO,CAAC,QAAQ,CAAC,EAC1B,KAAK,EAAE,CAAC,KAAK,EAAE,QAAQ,KAAK,QAAQ,GACrC,OAAO,CAAC,QAAQ,CAAC,CAAC;AAarB;;;;;;;;;;;;;;;;;;;;;;;;;;GA0BG;AACH,wBAAgB,gBAAgB,CAAC,MAAM,EAAE,MAAM,EAAE,KAAK,SAAS,MAAM,EACjE,OAAO,EAAE,gBAAgB,CAAC,MAAM,EAAE,KAAK,CAAC,EACxC,GAAG,EAAE,CAAC,KAAK,EAAE,MAAM,EAAE,KAAK,EAAE,kBAAkB,GAAG,UAAU,EAAE,MAAM,EAAE,MAAM,KAAK,MAAM,GACvF,gBAAgB,CAAC,MAAM,EAAE,KAAK,CAAC,CAAC;AACnC,wBAAgB,gBAAgB,CAAC,MAAM,EAAE,MAAM,EAC3C,OAAO,EAAE,mBAAmB,CAAC,MAAM,CAAC,EACpC,GAAG,EAAE,CAAC,KAAK,EAAE,MAAM,EAAE,KAAK,EAAE,kBAAkB,GAAG,UAAU,EAAE,MAAM,EAAE,MAAM,KAAK,MAAM,GACvF,mBAAmB,CAAC,MAAM,CAAC,CAAC;AAC/B,wBAAgB,gBAAgB,CAAC,MAAM,EAAE,MAAM,EAC3C,OAAO,EAAE,OAAO,CAAC,MAAM,CAAC,EACxB,GAAG,EAAE,CAAC,KAAK,EAAE,MAAM,EAAE,KAAK,EAAE,kBAAkB,GAAG,UAAU,EAAE,MAAM,EAAE,MAAM,KAAK,MAAM,GACvF,OAAO,CAAC,MAAM,CAAC,CAAC;AAcnB;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GA4CG;AACH,wBAAgB,cAAc,CAAC,QAAQ,EAAE,QAAQ,EAAE,GAAG,SAAS,QAAQ,GAAG,QAAQ,EAAE,KAAK,SAAS,MAAM,EACpG,KAAK,EAAE,cAAc,CAAC,QAAQ,EAAE,GAAG,EAAE,KAAK,CAAC,EAC3C,KAAK,EAAE,CAAC,KAAK,EAAE,QAAQ,KAAK,QAAQ,GACrC,cAAc,CAAC,QAAQ,EAAE,GAAG,EAAE,KAAK,CAAC,CAAC;AACxC,wBAAgB,cAAc,CAAC,QAAQ,EAAE,QAAQ,EAAE,GAAG,SAAS,QAAQ,GAAG,QAAQ,EAC9E,KAAK,EAAE,iBAAiB,CAAC,QAAQ,EAAE,GAAG,CAAC,EACvC,KAAK,EAAE,CAAC,KAAK,EAAE,QAAQ,KAAK,Q
AAQ,GACrC,iBAAiB,CAAC,QAAQ,EAAE,GAAG,CAAC,CAAC;AACpC,wBAAgB,cAAc,CAAC,QAAQ,EAAE,QAAQ,EAAE,GAAG,SAAS,QAAQ,GAAG,QAAQ,EAC9E,KAAK,EAAE,KAAK,CAAC,QAAQ,EAAE,GAAG,CAAC,EAC3B,KAAK,EAAE,CAAC,KAAK,EAAE,QAAQ,KAAK,QAAQ,GACrC,KAAK,CAAC,QAAQ,EAAE,GAAG,CAAC,CAAC;AACxB,wBAAgB,cAAc,CAC1B,QAAQ,EACR,QAAQ,EACR,MAAM,SAAS,QAAQ,EACvB,MAAM,SAAS,QAAQ,EACvB,KAAK,SAAS,MAAM,EAEpB,KAAK,EAAE,cAAc,CAAC,QAAQ,EAAE,MAAM,EAAE,KAAK,CAAC,EAC9C,KAAK,EAAE,CAAC,KAAK,EAAE,QAAQ,KAAK,QAAQ,EACpC,GAAG,EAAE,CAAC,KAAK,EAAE,MAAM,EAAE,KAAK,EAAE,kBAAkB,GAAG,UAAU,EAAE,MAAM,EAAE,MAAM,KAAK,MAAM,GACvF,cAAc,CAAC,QAAQ,EAAE,MAAM,EAAE,KAAK,CAAC,CAAC;AAC3C,wBAAgB,cAAc,CAAC,QAAQ,EAAE,QAAQ,EAAE,MAAM,SAAS,QAAQ,EAAE,MAAM,SAAS,QAAQ,EAC/F,KAAK,EAAE,iBAAiB,CAAC,QAAQ,EAAE,MAAM,CAAC,EAC1C,KAAK,EAAE,CAAC,KAAK,EAAE,QAAQ,KAAK,QAAQ,EACpC,GAAG,EAAE,CAAC,KAAK,EAAE,MAAM,EAAE,KAAK,EAAE,kBAAkB,GAAG,UAAU,EAAE,MAAM,EAAE,MAAM,KAAK,MAAM,GACvF,iBAAiB,CAAC,QAAQ,EAAE,MAAM,CAAC,CAAC;AACvC,wBAAgB,cAAc,CAAC,QAAQ,EAAE,QAAQ,EAAE,MAAM,SAAS,QAAQ,EAAE,MAAM,SAAS,QAAQ,EAC/F,KAAK,EAAE,KAAK,CAAC,QAAQ,EAAE,MAAM,CAAC,EAC9B,KAAK,EAAE,CAAC,KAAK,EAAE,QAAQ,KAAK,QAAQ,EACpC,GAAG,EAAE,CAAC,KAAK,EAAE,MAAM,EAAE,KAAK,EAAE,kBAAkB,GAAG,UAAU,EAAE,MAAM,EAAE,MAAM,KAAK,MAAM,GACvF,KAAK,CAAC,QAAQ,EAAE,MAAM,CAAC,CAAC"}

node_modules/@solana/codecs-core/package.json generated vendored Normal file

@@ -0,0 +1,90 @@
{
"name": "@solana/codecs-core",
"version": "6.8.0",
"description": "Core types and helpers for encoding and decoding byte arrays on Solana",
"homepage": "https://www.solanakit.com/api#solanacodecs-core",
"exports": {
"edge-light": {
"import": "./dist/index.node.mjs",
"require": "./dist/index.node.cjs"
},
"workerd": {
"import": "./dist/index.node.mjs",
"require": "./dist/index.node.cjs"
},
"browser": {
"import": "./dist/index.browser.mjs",
"require": "./dist/index.browser.cjs"
},
"node": {
"import": "./dist/index.node.mjs",
"require": "./dist/index.node.cjs"
},
"react-native": "./dist/index.native.mjs",
"types": "./dist/types/index.d.ts"
},
"browser": {
"./dist/index.node.cjs": "./dist/index.browser.cjs",
"./dist/index.node.mjs": "./dist/index.browser.mjs"
},
"main": "./dist/index.node.cjs",
"module": "./dist/index.node.mjs",
"react-native": "./dist/index.native.mjs",
"types": "./dist/types/index.d.ts",
"type": "commonjs",
"files": [
"./dist/",
"./src/"
],
"sideEffects": false,
"keywords": [
"blockchain",
"solana",
"web3"
],
"author": "Solana Labs Maintainers <maintainers@solanalabs.com>",
"license": "MIT",
"repository": {
"type": "git",
"url": "https://github.com/anza-xyz/kit"
},
"bugs": {
"url": "https://github.com/anza-xyz/kit/issues"
},
"browserslist": [
"supports bigint and not dead",
"maintained node versions"
],
"dependencies": {
"@solana/errors": "6.8.0"
},
"peerDependencies": {
"typescript": ">=5.0.0"
},
"peerDependenciesMeta": {
"typescript": {
"optional": true
}
},
"engines": {
"node": ">=20.18.0"
},
"scripts": {
"benchmark": "./src/__benchmarks__/run.ts",
"compile:docs": "typedoc",
"compile:js": "tsup --config build-scripts/tsup.config.package.ts",
"compile:typedefs": "tsc -p ./tsconfig.declarations.json",
"dev": "NODE_OPTIONS=\"--no-experimental-webstorage\" jest -c ../../node_modules/@solana/test-config/jest-dev.config.js --rootDir . --watch",
"publish-impl": "npm view $npm_package_name@$npm_package_version > /dev/null 2>&1 || (pnpm publish --tag ${PUBLISH_TAG:-canary} --access public --no-git-checks && (([ -n \"${GITHUB_OUTPUT:-}\" ] && echo 'published=true' >> \"$GITHUB_OUTPUT\") || true) && (([ \"$PUBLISH_TAG\" != \"canary\" ] && ../build-scripts/maybe-tag-latest.ts --token \"$GITHUB_TOKEN\" $npm_package_name@$npm_package_version) || true))",
"publish-packages": "pnpm prepublishOnly && pnpm publish-impl",
"style:fix": "pnpm eslint --fix src && pnpm prettier --log-level warn --ignore-unknown --write ./*",
"test:lint": "TERM_OVERRIDE=\"${TURBO_HASH:+dumb}\" TERM=${TERM_OVERRIDE:-$TERM} jest -c ../../node_modules/@solana/test-config/jest-lint.config.js --rootDir . --silent",
"test:prettier": "TERM_OVERRIDE=\"${TURBO_HASH:+dumb}\" TERM=${TERM_OVERRIDE:-$TERM} jest -c ../../node_modules/@solana/test-config/jest-prettier.config.js --rootDir . --silent",
"test:treeshakability:browser": "agadoo dist/index.browser.mjs",
"test:treeshakability:native": "agadoo dist/index.native.mjs",
"test:treeshakability:node": "agadoo dist/index.node.mjs",
"test:typecheck": "tsc --noEmit",
"test:unit:browser": "NODE_OPTIONS=\"--no-experimental-webstorage\" TERM_OVERRIDE=\"${TURBO_HASH:+dumb}\" TERM=${TERM_OVERRIDE:-$TERM} jest -c ../../node_modules/@solana/test-config/jest-unit.config.browser.js --rootDir . --silent",
"test:unit:node": "NODE_OPTIONS=\"--no-experimental-webstorage\" TERM_OVERRIDE=\"${TURBO_HASH:+dumb}\" TERM=${TERM_OVERRIDE:-$TERM} jest -c ../../node_modules/@solana/test-config/jest-unit.config.node.js --rootDir . --silent"
}
}


@@ -0,0 +1,186 @@
import {
SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL,
SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES,
SolanaError,
} from '@solana/errors';
import { containsBytes } from './bytes';
import {
Codec,
createDecoder,
createEncoder,
Decoder,
Encoder,
FixedSizeCodec,
FixedSizeDecoder,
FixedSizeEncoder,
isFixedSize,
VariableSizeCodec,
VariableSizeDecoder,
VariableSizeEncoder,
} from './codec';
import { combineCodec } from './combine-codec';
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Creates an encoder that writes a `Uint8Array` sentinel after the encoded value.
* This is useful to delimit the encoded value when being read by a decoder.
*
* See {@link addCodecSentinel} for more information.
*
* @typeParam TFrom - The type of the value to encode.
*
* @see {@link addCodecSentinel}
*/
export function addEncoderSentinel<TFrom>(
encoder: FixedSizeEncoder<TFrom>,
sentinel: ReadonlyUint8Array,
): FixedSizeEncoder<TFrom>;
export function addEncoderSentinel<TFrom>(
encoder: Encoder<TFrom>,
sentinel: ReadonlyUint8Array,
): VariableSizeEncoder<TFrom>;
export function addEncoderSentinel<TFrom>(encoder: Encoder<TFrom>, sentinel: ReadonlyUint8Array): Encoder<TFrom> {
const write = ((value, bytes, offset) => {
// Here we exceptionally use the `encode` function instead of the `write`
// function to contain the content of the encoder within its own bounds
// and to avoid writing the sentinel as part of the encoded value.
const encoderBytes = encoder.encode(value);
if (findSentinelIndex(encoderBytes, sentinel) >= 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODED_BYTES_MUST_NOT_INCLUDE_SENTINEL, {
encodedBytes: encoderBytes,
hexEncodedBytes: hexBytes(encoderBytes),
hexSentinel: hexBytes(sentinel),
sentinel,
});
}
bytes.set(encoderBytes, offset);
offset += encoderBytes.length;
bytes.set(sentinel, offset);
offset += sentinel.length;
return offset;
}) as Encoder<TFrom>['write'];
if (isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: encoder.fixedSize + sentinel.length, write });
}
return createEncoder({
...encoder,
...(encoder.maxSize != null ? { maxSize: encoder.maxSize + sentinel.length } : {}),
getSizeFromValue: value => encoder.getSizeFromValue(value) + sentinel.length,
write,
});
}
/**
* Creates a decoder that continues reading until
* a given `Uint8Array` sentinel is found.
*
* See {@link addCodecSentinel} for more information.
*
* @typeParam TTo - The type of the decoded value.
*
* @see {@link addCodecSentinel}
*/
export function addDecoderSentinel<TTo>(
decoder: FixedSizeDecoder<TTo>,
sentinel: ReadonlyUint8Array,
): FixedSizeDecoder<TTo>;
export function addDecoderSentinel<TTo>(decoder: Decoder<TTo>, sentinel: ReadonlyUint8Array): VariableSizeDecoder<TTo>;
export function addDecoderSentinel<TTo>(decoder: Decoder<TTo>, sentinel: ReadonlyUint8Array): Decoder<TTo> {
const read = ((bytes, offset) => {
const candidateBytes = offset === 0 || offset <= -bytes.byteLength ? bytes : bytes.slice(offset);
const sentinelIndex = findSentinelIndex(candidateBytes, sentinel);
if (sentinelIndex === -1) {
throw new SolanaError(SOLANA_ERROR__CODECS__SENTINEL_MISSING_IN_DECODED_BYTES, {
decodedBytes: candidateBytes,
hexDecodedBytes: hexBytes(candidateBytes),
hexSentinel: hexBytes(sentinel),
sentinel,
});
}
const preSentinelBytes = candidateBytes.slice(0, sentinelIndex);
// Here we exceptionally use the `decode` function instead of the `read`
// function to contain the content of the decoder within its own bounds
// and ensure that the sentinel is not part of the decoded value.
return [decoder.decode(preSentinelBytes), offset + preSentinelBytes.length + sentinel.length];
}) as Decoder<TTo>['read'];
if (isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: decoder.fixedSize + sentinel.length, read });
}
return createDecoder({
...decoder,
...(decoder.maxSize != null ? { maxSize: decoder.maxSize + sentinel.length } : {}),
read,
});
}
/**
* Creates a Codec that writes a given `Uint8Array` sentinel after the encoded
* value and, when decoding, continues reading until the sentinel is found.
*
* This sets a limit on variable-size codecs and tells us when to stop decoding.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @example
* ```ts
* const codec = addCodecSentinel(getUtf8Codec(), new Uint8Array([255, 255]));
* codec.encode('hello');
* // 0x68656c6c6fffff
* // | └-- Our sentinel.
* // └-- Our encoded string.
* ```
*
* @remarks
* Note that the sentinel _must not_ be present in the encoded data and
* _must_ be present in the decoded data for this to work.
* If this is not the case, dedicated errors will be thrown.
*
* ```ts
* const sentinel = new Uint8Array([108, 108]); // 'll'
* const codec = addCodecSentinel(getUtf8Codec(), sentinel);
*
* codec.encode('hello'); // Throws: sentinel is in encoded data.
* codec.decode(new Uint8Array([1, 2, 3])); // Throws: sentinel missing in decoded data.
* ```
*
* Separate {@link addEncoderSentinel} and {@link addDecoderSentinel} functions are also available.
*
* ```ts
* const bytes = addEncoderSentinel(getUtf8Encoder(), sentinel).encode('hello');
* const value = addDecoderSentinel(getUtf8Decoder(), sentinel).decode(bytes);
* ```
*
* @see {@link addEncoderSentinel}
* @see {@link addDecoderSentinel}
*/
export function addCodecSentinel<TFrom, TTo extends TFrom>(
codec: FixedSizeCodec<TFrom, TTo>,
sentinel: ReadonlyUint8Array,
): FixedSizeCodec<TFrom, TTo>;
export function addCodecSentinel<TFrom, TTo extends TFrom>(
codec: Codec<TFrom, TTo>,
sentinel: ReadonlyUint8Array,
): VariableSizeCodec<TFrom, TTo>;
export function addCodecSentinel<TFrom, TTo extends TFrom>(
codec: Codec<TFrom, TTo>,
sentinel: ReadonlyUint8Array,
): Codec<TFrom, TTo> {
return combineCodec(addEncoderSentinel(codec, sentinel), addDecoderSentinel(codec, sentinel));
}
function findSentinelIndex(bytes: ReadonlyUint8Array, sentinel: ReadonlyUint8Array) {
return bytes.findIndex((byte, index, arr) => {
if (sentinel.length === 1) return byte === sentinel[0];
return containsBytes(arr, sentinel, index);
});
}
function hexBytes(bytes: ReadonlyUint8Array): string {
return bytes.reduce((str, byte) => str + byte.toString(16).padStart(2, '0'), '');
}
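The sentinel mechanism implemented above boils down to two steps, sketched here in isolation (simplified stand-ins for illustration, not the library code): append the delimiter after the payload when encoding, and read up to its first occurrence when decoding.

```typescript
function encodeWithSentinel(payload: Uint8Array, sentinel: Uint8Array): Uint8Array {
    // Payload first, sentinel immediately after it.
    const out = new Uint8Array(payload.length + sentinel.length);
    out.set(payload, 0);
    out.set(sentinel, payload.length);
    return out;
}

function decodeWithSentinel(bytes: Uint8Array, sentinel: Uint8Array): Uint8Array {
    // Find the first index at which every sentinel byte matches.
    const index = bytes.findIndex((_, i) => sentinel.every((s, j) => bytes[i + j] === s));
    if (index === -1) throw new Error('Sentinel missing in decoded bytes');
    return bytes.slice(0, index);
}

const sentinel = new Uint8Array([0xff, 0xff]);
const encoded = encodeWithSentinel(new Uint8Array([0x68, 0x69]), sentinel); // "hi" + sentinel
const decoded = decodeWithSentinel(encoded, sentinel); // back to [0x68, 0x69]
```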


@@ -0,0 +1,161 @@
import { assertByteArrayHasEnoughBytesForCodec } from './assertions';
import {
Codec,
createDecoder,
createEncoder,
Decoder,
Encoder,
FixedSizeCodec,
FixedSizeDecoder,
FixedSizeEncoder,
getEncodedSize,
isFixedSize,
VariableSizeCodec,
VariableSizeDecoder,
VariableSizeEncoder,
} from './codec';
import { combineCodec } from './combine-codec';
type NumberEncoder = Encoder<bigint | number> | Encoder<number>;
type FixedSizeNumberEncoder<TSize extends number = number> =
| FixedSizeEncoder<bigint | number, TSize>
| FixedSizeEncoder<number, TSize>;
type NumberDecoder = Decoder<bigint> | Decoder<number>;
type FixedSizeNumberDecoder<TSize extends number = number> =
| FixedSizeDecoder<bigint, TSize>
| FixedSizeDecoder<number, TSize>;
type NumberCodec = Codec<bigint | number, bigint> | Codec<number>;
type FixedSizeNumberCodec<TSize extends number = number> =
| FixedSizeCodec<bigint | number, bigint, TSize>
| FixedSizeCodec<number, number, TSize>;
/**
* Stores the size of the `encoder` in bytes as a prefix using the `prefix` encoder.
*
* See {@link addCodecSizePrefix} for more information.
*
* @typeParam TFrom - The type of the value to encode.
*
* @see {@link addCodecSizePrefix}
*/
export function addEncoderSizePrefix<TFrom>(
encoder: FixedSizeEncoder<TFrom>,
prefix: FixedSizeNumberEncoder,
): FixedSizeEncoder<TFrom>;
export function addEncoderSizePrefix<TFrom>(encoder: Encoder<TFrom>, prefix: NumberEncoder): VariableSizeEncoder<TFrom>;
export function addEncoderSizePrefix<TFrom>(encoder: Encoder<TFrom>, prefix: NumberEncoder): Encoder<TFrom> {
const write = ((value, bytes, offset) => {
// Here we exceptionally use the `encode` function instead of the `write`
// function to contain the content of the encoder within its own bounds.
const encoderBytes = encoder.encode(value);
offset = prefix.write(encoderBytes.length, bytes, offset);
bytes.set(encoderBytes, offset);
return offset + encoderBytes.length;
}) as Encoder<TFrom>['write'];
if (isFixedSize(prefix) && isFixedSize(encoder)) {
return createEncoder({ ...encoder, fixedSize: prefix.fixedSize + encoder.fixedSize, write });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : (prefix.maxSize ?? null);
const encoderMaxSize = isFixedSize(encoder) ? encoder.fixedSize : (encoder.maxSize ?? null);
const maxSize = prefixMaxSize !== null && encoderMaxSize !== null ? prefixMaxSize + encoderMaxSize : null;
return createEncoder({
...encoder,
...(maxSize !== null ? { maxSize } : {}),
getSizeFromValue: value => {
const encoderSize = getEncodedSize(value, encoder);
return getEncodedSize(encoderSize, prefix) + encoderSize;
},
write,
});
}
/**
* Bounds the size of the nested `decoder` by reading its encoded `prefix`.
*
* See {@link addCodecSizePrefix} for more information.
*
* @typeParam TTo - The type of the decoded value.
*
* @see {@link addCodecSizePrefix}
*/
export function addDecoderSizePrefix<TTo>(
decoder: FixedSizeDecoder<TTo>,
prefix: FixedSizeNumberDecoder,
): FixedSizeDecoder<TTo>;
export function addDecoderSizePrefix<TTo>(decoder: Decoder<TTo>, prefix: NumberDecoder): VariableSizeDecoder<TTo>;
export function addDecoderSizePrefix<TTo>(decoder: Decoder<TTo>, prefix: NumberDecoder): Decoder<TTo> {
const read = ((bytes, offset) => {
const [bigintSize, decoderOffset] = prefix.read(bytes, offset);
const size = Number(bigintSize);
offset = decoderOffset;
// Slice the byte array to the contained size if necessary.
if (offset > 0 || bytes.length > size) {
bytes = bytes.slice(offset, offset + size);
}
assertByteArrayHasEnoughBytesForCodec('addDecoderSizePrefix', size, bytes);
// Here we exceptionally use the `decode` function instead of the `read`
// function to contain the content of the decoder within its own bounds.
return [decoder.decode(bytes), offset + size];
}) as Decoder<TTo>['read'];
if (isFixedSize(prefix) && isFixedSize(decoder)) {
return createDecoder({ ...decoder, fixedSize: prefix.fixedSize + decoder.fixedSize, read });
}
const prefixMaxSize = isFixedSize(prefix) ? prefix.fixedSize : (prefix.maxSize ?? null);
const decoderMaxSize = isFixedSize(decoder) ? decoder.fixedSize : (decoder.maxSize ?? null);
const maxSize = prefixMaxSize !== null && decoderMaxSize !== null ? prefixMaxSize + decoderMaxSize : null;
return createDecoder({ ...decoder, ...(maxSize !== null ? { maxSize } : {}), read });
}
/**
* Stores the byte size of any given codec as an encoded number prefix.
*
* This sets a limit on variable-size codecs and tells us when to stop decoding.
* When encoding, the size of the encoded data is stored before the encoded data itself.
* When decoding, the size is read first to know how many bytes to read next.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @example
* For example, say we want to bound a variable-size base-58 string using a `u32` size prefix.
 * Here's how you can use the `addCodecSizePrefix` function to achieve that.
*
* ```ts
* const getU32Base58Codec = () => addCodecSizePrefix(getBase58Codec(), getU32Codec());
*
* getU32Base58Codec().encode('hello world');
* // 0x0b00000068656c6c6f20776f726c64
* // | └-- Our encoded base-58 string.
* // └-- Our encoded u32 size prefix.
* ```
*
* @remarks
* Separate {@link addEncoderSizePrefix} and {@link addDecoderSizePrefix} functions are also available.
*
* ```ts
* const bytes = addEncoderSizePrefix(getBase58Encoder(), getU32Encoder()).encode('hello');
* const value = addDecoderSizePrefix(getBase58Decoder(), getU32Decoder()).decode(bytes);
* ```
*
* @see {@link addEncoderSizePrefix}
* @see {@link addDecoderSizePrefix}
*/
export function addCodecSizePrefix<TFrom, TTo extends TFrom>(
codec: FixedSizeCodec<TFrom, TTo>,
prefix: FixedSizeNumberCodec,
): FixedSizeCodec<TFrom, TTo>;
export function addCodecSizePrefix<TFrom, TTo extends TFrom>(
codec: Codec<TFrom, TTo>,
prefix: NumberCodec,
): VariableSizeCodec<TFrom, TTo>;
export function addCodecSizePrefix<TFrom, TTo extends TFrom>(
codec: Codec<TFrom, TTo>,
prefix: NumberCodec,
): Codec<TFrom, TTo> {
return combineCodec(addEncoderSizePrefix(codec, prefix), addDecoderSizePrefix(codec, prefix));
}
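Stripped of the codec machinery, the size-prefix technique above reduces to the following sketch (helper names invented for the example; a little-endian `u32` prefix is assumed, matching the `getU32Codec` example in the docs):

```typescript
function encodeWithU32Prefix(payload: Uint8Array): Uint8Array {
    const out = new Uint8Array(4 + payload.length);
    // Store the payload length first, as a little-endian u32...
    new DataView(out.buffer).setUint32(0, payload.length, true);
    // ...then the payload itself.
    out.set(payload, 4);
    return out;
}

function decodeWithU32Prefix(bytes: Uint8Array): Uint8Array {
    // Read the size first to know how many bytes to read next.
    const size = new DataView(bytes.buffer, bytes.byteOffset).getUint32(0, true);
    if (bytes.length < 4 + size) throw new Error('Not enough bytes for declared size');
    return bytes.slice(4, 4 + size);
}

const encoded = encodeWithU32Prefix(new Uint8Array([1, 2, 3])); // 0x03000000 01 02 03
const decoded = decodeWithU32Prefix(encoded); // [1, 2, 3]
```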

node_modules/@solana/codecs-core/src/array-buffers.ts generated vendored Normal file

@@ -0,0 +1,25 @@
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Converts a `Uint8Array` to an `ArrayBuffer`. If the underlying buffer is a `SharedArrayBuffer`,
* it will be copied to a non-shared buffer, for safety.
*
* @remarks
* Source: https://stackoverflow.com/questions/37228285/uint8array-to-arraybuffer
*/
export function toArrayBuffer(bytes: ReadonlyUint8Array | Uint8Array, offset?: number, length?: number): ArrayBuffer {
const bytesOffset = bytes.byteOffset + (offset ?? 0);
const bytesLength = length ?? bytes.byteLength;
let buffer: ArrayBuffer;
if (typeof SharedArrayBuffer === 'undefined') {
buffer = bytes.buffer as ArrayBuffer;
} else if (bytes.buffer instanceof SharedArrayBuffer) {
buffer = new ArrayBuffer(bytes.length);
new Uint8Array(buffer).set(new Uint8Array(bytes));
} else {
buffer = bytes.buffer;
}
return (bytesOffset === 0 || bytesOffset === -bytes.byteLength) && bytesLength === bytes.byteLength
? buffer
: buffer.slice(bytesOffset, bytesOffset + bytesLength);
}
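To see why `toArrayBuffer` accounts for `byteOffset` and `byteLength`, note that a `Uint8Array` can be a view into a larger `ArrayBuffer`, so `bytes.buffer` alone may expose bytes outside the view:

```typescript
// A Uint8Array view over part of a larger backing buffer: `view.buffer` is
// still the full 5-byte store, so a correct conversion must slice by offset.
const backing = new Uint8Array([1, 2, 3, 4, 5]);
const view = backing.subarray(1, 4); // bytes [2, 3, 4], byteOffset = 1
const naive = view.buffer; // 5 bytes: includes data outside the view
const sliced = view.buffer.slice(view.byteOffset, view.byteOffset + view.byteLength);
```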

node_modules/@solana/codecs-core/src/assertions.ts generated vendored Normal file

@@ -0,0 +1,103 @@
import {
SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY,
SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH,
SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE,
SolanaError,
} from '@solana/errors';
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Asserts that a given byte array is not empty (after the optional provided offset).
*
* Returns void if the byte array is not empty but throws a {@link SolanaError} otherwise.
*
* @param codecDescription - A description of the codec used by the assertion error.
* @param bytes - The byte array to check.
* @param offset - The offset from which to start checking the byte array.
* If provided, the byte array is considered empty if it has no bytes after the offset.
*
* @example
* ```ts
* const bytes = new Uint8Array([0x01, 0x02, 0x03]);
* assertByteArrayIsNotEmptyForCodec('myCodec', bytes); // OK
* assertByteArrayIsNotEmptyForCodec('myCodec', bytes, 1); // OK
* assertByteArrayIsNotEmptyForCodec('myCodec', bytes, 3); // Throws
* ```
*/
export function assertByteArrayIsNotEmptyForCodec(
codecDescription: string,
bytes: ReadonlyUint8Array | Uint8Array,
offset = 0,
) {
if (bytes.length - offset <= 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__CANNOT_DECODE_EMPTY_BYTE_ARRAY, {
codecDescription,
});
}
}
/**
* Asserts that a given byte array has enough bytes to decode
* (after the optional provided offset).
*
* Returns void if the byte array has at least the expected number
* of bytes but throws a {@link SolanaError} otherwise.
*
* @param codecDescription - A description of the codec used by the assertion error.
* @param expected - The minimum number of bytes expected in the byte array.
* @param bytes - The byte array to check.
* @param offset - The offset from which to start checking the byte array.
*
* @example
* ```ts
* const bytes = new Uint8Array([0x01, 0x02, 0x03]);
* assertByteArrayHasEnoughBytesForCodec('myCodec', 3, bytes); // OK
* assertByteArrayHasEnoughBytesForCodec('myCodec', 4, bytes); // Throws
* assertByteArrayHasEnoughBytesForCodec('myCodec', 2, bytes, 1); // OK
* assertByteArrayHasEnoughBytesForCodec('myCodec', 3, bytes, 1); // Throws
* ```
*/
export function assertByteArrayHasEnoughBytesForCodec(
codecDescription: string,
expected: number,
bytes: ReadonlyUint8Array | Uint8Array,
offset = 0,
) {
const bytesLength = bytes.length - offset;
if (bytesLength < expected) {
throw new SolanaError(SOLANA_ERROR__CODECS__INVALID_BYTE_LENGTH, {
bytesLength,
codecDescription,
expected,
});
}
}
/**
* Asserts that a given offset is within the byte array bounds.
* This range is between 0 and the byte array length and is inclusive.
* An offset equals to the byte array length is considered a valid offset
* as it allows the post-offset of codecs to signal the end of the byte array.
*
* @param codecDescription - A description of the codec used by the assertion error.
* @param offset - The offset to check.
* @param bytesLength - The length of the byte array from which the offset should be within bounds.
*
* @example
* ```ts
* const bytes = new Uint8Array([0x01, 0x02, 0x03]);
* assertByteArrayOffsetIsNotOutOfRange('myCodec', 0, bytes.length); // OK
* assertByteArrayOffsetIsNotOutOfRange('myCodec', 3, bytes.length); // OK
* assertByteArrayOffsetIsNotOutOfRange('myCodec', 4, bytes.length); // Throws
* ```
*/
export function assertByteArrayOffsetIsNotOutOfRange(codecDescription: string, offset: number, bytesLength: number) {
if (offset < 0 || offset > bytesLength) {
throw new SolanaError(SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE, {
bytesLength,
codecDescription,
offset,
});
}
}
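The inclusive upper bound in `assertByteArrayOffsetIsNotOutOfRange` is the subtle part: an offset equal to the byte length is valid so a codec's post-offset can signal end-of-array. A standalone sketch of that rule, using a plain `Error` instead of `SolanaError`:

```typescript
// Hypothetical sketch of the offset bounds rule; the real code throws SolanaError.
function checkOffset(offset: number, bytesLength: number): void {
    // The valid range is [0, bytesLength] inclusive: offset === bytesLength
    // means "nothing left to read", which is still a legal post-offset.
    if (offset < 0 || offset > bytesLength) {
        throw new Error(`offset ${offset} is outside [0, ${bytesLength}]`);
    }
}

checkOffset(3, 3); // OK: end-of-array offset is valid by design
let outOfRange = false;
try {
    checkOffset(4, 3); // one past the end: rejected
} catch {
    outOfRange = true;
}
```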

node_modules/@solana/codecs-core/src/bytes.ts (new file: 148 lines, generated, vendored)
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Concatenates an array of `Uint8Array`s into a single `Uint8Array`.
* Reuses the original byte array when applicable.
*
* @param byteArrays - The array of byte arrays to concatenate.
*
* @example
* ```ts
* const bytes1 = new Uint8Array([0x01, 0x02]);
* const bytes2 = new Uint8Array([]);
* const bytes3 = new Uint8Array([0x03, 0x04]);
* const bytes = mergeBytes([bytes1, bytes2, bytes3]);
* // ^ [0x01, 0x02, 0x03, 0x04]
* ```
*/
export const mergeBytes = (byteArrays: Uint8Array[]): Uint8Array => {
const nonEmptyByteArrays = byteArrays.filter(arr => arr.length);
if (nonEmptyByteArrays.length === 0) {
return byteArrays.length ? byteArrays[0] : new Uint8Array();
}
if (nonEmptyByteArrays.length === 1) {
return nonEmptyByteArrays[0];
}
const totalLength = nonEmptyByteArrays.reduce((total, arr) => total + arr.length, 0);
const result = new Uint8Array(totalLength);
let offset = 0;
nonEmptyByteArrays.forEach(arr => {
result.set(arr, offset);
offset += arr.length;
});
return result;
};
/**
* Pads a `Uint8Array` with zeroes to the specified length.
* If the array is longer than the specified length, it is returned as-is.
*
* @param bytes - The byte array to pad.
* @param length - The desired length of the byte array.
*
* @example
* Adds zeroes to the end of the byte array to reach the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02]);
* const paddedBytes = padBytes(bytes, 4);
* // ^ [0x01, 0x02, 0x00, 0x00]
* ```
*
* @example
* Returns the original byte array if it is already at the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02]);
* const paddedBytes = padBytes(bytes, 2);
* // bytes === paddedBytes
* ```
*/
export function padBytes(bytes: Uint8Array, length: number): Uint8Array;
export function padBytes(bytes: ReadonlyUint8Array, length: number): ReadonlyUint8Array;
export function padBytes(bytes: ReadonlyUint8Array, length: number): ReadonlyUint8Array {
if (bytes.length >= length) return bytes;
const paddedBytes = new Uint8Array(length).fill(0);
paddedBytes.set(bytes);
return paddedBytes;
}
/**
* Fixes a `Uint8Array` to the specified length.
* If the array is longer than the specified length, it is truncated.
* If the array is shorter than the specified length, it is padded with zeroes.
*
* @param bytes - The byte array to truncate or pad.
* @param length - The desired length of the byte array.
*
* @example
* Truncates the byte array to the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
* const fixedBytes = fixBytes(bytes, 2);
* // ^ [0x01, 0x02]
* ```
*
* @example
* Adds zeroes to the end of the byte array to reach the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02]);
* const fixedBytes = fixBytes(bytes, 4);
* // ^ [0x01, 0x02, 0x00, 0x00]
* ```
*
* @example
* Returns the original byte array if it is already at the desired length.
* ```ts
* const bytes = new Uint8Array([0x01, 0x02]);
* const fixedBytes = fixBytes(bytes, 2);
* // bytes === fixedBytes
* ```
*/
export const fixBytes = (bytes: ReadonlyUint8Array | Uint8Array, length: number): ReadonlyUint8Array | Uint8Array =>
padBytes(bytes.length <= length ? bytes : bytes.slice(0, length), length);
/**
* Returns true if and only if the provided `data` byte array contains
* the provided `bytes` byte array at the specified `offset`.
*
* @param data - The byte array in which to search for `bytes`.
* @param bytes - The byte sequence to search for.
* @param offset - The position in `data` where the search begins.
*
* @example
* ```ts
* const data = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
* const bytes = new Uint8Array([0x02, 0x03]);
* containsBytes(data, bytes, 1); // true
* containsBytes(data, bytes, 2); // false
* ```
*/
export function containsBytes(
data: ReadonlyUint8Array | Uint8Array,
bytes: ReadonlyUint8Array | Uint8Array,
offset: number,
): boolean {
const slice =
(offset === 0 || offset <= -data.byteLength) && data.length === bytes.length
? data
: data.slice(offset, offset + bytes.length);
return bytesEqual(slice, bytes);
}
/**
* Returns true if and only if the provided `bytes1` and `bytes2` byte arrays are equal.
*
* @param bytes1 - The first byte array to compare.
* @param bytes2 - The second byte array to compare.
*
* @example
* ```ts
* const bytes1 = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
* const bytes2 = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
* bytesEqual(bytes1, bytes2); // true
* ```
*/
export function bytesEqual(bytes1: ReadonlyUint8Array | Uint8Array, bytes2: ReadonlyUint8Array | Uint8Array): boolean {
return bytes1.length === bytes2.length && bytes1.every((value, index) => value === bytes2[index]);
}
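The truncate-or-pad behavior of `fixBytes` can be condensed into one standalone sketch (the helper name `fixToLength` is hypothetical; it mirrors the semantics of `fixBytes` and `padBytes` above):

```typescript
// Hypothetical sketch of the fix-to-length rule: truncate when too long,
// zero-pad when too short, reuse the input when already the right length.
function fixToLength(bytes: Uint8Array, length: number): Uint8Array {
    if (bytes.length === length) return bytes; // already the right size: reuse
    const fixed = new Uint8Array(length);      // zero-filled by default
    fixed.set(bytes.subarray(0, length));      // subarray truncates when bytes is longer
    return fixed;
}

const truncated = fixToLength(new Uint8Array([1, 2, 3, 4]), 2); // [1, 2]
const padded = fixToLength(new Uint8Array([1, 2]), 4);          // [1, 2, 0, 0]
```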

node_modules/@solana/codecs-core/src/codec.ts (new file: 925 lines, generated, vendored)
import {
SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH,
SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH,
SolanaError,
} from '@solana/errors';
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Defines an offset in bytes.
*/
export type Offset = number;
/**
* An object that can encode a value of type {@link TFrom} into a {@link ReadonlyUint8Array}.
*
* This is a common interface for {@link FixedSizeEncoder} and {@link VariableSizeEncoder}.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
*
* @see {@link FixedSizeEncoder}
* @see {@link VariableSizeEncoder}
*/
type BaseEncoder<TFrom> = {
/** Encode the provided value and return the encoded bytes directly. */
readonly encode: (value: TFrom) => ReadonlyUint8Array<ArrayBuffer>;
/**
* Writes the encoded value into the provided byte array at the given offset.
* Returns the offset of the next byte after the encoded value.
*/
readonly write: (value: TFrom, bytes: Uint8Array, offset: Offset) => Offset;
};
/**
* An object that can encode a value of type {@link TFrom} into a fixed-size {@link ReadonlyUint8Array}.
*
* See {@link Encoder} to learn more about creating and composing encoders.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @example
* ```ts
* const encoder: FixedSizeEncoder<number, 4>;
* const bytes = encoder.encode(42);
* const size = encoder.fixedSize; // 4
* ```
*
* @see {@link Encoder}
* @see {@link VariableSizeEncoder}
*/
export type FixedSizeEncoder<TFrom, TSize extends number = number> = BaseEncoder<TFrom> & {
/** The fixed size of the encoded value in bytes. */
readonly fixedSize: TSize;
};
/**
* An object that can encode a value of type {@link TFrom} into a variable-size {@link ReadonlyUint8Array}.
*
* See {@link Encoder} to learn more about creating and composing encoders.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
*
* @example
* ```ts
* const encoder: VariableSizeEncoder<string>;
* const bytes = encoder.encode('hello');
* const size = encoder.getSizeFromValue('hello');
* ```
*
* @see {@link Encoder}
* @see {@link FixedSizeEncoder}
*/
export type VariableSizeEncoder<TFrom> = BaseEncoder<TFrom> & {
/** Returns the size of the encoded value in bytes for a given input. */
readonly getSizeFromValue: (value: TFrom) => number;
/** The maximum possible size of an encoded value in bytes, if applicable. */
readonly maxSize?: number;
};
/**
* An object that can encode a value of type {@link TFrom} into a {@link ReadonlyUint8Array}.
*
* An `Encoder` can be either:
* - A {@link FixedSizeEncoder}, where all encoded values have the same fixed size.
* - A {@link VariableSizeEncoder}, where encoded values can vary in size.
*
* @typeParam TFrom - The type of the value to encode.
*
* @example
* Encoding a value into a new byte array.
* ```ts
* const encoder: Encoder<string>;
* const bytes = encoder.encode('hello');
* ```
*
* @example
* Writing the encoded value into an existing byte array.
* ```ts
* const encoder: Encoder<string>;
* const bytes = new Uint8Array(100);
* const nextOffset = encoder.write('hello', bytes, 20);
* ```
*
* @remarks
* You may create `Encoders` manually using the {@link createEncoder} function but it is more common
* to compose multiple `Encoders` together using the various helpers of the `@solana/codecs` package.
*
* For instance, here's how you might create an `Encoder` for a `Person` object type that contains
* a `name` string and an `age` number:
*
* ```ts
* import { getStructEncoder, addEncoderSizePrefix, getUtf8Encoder, getU32Encoder } from '@solana/codecs';
*
* type Person = { name: string; age: number };
* const getPersonEncoder = (): Encoder<Person> =>
* getStructEncoder([
* ['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
* ['age', getU32Encoder()],
* ]);
* ```
*
* Note that composed `Encoder` types are clever enough to understand whether
* they are fixed-size or variable-size. In the example above, `getU32Encoder()` is
* a fixed-size encoder, while `addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())`
* is a variable-size encoder. This makes the final `Person` encoder a variable-size encoder.
*
* @see {@link FixedSizeEncoder}
* @see {@link VariableSizeEncoder}
* @see {@link createEncoder}
*/
export type Encoder<TFrom> = FixedSizeEncoder<TFrom> | VariableSizeEncoder<TFrom>;
/**
* An object that can decode a byte array into a value of type {@link TTo}.
*
* This is a common interface for {@link FixedSizeDecoder} and {@link VariableSizeDecoder}.
*
* @interface
* @typeParam TTo - The type of the decoded value.
*
* @see {@link FixedSizeDecoder}
* @see {@link VariableSizeDecoder}
*/
type BaseDecoder<TTo> = {
/** Decodes the provided byte array at the given offset (or zero) and returns the value directly. */
readonly decode: (bytes: ReadonlyUint8Array | Uint8Array, offset?: Offset) => TTo;
/**
* Reads the encoded value from the provided byte array at the given offset.
* Returns the decoded value and the offset of the next byte after the encoded value.
*/
readonly read: (bytes: ReadonlyUint8Array | Uint8Array, offset: Offset) => [TTo, Offset];
};
/**
* An object that can decode a fixed-size byte array into a value of type {@link TTo}.
*
* See {@link Decoder} to learn more about creating and composing decoders.
*
* @interface
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @example
* ```ts
* const decoder: FixedSizeDecoder<number, 4>;
* const value = decoder.decode(bytes);
* const size = decoder.fixedSize; // 4
* ```
*
* @see {@link Decoder}
* @see {@link VariableSizeDecoder}
*/
export type FixedSizeDecoder<TTo, TSize extends number = number> = BaseDecoder<TTo> & {
/** The fixed size of the encoded value in bytes. */
readonly fixedSize: TSize;
};
/**
* An object that can decode a variable-size byte array into a value of type {@link TTo}.
*
* See {@link Decoder} to learn more about creating and composing decoders.
*
* @interface
* @typeParam TTo - The type of the decoded value.
*
* @example
* ```ts
* const decoder: VariableSizeDecoder<number>;
* const value = decoder.decode(bytes);
* ```
*
* @see {@link Decoder}
 * @see {@link FixedSizeDecoder}
*/
export type VariableSizeDecoder<TTo> = BaseDecoder<TTo> & {
/** The maximum possible size of an encoded value in bytes, if applicable. */
readonly maxSize?: number;
};
/**
* An object that can decode a byte array into a value of type {@link TTo}.
*
 * A `Decoder` can be either:
* - A {@link FixedSizeDecoder}, where all byte arrays have the same fixed size.
* - A {@link VariableSizeDecoder}, where byte arrays can vary in size.
*
* @typeParam TTo - The type of the decoded value.
*
* @example
* Getting the decoded value from a byte array.
* ```ts
* const decoder: Decoder<string>;
* const value = decoder.decode(bytes);
* ```
*
* @example
* Reading the decoded value from a byte array at a specific offset
* and getting the offset of the next byte to read.
* ```ts
* const decoder: Decoder<string>;
 * const [value, nextOffset] = decoder.read(bytes, 20);
* ```
*
* @remarks
* You may create `Decoders` manually using the {@link createDecoder} function but it is more common
* to compose multiple `Decoders` together using the various helpers of the `@solana/codecs` package.
*
 * For instance, here's how you might create a `Decoder` for a `Person` object type that contains
* a `name` string and an `age` number:
*
* ```ts
* import { getStructDecoder, addDecoderSizePrefix, getUtf8Decoder, getU32Decoder } from '@solana/codecs';
*
* type Person = { name: string; age: number };
* const getPersonDecoder = (): Decoder<Person> =>
* getStructDecoder([
* ['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
* ['age', getU32Decoder()],
* ]);
* ```
*
* Note that composed `Decoder` types are clever enough to understand whether
* they are fixed-size or variable-size. In the example above, `getU32Decoder()` is
* a fixed-size decoder, while `addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())`
* is a variable-size decoder. This makes the final `Person` decoder a variable-size decoder.
*
* @see {@link FixedSizeDecoder}
* @see {@link VariableSizeDecoder}
* @see {@link createDecoder}
*/
export type Decoder<TTo> = FixedSizeDecoder<TTo> | VariableSizeDecoder<TTo>;
/**
* An object that can encode and decode a value to and from a fixed-size byte array.
*
* See {@link Codec} to learn more about creating and composing codecs.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @example
* ```ts
* const codec: FixedSizeCodec<number | bigint, bigint, 8>;
* const bytes = codec.encode(42);
* const value = codec.decode(bytes); // 42n
* const size = codec.fixedSize; // 8
* ```
*
* @see {@link Codec}
* @see {@link VariableSizeCodec}
*/
export type FixedSizeCodec<TFrom, TTo extends TFrom = TFrom, TSize extends number = number> = FixedSizeDecoder<
TTo,
TSize
> &
FixedSizeEncoder<TFrom, TSize>;
/**
* An object that can encode and decode a value to and from a variable-size byte array.
*
* See {@link Codec} to learn more about creating and composing codecs.
*
* @interface
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @example
* ```ts
* const codec: VariableSizeCodec<number | bigint, bigint>;
* const bytes = codec.encode(42);
* const value = codec.decode(bytes); // 42n
* const size = codec.getSizeFromValue(42);
* ```
*
* @see {@link Codec}
* @see {@link FixedSizeCodec}
*/
export type VariableSizeCodec<TFrom, TTo extends TFrom = TFrom> = VariableSizeDecoder<TTo> & VariableSizeEncoder<TFrom>;
/**
* An object that can encode and decode a value to and from a byte array.
*
* A `Codec` can be either:
* - A {@link FixedSizeCodec}, where all encoded values have the same fixed size.
* - A {@link VariableSizeCodec}, where encoded values can vary in size.
*
* @example
* ```ts
* const codec: Codec<string>;
* const bytes = codec.encode('hello');
* const value = codec.decode(bytes); // 'hello'
* ```
*
* @remarks
* For convenience, codecs can encode looser types than they decode.
* That is, type {@link TFrom} can be a superset of type {@link TTo}.
* For instance, a `Codec<bigint | number, bigint>` can encode both
* `bigint` and `number` values, but will always decode to a `bigint`.
*
* ```ts
* const codec: Codec<bigint | number, bigint>;
* const bytes = codec.encode(42);
* const value = codec.decode(bytes); // 42n
* ```
*
* It is worth noting that codecs are the union of encoders and decoders.
* This means that a `Codec<TFrom, TTo>` can be combined from an `Encoder<TFrom>`
* and a `Decoder<TTo>` using the {@link combineCodec} function. This is particularly
* useful for library authors who want to expose all three types of objects to their users.
*
* ```ts
* const encoder: Encoder<bigint | number>;
* const decoder: Decoder<bigint>;
* const codec: Codec<bigint | number, bigint> = combineCodec(encoder, decoder);
* ```
*
* Aside from combining encoders and decoders, codecs can also be created from scratch using
* the {@link createCodec} function but it is more common to compose multiple codecs together
* using the various helpers of the `@solana/codecs` package.
*
* For instance, here's how you might create a `Codec` for a `Person` object type that contains
* a `name` string and an `age` number:
*
* ```ts
* import { getStructCodec, addCodecSizePrefix, getUtf8Codec, getU32Codec } from '@solana/codecs';
*
* type Person = { name: string; age: number };
* const getPersonCodec = (): Codec<Person> =>
* getStructCodec([
* ['name', addCodecSizePrefix(getUtf8Codec(), getU32Codec())],
* ['age', getU32Codec()],
* ]);
* ```
*
* Note that composed `Codec` types are clever enough to understand whether
* they are fixed-size or variable-size. In the example above, `getU32Codec()` is
* a fixed-size codec, while `addCodecSizePrefix(getUtf8Codec(), getU32Codec())`
* is a variable-size codec. This makes the final `Person` codec a variable-size codec.
*
* @see {@link FixedSizeCodec}
* @see {@link VariableSizeCodec}
* @see {@link combineCodec}
* @see {@link createCodec}
*/
export type Codec<TFrom, TTo extends TFrom = TFrom> = FixedSizeCodec<TFrom, TTo> | VariableSizeCodec<TFrom, TTo>;
/**
* Gets the encoded size of a given value in bytes using the provided encoder.
*
* @typeParam TFrom - The type of the value to encode.
* @param value - The value to be encoded.
* @param encoder - The encoder used to determine the encoded size.
* @returns The size of the encoded value in bytes.
*
* @example
* ```ts
* const fixedSizeEncoder = { fixedSize: 4 };
* getEncodedSize(123, fixedSizeEncoder); // Returns 4.
*
* const variableSizeEncoder = { getSizeFromValue: (value: string) => value.length };
* getEncodedSize("hello", variableSizeEncoder); // Returns 5.
* ```
*
* @see {@link Encoder}
*/
export function getEncodedSize<TFrom>(
value: TFrom,
encoder: { fixedSize: number } | { getSizeFromValue: (value: TFrom) => number },
): number {
return 'fixedSize' in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
}
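The `'fixedSize' in encoder` dispatch above is plain structural typing, so it works with any object shaped like an encoder. A self-contained sketch (reimplementing the function locally so the example runs on its own):

```typescript
// Standalone copy of the fixedSize / getSizeFromValue dispatch shown above.
type SizeSource<T> = { fixedSize: number } | { getSizeFromValue: (value: T) => number };

function encodedSize<T>(value: T, encoder: SizeSource<T>): number {
    // The `in` check narrows the union: fixed-size encoders ignore the value entirely.
    return 'fixedSize' in encoder ? encoder.fixedSize : encoder.getSizeFromValue(value);
}

const fixedResult = encodedSize(123, { fixedSize: 4 });                                  // 4
const variableResult = encodedSize('hello', { getSizeFromValue: (v: string) => v.length }); // 5
```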
/**
* Creates an `Encoder` by filling in the missing `encode` function using the provided `write` function and
* either the `fixedSize` property (for {@link FixedSizeEncoder | FixedSizeEncoders}) or
* the `getSizeFromValue` function (for {@link VariableSizeEncoder | VariableSizeEncoders}).
*
* Instead of manually implementing `encode`, this utility leverages the existing `write` function
* and the size helpers to generate a complete encoder. The provided `encode` method will allocate
* a new `Uint8Array` of the correct size and use `write` to populate it.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The fixed size of the encoded value in bytes (for fixed-size encoders).
*
* @param encoder - An encoder object that implements `write`, but not `encode`.
* - If the encoder has a `fixedSize` property, it is treated as a {@link FixedSizeEncoder}.
* - Otherwise, it is treated as a {@link VariableSizeEncoder}.
*
* @returns A fully functional `Encoder` with both `write` and `encode` methods.
*
* @example
* Creating a custom fixed-size encoder.
* ```ts
* const encoder = createEncoder({
* fixedSize: 4,
* write: (value: number, bytes, offset) => {
* bytes.set(new Uint8Array([value]), offset);
* return offset + 4;
* },
* });
*
* const bytes = encoder.encode(42);
* // 0x2a000000
* ```
*
* @example
* Creating a custom variable-size encoder:
* ```ts
* const encoder = createEncoder({
* getSizeFromValue: (value: string) => value.length,
* write: (value: string, bytes, offset) => {
* const encodedValue = new TextEncoder().encode(value);
* bytes.set(encodedValue, offset);
* return offset + encodedValue.length;
* },
* });
*
* const bytes = encoder.encode("hello");
* // 0x68656c6c6f
* ```
*
* @remarks
* Note that, while `createEncoder` is useful for defining more complex encoders, it is more common to compose
* encoders together using the various helpers and primitives of the `@solana/codecs` package.
*
* Here are some alternative examples using codec primitives instead of `createEncoder`.
*
* ```ts
* // Fixed-size encoder for unsigned 32-bit integers.
* const encoder = getU32Encoder();
* const bytes = encoder.encode(42);
* // 0x2a000000
*
 * // Variable-size encoder for u32 size-prefixed UTF-8 strings.
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* const bytes = encoder.encode("hello");
* // 0x0500000068656c6c6f
*
* // Variable-size encoder for custom objects.
* type Person = { name: string; age: number };
* const encoder: Encoder<Person> = getStructEncoder([
* ['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
* ['age', getU32Encoder()],
* ]);
* const bytes = encoder.encode({ name: "Bob", age: 42 });
* // 0x03000000426f622a000000
* ```
*
* @see {@link Encoder}
* @see {@link FixedSizeEncoder}
* @see {@link VariableSizeEncoder}
* @see {@link getStructEncoder}
* @see {@link getU32Encoder}
* @see {@link getUtf8Encoder}
* @see {@link addEncoderSizePrefix}
*/
export function createEncoder<TFrom, TSize extends number>(
encoder: Omit<FixedSizeEncoder<TFrom, TSize>, 'encode'>,
): FixedSizeEncoder<TFrom, TSize>;
export function createEncoder<TFrom>(encoder: Omit<VariableSizeEncoder<TFrom>, 'encode'>): VariableSizeEncoder<TFrom>;
export function createEncoder<TFrom>(
encoder: Omit<FixedSizeEncoder<TFrom>, 'encode'> | Omit<VariableSizeEncoder<TFrom>, 'encode'>,
): Encoder<TFrom>;
export function createEncoder<TFrom>(
encoder: Omit<FixedSizeEncoder<TFrom>, 'encode'> | Omit<VariableSizeEncoder<TFrom>, 'encode'>,
): Encoder<TFrom> {
return Object.freeze({
...encoder,
encode: value => {
const bytes = new Uint8Array(getEncodedSize(value, encoder));
encoder.write(value, bytes, 0);
return bytes;
},
});
}
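The derivation performed by `createEncoder` can be sketched standalone: `encode` is just "allocate a zero-filled buffer of the right size, then delegate to `write`". The factory name `makeFixedEncoder` is hypothetical; only the pattern matches the function above.

```typescript
// Hypothetical fixed-size-only sketch of how createEncoder derives `encode` from `write`.
type WriteFn<T> = (value: T, bytes: Uint8Array, offset: number) => number;

function makeFixedEncoder<T>(fixedSize: number, write: WriteFn<T>) {
    return Object.freeze({
        fixedSize,
        write,
        encode: (value: T): Uint8Array => {
            const bytes = new Uint8Array(fixedSize); // zero-filled allocation
            write(value, bytes, 0);                  // populate in place
            return bytes;
        },
    });
}

// A little-endian u32 encoder built with the factory.
const u32le = makeFixedEncoder<number>(4, (value, bytes, offset) => {
    new DataView(bytes.buffer, bytes.byteOffset).setUint32(offset, value, true);
    return offset + 4;
});

const encodedU32 = u32le.encode(42); // bytes [42, 0, 0, 0]
```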
/**
* Creates a `Decoder` by filling in the missing `decode` function using the provided `read` function.
*
* Instead of manually implementing `decode`, this utility leverages the existing `read` function
* and the size properties to generate a complete decoder. The provided `decode` method will read
* from a `Uint8Array` at the given offset and return the decoded value.
*
* If the `fixedSize` property is provided, a {@link FixedSizeDecoder} will be created, otherwise
* a {@link VariableSizeDecoder} will be created.
*
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes (for fixed-size decoders).
*
* @param decoder - A decoder object that implements `read`, but not `decode`.
* - If the decoder has a `fixedSize` property, it is treated as a {@link FixedSizeDecoder}.
* - Otherwise, it is treated as a {@link VariableSizeDecoder}.
*
* @returns A fully functional `Decoder` with both `read` and `decode` methods.
*
* @example
* Creating a custom fixed-size decoder.
* ```ts
* const decoder = createDecoder({
* fixedSize: 4,
* read: (bytes, offset) => {
* const value = bytes[offset];
* return [value, offset + 4];
* },
* });
*
* const value = decoder.decode(new Uint8Array([42, 0, 0, 0]));
* // 42
* ```
*
* @example
* Creating a custom variable-size decoder:
* ```ts
* const decoder = createDecoder({
* read: (bytes, offset) => {
* const decodedValue = new TextDecoder().decode(bytes.subarray(offset));
* return [decodedValue, bytes.length];
* },
* });
*
* const value = decoder.decode(new Uint8Array([104, 101, 108, 108, 111]));
* // "hello"
* ```
*
* @remarks
* Note that, while `createDecoder` is useful for defining more complex decoders, it is more common to compose
* decoders together using the various helpers and primitives of the `@solana/codecs` package.
*
* Here are some alternative examples using codec primitives instead of `createDecoder`.
*
* ```ts
* // Fixed-size decoder for unsigned 32-bit integers.
* const decoder = getU32Decoder();
* const value = decoder.decode(new Uint8Array([42, 0, 0, 0]));
* // 42
*
 * // Variable-size decoder for u32 size-prefixed UTF-8 strings.
* const decoder = addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder());
* const value = decoder.decode(new Uint8Array([5, 0, 0, 0, 104, 101, 108, 108, 111]));
* // "hello"
*
* // Variable-size decoder for custom objects.
* type Person = { name: string; age: number };
* const decoder: Decoder<Person> = getStructDecoder([
* ['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
* ['age', getU32Decoder()],
* ]);
* const value = decoder.decode(new Uint8Array([3, 0, 0, 0, 66, 111, 98, 42, 0, 0, 0]));
* // { name: "Bob", age: 42 }
* ```
*
* @see {@link Decoder}
* @see {@link FixedSizeDecoder}
* @see {@link VariableSizeDecoder}
* @see {@link getStructDecoder}
* @see {@link getU32Decoder}
* @see {@link getUtf8Decoder}
* @see {@link addDecoderSizePrefix}
*/
export function createDecoder<TTo, TSize extends number>(
decoder: Omit<FixedSizeDecoder<TTo, TSize>, 'decode'>,
): FixedSizeDecoder<TTo, TSize>;
export function createDecoder<TTo>(decoder: Omit<VariableSizeDecoder<TTo>, 'decode'>): VariableSizeDecoder<TTo>;
export function createDecoder<TTo>(
decoder: Omit<FixedSizeDecoder<TTo>, 'decode'> | Omit<VariableSizeDecoder<TTo>, 'decode'>,
): Decoder<TTo>;
export function createDecoder<TTo>(
decoder: Omit<FixedSizeDecoder<TTo>, 'decode'> | Omit<VariableSizeDecoder<TTo>, 'decode'>,
): Decoder<TTo> {
return Object.freeze({
...decoder,
decode: (bytes, offset = 0) => decoder.read(bytes, offset)[0],
});
}
/**
* Creates a `Codec` by filling in the missing `encode` and `decode` functions using the provided `write` and `read` functions.
*
* This utility combines the behavior of {@link createEncoder} and {@link createDecoder} to produce a fully functional `Codec`.
* The `encode` method is derived from the `write` function, while the `decode` method is derived from the `read` function.
*
* If the `fixedSize` property is provided, a {@link FixedSizeCodec} will be created, otherwise
* a {@link VariableSizeCodec} will be created.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes (for fixed-size codecs).
*
* @param codec - A codec object that implements `write` and `read`, but not `encode` or `decode`.
* - If the codec has a `fixedSize` property, it is treated as a {@link FixedSizeCodec}.
* - Otherwise, it is treated as a {@link VariableSizeCodec}.
*
* @returns A fully functional `Codec` with `write`, `read`, `encode`, and `decode` methods.
*
* @example
* Creating a custom fixed-size codec.
* ```ts
* const codec = createCodec({
* fixedSize: 4,
* read: (bytes, offset) => {
* const value = bytes[offset];
* return [value, offset + 4];
* },
* write: (value: number, bytes, offset) => {
* bytes.set(new Uint8Array([value]), offset);
* return offset + 4;
* },
* });
*
* const bytes = codec.encode(42);
* // 0x2a000000
* const value = codec.decode(bytes);
* // 42
* ```
*
* @example
* Creating a custom variable-size codec:
* ```ts
* const codec = createCodec({
* getSizeFromValue: (value: string) => value.length,
* read: (bytes, offset) => {
* const decodedValue = new TextDecoder().decode(bytes.subarray(offset));
* return [decodedValue, bytes.length];
* },
* write: (value: string, bytes, offset) => {
* const encodedValue = new TextEncoder().encode(value);
* bytes.set(encodedValue, offset);
* return offset + encodedValue.length;
* },
* });
*
* const bytes = codec.encode("hello");
* // 0x68656c6c6f
* const value = codec.decode(bytes);
* // "hello"
* ```
*
* @remarks
* This function effectively combines the behavior of {@link createEncoder} and {@link createDecoder}.
* If you only need to encode or decode (but not both), consider using those functions instead.
*
* Here are some alternative examples using codec primitives instead of `createCodec`.
*
* ```ts
* // Fixed-size codec for unsigned 32-bit integers.
* const codec = getU32Codec();
* const bytes = codec.encode(42);
* // 0x2a000000
* const value = codec.decode(bytes);
* // 42
*
 * // Variable-size codec for u32 size-prefixed UTF-8 strings.
* const codec = addCodecSizePrefix(getUtf8Codec(), getU32Codec());
* const bytes = codec.encode("hello");
* // 0x0500000068656c6c6f
* const value = codec.decode(bytes);
* // "hello"
*
* // Variable-size codec for custom objects.
* type Person = { name: string; age: number };
* const codec: Codec<PersonInput, Person> = getStructCodec([
* ['name', addCodecSizePrefix(getUtf8Codec(), getU32Codec())],
* ['age', getU32Codec()],
* ]);
* const bytes = codec.encode({ name: "Bob", age: 42 });
* // 0x03000000426f622a000000
* const value = codec.decode(bytes);
* // { name: "Bob", age: 42 }
* ```
*
* @see {@link Codec}
* @see {@link FixedSizeCodec}
* @see {@link VariableSizeCodec}
* @see {@link createEncoder}
* @see {@link createDecoder}
* @see {@link getStructCodec}
* @see {@link getU32Codec}
* @see {@link getUtf8Codec}
* @see {@link addCodecSizePrefix}
*/
export function createCodec<TFrom, TTo extends TFrom = TFrom, TSize extends number = number>(
codec: Omit<FixedSizeCodec<TFrom, TTo, TSize>, 'decode' | 'encode'>,
): FixedSizeCodec<TFrom, TTo, TSize>;
export function createCodec<TFrom, TTo extends TFrom = TFrom>(
codec: Omit<VariableSizeCodec<TFrom, TTo>, 'decode' | 'encode'>,
): VariableSizeCodec<TFrom, TTo>;
export function createCodec<TFrom, TTo extends TFrom = TFrom>(
codec:
| Omit<FixedSizeCodec<TFrom, TTo>, 'decode' | 'encode'>
| Omit<VariableSizeCodec<TFrom, TTo>, 'decode' | 'encode'>,
): Codec<TFrom, TTo>;
export function createCodec<TFrom, TTo extends TFrom = TFrom>(
codec:
| Omit<FixedSizeCodec<TFrom, TTo>, 'decode' | 'encode'>
| Omit<VariableSizeCodec<TFrom, TTo>, 'decode' | 'encode'>,
): Codec<TFrom, TTo> {
return Object.freeze({
...codec,
decode: (bytes, offset = 0) => codec.read(bytes, offset)[0],
encode: value => {
const bytes = new Uint8Array(getEncodedSize(value, codec));
codec.write(value, bytes, 0);
return bytes;
},
});
}
/**
* Determines whether the given codec, encoder, or decoder is fixed-size.
*
* A fixed-size object is identified by the presence of a `fixedSize` property.
* If this property exists, the object is considered a {@link FixedSizeCodec},
* {@link FixedSizeEncoder}, or {@link FixedSizeDecoder}.
* Otherwise, it is assumed to be a {@link VariableSizeCodec},
* {@link VariableSizeEncoder}, or {@link VariableSizeDecoder}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
* @returns `true` if the object is fixed-size, `false` otherwise.
*
* @example
* Checking a fixed-size encoder.
* ```ts
* const encoder = getU32Encoder();
* isFixedSize(encoder); // true
* ```
*
* @example
* Checking a variable-size encoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* isFixedSize(encoder); // false
* ```
*
* @remarks
* This function is commonly used to distinguish between fixed-size and variable-size objects at runtime.
* If you need to enforce this distinction with type assertions, consider using {@link assertIsFixedSize}.
*
* @see {@link assertIsFixedSize}
*/
export function isFixedSize<TFrom, TSize extends number>(
encoder: FixedSizeEncoder<TFrom, TSize> | VariableSizeEncoder<TFrom>,
): encoder is FixedSizeEncoder<TFrom, TSize>;
export function isFixedSize<TTo, TSize extends number>(
decoder: FixedSizeDecoder<TTo, TSize> | VariableSizeDecoder<TTo>,
): decoder is FixedSizeDecoder<TTo, TSize>;
export function isFixedSize<TFrom, TTo extends TFrom, TSize extends number>(
codec: FixedSizeCodec<TFrom, TTo, TSize> | VariableSizeCodec<TFrom, TTo>,
): codec is FixedSizeCodec<TFrom, TTo, TSize>;
export function isFixedSize<TSize extends number>(
codec: { fixedSize: TSize } | { maxSize?: number },
): codec is { fixedSize: TSize };
export function isFixedSize(codec: { fixedSize: number } | { maxSize?: number }): codec is { fixedSize: number } {
return 'fixedSize' in codec && typeof codec.fixedSize === 'number';
}
/**
* Asserts that the given codec, encoder, or decoder is fixed-size.
*
* If the object is not fixed-size (i.e., it lacks a `fixedSize` property),
* this function throws a {@link SolanaError} with the code `SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH`.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
* @throws {SolanaError} If the object is not fixed-size.
*
* @example
* Asserting a fixed-size encoder.
* ```ts
* const encoder = getU32Encoder();
* assertIsFixedSize(encoder); // Passes
* ```
*
* @example
* Attempting to assert a variable-size encoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* assertIsFixedSize(encoder); // Throws SolanaError
* ```
*
* @remarks
* This function is the assertion-based counterpart of {@link isFixedSize}.
* If you only need to check whether an object is fixed-size without throwing an error, use {@link isFixedSize} instead.
*
* @see {@link isFixedSize}
*/
export function assertIsFixedSize<TFrom, TSize extends number>(
encoder: FixedSizeEncoder<TFrom, TSize> | VariableSizeEncoder<TFrom>,
): asserts encoder is FixedSizeEncoder<TFrom, TSize>;
export function assertIsFixedSize<TTo, TSize extends number>(
decoder: FixedSizeDecoder<TTo, TSize> | VariableSizeDecoder<TTo>,
): asserts decoder is FixedSizeDecoder<TTo, TSize>;
export function assertIsFixedSize<TFrom, TTo extends TFrom, TSize extends number>(
codec: FixedSizeCodec<TFrom, TTo, TSize> | VariableSizeCodec<TFrom, TTo>,
): asserts codec is FixedSizeCodec<TFrom, TTo, TSize>;
export function assertIsFixedSize<TSize extends number>(
codec: { fixedSize: TSize } | { maxSize?: number },
): asserts codec is { fixedSize: TSize };
export function assertIsFixedSize(
codec: { fixedSize: number } | { maxSize?: number },
): asserts codec is { fixedSize: number } {
if (!isFixedSize(codec)) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_FIXED_LENGTH);
}
}
/**
* Determines whether the given codec, encoder, or decoder is variable-size.
*
* A variable-size object is identified by the absence of a `fixedSize` property.
* If this property is missing, the object is considered a {@link VariableSizeCodec},
* {@link VariableSizeEncoder}, or {@link VariableSizeDecoder}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
* @returns `true` if the object is variable-size, `false` otherwise.
*
* @example
* Checking a variable-size encoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* isVariableSize(encoder); // true
* ```
*
* @example
* Checking a fixed-size encoder.
* ```ts
* const encoder = getU32Encoder();
* isVariableSize(encoder); // false
* ```
*
* @remarks
* This function is the inverse of {@link isFixedSize}.
*
* @see {@link isFixedSize}
* @see {@link assertIsVariableSize}
*/
export function isVariableSize<TFrom>(encoder: Encoder<TFrom>): encoder is VariableSizeEncoder<TFrom>;
export function isVariableSize<TTo>(decoder: Decoder<TTo>): decoder is VariableSizeDecoder<TTo>;
export function isVariableSize<TFrom, TTo extends TFrom>(
codec: Codec<TFrom, TTo>,
): codec is VariableSizeCodec<TFrom, TTo>;
export function isVariableSize(codec: { fixedSize: number } | { maxSize?: number }): codec is { maxSize?: number };
export function isVariableSize(codec: { fixedSize: number } | { maxSize?: number }): codec is { maxSize?: number } {
return !isFixedSize(codec);
}
/**
* Asserts that the given codec, encoder, or decoder is variable-size.
*
* If the object is not variable-size (i.e., it has a `fixedSize` property),
* this function throws a {@link SolanaError} with the code `SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH`.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
* @throws {SolanaError} If the object is not variable-size.
*
* @example
* Asserting a variable-size encoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* assertIsVariableSize(encoder); // Passes
* ```
*
* @example
* Attempting to assert a fixed-size encoder.
* ```ts
* const encoder = getU32Encoder();
* assertIsVariableSize(encoder); // Throws SolanaError
* ```
*
* @remarks
* This function is the assertion-based counterpart of {@link isVariableSize}.
* If you only need to check whether an object is variable-size without throwing an error, use {@link isVariableSize} instead.
*
* Also note that this function is the inverse of {@link assertIsFixedSize}.
*
* @see {@link isVariableSize}
* @see {@link assertIsFixedSize}
*/
export function assertIsVariableSize<TFrom>(encoder: Encoder<TFrom>): asserts encoder is VariableSizeEncoder<TFrom>;
export function assertIsVariableSize<TTo>(decoder: Decoder<TTo>): asserts decoder is VariableSizeDecoder<TTo>;
export function assertIsVariableSize<TFrom, TTo extends TFrom>(
codec: Codec<TFrom, TTo>,
): asserts codec is VariableSizeCodec<TFrom, TTo>;
export function assertIsVariableSize(
codec: { fixedSize: number } | { maxSize?: number },
): asserts codec is { maxSize?: number };
export function assertIsVariableSize(
codec: { fixedSize: number } | { maxSize?: number },
): asserts codec is { maxSize?: number } {
if (!isVariableSize(codec)) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_VARIABLE_LENGTH);
}
}
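
The four size guards above share one discriminating check: the presence of a numeric `fixedSize` property. As a standalone illustration (the `SizedLike`, `hasFixedSize`, and `encodedSize` names below are invented for this sketch and are not part of this package), the same narrowing pattern looks like this:

```typescript
// Minimal standalone sketch of the fixed/variable size guard pattern.
// The shapes mirror FixedSizeEncoder and VariableSizeEncoder, but nothing
// here is imported from @solana/codecs-core.
type SizedLike<T> =
    | { fixedSize: number }
    | { getSizeFromValue: (value: T) => number; maxSize?: number };

function hasFixedSize<T>(sized: SizedLike<T>): sized is { fixedSize: number } {
    return 'fixedSize' in sized && typeof sized.fixedSize === 'number';
}

// Computing the encoded size of a value works the same way for both shapes.
function encodedSize<T>(sized: SizedLike<T>, value: T): number {
    return hasFixedSize(sized) ? sized.fixedSize : sized.getSizeFromValue(value);
}

const u32Like: SizedLike<number> = { fixedSize: 4 };
const utf8Like: SizedLike<string> = { getSizeFromValue: v => v.length };
```

Because all three object kinds (encoders, decoders, codecs) carry the same size discriminant, one guard can serve them all via overloads, as in the real `isFixedSize` above.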

node_modules/@solana/codecs-core/src/combine-codec.ts generated vendored Normal file

@@ -0,0 +1,133 @@
import {
SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH,
SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH,
SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH,
SolanaError,
} from '@solana/errors';
import {
Codec,
Decoder,
Encoder,
FixedSizeCodec,
FixedSizeDecoder,
FixedSizeEncoder,
isFixedSize,
VariableSizeCodec,
VariableSizeDecoder,
VariableSizeEncoder,
} from './codec';
/**
* Combines an `Encoder` and a `Decoder` into a `Codec`.
*
 * That is, given an `Encoder<TFrom>` and a `Decoder<TTo>`, this function returns a `Codec<TFrom, TTo>`.
*
* This allows for modular composition by keeping encoding and decoding logic separate
* while still offering a convenient way to bundle them into a single `Codec`.
* This is particularly useful for library maintainers who want to expose `Encoders`,
* `Decoders`, and `Codecs` separately, enabling tree-shaking of unused logic.
*
* The provided `Encoder` and `Decoder` must be compatible in terms of:
* - **Fixed Size:** If both are fixed-size, they must have the same `fixedSize` value.
* - **Variable Size:** If either has a `maxSize` attribute, it must match the other.
*
* If these conditions are not met, a {@link SolanaError} will be thrown.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes (for fixed-size codecs).
*
* @param encoder - The `Encoder` to combine.
* @param decoder - The `Decoder` to combine.
* @returns A `Codec` that provides both `encode` and `decode` methods.
*
* @throws {SolanaError}
* - `SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH`
* Thrown if the encoder and decoder have mismatched size types (fixed vs. variable).
* - `SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH`
* Thrown if both are fixed-size but have different `fixedSize` values.
* - `SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH`
* Thrown if the `maxSize` attributes do not match.
*
* @example
* Creating a fixed-size `Codec` from an encoder and a decoder.
* ```ts
* const encoder = getU32Encoder();
* const decoder = getU32Decoder();
* const codec = combineCodec(encoder, decoder);
*
* const bytes = codec.encode(42); // 0x2a000000
* const value = codec.decode(bytes); // 42
* ```
*
* @example
* Creating a variable-size `Codec` from an encoder and a decoder.
* ```ts
* const encoder = addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder());
* const decoder = addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder());
* const codec = combineCodec(encoder, decoder);
*
* const bytes = codec.encode("hello"); // 0x0500000068656c6c6f
* const value = codec.decode(bytes); // "hello"
* ```
*
* @remarks
* The recommended pattern for defining codecs in libraries is to expose separate functions for the encoder, decoder, and codec.
* This allows users to import only what they need, improving tree-shaking efficiency.
*
* ```ts
* type MyType = \/* ... *\/;
* const getMyTypeEncoder = (): Encoder<MyType> => { \/* ... *\/ };
* const getMyTypeDecoder = (): Decoder<MyType> => { \/* ... *\/ };
* const getMyTypeCodec = (): Codec<MyType> =>
* combineCodec(getMyTypeEncoder(), getMyTypeDecoder());
* ```
*
* @see {@link Codec}
* @see {@link Encoder}
* @see {@link Decoder}
*/
export function combineCodec<TFrom, TTo extends TFrom, TSize extends number>(
encoder: FixedSizeEncoder<TFrom, TSize>,
decoder: FixedSizeDecoder<TTo, TSize>,
): FixedSizeCodec<TFrom, TTo, TSize>;
export function combineCodec<TFrom, TTo extends TFrom>(
encoder: VariableSizeEncoder<TFrom>,
decoder: VariableSizeDecoder<TTo>,
): VariableSizeCodec<TFrom, TTo>;
export function combineCodec<TFrom, TTo extends TFrom>(
encoder: Encoder<TFrom>,
decoder: Decoder<TTo>,
): Codec<TFrom, TTo>;
export function combineCodec<TFrom, TTo extends TFrom>(
encoder: Encoder<TFrom>,
decoder: Decoder<TTo>,
): Codec<TFrom, TTo> {
if (isFixedSize(encoder) !== isFixedSize(decoder)) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_SIZE_COMPATIBILITY_MISMATCH);
}
if (isFixedSize(encoder) && isFixedSize(decoder) && encoder.fixedSize !== decoder.fixedSize) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_FIXED_SIZE_MISMATCH, {
decoderFixedSize: decoder.fixedSize,
encoderFixedSize: encoder.fixedSize,
});
}
if (!isFixedSize(encoder) && !isFixedSize(decoder) && encoder.maxSize !== decoder.maxSize) {
throw new SolanaError(SOLANA_ERROR__CODECS__ENCODER_DECODER_MAX_SIZE_MISMATCH, {
decoderMaxSize: decoder.maxSize,
encoderMaxSize: encoder.maxSize,
});
}
return {
...decoder,
...encoder,
decode: decoder.decode,
encode: encoder.encode,
read: decoder.read,
write: encoder.write,
};
}
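
Stripped of the package types, the essence of `combineCodec` is a compatibility check followed by a merge of the two halves. The following self-contained sketch (with invented `MiniEncoder`/`MiniDecoder` shapes, not the real API) mirrors that logic for a one-byte integer:

```typescript
// Illustrative sketch only: combine a write-only half and a read-only half
// into one object, rejecting mismatched fixed sizes up front.
type MiniEncoder = {
    fixedSize: number;
    write: (value: number, bytes: Uint8Array, offset: number) => number;
};
type MiniDecoder = {
    fixedSize: number;
    read: (bytes: Uint8Array, offset: number) => [number, number];
};

function combineMini(encoder: MiniEncoder, decoder: MiniDecoder) {
    if (encoder.fixedSize !== decoder.fixedSize) {
        throw new Error(`fixed size mismatch: ${encoder.fixedSize} vs ${decoder.fixedSize}`);
    }
    // Merge both halves, keeping read from the decoder and write from the encoder.
    return { ...decoder, ...encoder, read: decoder.read, write: encoder.write };
}

const u8Encoder: MiniEncoder = {
    fixedSize: 1,
    write: (value, bytes, offset) => {
        bytes[offset] = value & 0xff;
        return offset + 1;
    },
};
const u8Decoder: MiniDecoder = {
    fixedSize: 1,
    read: (bytes, offset) => [bytes[offset], offset + 1],
};
const u8 = combineMini(u8Encoder, u8Decoder);
```

Failing fast on a size mismatch is the design choice worth noting: an encoder/decoder pair that disagrees on size would otherwise produce bytes it cannot round-trip.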


@@ -0,0 +1,45 @@
import { SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, SolanaError } from '@solana/errors';
import { createDecoder, Decoder } from './codec';
/**
* Create a {@link Decoder} that asserts that the bytes provided to `decode` or `read` are fully consumed by the inner decoder
* @param decoder A decoder to wrap
* @returns A new decoder that will throw if provided with a byte array that it does not fully consume
*
* @typeParam T - The type of the decoder
*
* @remarks
 * Note that this compares the offset after decoding to the length of the input byte array
*
* The `offset` parameter to `decode` and `read` is still considered, and will affect the new offset that is compared to the byte array length
*
* The error that is thrown by the returned decoder is a {@link SolanaError} with the code `SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY`
*
* @example
* Create a decoder that decodes a `u32` (4 bytes) and ensures the entire byte array is consumed
* ```ts
 * const decoder = createDecoderThatConsumesEntireByteArray(getU32Decoder());
* decoder.decode(new Uint8Array([0, 0, 0, 0])); // 0
* decoder.decode(new Uint8Array([0, 0, 0, 0, 0])); // throws
*
* // with an offset
* decoder.decode(new Uint8Array([0, 0, 0, 0, 0]), 1); // 0
* decoder.decode(new Uint8Array([0, 0, 0, 0, 0, 0]), 1); // throws
* ```
*/
export function createDecoderThatConsumesEntireByteArray<T>(decoder: Decoder<T>): Decoder<T> {
return createDecoder({
...decoder,
read(bytes, offset) {
const [value, newOffset] = decoder.read(bytes, offset);
if (bytes.length > newOffset) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_DECODER_TO_CONSUME_ENTIRE_BYTE_ARRAY, {
expectedLength: newOffset,
numExcessBytes: bytes.length - newOffset,
});
}
return [value, newOffset];
},
});
}
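
The wrapper above reduces to a plain higher-order function over a `read` signature. This standalone sketch (the `requireFullConsumption` name is invented for illustration, not part of the package) shows the same trailing-bytes check without any package types:

```typescript
// Wrap a read function so that any bytes left over after reading cause an error.
type Read<T> = (bytes: Uint8Array, offset: number) => [T, number];

function requireFullConsumption<T>(read: Read<T>): Read<T> {
    return (bytes, offset) => {
        const [value, newOffset] = read(bytes, offset);
        if (bytes.length > newOffset) {
            throw new Error(`${bytes.length - newOffset} excess byte(s) after offset ${newOffset}`);
        }
        return [value, newOffset];
    };
}

// A toy one-byte reader to exercise the wrapper.
const readU8: Read<number> = (bytes, offset) => [bytes[offset], offset + 1];
const strictReadU8 = requireFullConsumption(readU8);
```

Note how the starting `offset` is still honored: only the final offset is compared against the array length, matching the remark above.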

node_modules/@solana/codecs-core/src/fix-codec-size.ts generated vendored Normal file

@@ -0,0 +1,170 @@
import { assertByteArrayHasEnoughBytesForCodec } from './assertions';
import { fixBytes } from './bytes';
import {
Codec,
createDecoder,
createEncoder,
Decoder,
Encoder,
FixedSizeCodec,
FixedSizeDecoder,
FixedSizeEncoder,
isFixedSize,
Offset,
} from './codec';
import { combineCodec } from './combine-codec';
/**
* Creates a fixed-size encoder from a given encoder.
*
* The resulting encoder ensures that encoded values always have the specified number of bytes.
* If the original encoded value is larger than `fixedBytes`, it is truncated.
* If it is smaller, it is padded with trailing zeroes.
*
* For more details, see {@link fixCodecSize}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @param encoder - The encoder to wrap into a fixed-size encoder.
* @param fixedBytes - The fixed number of bytes to write.
* @returns A `FixedSizeEncoder` that ensures a consistent output size.
*
* @example
* ```ts
* const encoder = fixEncoderSize(getUtf8Encoder(), 4);
* encoder.encode("Hello"); // 0x48656c6c (truncated)
* encoder.encode("Hi"); // 0x48690000 (padded)
* encoder.encode("Hiya"); // 0x48697961 (same length)
* ```
*
* @remarks
* If you need a full codec with both encoding and decoding, use {@link fixCodecSize}.
*
* @see {@link fixCodecSize}
* @see {@link fixDecoderSize}
*/
export function fixEncoderSize<TFrom, TSize extends number>(
encoder: Encoder<TFrom>,
fixedBytes: TSize,
): FixedSizeEncoder<TFrom, TSize> {
return createEncoder({
fixedSize: fixedBytes,
write: (value: TFrom, bytes: Uint8Array, offset: Offset) => {
// Here we exceptionally use the `encode` function instead of the `write`
// function as using the nested `write` function on a fixed-sized byte
            // array may result in an out-of-bounds error on the nested encoder.
const variableByteArray = encoder.encode(value);
const fixedByteArray =
variableByteArray.length > fixedBytes ? variableByteArray.slice(0, fixedBytes) : variableByteArray;
bytes.set(fixedByteArray, offset);
return offset + fixedBytes;
},
});
}
/**
* Creates a fixed-size decoder from a given decoder.
*
* The resulting decoder always reads exactly `fixedBytes` bytes from the input.
* If the nested decoder is also fixed-size, the bytes are truncated or padded as needed.
*
* For more details, see {@link fixCodecSize}.
*
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @param decoder - The decoder to wrap into a fixed-size decoder.
* @param fixedBytes - The fixed number of bytes to read.
* @returns A `FixedSizeDecoder` that ensures a consistent input size.
*
* @example
* ```ts
* const decoder = fixDecoderSize(getUtf8Decoder(), 4);
* decoder.decode(new Uint8Array([72, 101, 108, 108, 111])); // "Hell" (truncated)
* decoder.decode(new Uint8Array([72, 105, 0, 0])); // "Hi" (zeroes ignored)
* decoder.decode(new Uint8Array([72, 105, 121, 97])); // "Hiya" (same length)
* ```
*
* @remarks
* If you need a full codec with both encoding and decoding, use {@link fixCodecSize}.
*
* @see {@link fixCodecSize}
* @see {@link fixEncoderSize}
*/
export function fixDecoderSize<TTo, TSize extends number>(
decoder: Decoder<TTo>,
fixedBytes: TSize,
): FixedSizeDecoder<TTo, TSize> {
return createDecoder({
fixedSize: fixedBytes,
read: (bytes, offset) => {
assertByteArrayHasEnoughBytesForCodec('fixCodecSize', fixedBytes, bytes, offset);
// Slice the byte array to the fixed size if necessary.
if (offset > 0 || bytes.length > fixedBytes) {
bytes = bytes.slice(offset, offset + fixedBytes);
}
// If the nested decoder is fixed-size, pad and truncate the byte array accordingly.
if (isFixedSize(decoder)) {
bytes = fixBytes(bytes, decoder.fixedSize);
}
// Decode the value using the nested decoder.
const [value] = decoder.read(bytes, 0);
return [value, offset + fixedBytes];
},
});
}
/**
* Creates a fixed-size codec from a given codec.
*
* The resulting codec ensures that both encoding and decoding operate on a fixed number of bytes.
* When encoding:
* - If the encoded value is larger than `fixedBytes`, it is truncated.
* - If it is smaller, it is padded with trailing zeroes.
* - If it is exactly `fixedBytes`, it remains unchanged.
*
* When decoding:
* - Exactly `fixedBytes` bytes are read from the input.
* - If the nested decoder has a smaller fixed size, bytes are truncated or padded as necessary.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @param codec - The codec to wrap into a fixed-size codec.
* @param fixedBytes - The fixed number of bytes to read/write.
* @returns A `FixedSizeCodec` that ensures both encoding and decoding conform to a fixed size.
*
* @example
* ```ts
* const codec = fixCodecSize(getUtf8Codec(), 4);
*
* const bytes1 = codec.encode("Hello"); // 0x48656c6c (truncated)
* const value1 = codec.decode(bytes1); // "Hell"
*
* const bytes2 = codec.encode("Hi"); // 0x48690000 (padded)
* const value2 = codec.decode(bytes2); // "Hi"
*
* const bytes3 = codec.encode("Hiya"); // 0x48697961 (same length)
* const value3 = codec.decode(bytes3); // "Hiya"
* ```
*
* @remarks
* If you only need to enforce a fixed size for encoding, use {@link fixEncoderSize}.
* If you only need to enforce a fixed size for decoding, use {@link fixDecoderSize}.
*
* ```ts
* const bytes = fixEncoderSize(getUtf8Encoder(), 4).encode("Hiya");
* const value = fixDecoderSize(getUtf8Decoder(), 4).decode(bytes);
* ```
*
* @see {@link fixEncoderSize}
* @see {@link fixDecoderSize}
*/
export function fixCodecSize<TFrom, TTo extends TFrom, TSize extends number>(
codec: Codec<TFrom, TTo>,
fixedBytes: TSize,
): FixedSizeCodec<TFrom, TTo, TSize> {
return combineCodec(fixEncoderSize(codec, fixedBytes), fixDecoderSize(codec, fixedBytes));
}
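
The truncate-or-pad rule that `fixEncoderSize` applies can be demonstrated in isolation. In this sketch, `toFixedBytes` is an invented helper that reproduces just the sizing behavior, not the package's encoder machinery:

```typescript
// Output is always exactly `fixedBytes` long: longer input is truncated,
// shorter input is padded with trailing zeroes.
function toFixedBytes(variable: Uint8Array, fixedBytes: number): Uint8Array {
    const fixed = new Uint8Array(fixedBytes); // zero-initialized, so padding is free
    fixed.set(variable.length > fixedBytes ? variable.slice(0, fixedBytes) : variable);
    return fixed;
}

const utf8 = (s: string) => new TextEncoder().encode(s);
```

Running it over the three cases from the `fixCodecSize` example reproduces the truncated, padded, and unchanged outputs.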

node_modules/@solana/codecs-core/src/index.ts generated vendored Normal file

@@ -0,0 +1,668 @@
/**
* This package contains the core types and functions for encoding and decoding data structures on Solana. It can be used standalone, but it is also exported as part of Kit [`@solana/kit`](https://github.com/anza-xyz/kit/tree/main/packages/kit).
*
* This package is also part of the [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs) which acts as an entry point for all codec packages as well as for their documentation.
*
* ## Composing codecs
*
* The easiest way to create your own codecs is to compose the [various codecs](https://github.com/anza-xyz/kit/tree/main/packages/codecs) offered by this library. For instance, heres how you would define a codec for a `Person` object that contains a `name` string attribute and an `age` number stored in 4 bytes.
*
* ```ts
* type Person = { name: string; age: number };
* const getPersonCodec = (): Codec<Person> =>
* getStructCodec([
* ['name', addCodecSizePrefix(getUtf8Codec(), getU32Codec())],
* ['age', getU32Codec()],
* ]);
* ```
*
* This function returns a `Codec` object which contains both an `encode` and `decode` function that can be used to convert a `Person` type to and from a `Uint8Array`.
*
* ```ts
* const personCodec = getPersonCodec();
* const bytes = personCodec.encode({ name: 'John', age: 42 });
* const person = personCodec.decode(bytes);
* ```
*
* There is a significant library of composable codecs at your disposal, enabling you to compose complex types. You may be interested in the documentation of these other packages to learn more about them:
*
* - [`@solana/codecs-numbers`](https://github.com/anza-xyz/kit/tree/main/packages/codecs-numbers) for number codecs.
* - [`@solana/codecs-strings`](https://github.com/anza-xyz/kit/tree/main/packages/codecs-strings) for string codecs.
* - [`@solana/codecs-data-structures`](https://github.com/anza-xyz/kit/tree/main/packages/codecs-data-structures) for many data structure codecs such as objects, arrays, tuples, sets, maps, enums, discriminated unions, booleans, etc.
* - [`@solana/options`](https://github.com/anza-xyz/kit/tree/main/packages/options) for a Rust-like `Option` type and associated codec.
*
* You may also be interested in some of the helpers of this `@solana/codecs-core` library such as `transformCodec`, `fixCodecSize` or `reverseCodec` that create new codecs from existing ones.
*
* Note that all of these libraries are included in the [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs) as well as the main `@solana/kit` package for your convenience.
*
* ## Composing encoders and decoders
*
* Whilst Codecs can both encode and decode, it is possible to only focus on encoding or decoding data, enabling the unused logic to be tree-shaken. For instance, heres our previous example using Encoders only to encode a `Person` type.
*
* ```ts
* const getPersonEncoder = (): Encoder<Person> =>
* getStructEncoder([
* ['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
* ['age', getU32Encoder()],
* ]);
*
* const bytes = getPersonEncoder().encode({ name: 'John', age: 42 });
* ```
*
* The same can be done for decoding the `Person` type by using Decoders like so.
*
* ```ts
* const getPersonDecoder = (): Decoder<Person> =>
* getStructDecoder([
* ['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
* ['age', getU32Decoder()],
* ]);
*
* const person = getPersonDecoder().decode(bytes);
* ```
*
* ## Combining encoders and decoders
*
* Separating Codecs into Encoders and Decoders is particularly good practice for library maintainers as it allows their users to tree-shake any of the encoders and/or decoders they dont need. However, we may still want to offer a codec helper for users who need both for convenience.
*
* Thats why this library offers a `combineCodec` helper that creates a `Codec` instance from a matching `Encoder` and `Decoder`.
*
* ```ts
* const getPersonCodec = (): Codec<Person> => combineCodec(getPersonEncoder(), getPersonDecoder());
* ```
*
* This means library maintainers can offer Encoders, Decoders and Codecs for all their types whilst staying efficient and tree-shakeable. In summary, we recommend the following pattern when creating codecs for library types.
*
* ```ts
* type MyType = \/* ... *\/;
* const getMyTypeEncoder = (): Encoder<MyType> => { \/* ... *\/ };
* const getMyTypeDecoder = (): Decoder<MyType> => { \/* ... *\/ };
* const getMyTypeCodec = (): Codec<MyType> =>
* combineCodec(getMyTypeEncoder(), getMyTypeDecoder());
* ```
*
* ## Different From and To types
*
* When creating codecs, the encoded type is allowed to be looser than the decoded type. A good example of that is the u64 number codec:
*
* ```ts
* const u64Codec: Codec<number | bigint, bigint> = getU64Codec();
* ```
*
* As you can see, the first type parameter is looser since it accepts numbers or big integers, whereas the second type parameter only accepts big integers. Thats because when _encoding_ a u64 number, you may provide either a `bigint` or a `number` for convenience. However, when you decode a u64 number, you will always get a `bigint` because not all u64 values can fit in a JavaScript `number` type.
*
* ```ts
* const bytes = u64Codec.encode(42);
* const value = u64Codec.decode(bytes); // BigInt(42)
* ```
*
* This relationship between the type we encode “From” and decode “To” can be generalized in TypeScript as `To extends From`.
*
* Heres another example using an object with default values. You can read more about the `transformEncoder` helper below.
*
* ```ts
* type Person = { name: string, age: number };
* type PersonInput = { name: string, age?: number };
*
* const getPersonEncoder = (): Encoder<PersonInput> =>
* transformEncoder(
* getStructEncoder([
* ['name', addEncoderSizePrefix(getUtf8Encoder(), getU32Encoder())],
* ['age', getU32Encoder()],
* ]),
 * input => ({ ...input, age: input.age ?? 42 })
* );
*
* const getPersonDecoder = (): Decoder<Person> =>
* getStructDecoder([
* ['name', addDecoderSizePrefix(getUtf8Decoder(), getU32Decoder())],
* ['age', getU32Decoder()],
* ]);
*
* const getPersonCodec = (): Codec<PersonInput, Person> =>
* combineCodec(getPersonEncoder(), getPersonDecoder())
* ```
*
* ## Fixed-size and variable-size codecs
*
* It is also worth noting that Codecs can either be of fixed size or variable size.
*
* `FixedSizeCodecs` have a `fixedSize` number attribute that tells us exactly how big their encoded data is in bytes.
*
* ```ts
* const myCodec: FixedSizeCodec<number> = getU32Codec();
* myCodec.fixedSize; // 4 bytes.
* ```
*
 * On the other hand, `VariableSizeCodecs` do not know the size of their encoded data in advance. Instead, they will grab that information either from the provided encoded data or from the value to encode. For the former, we can simply access the length of the `Uint8Array`. For the latter, they provide a `getSizeFromValue` function that tells us the encoded byte size of the provided value.
*
* ```ts
* const myCodec: VariableSizeCodec<string> = addCodecSizePrefix(getUtf8Codec(), getU32Codec());
* myCodec.getSizeFromValue('hello world'); // 4 + 11 bytes.
* ```
*
* Also note that, if the `VariableSizeCodec` is bounded by a maximum size, it can be provided as a `maxSize` number attribute.
*
* The following type guards are available to identify and/or assert the size of codecs: `isFixedSize`, `isVariableSize`, `assertIsFixedSize` and `assertIsVariableSize`.
*
* Finally, note that the same is true for `Encoders` and `Decoders`.
*
* - A `FixedSizeEncoder` has a `fixedSize` number attribute.
* - A `VariableSizeEncoder` has a `getSizeFromValue` function and an optional `maxSize` number attribute.
* - A `FixedSizeDecoder` has a `fixedSize` number attribute.
* - A `VariableSizeDecoder` has an optional `maxSize` number attribute.
*
* ## Creating custom codecs
*
* If composing codecs isnt enough for you, you may implement your own codec logic by using the `createCodec` function. This function requires an object with a `read` and a `write` function telling us how to read from and write to an existing byte array.
*
 * The `read` function accepts the `bytes` to decode from and the `offset` at which we should start reading. It returns an array with two items:
*
* - The first item should be the decoded value.
* - The second item should be the next offset to read from.
*
* ```ts
* createCodec({
* read(bytes, offset) {
* const value = bytes[offset];
* return [value, offset + 1];
* },
* // ...
* });
* ```
*
* Reciprocally, the `write` function accepts the `value` to encode, the array of `bytes` to write the encoded value to and the `offset` at which it should be written. It should encode the given value, insert it in the byte array, and provide the next offset to write to as the return value.
*
* ```ts
* createCodec({
* write(value, bytes, offset) {
 * bytes.set([value], offset);
* return offset + 1;
* },
* // ...
* });
* ```
*
* Additionally, we must specify the size of the codec. If we are defining a `FixedSizeCodec`, we must simply provide the `fixedSize` number attribute. For `VariableSizeCodecs`, we must provide the `getSizeFromValue` function as described in the previous section.
*
* ```ts
* // FixedSizeCodec.
* createCodec({
* fixedSize: 1,
* // ...
* });
*
* // VariableSizeCodec.
* createCodec({
* getSizeFromValue: (value: string) => value.length,
* // ...
* });
* ```
*
* Heres a concrete example of a custom codec that encodes any unsigned integer in a single byte. Since a single byte can only store integers from 0 to 255, if any other integer is provided it will take its modulo 256 to ensure it fits in a single byte. Because it always requires a single byte, that codec is a `FixedSizeCodec` of size `1`.
*
* ```ts
* const getModuloU8Codec = () =>
* createCodec<number>({
* fixedSize: 1,
* read(bytes, offset) {
* const value = bytes[offset];
* return [value, offset + 1];
* },
* write(value, bytes, offset) {
 * bytes.set([value % 256], offset);
* return offset + 1;
* },
* });
* ```
*
 * Note that it is also possible to create custom encoders and decoders separately by using the `createEncoder` and `createDecoder` functions respectively and then use the `combineCodec` function on them just like we were doing with composed codecs.
*
 * This approach is recommended to library maintainers as it allows their users to tree-shake any of the encoders and/or decoders they don't need.
*
 * Here's our previous modulo u8 example but split into separate `Encoder`, `Decoder` and `Codec` instances.
*
* ```ts
* const getModuloU8Encoder = () =>
* createEncoder<number>({
* fixedSize: 1,
* write(value, bytes, offset) {
 *             bytes.set([value % 256], offset);
* return offset + 1;
* },
* });
*
* const getModuloU8Decoder = () =>
* createDecoder<number>({
* fixedSize: 1,
* read(bytes, offset) {
* const value = bytes[offset];
* return [value, offset + 1];
* },
* });
*
* const getModuloU8Codec = () => combineCodec(getModuloU8Encoder(), getModuloU8Decoder());
* ```
*
 * Here's another example returning a `VariableSizeCodec`. This one transforms a simple string composed of characters from `a` to `z` to a buffer of numbers from `1` to `26` where `0` bytes are spaces.
*
* ```ts
* const alphabet = ' abcdefghijklmnopqrstuvwxyz';
*
* const getCipherEncoder = () =>
* createEncoder<string>({
* getSizeFromValue: value => value.length,
* write(value, bytes, offset) {
* const bytesToAdd = [...value].map(char => alphabet.indexOf(char));
* bytes.set(bytesToAdd, offset);
* return offset + bytesToAdd.length;
* },
* });
*
* const getCipherDecoder = () =>
* createDecoder<string>({
* read(bytes, offset) {
* const value = [...bytes.slice(offset)].map(byte => alphabet.charAt(byte)).join('');
* return [value, bytes.length];
* },
* });
*
* const getCipherCodec = () => combineCodec(getCipherEncoder(), getCipherDecoder());
* ```
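
A standalone round trip of the same cipher (plain TypeScript, independent of the library; function names are illustrative) behaves like so:

```ts
// Hypothetical standalone version of the a-z cipher above.
const alphabet = ' abcdefghijklmnopqrstuvwxyz';
const encodeCipher = (value: string): Uint8Array =>
    Uint8Array.from([...value].map(char => alphabet.indexOf(char)));
const decodeCipher = (bytes: Uint8Array): string =>
    [...bytes].map(byte => alphabet.charAt(byte)).join('');

const cipherBytes = encodeCipher('abc z');
// cipherBytes: Uint8Array [1, 2, 3, 0, 26]
decodeCipher(cipherBytes); // 'abc z'
```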
*
* ## Transforming codecs
*
* It is possible to transform a `Codec<T>` to a `Codec<U>` by providing two mapping functions: one that goes from `T` to `U` and one that does the opposite.
*
 * For instance, here's how you would map a `u32` integer into a `string` representation of that number.
*
* ```ts
* const getStringU32Codec = () =>
* transformCodec(
* getU32Codec(),
* (integerAsString: string): number => parseInt(integerAsString),
* (integer: number): string => integer.toString(),
* );
*
 * getStringU32Codec().encode('42'); // new Uint8Array([42, 0, 0, 0])
 * getStringU32Codec().decode(new Uint8Array([42, 0, 0, 0])); // "42"
* ```
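
Stripped of the library, this mapping is just function composition around a numeric codec. Here is a minimal sketch where the `encodeU32`/`decodeU32` helpers are assumptions made for illustration, encoding little-endian as the library does by default:

```ts
const encodeU32 = (n: number): Uint8Array => {
    const out = new Uint8Array(4);
    new DataView(out.buffer).setUint32(0, n, true); // little-endian
    return out;
};
const decodeU32 = (bytes: Uint8Array): number =>
    new DataView(bytes.buffer, bytes.byteOffset).getUint32(0, true);

const encodeStringU32 = (s: string): Uint8Array => encodeU32(parseInt(s, 10));
const decodeStringU32 = (bytes: Uint8Array): string => decodeU32(bytes).toString();

encodeStringU32('42'); // Uint8Array [42, 0, 0, 0]
decodeStringU32(new Uint8Array([42, 0, 0, 0])); // '42'
```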
*
* If a `Codec` has [different From and To types](#different-from-and-to-types), say `Codec<OldFrom, OldTo>`, and we want to map it to `Codec<NewFrom, NewTo>`, we must provide functions that map from `NewFrom` to `OldFrom` and from `OldTo` to `NewTo`.
*
 * To illustrate that, let's take our previous `getStringU32Codec` example but make it use a `getU64Codec` codec instead as it returns a `Codec<number | bigint, bigint>`. Additionally, let's make it so our `getStringU64Codec` function returns a `Codec<number | string, string>` so that it also accepts numbers when encoding values. Here's what our mapping functions look like:
*
* ```ts
* const getStringU64Codec = () =>
* transformCodec(
* getU64Codec(),
* (integerInput: number | string): number | bigint =>
 *             typeof integerInput === 'string' ? BigInt(integerInput) : integerInput,
* (integer: bigint): string => integer.toString(),
* );
* ```
*
 * Note that the second function that maps the decoded type is optional. That means you can omit it to simply update or loosen the type to encode whilst keeping the decoded type the same.
*
 * This is particularly useful to provide default values to object structures. For instance, here's how we can map our `Person` codec to give a default value to its `age` attribute.
*
* ```ts
* type Person = { name: string; age: number; }
* const getPersonCodec = (): Codec<Person> => { \/* ... *\/ }
*
* type PersonInput = { name: string; age?: number; }
* const getPersonWithDefaultValueCodec = (): Codec<PersonInput, Person> =>
* transformCodec(
* getPersonCodec(),
 *         (person: PersonInput): Person => ({ ...person, age: person.age ?? 42 })
* )
* ```
*
 * Similar helpers exist to map `Encoder` and `Decoder` instances allowing you to separate your codec logic into tree-shakeable functions. Here's our `getStringU32Codec` written that way.
*
* ```ts
* const getStringU32Encoder = () =>
* transformEncoder(getU32Encoder(), (integerAsString: string): number => parseInt(integerAsString));
* const getStringU32Decoder = () => transformDecoder(getU32Decoder(), (integer: number): string => integer.toString());
* const getStringU32Codec = () => combineCodec(getStringU32Encoder(), getStringU32Decoder());
* ```
*
* ## Fixing the size of codecs
*
* The `fixCodecSize` function allows you to bind the size of a given codec to the given fixed size.
*
 * For instance, say you want to represent a base-58 string that uses exactly 32 bytes when decoded. Here's how you can use the `fixCodecSize` helper to achieve that.
*
* ```ts
* const get32BytesBase58Codec = () => fixCodecSize(getBase58Codec(), 32);
* ```
*
* You may also use the `fixEncoderSize` and `fixDecoderSize` functions to separate your codec logic like so:
*
* ```ts
* const get32BytesBase58Encoder = () => fixEncoderSize(getBase58Encoder(), 32);
* const get32BytesBase58Decoder = () => fixDecoderSize(getBase58Decoder(), 32);
* const get32BytesBase58Codec = () => combineCodec(get32BytesBase58Encoder(), get32BytesBase58Decoder());
* ```
*
* ## Prefixing codecs with their size
*
 * The `addCodecSizePrefix` function allows you to store the byte size of any codec as a number prefix. This constrains variable-size codecs to their actual size, making them safe to compose with other codecs.
*
* When encoding, the size of the encoded data is stored before the encoded data itself. When decoding, the size is read first to know how many bytes to read next.
*
 * For example, say we want to represent a variable-size base-58 string using a `u32` size prefix. Here's how you can use the `addCodecSizePrefix` function to achieve that.
*
* ```ts
* const getU32Base58Codec = () => addCodecSizePrefix(getBase58Codec(), getU32Codec());
*
* getU32Base58Codec().encode('hello world');
* // 0x0b00000068656c6c6f20776f726c64
* // | └-- Our encoded base-58 string.
* // └-- Our encoded u32 size prefix.
* ```
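
The prefix mechanics can be sketched without the library: write the payload length as a little-endian u32, then the payload itself. The helper name below is an illustration, not the package API:

```ts
const withU32SizePrefix = (data: Uint8Array): Uint8Array => {
    const out = new Uint8Array(4 + data.length);
    new DataView(out.buffer).setUint32(0, data.length, true); // u32 size prefix
    out.set(data, 4); // payload after the prefix
    return out;
};

const prefixed = withU32SizePrefix(new TextEncoder().encode('hello world'));
// prefixed[0] === 0x0b (11, the payload length); prefixed.length === 15
```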
*
* You may also use the `addEncoderSizePrefix` and `addDecoderSizePrefix` functions to separate your codec logic like so:
*
* ```ts
* const getU32Base58Encoder = () => addEncoderSizePrefix(getBase58Encoder(), getU32Encoder());
* const getU32Base58Decoder = () => addDecoderSizePrefix(getBase58Decoder(), getU32Decoder());
* const getU32Base58Codec = () => combineCodec(getU32Base58Encoder(), getU32Base58Decoder());
* ```
*
* ## Adding sentinels to codecs
*
* Another way of delimiting the size of a codec is to use sentinels. The `addCodecSentinel` function allows us to add a sentinel to the end of the encoded data and to read until that sentinel is found when decoding. It accepts any codec and a `Uint8Array` sentinel responsible for delimiting the encoded data.
*
* ```ts
* const codec = addCodecSentinel(getUtf8Codec(), new Uint8Array([255, 255]));
* codec.encode('hello');
* // 0x68656c6c6fffff
* // | └-- Our sentinel.
* // └-- Our encoded string.
* ```
*
* Note that the sentinel _must not_ be present in the encoded data and _must_ be present in the decoded data for this to work. If this is not the case, dedicated errors will be thrown.
*
* ```ts
* const sentinel = new Uint8Array([108, 108]); // 'll'
* const codec = addCodecSentinel(getUtf8Codec(), sentinel);
*
* codec.encode('hello'); // Throws: sentinel is in encoded data.
* codec.decode(new Uint8Array([1, 2, 3])); // Throws: sentinel missing in decoded data.
* ```
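
Under the hood, sentinel decoding amounts to scanning for the delimiter. Here is a standalone sketch of that idea, not the library's actual implementation:

```ts
const sentinel = new Uint8Array([255, 255]);
const readUntilSentinel = (bytes: Uint8Array, offset: number): [Uint8Array, number] => {
    for (let index = offset; index <= bytes.length - sentinel.length; index++) {
        // Check whether the sentinel starts at this index.
        const matches = sentinel.every((byte, i) => bytes[index + i] === byte);
        if (matches) return [bytes.slice(offset, index), index + sentinel.length];
    }
    throw new Error('Sentinel not found in decoded data');
};

const [content, nextOffset] = readUntilSentinel(
    new Uint8Array([0x68, 0x69, 255, 255, 42]), // 'hi' + sentinel + trailing byte
    0,
);
// content: Uint8Array [0x68, 0x69]; nextOffset: 4
```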
*
* Separate `addEncoderSentinel` and `addDecoderSentinel` functions are also available.
*
* ```ts
* const bytes = addEncoderSentinel(getUtf8Encoder(), sentinel).encode('hello');
* const value = addDecoderSentinel(getUtf8Decoder(), sentinel).decode(bytes);
* ```
*
* ## Adjusting the size of codecs
*
* The `resizeCodec` helper re-defines the size of a given codec by accepting a function that takes the current size of the codec and returns a new size. This works for both fixed-size and variable-size codecs.
*
* ```ts
* // Fixed-size codec.
* const getBiggerU32Codec = () => resizeCodec(getU32Codec(), size => size + 4);
* getBiggerU32Codec().encode(42);
* // 0x2a00000000000000
* // | └-- Empty buffer space caused by the resizeCodec function.
* // └-- Our encoded u32 number.
*
* // Variable-size codec.
* const getBiggerUtf8Codec = () => resizeCodec(getUtf8Codec(), size => size + 4);
* getBiggerUtf8Codec().encode('ABC');
* // 0x41424300000000
* // | └-- Empty buffer space caused by the resizeCodec function.
* // └-- Our encoded string.
* ```
*
* Note that the `resizeCodec` function doesn't change any encoded or decoded bytes, it merely tells the `encode` and `decode` functions how big the `Uint8Array` should be before delegating to their respective `write` and `read` functions. In fact, this is completely bypassed when using the `write` and `read` functions directly. For instance:
*
* ```ts
* const getBiggerU32Codec = () => resizeCodec(getU32Codec(), size => size + 4);
*
* // Using the encode function.
* getBiggerU32Codec().encode(42);
* // 0x2a00000000000000
*
* // Using the lower-level write function.
* const myCustomBytes = new Uint8Array(4);
* getBiggerU32Codec().write(42, myCustomBytes, 0);
* // 0x2a000000
* ```
*
* So when would it make sense to use the `resizeCodec` function? This function is particularly useful when combined with the `offsetCodec` function described below. Whilst the `offsetCodec` may help us push the offset forward — e.g. to skip some padding — it won't change the size of the encoded data which means the last bytes will be truncated by how much we pushed the offset forward. The `resizeCodec` function can be used to fix that. For instance, here's how we can use the `resizeCodec` and the `offsetCodec` functions together to create a struct codec that includes some padding.
*
* ```ts
* const personCodec = getStructCodec([
* ['name', fixCodecSize(getUtf8Codec(), 8)],
* // There is a 4-byte padding between name and age.
* [
* 'age',
* offsetCodec(
* resizeCodec(getU32Codec(), size => size + 4),
* { preOffset: ({ preOffset }) => preOffset + 4 },
* ),
* ],
* ]);
*
* personCodec.encode({ name: 'Alice', age: 42 });
* // 0x416c696365000000000000002a000000
* // | | └-- Our encoded u32 (42).
* // | └-- The 4-bytes of padding we are skipping.
* // └-- Our 8-byte encoded string ("Alice").
* ```
*
* As usual, the `resizeEncoder` and `resizeDecoder` functions can also be used to achieve that.
*
* ```ts
 * const getBiggerU32Encoder = () => resizeEncoder(getU32Encoder(), size => size + 4);
 * const getBiggerU32Decoder = () => resizeDecoder(getU32Decoder(), size => size + 4);
* const getBiggerU32Codec = () => combineCodec(getBiggerU32Encoder(), getBiggerU32Decoder());
* ```
*
* ## Offsetting codecs
*
 * The `offsetCodec` function is a powerful codec primitive that allows you to move the offset of a given codec forward or backward. It accepts one or two functions that take the current offset and return a new offset.
*
* To understand how this works, let's take our previous `biggerU32Codec` example which encodes a `u32` number inside an 8-byte buffer.
*
* ```ts
* const biggerU32Codec = resizeCodec(getU32Codec(), size => size + 4);
* biggerU32Codec.encode(0xffffffff);
* // 0xffffffff00000000
* // | └-- Empty buffer space caused by the resizeCodec function.
* // └-- Our encoded u32 number.
* ```
*
 * Now, let's say we want to move the offset of that codec 2 bytes forward so that the encoded number sits in the middle of the buffer. To achieve this, we can use the `offsetCodec` helper and provide a `preOffset` function that moves the "pre-offset" of the codec 2 bytes forward.
*
* ```ts
* const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
* u32InTheMiddleCodec.encode(0xffffffff);
* // 0x0000ffffffff0000
* // └-- Our encoded u32 number is now in the middle of the buffer.
* ```
*
* We refer to this offset as the "pre-offset" because, once the inner codec is encoded or decoded, an additional offset will be returned which we refer to as the "post-offset". That "post-offset" is important as, unless we are reaching the end of our codec, it will be used by any further codecs to continue encoding or decoding data.
*
* By default, that "post-offset" is simply the addition of the "pre-offset" and the size of the encoded or decoded inner data.
*
* ```ts
* const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
* u32InTheMiddleCodec.encode(0xffffffff);
* // 0x0000ffffffff0000
* // | | └-- Post-offset.
* // | └-- New pre-offset: The original pre-offset + 2.
* // └-- Pre-offset: The original pre-offset before we adjusted it.
* ```
*
* However, you may also provide a `postOffset` function to adjust the "post-offset". For instance, let's push the "post-offset" 2 bytes forward as well such that any further codecs will start doing their job at the end of our 8-byte `u32` number.
*
* ```ts
* const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
* preOffset: ({ preOffset }) => preOffset + 2,
* postOffset: ({ postOffset }) => postOffset + 2,
* });
* u32InTheMiddleCodec.encode(0xffffffff);
* // 0x0000ffffffff0000
* // | | | └-- New post-offset: The original post-offset + 2.
* // | | └-- Post-offset: The original post-offset before we adjusted it.
* // | └-- New pre-offset: The original pre-offset + 2.
* // └-- Pre-offset: The original pre-offset before we adjusted it.
* ```
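
The pre/post mechanics can be mimicked with a plain write function. The wrapper below is a toy sketch to show the flow of offsets, not the library's `offsetEncoder`:

```ts
type WriteFn = (value: number, bytes: Uint8Array, offset: number) => number;

const writeU32: WriteFn = (value, bytes, offset) => {
    new DataView(bytes.buffer).setUint32(offset, value, true);
    return offset + 4;
};

// Adjust where writing starts (pre) and where the next writer continues (post).
const withOffsets =
    (write: WriteFn, pre: (o: number) => number, post: (o: number) => number): WriteFn =>
    (value, bytes, offset) => post(write(value, bytes, pre(offset)));

const buf = new Uint8Array(8);
const next = withOffsets(writeU32, o => o + 2, o => o + 2)(0xffffffff, buf, 0);
// buf: 0x0000ffffffff0000; next === 8
```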
*
* Both the `preOffset` and `postOffset` functions offer the following attributes:
*
* - `bytes`: The entire byte array being encoded or decoded.
* - `preOffset`: The original and unaltered pre-offset.
* - `wrapBytes`: A helper function that wraps the given offset around the byte array length. E.g. `wrapBytes(-1)` will refer to the last byte of the byte array.
*
* Additionally, the post-offset function also provides the following attributes:
*
* - `newPreOffset`: The new pre-offset after the pre-offset function has been applied.
* - `postOffset`: The original and unaltered post-offset.
*
* Note that you may also decide to ignore these attributes to achieve absolute offsets. However, relative offsets are usually recommended as they won't break your codecs when composed with other codecs.
*
* ```ts
* const u32InTheMiddleCodec = offsetCodec(biggerU32Codec, {
* preOffset: () => 2,
* postOffset: () => 8,
* });
* u32InTheMiddleCodec.encode(0xffffffff);
* // 0x0000ffffffff0000
* ```
*
* Also note that any negative offset or offset that exceeds the size of the byte array will throw a `SolanaError` of code `SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE`.
*
* ```ts
* const u32InTheEndCodec = offsetCodec(biggerU32Codec, { preOffset: () => -4 });
* u32InTheEndCodec.encode(0xffffffff);
* // throws new SolanaError(SOLANA_ERROR__CODECS__OFFSET_OUT_OF_RANGE)
* ```
*
* To avoid this, you may use the `wrapBytes` function to wrap the offset around the byte array length. For instance, here's how we can use the `wrapBytes` function to move the pre-offset 4 bytes from the end of the byte array.
*
* ```ts
* const u32InTheEndCodec = offsetCodec(biggerU32Codec, {
* preOffset: ({ wrapBytes }) => wrapBytes(-4),
* });
* u32InTheEndCodec.encode(0xffffffff);
* // 0x00000000ffffffff
* ```
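
The wrapping behaviour itself boils down to a sign-safe modulo over the byte array length. Here is a sketch of what `wrapBytes` plausibly does (the `wrap` helper is an assumption for illustration):

```ts
// Modulo that handles negative offsets and a zero-length array.
const wrap = (offset: number, length: number): number =>
    length === 0 ? 0 : ((offset % length) + length) % length;

wrap(-4, 8); // 4 — i.e. 4 bytes from the end of an 8-byte array
wrap(-1, 8); // 7 — the last byte
wrap(10, 8); // 2 — wraps past the end
```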
*
* As you can see, the `offsetCodec` helper allows you to jump all over the place with your codecs. This non-linear approach to encoding and decoding data allows you to achieve complex serialization strategies that would otherwise be impossible.
*
* As usual, the `offsetEncoder` and `offsetDecoder` functions can also be used to split your codec logic into tree-shakeable functions.
*
* ```ts
* const getU32InTheMiddleEncoder = () => offsetEncoder(biggerU32Encoder, { preOffset: ({ preOffset }) => preOffset + 2 });
* const getU32InTheMiddleDecoder = () => offsetDecoder(biggerU32Decoder, { preOffset: ({ preOffset }) => preOffset + 2 });
* const getU32InTheMiddleCodec = () => combineCodec(getU32InTheMiddleEncoder(), getU32InTheMiddleDecoder());
* ```
*
* ## Padding codecs
*
* The `padLeftCodec` and `padRightCodec` helpers can be used to add padding to the left or right of a given codec. They accept an `offset` number that tells us how big the padding should be.
*
* ```ts
* const getLeftPaddedCodec = () => padLeftCodec(getU16Codec(), 4);
* getLeftPaddedCodec().encode(0xffff);
* // 0x00000000ffff
* // | └-- Our encoded u16 number.
* // └-- Our 4-byte padding.
*
* const getRightPaddedCodec = () => padRightCodec(getU16Codec(), 4);
* getRightPaddedCodec().encode(0xffff);
* // 0xffff00000000
* // | └-- Our 4-byte padding.
* // └-- Our encoded u16 number.
* ```
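
Left padding is equivalent to allocating `offset` extra zero bytes and writing the value after them, as this standalone sketch shows (not the library implementation):

```ts
const padLeft = (encoded: Uint8Array, offset: number): Uint8Array => {
    const out = new Uint8Array(offset + encoded.length); // zero-initialized padding
    out.set(encoded, offset);
    return out;
};

const padded = padLeft(new Uint8Array([0xff, 0xff]), 4);
// padded: Uint8Array [0, 0, 0, 0, 0xff, 0xff]
```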
*
* Note that both the `padLeftCodec` and `padRightCodec` functions are simple wrappers around the `offsetCodec` and `resizeCodec` functions. For more complex padding strategies, you may want to use the `offsetCodec` and `resizeCodec` functions directly instead.
*
* As usual, encoder-only and decoder-only helpers are available for these padding functions. Namely, `padLeftEncoder`, `padRightEncoder`, `padLeftDecoder` and `padRightDecoder`.
*
* ```ts
 * const getMyPaddedEncoder = () => padLeftEncoder(getU16Encoder(), 4);
 * const getMyPaddedDecoder = () => padLeftDecoder(getU16Decoder(), 4);
* const getMyPaddedCodec = () => combineCodec(getMyPaddedEncoder(), getMyPaddedDecoder());
* ```
*
* ## Reversing codecs
*
* The `reverseCodec` helper reverses the bytes of the provided `FixedSizeCodec`.
*
* ```ts
* const getBigEndianU64Codec = () => reverseCodec(getU64Codec());
* ```
*
* Note that number codecs can already do that for you via their `endian` option.
*
* ```ts
* const getBigEndianU64Codec = () => getU64Codec({ endian: Endian.Big });
* ```
*
* As usual, the `reverseEncoder` and `reverseDecoder` functions can also be used to achieve that.
*
* ```ts
* const getBigEndianU64Encoder = () => reverseEncoder(getU64Encoder());
* const getBigEndianU64Decoder = () => reverseDecoder(getU64Decoder());
* const getBigEndianU64Codec = () => combineCodec(getBigEndianU64Encoder(), getBigEndianU64Decoder());
* ```
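
To see why reversing flips endianness, compare the little-endian bytes of a number with their reversed form (plain TypeScript, no library):

```ts
const littleEndian = new Uint8Array(4);
new DataView(littleEndian.buffer).setUint32(0, 0x12345678, true);
// littleEndian: [0x78, 0x56, 0x34, 0x12]

const bigEndian = Uint8Array.from(littleEndian).reverse();
// bigEndian: [0x12, 0x34, 0x56, 0x78] — the big-endian encoding of the same number
```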
*
* ## Byte helpers
*
* This package also provides utility functions for managing bytes such as:
*
* - `mergeBytes`: Concatenates an array of `Uint8Arrays` into a single `Uint8Array`.
* - `padBytes`: Pads a `Uint8Array` with zeroes (to the right) to the specified length.
* - `fixBytes`: Pads or truncates a `Uint8Array` so it has the specified length.
* - `containsBytes`: Checks if a `Uint8Array` contains another `Uint8Array` at a given offset.
*
* ```ts
* // Merge multiple Uint8Array buffers into one.
* mergeBytes([new Uint8Array([1, 2]), new Uint8Array([3, 4])]); // Uint8Array([1, 2, 3, 4])
*
* // Pad a Uint8Array buffer to the given size.
* padBytes(new Uint8Array([1, 2]), 4); // Uint8Array([1, 2, 0, 0])
* padBytes(new Uint8Array([1, 2, 3, 4]), 2); // Uint8Array([1, 2, 3, 4])
*
* // Pad and truncate a Uint8Array buffer to the given size.
* fixBytes(new Uint8Array([1, 2]), 4); // Uint8Array([1, 2, 0, 0])
* fixBytes(new Uint8Array([1, 2, 3, 4]), 2); // Uint8Array([1, 2])
*
* // Check if a Uint8Array contains another Uint8Array at a given offset.
* containsBytes(new Uint8Array([1, 2, 3, 4]), new Uint8Array([2, 3]), 1); // true
* containsBytes(new Uint8Array([1, 2, 3, 4]), new Uint8Array([2, 3]), 2); // false
* ```
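
Plausible implementations of these helpers look like the following sketch; the package's actual source may differ:

```ts
// Concatenate an array of byte arrays into one.
const mergeBytes = (arrays: Uint8Array[]): Uint8Array => {
    const merged = new Uint8Array(arrays.reduce((total, a) => total + a.length, 0));
    let offset = 0;
    for (const array of arrays) {
        merged.set(array, offset);
        offset += array.length;
    }
    return merged;
};
// Pad with zeroes to the right; never truncates.
const padBytes = (bytes: Uint8Array, length: number): Uint8Array =>
    bytes.length >= length ? bytes : mergeBytes([bytes, new Uint8Array(length - bytes.length)]);
// Pad or truncate to exactly the given length.
const fixBytes = (bytes: Uint8Array, length: number): Uint8Array =>
    bytes.length <= length ? padBytes(bytes, length) : bytes.slice(0, length);
// Check for a sub-array at a given offset.
const containsBytes = (haystack: Uint8Array, needle: Uint8Array, offset: number): boolean =>
    needle.every((byte, index) => haystack[offset + index] === byte);
```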
*
* ---
*
* To read more about the available codecs and how to use them, check out the documentation of the main [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs).
*
* @packageDocumentation
*/
export * from './add-codec-sentinel';
export * from './add-codec-size-prefix';
export * from './array-buffers';
export * from './assertions';
export * from './bytes';
export * from './codec';
export * from './combine-codec';
export * from './decoder-entire-byte-array';
export * from './fix-codec-size';
export * from './offset-codec';
export * from './pad-codec';
export * from './readonly-uint8array';
export * from './resize-codec';
export * from './reverse-codec';
export * from './transform-codec';

node_modules/@solana/codecs-core/src/offset-codec.ts (generated, vendored, new file)

@@ -0,0 +1,379 @@
import { assertByteArrayOffsetIsNotOutOfRange } from './assertions';
import { Codec, createDecoder, createEncoder, Decoder, Encoder, Offset } from './codec';
import { combineCodec } from './combine-codec';
import { ReadonlyUint8Array } from './readonly-uint8array';
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyEncoder = Encoder<any>;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyDecoder = Decoder<any>;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyCodec = Codec<any>;
/**
* Configuration object for modifying the offset of an encoder, decoder, or codec.
*
* This type defines optional functions for adjusting the **pre-offset** (before encoding/decoding)
* and the **post-offset** (after encoding/decoding). These functions allow precise control
* over where data is written or read within a byte array.
*
* @property preOffset - A function that modifies the offset before encoding or decoding.
* @property postOffset - A function that modifies the offset after encoding or decoding.
*
* @example
* Moving the pre-offset forward by 2 bytes.
* ```ts
* const config: OffsetConfig = {
* preOffset: ({ preOffset }) => preOffset + 2,
* };
* ```
*
* @example
* Moving the post-offset forward by 2 bytes.
* ```ts
* const config: OffsetConfig = {
* postOffset: ({ postOffset }) => postOffset + 2,
* };
* ```
*
* @example
* Using both pre-offset and post-offset together.
* ```ts
* const config: OffsetConfig = {
* preOffset: ({ preOffset }) => preOffset + 2,
* postOffset: ({ postOffset }) => postOffset + 4,
* };
* ```
*
* @see {@link offsetEncoder}
* @see {@link offsetDecoder}
* @see {@link offsetCodec}
*/
type OffsetConfig = {
postOffset?: PostOffsetFunction;
preOffset?: PreOffsetFunction;
};
/**
* Scope provided to the `preOffset` and `postOffset` functions,
* containing contextual information about the current encoding or decoding process.
*
* The pre-offset function modifies where encoding or decoding begins,
* while the post-offset function modifies where the next operation continues.
*
* @property bytes - The entire byte array being encoded or decoded.
* @property preOffset - The original offset before encoding or decoding starts.
* @property wrapBytes - A helper function that wraps offsets around the byte array length.
*
* @example
* Using `wrapBytes` to wrap a negative offset to the end of the byte array.
* ```ts
* const config: OffsetConfig = {
* preOffset: ({ wrapBytes }) => wrapBytes(-4), // Moves to last 4 bytes
* };
* ```
*
* @example
* Adjusting the offset dynamically based on the byte array size.
* ```ts
* const config: OffsetConfig = {
* preOffset: ({ bytes }) => bytes.length > 10 ? 4 : 2,
* };
* ```
*
* @see {@link PreOffsetFunction}
* @see {@link PostOffsetFunction}
*/
type PreOffsetFunctionScope = {
/** The entire byte array. */
bytes: ReadonlyUint8Array | Uint8Array;
/** The original offset prior to encode or decode. */
preOffset: Offset;
/** Wraps the offset to the byte array length. */
wrapBytes: (offset: Offset) => Offset;
};
/**
* A function that modifies the pre-offset before encoding or decoding.
*
* This function is used to adjust the starting position before writing
* or reading data in a byte array.
*
* @param scope - The current encoding or decoding context.
* @returns The new offset at which encoding or decoding should start.
*
* @example
* Skipping the first 2 bytes before writing or reading.
* ```ts
* const preOffset: PreOffsetFunction = ({ preOffset }) => preOffset + 2;
* ```
*
* @example
* Wrapping the offset to ensure it stays within bounds.
* ```ts
* const preOffset: PreOffsetFunction = ({ wrapBytes, preOffset }) => wrapBytes(preOffset + 10);
* ```
*
* @see {@link OffsetConfig}
* @see {@link PreOffsetFunctionScope}
*/
type PreOffsetFunction = (scope: PreOffsetFunctionScope) => Offset;
/**
* A function that modifies the post-offset after encoding or decoding.
*
* This function adjusts where the next encoder or decoder should start
* after the current operation has completed.
*
* @param scope - The current encoding or decoding context, including the modified pre-offset
* and the original post-offset.
* @returns The new offset at which the next operation should begin.
*
* @example
* Moving the post-offset forward by 4 bytes.
* ```ts
* const postOffset: PostOffsetFunction = ({ postOffset }) => postOffset + 4;
* ```
*
* @example
* Wrapping the post-offset within the byte array length.
* ```ts
* const postOffset: PostOffsetFunction = ({ wrapBytes, postOffset }) => wrapBytes(postOffset);
* ```
*
* @example
* Ensuring a minimum spacing of 8 bytes between values.
* ```ts
* const postOffset: PostOffsetFunction = ({ postOffset, newPreOffset }) =>
* Math.max(postOffset, newPreOffset + 8);
* ```
*
* @see {@link OffsetConfig}
* @see {@link PreOffsetFunctionScope}
*/
type PostOffsetFunction = (
scope: PreOffsetFunctionScope & {
/** The modified offset used to encode or decode. */
newPreOffset: Offset;
/** The original offset returned by the encoder or decoder. */
postOffset: Offset;
},
) => Offset;
/**
* Moves the offset of a given encoder before and/or after encoding.
*
* This function allows an encoder to write its encoded value at a different offset
* than the one originally provided. It supports both pre-offset adjustments
* (before encoding) and post-offset adjustments (after encoding).
*
* The pre-offset function determines where encoding should start, while the
* post-offset function adjusts where the next encoder should continue writing.
*
* For more details, see {@link offsetCodec}.
*
* @typeParam TFrom - The type of the value to encode.
*
* @param encoder - The encoder to adjust.
* @param config - An object specifying how the offset should be modified.
* @returns A new encoder with adjusted offsets.
*
* @example
* Moving the pre-offset forward by 2 bytes.
* ```ts
* const encoder = offsetEncoder(getU32Encoder(), {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
* const bytes = new Uint8Array(10);
* encoder.write(42, bytes, 0); // Actually written at offset 2
* ```
*
* @example
* Moving the post-offset forward by 2 bytes.
* ```ts
* const encoder = offsetEncoder(getU32Encoder(), {
* postOffset: ({ postOffset }) => postOffset + 2,
* });
* const bytes = new Uint8Array(10);
* const nextOffset = encoder.write(42, bytes, 0); // Next encoder starts at offset 6 instead of 4
* ```
*
* @example
* Using `wrapBytes` to ensure an offset wraps around the byte array length.
* ```ts
* const encoder = offsetEncoder(getU32Encoder(), {
* preOffset: ({ wrapBytes }) => wrapBytes(-4), // Moves offset to last 4 bytes of the array
* });
* const bytes = new Uint8Array(10);
* encoder.write(42, bytes, 0); // Writes at bytes.length - 4
* ```
*
* @remarks
* If you need both encoding and decoding offsets to be adjusted, use {@link offsetCodec}.
*
* @see {@link offsetCodec}
* @see {@link offsetDecoder}
*/
export function offsetEncoder<TEncoder extends AnyEncoder>(encoder: TEncoder, config: OffsetConfig): TEncoder {
return createEncoder({
...encoder,
write: (value, bytes, preOffset) => {
const wrapBytes = (offset: Offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange('offsetEncoder', newPreOffset, bytes.length);
const postOffset = encoder.write(value, bytes, newPreOffset);
const newPostOffset = config.postOffset
? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes })
: postOffset;
assertByteArrayOffsetIsNotOutOfRange('offsetEncoder', newPostOffset, bytes.length);
return newPostOffset;
},
}) as TEncoder;
}
/**
* Moves the offset of a given decoder before and/or after decoding.
*
* This function allows a decoder to read its input from a different offset
* than the one originally provided. It supports both pre-offset adjustments
* (before decoding) and post-offset adjustments (after decoding).
*
* The pre-offset function determines where decoding should start, while the
* post-offset function adjusts where the next decoder should continue reading.
*
* For more details, see {@link offsetCodec}.
*
* @typeParam TTo - The type of the decoded value.
*
* @param decoder - The decoder to adjust.
* @param config - An object specifying how the offset should be modified.
* @returns A new decoder with adjusted offsets.
*
* @example
* Moving the pre-offset forward by 2 bytes.
* ```ts
* const decoder = offsetDecoder(getU32Decoder(), {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
 * const bytes = new Uint8Array([0, 0, 42, 0, 0, 0]); // Value starts at offset 2
* decoder.read(bytes, 0); // Actually reads from offset 2
* ```
*
* @example
* Moving the post-offset forward by 2 bytes.
* ```ts
* const decoder = offsetDecoder(getU32Decoder(), {
* postOffset: ({ postOffset }) => postOffset + 2,
* });
 * const bytes = new Uint8Array([42, 0, 0, 0, 0, 0]);
 * const [value, nextOffset] = decoder.read(bytes, 0); // Next decoder starts at offset 6 instead of 4
* ```
*
* @example
* Using `wrapBytes` to read from the last 4 bytes of an array.
* ```ts
* const decoder = offsetDecoder(getU32Decoder(), {
* preOffset: ({ wrapBytes }) => wrapBytes(-4), // Moves offset to last 4 bytes of the array
* });
 * const bytes = new Uint8Array([0, 0, 0, 0, 42, 0, 0, 0]); // Value stored in the last 4 bytes
* decoder.read(bytes, 0); // Reads from bytes.length - 4
* ```
*
* @remarks
* If you need both encoding and decoding offsets to be adjusted, use {@link offsetCodec}.
*
* @see {@link offsetCodec}
* @see {@link offsetEncoder}
*/
export function offsetDecoder<TDecoder extends AnyDecoder>(decoder: TDecoder, config: OffsetConfig): TDecoder {
return createDecoder({
...decoder,
read: (bytes, preOffset) => {
const wrapBytes = (offset: Offset) => modulo(offset, bytes.length);
const newPreOffset = config.preOffset ? config.preOffset({ bytes, preOffset, wrapBytes }) : preOffset;
assertByteArrayOffsetIsNotOutOfRange('offsetDecoder', newPreOffset, bytes.length);
const [value, postOffset] = decoder.read(bytes, newPreOffset);
const newPostOffset = config.postOffset
? config.postOffset({ bytes, newPreOffset, postOffset, preOffset, wrapBytes })
: postOffset;
assertByteArrayOffsetIsNotOutOfRange('offsetDecoder', newPostOffset, bytes.length);
return [value, newPostOffset];
},
}) as TDecoder;
}
/**
* Moves the offset of a given codec before and/or after encoding and decoding.
*
* This function allows a codec to encode and decode values at custom offsets
* within a byte array. It modifies both the **pre-offset** (where encoding/decoding starts)
* and the **post-offset** (where the next operation should continue).
*
* This is particularly useful when working with structured binary formats
* that require skipping reserved bytes, inserting padding, or aligning fields at
* specific locations.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @param codec - The codec to adjust.
* @param config - An object specifying how the offset should be modified.
* @returns A new codec with adjusted offsets.
*
* @example
* Moving the pre-offset forward by 2 bytes when encoding and decoding.
* ```ts
* const codec = offsetCodec(getU32Codec(), {
* preOffset: ({ preOffset }) => preOffset + 2,
* });
* const bytes = new Uint8Array(10);
* codec.write(42, bytes, 0); // Actually written at offset 2
* codec.read(bytes, 0); // Actually read from offset 2
* ```
*
* @example
* Moving the post-offset forward by 2 bytes when encoding and decoding.
* ```ts
* const codec = offsetCodec(getU32Codec(), {
* postOffset: ({ postOffset }) => postOffset + 2,
* });
* const bytes = new Uint8Array(10);
* codec.write(42, bytes, 0);
* // Next encoding starts at offset 6 instead of 4
* codec.read(bytes, 0);
* // Next decoding starts at offset 6 instead of 4
* ```
*
* @example
* Using `wrapBytes` to loop around negative offsets.
* ```ts
* const codec = offsetCodec(getU32Codec(), {
* preOffset: ({ wrapBytes }) => wrapBytes(-4), // Moves offset to last 4 bytes
* });
* const bytes = new Uint8Array(10);
* codec.write(42, bytes, 0); // Writes at bytes.length - 4
* codec.read(bytes, 0); // Reads from bytes.length - 4
* ```
*
* @remarks
* If you only need to adjust offsets for encoding, use {@link offsetEncoder}.
* If you only need to adjust offsets for decoding, use {@link offsetDecoder}.
*
* ```ts
* const bytes = new Uint8Array(10);
* offsetEncoder(getU32Encoder(), { preOffset: ({ preOffset }) => preOffset + 2 }).write(42, bytes, 0);
* const [value] = offsetDecoder(getU32Decoder(), { preOffset: ({ preOffset }) => preOffset + 2 }).read(bytes, 0);
* ```
*
* @see {@link offsetEncoder}
* @see {@link offsetDecoder}
*/
export function offsetCodec<TCodec extends AnyCodec>(codec: TCodec, config: OffsetConfig): TCodec {
return combineCodec(offsetEncoder(codec, config), offsetDecoder(codec, config)) as TCodec;
}
/** A modulo function that handles negative dividends and zero divisors. */
function modulo(dividend: number, divisor: number) {
if (divisor === 0) return 0;
return ((dividend % divisor) + divisor) % divisor;
}
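
The helper above is what powers the `wrapBytes` function exposed to pre/post-offset callbacks. A standalone sketch (independent of this package; `makeWrapBytes` is a hypothetical reconstruction for illustration, not the library's internal name):

```typescript
// A Euclidean-style modulo: the result always has the sign of the divisor,
// so negative offsets wrap around to the end of the byte array, and a
// zero-length array maps every offset to 0.
function modulo(dividend: number, divisor: number): number {
    if (divisor === 0) return 0;
    return ((dividend % divisor) + divisor) % divisor;
}

// Wrap an arbitrary offset into the range [0, length).
const makeWrapBytes = (length: number) => (offset: number) => modulo(offset, length);

const wrapBytes = makeWrapBytes(10);
const fromEnd = wrapBytes(-4); // 6 — i.e. "the last 4 bytes" of a 10-byte array
const wrapped = wrapBytes(12); // 2 — offsets past the end wrap back around
```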

197
node_modules/@solana/codecs-core/src/pad-codec.ts generated vendored Normal file

@@ -0,0 +1,197 @@
import { Codec, Decoder, Encoder, Offset } from './codec';
import { combineCodec } from './combine-codec';
import { offsetDecoder, offsetEncoder } from './offset-codec';
import { resizeDecoder, resizeEncoder } from './resize-codec';
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyEncoder = Encoder<any>;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyDecoder = Decoder<any>;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyCodec = Codec<any>;
/**
* Adds left padding to the given encoder, shifting the encoded value forward
* by `offset` bytes whilst increasing the size of the encoder accordingly.
*
* For more details, see {@link padLeftCodec}.
*
* @typeParam TFrom - The type of the value to encode.
*
* @param encoder - The encoder to pad.
* @param offset - The number of padding bytes to add before encoding.
* @returns A new encoder with left padding applied.
*
* @example
* ```ts
* const encoder = padLeftEncoder(getU16Encoder(), 2);
* const bytes = encoder.encode(0xffff); // 0x0000ffff (0xffff written at offset 2)
* ```
*
* @see {@link padLeftCodec}
* @see {@link padLeftDecoder}
*/
export function padLeftEncoder<TEncoder extends AnyEncoder>(encoder: TEncoder, offset: Offset): TEncoder {
return offsetEncoder(
resizeEncoder(encoder, size => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset },
);
}
/**
* Adds right padding to the given encoder, extending the encoded value by `offset`
* bytes whilst increasing the size of the encoder accordingly.
*
* For more details, see {@link padRightCodec}.
*
* @typeParam TFrom - The type of the value to encode.
*
* @param encoder - The encoder to pad.
* @param offset - The number of padding bytes to add after encoding.
* @returns A new encoder with right padding applied.
*
* @example
* ```ts
* const encoder = padRightEncoder(getU16Encoder(), 2);
* const bytes = encoder.encode(0xffff); // 0xffff0000 (two extra bytes added at the end)
* ```
*
* @see {@link padRightCodec}
* @see {@link padRightDecoder}
*/
export function padRightEncoder<TEncoder extends AnyEncoder>(encoder: TEncoder, offset: Offset): TEncoder {
return offsetEncoder(
resizeEncoder(encoder, size => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset },
);
}
/**
* Adds left padding to the given decoder, shifting the decoding position forward
* by `offset` bytes whilst increasing the size of the decoder accordingly.
*
* For more details, see {@link padLeftCodec}.
*
* @typeParam TTo - The type of the decoded value.
*
* @param decoder - The decoder to pad.
* @param offset - The number of padding bytes to skip before decoding.
* @returns A new decoder with left padding applied.
*
* @example
* ```ts
* const decoder = padLeftDecoder(getU16Decoder(), 2);
 * const value = decoder.decode(new Uint8Array([0, 0, 0x12, 0x34])); // 0x3412 (reads from offset 2)
* ```
*
* @see {@link padLeftCodec}
* @see {@link padLeftEncoder}
*/
export function padLeftDecoder<TDecoder extends AnyDecoder>(decoder: TDecoder, offset: Offset): TDecoder {
return offsetDecoder(
resizeDecoder(decoder, size => size + offset),
{ preOffset: ({ preOffset }) => preOffset + offset },
);
}
/**
* Adds right padding to the given decoder, extending the post-offset by `offset`
* bytes whilst increasing the size of the decoder accordingly.
*
* For more details, see {@link padRightCodec}.
*
* @typeParam TTo - The type of the decoded value.
*
* @param decoder - The decoder to pad.
* @param offset - The number of padding bytes to skip after decoding.
* @returns A new decoder with right padding applied.
*
* @example
* ```ts
* const decoder = padRightDecoder(getU16Decoder(), 2);
 * const value = decoder.decode(new Uint8Array([0x12, 0x34, 0, 0])); // 0x3412 (ignores trailing bytes)
* ```
*
* @see {@link padRightCodec}
* @see {@link padRightEncoder}
*/
export function padRightDecoder<TDecoder extends AnyDecoder>(decoder: TDecoder, offset: Offset): TDecoder {
return offsetDecoder(
resizeDecoder(decoder, size => size + offset),
{ postOffset: ({ postOffset }) => postOffset + offset },
);
}
/**
* Adds left padding to the given codec, shifting the encoding and decoding positions
* forward by `offset` bytes whilst increasing the size of the codec accordingly.
*
* This ensures that values are read and written at a later position in the byte array,
* while the padding bytes remain unused.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @param codec - The codec to pad.
* @param offset - The number of padding bytes to add before encoding and decoding.
* @returns A new codec with left padding applied.
*
* @example
* ```ts
* const codec = padLeftCodec(getU16Codec(), 2);
* const bytes = codec.encode(0xffff); // 0x0000ffff (0xffff written at offset 2)
* const value = codec.decode(bytes); // 0xffff (reads from offset 2)
* ```
*
* @remarks
* If you only need to apply padding for encoding, use {@link padLeftEncoder}.
* If you only need to apply padding for decoding, use {@link padLeftDecoder}.
*
* ```ts
* const bytes = padLeftEncoder(getU16Encoder(), 2).encode(0xffff);
* const value = padLeftDecoder(getU16Decoder(), 2).decode(bytes);
* ```
*
* @see {@link padLeftEncoder}
* @see {@link padLeftDecoder}
*/
export function padLeftCodec<TCodec extends AnyCodec>(codec: TCodec, offset: Offset): TCodec {
return combineCodec(padLeftEncoder(codec, offset), padLeftDecoder(codec, offset)) as TCodec;
}
/**
* Adds right padding to the given codec, extending the encoded and decoded value
* by `offset` bytes whilst increasing the size of the codec accordingly.
*
* The extra bytes remain unused, ensuring that the next operation starts further
* along the byte array.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
*
* @param codec - The codec to pad.
* @param offset - The number of padding bytes to add after encoding and decoding.
* @returns A new codec with right padding applied.
*
* @example
* ```ts
* const codec = padRightCodec(getU16Codec(), 2);
* const bytes = codec.encode(0xffff); // 0xffff0000 (two extra bytes added)
* const value = codec.decode(bytes); // 0xffff (ignores padding bytes)
* ```
*
* @remarks
* If you only need to apply padding for encoding, use {@link padRightEncoder}.
* If you only need to apply padding for decoding, use {@link padRightDecoder}.
*
* ```ts
* const bytes = padRightEncoder(getU16Encoder(), 2).encode(0xffff);
* const value = padRightDecoder(getU16Decoder(), 2).decode(bytes);
* ```
*
* @see {@link padRightEncoder}
* @see {@link padRightDecoder}
*/
export function padRightCodec<TCodec extends AnyCodec>(codec: TCodec, offset: Offset): TCodec {
return combineCodec(padRightEncoder(codec, offset), padRightDecoder(codec, offset)) as TCodec;
}
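
What `padLeftCodec(getU16Codec(), 2)` produces can be replicated without the library, which makes the offset/resize composition above concrete. A minimal sketch, assuming the default little-endian `u16` layout (`encodePaddedLeftU16` and `decodePaddedLeftU16` are illustrative names, not part of the public API):

```typescript
// Write a u16 at `padding` bytes into a buffer of size 2 + padding,
// leaving zeroed padding bytes in front — the padLeftCodec layout.
function encodePaddedLeftU16(value: number, padding: number): Uint8Array {
    const bytes = new Uint8Array(2 + padding);
    new DataView(bytes.buffer).setUint16(padding, value, true); // little-endian
    return bytes;
}

// Read the u16 back from the same position, skipping the padding.
function decodePaddedLeftU16(bytes: Uint8Array, padding: number): number {
    return new DataView(bytes.buffer, bytes.byteOffset).getUint16(padding, true);
}

const padded = encodePaddedLeftU16(0xffff, 2); // Uint8Array [0, 0, 0xff, 0xff]
const decoded = decodePaddedLeftU16(padded, 2); // 0xffff
```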


@@ -0,0 +1,21 @@
/**
* A read-only variant of `Uint8Array`.
*
* This type prevents modifications to the array by omitting mutable methods such as `copyWithin`,
* `fill`, `reverse`, `set`, and `sort`, while still allowing indexed access to elements.
*
* @example
* ```ts
* const bytes: ReadonlyUint8Array = new Uint8Array([1, 2, 3]);
* console.log(bytes[0]); // 1
* bytes[0] = 42; // Type error: Cannot assign to '0' because it is a read-only property.
* ```
*/
export interface ReadonlyUint8Array<TArrayBuffer extends ArrayBufferLike = ArrayBufferLike> extends Omit<
Uint8Array<TArrayBuffer>,
TypedArrayMutableProperties
> {
readonly [n: number]: number;
}
type TypedArrayMutableProperties = 'copyWithin' | 'fill' | 'reverse' | 'set' | 'sort';

209
node_modules/@solana/codecs-core/src/resize-codec.ts generated vendored Normal file

@@ -0,0 +1,209 @@
import { SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, SolanaError } from '@solana/errors';
import {
Codec,
createDecoder,
createEncoder,
Decoder,
Encoder,
FixedSizeCodec,
FixedSizeDecoder,
FixedSizeEncoder,
isFixedSize,
} from './codec';
import { combineCodec } from './combine-codec';
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyEncoder = Encoder<any>;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyDecoder = Decoder<any>;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyCodec = Codec<any>;
/**
* Updates the size of a given encoder.
*
* This function modifies the size of an encoder using a provided transformation function.
* For fixed-size encoders, it updates the `fixedSize` property, and for variable-size
* encoders, it adjusts the size calculation based on the encoded value.
*
* If the new size is negative, an error will be thrown.
*
* For more details, see {@link resizeCodec}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The original fixed size of the encoded value.
* @typeParam TNewSize - The new fixed size after resizing.
*
* @param encoder - The encoder whose size will be updated.
* @param resize - A function that takes the current size and returns the new size.
* @returns A new encoder with the updated size.
*
* @example
* Increasing the size of a `u16` encoder by 2 bytes.
* ```ts
* const encoder = resizeEncoder(getU16Encoder(), size => size + 2);
* encoder.encode(0xffff); // 0xffff0000 (two extra bytes added)
* ```
*
* @example
* Shrinking a `u32` encoder to only use 2 bytes.
* ```ts
* const encoder = resizeEncoder(getU32Encoder(), () => 2);
* encoder.fixedSize; // 2
* ```
*
* @see {@link resizeCodec}
* @see {@link resizeDecoder}
*/
export function resizeEncoder<TFrom, TSize extends number, TNewSize extends number>(
encoder: FixedSizeEncoder<TFrom, TSize>,
resize: (size: TSize) => TNewSize,
): FixedSizeEncoder<TFrom, TNewSize>;
export function resizeEncoder<TEncoder extends AnyEncoder>(
encoder: TEncoder,
resize: (size: number) => number,
): TEncoder;
export function resizeEncoder<TEncoder extends AnyEncoder>(
encoder: TEncoder,
resize: (size: number) => number,
): TEncoder {
if (isFixedSize(encoder)) {
const fixedSize = resize(encoder.fixedSize);
if (fixedSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: 'resizeEncoder',
});
}
return createEncoder({ ...encoder, fixedSize }) as TEncoder;
}
return createEncoder({
...encoder,
getSizeFromValue: value => {
const newSize = resize(encoder.getSizeFromValue(value));
if (newSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: newSize,
codecDescription: 'resizeEncoder',
});
}
return newSize;
},
}) as TEncoder;
}
/**
* Updates the size of a given decoder.
*
* This function modifies the size of a decoder using a provided transformation function.
* For fixed-size decoders, it updates the `fixedSize` property to reflect the new size.
* Variable-size decoders remain unchanged, as their size is determined dynamically.
*
* If the new size is negative, an error will be thrown.
*
* For more details, see {@link resizeCodec}.
*
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The original fixed size of the decoded value.
* @typeParam TNewSize - The new fixed size after resizing.
*
* @param decoder - The decoder whose size will be updated.
* @param resize - A function that takes the current size and returns the new size.
* @returns A new decoder with the updated size.
*
* @example
* Expanding a `u16` decoder to read 4 bytes instead of 2.
* ```ts
* const decoder = resizeDecoder(getU16Decoder(), size => size + 2);
* decoder.fixedSize; // 4
* ```
*
* @example
* Shrinking a `u32` decoder to only read 2 bytes.
* ```ts
* const decoder = resizeDecoder(getU32Decoder(), () => 2);
* decoder.fixedSize; // 2
* ```
*
* @see {@link resizeCodec}
* @see {@link resizeEncoder}
*/
export function resizeDecoder<TFrom, TSize extends number, TNewSize extends number>(
decoder: FixedSizeDecoder<TFrom, TSize>,
resize: (size: TSize) => TNewSize,
): FixedSizeDecoder<TFrom, TNewSize>;
export function resizeDecoder<TDecoder extends AnyDecoder>(
decoder: TDecoder,
resize: (size: number) => number,
): TDecoder;
export function resizeDecoder<TDecoder extends AnyDecoder>(
decoder: TDecoder,
resize: (size: number) => number,
): TDecoder {
if (isFixedSize(decoder)) {
const fixedSize = resize(decoder.fixedSize);
if (fixedSize < 0) {
throw new SolanaError(SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH, {
bytesLength: fixedSize,
codecDescription: 'resizeDecoder',
});
}
return createDecoder({ ...decoder, fixedSize }) as TDecoder;
}
return decoder;
}
/**
* Updates the size of a given codec.
*
 * This function modifies the size of both the encoder and decoder of a codec using
 * a provided transformation function. It is useful for adjusting the allocated byte size for
* encoding and decoding without altering the underlying data structure.
*
* If the new size is negative, an error will be thrown.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The original fixed size of the encoded/decoded value (for fixed-size codecs).
* @typeParam TNewSize - The new fixed size after resizing (for fixed-size codecs).
*
* @param codec - The codec whose size will be updated.
* @param resize - A function that takes the current size and returns the new size.
* @returns A new codec with the updated size.
*
* @example
* Expanding a `u16` codec from 2 to 4 bytes.
* ```ts
* const codec = resizeCodec(getU16Codec(), size => size + 2);
* const bytes = codec.encode(0xffff); // 0xffff0000 (two extra bytes added)
* const value = codec.decode(bytes); // 0xffff (reads original two bytes)
* ```
*
* @example
* Shrinking a `u32` codec to only use 2 bytes.
* ```ts
* const codec = resizeCodec(getU32Codec(), () => 2);
* codec.fixedSize; // 2
* ```
*
* @remarks
* If you only need to resize an encoder, use {@link resizeEncoder}.
* If you only need to resize a decoder, use {@link resizeDecoder}.
*
* ```ts
* const bytes = resizeEncoder(getU32Encoder(), (size) => size + 2).encode(0xffff);
* const value = resizeDecoder(getU32Decoder(), (size) => size + 2).decode(bytes);
* ```
*
* @see {@link resizeEncoder}
* @see {@link resizeDecoder}
*/
export function resizeCodec<TFrom, TTo extends TFrom, TSize extends number, TNewSize extends number>(
codec: FixedSizeCodec<TFrom, TTo, TSize>,
resize: (size: TSize) => TNewSize,
): FixedSizeCodec<TFrom, TTo, TNewSize>;
export function resizeCodec<TCodec extends AnyCodec>(codec: TCodec, resize: (size: number) => number): TCodec;
export function resizeCodec<TCodec extends AnyCodec>(codec: TCodec, resize: (size: number) => number): TCodec {
return combineCodec(resizeEncoder(codec, resize), resizeDecoder(codec, resize)) as TCodec;
}
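
The fixed-size branch above reduces to a small, self-contained rule: apply the resize function and reject negative results. A sketch without the library (`resizeFixedSize` is an illustrative name, and a plain `RangeError` stands in for `SolanaError`):

```typescript
// Resize a fixed byte length, throwing if the new length would be negative —
// mirroring the SOLANA_ERROR__CODECS__EXPECTED_POSITIVE_BYTE_LENGTH check.
function resizeFixedSize(fixedSize: number, resize: (size: number) => number): number {
    const newSize = resize(fixedSize);
    if (newSize < 0) {
        throw new RangeError(`Expected a positive byte length, got ${newSize}.`);
    }
    return newSize;
}

const grown = resizeFixedSize(2, size => size + 2); // 4 — a u16 padded to 4 bytes
const shrunk = resizeFixedSize(4, () => 2); // 2 — a u32 truncated to 2 bytes
```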

159
node_modules/@solana/codecs-core/src/reverse-codec.ts generated vendored Normal file

@@ -0,0 +1,159 @@
import {
assertIsFixedSize,
createDecoder,
createEncoder,
FixedSizeCodec,
FixedSizeDecoder,
FixedSizeEncoder,
} from './codec';
import { combineCodec } from './combine-codec';
import { ReadonlyUint8Array } from './readonly-uint8array';
function copySourceToTargetInReverse(
source: ReadonlyUint8Array,
target_WILL_MUTATE: Uint8Array,
sourceOffset: number,
sourceLength: number,
targetOffset: number = 0,
) {
while (sourceOffset < --sourceLength) {
const leftValue = source[sourceOffset];
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceLength];
target_WILL_MUTATE[sourceLength + targetOffset] = leftValue;
sourceOffset++;
}
if (sourceOffset === sourceLength) {
target_WILL_MUTATE[sourceOffset + targetOffset] = source[sourceOffset];
}
}
/**
* Reverses the bytes of a fixed-size encoder.
*
* Given a `FixedSizeEncoder`, this function returns a new `FixedSizeEncoder` that
* reverses the bytes within the fixed-size byte array when encoding.
*
* This can be useful to modify endianness or for other byte-order transformations.
*
* For more details, see {@link reverseCodec}.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TSize - The fixed size of the encoded value in bytes.
*
* @param encoder - The fixed-size encoder to reverse.
* @returns A new encoder that writes bytes in reverse order.
*
* @example
* Encoding a `u16` value in reverse order.
* ```ts
* const encoder = reverseEncoder(getU16Encoder({ endian: Endian.Big }));
* const bytes = encoder.encode(0x1234); // 0x3412 (bytes are flipped)
* ```
*
* @see {@link reverseCodec}
* @see {@link reverseDecoder}
*/
export function reverseEncoder<TFrom, TSize extends number>(
encoder: FixedSizeEncoder<TFrom, TSize>,
): FixedSizeEncoder<TFrom, TSize> {
assertIsFixedSize(encoder);
return createEncoder({
...encoder,
write: (value: TFrom, bytes, offset) => {
const newOffset = encoder.write(value, bytes, offset);
copySourceToTargetInReverse(
bytes /* source */,
bytes /* target_WILL_MUTATE */,
offset /* sourceOffset */,
offset + encoder.fixedSize /* sourceLength */,
);
return newOffset;
},
});
}
/**
* Reverses the bytes of a fixed-size decoder.
*
* Given a `FixedSizeDecoder`, this function returns a new `FixedSizeDecoder` that
* reverses the bytes within the fixed-size byte array before decoding.
*
* This can be useful to modify endianness or for other byte-order transformations.
*
* For more details, see {@link reverseCodec}.
*
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the decoded value in bytes.
*
* @param decoder - The fixed-size decoder to reverse.
* @returns A new decoder that reads bytes in reverse order.
*
* @example
* Decoding a reversed `u16` value.
* ```ts
* const decoder = reverseDecoder(getU16Decoder({ endian: Endian.Big }));
* const value = decoder.decode(new Uint8Array([0x34, 0x12])); // 0x1234 (bytes are flipped back)
* ```
*
* @see {@link reverseCodec}
* @see {@link reverseEncoder}
*/
export function reverseDecoder<TTo, TSize extends number>(
decoder: FixedSizeDecoder<TTo, TSize>,
): FixedSizeDecoder<TTo, TSize> {
assertIsFixedSize(decoder);
return createDecoder({
...decoder,
read: (bytes, offset) => {
const reversedBytes = bytes.slice();
copySourceToTargetInReverse(
bytes /* source */,
reversedBytes /* target_WILL_MUTATE */,
offset /* sourceOffset */,
offset + decoder.fixedSize /* sourceLength */,
);
return decoder.read(reversedBytes, offset);
},
});
}
/**
* Reverses the bytes of a fixed-size codec.
*
* Given a `FixedSizeCodec`, this function returns a new `FixedSizeCodec` that
* reverses the bytes within the fixed-size byte array during encoding and decoding.
*
* This can be useful to modify endianness or for other byte-order transformations.
*
* @typeParam TFrom - The type of the value to encode.
* @typeParam TTo - The type of the decoded value.
* @typeParam TSize - The fixed size of the encoded/decoded value in bytes.
*
* @param codec - The fixed-size codec to reverse.
* @returns A new codec that encodes and decodes bytes in reverse order.
*
* @example
* Reversing a `u16` codec.
* ```ts
* const codec = reverseCodec(getU16Codec({ endian: Endian.Big }));
* const bytes = codec.encode(0x1234); // 0x3412 (bytes are flipped)
* const value = codec.decode(bytes); // 0x1234 (bytes are flipped back)
* ```
*
* @remarks
* If you only need to reverse an encoder, use {@link reverseEncoder}.
* If you only need to reverse a decoder, use {@link reverseDecoder}.
*
* ```ts
* const bytes = reverseEncoder(getU16Encoder()).encode(0x1234);
* const value = reverseDecoder(getU16Decoder()).decode(bytes);
* ```
*
* @see {@link reverseEncoder}
* @see {@link reverseDecoder}
*/
export function reverseCodec<TFrom, TTo extends TFrom, TSize extends number>(
codec: FixedSizeCodec<TFrom, TTo, TSize>,
): FixedSizeCodec<TFrom, TTo, TSize> {
return combineCodec(reverseEncoder(codec), reverseDecoder(codec));
}
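
The subrange reversal above can be illustrated with a simpler, purely in-place variant. A sketch under the assumption of same-buffer use only (the real `copySourceToTargetInReverse` also supports copying into a separate target at an extra offset):

```typescript
// Reverse the bytes in [offset, offset + size) in place by swapping from
// both ends toward the middle.
function reverseSubrange(bytes: Uint8Array, offset: number, size: number): void {
    let left = offset;
    let right = offset + size - 1;
    while (left < right) {
        const tmp = bytes[left];
        bytes[left++] = bytes[right];
        bytes[right--] = tmp;
    }
}

const sample = new Uint8Array([0x12, 0x34, 0x56, 0x78]);
reverseSubrange(sample, 1, 2); // reverses only [0x34, 0x56]
// sample is now [0x12, 0x56, 0x34, 0x78]
```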

208
node_modules/@solana/codecs-core/src/transform-codec.ts generated vendored Normal file

@@ -0,0 +1,208 @@
import {
Codec,
createCodec,
createDecoder,
createEncoder,
Decoder,
Encoder,
FixedSizeCodec,
FixedSizeDecoder,
FixedSizeEncoder,
isVariableSize,
VariableSizeCodec,
VariableSizeDecoder,
VariableSizeEncoder,
} from './codec';
import { ReadonlyUint8Array } from './readonly-uint8array';
/**
* Transforms an encoder by mapping its input values.
*
* This function takes an existing `Encoder<A>` and returns an `Encoder<B>`, allowing values of type `B`
* to be converted into values of type `A` before encoding. The transformation is applied via the `unmap` function.
*
* This is useful for handling type conversions, applying default values, or structuring data before encoding.
*
* For more details, see {@link transformCodec}.
*
* @typeParam TOldFrom - The original type expected by the encoder.
* @typeParam TNewFrom - The new type that will be transformed before encoding.
*
* @param encoder - The encoder to transform.
* @param unmap - A function that converts values of `TNewFrom` into `TOldFrom` before encoding.
* @returns A new encoder that accepts `TNewFrom` values and transforms them before encoding.
*
* @example
* Encoding a string by counting its characters and storing the length as a `u32`.
* ```ts
* const encoder = transformEncoder(getU32Encoder(), (value: string) => value.length);
* encoder.encode("hello"); // 0x05000000 (stores length 5)
* ```
*
* @see {@link transformCodec}
* @see {@link transformDecoder}
*/
export function transformEncoder<TOldFrom, TNewFrom, TSize extends number>(
encoder: FixedSizeEncoder<TOldFrom, TSize>,
unmap: (value: TNewFrom) => TOldFrom,
): FixedSizeEncoder<TNewFrom, TSize>;
export function transformEncoder<TOldFrom, TNewFrom>(
encoder: VariableSizeEncoder<TOldFrom>,
unmap: (value: TNewFrom) => TOldFrom,
): VariableSizeEncoder<TNewFrom>;
export function transformEncoder<TOldFrom, TNewFrom>(
encoder: Encoder<TOldFrom>,
unmap: (value: TNewFrom) => TOldFrom,
): Encoder<TNewFrom>;
export function transformEncoder<TOldFrom, TNewFrom>(
encoder: Encoder<TOldFrom>,
unmap: (value: TNewFrom) => TOldFrom,
): Encoder<TNewFrom> {
return createEncoder({
...(isVariableSize(encoder)
? { ...encoder, getSizeFromValue: (value: TNewFrom) => encoder.getSizeFromValue(unmap(value)) }
: encoder),
write: (value: TNewFrom, bytes, offset) => encoder.write(unmap(value), bytes, offset),
});
}
/**
* Transforms a decoder by mapping its output values.
*
* This function takes an existing `Decoder<A>` and returns a `Decoder<B>`, allowing values of type `A`
* to be converted into values of type `B` after decoding. The transformation is applied via the `map` function.
*
* This is useful for post-processing, type conversions, or enriching decoded data.
*
* For more details, see {@link transformCodec}.
*
* @typeParam TOldTo - The original type returned by the decoder.
* @typeParam TNewTo - The new type that will be transformed after decoding.
*
* @param decoder - The decoder to transform.
* @param map - A function that converts values of `TOldTo` into `TNewTo` after decoding.
* @returns A new decoder that decodes into `TNewTo`.
*
* @example
* Decoding a stored `u32` length into a string of `'x'` characters.
* ```ts
* const decoder = transformDecoder(getU32Decoder(), (length) => 'x'.repeat(length));
* decoder.decode(new Uint8Array([0x05, 0x00, 0x00, 0x00])); // "xxxxx"
* ```
*
* @see {@link transformCodec}
* @see {@link transformEncoder}
*/
export function transformDecoder<TOldTo, TNewTo, TSize extends number>(
decoder: FixedSizeDecoder<TOldTo, TSize>,
map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo,
): FixedSizeDecoder<TNewTo, TSize>;
export function transformDecoder<TOldTo, TNewTo>(
decoder: VariableSizeDecoder<TOldTo>,
map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo,
): VariableSizeDecoder<TNewTo>;
export function transformDecoder<TOldTo, TNewTo>(
decoder: Decoder<TOldTo>,
map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo,
): Decoder<TNewTo>;
export function transformDecoder<TOldTo, TNewTo>(
decoder: Decoder<TOldTo>,
map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo,
): Decoder<TNewTo> {
return createDecoder({
...decoder,
read: (bytes: ReadonlyUint8Array | Uint8Array, offset) => {
const [value, newOffset] = decoder.read(bytes, offset);
return [map(value, bytes, offset), newOffset];
},
});
}
/**
* Transforms a codec by mapping its input and output values.
*
* This function takes an existing `Codec<A, B>` and returns a `Codec<C, D>`, allowing:
* - Values of type `C` to be transformed into `A` before encoding.
* - Values of type `B` to be transformed into `D` after decoding.
*
* This is useful for adapting codecs to work with different representations, handling default values, or
* converting between primitive and structured types.
*
* @typeParam TOldFrom - The original type expected by the codec.
* @typeParam TNewFrom - The new type that will be transformed before encoding.
* @typeParam TOldTo - The original type returned by the codec.
* @typeParam TNewTo - The new type that will be transformed after decoding.
*
* @param codec - The codec to transform.
* @param unmap - A function that converts values of `TNewFrom` into `TOldFrom` before encoding.
* @param map - A function that converts values of `TOldTo` into `TNewTo` after decoding (optional).
* @returns A new codec that encodes `TNewFrom` and decodes into `TNewTo`.
*
* @example
* Mapping a `u32` codec to encode string lengths and decode them into `'x'` characters.
* ```ts
* const codec = transformCodec(
* getU32Codec(),
* (value: string) => value.length, // Encode string length
* (length) => 'x'.repeat(length) // Decode length into a string of 'x's
* );
*
* const bytes = codec.encode("hello"); // 0x05000000 (stores length 5)
* const value = codec.decode(bytes); // "xxxxx"
* ```
*
* @remarks
* If only input transformation is needed, use {@link transformEncoder}.
* If only output transformation is needed, use {@link transformDecoder}.
*
* ```ts
* const bytes = transformEncoder(getU32Encoder(), (value: string) => value.length).encode("hello");
* const value = transformDecoder(getU32Decoder(), (length) => 'x'.repeat(length)).decode(bytes);
* ```
*
* @see {@link transformEncoder}
* @see {@link transformDecoder}
*/
export function transformCodec<TOldFrom, TNewFrom, TTo extends TNewFrom & TOldFrom, TSize extends number>(
codec: FixedSizeCodec<TOldFrom, TTo, TSize>,
unmap: (value: TNewFrom) => TOldFrom,
): FixedSizeCodec<TNewFrom, TTo, TSize>;
export function transformCodec<TOldFrom, TNewFrom, TTo extends TNewFrom & TOldFrom>(
codec: VariableSizeCodec<TOldFrom, TTo>,
unmap: (value: TNewFrom) => TOldFrom,
): VariableSizeCodec<TNewFrom, TTo>;
export function transformCodec<TOldFrom, TNewFrom, TTo extends TNewFrom & TOldFrom>(
codec: Codec<TOldFrom, TTo>,
unmap: (value: TNewFrom) => TOldFrom,
): Codec<TNewFrom, TTo>;
export function transformCodec<
TOldFrom,
TNewFrom,
TOldTo extends TOldFrom,
TNewTo extends TNewFrom,
TSize extends number,
>(
codec: FixedSizeCodec<TOldFrom, TOldTo, TSize>,
unmap: (value: TNewFrom) => TOldFrom,
map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo,
): FixedSizeCodec<TNewFrom, TNewTo, TSize>;
export function transformCodec<TOldFrom, TNewFrom, TOldTo extends TOldFrom, TNewTo extends TNewFrom>(
codec: VariableSizeCodec<TOldFrom, TOldTo>,
unmap: (value: TNewFrom) => TOldFrom,
map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo,
): VariableSizeCodec<TNewFrom, TNewTo>;
export function transformCodec<TOldFrom, TNewFrom, TOldTo extends TOldFrom, TNewTo extends TNewFrom>(
codec: Codec<TOldFrom, TOldTo>,
unmap: (value: TNewFrom) => TOldFrom,
map: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo,
): Codec<TNewFrom, TNewTo>;
export function transformCodec<TOldFrom, TNewFrom, TOldTo extends TOldFrom, TNewTo extends TNewFrom>(
codec: Codec<TOldFrom, TOldTo>,
unmap: (value: TNewFrom) => TOldFrom,
map?: (value: TOldTo, bytes: ReadonlyUint8Array | Uint8Array, offset: number) => TNewTo,
): Codec<TNewFrom, TNewTo> {
return createCodec({
...transformEncoder(codec, unmap),
read: map ? transformDecoder(codec, map).read : (codec.read as unknown as Decoder<TNewTo>['read']),
});
}
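
The `unmap`/`map` pattern above does not depend on the codec machinery, so it can be shown with plain functions. A hand-rolled sketch of the string-length example from the docs, using `DataView` in place of `getU32Codec` (little-endian assumed):

```typescript
// Stand-ins for getU32Codec's encode/decode, little-endian.
const encodeU32 = (value: number): Uint8Array => {
    const bytes = new Uint8Array(4);
    new DataView(bytes.buffer).setUint32(0, value, true);
    return bytes;
};
const decodeU32 = (bytes: Uint8Array): number =>
    new DataView(bytes.buffer, bytes.byteOffset).getUint32(0, true);

// The transform pair: unmap adapts input before encoding,
// map adapts output after decoding.
const unmap = (value: string) => value.length; // string -> number
const map = (length: number) => 'x'.repeat(length); // number -> string

const encoded = encodeU32(unmap('hello')); // 0x05000000 (stores length 5)
const roundTripped = map(decodeU32(encoded)); // "xxxxx"
```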

20
node_modules/@solana/codecs-numbers/LICENSE generated vendored Normal file

@@ -0,0 +1,20 @@
Copyright (c) 2023 Solana Labs, Inc
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

130
node_modules/@solana/codecs-numbers/README.md generated vendored Normal file

@@ -0,0 +1,130 @@
[![npm][npm-image]][npm-url]
[![npm-downloads][npm-downloads-image]][npm-url]
<br />
[![code-style-prettier][code-style-prettier-image]][code-style-prettier-url]
[code-style-prettier-image]: https://img.shields.io/badge/code_style-prettier-ff69b4.svg?style=flat-square
[code-style-prettier-url]: https://github.com/prettier/prettier
[npm-downloads-image]: https://img.shields.io/npm/dm/@solana/codecs-numbers?style=flat
[npm-image]: https://img.shields.io/npm/v/@solana/codecs-numbers?style=flat
[npm-url]: https://www.npmjs.com/package/@solana/codecs-numbers
# @solana/codecs-numbers
This package contains codecs for numbers of different sizes and endianness. It can be used standalone, but it is also exported as part of Kit [`@solana/kit`](https://github.com/anza-xyz/kit/tree/main/packages/kit).
This package is also part of the [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs) which acts as an entry point for all codec packages as well as for their documentation.
## Integer codecs
This package provides ten codecs of five different byte sizes for integers. Five of them store unsigned integers and the other five store signed integers.
```ts
// Unsigned integers.
getU8Codec().encode(42); // 0x2a
getU16Codec().encode(42); // 0x2a00
getU32Codec().encode(42); // 0x2a000000
getU64Codec().encode(42); // 0x2a00000000000000
getU128Codec().encode(42); // 0x2a000000000000000000000000000000
// Signed integers.
getI8Codec().encode(-42); // 0xd6
getI16Codec().encode(-42); // 0xd6ff
getI32Codec().encode(-42); // 0xd6ffffff
getI64Codec().encode(-42); // 0xd6ffffffffffffff
getI128Codec().encode(-42); // 0xd6ffffffffffffffffffffffffffffff
```
By default, integers are stored using little endianness but you may change this behaviour via the `endian` option. This option is available for every codec that uses more than a single byte.
```ts
// Big-endian unsigned integers.
getU16Codec({ endian: Endian.Big }).encode(42); // 0x002a
getU32Codec({ endian: Endian.Big }).encode(42); // 0x0000002a
getU64Codec({ endian: Endian.Big }).encode(42); // 0x000000000000002a
getU128Codec({ endian: Endian.Big }).encode(42); // 0x0000000000000000000000000000002a
// Big-endian signed integers.
getI16Codec({ endian: Endian.Big }).encode(-42); // 0xffd6
getI32Codec({ endian: Endian.Big }).encode(-42); // 0xffffffd6
getI64Codec({ endian: Endian.Big }).encode(-42); // 0xffffffffffffffd6
getI128Codec({ endian: Endian.Big }).encode(-42); // 0xffffffffffffffffffffffffffffffd6
```
All integer codecs are of type `Codec<number>` except for the `u64`, `u128`, `i64` and `i128` codecs which are of type `Codec<number | bigint, bigint>`. This means we can provide either a `number` or a `bigint` value to encode but the decoded value will always be a `bigint`. This is because JavaScript's native `number` type does not support integers larger than `2^53 - 1` and these large integer codecs have the potential to go over that value.
```ts
const bytesFromNumber = getU64Codec().encode(42);
getU64Codec().decode(bytesFromNumber); // BigInt(42)
// OR
const bytesFromBigInt = getU64Codec().encode(BigInt(42));
getU64Codec().decode(bytesFromBigInt); // BigInt(42)
```
Finally, for each of these `get*Codec` functions, separate `get*Encoder` and `get*Decoder` functions exist to focus on only one side of the serialization and tree-shake the rest of the functions away.
```ts
const bytes = getU8Encoder().encode(42);
const value = getU8Decoder().decode(bytes);
```
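Under the hood, these fixed-size layouts are plain two's-complement encodings written through a `DataView`. The sketch below (an illustration, not the package's internals; `encodeU16` is a hypothetical helper name) shows how a `u16` value lands in bytes in either endianness:

```javascript
// Encode an unsigned 16-bit integer the way a u16 codec would:
// little-endian by default, big-endian on request.
function encodeU16(value, littleEndian = true) {
  const buffer = new ArrayBuffer(2);
  new DataView(buffer).setUint16(0, value, littleEndian);
  return new Uint8Array(buffer);
}

encodeU16(42);        // bytes [42, 0]  -> 0x2a00 (little-endian)
encodeU16(42, false); // bytes [0, 42]  -> 0x002a (big-endian)
```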
## Decimal number codecs
This package also provides two codecs for floating-point numbers: one using 32 bits and one using 64 bits.
```ts
getF32Codec().encode(-1.5); // 0x0000c0bf
getF64Codec().encode(-1.5); // 0x000000000000f8bf
```
As with the integer codecs, values are stored in little-endian order by default but may be stored in big-endian order using the `endian` option.
```ts
getF32Codec({ endian: Endian.Big }).encode(-1.5); // 0xbfc00000
getF64Codec({ endian: Endian.Big }).encode(-1.5); // 0xbff8000000000000
```
Note that based on the selected codec, some of the precision of the number you are encoding may be lost when decoding it. For instance, when storing `3.1415` using a `f32` codec, you will not get the exact same number back.
```ts
const bytes = getF32Codec().encode(3.1415); // 0x560e4940
const value = getF32Codec().decode(bytes); // 3.1414999961853027 !== 3.1415
```
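The rounding comes from IEEE-754 single precision itself, not from the codec. You can reproduce the same loss with nothing but a `DataView` round trip (an independent check; `roundTripF32` is an illustrative helper, not part of this package):

```javascript
// Round-trip a number through 32-bit float storage, as an f32 codec does.
function roundTripF32(value) {
  const view = new DataView(new ArrayBuffer(4));
  view.setFloat32(0, value);
  return view.getFloat32(0);
}

roundTripF32(3.1415) === 3.1415;             // false: precision lost
roundTripF32(3.1415) === Math.fround(3.1415); // true: same f32 rounding
roundTripF32(-1.5) === -1.5;                 // true: -1.5 is exact in f32
```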
As usual, separate encoder and decoder functions are available for these codecs.
```ts
getF32Encoder().encode(-1.5);
getF32Decoder().decode(new Uint8Array([...]));
getF64Encoder().encode(-1.5);
getF64Decoder().decode(new Uint8Array([...]));
```
## Short u16 codec
This last integer codec is a less common `VariableSizeCodec` that stores an unsigned integer using between 1 and 3 bytes depending on the value of that integer.
```ts
const bytes = getShortU16Codec().encode(42); // 0x2a
const value = getShortU16Codec().decode(bytes); // 42
```
If the provided integer is equal to or lower than `0x7f`, it will be stored as-is, using a single byte. However, if the integer is above `0x7f`, then the top bit is set and the remaining value is stored in the next bytes. Each byte follows the same pattern until the third byte. The third byte, if needed, uses all 8 bits to store the last byte of the original value.
In other words, this codec has a variable size that adapts to the magnitude of the integer. In the illustration below, you can see the `0` and `1` byte flags for each scenario as well as the available bits to store the integer marked with `X`.
```
0XXXXXXX <- From 0 to 127.
1XXXXXXX 0XXXXXXX <- From 128 to 16,383.
1XXXXXXX 1XXXXXXX XXXXXXXX <- From 16,384 to 4,194,303.
```
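The scheme above is a 7-bit little-endian varint capped at three bytes. A minimal sketch of the encode and decode loops for values in the codec's valid range (0 to 65,535), using hypothetical helper names rather than this package's API:

```javascript
// Encode: emit 7 bits per byte, lowest bits first; set the top bit
// of every byte that has a successor.
function encodeShortU16(value) {
  const bytes = [];
  do {
    let byte = value & 0x7f;
    value >>= 7;
    if (value > 0) byte |= 0x80;
    bytes.push(byte);
  } while (value > 0);
  return bytes;
}

// Decode: accumulate 7 bits per byte until a byte with the top bit clear,
// returning the value and the offset just past the encoded bytes.
function decodeShortU16(bytes, offset = 0) {
  let value = 0;
  let i = 0;
  let byte;
  do {
    byte = bytes[offset + i];
    value |= (byte & 0x7f) << (i * 7);
    i += 1;
  } while (byte & 0x80);
  return [value, offset + i];
}

encodeShortU16(42);               // [0x2a]        -> one byte
encodeShortU16(128);              // [0x80, 0x01]  -> two bytes
decodeShortU16([0x80, 0x01])[0];  // 128
```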
This codec is mainly used internally when encoding and decoding Solana transactions.
Separate encoder and decoder functions are also available via `getShortU16Encoder` and `getShortU16Decoder` respectively.
---
To read more about the available codecs and how to use them, check out the documentation of the main [`@solana/codecs` package](https://github.com/anza-xyz/kit/tree/main/packages/codecs).

View File

@@ -0,0 +1,326 @@
'use strict';
var errors = require('@solana/errors');
var codecsCore = require('@solana/codecs-core');
// src/assertions.ts
function assertNumberIsBetweenForCodec(codecDescription, min, max, value) {
if (value < min || value > max) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__NUMBER_OUT_OF_RANGE, {
codecDescription,
max,
min,
value
});
}
}
// src/common.ts
var Endian = /* @__PURE__ */ ((Endian2) => {
Endian2[Endian2["Little"] = 0] = "Little";
Endian2[Endian2["Big"] = 1] = "Big";
return Endian2;
})(Endian || {});
function isLittleEndian(config) {
return config?.endian === 1 /* Big */ ? false : true;
}
function numberEncoderFactory(input) {
return codecsCore.createEncoder({
fixedSize: input.size,
write(value, bytes, offset) {
if (input.range) {
assertNumberIsBetweenForCodec(input.name, input.range[0], input.range[1], value);
}
const arrayBuffer = new ArrayBuffer(input.size);
input.set(new DataView(arrayBuffer), value, isLittleEndian(input.config));
bytes.set(new Uint8Array(arrayBuffer), offset);
return offset + input.size;
}
});
}
function numberDecoderFactory(input) {
return codecsCore.createDecoder({
fixedSize: input.size,
read(bytes, offset = 0) {
codecsCore.assertByteArrayIsNotEmptyForCodec(input.name, bytes, offset);
codecsCore.assertByteArrayHasEnoughBytesForCodec(input.name, input.size, bytes, offset);
const view = new DataView(codecsCore.toArrayBuffer(bytes, offset, input.size));
return [input.get(view, isLittleEndian(input.config)), offset + input.size];
}
});
}
// src/f32.ts
var getF32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f32",
set: (view, value, le) => view.setFloat32(0, Number(value), le),
size: 4
});
var getF32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat32(0, le),
name: "f32",
size: 4
});
var getF32Codec = (config = {}) => codecsCore.combineCodec(getF32Encoder(config), getF32Decoder(config));
var getF64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f64",
set: (view, value, le) => view.setFloat64(0, Number(value), le),
size: 8
});
var getF64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat64(0, le),
name: "f64",
size: 8
});
var getF64Codec = (config = {}) => codecsCore.combineCodec(getF64Encoder(config), getF64Decoder(config));
var getI128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i128",
range: [-BigInt("0x7fffffffffffffffffffffffffffffff") - 1n, BigInt("0x7fffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigInt64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getI128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigInt64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "i128",
size: 16
});
var getI128Codec = (config = {}) => codecsCore.combineCodec(getI128Encoder(config), getI128Decoder(config));
var getI16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i16",
range: [-Number("0x7fff") - 1, Number("0x7fff")],
set: (view, value, le) => view.setInt16(0, Number(value), le),
size: 2
});
var getI16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt16(0, le),
name: "i16",
size: 2
});
var getI16Codec = (config = {}) => codecsCore.combineCodec(getI16Encoder(config), getI16Decoder(config));
var getI32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i32",
range: [-Number("0x7fffffff") - 1, Number("0x7fffffff")],
set: (view, value, le) => view.setInt32(0, Number(value), le),
size: 4
});
var getI32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt32(0, le),
name: "i32",
size: 4
});
var getI32Codec = (config = {}) => codecsCore.combineCodec(getI32Encoder(config), getI32Decoder(config));
var getI64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i64",
range: [-BigInt("0x7fffffffffffffff") - 1n, BigInt("0x7fffffffffffffff")],
set: (view, value, le) => view.setBigInt64(0, BigInt(value), le),
size: 8
});
var getI64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigInt64(0, le),
name: "i64",
size: 8
});
var getI64Codec = (config = {}) => codecsCore.combineCodec(getI64Encoder(config), getI64Decoder(config));
var getI8Encoder = () => numberEncoderFactory({
name: "i8",
range: [-Number("0x7f") - 1, Number("0x7f")],
set: (view, value) => view.setInt8(0, Number(value)),
size: 1
});
var getI8Decoder = () => numberDecoderFactory({
get: (view) => view.getInt8(0),
name: "i8",
size: 1
});
var getI8Codec = () => codecsCore.combineCodec(getI8Encoder(), getI8Decoder());
var getShortU16Encoder = () => codecsCore.createEncoder({
getSizeFromValue: (value) => {
if (value <= 127) return 1;
if (value <= 16383) return 2;
return 3;
},
maxSize: 3,
write: (value, bytes, offset) => {
assertNumberIsBetweenForCodec("shortU16", 0, 65535, value);
const shortU16Bytes = [0];
for (let ii = 0; ; ii += 1) {
const alignedValue = Number(value) >> ii * 7;
if (alignedValue === 0) {
break;
}
const nextSevenBits = 127 & alignedValue;
shortU16Bytes[ii] = nextSevenBits;
if (ii > 0) {
shortU16Bytes[ii - 1] |= 128;
}
}
bytes.set(shortU16Bytes, offset);
return offset + shortU16Bytes.length;
}
});
var getShortU16Decoder = () => codecsCore.createDecoder({
maxSize: 3,
read: (bytes, offset) => {
let value = 0;
let byteCount = 0;
while (++byteCount) {
const byteIndex = byteCount - 1;
const currentByte = bytes[offset + byteIndex];
const nextSevenBits = 127 & currentByte;
value |= nextSevenBits << byteIndex * 7;
if ((currentByte & 128) === 0) {
break;
}
}
return [value, offset + byteCount];
}
});
var getShortU16Codec = () => codecsCore.combineCodec(getShortU16Encoder(), getShortU16Decoder());
var getU128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u128",
range: [0n, BigInt("0xffffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigUint64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getU128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigUint64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "u128",
size: 16
});
var getU128Codec = (config = {}) => codecsCore.combineCodec(getU128Encoder(config), getU128Decoder(config));
var getU16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u16",
range: [0, Number("0xffff")],
set: (view, value, le) => view.setUint16(0, Number(value), le),
size: 2
});
var getU16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint16(0, le),
name: "u16",
size: 2
});
var getU16Codec = (config = {}) => codecsCore.combineCodec(getU16Encoder(config), getU16Decoder(config));
var getU32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u32",
range: [0, Number("0xffffffff")],
set: (view, value, le) => view.setUint32(0, Number(value), le),
size: 4
});
var getU32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint32(0, le),
name: "u32",
size: 4
});
var getU32Codec = (config = {}) => codecsCore.combineCodec(getU32Encoder(config), getU32Decoder(config));
var getU64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u64",
range: [0n, BigInt("0xffffffffffffffff")],
set: (view, value, le) => view.setBigUint64(0, BigInt(value), le),
size: 8
});
var getU64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigUint64(0, le),
name: "u64",
size: 8
});
var getU64Codec = (config = {}) => codecsCore.combineCodec(getU64Encoder(config), getU64Decoder(config));
var getU8Encoder = () => numberEncoderFactory({
name: "u8",
range: [0, Number("0xff")],
set: (view, value) => view.setUint8(0, Number(value)),
size: 1
});
var getU8Decoder = () => numberDecoderFactory({
get: (view) => view.getUint8(0),
name: "u8",
size: 1
});
var getU8Codec = () => codecsCore.combineCodec(getU8Encoder(), getU8Decoder());
exports.Endian = Endian;
exports.assertNumberIsBetweenForCodec = assertNumberIsBetweenForCodec;
exports.getF32Codec = getF32Codec;
exports.getF32Decoder = getF32Decoder;
exports.getF32Encoder = getF32Encoder;
exports.getF64Codec = getF64Codec;
exports.getF64Decoder = getF64Decoder;
exports.getF64Encoder = getF64Encoder;
exports.getI128Codec = getI128Codec;
exports.getI128Decoder = getI128Decoder;
exports.getI128Encoder = getI128Encoder;
exports.getI16Codec = getI16Codec;
exports.getI16Decoder = getI16Decoder;
exports.getI16Encoder = getI16Encoder;
exports.getI32Codec = getI32Codec;
exports.getI32Decoder = getI32Decoder;
exports.getI32Encoder = getI32Encoder;
exports.getI64Codec = getI64Codec;
exports.getI64Decoder = getI64Decoder;
exports.getI64Encoder = getI64Encoder;
exports.getI8Codec = getI8Codec;
exports.getI8Decoder = getI8Decoder;
exports.getI8Encoder = getI8Encoder;
exports.getShortU16Codec = getShortU16Codec;
exports.getShortU16Decoder = getShortU16Decoder;
exports.getShortU16Encoder = getShortU16Encoder;
exports.getU128Codec = getU128Codec;
exports.getU128Decoder = getU128Decoder;
exports.getU128Encoder = getU128Encoder;
exports.getU16Codec = getU16Codec;
exports.getU16Decoder = getU16Decoder;
exports.getU16Encoder = getU16Encoder;
exports.getU32Codec = getU32Codec;
exports.getU32Decoder = getU32Decoder;
exports.getU32Encoder = getU32Encoder;
exports.getU64Codec = getU64Codec;
exports.getU64Decoder = getU64Decoder;
exports.getU64Encoder = getU64Encoder;
exports.getU8Codec = getU8Codec;
exports.getU8Decoder = getU8Decoder;
exports.getU8Encoder = getU8Encoder;
//# sourceMappingURL=index.browser.cjs.map

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,284 @@
import { SolanaError, SOLANA_ERROR__CODECS__NUMBER_OUT_OF_RANGE } from '@solana/errors';
import { combineCodec, createDecoder, createEncoder, assertByteArrayIsNotEmptyForCodec, assertByteArrayHasEnoughBytesForCodec, toArrayBuffer } from '@solana/codecs-core';
// src/assertions.ts
function assertNumberIsBetweenForCodec(codecDescription, min, max, value) {
if (value < min || value > max) {
throw new SolanaError(SOLANA_ERROR__CODECS__NUMBER_OUT_OF_RANGE, {
codecDescription,
max,
min,
value
});
}
}
// src/common.ts
var Endian = /* @__PURE__ */ ((Endian2) => {
Endian2[Endian2["Little"] = 0] = "Little";
Endian2[Endian2["Big"] = 1] = "Big";
return Endian2;
})(Endian || {});
function isLittleEndian(config) {
return config?.endian === 1 /* Big */ ? false : true;
}
function numberEncoderFactory(input) {
return createEncoder({
fixedSize: input.size,
write(value, bytes, offset) {
if (input.range) {
assertNumberIsBetweenForCodec(input.name, input.range[0], input.range[1], value);
}
const arrayBuffer = new ArrayBuffer(input.size);
input.set(new DataView(arrayBuffer), value, isLittleEndian(input.config));
bytes.set(new Uint8Array(arrayBuffer), offset);
return offset + input.size;
}
});
}
function numberDecoderFactory(input) {
return createDecoder({
fixedSize: input.size,
read(bytes, offset = 0) {
assertByteArrayIsNotEmptyForCodec(input.name, bytes, offset);
assertByteArrayHasEnoughBytesForCodec(input.name, input.size, bytes, offset);
const view = new DataView(toArrayBuffer(bytes, offset, input.size));
return [input.get(view, isLittleEndian(input.config)), offset + input.size];
}
});
}
// src/f32.ts
var getF32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f32",
set: (view, value, le) => view.setFloat32(0, Number(value), le),
size: 4
});
var getF32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat32(0, le),
name: "f32",
size: 4
});
var getF32Codec = (config = {}) => combineCodec(getF32Encoder(config), getF32Decoder(config));
var getF64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f64",
set: (view, value, le) => view.setFloat64(0, Number(value), le),
size: 8
});
var getF64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat64(0, le),
name: "f64",
size: 8
});
var getF64Codec = (config = {}) => combineCodec(getF64Encoder(config), getF64Decoder(config));
var getI128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i128",
range: [-BigInt("0x7fffffffffffffffffffffffffffffff") - 1n, BigInt("0x7fffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigInt64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getI128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigInt64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "i128",
size: 16
});
var getI128Codec = (config = {}) => combineCodec(getI128Encoder(config), getI128Decoder(config));
var getI16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i16",
range: [-Number("0x7fff") - 1, Number("0x7fff")],
set: (view, value, le) => view.setInt16(0, Number(value), le),
size: 2
});
var getI16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt16(0, le),
name: "i16",
size: 2
});
var getI16Codec = (config = {}) => combineCodec(getI16Encoder(config), getI16Decoder(config));
var getI32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i32",
range: [-Number("0x7fffffff") - 1, Number("0x7fffffff")],
set: (view, value, le) => view.setInt32(0, Number(value), le),
size: 4
});
var getI32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt32(0, le),
name: "i32",
size: 4
});
var getI32Codec = (config = {}) => combineCodec(getI32Encoder(config), getI32Decoder(config));
var getI64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i64",
range: [-BigInt("0x7fffffffffffffff") - 1n, BigInt("0x7fffffffffffffff")],
set: (view, value, le) => view.setBigInt64(0, BigInt(value), le),
size: 8
});
var getI64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigInt64(0, le),
name: "i64",
size: 8
});
var getI64Codec = (config = {}) => combineCodec(getI64Encoder(config), getI64Decoder(config));
var getI8Encoder = () => numberEncoderFactory({
name: "i8",
range: [-Number("0x7f") - 1, Number("0x7f")],
set: (view, value) => view.setInt8(0, Number(value)),
size: 1
});
var getI8Decoder = () => numberDecoderFactory({
get: (view) => view.getInt8(0),
name: "i8",
size: 1
});
var getI8Codec = () => combineCodec(getI8Encoder(), getI8Decoder());
var getShortU16Encoder = () => createEncoder({
getSizeFromValue: (value) => {
if (value <= 127) return 1;
if (value <= 16383) return 2;
return 3;
},
maxSize: 3,
write: (value, bytes, offset) => {
assertNumberIsBetweenForCodec("shortU16", 0, 65535, value);
const shortU16Bytes = [0];
for (let ii = 0; ; ii += 1) {
const alignedValue = Number(value) >> ii * 7;
if (alignedValue === 0) {
break;
}
const nextSevenBits = 127 & alignedValue;
shortU16Bytes[ii] = nextSevenBits;
if (ii > 0) {
shortU16Bytes[ii - 1] |= 128;
}
}
bytes.set(shortU16Bytes, offset);
return offset + shortU16Bytes.length;
}
});
var getShortU16Decoder = () => createDecoder({
maxSize: 3,
read: (bytes, offset) => {
let value = 0;
let byteCount = 0;
while (++byteCount) {
const byteIndex = byteCount - 1;
const currentByte = bytes[offset + byteIndex];
const nextSevenBits = 127 & currentByte;
value |= nextSevenBits << byteIndex * 7;
if ((currentByte & 128) === 0) {
break;
}
}
return [value, offset + byteCount];
}
});
var getShortU16Codec = () => combineCodec(getShortU16Encoder(), getShortU16Decoder());
var getU128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u128",
range: [0n, BigInt("0xffffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigUint64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getU128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigUint64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "u128",
size: 16
});
var getU128Codec = (config = {}) => combineCodec(getU128Encoder(config), getU128Decoder(config));
var getU16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u16",
range: [0, Number("0xffff")],
set: (view, value, le) => view.setUint16(0, Number(value), le),
size: 2
});
var getU16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint16(0, le),
name: "u16",
size: 2
});
var getU16Codec = (config = {}) => combineCodec(getU16Encoder(config), getU16Decoder(config));
var getU32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u32",
range: [0, Number("0xffffffff")],
set: (view, value, le) => view.setUint32(0, Number(value), le),
size: 4
});
var getU32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint32(0, le),
name: "u32",
size: 4
});
var getU32Codec = (config = {}) => combineCodec(getU32Encoder(config), getU32Decoder(config));
var getU64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u64",
range: [0n, BigInt("0xffffffffffffffff")],
set: (view, value, le) => view.setBigUint64(0, BigInt(value), le),
size: 8
});
var getU64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigUint64(0, le),
name: "u64",
size: 8
});
var getU64Codec = (config = {}) => combineCodec(getU64Encoder(config), getU64Decoder(config));
var getU8Encoder = () => numberEncoderFactory({
name: "u8",
range: [0, Number("0xff")],
set: (view, value) => view.setUint8(0, Number(value)),
size: 1
});
var getU8Decoder = () => numberDecoderFactory({
get: (view) => view.getUint8(0),
name: "u8",
size: 1
});
var getU8Codec = () => combineCodec(getU8Encoder(), getU8Decoder());
export { Endian, assertNumberIsBetweenForCodec, getF32Codec, getF32Decoder, getF32Encoder, getF64Codec, getF64Decoder, getF64Encoder, getI128Codec, getI128Decoder, getI128Encoder, getI16Codec, getI16Decoder, getI16Encoder, getI32Codec, getI32Decoder, getI32Encoder, getI64Codec, getI64Decoder, getI64Encoder, getI8Codec, getI8Decoder, getI8Encoder, getShortU16Codec, getShortU16Decoder, getShortU16Encoder, getU128Codec, getU128Decoder, getU128Encoder, getU16Codec, getU16Decoder, getU16Encoder, getU32Codec, getU32Decoder, getU32Encoder, getU64Codec, getU64Decoder, getU64Encoder, getU8Codec, getU8Decoder, getU8Encoder };
//# sourceMappingURL=index.browser.mjs.map

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,284 @@
import { SolanaError, SOLANA_ERROR__CODECS__NUMBER_OUT_OF_RANGE } from '@solana/errors';
import { combineCodec, createDecoder, createEncoder, assertByteArrayIsNotEmptyForCodec, assertByteArrayHasEnoughBytesForCodec, toArrayBuffer } from '@solana/codecs-core';
// src/assertions.ts
function assertNumberIsBetweenForCodec(codecDescription, min, max, value) {
if (value < min || value > max) {
throw new SolanaError(SOLANA_ERROR__CODECS__NUMBER_OUT_OF_RANGE, {
codecDescription,
max,
min,
value
});
}
}
// src/common.ts
var Endian = /* @__PURE__ */ ((Endian2) => {
Endian2[Endian2["Little"] = 0] = "Little";
Endian2[Endian2["Big"] = 1] = "Big";
return Endian2;
})(Endian || {});
function isLittleEndian(config) {
return config?.endian === 1 /* Big */ ? false : true;
}
function numberEncoderFactory(input) {
return createEncoder({
fixedSize: input.size,
write(value, bytes, offset) {
if (input.range) {
assertNumberIsBetweenForCodec(input.name, input.range[0], input.range[1], value);
}
const arrayBuffer = new ArrayBuffer(input.size);
input.set(new DataView(arrayBuffer), value, isLittleEndian(input.config));
bytes.set(new Uint8Array(arrayBuffer), offset);
return offset + input.size;
}
});
}
function numberDecoderFactory(input) {
return createDecoder({
fixedSize: input.size,
read(bytes, offset = 0) {
assertByteArrayIsNotEmptyForCodec(input.name, bytes, offset);
assertByteArrayHasEnoughBytesForCodec(input.name, input.size, bytes, offset);
const view = new DataView(toArrayBuffer(bytes, offset, input.size));
return [input.get(view, isLittleEndian(input.config)), offset + input.size];
}
});
}
// src/f32.ts
var getF32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f32",
set: (view, value, le) => view.setFloat32(0, Number(value), le),
size: 4
});
var getF32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat32(0, le),
name: "f32",
size: 4
});
var getF32Codec = (config = {}) => combineCodec(getF32Encoder(config), getF32Decoder(config));
var getF64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f64",
set: (view, value, le) => view.setFloat64(0, Number(value), le),
size: 8
});
var getF64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat64(0, le),
name: "f64",
size: 8
});
var getF64Codec = (config = {}) => combineCodec(getF64Encoder(config), getF64Decoder(config));
var getI128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i128",
range: [-BigInt("0x7fffffffffffffffffffffffffffffff") - 1n, BigInt("0x7fffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigInt64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getI128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigInt64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "i128",
size: 16
});
var getI128Codec = (config = {}) => combineCodec(getI128Encoder(config), getI128Decoder(config));
var getI16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i16",
range: [-Number("0x7fff") - 1, Number("0x7fff")],
set: (view, value, le) => view.setInt16(0, Number(value), le),
size: 2
});
var getI16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt16(0, le),
name: "i16",
size: 2
});
var getI16Codec = (config = {}) => combineCodec(getI16Encoder(config), getI16Decoder(config));
var getI32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i32",
range: [-Number("0x7fffffff") - 1, Number("0x7fffffff")],
set: (view, value, le) => view.setInt32(0, Number(value), le),
size: 4
});
var getI32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt32(0, le),
name: "i32",
size: 4
});
var getI32Codec = (config = {}) => combineCodec(getI32Encoder(config), getI32Decoder(config));
var getI64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i64",
range: [-BigInt("0x7fffffffffffffff") - 1n, BigInt("0x7fffffffffffffff")],
set: (view, value, le) => view.setBigInt64(0, BigInt(value), le),
size: 8
});
var getI64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigInt64(0, le),
name: "i64",
size: 8
});
var getI64Codec = (config = {}) => combineCodec(getI64Encoder(config), getI64Decoder(config));
var getI8Encoder = () => numberEncoderFactory({
name: "i8",
range: [-Number("0x7f") - 1, Number("0x7f")],
set: (view, value) => view.setInt8(0, Number(value)),
size: 1
});
var getI8Decoder = () => numberDecoderFactory({
get: (view) => view.getInt8(0),
name: "i8",
size: 1
});
var getI8Codec = () => combineCodec(getI8Encoder(), getI8Decoder());
var getShortU16Encoder = () => createEncoder({
getSizeFromValue: (value) => {
if (value <= 127) return 1;
if (value <= 16383) return 2;
return 3;
},
maxSize: 3,
write: (value, bytes, offset) => {
assertNumberIsBetweenForCodec("shortU16", 0, 65535, value);
const shortU16Bytes = [0];
for (let ii = 0; ; ii += 1) {
const alignedValue = Number(value) >> ii * 7;
if (alignedValue === 0) {
break;
}
const nextSevenBits = 127 & alignedValue;
shortU16Bytes[ii] = nextSevenBits;
if (ii > 0) {
shortU16Bytes[ii - 1] |= 128;
}
}
bytes.set(shortU16Bytes, offset);
return offset + shortU16Bytes.length;
}
});
var getShortU16Decoder = () => createDecoder({
maxSize: 3,
read: (bytes, offset) => {
let value = 0;
let byteCount = 0;
while (++byteCount) {
const byteIndex = byteCount - 1;
const currentByte = bytes[offset + byteIndex];
const nextSevenBits = 127 & currentByte;
value |= nextSevenBits << byteIndex * 7;
if ((currentByte & 128) === 0) {
break;
}
}
return [value, offset + byteCount];
}
});
var getShortU16Codec = () => combineCodec(getShortU16Encoder(), getShortU16Decoder());
var getU128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u128",
range: [0n, BigInt("0xffffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigUint64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getU128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigUint64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "u128",
size: 16
});
var getU128Codec = (config = {}) => combineCodec(getU128Encoder(config), getU128Decoder(config));
var getU16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u16",
range: [0, Number("0xffff")],
set: (view, value, le) => view.setUint16(0, Number(value), le),
size: 2
});
var getU16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint16(0, le),
name: "u16",
size: 2
});
var getU16Codec = (config = {}) => combineCodec(getU16Encoder(config), getU16Decoder(config));
var getU32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u32",
range: [0, Number("0xffffffff")],
set: (view, value, le) => view.setUint32(0, Number(value), le),
size: 4
});
var getU32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint32(0, le),
name: "u32",
size: 4
});
var getU32Codec = (config = {}) => combineCodec(getU32Encoder(config), getU32Decoder(config));
var getU64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u64",
range: [0n, BigInt("0xffffffffffffffff")],
set: (view, value, le) => view.setBigUint64(0, BigInt(value), le),
size: 8
});
var getU64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigUint64(0, le),
name: "u64",
size: 8
});
var getU64Codec = (config = {}) => combineCodec(getU64Encoder(config), getU64Decoder(config));
var getU8Encoder = () => numberEncoderFactory({
name: "u8",
range: [0, Number("0xff")],
set: (view, value) => view.setUint8(0, Number(value)),
size: 1
});
var getU8Decoder = () => numberDecoderFactory({
get: (view) => view.getUint8(0),
name: "u8",
size: 1
});
var getU8Codec = () => combineCodec(getU8Encoder(), getU8Decoder());
export { Endian, assertNumberIsBetweenForCodec, getF32Codec, getF32Decoder, getF32Encoder, getF64Codec, getF64Decoder, getF64Encoder, getI128Codec, getI128Decoder, getI128Encoder, getI16Codec, getI16Decoder, getI16Encoder, getI32Codec, getI32Decoder, getI32Encoder, getI64Codec, getI64Decoder, getI64Encoder, getI8Codec, getI8Decoder, getI8Encoder, getShortU16Codec, getShortU16Decoder, getShortU16Encoder, getU128Codec, getU128Decoder, getU128Encoder, getU16Codec, getU16Decoder, getU16Encoder, getU32Codec, getU32Decoder, getU32Encoder, getU64Codec, getU64Decoder, getU64Encoder, getU8Codec, getU8Decoder, getU8Encoder };
//# sourceMappingURL=index.native.mjs.map
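The `shortU16` encoder/decoder in the bundle above implements a little-endian base-128 varint: each byte carries 7 bits of the value, and the high bit of a byte signals that another byte follows, so values 0–127 take one byte, 128–16383 two, and 16384–65535 three. A minimal standalone sketch of the same scheme (function names here are illustrative, not part of the package's API):

```javascript
// Standalone sketch of the compact-u16 ("shortU16") varint scheme used above:
// each byte stores 7 bits of the value, low bits first; the high bit marks
// that a continuation byte follows.
function encodeShortU16(value) {
  const bytes = [0];
  for (let ii = 0; ; ii += 1) {
    const aligned = value >> (ii * 7);
    if (aligned === 0) break;
    bytes[ii] = aligned & 127;
    if (ii > 0) bytes[ii - 1] |= 128; // mark the previous byte as "continued"
  }
  return Uint8Array.from(bytes);
}

function decodeShortU16(bytes, offset = 0) {
  let value = 0;
  let byteCount = 0;
  while (true) {
    const current = bytes[offset + byteCount];
    value |= (current & 127) << (byteCount * 7);
    byteCount += 1;
    if ((current & 128) === 0) break; // high bit clear: last byte
  }
  return [value, offset + byteCount];
}

console.log(encodeShortU16(300)); // Uint8Array [ 172, 2 ]
console.log(decodeShortU16(Uint8Array.from([172, 2]))[0]); // 300
```

For example, 300 is `0b100101100`: the low 7 bits (44) go in the first byte with the continuation bit set (44 | 128 = 172), and the remaining bits (2) in the second byte.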

File diff suppressed because one or more lines are too long

326
node_modules/@solana/codecs-numbers/dist/index.node.cjs generated vendored Normal file

@@ -0,0 +1,326 @@
'use strict';
var errors = require('@solana/errors');
var codecsCore = require('@solana/codecs-core');
// src/assertions.ts
function assertNumberIsBetweenForCodec(codecDescription, min, max, value) {
if (value < min || value > max) {
throw new errors.SolanaError(errors.SOLANA_ERROR__CODECS__NUMBER_OUT_OF_RANGE, {
codecDescription,
max,
min,
value
});
}
}
// src/common.ts
var Endian = /* @__PURE__ */ ((Endian2) => {
Endian2[Endian2["Little"] = 0] = "Little";
Endian2[Endian2["Big"] = 1] = "Big";
return Endian2;
})(Endian || {});
function isLittleEndian(config) {
return config?.endian === 1 /* Big */ ? false : true;
}
function numberEncoderFactory(input) {
return codecsCore.createEncoder({
fixedSize: input.size,
write(value, bytes, offset) {
if (input.range) {
assertNumberIsBetweenForCodec(input.name, input.range[0], input.range[1], value);
}
const arrayBuffer = new ArrayBuffer(input.size);
input.set(new DataView(arrayBuffer), value, isLittleEndian(input.config));
bytes.set(new Uint8Array(arrayBuffer), offset);
return offset + input.size;
}
});
}
function numberDecoderFactory(input) {
return codecsCore.createDecoder({
fixedSize: input.size,
read(bytes, offset = 0) {
codecsCore.assertByteArrayIsNotEmptyForCodec(input.name, bytes, offset);
codecsCore.assertByteArrayHasEnoughBytesForCodec(input.name, input.size, bytes, offset);
const view = new DataView(codecsCore.toArrayBuffer(bytes, offset, input.size));
return [input.get(view, isLittleEndian(input.config)), offset + input.size];
}
});
}
// src/f32.ts
var getF32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f32",
set: (view, value, le) => view.setFloat32(0, Number(value), le),
size: 4
});
var getF32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat32(0, le),
name: "f32",
size: 4
});
var getF32Codec = (config = {}) => codecsCore.combineCodec(getF32Encoder(config), getF32Decoder(config));
var getF64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f64",
set: (view, value, le) => view.setFloat64(0, Number(value), le),
size: 8
});
var getF64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat64(0, le),
name: "f64",
size: 8
});
var getF64Codec = (config = {}) => codecsCore.combineCodec(getF64Encoder(config), getF64Decoder(config));
var getI128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i128",
range: [-BigInt("0x7fffffffffffffffffffffffffffffff") - 1n, BigInt("0x7fffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigInt64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getI128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigInt64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "i128",
size: 16
});
var getI128Codec = (config = {}) => codecsCore.combineCodec(getI128Encoder(config), getI128Decoder(config));
var getI16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i16",
range: [-Number("0x7fff") - 1, Number("0x7fff")],
set: (view, value, le) => view.setInt16(0, Number(value), le),
size: 2
});
var getI16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt16(0, le),
name: "i16",
size: 2
});
var getI16Codec = (config = {}) => codecsCore.combineCodec(getI16Encoder(config), getI16Decoder(config));
var getI32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i32",
range: [-Number("0x7fffffff") - 1, Number("0x7fffffff")],
set: (view, value, le) => view.setInt32(0, Number(value), le),
size: 4
});
var getI32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt32(0, le),
name: "i32",
size: 4
});
var getI32Codec = (config = {}) => codecsCore.combineCodec(getI32Encoder(config), getI32Decoder(config));
var getI64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i64",
range: [-BigInt("0x7fffffffffffffff") - 1n, BigInt("0x7fffffffffffffff")],
set: (view, value, le) => view.setBigInt64(0, BigInt(value), le),
size: 8
});
var getI64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigInt64(0, le),
name: "i64",
size: 8
});
var getI64Codec = (config = {}) => codecsCore.combineCodec(getI64Encoder(config), getI64Decoder(config));
var getI8Encoder = () => numberEncoderFactory({
name: "i8",
range: [-Number("0x7f") - 1, Number("0x7f")],
set: (view, value) => view.setInt8(0, Number(value)),
size: 1
});
var getI8Decoder = () => numberDecoderFactory({
get: (view) => view.getInt8(0),
name: "i8",
size: 1
});
var getI8Codec = () => codecsCore.combineCodec(getI8Encoder(), getI8Decoder());
var getShortU16Encoder = () => codecsCore.createEncoder({
getSizeFromValue: (value) => {
if (value <= 127) return 1;
if (value <= 16383) return 2;
return 3;
},
maxSize: 3,
write: (value, bytes, offset) => {
assertNumberIsBetweenForCodec("shortU16", 0, 65535, value);
const shortU16Bytes = [0];
for (let ii = 0; ; ii += 1) {
const alignedValue = Number(value) >> ii * 7;
if (alignedValue === 0) {
break;
}
const nextSevenBits = 127 & alignedValue;
shortU16Bytes[ii] = nextSevenBits;
if (ii > 0) {
shortU16Bytes[ii - 1] |= 128;
}
}
bytes.set(shortU16Bytes, offset);
return offset + shortU16Bytes.length;
}
});
var getShortU16Decoder = () => codecsCore.createDecoder({
maxSize: 3,
read: (bytes, offset) => {
let value = 0;
let byteCount = 0;
while (++byteCount) {
const byteIndex = byteCount - 1;
const currentByte = bytes[offset + byteIndex];
const nextSevenBits = 127 & currentByte;
value |= nextSevenBits << byteIndex * 7;
if ((currentByte & 128) === 0) {
break;
}
}
return [value, offset + byteCount];
}
});
var getShortU16Codec = () => codecsCore.combineCodec(getShortU16Encoder(), getShortU16Decoder());
var getU128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u128",
range: [0n, BigInt("0xffffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigUint64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getU128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigUint64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "u128",
size: 16
});
var getU128Codec = (config = {}) => codecsCore.combineCodec(getU128Encoder(config), getU128Decoder(config));
var getU16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u16",
range: [0, Number("0xffff")],
set: (view, value, le) => view.setUint16(0, Number(value), le),
size: 2
});
var getU16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint16(0, le),
name: "u16",
size: 2
});
var getU16Codec = (config = {}) => codecsCore.combineCodec(getU16Encoder(config), getU16Decoder(config));
var getU32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u32",
range: [0, Number("0xffffffff")],
set: (view, value, le) => view.setUint32(0, Number(value), le),
size: 4
});
var getU32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint32(0, le),
name: "u32",
size: 4
});
var getU32Codec = (config = {}) => codecsCore.combineCodec(getU32Encoder(config), getU32Decoder(config));
var getU64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u64",
range: [0n, BigInt("0xffffffffffffffff")],
set: (view, value, le) => view.setBigUint64(0, BigInt(value), le),
size: 8
});
var getU64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigUint64(0, le),
name: "u64",
size: 8
});
var getU64Codec = (config = {}) => codecsCore.combineCodec(getU64Encoder(config), getU64Decoder(config));
var getU8Encoder = () => numberEncoderFactory({
name: "u8",
range: [0, Number("0xff")],
set: (view, value) => view.setUint8(0, Number(value)),
size: 1
});
var getU8Decoder = () => numberDecoderFactory({
get: (view) => view.getUint8(0),
name: "u8",
size: 1
});
var getU8Codec = () => codecsCore.combineCodec(getU8Encoder(), getU8Decoder());
exports.Endian = Endian;
exports.assertNumberIsBetweenForCodec = assertNumberIsBetweenForCodec;
exports.getF32Codec = getF32Codec;
exports.getF32Decoder = getF32Decoder;
exports.getF32Encoder = getF32Encoder;
exports.getF64Codec = getF64Codec;
exports.getF64Decoder = getF64Decoder;
exports.getF64Encoder = getF64Encoder;
exports.getI128Codec = getI128Codec;
exports.getI128Decoder = getI128Decoder;
exports.getI128Encoder = getI128Encoder;
exports.getI16Codec = getI16Codec;
exports.getI16Decoder = getI16Decoder;
exports.getI16Encoder = getI16Encoder;
exports.getI32Codec = getI32Codec;
exports.getI32Decoder = getI32Decoder;
exports.getI32Encoder = getI32Encoder;
exports.getI64Codec = getI64Codec;
exports.getI64Decoder = getI64Decoder;
exports.getI64Encoder = getI64Encoder;
exports.getI8Codec = getI8Codec;
exports.getI8Decoder = getI8Decoder;
exports.getI8Encoder = getI8Encoder;
exports.getShortU16Codec = getShortU16Codec;
exports.getShortU16Decoder = getShortU16Decoder;
exports.getShortU16Encoder = getShortU16Encoder;
exports.getU128Codec = getU128Codec;
exports.getU128Decoder = getU128Decoder;
exports.getU128Encoder = getU128Encoder;
exports.getU16Codec = getU16Codec;
exports.getU16Decoder = getU16Decoder;
exports.getU16Encoder = getU16Encoder;
exports.getU32Codec = getU32Codec;
exports.getU32Decoder = getU32Decoder;
exports.getU32Encoder = getU32Encoder;
exports.getU64Codec = getU64Codec;
exports.getU64Decoder = getU64Decoder;
exports.getU64Encoder = getU64Encoder;
exports.getU8Codec = getU8Codec;
exports.getU8Decoder = getU8Decoder;
exports.getU8Encoder = getU8Encoder;
//# sourceMappingURL=index.node.cjs.map

File diff suppressed because one or more lines are too long

284
node_modules/@solana/codecs-numbers/dist/index.node.mjs generated vendored Normal file

@@ -0,0 +1,284 @@
import { SolanaError, SOLANA_ERROR__CODECS__NUMBER_OUT_OF_RANGE } from '@solana/errors';
import { combineCodec, createDecoder, createEncoder, assertByteArrayIsNotEmptyForCodec, assertByteArrayHasEnoughBytesForCodec, toArrayBuffer } from '@solana/codecs-core';
// src/assertions.ts
function assertNumberIsBetweenForCodec(codecDescription, min, max, value) {
if (value < min || value > max) {
throw new SolanaError(SOLANA_ERROR__CODECS__NUMBER_OUT_OF_RANGE, {
codecDescription,
max,
min,
value
});
}
}
// src/common.ts
var Endian = /* @__PURE__ */ ((Endian2) => {
Endian2[Endian2["Little"] = 0] = "Little";
Endian2[Endian2["Big"] = 1] = "Big";
return Endian2;
})(Endian || {});
function isLittleEndian(config) {
return config?.endian === 1 /* Big */ ? false : true;
}
function numberEncoderFactory(input) {
return createEncoder({
fixedSize: input.size,
write(value, bytes, offset) {
if (input.range) {
assertNumberIsBetweenForCodec(input.name, input.range[0], input.range[1], value);
}
const arrayBuffer = new ArrayBuffer(input.size);
input.set(new DataView(arrayBuffer), value, isLittleEndian(input.config));
bytes.set(new Uint8Array(arrayBuffer), offset);
return offset + input.size;
}
});
}
function numberDecoderFactory(input) {
return createDecoder({
fixedSize: input.size,
read(bytes, offset = 0) {
assertByteArrayIsNotEmptyForCodec(input.name, bytes, offset);
assertByteArrayHasEnoughBytesForCodec(input.name, input.size, bytes, offset);
const view = new DataView(toArrayBuffer(bytes, offset, input.size));
return [input.get(view, isLittleEndian(input.config)), offset + input.size];
}
});
}
// src/f32.ts
var getF32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f32",
set: (view, value, le) => view.setFloat32(0, Number(value), le),
size: 4
});
var getF32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat32(0, le),
name: "f32",
size: 4
});
var getF32Codec = (config = {}) => combineCodec(getF32Encoder(config), getF32Decoder(config));
var getF64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "f64",
set: (view, value, le) => view.setFloat64(0, Number(value), le),
size: 8
});
var getF64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getFloat64(0, le),
name: "f64",
size: 8
});
var getF64Codec = (config = {}) => combineCodec(getF64Encoder(config), getF64Decoder(config));
var getI128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i128",
range: [-BigInt("0x7fffffffffffffffffffffffffffffff") - 1n, BigInt("0x7fffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigInt64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getI128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigInt64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "i128",
size: 16
});
var getI128Codec = (config = {}) => combineCodec(getI128Encoder(config), getI128Decoder(config));
var getI16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i16",
range: [-Number("0x7fff") - 1, Number("0x7fff")],
set: (view, value, le) => view.setInt16(0, Number(value), le),
size: 2
});
var getI16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt16(0, le),
name: "i16",
size: 2
});
var getI16Codec = (config = {}) => combineCodec(getI16Encoder(config), getI16Decoder(config));
var getI32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i32",
range: [-Number("0x7fffffff") - 1, Number("0x7fffffff")],
set: (view, value, le) => view.setInt32(0, Number(value), le),
size: 4
});
var getI32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getInt32(0, le),
name: "i32",
size: 4
});
var getI32Codec = (config = {}) => combineCodec(getI32Encoder(config), getI32Decoder(config));
var getI64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "i64",
range: [-BigInt("0x7fffffffffffffff") - 1n, BigInt("0x7fffffffffffffff")],
set: (view, value, le) => view.setBigInt64(0, BigInt(value), le),
size: 8
});
var getI64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigInt64(0, le),
name: "i64",
size: 8
});
var getI64Codec = (config = {}) => combineCodec(getI64Encoder(config), getI64Decoder(config));
var getI8Encoder = () => numberEncoderFactory({
name: "i8",
range: [-Number("0x7f") - 1, Number("0x7f")],
set: (view, value) => view.setInt8(0, Number(value)),
size: 1
});
var getI8Decoder = () => numberDecoderFactory({
get: (view) => view.getInt8(0),
name: "i8",
size: 1
});
var getI8Codec = () => combineCodec(getI8Encoder(), getI8Decoder());
var getShortU16Encoder = () => createEncoder({
getSizeFromValue: (value) => {
if (value <= 127) return 1;
if (value <= 16383) return 2;
return 3;
},
maxSize: 3,
write: (value, bytes, offset) => {
assertNumberIsBetweenForCodec("shortU16", 0, 65535, value);
const shortU16Bytes = [0];
for (let ii = 0; ; ii += 1) {
const alignedValue = Number(value) >> ii * 7;
if (alignedValue === 0) {
break;
}
const nextSevenBits = 127 & alignedValue;
shortU16Bytes[ii] = nextSevenBits;
if (ii > 0) {
shortU16Bytes[ii - 1] |= 128;
}
}
bytes.set(shortU16Bytes, offset);
return offset + shortU16Bytes.length;
}
});
var getShortU16Decoder = () => createDecoder({
maxSize: 3,
read: (bytes, offset) => {
let value = 0;
let byteCount = 0;
while (++byteCount) {
const byteIndex = byteCount - 1;
const currentByte = bytes[offset + byteIndex];
const nextSevenBits = 127 & currentByte;
value |= nextSevenBits << byteIndex * 7;
if ((currentByte & 128) === 0) {
break;
}
}
return [value, offset + byteCount];
}
});
var getShortU16Codec = () => combineCodec(getShortU16Encoder(), getShortU16Decoder());
var getU128Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u128",
range: [0n, BigInt("0xffffffffffffffffffffffffffffffff")],
set: (view, value, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const rightMask = 0xffffffffffffffffn;
view.setBigUint64(leftOffset, BigInt(value) >> 64n, le);
view.setBigUint64(rightOffset, BigInt(value) & rightMask, le);
},
size: 16
});
var getU128Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => {
const leftOffset = le ? 8 : 0;
const rightOffset = le ? 0 : 8;
const left = view.getBigUint64(leftOffset, le);
const right = view.getBigUint64(rightOffset, le);
return (left << 64n) + right;
},
name: "u128",
size: 16
});
var getU128Codec = (config = {}) => combineCodec(getU128Encoder(config), getU128Decoder(config));
var getU16Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u16",
range: [0, Number("0xffff")],
set: (view, value, le) => view.setUint16(0, Number(value), le),
size: 2
});
var getU16Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint16(0, le),
name: "u16",
size: 2
});
var getU16Codec = (config = {}) => combineCodec(getU16Encoder(config), getU16Decoder(config));
var getU32Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u32",
range: [0, Number("0xffffffff")],
set: (view, value, le) => view.setUint32(0, Number(value), le),
size: 4
});
var getU32Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getUint32(0, le),
name: "u32",
size: 4
});
var getU32Codec = (config = {}) => combineCodec(getU32Encoder(config), getU32Decoder(config));
var getU64Encoder = (config = {}) => numberEncoderFactory({
config,
name: "u64",
range: [0n, BigInt("0xffffffffffffffff")],
set: (view, value, le) => view.setBigUint64(0, BigInt(value), le),
size: 8
});
var getU64Decoder = (config = {}) => numberDecoderFactory({
config,
get: (view, le) => view.getBigUint64(0, le),
name: "u64",
size: 8
});
var getU64Codec = (config = {}) => combineCodec(getU64Encoder(config), getU64Decoder(config));
var getU8Encoder = () => numberEncoderFactory({
name: "u8",
range: [0, Number("0xff")],
set: (view, value) => view.setUint8(0, Number(value)),
size: 1
});
var getU8Decoder = () => numberDecoderFactory({
get: (view) => view.getUint8(0),
name: "u8",
size: 1
});
var getU8Codec = () => combineCodec(getU8Encoder(), getU8Decoder());
export { Endian, assertNumberIsBetweenForCodec, getF32Codec, getF32Decoder, getF32Encoder, getF64Codec, getF64Decoder, getF64Encoder, getI128Codec, getI128Decoder, getI128Encoder, getI16Codec, getI16Decoder, getI16Encoder, getI32Codec, getI32Decoder, getI32Encoder, getI64Codec, getI64Decoder, getI64Encoder, getI8Codec, getI8Decoder, getI8Encoder, getShortU16Codec, getShortU16Decoder, getShortU16Encoder, getU128Codec, getU128Decoder, getU128Encoder, getU16Codec, getU16Decoder, getU16Encoder, getU32Codec, getU32Decoder, getU32Encoder, getU64Codec, getU64Decoder, getU64Encoder, getU8Codec, getU8Decoder, getU8Encoder };
//# sourceMappingURL=index.node.mjs.map

File diff suppressed because one or more lines are too long


@@ -0,0 +1,27 @@
/**
* Ensures that a given number falls within a specified range.
*
* If the number is outside the allowed range, an error is thrown.
* This function is primarily used to validate values before encoding them in a codec.
*
* @param codecDescription - A string describing the codec that is performing the validation.
* @param min - The minimum allowed value (inclusive).
* @param max - The maximum allowed value (inclusive).
* @param value - The number to validate.
*
* @throws {@link SolanaError} if the value is out of range.
*
* @example
* Validating a number within range.
* ```ts
* assertNumberIsBetweenForCodec('u8', 0, 255, 42); // Passes
* ```
*
* @example
* Throwing an error for an out-of-range value.
* ```ts
* assertNumberIsBetweenForCodec('u8', 0, 255, 300); // Throws
* ```
*/
export declare function assertNumberIsBetweenForCodec(codecDescription: string, min: bigint | number, max: bigint | number, value: bigint | number): void;
//# sourceMappingURL=assertions.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"assertions.d.ts","sourceRoot":"","sources":["../../src/assertions.ts"],"names":[],"mappings":"AAEA;;;;;;;;;;;;;;;;;;;;;;;;GAwBG;AACH,wBAAgB,6BAA6B,CACzC,gBAAgB,EAAE,MAAM,EACxB,GAAG,EAAE,MAAM,GAAG,MAAM,EACpB,GAAG,EAAE,MAAM,GAAG,MAAM,EACpB,KAAK,EAAE,MAAM,GAAG,MAAM,QAUzB"}


@@ -0,0 +1,84 @@
import { Codec, Decoder, Encoder, FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from '@solana/codecs-core';
/**
* Represents an encoder for numbers and bigints.
*
* This type allows encoding values that are either `number` or `bigint`.
* Depending on the specific implementation, the encoded output may have a fixed or variable size.
*
* @see {@link FixedSizeNumberEncoder}
*/
export type NumberEncoder = Encoder<bigint | number>;
/**
* Represents a fixed-size encoder for numbers and bigints.
*
* This encoder serializes values using an exact number of bytes, defined by `TSize`.
*
* @typeParam TSize - The number of bytes used for encoding.
*
* @see {@link NumberEncoder}
*/
export type FixedSizeNumberEncoder<TSize extends number = number> = FixedSizeEncoder<bigint | number, TSize>;
/**
* Represents a decoder for numbers and bigints.
*
* This type supports decoding values as either `number` or `bigint`, depending on the implementation.
*
* @see {@link FixedSizeNumberDecoder}
*/
export type NumberDecoder = Decoder<bigint> | Decoder<number>;
/**
* Represents a fixed-size decoder for numbers and bigints.
*
* This decoder reads a fixed number of bytes (`TSize`) and converts them into a `number` or `bigint`.
*
* @typeParam TSize - The number of bytes expected for decoding.
*
* @see {@link NumberDecoder}
*/
export type FixedSizeNumberDecoder<TSize extends number = number> = FixedSizeDecoder<bigint, TSize> | FixedSizeDecoder<number, TSize>;
/**
* Represents a codec for encoding and decoding numbers and bigints.
*
* - The encoded value can be either a `number` or a `bigint`.
* - The decoded value will always be either a `number` or `bigint`, depending on the implementation.
*
* @see {@link FixedSizeNumberCodec}
*/
export type NumberCodec = Codec<bigint | number, bigint> | Codec<bigint | number, number>;
/**
* Represents a fixed-size codec for encoding and decoding numbers and bigints.
*
* This codec uses a specific number of bytes (`TSize`) for serialization.
* The encoded value can be either a `number` or `bigint`, but the decoded value will always be a `number` or `bigint`,
* depending on the implementation.
*
* @typeParam TSize - The number of bytes used for encoding and decoding.
*
* @see {@link NumberCodec}
*/
export type FixedSizeNumberCodec<TSize extends number = number> = FixedSizeCodec<bigint | number, bigint, TSize> | FixedSizeCodec<bigint | number, number, TSize>;
/**
* Configuration options for number codecs that use more than one byte.
*
* This configuration applies to all number codecs except `u8` and `i8`,
* allowing the user to specify the endianness of serialization.
*/
export type NumberCodecConfig = {
/**
* Specifies whether numbers should be encoded in little-endian or big-endian format.
*
* @defaultValue `Endian.Little`
*/
endian?: Endian;
};
/**
* Defines the byte order used for number serialization.
*
* - `Little`: The least significant byte is stored first.
* - `Big`: The most significant byte is stored first.
*/
export declare enum Endian {
Little = 0,
Big = 1
}
//# sourceMappingURL=common.d.ts.map
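The `Endian` option documented in `common.d.ts` above maps directly onto the boolean byte-order flag of `DataView`; the effect is easy to see with plain `DataView` calls, independent of the library:

```javascript
// Illustrates the byte-order difference the `endian` config controls:
// the same u16 value 0x1234 serializes to different byte sequences.
const buf = new ArrayBuffer(2);
const view = new DataView(buf);

view.setUint16(0, 0x1234, true); // little-endian (the codecs' default)
const little = [...new Uint8Array(buf)]; // [0x34, 0x12]

view.setUint16(0, 0x1234, false); // big-endian (Endian.Big)
const big = [...new Uint8Array(buf)]; // [0x12, 0x34]

console.log(little, big);
```

This is also why the config does not apply to `u8`/`i8`: a single byte has no order to flip.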


@@ -0,0 +1 @@
{"version":3,"file":"common.d.ts","sourceRoot":"","sources":["../../src/common.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,KAAK,EAAE,OAAO,EAAE,OAAO,EAAE,cAAc,EAAE,gBAAgB,EAAE,gBAAgB,EAAE,MAAM,qBAAqB,CAAC;AAClH;;;;;;;GAOG;AACH,MAAM,MAAM,aAAa,GAAG,OAAO,CAAC,MAAM,GAAG,MAAM,CAAC,CAAC;AAErD;;;;;;;;GAQG;AACH,MAAM,MAAM,sBAAsB,CAAC,KAAK,SAAS,MAAM,GAAG,MAAM,IAAI,gBAAgB,CAAC,MAAM,GAAG,MAAM,EAAE,KAAK,CAAC,CAAC;AAE7G;;;;;;GAMG;AACH,MAAM,MAAM,aAAa,GAAG,OAAO,CAAC,MAAM,CAAC,GAAG,OAAO,CAAC,MAAM,CAAC,CAAC;AAE9D;;;;;;;;GAQG;AACH,MAAM,MAAM,sBAAsB,CAAC,KAAK,SAAS,MAAM,GAAG,MAAM,IAC1D,gBAAgB,CAAC,MAAM,EAAE,KAAK,CAAC,GAC/B,gBAAgB,CAAC,MAAM,EAAE,KAAK,CAAC,CAAC;AAEtC;;;;;;;GAOG;AACH,MAAM,MAAM,WAAW,GAAG,KAAK,CAAC,MAAM,GAAG,MAAM,EAAE,MAAM,CAAC,GAAG,KAAK,CAAC,MAAM,GAAG,MAAM,EAAE,MAAM,CAAC,CAAC;AAE1F;;;;;;;;;;GAUG;AACH,MAAM,MAAM,oBAAoB,CAAC,KAAK,SAAS,MAAM,GAAG,MAAM,IACxD,cAAc,CAAC,MAAM,GAAG,MAAM,EAAE,MAAM,EAAE,KAAK,CAAC,GAC9C,cAAc,CAAC,MAAM,GAAG,MAAM,EAAE,MAAM,EAAE,KAAK,CAAC,CAAC;AAErD;;;;;GAKG;AACH,MAAM,MAAM,iBAAiB,GAAG;IAC5B;;;;OAIG;IACH,MAAM,CAAC,EAAE,MAAM,CAAC;CACnB,CAAC;AAEF;;;;;GAKG;AACH,oBAAY,MAAM;IACd,MAAM,IAAA;IACN,GAAG,IAAA;CACN"}


@@ -0,0 +1,87 @@
import { FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from '@solana/codecs-core';
import { NumberCodecConfig } from './common';
/**
* Returns an encoder for 32-bit floating-point numbers (`f32`).
*
* This encoder serializes `f32` values using 4 bytes.
* Floating-point values may lose precision when encoded.
*
* For more details, see {@link getF32Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeEncoder<number, 4>` for encoding `f32` values.
*
* @example
* Encoding an `f32` value.
* ```ts
* const encoder = getF32Encoder();
* const bytes = encoder.encode(-1.5); // 0x0000c0bf
* ```
*
* @see {@link getF32Codec}
*/
export declare const getF32Encoder: (config?: NumberCodecConfig) => FixedSizeEncoder<bigint | number, 4>;
/**
* Returns a decoder for 32-bit floating-point numbers (`f32`).
*
* This decoder deserializes `f32` values from 4 bytes.
* Some precision may be lost during decoding due to floating-point representation.
*
* For more details, see {@link getF32Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeDecoder<number, 4>` for decoding `f32` values.
*
* @example
* Decoding an `f32` value.
* ```ts
* const decoder = getF32Decoder();
* const value = decoder.decode(new Uint8Array([0x00, 0x00, 0xc0, 0xbf])); // -1.5
* ```
*
* @see {@link getF32Codec}
*/
export declare const getF32Decoder: (config?: NumberCodecConfig) => FixedSizeDecoder<number, 4>;
/**
* Returns a codec for encoding and decoding 32-bit floating-point numbers (`f32`).
*
* This codec serializes `f32` values using 4 bytes.
* Due to the IEEE 754 floating-point representation, some precision loss may occur.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeCodec<number, number, 4>` for encoding and decoding `f32` values.
*
* @example
* Encoding and decoding an `f32` value.
* ```ts
* const codec = getF32Codec();
* const bytes = codec.encode(-1.5); // 0x0000c0bf
* const value = codec.decode(bytes); // -1.5
* ```
*
* @example
* Using big-endian encoding.
* ```ts
* const codec = getF32Codec({ endian: Endian.Big });
* const bytes = codec.encode(-1.5); // 0xbfc00000
* ```
*
* @remarks
* `f32` values follow the IEEE 754 single-precision floating-point standard.
* Precision loss may occur for certain values.
*
* - If you need higher precision, consider using {@link getF64Codec}.
* - If you need integer values, consider using {@link getI32Codec} or {@link getU32Codec}.
*
* Separate {@link getF32Encoder} and {@link getF32Decoder} functions are available.
*
* ```ts
* const bytes = getF32Encoder().encode(-1.5);
* const value = getF32Decoder().decode(bytes);
* ```
*
* @see {@link getF32Encoder}
* @see {@link getF32Decoder}
*/
export declare const getF32Codec: (config?: NumberCodecConfig) => FixedSizeCodec<bigint | number, number, 4>;
//# sourceMappingURL=f32.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"f32.d.ts","sourceRoot":"","sources":["../../src/f32.ts"],"names":[],"mappings":"AAAA,OAAO,EAAgB,cAAc,EAAE,gBAAgB,EAAE,gBAAgB,EAAE,MAAM,qBAAqB,CAAC;AAEvG,OAAO,EAAE,iBAAiB,EAAE,MAAM,UAAU,CAAC;AAG7C;;;;;;;;;;;;;;;;;;;GAmBG;AACH,eAAO,MAAM,aAAa,GAAI,SAAQ,iBAAsB,KAAG,gBAAgB,CAAC,MAAM,GAAG,MAAM,EAAE,CAAC,CAM5F,CAAC;AAEP;;;;;;;;;;;;;;;;;;;GAmBG;AACH,eAAO,MAAM,aAAa,GAAI,SAAQ,iBAAsB,KAAG,gBAAgB,CAAC,MAAM,EAAE,CAAC,CAMnF,CAAC;AAEP;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAwCG;AACH,eAAO,MAAM,WAAW,GAAI,SAAQ,iBAAsB,KAAG,cAAc,CAAC,MAAM,GAAG,MAAM,EAAE,MAAM,EAAE,CAAC,CACxC,CAAC"}


@@ -0,0 +1,87 @@
import { FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from '@solana/codecs-core';
import { NumberCodecConfig } from './common';
/**
* Returns an encoder for 64-bit floating-point numbers (`f64`).
*
* This encoder serializes `f64` values using 8 bytes.
* Floating-point values may lose precision when encoded.
*
* For more details, see {@link getF64Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeEncoder<number, 8>` for encoding `f64` values.
*
* @example
* Encoding an `f64` value.
* ```ts
* const encoder = getF64Encoder();
* const bytes = encoder.encode(-1.5); // 0x000000000000f8bf
* ```
*
* @see {@link getF64Codec}
*/
export declare const getF64Encoder: (config?: NumberCodecConfig) => FixedSizeEncoder<bigint | number, 8>;
/**
* Returns a decoder for 64-bit floating-point numbers (`f64`).
*
* This decoder deserializes `f64` values from 8 bytes.
* Some precision may be lost during decoding due to floating-point representation.
*
* For more details, see {@link getF64Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeDecoder<number, 8>` for decoding `f64` values.
*
* @example
* Decoding an `f64` value.
* ```ts
* const decoder = getF64Decoder();
* const value = decoder.decode(new Uint8Array([0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf8, 0xbf])); // -1.5
* ```
*
* @see {@link getF64Codec}
*/
export declare const getF64Decoder: (config?: NumberCodecConfig) => FixedSizeDecoder<number, 8>;
/**
* Returns a codec for encoding and decoding 64-bit floating-point numbers (`f64`).
*
* This codec serializes `f64` values using 8 bytes.
* Due to the IEEE 754 floating-point representation, some precision loss may occur.
*
* @param config - Optional configuration to specify endianness (little by default).
 * @returns A `FixedSizeCodec<number | bigint, number, 8>` for encoding and decoding `f64` values.
*
* @example
* Encoding and decoding an `f64` value.
* ```ts
* const codec = getF64Codec();
* const bytes = codec.encode(-1.5); // 0x000000000000f8bf
* const value = codec.decode(bytes); // -1.5
* ```
*
* @example
* Using big-endian encoding.
* ```ts
* const codec = getF64Codec({ endian: Endian.Big });
* const bytes = codec.encode(-1.5); // 0xbff8000000000000
* ```
*
* @remarks
* `f64` values follow the IEEE 754 double-precision floating-point standard.
* Precision loss may still occur but is significantly lower than `f32`.
*
* - If you need smaller floating-point values, consider using {@link getF32Codec}.
* - If you need integer values, consider using {@link getI64Codec} or {@link getU64Codec}.
*
* Separate {@link getF64Encoder} and {@link getF64Decoder} functions are available.
*
* ```ts
* const bytes = getF64Encoder().encode(-1.5);
* const value = getF64Decoder().decode(bytes);
* ```
*
* @see {@link getF64Encoder}
* @see {@link getF64Decoder}
*/
export declare const getF64Codec: (config?: NumberCodecConfig) => FixedSizeCodec<bigint | number, number, 8>;
//# sourceMappingURL=f64.d.ts.map
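The `f64` byte layout documented above can be reproduced with plain `DataView` operations. The following is a minimal sketch for illustration only, not the library's actual implementation; the helper names `encodeF64`/`decodeF64` are hypothetical:

```typescript
// Hypothetical helpers illustrating the IEEE 754 double-precision layout
// described above; not the actual @solana/codecs-numbers implementation.
const encodeF64 = (value: number, littleEndian = true): Uint8Array => {
    const bytes = new Uint8Array(8);
    new DataView(bytes.buffer).setFloat64(0, value, littleEndian);
    return bytes;
};

const decodeF64 = (bytes: Uint8Array, littleEndian = true): number =>
    new DataView(bytes.buffer, bytes.byteOffset, 8).getFloat64(0, littleEndian);

// -1.5 round-trips through the 0x000000000000f8bf layout shown above.
const f64Bytes = encodeF64(-1.5); // Uint8Array [0, 0, 0, 0, 0, 0, 0xf8, 0xbf]
const f64Value = decodeF64(f64Bytes); // -1.5
```

Passing `littleEndian = false` yields the big-endian `0xbff8000000000000` layout from the `Endian.Big` example.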

import { FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from '@solana/codecs-core';
import { NumberCodecConfig } from './common';
/**
* Returns an encoder for 128-bit signed integers (`i128`).
*
* This encoder serializes `i128` values using 16 bytes.
* Values can be provided as either `number` or `bigint`.
*
* For more details, see {@link getI128Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeEncoder<number | bigint, 16>` for encoding `i128` values.
*
* @example
* Encoding an `i128` value.
* ```ts
* const encoder = getI128Encoder();
* const bytes = encoder.encode(-42n); // 0xd6ffffffffffffffffffffffffffffff
* ```
*
* @see {@link getI128Codec}
*/
export declare const getI128Encoder: (config?: NumberCodecConfig) => FixedSizeEncoder<bigint | number, 16>;
/**
* Returns a decoder for 128-bit signed integers (`i128`).
*
* This decoder deserializes `i128` values from 16 bytes.
* The decoded value is always a `bigint`.
*
* For more details, see {@link getI128Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeDecoder<bigint, 16>` for decoding `i128` values.
*
* @example
* Decoding an `i128` value.
* ```ts
* const decoder = getI128Decoder();
* const value = decoder.decode(new Uint8Array([
* 0xd6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
* 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
* ])); // -42n
* ```
*
* @see {@link getI128Codec}
*/
export declare const getI128Decoder: (config?: NumberCodecConfig) => FixedSizeDecoder<bigint, 16>;
/**
* Returns a codec for encoding and decoding 128-bit signed integers (`i128`).
*
* This codec serializes `i128` values using 16 bytes.
* Values can be provided as either `number` or `bigint`, but the decoded value is always a `bigint`.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeCodec<number | bigint, bigint, 16>` for encoding and decoding `i128` values.
*
* @example
* Encoding and decoding an `i128` value.
* ```ts
* const codec = getI128Codec();
* const bytes = codec.encode(-42n); // 0xd6ffffffffffffffffffffffffffffff
* const value = codec.decode(bytes); // -42n
* ```
*
* @example
* Using big-endian encoding.
* ```ts
* const codec = getI128Codec({ endian: Endian.Big });
 * const bytes = codec.encode(-42n); // 0xffffffffffffffffffffffffffffffd6
* ```
*
* @remarks
* This codec supports values between `-2^127` and `2^127 - 1`.
* Since JavaScript `number` cannot safely represent values beyond `2^53 - 1`, the decoded value is always a `bigint`.
*
* - If you need a smaller signed integer, consider using {@link getI64Codec} or {@link getI32Codec}.
* - If you need a larger signed integer, consider using a custom codec.
* - If you need unsigned integers, consider using {@link getU128Codec}.
*
* Separate {@link getI128Encoder} and {@link getI128Decoder} functions are available.
*
* ```ts
* const bytes = getI128Encoder().encode(-42);
* const value = getI128Decoder().decode(bytes);
* ```
*
* @see {@link getI128Encoder}
* @see {@link getI128Decoder}
*/
export declare const getI128Codec: (config?: NumberCodecConfig) => FixedSizeCodec<bigint | number, bigint, 16>;
//# sourceMappingURL=i128.d.ts.map
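The 16-byte two's-complement layout described above can be sketched with `BigInt.asUintN`/`BigInt.asIntN`. These little-endian helpers are an illustration only, not the library's implementation, and their names are hypothetical:

```typescript
// Hypothetical little-endian i128 helpers illustrating the two's-complement
// layout described above; not the actual @solana/codecs-numbers code.
const encodeI128LE = (value: bigint): Uint8Array => {
    const bytes = new Uint8Array(16);
    // Wrap negative values into their 128-bit two's-complement representation.
    let v = BigInt.asUintN(128, value);
    for (let i = 0; i < 16; i++) {
        bytes[i] = Number(v & 0xffn);
        v >>= 8n;
    }
    return bytes;
};

const decodeI128LE = (bytes: Uint8Array): bigint => {
    let v = 0n;
    for (let i = 15; i >= 0; i--) v = (v << 8n) | BigInt(bytes[i]);
    // Reinterpret the raw 128-bit value as signed.
    return BigInt.asIntN(128, v);
};

const i128Bytes = encodeI128LE(-42n); // 0xd6 followed by fifteen 0xff bytes
const i128Value = decodeI128LE(i128Bytes); // -42n
```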

import { FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from '@solana/codecs-core';
import { NumberCodecConfig } from './common';
/**
* Returns an encoder for 16-bit signed integers (`i16`).
*
* This encoder serializes `i16` values using 2 bytes.
* Values can be provided as either `number` or `bigint`.
*
* For more details, see {@link getI16Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeEncoder<number | bigint, 2>` for encoding `i16` values.
*
* @example
* Encoding an `i16` value.
* ```ts
* const encoder = getI16Encoder();
* const bytes = encoder.encode(-42); // 0xd6ff
* ```
*
* @see {@link getI16Codec}
*/
export declare const getI16Encoder: (config?: NumberCodecConfig) => FixedSizeEncoder<bigint | number, 2>;
/**
* Returns a decoder for 16-bit signed integers (`i16`).
*
* This decoder deserializes `i16` values from 2 bytes.
* The decoded value is always a `number`.
*
* For more details, see {@link getI16Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeDecoder<number, 2>` for decoding `i16` values.
*
* @example
* Decoding an `i16` value.
* ```ts
* const decoder = getI16Decoder();
* const value = decoder.decode(new Uint8Array([0xd6, 0xff])); // -42
* ```
*
* @see {@link getI16Codec}
*/
export declare const getI16Decoder: (config?: NumberCodecConfig) => FixedSizeDecoder<number, 2>;
/**
* Returns a codec for encoding and decoding 16-bit signed integers (`i16`).
*
* This codec serializes `i16` values using 2 bytes.
* Values can be provided as either `number` or `bigint`, but the decoded value is always a `number`.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeCodec<number | bigint, number, 2>` for encoding and decoding `i16` values.
*
* @example
* Encoding and decoding an `i16` value.
* ```ts
* const codec = getI16Codec();
* const bytes = codec.encode(-42); // 0xd6ff
* const value = codec.decode(bytes); // -42
* ```
*
* @example
* Using big-endian encoding.
* ```ts
* const codec = getI16Codec({ endian: Endian.Big });
* const bytes = codec.encode(-42); // 0xffd6
* ```
*
* @remarks
* This codec supports values between `-2^15` (`-32,768`) and `2^15 - 1` (`32,767`).
*
* - If you need a smaller signed integer, consider using {@link getI8Codec}.
* - If you need a larger signed integer, consider using {@link getI32Codec}.
* - If you need unsigned integers, consider using {@link getU16Codec}.
*
* Separate {@link getI16Encoder} and {@link getI16Decoder} functions are available.
*
* ```ts
* const bytes = getI16Encoder().encode(-42);
* const value = getI16Decoder().decode(bytes);
* ```
*
* @see {@link getI16Encoder}
* @see {@link getI16Decoder}
*/
export declare const getI16Codec: (config?: NumberCodecConfig) => FixedSizeCodec<bigint | number, number, 2>;
//# sourceMappingURL=i16.d.ts.map
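The 2-byte layout documented above maps directly onto `DataView.setInt16`/`getInt16`. A minimal sketch under that assumption (illustration only, hypothetical helper names, not the library's implementation):

```typescript
// Hypothetical helpers illustrating the i16 two's-complement layout
// described above; not the actual @solana/codecs-numbers implementation.
const encodeI16 = (value: number, littleEndian = true): Uint8Array => {
    const bytes = new Uint8Array(2);
    new DataView(bytes.buffer).setInt16(0, value, littleEndian);
    return bytes;
};

const decodeI16 = (bytes: Uint8Array, littleEndian = true): number =>
    new DataView(bytes.buffer, bytes.byteOffset, 2).getInt16(0, littleEndian);

// -42 round-trips through the 0xd6ff little-endian layout shown above.
const i16Bytes = encodeI16(-42); // Uint8Array [0xd6, 0xff]
const i16Value = decodeI16(i16Bytes); // -42
```

The same `littleEndian` flag flipped to `false` produces the big-endian `0xffd6` layout from the `Endian.Big` example.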

import { FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from '@solana/codecs-core';
import { NumberCodecConfig } from './common';
/**
* Returns an encoder for 32-bit signed integers (`i32`).
*
* This encoder serializes `i32` values using 4 bytes.
* Values can be provided as either `number` or `bigint`.
*
* For more details, see {@link getI32Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeEncoder<number | bigint, 4>` for encoding `i32` values.
*
* @example
* Encoding an `i32` value.
* ```ts
* const encoder = getI32Encoder();
* const bytes = encoder.encode(-42); // 0xd6ffffff
* ```
*
* @see {@link getI32Codec}
*/
export declare const getI32Encoder: (config?: NumberCodecConfig) => FixedSizeEncoder<bigint | number, 4>;
/**
* Returns a decoder for 32-bit signed integers (`i32`).
*
* This decoder deserializes `i32` values from 4 bytes.
* The decoded value is always a `number`.
*
* For more details, see {@link getI32Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeDecoder<number, 4>` for decoding `i32` values.
*
* @example
* Decoding an `i32` value.
* ```ts
* const decoder = getI32Decoder();
* const value = decoder.decode(new Uint8Array([0xd6, 0xff, 0xff, 0xff])); // -42
* ```
*
* @see {@link getI32Codec}
*/
export declare const getI32Decoder: (config?: NumberCodecConfig) => FixedSizeDecoder<number, 4>;
/**
* Returns a codec for encoding and decoding 32-bit signed integers (`i32`).
*
* This codec serializes `i32` values using 4 bytes.
* Values can be provided as either `number` or `bigint`, but the decoded value is always a `number`.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeCodec<number | bigint, number, 4>` for encoding and decoding `i32` values.
*
* @example
* Encoding and decoding an `i32` value.
* ```ts
* const codec = getI32Codec();
* const bytes = codec.encode(-42); // 0xd6ffffff
* const value = codec.decode(bytes); // -42
* ```
*
* @example
* Using big-endian encoding.
* ```ts
* const codec = getI32Codec({ endian: Endian.Big });
* const bytes = codec.encode(-42); // 0xffffffd6
* ```
*
* @remarks
* This codec supports values between `-2^31` (`-2,147,483,648`) and `2^31 - 1` (`2,147,483,647`).
*
* - If you need a smaller signed integer, consider using {@link getI16Codec} or {@link getI8Codec}.
* - If you need a larger signed integer, consider using {@link getI64Codec}.
* - If you need unsigned integers, consider using {@link getU32Codec}.
*
* Separate {@link getI32Encoder} and {@link getI32Decoder} functions are available.
*
* ```ts
* const bytes = getI32Encoder().encode(-42);
* const value = getI32Decoder().decode(bytes);
* ```
*
* @see {@link getI32Encoder}
* @see {@link getI32Decoder}
*/
export declare const getI32Codec: (config?: NumberCodecConfig) => FixedSizeCodec<bigint | number, number, 4>;
//# sourceMappingURL=i32.d.ts.map

import { FixedSizeCodec, FixedSizeDecoder, FixedSizeEncoder } from '@solana/codecs-core';
import { NumberCodecConfig } from './common';
/**
* Returns an encoder for 64-bit signed integers (`i64`).
*
* This encoder serializes `i64` values using 8 bytes.
* Values can be provided as either `number` or `bigint`.
*
* For more details, see {@link getI64Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeEncoder<number | bigint, 8>` for encoding `i64` values.
*
* @example
* Encoding an `i64` value.
* ```ts
* const encoder = getI64Encoder();
* const bytes = encoder.encode(-42n); // 0xd6ffffffffffffff
* ```
*
* @see {@link getI64Codec}
*/
export declare const getI64Encoder: (config?: NumberCodecConfig) => FixedSizeEncoder<bigint | number, 8>;
/**
* Returns a decoder for 64-bit signed integers (`i64`).
*
* This decoder deserializes `i64` values from 8 bytes.
* The decoded value is always a `bigint`.
*
* For more details, see {@link getI64Codec}.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeDecoder<bigint, 8>` for decoding `i64` values.
*
* @example
* Decoding an `i64` value.
* ```ts
* const decoder = getI64Decoder();
* const value = decoder.decode(new Uint8Array([
* 0xd6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
* ])); // -42n
* ```
*
* @see {@link getI64Codec}
*/
export declare const getI64Decoder: (config?: NumberCodecConfig) => FixedSizeDecoder<bigint, 8>;
/**
* Returns a codec for encoding and decoding 64-bit signed integers (`i64`).
*
* This codec serializes `i64` values using 8 bytes.
* Values can be provided as either `number` or `bigint`, but the decoded value is always a `bigint`.
*
* @param config - Optional configuration to specify endianness (little by default).
* @returns A `FixedSizeCodec<number | bigint, bigint, 8>` for encoding and decoding `i64` values.
*
* @example
* Encoding and decoding an `i64` value.
* ```ts
* const codec = getI64Codec();
* const bytes = codec.encode(-42n); // 0xd6ffffffffffffff
* const value = codec.decode(bytes); // -42n
* ```
*
* @example
* Using big-endian encoding.
* ```ts
* const codec = getI64Codec({ endian: Endian.Big });
* const bytes = codec.encode(-42n); // 0xffffffffffffffd6
* ```
*
* @remarks
* This codec supports values between `-2^63` and `2^63 - 1`.
* Since JavaScript `number` cannot safely represent values beyond `2^53 - 1`, the decoded value is always a `bigint`.
*
* - If you need a smaller signed integer, consider using {@link getI32Codec} or {@link getI16Codec}.
* - If you need a larger signed integer, consider using {@link getI128Codec}.
* - If you need unsigned integers, consider using {@link getU64Codec}.
*
* Separate {@link getI64Encoder} and {@link getI64Decoder} functions are available.
*
* ```ts
* const bytes = getI64Encoder().encode(-42);
* const value = getI64Decoder().decode(bytes);
* ```
*
* @see {@link getI64Encoder}
* @see {@link getI64Decoder}
*/
export declare const getI64Codec: (config?: NumberCodecConfig) => FixedSizeCodec<bigint | number, bigint, 8>;
//# sourceMappingURL=i64.d.ts.map
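The 8-byte layout above, with its always-`bigint` decoded value, can be illustrated with `DataView.setBigInt64`/`getBigInt64`. A minimal sketch (hypothetical helper names, not the library's implementation):

```typescript
// Hypothetical helpers illustrating the i64 layout described above;
// not the actual @solana/codecs-numbers implementation.
const encodeI64 = (value: bigint, littleEndian = true): Uint8Array => {
    const bytes = new Uint8Array(8);
    new DataView(bytes.buffer).setBigInt64(0, value, littleEndian);
    return bytes;
};

const decodeI64 = (bytes: Uint8Array, littleEndian = true): bigint =>
    new DataView(bytes.buffer, bytes.byteOffset, 8).getBigInt64(0, littleEndian);

// -42n round-trips through the 0xd6ffffffffffffff layout shown above.
const i64Bytes = encodeI64(-42n); // Uint8Array [0xd6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff]
const i64Value = decodeI64(i64Bytes); // -42n
```

`getBigInt64` always yields a `bigint`, which mirrors why the decoder's type is `FixedSizeDecoder<bigint, 8>` rather than `number`.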
