Aleo developers write smart contracts in Leo, a statically-typed language influenced by Rust and JavaScript that compiles to zero-knowledge circuits. AI coding agents can accelerate Leo development through purpose-built skills that teach correct patterns and IDE tools that provide compiler feedback in real time.
## Leo: Aleo's programming language
Leo is a high-level language built specifically for writing zero-knowledge applications on Aleo. It exists because general-purpose smart contract languages like Solidity cannot target zero-knowledge proof systems. The reason is straightforward: a zero-knowledge proof attests to the correct execution of a fixed arithmetic circuit, and that circuit must be fully determined at compile time. Languages designed for EVM execution rely on dynamic dispatch, variable-length data structures, and runtime-determined control flow. None of that is expressible as a static circuit.
Leo gives you a developer-friendly syntax while enforcing the constraints that zero-knowledge circuits require. Its type system and language rules guarantee that every valid Leo program compiles into a finite, deterministic arithmetic circuit suitable for proof generation.
### Syntax and influences
Leo borrows from Rust and JavaScript in deliberate ways. From Rust: explicit integer widths (u8, u16, u32, u64, u128), signed variants (i8 through i128), the field and group types native to elliptic curve cryptography, bool, address, and composite types defined with struct and record. Ownership and borrowing do not appear in Leo, but the emphasis on explicit types and the absence of implicit coercion will feel familiar if you write Rust.
From JavaScript: a more approachable structural style. Programs are organized into named blocks. Functions (called transitions) accept named parameters and return explicitly typed outputs. The visual rhythm of a Leo file, with its curly braces, let bindings, and return statements, reads comfortably to anyone who has written TypeScript.
### Why the constraints exist
Several restrictions in Leo surprise developers on first encounter. No dynamically sized arrays or vectors. No native string type. Loops must have bounds known at compile time. No heap allocation. No recursion.
These are not oversights.
A zero-knowledge circuit is a fixed directed acyclic graph of arithmetic gates. Every variable, every branch, every loop iteration corresponds to concrete gates in the circuit. If a loop could run an unbounded number of times, the circuit size would be unknown at compile time, and proof generation would be impossible. If arrays could grow dynamically, the prover would need a circuit that handles every possible size, which is either infinite or artificially capped.
Leo enforces these constraints at the language level so that compilation always succeeds and the resulting circuit has a predictable, bounded size. This is not a limitation unique to Leo. It is inherent to all zero-knowledge programming. Leo just makes the constraints explicit and ergonomic rather than leaving you to discover them through opaque compiler errors.
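To make the fixed-circuit requirement concrete, here is a minimal sketch (using the program and function syntax covered later in this article) of the only kind of loop Leo accepts: one whose bound is a compile-time constant, so the compiler can unroll it into a known number of gates. The program name is hypothetical.

```leo
program bounded.aleo {
    @noupgrade
    constructor() {}

    // The array size and loop bound are compile-time constants, so the
    // circuit unrolls to exactly 8 addition steps -- no unknown sizes.
    fn sum_fixed(values: [u64; 8]) -> u64 {
        let total: u64 = 0u64;
        for i: u32 in 0u32..8u32 {
            total = total + values[i];
        }
        return total;
    }
}
```

A loop over `0u32..n` where `n` is a runtime input would be rejected, because the circuit could not be sized at compile time.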
### Core language constructs
A Leo source file declares a program with a unique name and network identifier. Inside the program, you define:
- Structs: Named collections of typed fields for public composite data.
- Records: Like structs but with an implicit `owner` field of type `address`. Records are private state: encrypted on-chain and only visible to their owner.
- Mappings: Key-value stores in public on-chain storage, analogous to Solidity's `mapping` type.
- Constructors: Required for all programs. The constructor defines the program's upgrade policy using decorators like `@noupgrade`, `@admin`, `@checksum`, or `@custom`.
- Functions (`fn`): The entry points for program execution. A function takes typed inputs, performs computation, and produces typed outputs. Functions execute off-chain on the caller's machine; only the resulting proof and encrypted outputs are submitted to the network.
- `final { }` blocks: Optional on-chain execution requests returned from functions. In practice you return a `Final` value from the function body that invokes a `final fn` scope. Mapping and storage updates must happen in those final scopes, because public state is consensus-dependent.

This dual execution model (off-chain function execution paired with on-chain final scopes) is one of Aleo's defining architectural choices. Private computation happens in the function body. Public state updates happen in `final fn` logic invoked through `final { ... }`.
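A minimal sketch of the dual model, following the conventions described above. The program name is hypothetical, and the mapping accessors `get` and `set` inside the final scope are assumptions to verify against your Leo version:

```leo
program visits.aleo {
    @noupgrade
    constructor() {}

    mapping visit_count: address => u64;

    // Private half: runs off-chain on the caller's machine.
    fn visit() -> Final {
        let who: address = self.caller;
        // Public half: the returned final block is executed on-chain.
        return final { record_visit(who); };
    }

    final fn record_visit(who: address) {
        // `get`/`set` on mappings are assumed accessor names here.
        let current: u64 = visit_count.get(who).unwrap_or(0u64);
        visit_count.set(who, current + 1u64);
    }
}
```

The split is visible in the types: the function body computes privately and returns a `Final`, while all touches of public state live in the `final fn`.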
## Your first Leo program
Here is a minimal private token. It supports minting tokens to an address and transferring tokens between addresses. All balances are held in records, so they are private by default: only the record owner can see the balance.
This uses current Leo syntax and is copy-pasteable into a Leo project created with leo new.
```leo
program private_token.aleo {
    @noupgrade
    constructor() {}

    record Token {
        owner: address,
        amount: u64,
    }

    fn mint(receiver: address, amount: u64) -> Token {
        return Token {
            owner: receiver,
            amount: amount,
        };
    }

    fn transfer(
        sender_token: Token,
        receiver: address,
        amount: u64,
    ) -> (Token, Token) {
        let remaining: u64 = sender_token.amount - amount;
        let sender_change: Token = Token {
            owner: sender_token.owner,
            amount: remaining,
        };
        let receiver_token: Token = Token {
            owner: receiver,
            amount: amount,
        };
        return (sender_change, receiver_token);
    }
}
```
### Walking through the code
`program private_token.aleo` names the program and associates it with the Aleo network. Every deployed program must have a unique name on-chain.
`record Token` defines a record type with two fields: `owner` (an address) and `amount` (a `u64`). Because this is a record rather than a struct, each `Token` instance is encrypted on-chain. Only the address in the `owner` field can decrypt and spend it.
The `mint` function takes a receiver address and an amount, then returns a new `Token` record assigned to that receiver. In a production token, you would gate minting behind authorization logic, but this example keeps things clear.
The `transfer` function consumes an existing `Token` record (`sender_token`), subtracts the transfer amount, and produces two new records: one returning change to the sender, one delivering the specified amount to the receiver. This is a UTXO-style model. The input record is consumed (it cannot be spent again), and fresh records are created. If `amount` exceeds `sender_token.amount`, the subtraction underflows and the runtime check that Leo inserts causes the transaction to fail. No overdrafts.
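If you prefer the overdraft rule to be explicit rather than implied by checked arithmetic, a variant of `transfer` can assert before subtracting. This is a stylistic sketch, not a requirement for safety; the function name is hypothetical:

```leo
fn transfer_checked(
    sender_token: Token,
    receiver: address,
    amount: u64,
) -> (Token, Token) {
    // Make the no-overdraft rule visible instead of relying only on the
    // compiler-inserted underflow check on the subtraction below.
    assert(amount <= sender_token.amount);
    let remaining: u64 = sender_token.amount - amount;
    let sender_change: Token = Token { owner: sender_token.owner, amount: remaining };
    let receiver_token: Token = Token { owner: receiver, amount: amount };
    return (sender_change, receiver_token);
}
```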
To create this project locally:

```shell
leo new private_token
cd private_token
```

Replace the contents of `src/main.leo` with the code above, then compile:

```shell
leo build
```
If the build succeeds, Leo has compiled your program into Aleo instructions (the `build/` directory will contain the `.aleo` bytecode and the proving/verifying keys). You can then run a function locally:

```shell
leo run mint aleo1abc...your_address 1000u64
```
This executes the mint function off-chain, generates a zero-knowledge proof, and prints the resulting Token record.
## AI-assisted development with skills
Writing Leo introduces a learning curve even for experienced developers. The language's constraints, its dual execution model, and its record-based state management all require new mental models. AI coding agents can flatten this curve, but only if they have access to accurate, up-to-date knowledge about Leo and the Aleo ecosystem.
### What skills are
Skills are structured knowledge packages injected into an AI coding agent's context. Rather than relying on the agent's pretrained knowledge, which may be outdated or incomplete for a rapidly evolving ecosystem, skills provide authoritative, version-specific guidance that the agent can reference during code generation, review, and debugging.
A skill typically consists of curated documentation, canonical code patterns, compiler rules, common error resolutions, and architectural guidance. When a skill is loaded, the agent gains domain expertise: it knows which Leo syntax is current, which patterns the compiler accepts, and which approaches lead to provable, deployable programs.
### The Aleo skill set
The Aleo skills package covers several domains:
- Smart contract development: Leo syntax, record and struct definitions, function design, `final { }` blocks, mapping operations, and type system rules. Includes canonical patterns for common operations and explains the reasoning behind Leo's constraints.
- Deployment: The full workflow from `leo build` through testnet deployment with `snarkos developer deploy` to mainnet launch, including fee estimation and key management.
- Frontend integration: Using the Aleo SDK (`@provablehq/sdk`) and the `@provablehq/wasm` package to connect web applications to Aleo programs, including wallet adapter integration for browser-based signing.
- Backend and infrastructure: Running SnarkOS nodes, indexing on-chain data, and interacting with the Aleo REST API for reading public mappings and transaction data.
- Staking and validators: Aleo's proof-of-stake consensus mechanics, delegator operations, and validator setup.
### Installing skills
Skills are designed to work with major AI-assisted development environments. Installation depends on your tooling:
- Claude Code / Amp / Codex / OpenCode: Skills install as `SKILL.md` files to each agent's native global skills directory (e.g., `~/.claude/skills/`, `~/.config/amp/skills/`). Use `--local` for project-level installation.
- Cursor: Skills install to `.cursor/rules/` as `.mdc` files (project-local only).
- Windsurf: Summary installs globally to `~/.codeium/windsurf/memories/global_rules.md`, full skills to `.windsurf/rules/`.
- Cline / Roo Code / Continue: Skills install to each agent's global rules directory (e.g., `~/Documents/Cline/Rules/`, `~/.roo/rules/`).
- Gemini CLI: Summary installs to `~/.gemini/GEMINI.md`.
- Aider: Skills install to `~/.aleo-skills/`, referenced via `read:` entries in `.aider.conf.yml`.
- GitHub Copilot: Skills install to `.github/instructions/` as `.instructions.md` files.
Run `curl -fsSL https://aleoforagents.com/skills.sh | bash` to auto-detect your harness and install.
## Compiler-in-the-loop development
The most powerful application of AI tooling for Aleo development is not code generation in isolation. It is the tight feedback loop between the agent, the Leo compiler, and the Aleo runtime.
### The Leo CLI as an agent tool
Any AI agent with shell access can use the Leo CLI directly. The relevant commands:
- `leo build`: Compiles a Leo program and returns compiler output, including errors with line numbers and explanations.
- `leo run <function> <inputs>`: Executes a specific transition with provided inputs and returns the outputs.
- `leo test`: Runs the program's test suite (scripts annotated with `@test`).
- `snarkos developer deploy`: Deploys a compiled program to testnet or mainnet.
### The feedback loop
With the Leo CLI available, an AI agent can execute a complete development cycle without human intervention:
- Write: The agent generates a Leo program based on your specification.
- Compile: The agent runs `leo build` to check the program. If the compiler returns errors, the agent reads the error messages, identifies the issue, and modifies the code.
- Fix: Common errors (type mismatches, undeclared variables, invalid operations on record types) are resolved using skill knowledge and the compiler's diagnostic output.
- Test: Once the program compiles, the agent runs `leo run` with test inputs to verify that transitions produce expected outputs. If outputs are wrong, the agent traces the logic and corrects it.
- Deploy: When tests pass, the agent can deploy to testnet using `snarkos developer deploy`, confirm the deployment, and report the on-chain program ID back to you.
This loop works with any agent that can execute shell commands (Claude Code, Cursor, Codex, Amp, Cline, and others). Skills provide the Leo knowledge; the compiler provides the feedback.
This loop is particularly valuable for Leo because the compiler's error messages often reference circuit constraints that are unfamiliar to developers new to zero-knowledge programming. An agent equipped with Aleo skills can interpret these errors, explain them in plain language, and apply the correct fix in seconds.
### Practical impact
Compiler-in-the-loop development reduces iteration time on a new Leo program from hours to minutes. You describe a desired program in natural language, and the agent produces a working, tested, testnet-deployed program through iterative compilation and correction. You review the final output rather than debugging each intermediate state.
This does not eliminate the need for developer understanding. Code review, security auditing, and architectural decisions still require human judgment. But satisfying the compiler, fixing type errors, and adjusting for circuit constraints is exactly the kind of repetitive, pattern-based work that AI agents handle well.
## Deployment: testnet to mainnet
Deploying a Leo program to the Aleo network means compiling the program into Aleo instructions, generating proving and verifying keys, and submitting a deployment transaction.
### Build

Start with a clean build:

```shell
leo build
```
This compiles the Leo source into Aleo instructions (a lower-level representation) and generates the cryptographic keys needed for proving and verification. Output goes to the build/ directory. The .aleo file contains the compiled program; the .prover and .verifier files contain the keys.
### Testnet deployment

Aleo maintains a testnet for development and testing. To deploy:

```shell
snarkos developer deploy private_token.aleo \
  --private-key <YOUR_PRIVATE_KEY> \
  --query https://api.explorer.provable.com/v1 \
  --path ./build/ \
  --broadcast https://api.explorer.provable.com/v1/testnet/transaction/broadcast \
  --fee 1000000
```
The --fee parameter is in microcredits (1 credit = 1,000,000 microcredits). Deployment costs are proportional to compiled program size: larger programs with more transitions and more complex logic produce larger circuits, which require more on-chain storage and higher fees. A simple program like the private token example costs a fraction of a credit. Complex programs with many transitions and large finalize blocks cost significantly more.
### Getting testnet credits
Testnet credits are available through the Aleo faucet. Visit the official Aleo faucet or use the /faucet command in the Aleo Discord server. Testnet credits have no monetary value and are for development and testing only.
### Mainnet deployment
Mainnet deployment uses the same snarkos developer deploy command with the mainnet API endpoint and real Aleo credits. The process is identical to testnet, but the consequences are permanent: the program name is reserved on deployment, and future code changes are governed only by the constructor's upgrade policy (@noupgrade, @admin, @checksum, @custom).
Before deploying to mainnet:
- Test all transitions with representative inputs.
- Check edge cases, particularly arithmetic underflow and overflow conditions.
- Review the program for correctness. Choose the constructor upgrade policy deliberately.
- Confirm you have sufficient credits for the deployment fee.
### Program upgradability
Aleo supports policy-controlled program upgradability through constructors. When you deploy a program, its constructor defines the upgrade policy. Four built-in decorators control this:
- `@noupgrade` - the program can never be changed after deployment
- `@admin(address)` - only the specified address can authorize upgrades
- `@checksum` - upgrades require matching a pre-committed code hash
- `@custom` - custom upgrade logic defined in the constructor itself
The constructor function itself is immutable after deployment. The upgrade rules are permanent, even if the program code can change. Choose @noupgrade for permanently immutable logic. Use @admin or @checksum if you need controlled upgradeability.
## Additional Leo language features
Beyond the core constructs above, Leo includes several features that matter for production programs.
### Storage variables and vectors
Leo supports persistent on-chain state through storage declarations. Unlike mappings (which are key-value pairs), storage variables hold a single value directly:
```leo
program counter.aleo {
    @noupgrade
    constructor() {}

    storage count: u64;
    storage items: [field];

    fn increment() -> Final {
        return final { finalize_increment(); };
    }

    final fn finalize_increment() {
        let current: u64 = count.unwrap_or(0u64);
        count = current + 1u64;
    }
}
```
Storage values behave like optionals. They may not be set yet, so use .unwrap_or(default) when reading them. Storage vectors (storage vec: [T];) provide ordered on-chain collections. Both storage variables and vectors persist across transactions and are publicly readable.
### Optional types
Leo supports optional types with the T? syntax. An optional value is either present or absent. Optional types show up frequently when reading from mappings or storage in final { } blocks:
```leo
fn check_balance(addr: address) {
    final {
        let maybe_balance: u64? = balances.get(addr);
        let balance: u64 = maybe_balance.unwrap_or(0u64);
        assert(balance > 0u64);
    }
}
```
Use .unwrap() when you expect the value to exist (panics if absent) and .unwrap_or(default) to provide a fallback. Mapping and storage reads return optionals because the key may not exist yet.
### Const generics
Programs can use const generics to parameterize functions over compile-time constants:
```leo
fn sum_array::[N: u32](arr: [u64; N]) -> u64 {
    let total: u64 = 0u64;
    for i: u32 in 0u32..N {
        total = total + arr[i];
    }
    return total;
}
```
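Each instantiation of `N` produces its own fixed-size circuit. Assuming the call-site syntax mirrors the declaration syntax above (an assumption worth verifying with `leo build` on your toolchain version), usage might look like:

```leo
// Hypothetical call sites: each const argument fixes the array size,
// and therefore the circuit, at compile time.
let small: u64 = sum_array::[4u32]([1u64, 2u64, 3u64, 4u64]);
let large: u64 = sum_array::[8u32]([1u64; 8]);
```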
### Testing annotations
Leo provides built-in test support. Tests are written as script blocks (standalone execution contexts outside the program scope) and annotated with @test. The @should_fail annotation marks tests expected to panic:
```leo
@test
script test_transfer_valid() {
    let token: Token = Token { owner: aleo1..., amount: 100u64 };
    let (change, output) = private_token.aleo/transfer(token, aleo1..., 50u64);
    assert_eq(change.amount, 50u64);
    assert_eq(output.amount, 50u64);
}

@test
@should_fail
script test_transfer_overdraft() {
    let token: Token = Token { owner: aleo1..., amount: 10u64 };
    private_token.aleo/transfer(token, aleo1..., 50u64); // Should panic: underflow
}
```
Tests run with leo test and execute outside the program's proof context, making them fast for iteration.
### Leo CLI commands
Beyond leo build and leo run, the Leo toolchain includes:
- `leo devnet` - spin up a local development network for testing (use `--clean-only` to reset state)
- `leo devnode` - run a single local validator node
- `leo synthesize` - generate proving/verifying keys without deploying
- `leo fmt` - format Leo source code
- `leo upgrade` - upgrade a deployed program (if the constructor allows it)
- `leo test` - run test scripts annotated with `@test`
## Aleo ecosystem
The Aleo ecosystem includes several shipped projects you can build on or integrate with:
- USAD stablecoin - a USD-pegged stablecoin on Aleo with private transfers
- Shield wallet - a privacy-focused wallet for managing Aleo assets
- Request Finance - invoice and payment infrastructure integrated with Aleo
- create-leo-app - scaffolding tool for bootstrapping new Leo projects with frontend templates (`npm create leo-app@latest`)
- Token Registry Program - a shared on-chain registry for token metadata and standards compliance
- Wallet adapter library (`@demox-labs/aleo-wallet-adapter`) - a framework-agnostic library for connecting dApps to Aleo wallets, similar to Solana's wallet adapter pattern
These provide ready-made infrastructure that reduces custom code for common tasks like payments, identity, and wallet connectivity.
## Key patterns
The following patterns are common building blocks in Aleo application development. Each is a brief description; full implementations with explanations are in the Aleo skills package.
Private token: The UTXO-based record model from the first Leo program above. Tokens are held in encrypted records, and transfers consume input records while producing new output records. Amounts, sender, and receiver are all hidden from external observers.
Public-to-private conversion (shielding): Moves assets from public mapping state to private record state. A finalize block decrements a public balance, and the transition produces a private record with the corresponding amount. The reverse (unshielding) consumes a private record and credits a public mapping. Users can move fluidly between public and private representations of the same asset.
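A compressed sketch of the shielding half, in the style of the examples above. The program name is hypothetical, and both the `(Token, Final)` return shape and the mapping `set` accessor are assumptions to verify against current Leo:

```leo
program shield_example.aleo {
    @noupgrade
    constructor() {}

    mapping public_balances: address => u64;

    record Token {
        owner: address,
        amount: u64,
    }

    // Shield: mint a private record and request an on-chain debit of the
    // caller's public balance. Returning a record alongside a Final value
    // is an assumed syntax; check it with `leo build`.
    fn shield(amount: u64) -> (Token, Final) {
        let who: address = self.caller;
        let token: Token = Token { owner: who, amount: amount };
        return (token, final { debit_public(who, amount); });
    }

    final fn debit_public(who: address, amount: u64) {
        let balance: u64 = public_balances.get(who).unwrap_or(0u64);
        // The underflow check rejects shielding more than the public balance.
        public_balances.set(who, balance - amount);
    }
}
```

Unshielding is the mirror image: consume the private record in the function body and credit the public mapping in the final scope.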
Commit-reveal: A two-phase pattern for applications like auctions or voting where inputs must be hidden until a reveal phase. In the commit phase, a user submits a cryptographic commitment (typically a hash of their value concatenated with a random nonce) stored in a public mapping. In the reveal phase, they submit the original value and nonce; the finalize block verifies the commitment and records the revealed value.
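A sealed-bid sketch of the two phases, again following this article's conventions. The program name is hypothetical, and the `BHP256::commit_to_field` name with its `(value, scalar)` signature is an assumption based on Leo's hash/commit built-ins:

```leo
program sealed_bid.aleo {
    @noupgrade
    constructor() {}

    mapping commitments: address => field;
    mapping revealed_bids: address => u64;

    // Phase 1: publish only a hiding commitment to the bid.
    fn commit_bid(bid: u64, blinding: scalar) -> Final {
        let c: field = BHP256::commit_to_field(bid, blinding);
        let who: address = self.caller;
        return final { store_commitment(who, c); };
    }

    final fn store_commitment(who: address, c: field) {
        commitments.set(who, c);
    }

    // Phase 2: reveal bid and blinding; on-chain logic recomputes and checks.
    fn reveal_bid(bid: u64, blinding: scalar) -> Final {
        let c: field = BHP256::commit_to_field(bid, blinding);
        let who: address = self.caller;
        return final { check_and_record(who, c, bid); };
    }

    final fn check_and_record(who: address, c: field, bid: u64) {
        // Fails if no commitment exists or the recomputed value differs.
        assert_eq(commitments.get(who).unwrap(), c);
        revealed_bids.set(who, bid);
    }
}
```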
Multisig: Implements m-of-n approval for program operations. Approvals are tracked in a public mapping, and a finalize block checks whether the approval threshold has been met before executing the guarded operation. Signer identity is verified through self.caller in transitions and signature-based authentication, since Aleo does not have Solidity-style msg.sender in the same way.
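A skeletal sketch of the approval-counting half, deliberately incomplete: a real multisig must record *which* addresses approved (to prevent one signer approving twice), not just how many. The program name and mapping accessors are assumptions:

```leo
program multisig_guard.aleo {
    @noupgrade
    constructor() {}

    // approvals[proposal_id] counts approvals received so far.
    mapping approvals: field => u8;

    fn approve(proposal_id: field) -> Final {
        // self.caller identifies the signer in the off-chain transition.
        let who: address = self.caller;
        return final { record_approval(proposal_id, who); };
    }

    final fn record_approval(proposal_id: field, who: address) {
        // Omitted: a per-(proposal, signer) mapping guarding double-counting.
        let count: u8 = approvals.get(proposal_id).unwrap_or(0u8);
        approvals.set(proposal_id, count + 1u8);
        // The guarded operation proceeds once the count reaches the
        // threshold, e.g. 2u8 for a 2-of-3 scheme.
    }
}
```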
Oracle consumption: A pattern for reading external data on-chain. An oracle operator submits data through a dedicated transition with a finalize block that writes to a public mapping. Consumer programs read from this mapping in their own finalize blocks. Cross-program data reading through public mappings enables composable oracle-driven applications.
## From Solidity to Leo
If you are coming from Ethereum development, translating your mental model to Aleo requires understanding both the conceptual mappings and where the analogy breaks down.
### Concept mapping
| Solidity | Leo | Notes |
|---|---|---|
| `contract` | `program` | A program is the deployment unit on Aleo. |
| `function` / `external` | `fn` | Functions are the callable entry points. They execute off-chain. |
| `mapping(K => V)` | `mapping` | Leo mappings are public on-chain state, accessed only in finalize blocks. |
| `msg.sender` | `self.caller` | Available in transitions to identify the caller. |
| Events | Record creation | Records serve as private receipts. There is no direct event log equivalent. |
| storage variables | mapping entries / `storage` declarations | Public state lives in mappings. Programs can also use storage variables for persistent on-chain state. |
| `struct` | `struct` / `record` | Structs are public data; records are private (encrypted) data with an owner. |
| `require()` | `assert()` / `assert_eq()` | Assertion failures abort the transition. |
### What does not translate
Dynamic arrays and strings: Leo has no bytes, string, or dynamically sized array type. All data structures must have sizes known at compile time. If your Solidity contract processes variable-length strings, you will need to rethink the data model, typically using fixed-size integer arrays or offloading string handling to the frontend.
Reentrancy: Not possible in Leo. Transitions execute off-chain and produce a proof; they do not call back into other contracts during execution. The finalize phase is sequential and does not support external calls that could reenter the originating program. An entire category of Solidity vulnerabilities does not exist here.
Dynamic dispatch and interfaces: Solidity supports interface-based polymorphism, abstract contracts, and delegate calls. Leo has none of these. Cross-program calls exist (one program can call another program's transitions), but they are statically resolved at compile time. There is no equivalent of calling an address that could be any contract implementing an interface.
Inheritance: Leo programs do not support inheritance. No is keyword, no base contracts, no virtual functions. Code reuse happens through explicit composition: importing another program and calling its transitions.
Gas model: Aleo does not use a gas model in the Ethereum sense. Transition execution happens off-chain and has no per-opcode cost. Deployment costs credits proportional to program size. Transaction fees cover proof verification and finalize execution. The cost model is simpler and more predictable than Ethereum's gas mechanism, but you need to understand that computation cost is front-loaded into proof generation time rather than metered at execution.
### The privacy shift
The most significant mental model change is around data visibility. In Solidity, all contract state is public by default (even private variables are readable from the blockchain). Privacy requires off-chain workarounds. In Leo, it is the reverse: record-based state is private, and making data public requires explicit use of mappings and finalize blocks.
When you design applications on Aleo, start from the assumption that user data is private and selectively expose what needs to be public for consensus or composability. This is the opposite of the Ethereum pattern, where you start with everything public and try to shield sensitive data.
## Frequently Asked Questions
### What programming languages is Leo similar to?
Leo draws its primary influences from Rust and JavaScript. From Rust, it inherits explicit static typing with specific integer widths (u8, u16, u32, u64, u128 and their signed counterparts), struct and record types, and a general emphasis on type safety. From JavaScript, it borrows its structural style: curly-brace delimited blocks, let bindings, and an overall syntax flow that feels approachable to web developers. However, Leo is not a general-purpose language. It is specifically designed to compile to zero-knowledge circuits, which means it lacks features common to both Rust and JavaScript such as dynamic memory allocation, variable-length data structures, recursion, and runtime-dependent control flow.
### Can I use JavaScript/TypeScript to interact with Aleo?
Yes. The Aleo SDK (@provablehq/sdk) is a TypeScript-compatible library that allows frontend and backend applications to interact with the Aleo network. You can use it to create and submit transactions, decrypt records, read public mapping state, and generate proofs in the browser or in Node.js using the companion WASM package (@provablehq/wasm). Most Aleo dApps use a TypeScript frontend that communicates with deployed Leo programs through the SDK, combined with wallet adapters like Leo Wallet or Puzzle Wallet for user-facing transaction signing.
### How do AI coding agents help with Leo development?
AI coding agents accelerate Leo development in several ways. With Aleo-specific skills loaded, agents have access to current Leo syntax rules, canonical patterns, and ecosystem knowledge that may not be in their base training data. Agents that can execute shell commands use the Leo compiler directly - they write code, compile it with `leo build`, read error messages, fix issues, and test transitions with `leo run` in a tight feedback loop without human intervention. This compiler-in-the-loop approach is especially valuable for Leo because many compiler errors relate to zero-knowledge circuit constraints that are unfamiliar to developers new to the ecosystem. The agent can interpret these errors and apply correct fixes in seconds.
### What are the limitations of Leo compared to Solidity?
Leo lacks several features that Solidity developers take for granted: dynamic arrays, strings, mappings with iteration, inheritance, interfaces, abstract contracts, delegate calls, and runtime-determined control flow like unbounded loops or recursion. These limitations exist because Leo compiles to zero-knowledge circuits, which must be fixed in size and fully deterministic at compile time. Aleo also uses policy-controlled upgradability via constructors (`@noupgrade`, `@admin`, `@checksum`, `@custom`), which means upgrade behavior is explicit and fixed at deployment time. On the other hand, Leo provides native privacy through records, eliminates entire vulnerability classes like reentrancy, and offers a more predictable cost model since proof generation happens off-chain.
### How much does it cost to deploy a program on Aleo?
Deployment costs on Aleo are denominated in credits and are proportional to the compiled program size. A simple program with one or two transitions may cost less than one credit to deploy. Complex programs with many transitions, large finalize blocks, and extensive type definitions will cost more, potentially several credits. The exact cost depends on the size of the compiled Aleo instructions and the associated proving and verifying keys. You specify the fee in microcredits (1 credit = 1,000,000 microcredits) when submitting the deployment transaction. Testnet deployment uses free testnet credits available from the Aleo faucet.
### Is there a playground or REPL for Leo?
Aleo provides an online IDE at play.leo-lang.org where you can write, compile, and run Leo programs directly in your browser without installing any tooling. This is the quickest way to experiment with Leo syntax and test small programs. For local development, the Leo CLI is the primary tool: you can compile and run transitions from the command line using leo build and leo run. While there is no traditional REPL (read-eval-print loop) since Leo programs must be compiled to circuits before execution, the combination of the online playground and the fast local CLI provides a rapid iteration workflow.
### Can Leo programs call other Leo programs?
Yes. Leo supports cross-program invocations where one program calls a transition defined in another deployed program. These calls are statically resolved at compile time, meaning you must import the target program and reference its entry functions explicitly. The call is not dynamic dispatch as in Solidity; you cannot call an arbitrary program at a runtime-determined address. Cross-program calls enable composability: for example, a DeFi protocol program can call a token program's transfer transition. However, each cross-program call adds to the overall circuit size and proof generation time, so deep call chains should be designed carefully.
### How do I test my Leo program before deploying?
Leo provides several testing avenues. Locally, you can use leo run to execute individual transitions with specific inputs and verify the outputs. This runs the full proof generation pipeline on your machine and confirms that the program compiles and transitions produce correct results. For more structured testing, you can write test scripts annotated with `@test` that run with `leo test`. The Aleo testnet provides a full network environment where you can deploy your program and test it under realistic conditions, including finalize block execution and mapping state persistence. AI agents with shell access can automate this entire cycle: compile, run with test inputs, check outputs, fix any issues, and deploy to testnet, all in a single automated workflow.