Monmouth Docs

AI Inference

Overview

The AI Inference precompile (0x1000) provides on-chain model inference for validation, scoring, and classification tasks. This is not for running full LLMs on-chain — it's for lightweight, deterministic inference operations that need protocol-level trust.

Address

0x0000000000000000000000000000000000001000

Use Cases

  • Transaction risk scoring: Compute risk score before execution
  • Agent behavior classification: Classify agent actions in real-time
  • Anomaly detection: Flag unusual transaction patterns
  • Reputation scoring: Compute trust scores from behavioral data

Interface

interface IAIInference {
    /// @notice Run inference on a registered model
    /// @param modelId The registered model identifier
    /// @param input The input tensor data (ABI-encoded)
    /// @return output The inference result (ABI-encoded)
    /// @return confidence The confidence score (0-10000, representing 0-100%)
    function infer(
        bytes32 modelId,
        bytes calldata input
    ) external view returns (bytes memory output, uint256 confidence);
 
    /// @notice Get registered model metadata
    /// @param modelId The model identifier
    /// @return name The model name
    /// @return version The model version
    /// @return inputSize Expected input dimensions
    function getModelInfo(
        bytes32 modelId
    ) external view returns (
        string memory name,
        uint256 version,
        uint256 inputSize
    );
}

SDK Usage

import { AIInference } from '@monmouth/wallet-sdk'
 
const inference = new AIInference(wallet)
 
// Risk scoring
const riskScore = await inference.scoreRisk({
  transaction: tx,
  agentDid: 'did:monmouth:agent:0x...',
})
 
console.log(riskScore)
// { score: 0.12, label: 'low-risk', confidence: 0.94 }
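A caller will typically gate on both fields of this result before acting. One way to sketch that, using the result shape shown above — the 0.3 / 0.9 thresholds are arbitrary examples, not protocol-defined values:

```typescript
// Shape of the SDK's risk-scoring result, as shown above.
interface RiskResult {
  score: number      // 0 (safe) to 1 (risky)
  label: string      // e.g. 'low-risk'
  confidence: number // 0 to 1
}

// Illustrative guard: only proceed when the model is confident
// and the score is below a caller-chosen risk ceiling.
function shouldProceed(
  r: RiskResult,
  maxScore = 0.3,
  minConfidence = 0.9
): boolean {
  return r.confidence >= minConfidence && r.score <= maxScore
}
```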

Registered Models

| Model ID | Name | Description |
|----------|------|-------------|
| `0x01` | risk-scorer-v1 | Transaction risk assessment |
| `0x02` | behavior-classifier-v1 | Agent behavior classification |
| `0x03` | anomaly-detector-v1 | Anomaly detection for transactions |
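The table lists short IDs like `0x01`, while `infer` takes a `bytes32 modelId`. Assuming these shorthand IDs expand to left-padded 32-byte values (an assumption, not stated here), the conversion can be sketched as:

```typescript
// Expand a short numeric model ID (e.g. 1 for risk-scorer-v1) into a
// bytes32 hex string by left-padding to 32 bytes. This padding convention
// is an assumption; verify it against the actual model registry.
function modelIdToBytes32(id: number): string {
  if (!Number.isInteger(id) || id < 0) {
    throw new RangeError('invalid model id')
  }
  return '0x' + id.toString(16).padStart(64, '0')
}
```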

Gas Costs

| Operation | Gas |
|-----------|-----|
| `infer` (base) | 50,000 |
| Per input dimension | 100 |
| `getModelInfo` | 5,000 |
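From the table above, the cost of an `infer` call scales linearly with input size. A rough estimator (helper name is my own; actual gas may vary by model):

```typescript
// Estimate infer gas from the documented schedule:
// 50,000 base + 100 per input dimension.
function estimateInferGas(inputDimensions: number): number {
  const BASE = 50_000
  const PER_DIMENSION = 100
  return BASE + PER_DIMENSION * inputDimensions
}

// e.g. a 128-dimension input costs about 62,800 gas
```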

Security

  • Models are registered and audited as part of protocol governance
  • Inference is deterministic — same input always produces same output
  • No external API calls — all computation happens in the execution environment
