# AI Inference

## Overview
The AI Inference precompile (0x1000) provides on-chain model inference for validation, scoring, and classification tasks. This is not for running full LLMs on-chain — it's for lightweight, deterministic inference operations that need protocol-level trust.
## Address

`0x0000000000000000000000000000000000001000`
## Use Cases
- Transaction risk scoring: Compute risk score before execution
- Agent behavior classification: Classify agent actions in real-time
- Anomaly detection: Flag unusual transaction patterns
- Reputation scoring: Compute trust scores from behavioral data
## Interface

```solidity
interface IAIInference {
    /// @notice Run inference on a registered model
    /// @param modelId The registered model identifier
    /// @param input The input tensor data (ABI-encoded)
    /// @return output The inference result (ABI-encoded)
    /// @return confidence The confidence score (0-10000, representing 0-100%)
    function infer(
        bytes32 modelId,
        bytes calldata input
    ) external view returns (bytes memory output, uint256 confidence);

    /// @notice Get registered model metadata
    /// @param modelId The model identifier
    /// @return name The model name
    /// @return version The model version
    /// @return inputSize Expected input dimensions
    function getModelInfo(
        bytes32 modelId
    ) external view returns (
        string memory name,
        uint256 version,
        uint256 inputSize
    );
}
```
## SDK Usage

```typescript
import { AIInference } from '@monmouth/wallet-sdk'

const inference = new AIInference(wallet)

// Risk scoring
const riskScore = await inference.scoreRisk({
  transaction: tx,
  agentDid: 'did:monmouth:agent:0x...',
})

console.log(riskScore)
// { score: 0.12, label: 'low-risk', confidence: 0.94 }
```
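The result pairs a numeric `score` with a human-readable `label`. A sketch of how such a mapping might look, with the caveat that the type name and thresholds below are illustrative assumptions, not values defined by the SDK or the protocol:

```typescript
// Hypothetical score-to-label mapping. The 0.33/0.66 thresholds are
// illustrative assumptions; consult the SDK for the real boundaries.
type RiskLabel = 'low-risk' | 'medium-risk' | 'high-risk'

function labelForScore(score: number): RiskLabel {
  if (score < 0 || score > 1) {
    throw new RangeError(`score out of range: ${score}`)
  }
  if (score < 0.33) return 'low-risk'
  if (score < 0.66) return 'medium-risk'
  return 'high-risk'
}

console.log(labelForScore(0.12)) // low-risk
```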
## Registered Models

| Model ID | Name | Description |
|---|---|---|
| 0x01 | risk-scorer-v1 | Transaction risk assessment |
| 0x02 | behavior-classifier-v1 | Agent behavior classification |
| 0x03 | anomaly-detector-v1 | Anomaly detection for transactions |
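The table lists short model IDs, while `infer` takes a `bytes32`. A common convention is to zero-pad the short ID on the left to 32 bytes; whether this chain follows that convention is an assumption worth verifying against the protocol spec:

```typescript
// Left-pad a short hex model ID (e.g. "0x01") to a 32-byte hex string.
// Assumes left-padding is the registry's convention, which is not
// confirmed by the table above.
function toBytes32ModelId(shortId: string): string {
  const hex = shortId.replace(/^0x/, '')
  if (!/^[0-9a-fA-F]+$/.test(hex) || hex.length > 64) {
    throw new Error(`invalid model id: ${shortId}`)
  }
  return '0x' + hex.padStart(64, '0')
}

// 0x01 becomes 0x00…01 (64 hex characters)
console.log(toBytes32ModelId('0x01'))
```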
## Gas Costs

| Operation | Gas |
|---|---|
| infer (base) | 50,000 |
| Per input dimension | 100 |
| getModelInfo | 5,000 |
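Per the table, an `infer` call costs a flat base plus a per-dimension charge, so total gas scales linearly with input size. A sketch of estimating it ahead of time (the helper name is mine, not an SDK API):

```typescript
// Estimate gas for an infer() call using the figures from the table:
// 50,000 base plus 100 per input dimension.
const INFER_BASE_GAS = 50_000n
const GAS_PER_DIMENSION = 100n

function estimateInferGas(inputDimensions: number): bigint {
  if (!Number.isInteger(inputDimensions) || inputDimensions < 0) {
    throw new RangeError(`invalid dimension count: ${inputDimensions}`)
  }
  return INFER_BASE_GAS + GAS_PER_DIMENSION * BigInt(inputDimensions)
}

console.log(estimateInferGas(128)) // 62800n
```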
## Security
- Models are registered and audited as part of protocol governance
- Inference is deterministic — same input always produces same output
- No external API calls — all computation happens in the execution environment