Managing Intelligent Contract Operations with the Equivalence Principle
The Equivalence Principle is a core concept in GenLayer's Intelligent Contract framework. It ensures consistency and reliability when handling non-deterministic operation results, such as responses from Large Language Models or web data retrieval, by establishing a standard for validators to agree on the correctness of these outputs. These functions give users detailed control over how the outputs are validated.
Depending on how you want the validators to behave, you can choose from several options, such as a principle that uses LLMs to judge equivalence or one that uses a strict comparison.
Advanced users may also choose to write their own equivalence principle.
The Equivalence Principle involves multiple randomly selected validators that determine whether different outputs from non-deterministic operations can be considered equivalent. One validator acts as the leader and proposes the output, while the others validate it and, if they accept it, return it in place of their own computation.
Equivalence Principle Options
Validators work to reach a consensus on whether the result set by the leader is acceptable, which might involve direct comparison or qualitative evaluation, depending on the contract's design. If the validators do not reach a consensus due to differing data interpretations or an error in data processing, the transaction will become undetermined.
Comparative Equivalence Principle
In the Comparative Equivalence Principle, the leader and the validators perform identical tasks and then directly compare their respective results with the predefined criteria to ensure consistency and accuracy. This method uses an acceptable margin of error to handle slight variations in results between validators and is suitable for quantifiable outputs. However, computational demands and associated costs increase since multiple validators perform the same tasks as the leader.
gl.eq_principle.prompt_comparative(
    your_non_deterministic_function,
    "The result must not differ by more than 5%"
)
For example, if an intelligent contract is tasked with fetching the follower count of a Twitter account and the Equivalence Principle specifies that follower counts should not differ by more than 5%, validators use their own LLMs to compare their results with the leader's result and ensure they fall within this margin.
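To make this concrete, a contract using the Comparative Equivalence Principle for this scenario might look roughly like the sketch below. The get_follower_count_from_web helper is a hypothetical stand-in for whatever web retrieval the contract actually performs, and the storage layout is illustrative rather than prescriptive:

class FollowerTracker(gl.Contract):
    followers: int

    @gl.public.write
    def update_followers(self, handle: str) -> None:
        def fetch_follower_count() -> int:
            # Non-deterministic step: retrieve the live follower count
            # (hypothetical helper; real code would use the SDK's web access)
            return get_follower_count_from_web(handle)

        # The leader proposes a value; each validator re-runs the fetch and
        # accepts the proposal only if it falls within the 5% margin
        self.followers = gl.eq_principle.prompt_comparative(
            fetch_follower_count,
            "The result must not differ by more than 5%"
        )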
Non-Comparative Equivalence Principle
The GenLayer SDK provides the function gl.eq_principle.prompt_non_comparative for handling most scenarios that require performing subjective NLP tasks.
Non-Comparative Equivalence Principle Parameters
The gl.eq_principle.prompt_non_comparative function takes three key parameters that define how validators should evaluate non-deterministic operations:
- input (function): The input parameter represents the original data or function that needs to be processed by the task. For instance, when building a sentiment analysis contract, the input might be a text description that needs to be classified. The function processes this input before passing it to the validators for evaluation.
- task (str): The task parameter provides a clear and concise instruction that defines exactly what operation needs to be performed on the input. This string should be specific enough to guide the validators in their evaluation process while remaining concise enough to be easily understood. For example, in a sentiment analysis context, the task might be "Classify the sentiment of this text as positive, negative, or neutral". This instruction serves as a guide for both the leader and validators in processing the input.
- criteria (str): The criteria parameter defines the specific rules and requirements that validators use to determine if an output is acceptable. This string should contain a comprehensive set of validation parameters that ensure consistency across different validators. While the criteria can be structured in various ways, it typically outlines the expected format of the output and any specific considerations that should be taken into account during validation. For example:
criteria = """ Output must be one of: positive, negative, neutral Consider context and tone Account for sarcasm and idioms """
These criteria help validators make consistent decisions about whether to accept or reject the leader's proposed output, even when dealing with subjective or non-deterministic results.
Example Usage
class SentimentAnalyzer(gl.Contract):
    sentiment: str

    @gl.public.write
    def analyze_sentiment(self, text: str) -> str:
        self.sentiment = gl.eq_principle.prompt_non_comparative(
            input=text,
            task="Classify the sentiment of this text as positive, negative, or neutral",
            criteria="""
                Output must be one of: positive, negative, neutral
                Consider context and tone
                Account for sarcasm and idioms
            """
        )
        return self.sentiment
In this example:
- input is the text to analyze
- task defines what operation to perform
- criteria ensures consistent validation across validators without requiring exact output matching
Data Flow
The Leader/Validator Pattern
Behind the scenes, GenLayer's Equivalence Principle is implemented using a leader/validator pattern. This pattern ensures security and consensus when dealing with non-deterministic operations.
Each nondeterministic block consists of two functions:
Leader Function
- Executes only on the designated leader node
- Performs operations like web requests or NLP
- Returns a result that will be shared with validator nodes
def leader() -> T:
    # Performs the actual nondeterministic operation
    pass
Validator Function
- Executes on multiple validator nodes
- Receives the leader's result as input
- Must independently verify the result's validity
- Returns True to accept or False to reject
def validator(leader_result: gl.vm.Return[T] | gl.vm.VMError | gl.vm.UserError) -> bool:
    # Verifies the leader's result
    # Returns True if acceptable, False otherwise
    # Note: the leader function could end with a UserError or VMError;
    # reasons for them can be "host unreachable" and "OOM", respectively
    pass
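Putting the two pieces together, a custom nondeterministic block might be wired roughly as in the sketch below. The exact signature of gl.vm.run_nondet (leader function first, validator function second, returning the agreed value) is an assumption based on the description later in this section, and get_price_from_web is a hypothetical helper:

def fetch_price() -> float:
    # Leader: perform the non-deterministic operation (e.g., a web or LLM call)
    return get_price_from_web()

def check_price(leader_result: gl.vm.Return[float] | gl.vm.VMError | gl.vm.UserError) -> bool:
    # Validator: reject outright if the leader's execution errored
    if not isinstance(leader_result, gl.vm.Return):
        return False
    # Otherwise re-fetch independently and allow a small tolerance
    my_price = get_price_from_web()
    return abs(leader_result.data - my_price) / max(abs(my_price), 1e-9) < 0.05

# Assumed wiring: the leader runs fetch_price, validators run check_price
price = gl.vm.run_nondet(fetch_price, check_price)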
Writing Secure Validator Functions
❌ Bad Example
def validator(leader_result):
    return True  # Always accepts - insecure!
This validator is useless because it allows the leader to return any arbitrary data without verification. This would allow a single node to produce a rigged result, such as a false match result in the football prediction market contract example.
✅ Good Examples
Independent Verification:
def validator(leader_result):
    # Independently fetch the same data
    my_data = fetch_external_data()
    # If the leader's execution ended in an error rather than a Return, vote to disagree
    if not isinstance(leader_result, gl.vm.Return):
        return False
    # Verify the leader's result matches within tolerance
    return calculate_similarity(leader_result.data, my_data) > 0.9
NLP Validation:
def validator(leader_result):
    # Fetch original content independently
    original_text = fetch_webpage(url)
    # Reject if the leader's execution ended in an error
    if not isinstance(leader_result, gl.vm.Return):
        return False
    # Use NLP to verify the quality of the leader's proposed summary
    return is_valid_summary(original_text, leader_result.data)
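The is_valid_summary check above is left abstract; one possible shape for it, judging semantic equivalence with an LLM rather than demanding an exact match, is sketched below. The llm_judge helper is hypothetical and stands in for whatever prompt-execution facility the node exposes:

def is_valid_summary(original_text: str, summary: str) -> bool:
    # Ask a model whether the summary faithfully captures the source text;
    # llm_judge is a hypothetical wrapper around the node's prompt API
    verdict = llm_judge(
        "Does the following summary faithfully capture the text? "
        "Answer strictly 'yes' or 'no'.\n"
        f"Text: {original_text}\n"
        f"Summary: {summary}"
    )
    return verdict.strip().lower().startswith("yes")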
Key Principles for Custom Validators
- Independent Verification: Validators should independently verify results, not blindly trust the leader
- Tolerance for Nondeterminism: When dealing with AI outputs or time-sensitive data, allow reasonable variations (see the sketch after this list):
  - Use similarity thresholds instead of exact matches
  - Account for timing differences in data fetches
  - Accept semantically equivalent AI outputs
- Error Handling: Always check whether the leader result is an error before processing it. gl.vm.run_nondet provides fallbacks for validator function errors, while gl.vm.run_nondet_unsafe does not. If you use the latter, you may wish to execute most of the code in the sandbox and compare errors with custom logic
- Security First: The validator's role is to prevent malicious or incorrect data from being accepted. When in doubt, reject
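As an illustration of tolerance-aware validation, the helpers below show one way a custom validator might compare numeric and textual results without requiring exact matches. The thresholds are arbitrary design choices, not SDK defaults:

from difflib import SequenceMatcher

def counts_are_close(leader_value: int, my_value: int, tolerance: float = 0.05) -> bool:
    # Numeric tolerance instead of exact equality
    baseline = max(abs(my_value), 1)
    return abs(leader_value - my_value) / baseline <= tolerance

def texts_are_equivalent(leader_text: str, my_text: str) -> bool:
    # Cheap structural similarity; an LLM-based semantic check could be
    # substituted for genuinely subjective outputs
    return SequenceMatcher(None, leader_text, my_text).ratio() > 0.9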
Best Practices
- Keep non-deterministic operations well-defined
- Design validators that can handle slight variations in data
- Consider network delays and timing differences
- Use NLP to validate subjective tasks
- Prefer the built-in equivalence principle templates, as they can be customized per node and offer better protection against prompt attacks
- Document expected tolerances and validation criteria
- Test validator functions with various edge cases and malicious inputs (see the sketch below)
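A minimal sketch of such tests, exercising the tolerance helpers shown earlier against edge cases a flaky or malicious leader might produce:

def test_counts_are_close():
    assert counts_are_close(100, 103)             # within 5%
    assert not counts_are_close(100, 120)         # outside tolerance
    assert not counts_are_close(10_000_000, 100)  # implausible outlier

def test_texts_are_equivalent():
    assert texts_are_equivalent("GenLayer launch", "GenLayer launch!")
    assert not texts_are_equivalent("match won", "match lost")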