
Universal Attacks in GenLayer

Universal attacks are a type of adversarial attack where attackers craft inputs that are effective across different models or different runs of the same model. This could involve creating input data that consistently causes models to make errors or behave in unintended ways, regardless of slight variations in model training or deployment environments. The universal nature of these attacks makes them particularly challenging to defend against because they exploit fundamental weaknesses in the models' architecture or training data.

Mitigating Universal Attacks in GenLayer

  • Random Selection of Validators: Validators are randomly selected to review transactions. Random selection disrupts potential attack planning, as attackers cannot easily predict which validators (and consequently, which models) will be validating any given transaction.

  • Greyboxing: Greyboxing creates a controlled environment in which inputs to and outputs from AI models are rigorously monitored and filtered. Because this security layer is customized per validator, a crafted input that slips past one validator's filters is unlikely to slip past all of them, further enhancing resilience against universal attacks.

  • Validator Checks on Leader's Output: After the leader produces an output, the selected validators independently re-check it. This second perspective makes it likely that errors or manipulations in the leader's output are caught, adding a robust layer of verification that helps maintain the integrity of transaction processing.
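The interplay of the first and third mitigations can be sketched in a few lines of Python. This is an illustrative model only, not GenLayer's implementation: the function names, the seed parameter, and the simple majority rule are assumptions for the sketch (in a real deployment the randomness would come from an unpredictable on-chain source, and the agreement rule would follow the protocol's consensus mechanism).

```python
import random


def select_validators(validator_pool, k, seed):
    """Randomly sample k validators from the pool.

    Unpredictable selection means an attacker cannot know in advance
    which validators (and thus which models) will check a transaction.
    The seed stands in for an on-chain randomness source (assumption).
    """
    rng = random.Random(seed)
    return rng.sample(validator_pool, k)


def validate_leader_output(leader_output, validators, evaluate):
    """Have each selected validator independently re-check the leader's
    output; accept it only if a strict majority agrees.

    `evaluate(validator, output)` stands in for a validator re-running
    the computation and comparing results (assumption).
    """
    agreements = sum(1 for v in validators if evaluate(v, leader_output))
    return agreements * 2 > len(validators)
```

The design point: a universal attack must now fool a majority of validators whose identities (and greybox filters) the attacker could not predict when crafting the input, rather than a single known target.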