Recommendations
Explore
Contribute
About
Updates
Build
Fund
Research
Measure
AI & ML
Practice
Recommendations
Build
Fund
Research
Measure
AI & ML
Practice
Explore
Contribute
About
Updates
← Explore Research Questions
Research Question
How can we design information presentation formats that minimize susceptibility to framing effects?
Related Goals
Assembly designs that are robust to both internal and external manipulation attempts.
Related Capabilities
Resist manipulation
Urgent
Robustness
Ability to resist manipulation that would decrease trustworthiness, legitimacy or unfairly influence the outcome.
Related Existing Resources
Research
Adversarial testing for Generative AI
Google’s guide defining adversarial testing as systematically evaluating ML models against malicious or inadvertently harmful input, covering explicit queries (containing policy-violating language) and implicit queries (seeming harmless but involving sensitive topics). The four-stage workflow inv...