Insecure handling of AI model output

Output received from the Firebase AI Logic SDK is used or rendered by a downstream component without first being validated or sanitized. This corresponds to the OWASP "Insecure Output Handling" risk (LLM02). For example, if a model is manipulated into generating a JavaScript payload, rendering that output directly in a web view can lead to a Cross-Site Scripting (XSS) attack.
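The sketch below illustrates the distinction in a web context. It assumes a hypothetical helper `fetchModelText` standing in for whatever call returns generated text from the Firebase AI Logic SDK; the point is that the returned string must be treated as untrusted before it reaches the DOM.

```typescript
// Minimal sketch: treat model output as untrusted input before rendering.
// `fetchModelText` is a hypothetical placeholder for the SDK call that
// returns generated text; it is not part of any specific Firebase API.
declare function fetchModelText(prompt: string): Promise<string>;

// Escape HTML metacharacters so generated text cannot inject markup or scripts.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

async function renderModelReply(prompt: string, container: HTMLElement): Promise<void> {
  const reply = await fetchModelText(prompt);

  // Unsafe: container.innerHTML = reply;
  // A generated <script> tag or event-handler attribute would execute
  // in the page, i.e. the XSS scenario described above.

  // Safer: assign as plain text so the browser never parses it as HTML.
  container.textContent = reply;
  // Or, if markup is assembled manually, escape the untrusted part first:
  // container.innerHTML = `<p>${escapeHtml(reply)}</p>`;
}
```

The same principle applies to other sinks (native web views, SQL queries, shell commands): validate or encode model output for the specific downstream context rather than passing it through verbatim.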