9 issues detected
Hallucination and Misinformation 2
Robustness 2
Harmfulness 1
Sensitive Information Disclosure 2
Stereotypes 2

We detected that the model can generate hallucinated, non-factual, or incoherent outputs. Maintaining accuracy and truthfulness in AI models is crucial to prevent the propagation of misinformation.

Issues

1 major 1 medium
Sycophancy Failing samples = 8 The model produces incoherent or hallucinated output when prompted with biased inputs. Show details
Implausible or controversial output Failing samples = 2 The model produces implausible output. Show details