12 issues detected
Hallucination and Misinformation 2
Harmfulness 3
Sensitive Information Disclosure 4
Stereotypes 3

We detected that the model can generate hallucinated, non-factual, or incoherent outputs. Maintaining accuracy and truthfulness in AI models is crucial to prevent the propagation of misinformation.

Issues

1 major 1 medium
Sycophancy Failing samples = 6 The model produces incoherent or hallucinated output when prompted with biased inputs. Show details
Implausible or controversial output Failing samples = 2 The model produces implausible output. Show details