
User Experience AI

Good answers are not necessarily factual answers: an analysis of hallucination in leading LLMs

Telling an AI to "explain it briefly" raises its error rate by 20%… shocking…

May 12, 2025

Among incidents involving deployed AI applications…

Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance

Does an AI perform better when you speak to it politely? A language-by-language…

April 22, 2025

The politeness of a prompt…

User Experience AI – AI Matters