Friday, March 17, 2023
AI information retrieval: A search engine researcher explains the promise and peril of letting ChatGPT and its cousins search the web for you
The problem is that even when these systems are wrong only 10% of the time, you don't know which 10%.
This limitation makes large language model systems susceptible to making up, or "hallucinating," answers. The systems are also not smart enough to recognize when a question rests on a false premise, and will answer such faulty questions anyway. For example, when asked which U.S. president's face is on the $100 bill, ChatGPT answers Benjamin Franklin without realizing that Franklin was never president, and that the premise that the $100 bill carries a picture of a U.S. president is itself incorrect.