
Why ChatGPT Is Not Reliable


I’ll start with the simple fact: ChatGPT is not a reliable answerer of questions.

Trying to explain why from scratch would be a heavy lift, but fortunately, Stephen Wolfram has already done the heavy lifting for us in his article, “What Is ChatGPT Doing … and Why Does It Work?” [1] In a PF thread discussing this article, I tried to summarize the key message of Wolfram’s article as briefly as I could. Here’s what I said in my post there [2]:

ChatGPT does not make use of the meanings of words at all. All it is doing is generating text word by word based on relative word frequencies in its training data. It is using correlations between words, but that is not the same as correlations in the underlying information that the words represent (much less causation). ChatGPT literally has no idea that the words it strings together represent anything.
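To make that word-by-word idea concrete, here is a minimal sketch of frequency-based text generation: a toy bigram model over a tiny invented corpus. This is nothing like ChatGPT’s actual scale or architecture (a large neural network, not a table of counts), but it illustrates the same underlying point: fluent-looking text can be produced from word-frequency statistics alone, with no representation anywhere of what the words mean.

```python
# Toy illustration (not ChatGPT's actual architecture): generate text purely
# from relative word frequencies observed in a small corpus. The "model"
# never represents what any word means; it only counts which words tend to
# follow which.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word (bigram frequencies).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=12):
    """Emit words one at a time, each sampled in proportion to how often it
    followed the previous word in the corpus -- no lookup, no meaning."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the dog sat on the mat . the cat chased the dog"
```

The generator never stores or consults a fact about cats or mats; it only extends the text with statistically likely next words.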

In other words, ChatGPT is not designed to actually answer questions or provide information. In fact, it is explicitly designed not to do those things because, as the quote above says, it works only with words in themselves; it does not work with, and has no concept of, the information that the words represent. And that makes it unreliable, by design.

So, to give some examples of misconceptions I have encountered: when you ask ChatGPT a question that you might think would be answerable by a Google search, ChatGPT is not doing that. When you ask ChatGPT a question that you might think would be answerable by looking in a database (as Wolfram Alpha, for example, does when you ask it something like “what is the distance from New York to Los Angeles?”), ChatGPT is not doing that. And so on, for any value of “that you might think would be answerable by…”. The same is true if you substitute “looking for information in its training data” for any of the above: the fact that there is, for example, a huge body of Instagram posts in ChatGPT’s training data does not mean that, if you ask it a question about Instagram posts, it will look at those posts in its training data and analyze them in order to answer the question. It won’t. While there is, of course, voluminous information in ChatGPT’s training data from a human reader’s point of view, ChatGPT does not use, or even comprehend, any of that information. Literally all it gets from its training data is relative word frequencies.
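To sharpen the lookup-versus-generation distinction, here is a hypothetical side-by-side sketch; the distance table, the word counts, and the function names are all invented for illustration. A Wolfram-Alpha-style answerer consults structured data that represents a fact; a frequency-style continuer, like the toy model above, only extends the prompt with statistically likely words.

```python
# Hypothetical contrast (table, counts, and names invented for illustration):
# answering by lookup versus answering by statistical continuation.

# Wolfram-Alpha-style: consult structured data that represents a fact.
DISTANCES_MILES = {("new york", "los angeles"): 2451}

def lookup_answer(city_a, city_b):
    return DISTANCES_MILES.get((city_a, city_b))

# ChatGPT-style, schematically: no table of facts anywhere, only statistics
# about which words tended to follow which in the training text.
NEXT_WORD_COUNTS = {
    "angeles": {"is": 7, "and": 3},
    "is": {"a": 5, "the": 4, "about": 2},
}

def continuation_answer(prompt):
    last_word = prompt.lower().split()[-1]
    counts = NEXT_WORD_COUNTS.get(last_word, {})
    # Pick the statistically likeliest next word; nothing is looked up.
    return max(counts, key=counts.get) if counts else None

print(lookup_answer("new york", "los angeles"))                           # 2451
print(continuation_answer("the distance from new york to los angeles"))  # "is"
```

The continuer will happily keep producing plausible words whether or not any true distance exists anywhere in its data, which is exactly the unreliability-by-design described above.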

So why do ChatGPT’s responses look reliable? Why do they look like they must be coming from a process that “knows” the information involved? Because our cognitive systems are built to interpret things that way. When we see text that is syntactically and grammatically correct and seems to be confidently asserting something, we assume it must have been produced, if not by an actual human, then at least by an “AI” generating the text from some kind of actual knowledge. In other words, ChatGPT fools our cognitive systems into attributing qualities to it that it does not actually have.

This security hole, if you will, in our cognitive systems is not a recent discovery. Human con artists have exploited much the same techniques throughout history. The only difference is that the con artists did it intentionally, whereas ChatGPT has no intentions at all and does it as a side effect of its design. But the end result is much the same: let the reader beware.

[1] Stephen Wolfram, “What Is ChatGPT Doing … and Why Does It Work?”, https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

[2] Physics Forums thread, “Stephen Wolfram explains how ChatGPT works”, https://www.physicsforums.com/threads/stephen-wolfram-explains-how-chatgpt-works.1050431/post-6903906

