[kwlug-disc] ChatGPT + Bing: we're entering a new paradigm and it's mind-blowing

Ronald Barnes ron at ronaldbarnes.ca
Thu Feb 16 06:31:58 EST 2023


Doug Moen wrote on 15/02/2023 12.56:

> The first time this happened was in the 1960's, with [the ELIZA 
> program](https://en.wikipedia.org/wiki/ELIZA), which simulated a 
> psychotherapist. Public reaction was weirdly similar.

Your own link describes that claim as highly contentious.

>> ELIZA's creator, Weizenbaum, regarded the program as a method to 
>> show the superficiality of communication between man and machine, 
>> but was surprised by the number of individuals who attributed 
>> human-like feelings to the computer program,

We attribute human-like feelings to Furbies too, so what?


Go try ELIZA and see for yourself whether it makes for a convincing human interaction:

https://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm


Like carrying on a conversation with an uninterested partner:

"Huh."
"Oh really?"
"You don't say?"
"No way, tell me more."

etc.  But actually dumber.



> I read a CACM article

Which starts with:

> Since the release of the ChatGPT interactive AI assistant it has been
> surprising to see some of the snide, passive-aggressive reactions
> from some (not all) members of the software engineering community, in
> the style of  "it's just inference from bad data". Let's get real,
> folks, it is truly game-changing. The kind of thing that you witness
> once in a generation. (The last two times were object-oriented
> programming and the World-Wide Web.)

Sounds about right.

> ChatGPT is a bullshit generator

Sounds like that "snide, passive-aggressive reaction" mentioned in your 
link.


From the CACM link:

> The problem is that ChatGPT, in spite of its cocky self-assurance 
> (This is your result! It will work! No ifs and buts!) gives you, for 
> a non-trivial problem, an answer that may work but may also almost 
> work.

First, that's simply wrong about ChatGPT: a disclaimer sits right on 
ChatGPT's front page saying that it gets things wrong. Clicking through 
without reading it is a human problem; the author should at least 
acknowledge the disclaimer.


Also, it's been in beta for all of 2½ months, and one must expect
some wonky results from a beta release.


Also, I linked to at least one account of it being aggressively wrong, 
argumentative, and practically demanding an apology from the user, so 
everyone knows it is not, and does not claim to be, 100% accurate.


Note that (many) humans have the exact same failings. No human gives 
100% accurate answers 100% of the time, even within their field of 
expertise.



I fed it the function question:

> I have a function which gives 0 for 0, 1 for 1, 4 for 2, 9 for 3, 16
> for 4, 25 for 5, and 35 for 6. Can you please give me the code in 
> Python

It gave the square function:

> Sure! Based on the sequence of values you've provided, it seems like
> the function is simply returning the square of the input value. 
> Here's the code in Python:

With functioning (but wrong) code.
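
The code it gave was presumably just the obvious square (my 
reconstruction from its description, with a function name of my 
choosing, not the verbatim transcript):

    def my_function(x):
        # Square of the input: right for 0 through 5,
        # but gives 36 for 6 where 35 was asked for.
        return x ** 2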

Note its own hedging ("...it seems like..."), which gives the lie to the
CACM article's "This is your result! It will work! No ifs and buts!".


After arguing with it for quite a while, I finally fed it a function
that works inside the Python interpreter:

> def my_function(x):
>     return round((x-0.05)**2, 0)
>
> for i in range(7):
>     my_function(i)
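
(This "works inside the Python interpreter" because the REPL echoes 
each bare my_function(i) result. A reconstructed session, assuming the 
function above:

    >>> for i in range(7):
    ...     my_function(i)
    ...
    0.0
    1.0
    4.0
    9.0
    16.0
    25.0
    35.0

Run as a script, the same snippet prints nothing, which is exactly 
what ChatGPT picks up on next.)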


It responded:

> The my_function you provided takes an input x and returns the square
> of x-0.05, rounded to the nearest integer. However, the code snippet
> you posted doesn't actually do anything with the output of my_function
> for each integer in the range of 0 to 6 inclusive.

That is 100% correct.

> To print out the results of my_function for each integer in the range
> of 0 to 6, you can modify the code like this:

Correct, and helpful code provided, using a for loop, f-string, etc.
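
Its fix was along these lines (again my reconstruction of the shape 
of it, not the verbatim output):

    def my_function(x):
        return round((x - 0.05) ** 2, 0)

    for i in range(7):
        # Actually display each result instead of discarding it.
        print(f"Input: {i}, Output: {my_function(i)}")

Actually running that loop prints "Input: 6, Output: 35.0", which 
makes what it displayed next all the stranger.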


Then it showed the nicely formatted output, which Thunderbird keeps 
mangling, so I'll only show the *last* line:

> Input: 6, Output: 36.0


Huh, weird & wrong. But it "knew" that, in a way:

> Note that the output for an input value of 6 is 36, not 35 as you
> requested. If you would like to modify the function to return 35 for
> an input value of 6, you can use one of the other implementations
> provided in this conversation.

Bugger: it spat out the wrong output for one value, but it recognized 
that and pointed me to the earlier implementations, which were also wrong.

I responded:

> The previous implementations all give the wrong output for 6. Except
> mine. Please verify the function's output for input value 6

ChatGPT:

> I apologize for the confusion. You are correct that the previous
> implementations I provided return 36 instead of 35 for an input value
> of 6. The implementation you provided with the round function does
> correctly return 35 for an input value of 6.
>
> To verify this, you can run the following code snippet:

[valid Python code redacted]

> This should output 35, indicating that the function returns the
> correct value for an input of 6.
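
The redacted snippet was presumably something this trivial (my guess 
at it, not ChatGPT's actual code):

    def my_function(x):
        return round((x - 0.05) ** 2, 0)

    # Check the disputed input directly.
    print(my_function(6))  # prints 35.0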



Those are f*cking impressive language skills.


Admittedly it couldn't derive the function itself, but neither could I 
(well, at least I couldn't be bothered; I figured the 35 might be a 
typo and wasn't willing to put the time in).
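
For what it's worth, there is a mechanical way to hit all seven points 
exactly: polynomial interpolation. A minimal sketch (my own addition; 
nothing here was fed to ChatGPT):

    # Lagrange interpolation through the seven (input, output) pairs.
    POINTS = [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25), (6, 35)]

    def interpolate(x):
        # Evaluate the unique degree-6 polynomial through POINTS at x.
        total = 0.0
        for i, (xi, yi) in enumerate(POINTS):
            term = float(yi)
            for j, (xj, _) in enumerate(POINTS):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    for i in range(7):
        print(i, round(interpolate(i)))  # 0 1 4 9 16 25 35, as requested

The catch, for ChatGPT and human alike, is that nothing singles out 
that degree-6 polynomial (or my round() trick) as *the* intended 
function; infinitely many functions pass through seven points.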


But calling it merely a "bullshit generator" is mistaking cynicism for 
wisdom.

There's a reason MS put $10,000,000,000 into OpenAI (ChatGPT's maker) 
and Google is shitting bricks right now.

We're about to undergo a transition, for better or worse, and I agree 
with the CACM article: it could be on the scale of "the www", mobile 
devices, etc.




Asking Google for the answer to the function just spat out a gazillion 
links to some Python stuff; no idea how long it'd take to find the 
answer that way, if at all.

Too bad there's not a bot that can summarize a giant list of search 
results for me.

I'd definitely use that.


rb



