AI overviews: why????


jjohnwm

Sausage Finger Spam Slayer
MFK Member
Mar 29, 2019
Manitoba, Canada
Every time I do a Google search, the very first response is an "AI overview", which according to Google is designed to do all the hard work for me. "Hard work" is apparently what people consider they are doing when they are forced to type a couple of words and then scan a list of results before deciding to read any. Sounds exhausting, I know.

The AI overview instead provides a distillation of numerous results, so you don't have to decide what to believe. You just place your faith in Google, and they do all the thinking for you. What a relief!

Except...at the bottom of every AI overview, there is that niggling little warning in small letters: "AI overviews may contain mistakes". :WHOA:

So...Google is saying "Leave it to us. We'll do the work and the research, and all you have to do is slurp it up out of the trough and carry on!"...but they also are saying, rather quietly "You can't trust a thing you read here, but it's not our fault!"

Try Googling something with which you are extremely familiar. Read the AI overview. You will be astonished to see not only simple mistakes, but glaring inaccuracies which can, at best, lead you down endless dead ends and wrong paths. At worst, they can trick you into doing things that can be downright dangerous. And if you listen to them, and wind up getting a boo-boo or breaking something expensive, you can't come back at them because they'll just say "Hey...you can't trust what we say...and we told you that up front!"

The really sad thing is seeing how many people come onto a discussion site like MFK or others, read a question posted by someone else, and then immediately dive into their computer to come up with an answer for that person. ChatGPT (correct letters?...don't know, don't really care...) seems to be a popular one right now. We have people post on question threads with answers that they freely admit come from that source.

Why? Don't you think that the OP asking the question was capable of looking on the internet at ChatGPT or Google or whatever else? Does it not seem likely that they are looking for a human being to answer their question, preferably from personal experience with the subject matter? How does regurgitating this crap for them assist them in any way?

Not too long ago, such posts were often prefaced with comments like "To my knowledge..." or "As far as I know..." or other phrases which could usually be interpreted as "I don't have a clue, but here's what other people who don't have a clue are saying". I'm not seeing that as much anymore, just lots of admissions that "ChatGPT is the source of this nonsense so don't blame me if it's 100% BS".

Anybody seen the Rick and Morty episode where Rick conjures up Mr. Meeseeks? Mr. Meeseeks exists only to fulfill a function that he is asked to fulfill by his conjuror. After that he ceases to exist, which is what he most fervently wants.

The only difference is...the ChatGPT-ers or Googlists or whatever we should call them...don't cease to exist after fulfilling (what they perceive to be) their function. They just scamper off to "help" someone else...:uhoh:
 
Well, unless the pay-to-play versions, or private corporate versions, are much better than the Google overview, I have no concern that this technology will eventually wipe us out or enslave the human race. I'm suspicious of most online resources to start with, doubly so for any versions that have been prescreened and summarized for me, whether by people in a care sheet or article, or by these AI programs.

But if you want to test it out, don't just ask it questions; ask it questions you already know the answers to. Like some young children, or door-to-door salespeople, it will confidently and quickly answer your question with any number of falsehoods, made-up information to fill in the gaps, or information adjacent to what you asked, often without even hinting at the real answer. I found this out because I read more than I can remember, and I often read something that jogs a memory of something else, so I've tried quick searches like "In the book 'If It Bleeds', what are the books the professor taught in the same semester?" Now this is a bit tricky because the book is a collection of a few short stories, but there is still only one professor mentioned who teaches specific books. The answers came back with character names from other stories in the book, answers saying the professor isn't teaching books but writing a book, and so on and so forth. Finally I got out a hard copy of the book, scanned through to the right story, and skimmed until I found the answer: the professor actually taught "Infinite Jest" and "Under the Volcano" in the same semester. Right there in print, published, and available on audio. Just one example, but similar attempts have come up with similarly doubt-inducing answers.
 
jjohnwm beautifully articulated my human friend! The depth and breadth of your description is so far beyond the capabilities of AI generated responses. I appreciate your valuable, relevant, and often humorous contributions to forum life. ChatGPT I can do without, but you sir are a keeper! 🤗🤣🤩
 
The AI overview instead provides a distillation of numerous results,
Therein is the root of bother, the abbreviated list of results denies the opportunity for "research".
.
Yes, I tried "Who wrote Old McDonald Had a Farm?" Without knowing it's the biography Angus wrote of his father, Jim (the "Old Man"), good luck.
.
Not only the presumption of what I'm looking for and convenient synopsis, but my spelling is autocorrected too. Yea!
 

I suppose it can be decent when assembling an overview from a single source, or as a quick way to potentially point you in the right direction for source information, but for any kind of thought-out information, or anything where you need to weigh up which sources are reliable?

 
I value forums a lot and have contributed thousands of posts (not many here on MFK, more so on guitar forums) over the past decade, but let's be honest about some of their strengths and limitations. If there's an active user group with genuine knowledge and experience, and members are friendly and responsive, forums can be great. People can be extremely generous and kind, and sometimes the unexpected responses inject fun and humor. Creativity and originality can spark ideas, shared experience can be motivating and even inspirational.

But this picture does not describe all forums, right? Some forums can devolve into pretty narrow-minded groups with a lot of toxic messaging. Even the best of forums is littered with obvious mistakes and needlessly emotional responding. Some people know what they're talking about, others not so much. It's not always easy to tell them apart, particularly if you're a novice looking for information. Threads seldom efficiently distill multiple perspectives on complex issues, and often the most strident users dominate a discussion regardless of their true expertise.

AI tools make plenty of mistakes, too, but they're getting better quite rapidly, are generally very good at distilling multiple perspectives on complex issues, and respond to questions, as asked, without emotion. Are human responses getting better, do they directly address Qs as asked, are they good at taking multiple perspectives, do they set aside emotion to try to be as objective as possible?

AI tools are far from perfect, but by any information-related criterion you might use to dismiss their use altogether, you'd have to avoid forums, too. An informal test I've run many times is to compare an AI response to a forum question against the complete thread of responses it received. It's hard to fairly and objectively score or grade such a comparison, but more often than not the AI response is *much* more concise, thorough, and helpful. Yes, both concise and thorough, as it gets right to the point but also considers as many factors as might be relevant. Helpful in many ways, including an eager prompt to provide further information going in any number of directions. And you don't have to wait hours or days (depending on the forum) for responses, wade through the irrelevant or mistaken bits, and guess who to trust. You can ask thoughtful follow-up Qs and get immediate responses. If you care enough about the accuracy of the responses, you can probe and check for consistency or mistakes. Many/most AI mistakes are pretty easy to identify and get past. I'd encourage you to open-mindedly try this comparison with a random selection of threads on which the OP wanted information, and I suspect you'd be shocked by the comparison. You'll often spend a lot more time reading a thread that yields much less reliable information.

The "conversation" you can have with AI can be *much* better than a forum thread, information-wise. Nine times out of 10, when I really want to know something, my tool of first choice is AI rather than a forum. Personally, I now enjoy forums as a place for camaraderie, entertainment, and inspiration, but not information. There's no AI "photo of the month" that I've seen, nor will they document their latest pond (or guitar) build. Creativity, originality, and inspiration remain areas of relative advantage for humans.

I think it would be a fantastic feature if forums automatically generated an AI response to each new thread that's posted. Not each post, just the opening one in each thread. This would give the OP something to consider immediately, give everyone else more to think about, and provide something to critique when there are mistakes. A really nice model might even provide links to other threads that have already addressed the topic; think how much time that would save forum members who've been around for a while and see the same topics pop up all the time.
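The "link to earlier threads" half of that idea doesn't even need a language model. Here's a minimal sketch, with entirely made-up thread titles and a naive word-overlap score standing in for real semantic search, of how a forum might rank existing threads against a new opening post:

```python
# Hypothetical sketch: suggest existing threads similar to a new thread's
# opening title, using simple word-overlap (Jaccard) similarity.
# All thread titles below are invented for illustration; real forum
# software would use its search index or text embeddings instead.

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two strings, from 0.0 to 1.0."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def related_threads(new_title: str, existing_titles: list[str],
                    threshold: float = 0.2) -> list[str]:
    """Return existing titles ranked by similarity to the new one,
    keeping only those above the threshold."""
    scored = [(jaccard(new_title, t), t) for t in existing_titles]
    return [t for score, t in sorted(scored, reverse=True) if score >= threshold]

threads = [
    "Best tankmates for an oscar",
    "Oscar tankmates in a 125 gallon",
    "Pond build journal 2024",
]
print(related_threads("Good tankmates for my oscar", threads))
# → ['Best tankmates for an oscar', 'Oscar tankmates in a 125 gallon']
```

Word overlap is a crude stand-in, of course; it misses synonyms and punctuation, which is exactly why a production version would lean on proper search or embeddings.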

TL;DR: Lots of reasons, actually; let's be fair about the strengths and limitations of both online forums and AI tools. It's easy to dismiss either one, but both have their place and their virtues.
 
I'm with you on the first 3 paragraphs, and you're right, they are getting better quickly, but you really lost me when you say information-related, because the information they give is stated as fact, often with holes or made-up "information". Forums have their problems, all of us should recognize that, but when you know you're talking to another person who has been through a similar experience you can take it as anecdotal information, perhaps with a pinch of skepticism with regard to the unique differences in your situation, etc. Anyway, I won't argue every point; I'm not against AI or computer-aided research (it's not intelligence), but it will have to get significantly better before I'll make it one of my first places to check. The biggest problem is it uses the internet as its data set, and hasn't the tools to discern good information from bad. It's like letting a third grader research your question online and kick back all results regardless of the integrity of the source.
 
A forum is one of the places I will visit for answers to some questions I may have...some questions...but it likely isn't the first place. I will look over the list of links provided by a quick Google search, choose the ones I think are most likely to be trustworthy, see what they have to say, likely follow links suggested by them to other sources. That way, when I visit the forum I will have at least a general idea of the possible answers to my question. I don't blindly open a thread with a question that clearly shows I have done zero research and put zero effort and zero thought into my query. In a similar vein, I am much more likely to answer a question to which I know the answer if it is asked in a thoughtful manner that indicates the questioner has done some preliminary research and has a basic handle on the subject matter.

I'm not talking about "What should I keep in my fishtank?" questions, but rather more specific questions that can be answered factually, not emotionally.

The main point of my earlier post is that I can't understand why people who don't know the answer to a question nevertheless feel obligated to answer it, by quoting an AI off the internet. The questioner has access to that source already; the very fact that he or she is asking on a forum implies that a human response is desired. And, no, "Ask a computer!" is not a human response.

If I see a question to which I don't know the answer, I simply don't respond. If I think I may have some pertinent information that might help point the questioner in the correct direction, I'll make it clear before presenting it that I don't know the answer but may have something useful to add. I absolutely will not blurt out "Beats me, but here's what my favourite AI has to say!" What is the point of that? The AI is throwing a bunch of responses into a pot...I suspect that many of the responses are from other AIs who were asked the same question earlier...and then cooking them down to a thick syrup composed of all the good, and all the bad, responses.

Again...no thinking required, or at least that seems to be the implied benefit. Or...if one insists upon actually thinking for oneself, then it becomes necessary to decide which "facts" to believe, and which to ignore...hey, that sounds a lot like what's required when talking to real people! Of course, people give plenty of hints that make it easier to discern whether or not they are likely worth listening to. AIs, on the other hand, make it tougher; they have great language skills, perfect grammar and spelling, and sound really smart, so their confident presentation of complete, 100% BS often goes unchallenged.
...the information they give is stated as fact often with holes, or made up "information"...The biggest problem is it uses the internet as its data set, and hasn't the tools to discern good information from bad. It's like letting a third grader research your question online and kick back all results regardless of the integrity of the source.
^ Exactly.

This debate cannot be won by either side. There is simply too much emotional interference to be able to point at just the facts and say "That side wins!"

Fortunately, both sides can present their case here...on a forum.
 
MonsterFishKeepers.com