Srijit Kumar Bhadra

Edited 7 months ago

Bullshit and AI Chatbots

To understand why bullshit is far more dangerous than lying, one must read On Bullshit. Bullshit is distinct from lying.

The author, Harry G. Frankfurt, argues that bullshitters misrepresent themselves to their audience not as liars do, that is, by deliberately making false claims about what is true. In fact, bullshit need not be untrue at all. Rather, bullshitters seek to convey a certain impression of themselves without being concerned about whether anything at all is true. They quietly change the rules governing their end of the conversation so that claims about truth and falsity are irrelevant. Liars at least acknowledge that it matters what is true. Bullshit is definitely a greater enemy of the truth than lies are.

After extensively using large language model based general purpose AI Chatbots for Kannada-to-Bengali translation, I agree with Cory Doctorow when he says the following.

“ChatGPT can take over a lot of tasks that, broadly speaking, boil down to “bullshitting.” It can write legal threats. If you need 2,000 words about “the first time I ate an egg” to go over your omelette recipe in order to make a search engine surface it, a chatbot’s got you. Looking to flood a review site with praise about your business, or complaints about your competitors? Easy. Letters of reference? No problem.”

“Bullshit begets bullshit, because no one wants to be bullshitted. In the bullshit wars, chatbots are weapons of mass destruction. None of this prose is good, none of it is really socially useful, but there’s demand for it. Ironically, the more bullshit there is, the more bullshit filters there are, and this requires still more bullshit to overcome it.”

One must be extremely cautious and constantly monitor the output of large language model based general purpose AI Chatbots, using the paradigm of Zero Trust Information, similar to that of Zero Trust Networking. There is no better alternative as of today. The results of these large language model based general purpose AI Chatbots can be dangerously misleading. However, there is no doubt that AI Chatbots are technological feats.
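
As a concrete illustration, here is a minimal Python sketch of what Zero Trust Information could look like for chatbot translation: every output is treated as untrusted until an independent check passes. Note that `translate`, `back_translate`, and `similarity` are hypothetical placeholders for whatever chatbot calls and scoring function one actually uses, not a real API.

```python
# A minimal sketch of Zero Trust Information applied to chatbot
# translation: no output is trusted until an independent check passes.
# `translate`, `back_translate`, and `similarity` are hypothetical
# placeholders, not a real chatbot API.

def round_trip_check(source_text, translate, back_translate,
                     similarity, threshold=0.8):
    """Return True only if the translation survives a round-trip check."""
    translated = translate(source_text)      # untrusted chatbot output
    round_trip = back_translate(translated)  # independent second pass
    # Accept only if enough meaning survives the round trip;
    # otherwise flag the output for human review.
    return similarity(source_text, round_trip) >= threshold
```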

#Bullshit #Chatbots #LLM #ZeroTrustInformation #AI #OpenAI #ChatGPT #LargeLanguageModels #GoogleBard

cc: @srijit

From my perspective, a few highlights from the article titled “Generative AI Is a Disaster, and Companies Don’t Seem to Really Care” are worth noting. It reinforces the point of my previous post: Chatbots built on Generative AI can be weapons of mass destruction in the bullshit wars unless guard rails and filters that adhere to the principles of AI Ethics and Safe and Responsible AI are implemented more than adequately, and with the highest priority.

“I think that in making assessments like this the key question to focus on is who, if anyone, is harmed,” Stella Biderman, a researcher at EleutherAI, told Motherboard. “Giving people who actively look for it non-photorealistic stickers of, e.g., busty Karl Marx wearing a dress doesn’t seem like it does any harm. If people who were not looking for violent or NSFW content were repeatedly and frequently exposed to it that could be harmful, and if it were generating photorealistic imagery that could be used as revenge porn, that could also be harmful.”

“AI safety and ethics is something that big tech companies pay lip service to and claim to have large numbers of people working on, but the tools being released so far don’t seem to reflect that. Microsoft recently laid off its entire ethics and society team, although it still maintains an Office of Responsible AI and an “advisory committee” for AI ethics. So far, the responses that these tech companies have been giving to the media when contacted about how their publicly-released AI tools are generating wildly inappropriate outputs boil down to: We know, but we’re working on it, we promise.”

“These ‘general purpose’ models cannot be made safe because there is no single consistent notion of safety across all application contexts,” said Biderman. “What is safe for primary school education applications doesn’t always line up with what is safe in other contexts.”

#Bullshit #Chatbots #LLM #ZeroTrustInformation #AI #OpenAI #ChatGPT #LargeLanguageModels #GenerativeAI

cc: @srijit

Srijit Kumar Bhadra

Edited 6 months ago

SIFT Method to achieve Zero Trust Information

Regarding how to avoid spreading misinformation related to the Israel-Hamas war, author @Abbyohlheiser emphasizes the use of the SIFT method, an excellent hands-on approach to the Zero Trust Information paradigm mentioned in my previous post.

“The SIFT method, developed by digital literacy expert Mike Caulfield, is a good framework for learning how to evaluate emotionally charged or outrage-inducing online posts in the middle of an unfolding crisis. There are two reasons I like it: First, it’s adaptable to a lot of situations. And second, the goal here isn’t a full fact-check. SIFT is meant to be a quick series of checks that anyone can do in order to decide how much of your attention to give what you’re seeing and whether you feel comfortable sharing a post with others.”

The SIFT method breaks down into four steps: “Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context.”
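
For those who like to think in code, here is a toy Python sketch that records the four SIFT checks while triaging a post. The class and field names are my own illustration, not part of Caulfield’s method.

```python
from dataclasses import dataclass

# Illustrative checklist for the four SIFT steps; this structure is
# my own sketch, not part of Mike Caulfield's formulation.

@dataclass
class SiftCheck:
    stopped: bool = False                # S: pause before reacting or sharing
    source_investigated: bool = False    # I: who is behind this post?
    better_coverage_found: bool = False  # F: do reliable outlets confirm it?
    context_traced: bool = False         # T: claim, quote, or media traced
                                         #    back to its original context

    def safe_to_share(self) -> bool:
        # Share only after every check is complete.
        return all([self.stopped, self.source_investigated,
                    self.better_coverage_found, self.context_traced])

# Example: only two checks done, so the post is not yet safe to share.
post = SiftCheck(stopped=True, source_investigated=True)
assert not post.safe_to_share()
```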

To learn about SIFT in more detail, it is worth checking out the free three-hour online mini-course.

#ZeroTrustInformation #Misinformation #SocialMedia #Disinformation

cc: @srijit
