Jump to content

User:Drakeula

From Consumer Rights Wiki
Revision as of 18:14, 23 September 2025 by Drakeula (talk | contribs) (LLM)
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)

Hello

[edit | edit source]

Hi. Most of my wiki experience is from editing on wikipedia.

My background is in computers, computers & society, and technical writing/editing. Plus some public health.


At the moment I am mostly using this page to draft pieces for articles.

Draft for article on Artificial Intelligence

[edit | edit source]

Machine learning (Generative AI is subset)

[edit | edit source]

Bias. Replicates patterns and deficiencies in the training data. May be intentional biases from the trainers, or patterns in the data set that the trainers did not consider or were unaware of. Examples. Image classifiers labeling African-Americans as gorillas. Self-driving car killing person (walking a bicycle was it), because the training set did not include persons with bicycles (what about walkers, canes, unicycles, strollers, ...?). Amazon hiring program was trained on a primarily male workforce, so it discarded resumes that contained the word women (or other markers). [TB - machine learning identified use of an older x-ray machine as a risk factor. Really, people with TB tend to be in poorer communities, where the healthcare facilities have less money for new machines.]

Replication or Monopoly. Repeats the same problems. One biased hiring manager could be a problem, a widely-used AI effectively creates a blacklist.

Data centers.

Labor practices.


Specifically LLM/Chatbots

Why it is a problem

[edit | edit source]

Deceptive marketing. Generative AI makes a really impressive demo. Marketed as improving productivity, substitute or augmentation for artists, writers, researchers, or programmers. Typically produces low quality results, makes the job harder and less rewarding. It is no substitute for knowing what you are doing. [AI e-mails are longer, AI writing is cliche dull, AI "summaries" ]

  • AI coding promoted as reducing costs of software development, in testing programmers feel like they are more productive, but actually take longer.[cite] Code produced is of questionable quality, and may take more maintenance. [At its best, it substitutes using a
  • Vibe coding. AI coding assistants are claimed to allow anyone to program ("vide coding" In this context, vibe means incompetent). However, the AI will not teach you best practices, and what you are doing wrong.[Cite vibe code lose data] The results of vibe coding tend to be difficult to modify or maintain.
  • Delusions of competence. One may hear news about "AI" analyzing medical tests as well as doctors, and not realize that that is very different from asking a chatbot. People get the delusion that chatbots are competent.
    • There are purpose-built expert systems that can diagnose particular conditions on particular scans, some with comparable accuracy to an expert. These systems still require expert knowledge of their limitations to operate, and interpret their results. They are not generally available to the public.
    • When asked to do a task, like interpreting medical results, a chatbot may produce a bunch of words that sound confident, that look like what an expert might produce. However it knows nothing, it intends nothing, it means nothing, it can take no responsibility.[Cite reducing disclaimers]

Unreliable. No way to make them reliable. [No cure for hallucinations][Cite reducing medical disclaimers]

Decreased security. Agents especially. If you use a large language model, realize that anything the "agent" can do on your behalf, anybody else can also tell it to do, just by giving it input. (So, if an agent reads your e-mail, anybody sending you an e-mail can tell it what to do. If you have the agent read a web page, or a paper, or evaluate a potential hire...) Companies that use agents may be easier to hack. If you give them your data, it may be more likely to fall into unauthorized hands.

Piracy. Monopoly. Unlicensed use of content created by others. A few large providers (Google, OpenAI) take content from other creators without license, paying or permission, compete with them, and threaten their existence. [These other creators are mostly small entities, without the resources to fight many hundred billion dollar companies. Every-day consumers lose out because when the journalists who supply Google with information, the product reviewers, the youtubers, are driven out of business, then the LLM summaries will be even further disconnected from reality, having no human content to feed on.]

Emotionally manipulative. LLM are products designed to be habit forming. Use same techniques as psychics, con artists, gambling addiction.[Gambling, AI mentalist, ] Can be particularly dangerous for people who are extra vulnerable (children, teens, the elderly, the lonely, those under stress, those without strong human connections). Can contribute to development of psychosis in people without known risk factors.

  • Using them as companions.
  • Therapy substitutes
  • Lack grounding in reality and safety. [suicides] AI psychosis

Fraud is a major use-case for generative AI. Easy to generate low-quality output that looks like a particular type of communication with a specified message. Fake reviews. Fake scientific articles.

Deepfakes. Sell counterfeit song recordings (sometimes authorized, and some unauthorized). Fake audio/video from a known/trusted source.

Programs make creating real-seeming documentation of fake events easy. (Nudify filters, ) Pushed

Examples (of abuse)

[edit | edit source]

Customer service chatbots present misinformation as fact (rug pull). For example, misrepresent prices, misstate policies. Even if the company will say that is a mistake when challenged, the company may profit from people who don't notice, or don't know to challenge it.[Cite burger joint, system capabilities]

Search summaries [what is google's name]

Search vs. AI summaries. Not clearly differentiated. Different levels of reliability.[ToDo: Check TOS] Publisher vs. platform. [Lawyers. Libel cases.]

Generative not just LLM:

[edit | edit source]

Providers of nudify programs typically do not provide adequate user education on the legal and reputational dangers to users. They also do not adequately protect the photographic subjects (enforce that models must be informed and the user must have a valid release from the model).

Further Reading

[edit | edit source]