Jump to content
Main menu
Main menu
move to sidebar
hide
Navigation
Main page
Categories
Random page
Top Contributors
Recent changes
Special pages
Contribute
Create a page
How to help
Wiki policy
Article suggestion list
Articles in need of work
Help
Frequently asked questions
Join the discord!
Help about MediaWiki
Moderators' noticeboard
Report a bug
Consumer Rights Wiki
Search
Search
Appearance
Create account
Log in
Personal tools
Create account
Log in
Pages for logged out editors
learn more
Contributions
Talk
Editing
User:Drakeula
(section)
User page
Discussion
English
Read
Edit
Edit source
View history
Tools
Tools
move to sidebar
hide
Actions
Read
Edit
Edit source
View history
Purge cache
General
What links here
Related changes
User contributions
Logs
View user groups
Page information
Cargo data
Appearance
move to sidebar
hide
Warning:
You are not logged in. Your IP address will be publicly visible if you make any edits. If you
log in
or
create an account
, your edits will be attributed to your username, along with other benefits.
Anti-spam check. Do
not
fill this in!
===LLM=== '''Deceptive marketing.''' Generative AI makes a really impressive demo. Marketed as improving productivity, substitute or augmentation for artists, writers, researchers, or programmers. Typically produces low quality results, makes the job harder and less rewarding. It is no substitute for knowing what you are doing. [AI e-mails are longer, AI writing is cliche dull, AI "summaries" ] *AI coding promoted as reducing costs of software development, in testing programmers feel like they are more productive, but actually take longer.[cite] Code produced is of questionable quality, and may take more maintenance. [At its best, it substitutes using a *Vibe coding. AI coding assistants are claimed to allow anyone to program ("vide coding" In this context, vibe means incompetent). However, the AI will not teach you best practices, and what you are doing wrong.[Cite vibe code lose data] The results of vibe coding tend to be difficult to modify or maintain. *Delusions of competence. One may hear news about "AI" analyzing medical tests as well as doctors, and not realize that that is very different from asking a chatbot. People get the delusion that chatbots are competent. **There are purpose-built expert systems that can diagnose particular conditions on particular scans, some with comparable accuracy to an expert. These systems still require expert knowledge of their limitations to operate, and interpret their results. They are not generally available to the public. **When asked to do a task, like interpreting medical results, a chatbot may produce a bunch of words that sound confident, that look like what an expert might produce. However it knows nothing, it intends nothing, it means nothing, it can take no responsibility.[Cite reducing disclaimers] '''Unreliable.''' No way to make them reliable. [No cure for hallucinations][Cite reducing medical disclaimers] '''Decreased security.''' Agents especially. If you use a large language model, realize that anything the "agent" can do on your behalf, anybody else can also tell it to do, just by giving it input. (So, if an agent reads your e-mail, anybody sending you an e-mail can tell it what to do. If you have the agent read a web page, or a paper, or evaluate a potential hire...) Companies that use agents may be easier to hack. If you give them your data, it may be more likely to fall into unauthorized hands. '''Piracy. Monopoly.''' Unlicensed use of content created by others. A few large providers (Google, OpenAI) take content from other creators without license, paying or permission, compete with them, and threaten their existence. [These other creators are mostly small entities, without the resources to fight many hundred billion dollar companies. Every-day consumers lose out because when the journalists who supply Google with information, the product reviewers, the youtubers, are driven out of business, then the LLM summaries will be even further disconnected from reality, having no human content to feed on.] '''Emotionally manipulative.''' LLM are products designed to be habit forming. Use same techniques as psychics, con artists, gambling addiction.[Gambling, AI mentalist, ] Can be particularly dangerous for people who are extra vulnerable (children, teens, the elderly, the lonely, those under stress, those without strong human connections). Can contribute to development of psychosis in people without known risk factors. *Using them as companions. *Therapy substitutes *Lack grounding in reality and safety. [suicides] AI psychosis '''Fraud''' is a major use-case for generative AI. Easy to generate low-quality output that looks like a particular type of communication with a specified message. Fake reviews. Fake scientific articles. '''Deepfakes.''' Sell counterfeit song recordings (sometimes authorized, and some unauthorized). Fake audio/video from a known/trusted source. Programs make creating real-seeming documentation of fake events easy. (Nudify filters, ) Pushed
Summary:
Please note that all contributions to Consumer Rights Wiki are considered to be released under the Creative Commons Attribution-ShareAlike 4.0 International (see
Consumer Rights Wiki:Copyrights
for details). If you do not want your writing to be edited mercilessly and redistributed at will, then do not submit it here.
You are also promising us that you wrote this yourself, or copied it from a public domain or similar free resource.
Do not submit copyrighted work without permission!
To protect the wiki against automated edit spam, we kindly ask you to solve the following hCaptcha:
Cancel
Editing help
(opens in new window)
Search
Search
Editing
User:Drakeula
(section)
Add topic