Jump to content

Artificial intelligence

From Consumer Rights Wiki
(Redirected from LLM)

⚠️ Article status notice: This Article's Relevance Is Under Review

This article has been flagged for questionable relevance. Its connection to the systemic consumer protection issues outlined in the Mission statement and Moderator Guidelines isn't clear.

If you believe this notice has been placed in error, or once you have made the required improvements, please visit the Moderators' noticeboard or the #appeals channel on our Discord server: Join Here.

Notice: This Article's Relevance Is Under Review

To justify the relevance of this article:

  • Provide evidence demonstrating how the issue reflects broader consumer exploitation (e.g., systemic patterns, recurring incidents, or related company policies).
  • Link the problem to modern forms of consumer protection concerns, such as privacy violations, barriers to repair, or ownership rights.

If you believe this notice has been placed in error, or once you have made the required improvements, please visit either the Moderator's noticeboard, or the #appeals channel on our Discord server: Join Here.

Article Status Notice: Inappropriate Tone/Word Usage

This article needs additional work to meet the wiki's Content Guidelines and be in line with our Mission Statement for comprehensive coverage of consumer protection issues. Specifically it uses wording throughout that is non-compliant with the Editorial guidelines of this wiki.

Learn more ▼

Artificial intelligence (AI) is a field of computer science producing systems that aim to solve problems which humans solve by using intelligence. Under the consumer and industry space, it is commonly referred to as chatbots or large language models (LLMs), which have been a main focus of industry since the November 2022 launch of ChatGPT, with tens of billions of dollars in funding allocated to producing more popular LLMs. Also a significant focus are text-to-image models, which "draw" an image using written prompt, and less commonly, text-to-video models, which extend the text-to-image concept across several smooth video frames.

So far, no AI solutions are intelligent. AI is not a new concept - it has been of interest as early as the 1950s. AI is a catch-all, it encompasses many areas and techniques, so merely saying that something uses AI tells one little about it.

Generative artificial intelligence models are trained through vast amounts of existing human-generated content. Using the example of an LLM, by gathering statistics on patterns of words that people use, the model can generate sequences of words that seem similar to what a person might have written. LLM do not understand anything, they can not reason. Everything they generate is just a randomly modulated pattern of tokens. People reading sequences of tokens sometimes see things they think of as being true. Sequences which do not make sense to the reader, or which are false are called hallucination. LLM are typically trained to produce output which is pleasing to people, exhibiting dark patterns, for example they often produce output which seems confidently-written, use patterns which praise the user (sycophancy) and emotionally manipulative language.

LLM are a glorified autocomplete. People are used to dealing with people, and many overestimate the abilities of things that exhibit complex, person like patterns. Promoters of “AI” systems take advantage of this tendency, using suggestive names (like “reasoning,” and “learning”) and grand claims (“PHD level”), which make it harder for people to understand these systems.

From November 2022 to 2025, venture capitalists and companies threw hundreds of billions into AI, but received minimal returns. When companies seek returns, consumers can expect that products may be orphaned, services may be reduced, customer data to be sold or repurposed, costs to rise, and companies to reduce staff or fail. Historically, AI has had brief periods of intense hype, followed by disillusionment, and “AI winters.”

The current well-funded, industry of artificial intelligence tools has resulted in rampant unethical use of content. Startups intending to produce AI services have been scraping the internet for content to train future models at a fast pace, and members of the field are concerned that they are approaching the limit of publicly-available content to train from.[1]

Why is it a problem

[edit | edit source]

Unethical training of data

[edit | edit source]
Further reading: Artificial intelligence/training

User's works are sometimes silently trained without the user's explicit consent, as was the case for Adobe's AI policy.

Privacy concerns of online AI models

[edit | edit source]

There are several concerns with using online AI models like ChatGPT (OpenAI), not only because they are proprietary, but also because there is no guarantee to where your data ends up being stored or used for. Recent developments in local AI models are an alternative to these online AI models, as they work offline once they are downloaded from platforms like HuggingFace. Common models to run are like Llama (Meta), DeepSeek (DeepSeek), Phi (Microsoft), Mistral (Mistral AI), Gemma (Google).

In some cases, these AI models can also be hijacked for malicious purposes. Demonstrated from the usage of Comet (Perplexity), users can run arbitrary prompts to the browser's built-in AI assistant via hiding text in the HTML comments, non-visible webpage text, or simple comments on a webpage.[2] These arbitrary prompts can then be abused to hijack sensitive information, or worse, break into high-value accounts, such as for banking or game libraries.[3]

Further reading

[edit | edit source]

References

[edit | edit source]
  1. Tremayne-Pengelly, Alexandra (16 Dec 2024). "Ilya Sutskever Warns A.I. Is Running Out of Data—Here's What Will Happen Next". Observer.
  2. "Tweet from Brave". X (formerly Twitter). Aug 20, 2025. Retrieved Aug 24, 2025.
  3. "Tweet from zack (in SF)". X (formerly Twitter). Aug 23, 2025. Retrieved Aug 24, 2025.