{{Irrelevant}}{{ToneWarning}}


'''Artificial intelligence''' (AI) is a field of computer science that produces systems intended to solve problems which humans solve by using intelligence. In the consumer space, AI is most often encountered as chatbots built on [[wikipedia:Large language model|large language models]] (LLMs), which have been a main focus of the industry since the November 2022 launch of [[ChatGPT]], with tens of billions of dollars in funding allocated to producing ever more popular LLMs. Another significant focus is [[wikipedia:Text-to-image model|text-to-image models]], which "draw" an image from a written prompt, and, less commonly, [[wikipedia:Text-to-video model|text-to-video models]], which extend the text-to-image concept across a sequence of video frames.


So far, no AI solutions are intelligent in the human sense. AI is not a new concept; it has been an area of research since at least the 1950s. "AI" is also a catch-all term encompassing many areas and techniques, so merely saying that something uses AI says little about what it actually does.


[[wikipedia:Generative artificial intelligence|Generative artificial intelligence]] models are trained on vast amounts of existing human-generated content. In the case of an LLM, by gathering statistics on the patterns of words that people use, the model can generate sequences of words that resemble what a person might have written. LLMs do not understand anything and cannot reason; everything they generate is a statistically modulated pattern of tokens. People reading these sequences sometimes see things they take to be true; output that makes no sense to the reader, or that is false, is called a [[wikipedia:Hallucination (artificial intelligence)|hallucination]]. LLMs are typically trained to produce output that is pleasing to people, exhibiting [[dark patterns]]: they often sound confident regardless of accuracy, praise the user (sycophancy), and use emotionally manipulative language.


An LLM is, in effect, a glorified autocomplete. People are used to dealing with other people, and many overestimate the abilities of anything that exhibits complex, person-like behavior. Promoters of "AI" systems take advantage of this tendency, using suggestive terms (such as "reasoning" and "learning") and grand claims ("PhD level"), which make it harder for people to understand what these systems actually do.


From November 2022 to 2025, venture capitalists and companies poured hundreds of billions of dollars into AI while receiving minimal returns. As those companies seek returns, consumers can expect products to be orphaned, services to be reduced, customer data to be sold or repurposed, prices to rise, and companies to cut staff or fail. Historically, AI has gone through brief periods of intense hype followed by disillusionment, the so-called "AI winters."


The current well-funded industry of artificial intelligence tools has resulted in rampant unethical use of content. Startups intending to produce AI services have been scraping the internet for training content at a rapid pace, and members of the field are concerned that they are approaching the limit of publicly-available content to train from.<ref>{{Cite web |last=Tremayne-Pengelly |first=Alexandra |date=16 Dec 2024 |title=Ilya Sutskever Warns A.I. Is Running Out of Data—Here’s What Will Happen Next |url=https://observer.com/2024/12/openai-cofounder-ilya-sutskever-ai-data-peak/ |website=Observer}}</ref>


==Why it is a problem==
===Unethical use of data for training===
:Further reading: [[Artificial intelligence/training]]


Users' works are sometimes silently used as training data without their explicit consent, as was the case with [[Adobe's AI policy]].


===Privacy concerns of online AI models===
There are several concerns with using online AI models such as [[ChatGPT]] ([[OpenAI]]): not only are they proprietary, but there is also no guarantee as to where your data ends up being stored or what it is used for. Recent developments in local AI models offer an alternative, as they work offline once downloaded from platforms like [https://huggingface.co/ HuggingFace]. Commonly run models include Llama ([[Meta]]), DeepSeek ([[DeepSeek]]), Phi ([[Microsoft]]), Mistral ([[Mistral AI]]), and Gemma ([[Google]]).
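
As a brief sketch, assuming the <code>transformers</code> Python library and a backend such as PyTorch are installed, a small local model can be run entirely offline after the initial download; the model name here is only an example:

<syntaxhighlight lang="python">
# Requires: pip install transformers torch
from transformers import pipeline

# The model name is illustrative; any text-generation model from
# huggingface.co can be substituted. The first call downloads the
# weights; later runs use the local cache and need no network access.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Consumer protection matters because", max_new_tokens=40)
print(result[0]["generated_text"])
</syntaxhighlight>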


In some cases, these AI models can also be hijacked for malicious purposes. As demonstrated with the Comet browser ([[Perplexity]]), attackers can feed arbitrary prompts to the browser's built-in AI assistant by hiding text in HTML comments, in non-visible webpage text, or in ordinary comments on a webpage.<ref>{{Cite web |date=Aug 20, 2025 |title=Tweet from Brave |url=https://xcancel.com/brave/status/1958152314914508893#m |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]])}}</ref> These injected prompts can then be abused to exfiltrate sensitive information or, worse, break into high-value accounts, such as those for banking or game libraries.<ref>{{Cite web |date=Aug 23, 2025 |title=Tweet from zack (in SF) |url=https://xcancel.com/zack_overflow/status/1959308058200551721 |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]])}}</ref>

== Unethical website scraping ==
While "mainstream" companies such as [[OpenAI]], [[Anthropic]], and [[Meta]] appear to correctly follow industry-standard practices for web crawlers, others ignore them, causing [[wikipedia:Denial-of-service attack|distributed denial-of-service attacks]] that damage access to freely-accessible websites. This is particularly an issue for websites that are large or contain many dynamic links.

Ethical website scrapers, known as "spiders", follow a certain set of minimum guidelines. Specifically, they follow [[wikipedia:robots.txt|robots.txt]], a text file found at the root of a domain that indicates:

* Paths bots are allowed to index
* Paths bots should not index
* How long a bot should wait between requests to the server, to reduce load
* The [[wikipedia:Sitemaps|sitemap]] of the website's content
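
For illustration, a minimal robots.txt expressing these rules might look like the following; the paths, delay, and URLs are hypothetical, and <code>Crawl-delay</code> is a nonstandard but widely recognized directive:

<pre>
# https://example.com/robots.txt
User-agent: *
Allow: /articles/
Disallow: /private/
Crawl-delay: 10
Sitemap: https://example.com/sitemap.xml
</pre>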

These rules are typically configured for all bots, with adjustments made for individual bots as needed. Additionally, specific web pages may use the [[wikipedia:noindex|robots meta tag]] (<code>&lt;meta name="robots" content="noindex"&gt;</code>) to control use of their content.

While it is good practice for a bot to respect robots.txt, there is no requirement that it do so, and no punishment for ignoring a website's wishes. It is likewise standard practice, though in no way enforced, for a bot to send a [[wikipedia:User-Agent header|User-Agent header]] that uniquely identifies it. This allows a website operator to observe the bot's traffic patterns, and potentially to block the bot outright if its scraping is not desirable. The header also typically contains a URL or email address that can be used to contact the bot's operator about anomalies observed in its traffic.

Unethical AI scraper bots do not follow robots.txt; in fact, they may never request the file at all. They typically ignore it entirely, instead starting from an entry point such as the root home page (<code>/</code>) and working through an exponentially growing list of links as they are found, with little to no delay between requests. These bots use false User-Agent strings corresponding to real web browsers on desktop or mobile operating systems, so blocking them would also block legitimate users, or at least legitimate users on VPNs.

Some AI services opt to use separate User-Agent strings, potentially also ignoring robots.txt, when a request is made at a user's command rather than as part of model training. For example, ChatGPT identifies itself as <code>ChatGPT-User</code> rather than its standard <code>OpenAI</code> when it uses the "search the web" command, even if searching the web was an automatic decision. In a less favorable example, Perplexity AI in the same situation falsely identifies itself as a standard Chrome web browser running on Windows. AI companies defend this under the belief that they are acting not as a "spider" but as a "user agent" (like a web browser) when called upon by a user's request.<ref name="perplexity-aws" />

Less legitimate bots use a wide distribution of IP addresses, in a clear attempt to bypass any IP-based request throttling and rate limiting the website may implement, further reducing the website's options for protecting itself. They are also known to ignore HTTP response status codes that indicate a server error ([[wikipedia:HTTP status code#5xx server errors|5xx]]), that the client needs to slow down ([[wikipedia:HTTP status code#429|429 Too Many Requests]]), or that it has been blocked entirely ([[wikipedia:HTTP status code#403|403 Forbidden]]).

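For contrast, the following minimal sketch shows what honoring these conventions can look like, using only Python's standard library; the bot identity and site are hypothetical:

<syntaxhighlight lang="python">
import time
import urllib.error
import urllib.request
import urllib.robotparser

# Hypothetical bot identity: a descriptive User-Agent with a contact URL.
BOT_UA = "ExampleBot/1.0 (+https://example.com/bot-info)"
SITE = "https://example.com"

# Fetch and parse the site's robots.txt before crawling anything.
rp = urllib.robotparser.RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()

# Honor the site's requested crawl delay, defaulting to one second.
DELAY = rp.crawl_delay(BOT_UA) or 1.0

def fetch(path):
    """Fetch a path politely; return None if disallowed or told to stop."""
    url = SITE + path
    if not rp.can_fetch(BOT_UA, url):
        return None  # robots.txt disallows this path
    request = urllib.request.Request(url, headers={"User-Agent": BOT_UA})
    try:
        with urllib.request.urlopen(request) as response:
            return response.read()
    except urllib.error.HTTPError as error:
        # 403, 429, and 5xx mean blocked, throttled, or struggling:
        # stop rather than hammering the server.
        if error.code in (403, 429) or error.code >= 500:
            return None
        raise
    finally:
        time.sleep(DELAY)  # spread out requests to reduce server load
</syntaxhighlight>
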
=== Effect on users ===
To protect against unethical crawlers, out of concern both for intellectual property and for service disruption, websites adopt practices that affect the experience of real users:


* '''Bot check walls''': The user may be required to pass a security check "wall". While usually automatic for the user, this can affect legitimate bots. When a website protection service such as [[Cloudflare]] is not confident that a visitor is legitimate, it may present a CAPTCHA to be filled out manually. An example is "Google Sorry", a CAPTCHA wall frequently seen when using Google Search via a VPN.
* '''Login walls''': Should bots be found to pass CAPTCHA walls, the website may escalate to requiring a login to view content. A major recent example is [[YouTube]]'s "Sign in to confirm you're not a bot" messages.
* '''JavaScript requirement''': Most websites do not need JavaScript to deliver their content. However, as many scrapers expect content to be found directly in the HTML, it is often an easy workaround to use JavaScript to "insert" the content after the page has loaded. This may reduce the responsiveness of the website, increase points of failure, and prevent security-conscious users who disable JavaScript from viewing the website.
* '''IP address blocking''': Blocking IP addresses, especially entire providers via their [[wikipedia:Autonomous system (Internet)|autonomous system number]], always comes with some risk of blocking legitimate users, particularly those making use of a VPN; a sketch of the related per-IP throttling follows this list.
* '''Heuristic blocking''': Patterns in request headers may give away that a request is being made by an unethical bot, despite its attempts to act like a legitimate visitor. Heuristics are imperfect and may block legitimate users, especially those using less common browsers.
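
As an illustration of the IP-based throttling mentioned above, the sketch below implements a simple per-IP token bucket in Python; the thresholds are hypothetical, and as described earlier, widely distributed scrapers can evade per-IP limits entirely:

<syntaxhighlight lang="python">
import time
from collections import defaultdict

# Hypothetical thresholds: one sustained request per second, bursts of up to 10.
RATE = 1.0
BURST = 10.0

# Maps each client IP address to (tokens remaining, time of last update).
buckets = defaultdict(lambda: (BURST, time.monotonic()))

def allow_request(ip):
    """Return True to serve the request, or False to answer 429 Too Many Requests."""
    tokens, last = buckets[ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last seen
    if tokens < 1.0:
        buckets[ip] = (tokens, now)
        return False
    buckets[ip] = (tokens - 1.0, now)
    return True
</syntaxhighlight>

A real deployment would also expire idle entries and often escalate to blocking whole subnets, since per-IP limits alone cannot stop widely distributed scrapers.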


In rare situations, a website operator may redirect detected bot traffic elsewhere, such as to multi-gigabyte speed test files of random data hosted by ISPs. This may have the effect of disrupting the bot, but its effectiveness is unknown.

The need to respond to unethical scraping also further consolidates the web under the control of a few large [[wikipedia:Web application firewall|web application firewall]] (WAF) services, most notably [[Cloudflare]], as website owners find themselves otherwise unable to protect their services from being disrupted by such traffic.
 
=== Case studies ===
==== Diaspora ====
On 27 December 2024, the open-source social network project Diaspora noted that 70% of the traffic across its infrastructure was serving AI scrapers.<ref name="geraspora">https://pod.geraspora.de/posts/17342163</ref> In particular, the project noted that bots had followed links to crawl every individual edit in its [[#MediaWiki|MediaWiki]] instance, causing a dramatic increase in the number of unique requests being made.
 
==== LVFS ====
The [https://fwupd.org/ Linux Vendor Firmware Service] (LVFS) provides a free central store of firmware updates, such as for UEFI motherboards and SSD controllers. This feature is integrated with many Linux distributions through the <code>fwupd</code> daemon. For situations where internet access is not permitted, the service allows users to make a local mirror of the entire 100+ GB store.
 
On 9 January 2025, the project announced that it would introduce a login wall around its mirror feature, citing unnecessary use of its bandwidth.<ref>https://lore.kernel.org/lvfs-announce/zDlhotSvKqnMDfkCKaE_u4-8uvWsgkuj18ifLBwrLN9vWWrIJjrYQ-QfhpY3xuwIXuZgzOVajW99ymoWmijTdngeFRVjM0BxhPZquUzbDfM=@hughsie.com/T/</ref> Up to 1,000 files may be downloaded per day without logging in. The author later mentioned on Mastodon that the problem appears to be caused by AI scraping.<ref>https://mastodon.social/@hughsie/113871373001227969</ref>
 
==== LWN.net ====
On 21 January 2025, Jonathan Corbet, maintainer of the Linux news website [[wikipedia:LWN.net|LWN.net]], made the following [https://social.kernel.org/notice/AqJkUigsjad3gQc664 post] to social.kernel.org:
 
<blockquote>
Should you be wondering why @LWN #LWN is occasionally sluggish... since the new year, the DDOS onslaughts from AI-scraper bots has picked up considerably. Only a small fraction of our traffic is serving actual human readers at this point. At times, some bot decides to hit us from hundreds of IP addresses at once, clogging the works. They don't identify themselves as bots, and robots.txt is the only thing they *don't* read off the site.
 
This is beyond unsustainable. We are going to have to put time into deploying some sort of active defenses just to keep the site online. I think I'd even rather be writing about accounting systems than dealing with this cr*p. And it's not just us, of course; this behavior is going to wreck the net even more than it's already wrecked.
</blockquote>
 
He later commented:<ref>https://www.heise.de/en/news/AI-bots-paralyze-Linux-news-site-and-others-10252162.html</ref>
 
<blockquote>
We do indeed see a kind of pattern. Every IP stays below the threshold for our fuses, but the overload is overwhelming. Any form of active defense will probably have to figure out to block entire subnets instead of individual addresses, and even that might not be enough.
</blockquote>
 
==== MediaWiki ====
[[wikipedia:MediaWiki|MediaWiki]] is of particular interest for LLM training due to the vast amount of factual, plain-text content wikis tend to hold. While [[wikipedia:Wikipedia|Wikipedia]] and the [[wikipedia:Wikimedia Foundation|Wikimedia Foundation]] host the most well-known wikis, numerous smaller wikis exist thanks to the work of many independent editors. The strength of the wiki architecture is that every edit can be audited by anyone, at any time - you can still view [https://en.wikipedia.org/w/index.php?oldid=1 the first edit to Wikipedia] from 2002. This makes wikis a hybrid of a static website and a dynamic web app, which becomes problematic when poorly-designed bots attempt to scrape them.<ref name="geraspora" />
 
<!-- COI alert: I, [[User:kirb]], am an admin for The Apple Wiki. Hopefully this is neutral enough?
-->The Apple Wiki, which documents internal details of Apple's hardware and software, holds more than 50,000 articles. On 2 August 2024, and again on 5 January 2025, the service was disrupted by scraping efforts.<ref>https://theapplewiki.com/wiki/The_Apple_Wiki:Community_portal#Bot_traffic_abuse</ref> The wiki contains a considerable amount of information that is accessed by legitimate security research tools, making it difficult for the website to block non-legitimate requests; efforts to block unethical scraping and protect the wiki have disrupted these legitimate tools. The large article count, combined with more than 280,000 total edits over the wiki's lifetime, creates an untenable situation in which it is simply not possible to scrape the website without causing significant service disruption.
 
==== Perplexity AI and news outlets ====
[[Perplexity AI]], founded in August 2022, is an LLM-based service that aims to be seen as a general search engine. It encourages users to consume news through its summaries of stories.
 
On 15 June 2024, developer Robb Knight, together with the Apple blog MacStories, found that Perplexity does not follow its own documented policies when accessing content a user requests from the web. In this testing, the scraper pretended to be Chrome 111 running on Windows 10, connecting from an IP address not found in Perplexity's published IP address ranges.<ref>https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/</ref> Two days later, this was corroborated by WIRED.<ref>https://www.wired.com/story/perplexity-is-a-bullshit-machine/</ref> Perplexity responded by removing its list of IP addresses.
 
On 27 June 2024, [[Amazon]] announced an investigation into Perplexity AI, citing a terms of service clause requiring bots hosted on Amazon Web Services to honor robots.txt:<ref name="perplexity-aws">https://www.wired.com/story/aws-perplexity-bot-scraping-investigation/</ref>
 
<blockquote>
"AWS's terms of service prohibit abusive and illegal activities and our customers are responsible for complying with those terms," [AWS spokesperson Patrick] Neighorn said in a statement. "We routinely receive reports of alleged abuse from a variety of sources and engage our customers to understand those reports."
</blockquote>
 
==Further reading==

*[[Dark pattern]]
*[[Automatic Content Recognition]]
*[[Palantir]]
*[[Meta]]
*[[Yandex]]
*[[TikTok & AI-powered Ad Tracking]]
*[[Flock License Plate Readers]]
*[[Ring]]
*[[Waymo]]
*[[Google]]

== References ==
<references />


[[Category:Artificial intelligence]]