Wikipedia:Large language models
This page has been imported from the English-language project. Although its content does not match our project 100%, it can be used as a starting point to be adapted to our needs.
This is an essay. It presents the advice and opinions of one or more Wikipedia editors. This page is not an encyclopedia article, nor is it one of Wikipedia's policies or guidelines, and it has not been thoroughly vetted and accepted by the community. Some essays represent widespread norms, while others represent only minority viewpoints. Consider these views with discretion.
This page in a nutshell: Avoid using large language models (LLMs) to write original content or generate references. LLMs can be used for certain tasks (like copyediting, summarization, and paraphrasing) if the editor has substantial prior experience in the intended task and rigorously scrutinizes the results before publishing them.
"Large language models have limited reliability, limited understanding, limited range, and hence need human supervision."
— Michael Osborne, Professor of Machine Learning, University of Oxford[1]
While large language models (colloquially termed "AI chatbots" in some contexts) can be very useful, machine-generated text (much like human-generated text) can contain errors or flaws, or be outright useless.
Specifically, asking an LLM to "write a Wikipedia article" can sometimes cause the output to be outright fabrication, complete with fictitious references. It may be biased, may libel living people, or may violate copyrights. Thus, all text generated by LLMs should be verified by editors before use in articles.
Editors who are not fully aware of these risks and are not able to overcome the limitations of these tools should not edit with their assistance. LLMs should not be used for tasks with which the editor does not have substantial familiarity, and their outputs should be rigorously scrutinized for compliance with all applicable policies. In any case, editors should avoid publishing content on Wikipedia obtained by asking LLMs to write original content. Even if such content has been heavily edited, alternatives that do not use machine-generated content are preferable. As with all edits, an editor is fully responsible for their LLM-assisted edits.
Wikipedia is not a testing ground. Using LLMs to write one's talk page comments or edit summaries, in a non-transparent way, is strongly discouraged. LLMs used to generate or modify text should be mentioned in the edit summary, even if their terms of service do not require it.
Risks and relevant policies
Original research and "hallucinations"
Wikipedia articles must not contain original research – i.e. facts, allegations, and ideas for which no reliable, published sources exist. This includes any analysis or synthesis of published material that serves to reach or imply a conclusion not stated by the sources. To demonstrate that you are not adding original research, you must be able to cite reliable, published sources. They should be directly related to the topic of the article and directly support the material being presented.
LLMs are pattern completion programs: They generate text by outputting the words most likely to come after the previous ones. They learn these patterns from their training data, which includes a wide variety of content from the Internet and elsewhere, including works of fiction, low-effort forum posts, unstructured and low-quality content optimized for SEO, and so on. Because of this, LLMs will sometimes "draw conclusions" which, even if they seem superficially familiar, are not present in any single reliable source. They can also comply with prompts with absurd premises, like "The following is an article about the benefits of eating crushed glass". Finally, LLMs can make things up, which is a statistically inevitable byproduct of their design, called "hallucination". All of this is, in practical terms, equivalent to original research.
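To make this point concrete, the following sketch (hypothetical, heavily simplified Python; not the implementation of any actual model, and the probability table is invented purely for illustration) shows text being extended one word at a time by always choosing the statistically most likely continuation:

```python
# A deliberately tiny, hypothetical illustration of "pattern completion".
# Real LLMs compute next-token probabilities with a neural network over a
# vocabulary of tens of thousands of tokens; this toy table is made up.

next_word_probabilities = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def complete(words, steps=4):
    """Extend the text by repeatedly appending the most probable next word."""
    for _ in range(steps):
        context = tuple(words[-2:])        # only the recent context is considered
        candidates = next_word_probabilities.get(context)
        if not candidates:                 # no learned pattern: stop
            break
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(complete(["the", "cat"]))  # -> "the cat sat on the mat"
```

Nothing in this procedure checks whether the resulting sentence is true; it only checks whether it is statistically likely given the learned patterns, which is why fluent but fabricated output can emerge.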
As LLMs often output accurate statements, and since their outputs are typically plausible-sounding and given with an air of confidence, any time that they deliver a useful-seeming result, people may have difficulty detecting the above problems. An average user who believes that they are in possession of a useful tool, who maybe did a spot check for accuracy and "didn't see any problems", is biased to accept the output as provided; but it is highly likely that there are problems. Even if 90% of the content is okay and 10% is false, that is a huge problem in an encyclopedia. LLMs' outputs become worse when they are asked complicated questions, asked about obscure subjects, or told to do tasks to which they are not suited (e.g. tasks which require extensive knowledge or analysis).
Unsourced or unverifiable content
Readers must be able to check that any of the information within Wikipedia articles is not just made up. This means all material must be attributable to reliable, published sources. Additionally, quotations and any material challenged or likely to be challenged must be supported by inline citations.
LLMs do not follow Wikipedia's policies on verifiability and reliable sourcing. LLMs sometimes exclude citations altogether or cite sources that don't meet Wikipedia's reliability standards (including citing Wikipedia itself as a source). In some cases, they hallucinate citations of non-existent references by making up titles, authors, and URLs.
LLM-hallucinated content, in addition to being original research as explained above, also breaks the verifiability policy, as it can't be verified because it is made up: there are no references to find.
Algorithmic bias and non-neutral point of view
Articles must not take sides, but should explain the sides, fairly and without editorial bias. This applies to both what you say and how you say it.
LLMs can produce content that is neutral-seeming in tone, but not necessarily in substance. This concern is especially strong for biographies of living persons.
Copyright violations
If you want to import text that you have found elsewhere or that you have co-authored with others (including LLMs), you can only do so if it is available under terms that are compatible with the CC BY-SA license.
An LLM can generate copyright-violating material.[a] Generated text may include verbatim snippets from non-free content or be a derivative work. In addition, using LLMs to summarize copyrighted content (like news articles) may produce excessively close paraphrases.
The copyright status of LLMs trained on copyrighted material is not yet fully understood. Their output may not be compatible with the CC BY-SA license and the GNU Free Documentation License used for text published on Wikipedia.
Usage
Specific competence is required
LLMs are assistive tools, and cannot replace human judgment. Careful judgment is needed to determine whether such tools fit a given purpose. Editors using LLMs are expected to familiarize themselves with a given LLM's inherent limitations and then must overcome these limitations, to ensure that their edits comply with relevant guidelines and policies. To this end, prior to using an LLM, editors should have gained substantial experience doing the same or a more advanced task without LLM assistance.[b]
Some editors are competent at making unassisted edits but repeatedly make inappropriate LLM-assisted edits despite a sincere effort to contribute. Such editors are assumed to lack competence in this specific sense. They may be unaware of the risks and inherent limitations or be aware but not be able to overcome them to ensure policy-compliance. In such a case, an editor may be banned from aiding themselves with such tools (i.e., restricted to only making unassisted edits). This is a specific type of limited ban. Alternatively, or in addition, they may be partially blocked from a certain namespace or namespaces.
Disclosure
Every edit that incorporates LLM output should be marked as LLM-assisted by identifying the name and, if possible, version of the AI in the edit summary. This applies to all namespaces.
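For example (the wording here is only illustrative), an edit summary such as "Copyedited the lead with assistance from ChatGPT (GPT-4o); output checked against the cited sources" would satisfy this.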
Writing articles
Pasting raw large language model output directly into the editing window to create a new article or add substantial new prose to existing articles generally leads to poor results. LLMs can be used to copyedit or expand existing text and to generate ideas for new or existing articles. Every change to an article must comply with all applicable policies and guidelines. This means that the editor must become familiar with the sourcing landscape for the topic in question and then carefully evaluate the text for its neutrality in general, and verifiability with respect to cited sources. If citations are generated as part of the output, the editor must verify that the corresponding sources are non-fictitious, reliable, relevant, and suitable, and check for text–source integrity.
If using an LLM as a writing advisor, i.e. asking for outlines, how to improve paragraphs, criticism of text, etc., editors should remain aware that the information it gives is unreliable. If using an LLM for copyediting, summarization, and paraphrasing, editors should remain aware that it may not properly detect grammatical errors, interpret syntactic ambiguities, or keep key information intact. It is possible to ask the LLM to correct deficiencies in its own output, such as missing information in a summary or an unencyclopedic, e.g., promotional, tone, and while these could be worthwhile attempts, they should not be relied on in place of manual corrections. The output may need to be heavily edited or scrapped. Due diligence and common sense are required when choosing whether to incorporate the suggestions and changes.
Raw LLM outputs should not be added directly into drafts either. Drafts are works in progress and their initial versions often fall short of the standard required for articles, but enabling editors to develop article content by starting from an unaltered LLM-outputted initial version is not one of the purposes of draft space or user space.
Be constructive
Wikipedia relies on volunteer efforts to review new content for compliance with our core content policies. This is often time consuming. The informal social contract on Wikipedia is that editors will put significant effort into their contributions, so that other editors do not need to "clean up after them". Editors should ensure that their LLM-assisted edits are a net positive to the encyclopedia, and do not increase the maintenance burden on other volunteers.
LLMs should not be used for unapproved bot-like editing (WP:MEATBOT), or anything even approaching bot-like editing. Using LLMs to assist high-speed editing in article space has a high chance of failing the standards of responsible use due to the difficulty in rigorously scrutinizing content for compliance with all applicable policies.
Wikipedia is not a testing ground for LLM development, for example, by running experiments or trials on Wikipedia for this sole purpose. Edits to Wikipedia are made to advance the encyclopedia, not a technology. This is not meant to prohibit editors from responsibly experimenting with LLMs in their userspace for the purposes of improving Wikipedia.
Editors should not use LLMs to write comments. Communication is at the root of Wikipedia's decision-making process and it is presumed that editors contributing to the English-language Wikipedia possess the ability to communicate effectively. It matters for communication to have one's own thoughts and find an authentic way of expressing them. Using machine-generated text fails this requirement since it is not a surrogate for putting in personal effort and engaging constructively.
Repeated misuse of LLMs forms a pattern of disruptive editing, and may lead to a block or ban.
Sources with LLM-generated text
LLM-created works are not reliable sources (see Wikipedia:Verifiability § Reliable sources). Unless their outputs were published by reliable outlets with rigorous oversight and it can be verified that the content was evaluated for accuracy by the publisher, they should not be cited.
Handling suspected LLM-generated content
An editor who identifies LLM-originated content that does not comply with our core content policies—and decides not to remove it outright (which is generally fine to do)—should either edit it to make it comply or alert other editors of the issue. The first thing to check is that the referenced works actually exist. All factual claims then need to be verified against the provided sources. Presence of text–source integrity must be established. Anything that turns out not to comply with the policies should then be removed.
To alert other editors, the editor who responds to the issue should place {{AI-generated|date=December 2024}}
at the top of the affected article or draft (only if that editor does not feel capable of quickly resolving the issue on their own). In biographies of living persons, non-policy compliant LLM-originated content should be removed immediately—without waiting for discussion, or for someone else to resolve the tagged issue.
If removal as described above would result in deletion of the entire contents of the article or draft, it then becomes a candidate for deletion.[c] If the entire page appears to be factually incorrect or relies on fabricated sources, speedy deletion per WP:G3 (Pure vandalism and blatant hoaxes) may be appropriate.
The following templates can be used to warn editors on their talk pages:
See also
- Wikipedia:WikiProject AI Cleanup, a group of editors focusing on the issue of non-policy-compliant LLM-originated content
- Wikipedia:Artificial intelligence, an essay about the use of artificial intelligence on Wikipedia and Wikimedia projects
- Wikipedia:Computer-generated content, a draft of a proposed policy on using computer-generated content in general on Wikipedia
- Wikipedia:Using neural network language models on Wikipedia, an essay about large language models specifically
- Artwork title, a surviving article initially developed from raw LLM output (before this page had been developed)
- m:Research:Implications of ChatGPT for knowledge integrity on Wikipedia, an ongoing (as of July 2023) Wikimedia research project
Demonstrations
- User:JPxG/LLM demonstration (wikitext markup, table rotation, reference analysis, article improvement suggestions, plot summarization, reference- and infobox-based expansion, proseline repair, uncited text tagging, table formatting and color schemes)
- User:JPxG/LLM demonstration 2 (suggestions for article improvement, explanations of unclear maintenance templates based on article text)
- User:Fuzheado/ChatGPT (PyWikiBot code, writing from scratch, Wikidata parsing, CSV parsing)
- User:DraconicDark/ChatGPT (lead expansion)
- Wikipedia:Using neural network language models on Wikipedia/Transcripts (showcases several actual mainspace LLM-assisted copyedits)
- User:WeatherWriter/LLM Experiment 1 (identifying sourced and unsourced information)
- User:WeatherWriter/LLM Experiment 2 (identifying sourced and unsourced information, including a non-English source)
- User:WeatherWriter/LLM Experiment 3 (identifying sourced and unsourced information, only six of seven tests successful)
- Wikipedia:Articles for deletion/ChatGPT and Wikipedia:Articles for deletion/Planet of the Apes (humorous April Fools' nominations generated almost entirely by large language models).
Notes
- ^ This also applies to cases in which the AI model is in a jurisdiction where works generated solely by AI are not copyrightable, although the probability of this is very low.
- ^ For example, someone skilled at dealing with vandalism but doing very little article work should probably not start creating articles using LLMs. Instead, they should first gather actual experience at article creation without the assistance of the LLM.
- ^ Whenever a new article largely consists of unedited output of a large language model, it may be draftified, per WP:DRAFTREASON. As long as the title indicates a topic that has some potential merit, it may be worth it to stubify or blank-and-redirect. Likewise, drafts about viable new topics may be convertible to "skeleton drafts", i.e. near-blanked, by leaving only a brief definition of the subject. Creators of such pages should be suitably notified or warned. Whenever suspected LLM-generated content is concerned, editors are discouraged from contesting instances of removal through reversal without discussing first. When an alternative to deletion is considered, editors should still be mindful of any outstanding copyright or similar critical issues which would necessitate deletion.
References
- ^ Smith, Adam (25 January 2023). "What Is ChatGPT? And Will It Steal Our Jobs?". Context. Thomson Reuters Foundation. Retrieved 27 January 2023.