Wikipedia:Village pump (technical)/Archive 221

Simple summaries: editor survey and 2-week mobile study

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Hi everyone! I'm writing on behalf of the Web Team. Over the past year, the team has been exploring ways to make the wikis more accessible to readers globally through different projects around content discovery. One of the ideas we’ve been discussing is the presentation of machine-generated, but editor-moderated, simple summaries for readers. These summaries take existing Wikipedia text and simplify it for interested readers. Readers will show interest by opting into the feature and clicking to open the summary on pages where it is available. As part of our exploration into this idea, in the next two weeks we will be launching:

1. An editor survey on the English, Spanish, French, and Japanese Wikipedias. This survey will ask editors about their preferences for generating, editing, and moderating summaries, as well as their thoughts on the project overall. We will use the data from this survey to propose the initial moderation workflows for a future version of a summary feature.

2. A two-week experiment on the mobile website. This experiment will allow a small set (10%) of readers to opt into and open pre-generated summaries on a set of articles for two weeks. After two weeks, we will turn the experiment off and use the data collected to determine whether users are interested in summaries and open them frequently, as well as whether summaries aid the overall experience.

After the completion of these two steps, we’ll be publishing our results on the project page and reaching out to discuss whether to proceed with building this feature, along with some options for its associated editor workflows. You are welcome to leave questions about the project here or on the project talk page. EBlackorby-WMF (talk) 18:20, 2 June 2025 (UTC)

  • Yuck. --MZMcBride (talk) 20:52, 2 June 2025 (UTC)
  • Yuck. —Cryptic 21:46, 2 June 2025 (UTC)
    Yuck. Also, this should probably be at VPPR or VPWMF. Cremastra (uc) 21:58, 2 June 2025 (UTC)
    @EBlackorby-WMF But seriously. I'm grinning with horror. Just because Google has rolled out its AI summaries doesn't mean we need to one-up them.
    I sincerely beg you not to test this, on mobile or anywhere else. This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word "machine-generated" is used instead.
    You also say this has been "discussed" which is thoroughly laughable as the "discussion" you link to has exactly one participant, the original poster, who is another WMF employee. Cremastra (uc) 22:04, 2 June 2025 (UTC)
  • What a coincidence! I had just read this article (https://www.theverge.com/news/676933/gmail-ai-summaries-workspace-android-ios) a day ago and wondered if there would be a similar feature on Wikipedia. As long as this machine/AI-generated summary feature is opt-in, I don't see any downsides to having it available for interested readers. The attention spans of the younger generations are shrinking, and some would rather read a short summary of the World War II article than a 13,033-word long article; this feature would be useful and beneficial for them. Some1 (talk) 22:43, 2 June 2025 (UTC)
    We can read the lead, which is a convenient, short summary written by real people. Cremastra (uc) 22:45, 2 June 2025 (UTC)
    Have you seen our leads lately? Izno (talk) 22:49, 2 June 2025 (UTC)
    All right, they're a reasonably short summary. In any case, even in articles with longer leads like Romeo and Juliet it is possible to skim over or ignore the parts that don't interest me and still extract valuable information. Cremastra (uc) 22:51, 2 June 2025 (UTC)
    [Screenshot: AI-generated simple summary of Dopamine]
    The lead of Romeo and Juliet isn't as long as the lead of World War II, which I'd linked. It seems like these AI-generated simple summaries are ~5 sentences long, which is much shorter (and more digestible) than the average leads of (non-stub) articles. Some1 (talk) 02:28, 3 June 2025 (UTC)
    Also, concerns about this feature should focus on the "This summary has not been checked for verifiable accuracy" part, not because "it's AI". Some1 (talk) 02:39, 3 June 2025 (UTC)
    The first paragraph is generally the digestible summary of the summary. This is enforced technologically in mobile views, which is where most of the views of the above-maligned younger generations are going to be coming from, as only the first paragraph is shown before the infobox. For World War II, that is six sentences. CMD (talk) 04:58, 3 June 2025 (UTC)
    The first paragraph of Donald Trump is two sentences long. I want to see how "Aya" summarizes that Trump article and other contentious articles such as Gaza genocide, Zionism, Race and intelligence, etc. Some1 (talk) 03:00, 4 June 2025 (UTC)
    @Some1 I only have experience with the beautifully named c4ai-aya-expanse-32b but I'd describe it as almost "on the level of OpenAI". Which is where a lot of companies are at. It refuses to tell me that I am wrong, even when I am intentionally saying stuff that is clearly incorrect. It says things like "You're right; the poem I generated does indeed draw inspiration from the well-known and beloved 'duck poetry' genre." Polygnotus (talk) 03:10, 4 June 2025 (UTC)
    Some1, summarizing contentious articles and how that will almost certainly be exploited and weaponized by partisan actors is something that interests me too. For interest, see this recent query from a reader, where they appear to either a) assume that we control Google knowledge panels and/or b) believe that we should amend the wording to ensure the part they regard as important fits into the summary. It is the first time that I have seen something like this. Sean.hoyland (talk) 07:54, 4 June 2025 (UTC)
    Oh we get those every now and then, not just for CTOPs. CMD (talk) 08:07, 4 June 2025 (UTC)
    @Some1 For what it's worth, we can in fact see how Aya summarized the Zionism article, because it's in the set of summaries that readers would have seen. You may be asking yourself, how did one of the most contentious geopolitical topics in existence get into the summary set that "[made] an effort to exclude controversial, inappropriate or legally libelous material"? Great question!
    Theodor Herzl started the Zionist movement in the late 1800s, dreaming of a Jewish state. Zionism wanted Jews to have their own land, free from European persecution. They chose Palestine, an area important in Jewish history. At first, they considered other places, but settled on Palestine, which was then part of the Ottoman Empire. This led to the Israeli-Palestinian conflict. The goal was to create a Jewish majority in Palestine, and after Israel's independence in 1948, Zionism became its national ideology. Zionism has different groups with varying views, but all support a Jewish homeland in Palestine. Some see it as a return to their ancient home, while others criticize it as colonialist.
    Leaving aside any other issues here: Does that first sentence seem weird to you? That's because the LLM has confused the article caption with the actual article. Gnomingstuff (talk) 20:40, 10 June 2025 (UTC)
    Thanks for the link, Gnomingstuff. That Zionism summary doesn't look too bad for an AI-generated simple summary imo. Some1 (talk) 22:46, 10 June 2025 (UTC)
    @Some1 Well the first sentence is wrong. Herzl didn't "start" Zionism. "This led to the Israeli-Palestinian conflict" is wrong, because there are about 48 trillion factors that led to that conflict. It also leaves out some of the most important stuff (Balfour).
    "creating a Jewish majority" is a weaselword for what many would describe as a genocide.
    "Some see it as a return to their ancient home, while others criticize it as colonialist." is an oversimplification of an incredibly difficult topic where people have many opinions, usually stupid ones.
    So to say it doesn't look too bad is very optimistic. Polygnotus (talk) 22:54, 10 June 2025 (UTC)
@EBlackorby-WMF Hi! As you can tell, your proposal does not align with what the community actually wants.
As you may or may not be aware, the WMF and the Wikipedia community have a very difficult and tense relationship.
It sounds like you guys already did a lot of work, without getting input from the community.
You link to this with the text "we’ve been discussing", but that must've been an internal WMF discussion because no one responded to that post.
Perhaps the Movement Communications team forgot to actually communicate with the movement?
I recommend stopping, and in the future asking for feedback at a far far earlier stage (but of course I know you won't).
There are many people here who are happy to help you by saying why we dislike certain ideas. But you need to involve those people at an early stage (during brainstorming), otherwise it is difficult to change course and you've wasted a lot of time and energy.
The WMF as a whole makes this same mistake over and over and over again. If you want to hear all the downsides and problems with a proposal, you can ask me on my talkpage. Polygnotus (talk) 05:15, 3 June 2025 (UTC)
How can you tell that from 5 people responding? Have you run your own research into this? —TheDJ (talkcontribs) 14:01, 3 June 2025 (UTC)
@TheDJ Let's not argue for the sake of arguing. It might confuse them. This isn't a distro-war. Polygnotus (talk) 15:30, 3 June 2025 (UTC)
Did a lot of work even go into this? It takes all of 5 minutes to set up the world's 327,482nd ChatGPT wrapper claiming to be a brand new feature. It feels slapdash, like something easy for them to throw up, and it's such an insult to the community. 🌸⁠wasianpower⁠🌸 (talk • contribs) 04:20, 4 June 2025 (UTC)
@Wasianpower apparently development of this has been ongoing since September 2024 (helpful link: mw:Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries). JackFromWisconsin (talk | contribs) 04:22, 4 June 2025 (UTC)
Yikes. 🌸⁠wasianpower⁠🌸 (talk • contribs) 04:31, 4 June 2025 (UTC)
  • The replies above may be missing the point. Imagine you were a WMF staffer. Would you want to fix boring bugs or would you want to work on AI for your CV? Johnuniq (talk) 05:49, 3 June 2025 (UTC)
    @Johnuniq I certainly would prefer playing with shiny new toys over rewriting 20+ year old PHP. Polygnotus (talk) 06:08, 3 June 2025 (UTC)
    I'd want to fix boring bugs, particularly in the Action API. 😀 Anomie 11:22, 3 June 2025 (UTC)
    Imagine you were a WMF staffer. Would you want to fix boring bugs? Hell yes I would. Have you seen my contributions here on en.WP? All I do is fix boring bugs so that our readers and editors get the pages, links, text, and images that they expect, with a minimum of errors. The WMF needs more gnome developers. – Jonesey95 (talk) 12:56, 3 June 2025 (UTC)
    WMF, hire Anomie and Jonesey95! Polygnotus (talk) 15:29, 3 June 2025 (UTC)
    @Polygnotus: The joke in my reply is that I did work for the WMF from 2012 to 2020, and fixing boring bugs and working on the Action API are things I did. I also worked on Scribunto, TemplateSandbox, TemplateStyles, Special:ApiSandbox, and a bunch of other stuff. But eventually I wound up with a manager who seemed primarily interested in trying to climb the management ladder (yay kiss up kick down), and in general office politics became a bigger and bigger requirement, and eventually they fired me, no real reason given. Even if they'd be willing to rehire me, I hear the office politics have continued to get worse since so I still might not fit in anymore. Anomie 00:03, 4 June 2025 (UTC)
    @Anomie I've heard that WMF CEO is a decently paid position. Maybe you have some spare time? Polygnotus (talk) 03:49, 4 June 2025 (UTC)
    I don't think I'd do very well at that job. My skills are elsewhere. Anomie 11:49, 4 June 2025 (UTC)
    @Anomie Then you'll fit right in. Polygnotus (talk) 18:29, 4 June 2025 (UTC)
    I would want to fix longstanding bugs too, and while as a volunteer I'm probably not as dedicated as Jonesey95, I actually did apply for a job with the Wikimedia Foundation and got an interview but was turned down. * Pppery * it has begun... 15:31, 3 June 2025 (UTC)
    @Pppery Why were you turned down? I imagine you have asked that question. Also by whom? Polygnotus (talk) 15:34, 3 June 2025 (UTC)
    "While we were impressed with your qualifications and experience, we have decided to proceed with other candidates". No other detail, and I didn't ask. I don't think "by whom" is a useful question to ask; the email I got was from a member of the recruiting team but I'm sure she wasn't the one actually making the decision. * Pppery * it has begun... 15:37, 3 June 2025 (UTC)
    Cui bono? Is there some financial incentive to make this decision which, from our point of view, doesn’t make any sense? I mean for the foundation as a whole, not any individual working there. I’m trying to understand where the pressure to do this is coming from. 3df (talk) 02:45, 4 June 2025 (UTC)
    @3df, believe it or not, there are people out there who think this kind of thing is useful, and a great many of them work in development. No ulterior or secret motive is necessary to explain it. -- asilvering (talk) 02:47, 4 June 2025 (UTC)
    @Asilvering Agreed. Although it would be very very beneficial for the AI company to have people correcting the output of their AI for free. They normally have to hire people to do that job. Polygnotus (talk) 02:50, 4 June 2025 (UTC)
    I'd prefer to make something useful. Hammy (talk) 22:40, 12 June 2025 (UTC)
  • Nope, I don't want an additional floating window of content for editors to argue over. Not helpful or better than a simple article lead. Zanahary 06:36, 4 June 2025 (UTC)
  • Keep AI out of Wikipedia. That is all. WMF staffers looking to pad their resumes with AI-related projects need to be looking for new employers. Carrite (talk) 16:01, 3 June 2025 (UTC)
  • I'm glad that WMF is thinking about a solution to a key problem on Wikipedia: most of our technical articles are way too difficult. My experience with AI summaries on Wikiwand is that they are useful, but too often produce misinformation not present in the article they "summarise". Any information shown to readers should be greenlit by editors in advance, for each individual article. Maybe we can use it as inspiration for writing articles appropriate for our broad audience. —Femke 🐦 (talk) 16:30, 3 June 2025 (UTC)
    But 99% of the AI that consumers knowingly interact with is trained on Wikipedia, so they don't need wikipedia.org for that. So the WMF is proposing making a shittier version of something that already exists. Polygnotus (talk) 16:49, 3 June 2025 (UTC)
    It would be good if we had our own version of it, where we could control what is shown and how it is shown, instead of having a for-profit company modify our content as they please with no way for anyone to do anything about it, and no free and open alternative. Matma Rex talk 17:39, 3 June 2025 (UTC)
    That appears to be based on a bunch of incorrect assumptions. It is not like a nuke; we don't need to have it just because others do.
    "we could control what is shown and how it is shown" Being able to set a system prompt is not control; you'd have to train your own model, which means either copyright violations on a massive scale or training a model exclusively on Wikipedia data, meaning it would be completely inferior to what is available.
    "instead of having a for-profit company modify our content as they please with no way for anyone to do anything about it" Have you read WP:REUSE? This is what you signed up for.
    "and no free and open alternative" What are you talking about? Anyone can download ollama. https://ollama.com/ The WMF does not have the money and brainpower required to develop a serious alternative to the existing models, and if they try, that is a clear indication that they don't understand their role. But the screenshot says that the name of the model is Aya. Aya is a family of models by Cohere Labs. https://cohere.com/research/aya Which is a for-profit company. Polygnotus (talk) 18:10, 3 June 2025 (UTC)
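    As a rough illustration of the "anyone can download ollama" point (a minimal sketch only; Ollama itself, the llama3 model name, and the prompt wording are illustrative assumptions, not anything the WMF has proposed or uses), generating a local summary of existing article text can be as simple as shelling out to the Ollama command line:

    # Minimal sketch: summarize a passage with a locally run model via the
    # Ollama CLI. Assumes Ollama is installed and `ollama pull llama3` has
    # already been run; the model name and prompt are illustrative only.
    import subprocess

    article_lead = (
        "Dopamine is a neuromodulatory molecule that plays several "
        "important roles in cells..."
    )
    prompt = "Summarize the following text in five simple sentences:\n\n" + article_lead

    # `ollama run MODEL PROMPT` performs one non-interactive generation and
    # prints the model's reply to standard output.
    result = subprocess.run(
        ["ollama", "run", "llama3", prompt],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())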
    @Polygnotus Your comparison to nuclear bombs seems out of proportion.
    "Being able to set a system prompt is not control" I don't mean a system prompt, I mean the user interface around the summary (see the mockup earlier in the thread, with ample warnings and notes, and compare it to the summary in Google or whatever else) and I mean the editorial control to hide or override these summaries if they're egregiously wrong, which I hope will be available if this experiment becomes a real feature.
    "Have you read WP:REUSE" I think it's a bit rude of you to imply I don't know what I'm talking about. Anyway, have you seen how the content is actually presented by the various LLM companies? They don't generally cite content as they should (and not just from Wikipedia), and as far as I can tell no one yet has managed to force them to do it.
    "What are you talking about? Anyone can download ollama" Famously, everyone on the planet has a computer able to run large language models, and will not mind waiting several seconds or minutes for the results to come out. Oh, wait. Local models are only a viable alternative for a small group of people.
    I don't think you replied to what I said, only to things you imagined I said. I'm happy to argue for a bit, but please slow down. Matma Rex talk 21:01, 3 June 2025 (UTC)
    "They don't generally cite content as they should (and not just from Wikipedia), and as far as I can tell no one yet has managed to force them to do it." DuckDuckGo does these days. Izno (talk) 21:06, 3 June 2025 (UTC)
    @Matma Rex The nuke thing is a famous example of game theory, see Mutually Assured Destruction for more.
    "I mean the user interface around the summary (see the mockup earlier in the thread, with ample warnings and notes, and compare it to the summary in Google or whatever else) and I mean the editorial control to hide or override these summaries if they're egregiously wrong, which I hope will be available if this experiment becomes a real feature." People do not read banners and warnings, see Banner blindness. You can never make a banner big enough to force people to read it. "override these summaries if they're egregiously wrong" Even the example they provided is already egregiously wrong; of course they will be. Having humans override them after the fact is not a reasonable solution to a giant problem.
    I don't think WP:REUSE is a very popular page, and there are tons of people who don't realize that basically anyone can copy anything from Wikipedia, and no one does anything about it, even if they do not follow the terms of the license.
    "have you seen how the content is actually presented by the various LLM companies? They don't generally cite content as they should (and not just from Wikipedia), and as far as I can tell no one yet has managed to force them to do it." Yes, I have, which is why my opinion is what it is.
    "Local models are only a viable alternative for a small group of people." Agreed. You talked about "no free and open alternative", which is why I mentioned Ollama.
    "please slow down" I mean, if they really do this, I think we've lost the war and I'll just leave Wikipedia. Or set up an alternative and then leave. Polygnotus (talk) 21:13, 3 June 2025 (UTC)
    What war? Who's against whom? And what does MAD have to do with this discussion? Do you think we're building Skynet here or something? I am baffled and at a loss as to how to reply to this. Matma Rex talk 21:18, 3 June 2025 (UTC)
    @Matma Rex
    You wrote "what does MAD have to do with this discussion?" in response to me writing "The nuke thing is a famous example of game theory, see Mutually Assured Destruction for more.", which was my response to "Your comparison to nuclear bombs seems out of proportion", in response to me writing "It is not like a nuke; we don't need to have it just because others do."
    See how meta-conversations are near impossible on Wikipedia (and real life)? Polygnotus (talk) 21:24, 3 June 2025 (UTC)
    One of the reasons many prefer ChatGPT to Wikipedia is that too large a share of our technical articles are way, way too difficult for the intended audience. And we need those readers, so they can become future editors. Ideally, we would fix this ourselves, but my impression is that we usually make articles more difficult, not easier, when they go through GAN and FAC. As a second-best solution, we might try this as long as we have good safeguards in place. —Femke 🐦 (talk) 18:32, 3 June 2025 (UTC)
    @Femke You seem to ignore this comment where I explained that the WMF can't compete with AI companies whose core business is to develop AI models, the fact that a model trained exclusively on Wikipedia data would be far inferior to a model trained on a far far larger dataset, and the fact that they are using Aya. "as long as we have good safeguards in place" What do you mean? Polygnotus (talk) 18:37, 3 June 2025 (UTC)
    As in: moderation before something is put to readers, rather than after the fact. Which would in practice restrict the feature to high-priority technical articles, given that we have limited editor time for this. I don't know enough about the specifics of Aya to comment intelligently there. —Femke 🐦 (talk) 18:42, 3 June 2025 (UTC)
    @Femke I think you know that is not what the WMF is proposing. So your comments make no sense. "we might try this" They are not proposing that we try anything. They are proposing giving the most important screen real estate we have (the WP:LEAD) of every article to a for-profit company. Polygnotus (talk) 18:45, 3 June 2025 (UTC)
    In the comment above, they say that the moderator workflow is still to be determined. You're probably right they don't have a 'check first' workflow in mind, but if there is consensus to implement this (and it seems from this discussion so far that there probably isn't), I imagine the community would only be okay with this with extremely strong moderation in place. Like, the CMD example below is something that needs to be avoided at all costs.
    Perhaps it's time to start a Wikiproject and some type of contest to fix the problem identified and ensure we write articles that people can actually understand. My Challenges seem not to work as an encouragement. —Femke 🐦 (talk) 18:52, 3 June 2025 (UTC)
    You think people are lining up to check the work of an AI model? Especially when summarizing complicated technical topics most people don't even understand? Polygnotus (talk) 18:58, 3 June 2025 (UTC)
    I think AGF applies here. — Qwerfjkltalk 18:52, 3 June 2025 (UTC)
    @Qwerfjkl What do you mean? No one believes it is malice, right? Polygnotus (talk) 18:53, 3 June 2025 (UTC)
    Well, hyperbolic, then. — Qwerfjkltalk 18:55, 3 June 2025 (UTC)
    ? Polygnotus (talk) 19:17, 3 June 2025 (UTC)
    Just to be clear, I oppose the test of unverified AI summaries that is slated for next week for 10% of mobile readers (phab:T393940). The foundation cannot dictate content. —Femke 🐦 (talk) 06:49, 4 June 2025 (UTC)
  • A truly ghastly idea. In other words: Yuck. Since all WMF proposals steamroller on despite what the actual community says, I hope I will at least see the survey and that—unlike some WMF surveys—it includes one or more options to answer "NO". Yngvadottir (talk) 17:02, 3 June 2025 (UTC)
    It sure looks like they are planning to ask casual readers who use the mobile app. And if you ask them, their answer would probably be "yes". But that doesn't mean that it is a good idea. And introducing AI summaries would probably lead to a fork and an exodus. I would honestly be shocked if AI is not the final straw in the relationship between the WMF and the community. Polygnotus (talk) 17:17, 3 June 2025 (UTC)
    The survey linked below was much as I expected: almost nowhere to express disapproval of the whole putrid idea. Bonus: video without closed captions. Of text. On YouTube. Classy and inclusive. Yngvadottir (talk) 23:24, 3 June 2025 (UTC)
    @Yngvadottir I really enjoyed the question about "An AI-powered dashboard that shows the quality and metrics for summaries for my wiki".
    What is an AI-powered dashboard??? This was clearly written by someone who is unfamiliar with the capabilities and limitations of AI technology.
    You'd expect something like "powered by Kibana".
    Also "my wiki" shows that they want to dump this trash on multiple wikis, despite the fact that everyone who speaks English as well as another language knows how much worse the experience is in those languages compared to English. Polygnotus (talk) 01:57, 4 June 2025 (UTC)
    Yeah, I saw that too. The survey that I did had nowhere for me to say that it was a bad thing. In fact there were even worse ideas listed there too, such as that edits to summaries might not even be listed anywhere, unlike edits to normal pages. 2007GabrielT (talk) 12:17, 4 June 2025 (UTC)
    Absolutely agree. I would definitely stop contributing if AI is used to create content (I have been around since 2007). Ita140188 (talk) 07:33, 12 June 2025 (UTC)
Laudable goal, but if it is to go through, it should be only if established editors, i.e. extended-confirmed editors, decide whether the generated summary can supersede the current lead, or decide that the generated content requires modifications before use. – robertsky (talk) 19:03, 3 June 2025 (UTC)
@Robertsky if the generated summary can supercede the current lead That is not what they are proposing at all... if established editors, i.e. extended confirmed editors, decide that is also not what they are proposing decide that the generated content requires modifications before using that is also not what they are proposing. Polygnotus (talk) 19:06, 3 June 2025 (UTC)
@Polygnotus, the lead is supposed to be the summary of the article. Why have another machine-generated summary if the lead is doing the job? "Editor moderated" is what they are proposing, and they asked for editors' preferences for generating, editing, and moderating summaries. So I am suggesting as such. – robertsky (talk) 19:17, 3 June 2025 (UTC)
@Robertsky "Why have another machine-generated summary if the lead is doing the job?" Are you asking me that? That is the WMF's proposal, and I am saying it is a bad idea...
Look at the screenshot. It shows both the current lead and the AI summary that contains multiple errors.
You think people are lining up to check the work of an AI model? Especially when summarizing complicated technical topics most people don't even understand?
My brother in Zeus, Cohere Labs is worth billions. Do you want Wikipedia volunteers to work for them for free??? You do realize that AI companies hire people to do the work you seem to think should be done by unpaid volunteers?
https://time.com/6247678/openai-chatgpt-kenya-workers/ Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic Polygnotus (talk) 19:19, 3 June 2025 (UTC)
@Polygnotus, I am not disagreeing with you... 😉 – robertsky (talk) 19:29, 3 June 2025 (UTC)
Praise be to Zeus! Polygnotus (talk) 19:30, 3 June 2025 (UTC)
A note that the WMF has begun requesting responses to surveys via the QuickSurveys extension, so some editors (like me) will get a prompt inviting them to the survey if it is enabled. Some of the questions... aren't great if I'm honest. – Isochrone (talk) 20:45, 3 June 2025 (UTC)
@Isochrone How can we opt in? Can we get some screenshots? Polygnotus (talk) 20:49, 3 June 2025 (UTC)
https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq
Since the WMF is willing to be this sneaky, I don't think we should feel guilty if we fill in the survey a couple hundred times. Polygnotus (talk) 21:04, 3 June 2025 (UTC)
Whilst I am not against sharing the survey, let's not intentionally skew the results :) – Isochrone (talk) 21:05, 3 June 2025 (UTC)
Let's intentionally skew the results! The WMF intentionally skewed it by picking who to show it to; the community should skew the results to tell the WMF to stop trying to put AI in Wikipedia! Polygnotus (talk) 21:14, 3 June 2025 (UTC)
Yes, and then the WMF will just throw out all of the survey data because it has been openly rigged. doozy (talkcontribs)⫸ 23:16, 3 June 2025 (UTC)
@Doozy Exactly! That is what we want! Polygnotus (talk) 23:17, 3 June 2025 (UTC)
And then the legitimate consensus among the people who took the survey will probably be thrown out, requiring us to rely even further on RfCs and other community headaches. doozy (talkcontribs)⫸ 23:20, 3 June 2025 (UTC)
Perfect! We can use the existing Wikipedia ways to get consensus, like RfCs and talkpages and all that. Polygnotus (talk) 23:21, 3 June 2025 (UTC)
Point conceded. I had no idea where you were going with this initially. doozy (talkcontribs)⫸ 23:23, 3 June 2025 (UTC)
@Doozy I am very confusing (sorry about that). Polygnotus (talk) 23:25, 3 June 2025 (UTC)
I think this is a really unhelpful thing to post. jp×g🗯️ 13:22, 5 June 2025 (UTC)
I disagree. Polygnotus (talk) 13:29, 5 June 2025 (UTC)
  • If this were to actually happen, some or many readers would just glance at the summary instead of reading the article. Since the summary will form the glancers' entire impression of the subject, it needs to be extremely accurate. I suspect it is often not. Even if editor moderation helps this problem, you may as well just go to Simple English Wikipedia and get the same thing but actually made by humans. doozy (talkcontribs)⫸ 20:54, 3 June 2025 (UTC)
  • Haven’t we been getting good press for being a more reliable alternative to AI summaries in search engines? If they’re getting the wrong answers, let’s not copy their homework. 3df (talk) 21:16, 3 June 2025 (UTC)
  • Oppose. We already have summaries of our encyclopedia articles: the lead sections of our encyclopedia articles are the summaries of the article. Also, Wikipedia is already a massive collection of great summaries, because writing an encyclopedia (tertiary source) is the skill of summarizing secondary sources such as newspapers and books. Also, our leads (summaries) are so good that Google and other search engines use them in their knowledge panels. Wikipedia and AI are in the same business (summarizing) and we humans at Wikipedia are better at it than AI. I see little good that can come from mixing in hallucinated AI summaries next to our high quality summaries, when we can just have our high quality summaries by themselves. –Novem Linguae (talk) 22:12, 3 June 2025 (UTC)
    Some links for more technical folks: mw:Extension:ArticleSummaries, phab:T393940 (deployment task), phab:T374143 (epic), phab:T385502, phab:T395128 (analysis of results) –Novem Linguae (talk) 04:23, 4 June 2025 (UTC)
    Thanks! So a planned deployment to 1% of readers on Monday 9 June, and then expanding to 10% of readers on Tuesday 10 June, and a possible further expansion on Monday 16 June. CMD (talk) 04:35, 4 June 2025 (UTC)
    @Novem Linguae Do you happen to have access to that stuff hosted on Google?
    Do they really think a CTR>4% is a meaningful metric to call this a success? Polygnotus (talk) 04:38, 4 June 2025 (UTC)
    I do not have access. It's probably WMF-only. –Novem Linguae (talk) 04:50, 4 June 2025 (UTC)
  • Are we for real?? Just last week I noted to staff that any rollout of LLM content would be unacceptable to the community, and now we're suggesting integrating it into a core aspect of content? Leads are already a shortened version of a page. The best leads have been carefully crafted by dozens of editors and represent some of the best content in the world. Using LLMs to generate what is effectively a worse lead will only amplify problems like at Sambhaji (angry readers read a lead but not the body). Devs, please realize that we are not going to accept integrated LLMs, and stop wasting time on such boondoggles. I will strongly oppose the integration of this feature into EnWP or any other Wiki. Our reputation is on the line here. CaptainEek Edits Ho Cap'n! 23:09, 3 June 2025 (UTC)
    Oh, and to be clear: I absolutely, categorically, totally oppose the 2-week testing period. I support removing it via site CSS, or just blacking out the site. Or better yet, the Foundation can back down from this foolish course of action. CaptainEek Edits Ho Cap'n! 23:22, 3 June 2025 (UTC)
    Thanks @CaptainEek, it's really heartening to see an arb saying this. -- asilvering (talk) 02:19, 4 June 2025 (UTC)
    An arb's view (whether for or against) has no more weight than that of any other single editor. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:10, 4 June 2025 (UTC)
  • I feel like people seriously underestimate the brand risk this sort of thing has. Wikipedia's brand is reliability, traceability of changes and "anyone can fix it". AI is the opposite of these things. Public sentiment (in some corners) is reacting against the AI trend quite harshly. Projects like this threaten the brand good-will that Wikipedia has based on its unique strengths. The current climate gives Wikipedia a unique opportunity to be the answer to the AI-ification of everything. Why are we throwing that away? Bawolff (talk) 00:17, 4 June 2025 (UTC)
  • I am going to join the chorus of editors saying that this is a uniquely bad idea. But I'll focus on one thing from the survey: generic readers should not be the ones who decide whether the AI summary is "helpful". If I come to Wikipedia because I want to know who founded the Roman Empire, and AI tells me it was Tiberius, I would be eternally grateful... until I find out that it was Augustus. I don't doubt that this was created with the best of intentions, but I really hope that the WMF spends its resources elsewhere (for example, fixing the remaining mobile communication bugs). HouseBlaster (talk • he/they) 01:10, 4 June 2025 (UTC)
  • The use of AI today comes with a caveat that "AI can be wrong" and "AI should not be trusted 100%". I am not saying that Wikipedia is 100% right, but Wikipedia should remain more reliable than whatever Grok or ChatGPT can write up. If we use AI, we would go down to their level of reliability, which I don't think is the best way to go. We have seen many people, with both good and bad intentions, criticize the reliability of Wikipedia. Going down the AI route will only reduce our "perceived reliability" even more. SunDawn Hail Me! 01:20, 4 June 2025 (UTC)
    "Hail Me" and vaguely Iron Cross–shaped symbols. Yikes. LightNightLights (talkcontribs) 18:03, 4 June 2025 (UTC)
    My reply distracts from the AI, but I am surprised that no one has pointed this out. LightNightLights (talkcontribs) 18:08, 4 June 2025 (UTC)
    @LightNightLights What you see is probably the cross pattée, although Unicode describes it as a Maltese cross. The Iron Cross is different; compare this with that. After a discussion on their talkpage they replaced the "Hail Me" with something more appropriate. Polygnotus (talk) 03:58, 6 June 2025 (UTC)
    Wikipedia’s perceived reliability has been climbing in recent years. To implement LLMs or similar generative AI would mean a massive decline in that perceived reliability we have tried so hard to build up. To add to you saying “AI can be wrong”: it is made to give different replies even when asked the same question. I don’t believe the replies should differ; it should give me what it believes to be most likely correct. It probably would be a part of each page that doesn’t change each time you visit unless it’s edited, but the actual generation is what I worry about. I would like to point out, however, that I already dislike AI in most contexts, and so am probably only considering half the argument. I am, however, trying to be sincere: as a casual reader of Wikipedia who relies on it for those random facts you find nowhere else, AI should not be used. 2601:985:4003:7E20:4DF9:E16A:EB9E:4BB9 (talk) 04:40, 9 June 2025 (UTC)
  • Copying my comment from Village pump (WMF), I believe it is better here:
    I feel like Simple Article Summaries (SAS) are contrary to a lot of things readers want in an encyclopedia. Readers come to the site trusting that we can give them all the information they want, while (crucially!) substantiating everything we say with sourcing and adhering to NPOV. Other readers may have felt differently than I did when I decided to join this community, but without these two things, Wikipedia would be just another site.
    I've experimented with using AI on an encyclopedia. I've had it review my writing. I've asked it to write, with the intention of finding shortcomings in my own ideas (if I forgot to say something). Just today, I dealt with a user who has made over a thousand edits who cited sources that have never existed, at what appears to be the direction of an LLM. There is absolutely no evidence I've seen, either lived or in my line of work at an AI company, which would lead me to believe that an LLM can stick to the facts. Even the output in your survey is fraught with hallucinations.
    Likewise, using LLMs in my line of work, I've noticed the personality fluctuate in dramatic ways with model updates. I've tried my very hardest to correct it with a custom prompt, instructing it to use prose and maintain a neutral, skeptical perspective, but even this has not worked. There is absolutely no evidence I've seen, either lived or in my line of work at an AI company, which would lead me to believe an LLM can write neutrally. The most obvious example is WP:NOTCENSORED, whereas LLMs very much are.
    Yes, human editors can introduce reliability and NPOV issues. But as a collective mass, it evens out into a beautiful corpus. With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away. Scaledish! Talkish? Statish. 01:44, 4 June 2025 (UTC)
Every article on Wikipedia has a WP:LEDE which summarizes the article. We do not need another summary to summarize that summary above the summary. This is a solution looking for a problem. I won't necessarily disagree that some article ledes are overly technical, but the solution to that is to address it among the communities of editors which oversee those articles. Furthermore, if a possible Wikipedia reader is daunted by the language in a lede and is too incurious or thoughtful to try and understand the language presented, they can always use an AI tool on their own time to summarize it. Adding our own AI summary only serves as a barrier to understanding, and is not an improvement in any way. I cannot state strongly enough how much I oppose any implementation of generative AI on Wikipedia. A Cynical Idealist (talk) 02:01, 4 June 2025 (UTC)
Oh god no. What a horrible way to damage Wikipedia's brand image right when we need to be emphasizing it the most. Think of the signal you'd be sending to everyone on the internet who is currently thinking "at least Wikipedia is still written by humans". -- asilvering (talk) 02:12, 4 June 2025 (UTC)
  • Count me as a "yuck". I've been experimenting quite a lot with useful applications of LLMs in my professional work, and naturally also tried some Wikipedia-themed experiments. I've even specifically tried to generate leads for my GAs. The AI-generated leads are terrible and I have never used a word of one! Even if they were of acceptable quality, I agree with others that any use of AI to generate reader-facing content will torpedo Wikipedia's current reputation as the last bastion of actual facts on the internet. Leads are already supposed to provide an accessible, brief overview of the article. If the WMF feels they too rarely achieve that goal, they could fund an editathon with prizes like the World Destubathon. ~ L 🌸 (talk) 02:27, 4 June 2025 (UTC)
    Oh, thinking of things we already have: the Simple English Wikipedia also exists! I've edited there a little and it is definitely a project that could use more love. ~ L 🌸 (talk) 02:37, 4 June 2025 (UTC)
    Another crucial consideration: this cannot be treated as "just" a test. Wikipedia will become the "main character" of social media for a day, as a villain. Thousands of people will hear about the feature, not examine it for themselves, conclude that Wikipedia doesn't care about fact-checking any more, and never learn that the test ended. Anything that is implemented as user-facing on English Wikipedia must be treated as a fully complete feature that en-wiki fully endorses. (Also, apologies for so many instances of "staircase wit" -- it just keeps seeming worse the more I think about it.) ~ L 🌸 (talk) 02:46, 4 June 2025 (UTC)
    Good points, and agree with the ending parenthetical. CMD (talk) 02:52, 4 June 2025 (UTC)
  • Going to expand on my early "yuck" a bit, since I'm a bit dumbfounded nobody's brought this up yet. The limiting factor on the English Wikipedia - which is the Foundation's primary revenue source - is the time of good-faith volunteer editors. This is extremely reminiscent of short descriptions: generate simplified summaries of articles by some automated off-wiki process, find out they're inevitably terrible, foist them off on the "infinite supply of volunteers" to fix. That was, I'm not even sure when, sometime before March 2017 at the very least, and we're only now starting to catch up. At least then, someone, somewhere, thought the descriptions would be usable as-is. You don't have that excuse this time, and we're not going to put up with it again. —Cryptic 03:14, 4 June 2025 (UTC)
Gross. If this happens on Wikipedia, I'm done. What makes Wikipedia unique in 2025 is that we don't accept unedited AI garbage. If a non-wiki-editable process is responsible for showing content on Wikipedia, you have successfully killed the website. It's "Wiki"pedia, for goodness' sake. JackFromWisconsin (talk | contribs) 03:34, 4 June 2025 (UTC)
@JackFromWisconsin, fwiw, they do appear to be planning on allowing editors to edit it. In my opinion, this is even worse, because then they're generating a huge and completely unnecessary cleanup problem for volunteers to handle, but there you are. -- asilvering (talk) 03:36, 4 June 2025 (UTC)
I saw that on the survey. Just a completely unnecessary use of AI. Our backlogs are long enough as is. We don't need a "check AI summaries" backlog too. If a new gadget were to be developed to show human-made summaries, that'd be okay. (But arguably unnecessary; that's what the LEAD is for!) But the moment AI gets involved, it will completely ruin the process. There are only so many volunteer hours. JackFromWisconsin (talk | contribs) 03:40, 4 June 2025 (UTC)
@JackFromWisconsin Think of the poor multi-billion dollar AI company who now has to pay up to 2 dollars an hour for people to correct the output of their model! They really deserve our help, for free. Polygnotus (talk) 03:42, 4 June 2025 (UTC)
One of the options on the survey said that these edits might not even be listed (so no edit history). Do we want it to be easier for people to vandalize something already likely to be a false AI hallucination? 2007GabrielT (talk) 12:21, 4 June 2025 (UTC)
  • I'm curious about what Thryduulf and WhatamIdoing's thoughts are on all this. They've been advocates for AI in past Village Pump discussions, so it'd be interesting to hear from a different perspective. Some1 (talk) 03:54, 4 June 2025 (UTC)
    I don't advocate for AI. I advocate against creating unenforceable policies and guidelines about AI, and I advocate against editors making unprovable accusations that others are wrongly using AI, especially when those people are not native English speakers.
    For myself, I'd never enable something like this. In fact, I turn it off at every opportunity. If anyone reading this happens to have a contact at DuckDuckGo, please tell them that I'd like a way to turn off their AI that persists even in a private/incognito window. WhatamIdoing (talk) 04:30, 4 June 2025 (UTC)
    @WhatamIdoing please try https://html.duckduckgo.com/html/ Cramulator (talk) 03:10, 9 June 2025 (UTC)
    Thanks! WhatamIdoing (talk) 03:25, 9 June 2025 (UTC)
    @Some1 just as you did with WAID you have misstated my position: I do not advocate for AI, I advocate against policies and guidelines related to AI that are based on hyperbole, misinformation and assumptions rather than facts and evidence. I advocate against allowing (let alone encouraging) personal attacks on editors based on unprovable (and frequently incorrect) assumptions.
    As for the actual subject of this discussion, the idea of a non-technical ~5 sentence summary of a long article is a good one, iff that summary is always both accurate and neutral. Based on the evidence presented in this discussion, the current implementation of this idea does not consistently meet those requirements. Given that some of our articles are tagged as having disputed accuracy and/or disputed neutrality, I am not sure that it is possible to meet both requirements. Thryduulf (talk) 10:35, 4 June 2025 (UTC)
    While we are on this subject I should thank you for this erstwhile thankless task — lately I have been disturbed by the apparent willingness of many people to just say stuff about this subject, seemingly with no concern whatsoever for whether it is true or false, on the general idea that it owns the libs/cons/bros/sises/etc. jp×g🗯️ 13:33, 5 June 2025 (UTC)
  • No charlotte 👸♥ 04:07, 4 June 2025 (UTC)
    Indeed, no. Andre🚐 04:10, 4 June 2025 (UTC)
  • I received the invitation to take the survey and was quite displeased when I saw what it was about. I have major concerns regarding the reliability of LLM-generated summaries, the extra workload maintaining their accuracy would create for human editors, and the emissions/environmental impact of LLMs. I don't think any of the potential benefits outweigh the costs here and would urge any WMF staff reading this to reconsider this idea. Ethmostigmus 🌿 (talk | contribs) 04:09, 4 June 2025 (UTC)
The view of the Wikimedia Foundation on Wikipedia is evident in the framing of this survey. The first two suggestions for adding "simple summaries"—indeed, an obfuscation of the significant role that artificial intelligence wields in this process—suppose that administrators should be responsible for managing simple summaries. At best, that is a dangerously negligent perspective; at worst, it is woefully scornful of a community that has persisted for nearly a quarter century without the idea that administrators should supersede content. What has been proposed is a box that is invisible to many editors who have not opted to use Vector 2022 and which appears to be difficult to edit. elijahpepe@wikipedia (he/him) 04:39, 4 June 2025 (UTC)
Good point, I also found the proposals that admins could be responsible for the AI tasks to be alarming and almost shocking. "Admin effort" is one of our most precious resources, and it shows a real lack of understanding about en-wiki's needs and priorities to propose inventing a new task for admins to deal with, let alone one so spirit-killing as AI cleanup. ~ L 🌸 (talk) 04:55, 4 June 2025 (UTC)
I got so many thoughts about this, and they're all "fuck LLMs, turn back right now". LilianaUwU (talk / contributions) 04:31, 4 June 2025 (UTC)
Done editing if this is enacted. Best space on the Internet and of course you have to fuck it up Toffeenix (talk) 05:35, 4 June 2025 (UTC)
  • I just got the survey, responded to it (I hate the idea with a burning passion and made sure to share my opinion in the survey) and immediately searched for the discussion because I knew there'd be one. There are so many problems with this whole idea, and they all boil down to "AI that we as editors of the website can't directly control". Of course, directly controlling an AI to output a specific summary defeats the purpose of using an AI. Also, from MOS:INTRO: The lead section should briefly summarize the most important points covered in an article, in such a way that it can stand on its own as a concise version of the article. So we already have a "simple summary", and it's one that we can directly edit, thus maintaining both the quality of the encyclopedia and the trust those who use Wikipedia have in us. Why do we need a second one?
    Side note: When I did the survey, the order of responses, which ranged from "Good" on the left to "Bad" on the right for the first few pages, got switched so "Bad" was on the left and "Good" was on the right on the last page. I'm glad I happened to glance (without even really meaning to) at those responses or I wouldn't have even noticed the (accidental?) switcheroo. Gestrid (talk) 05:44, 4 June 2025 (UTC)
  • LLMs are notoriously bad at summarising. Abusing vulnerable people (mobile visitors) as guinea pigs for an experiment which is unlikely to collect any useful information is, at best, rude. The WMF should stick to its better ideas about LLMs, which are about using them for anything but prose generation. For example, instead of presenting a summary of the article as if it were true, you could pick some sentences from the article and quote them verbatim, possibly ask some questions about them. Nemo 06:08, 4 June 2025 (UTC)
  • Oppose in the strongest of all possible ways. Are y'all (by that, I mean WMF) trying to kill Wikipedia? Because this is a good step in that direction. We're trying to keep AI out of Wikipedia, not have the powers that be force it on us and tell us we like it. - The Bushranger One ping only 06:05, 4 June 2025 (UTC)
I can't help feeling that what this would create is a duplicate lead that editors would not just have to verify but also moderate and curate. The lead of some articles is the result of potentially long-winded discussion, and this risks requiring all that work again. These summaries would also require oversight to ensure they are not targeted by vandals or POV pushers, as they would be highly visible. The leads of many articles should be shorter and less cluttered, but something that would in effect duplicate the lead (and the work required to maintain the lead) doesn't seem like the greatest idea. -- LCU ActivelyDisinterested «@» °∆t° 06:21, 4 June 2025 (UTC)
The mass addition of unverified AI text by any editor would also seem against current policy. -- LCU ActivelyDisinterested «@» °∆t° 06:37, 4 June 2025 (UTC)
Talk:Gaza_genocide#RfC:_Genocide_in_wikivoice/opening_sentence offers a recent and prominent example of how much discussion goes into deciding how article leads are worded. ViridianPenguin🐧 (💬) 07:16, 4 June 2025 (UTC)
These kinds of discussions are common inside and outside of contentious topic areas. Even seemingly simple articles may have had weeks of discussion on how to correctly summarise the article in the lead. -- LCU ActivelyDisinterested «@» °∆t° 10:15, 4 June 2025 (UTC)
  • Absolutely not. All of the work that countless people put into these articles, thrown away in favour of LLMs we don't even really understand. Besides that, we have the Simple English Wikipedia for a reason. Weirdguyz (talk) 06:30, 4 June 2025 (UTC)
  • Absolutely not This duplicates the existing summary function of our article leads. If readers find that an article lead poorly summarizes its content, that is one of many avenues by which we hope to recruit them into becoming editors who improve the content they care about. As HouseBlaster notes, when other LLMs ask for thumbs up/down feedback, they are assessing whether you liked the response, not whether you double-checked its accuracy. To implement that here will train models to reinforce reader preconceptions, rather than accurately summarize our articles. As CMD demonstrates below, even the dopamine example contains hallucinations that will severely diminish our reliability, so even a mobile trial is untenable. ViridianPenguin🐧 (💬) 06:52, 4 June 2025 (UTC)
    To reiterate A Cynical Idealist's criticism over on Meta, Simple Article Summaries#Userlytics experiment presenting the flawed dopamine summary to eight subjects is laughably insufficient evidence to proceed with showing AI summaries to ~10K of our readers, and no explanation is given for ignoring the fact that a whopping quarter of respondents disliked the feature. ViridianPenguin🐧 (💬) 07:08, 4 June 2025 (UTC)
  • Are you serious? We're trying to discourage AI use, not promote it. I think we are operating at cross-purposes. Liz Read! Talk! 07:06, 4 June 2025 (UTC)
  • Not a good idea. Pavlor (talk) 07:15, 4 June 2025 (UTC)
  • Very bad idea. First, LLMs have inherent reliability issues (as demonstrated above, the summary in the demo video used to promote the idea is wrong). The effort to correct summaries would need people willing to check the article in detail; it would swallow up editor time that could better be spent on improving article leads. Second, it would do a lot of damage to Wikipedia's brand. On social media, people are praising the existence of WP as something that has not turned into AI slop. Even if AI were only used for small summaries, it would raise suspicions and upset people. OsFish (talk) 07:26, 4 June 2025 (UTC)
  • Strongest possible oppose. We do not want, need, or wish for AI on this project. At all. Ever. Stifle (talk) 07:51, 4 June 2025 (UTC)
  • So in a world where Wikipedia is one of the only respected bastions against hallucinated bullshit information by LLMs, our response is to implement them onto the project??? Not a hope in hell. ser! (chat to me - see my edits) 07:53, 4 June 2025 (UTC)
  • Absolutely not for obvious reasons that have already been written about above. JavaHurricane 08:00, 4 June 2025 (UTC)
  • I have my doubts that trying to duplicate Google's work (or "work"), or the efforts editors writing leads engage in, is worthwhile. Having AIs that can read an article and write a WP:LEAD-compliant lead would be something actually useful, as that can be fairly difficult for people. Or AIs that you feed sources to, which then write or expand articles with them; that would be even better. One should avoid the tendency to develop AIs in imitation of existing efforts, because chances are you are duplicating existing work and reinventing the wheel for no good reason. Jo-Jo Eumerus (talk) 08:34, 4 June 2025 (UTC)
  • In my opinion, this would be a gross overstep of the bounds we typically expect the Foundation to adhere to. We are not paid, obviously, but if this is implemented over the objection of the community, the Foundation would effectively be signalling intent for Wikipedia to become one more publication that has fired their editors, in part or in whole, in order to put out AI slop. Alpha3031 (tc) 10:49, 4 June 2025 (UTC)
  • Oppose - the lead exists for a reason. AI summaries are extremely unreliable (as this thread constantly mentions), and it would ruin Wikipedia's reliability. VERIFIABILITY is one of our core policies - AI just hallucinates, leaves stuff out, and makes shit up. This is proof that the WMF does not care about the community of volunteers that work this website. PhoenixCaelestis (Talk · Contributions) 11:25, 4 June 2025 (UTC)
  • Strong oppose, including the trial. This is utter disrespect to both readers and editors, and dangerous to the point where we risk losing plenty of both. AI generated stuff is not only inaccurate and unreliable, it is also cheap and therefore almost worthless (since you can get it everywhere nowadays). Even the trial is highly problematic as it sends the wrong signal to the users (namely that Wikipedia became enshittified), and can therefore irreversibly damage Wikipedia's reputation. Let's keep Wikipedia AI-free – that's a strength, not a weakness. --Jens Lallensack (talk) 11:37, 4 June 2025 (UTC)
  • Oppose - Making AI generated summaries is exactly none of our business. Google are already on top of that with their AI Overview nonsense -- let's leave that stuff to them. DJ-Aomand (talk) 11:55, 4 June 2025 (UTC)
I don't understand the point. A piece of text is generated, which in best-case scenarios is duplicating already existing text, then editors review it and edit for quality? If they're going to be manually edited afterwards anyway, what's the purpose of automatically generating them? 207.11.240.2 (talk) 12:22, 4 June 2025 (UTC)
I'm very uncomfortable with both this proposal and with the direction of travel it suggests. This is either going to wastefully duplicate what's already there, or require scarce editor time to fix it up. Even if this is not affecting the underlying article content, it is generative AI material being presented to a reader under our name, and that is a line we should not cross. Andrew Gray (talk) 12:53, 4 June 2025 (UTC)
Oppose any movement towards the integration of generative AI on Wikipedia. This is a volunteer project based on community effort, always has been. We're the last best place on the internet because we're built on love, not automation. Don't ruin our reputation by falling victim to the craze for dumbing down everything to the lowest possible attention span - and then add insult to injury by doing it with generative AI. ♠PMC(talk) 14:02, 4 June 2025 (UTC)
  • Volunteers have spent tremendous amounts of time both crafting the lead sections of articles, and trying to keep chatbot-generated junk out of articles in general. Now, here's this proposal, to flip a middle finger at both efforts, and say "Hey, we'll put some chatbot-generated junk above the actual leads the volunteers worked so hard on building consensus for!". I do not believe I have the words to express how disgusted I am at this proposal. Seraphimblade Talk to me 14:07, 4 June 2025 (UTC)
  • Very strongly opposed to this. LLMs are nothing but unethical, mindless plagiarism-machines built with the stolen work of actual humans. Keep this techbro bubble out of Wikipedia. -- kazerniel (talk | contribs) 15:31, 4 June 2025 (UTC)
    Everything you have said in this comment is false. jp×g🗯️ 13:26, 5 June 2025 (UTC)
    @JPxG Even their claim that they are very strongly opposed to this? Polygnotus (talk) 13:28, 5 June 2025 (UTC)
    Yeah, I guess the timestamp is true as well. jp×g🗯️ 14:04, 5 June 2025 (UTC)
  • What the hell? No, absolutely not. Not in any form or shape. Not on any device. Not on any version. I don't even know where to begin with everything that is wrong with this mindless PR hype stunt. —  HELLKNOWZ  TALK 15:59, 4 June 2025 (UTC)
  • No, no, no This will destroy whatever reputation for accuracy we currently have. People aren't going to read past the AI fluff to see what we really meant. --SarekOfVulcan (talk) 17:14, 4 June 2025 (UTC)
  • Validation issues An experiment seems reasonable as our articles can often seem impenetrable. And it's good to develop expertise and experience with AI tech as it is becoming omnipresent. But the idea of human validation may be difficult. One issue is that our articles may change frequently, and so the summary and validation would need to be redone frequently too. Another issue is that every page has a disclaimer saying that our content is not guaranteed to be valid, and so an assertion of validity would be sending a mixed message. Andrew🐉(talk) 18:17, 4 June 2025 (UTC)
To say publicly what I wrote in the survey: from a structural standpoint, an AI-generated summary seems pointless. The one paragraph summary of the article is the lead paragraph. The multi-paragraph summary is the entire lead. If AI can write a better one-paragraph summary than the lead paragraph, then that paragraph should just be the lead paragraph. Why have two summaries at the top of an article, especially if one is worse than the other?
A better deployment for AI here would be to have AI write a summary of the article and then compare its summary to the existing lead, and post the differences/analysis on the article's talk page, essentially an AI edit suggestion. Human editors can then review the suggestion and implement it (or not) as appropriate. AI could do this not just for the lead but for entire articles. Of course, this hinges on an AI that has sufficient accuracy to the point where its suggestions are actually useful.
A next-level deployment would be for AI to read the sources of the article, summarize those, and then compare its summary to the written article, and post suggestions for changes on the talk page.
A next-next-level deployment would be for AI to suggest new sources for the articles, or to read and summarize sources that aren't in the article, and then post edit suggestions to the talk page. AFAIK AI isn't ready for this level yet, but the first two suggestions above could probably be done today with reasonable accuracy using large and small language models.
I hope the WMF keeps developing, experimenting, testing, and iterating on these approaches. Levivich (talk) 19:17, 4 June 2025 (UTC)
+1. AI that helps editors address problems with Wikipedia is welcome (if it works). We already use it, in various places, to varying degrees of success, and to relatively little pushback. -- asilvering (talk) 19:25, 4 June 2025 (UTC)
I would not be in support of large AI-suggested edits. To properly evaluate what an AI writes takes a lot of brainpower; in my opinion it takes as much or more brainpower than just reading the sources/article yourself and writing it without AI. I think an AI that posts suggested edits on talk pages would just distract editors or get ignored, and reminds me of Microsoft trying to have AI write patches and post them on GitHub. There is a thread on Reddit called "My new hobby: watching AI slowly drive Microsoft employees insane" making fun of how terrible AI has been at writing software patches (7,000 upvotes in r/ExperiencedDevs). It would be sad if Wikipedia got into a similar situation. –Novem Linguae (talk) 20:08, 4 June 2025 (UTC)
I think "Of course, this hinges on an AI that has sufficient accuracy to the point where its suggestions are actually useful" covers that already. That's a big ask. -- asilvering (talk) 20:13, 4 June 2025 (UTC)
The problem is that many people do think LLMs are useful, and they can be good for certain low-hanging fruit, but they can also be dangerously wrong with no way to know. Andre🚐 21:42, 4 June 2025 (UTC)
+2 although I think we ought to aim at article-writing AI at some point. Especially since human editors can also err in many ways. Jo-Jo Eumerus (talk) 06:44, 5 June 2025 (UTC)
Please don't do AI summaries. I go to Wikipedia because I can generally trust it not to hallucinate. And I can already read. I am sick and tired of seeing AI summaries everywhere. Yuck. Deliamturner (talk) 18:02, 11 June 2025 (UTC)
I'm not sure what value any of this adds.
I love Wikipedia, ever since its creation, and have been a faithful monthly donor to support the one source of information that has been a steadfast anchor.
I am a steadfast believer in tech, and I personally use the current generation of generative AI and LLMs daily, but I do not think it is reliable enough to add value to Wikipedia. The lack of reliability and unpredictable hallucinations make it potentially harmful to people looking for real information. Additionally, checking the output would just mean reading the article anyway. Lastly, anyone looking for an AI summary of a Wikipedia page instead of reading it would most likely just ask ChatGPT externally with Search/Research functionality turned on.
This is not to mention the damage to Wikipedia's reputation and also the cost of it. 2601:240:C801:1410:F89C:AF4D:F327:1EDE (talk) 19:29, 11 June 2025 (UTC)
As an occasional donor, I will stop donating if Wikipedia moves to AI summaries. I mean, seriously? Why the hell would you waste my money on that? This is not a good use case for LLMs; you're essentially duplicating the lead of the article and inevitably introducing errors. So not only does this not add any value, but it actively removes it.
(Could Wikipedia use LLMs to rewrite the leads of some articles? Sure (and undoubtedly some editors are already doing this). However, you'd need to go through each and every one in order to make sure that they're accurate and not introducing hallucinations.) 24.212.207.82 (talk) 15:05, 12 June 2025 (UTC)
Oppose. A theft of creators' work, bad for the environment. I'll stop my monthly donations if they are introduced. StevenJLilley (talk) 19:37, 11 June 2025 (UTC)
This is not true. jp×g🗯️ 06:12, 12 June 2025 (UTC)
@JPxG You can't seriously dispute that AI models are trained on copyright infringing materials, and are a net negative for the environment? Polygnotus (talk) 06:15, 12 June 2025 (UTC)
Both of these statements are completely false, yes -- they use trivially less water and electricity than listening to streaming music, using GPS navigation, et cetera. The only situation in which it is a "net negative for the environment" to use LLMs is one in which special pleading is used to apply a completely novel set of moral standards used for no other human activity.
I don't know if you are familiar with what copyright is, but there is no such thing as a "copyright infringing material", and this phrase is facially nonsense -- the materials do not themselves infringe copyright by existing. Even in the case where you mean "material obtained without permission from copyright holders", it does not infringe copyright to write text based on having previously read text in violation of copyright. As far as I am aware, there is no legal precedent whatsoever saying that having a computer read a book establishes any subsequent creation as a derivative work of the original.
There are plenty of objections it's possible to raise to this proposal that do not involve lying. jp×g🗯️ 14:01, 12 June 2025 (UTC)
No, nada, nope. We already have summaries of articles - they're called lead sections. MiasmaEternal 04:45, 12 June 2025 (UTC)
No AI Please!
I contribute here because this place is made by humans.
If AI is in Wikipedia I am out. Arutoria (talk) 06:34, 12 June 2025 (UTC)
Unacceptable, flat out. I assumed Wikipedia would naturally avoid this trend. Please, Please, cancel this immediately. Orbitingteapot (talk) 13:45, 12 June 2025 (UTC)
Anybody on the team who thought this was a good idea for even a second should step down. Absolutely braindead. 184.147.101.90 (talk) 14:01, 12 June 2025 (UTC)
I sent an email and was directed to post here. As Wikipedia is one of the few remaining online resources for accurate information written by human beings, I would like to make clear that my continued support as a donor of this organization will cease if the use of LLMs or other such generative AI is implemented on the website. 2600:1015:A030:8DB2:7DE3:8A29:303:7F68 (talk) 15:06, 12 June 2025 (UTC)
Why would I want you to create a summary of my work that might be wrong? That defeats the entire purpose of my being an editor, which is to inform people. Hammy (talk) 22:36, 12 June 2025 (UTC)
  • No gen AI. Ever. Aside from the legal, moral, and ethical issues, a major contributor to the wiki's success is that it does not chase the latest trends. In my opinion, this "exploration" violates the principle that the WMF does not edit Wikipedia content. The misleading title and unilateral nature of the "experiment" demonstrate bad faith on the part of the foundation. James (talk/contribs) 23:00, 12 June 2025 (UTC)
Thank you for engaging with the community.
  • Why does the WMF development process engage with the community after development?
  • Can you confirm that this has no links to Knowledge/Google? The Google AI summaries are receiving criticism because they are poor. If editors check the output, that accuracy could increase. To improve this and other LLMs, AI companies are paying people to provide feedback. The concern would be that we are doing work for Alphabet.
  • Would it reduce people clicking through to Wikipedia, and a small percentage becoming editors?
  • How many articles are we talking about? Is this only in en? The concern is how much editor time would be needed.
  • Another concern is whether this is a good time, given the DC attorney general issue. Even if he is not approved, the next one will ask the same questions.
  • WMF has been concerned that enWP editors are white, male, and American. As LLMs have a cultural bias, do you see this as increasing the perception of a white male American bias? My big concern is that it takes the fun part out of editing, and creates a Reader's Digest Condensed Books version of the lede, which is our summary of the article, which is a summary of other people's summaries. But if there were an editor toolbar that allowed editors to create a lede as part of their process, or to use the AI in an integrated way (create a table based on these paragraphs), or to check a page for errors, then you might get traction. Wakelamp d[@-@]b (talk) 05:33, 5 June 2025 (UTC)
it takes the fun part out of editing. Indeed, and not only that: having to correct the output of an AI that makes the same mistakes again and again may be the quickest way to burnout. Yes, the most important thing, in the end, is to have fun. It's not about maximising clicks, which is what the WMF seems to be interested in. If it were, many more of us would spend much more time on core content rather than our highly specialised pet topics. Introducing AI to Wikipedia looks like an attempt to "optimise" us or our work to get more clicks – even though that's not even the principal goal for most of us personally. Interacting with AI is quite the antithesis of "having fun". I am here because I want to collaborate with humans to build something great together, and to help others become better writers. That is meaningful to me and makes it worthwhile – interacting with AIs is just not. --Jens Lallensack (talk) 17:30, 7 June 2025 (UTC)
Perfectly said. Plasticwonder (talk) 17:49, 7 June 2025 (UTC)

Well, it's easy enough to test, so I asked AI to summarize a Wikipedia article in one paragraph and then to compare its summary with Wikipedia's summary, and to tell me the differences between the two summaries. I did this for Today's Featured Article, 1880 Republican National Convention, using the TFA blurb as the example of Wikipedia's one-paragraph summary (although one could just as easily use the lead paragraph, or entire lead). I figure TFA is a good proxy for a top-quality one-paragraph article summary written by Wikipedia volunteers. I asked ChatGPT and Gemini to "summarize https://en.wikipedia.org/wiki/1880_Republican_National_Convention in one paragraph", and then to "compare these two paragraphs and tell me what's different between them". Here are their responses:

TFA one-paragraph summary

The 1880 Republican National Convention (June 2 to 8, 1880, Chicago) resulted in the nominations of James A. Garfield of Ohio and Chester A. Arthur of New York as the Republican Party candidates for U.S. president and vice president. Leaders in the early balloting at the convention, which deadlocked for several days, were former president Ulysses S. Grant, Senator James G. Blaine of Maine, and John Sherman, the treasury secretary. After the 35th ballot, Blaine and Sherman switched their support to a new "dark horse", Garfield, a congressman from Ohio, who had nominated Sherman. On the 36th ballot, Garfield won the nomination. After Arthur won the vice presidential nomination on the first ballot, the longest-ever Republican National Convention adjourned. The Garfield–Arthur Republican ticket narrowly defeated Democrats Winfield Scott Hancock and William H. English in the 1880 presidential election.

ChatGPT's one-paragraph summary

The 1880 Republican National Convention, held from June 2 to 8 in Chicago, was marked by intense factionalism and a prolonged deadlock. The leading contenders were former President Ulysses S. Grant, backed by the Stalwart faction favoring patronage; Senator James G. Blaine, supported by the Half-Breeds advocating civil service reform; and Treasury Secretary John Sherman, representing a more moderate stance. After 35 ballots failed to produce a nominee, a compromise emerged when Blaine and Sherman shifted their support to James A. Garfield, a congressman from Ohio not initially in the running. Garfield secured the nomination on the 36th ballot. To balance the ticket and appease the Stalwarts, Chester A. Arthur of New York was nominated for vice president. The Garfield-Arthur ticket narrowly defeated Democrats Winfield Scott Hancock and William Hayden English in the 1880 presidential election. Tragically, Garfield was assassinated in 1881, leading to Arthur's succession to the presidency.

ChatGPT's comparison of its summary ("Paragraph 1") with the TFA blurb ("Paragraph 2")

Here are the key differences between the two paragraphs:

  • Tone and Style:
    • Paragraph 1 is more narrative and descriptive, using phrases like “marked by intense factionalism” and “a compromise emerged.”
    • Paragraph 2 is more concise and encyclopedic, structured more like a historical summary.
  • Emphasis on Factions:
    • Paragraph 1 details the factions backing each candidate (Stalwarts, Half-Breeds, etc.).
    • Paragraph 2 omits faction names and focuses on the individuals.
  • Garfield’s Role:
    • Paragraph 1 mentions Garfield was “not initially in the running.”
    • Paragraph 2 notes Garfield “had nominated Sherman,” subtly emphasizing his presence at the convention before his surprise nomination.
  • Vice Presidential Nomination:
    • Paragraph 1 explains Arthur’s nomination as a move to “appease the Stalwarts.”
    • Paragraph 2 simply states Arthur won on the first ballot, omitting strategic context.
  • Convention Description:
    • Paragraph 1 highlights the prolonged deadlock with rich detail.
    • Paragraph 2 mentions it was the “longest-ever Republican National Convention” but without elaboration.
  • Garfield’s Assassination:
    • Paragraph 1 includes it and its consequence (Arthur’s presidency).
    • Paragraph 2 omits it entirely.

In summary, Paragraph 1 is more detailed and interpretive; Paragraph 2 is more formal and fact-focused.

Gemini's one-paragraph summary

The 1880 Republican National Convention, held in Chicago, Illinois from June 2 to June 8, 1880, saw the nomination of James A. Garfield of Ohio for president and Chester A. Arthur of New York for vice president. Despite Ulysses S. Grant, James G. Blaine, and John Sherman being strong contenders, Garfield ultimately secured the nomination after multiple ballots, with Arthur as his running mate. The Garfield–Arthur ticket went on to narrowly win the 1880 presidential election against Democrats Winfield Scott Hancock and William Hayden English.

Gemini's comparison of its summary ("Paragraph 1") with the TFA blurb ("Paragraph 2")

The two paragraphs describe the 1880 Republican National Convention, but Paragraph 2 provides more specific details and additional information compared to Paragraph 1.

Here are the key differences:

  • Format of Dates and Location: Paragraph 1 states "held in Chicago, Illinois from June 2 to June 8, 1880," while Paragraph 2 uses a more concise parenthetical format: "(June 2 to 8, 1880, Chicago)".
  • Detailed Nomination Process:
    • Paragraph 1 generally states, "Garfield ultimately secured the nomination after multiple ballots, with Arthur as his running mate."
    • Paragraph 2 offers much more detail, mentioning "Leaders in the early balloting at the convention, which deadlocked for several days," and specifies the "35th ballot," how "Blaine and Sherman switched their support to a new 'dark horse', Garfield," and that Garfield "had nominated Sherman." It also mentions "On the 36th ballot, Garfield won the nomination."
  • Arthur's Vice Presidential Nomination:
    • Paragraph 1 briefly notes "with Arthur as his running mate."
    • Paragraph 2 adds that "After Arthur won the vice presidential nomination on the first ballot."
  • Convention Duration/Length: Paragraph 2 explicitly states that "the longest-ever Republican National Convention adjourned," a detail not present in Paragraph 1.
  • Full Name of Democratic Vice Presidential Candidate: Paragraph 1 refers to "William Hayden English," while Paragraph 2 shortens it to "William H. English."

Helpful? I think so. It would probably be better if the prompt were improved to instruct it to write an encyclopedia-style summary, or even a Wikipedia-style lead. Novem's probably right that posting these on talk pages, particularly in volume, could be overwhelming, but it would be easy to create a toolforge-hosted python script that uses the AI APIs to perform these two steps for any given article, and thus generate "AI suggestions for improvement" on-demand for any given article... as long as somebody pays the API credits. But WMF has hundreds of millions of dollars and could afford to foot that bill, at least up to some reasonable cap. Levivich (talk) 21:51, 4 June 2025 (UTC)
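(For the curious, here is a minimal sketch of what such an on-demand helper might look like. It is purely illustrative and not an existing tool: the model name, the prompts, and the use of the TextExtracts API and the OpenAI Python client are assumptions made for the example, not anything the WMF or toolforge actually runs.)

# Rough sketch, not an existing tool: fetch an article's plain-text extract,
# ask an LLM for a one-paragraph summary, then ask it to compare that summary
# with the article's existing lead. Model name and prompts are illustrative.
import requests
from openai import OpenAI

API = "https://en.wikipedia.org/w/api.php"
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def get_extract(title: str, intro_only: bool = False) -> str:
    """Return the plain-text extract of an article via the TextExtracts API."""
    params = {
        "action": "query", "prop": "extracts", "explaintext": 1,
        "format": "json", "formatversion": 2, "titles": title,
    }
    if intro_only:
        params["exintro"] = 1  # only the text before the first section heading
    page = requests.get(API, params=params).json()["query"]["pages"][0]
    return page.get("extract", "")


def suggest_lead_improvements(title: str, model: str = "gpt-4o-mini") -> str:
    """Summarize the article, then diff that summary against the current lead."""
    article = get_extract(title)
    lead = get_extract(title, intro_only=True)
    summary = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": "Summarize this article in one paragraph:\n\n" + article}],
    ).choices[0].message.content
    return client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": ("Compare these two paragraphs and list the differences:\n\n"
                               "A (existing lead): " + lead +
                               "\n\nB (new summary): " + summary)}],
    ).choices[0].message.content


if __name__ == "__main__":
    print(suggest_lead_improvements("1880 Republican National Convention"))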

If we're just going to ask the AI to write in the style of a Wikipedia lead, what's the use in having the AI? Wikipedia-style leads are already present. The Morrison Man (talk) 22:05, 4 June 2025 (UTC)
@The Morrison Man: The use of the AI is that the AI may have good suggestions for improvement. For example, ChatGPT's comparison above identifies the following possibilities for improving the TFA blurb:
  • name the factions backing each candidate (Stalwarts, Half-Breeds, etc.)
  • explicitly state that Garfield was not initially in the running
  • explicitly state that Arthur's nomination was a move to appease the Stalwarts
  • mention that Garfield was assassinated the following year, making Arthur president
Editors can debate whether any of these changes are, in fact, improvements to the TFA blurb, but it seems to me to be a pretty helpful list of things to consider when thinking about how to improve the TFA blurb. And it took less than one minute -- way less time than it would have taken me to read the article and the TFA blurb and figure out suggestions for improvement. That's what AI can do, today. Levivich (talk) 22:26, 4 June 2025 (UTC)
Reading the lead would've gotten you all this information on-site... The Morrison Man (talk) 22:30, 4 June 2025 (UTC)
Well first, that's flat not true. The lead doesn't have the third or fourth bullet points. Second, none of it is in the lead paragraph. Third, duh, reading the 4-paragraph lead would give more info than the 1-paragraph TFA blurb, and reading the article would certainly have all that info, but this demonstration is about improving the 1-paragraph TFA blurb, so pointing out that some of this info is in the lead isn't really relevant. Levivich (talk) 22:39, 4 June 2025 (UTC)
Agreed with your comments above -- this is a thoughtful way to build good tools for editors, rather than automating anything that shows up directly for readers. In theory leads are a good arena in which to explore such editor-facing tools, as anything in them should appear elsewhere in the article. And an internally-consistent way to think about "improving the reader experience" is to identify editor backlogs and improve how we work through them [because almost every editor backlog is about improving the reader experience!]
But that's for another discussion. – SJ + 03:32, 11 June 2025 (UTC)
I also tested it with a few articles I know well, and if asked to keep the summary short it generally did well. Longer "summaries" meant more hallucinations, though. Cremastra (uc) 21:55, 4 June 2025 (UTC)
as long as somebody pays the API credits. Look at wikitech:Machine Learning/LiftWing. They don't need to pay for API credits; they spend a significant amount of money to run various models themselves. Polygnotus (talk) 22:00, 4 June 2025 (UTC)
An informal test does not prove the absence of hallucinations. They are insidious and they may appear in unpredictable places. Andre🚐 22:40, 4 June 2025 (UTC)
  • This is a bad idea. While AI-generated summaries of Wikipedia content are potentially useful and, given our free licensing, ultimately inevitable, there is no reason for the Foundation to be generating or hosting them. The primary role of the Foundation should be assisting volunteers in the creation and curation of content. Since it's free, delivery in digestible packets mostly takes care of itself. Eluchil404 (talk) 00:05, 5 June 2025 (UTC)
I see below that the WMF has already decided to pause the idea. However, I really think it should just be scrapped altogether rather than "reformatted", and I believe that any attempts by the WMF to push through with it in any capacity, despite community opposition, will not end well. Generative AI is already very fond of hallucinating, even when trained on proper data, and promoting technology that encourages such things would only hurt the project's reputation. I get the idea, I understand that there are good intentions behind it, and I do think that AI has its use cases (not on-wiki though). But now is not the right time, and until AI gets past its hallucinations, I don't think it ever will. Narutolovehinata5 (talk · contributions) 01:59, 5 June 2025 (UTC)
  • I'm not sure what I could say that hasn't been already said above, but I strongly oppose this as well. Relativity ⚡️ 02:35, 5 June 2025 (UTC)
  • Question? A question about accessibility. When I watched the demo video, it looked like the default position for the "simple summary" was collapsed. For our visually impaired readers and editors who use screen readers to access Wikipedia, would this summary be available to them/us? Because in order to view and/or hear the summary, it requires interaction (clicking on it to open it), which obviously screen readers are not capable of. And per our guidelines at MOS:COLLAPSE, content in an article (if this summary is to be considered content) should never be collapsed by default. So how would this feature be beneficial to our visually impaired readers/editors if it is not accessible? We have alternative text for images, so has there been any consideration given by the developers to alt text for this feature if it is to remain collapsed by default? Isaidnoway (talk) 07:47, 5 June 2025 (UTC)
    @Isaidnoway The video itself is already inaccessible for a large group of people. Polygnotus (talk) 08:11, 5 June 2025 (UTC)
    As is the survey, but I would still like to hear their response. Wikipedia already has enough problems with accessibility issues, so adding another one is not ideal. Isaidnoway (talk) 08:39, 5 June 2025 (UTC)
    @Isaidnoway Regarding your first question. Screen readers are capable of interacting with collapsed interface elements, as long as it's implemented correctly, e.g. that button elements are marked as interactive and respond to keyboard actions and not just mouse clicks. For example, if you're using the "Reply" button to reply in this conversation, the reply tool has an "Advanced" button following the comment box, which should be fully compatible with your screen reader, and which should reveal some additional options when used. In MediaWiki development we use libraries called Codex and OOUI, which make this easy to do or even do it automatically, so I would expect any new features to be accessible by default. Matma Rex talk 15:35, 5 June 2025 (UTC)
  • I think this would be a quite bad idea for a first step towards integrating a new technology — especially one that many people passionately hate (whether for intelligent reasons or nonsensical/false/superstitious ones). Essentially, the pitch here is the lead: to supplant the single most important, visible, sensitive and contentious part of the entire article! Surely, if a surgeon comes up with a new parkour move, you don't want them trying it out for the first time on your aorta. Can't we, like, try this on something that doesn't instantly kill us in the event of an error? jp×g🗯️ 13:16, 5 June 2025 (UTC)
  • Oppose A terrible, monstrous idea. LLMs certainly have their uses, but not for this. Carlstak (talk) 17:48, 5 June 2025 (UTC)
  • No. Just no. – Every reason why this would be a catastrophic decision has already been said above, based on the fact that it is just a conceptually terrible idea. We need help from the WMF regarding LLM-generated content, but to fight it back, not to include it in the most prominent spot of every article. Choucas0 🐦‍⬛💬📋 08:37, 6 June 2025 (UTC)
  • Strong oppose. AI-generated content does not have any place in articlespace and I am coming around to the idea that it doesn't have any place in project discussions either. LLMs are not reliable sources, and so this is as clear a WP:5P2 violation as it gets. It is also very likely to plant a seed for legal questions (is this going to be on BLPs? edit 6/7 -- articles on BLPs seem to have been filtered out, but not articles that contain statements about them) and almost certain to result in immediate and massive public backlash when the media gets wind of it.
    I also have reservations about the experiment from which this terrible idea sprung. This is based on the opinion of eight (8) people. Are those eight people in the room with us right now? There seems to be a strong and near-unanimous consensus against this from far more than eight people. It seems to also be based on this presentation at Wikimania 2024, the quality of which is unclear because no one bothered to upload the slides despite it being almost a year. Gnomingstuff (talk) 17:56, 6 June 2025 (UTC)
  • Oppose The first and only thing I ever learned from machine-generated summaries is to close my eyes on sight and hit the page-down key, even at the cost of skipping some first info paragraphs. --CiaPan (talk) 06:14, 9 June 2025 (UTC)
  • Strong Oppose. Nope, definitely not. Useight (talk) 13:36, 9 June 2025 (UTC)
  • With all due compassion and respect for the WMF people who I'm sure are trying to be innovative and responsive to trends, Heck No. The potential benefit to our readers is minimal since every major search engine already provides these summaries generated from our content. The potential harm to the project is great both in alienating those of us who have invested thousands of human-hours of editing into building this resource, and in damaging the reputation and quality of our content. I don't claim any authority over WMF nor do I claim to speak for the entire community, but want to make it clear in the strongest possible terms that if the WMF incorporates AI-generated summaries into Wikipedia articles they will do so in blatant disregard of the wishes and counsel of myself and many others. -- LWG talk 02:57, 11 June 2025 (UTC)
  • Not in a million years. Both for the test AND any future use of generative AI on this site. Ethical, legal, and environmental concerns aside, LLMs are incapable of producing anything but inaccurate slop. "Hallucination" is just a fancy word to hide the fact their technology is garbage. I won't so much as edit a typo again if Wikipedia ever implements this. AmityBlamity (talk) 18:11, 11 June 2025 (UTC)
    This is not true. jp×g🗯️ 06:11, 12 June 2025 (UTC)
    Do you have an argument? A source? The Morrison Man (talk) 11:41, 12 June 2025 (UTC)
    Sure: here is one about the "environmental concerns", here is one about the "copyright concerns". I don't know how to address the claim that literally every LLM, past and future, is "incapable of producing anything but inaccurate slop" -- is there any definition of what this means, or is it an unfalsifiable expression of opinion? jp×g🗯️ 14:10, 12 June 2025 (UTC)
    Completely agree. Even just for the shady way that training data is used without permission (and in completely non-transparent way), LLMs should never be used in Wikipedia. Let alone the fact that they are completely unreliable Ita140188 (talk) 12:45, 12 June 2025 (UTC)
  • Oppose any use of LLM content, at this point, on Wikipedia. — Jules* talk 18:44, 11 June 2025 (UTC)
  • Strong oppose AI has no place in Wikipedia, in any form. It threatens to poison the whole project and it should not be used at all. AI is unreliable, non-free, non transparent, controlled by corporations, biased, and partly based on Wikipedia itself (thus introducing a dangerous feedback). Using it is an affront to all the contributors that put all the effort to actually create new content instead of the regurgitations of language models.--Ita140188 (talk) 07:22, 12 June 2025 (UTC)
  • Oppose. Some articles are (unfortunately) controlled by spammers, so ironically AI summaries wouldn't be much worse for those articles, but we already have summaries (in the form of article leads) and this is an insult to the hard-working volunteers who write for free! And this is stupid, because we already spend much effort cleaning up AI hallucinations and copyvios! WMF, please listen to the community for once! AI will vomit up crap and tarnish our reputation! Thanks. 123957a (talk) 19:54, 12 June 2025 (UTC)
  • Oppose, Oppose, and Oppose again We contribute one way or another to Wikipedia because it is fun. If WMF decides to dumb it down by using AI summaries it may as well shut up shop. Take the fun away and we go away.
I have an oven with a WiFi facility. That, too, is a solution looking for a problem. Apparently I can leave my cold, raw food in it all day going quietly rancid, and use my phone to switch it on when I am on my way home, thus poisoning all who eat the (now cooked) food. 🇵🇸‍🇺🇦 FiddleTimtrent FaddleTalk to me 🇺🇦‍🇵🇸 20:11, 12 June 2025 (UTC)
@Timtrent I think the idea is that you can remotely preheat the oven so that it's ready for your frozen pizza when you get home (although I do remember reading a couple of years ago about someone who was making an oven with a built-in refrigerator). --Ahecht (TALK PAGE) 20:25, 12 June 2025 (UTC)
@Ahecht Such scope for misuse! And this is the route WMF wants us to go down. But my oven manual says I can leave my lasagne in it to await my command to cook it for when I get home! Perhaps I should thaw the pizza in my WiFi-enabled tumble dryer, another solution looking for a problem, and then use the robot vacuum cleaner to transport it to the oven. 🇵🇸‍🇺🇦 FiddleTimtrent FaddleTalk to me 🇺🇦‍🇵🇸 20:42, 12 June 2025 (UTC)
  • Hell no. No AI. No AI summaries, no AI translations. We fight to keep AI slop out of Wikipedia, so why is WMF trying to push it in? If AI is allowed we will lose reputation, we will lose readers, we will lose editors, and with that we will lose the whole project. Also WMF should keep out of content generation in general. --Lupe (talk) 20:32, 12 June 2025 (UTC)
    Absolutely agree. The problem is that most of the WMF does not really care about the project in the long term, and does not even understand the community behind it, because they were never really part of it. This needs to stop. The only way is to massively cut funding to the WMF. Running servers is <10% of the current budget, and that's what the WMF should stick to. Proceeds from the Endowment Fund should be enough to run the servers without further donations necessary for the foreseeable future. Ita140188 (talk) 20:53, 12 June 2025 (UTC)
  • Oppose I'm tired of AI being pushed into everything these days. AI can be a useful tool in certain, specific situations, but this is not one of them. Google has become nearly unusable for me because of those AI-generated summaries, and I'd hate to see Wikipedia head in the same direction. Keep Wikipedia human please. Paprikaiser (talk) 22:06, 12 June 2025 (UTC)
  • So the plan is to make an AI that writes a new lead, but needs to be moderated by humans, where humans have presumably already written a lead. This is something existing AIs can already do. So I'm not seeing the cost/benefit there where it's worth spending volunteer time on. This looks a bit like starting with the desire to integrate AI, and then working backward to find a place to fit it in. GMGtalk 14:17, 13 June 2025 (UTC)
Now now... are y'all planning on replacing your volunteer editors with AI 🙄? Y'all do realize that we contribute here for free, no? 𐩣𐩫𐩧𐩨 Abo Yemen (𓃵) 16:49, 13 June 2025 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

In the app, if you scroll down to the External Links section of the article pedophilia and open the wikiversity link, it actually opens https://en.wikiversity.org/wiki/Pedophilia:%20Innate%20or%20Learned? rather than https://en.wikiversity.org/wiki/Pedophilia:%20Innate%20or%20Learned%3F and because the browser interprets ? as a separator, it tries to open v:Pedophilia: Innate or Learned, which doesn't exist. Weirdly though, it only happens in the app.

After some testing in the sandbox, it seems like this affects interwiki links, but not internal links. HyperAnd (talk) 12:50, 14 June 2025 (UTC)
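(Side note: the difference between the two URLs above is just the percent-encoding of the literal question mark. A minimal Python illustration of the encoding follows; this is purely hypothetical and not how the app actually builds its URLs.)

from urllib.parse import quote

title = "Pedophilia: Innate or Learned?"
# A literal '?' in a URL path must be percent-encoded as %3F; left bare, the
# browser treats everything after it as a query string, so the wiki resolves
# the wrong (non-existent) page title.
print(quote(title.replace(" ", "_"), safe=":_"))
# -> Pedophilia:_Innate_or_Learned%3F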

Seems like a bug you should report in Phabricator against whichever mobile app you're using. Anomie 22:18, 14 June 2025 (UTC)
Okay, done! At phab:T396942. Hopefully I didn't mess up the bug report. HyperAnd (talk) 23:03, 14 June 2025 (UTC)
@HyperAnd: I see your Phab report is about the Android app. I have the iOS app 7.7.5 (5231) on an iPhone 8 with iOS 16.7.11 and Safari. The app is not for Wikiversity so v:Pedophilia: Innate or Learned? opens in Safari and works correctly for me. Internal links like Quo vadis? stay in the Wikipedia app. I don't know the Android App. Does the link open in the app or a browser for you? If it's a browser then I think your report should say so and name the browser. PrimeHunter (talk) 23:34, 14 June 2025 (UTC)
 Done HyperAnd (talk) 23:52, 14 June 2025 (UTC)

Why is Zo nationalism in the backlog of pages needing an infobox conversion?

I hope I'm asking this in the right place. It's been in the backlog for a while and I don't understand why. If someone would explain it to me, I might be able to address other pages like this that I come across. I'd be really grateful. Oona Wikiwalker (talk) 00:44, 15 June 2025 (UTC)

@Oona Wikiwalker: Where are you seeing this? I cannot find anything about it in Zo nationalism, Talk:Zo nationalism or Category:Pages needing an infobox conversion. PrimeHunter (talk) 00:57, 15 June 2025 (UTC)
That's what I'm saying. Yet it's in the backlog category for pages needing an infobox conversion. What could be the cause? Oona Wikiwalker (talk) 01:23, 15 June 2025 (UTC)
@Oona Wikiwalker: Do you mean the last entry in your pages link? It says Talk:Zogam for me, not "Zo nationalism". What does it say for you, and what happens when you click it? I go to Talk:Zogam and stay there, but I have discovered that the corresponding non-talk page Zogam is a redirect to Zo nationalism. I don't know of anything which could cause the category page to say "Zo nationalism" or cause the talk page link to go there. Talk:Zogam was added to the category by placing {{New infobox}} on it in 2024 [1] when Zogam was an article and made its infobox with a wikitext table [2] instead of an infobox template. PrimeHunter (talk) 01:59, 15 June 2025 (UTC)
So, do you think I can just remove the tag, since there is no infobox to convert? It must have been placed in error... I thought about adding an infobox from scratch, but I'm not even sure what the article is about... a people? a political movement? a social movement? I don't think the article even knows what it's about. Oona Wikiwalker (talk) 02:22, 15 June 2025 (UTC)
I've removed it. The infobox that went with the "needs a better box" was also removed a couple weeks ago. Izno (talk) 02:38, 15 June 2025 (UTC)

Multiple pings for one AIV report

After making this report, I got the normal ping "Cannot send mention etc." for reporting an IP. However, I then got the same ping 2 minutes after that, and a 3rd identical ping another five minutes later. Any reason this might have happened? CMD (talk) 15:21, 15 June 2025 (UTC)

Wiki🖤︎

Does anyone know when and why the wikilove button was changed from a heart icon to text? I always accidentally click on it instead of history. Aaron Liu (talk) 17:08, 15 June 2025 (UTC)

Most likely an unintentional change due to a regression in vector-2022, see phab:T396923. – SD0001 (talk) 19:49, 15 June 2025 (UTC)

Citation in infobox does not work in visual editor

If you open this revision in VisualEditor, both the PBS citation in the infobox and the TIME citation after the first sentence in the lead of the article show up as number [1], even though they're not the same. The PBS citation in the infobox also doesn't show up in the "References" section and thus cannot be selected to edit using the visual citation/template editor. The issue doesn't occur in the source editor's preview pane or when viewing the article normally—then, the PBS citation is correctly labeled [1] and the TIME citation [2], and both appear in "References" as they should. 2A00:807:D3:9A9A:986B:2094:714B:898C (talk) 10:47, 14 June 2025 (UTC)

VisualEditor has limitations when references are inside template parameters. If you edit the infobox then the source text of the reference can be edited in the "Part of" field. PrimeHunter (talk) 11:06, 14 June 2025 (UTC)
Old bug report about this, still valid: T52474 In VisualEditor, references in templates cannot be reused and are numbered separately from references in the text. Matma Rex talk 09:55, 16 June 2025 (UTC)

Month in country category header

Ever since the idea of categorizing things as "[Month] [Year] events in [Country]" started catching fire, there's been a recurring issue at Special:WantedCategories where people create the category for one specific month and year in one specific country, but then fail to create the "[Year] in [country] by month" parent that gets autogenerated by {{Month in country category header}}, thus leaving a redlinked category behind — but because this is all happening in an incredibly scattershot fashion rather than by any kind of coordinated project, there's frequently only one month with its own category, such that the creation of the parent category can't be justified until there's more than just one month for it.

But in turn, because the category is autogenerated by a template, I'm left having to either wrap the template in {{suppress categories}} and then manually readd it to all of the appropriate bluelinked categories, or create the "[Year] in [Country] by month" category as a redirect to the base "[Year] in [Country]".

This isn't an isolated issue, either: there's always at least one instance (and sometimes several instances) of this every time WantedCategories updates with new redlinked categories.

Accordingly, could somebody more versed in template coding than I am modify that template to slap an #ifexist condition on "[Year] in [country] by month", and to automatically just use "[Year] in [Country]" if the month category doesn't already exist? Thanks. Bearcat (talk) 17:21, 13 June 2025 (UTC)

@Bearcat: An example is always good. I couldn't find a current example, maybe because you fixed them, so here is a hypothetical example. Suppose Category:June 1989 in Romania was created (similar to Category:June 2022 in Romania). {{Month in country category header}} would add the red Category:1989 in Romania by month. You want it to instead add Category:1989 in Romania. The #ifexist test would be in {{Month in country category header/inner core}} on Category:{{{year}}} in {{Country prefix the|{{{country}}}}} by month. But do we really want this instead of just creating the missing category? Category:2022 in Romania by month does exist and has five month subcategories, but is it so bad to only have one? It's part of a larger category navigation system. And if a red category isn't added then the category may not be created when it would have multiple month subcategories. You wrote "the creation of the parent category can't be justified until there's more than just one month for it". Is this based on a guideline, a discussion, or your opinion? PrimeHunter (talk) 21:13, 13 June 2025 (UTC)
It's been standard practice of the Wikipedia category system, for literally the entire history of Wikipedia, that with certain specific defined exceptions in certain specific defined circumstances (e.g. Category:Albums by artist is allowed to contain "Artist albums" categories for any artist even if they only ever released one album), categories are not supposed to be created for just one thing, and require several things before their creation is warranted. You can peruse CFD and plainly see that "category created for just one thing", or "category that was previously populated, but most of its contents have been deleted or redirected, so that it now has only one thing left in it", are seen as deletion rationales, and routinely succeed as deletion rationales, all the time.
And even if you want to decree that "[Year] in [country] by month" categories should be added to the "one-entry categories are permissible" exceptions list, they would need to be handled in some other way that prevents them from becoming the redlinked category cleanup's job to deal with, such as a bot that automatically creates any non-empty category of that type whenever one shows up. Bearcat (talk) 12:46, 16 June 2025 (UTC)

"[ reply]"

It looks like this is happening because <span class="ext-discussiontools-init-replylink-bracket">]</span> is not :last-child anymore.

Out of curiosity, what is <mw:dt-ellipsisbutton>...</mw:dt-ellipsisbutton> for? Nardog (talk) 03:39, 14 June 2025 (UTC)

For clarification, this is about the space in "[ reply]" at the end of talk posts when "Enable quick replying" is enabled at Special:Preferences#mw-prefsection-editing. PrimeHunter (talk) 23:55, 14 June 2025 (UTC)
Filed as T397011, thanks.
<mw:dt-ellipsisbutton> is internal markup that you shouldn't see :) We need to use placeholders like these, because they're shown in the user's language, while the rest of the page contents is shown in the wiki's language. It is used to display an additional button next to "Reply" with a dropdown menu of actions, including "Thank" (T366095) and "Report" (m:Incident Reporting System), but these features are not yet enabled here. Matma Rex talk 09:53, 16 June 2025 (UTC)
@Matma Rex: Thanks. Oh, that's neat about additional buttons. Is there a frontend API for dynamically adding a button after "reply"? Nardog (talk) 13:18, 16 June 2025 (UTC)
It seems there isn't. Only a PHP hook was added in gerrit:961809. You could easily enough add an item to the dropdown menu from JS when it is shown (OO.ui.infuse() it and go wild), but you can't make it appear when it isn't. Matma Rex talk 13:34, 16 June 2025 (UTC)

Citer not working

The gadget for automated filling in references https://citer.toolforge.org/ has died - anyone know what's wrong with it, and when it'll be working again? - MPF (talk) 13:25, 16 June 2025 (UTC)

Looks like that external tool is maintained by User talk:Dalba. Try their talk page. — xaosflux Talk 14:02, 16 June 2025 (UTC)
Seems like this is a known issue per wikitech:User_talk:Dalba#Help.._Citer_is_not_working. — xaosflux Talk 14:06, 16 June 2025 (UTC)
@Xaosflux Thanks! I've dropped a note there to say it's not working again - MPF (talk) 15:34, 16 June 2025 (UTC)

HotCat's Bug

Can't add Category:Date of birth missing (living people) via HotCat. HumanRight 22:45, 17 June 2025 (UTC)

@Human Right Wiki: On which page, and what happens when you try? PrimeHunter (talk) 23:22, 17 June 2025 (UTC)
On many pages, for example Robert M. Wallace (professor). When I add it with HotCat, the page refreshes but I can't see the category. HumanRight 23:29, 17 June 2025 (UTC)
@Human Right Wiki: You added it correctly in [3] and I see it. Have you enabled "Show hidden categories" as described at top of Category:Date of birth missing (living people)? PrimeHunter (talk) 23:59, 17 June 2025 (UTC)
I just added it in Shoumya Joyti. It shows in the history but I can't see it in the article. How can I activate "Show hidden categories"? HumanRight 00:08, 18 June 2025 (UTC)
@Human Right Wiki: At Special:Preferences#mw-prefsection-rendering. Do you not see a blue box with instructions at top of the category page? PrimeHunter (talk) 00:10, 18 June 2025 (UTC)
Yes. I just did it. I can see the category in articles now. Thank you for guiding me :) HumanRight 00:26, 18 June 2025 (UTC)

Fixing PageViews templates

I wonder if an iframe-type solution could be allowed for this, where some service produces embeddable pageview graphs, or if all pageview templates can be removed from talk pages. I don't think the Charts extension will support dynamic content, so porting it is a lost cause. GalStar (talk) 18:07, 17 June 2025 (UTC)

Right now Charts cannot fetch PageViews. Well, apart from the jank of letting a bot populate the data into the data namespace on Commons. That jank will also send the message to developers that a proper solution is not needed, so it is better to wait. When developers have been asked about it, they have said that since the chart is on a talk page it is not a priority. wikitech:SLO/Charts exists, however, so it might be supported at some point, but it will be one of the last things to be worked on. Snævar (talk) 19:07, 17 June 2025 (UTC)
However this applies to airport data, Template:Television Rotten Tomatoes scores and historical population data graphs as well. I'm wondering if lua modules will help with this. GalStar (talk) 05:04, 18 June 2025 (UTC)

"articles" vs. "pages" in WikiProject rating categories again

The latest run of Special:WantedCategories has been weighed down with over 200 redlinked categories being spawned by {{WikiProject Military history}}, of the "Something something pages" variety, where a category almost always already exists at the form "Something something articles" for the exact same value of something-something, and is being moved to "pages" on some pages while remaining at "articles" on other pages with the exact same template parameters on them. But the template itself has not been recently edited, which means that once again the redlinks are being smuggled in via a module.

But since this is only happening on some pages and not on others, I have no idea how to resolve this, as every solution I've tried so far has caused other problems. Moving a category from "articles" to "pages" broke stuff, redirecting a "pages" redlink back to the "articles" form broke stuff, and so on and so forth — so I just don't know what to do about them, but can't just leave them sitting red either. Today's update was already the second run in which these categories have failed to go away (and in fact increased in number compared to the previous update), so they absolutely have to be resolved one way or another because they're actively interfering with cleaning up other redlinked categories — if I go with the "ignore these and deal with everything else" approach, it's become untenably difficult to even find the everything else in the list because there are so many of these to ignore, so the solution cannot be "just put up with it".

I've already asked at Wikipedia talk:WikiProject Military history for somebody to address this, but there's been no response. Could somebody look into it and figure out how to fix it? Thanks. Bearcat (talk) 20:20, 16 June 2025 (UTC)

Bearcat, is this related to this bot request? — Qwerfjkltalk 20:24, 16 June 2025 (UTC)
Fixed by reverting an edit to Module:WikiProject banner. I have no idea why my fix worked. * Pppery * it has begun... 20:28, 16 June 2025 (UTC)
Indeed, that worked. Thanks. Obviously if there is a desire to move all WikiProject rating categories to "pages" across the board, that's fine if it happens consistently and gets dealt with systematically — but this, with the categories moving to redlinked versions on some pages and staying in bluelinked versions on others, is clearly not how such things should be done. So thanks again, I'd never have found that on my own. Bearcat (talk) 20:43, 16 June 2025 (UTC)
Further problems with populated redirects are coming up at: Category:B-Class Help pages, Category:C-Class Help pages, Category:Start-Class Help pages and Category:Stub-Class military history pages. Can anyone identify what's populating them and zap it? Timrollpickering (talk) 21:35, 16 June 2025 (UTC)
In my (limited) understanding of how this baroque system works deleting the redirects should cause the template to switch to the correct name. * Pppery * it has begun... 21:39, 16 June 2025 (UTC)
I've given that a try already but it doesn't seem to be working. The whole template population system is a total mess and these random changes keep making it worse. Timrollpickering (talk) 22:01, 16 June 2025 (UTC)
I tried deleting Start-Class help pages and deleting it seems to have punted its contents into Category:NA-Class help pages. Which technically isn't populating a redirect so solves the problem, but is not the desired end state either. I give up. * Pppery * it has begun... 00:30, 17 June 2025 (UTC)
Seems the module reversion plus deleting has done the trick. We must get better protection for the module so one user can't repeatedly create these messes and dismisses the problems caused by them. Timrollpickering (talk) 07:48, 17 June 2025 (UTC)
And one of the problems with Category:Stub-Class military history pages is that a lot of these pages aren't stubs but redirects. It seems pages were merged or redirected without the template entries being fixed. Timrollpickering (talk) 07:51, 17 June 2025 (UTC)

The check_fallbacks function that Pppery modified is designed to get the type of page and fall back to something appropriate in the event that a category does not exist. For example, if placed on a category talk page it would look to place the page in Category-Class TOPIC pages, but if that category does not exist then it will fall back to NA-Class TOPIC pages. Similarly, FM-Class will fall back to File-Class. This logic was never intended for WikiProjects which have opted out of the standard scale. These projects should really define all the custom logic that they want in their custom class masks. — Martin (MSGJ · talk) 13:22, 17 June 2025 (UTC)

In the case of the categories listed above:
  • For the Help project, these ought to be Category:B-Class Help pages, etc. because they are assessing the quality of help pages, not articles.
  • For the Military history project listed above, I suggest the issue is with the logic in Template:WikiProject Military history/class. I would guess that the template is allowing editors to assess redirects, disambiguation pages, etc. with a quality class (e.g. Stub-class) instead of automatically applying Redirect-Class or Disambig-class which the standard code does by default.
— Martin (MSGJ · talk) 13:29, 17 June 2025 (UTC)
@Pppery can you clarify why you reverted that change to the module? I believe it fixed an error which prevented the help pages getting assessed. You'll notice on Category:Help articles by quality that A, B, C and Start are all empty currently — Martin (MSGJ · talk) 18:58, 17 June 2025 (UTC)
I reverted that change because I observed that, when I previewed with that change reverted, pages currently in redlinked categories were no longer in them, and indeed my revert seems to have solved the immediate problem and caused pages to no longer be in redlinked categories. I can still reproduce this right now; if I undo my own edit and preview Talk:Atlantic Revolutions (among other pages), it displays several categories that don't exist. * Pppery * it has begun... 19:03, 17 June 2025 (UTC)
Okay. Well, the edit to the module was correct, but some opt-out projects need to improve the logic in their custom masks. I have removed the help project's quality ratings based on a discussion I saw on their talk page. And I will work with the military history project to improve their template. (This is the problem when projects opt out but then don't maintain their bespoke code!) I will reapply that edit to the module shortly. Thanks — Martin (MSGJ · talk) 09:20, 18 June 2025 (UTC)

Tech News: 2025-25

MediaWiki message delivery 23:35, 16 June 2025 (UTC)

Looking at the blog post about their API plans, I'm glad to see that the planners seem to have avoided some of the worst past mistakes. I still see some hints they haven't entirely given up on bad ideas like "API tokens" though. And I find it amusing that they're celebrating plans to add things to the REST APIs that the Action API has had since 2014 or 2015. Anomie 00:20, 17 June 2025 (UTC)
I nominated {{Database report}} for the Coolest Tool Award. If you're a data nerd like me and can write SQL, it's pretty powerful. I had trouble thinking of other tools to nominate because they had already won before. Any suggestions for other tools for me (or others) to evaluate and potentially submit? I already know of tools lists in various spots, so I'm looking for specific personal recommendations. What really excites you? Stefen 𝕋owers among the rest! GabGruntwerk 05:19, 18 June 2025 (UTC)
WP:FACTOTUM? — Qwerfjkltalk 11:40, 18 June 2025 (UTC)

Actual Wikipedia stub count and percentage

Can anybody give me an idea of roughly what the current stub count is, using 1500 bytes of readable prose as a threshold? Category:All stub articles has about 2,321,000, about 33% of all articles, but I could have sworn that when I asked a few years ago there were over 3 million and it was 57% of all articles. Of course there'll be many articles under 1500 bytes that are currently not stub tagged. ♦ Dr. Blofeld 08:24, 17 June 2025 (UTC)

If generated by an automated query, would that count exclude redirect and disambiguation pages? ThoughtIdRetired TIR 08:44, 17 June 2025 (UTC)
I wonder if the figure you saw before was based on talkpage stub-tagging? That total is currently ~3.9m, which would be ~55% of all articles, though there is potentially some double counting, mistaken tagging of disambigs/redirects, etc. It's certainly in the same range though.
I'd love to know an answer for this as well - a couple of years ago someone calculated prosesize for all FAs, but I don't think it's been done for all pages. Will have a look and see what I can find. Andrew Gray (talk) 12:21, 17 June 2025 (UTC)
Aha, here is the data I remember seeing. FAs and GAs only, unfortunately. Andrew Gray (talk) 20:03, 17 June 2025 (UTC)
Thanks guys, appreciate it. See @JustToBeClearIAmNotJoeBiden: it is about 55%; it was 57% before. ♦ Dr. Blofeld 09:55, 18 June 2025 (UTC)
@Dr. Blofeld So I've dug into this a bit more, and I think that a) that must have been the source of the figure you saw before, but b) on reflection it's almost certainly an overcount and I should have spotted that sooner!
Wikipedia:Content assessment has a table with a total of 7.86m rated articles (plus another 394k unrated). This is where we get the 55.6% figure - it assumes every talkpage stub tag matches a different stub article, 3.9m articles. But of course we don't actually have that many articles.
The biggest factor in the overcount seems to be that pages are counted multiple times under different importance ratings. If we count the number of distinct mainspace pages that have a stub class talkpage template, we get a more conservative 3.14m, or 44.8% of articles. (petscan query). I think this is going to be closer to the correct value.
(I did wonder if it was possible that talkpage tagging would have a lot of "lag" - pages that were once stubs but have since been improved. It turns out that 95% of the corresponding articles are under 5000 bytes pagesize, which does suggest that they are mostly still broadly "stubby", even if we can't quite map that to the prose-size threshold.) Andrew Gray (talk) 12:31, 18 June 2025 (UTC)
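
(For anyone who wants to reproduce ballpark figures like these, here is a small Python sketch using the Action API. It only compares the size of Category:All stub articles with the total article count; it does not apply the 1500-byte readable-prose threshold, which needs per-page prose measurement, and it will not match the PetScan-based talkpage counts above.)

  import requests

  API = "https://en.wikipedia.org/w/api.php"
  HEADERS = {"User-Agent": "stub-count-sketch/0.1 (example script)"}

  def category_size(title):
      # prop=categoryinfo returns the member count of a category in one request
      params = {"action": "query", "prop": "categoryinfo", "titles": title,
                "format": "json", "formatversion": 2}
      page = requests.get(API, params=params, headers=HEADERS).json()["query"]["pages"][0]
      return page["categoryinfo"]["pages"]

  def total_articles():
      # meta=siteinfo with siprop=statistics gives the overall article count
      params = {"action": "query", "meta": "siteinfo", "siprop": "statistics",
                "format": "json", "formatversion": 2}
      return requests.get(API, params=params, headers=HEADERS).json()["query"]["statistics"]["articles"]

  stubs = category_size("Category:All stub articles")
  articles = total_articles()
  print(f"{stubs:,} stub-tagged articles of {articles:,} total = {stubs / articles:.1%}")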

Huggle issue

Hi, I have been using Huggle for years, but yesterday, after logging into Huggle, the system displayed a "Wikimedia Foundation" error message and advised contacting Wikimedia system administrators. If anyone knows any Wikimedia system administrators, please let me know or provide the link to the page for reporting issues. I have reported this to https://phabricator.wikimedia.org/ and if this is not the venue to report the incident, then please let me know. Thank you. Cassiopeia talk 02:25, 18 June 2025 (UTC)

"Wikimedia Foundation" error message" "If you report this error to the Wikimedia System Administrators, please include the details below. Request served via cp5018 cp5018, Varnish XID 446793839 Upstream caches: cp5018 int Error: 429, at Tue, 17 Jun 2025 03:59:01 GMT Sensitive client information IP address: //Reducted//"

Huggle screenshot issue

Cassiopeia talk 02:25, 18 June 2025 (UTC)

Not sure if there's any fix for that Welcome page, but you can change the default provider to start getting the RC queue (System > Change Provider > IRC or XmlRcs). Currently, Huggle is pretty much unusable for me because every time I roll back an edit, it says I'm not logged in and the app crashes; there are other people with the same issue at Wikipedia talk:Huggle/Feedback. — DVRTed (Talk) 03:54, 18 June 2025 (UTC)
DVRTed Huggle used to work when connected to XmlRcs; then, some months ago, I had to change to Wiki to load the edits, but the system never showed the "Wikimedia Foundation" error message until now. I have changed to Wiki as provider and it works. Thank you. Cassiopeia talk 04:08, 18 June 2025 (UTC)
It mentions error 429, which means you are being rate limited. —TheDJ (talkcontribs) 17:33, 18 June 2025 (UTC)
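
(For context on what a 429 means in practice: a well-behaved client is expected to back off and honour any Retry-After header before retrying. Below is a generic Python sketch of that behaviour; it is not Huggle's actual code, just an illustration of the expected response to rate limiting.)

  import time
  import requests

  def get_with_backoff(url, params, max_retries=5):
      for attempt in range(max_retries):
          response = requests.get(url, params=params,
                                  headers={"User-Agent": "rate-limit-demo/0.1 (example script)"})
          if response.status_code != 429:
              return response
          # Honour the server's Retry-After header if present, else back off exponentially
          retry_after = response.headers.get("Retry-After")
          wait = int(retry_after) if retry_after and retry_after.isdigit() else 2 ** attempt
          time.sleep(wait)
      raise RuntimeError("still rate-limited after several retries")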