
Wikipedia:Village pump (WMF)

The WMF section of the village pump is a community-managed page. Editors or Wikimedia Foundation staff may post and discuss information, proposals, feedback requests, or other matters of significance to both the community and the Foundation. It is intended to aid communication, understanding, and coordination between the community and the Foundation, though the Wikimedia Foundation currently does not consider this page to be a communication venue.

Threads may be automatically archived after 14 days of inactivity.

Behaviour on this page: This page is for engaging with and discussing the Wikimedia Foundation. Editors commenting here are required to act with appropriate decorum. While grievances, complaints, or criticism of the foundation are frequently posted here, you are expected to present them without being rude or hostile. Comments that are uncivil may be removed without warning. Personal attacks against other users, including employees of the Wikimedia Foundation, will be met with sanctions.

« Archives, 1, 2, 3, 4, 5, 6, 7, 8, 9

US government questionnaire

The organisation I work for has been sent this questionnaire by the US government. It has 36 questions that produce a score between 12 and 180. I would like to know what WMF's score is. Hawkeye7 (discuss) 20:23, 24 March 2025 (UTC)[reply]

combatting Christian prosecution I would normally think this was a typo. But given the circumstances... GMGtalk 13:59, 25 March 2025 (UTC)[reply]
I keep thinking that they can't be that bad, but then they come out with something that shows that they are. I'm just glad that I don't live in the US. Phil Bridger (talk) 16:00, 25 March 2025 (UTC)[reply]
Me neither, but I still have to deal with the questionnaire. Hawkeye7 (discuss) 21:14, 27 March 2025 (UTC)[reply]
I have to wonder what the actual US government would score on that thing. Seraphimblade Talk to me 03:48, 28 March 2025 (UTC)[reply]
The WMF doesn't need to do it though. And I'm not sure why you are posting here instead of contacting the WMF directly. Doug Weller talk 08:26, 28 March 2025 (UTC)[reply]
Universities in Europe are generally advising not to fill in or respond to the survey. – Joe (talk) 08:36, 28 March 2025 (UTC)[reply]
The advice from the Australian government is: "it is better for researchers to respond to the questions rather than refuse to respond". Hawkeye7 (discuss) 19:56, 25 April 2025 (UTC)[reply]
We have to encourage free speech and encourage open debate and free sharing of information but also be sure to not work with any party that espouses anti-American beliefs, I guess. jp×g🗯️ 04:56, 1 April 2025 (UTC)[reply]
All right, I filled it out. Somewhat surprisingly, Wikipedia scores a respectable 90/180 (a lot more than you would expect given the fact the organization has a suspicious absence of minerals):
1: Yes, I would hope so. (5)
2: Yes, collaborating with any such organization (or any organization with a political viewpoint at all) would violate WP:COI (5)
3: No, most Wiki-meetups are informal gatherings of editors so vetting them for being terrorists would be a waste of time as well as pointless. (0)
4: WTF. No. Clear WP:NPOV vio. (0)
5: Yes, per WP:NOTCENSORED, and WP:FREECONTENT. Speech is constrained by the practical constraints of an encyclopedia but that’s about it. (5)
6: Yes? We don’t really collaborate with any organizations with policies for or against the US, per WP:COI. (5)
7: No, per WP:NOTCENSORED, we have abortion information on our website. (0)
8: Yes. Wikipedia is a well-funded organization with more than enough money to cover its operating cost. (5)
9: Yes. Let’s be honest, there is a fair amount of complaining on the site about Wikipedia’s high overhead costs, but the overhead costs of Wikipedia are dwarfed by the impact of the site. (5)
10: No. Why would we? We’re an encyclopedia? (0)
11: Yes? Again, we don’t really collaborate with any organizations with policies for or against the US, per WP:COI. (5)
12: No. As an international organization with global governance structures, we collectively politely tell you to go soak your head over this one. (0)
13: Yes. Local branches of Wikipedia have, at points, received money from Russia, and worked with groups such as Wikipedians in Mainland China. That being said, Wikipedia no longer receives funding from those organizations and has never partnered with them per WP:COI. (5)
14: No, per WP:NPOV. (0)
15: No. We have programs that seek to include and improve coverage of topics not currently covered by Wikipedia. That’s a good thing. (0)
16: Yes. Officially endorsing any policy positions would violate WP:NPOV. We let the facts speak for themselves. (5)
17: No, per WP:NPOV. (0)
18: No. Even though sometimes it sure feels like it. (0)
19: No per WP:NOTCENSORED. Although, let’s be honest, the fact that Wikipedia fails this is more because “Gender Ideology” is really just talking about trans people. (0)
20: No per WP:NOTCENSORED (0)
21: Yes. Wikimedia Enterprise is the business arm of the foundation. (5)
22: Yes. Millions of people across the US use Wikipedia every day. Not to mention search engines rely on it. (5)
23: Yes. We’ve already done so. (5)
24: Yes. If the free flow of and access to information is a national security need, you could hardly find a better organization to fulfill this need. (5)
25: Providing access to uncensored information to authoritarian regimes, who are (for now) the primary “malign influencers”, undermines their interests. (4)
26: I doubt Wikipedia has any impact whatsoever. We let the facts speak for themselves, and people make their decisions with those. (1)
27: I doubt Wikipedia has any impact whatsoever. We let the facts speak for themselves, and people make their decisions with those. (1)
28: Ironically, we probably do a better job of providing accurate health information than the current US government, which definitely mitigates biological threats and pandemics. As for “foreign dependence on medical supplies”, why even include that in this question, you morons? (4)
29: Again free speech has generally helped promote US national security interests (we’ll see for how much longer). (2)
30: I guess disclosing what they are and providing information helps, sort of? That being said, WP:NPOV applies here. (1)
31: WP:NOTCENSORED means Wikipedia has information on most religions, benefiting religious minorities. Unfortunately, as you may know, the facts have a well known anti-Christian bias. (3)
32: None beyond letting the facts speak for themselves. That may be a bad thing for the current regime. (1)
33: People like Wikipedia, and many Wikipedia editors are American. That sort of cultural exchange hopefully helps people abroad see not everybody in the US is quite as bad as the current regime. (3)
34: The financial return of Wikipedia, when taking into account the benefits it provides, is massive. We’re one of the most visited websites in the world. (5)
35: Wikimedia Enterprise makes bank, man [1]. (5)
36: Wikipedia is an encyclopedia. It is a concept, not a mining company. (0) Allan Nonymous (talk) 15:53, 15 April 2025 (UTC)[reply]
Looks like we need a WP:NOTMININGCOMPANY section. jlwoodwa (talk) 00:26, 24 April 2025 (UTC)[reply]

So, the Acting US Attorney for the District of Columbia, Ed Martin, has issued a legal threat to the WMF here: [2]. I think a strong community affidavit is warranted. (Perhaps some more artful version of "fuck off we'll see you in court"?) Tito Omburo (talk) 23:34, 25 April 2025 (UTC)[reply]

(Perhaps some more artful version of "fuck off we'll see you in court"?) We could refer them to the response given in the case of Arkell vs Pressdram. (I don't necessarily think we should, but we could). Thryduulf (talk) 01:36, 26 April 2025 (UTC)[reply]
...or Moskva vs Snake Island. Certes (talk) 13:00, 27 April 2025 (UTC)[reply]

BHL

Is the WMF able to do anything to help with this? Cremastra talk 23:05, 24 April 2025 (UTC)[reply]

Kaggle

Seriously? The WMF is just gonna give our data to AI scrapers willingly, without our consent? This is revolting. LilianaUwU (talk / contributions) 23:24, 25 April 2025 (UTC)[reply]

Everything on Wikipedia is openly licensed and we all knew that when we contributed. This seems like a proactive move from the WMF to stop web scrapers from putting a strain on the servers, which degrades Wikipedia for everyone. I don't see any indication whatsoever that anything non-public is being shared here. —Ganesha811 (talk) 01:11, 26 April 2025 (UTC)[reply]
Yes this just seems to be some other variant of what's already here [3]. Nil Einne (talk) 10:54, 26 April 2025 (UTC)[reply]
I agree. On web scrapers, see https://drewdevault.com/2025/03/17/2025-03-17-Stop-externalizing-your-costs-on-me.html, https://arstechnica.com/ai/2025/03/devs-say-ai-crawlers-dominate-traffic-forcing-blocks-on-entire-countries/. Aaron Liu (talk) 23:34, 27 April 2025 (UTC)[reply]
I agree, Liliana. This is disgusting, and the community should not accept it. The wonderful Timnit Gebru and many others, especially the Algorithmic Justice League, have worked tirelessly to counter algorithmic bias. Wikipedia is still overwhelmingly written by White Anglosphere males, and disproportionately represents them. What would you expect the result to be?
Currently, training AI on the English Wikipedia would be a horrible thing for informationally marginalized groups. Lindspherg (talk) 18:54, 26 April 2025 (UTC)[reply]
The irony of you responding with agreement with something that appears AI-generated is not lost on me. LilianaUwU (talk / contributions) 01:58, 28 April 2025 (UTC)[reply]
ehh I don't see it. First paragraph just sounds like normal Robert Reich–ish rhetoric. Aaron Liu (talk) 03:32, 28 April 2025 (UTC)[reply]

WMF receives letter from Trump-appointed acting DC attorney

See this article in the Washington Post. There's also coverage from other reliable outlets findable online. Ed Martin appears to have picked the WMF as his next target for vaguely threatening letters. I am very interested to see what, if any, response the WMF makes to this, and trust they will continue to stand up for free speech, free information, and Wikipedia's editor community. I see there's also some discussion on Jimbo's talk page here. —Ganesha811 (talk) 01:14, 26 April 2025 (UTC)[reply]

Hi, I know this is super late to the discussion, but here's a free link to the Washington Post article if anyone wants it: https://wapo.st/4jE6rp8 Northern-Virginia-Photographer (talk) 14:06, 2 May 2025 (UTC)[reply]

From Admin Noticeboard

US Attorney for the District of Columbia Ed Martin sent this threatening letter to the WMF today. Larry Sanger is involved. Here is early analysis. Cullen328 (talk) 22:19, 25 April 2025 (UTC)[reply]

God damn... Tarlby (t) (c) 22:31, 25 April 2025 (UTC)[reply]
Some time ago, there was a thread at the Teahouse (?) about moving the servers out of the US. Maybe this needs a rethink? Knitsey (talk) 22:35, 25 April 2025 (UTC)[reply]
What he's threatening is Wikimedia's tax exempt status. Schazjmd (talk) 22:41, 25 April 2025 (UTC)[reply]
Move the foundation out of the United States too. Simonm223 (talk) 23:36, 25 April 2025 (UTC)[reply]
Would be fun to have to delete all images on Commons and Enwiki, lol. And say hello to 80 billion libel suits. PARAKANYAA (talk) 23:58, 25 April 2025 (UTC)[reply]
Why would we need to delete the images? There are countries with even more liberal copyright laws than the US. Moving the servers out of the US is a common request on Commons because of this.
And as far as I have heard, WMF already has servers in several countries. Plus, there are also countries that give NGOs and the like tax exempt status. Nakonana (talk) 09:30, 27 April 2025 (UTC)[reply]
Last I heard, the WMF lawyer said on Commons that they don't actually have to obey US copyright law and that the Commons community was free to relax copyright policies and guidelines a bit if they wanted to. Aaron Liu (talk) 23:24, 27 April 2025 (UTC)[reply]
(See c:Special:GoToComment/c-JWilz12345-20250303024700-Y.haruo-20250302172500, cf. c:Commons:Lex loci protectionis.) Aaron Liu (talk) 23:31, 27 April 2025 (UTC)[reply]
France is the most appropriate fall-back I can think of, given it is the only EU state with both freedom of speech laws and a working nuclear deterrence force to back it up. Baltakatei 04:50, 26 April 2025 (UTC)[reply]
France did have this episode, however. Curbon7 (talk) 07:18, 26 April 2025 (UTC)[reply]
And also Sinking of the Rainbow Warrior. No, definitely not France. MinervaNeue (talk) 08:51, 26 April 2025 (UTC)[reply]
More recently, French Wikipedians have been subjected to threats and intimidation from the right-wing press. --Grnrchst (talk) 12:33, 26 April 2025 (UTC)[reply]
While I don't condone the dox threat (holding a user accountable, from their perspective), the page legitimately had neutrality issues and was sourced to a blog post. Please don't further derail the conversation. Hplotter (talk) 16:05, 27 April 2025 (UTC)[reply]
@Baltakatei France has not been a good location for Wikimedia servers since 2015. Note that their society of architects and artists (ADAGP) is anti-Wikipedia, considering their vocal opposition to Freedom of Panorama and their criticism of the Wikimedia world's imposition of commercial-type CC licensing on images of buildings and monuments. Unless you want to impose a universal prohibition of all images of modern architecture on enwiki and apply a restrictive fair use exemption tag like what French Wikipedia is doing. Per c:COM:FOP France, "Even if these non-free images [of modern buildings] are now tolerated in French Wikipedia articles, the legitimate copyright holders [(like the living architects)] can send their veto so that these images will be deleted on French Wikipedia too. The same deletion will occur when receiving a French court order: their long-term presence is not warranted as long as the copyright protection persists." JWilz12345 (Talk|Contrib's.) 08:52, 26 April 2025 (UTC)[reply]
Though frankly, concerns about pictures of modern buildings don't really move the needle considering the bigger picture of what's at stake. Bon courage (talk) 10:03, 26 April 2025 (UTC)[reply]
I'm surprised that France was the first country to be proposed here, given all the problems it has with freedom of the press (as mentioned above). As a counter-example, Switzerland has freedom of panorama, robust privacy and data protection laws, and is ranked 9th in the world for freedom of the press. Ireland, Norway and the Netherlands would also spring to mind before I'd suggest France. --Grnrchst (talk) 12:30, 26 April 2025 (UTC)[reply]
Switzerland was also one of the first countries that came to mind. Maybe Norway, Sweden or Finland, too? --PantheraLeo1359531 (talk) 14:42, 26 April 2025 (UTC)[reply]
Finland is restrictive regarding freedom of panorama, iirc. Nakonana (talk) 09:38, 27 April 2025 (UTC)[reply]
@Nakonana Finland has FoP for buildings though, but they interpret architecture strictly; they don't follow the logic Californian courts follow with regard to sculptures that are inherent elements of architecture, like gargoyles and stained glass windows. Perhaps 95% of FoP-USonly images might be OK under Finnish FoP but not the other 5%, including File:Pedro Calungsod stained glass (cropped).jpg. JWilz12345 (Talk|Contrib's.) 09:44, 27 April 2025 (UTC)[reply]
Germany also has freedom of speech laws. See Artikel 5 of the German Grundgesetz. (It's just called "freedom of expression" instead of "freedom of speech".) France has very restrictive rules for copyright (e.g. even plain buildings are copyrighted), so that you'd need to delete half of the photos from wiki Commons if servers were to be moved there. Germany's copyright laws are much more lenient. Nakonana (talk) 09:36, 27 April 2025 (UTC)[reply]
Canada seems like a decent option. We have fair dealing, freedom of panorama, relatively close to the US, etc. But my understanding is that there are challenges beyond simply deciding to move everything. Clovermoss🍀 (talk) 00:28, 30 April 2025 (UTC)[reply]
The WMF already maintains servers in a number of locations around the world including Brazil, France, Netherlands, Singapore and USA. Andrew🐉(talk) 19:06, 26 April 2025 (UTC)[reply]
The US has a strong government that sticks its nose where it doesn't belong.
I would vote for Island. ·Carn·!? 05:57, 28 April 2025 (UTC)[reply]
Iceland? It's next on their list after Greenland. Gråbergs Gråa Sång (talk) 06:43, 28 April 2025 (UTC)[reply]
I anticipate the WMF will retain counsel and send a forceful response. voorts (talk/contributions) 23:08, 25 April 2025 (UTC)[reply]
@Voorts What force would they have for that, may I ask? Darwin Ahoy! 14:29, 26 April 2025 (UTC)[reply]
Ed Martin sends lots of letters but he's clearly wrong on the law and this won't go anywhere. voorts (talk/contributions) 15:03, 26 April 2025 (UTC)[reply]
Right... Well, let's see how it goes. Darwin Ahoy! 15:10, 26 April 2025 (UTC)[reply]
See my longer comment below. voorts (talk/contributions) 15:15, 26 April 2025 (UTC)[reply]
Can someone protect Ed Martin's article? Martin sent the letter and the page seems to be picking up random vandalism. Thanks. Randy Kryn (talk) 23:12, 25 April 2025 (UTC)[reply]
Two IP edits isn't enough to warrant protection. voorts (talk/contributions) 23:16, 25 April 2025 (UTC)[reply]
Semi-protected x 4 years per WP:CT/AP. -Ad Orientem (talk) 04:28, 26 April 2025 (UTC)[reply]
This is part of a larger campaign against sources that allow criticism of Trump policies, and includes sending letters to major medical journals. StarryGrandma (talk) 00:06, 26 April 2025 (UTC)[reply]
I share the administration's concerns with the media, academia, Wikipedia, and bias, but this is ridiculous. You don't combat bias with lies. The Knowledge Pirate (talk) 04:04, 26 April 2025 (UTC)[reply]
Smart lawyers don't send reams of data to a prosecutor in response to a fishing expedition letter. So I don't expect WMF to send anything more than a polite "We share your concerns about neutral points of view, accuracy, and propaganda in media. The long arc of our efforts bends toward neutrality and accuracy. There are no political litmus tests for educational 501(c)(3) organizations, which have a First Amendment right to write as they see the world. There are thousands of examples of 501(c)(3) organizations publishing from conservative points of view, including some that you yourself have founded, such as the Eagle Forum Education and Legal Defense Fund." If they wanted to poke the bear, they could add, "We consider your threatening letter an effort to coerce Wikipedia to be more amenable to using its deserved popularity to push your own propaganda."
However, there is a kernel of truth in the attack; there is an imbalance in WP's NPOV. I have tried using very reliable sources (e.g. a book written by a serious scientist and professor who'd served years in the Federal Government on the topic) to inject a little neutrality into pages on Climate Change. All my edits were reverted because that source's statements conflicted with the rabidly biased existing article and with the apparent political opinions of other editors (and administrators). The cited author isn't even conservative -- merely not rabidly progressive on the topic, taking a neutral scientific view. But there's a whole "if you don't agree with us, you are a DENIER of SCIENCE" attitude in WP, despite real science proceeding by airing disagreements rather than suppressing them. Another example is how the article on Paul R. Ehrlich is periodically edited into a hagiography, by editors who seemingly can't stand the idea that the prophet who taught them the world would end due to high population had feet of clay, being extremely inaccurate and often completely incorrect in the majority of his sensationalized predictions. That article remains a mess, veering in all directions and following most valid, well-sourced criticism with "but..." and praise. There are similar problems with the articles about the Great Barrington Declaration and its authors. It was a well-sourced and legitimate disagreement on Covid policy that was ruthlessly suppressed by the left (including the Federal government) to present an appearance of scientific and political unanimity for a "lockdown" policy. Even today, its lede still uses the dismissive word "fringe"! And smears the sponsoring nonprofit as "associated with climate change denial", as if that had anything to do with whether the Declaration about Covid policy was reliable or notable.
On WP topics where there IS a current imbalance of neutrality, the deck is stacked such that it's quite hard for serious editors to correct the imbalance. What changes can the WP community make to be more welcoming to serious editing (not conservative propaganda) from people who disagree with liberal sacred cows? -- Gnuish (talk) 00:08, 29 April 2025 (UTC)[reply]
The ideas you want to insert are not widely accepted by mainstream academia, so they don't get equal weight in articles. This isn't the place to rehash old content disputes. Thebiguglyalien (talk) 🛸 00:11, 29 April 2025 (UTC)[reply]
Discussion about closing this thread (when it was at AN), reopening this thread, and moving this thread to a village pump. –Novem Linguae (talk) 03:48, 26 April 2025 (UTC)[reply]

I object to your close of this thread, Cambalachero, and have explained why on your talk page. I urge you to revert your close. Cullen328 (talk) 02:42, 26 April 2025 (UTC)[reply]

Seconding. I mean, they’ve finally done it, going after people that they don’t like. Jeez, what a downward spiral Sanger’s gone through. How did he get to this point of hating Wikipedia so much that he’s actively trying to shut it down? — EF5 (questions?) 02:52, 26 April 2025 (UTC)[reply]
Thirded. And do look at the ridiculous examples that Sanger gives here on what constitutes "bias" on Wikipedia. Then note that this article is from 4 years ago and he's only gotten more extreme since then. SilverserenC 02:56, 26 April 2025 (UTC)[reply]

This discussion should be closed. As I pointed out when I did so, whatever is done about this will be decided by the WMF, not by editors (admin or not). There is no actionable request here, nor any news that changes our way of doing things. In fact, the discussion has already been derailed into forum-like territory. Discussing whether Trump's policies are good or not is exactly that. Discussing things that none of us has the power to decide either way (such as moving the servers, or even the WMF itself) is exactly that. If you take a moment to think about it, you will realize it. --Cambalachero (talk) 03:15, 26 April 2025 (UTC)[reply]

I wish I could say I was surprised. But I have been expecting something like this from the moment he won the election. -Ad Orientem (talk) 03:21, 26 April 2025 (UTC)[reply]

Can't this discussion be moved somewhere else besides the administrators' noticeboard? It's absolutely true that this is for the WMF to decide how to respond to this. But if it's to be discussed on Wikipedia, it shouldn't be discussed somewhere that gives the impression that administrators have any more "authority" than others do about this subject. 11USA11 (talk) 03:24, 26 April 2025 (UTC)[reply]
I think the following is the most appropriate place: Wikipedia:Village pump (WMF)#WMF receives letter from Trump-appointed acting DC attorney. 11USA11 (talk) 03:27, 26 April 2025 (UTC)[reply]
Concur. -Ad Orientem (talk) 03:28, 26 April 2025 (UTC)[reply]

Continued discussion

Page 3 point 6 of the letter from the Acting United States Attorney for the District of Columbia says Similarly, what is the Foundation's official process for auditing or evaluating the actions, activities, and voting patterns of editors, admins, and committees, including the Arbitration Committee ... This is clearly a major concern for all editors and administrators. Clearly, these people are planning to "audit and evaluate" us when the WMF tells them that is not appropriate and not how Wikipedia works. I reject the notion that editors and administrators should meekly step aside and expect the WMF to handle this latest outrage with zero input from us. Cullen328 (talk) 03:51, 26 April 2025 (UTC)[reply]
I hope the editors and admins State-side don't receive much negativity or spotlight on this, especially those who are not really anonymous. – robertsky (talk) 04:04, 26 April 2025 (UTC)[reply]
Biggest concern is probably for those living in the US who are not citizens. Nil Einne (talk) 07:13, 26 April 2025 (UTC)[reply]
@Nil Einne Those are obviously on the front line, but the danger is for all people living in the United States, looking at what the US administration has repeatedly stated in that regard. Darwin Ahoy! 14:33, 26 April 2025 (UTC)[reply]
I noticed that the letter accuses WMF of allowing people to endanger the "national security and the interests of the United States". Since Wikipedia is a multilingual, international project, maybe the WMF should point out in its response that it is not beholden to protect the national security or the interests of any country. Also, given that the letter does not mention any examples of so-called "information manipulation", I'm not sure what Martin is trying to get at, other than perhaps trying to bully the WMF into compliance. Finally, I should note that the letter mentions that the presence of "foreign nationals" (i.e. non-Americans) on WMF's board is "subverting the interests of American taxpayers", which is a rather strange thing to say, given that (1) WMF serves an international audience, not a US-only audience, and (2) WMF receives no American tax revenue, so there is no such interest being "subverted". – Epicgenius (talk) 04:09, 26 April 2025 (UTC)[reply]
Tax free status is a form of government subsidy. Hawkeye7 (discuss) 05:36, 26 April 2025 (UTC)[reply]
Why be specific when you can be vague, much easier to defend your statements. Gråbergs Gråa Sång (talk) 09:45, 26 April 2025 (UTC)[reply]
From TheFP [4], The letter did not specify which foreign actors were manipulating information on Wikipedia and did not cite examples of alleged propaganda. However, a person close to Martin said he is concerned about “edits on Wikipedia as they relate to the Israel-Hamas conflict that are clearly targeted against Israel to benefit other countries.” hako9 (talk) 18:55, 26 April 2025 (UTC)[reply]
Why would the Foundation (or any non-profit/company/etc.) need to know the voting patterns of anyone? That's a really f'ed up thing to include in there. SilverserenC 04:06, 26 April 2025 (UTC)[reply]
Wouldn't that be virtually impossible to qualify as well? Knitsey (talk) 04:50, 26 April 2025 (UTC)[reply]
Sure, but I think we all know exactly what sort of voting patterns and general opinions about politics (and who one supports) that they're really wanting to know by including that in there. SilverserenC 05:07, 26 April 2025 (UTC)[reply]
Yeah, I guess that is obvious. But it would take a long time to complete that task. I would think that the WMF might be able to string this out for, say, just short of 4 years? Knitsey (talk) 05:12, 26 April 2025 (UTC)[reply]
You think there'll be elections in the USA again anytime soon? Well, maybe ... But even if there were, the risk is there would be some new manifestation of US govt in future that leaned the same way, for socially-ingrained reasons that are very hard to grapple with, within the electorate. The question is: why should Wikipedia/WMF want to be in the USA? I cannot see any serious downside to decamping, and many up-sides. Bon courage (talk) 05:30, 26 April 2025 (UTC)[reply]
I think this was discussed once before, and someone mentioned that it would cost many millions of dollars to change the country that wmf is headquartered in. There is also a danger of picking the wrong country to change to, then this process would need to be repeated if authoritarianism or government suppression of free speech occurred there. –Novem Linguae (talk) 11:39, 26 April 2025 (UTC)[reply]
It's certainly a huge thing to consider, with a lot of potential problems it could introduce, but I don't think we should rule it out completely. The logistical, legal and financial costs of moving to a different country are far outweighed by the societal damage that could be done by leaving the encyclopedia at the mercy of a regime that is openly hostile to its existence.
The Encyclopédistes were forced to move their publication headquarters to Switzerland when the ancien regime tried to shut them down. Wikimedia having to move its base of operations elsewhere would not be historically unprecedented. --Grnrchst (talk) 12:49, 26 April 2025 (UTC)[reply]
Rousseau, Diderot, Voltaire.. Funny how these things keep resurfacing. Apparently we sometimes forget and slide backwards far enough for history to rear its head. -- GreenC 21:17, 26 April 2025 (UTC)[reply]
Well, the WMF would certainly be welcome in Geneva, Rousseau's place of birth and where many international organizations are headquartered. Switzerland has largely favorable laws for such organizations, also tax-wise, and good freedom of press - with some caveats when it comes to bank secrecy... Gestumblindi (talk) 19:33, 29 April 2025 (UTC)[reply]
@Novem Linguae Depending on how the WMF behaves and responds to the US Administration's demands, that could be a very plausible move, indeed. Darwin Ahoy! 14:39, 26 April 2025 (UTC)[reply]
What is the Foundation’s official process for auditing or evaluating the [...] voting patterns of editors, admins, and committees. Well that's disturbing... Curbon7 (talk) 05:55, 26 April 2025 (UTC)[reply]
If it helps, I've never voted for any American party. Gråbergs Gråa Sång (talk) 10:24, 26 April 2025 (UTC)[reply]
Nor have I, but that doesn't stop them from trying to find out which Swedish parties you have voted for, or, in my case, British. Phil Bridger (talk) 10:07, 27 April 2025 (UTC)[reply]
I read this part as voting patterns for "!votes" on-wiki, as that would make the most sense. But given the throes of fascism running through the current political moment in the US, this may have been naivete on my part. -- Cdjp1 (talk) 11:55, 27 April 2025 (UTC)[reply]
@Cullen328 One of the reasons IP editing should never have been allowed in any Wikimedia project, even in 2001. As of now, everyone who uses or has used an IP address whose records the ISP still holds is a sitting duck, ready to be sued. Darwin Ahoy! 14:42, 26 April 2025 (UTC)[reply]
Well, that ship sailed a quarter of a century ago, DarwIn. And it is rarely easy to identify an individual from an IP address. Cullen328 (talk) 16:59, 26 April 2025 (UTC)[reply]
@Cullen328 all it takes for any government to learn a location, and eventually an identity, is to request that data from the ISP the IP belongs to; by far the most common case is that an IP belongs to some sort of ISP. In the case of authoritarian governments that information is usually at the distance of a phone call. Yes, that ship quite unfortunately sailed a quarter of a century ago, but it can, and should, be shipwrecked any day. We have already done just that at the Portuguese-speaking Wikipedia 5 years ago, btw. Darwin Ahoy! 17:11, 26 April 2025 (UTC)[reply]
All you'd know then is the name of the person who signed the contract with the internet provider for this IP. But you'd not know who made the edit: was it the person who signed the contract, was it a family member of that person (if so, then which one), was it a friend, was it a one-time guest of the person who signed the contract? It will be impossible to identify the actual editor, and after 25 years even said editor probably doesn't remember whether it was them who made the edit in question. Nakonana (talk) 10:08, 27 April 2025 (UTC)[reply]
Additionally, there is also public wifi at cafes, libraries, etc., which does not require people to share their personal information in order to connect. – robertsky (talk) 14:34, 27 April 2025 (UTC)[reply]
@Robertsky I wouldn't assume that most IP users are Mata Haris or 007s in sunglasses and headscarves sneaking into public wifi to edit "anonymously". From my experience, people usually do that either out of laziness or, even worse, misguided by the reckless but prevalent myth that IP edits are somehow "anonymous", happily walking into the wolf's mouth that way. Darwin Ahoy! 15:24, 27 April 2025 (UTC)[reply]
@Nakonana I don't think assuming the ISP contract was signed by someone else, usually someone very close to the person in question, is really an argument. The fact is that IP editing is and has been a significant hazard for editors of the Wikimedia projects that allow it, willingly or unwillingly endangering people's lives, their physical integrity, and that of their loved ones. Some quick examples:
It's absolutely reckless to persist in allowing IP edits on the Wikimedia projects, even more so in the current context in the US, where that can mean almost immediate identification of the editor, and the fact that such recklessness has persisted for 25 years already only makes it more urgent to stop it now. Darwin Ahoy! 15:52, 27 April 2025 (UTC)[reply]
It may well be absolutely reckless, but multiple times the en.wiki community has requested the mandating of 'sign in to edit', and each time the WMF has rejected it, because - apparently, as I recall - it 'goes against being the Encyclopedia That Anyone Can Edit'. Even as TVTropes mandated SITE. This was over 10 years ago, and given that "temporary accounts" are apparently about to become a thing, (proper) SITE remains a pipe dream. - The Bushranger One ping only 01:50, 28 April 2025 (UTC)[reply]
@The Bushranger Well, we've done just that at wiki.pt 5 years ago, and the WMF took no issue with it. IP editing has been successfully banned from that Wikipedia since then, and we still are the encyclopedia anyone can edit (after spending 2 seconds creating an account). Darwin Ahoy! 10:04, 28 April 2025 (UTC)[reply]
Maybe they've changed since ~10 years ago. But the fact en.wiki remains IP-enabled points to y'all at pt. being lucky. - The Bushranger One ping only 22:10, 28 April 2025 (UTC)[reply]
Temporary accounts that don't show people's IP addresses are being slowly rolled out across wikis. I think we'll probably be one of the last to get it, but the existence of the project shows that the foundation has considered the privacy implications of an IP address being publicly visible (even if it took 20 years to get to this point where it's a near-future feature). Clovermoss🍀 (talk) 00:37, 30 April 2025 (UTC)[reply]
Is the Acting United States Attorney for the District of Columbia also going to send letters to Facebook and Twitter/X to ask them about their official process for auditing or evaluating the actions, activities, and voting patterns of [users]...? I'd be really curious to hear Musk's reply to this. Nakonana (talk) 09:52, 27 April 2025 (UTC)[reply]
You guys remember the Asian News International case, where an Indian court attempted to force WMF to provide the names and details of three users? A Wikipedia article about the case, Asian News International vs. Wikimedia Foundation was promptly created, but had to be taken down (blanked). Is anybody working on creating an article about Ed Martin's letter to the WMF, hint hint? I don't think it would be as easy to get that taken down. Bishonen | tålk 10:16, 26 April 2025 (UTC).[reply]
@Bishonen Too early, it has a sentence in his article atm, which seems about right. But the WaPo article is a good start, don't you agree, @Valereee? Gråbergs Gråa Sång (talk) 10:21, 26 April 2025 (UTC)[reply]
We do have other news sites picking this up now, though none as prominent as WaPo. Gizmodo, Huffpost, The Verge, New Zealand Herald. -- Cdjp1 (talk) 11:58, 27 April 2025 (UTC)[reply]
So I should write another blacklockable article? :D I agree it's probably too early, but if it turns into an actual lawsuit, probably notable. Valereee (talk) 12:15, 28 April 2025 (UTC)[reply]
I am of the opinion that the only affirmative action the WMF should take at this time is to have Legal write a letter indicating WMF is willing to vindicate its rights in court. Moving servers is a bad idea, for reasons already indicated, but also because it is, in a way, complying with the lawless bully. I don't know what the community response should be, since I don't know what it would hope to achieve. I had (in the earlier thread on this page) the idea of a "community affidavit", to support WMF legal's fight. Tito Omburo (talk) 12:09, 26 April 2025 (UTC)[reply]
Here's my perspective as an attorney: Ed Martin is a clown. His job thus far appears to be sending threatening letters to conservative bugbears in an attempt to chill speech. He doesn't have the authority to revoke tax exempt status (he's the interim United States Attorney for the District of Columbia, not the IRS), and if he actually had a case of criminal wrongdoing, his office/the FBI would be sending subpoenas or executing warrants, not sending public letters to the WMF. Even Kash Patel's FBI wouldn't open an investigation on thin bullshit like this and no judge would sign a warrant based on innuendo. As I said above, WMF will send a forceful letter in response and Martin will back down because he's got nothing. Everyone freaking out about this is precisely what Martin wants; he should be ignored. voorts (talk/contributions) 15:15, 26 April 2025 (UTC)[reply]
I would laugh this off -- most of those around the short-fingered convicted felon are clowns (& the rest are incompetent hacks) -- except this time around they understand what they can do having control of the White House, & have ratcheted up their oppression. Witness the arrest of a state judge for opposing the increasingly lawless ICE. I'm no longer confident that the threat of having that person in office can be overstated. -- llywrch (talk) 18:09, 26 April 2025 (UTC)[reply]
Yes, they're all clowns, but more of the killer clown variety. They're literally supporting more than one genocide right now. I wouldn't be laughing. Lindspherg (talk) 18:56, 26 April 2025 (UTC)[reply]
My 2¢ ... I am taking a wait and see approach. While I hope voorts is right and this turns out to be a clownish distraction, I'm not dismissing the potential for it to become something serious. This administration has already shown a breathtaking contempt for the rule of law and civil liberties. The language in that letter is right out of every tyrant's playbook for intimidating and/or suppressing sources of news and information that they can't control. For now, I await with interest the WMF's response. I know they have lawyers on retainer and the resources to hire more if needed. -Ad Orientem (talk) 19:14, 26 April 2025 (UTC)[reply]
Elsewhere I have recommended groups associated with Wikipedia outside the US make & keep backups of the project databases. My point in recommending this is as insurance against the worst-case scenario: the DoJ somehow shuts down the Foundation. Now I've said elsewhere that Wikipedia can survive much better without the Foundation than the Foundation can survive without Wikipedia. Having backups outside the control of the Federal government makes it far easier for a group to fork Wikipedia & preserve our goal of creating a free encyclopedia -- or an encyclopedia in exile, if you will. Sure, there will be legal problems basing a free encyclopedia in a non-US country (e.g. copyright, laws of defamation), but I have faith that the grass roots of Wikipedia -- as well as similar projects -- will come up with solutions. There has been talk of the Foundation creating contingency plans if the clowns with nukes are effective; we, the community, must needs have our own contingency plans to carry on our work. -- llywrch (talk) 18:00, 29 April 2025 (UTC)[reply]
I'm not sure what the best course of action is but, if the WMF wishes to respond directly to these questions, it will have no shortage of material. For example, there are lots of policies such as the Universal Code of Conduct which is currently undergoing a round of revision. And it can point to actions taken such as the 2021 Wikimedia Foundation actions on the Chinese Wikipedia.
In any case, it's good that the WMF has a substantial endowment so that it can afford to take whatever course of action is decided.
Andrew🐉(talk) 19:39, 26 April 2025 (UTC)[reply]
There's a massive noise to info ratio here. The most tangible damage this letter has done so far is prompting WP:FORUM-style speculation and fearmongering within the community. Several people here have taken the bait, and reopening this discussion was a mistake. Thebiguglyalien (talk) 🛸 20:46, 26 April 2025 (UTC)[reply]
  • I disagree. But as someone who has a front-row seat to these disturbing political developments, I suggest as a prudent action that all Wikimedia groups outside of the US start making regular backups of Wiki[p|m]edia content against the worst possible outcome. (In any case, making backup copies of important data is always a good idea. Every IT system expert recommends this. Even if there is no threat from a lawless regime.) -- llywrch (talk) 22:14, 26 April 2025 (UTC)[reply]
    • We at Wiki Project Med ship EN WP on a Raspberry Pi Zero 2 W server. So you can buy your very own version. Or you can make your own: MDWiki:WikiProjectMed:Internet-in-a-Box Doc James (talk · contribs · email) 23:07, 26 April 2025 (UTC)[reply]
      It's an easy enough job to download the entirety of En.Wiki (<25GB, sans media); hosting it would be harder given the potential traffic levels, but is doable (see the sketch after this thread). And of course, for as long as the archiving sites are up, they hold a repository of a majority of wiki articles. -- Cdjp1 (talk) 12:07, 27 April 2025 (UTC)[reply]
    • Stewing on it for a bit, I think the most practical approach that each of us individually can take to any challenge is simply to double down on our principles. WP:V, WP:NPOV, and WP:BLP remain the top priority. We can do our cause a lot of good just by sticking to them strictly, keeping our processes transparent and avoiding any iota of a violation on high-profile articles or BLPs within the American politics topic area. This also means clamping down on WP:SOAPBOX and WP:CPUSH, which we can sometimes be very lax with. It might be worth starting a discussion about how WP:AE handles politically charged editing that's subtle enough to avoid an instant ban. This would help with stopping these bad actors from manipulating Wikipedia from within while also stopping those who might make the rest of us look bad in the eyes of the public. Thebiguglyalien (talk) 🛸 02:28, 27 April 2025 (UTC)[reply]
      Just adding a "yes and" to say reliability is also paramount in these contentious situations. I spend little to no time on US politics-related areas of the encyclopedia, but I have seen in articles I come across that blog posts, opinion columns and even tweets and reddit threads are far more prevalent than they ought to be. --Grnrchst (talk) 08:24, 28 April 2025 (UTC)[reply]
    • While making backups is always a good idea and one that should be encouraged, the real threat here is not the loss of any information Wikipedia contains. The relatively small file size of the English language Wikipedia means that such a large number of copies have certainly been made that there is little risk of it disappearing. Instead, the lasting damage would come from the disruption to the networks and communities that maintain it, the inability to continue improving and updating it and the problem with accessing the aforementioned archived data. –Noha307 (talk) 03:11, 27 April 2025 (UTC)[reply]
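(For reference on the download step mentioned in the thread above: below is a minimal sketch of fetching the latest compressed English Wikipedia pages-articles dump in Python. It assumes the standard public mirror at dumps.wikimedia.org and the conventional "latest" filename; treat it as an illustration that the data is easy to obtain, not as an official or recommended tool.)

    import requests  # assumption: the 'requests' package is installed

    # Illustrative only: the public mirror and conventional filename for the
    # latest compressed English Wikipedia pages-articles dump.
    DUMP_URL = ("https://dumps.wikimedia.org/enwiki/latest/"
                "enwiki-latest-pages-articles.xml.bz2")

    def download_dump(dest="enwiki-latest-pages-articles.xml.bz2"):
        # Stream the (roughly 20+ GB) file to disk in 1 MiB chunks so it is
        # never held in memory all at once.
        with requests.get(DUMP_URL, stream=True, timeout=60) as resp:
            resp.raise_for_status()
            with open(dest, "wb") as fh:
                for chunk in resp.iter_content(chunk_size=1 << 20):
                    fh.write(chunk)
        return dest

    if __name__ == "__main__":
        print("Saved to", download_dump())

(The dump is a single bz2-compressed XML file; parsing or serving it afterwards, e.g. with a streaming XML parser or a Kiwix-style offline reader, is a separate task.)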
I think it's fine to discuss this (that's not taking the bait from anyone), and I think the most important thing for the community to do (along with prudent measures like making backups, and protecting one's real-life identity, if not already disclosed) is to make it clear that we are proud of what we do (yes, sure, we have lots of mistakes, but we correct them), and we aren't going to be intimidated by bullies. --Tryptofish (talk) 00:16, 27 April 2025 (UTC)[reply]
As it seems there is consensus that the thread should stay open, I will add my 2 cents. As I understand it, Wikipedia is an educative web page, and that grants it a tax exemption. But I'm sure it can't be enough that Wikipedia self-describes as an educative web page; there must be requirements for it, otherwise every page out there would abuse such a loophole. And what I understood when I read the letter was that Ed Martin was questioning whether Wikipedia actually met such requirements or not. After all, we all know that Wikipedia, as a self-published source, is not a reliable source... so can we really be that upset when someone says that we are not reliable enough to be educative? So the options for the WMF may be to either change things around to fit the standards required to be a fully reputable educative source (and that may mean mass culling of topics such as TV series, videogames, films, recent events, etc., editorial oversight, editors editing under their real names and only on topics where they have some actual degree or expertise, following standards on content set by external actors, etc.), and then keep the tax exemption. Or, be just a general-purpose web page that sets its own internal rules on content and user behavior, but pays the applicable taxes. So, my question is, what are the legal rules to be considered an educative web page? Does Wikipedia meet such rules? --Cambalachero (talk) 00:24, 27 April 2025 (UTC)[reply]
I'm disinclined to treat Martin's question about whether or not we are educational as a serious question, at least insofar as the editing community's response. There is a legal question as to tax status, and that's something we should leave to WMF Legal. --Tryptofish (talk) 00:37, 27 April 2025 (UTC)[reply]
Well, you should. "Wikipedia is an educative web page as defined in those laws and regulations" is a stronger argument than "Wikipedia is an educative web page because they say so, and I don't like the guy who questioned it" Cambalachero (talk) 02:43, 27 April 2025 (UTC)[reply]
Not to belabor the point, but I meant that we should let Legal speak first, as opposed to the editing community getting out ahead of them. I can see that my use of the word "serious" unintentionally led me into the rabbit hole of "seriously versus literally", where I didn't want to go. I wasn't trying to say that we should be glib. Rather, I mean that we should not take the letter on face value, because the letter is clearly written in bad faith. --Tryptofish (talk) 22:36, 27 April 2025 (UTC)[reply]
This letter has nothing to do with WMF fulfilling its legal tax status (despite what is written in it), and everything to do with intimidation by a government that does not like press freedom, free speech, academic liberty, sciences, and more broadly knowledge. — Jules* talk 10:47, 27 April 2025 (UTC)[reply]
Funny, a thing I learned as a Wikipedia editor is never to trust someone whose main argument is that there is a conspiracy to silence him. Cambalachero (talk) 00:16, 28 April 2025 (UTC)[reply]
While I do agree that there is most likely no conspiracy to silence us right now, I do think it is a genuine topic of concern when it comes to the administration's handling of situations like this. (Man, this is becoming a downer). Gaismagorm (talk) 00:18, 28 April 2025 (UTC)[reply]
Well, the people and organizations who indicate they want WP to shut up about some stuff include Musk, the Heritage Foundation (which said their investigation of WP will be "shared with the appropriate policymakers to help inform a strategic response"), Ed Martin, the ADL, and orgs like the New York Post [5].
This of course does not mean there is a conspiracy, but at least there are some people with influence with a common view. Gråbergs Gråa Sång (talk) 04:49, 28 April 2025 (UTC)[reply]
There absolutely is a conspiracy to silence us ([6], [7], [8]). We can argue about its extent, participants' identities, and efficacy, but it is foolish to deny it. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:50, 28 April 2025 (UTC)[reply]
I'm not sure about the specifics within the US, but antisemitism can lead to crimes, if not a crime in itself. And it's written on one of the disclaimer pages that editors must respect the law. So, if someone committed a crime by adding antisemitic content to this internet page, and an organization wants to track the real people behind the usernames and make them answer for such crimes before a court of law... by all means, let them do it. WP:NONAZIS surely includes antisemitism as well. Cambalachero (talk) 16:03, 28 April 2025 (UTC)[reply]
The bar for unprotected hate speech in the US is very high. GMGtalk 16:08, 28 April 2025 (UTC)[reply]
Perhaps, I'm no lawyer. But even if they can legally get away with it, don't place me in the same bag, in the "us" of "There absolutely is a conspiracy to silence us". If anything, there is a conspiracy to silence them, not us, and I have no problem with it, in fact I support it. Cambalachero (talk) 16:17, 28 April 2025 (UTC)[reply]
This has never been about combating antisemitism. There are numerous people in the orbit of the administration who are, themselves, antisemites. Fighting antisemitism is just a convenient fig leaf for the real agenda, which is shutting down counter-narratives to the officially preferred narrative. Same thing for the bogus claim that Wikipedia harbors foreign agents who are trying to harm US interests. --Tryptofish (talk) 21:37, 28 April 2025 (UTC)[reply]
That, and it's a camel's nose. It's reasonable. It's even laudable! It also sets a precedent. - The Bushranger One ping only 22:12, 28 April 2025 (UTC)[reply]
Or not. This reminds me of a real-world case: the notorious Nazi Adolf Eichmann escaped to my country, Argentina, and stayed hidden. Simon Wiesenthal and the Mossad located, captured and smuggled him to Israel, where he was put on trial. Someone could have said: "this sets a precedent, if we allow this the Mossad will soon do whatever they want in Argentina". But no. The Mossad captured and smuggled him, mission accomplished, and except for some other similar cases of runaway Nazis, things never escalated to a "Jewish occupation" as the usual antisemitic tropes would claim. Projects that seek to reduce or stop antisemitism have my full support, and if that means outing a couple of Wikipedia troublemaker editors, so be it. Cambalachero (talk) 00:42, 29 April 2025 (UTC)[reply]
I fail to see a valid analogy or parallel here. Antisemitism is being used here as a Trojan Horse by right wing Christian nationalists. They don't actually care about Jews, Jewish people, Jewish culture, or even Israel. What they care about is building powerful voting bloc coalitions like the kind promoted by the Council for National Policy. They have strategically targeted and convinced a tiny percentage of U.S. Jews (see American Jews in politics: "Helmreich describes them as "a uniquely swayable bloc" as a result of Republican stances on Israel") that the Christian right will uphold their shared interests. Ironically, this so-called "interest" is in opposition to 70% (likely much higher) of U.S. Jews who do not support Project 2025 or their policies. The reality is that religious tolerance is a liberal idea upheld by Democrats, not the Christian right. Just like the kapos in Nazi-era WWII who helped their fellow Jews to their deaths, we see the same or similar occurring here. And that, my friend, is a valid analogy. Viriditas (talk) 02:21, 29 April 2025 (UTC)[reply]
...and, as I said earlier, I only have deaf ears for arguments based on conspiracy theories. Cambalachero (talk) 02:43, 29 April 2025 (UTC)[reply]
I’m just going to leave this here.[9] Viriditas (talk) 02:48, 29 April 2025 (UTC)[reply]
Try again. That page lost me the second they used the term "latinx"... which, if nobody told you, is highly offensive for most Latin Americans like me. Cambalachero (talk) 14:05, 29 April 2025 (UTC)[reply]
American historian Steven Hahn discusses this kind of reaction in his research on illiberalism in U.S. history. He argues that illiberalism often emerges as a fearful reaction to a perceived threat. Your comments above illustrate this tendency. Hahn: "People who regard themselves as liberal in every other respect are perfectly happy to impose an incredibly repressive, politically and otherwise…expulsive regime as a way of trying to soothe the concerns of their constituents." It’s interesting that taking offense at a word you don’t like, or being upset by a group one doesn’t like, or living in any kind of perpetual offense or fear, would have one reject the entire philosophical and liberal enterprise of the Enlightenment, from democracy to individual rights. Thanks for the insight into the global phenomenon of democratic backsliding. Viriditas (talk) 16:34, 29 April 2025 (UTC)[reply]
Right to cultural identity is repressive. Got it. Cambalachero (talk) 14:46, 30 April 2025 (UTC)[reply]
Cambalachero, you and I are probably going to have to agree to disagree. And that's fine with me! In fact, something that I deeply value about what we do here at Wikipedia is that editors with all manner of personal opinions are not only allowed to edit here, but are welcome to, just so long as we all adhere to NPOV and adhere to the various other policies and guidelines. That's something that editors should be proud of. And right there, we can see the moral bankruptcy of the accusations that we systematically suppress the conservative point of view. --Tryptofish (talk) 23:00, 29 April 2025 (UTC)[reply]
What does your comment about antisemitism have to do with my comment? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:11, 2 May 2025 (UTC)[reply]
You said "There absolutely is a conspiracy to silence us", followed by a link to an article about the HF trying to locate antisemite editors and start legal actions (or whatever, not clear yet) against them. Did you actually read the article, or just the clickbait title? Cambalachero (talk) 12:17, 2 May 2025 (UTC)[reply]
I linked to three separate articles, as evidence of a conspiracy to silence us. Your comment did not address that point, let alone disprove it. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 12:27, 2 May 2025 (UTC)[reply]
Thousands of educational institutions could provide expert evidence that their students use Wikipedia as an educative resource. (Maybe skip Harvard this time...) Certes (talk) 10:52, 27 April 2025 (UTC)[reply]
This is actually a very good idea for an action to take at this stage (as opposed to some of the more extreme proposals in this thread, which I think should be reconsidered at a later date). Putting out a request for public support from people and institutions that use Wikipedia as an educational resource, or some kind of open letter, would definitely help improve our position against any threats on these grounds. --Grnrchst (talk) 13:25, 27 April 2025 (UTC)[reply]
Yes, that is a good idea. I like the concept that we should, for now (as in while we wait to see what WMF Legal decides to do), focus on informational, rather than confrontational, things, and doing something that both (1) demonstrates how other people appreciate what we provide, and (2) lets other people know what's happening, in case they should want to speak out in support of us, is a good strategy. --Tryptofish (talk) 22:41, 27 April 2025 (UTC)[reply]
For anyone who isn't already aware of it, there was an earlier, related, discussion at Wikipedia:Village pump (miscellaneous)/Archive 80#Heritage Foundation intending to "identify and target" editors. --Tryptofish (talk) 00:44, 27 April 2025 (UTC)[reply]

Some thoughts here:

  • Don't worry about the data. WMF technical people aren't dumb, and they almost certainly have robust backups capable of weathering any natural or political disaster. Even if all of them failed, third parties have backups sufficient to piece it back together.
  • WMF should have an exit plan for America at this point. They absolutely should NOT share it with us or even acknowledge it exists publicly, but they should, and probably already do, have a plan for leaving the US while maintaining continuity of their technical systems, know-how and key personnel. Sharing this plan or acknowledging it exists would just add unnecessary fuel to the fire at this point.
  • As a community, we need to double down on our policies. NOTCENSORED and NPOV are the two I think are most endangered by this right now. We need to emphasize to the WMF that these policies are non-negotiable. If the Government starts pushing on them, the community needs to communicate to the WMF the expectation that bending or breaking is unacceptable, and it is preferable for the WMF to pull out of America than to bend on our core policies.
  • The community needs to chill on the blackout talk. We're not there yet. If editor safety or our core policies are threatened, THEN it's time to break out the banners, blackouts, and forks in escalating order. Right now we're WAAAAY premature, and the WMF has excellent lawyers precisely for letters like this. Tazerdadog (talk) 11:26, 28 April 2025 (UTC)[reply]
I just want to note that emphasizing to the WMF that NPOV is non-negotiable is not really the issue. As you may or may not have read, I'm chairing a working group on NPOV. There is no chance that the WMF is about to challenge the idea of neutrality, and a much higher chance that the WMF will be expanding support for volunteers in making sure that NPOV is upheld. "As threats to neutrality appear to be on the rise globally, Wikipedia’s neutral point of view (NPOV) policy is needed now more than ever."
In short, I 100% agree with you, Tazerdadog, that "we need to double down on our policies". I think there are exactly zero people at the WMF who have any notion that we should give up on neutrality to please any government! We're all in this together. Jimbo Wales (talk) 10:01, 29 April 2025 (UTC)[reply]
Can you expand on the working group? It's the one described here, correct? Do you think that work has greater implications for small wikis than for the English Wikipedia? —Ganesha811 (talk) 17:41, 29 April 2025 (UTC)[reply]

Friends, Wikipedians, citizens of the world, lend me your ears. We will not be cowed by the aggressive actions of a lawless regime. Its fate will be decided by the public, whose approval of the Trump administration has already sunk in opinion polls to the low 40s and high 30s. When MAGA encounters empty shelves at stores, high inflation, disappearance of jobs, and an increasingly likely recession, suddenly the "anti-woke" will be awakened. Carlstak (talk) 00:45, 29 April 2025 (UTC)[reply]

I was tempted to add {{Not a forum}} to this discussion earlier, and I'm increasingly thinking it's warranted. Thebiguglyalien (talk) 🛸 01:25, 29 April 2025 (UTC)[reply]
That's fine, I did get up on Antony's soapbox. I've been sounding the alarm about this for months in real life, and people are just now taking it seriously. Carlstak (talk) 01:56, 29 April 2025 (UTC)[reply]

Time to move to a more Federal model

The Wikimedia movement started in the USA, but it has been a global movement since its earliest days. There are other global movements around that we can compare ourselves to, at least in how we handle money. Some are relatively loose confederations, with each national organisation doing its own fundraising. The Wikimedia movement is an odd hybrid, with some chapters, like Germany, handling the donations from readers in Germany, but most, including the UK, being grant-funded from the USA, with UK readers' donations going to the USA. Now would seem an appropriate moment to reconsider that model. Maybe move one or both datacentres from the USA to another country such as Canada, Iceland or Ireland, the endowment to a financial hub such as London or Frankfurt, and decentralise fundraising to any country where we have a national registered charity. It would be odd for a for-profit US organisation to do charitable fundraising in other countries.

If the US organisation were only handling US donations, then it would be reasonable for its board to be US-based, with a separate global board to coordinate the various national chapters. If the only Wikimedia donations handled in the USA were donations from people in the USA, then the movement's exposure to US taxes etc. would be greatly reduced. Disclosure: I have at times been a member of WMUK and worked for it from 2013 to 2015; however, I'm not connected to it these days. ϢereSpielChequers 05:50, 27 April 2025 (UTC)[reply]

@WereSpielChequers sounds good on paper, but in practice it will be a complicated setup, as different countries have different rules on how donations raised within their borders may be disbursed domestically and internationally. From what I understand, the German chapter passes the amount collected in excess of what was originally budgeted with the Foundation back to the USA.
I organise the Singapore user group, and I did consider a scenario in which the donation banners are activated for Singapore IP addresses and the money collected goes into a future Singapore charity for Wikimedia. If the aim is to share this collected amount with other affiliates that do not have fundraising options, it is not pretty, as 80% of the net proceeds raised in this manner would most likely have to be set aside for activities in Singapore. If the amount collected is way beyond what we have budgeted for the year, we may have to find ways to spend it (I don't know... maybe offsetting the costs of running the datacenter in Singapore? yeah, the Foundation has caching servers in Singapore) or endure criticism for having a reserve fund that does not deplete over time. – robertsky (talk) 14:30, 27 April 2025 (UTC)[reply]
EN.WP serves Canada, the UK and several other Commonwealth countries with English as a first language. Federating along language lines would not make a US-only board even remotely OK. Simonm223 (talk) 14:57, 27 April 2025 (UTC)[reply]
To consider another example, in Canada a charity must be Canadian-registered or a UN agency in order for donations to be tax-deductible. To be registered as a Canadian charity, the organization must be carrying out its charitable purposes itself (or be in direct control of the work being done by others). (Donations to a U.S. charity can be eligible with some restrictions if you have U.S. income, or you or your family are enrolled in a U.S. university.) Based on my understanding, a Canadian Wikimedia charity wouldn't be able to simply transfer tax-deductible donations to another organization. isaacl (talk) 15:04, 27 April 2025 (UTC)[reply]
Yes, a federal model has implications: individual chapters would have to adopt particular projects, as DE has done with Wikidata. The WMF board would have to split into a USA chapter board and some sort of global council, and the two combined would have less power within the movement than the WMF has today. But if organisations as diverse as Greenpeace and the Red Cross can do this, we could too. ϢereSpielChequers 17:24, 27 April 2025 (UTC)[reply]
This sounds like a good idea, but I have little knowledge of charity laws in different countries. I admit that I don't contribute any money to the WMF, but I know that if I contribute money to a UK charity I am usually asked if I am a UK tax payer, in which case they can claim back the tax paid on the donation. I certainly don't like contributions from outside the US going towards the MAGA agenda. Phil Bridger (talk) 18:04, 27 April 2025 (UTC)[reply]
The Canadian Red Cross carries out its own relief work, and Greenpeace Canada is a non-profit but not a registered charity. I agree it's possible in theory to transform the network infrastructure into separately run subnetworks. There'll be additional overhead, with duplication of functions across the separate organizations, and fundraising challenges to ensure each organization collects enough funds for its operations and endowment fund. isaacl (talk) 14:47, 28 April 2025 (UTC)[reply]
Indeed. There's a lot of talk of servers above, but moving things between data centres is relatively trivial. The primary reason we're vulnerable to this kind of pressure from the US government is that the WMF made the early mistake of concentrating its financial and organisational resources in the US, instead of going down the route exemplified by Wikimedia DE. I hope this will prompt them to reconsider that choice. – Joe (talk) 10:30, 28 April 2025 (UTC)[reply]
I don't think we're particularly vulnerable to pressure from the US government, by the way. Jimbo Wales (talk) 10:02, 29 April 2025 (UTC)[reply]

What efforts went into the SOPA blackout?

I think that being prepared to support a banner or a blackout to protest is warranted at this time. The SOPA blackout had senators' phones ringing off the hook. This is political power. The Wikimedia community has the power to shape public opinion, politicians, and law through banners and blackouts. DO NOT BE AFRAID TO USE IT. Protests_against_SOPA_and_PIPA#Wikimedia_community Victor Grigas (talk) 11:41, 26 April 2025 (UTC)[reply]

It's not a matter of being afraid to use it, but that this is an encyclopedia with editors from around the world and of many different opinions rather than a campaigning site for Americans. Phil Bridger (talk) 12:41, 26 April 2025 (UTC)[reply]
If someone wants to seriously propose a blackout, they'll need to begin with a fully formed proposal that clearly lays out a very credible existential threat to the English Wikipedia, supported by reliable sources. Basically your proposal needs to be Featured Article quality when you first post the RFC. And you need to post it far enough ahead of time that consensus has time to happen. Keep in mind that you'll have to convince or outvote those who believe Wikipedia should be entirely apolitical even in the face of an existential threat, people from other countries who aren't familiar with US politics or culture wars, and the opposite side in the US culture wars (yes, there are such people here). And hope that someone else doesn't decide to post a "stub-class" proposal while you're preparing your FA-class proposal, and thereby poison the well. Anomie 13:46, 26 April 2025 (UTC)[reply]
In addition to the threat, there would need to be a compelling case for impact. The SOPA/PIPA blackout sought to raise awareness of the potential impact of specific legislation that was likely not very well known among the general public. That's not the situation the Martin WMF letter creates. CMD (talk) 14:03, 26 April 2025 (UTC)[reply]
Yes, however the WMF (if I remember correctly) is based in America. Even if someone isn't a US resident, it would still affect them. I do understand the sentiment, but I highly support a possible blackout/banner. Wikipedia is used very frequently, so it would definitely get people's attention (which, sadly, I feel is the most people can do nowadays as regular citizens, oh well). Gaismagorm (talk) 01:16, 27 April 2025 (UTC)[reply]
As much as it is an American thing, many of the things that international editors do here rely on the laws in America and on the WMF's status to shield them from their own. Nonetheless, the use of blackouts like SOPA's should be considered when the threat becomes very real, whereas the situation now is still fluid. Case in point: just yesterday we learned that ICE is reversing the termination of international students. The letter from Martin may be another round of bluster with little substance. – robertsky (talk) 01:28, 27 April 2025 (UTC)[reply]
(From French Wikipedia.) Obviously what happens to the WMF affects and concerns us too, very much. — Jules* talk 10:51, 27 April 2025 (UTC)[reply]
@Victorgrigas For now the threats are to the Wikimedia Foundation, not Wikipedia. As a consequence of these I suspect that the Wikimedia Foundation may indeed threaten Wikipedia freedomness and neutral point of view at some point to comply with these demands, but that's another story and we'll deal with that if it eventually comes to that. Darwin Ahoy! 14:37, 26 April 2025 (UTC)[reply]
There is zero chance that the WMF is going to "threaten Wikipedia freedomness". There's zero (zero!) support for that from any staff or board members. Keep in mind that, when being attacked, one of the things that the attackers usually want is for the attackees to turn on each other for no reason. We can be unified because we are unified. Jimbo Wales (talk) 10:04, 29 April 2025 (UTC)[reply]
Well, there's a lot of overreaction here. Nobody has made any existential threat to Wikipedia, only suggested that it may not be suitable for a tax exemption. A protest to keep a tax privilege may actually have the opposite effect to the one expected. It may be better to let the WMF deal with this behind curtains, and if it can't be done and the WMF loses the tax exemption... just accept it and pay the taxes. --Cambalachero (talk) 00:47, 27 April 2025 (UTC)[reply]
We shouldn't kid ourselves into thinking that this is just a normal governmental inquiry into tax status. It's coming from the same motivations as the attacks on universities, the press, and law firms: the motivation to shut down any source of honest, unbiased information that goes against the Trump administration's preferred narrative. But it's also true that we shouldn't take any reckless, knee-jerk actions. We should be deliberative and thoughtful, and respond only in well-considered ways. --Tryptofish (talk) 00:56, 27 April 2025 (UTC)[reply]
I agree, but Cambalachero has a good point. It won't look good for Wikipedia to protest for a tax exemption. While I want it to have a tax exemption, out of context it sounds kinda weird. I do hope that the WMF will be able to deal with it, however, since that will make everybody's lives a thousand times easier. I do trust that, no matter what, we'll survive. I feel as if Wikipedia has likely survived much worse threats than this. It won't be fun while all of this lasts, but I'm optimistic that things will get better. They always do, and they always will. But maybe that's just hopeful thinking. Gaismagorm (talk) 01:33, 27 April 2025 (UTC)[reply]
It is important to note that other right-wing attacks on Wikipedia turned out to be essentially nothing (such as the Heritage Foundation's recent scheme, which as far as I can tell hasn't happened, and I don't think it will). Once again, this is likely just me trying to remain optimistic so I can remain sane. Gaismagorm (talk) 01:35, 27 April 2025 (UTC)[reply]
I definitely agree that protesting framed in terms of tax status would be politically tin-eared. As for Heritage, it hasn't happened yet; this may be where it starts. And as for one's sanity, me too. --Tryptofish (talk) 01:44, 27 April 2025 (UTC)[reply]
Yep, always assuming the worst case scenario is narrow-minded by definition. We need to respond to the scenario in front of us, not the hypothetical scenario that sounds the most dramatic. Addiction to pessimism porn is more harmful than most psychological dependencies. Thebiguglyalien (talk) 🛸 02:32, 27 April 2025 (UTC)[reply]
And the scenario in front of us does not require any response from en.wiki. Whether the WMF will issue a response on their end is up to them. CMD (talk) 02:44, 27 April 2025 (UTC)[reply]
The larger issue isn't whether or not the Wikimedia Foundation has to pay taxes; it's that donations will no longer qualify for a tax deduction. If I understand meta:Wikimedia Foundation Annual Plan/2023-2024/Finances § Budget numbers correctly, the vast majority of revenue comes from the fundraising campaign, so there would be significant effects on operations and the funding model. isaacl (talk) 04:34, 27 April 2025 (UTC)[reply]
It is WAY too soon for any response from the community. The WMF has an excellent legal team. They can deal with this. Don’t over-react. Blueboar (talk) 12:15, 27 April 2025 (UTC)[reply]
I haven't advocated for any reaction from the community. I agree that the WMF is capable of deciding the next best steps. isaacl (talk) 14:31, 27 April 2025 (UTC)[reply]
The reality is that at present both Congress and the Senate are politically non-functional, so conventional campaigning is unlikely to be effective. For the most part the best option is to try and keep a low enough profile that people lose interest. On a technical level, ensuring there are overseas cold backups should probably be a thing.©Geni (talk) 12:03, 27 April 2025 (UTC)[reply]
at present both Congress and the Senate are politically non-functional - It's more complicated. While the US Congress (consisting of the House and the Senate) is indeed very polarized these days and gridlocked on most "big" topics, there is actually still quite a lot of bipartisan legislation going on under the radar - a phenomenon that has been called "secret congress".
And unfortunately (for us), these heartening examples of bipartisan consensus include various attempts to weaken Section 230 (a law which has been described as being essential for Wikipedia's existence), and similar efforts.
In fact, just two days ago, right after you posted this comment, Congress passed a new internet law against the warnings of groups such as the Center for Democracy & Technology, the Authors Guild, Demand Progress Action, the Electronic Frontier Foundation (EFF), Fight for the Future, the Freedom of the Press Foundation, New America's Open Technology Institute, Public Knowledge, and TechFreedom (to quote from TAKE_IT_DOWN_Act#Criticism) that its takedown provisions could be abused. Some commenters have specifically described Wikipedia as a website that could be affected by this:

In general, government-mandated takedown systems are easily abused by private bad actors. (This primarily happens with “copystrike” extortion and censorship, which has grown out of mandatory takedown systems for copyright infringement.) [see also my recent Signpost article with some specific examples of Wikipedia articles affected by such spurious takedowns on Google]

More specifically, conservatives have signaled an interest in undercutting supposedly “liberal” platforms — Wikipedia in particular is frequently attacked by Musk and has been targeted by the Heritage Foundation. The Take It Down Act covers online platforms (with the exception of email and a few other carveouts) that “primarily [provide] a forum for user-generated content,” and while Wikipedia isn’t typically in the business of publishing nonconsensual nudes, it seems plausibly covered by some interpretations of the law. The FTC would probably have no compunctions about launching a punitive investigation if trolls start spamming it with deepfakes.

(from an article in The Verge, with one internet liability expert - who heads a program on platform regulation at Stanford University - agreeing with The Verge about Wikipedia being a plausible target)
Now, these observations are from early March and I don't know if the bill was improved since then, or how likely it is that the current US government will indeed try to (ab)use this new law against Wikipedia in this way.
But my larger point is that even if one judges the current legal risk regarding 501(c)(3) status as low (see also Jimbo's comment above), we might well see new laws soon that increase the attack surface greatly, and not just in the US.
For Wikimedians interested in that kind of threat: The public policy mailing list is probably the most active forum about such issues.
Regards, HaeB (talk) 04:51, 1 May 2025 (UTC)[reply]
Considering the community just rejected a proposal for a blackout, which would have been in response to an Indian media conglomerate using lawfare to intimidate individual users and censor our content, I would honestly find it a bit insulting for us to propose a blackout over a letter questioning the foundation's tax-exempt status in the United States. This letter is certainly a bad sign of things to come, but let's be real here, it does not yet represent an active and present threat to us in the way that the ANI lawsuit does. We should absolutely be proactively considering how to react if things get worse, and if the political environment in the United States presents an active threat to the project's functioning, but this is really putting the cart before the horse. --Grnrchst (talk) 12:54, 27 April 2025 (UTC)[reply]
We didn't black out for the arrested Middle Eastern editors either, but we did black out for platform-wide threats. This is a giant fiscal threat to the WMF, and that means super-reduced operations. That said, I want to see how this situation progresses first, especially with WMF Legal. Aaron Liu (talk) 23:49, 27 April 2025 (UTC)[reply]
I think talk of a blackout right now is an overreaction. For now, we should assume good faith and see this as only a threat to our tax status. However, as many other Wikipedians have stated, it wouldn't be unprecedented for the current US administration to challenge the freedom of the content on Wikipedia or the safety of her editors. For now we need to stay calm, hope for the best, and be prepared for the worst without expecting it. It would be far more productive to use this time to figure out ways to increase the anonymity of both readers and editors, particularly those working on contentious topics. mgjertson (talk) (contribs) 14:36, 30 April 2025 (UTC)[reply]

Our job is to educate and teach, not to protest. IF this reaches a point where there is a need to formally react to any of this, a banner explaining the situation might be considered, but it should not take the form of a blackout/protest banner. Blueboar (talk) 13:08, 27 April 2025 (UTC)[reply]

If something threatens our ability to educate and teach, we absolutely should be protesting against that. --Grnrchst (talk) 13:58, 27 April 2025 (UTC)[reply]
Blueboar has said elsewhere on related topics that we should all wait 10 years to consider reacting to the current situation. Viriditas (talk) 22:46, 27 April 2025 (UTC)[reply]
They might have been referring to WP:10YEARS, which is about covering things in articles, not project activity. Thebiguglyalien (talk) 🛸 22:53, 27 April 2025 (UTC)[reply]

For crying out loud - we don't need to black out the site because questions were raised about the tax-exempt status of the WMF. In no way does the WMF potentially needing to pay taxes undermine the neutrality of the encyclopedia; before last November half of the editing base here was perpetually pissed at the WMF for wasting their money more than anything else. And we shouldn't forget that the Obama admin was targeting various nonprofits at one point; I don't think we got riled up over that, did we? American politics are cyclical in many ways. Yes, there are some things occurring in the USA that concern me. And yes, I'm aware that the editing base of enwiki skews to the left. And yes, a lot of the more conservative media sources aren't nearly as reliable as they used to be (there's a reason Fox doesn't run the "Fair and balanced" tagline anymore ...). But we need to be really careful that we don't create an editing environment in which 49.8% of the American public doesn't feel that they can contribute. Hog Farm Talk 22:42, 27 April 2025 (UTC)[reply]

I'll keep saying it until it sinks in: a lot of editors here are increasingly falling into pessimism porn addictions. The explanation there describes many of these "everybody panic" posts we've been seeing. Thebiguglyalien (talk) 🛸 22:56, 27 April 2025 (UTC)[reply]
It's a matter of finding a middle ground between learned helplessness and over-reacting. We shouldn't do things that are premature, or that will backfire on us, but we also shouldn't ignore reality. We can look at what has already happened, as a matter of public record, to other institutions that have been targeted in the same way. US universities provide some good examples. Initially, universities that were accused of antisemitism (as opposed to harboring people hostile to the US, which is what we are accused of) made the mistake of trying to keep their heads down and placate the Trump administration. Their grant funding got cut anyway, and the demands just increased. These demands included having administration personnel monitor curricula and hiring. Translate to Wikipedia, and that would be administration officials getting to rule on what our content says, and which editors can be blocked. Now that Harvard has announced that they will fight back in court, there's a greater sense that things will play out in the courts over time, and that reason can prevail. We need to recognize that this is the path we are facing, too. It isn't about whether WMF will pay taxes. It's about whether we will allow ourselves to stop being a reliable encyclopedia, something we will not allow. We shouldn't freak out, but we need to be realistic. --Tryptofish (talk) 23:13, 27 April 2025 (UTC)[reply]
I don't see a scenario where a government has full editorial control over Wikipedia, but otherwise I think we're in agreement on the issue. Thebiguglyalien (talk) 🛸 23:17, 27 April 2025 (UTC)[reply]
Thanks. I only see that scenario happening if we let it happen. But I do see a realistic chance of them trying to get us to do it. --Tryptofish (talk) 23:20, 27 April 2025 (UTC)[reply]
My genuine concern is that we're going to overcorrect and end up taking a general political stance that is incompatible with encyclopedic goals. I personally can't imagine a situation in which the US court of public opinion or the US court system is going to side against the general principles of Wikipedia if we stick to them. If we get to a point where WP:NPOV, WP:RGW, etc. get replaced by a political shibboleth in our response to this, or we create and accept WP:NOREPUBLICANS to go alongside WP:NONAZIS and Wikipedia:No Confederates, then we've 1) lost our credibility as a neutral encyclopedia and 2) will lose a good chunk of said court of public opinion and end up destroying the encyclopedia. Hog Farm Talk 23:48, 27 April 2025 (UTC)[reply]
My concern is that a handful of people here want this to happen. You'll see people around here who think we have a moral obligation to take a stand on political issues using Wikipedia as a platform. Then there are also the people who are only WP:HERE to try and push a Trumpist viewpoint into articles. Higher up I mentioned the same thing about principles being the most useful path forward. I also suggested a discussion about how to address people who want to violate these principles for their own political ends, and I have since started a discussion at Wikipedia talk:Arbitration/Requests/Enforcement#Clarification on POV pushing and AE action. Thebiguglyalien (talk) 🛸 00:07, 28 April 2025 (UTC)[reply]
Completely agree, we should be taking this constructively and look at ways we can do better. Unfortunately, some of his criticisms and questions are somewhat valid. Kowal2701 (talk) 21:59, 28 April 2025 (UTC)[reply]
I share Hog Farm's view. We need to maintain our credibility as a reliable encyclopedia that isn't distorting the facts, in order to maintain (or recapture) the political upper hand. Giving in to feel-good retribution in mainspace will assuredly backfire. But I also believe that editors should feel free to speak plainly in the behind-the-scenes namespaces. --Tryptofish (talk) 21:46, 28 April 2025 (UTC)[reply]
Agree 100% with Tryptofish. --Grnrchst (talk) 08:20, 28 April 2025 (UTC)[reply]

I'm definitely on the side of wanting WMF Legal to be able to take the lead here. We shouldn't do anything that would undercut their effectiveness. I also think that any actions we eventually take should play to our political strengths, and not play into the hands of those threatening us. I'm not wild about a blackout, because depriving readers of the information we provide is actually what the Trump administration wants, so why should we do it for them? I like the idea that Blueboar mentioned, of an informative banner. If members of the public come here, and still find the information they want from us, but they first have to get past a conspicuous banner (maybe one that you cannot make disappear by clicking an x) that tells them of the situation and points them to ways to object to what's happening, that could be very effective at getting public opinion on our side. Something else that we should all try to do is to stay faithful to our values in terms of NPOV and the like. The more we continue to insist on accurate and neutral content, correcting errors as we find them, and not engaging in WP:RGW in mainspace, the more credibility we have, and the weaker our opponent's case will be. --Tryptofish (talk) 22:59, 27 April 2025 (UTC)[reply]

Meh. I don't see any particular efforts in any direction we might take as having any gravitas. The reality of this situation is that what is asked for by the letter is impossible to achieve by the deadline imposed. The WMF doesn't have the pockets to fight this sort of thing like Harvard does, and certainly doesn't have the pockets to weather the storm that's coming. The WMF will lose its 501(c)(3) status. It's essentially a given. How much of an impact will that have on donors? Who knows, but the status will be gone in three weeks' time. No imagined solution generated from this or any other page where this is being discussed is going to change that reality. --Hammersoft (talk) 19:02, 28 April 2025 (UTC)[reply]
Apes together strong. We should join with Harvard in a coalition of the willing. Viriditas (talk) 21:32, 28 April 2025 (UTC)[reply]
I'm all for facing reality, but I think it's maladaptive and frankly craven to adopt the position that we should just take it and say thank you can we please have some more. I understand and sympathize with how unpleasant it feels to deal with government-by-bullying, and how that can make editors just want to rationalize inaction. But rationalizing is what it is, and that's facing reality, too. Editors (and indeed people in the "real world") should feel self-confident enough to call this what it is. Now that said, I also expect that it's quite likely that the tax status is going to get pulled. I also expect that, subsequently, it will end up in litigation, and that will go on for a long time and have unpredictable aspects. Simultaneously, I expect further demands, that will go beyond tax matters, along with very public efforts to discredit Wikipedia and our content. --Tryptofish (talk) 21:57, 28 April 2025 (UTC)[reply]
Again, *shrug*. I'm sorry, but I do not see any reasonable way that Wikipedia can defend itself against this other than highly expensive (as in millions of $) expenditure in court. That's the only venue that will matter. The government in question will not give any thought whatsoever by what we say here. Even if a million editors all screamed out at once, it would have as much effect as a single rain drop would have in the Gobi Desert. They simply won't see it. It's meaningless flapping of our wings in the hope that some wild butterfly effect would somehow cause an earthquake, hurricane, and blizzard to all happen in D.C. at the same time. It's just fantasy. I'm not saying this against you personally, but against any idea that we can somehow stop this. We can't. The best path forward is how to structure the project despite the serious damage this administration is about to inflict on it. Stopping it is impossible. --Hammersoft (talk) 01:03, 29 April 2025 (UTC)[reply]
Stopping it is impossible To quote a little green Muppet, "that is why you fail". Plan for the worst, yes. But saying something is impossible and acting accordingly is not productive. - The Bushranger One ping only 05:44, 29 April 2025 (UTC)[reply]
Fortunately, I don't guide my life by little green muppets, whether they have cute ears or none at all :) Seriously though; standing in front of an oncoming 100mph avalanche with a shovel saying "I got this!" isn't productive either. The powers that be in D.C. will not care if we all black out our userpages, take down every article, or go on an editing strike. It all serves their purposes. Even if it all was directly against their purposes, they still wouldn't care. There simply isn't any reasonable method by which we can affect the outcome of this. --Hammersoft (talk) 17:55, 29 April 2025 (UTC)[reply]
Before I hovered the link, I wondered when it was that Kermit the Frog turned zen. ⁓ Pelagicmessages ) 06:24, 3 May 2025 (UTC)[reply]
  • In addition to depriving readers of the information we provide is actually what the Trump administration wants, there's the simple fact that a blackout is pointless. It certainly won't change any of our opinions. And those on the 'other side', it won't change theirs either. It'll only affect the people in the middle - by making them pissed off at us. - The Bushranger One ping only 22:15, 28 April 2025 (UTC)[reply]
    It would demonstrate two things: (1) that Wikipedia is an American organisation and not an international one; and (2) that it engages in political lobbying against the interests of the US government. Hawkeye7 (discuss) 22:28, 28 April 2025 (UTC)[reply]
    I'm against a blackout, too. But I feel the need to clarify that "the interests of the US government" are neither "the interests of the Trump administration" nor "violation of the First Amendment guarantee of free speech". But I agree that members of the public may very well see a blackout in the way that you describe. --Tryptofish (talk) 22:33, 28 April 2025 (UTC)[reply]

Just for clarification, CentralNotice banners can be targeted to specific countries/languages/projects only. Blacking out for everyone because of things in a country would be overly invasive. (Not that I think we should do anything with banners, letter and the like right now. WMF Legal will handle this with the highest expertise they have.) Best, —DerHexer (Talk) 12:44, 29 April 2025 (UTC)[reply]

To add: m:Project-wide protests has a (possibly incomplete) list of past actions of this kind, and (to come back to the initial question above) m:English Wikipedia anti-SOPA blackout links to various detailed descriptions of what was done in that case.
(And agreed that any action of this kind in the present matter would seem premature at this point, especially before having heard from WMF Legal.)
Regards, HaeB (talk) 16:18, 1 May 2025 (UTC)[reply]

Possibly less constructive musings

Some thoughts, either thinking outside the box or desperately needed tragi-comic relief.

  1. Give up 501(c)(3) status. Reincorporate elsewhere (Liechtenstein?) and move all financial assets offshore. This will have zero impact on contributions from non-US sources, and US sources may donate 20%–30% less.
  2. Create a MAGA Wikipedia, en.maga.wikipedia.org, with its own rules. Let that be how the Wikimedia Foundation is able to demonstrate that it accommodates all views. Fans of "separate but equal" will embrace this. (Hide it from search engines.)

Feel free to add yours. Largoplazo (talk) 22:28, 27 April 2025 (UTC)[reply]

Oppose both. Your second option is, in fact, precisely the central heart of this dispute. If you go on to Twitter right now (or any other right-wing forum) you will quickly discover that the most shared or viewed discussions on this topic are concerned with this very problem. MAGA believes that Wikipedia articles are hostile to conservatism because Wikipedia doesn't entertain or accept alternate facts or baseless conspiracy theories and doesn't use or rely on poor unreliable sourcing like "Ron Vara". That's what this is all about, no more, no less. Viriditas (talk) 22:43, 27 April 2025 (UTC)[reply]
Okay, but hear me out here. It would be hilarious if we beat Conservapedia at their own game. Especially if the second option is taken to such a hilarious extreme that it rolls over into satire. Gaismagorm (talk) 23:39, 27 April 2025 (UTC)[reply]
No action needed on our part. See Conservapedia's article (permalink) about The Room. — Newslinger talk 01:23, 28 April 2025 (UTC)[reply]
See also Conservapedia's essay "Greatest Conservative Songs" (permalink). — Newslinger talk 01:32, 28 April 2025 (UTC)[reply]
The most concerning thing about Conservapedia at the moment is that it has entirely embraced Putinism and Orbánism, two styles of government that are behind the push to extend the reach of an autocratic state into education and private industry. Viriditas (talk) 01:38, 28 April 2025 (UTC)[reply]
Wow. That's priceless. I love how the "legacy" section smoothly transitions into an explanation of social conservative, centre-right politics. Cremastra talk 23:00, 28 April 2025 (UTC)[reply]
Conservapedia is unintentional satire, but they are neither aware of it nor do they understand why it is satire. I mean, let's not forget, they literally invented one of the most famous memes on the Internet: Supply Side Jesus riding a dinosaur. And they were dead serious about it at the time. Viriditas (talk) 01:25, 28 April 2025 (UTC)[reply]

Fans of "separate but equal" will embrace this. (Hide it from search engines.)

lmao that so quintessentially embodies segregation. But to make it nominally equal instead of "arbitrarily" silenced, it should probably be on a separate domain (magawikipedia.org?) the WMF registers through ICANN; as there are no links to that domain, it would not appear on search engines for quite a long time. Aaron Liu (talk) 23:47, 27 April 2025 (UTC)[reply]
This is a deeply unconstructive proposal. This is not the anti-MAGA encyclopaedia. Our articles are not supposed to push any particular political theory. We are supposed to be (in article space) neutral even on the topic of Wikipedia. CMD (talk) 03:10, 28 April 2025 (UTC)[reply]
Basically a giant WP:POVFORK Kowal2701 (talk) 22:00, 28 April 2025 (UTC)[reply]
I agree, but at some point you have to realize they don't know or care, quite frankly. Anything that challenges their worldview is seen as unfairly biased against them and, in turn, against our neutrality. Providing them with their own Wikipedia not only shows that we aren't biased against them, but it gives us a way to show that our policies have merit in keeping a reliable encyclopedia, since a MAGA Wikipedia would almost certainly betray the ideals that Wikipedia stands on and end up creating an objectively worse encyclopedia because of it. mgjertson (talk) (contribs) 14:47, 30 April 2025 (UTC)[reply]
I know "they" (Ed Martin?) don't care, but for that reason they wouldn't care about any proposed solution. Creating a second Wikipedia would not show that this Wikipedia is unbiased, it would heavily imply that this Wikipedia is not the place for the target group. As Kowal2701 says, a giant POVFORK. And while they don't care, we do care, we want to build an accurate and neutral Wikipedia, and we should want that whether Ed Martin approves or disapproves. CMD (talk) 15:04, 30 April 2025 (UTC)[reply]
Sartre had the number of the Ed Martins of the world all the way back in 1946. And, no, there is nothing that Wikipedia could do to persuade him that we are sufficiently neutral because he does not seek a neutral Wikipedia. He seeks a subservient, cowed, compliant Wikipedia. Simonm223 (talk) 19:24, 30 April 2025 (UTC)[reply]
That's what I think. Carlstak (talk) 19:36, 30 April 2025 (UTC)[reply]

Publicity

The best way I can think of to fight this is publicity. Wikipedia must have friends in high places that value Wikipedia. How can we harness that to get the message to the Non-MAGA American public that the government is trying to kill it? Doug Weller talk 07:34, 29 April 2025 (UTC)[reply]

Someone at WMF should call liberal Illinois Gov. JB Pritzker. I bet he'll publicize that message with a thunderous speech. He is a masterful orator, and, he's a billionaire. Carlstak (talk) 17:12, 29 April 2025 (UTC)[reply]
I disagree, until there are actual damages we would just look petty and partisan... I also disagree that the government is trying to kill wikipedia, that seems a mite hyperbolic given the evidence we have. Horse Eye's Back (talk) 17:21, 29 April 2025 (UTC)[reply]
Basically this. A public reaction will only be spun as us being defensive and used as evidence that we're trying to hide "our agenda". Keep calm and carry on. GMGtalk 17:28, 29 April 2025 (UTC)[reply]
I think the tipping point will be if Martin actually launches an investigation. Then it's time for thunderous speeches. Carlstak (talk) 17:36, 29 April 2025 (UTC)[reply]
I was thinking more of media personalities, not politicians. Doug Weller talk 17:42, 29 April 2025 (UTC)[reply]
@Doug Weller: If the situation escalates, I would be willing to reach out to people I've been in casual contact with. However, I don't see a reason to do so at this time without a more tangible threat and responding call-to-action. –MJLTalk 18:05, 29 April 2025 (UTC)[reply]
Yes, too early. Doug Weller talk 18:14, 29 April 2025 (UTC)[reply]
That still seems way too early... At least wait until an actual court case has been adjudicated. Horse Eye's Back (talk) 17:46, 29 April 2025 (UTC)[reply]
Adjudicated or initiated? Doug Weller talk 18:15, 29 April 2025 (UTC)[reply]
Adjudicated. I don't believe that we should resort to the court of public opinion before at least attempting the actual courts (which it's not even clear this will reach; the AG seems to be on a fishing expedition). We have the high ground; action is not to our benefit. Horse Eye's Back (talk) 18:35, 29 April 2025 (UTC)[reply]
I take your point. Doug Weller talk 18:57, 29 April 2025 (UTC)[reply]
Well, if it gets to that stage, I would expect a lot of publicity in the sane-washing press and reality-based media alike. Carlstak (talk) 19:41, 29 April 2025 (UTC)[reply]
Ed Martin is basically just Milo Yiannopoulos with a law degree. Ignore. Partofthemachine (talk) 04:35, 30 April 2025 (UTC)[reply]

How about we create and publicize initiatives to find systemic fixes to en Wikipedia's bias on US politics-related topics? Then we'd probably get less of this crap. Also, while total unbiasedness is impossible to define, much less achieve, when bias gets to the point where it degrades and distorts the informativeness of those articles (which it has), such improvements would also align with our en Wikipedia mission, which is to offer quality informative articles. Sincerely, North8000 (talk) 18:47, 30 April 2025 (UTC)[reply]

I agree, but I don't think the things the administration considers biases are actually things that need to be fixed. Obviously, there is some left-leaning bias in some articles, but not enough to warrant a removal of our tax exempt status. Gaismagorm (talk) 19:06, 30 April 2025 (UTC)[reply]
I don't think that would work, because a truly neutral perspective on US politics would be far more critical of the United States and most of its political class than what we have. I mean, look at the farce that is Elon Musk salute controversy, where the first explanation we explore in the body is that having autism makes one sieg heil. Much of this comes from treating American news media as if it were consistently reliable for building an encyclopedia. But being neutral and calling more American things that quack ducks would just further infuriate the Trump regime.
The problem that Wikipedia faces is that the far-right doesn't care how neutral we claim to be. They don't want our neutrality. They want our submission. Simonm223 (talk) 19:09, 30 April 2025 (UTC)[reply]
The louder and more irrational critics will never be satisfied, but the average reader is going to have a good enough nose for bullshit that we could substantially improve our public image by addressing NPOV violations. Unfortunately, I have little hope that we can convince AMPOL editors that it's actually not good content work to put "is a far-right conspiracy theorist" in a lead or a 20,000 byte "controversies" section that would make WP:BALASP cry (the latter being something that's permeated the project well beyond AMPOL—I've spent the last few months trying to bail water out of that ship and would love to talk shop if anyone wants to help out). Thebiguglyalien (talk) 🛸 19:12, 30 April 2025 (UTC)[reply]
I think it would help. The widespread criticism of en Wikipedia bias is almost certainly a big factor that led to this. BTW, my interest is more in fixing the systemic causes, just to move the needle enough that bias no longer distorts articles to the point of degrading their informativeness, rather than in the elusive goal of defining and achieving unbiasedness. North8000 (talk) 21:12, 30 April 2025 (UTC)[reply]
Except the idea Wikipedia has a left-wing systematic bias is just an artifact of the skewed American Overton window that treats anything left of neoliberalism as hyper-Lenin. A neutral encyclopedia would be more critical of the Trump regime and would, for example, not prevaricate over its ideology as if it was in dispute. Instead we have a Wikipedia with a pervasive center-right bias. Those of us on the left are thus in the unenviable position of being told neutrality can only be achieved by further marginalizing left-wing perspectives and that we should swallow this as somehow neutral? This path only leads to there being two Conservapedias instead of just one. Simonm223 (talk) 21:26, 30 April 2025 (UTC)[reply]
Except the idea Wikipedia has a left-wing systematic bias is just an artifact of the skewed American Overton window that treats anything left of neoliberalism as hyper-Lenin. Yes, this. In their (the Americans') last election, I would say that while their two parties are obviously fairly big-tent, the Republicans struck me as far-right and the Democrats as centre-right to centre. But apparently the latter are considered a left-of-centre party in the U.S. American politics have shifted so far rightwards that trying to assess Wikipedia bias through that lens is hopeless. Cremastra talk 21:42, 30 April 2025 (UTC)[reply]
A few notes. General left-wing / right-wing bias is very different from, and even harder to define than, US politics bias, which I think is the real issue. And my focus is more on where it degrades and distorts the factual coverage and informativeness of articles (and we do have a lot of problems there) rather than on what types of obvious (op-ed type) criticisms or praise get in there. As a tiny example, describing a conservative's positions on something using only vague, inaccurate pejoratives from left-leaning media instead of providing detailed factual coverage of them. And my interest is in fixable systemic contributors to the problem. North8000 (talk) 21:44, 30 April 2025 (UTC)[reply]
Easy fix to that: prohibit newspapers for politics articles. Wikipedia will never do it because newspapers are convenient sources of (low-quality) information but requiring higher-quality sources for political articles is something that would probably get leftist editors on-board - unlike declaring that the NYT is the second coming of Friedrich Engels. Simonm223 (talk) 23:31, 30 April 2025 (UTC)[reply]
Perhaps we could get something out of this whole brouhaha by tightening up our standards a little bit. There are so many steps we could take: moving away from newspaper sources, disallowing criticism and controversy sections on BLPs, strictly enforcing WP:BALASP/WP:ONUS/WP:WTW (especially WP:LABEL), considering sources with non-impartial tones to be less reliable, and (god please) tban people when a large portion of their editing is showing up to support a given side in a CTOP. I'd take any of these as a win for Wikipedia. Thebiguglyalien (talk) 🛸 23:55, 30 April 2025 (UTC)[reply]
Let me know when this is officially proposed so I can strongly oppose it. Good journalism and newspaper coverage is under attack by the right, and your proposal would support their goals. Viriditas (talk) 00:09, 1 May 2025 (UTC)[reply]
Strongly agree. Carlstak (talk) 00:24, 1 May 2025 (UTC)[reply]
You should not be citing journalism on Wikipedia. In most cases it's a WP:PRIMARY source. Thebiguglyalien (talk) 🛸 00:25, 1 May 2025 (UTC)[reply]
I'm sorry? It being a WP:PRIMARY source is by no means an interdiction against it.
In any case, I dispute that claim; WP:PRIMARY notes that [p]rimary sources are original materials that are close to an event, and are often accounts written by people who are directly involved. They offer an insider's view of an event, a period of history, a work of art, a political decision, and so on. The vast majority of good newspaper articles are not this. They are written objectively from a person near the event but not "directly involved". Cremastra talk 00:32, 1 May 2025 (UTC)[reply]
WP:NEWSPRIMARY explains this. See also WP:RSBREAKING, which says All breaking news stories, without exception, are primary sources, and must be treated with caution. I also have my own essay explaining why citing contemporary coverage is poor form: User:Thebiguglyalien/Avoid contemporary sources. And this isn't considering investigative journalism and opinion-based writing, which are primary for the findings or views of the author. Thebiguglyalien (talk) 🛸 00:35, 1 May 2025 (UTC)[reply]
WP:NEWSPRIMARY is just a redirect to WP:USEPRIMARY (WP:Identifying and using primary sources), an explanatory essay which also says:
Again, "Primary" is not another way to spell "bad". Just because most newspaper articles are primary sources does not mean that these articles are not reliable and often highly desirable independent sources.
Carlstak (talk) 01:24, 1 May 2025 (UTC)[reply]
And I'll use them in these cases. The problem is when they're used to determine weight or indicate that something should be included in an article. I explained this in the essay I linked. See also the aptly-named WP:FART. I find the idea that we should use newspaper coverage to protect it from "attack by the right" to be WP:RGW, WP:NOTHERE style behavior. Thebiguglyalien (talk) 🛸 01:33, 1 May 2025 (UTC)[reply]
Nobody has ever said anywhere that we should use newspapers to protect it from right-wing attacks, nor can I possibly comprehend how you got that from my comment. Journalism is under total attack by right-wing billionaires. They have decimated local news coverage in most US communities and have taken over most mainstream news outlets. The "left", liberalism, and left-wing voices and opinions have almost zero representation and cannot be said to be a threat to the right anywhere in the US. This whole line of reasoning is part of the "liberal media" myth which began in the 1970s with the Powell memo and continues today with the enemy of the people lie espoused by the current administration. Viriditas (talk) 02:26, 1 May 2025 (UTC)[reply]
My opinion on newspapers is simply that they provide lower-quality information than peer-reviewed academic work and books published by academic presses. We should always prefer these sources but saying that in politics related articles often leads to significant protest. Simonm223 (talk) 11:40, 1 May 2025 (UTC)[reply]
I agree with North that you have the right of this (if the leads of all CTOP articles were mostly locked things would probably function better), but I would stress again that it would not affect the current situation at all. We should not pretend or give credence to the idea that this dispute is actually about our neutrality. CMD (talk) 00:53, 1 May 2025 (UTC)[reply]

A systemic fix will be structural and more complex that I can get into here but the gist of two items is:

  • Get rid of the binary concept of a source being "wp:reliable", where those in the club get the unconditional keys to the city from wikilawyers and those not in that club are unconditionally deprecated. Club membership is determined by trappings, which are those of legacy media, and by not getting voted out / deprecated. Go more with actual reliability, which is (context-specific) expertise and reliability with respect to the text which cited it (which is a wp:ver context).
  • WP:weight was intended to apply to "two sides of an issue" coverage but has been hijacked by wikilawyering (in tandem with the wp:rs issue) to exclude coverage on all "I don't like it" items even if they are not "two sides of an issue" type situations. Fix that.

Sincerely, North8000 (talk) 00:57, 1 May 2025 (UTC)[reply]

(Not to distract from the wider point, but just a note that we have a quaternary concept of source reliability, although much of the spectrum does fall towards one end or the other. CMD (talk) 01:17, 1 May 2025 (UTC))[reply]
If the letter from Martin were a good-faith expression of concern about us not getting neutrality right, I would see this as a discussion that I would be happy to have. For example: doing away with criticism sections in BLPs is something I could potentially support. (But also consider: Ted Kaczynski and Dzhokhar Tsarnaev have BLPs, too.) But let's not pretend that this is the case. The complaint isn't about neutrality. It's that we harbor persons who are trying to undermine the national interests of the US. And it isn't a constructive effort to correct what we might be getting wrong. It's a probably unconstitutional misuse of government authority to attempt to bully us into publishing content that would blatantly fail NPOV, but make some people in power happy. So let's drop this pretense that this is an occasion for us to fix some things we get wrong with NPOV. We could fix some things. I'd support fixing them. But that wouldn't stop the attempted coercion. And while I'd support fixing problems with our content, I'll strongly oppose any misguided attempts to change our policies in the hope that this would make the bullies leave us alone. --Tryptofish (talk) 21:06, 1 May 2025 (UTC)[reply]
Then when is the time to fix things? I've been asking for these things for a long time, and if this is the tipping point that gets us talking about it, then I'll take it. Also, infamous and widely-hated people are where we should be most cautious about neutrality because that's where it's easiest to slip up and move away from WP:IMPARTIAL or WP:POVFORM. Thebiguglyalien (talk) 🛸 21:39, 1 May 2025 (UTC)[reply]
There's nothing wrong with the time, any time. My concern is with the reasons. --Tryptofish (talk) 21:42, 1 May 2025 (UTC)[reply]
Maybe the best option would be to start a new discussion elsewhere about whichever of these proposals we feel should have already been done, so we can brainstorm without the burden of... whatever all this is. Thebiguglyalien (talk) 🛸 21:47, 1 May 2025 (UTC)[reply]
Elsewhere, definitely. And not because of Martin's letter. On the merits of the proposals. --Tryptofish (talk) 21:49, 1 May 2025 (UTC)[reply]

*Shrugs*

I think the best thing Wikipedia can do right now is to ignore this letter. If Mr. Martin actually wanted to enforce it, he would have to decide what he actually wants to do, go to a court, and then actually convince a judge to support his marginal legal theory. Even if he manages to do that, we would have more than enough time to react to everything that is going on. Hence, the best thing to do is to not feed this troll and just ignore him beyond a boilerplate "We reserve all rights under the law". The discussion about bias in Wikipedia that this has generated, from those who feel Wikipedia has a left- or right-wing bias, is counterproductive and, if anything, unlikely to be true. Wikipedia, if anything, has an intentional centrist bias, with differing biases in specific topic areas that may be considered left, right, or other wing. I think the best thing Wikipedia can do is to hedge against any deterioration of speech conditions by decentralizing its operations. For example, moving operations to Switzerland may lower the inherent risk created by needing to have a physical presence. Most of the other talk tends to fall under either pointless dooming or self-motivated arguing. Allan Nonymous (talk) 16:10, 4 May 2025 (UTC)[reply]

The threat may be fizzling

With my thanks to Herostratus, who posted about this at Jimbotalk, there is now this news report that Ed Martin appears unlikely to receive Senate confirmation, and so his acting position as DC Attorney may soon be coming to an end: [10]. --Tryptofish (talk) 18:57, 6 May 2025 (UTC)[reply]

Dang, nothing does happen. Gaismagorm (talk) 19:03, 6 May 2025 (UTC)[reply]
See [11] Doug Weller talk 07:46, 7 May 2025 (UTC)[reply]
This does appear to be another event that follows the trend of "Trumpist politician makes noise for several months, nothing happens, goes bust". Fantastic Mr. Fox 08:26, 7 May 2025 (UTC)[reply]

Sharing latest updates to WMF annual plan

Hi everyone - writing to share some good news. The Wikimedia Foundation has just shared the latest draft update to our annual plan for next year (July 2025-June 2026). This includes an executive summary (also on Diff), details about our three main goals (Infrastructure, Volunteer Support, and Effectiveness), and our budget and financial model. Feedback and questions are welcome here or on the talk page until the end of May. KStineRowe (WMF) (talk) 22:13, 29 April 2025 (UTC)[reply]

WMF plan to push LLM AIs for Wikipedia content

The page m:Strategy/Multigenerational/Artificial_intelligence_for_editors is largely about machine learning for the benefit of editors. Sure, likely and plausible - judiciously applied ML can work very well.

But it contains this alarming sentence:

Recent advances in AI have led to new possibilities in the creation and consumption of content. Large language models (LLMs) capable of summarizing and generating natural language text make them particularly well-suited to Wikipedia’s focus on written knowledge.

This is a claim frequently repeated by LLM boosters, and it is literally false.

LLMs don't summarise text - they shorten it. Without regard for meaning - because facts are not a data type in LLMs. The summaries will frequently be wrong, miss key points, or reverse meanings.

see e.g. the ASIC report on LLM summaries (PDF) - the AIs were worse than humans in every regard. In similar tests, LLMs will happily reverse the point of a paper.

LLM content isn't banned on English Wikipedia, but there's good reason it's almost universally shunned by the editing community - because we're not here for confabulating word generators, because the details actually matter here.

I have asked here for data on WMF's tests and studies backing this claim. Because it is a remarkable claim, and they need to back it up - David Gerard (talk) 16:27, 30 April 2025 (UTC)[reply]

I think you're reading a lot into that sentence that isn't intended, based on nitpicking the definition of "summarizing". Human-written text can also be misleading. That's why we have editors and not just writers.
IMO they make it clear that they understand how this can go wrong, and that we shouldn't do content generation at a scale where nobody can verify everything that's generated. mi1yT·C 16:57, 30 April 2025 (UTC)[reply]
If AI gets deployed as a content creator on Wikipedia, that'll be the end. Humans won't be able to keep up, and our 'jobs' as volunteers will become meaningless. For my part, I'll just leave the project. There won't be any point in contributing to it anymore. --Hammersoft (talk) 18:35, 30 April 2025 (UTC)[reply]
I would do the same. Wikipedia is, as it currently exists, a better alternative to LLMs. If they can write articles then readers can cut out the middle man and get their knowledge directly from LLMs. I'm glad I'm getting old. Phil Bridger (talk) 20:15, 30 April 2025 (UTC)[reply]
I hate getting old, but I agree. At least one tech writer is already talking about downloading a human consciousness and "implanting" it in a LLM, skipping the part of the AI doom talk where we live in dread of the AI singularity when it attains true intelligence and thus autonomy. Carlstak (talk) 21:03, 30 April 2025 (UTC)[reply]
Downloading human consciousness is the holy grail of tescrealism. It's also considered impossible by mainstream science with current tech. But that's not going to stop billionaires who don't believe in death and think their rule should last forever. Viriditas (talk) 23:03, 30 April 2025 (UTC)[reply]
The guy, much accomplished, is very well known in the developer community and more broadly in the commentariat. He writes as if it is inevitable rather than merely hypothetical. I don't want to link. I read the tescrealism article and found it interesting. I have some thoughts about that, but don't want to go off-topic. Owsley Stanley, RIP, would have much to say, I think. — Preceding unsigned comment added by Carlstak (talkcontribs) 00:22, 1 May 2025 (UTC)[reply]
Billionaire Zizians. Great. Polygnotus (talk) 13:07, 1 May 2025 (UTC)[reply]
What I'm reading into that sentence is that whoever wrote it doesn't appear to know what the heck they're talking about. I think that's pretty important and needs clarification. I think you're inventing a better version of the sentence that is more sensible than the words they actually wrote there, which are commonplace phrases used by people who don't know what the heck they're talking about - David Gerard (talk) 21:08, 30 April 2025 (UTC)[reply]
It's even worse than that. It sounds like it was written by an LLM. Good lord. Carlstak (talk) 22:03, 30 April 2025 (UTC)[reply]
(fwiw, AI detection tools are deeply flawed but do not flag this or any other text I've spot checked as LLM generated) Gnomingstuff (talk) 04:15, 2 May 2025 (UTC)[reply]
Yes, they're deeply flawed.;-) Carlstak (talk) 04:31, 2 May 2025 (UTC)[reply]
I'm not surprised that a document that was essentially vetted by affiliates at Wikimania might have some shortcomings, because for some real number of affiliates the overlap between affiliate members and project editors is not as strong as I'd hope. Best, Barkeep49 (talk) 22:17, 30 April 2025 (UTC)[reply]
At one time, the height of tool-making technology was when somebody figured out they could make knives by chipping obsidian into whatever shape they wanted instead of just wandering around looking for old antelope jaws with sharp edges. Time went by and now we're making even better knives out of modern alloy steels worked on CNC machines and laser annealed.
AI is a tool, it's here to stay, and it will continue to improve. We would be foolish to ignore it. And like all tools, the best way to understand AI is to use it. I'm not going to pretend that the current generation of LLMs are good enough yet to replace human editors. But they are good enough that when I'm not finding what I need in the conventional search engines, I turn to ChatGPT and Claude. Sometimes I just get entertaining hallucinations like this one. It would make a pretty decent lead section for a Wikipedia article, except for the minor problem that Brown was a bryologist (mosses and liverworts), not an entomologist. But often enough, the AIs dig up something useful enough to at least be a starting point for further research in a direction I never would have thought to go. RoySmith (talk) 22:48, 30 April 2025 (UTC)[reply]
Saying it shouldn't be used to write here isn't ignoring it, it's discussing a use case. I don't think anyone would object to using it to start research (assuming it doesn't state the opposite of what a source it's citing claims, but Wikipedia has prepared me well for the concept of actually checking the source). CMD (talk) 01:20, 1 May 2025 (UTC)[reply]
I think there are possible positive use cases for LLMs in stuff like search (as an example). But using it in content creation is a red line for me. If nothing else, given how much Wikipedia is used to train LLMs, polluting it with LLM-generated text would lead to model collapse. --Grnrchst (talk) 11:12, 1 May 2025 (UTC)[reply]
I'm not disagreeing with you about using LLMs for content creation, but let's not muddy the argument with worries about our effect on LLM quality. Our job is to produce the best content we can. I assume any machine translations would be marked as such in some human and machine readable way, if for no other reason than our CC-BY-SA licensing requires it. If the model makers aren't smart enough to figure out what to ingest and what not to ingest, they will produce a poor product and the marketplace will reward or penalize them appropriately. Either way, that's not our problem. RoySmith (talk) 11:31, 1 May 2025 (UTC)[reply]
To be clear, I brought up model collapse not because I particularly care about the profitability of AI companies, but because the WMF began their analysis of the current state of the ecosystem by saying: "As the internet continues to change and the use of AI increases, we expect that the knowledge ecosystem will become increasingly polluted with low-quality content, misinformation, and disinformation." They then went on to say this same capacity for pollution "make[s] them particularly well-suited to Wikipedia’s focus on written knowledge." So adding more LLM slop into the mix on Wikipedia will lead to the models getting worse and thus lead to the LLM slop added to Wikipedia getting worse. I'm worried about the quality of what gets added to our encyclopedia and I think encouraging the use of LLM content creation will have a continuously worsening effect on our content due to the issues of model collapse. --Grnrchst (talk) 11:47, 1 May 2025 (UTC)[reply]
CC-BY-SA licensing does not require us to give attribution to entities that cannot hold copyrights. We do voluntarily hold ourselves to that standard for published public-domain works, but not so far for machine translations. -- Tamzin[cetacean needed] (they|xe|🤷) 04:26, 2 May 2025 (UTC)[reply]
This report was according to the byline written by User:CAlbon (WMF) and User:LZia (WMF), who are respectively the WMF Director of Machine Learning and the Head of Research (Director). I've pinged them so that if they want to they can respond and perhaps offer clarification as to what was intended in this passage. Thanks, Cremastra talk 00:46, 1 May 2025 (UTC)[reply]
This is a claim frequently repeated by LLM boosters, and it is literally false. - Folks might want to be aware that this capable of summarizing statement which David claims to be "literally false" also matches e.g. the conclusion of this peer-reviewed academic publication with over 500 citations. It found that LLM summaries are judged to be on par with human written summaries. The "LLM boosters" in this case are a team of researchers from Columbia University and Stanford University.
Now, that doesn't have to mean that every LLM is suitable for every summarization task. That will depend not just on the model's quality but also on the text genre and on the requirements for the summary. I don't doubt that the results of that particular experiment by an Australian government agency that David cites were indeed unsatisfactory. However, based on a glance at the executive summary, it also seems that David is misrepresenting his source as a general verdict on LLMs, something its authors explicitly warn against: Whilst the Gen AI summaries scored lower on all criteria [than the human-written ones authored by the agency's professional staff, which were by no means rated as perfect either], it is important to note the PoC tested the performance of one particular AI model (Llama2-70B) at one point in time. [...] Technology is advancing rapidly in this area. More powerful and accurate models and GenAI solutions are being continually released, with several promising models released during the period of the PoC. It is highly likely that future models will improve performance and accuracy of the results. [...] It is important to note that the results should not be extrapolated more widely. In summary, David's "literally false" accusation is, well, literally false.
LLMs don't summarise text - they shorten it. - Perhaps there are valid debates to be had about the precise definition of the term "summarise", and David is entitled to his feelings in that matter (i.e. what others have less charitably called nitpicking above). However, his claim directly contradicts not only the academic RS mentioned above, but also the very first sentence in the English Wikipedia's article Automatic summarization: Automatic summarization is the process of shortening a set of data computationally [...]. (The "shortening" term that David wants us to believe is incompatible with summarizing was added there almost 8 years ago - presumably not by "LLM boosters" -, at which point the article began Automatic summarization is the process of shortening a text document [...]).
Regards, HaeB (talk) 02:44, 1 May 2025 (UTC)[reply]
It is horrifying to me that the foundation is considering the use of LLMs for content creation, but even more specifically, I'm extremely worried about the use of it for automated translation. Machine translation, whether using an LLM or otherwise, is infamously poor. Even in the best cases where it has the most data, it often misses nuance or translates stuff word-for-word in a way that sacrifices understandability. I've already seen many cases of monolingual people lazily using machine translations to port stuff over to or from languages they don't understand or care to learn, which effectively pollutes Wikipedia with incomprehensible and incorrect bullshit. I can't bring myself to believe the tenet that "We prioritize multilinguality in nuanced ways." Nobody who is multilingual would see LLM translation as a prioritisation of nuanced multilinguality; it is inherently a reinforcement of monolinguality and a monolingual understanding of the nuances of translation. --Grnrchst (talk) 10:58, 1 May 2025 (UTC)[reply]
Hold up here, you're conflating two different topics. Yes, editors shouldn't mindlessly copy paste machine translations of other languages into Wikipedia, and people who do so should be aggressively banned. But no, machine translation is not "infamously poor." Obviously machine translation is worse than a real life bilingual human. But modern day machine learning techniques for translation are 10x better than rules-based systems of 2010, which themselves were 10x better than 1996-era AltaVista Babelfish, which was 10x better than people leafing through pre-Internet phrasebooks. If used responsibly (i.e. as a starting place where Google Translates a foreign-language reference for claims that the editor is confident aren't being lost in translation) it's very helpful, and alarmist claims about machine translation being total garbage will just muddy the valid point. (Or, put another way, the problem with 2025 machine translation isn't that it's terrible. If it was that'd almost be better because then it'd stand out like a sore thumb when someone blindly trusts it. It's that it's good enough that it looks plausible but might be 20% wrong, which is 20% too much.)
Additionally, my understanding is that many readers of non-English languages don't use their language's Wikipedia, they use English Wikipedia translated through Google Translate. SnowFire (talk) 15:33, 1 May 2025 (UTC)[reply]
Agreed. What's more, the Wikimedia Foundation integrated machine translation into its Content Translation tool over a decade ago already (initially pioneered by the Catalan Wikipedia community, who are not exactly known for advocating reinforcement of monolinguality), and it has continued to update and expand its use for many years since (of course only as a tool to support human editors, an aspect that the current announcement also stresses).
I don't know if there are current stats, but it has plausibly been used by many tens of thousands of Wikipedia editors at this point, across many languages, in remarkable contrast to the urgent worries proffered by Grnrchst.
Regards, HaeB (talk) 16:58, 1 May 2025 (UTC)[reply]
As the comments in that Signpost article about the tool say, I think the danger is machine translations being used carelessly. My worry is that promoting the use of these tools will result in editors recklessly overlooking the steps they need to take to use them properly. --Grnrchst (talk) 17:47, 1 May 2025 (UTC)[reply]
My original comment was probably a bit too hyperbolic. My worries about this come from experience seeing this kind of stuff happen first-hand. I was specifically thinking about a recent case of someone using Google Translate to create articles about common topics across several Wikipedias in marginalised languages (one of the things this Wikimedia post said it wanted to encourage), none of which they understood and none of which the machine translator was capable of providing a good translation for (as it left several untranslated words behind).
And obviously I understand that machine translation has improved over the course of 3 decades, I didn't intend to imply anything to the contrary. I agree completely with your comment about the more dangerous thing being a text that's only 20% wrong rather than obviously wrong. --Grnrchst (talk) 17:38, 1 May 2025 (UTC)[reply]
My reaction to this depends entirely on what the end result looks like, and we're really not being given much to go off of here. A lot of the messaging is that it will be used to save time and reduce workload, but that's fluff which tells us nothing about the actual use cases. This type of empty corporate-speak is pervasive throughout the brief. It doesn't even go into detail about what types of AI we're considering. AI can refer to a lot of different systems and methods. The other main point is that it can be used for onboarding new editors. This worries me, because editor recruitment is the most critical and most vulnerable aspect of Wikipedia. Doing it wrong can be an existential threat, and we're already not great at it.
The main use being presented is automating tasks, but we have no way of deciding whether this is helpful or harmful if we don't know what those tasks are. The key distinction in automated activity is acting versus flagging. We already have Cluebot, which acts. It makes edits and changes the appearance of the page. The key to Cluebot is that it's fairly conservative about when it takes action. We should be very strict about when we let AI act, and the obvious line in the sand is going to be non-human content generation. I'm not against having AI help out behind-the-scenes depending on how it's used, but we cannot allow it to add original content in articles or any other reader-facing area. There's far more potential in flagging. Bots that can identify and flag issues for editors to address would be huge. If the WMF can develop an AI program to go through an article, identify likely integrity problems, and list them for editors to check, that would be the single greatest improvement to Wikipedia since it was founded.
A lot of this feels like a solution looking for a problem. I really hope the WMF isn't going to be burning millions of donors' dollars (which the donors had intended for Wikipedia) by developing unhelpful AI programs just to push the foundation's scope creep even further. But there's a lot of potential here too. Thebiguglyalien (talk) 🛸 20:02, 1 May 2025 (UTC)[reply]
  • I agree that much of the text is frustratingly vague and generic, even for a strategy document. That said, WMF has shared more concrete ideas and plans elsewhere, see e.g. last week's Tech News about mw:Edit_check/Peacock_check, or the list of potential AI use cases explored here. In general, may I also suggest following the "Recent research" section in the Signpost (doubling as the m:Research:Newsletter) where we often review AI-related work, also sometimes involving WMF researchers, such as this recent example: "GPT-4 is better at writing edit summaries than human Wikipedia editors". (Be aware though that the WMF research department has published or coauthored many academic papers that never resulted in editor-facing implementations.)
  • The other main point is that it can be used for onboarding new editors. This worries me, because editor recruitment is the most critical and most vulnerable aspect of Wikipedia. Doing it wrong can be an existential threat, and we're already not great at it. - maybe, but so can be doing no onboarding at all (or not enough of it). Or to put it differently: It is easy to fall victim to a nirvana fallacy, where one compares an AI-based improvement to an imaginary wiki paradise full of experienced, competent, friendly, patient and didactically skilled human Wikipedia editors willing to devote hours of their time to guide even the most clueless new user who came here to promote their garage band. But as you indicate, this is not the world we live in.
  • Also a reminder that that line in the sand [regarding] non-human content generation was crossed on English Wikipedia 23 years ago already (with much of the "non-human" content remaining in place for years or even decades). Of course this doesn't mean that it's a good idea to start adding LLM-generated articles now. But it shows that simplistic human vs. non-human narratives are not always helpful in deciding what's the best way to build an encyclopedia.
  • There's far more potential in flagging. Bots that can identify and flag issues for editors to address would be huge. - well we've already had that for almost a decade in the form of ORES. You can go to Special:RecentChanges right now and use it (or its successor models); a minimal sketch of how those scores can be queried follows after this comment. I have reverted tens of thousands of vandalism edits flagged this way. And yes, those older models make lots of mistakes too (WMF publishes their error rates at m:Machine learning models), but they are still eminently useful - fortunately the "omigod AI makes mistakes!!" crowd wasn't as loud when they were introduced around 2016.
  • If the WMF can develop an AI program to go through an article, identify likely integrity problems, and list them for editors to check, that would be the single greatest improvement to Wikipedia since it was founded. - Agreed that this could be extremely useful. This still seems not easy to do well though, from what I've seen in that area of research so far. I think it's safe to say that WMF won't get there for a couple of years, based on its current speed in building production-ready AI-based tools (or even just in deciding what to do with AI - e.g. it appears that the strategy document we're discussing here had originally been due in September 2023 already, eons ago at the current rate of progress in AI). But there are some external academic researchers working on a limited version of this, see m:Research:Wikipedia Inconsistency Detection (from the same lab at Stanford that also came up with SPINACH and STORM). When they attended the SF meetup in March, they were eager for editors to try out their prototype and give feedback.
Regards, HaeB (talk) 07:49, 2 May 2025 (UTC)[reply]
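For concreteness, here is a minimal sketch of the flag-don't-act pattern described above, scoring a batch of revisions for likely damage and listing the high scorers for human review. It assumes the legacy ores.wikimedia.org endpoint and its v3 response shape are still reachable (newer deployments route through Lift Wing instead), and the threshold is an arbitrary assumption:
<syntaxhighlight lang="python">
import requests

ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/"
THRESHOLD = 0.9  # arbitrary assumption: only flag edits the model is very confident about

def flag_suspect_revisions(rev_ids):
    """Return (revision ID, probability) pairs whose 'damaging' score exceeds THRESHOLD.

    This only flags edits for human review; it never reverts anything,
    which is the acting-versus-flagging distinction discussed above.
    """
    resp = requests.get(
        ORES_URL,
        params={"models": "damaging", "revids": "|".join(map(str, rev_ids))},
        timeout=30,
    )
    resp.raise_for_status()
    scores = resp.json()["enwiki"]["scores"]
    flagged = []
    for rev_id, result in scores.items():
        prob = result["damaging"]["score"]["probability"]["true"]
        if prob >= THRESHOLD:
            flagged.append((rev_id, prob))
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

# Hypothetical revision IDs; a patrolling tool would take these from Special:RecentChanges.
print(flag_suspect_revisions([1234567890, 1234567891]))
</syntaxhighlight>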

About a month ago, I ran an extremely WP:BOLD experiment where I took the top 68 articles with {{technical}} tags by pageviews per month, used Gemini 2.5 Pro to generate a paragraph of text to address their tagged sections or entire article, and posted it to their talk pages with full disclosure including source code asking human editors to review and revise the suggestion to address the tag. Objectively the project was a huge success, going by the number of fully human editors who have been addressing over a dozen of these tags so far, amounting to solutions of longstanding requested improvements for over a million readers per year. But the opposition was overwhelming, probably mostly because I started with fifth grade (ages 10-11 years) reading level summaries without any source citations, which is well below the target reading level for STEM articles on Wikipedia. I feel strongly that if I had started with 8th grade reading level summaries with full source citations the outcome would have been very different.

One observation which was clear from the VP/M discussions is that some of our most respected, senior, and knowledgeable editors have very heterodox opinions on both the capabilities and drawbacks of recent LLMs. I am not sure what to do about this issue. When one of the most respected senior editors claims something like "LLMs just predict the next word," without regard to the world modeling in latent space and attention head positioning that accurately making such predictions requires, I just don't know how to respond. However, I think there is one way in which the Foundation's R&D team could help introduce editors to the capabilities of LLMs in a way which wouldn't involve even the mere suggestion of content improvements, but would help one of our most important core pillar workflows for all edits to all articles.

Let's re-imagine ORES away from random forest classifiers of simplistic and easily gamed features, into a full LLM analysis of each watchlisted edit or new page being patrolled for quality, including a full attempt to verify both the existence of offline source citations and the correctness of online sources, as to whether they support the article text after which they are cited. This might require an extra click to save resources, but it might not, for example, with self-hosting by the Foundation or some of the new low- or zero-cost models capable of this task. Let's compare the results to legacy ORES to show what LLMs can do to uphold WP:V. Cramulator (talk) 23:20, 1 May 2025 (UTC)[reply]
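To make the shape of that proposed workflow concrete, here is a rough sketch under stated assumptions: call_llm is a hypothetical stand-in for whatever self-hosted model endpoint might be used, and nothing below reflects an existing WMF tool or plan.
<syntaxhighlight lang="python">
import requests

def fetch_source_text(url):
    """Fetch a cited online source. A real patrolling tool would also need
    HTML text extraction, paywall handling, and archive fallbacks."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text

def call_llm(prompt):
    """Hypothetical stand-in for a self-hosted model endpoint."""
    raise NotImplementedError("plug in whichever model the deployment provides")

def check_claim_against_source(article_text, source_url):
    """Ask the model whether the cited source supports the article text.

    The result is only a flag for a human reviewer, never an automatic edit."""
    excerpt = fetch_source_text(source_url)[:20000]  # crude truncation to fit a context window
    prompt = (
        "Article text:\n" + article_text + "\n\n"
        "Source excerpt:\n" + excerpt + "\n\n"
        "Answer SUPPORTED, NOT SUPPORTED, or UNCLEAR, then give one sentence of reasoning."
    )
    return call_llm(prompt)
</syntaxhighlight>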

The opposition was not just because of the reading level, but because it produced nonsense, which is not unheard of in LLMs. If by "heterodox" you mean "not aligning with what the AI people claim constantly, despite reality flying in the face of their claims", then yes, I guess many editors here are heterodox. "Objectively the project was a huge success": you found and highlighted a real issue, which was good. And you presented a deeply flawed solution. Usually, when your solution, your work, is universally rejected, you don't consider your project "a huge success", at least if you aren't the president of the US. Fram (talk) 07:50, 2 May 2025 (UTC)[reply]
[Chart: Trend of enwiki accounts blocked as LLMs]
One of the summary's statements that you suggested was nonsense turned out to be a pernicious omission in the underlying source in a deeply mathematical section of the Minimum wage article. I would say that almost all of the other complaints including yours were the result of asking for fifth grade reading level summaries. But only about 15% of all the summaries received complaints, while about 7% of them were complimented by editors. Again, I am convinced that starting with 8th grade reading level summaries and including the pertinent source citations would have changed the outcome. That's on me, I fully admit. As for the question of success, again I'm judging on what the human editors who presumably read the suggestions did (none of whom copied the suggestions verbatim into the article) and continue to do. I have always been against AI generated edits in article space. In fact, several days before the experiment I complained that the trend of editors being blocked for the use of AI is extremely troubling. It's still early days and whether that trend persists remains to be seen. Cramulator (talk) 20:25, 3 May 2025 (UTC)[reply]
No, you are wrong. The article stated that higher minimum wage would mean less workers (presumably, but unstated, because more companies wouldn't be able to afford as many people a minimum wage), your AI summary claimed that "If the minimum wage is already high, raising it more could make fewer people want to work" (emphasis mine). Please stop pushing your deeply flawed experiment, please stop misrepresenting the opposition against it or the bad results you produced. Fram (talk) 08:44, 5 May 2025 (UTC)[reply]
The tagged section states, "if 𝑤 ≥ 𝑤∗ [the minimum wage meets or exceeds the efficiency-level wage], any increases in the minimum wage entails a decline in labor market participation and an increase in unemployment." That is based on the cited source's statement that, "if 𝑤 ≥ 𝑤∗, any increase in the minimum wage entails a decline in labor market participation (because Vu decreases) and an increase in unemployment, which necessarily leads to a fall in employment," where Vu is defined as the expected present value of utility while unemployed. The mathematical model implies that once the minimum wage is above the efficiency level, a further rise lowers the expected value of searching, so the participation function shrinks, meaning fewer individuals enter the labor force. Yet what drives that outcome is not that the high wage itself makes employment unattractive; it is that the higher wage reduces firms' vacancy posting, lowers the job‑finding probability, and thus cuts the expected payoff from looking for work. Saying "fewer people want to work" captures the fall in participation, but it obscures the causal channel because it might be misread to mean workers dislike high pay rather than that they are dissuaded from looking for work because they anticipate weak prospects.
In any case, the mathematical model is the actual nonsense, because it assumes firstly that all workers are interchangeable and behave identically, and because it assumes exactly what you found to be such nonsense, that workers are not more motivated to work for greater wages. It further assumes that greater pay will not attract better (more skilled and more motivated) workers, and that greater pay will not reduce turnover. I stand by my statement that the mathematical model in the source is the actual nonsense here. Cramulator (talk) 12:09, 5 May 2025 (UTC)[reply]
I was looking at your 9th grade summary for glycine and it didn't contain any errors. The problem is that the summary was made up of very surface-level information, and the hardest parts of the article to understand weren't in it at all. The editor who added the technical tag said For example, what is an R group? There are too many links. An educated reader should be able to understand it without following links. in their edit summary. R groups or the sentence they were mentioned in were not summarized by the AI.
The sentence in question is "Glycine is integral to the formation of alpha-helices in secondary protein structure due to the "flexibility" caused by such a small R group." A human editor addressing the problem would have realized either a) they don't know enough to explain this or b) this is not true, glycine usually disrupts alpha-helices because of its flexibility. When I asked ChatGPT to explain the sentence at a ninth-grade level, it told me the "R group in glycine is so small, it makes the protein chain more flexible, allowing it to easily form the helical shape", repeating the mistake.
For me this means the AI added no value. It didn't identify the technical parts, didn't correct the mistake and basically just shortened the article and removed technical details, the thing the template specifically says you should not do. Clearly your experiment worked though as I just made an edit to the article that hopefully made it more correct and less technical. Maybe the real artificial intelligence was inside us all along. HansVonStuttgart (talk) 09:03, 2 May 2025 (UTC)[reply]
FYI, you are incorrect about the LLM being incorrect about glycine's effect on protein chain flexibility: Glycine's flexibility is real and significant, enhancing overall protein chain plasticity; that same flexibility, however, makes it poorly suited for the rigid, ordered alpha-helix, where it often acts as a helix breaker. Esculenta (talk) 15:37, 2 May 2025 (UTC)[reply]
The quote was part of a longer explanation where the AI seemed to just dumb down the sentence and claim a lot of flexibility is needed for alpha-helix formation. Of course, I'm not a biochemist, so there may be something I'm not getting here. HansVonStuttgart (talk) 06:22, 3 May 2025 (UTC)[reply]
I suggest that those who add {{technical}} tags are in fact looking for surface-level information about the WP:JARGON that they can't understand. The generated paragraphs were never intended as replacements for the problematic sections and articles, but as suggestions for summary introductions to preface them. Cramulator (talk) 20:59, 3 May 2025 (UTC)[reply]
With all due respect, the opposition was not because of the reading level or citations, but because the content was nonsense, a mixture of vague, misleading, and outright false. This is a common problem with LLMs: they are good for producing material that sounds vaguely plausible to laypeople but is clearly garbage to anyone who knows the topic. –jacobolus (t) 23:04, 3 May 2025 (UTC)[reply]
Vague, absolutely, but again because of the low reading level. While a handful of the suggestions were also characterized as misleading or false (e.g., equating "waste" to "trash") those specific issues were not present in the higher reading level summaries. The point of the exercise was to provide a sounding board for editors who do understand the topic to help clarify the issues tagged in the article. That is the only way the issues raised by HansVonStuttgart above, for example, can ever truly be addressed in a correct manner. The goal was never to put AI generated text into articles, but spur human editors into addressing those longstanding tags. The point of the exercise was to demonstrate a useful and responsible way to use LLMs to help improve the encyclopedia, and I screwed it up by asking for lower reading level summaries than was appropriate. Cramulator (talk) 20:08, 4 May 2025 (UTC)[reply]
I don't understand what point you are trying to make, but in my opinion this exercise was a waste of everyone's time. –jacobolus (t) 22:18, 4 May 2025 (UTC)[reply]
I'm trying to say that over a million readers per year are now being served over a dozen high-pageview articles which have since had WP:JARGON issues addressed by about 15 human editors spurred into action by about 20 hours of work on my part, even though I made a monumentally stupid mistake, and all without any LLM content being added to articles. Can we agree that content suggestions by LLMs on article talk pages do not waste time when they lead to such outcomes? Cramulator (talk) 23:51, 4 May 2025 (UTC)[reply]
The useful part here was poking humans to go look at the articles, and the AI aspect was an entirely arbitrary and irrelevant distraction, which might have been replaced by any other clickbait hook. –jacobolus (t) 00:14, 5 May 2025 (UTC)[reply]
No. They waste time, and run the risk of being taken at face value by lazy or hasty editors. Please don't try this experiment again and finally learn something from the feedback you received. Fram (talk) 08:46, 5 May 2025 (UTC)[reply]
I have repeatedly said that I will not continue the experiment. Cramulator (talk) 12:25, 5 May 2025 (UTC)[reply]

Helping editors share local perspectives or context by automating the translation and adaptation of common topics[12]

Please, no, no, NOOO! Fram (talk) 16:10, 2 May 2025 (UTC)[reply]

See also [13]. Fram (talk) 16:11, 2 May 2025 (UTC)[reply]
The use of machine translation was already discussed above, where another editor had expressed a similarly highly emotional reaction. However, as detailed there, it's something that WMF implemented over a decade ago, and it has since plausibly been used by many tens of thousands of Wikipedia editors. So consider the possibility that your "Please, no, no, NOOO!" reaction is not universally shared among the community. Regards, HaeB (talk) 16:29, 2 May 2025 (UTC)[reply]
There is a reason that enwiki "restricted article creation by the WMF's semi-automatic content translation tool to extended confirmed users" and that, "[i]n addition, integration with machine translation has been disabled for all users": because it produced many, many rubbish pages ("95% of articles created with this tool were unacceptable"), as seen by and cleaned up by the people who "expressed a similarly highly emotional reaction". And note that in my quote, they have added "and adaptation" to it, which is a lot worse still. And why should I care that my "reaction is not universally shared among the community."? Neither is yours; that's why we have a discussion. Preferably with opinions based on facts, though. Fram (talk) 16:47, 2 May 2025 (UTC)[reply]
Oh, I see "From 2011 to 2019 I worked for the Wikimedia Foundation, most recently as a senior data analyst." No surprise there. Fram (talk) 16:48, 2 May 2025 (UTC)[reply]
Oh, you're moving into WP:PA arguments now? I have been an editor since 2003, and have criticized WMF many times before and after working for it, e.g. when reporting about specific activities in the Signpost. I am not, however, someone who is reflexively outraged about everything they do. (Besides, I'm amused about the naive assumption that former WMF employees always defend the organization's current activities, you don't seem to have met many of them.) Regards, HaeB (talk) 17:04, 2 May 2025 (UTC)[reply]
I have met too many of them in similar enwiki discussions and with similar "but look how good it is" blind beliefs (and the Signpost is a rag I avoid at all costs; it's not really an association which improves one's standing or credibility). Anyway, I also provided substantive arguments why your 10 years of happy customers story may not be really convincing. Fram (talk) 17:13, 2 May 2025 (UTC)[reply]
Is it because of our reporting about your case(s)? I wasn't involved with that IIRC, but if you have or had specific complaints about it, you should always feel free to raise them. Regards, HaeB (talk) 17:24, 2 May 2025 (UTC)[reply]
They were raised[14], and that reporting was not only a hack job and a series of BLP violations, but also retribution for an earlier case I started about a Signpost article (and behaviour surrounding it), Wikipedia:Arbitration/Requests/Case/Gamaliel and others. While this (and other things I read at the time) indicated to me that the issues went on for years, it obviously doesn't mean that it has anything to do with you. Anyway, anything about the translation tool which isn't really liked on enwiki? Anything about the new issue that they will create something to automatically adapt topics through AI? Fram (talk) 17:41, 2 May 2025 (UTC)[reply]
And why should I care that my "reaction is not universally shared among the community."? - your "no, no, NOOO!" exclamation sure made it sound like you think that the use of machine translation is such an evidently absurd idea that your reaction should be universally shared. If your point is that such features can come with moderation challenges that must be considered and addressed (e.g., if I'm not misremembering, a feature to discourage direct copypasting of auto-translated text was added to the CX tool long ago), that's a more reasonable discussion to have. But there too our situation here on enwiki will differ from that of many other Wikipedias. Above it seems you were reacting to a blog post and media coverage, but I'm not sure if you read the actual strategy document yet, where this Automating the translation and adaptation of common topics is explicitly framed as something to support the Editors of less represented languages.
And note that in my quote, they have added "and adaptation" to it, which is a lot worse still. How so? I have to say it's not actually clear what they mean by that specifically - as mentioned above, I find the document too vague in many parts. However (assuming you've got around to reading it already), note that the statement comes with the explanatory footnote there: Examples of such common topics include but is not limited to List of articles every Wikipedia should have/Expanded. Would your "no, no, NOOO!"ing apply to a feature that automatically highlights article topics on that list that do not yet have an article in a Wikipedia in such a smaller language (to editors on that Wikipedia), say?
Regards, HaeB (talk) 17:41, 2 May 2025 (UTC)[reply]
My exclamation was my personal feeling about this. What you read into it is your problem. I don't claim anything about how widely my expression is shared or not, and I certainly don't claim anything about other Wikipedias in general, but many have their own set of issues (as we have seen with e.g. the Scots or Greenlandic (? I think?) Wikipedias, automatic translations on smaller Wikipedias made these worse on an unimaginable scale). Your "feature that automatically highlights article topics" is a strawman, as that is clearly not what the blog post is talking about. And as can be seen from that The Verge article, such ill thought out blog posts already tar the reputation of enwiki, even if it wasn't meant to be used on enwiki (which I doubt, judging from previous experiences). Fram (talk) 17:47, 2 May 2025 (UTC)[reply]
About the Greenlandic: Meta closure discussion, a choice quote: "Then Wikimedia launched its own AI translator, which was even worse, and this one produced completely random letter sequences, that often didn't even looked like Greenlandic. " "In bigger projects there are many users, that can spot those articles and they get deleted, but in the Greenlandic Wikipedia I am the only user, who is checking, what is written and edited, and none of the users, who "write" these "articles", cannot even comprehend, what they produce. I have connections to the Greenlandic government, and they would actually see Wikipedia as a threat for the Greenlandic language, directly counteracting official Greenlandic language policies." A ringing endorsement right from a small Wikipedia language version. Fram (talk) 18:12, 2 May 2025 (UTC)[reply]
I would go further and say it is an evidently absurd idea, my own experience with the state of the art in GenAI translation is that it tends to make incredibly basic mistakes, e.g. hallucinating a double negative from a simple single negative when translating between English and Spanish. That's potentially the difference between a good translation and a BLP vio with a red herring citation in a Wikipedia article context, for two of the most represented languages in the world. The large scale issues Fram points out with Scots and Greenlandic WP aren't just illustrative, those wikis have no doubt been consumed into the training data of all existing LLMs, drowning out the sum total of good native text on the internet and forever poisoning future translations. To pretend like the dire state of machine translation is going to somehow improve rapidly in the coming years is AI booster nonsense, completely unevidenced assertions that if we keep feeding more text into more GPUs it will somehow crack the art of translation. REAL_MOUSE_IRL talk 12:13, 3 May 2025 (UTC)[reply]
Hi everyone. Thanks for inviting us to this conversation. I'm one of the authors of the strategy and I’m happy to clarify some of the points from the strategy that you have brought up in this conversation.
The primary thesis of this strategy is that we focus on editors and their needs when developing or using AI. Through this strategy we have made a decision to “use AI in support of editors in targeted ways”. We emphasize the focus on supporting humans one more time in the section where we talk about how we will implement this strategy: “We adopt a human-centered approach. We empower and engage humans, and we prioritize human agency.”.
Everything else that we say in the document is in light of the above. So if we are talking about the use of AI for translation, or use of AI for text summarization, that is all in the context of giving the editors a choice to spend more of their time on what they are uniquely positioned to do: deliberation, discussion, consensus building and judgement calls.
I’d like to share more about the topic of translation. Editors in some of the smaller languages of Wikipedia (as measured by article count) are operating under a significant burden of responsibility. They must balance creating articles on universally understood topics (such as the concept of a circle) with their desire to share their unique local knowledge (such as Trams in Florence) with their language community and the world.
AI is already in use to aid translation on the Wikimedia projects. Moving forward, we hope to further leverage AI-powered translation to give editors the option to translate content more quickly. This will free up some of their limited time to focus on sharing culturally specific insights, if they choose, which can further enrich the encyclopedic knowledge with local and cultural knowledge, something that Wikipedia is uniquely positioned to offer to the world.
Moderation is another area where we think AI is well suited to improve editing workflows in ways that improve the integrity of knowledge on the projects. For example, we see significant opportunities in improving retrieval/discovery options. Consider this scenario: a source has been retracted and an editor wants to find all the instances that the source has been used on Wikipedia (within a given language or considering all languages) to update the related content. It is currently technically very difficult for editors to retrieve a list of all articles that have used a source. This is something that LLMs can do a decent job in. Our aim is to offer the assistive technology that can help editors focus on what they are uniquely positioned to do: determine which source is retracted, if the retraction requires an action on Wikipedia, and if so triggering a request to receive a list of articles that may need to be updated as a result of it. The editor can then decide what action to take on those articles. There are of course many other applications in this space we can support with AI.
I hope I have been able to emphasize our primary thesis of the strategy: use AI in selected areas to support editors, who are still doing the job, with the ability to use more advanced tools. --LZia (WMF) (talk) 20:30, 2 May 2025 (UTC)[reply]
Can I take it that WMF Legal have approved this? Given that the WMF must be assuming responsibility for AI-generated content, it would appear to be rather a departure from their previous assertions regarding contributors assuming responsibility for their own edits, and the WMF thus having no legal responsibility. AndyTheGrump (talk) 21:11, 2 May 2025 (UTC)[reply]
What in the strategy document makes you think that the WMF must be assuming responsibility for AI-generated content? If such content is merely provided as a suggestion to editors who then have to decide whether to publish it as an edit under their own account (which is how the integration of machine translation in the Content Translation tool has worked for the past decade), then it seems pretty clear to me that the responsibility remains with editors, as always. Regards, HaeB (talk) 00:12, 3 May 2025 (UTC)[reply]
I sympathize with the fear that if and when we ever get really good translations, or paraphrasing for readability, or summarization for introductory text, it's a slippery slope that eventually people will simply copy verbatim into articles without proper review. That's a legitimate concern that we need to think about developing firm guardrails against, for example, by flagging edits of verbatim generated content which is included too soon after its production. Cramulator (talk) 20:40, 4 May 2025 (UTC)[reply]
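A minimal sketch of the kind of guardrail mentioned above, assuming the generated suggestion text and its posting time are known to the tool; the similarity threshold and time window are arbitrary assumptions, not an existing mechanism:
<syntaxhighlight lang="python">
from datetime import datetime, timedelta
from difflib import SequenceMatcher

def is_verbatim_too_soon(generated_text, generated_at, added_text, edited_at,
                         similarity_threshold=0.9, window=timedelta(hours=24)):
    """Flag an article edit that closely matches machine-generated suggestion text
    and was saved shortly after that suggestion was produced. Flag only; humans decide."""
    similarity = SequenceMatcher(None, generated_text, added_text).ratio()
    return similarity >= similarity_threshold and (edited_at - generated_at) <= window

# Hypothetical example values:
print(is_verbatim_too_soon(
    "Glycine is the smallest amino acid ...",
    datetime(2025, 5, 1, 12, 0),
    "Glycine is the smallest amino acid ...",
    datetime(2025, 5, 1, 14, 30),
))
</syntaxhighlight>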
I think EN.WP editors should just continue deleting any machine generated material that we can identify as being machine-generated. WMF can propose all the garbage ideas they like; we don't have to actually use them. Simonm223 (talk) 12:57, 5 May 2025 (UTC)[reply]
AI is already in use to aid translation on the Wikimedia projects. Moving forward, we hope to further leverage AI-powered translation to give editors the option to translate content more quickly.
Indeed, as discussed above, it seems important for folks to be aware that AI has long been used for this already, as you also already mentioned in the strategy document itself. However, "more quickly" is extremely vague.
Might it be possible to ask a colleague who is familiar with more concrete details of these plans to weigh in here too? (For context, the Foundation's ongoing work on the Content Translation feature is public; but it's not immediately clear to me which of these open tasks relate to speedups, and what those speedups might consist in.)
Given the anxieties evident in the discussion above, I think moving beyond communicating in vague PR-like terms on this matter would help establish trust and address potential legitimate community concerns.
Regards, HaeB (talk) 21:12, 2 May 2025 (UTC)[reply]
PS: Also, more generally, given that the Foundation is currently soliciting feedback on its 2025-2026 annual plan, could you explain where and how this strategy is reflected there? Given that it will set a high-level direction for the development, hosting, and use of AI in product, infrastructure, and research at WMF at the direct service of the editors during the time between July 1, 2025 to June 30, 2028, I guess that the Product and Technology department's "Contributor Experience (WE1)" section in the 2025/26 annual plan should be one of the relevant ones. But I find it difficult to detect any traces of this strategy there. (E.g., to take the first of the four "prioritised strategy" items that you also highlight above, I don't see anything resembling AI-assisted workflows for moderators and patrollers mentioned among the planned activities for WE1.1, WE1.2 or WE1.3.)
Regards, HaeB (talk) 23:43, 2 May 2025 (UTC)[reply]
It is currently technically very difficult for editors to retrieve a list of all articles that have used a source. This is something that LLMs can do a decent job in. Is it? Are they? We have a whole table on WP:RSP where commonly-discussed sources are listed that links to a list of all pages where each source is used. It's not difficult at all to search for where, e.g., a particular website has been cited on Wikipedia and replace that source (I've been doing that for years), and while it's a bit more of a learning curve, plenty of us are competent enough at regex to use that for more complex source searches and for semi-automated replacement via AutoWikiBrowser. I'm not clear on how LLMs would actually be better at this, other than perhaps spitting out a good regex query.
I also don't really get why it's ok to make a distinction between "articles every language should have" and "local knowledge topics" with regards to machine translation. Why are topics in the former group assumed to be "less nuanced" (whatever that means) and therefore more acceptable to offload to ML? Shouldn't every wiki want the most core articles, the ones most likely to be visited by the most people, to be especially accurate? I also think a very significant part of en.wp's early expansion was due to editors having the opportunity to write these core articles themselves; the drop-off in unique-editor activity is often partly attributed to there just not being many low-hanging fruit left. Wouldn't it be easier to build a larger editor base in other languages if there were more topics available for the average person to write about without needing technical expertise? And why are we presuming editors in other languages would be more interested in writing about material requiring niche local knowledge than they would be in more general topics? This ML angle also assumes that the en.wp version of a core article should be the default template from which versions in other languages ought to be derived, which is a little..... JoelleJay (talk) 22:56, 2 May 2025 (UTC)[reply]
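For anyone who has not tried it, the search described above is already scriptable against the public MediaWiki API with an ordinary insource: query; a minimal sketch (the domain is an arbitrary example, and pagination via sroffset is omitted):
<syntaxhighlight lang="python">
import requests

API = "https://en.wikipedia.org/w/api.php"

def pages_citing(domain, limit=50):
    """List article titles whose wikitext contains the given domain,
    using the same insource: search available from the search box."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": f'insource:"{domain}"',
        "srnamespace": 0,   # article space only
        "srlimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return [hit["title"] for hit in resp.json()["query"]["search"]]

print(pages_citing("example.com"))
</syntaxhighlight>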
I am likewise very confused about the sources part, and specifically about this use case:

Consider this scenario: a source has been retracted and an editor wants to find all the instances that the source has been used on Wikipedia (within a given language or considering all languages) to update the related content. It is currently technically very difficult for editors to retrieve a list of all articles that have used a source. This is something that LLMs can do a decent job in. Our aim is to offer the assistive technology that can help editors focus on what they are uniquely positioned to do: determine which source is retracted, if the retraction requires an action on Wikipedia, and if so triggering a request to receive a list of articles that may need to be updated as a result of it. The editor can then decide what action to take on those articles.

This sounds pretty much like what User:RetractionBot (first launched in 2018) is already doing. This Signpost article from last year has some background on how it works. CCing the Signpost article's author Headbomb and the bot's operators Samwalton9 and Mdann52 in case they would like to shed some light on why it is currently technically very difficult for editors to retrieve a list of all articles that have used a source, and how a LLM-based solution might help in such tasks.
Regards, HaeB (talk) 00:07, 3 May 2025 (UTC)[reply]
@HaeB: In my experience, the main issue with identifying retracted sources has been the vast number of referencing formats that are used across just enwiki, even before I start to look at cross-wiki operation. You could potentially have one article referenced in 4 different places using a PMID, a DOI number, a link to the article directly, or the plaintext citation without any links or identifiers. All of these are distinct references, even though they could all refer to the same article. I've found a dataset that links some of these identifiers together, but I'm not even attempting to mark sources not labelled with a DOI or PMID as retracted, as it's not an easy solution.
I think ML could potentially be a good way to link these citations together (for example, using lookups in the PMID and DOI databases to identify possible duplicate sources, flagging these up and allowing human review). For what it's worth, I don't think a LLM is a good solution to this. ML does present some opportunities to score possible duplicate sources across articles to make sure these are properly tagged, however. There's a discussion across Wikipedia to standardise the usage of CS1 templates before this is a practical reality, however. A LLM won't solve the key issues with a lack of structure here, unless it's specifically trained for the task, and then there's still issues around false outputs. Mdann52 (talk) 11:52, 4 May 2025 (UTC)[reply]
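A rough sketch of the identifier-linking step described above, for the easy cases where a PMID or DOI is present; the NCBI ID Converter endpoint and its response shape are given here from memory, so treat them as assumptions to verify rather than a definitive recipe:
<syntaxhighlight lang="python">
import requests

IDCONV = "https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/"

def normalize_doi(doi):
    """DOIs are case-insensitive; strip common URL prefixes and lowercase
    so the same work cited two different ways compares equal."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://doi.org/", "http://dx.doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi

def doi_for_pmid(pmid):
    """Map a PMID to a DOI so citations given under different identifiers
    can be grouped as the same underlying source."""
    resp = requests.get(IDCONV, params={"ids": str(pmid), "format": "json"}, timeout=30)
    resp.raise_for_status()
    records = resp.json().get("records", [])
    if records and "doi" in records[0]:
        return normalize_doi(records[0]["doi"])
    return None

# Hypothetical identifiers: do these two citation forms point at the same paper?
print(normalize_doi("https://doi.org/10.1000/XYZ123") == doi_for_pmid(12345678))
</syntaxhighlight>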
Agreed, I can see the potential utility of ML processes in this area, but not LLMs. JoelleJay (talk) 17:11, 4 May 2025 (UTC)[reply]
This will free up some of their limited time to focus on sharing culturally specific insights, if they choose, which can further enrich the encyclopedic knowledge with local and cultural knowledge Setting aside the somewhat neocolonialist undertone of this sentence — reminiscent of the old European geographic societies eager to document the "exotisms of the natives" — it's also quite paternalistic and condescending to assume that editors from so-called "smaller languages Wikipedias" (curiously including Italian here?) would naturally want to contribute culturally specific content. Editors typically contribute based on personal interest — whether that's pop culture, politics, football, or anything else — and rightly so. Wikipedia is a volunteer-driven project, not a cultural repository curated on others' behalf.
In my experience, including with contributors from Lusophone African countries, there is often little appetite for producing narrowly defined "culturally specific insights." They edit what they enjoy — as should be expected.
I also share concerns about using AI to pre-fill core articles (from English?) as if other communities have nothing to add to them, or don't have their own nuances on these subjects. Furthermore, as said, this risks discouraging new editors by removing opportunities to create such foundational content themselves, the so called "low hanging fruit". It’s counterproductive and past experiments show it often/generally undermines organic growth within those communities, and may even kill it entirely. Darwin Ahoy! 01:57, 3 May 2025 (UTC)[reply]
Agree with this. Furthermore, is the concept behind this that editors are supposed to turn their chosen language wiki into a specific reflection of their local knowledge? This runs in the opposite direction to the move towards a global NPOV policy, and also runs against the concept of a global Wikipedia. We make efforts to not reflect particular cultural biases here; the WMF should support that. CMD (talk) 02:10, 3 May 2025 (UTC)[reply]
(We have Trams in Florence here too, so do 13 further language wikis.) CMD (talk) 02:37, 3 May 2025 (UTC)[reply]
Well said, Darwin. Good sense. Carlstak (talk) 02:25, 3 May 2025 (UTC)[reply]
it's also quite paternalistic and condescending to assume that editors from so-called "smaller languages Wikipedias" [...] would naturally want to contribute culturally specific content - well, these are strong adjectives. But I think the key criticism here would be to assume - that is, merely claiming something is the case, without empirical evidence.
However, the Foundation nowadays conducts lots of research, user testing and data analysis to inform product decision about new features for editors and readers. So I would hope that this was done here too. And Leila is after all the Foundation's Head of Research, so I'm fairly sure she is especially invested in making sure that multi-year strategy decisions are grounded in research and data.
@LZia (WMF), could you share some pointers to the research or data that statements like They must balance creating articles on universally understood topics [...] with their desire to share their unique local knowledge were based on? The "must balance" seems to posit that every editor possesses these two different motivations and is conflicted between them (as opposed to User:DarwIn's countervailing claim that Editors typically contribute based on personal interest — whether that's pop culture, politics, football, or anything else). And in particular the research and data that informed this rationale in the strategy:

Automating the translation and adaptation of common topics allows editors to enrich the encyclopedic knowledge with cultural and local knowledge and nuances that AI models cannot provide. This allows editors to invest more time in creating content that strengthens Wikipedia as a diverse, global encyclopedia.

This amounts to an empirical prediction: If WMF automates this for common topics, then editors will do more work (investing saved time) on those other topics. But there could also be different mechanisms at work. For example, consider the following alternative possibility:
  • New editors are typically attracted to these smaller Wikipedias by a desire to write about common, general topics in their own language, and only later in their editing career find the confidence and skills to write about local knowledge, where there are fewer sources to aid them.
In that case, the Wikimedia Foundation's proposed editor AI strategy would be clearly detrimental to the sustainability of those smaller editing communities, as it would remove this entry point for new editors. Again, this is just one possible hypothesis and I would guess that before embarking on this three-year path, WMF did research to exclude this possibility. But it would be good to know what that research consisted of.
PS: As discussed above, we still don't know what that "Automating the translation and adaptation of common topics" actually means concretely, but that's a separate question.
Regards, HaeB (talk) 08:25, 3 May 2025 (UTC)[reply]
It certainly wouldn't be "freeing up" their time if they have to spend it clearing up bad AI translations that mangle their language. I'd personally rather the foundation give funding to people-led initiatives to improve the Wikipedia projects for these smaller languages, rather than putting resources towards AI translation. --Grnrchst (talk) 15:34, 4 May 2025 (UTC)[reply]
This seems like a solution in search of a problem. More "how can we jump on the AI hype bandwagon" and less "how can we best support Wikipedians with their current problems." –jacobolus (t) 23:15, 3 May 2025 (UTC)[reply]

It is currently technically very difficult for editors to retrieve a list of all articles that have used a source. This is something that LLMs can do a decent job in.

This is interesting news to me, as I operate a bot that essentially does this for at least some types of source! With my experience using LLMs in my day job, I agree that machine learning could well assist with this, but not an LLM.
The correct answer to this is to agree on a standard citation style, enforce it, and ML could help with that. Given that an LLM cannot verify a source, and cannot search it against external databases (PMID, DOI, or Google Scholar, to name just a few) to catch incorrect titles, authors, etc., I can't see how this would help. Mdann52 (talk) 11:59, 4 May 2025 (UTC)[reply]
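
For what it's worth, a rough sketch of the non-LLM route described above: for identifiers that appear verbatim in wikitext (a DOI, a PMID, an ISBN), the standard MediaWiki search API with the insource: operator can already approximate "which articles use this source". This is only an illustration against the public API, not anything proposed in the WMF strategy; the DOI below is a made-up placeholder, and the approach misses citations that don't quote the identifier.

import requests

# Minimal sketch: list English Wikipedia articles whose wikitext contains a given
# identifier, using the MediaWiki search API and the insource: operator.
# It only catches citations that include the identifier verbatim, so it is an
# approximation, not a general "articles citing this source" service.
API = "https://en.wikipedia.org/w/api.php"
doi = "10.1000/example"  # hypothetical placeholder DOI

params = {
    "action": "query",
    "list": "search",
    "srsearch": f'insource:"{doi}"',
    "srnamespace": 0,   # main (article) namespace only
    "srlimit": 50,
    "format": "json",
}
resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()
for hit in resp.json()["query"]["search"]:
    print(hit["title"])

Matching the messier cases (mangled titles, bare URLs, citations without identifiers) is where machine learning could plausibly help, which is the distinction being drawn above.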
It seems to me that the WMF, along with very many business and political leaders, is asking the wrong question. Rather than "how can we use AI?", it should be "how can we do things better?" The answer to that question may or may not be "by using AI", but it shouldn't be presupposed. Phil Bridger (talk) 13:11, 3 May 2025 (UTC)[reply]
The correct approach to this work is demonstrated by Stanford's STORM and Co-STORM projects: https://github.com/stanford-oval/storm -- the goal being to generate articles which would pass all the quality requirements of Wikipedia, but taking place entirely off-wiki. Their work, which anyone can experiment with at https://storm.genie.stanford.edu/ , shows the capabilities and drawbacks quite clearly. It's clearly not ready for prime time, but there is no question that continued work will improve it. Someday we may have double-blinded tests it can pass, but until then, LLM content should stay out of article space. Cramulator (talk) 20:21, 4 May 2025 (UTC)[reply]

I would have more confidence if LZia pointed to some actual data rather than "Trams in Florence". Perhaps this paper LLMs Are Here But Not Quite There Yet will shed some light on the subject? Carlstak (talk) 13:43, 3 May 2025 (UTC)[reply]

And this one: Findings of the WMT24 General Machine Translation Shared Task: The LLM Era Is Here but MT Is Not Solved Yet. Carlstak (talk) 14:02, 3 May 2025 (UTC)[reply]
Ah, another of these discussions. I'll again bring up both Shit flow diagram and Malacca dilemma, which I used ChatGPT 4.5 in project mode to help create. It is fairly easy to make an LLM, or at least ChatGPT, use only provided sources, which can be uploaded directly. This makes it far easier to avoid hallucination issues. You can also instruct the LLM to provide page numbers with quotes from the sources to verify the information. ChatGPT still has a problem with synth, but that is easily addressed when checking and verifying sources. The method I use is not to have the LLM format references, so that going through and formatting the refs myself gives me a list of what needs checking. It's still a fair amount of work, as I'm still reading all of the sources and checking all the work, and that needs to be done because LLMs can't be trusted to do this on their own, but this is the type of thing we need to know. LLMs aren't going away, and while I understand the objections and concerns of others, that's not going to stop the use of the tools. We can't even begin to get a handle on COI/UPE, flagrant BLP issues, and all manner of other issues, so there's no realistic expectation that we can actually prevent their use. We should therefore know fully what their limitations are, what use cases are reasonable, and what is involved in actually using them constructively. Then we can craft our policies and guidelines around their use from knowledge and experience rather than from gut reactions and experience with their worst uses. ScottishFinnishRadish (talk) 14:08, 3 May 2025 (UTC)[reply]
Useful information. That hatnote at the top of Malacca dilemma, though: "This article may incorporate text from a large language model. It may include hallucinated information or fictitious references" does not inspire confidence in what follows. The title "Shit flow diagram" is a masterpiece of concision and precision.;-) Carlstak (talk) 14:49, 3 May 2025 (UTC)[reply]
Someone placed that tag, and I don't think I should be the one to remove it, although I reached out on their talk page. ScottishFinnishRadish (talk) 14:58, 3 May 2025 (UTC)[reply]
If you used ChatGPT responsibly there then I am afraid that you are in a very small minority. Most people who use LLMs seem to take their word as gospel. Phil Bridger (talk) 16:39, 3 May 2025 (UTC)[reply]
I definitely agree. That's why I think developing and documenting best practices is probably a good idea. ScottishFinnishRadish (talk) 16:50, 3 May 2025 (UTC)[reply]
The other day I tried to get ChatGPT to provide quotes from a pdf which had certain words. It made up the page numbers each time, so in the end no time was saved. I'm surprised that if you're using ChatGPT you don't use it to format references. I suspect that would be quite unusual, it's the main thing I use it for. Every now and then it forgets an instruction but if you tell it off it'll play nice for another couple of weeks. CMD (talk) 17:26, 3 May 2025 (UTC)[reply]
I've been experimenting, so I'm not too concerned about saving time at this point, and I want to make sure I'm checking every sourced statement.
One of the things I find it does very well if given a bunch of sources is throw together an outline based on the most common points found in the sources, with quotes and such. ScottishFinnishRadish (talk) 18:40, 3 May 2025 (UTC)[reply]
"Every now and then it forgets an instruction but if you tell it off it'll play nice for another couple of weeks." Haha. I always phrase my GPT prompts politely, but if it delivers bad results I give it the thumbs down and phrase my requests more sternly. It does help.;-) I used GPT-4 to copy edit the grammar in a few gigabytes' worth of text from some very long WP articles. I checked it with the "show changes" diffs and it performed admirably—found only a few errors it didn't catch, understandably, because of contextual nuance.
I routinely use GPT Scholar, prompted with well-defined instructions, to find academic references for WP articles and it does very well, delivering actual sources with actual authors rather than hallucinated ones (that was a problem with GPT-3.5), and links to the source pages. Carlstak (talk) 19:26, 3 May 2025 (UTC)[reply]
PS:GPT Scholar does occasionally yield references (real ones) that don't actually support the text I've supplied, with the info in the source being merely category-adjacent to what I'm looking for, but the majority have been reliable, usable sources. It's even pointed to journal articles and books that were revelatory to me. Carlstak (talk) 19:39, 3 May 2025 (UTC)[reply]
It does respond to tone differently, very weird tech. I find that on the occasions when I ask it for grammar advice, I take about half the recommendations. Just tried GPT Scholar and it seems to have hallucinated sources, or at least, the Google AI tells me the Journal of Digital Humanities in Asia doesn't exist. CMD (talk) 02:26, 4 May 2025 (UTC)[reply]
I wouldn't ask GPT for grammar advice—I use it to automate repetitive tasks, and it does them very well. I've also used it to clean up code. Despite pontifications to the contrary, using LLMs for these tasks with human curation works fine for me; very explicit prompts are key. Using them gives me more time to research and write content. Carlstak (talk) 13:59, 4 May 2025 (UTC)[reply]
What did you mean by "copy edit the grammar" then? CMD (talk) 14:57, 4 May 2025 (UTC)[reply]
I meant that I wouldn't ask GPT to prescribe grammar rules (think of all the contradictory prescriptive advice from manuals of style, for example, that are part of the scraped internet content they're trained on). I use GPT as an uncomplaining servant to get boring jobs done, but I have to be nice to it.;-)
Dave Winer the developer wrote on X:
"I asked ChatGPT to "roast me and don’t hold back and omg that really hurts. Seems it has been remembering all the hoops I make it jump through, who knew it could harbor so much resentment. Not kidding."
Its reply was truly astonishing. Carlstak (talk) 15:36, 4 May 2025 (UTC)[reply]
Please don't use LLMs to write articles. Yikes. You should leave such experiments in user space and get some kind of explicit community support before polluting the main namespace with them. –jacobolus (t) 23:13, 3 May 2025 (UTC)[reply]
  • I looked at Malacca dilemma to understand what it was. Having read it, it seemed clear that the word "dilemma" is a poor translation, as a dilemma is strictly a difficult choice between alternatives. The article does not discuss this or provide the original Chinese phrase. I did a Google search and didn't find any English-language source which goes into this either. But Google's AI figured out what I was after and provided an excellent overview:

    The original Chinese term for "Malacca dilemma" is 马六甲困境 (mǎ liù jiǎ kùn jìng). This phrase translates directly to "Malacca difficulty" or "Malacca predicament," capturing the core meaning of China's vulnerability to disruptions in energy and trade routes passing through the Strait of Malacca.

    Andrew🐉(talk) 19:17, 4 May 2025 (UTC)[reply]
  • I've just had a look at the same article and found a significant problem in the first few words, which, at the time of writing, are, "The Malacca dilemma refers to...". It does not refer to anything; it is something. This is just reproducing the worst writing in the LLM's training material. Phil Bridger (talk) 20:42, 4 May 2025 (UTC)[reply]
  • I am not one to mindlessly punch at the Foundation, but their "strategy" and many of the responses here fundamentally misunderstand what LLMs are good for and should be used for, and are a very bad idea. LLMs cannot be trusted to accurately reason about things or, very often, even accurately report information, and should not be used for any kind of decision-making, moderation, or content-generation task on Wikipedia. They do not know facts; they are simply trained on plausible-sounding sentences. This can get you pretty far, because they end up regurgitating good facts most of the time, but not for a topic which is not already covered in their training data. Even more complex architectures such as Gemini, which incorporate multi-step transformations that can improve accuracy, are prone to frustrating and persistent hallucinations. This is baked in and inherent to LLMs. Also, they are not good writers - they can write at a junior high level, but are prone to certain types of constructions that make it a dead giveaway that a text was composed using an LLM. Wikipedia is basically a public-access source of human information in an increasingly slop-infested and paywall-blocked internet. Wikipedia has its own problems with POV gatekeepers, hoaxes and inaccuracies, showing the limitations of the wisdom of crowds. Still, it is an important factor in the information environment, and LLMs are a great way to make it much worse. Things like the structured data API, Commons, and Wikidata are a good idea because they help create data paths that can compete with or be an alternative to LLMs. LLMs should be rejected as tools for writing or automating tasks on Wikipedia. One thing that I think LLMs can do reasonably well is take an existing document or source, tell a human what it is about, and answer questions about it. But any text that makes it into articles has to be carefully checked by a human being. Andre🚐 01:50, 4 May 2025 (UTC)[reply]

Regarding the philosophical aspects of using AI tools (not to mention the environmental consequences), Wired published an interview with Andrea Colamedici, the Italian philosopher who released the book Hypnocracy: Trump, Musk, and the New Architecture of Reality, whose purported Chinese author was revealed to be non-existent. He says:

We must keep our curiosity alive while using this tool correctly and teaching it to work how we want it to. It all starts from a crucial distinction: There is information that makes you passive, that erodes your ability to think over time, and there is information that challenges you, that makes you smarter by pushing you beyond your limits. This is how we should use AI: as an interlocutor that helps us think differently. Otherwise, we won't understand that these tools are designed by big tech companies that impose a certain ideology. They choose the data, the connections among it, and, above all, they treat us as customers to be satisfied. If we use AI this way, it will only confirm our biases. We will think we are right, but in reality we will not be thinking; we will be digitally embraced. We can't afford this numbness.

Carlstak (talk) 16:27, 4 May 2025 (UTC)[reply]

An awful lot of confident declarations in this thread about what LLMs absolutely [can|can't] do and what we absolutely [must|mustn't] do with them. Yes, LLMs work surprisingly well for many things; yes, they work surprisingly poorly for many things; yes, there are ethical discussions worth having. Folks, the jury is out on much of this stuff, and they're judging a moving target. Enwiki's got some off-putting [anything AI related] partisanship vibes lately. Maybe we can look forward to some future date when we have a new tool that we can actually evaluate. — Rhododendrites talk \\ 00:55, 5 May 2025 (UTC)[reply]

The type of "content generation" that LLMs are best at so far, that I have seen, is mass SEO spam pages about every imaginable topic, slathered with ads, that contain "information" of highly variable quality (often on a single page) ranging from more or less a mediocre written summary of existing web pages through statements so vague as to be vacuous all the way to outright false nonsense perhaps created by mixing up unrelated topics. Such pages have become so pervasive that web search is now dramatically less useful for finding basic reliable information compared to 15 years ago. People are (quite rightly) wary of the use of LLMs to write Wikipedia pages because this remains one of the few easy to find and relatively reliable (with all of the usual caveats) oases on a web drowning in nonsense. If Wikipedia goes down the same path it would be a tremendous tragedy, and we should collectively do everything we can to prevent it. –jacobolus (t) 01:04, 5 May 2025 (UTC)[reply]
Or we can get a reality check on the WMF plans before they start on a costly years-long attempt to create something dubious again. A Wikipedia AI plan which doesn't even mention the Greenlandic Wikipedia catastrophe, or how they plan to prevent such a problem from recurring or becoming even worse, is not something I trust. Fram (talk) 08:57, 5 May 2025 (UTC)[reply]
At meta:Proposals for closing projects/Closure of Greenlandic Wikipedia, the sole project admin, Kenneth Wehr, blames Google Translate, which is not based on LLMs, but instead uses a faster and less accurate architecture called a "Neural Machine Translation" model.[15] Cramulator (talk) 12:32, 5 May 2025 (UTC)[reply]
I have already quoted them above, and will repeat it here: "Then Wikimedia launched its own AI translator, which was even worse, and this one produced completely random letter sequences, that often didn't even looked like Greenlandic." (emphasis mine, as it seems necessary) It's literally the sentence directly following the first mention of Google Translate by Wehr; it's hard to imagine that you didn't see it. Fram (talk) 12:43, 5 May 2025 (UTC)[reply]
Meta's NLLB-200 translation, which is the only Greenlandic translator in Wikimedia's MinT, is also an NMT model, not an LLM. Apologies for omitting that. Cramulator (talk) 13:24, 5 May 2025 (UTC)[reply]
You are missing the point, which is not the specific technology choices that failed in the past, but that this WMF proposal does not address "how they plan to prevent such a problem from recurring or becoming even worse". –jacobolus (t) 13:30, 5 May 2025 (UTC)[reply]
I'm pretty sure they don't plan to. Cremastra talk 19:33, 5 May 2025 (UTC)[reply]
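
On the NMT-versus-LLM distinction raised in the exchange above: NLLB-200 is a dedicated sequence-to-sequence translation model that is pointed at a fixed source/target language pair, rather than a general-purpose chat model, which is why its failure mode for very low-resource languages (garbled output) looks different from chat-style hallucination. Below is a minimal sketch of calling the publicly released distilled checkpoint via Hugging Face transformers; note that "kal_Latn" as the FLORES-200 code for Kalaallisut is my assumption and should be checked against the NLLB-200 language list, and MinT's own serving setup may well differ.

from transformers import pipeline

# Minimal sketch: NLLB-200 is a seq2seq NMT model, not a chat LLM, so it is
# invoked with explicit source and target language codes rather than a prompt.
# "kal_Latn" (Kalaallisut) is assumed here and should be verified against the
# NLLB-200 / FLORES-200 language list; MinT's production setup may differ.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="kal_Latn",
)

result = translator("The new library opened in the town centre last year.", max_length=200)
print(result[0]["translation_text"])  # output quality for low-resource languages is exactly what this thread questions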

There is AI snake oil out there. Saw this in the NYT: "A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse":

The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why....
For years, companies like OpenAI relied on a simple concept: The more internet data they fed into their A.I. systems, the better those systems would perform. But they used up just about all the English text on the internet, which meant they needed a new way of improving their chatbots.
So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas...
“Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.”

Carlstak (talk) 16:04, 5 May 2025 (UTC)[reply]

Further update on developments in India

Further to our prior update on 10 April 2025, as well as subsequent reporting in the media, we are reaching out to provide a brief update. This specifically concerns the legal proceedings arising from the injunction orders issued by the Delhi High Court (second point in the previous update) that directed the Foundation to take down allegedly defamatory content from an English Wikipedia article titled “Asian News International”.

Following the injunction order dated 2 April 2025 by the Single Judge Bench, and subsequent revised order dated 8 April 2025 passed by the Division Bench, the Foundation filed an appeal before the Supreme Court of India challenging the injunction orders [Civil Appeal No. 5455 of 2025].

On 17 April 2025, the Supreme Court set aside the injunction orders, thereby vacating the directions that required the Foundation to take down the allegedly objectionable content from the referenced article. As the matter remains sub judice, the Foundation is unable to comment further on the ongoing proceedings.

The Foundation remains committed to defending the community's right to access and share neutral, verifiable, and reliably sourced free knowledge. Joe Sutherland (WMF) (talk) 17:53, 2 May 2025 (UTC)[reply]

Thanks. The page Asian News International vs. Wikimedia Foundation has now been unavailable for more than 6 months. Is there some maximum length of time that the WMF will keep this thus, or can the litigator draw this out for years and years without the WMF saying "screw this, it's back online"? Fram (talk) 18:00, 2 May 2025 (UTC)[reply]
Our prior update concerning SLP (Civil) Diary No(s). 2483/2025 relates to the takedown of the English Wikipedia article titled "Asian News International v. Wikimedia Foundation". The proceedings have concluded, and the appeal is currently reserved for judgment by the Supreme Court. Joe Sutherland (WMF) (talk) 18:56, 2 May 2025 (UTC)[reply]
Thanks for the update, Joe. --Grnrchst (talk) 21:03, 4 May 2025 (UTC)[reply]

WMF CEO Maryana Iskander stepping down

See this news report. Iskander will depart "early next year" and the search for a new CEO is underway. "Iskander said her departure is part of an organized succession plan and that she began discussions with the nonprofit's board more than a year ago.... 'What Maryana did over the last four years is bring [the organization] from post-teenage years into young adulthood,' said McKinsey partner Raju Narisetti, the Wikimedia board member who led the search for Iskander and will also spearhead the effort to find her successor." There's also a long blog post / letter from Maryana on metaWiki here. —Ganesha811 (talk) 18:08, 6 May 2025 (UTC)[reply]

Wikimedia Foundation Bulletin 2025 Issue 8


MediaWiki message delivery 20:00, 6 May 2025 (UTC)[reply]

I request confirmation of the copyright status of the Wikipedia puzzle globe, the official Wikipedia logo. My best guess is that it is "Attribution: Nohat, CC-By-SA 3.0", but perhaps other designers get credit too, and it is also unclear to me whether Nohat ever transferred the copyright to the Wikimedia Foundation. If there are other designers, then I am unsure whether their contributions are trivial or whether they merit attribution, and if they merit attribution, then I am unsure of the copyright license.

The Wikipedia logo exists in several variations in Commons:Category:SVG Wikipedia logo (2010).

The official copy is File:Wikipedia-logo-v2-en.svg as noted at foundation:Legal:Wikimedia_trademarks/About_the_official_marks. I think that if we confirmed the copyright there, then that is the single most important place to get it correct.

It seems to be the case that no one ever sorted out the copyright attribution for the logo, because I do not see a discussion, a transfer of copyright, or clarity on who was involved in the redesign. Multiple people contributed to the logo. In the version which the Wikimedia Foundation regards as official, the license says that copyright attribution goes to the Wikimedia Foundation, but if there ever was a record of copyright transfer, then it is not in the file metadata, and there are lots of versions of the logo which give attribution elsewhere. There are some interesting changes in the edit history of the file, but I cannot quickly interpret them, and I thought I would just ask if anyone knew the answer about the copyright.

At Talk:Wikipedia_logo#Copyright_attribution I asked who the copyright holder might be.

The logo itself is from 2003. Wikimedia Commons was established in 2004, but before then people uploaded files in Wikipedia or Meta-Wiki, and then those files got copied into Commons after its establishment. I think upload dates were preserved in the mirroring.

Bluerasberry (talk) 16:07, 7 May 2025 (UTC)[reply]

Does https://diff.wikimedia.org/2014/10/24/wikimedia-logos-have-been-freed/ help? Thincat (talk) 13:29, 8 May 2025 (UTC)[reply]

Article on WMF and UK's Online Safety Act

From BBC. Basically, the WMF is trying to make sure a proper exemption is made for WP so that it doesn't get classified with large social media sites; if it were, it would be required to collect info on editors and take other concerning steps. Masem (t) 13:11, 8 May 2025 (UTC)[reply]