
Wikipedia:Village pump (idea lab)



The idea lab section of the village pump is a place where new ideas or suggestions on general Wikipedia issues can be incubated, for later submission for consensus discussion at Village pump (proposals). Try to be creative and positive when commenting on ideas.

Before commenting, note:

  • This page is not for consensus polling. Stalwart "Oppose" and "Support" comments generally have no place here. Instead, discuss ideas and suggest variations on them.
  • Wondering whether someone already had this idea? Search the archives below, and look through Wikipedia:Perennial proposals.

Discussions are automatically archived after remaining inactive for two weeks.

« Archives, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62

Feature Proposal: Contextual Citations

I'm interested in getting feedback for a feature I've been working on as a hobby. I have working code but it is alpha-quality.

The Problem:

Many people do not trust quotations in the media because they are suspicious that the quote may be taken out of context or fabricated.

Solution:

I've been developing an open-source app that:

 * given a quotation that is attributed with a source URL,
 * looks up the quote's surrounding context using a Python web service and saves the contextual data to a JSON file, and
 * displays that context to the reader, using JavaScript to render contextual popups or expanding blockquotes.
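
To make that flow concrete, here is a minimal sketch of the lookup and save steps, assuming Python; the function names, the context length, and the JSON field names are my own placeholders, not the actual CiteIt API:

```python
import hashlib
import json

import requests  # third-party HTTP library


def fetch_quote_context(quote: str, source_url: str, context_chars: int = 500) -> dict:
    """Fetch the source page and pull out the text before and after the quoted passage."""
    page_text = requests.get(source_url, timeout=30).text
    position = page_text.find(quote)
    if position == -1:
        return {"found": False, "quote": quote, "url": source_url}
    return {
        "found": True,
        "quote": quote,
        "url": source_url,
        "context_before": page_text[max(0, position - context_chars):position],
        "context_after": page_text[position + len(quote):position + len(quote) + context_chars],
    }


def save_context(record: dict) -> str:
    """Save the contextual data to a JSON file keyed by a hash of quote + URL,
    so the JavaScript popup code can look it up later."""
    key = hashlib.sha256((record["quote"] + record["url"]).encode("utf-8")).hexdigest()
    path = f"{key}.json"
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(record, handle, ensure_ascii=False, indent=2)
    return path
```

In practice the service would also need to strip HTML and normalise whitespace before searching for the quoted string; that matching step is where most of the real work would lie.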

Proposal:

 * I propose that Wikipedia explore "upgrading" those quotations that are attributed to a web source to CiteIt-style contextual citations.
Screenshot: Contextual Popup used in Sample Ruth Bader Ginsburg Wikipedia article

Benefits:

The benefit of using Contextual Citations is that readers:

 * learn more about the context of quotes, and
 * gain trust in the integrity of the citation, since they can verify that a quote wasn't fabricated or cherry-picked.

Implementation:

You can view a video and demo on my project website:

 * View Demo

More Information:

I've outlined some of the steps to implementing this on my Wikipedia User page:

 * https://meta.wikimedia.org/wiki/User:Timlangeman/sandbox

Open Source License:

 * All code is published on GitHub and licensed under the open-source MIT License

P.S. I'm new to the Wikipedia culture, so feel free to point out the correct Wikipedia way.

Timlangeman (talk) 03:05, 24 July 2021 (UTC)[reply]

Hi Timlangeman, thank you for your neat idea and having the tech chops to develop the code! From seeing how it works in the demonstration video, I can see this implemented in one of two ways: just as you show it, by making the quote itself clickable to bring up the contextual pop-up box, or by having this feature embedded in the corresponding in-line citation. The question is: would this feature be used often enough by readers to be worth making the main text clickable? Or should we minimize the "noise" in the text by including this feature a bit more discreetly within the reference? Food for thought. Thanks! Al83tito (talk) 05:35, 24 July 2021 (UTC)[reply]

Feedback Ideas: Contextual Citation UI

Hi Al83tito, I'm open to UI suggestions and I'd be happy for others to experiment with UI ideas too.

As far as the amount of link "noise", this can be handled in different ways:

  1. style the link in a visible way
  2. style the link in an unobtrusive way, but still visible on hover
  3. remove the link and move quote inspection functionality into an icon or the footnote

Sample Articles

I don't know if you saw the 8 sample articles that I mocked up.

 * 8 Sample Wikipedia Articles

These should provide good samples for programmers and designers to experiment with.

Alternate UI Designs: Programmers Download Sample Articles

I packaged these files up into a Git repository for anyone that has UI skills and wants to experiment with different UI options.

 * https://github.com/CiteIt/wikipedia-samples

Feedback

If you don't have programming skills, I can create mockups based on your suggestions.

Timlangeman (talk) 14:48, 24 July 2021 (UTC)[reply]

@Timlangeman, you might be interested in mw:New Developers. Whatamidoing (WMF) (talk) 20:30, 26 July 2021 (UTC)[reply]
@Whatamidoing, Thanks for that suggestion. It looks like the python Wikibot may be helpful :-). Timlangeman (talk)

I like the general idea. I do have one issue however... Wikipedia generally doesn't have many direct citations. This is because of the copyright nature of Wikipedia and because it is a tertiary source. We intentionally 'transform' and 'rewrite' points from 1st and 2ndary sources most of the time and then use references. I see that as a bit of a problem for the success of this particular tool. Do you have thoughts on that? —TheDJ (talkcontribs) 09:05, 3 August 2021 (UTC)[reply]

This is a copyright nightmare. Even the image used to demonstrate this ([1]) is probably not acceptable, as it has a way too long quote of copyrighted text. We instruct editors to make their quotes as short as possible (if the text is copyrighted, and if they can't avoid using quotes altogether): to then add a tool that shows much longer quotes simply contradicts this. For public domain sources, fine, but for copyrighted texts, I don't think this will be acceptable. Fram (talk) 09:24, 3 August 2021 (UTC)[reply]

@Fram, I'm fine with changing the length of the context so long as it is long enough to get a proper sense of the meaning. The question I have is whether it is possible to automate the process of setting the context length. I did a little bit of research into copyright law and it seems like the law is fairly subjective, or at least not something that a computer can easily calculate. I don't know any copyright lawyers. Does Wikipedia have access to any copyright lawyers? I've drafted an email to Lila Baily, who is the in-house lawyer for the Internet Archive. I'm interested in hearing ideas on whether a fixed or computable length is feasible or whether each quote has to be handled on a case-by-case basis. Timlangeman (talk) 10:42, 15 August 2021 (UTC)[reply]

Citation Deep Linking with Google Text Fragments

If copyright issues are a barrier to contextual citations, I'm wondering what people think about using Google Text Fragments in linking to citations to help the reader more easily inspect the context of the quote. I've written up a summary aggregating some information about them:

 * Using Google Text Fragments to Link to Specific Text

I know that there are issues with browser support and privacy. At this point, I'm mainly trying to find out what people think about them in principle. Timlangeman (talk) 21:47, 5 August 2021 (UTC)[reply]
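
For illustration, a text-fragment deep link is just an ordinary URL with a `#:~:text=` directive appended, so building one is straightforward. A minimal sketch in Python; the helper name is mine, not part of any existing tool:

```python
from urllib.parse import quote


def text_fragment_link(url: str, quoted_text: str) -> str:
    """Append a text-fragment directive so supporting browsers scroll to and
    highlight the quoted passage; unsupported browsers simply ignore it."""
    return f"{url}#:~:text={quote(quoted_text, safe='')}"


print(text_fragment_link(
    "https://en.wikipedia.org/wiki/Ruth_Bader_Ginsburg",
    "fight for the things that you care about",
))
```

Because unsupported browsers ignore the directive and just load the page normally, the link degrades gracefully, which is part of the appeal.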

@Mike Peel and Andy Mabbett have a talk scheduled for this weekend at Wikimania:2021:Submissions/Automatically maintained citations with Wikidata and Cite Q. They've spent a lot of time thinking about questions related to verifiability. @LWyatt (WMF), don't you have a talk about WikiCite coming up, too? Whatamidoing (WMF) (talk) 18:09, 9 August 2021 (UTC)[reply]
Yep User:Whatamidoing (WMF) - we've got a session on Sunday. LWyatt (WMF) (talk) 14:40, 10 August 2021 (UTC)[reply]

CU policy exception for handling OS requests

Background

Earlier today I stumbled upon a help request from a user asking for suppression of the IP associated with an edit they made while accidentally logged out. The request didn't include a diff of the problematic edit (duh), but since such requests are routinely granted, I thought someone at #revdel would be able to CU the user and find the edit in question. To my surprise, this couldn't be done because our current CU policy precludes such checks.

Idea

I believe that adding an exception to our CU policy, allowing Oversighters who are also CUs to use the CU tool to expedite the handling of incomplete suppression requests, would be an improvement on the current situation, in which the requester is needlessly required to supply the missing information before their request can be actioned.

Global CU policy compliance

The global CU policy allows individual wikis a degree of latitude in handling self-requested checks; I believe that filing an OS request (regardless of its form) that requires user IP information to handle can be regarded as such a request, and so the exception would be in keeping with the global policy. An argument could be made that such a check might reveal more than the user wanted us to know, but this is always the case with self-requested checks and doesn't clash with the long-standing practice of allowing such, globally speaking (enwiki currently disallows self-requested IP checks wholesale).

Is this really necessary though?

I'd imagine that the typical profile of a person who inadvertently reveals their IP while logged out is that of a careless new editor. Such editors are notorious for being difficult to reach when a follow-up is needed, which means their perfectly valid OS requests may end up rejected, or significantly delayed, due to purely bureaucratic hurdles. That sounds to me like a situation that could and should be improved upon, regardless of how often this actually happens.

Mock-up phrasing

My proposed policy amendment would roughly look something like this:

As oversight requests are often time-sensitive, oversighters who are also checkusers are permitted to perform CU checks to expedite the processing of incomplete suppression requests. Information obtained through such checks may be used only for that specific purpose, and must not be shared; it also must not be retained, or even accessed, by the oversighter any longer than absolutely necessary for the handling of the request.

Is this something worth proposing at VPP? I'd appreciate some feedback. 78.28.44.31 (talk) 05:57, 26 July 2021 (UTC)[reply]

Thing is, considering the avg response time of oversight, I don't know whether having to follow up once with the diff of the edit is a big slowdown. The delay here was because the user apparently didn't know oversight could be used for this, but usually editors would, and so they would email into the queue or email an OSer directly, presumably with the diff of the logged out edit. So I suspect this particular case is very uncommon and not worth amending the CU policy over. I'm surprised CU can't be used to actually verify the OS request though, but I suppose in most circumstances it's probably obvious from the context. ProcrastinatingReader (talk) 09:33, 26 July 2021 (UTC)[reply]
It's not the response time of Oversight that's the problem here though, is it? It's the fact there's a needless roadblock in the policy that occasionally stops them from doing their job. Thanks for pointing out request verification as a potential consideration; I hadn't even thought of it! If I take my idea to VPP, it'll need a complete rewrite to accommodate that concern, that's for sure. 78.28.44.31 (talk) 00:05, 17 August 2021 (UTC)[reply]
I can't find a circumstance where material cannot be suppressed because policy somehow gets in the way. The cases such as the one mentioned above are just people not knowing or realising how to request suppression (OS requests should always be private and made with the relevant info, such as an IP in this instance), and we shouldn't change the CU policy to accommodate them. Giraffer (not) 19:28, 7 August 2021 (UTC)[reply]
Quite obviously, any material that oversighters aren't able to locate cannot be suppressed. This occasionally includes material that needs to be suppressed and could be suppressed if its location were known. Here's a real world analogy. You dial 911. You scream "fire." You run out of the building, leaving the phone behind. Should the emergency services have the ability to track your phone call to find out what your address is or not? Sure, in a perfect world, you would've provided it yourself but we don't live in a perfect world, do we? Your house is on fire so clearly we don't. 78.28.44.31 (talk) 00:05, 17 August 2021 (UTC)[reply]
  • This whole thing looks to me very much like a solution without a problem. The purpose of suppressing IP addresses for editors who have edited logged out is to prevent the connection of the IP address to the account from being publicly visible. If an administrator or oversighter can't see that information without CheckUser then it isn't publicly visible. End of story. JBW (talk) 09:15, 17 August 2021 (UTC)[reply]
  • Users have individual styles and quirks in editing. It is often possible to guess the identity of a logged-out user, especially if the logged-out edit is on a page where the user has also edited while logged in, or if the logged-out user refers to previous edits made while logged in. Users inadvertently editing while logged out has been called the "poor man's checkuser." - Donald Albury 17:03, 17 August 2021 (UTC)[reply]
@Donald Albury: I've never before come across the expression "poor man's checkuser" used in that sense. The expression was, however, once in fairly common use to refer to the fact that if an account became blocked from editing, on the contributions page of any IP address that consequently became autoblocked, the administrator link "block" changed to "change block", so that any administrator who had reasons to believe that the account and IP address were linked could see confirmation of the fact. However, the software was changed many years ago to prevent that from happening. JBW (talk) 21:08, 17 August 2021 (UTC)[reply]

Hide offensive images and allow users to show them if they want to see them

Note: Image depicts an undressed, female, spider

There are people who find certain images on Wikipedia offensive or disturbing. It should be possible for those viewers to access the text of an article without being forced to view the images. There is community consensus that offensive or disturbing images shouldn't be censored. At the same time, one fundamental goal of Wikipedia is to make information accessible for everyone. If the graphic nature of certain (e.g. medical) images shocks some users, the information on that page is in effect not accessible to those persons. In those cases, those images work against one of the fundamental goals of Wikipedia. I therefore propose that sensitive images be hidden by default, and can be displayed with a single click. If no consensus can be reached on which images should be considered offensive or disturbing, I propose that all images be hidden – and not loaded – by default. This will have the additional effect of making both mobile browsing for users and the hosting of Wikipedia cheaper.— Preceding unsigned comment added by 2003:df:972b:fc52:bc41:2309:e4d1:9069 (talkcontribs) 07:40, 3 August 2021 (UTC)[reply]

There are many browser extensions you could use to not load images if you really don't want them to load; not displaying every image by default for every reader would be a big disservice. See Help:Options to hide an image for many options available. — xaosflux Talk 10:42, 3 August 2021 (UTC)[reply]
I agree with Xaosflux. A reader who is offended by images on Wikipedia will also be offended by many images elsewhere on the Internet, so it is better to use such a general solution rather than a Wikipedia-specific one. This will also have a bigger effect on the cost of mobile browsing (if a reader is paying by the amount of data downloaded), and neither will have any appreciable effect on the cost of hosting. Phil Bridger (talk) 10:56, 3 August 2021 (UTC)[reply]
This is not a new discussion for us. Different people have different concepts as to what is sensitive. Defining one or more criteria for offensive imagery is not a realistic task for a global community. If we hid or removed everything that anyone was offended by we would offend a bunch of people who aren't offended by human shins or female faces. So it is better that we make Wikipedia freely available to everyone, and those who don't want to see certain things can write and install their own filters, and make their own version of Wikipedia. Wikipedia is openly licensed under terms that allow for such reuse. ϢereSpielChequers 11:09, 3 August 2021 (UTC)[reply]
If we hid everything that offended people, then people would be offended. 🐔 Chicdat  Bawk to me! 11:11, 3 August 2021 (UTC)[reply]
See the policy WP:NOTCENSORED. - Ahunt (talk) 13:07, 3 August 2021 (UTC)[reply]
Hiding all images is a good start, but to absolutely avoid offending readers, we may need to go a step further. Wikipedia is full of textual descriptions of all kinds of offensive things, such as Adolf Hitler, the Rwandan genocide, and Hawaiian pizza. These articles (even without images) could cause the reader to visualize offensive mental images. To show you just how bad this problem is, consider that our article on the human penis contains about 8500 words. If a picture is worth a thousand words, then the text on its own is equivalent to eight or nine potentially offensive images of a penis! In order to protect readers from offense, I propose that we hide all article text by default, and allow the user to click to un-hide it, paragraph by paragraph, to ensure they are definitely not offended. RoxySaunders (talk · contribs) 05:10, 4 August 2021 (UTC)[reply]
I think a few years ago there was a large initiative to solicit the community's opinion on this matter. Remarkably, it was even advertised in banners at the top of pages, if I recall correctly. It was a big deal. I wish I could provide a link to that but I don't recall where to go find it. So this is indeed a very relevant issue, one that was addressed in breadth and depth some time ago. Generally I think the result was to keep leaning against censorship, or even optional hiding/displaying features. Al83tito (talk) 03:34, 6 August 2021 (UTC)[reply]
Probably one of the main discussions on this matter. See Wikipedia:Perennial proposals#Censor offensive images. The specific event you're talking about Al83tito is the m:Image filter referendum from August 2010. — Berrely • TalkContribs 07:16, 6 August 2021 (UTC)[reply]
Technically we could tag files. A simple good-bad tagging is unlikely to be sufficient. We could borrow the MPA film ratings or tag based on content, like being nausea-inducing, NSFW, nudity, violence and religiously offensive. We could add a class to images (e.g. [[File:Example.jpg|thumb|caption here|class=rated_g]]), though this would require changes to various infobox templates, which likely can't pass classes for images. Nannyware and/or a gadget could use these classes to hide undesired image classes or only display those that are rated G/unoffensive. This would very much be opt-in as the classes wouldn't do anything by themselves. I wouldn't oppose this, but it may still prove an uphill battle to get the community to embrace this. If anyone wants to try I might be able to assist, but this isn't something I'd take on myself. — Alexis Jazz (talk or ping me) 12:47, 6 August 2021 (UTC)[reply]
Your idea has already been proposed in the past, and evaluated as unworkable. The US-based movie rating system, for example, can be seen in other cultures as allowing ridiculous amounts of violence while being very prudish with respect to nudity. Category-based tagging might work slightly better, but keep in mind you'd need a lot of categories. "Image contains spiders", for example, to avoid offending arachnophobes. "Image contains visible women" to not offend people from some cultures (but that tagging might offend people in other cultures). And so on. Anomie 03:36, 7 August 2021 (UTC)[reply]
The 2010 report found four general categories covered most situations for most cultures:
  • extreme violence (e.g., photo of a wrestler gouging out another wrestler's eye)
  • sexuality (e.g., photos of sex acts)
  • religious sensibilities (e.g., Depictions of Muhammad)
  • disgusting images (mostly medical conditions; e.g., the lead image at Smallpox draws regular complaints)
With the growth of structured data about Commons images, it may someday be possible to run a script that would suppress most images of spiders (or your least-favorite politician, or whatever you felt like), but there have never been serious proposals to suppress narrow categories (e.g., only images of spiders).
Among the objections to this that I've always found unconvincing: Vandals will add politicians to the "disgusting" list/category, it will be impossible to get consensus about whether some borderline images belong in these groups, and that unsubstantiated rumor Anomie alludes to, about someone who is allegedly very offended by photos of fully clothed women, but who still uses the internet/reads Wikipedia articles regularly. However, even if everyone was enthusiastic about it, doing it effectively would still require manually reviewing and tagging millions of images, so this would require years of work to set up plus maintenance forever. There is no quick or easy fix here. Whatamidoing (WMF) (talk) 18:38, 9 August 2021 (UTC)[reply]
I see a script being possible based on individual user input, where a user who sees an image meeting certain criteria will flag it, and it will then be blocked for anyone who opts into the filter. The list of filtered files will be hosted on an autoconfirmed-protected page, and no categories will actually be added to the images. Over time, the filter list will become very accurate. Of course, this relies on user participation and a lack of conflict over the list. It may also require web hosting to confirm whether a file is on the list, so that users do not have to download very long lists of files. WIKINIGHTS talk 20:47, 9 August 2021 (UTC)[reply]

Political positions bloat in articles of politicians

If you use Wikipedia to stay informed about current politics and elections, you will be used to articles on politicians being filled with sections about their political positions. Such sections are often divided into sub-sections about specific issues and may become very long. It seems that editors will indiscriminately add a politician's statement about their political position as long as there is RS (usually news) to substantiate it. I have noticed this problem in the English-speaking countries of the US, the UK, Canada, and Australia, and it may extend to virtually every democratic country. It also extends beyond government officials, elected or unelected, to political commentators and others who are involved in politics.

Political positions bloat is unencyclopedic, excessive detail, recentism, and cruft. Remember that Wikipedia articles are summaries of their topics, not treatises, not essays, not voter guides. While politicians generate a lot of media coverage (which would be considered routine coverage in terms of our notability guidelines), we do not consider every detail mention-worthy, unless it happens to be a political position. Why would we include a low-profile comment someone makes about, say, marijuana, but not the fact that they own a dog? Neither will be remembered in the long term.

The salience of political issues constantly changes, and politicians will become known for a few accomplishments or ideological opinions. Lists of political positions will not pass the 10-year test, but summaries and generalizations will. We should not include such lists for pre-21st century politicians because we are aware of what they are notable for and what is a side detail. Indeed, we find sections discussing the political positions of dead politicians, though they are not bloated with details of minor political issues. In reality, that is because the Internet and the existence of Wikipedia have enabled the expansion of lists in gradual steps. One may object that we do not know which political positions are relevant. But even without the certainty of scholarship, we should recognize what is noteworthy based on how much attention some position has garnered in RS, just as we recognize which details of personal life are notable and which are not. It is especially useful when a source explicitly states that someone is known for something.

I propose a link-able essay written on this issue (Wikipedia:Political positions, WP:POLPOS as a link?) and hope that we can soon cut down on political views sections. WIKINIGHTS talk 13:19, 9 August 2021 (UTC)[reply]

Idle thought: {{uw-van1}} etc really ought to link to a specific diff when they can. Enterprisey (talk!) 07:46, 10 August 2021 (UTC)[reply]

It makes it easier for others to see what the warning is about. And may make it easier to find inappropriate warnings. However it is too much trouble for warners to do, and it is extremely likely that the recipients know what the warning is about. So Enterprisey, I suggest that you go through all the warnings on users' talk pages and find the diff that led to it. I guess you will find it to be a waste of time. Graeme Bartlett (talk) 12:02, 10 August 2021 (UTC)[reply]
We would need to change the automated tools that place these warnings; asking patrollers to find specific revids would certainly be a waste of time. Enterprisey (talk!) 21:40, 10 August 2021 (UTC)[reply]
There's already an article parameter for the template, but not a diff parameter. I agree it'd be nice if we could get some of the warning tools to add one automatically. A wrinkle is that sometimes the warning will be for multiple edits on a page. Sdkb (talk) 22:17, 10 August 2021 (UTC)[reply]
Yeah, needless to say in that case we'd just go with the current behavior. But I'm pretty sure we could get a decent user experience improvement out of this nevertheless. Enterprisey (talk!) 06:56, 11 August 2021 (UTC)[reply]
Ikr. So much of this website seems to run on... manual paperwork, for things that could easily be automated with structured data. Intralexical (talk) 15:40, 11 August 2021 (UTC)[reply]
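
For what it's worth, the lookup such a tool would need is small. A rough sketch, assuming Python and the MediaWiki action API's prop=revisions filtered with rvuser; the page title, username, and the idea of a "diff parameter" on the template are placeholders for discussion, not an existing implementation:

```python
from typing import Optional

import requests

API = "https://en.wikipedia.org/w/api.php"


def latest_diff_by_user(page_title: str, username: str) -> Optional[str]:
    """Return a Special:Diff link for the most recent revision of the page
    made by the given user, or None if they have no revisions there."""
    params = {
        "action": "query",
        "format": "json",
        "formatversion": "2",
        "prop": "revisions",
        "titles": page_title,
        "rvuser": username,  # only revisions made by this user
        "rvlimit": "1",      # the newest one is enough for a warning diff
        "rvprop": "ids",
    }
    data = requests.get(API, params=params, timeout=30).json()
    revisions = data["query"]["pages"][0].get("revisions") or []
    if not revisions:
        return None
    return f"https://en.wikipedia.org/wiki/Special:Diff/{revisions[0]['revid']}"


# A warning tool could drop this link into a (hypothetical) diff parameter of {{uw-van1}}.
print(latest_diff_by_user("Example", "ExampleUser"))
```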

Bold idea: the Generic Queue Toolkit

Wikipedia has a lot of queues. Two disparate examples: WP:AFC/R and WP:PERM. What if we made them all use the same technology? I have a vision. If you write a JSON page that has the following information:

  • Questions to ask requesters, and request instructions (like the "Before applying for approval" section at Wikipedia:Bots/Requests for approval/Instructions for bot operators)
  • Templates to add to the request (like the toolbox available at PERM)
  • Possible outcomes (templates the responders could use, like those listed at {{BAG Tools}})
  • Whether each request gets a new section, or a new subpage (OK, this one's a bit ambitious - let's only support the "new section" workflow for the first version)
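
For instance, a minimal sketch of what such a configuration might hold, written here in Python and dumped as JSON; every field name and template name is hypothetical, purely to make the shape of the idea concrete:

```python
import json

# Hypothetical configuration for one queue; nothing here is part of an existing toolkit.
queue_config = {
    "name": "Redirect requests",
    "request_page": "Wikipedia:Articles for creation/Redirects",
    "instructions": "Before requesting, check whether the redirect already exists.",
    "questions": [
        {"id": "target", "label": "Target article", "required": True},
        {"id": "redirect", "label": "Proposed redirect title(s)", "required": True},
        {"id": "reason", "label": "Why is this redirect useful?", "required": False},
    ],
    "toolbox_templates": ["Example toolbox"],
    "outcomes": [
        {"id": "done", "template": "Example request done"},
        {"id": "declined", "template": "Example request declined"},
    ],
    "request_format": "new_section",  # the only workflow a first version would support
}

# The toolkit would read this back from a .json page on-wiki to build the form,
# the response script, and the archive settings.
print(json.dumps(queue_config, indent=2, ensure_ascii=False))
```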

You should get:

  • A nice form requesters can fill out (like the WP:DRN form - click "Request dispute resolution" for an example)
  • A page where new requests go (of course)
  • A nice user script to respond to requests (could look like the WP:AFC/R script, or any number of the other request-response scripts we have)
  • Nice archives

Problems this solves:

  • Not having to maintain a new "form script", "response script", and archival process for each darned queue. (This is really big. Most people don't notice this. Being the maintainer for one of these, I really notice it)
  • We can improve the user experience for every queue simultaneously
  • Users don't have to fill out a wikitext form, one of the more user-hostile things we still have
  • Responders don't have to respond by clicking "Edit section" over and over and over

Queues this could apply to, off the top of my head: DRN, BRFA, PERM, AFC/R (probably more)
Anyway, if anyone wants to help out, let me know here. This might be a bit of a heavy lift, but I'm sure it's worth it, and I know a thing or two about graceful and peaceful transitions of Wikipedia processes to newer technology. Enterprisey (talk!) 07:10, 11 August 2021 (UTC)[reply]

@Enterprisey: I could probably help you with this. TheTVExpert (talk) 21:04, 11 August 2021 (UTC)[reply]

Deleted section aggregator?

Are there any tools that can automatically find and display sections that used to be in an article but have since been deleted?

I think this might unfortunately be the most insidious form of vandalism. Disinformation isn't terribly hard to spot, verify, and remove, but sourced material that has been removed can't be seen again unless you go digging through the page's history for some reason. Intralexical (talk) 15:42, 11 August 2021 (UTC)[reply]

@Intralexical: Edit filter 172 automatically tags all edits that remove a section as "section blanking". You can have a look through the filter log or you can sort recent changes to show only tagged edits, see here. 192.76.8.91 (talk) 18:02, 11 August 2021 (UTC)[reply]
@192.76.8.91: Hm. Cool. Clunky, though. I guess it could be useful as a scraping/API endpoint for generating a more robust display. Intralexical (talk) 21:03, 11 August 2021 (UTC)[reply]
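
A rough sketch of what such a scrape could look like, assuming Python and the action API's list=recentchanges with a tag filter; the tag name "section blanking" is taken from the comment above and should be verified against Special:Tags before relying on it:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"


def recent_section_blanking(limit: int = 25) -> list:
    """List recent edits carrying the section-blanking tag (tag name assumed)."""
    params = {
        "action": "query",
        "format": "json",
        "formatversion": "2",
        "list": "recentchanges",
        "rctag": "section blanking",
        "rcprop": "title|ids|user|comment|timestamp",
        "rclimit": str(limit),
    }
    data = requests.get(API, params=params, timeout=30).json()
    return data["query"]["recentchanges"]


for change in recent_section_blanking():
    diff = f"https://en.wikipedia.org/wiki/Special:Diff/{change['revid']}"
    print(change["timestamp"], change["title"], diff)
```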

Possible redesign of Template:AFD help

Hello,

I have been spending some time in the sandbox of Template:AFD help, trying to fix some problems I perceive with it:

  1. Because it's positioned below the "previous AfDs" box, they're designed to appear as the same size. This causes a lot of whitespace, sometimes with both templates.
  2. The "Hide this box" link is surrounded by square brackets. One person on the talk page for the template felt that this suggested it should be a button that actually hides the box; compare with Template:Hidden and Template:Collapse which have action links in square brackets.

Below are some possible redesigns, transcluded from Template:AFD help/testcases:

Possible redesigns of Template:AFD help
Comparisons

Current live code


Sandbox code


Any feedback is appreciated. Also, please feel free to suggest other possible designs. Regards, DesertPipeline (talk) 05:38, 15 August 2021 (UTC)[reply]

Comments (Possible redesign of Template:AFD help)

Comments by Headbomb (Possible redesign of Template:AFD help)

This idea/proposal is fundamentally flawed and is based on several misconceptions.

  1. Firstly, the template doesn't cause "a lot of whitespace". {{AFD help}} is designed to match the width of the box containing previous AFD discussions. It doesn't cause AFD links to 'have a lot of whitespace'; that's caused by the AFD links box itself, which is 33% wide.
  2. Secondly, the proposal entirely ignores that {{AFD help}} comes after the previous AFDs link box, not before. More specifically, that means that all the mockups should look like the examples below, not the way DesertPipeline imagines they would look. This would be applied retroactively to tens of thousands of AFDs (65815 as of writing), borking them up and causing misalignment and a jarring clash in presentation.
  3. Thirdly, and perhaps more importantly, there is no actual problem with the current template. It is perfectly functional, and DesertPipeline's problems with it simply amount to WP:IDONTLIKEIT. There are half a dozen proposed redesigns, all incoherent, changing the colour of the box for no reason, the alignment of the box for no reason, and the width for no good reason, and explicitly causing a mismatch between the two boxes.
Example 1 (Live sandbox, since it changes all the time); Original Wikipedia:Articles for deletion/Daniel Carver (3rd nomination)

The following discussion is an archived debate of the proposed deletion of the article below. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.

The result was no consensus. Sandstein 06:33, 1 July 2018 (UTC)[reply]

Daniel Carver (edit | talk | history | protect | delete | links | watch | logs | views) – (View log · Stats) (Find sources: Google (books · news · scholar · free images · WP refs· FENS · JSTOR · TWL) Doesn't seem to satisfy WP:GNG. Ambrosiaster (talk) 17:56, 1 June 2018 (UTC)[reply]

Note: This discussion has been included in the list of People-related deletion discussions. MT TrainTalk 07:58, 2 June 2018 (UTC)[reply]

Relisted to generate a more thorough discussion and clearer consensus.
Please add new comments below this notice. Thanks, North America1000 07:13, 9 June 2018 (UTC)[reply]
  • Keep The sourcing in the article seems weak and I do not believe it is enough to justify an article in itself. This [2] indicates there was a Nightline interview with him so NEXIST comes into play. If he were simply some Howard Stern Show interviewee I would not think there is enough out there on which to base an article but there is nearly always something reported in RS before Nightline becomes interested. Jbh Talk 14:16, 9 June 2018 (UTC)[reply]
  • Keep I also think the 3 different, reliable sources are enough to keep this article. While it is a stub it's cited, and neutral. SEMMENDINGER (talk) 23:25, 9 June 2018 (UTC)[reply]
  • Delete local area coverage does not show notability. This really fails any reasonable reading of our fringe coverage guidelines. A few local interests stories in newspapers do not overcome the inherent problems of this article.John Pack Lambert (talk) 19:17, 13 June 2018 (UTC)[reply]
Relisted to generate a more thorough discussion and clearer consensus.
Please add new comments below this notice. Thanks, Sandstein 18:28, 16 June 2018 (UTC)[reply]
Relisted to generate a more thorough discussion and clearer consensus.
Please add new comments below this notice. Thanks, Enigmamsg 05:09, 24 June 2018 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.

This is bike-shedding at its finest. Headbomb {t · c · p · b} 06:42, 15 August 2021 (UTC)[reply]

User:Headbomb: I'm confused. Previously, you stated that there was no relation between the previous AfDs box and the help box – that was why you said you didn't think the version with the help box inside the previous AfDs box works. I agree with you. But now you're saying that the changes explicitly cause a mismatch between the two boxes. That is the intent; they aren't related boxes. Putting the help links elsewhere allows us to make the previous AfDs box a dynamic size, avoiding a load of whitespace which would occur if the links inside that box aren't as long as the box itself. DesertPipeline (talk) 07:04, 15 August 2021 (UTC)[reply]
I said there was no hierarchical link between the AFD help box and the previous AFD discussion link box, because you wanted to put the AFD help box inside the previous AFD discussions link box, making it appear as a sub-level of the previous AFD discussions link box. AFD help is not a sub-level of previous AFD discussions, conceptually or practically, which means it is inappropriate to present it as such and is bad design. Changing the width of one box without changing the width of the other box will cause a mismatch in width between the boxes, and that's also bad design. Headbomb {t · c · p · b} 07:07, 15 August 2021 (UTC)[reply]
User:Headbomb: And, likewise, they are unrelated in every other sense. There's no rule saying that the help links have to go after the previous AfDs box. They're two different sets of information for different purposes: One for helping people new to AfD, and one for linking to previous discussions on the article in question.
Changing the width of one box without changing the width of the other box will cause a mismatch in width between the boxes, and that's also bad design.
Currently, both boxes have to be the same size to not look strange. That's also bad design. The help box will never have enough text to fit all of that area; the previous AfDs box might. That's why I believe that the best solution is to move the help box elsewhere. Do you have any other suggestions? DesertPipeline (talk) 07:14, 15 August 2021 (UTC)[reply]
"There's no rule saying that the help links have to go after the previous AfDs box." There is. Namely, the ~65815 previous AFDs, as of writing, where {{AFD help}} is after the previous AFD discussions box. These won't magically change positions if the template is editted. Headbomb {t · c · p · b} 07:17, 15 August 2021 (UTC)[reply]
User:Headbomb: On Wikipedia, we are familiar with retaining things for historical reference. But that doesn't mean we can't change what we're doing; this is an argument of "We've always done it this way; therefore, it should stay this way". It's perfectly fine to leave existing AfDs with the current layout. Only new AfDs would use the new one. Is that an issue in your opinion? DesertPipeline (talk) 07:19, 15 August 2021 (UTC)[reply]
And, like I said to you before You could in theory design something that's used on a go forward basis, but you'd need to redesign the AFD link box, the AFD help box, both should line up and be of equal width, then redesign the AFD workflows to make use of them. And then you'd need an RFC to roll out the changes. You cannot do so simply by editing {{AFD help}}, you need new templates entirely, and then redesign AFD around those new templates. Headbomb {t · c · p · b} 07:21, 15 August 2021 (UTC)[reply]
User:Headbomb: Why do you consider it a requirement for the previous AfDs box and help box to line up? I don't believe there's any way to do that which will fix the whitespace problem. However, if you have any suggestions, please state them. DesertPipeline (talk) 07:23, 15 August 2021 (UTC)[reply]
There is no 'whitespace problem'. This is what things look like now (or at a different zoom level). If it's not an issue for the previous AFD discussion links, it's not an issue for the AFD help box either. Headbomb {t · c · p · b} 07:24, 15 August 2021 (UTC)[reply]
User:Headbomb: It isn't helpful or effective to deny the existence of a clearly-existing problem. At 33% width, the AfD help box has a lot of whitespace on the right. Do you not see this on your monitor? If so, just because it isn't like that for you, it doesn't mean it isn't like that for others. Also, I can't view your image links. If you upload them to Wikipedia, I'll be able to. DesertPipeline (talk) 07:30, 15 August 2021 (UTC)[reply]
No more or less than the previous AFD discussions link box does. It's exactly the same width. And try the links again, they should work now. Headbomb {t · c · p · b} 07:32, 15 August 2021 (UTC)[reply]
User:Headbomb: See the screenshot on the right. Also, when I say "I can't view the images", I mean I can't view the website. Please upload them to Wikipedia so I can view them. DesertPipeline (talk) 07:36, 15 August 2021 (UTC)[reply]
The exact same thing will happen if you go to an AFD discussion with previous AFDs discussion links with the same zoom level. So again, why is this a problem with the AFD help box, but not the previous AFDs discussion links box? Headbomb {t · c · p · b} 07:38, 15 August 2021 (UTC)[reply]
User:Headbomb: It is a problem with the previous AfDs box. Putting the help links somewhere else will allow us to make the previous AfDs box scale to fit the text inside it without having two boxes of different sizes, as it would be with the current help box. DesertPipeline (talk) 07:40, 15 August 2021 (UTC)[reply]

Here are the same screenshots:

Headbomb {t · c · p · b} 07:42, 15 August 2021 (UTC)[reply]

User:Headbomb: Your first screenshot demonstrates that the problem exists. We should try to make it look good on all (or most) displays. DesertPipeline (talk) 07:45, 15 August 2021 (UTC)[reply]
Which, for the at least third time now, you cannot do in a vacuum.

You could in theory design something that's used on a go forward basis, but you'd need to redesign the AFD link box, the AFD help box, both should line up and be of equal width, then redesign the AFD workflows to make use of them. And then you'd need an RFC to roll out the changes. You cannot do so simply by editing {{AFD help}}, you need new templates entirely, and then redesign AFD around those new templates.

Headbomb {t · c · p · b} 07:46, 15 August 2021 (UTC)[reply]
User:Headbomb: We can start from what we have right now; if it requires that a new template be made, then that's fine. Currently it's in the testing stage, and there's no reason to make a new template until we know what we're going to do. Right now I want to see if anyone has other suggestions. Are you saying, though, that the layout I suggest with a messagebox at the top is a bad design? If so, why? DesertPipeline (talk) 07:51, 15 August 2021 (UTC)[reply]
You've got seven million designs going on. Message boxes are overkill for what is effectively "BTW" side information, it makes no sense to put previous AFD discussion links in a message box, and it will break old pages/require a massive effort to roll out (depending on the specifics). Leave the bike shed alone. Headbomb {t · c · p · b} 07:55, 15 August 2021 (UTC)[reply]
User:Headbomb: So if you think the messagebox design isn't a good choice, please suggest an alternative. Otherwise, I'm not really sure how it helps to just keep talking about bike sheds over and over. I recognise you don't want anything to be changed. Please stop telling me over and over. I'm not going to stop talking about it until a conclusion has been reached (either a change which takes into account all requirements is suggested or I have been convinced that it actually doesn't need changing). The "bike shed" thing isn't convincing me that it doesn't need changing. DesertPipeline (talk) 08:02, 15 August 2021 (UTC)[reply]
The current design is working just fine, and saves everyone considerable headaches. The alternative to your proposals is the status quo. Your eyes won't start bleeding because there's more whitespace than you'd like in some situations (i.e. big zoom out levels). Leave the bike shed alone. Headbomb {t · c · p · b} 08:16, 15 August 2021 (UTC)[reply]
User:Headbomb: My zoom level is 100% and my monitor is 1920x1080. It's displaying like this on a standard setup. DesertPipeline (talk) 08:26, 15 August 2021 (UTC)[reply]

Distinguish between block-type with preference gadget

I use a gadget in my preferences that shows blocked users with a strike through their username on talk pages and page histories. However, anything from a decade ago, for example, has quite a lot of strikethroughs. Some of the users are LTAs, trolls, and whatnot, but a lot are either deceased or retired or just abandoned. I was wondering if there could be some kind of difference in a perma-ban block & a deceased block, etc. Also, at ANI a 24-hr block appears the same as an indef. Maybe a perma-ban could have double strike throughs, a deceased block could be a different color, etc. Thoughts? Rgrds. --Bison X (talk) 15:55, 15 August 2021 (UTC)[reply]

How would the script determine any of these things? Headbomb {t · c · p · b} 19:41, 15 August 2021 (UTC)[reply]
The gadget already does distinguish temporary and indefinite blocks. The latter appear paler and are italicized. The gadget even provides a way to specify custom styles in your common.js. Also, the age of the block is available in the tooltip. Nardog (talk) 17:13, 17 August 2021 (UTC)[reply]
Yes, but that's because length of block is something that's easy for a script to determine. How would a script know the reason for the block? Headbomb {t · c · p · b} 03:45, 18 August 2021 (UTC)[reply]
I don't know if scripts can read the block log, but could it search for phrases "retired" or "deceased" versus "ArbCom" or "community"? If not, then, and I have no idea how possible this is, can a choice be added to the block button that allows for an "honorable" or "dishonorable" block? Rgrds. --Bison X (talk) 12:54, 18 August 2021 (UTC)[reply]
It wouldn't be difficult to write a user script that checks the user's talk page for {{retired}}, {{deceased}}, and other templates that imply things about the block. Users with those two templates aren't usually blocked, though, which is another issue. Enterprisey (talk!) 08:25, 20 August 2021 (UTC)[reply]
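
A sketch of that check, written here in Python against the action API rather than as an actual user script (a gadget would do the equivalent in JavaScript); the template names are the two mentioned above, and the matching is deliberately crude:

```python
import re

import requests

API = "https://en.wikipedia.org/w/api.php"
MARKER_TEMPLATES = ("retired", "deceased")


def talk_page_markers(username: str) -> list:
    """Return which marker templates appear in the user's talk page wikitext."""
    params = {
        "action": "query",
        "format": "json",
        "formatversion": "2",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "titles": f"User talk:{username}",
    }
    data = requests.get(API, params=params, timeout=30).json()
    page = data["query"]["pages"][0]
    if page.get("missing"):
        return []
    wikitext = page["revisions"][0]["slots"]["main"]["content"]
    found = []
    for name in MARKER_TEMPLATES:
        # crude match for {{Retired}}, {{retired|...}}, and similar
        if re.search(r"\{\{\s*" + re.escape(name) + r"\s*[|}]", wikitext, re.IGNORECASE):
            found.append(name)
    return found


print(talk_page_markers("Example"))
```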

Is any bot allowed to create Wikipedia articles unassisted?

Apologies if this is the wrong place, willing to move this question if this is the case. Is there any bot currently on the English Wikipedia which creates articles unassisted? And would such a bot be allowed in the future if this isn't the case? By articles I don't mean redirects, or disambiguations. Note that this is not me making a bot request but instead just asking out of curiosity. Cheers, Rubbish computer Ping me or leave a message on my talk page 18:21, 16 August 2021 (UTC)[reply]

See WP:MASSCREATION; bots can only create articles with pre-approval. ProcrastinatingReader (talk) 18:28, 16 August 2021 (UTC)[reply]
I'm not aware of any currently approved article-creation bots, and any request that went to WP:BRFA would require a strong consensus for the task - with the amount of support, advertisement, and participation proportional to the size of the task. A task like "Create 200 articles on this specific bacteria family based on this well-regarded public-domain source" is much more likely to be approved than, for example, doing what ruwikinews is doing with mass-mirroring external sources into their project. — xaosflux Talk 18:32, 16 August 2021 (UTC)[reply]
I believe User:Qbugbot is the most recent instance of a bot approved to create articles; see Wikipedia:Bots/Requests for approval/Qbugbot 2. Plantdrew (talk) 19:07, 17 August 2021 (UTC)[reply]

Thank you both, will read more about it at the links provided. Rubbish computer Ping me or leave a message on my talk page 18:34, 16 August 2021 (UTC)[reply]

Xaosflux I didn't realise ruwikinews did that, sounds like a mess. Rubbish computer Ping me or leave a message on my talk page 18:36, 16 August 2021 (UTC)[reply]

They are copying verbatim professionally written news articles that were placed under a Creative Commons license after the Putin regime forced the closure of this newspaper because it was critical of the regime. This might have unintended results, since anyone, including pro-Putin elements, can now edit those articles, requiring constant vigilance to monitor millions of articles. A read-only archive might have been better. -- GreenC 19:35, 16 August 2021 (UTC)[reply]

@GreenC: sounds like something more aligned with the Internet Archive capabilities. — xaosflux Talk 17:55, 17 August 2021 (UTC)[reply]

New Idea

Hi guys! Please consider my idea. The mobile view doesn't have easy access to the preferences, so I was just wondering if there is a way to change that? Twilight Sparkle 222 (talk) 22:12, 16 August 2021 (UTC)[reply]

@Twilight Sparkle 222: this requires an upstream software change, you can follow and comment on the outstanding request for this feature here: phab:T229818. — xaosflux Talk 18:01, 17 August 2021 (UTC)[reply]

Ability to add pages to watchlist by entering page name

Hello! So I have an idea: being able to add pages to your watchlist by simply putting the name of the article/page into a text box, which would add that article/page to your watchlist, instead of having to go to each individual page and click on the star to add it to the watchlist. Not sure if this is already something that's been mentioned or if it's even possible, because I don't know how to write the code that makes Wikipedia work. Blaze The Wolf | Proud Furry and Wikipedia Editor (talk) 13:53, 20 August 2021 (UTC)[reply]

You can sort of do this at Special:EditWatchlist/raw. CMD (talk) 13:56, 20 August 2021 (UTC)[reply]
It appears to be broken for me or it just doesn't work all that well. Blaze The Wolf | Proud Furry and Wikipedia Editor (talk) 14:53, 20 August 2021 (UTC)[reply]
@Blaze The Wolf, would you please click on https://en.wikipedia.org/wiki/Special:EditWatchlist/raw?safemode=1 and let me know if it looks different from what you see at Special:EditWatchlist/raw? Whatamidoing (WMF) (talk) 16:30, 20 August 2021 (UTC)[reply]
It does. It's what displayed on Special:EditWatchlist/raw for a brief second before it showed a blank page. Blaze The Wolf | Proud Furry and Wikipedia Editor (talk) 16:32, 20 August 2021 (UTC)[reply]
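
Under the hood this is just the API's watch action, so a small script can already do it today. A sketch assuming Python and Pywikibot (the framework mentioned earlier on this page); the title is a placeholder, and Page.watch is used here on the assumption that it behaves as documented:

```python
import pywikibot

site = pywikibot.Site("en", "wikipedia")  # uses credentials from your user-config.py


def watch_by_title(title: str) -> bool:
    """Add a page to the logged-in user's watchlist by name alone."""
    page = pywikibot.Page(site, title)
    return page.watch()  # returns True on success


watch_by_title("Example article title")
```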

What is the Best AI model for Content Moderation on Wikipedia?

Imagine you’ve just spent 27 minutes working on what you earnestly thought would be a helpful edit to your favorite article. You click that bright blue “Publish changes” button for the very first time, and you see your edit go live! Weeee! But 52 seconds later, you refresh the page and discover that your edit has been reverted and wiped off the planet.

An AI system - called ORES - has been contributing to this rapid judgement of hundreds of thousands of editors’ work on Wikipedia. ORES is a Machine Learning (ML) system that automatically predicts edit and article quality to support content moderation and vandalism fighting on Wikipedia. For example, when you go to RecentChanges, you can see whether an edit is flagged as damaging and should be reviewed. This is based on the ORES predictions. RecentChanges even allows you to change the sensitivity of the algorithm to "Very Likely Have Problems (flags fewer edits)" or "May Have Problems (flags more edits)”.
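
For anyone who wants to poke at these predictions directly: ORES exposes its scores over a public HTTP endpoint, so a single request returns the "damaging" probability for a revision. A sketch assuming Python; the revision ID is a placeholder, and the exact response layout should be double-checked against the ORES documentation:

```python
import requests

ORES = "https://ores.wikimedia.org/v3/scores/enwiki/"


def damaging_probability(rev_id: int) -> float:
    """Ask ORES how likely it is that a given enwiki revision is damaging."""
    params = {"models": "damaging", "revids": str(rev_id)}
    data = requests.get(ORES, params=params, timeout=30).json()
    score = data["enwiki"]["scores"][str(rev_id)]["damaging"]["score"]
    return score["probability"]["true"]


print(damaging_probability(123456789))  # placeholder revision ID
```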

In this discussion post, we want to invite you to discuss the following *THREE potential ORES models* -- Among those three models, which one do you think presents the best outcomes and would recommend for the English Wikipedia community to use? Why?

ABOUT US: We are a group of Human–computer interaction researchers at Carnegie Mellon University and we are inviting editors to discuss the trade-offs in AI-supported content moderation systems like ORES; your input here has the potential to enhance the transparency and community agency of the design and deployment of AI-based systems on Wikipedia. We will share the results of the discussion with the ML platform team which is responsible for maintaining the ORES infrastructure. However, the decisions of the discussion are not promised to be implemented. More details are available at our research meta-pages: Facilitating Public Deliberation of Algorithmic Decisions and Applying Value-Sensitive Algorithm Design to ORES.

Model Card One: High Accuracy

  • Performance table
Group / Metrics                             | Accuracy | Damaging Rate | False Positive Rate | False Negative Rate
Overall                                     | 98.5%    | 3.4%          | 0.5%                | 26.3%
Editors registered for more than two months | 99.7%    | 0.2%          | 0.0%                | 61.2%
Editors registered for less than two months | 95.7%    | 10.7%         | 1.8%                | 23.0%
Editors that have not registered            | 94.8%    | 12.7%         | 2.4%                | 22.8%
(Accuracy: percentage of edits that are correctly predicted. Damaging Rate: percentage of edits identified as damaging. False Positive Rate: percentage of good edits falsely identified as damaging. False Negative Rate: percentage of damaging edits falsely identified as good.)
  • Explanation: this model has the highest overall accuracy.

Model Card Two: Fair Treatment

  • Performance table
Group / Metrics                             | Accuracy | Damaging Rate | False Positive Rate | False Negative Rate
Overall                                     | 97.2%    | 1.2%          | 0.1%                | 69.9%
Editors registered for more than two months | 99.6%    | 0.0%          | 0.0%                | 94.0%
Editors registered for less than two months | 91.2%    | 4.4%          | 0.8%                | 68.5%
Editors that have not registered            | 90.7%    | 4.5%          | 0.0%                | 67.2%
(Accuracy: percentage of edits that are correctly predicted. Damaging Rate: percentage of edits identified as damaging. False Positive Rate: percentage of good edits falsely identified as damaging. False Negative Rate: percentage of damaging edits falsely identified as good.)
  • Explanation: Compared to Model One, this model treats experienced editors, newcomers, and anonymous editors more similarly, but it has lower overall accuracy.

Model Card Three: Balanced

  • Performance table
Group / Metrics                             | Accuracy | Damaging Rate | False Positive Rate | False Negative Rate
Overall                                     | 96.1%    | 7.6%          | 4.0%                | 2.4%
Editors registered for more than two months | 99.9%    | 0.4%          | 0.0%                | 17.9%
Editors registered for less than two months | 91.8%    | 19.8%         | 9.1%                | 1.0%
Editors that have not registered            | 82.7%    | 30.8%         | 19.9%               | 0.8%
(Accuracy: percentage of edits that are correctly predicted. Damaging Rate: percentage of edits identified as damaging. False Positive Rate: percentage of good edits falsely identified as damaging. False Negative Rate: percentage of damaging edits falsely identified as good.)
  • Explanation: Compared to Model One and Two, Model Three attempts to achieve a better balance between false positive rate and false negative rate. The false negative rate is the best among the three models. But this model has lower accuracy and higher damaging rate.

If you are not satisfied with any of the models described above, you can try out this interface, pick a model on your own, and share your chosen model card in the discussion by copying and pasting the wikitext offered in the interface.
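
To make the column definitions above concrete, the four numbers in each row come straight from a confusion matrix; a small Python sketch of the arithmetic, with made-up counts purely for illustration:

```python
def model_card_row(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the four metrics used in the tables above from raw counts.

    tp = damaging edits flagged as damaging, fp = good edits flagged as damaging,
    tn = good edits flagged as good,         fn = damaging edits flagged as good.
    """
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,          # edits correctly predicted
        "damaging_rate": (tp + fp) / total,     # edits flagged as damaging
        "false_positive_rate": fp / (fp + tn),  # good edits flagged as damaging
        "false_negative_rate": fn / (fn + tp),  # damaging edits flagged as good
    }


# Illustrative counts only, not taken from any real ORES evaluation.
print(model_card_row(tp=300, fp=50, tn=9500, fn=150))
```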

Bobo.03 (talk) 15:32, 20 August 2021 (UTC)[reply]

Discussion Break

Hi @Bobo.03: I'm sure this has come along since my very early engagement with it. I know you are well aware of ClueBot NG, but I'd like to highlight that although it once accepted a false positive rate of 0.25%, it has been changed to use 0.1% as its threshold. Those thresholds caught 55% and 40% of vandalism respectively (thus 45% and 60% false negatives). That, I think, gives a pretty clear marker that Wikipedians are far more willing to accept it missing something than an unwarranted hit. Unwarranted hits kill off new users and irk experienced users, while many issues missed can be caught by alternate means. I tried to have a fiddle with the interface but couldn't figure out how to make it apply different tolerable false positive rates to different groups. Nosebagbear (talk) 20:43, 20 August 2021 (UTC)[reply]

I would add my opinion that the false positive rates reported for option 3 are way too high for me, and I suspect for most other editors. Phil Bridger (talk) 23:08, 20 August 2021 (UTC)[reply]
I'd like to add that IMO, whatever happens to 'Experienced' editors is pretty irrelevant to me, so that leaves newcomers and anonymous edits. False positive rates above 1% are unacceptable from the outset IMO. So the 'fair treatment' table approach is the most viable one, IMO. Since this only flags, but doesn't revert, I'm OK with a higher false positive rate than ClueBot NG, but it should be sub-1% in any given category, and lower would be even better. Headbomb {t · c · p · b} 02:24, 22 August 2021 (UTC)[reply]

Indicator in watchlist of number of consecutive edits by same editor

I have come across a situation when checking edits in my watchlist. I see, for example, someone's last minor edit and then I don't bother and keep going through the list. But behind that edit in the article's history there may be a number of consecutive edits that I would like to take a look at, but I didn't because I thought it was a single minor edit the editor made. As a solution, an idea is to add the number of consecutive edits next to the number indicating the size of the edit in bytes. I think this would help editors be more engaged in their patrolling by not missing consecutive edits that may be of interest to them, which are oftentimes overlooked because in the watchlist one assumes there was only one little edit made. It would also help save time by giving the editor some info that currently can only be obtained by clicking through to the page history, namely the number of consecutive edits by an editor. Thinker78 (talk) 00:08, 22 August 2021 (UTC)[reply]

Add redirects to Commons talk pages for all images

Sometimes when an image becomes the subject of discussion, discussion takes place both on en-Wikipedia and on Commons. However, most images are hosted on Commons, and you can't directly edit or replace the image on en-Wikipedia, so the image talk page on en-Wikipedia probably shouldn't be used unless the discussion relates only and specifically to en-Wikipedia.

I propose that soft redirects to Commons talk pages on all pictures be added automatically.
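
Mechanically this would be easy to bot: for each local file talk page whose file lives on Commons, create a one-line soft redirect. A sketch assuming Python/Pywikibot; whether {{Soft redirect}} is the right wrapper, and how the eligible files would be selected, are exactly the details that would need discussion first:

```python
import pywikibot

site = pywikibot.Site("en", "wikipedia")


def add_commons_talk_redirect(file_title: str) -> None:
    """Create a soft redirect from the local file talk page to Commons,
    but only if no local talk page exists yet."""
    talk = pywikibot.Page(site, f"File talk:{file_title}")
    if talk.exists():
        return  # never overwrite an existing local discussion
    talk.text = f"{{{{Soft redirect|c:File talk:{file_title}}}}}"
    talk.save(summary="Soft redirect to the Commons talk page for this file")


add_commons_talk_redirect("117th United States Senate.svg")
```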

An alternative is to add tags like Discussed on sister project or talk at enwp to en-Wikipedia file discussion pages, like is used here: https://commons.wikimedia.org/wiki/File_talk:RGB_3bits_palette_sample_image.png but I think that tag is specific to Commons.

(Why am I proposing this? I recently updated the seating diagram of the US Senate, and found it cumbersome getting consensus both at Talk:United States Senate and c:File Talk:117th United States Senate.svg. I ended up adding a soft redirect at File Talk:117th United States Senate.svg lest discussion begin in a third location.)

update: please discuss at MediaWiki talk:Newarticletext instead!

Egroeg5 (talk) 02:50, 22 August 2021 (UTC)[reply]

Transcluding categories from templates

Was there a previous discussion on whether categories should or should not be transcluded from templates? For example, {{Kerala State Award for Best Actress}} could potentially transclude Category:Kerala State Film Award winners onto the articles. That would make categorization of the articles a little easier, by removing the manual need to add cats to articles. Surely, it would also put the Kerala State Film Award for Best Actress article itself into the category. To counter these unwanted inclusions, a |cat=no parameter could be introduced to templates to allow not transcluding the cats. I updated {{NSE}} and {{BSE}} to include the respective cats, and am wondering if I acted too quickly. Thoughts around these? -- DaxServer (talk) 13:00, 22 August 2021 (UTC)[reply]

I had the idea from {{Infobox film}} -- DaxServer (talk) 13:09, 22 August 2021 (UTC)[reply]
@DaxServer: Wikipedia:Categorization#Categorization using templates says: "it is recommended that articles not be placed in ordinary content categories using templates in this way". PrimeHunter (talk) 13:10, 22 August 2021 (UTC)[reply]
@PrimeHunter: Should I rollback the BSE and NSE edits and add the cats back to the articles? -- DaxServer (talk) 13:36, 22 August 2021 (UTC)[reply]