Wikipedia:Bot requests
Revision as of 21:10, 5 November 2016
This page has a backlog that requires the attention of willing editors. Please remove this notice when the backlog is cleared.
Commonly Requested Bots
This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).
You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.
Before making a request, please see the list of frequently denied bots, either because they are too complicated to program, or do not have consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) is added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).
- Alternatives to bot requests
- WP:AWBREQ, for simple tasks that involve a handful of articles and/or only need to be done once (e.g. adding a category to a few articles).
- WP:URLREQ, for tasks involving changing or updating URLs to prevent link rot (specialized bots deal with this).
- WP:USURPREQ, for reporting that a domain has been usurped, e.g. |url-status=usurped
- WP:SQLREQ, for tasks which might be solved with an SQL query (e.g. compiling a list of articles according to certain criteria).
- WP:TEMPREQ, to request a new template written in wiki code or Lua.
- WP:SCRIPTREQ, to request a new user script. Many useful scripts already exist, see Wikipedia:User scripts/List.
- WP:CITEBOTREQ, to request a new feature for WP:Citation bot, a user-initiated bot that fixes citations.
Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).
ReminderBot
I request an on-wiki bot (or similar) to remind editors about tasks: "Remind me in N days about A", etc. A talk page message or any other kind of reminder is okay. --Tito Dutta (talk) 17:09, 9 February 2016 (UTC)
- See previous discussions at Wikipedia:Village pump (technical)/Archive 143#Reminderbot? and Wikipedia:Bot requests/Archive 37#Reminder bot. It needs more definition as to how exactly it should work. Anomie⚔ 17:22, 9 February 2016 (UTC)
- This may work in the following way:
- a) a user adds tasks to a subpage such as User:Titodutta/Reminder in this format: {{Remind me|3 days}}. The bot then posts a reminder on the user's talk page.
- b) Anomie, in a discussion one may tag something like this: {{Ping|RemindBot|3 days}}.
Please tell me your views and opinion. --Tito Dutta (talk) 18:31, 10 February 2016 (UTC)
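A minimal sketch of option (a), assuming the subpage layout and template name above; the message text is illustrative, and counting from the subpage's last-edit time is a simplifying assumption:

```python
# Sketch: scan a user's /Reminder subpage for {{Remind me|N days}} and post
# on their talk page once a reminder is due.
import re
from datetime import timedelta

import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def check_reminders(username):
    sub = pywikibot.Page(site, 'User:%s/Reminder' % username)
    if not sub.exists():
        return
    placed = sub.latest_revision.timestamp  # when the subpage was last edited
    for m in re.finditer(r'\{\{\s*[Rr]emind me\s*\|\s*(\d+)\s*days?\s*\}\}',
                         sub.text):
        due = placed + timedelta(days=int(m.group(1)))
        if pywikibot.Timestamp.utcnow() >= due:
            talk = pywikibot.Page(site, 'User talk:%s' % username)
            talk.text += ('\n\n== Reminder ==\nYou asked to be reminded after '
                          '%s days. ~~~~' % m.group(1))
            talk.save(summary='Bot: delivering requested reminder')
```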
- Outside of a user subpage, how will the bot know who to remind - i.e. how can it be done so that other editors aren't given reminders, either accidentally or maliciously? - Evad37 [talk] 22:40, 10 February 2016 (UTC)
- I don't know if a bot can do it. {{ping}} manages to do this right. When you get a ping, the notification tells you who it is from, so we can see that it keeps track somehow (signature?). I realize that ping is deeper into MW than a bot, but personally, I wouldn't use a reminder system that requires me to maintain a separate page. {{ping}} is useful exactly because you can do it in context and inline. Before ping, you could just manually leave a note at someone's page but the benefits of ping are clear to everyone. I draw the same parallels between a manual reminder system and the proposed {{remind}}. Regards, Orange Suede Sofa (talk) 22:49, 10 February 2016 (UTC)
- Yes, being able to leave reminders on any page will make it more useful – but how can it be done in a way that isn't open for abuse? - Evad37 [talk] 23:23, 10 February 2016 (UTC)
- Maybe this is a better way to think about it: A reminder could be little more than a ping to oneself after a delayed period of time. Ping doesn't suffer from forgery issues (you can't fake a ping from someone else) and reminders could be restricted to ping only oneself (so that you can't spam a bunch of people with reminders). But as I allude to above, ping is part of mediawiki so I imagine that it has special ways of accomplishing this that a bot can't. I think that this discussion is becoming unfortunately fragmented because this is a bot-focused board. I think I was asked to join the discussion here because I previously proposed this on WP:VP/T and was eventually pointed to meta. Regards, Orange Suede Sofa (talk) 03:09, 11 February 2016 (UTC)
- Agree; this is a potentially useful idea (although outside reminder software can always suffice), and might make sense as a MediaWiki extension, but if we did it by bot it would end up being a strange hack that would probably have other issues. — Earwig talk 03:12, 11 February 2016 (UTC)
- It would be great if we have this. User:Anomie, any comment/question? --Tito Dutta (talk) 23:48, 17 February 2016 (UTC)
- How would a bot go about finding new reminder requests in the most efficient way? The Transhumanist 01:11, 18 February 2016 (UTC)
- The Transhumanist, what if we pinged the bot instead? So, for instance, I could say {{u|ReminderBot}} at the end of something, and the bot would be pinged and store the ping in a database. Later on, the bot could leave a message on my talkpage mentioning the original page I left the ping in. Enterprisey (talk!) (formerly APerson) 04:10, 20 June 2016 (UTC)
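A sketch of that ping-based variant, assuming the bot account is logged in: it polls the standard Echo notification API (meta=notifications) for unread mentions and records where each one happened. Field names follow Echo's JSON output; the storage step is left out.

```python
import requests

API = 'https://en.wikipedia.org/w/api.php'

def unread_mentions(session):
    # `session` must carry the bot's login cookies; anonymous
    # requests have no notifications to read.
    r = session.get(API, params={
        'action': 'query',
        'meta': 'notifications',
        'notfilter': '!read',
        'format': 'json',
    })
    notes = r.json()['query']['notifications']['list']
    # Keep the page the mention happened on, and who made it.
    return [(n['title']['full'], n['agent']['name'])
            for n in notes if n['type'] == 'mention']
```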
- Agree this would be badass. I sometimes forget in-progress article or template work for years, after getting distracted by something else. — SMcCandlish ☺ ☏ ¢ ≽ʌⱷ҅ᴥⱷʌ≼ 19:02, 19 February 2016 (UTC)
- I love this idea. I think the obvious implementation of this would be to use a specialized template where the editor who places the template receives a talk page message reminding them after the specified number of days/weeks, etc. The template could have a parameter such as "processed" that's set to "yes" after the bot has processed the request. A tracking category of all transclusions without the parameter set to the appropriate value would be an efficient method of searching. ~ RobTalk 02:01, 31 March 2016 (UTC)
- @BU Rob13: Am going to code this with Python later on. PhilrocMy contribs 15:02, 19 April 2016 (UTC)
- @BU Rob13: By the way, would you want the user to input the thing they wanted to be reminded about too? PhilrocMy contribs 15:02, 19 April 2016 (UTC)
- Philroc, I'm not Rob, but yeah, I think that would be a great feature to have. APerson (talk!) 02:31, 3 May 2016 (UTC)
- For the record, I've started working on this - at the moment, I'm waiting for this Pywikibot patch to go through, which'll let me access notifications. Enterprisey (talk!) (formerly APerson) 19:59, 23 June 2016 (UTC)
- Patch went through, so I can start working on this now. Enterprisey (talk!) (formerly APerson) 03:35, 29 June 2016 (UTC)
- Gotta keep this thread alive! Unbelievably, Pywikibot had another bug, so I'm waiting for this other one to go through. Enterprisey (talk!) (formerly APerson) 00:56, 2 July 2016 (UTC)
- Status update: Coding... (code available at https://github.com/APerson241/RemindMeBot) Enterprisey (talk!) (formerly APerson) 04:53, 5 July 2016 (UTC)
- Status update 2: BRFA filed. Requesting comments from Tito Dutta, Evad37, SMcCandlish, and Philroc. Enterprisey (talk!) (formerly APerson) 04:30, 7 July 2016 (UTC)
- @Enterprisey: What happened? I was waiting for it to go live and you......never tried it! Can we have another BRFA filed and have it go live soon! Please, {{Alarm Clock}} is one useless bit of..... Don't you agree Xaosflux VarunFEB2003 I am Online 14:21, 21 August 2016 (UTC)
- The BRFA expired as there was no action, however it may be reactivated in the future if the operator wishes. — xaosflux Talk 14:27, 21 August 2016 (UTC)
- I was having a few issues with the Echo API. I'll continue privately testing (testwiki, of course) and if it starts looking promising, I'll reopen the BRFA. Enterprisey (talk!) (formerly APerson) 18:22, 21 August 2016 (UTC)
- Great! VarunFEB2003 I am Offline 14:37, 22 August 2016 (UTC)
- @Enterprisey: How is this goin? The Transhumanist 02:10, 12 September 2016 (UTC)
- To be honest, I haven't looked at this since my last response to Varun. I've been a bit busy IRL, and I don't exactly have extravagant amounts of time to devote to my projects here. I'm still thinking about this, though. On Phab, someone's proposed this problem as a potential Outreachy/mentorship thing, which I fully support - if something comes of that, we won't have to worry about this bot-based solution any more. Until then, however, I'll keep working. Enterprisey (talk!) 02:12, 12 September 2016 (UTC)
Automatic change of typographical quotation marks to typewriter quotation marks
Could a bot be written, or could a task be added to an existing bot, to automatically change typographical ("curly") quotation marks to typewriter ("straight") quotation marks per the MoS? Chickadee46 (talk|contribs) 00:16, 15 June 2016 (UTC)
- I think this is done by AWB already. In citations AWB does it for sure. -- Magioladitis (talk) 09:25, 18 June 2016 (UTC)
- Potentially this could be done, but is it really that big an issue that it needs fixing? It seems like a very minor change that doesn't have any real effect at all on the encyclopedia. Omni Flames (talk) 11:14, 30 June 2016 (UTC)
- This is a "general fix" that can be done while other editing is being done. All the best: Rich Farmbrough, 16:10, 13 August 2016 (UTC).
- Magioladitis, AWB may be doing it but I don't feel it's keeping up. Of my last 500 edits, 15% of those pages had curlies, and I similarly found 7 pages with curlies in a sample of 50 random pages. So that's what, potentially 700k articles affected? Omni Flames, one big problem is that not all browsers and search engines treat straight and curly quotes and apostrophes the same, so that a search for Alzheimer's disease will fail to find Alzheimer’s disease. Also, curly quotes don't render properly on all platforms, and can't be easily typed on many platforms. If content is to be easily accessible and open for reuse, we should be able to move it cross-platform without no-such-character glyphs appearing. There was a huge MOS discussion on this in 2005 (archived here and here) which is occasionally revisited, with consensus always supporting straight quotes and apostrophes, as does MOS:CURLY. If it's really that many articles, that might break the record for bot edits to fix, so perhaps not practical. What about editor awareness? Would it be feasible to set up a bot to check recent edits for curlies and, when detected, post a notice on that editor's talk page (similar to DPL bot when an editor links to a disambiguation page) alerting them and linking them to a page with instructions for disabling curlies in popular software packages? If we can head off new curlies working into the system, then AWB editors may have a better chance of eventually purging the existing ones. Thoughts? (BTW: I'm inexperienced with bots but would happily volunteer my time to help.) - Reidgreg (talk) 22:38, 25 September 2016 (UTC)
- If I have time I'll try to see if this is a good estimate. All the best: Rich Farmbrough, 18:51, 19 October 2016 (UTC).
- @Reidgreg: 15% of mainspace is ~700k articles, not 4m. And whether we use it outside mainspace is mostly irrelevant, since curlies don't usually break a page. --Izno (talk) 22:46, 25 September 2016 (UTC)
Reidgreg if there is consensus for such a change I can perform the task. -- Magioladitis (talk) 22:45, 25 September 2016 (UTC)
- Thanks for the quick replies! (And thanks for correcting me, Izno. I see you took part in the last discussion at MoS, revisiting curly quotes.) I'll have to check on consensus, I replied quickly when I noticed this because I didn't want it to be archived. The proposals I've found at MoS have been the other way around, to see if curlies could be permitted or recommended and the decision has always been "no". Will have to see if there is support for a mass change of curlies to straights, or possibly for MoS reminders. - Reidgreg (talk) 17:18, 26 September 2016 (UTC)
While there is general support for MOS:CURLY, there is a feeling that curlies tend to be from copy-and-paste edits and may be a valuable indicators of copyright violation. So there's a preference for human editors to examine such instances (and possible copyvio) rather than a bot making the changes. - Reidgreg (talk) 16:53, 7 October 2016 (UTC)
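The mechanical half of such a task is small either way; a minimal sketch of detection (per the preference above for human review), with the replacement kept separate in case consensus ever allows it. The regexes and function names are illustrative:

```python
import re

CURLY = re.compile('[\u2018\u2019\u201c\u201d]')  # matches ‘ ’ “ ”

def introduces_curlies(old_text, new_text):
    """True if an edit added curly quotes that were not there before."""
    return bool(CURLY.search(new_text)) and not CURLY.search(old_text)

def straighten(text):
    """The straight-quote replacement itself, per MOS:CURLY."""
    return (text.replace('\u2018', "'").replace('\u2019', "'")
                .replace('\u201c', '"').replace('\u201d', '"'))
```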
@Chickadee46: @Omni Flames: @Rich Farmbrough: @Magioladitis: @Izno: Hi, I'm Philroc. If you know me already, great. I've been over at the talk page for the MOS for a while talking about what the bot this discussion is about should do. I've decided that it will put in {{Copypaste}} with a parameter which changes the notice to talk about how the bot found curlies and changed them, and it will also say that curlies are the sign that what the template says happened. See its sandbox and its testcases. PhilrocMy contribs 13:49, 19 October 2016 (UTC)
- @Philroc: I appreciate the enthusiasm and I'm sorry if I'm being blunt, but from the MOS discussion there is no consensus for having automated changes of curly to straight quotes & apostrophes, nor for the automatic placing of this template. I'm in the middle of another project right now but hope to explore this further next week. - Reidgreg (talk) 14:46, 19 October 2016 (UTC)
- @Reidgreg: We can get consensus from this discussion, can't we?
- @Reidgreg: Wait, we're talking about the number of articles affected by curlies on the MoS. After we're done with that, we will talk about consensus. PhilrocMy contribs 23:18, 19 October 2016 (UTC)
- I reviewed a small number of articles with curlies and found copyvio issues and typographical issues which would not be easily resolved by a bot. (More at MOS discussion.) I hope to return to this at some point in the future. – Reidgreg (talk) 19:24, 30 October 2016 (UTC)
Bot to automatically add Template:AFC submission/draft to Drafts
Let me explain my request. There are quite a few new users who decide to create an article in the mainspace, only to have it marked for deletion (not necessarily speedy). They might be given the option to move their article to the draft space, but just moving it to the draft space doesn't add the AfC submission template. Either someone familiar with the process who knows what to fill in for all the parameters (as seen in drafts created through AfC) or a bot would need to add the template, as the new user would definitely not be familiar with templates, let alone how to add one.
My proposal is this: Create a bot that searches for articles recently moved from the mainspace to the draft space and tags those articles with all the parameters that a normal AfC submission template would generate. For those who just want to move their articles to the draft space without adding an AfC submission template (as some more experienced editors would prefer, I'm sure), there could be an "opt-out" template that they could add. The bot could also search for drafts created using AfC that the editor removed the AfC submission template from and re-add it. Newer editors may blank the page to remove all the "interruptions" and accidentally delete the AfC submission template in the process, as I recently saw when helping a new editor who created a draft. Older editors could simply use the "opt-out" template I mentioned above. If possible, the bot could mention its "opt-out" template in either its edit summary or an auto-generated talk page post or (because it'll mainly be edited by one person while in the draft space) in an auto-generated user talk page post.
I realize this may take quite a bit of coding, but it could be useful in the long run and (I'm assuming) some of the code is there already in other bots (such as auto-generated talk page posts, as some "archived sources" bots do). -- Gestrid (talk) 06:55, 12 July 2016 (UTC)
- Sounds like a sensible idea; maybe the bot could check the move logs? Enterprisey (talk!) (formerly APerson) 01:59, 13 July 2016 (UTC)
- Sorry for the delay in the reply, Enterprisey. I forgot to add the page to my watchlist. Anyway, I'm guessing that could work. I'm not a bot operator, so I'm not sure how it would work, but what you suggested sounds right. -- Gestrid (talk) 07:44, 28 July 2016 (UTC)
- I don't think this should be implemented at all; adding an AfC template to drafts should be opt-in, not opt-out, since drafts with the tag can be speedily deleted. A bot can't determine whether an article moved to draft comes from an AfC or some other review or deletion process. Diego (talk) 10:12, 9 August 2016 (UTC)
- @Diego Moya: I'm no bot operator, but I'm pretty sure there are ways for bots to check the move log of a page. It comes up in #cvn-wp-en connect. CVNBot will say "User [[en:User:Example]] Move from [[en:2011 Brasileiro de Marcas season]] to [[en:2011 Brasileiro de Marcas]] URL: https://en.wikipedia.org/wiki/2011_Brasileiro_de_Marcas_season 'moved [[2011 Brasileiro de Marcas season]] to [[2011 Brasileiro de Marcas]]'". (That last part that starts "Moved on" is the edit summary.)
- As a side-note, in case you don't know, #cvn-wp-en is the automated IRC channel that monitors potential vandalism for all or most namespaces on English Wikipedia, courtesy of the Countervandalism Network.
- -- Gestrid (talk) 07:36, 25 September 2016 (UTC)
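A minimal sketch of that move-log check: find pages recently moved from mainspace into the Draft: namespace (118 on enwiki) and tag untagged ones. The template parameters and the opt-out template name are assumptions from this proposal, not existing conventions.

```python
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def recent_main_to_draft(total=200):
    for entry in site.logevents(logtype='move', total=total):
        try:
            source, target = entry.page(), entry.target_page
        except KeyError:            # hidden/suppressed log entries
            continue
        if source.namespace() == 0 and target.namespace() == 118:
            yield target

for draft in recent_main_to_draft():
    text = draft.text
    if 'AFC submission' in text or 'AFC draft opt-out' in text:
        continue  # already tagged, or opted out (hypothetical template name)
    draft.text = '{{subst:AFC submission/draft}}\n' + text  # parameters omitted
    draft.save(summary='Bot: tagging draft moved from mainspace; '
                       'see the opt-out template to prevent this')
```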
Excuse me, but I can't see how that would help. With the move log you can't distinguish an article moved to draft space by a newcomer from an article moved by an experienced editor who never heard of this proposed bot. Diego (talk) 08:27, 25 September 2016 (UTC)
- In that case, an edit like that is easily undo-able (if admittedly slightly irritating), and the bot can have a link to its "ignore template" within the edit summary, similar to how ClueBot NG has its "Report False Positive?" link in its edit summary. -- Gestrid (talk) 16:50, 25 September 2016 (UTC)
- You're assuming that there will be someone there to make such a review, which is not a given. But then you want to create a procedure where, by default, drafts are put on a deletion trail without human intervention after someone makes a non-deleting move, and it requires extra hoops to avoid this automated deletion outcome. Such an unsupervised procedure should never be allowed, especially when it gets to delete content that was meant to be preserved. And what was the expected benefit of such a procedure again? Diego (talk) 23:37, 25 September 2016 (UTC)
Fix thousands of citation errors in accessdate
Since the recent changes in the citation templates (see Update to the live CS1 module weekend of 30–31 July 2016), the parameter access-date now requires a day and no longer accepts a "month-year" formatted date such as August 2016; it displays a CS1 error (Check date values in: |access-date=) as soon as the article has been edited.
- See example
- I have no idea how many articles this concerns on Wikipedia.
- In the last 10 months I have used this now deprecated format in several thousand citations (in about 1000 articles).
- TODO: change/fix access-date or accessdate from, for example, August 2016 to 1 August 2016, by adding the first day of the month.
- Special case: if the parameter date contains a more recent date (e.g. 4 August 2016) than the fixed accessdate parameter (i.e. 1 August 2016), the value in access-date would be older than that in date. Although accessing a cited source before its publication doesn't seem very sensible to me, there is (currently) no CS1 error, so adjusting for "accessdate == date" is purely optional.
- Add a "1 " (1\s) in front of the written-out month ("August"), maintaining the original spacing, i.e. a white space between "=" and the date value.
Adding a day to the accessdate parameter seems like a straightforward change to me. However, if I am the only editor on Wikipedia who used this date format, or if my request causes some kind of controversy, I'd prefer to do these changes manually. Thx for the effort, Rfassbind – talk 12:15, 4 August 2016 (UTC)
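The mechanical change requested in the TODO is small; a minimal sketch (whether to run it at all is what the discussion below turned on):

```python
import re

# Match |access-date= / |accessdate= followed by a bare "Month YYYY" value,
# preserving whatever spacing surrounds the "=".
PATTERN = re.compile(
    r'(\|\s*access-?date\s*=\s*)'
    r'((?:January|February|March|April|May|June|July|August|'
    r'September|October|November|December)\s+\d{4})'
)

def fix_accessdates(wikitext):
    return PATTERN.sub(r'\g<1>1 \g<2>', wikitext)

# fix_accessdates('{{cite web |url=... |access-date=August 2016}}')
# -> '{{cite web |url=... |access-date=1 August 2016}}'
```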
- Like this? If so, I can fix at least some portion of them.
- As for the special case, it is not unusual for access-dates to precede publication dates, since some publication dates are in the future. Putting "1" in front of the month gets us within 30 days of the actual access date, which is close enough for verification. – Jonesey95 (talk) 12:55, 4 August 2016 (UTC)
- Oppose. Only the editor who accessed the information knows on what day the information was accessed. Only the editor who added the access date should fix this so-called error. I personally see no need to fix this "problem". Jc3s5h (talk) 13:29, 4 August 2016 (UTC)
@Jc3s5h I expected this kind of unhelpful comment, and that's why I was reluctant to post this request in the first place. It's depressing sometimes, yes, but that's the way wikipedia works. @Jonesey95 yes that's a perfectly good fix. Rfassbind – talk 15:47, 4 August 2016 (UTC)
- I won't hesitate to propose a community ban against any bot that is designed to falsify information. Jc3s5h (talk) 15:56, 4 August 2016 (UTC)
- Access dates can be easily determined by looking at when the URL was added. That can then be used to reasonably extrapolate the access dates. InternetArchiveBot does the same when determining archive snapshot dates.—cyberpowerChat:Online 16:16, 4 August 2016 (UTC)
- Access dates for journal citations with DOI or other identifier values can also be removed. Per the {{cite journal}} documentation, "access-date is not required for links to copies of published research papers accessed via DOI or a published book". – Jonesey95 (talk) 18:27, 4 August 2016 (UTC)
- @Cyberpower678: Doesn't always work, particularly if text is copypasted between articles, see my post of 14:54, 4 August 2016 at User talk:Corinne#Sol Invictus. --Redrose64 (talk) 18:45, 4 August 2016 (UTC)
We can even check whether the link is still alive and put the current date. -- Magioladitis (talk) 16:24, 4 August 2016 (UTC)
- oppose adding the day. The format should not insist on it, and that change should be reverted; I doubt that a well-attended RfC has been held to see if there is consensus for such a change. The change opens up a can of worms over the correct place for the day: "January 16, 2016" or "16 January 2016" or 2016-01-16. The reason for the access date is to help editors in the future find an archived version of the page if necessary. A granularity of a month is sufficient for that. -- PBS (talk) 18:34, 4 August 2016 (UTC)
- The day is useful for recent events, especially for web pages that are likely to be revised. Knowing whether a site was accessed before or after a late-breaking revelation can help an editor decide whether a site should be revisited, with an eye to revising the article to incorporate the latest information. But for older sources, the day is seldom useful. Jc3s5h (talk) 19:17, 4 August 2016 (UTC)
- Then leave it up to the judgement of the editor and do not impose the day automatically with a bot. -- PBS (talk) 20:25, 13 August 2016 (UTC)
- Support if narrow in scope Since Rfassbind has personally revised many 100s (or more) of these pages, if he can attest to the access-date of each reference, I see no problem adding the day corresponding to his article revisions, which only comes to light after the module update. I don't know enough about the |access-date= portion of the module update to have a further opinion yet. ~ Tom.Reding (talk ⋅dgaf) 22:44, 4 August 2016 (UTC)
- Methodology: Since Rfassbind has been consistent with his edit summaries, they can be searched for text such as "overall revision". These are all exclusively minor planet pages (as can/will be double-checked as such), and all share similar online sources from which their references are built, so I have no problem taking this list, organizing it by date, and applying that date to all |access-date= parameters on the corresponding page (or whichever references Rfassbind confirms checking). As a further check, I'd only edit the "old" access-dates which match the corresponding month & year of the overall revision. ~ Tom.Reding (talk ⋅dgaf) 00:06, 5 August 2016 (UTC)
- Support as it would benefit the project.BabbaQ (talk) 11:57, 20 August 2016 (UTC)
- @BabbaQ How? -- PBS (talk) 19:46, 27 August 2016 (UTC)
- Oppose per Jc3s5h and Cyberpower678. Providing a source answers the question: "where exactly did you get this information from", not "where else you can probably find this information, maybe". It is a bit dishonest to change the accessdate to something that for any given day has only about 3% chance of being the actual accessdate. It's also an important principle that people who use citation templates should read the template documentation and comply with it, instead of relying on others to "fix" things for them, especially in cases such as this when we aren't mind readers. Cyberpower678's point has merits: we should limit ourselves to what archive bots do. It doesn't fix all cases, but it does not introduce what are probable mistakes either. – Finnusertop (talk ⋅ contribs) 20:16, 27 August 2016 (UTC)
- I think you misunderstand. The CS1 and CS2 templates have been updated to reject citation templates when the accessdate parameter is missing a date and only has the month and the year. The idea of this request, since there are now thousands of citation templates giving nice red errors everywhere is to have a bot add a date to these access dates to fix the error. My idea is that a bot can extrapolate the access date based on when the link is added since in 95% of the case, the link is added the same day it was initially accessed when sourcing.—cyberpowerChat:Limited Access 20:22, 27 August 2016 (UTC)
- It would be better to make the templates recognize just the year, or just the year and month, for access dates that are older than the "improvement" to the templates. Jc3s5h (talk) 21:02, 27 August 2016 (UTC)
- @Cyberpower678 where is the RFC where a substantial number of editors agreed to this change? If no such RfC was held then no bot changes to enforce it should be made; and the obvious solution is to removed the red error message. If there are thousands of them then a lot (some added by me) were added with the day date deliberately miss out, so where is the RfC justifying changing them? -- PBS (talk) 14:41, 29 August 2016 (UTC)
- I wouldn't know. I didn't make the change to the CS templates. I'm just suggesting what a bot could do to fix the red error messages.—cyberpowerChat:Online 14:44, 29 August 2016 (UTC)
- Comment Original discussion on incomplete access dates. The CS1 forum is the correct place to discuss if a CS1 error should be generated or not. I'm afraid this discussion is deadlocked due to questions about legitimacy of the CS1 error which can't be resolved here. -- GreenC 15:07, 29 August 2016 (UTC)
- Here are the stats:
- Articles with a bad access-date: 14665
- Cites with a bad access-date: 32255
- Full list available on request (from the 8/20/2016 database) -- GreenC 17:35, 29 August 2016 (UTC)
- Support—and base the day added on the day the edit was made, not the first or last of the month. Major style guides specify that an access date needs to be complete to the day, and so should we. I would also add a piece of logic so that the date added can't be earlier than the publication date if a full date is specified there, for obvious reasons. Imzadi 1979 → 22:47, 30 August 2016 (UTC)
- Oppose and change the citation template back to accepting month and year. There is no valid reason for it to not accept it. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 00:48, 31 August 2016 (UTC)
- Support because this will never be reverted in the cite templates, for the very sane and reasonable reason that all style guides require access dates to be complete to the day. So basically, per Imzadi. This obviously needs to be based on the revision where the link was first added, so there needs to be some extensive testing to deal with reverts and vandalism. Blindly putting the date to the first of the month, however, is unacceptable. Headbomb {talk / contribs / physics / books} 12:19, 8 September 2016 (UTC)
- Support Full dates have been in the style guidelines since 2006 (see links by Trappist the Monk). We should follow the documentation/guidelines. If this was a new guideline recently added I could understand, but it's not new. -- GreenC 01:28, 10 September 2016 (UTC)
- Support per Green Cardamom. I just don't see any problem with fixing the date relative to the style guide. It would remove unsightly error messages and would be accurate to within 30 days of the access date. --RuleTheWiki (talk) 11:08, 14 October 2016 (UTC)
- Oppose (with possible compromise): I've worked through hundreds of these now by hand. Besides the accuracy problems pointed out above with blindly adding "1", there have been dozens of other problems I've found in the references that I've fixed. Of course, dead URLs are the most common, but there have been incorrect titles/authors, incomplete URLs, URLs that are moved, incorrect references - just about every problem you can think of in a reference. While working through the mountain is a large task, and there are certainly some similar pages (so far, I've seen both the minor planet pages and the Gitmo detainee pages) that could benefit from limited bot help, I think the overall improvement to references is worth some short-term red warnings. Though, if someone wants to change the incomplete date warnings to a separate green category and just leave the other date errors red, I'd strongly support that. Floatjon (talk) 15:10, 17 October 2016 (UTC)
- Needs wider discussion. This just isn't the place to hold this discussion. Advertise it at a village pump with an RfC and then return here after its been closed. ~ Rob13Talk 19:22, 30 October 2016 (UTC)
Make more use of external link templates
{{Anarchist Library text}}, for example, has just three transclusions; yet we currently have 222 links to the site which it represents. A number of similar examples have been uncovered, with the help of User:Frietjes and others, in recent days, at the daily Wikipedia:Templates for discussion/Log sub-pages. I've made {{Underused external link template}} to track these; it adds templates to Category:External link templates with potential for greater use.
Using external links templates aids tracking, facilitates quick updates when a site's link structure changes, and makes it easy to export the data into Wikidata.
Is anyone interested in running a bot to convert such links to use the templates, please? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:58, 6 August 2016 (UTC)
- @Pigsonthewing: This is possible, but I need an example to follow off of. I also noticed that many of the articles that link to the domain but don't transclude the template are links in the references, does {{Anarchist Library text}} still fit in that case? Dat GuyTalkContribs 05:20, 14 September 2016 (UTC)
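The general shape of such a conversion is simple; a sketch, assuming the template takes the URL's path component as its first parameter (verify against the template's documentation before running anything), with the domain here being the one {{Anarchist Library text}} points at:

```python
import re

def wrap_links(wikitext, domain='theanarchistlibrary.org',
               template='Anarchist Library text'):
    # Replace bare external links to `domain` with a template call whose
    # first parameter is the path component of the URL.
    pattern = re.compile(r'https?://(?:www\.)?%s/([^\s\]<|}]+)'
                         % re.escape(domain))
    return pattern.sub(lambda m: '{{%s|%s}}' % (template, m.group(1)),
                       wikitext)
```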
Could someone help substitute all transclusions of {{China line}}, which has been in the TfD holding cell for more than a year? (In addition, could instances of {{China line|…}}{{China line|…}}, where two or more transclusions are not separated by anything (or by just a space in parameter |lines= of {{Infobox station}}), be replaced with
{{Plainlist|1=
* {{China line|…}}
* {{China line|…}}
}}
since this format seems to be heavily used in some infoboxes?)
Thanks, Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 16:04, 8 August 2016 (UTC)
- If no-one else picks this task up, ping me in like two weeks and I'll do it. ~ Rob13Talk 21:17, 8 August 2016 (UTC)
- The first part of your request can already be handled by AnomieBOT if Template:China line is put in Category:Wikipedia templates to be automatically substituted. Pppery (talk) 22:09, 21 August 2016 (UTC)
- @Pppery and BU Rob13: So maybe just do №2 with a bot through AWB and then put the template into AnomieBOT's category? Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 16:04, 23 August 2016 (UTC)
- (Pinging BU Rob13 and Pppery again, because that might not have gone through Echo. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 16:05, 23 August 2016 (UTC))
- Wait, Jc86035, I missed something in my previous comment. AnomieBOT will only substitute templates with many transclusions if they are listed on the template-protected User:AnomieBOT/TemplateSubster force. Note that this process of wrapper-then subst has been done before with Template:Scite. (I did get the above ping, by the way) Pppery (talk) 16:07, 23 August 2016 (UTC)
- @Pppery: Thanks for the clarification; although since BU Rob13 is an administrator, that's not necessarily going to be much of a problem. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 16:12, 23 August 2016 (UTC)
- @Jc86035: Note that pings only work if you do nothing but add a comment (not also move content around in the same edit), and thus neither me nor Bu Rob13 got the ping above. Pppery (talk) 16:19, 23 August 2016 (UTC)
- (neither do pings work if I misspell the username, BU Rob13) Pppery (talk) 16:20, 23 August 2016 (UTC)
I can probably still handle this, but I currently have a few other bot tasks on the back burner and I'm running low on time. I'll get to it if no-one else does, but it's up for grabs. ~ Rob13Talk 20:39, 23 August 2016 (UTC)
Bumping to prevent this from getting archived. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 08:30, 8 October 2016 (UTC)
- Unfortunately, I've gotten quite busy, so I probably can't handle this. Up for grabs if anyone else wants to do it; it's a very simple AWB task. Alternatively, we could just do the substitution and not worry about formatting for the sake of getting this done quickly. ~ Rob13Talk 11:13, 8 October 2016 (UTC)
- Hmm, I might start working on this. Basically, what you want Jc86035 is to subst {{China Line}}, and if there are two China Line templates on the same line, to convert it to plainlist? Dat GuyTalkContribs 11:19, 8 October 2016 (UTC)
- @DatGuy: Yeah, basically. (The reason why this situation exists is because the template was standardised to use {{RouteBox}} and {{Rail color box}}; before this, the template had whitespace padding and someone just didn't bother to add <br> tags or anything.) Jc86035 (talk) Use {{re|Jc86035}} to reply to me 11:47, 8 October 2016 (UTC)
- Oh, and just a caveat: for multiple instances in the same row with parameter |style=box or =b, they should only be separated by a space (like in the {{Beijing Subway Station}} navbox). Thanks for your help! Jc86035 (talk) Use {{re|Jc86035}} to reply to me 11:52, 8 October 2016 (UTC)
- @Jc86035: Could you give me examples of what you'd like it to do on your talk page? Dat GuyTalkContribs 14:49, 8 October 2016 (UTC)
@DatGuy, BU Rob13, and Pppery: The template appears to have been substituted entirely by Primefac; not sure if {{Plainlist}} has been added to infoboxes. Might be easier to just do a search for {{Rail color box}} and replace semi-automatically with AWB. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 09:35, 21 October 2016 (UTC)
- Jc86035, didn't know this thread existed, so other than the replacement (it wasn't a true subst) I didn't change anything. I checked a handful of the latter ones (which I seem to recall I had some issues with because they were next to each other) and it looks like they mostly were separated by <br>. You're welcome to look yourself, though; my edits replacing this template are here. Primefac (talk) 15:02, 21 October 2016 (UTC)
- Would something like find: (\{\{[cC]hina [lL]ine\|[^<][^\n]) and replace with: *$1 work? (I probably have a mistake, but the general idea?) Actually, since the template has been deleted, do we need to do something else? Dat GuyTalkContribs 16:00, 21 October 2016 (UTC)
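For reference, the plainlist wrapping asked for above could look something like the sketch below. Illustrative only: it ignores nested templates, and the |style=box / =b cases (which should stay space-separated) would need to be excluded first.

```python
import re

# Two or more adjacent {{China line|...}} transclusions.
ADJACENT = re.compile(r'(?:\{\{[Cc]hina line\|[^{}]*\}\}\s*){2,}')
ONE = re.compile(r'\{\{[Cc]hina line\|[^{}]*\}\}')

def wrap_plainlist(match):
    items = ONE.findall(match.group(0))
    return '{{Plainlist|1=\n' + '\n'.join('* ' + i for i in items) + '\n}}'

def fix(wikitext):
    return ADJACENT.sub(wrap_plainlist, wikitext)
```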
- Unless you wanted to go through every instance of {{rail color box}} and {{rint}}, going through my edit history would probably be easier (and fewer false positives in unrelated articles). I would bet, though, that the number of side-by-sides without <br> (maybe 10?) is going to not really be worth all that hassle. Primefac (talk) 16:13, 21 October 2016 (UTC)
- @Primefac: You also seem to have substituted the template incorrectly (not sure how that happened); the parameter |inline=yes is usually used for subway/metro lines for {{Rail color box}} in infoboxes (e.g. in New York, Hong Kong and others). Jc86035 (talk) Use {{re|Jc86035}} to reply to me 08:16, 22 October 2016 (UTC)
- @Primefac: And you seem to have neglected to replace "HZ" with "HZM" (example), and "NB" with "NingboRT" (example). Wouldn't it have been easier to substitute the template normally? Jc86035 (talk) Use {{re|Jc86035}} to reply to me 16:24, 22 October 2016 (UTC)
- Yeah, so I fucked up. I'm working on fixing the lines that I didn't translate properly. Primefac (talk) 22:20, 22 October 2016 (UTC)
- And for the record, lest you think I'm completely incompetent and managed to screw up a simple subst: I didn't subst because I didn't realize the template was already a wrapper. Honestly not sure how I made that mistake, but there you go. Primefac (talk) 22:55, 22 October 2016 (UTC)
- @Primefac: Oh well. I guess AWB edits can always be fixed by more AWB edits. (Thanks for the quick response.) Though I'd still prefer going through all of them to add |inline=, because I'd already substituted some instances before you replaced the rest. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 04:20, 23 October 2016 (UTC)
- Jc86035, probably the easiest way to make that list would be to find what transcludes the templates in Category:China rail transport color templates. Be a bit of a big list, but it might be simpler than trying to mess around with pulling out the specific edits I made to replace everything. Primefac (talk) 04:26, 23 October 2016 (UTC)
aeiou.at
We have around 380 links to http://www.aeiou.at/ using a format like http://www.aeiou.at/aeiou.encyclop.b/b942796.htm
Both the domain name and URL structure have changed. The above page includes a search link (the last word in "Starten Sie eine Suche nach dieser Seite im neuen AEIOU durch einen Klick hier") and when that link is clicked the user is usually taken to the new page; in my example this is: http://austria-forum.org/af/AEIOU/B%C3%BCrg%2C_Johann_Tobias
To complicate matters, 84 of the links are made using {{Aeiou}}.
Some pages may already have a separate link to the http://austria-forum.org/ page, and some of those may use {{Austriaforum}}.
Can anyone help to clear this up, please?
Ideally the end result will be the orphaning of {{Aeiou}} and all links using {{Austriaforum}}. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:56, 14 August 2016 (UTC)
- @Pigsonthewing: To make sure I understand
- First subtask: for each invocation of {{Aeiou}}
- Construct the Fully qualified URL
- Get the content of the page
- Search through the content for the "hier" hyperlink
- Extract the URL from the hyperlink
- Parse out the new Article suffixing
- Replace the original invocation with the new invocation
- Second subtask: For each instance of the string http://www.aeiou.at/aeiou/encyclop. in articlespace
- Construct the Fully qualified URL
- Get the content of the page
- Search through the content for the "hier" hyperlink
- Extract the URL from the hyperlink
- Parse out the new Article suffixing
- Replace the original string with the the austriaforum template invocation
- Do I have this correct? Also can you please link the discussion that endorses this replacement? Hasteur (talk) 13:33, 12 October 2016 (UTC)
- All correct, with the provisos - first for clarity - that in the first subtask, "new invocation" means "new invocation of {{Austriaforum}}"; and that if {{Austriaforum}} is already present, a second is not needed. The current target page says "You are now in the "old" (no longer maintained version) of the AEIOU. The maintained version can be found at..." To the best of my knowledge, we don't need a special discussion, other than this one, to endorse fixing 380 such links. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:36, 12 October 2016 (UTC)
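A minimal sketch of the lookup step shared by both subtasks, assuming the "hier" search link described above is a plain anchor in the old page's HTML (the anchor-text pattern is an assumption based on that description):

```python
import re
import requests

def new_url(old_url):
    html = requests.get(old_url, timeout=30).text
    # Find the href of the anchor whose text is "hier".
    m = re.search(r'<a[^>]+href="([^"]+)"[^>]*>\s*hier\s*</a>', html)
    if not m:
        return None
    # Follow the search link; it usually redirects to the new page.
    resp = requests.get(requests.compat.urljoin(old_url, m.group(1)),
                        timeout=30, allow_redirects=True)
    return resp.url if 'austria-forum.org' in resp.url else None

# new_url('http://www.aeiou.at/aeiou.encyclop.b/b942796.htm')
# should yield http://austria-forum.org/af/AEIOU/B%C3%BCrg%2C_Johann_Tobias
```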
CFD daily subpages
A bot should create daily WP:CFD subpages for days in the following month at the end of each month. The ones up to Wikipedia:Categories for discussion/Log/2016 August 1 were created by ProveIt, while the ones from Wikipedia:Categories for discussion/Log/2016 August 2 to Wikipedia:Categories for discussion/Log/2016 September 1 were created by BrownHairedGirl. GeoffreyT2000 (talk) 01:56, 20 August 2016 (UTC)
- @Anomie: Could part of the script you use for AnomieBOT's TfD clerking be adopted for this easily? You wouldn't even need to worry about the transclusions and all that, just the creation of the actual subpages similar to how Special:PermaLink/732802882 looks. ~ Rob13Talk 05:50, 20 August 2016 (UTC)
- Yes, it could, although for TfD it creates each page daily instead of doing a whole month at a time (is there a reason for doing a whole month at once?). Are there any other clerking tasks at CFD that could use a bot, e.g. updating the list at Wikipedia:Categories for discussion#Discussions awaiting closure? Looking at the history, it seems @Marcocapelle: and @Good Olfactory: do it manually at the moment. For reference, the list of things AnomieBOT does for TFD are listed at User:AnomieBOT/source/tasks/TFDClerk.pm/metadata in the "Description" column. Anomie⚔ 15:50, 23 August 2016 (UTC)
- @Anomie: I would suggest something very similar to the TFD task:
- Create the daily CFD subpage.
- Fix the headers on the daily CFD subpages, if they get removed or damaged.
- Maintain the page Wikipedia:Categories for discussion/Awaiting closure.
- Subst {{cfd top}} and {{cfd bottom}}, when editing the page anyway.
- Other tasks affecting only WP:CFD and subpages as determined by consensus at WT:CFD.
I created the August pages after I noticed in the early hours of one day that the current day's page didn't exist. I have done that a few times over the years, but there appear to be other editors who do it regularly. It's a tedious job, so congrats to the regulars ... but it could easily be done by a bot.
When I did the August pages I had forgotten that some time back, I created a template which could be substed to create the new pages. This discussion reminded me of it, and I eventually found it at Template:CFD log day. It may need some tweaking, but it would be handy for a bot to use something like that. --BrownHairedGirl (talk) • (contribs) 09:02, 20 August 2016 (UTC)
- It would be nice if this process can be automated. It would be preferable, in that case, to create daily updates. Marcocapelle (talk) 17:40, 23 August 2016 (UTC)
- It is already mostly a bot, I have a program that creates a page and loads it into the paste buffer, then you just paste. It takes about 10 minutes to make a months worth of pages this way. It would be no big deal to make it fully a bot, I've written dozens of bots for a previous employer. If I had permission to run a bot I would have done so years ago, but really it only takes 10 minutes a month. I hate to think of anyone making them by hand. -- Prove It (talk) 15:23, 28 August 2016 (UTC)
- ProveIt, would it be possible for you to make that generation code available as a GitHub Gist or something? Enterprisey (talk!) 18:24, 29 October 2016 (UTC)
- No objection to sharing, but I ought to clean it up a bit... this is code I wrote for fun in 2006, I'd want to at least pep8 before showing it to anyone - Prove It (talk) 18:56, 30 October 2016 (UTC)
Coding... Anomie⚔ 23:08, 30 October 2016 (UTC)
- BRFA filed I'll note that the bot can't easily use Template:CFD log day unless we want to give up having the bot fix the page header when someone screws it up. Anomie⚔ 23:55, 30 October 2016 (UTC)
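For reference, the page-creation core of such a task is only a few lines with pywikibot. Here the substituted Template:CFD log day stands in as a placeholder header (per Anomie's caveat above, a bot that also repairs damaged headers would embed the header text directly instead):

```python
from datetime import date, timedelta

import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def create_cfd_log(day):
    # Titles look like "Wikipedia:Categories for discussion/Log/2016 August 1".
    title = ('Wikipedia:Categories for discussion/Log/%d %s %d'
             % (day.year, day.strftime('%B'), day.day))
    page = pywikibot.Page(site, title)
    if not page.exists():
        page.text = '{{subst:CFD log day}}'  # placeholder header
        page.save(summary='Bot: creating daily CFD log page')

create_cfd_log(date.today() + timedelta(days=1))  # tomorrow's page
```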
Bot to remove external link to MySpace?
Following the TfD discussion here: Wikipedia:Templates_for_discussion/Log/2015_January_15#Template:Myspace, I think we should remove all external links to MySpace as unreliable. They keep popping up. -- Magioladitis (talk) 13:05, 20 August 2016 (UTC)
- We have over 3400 links to MySpace. I'd want to see a much wider discussion before these were removed. [And it's deplorable that a template for a site we link to so many times was deleted.] Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:43, 22 August 2016 (UTC)
They are often in the external links section. See 120 Days. It is the band's homepage. Is there reason to delete primary sources? Also don't understand the template deletion. -- GreenC 13:21, 29 August 2016 (UTC)
Apparently the links were added after the template was deleted. -- Magioladitis (talk) 13:41, 29 August 2016 (UTC)
- Looks like they were converted? [1] -- GreenC 14:16, 29 August 2016 (UTC)
I've posted a proposal to recreate the template. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:16, 29 August 2016 (UTC)
Green Cardamom and Pigsonthewing thanks for the heads up. I would expect that according to the deletion reason of the template that the links were removed. Otherwise, the template is a valuable shortcut that helps against linkrot. - Magioladitis (talk) 15:23, 29 August 2016 (UTC)
I whole-heartedly endorse this idea; I see no problem with removing links that lead to a no-longer-used, nearly dead website/service when there are much better alternatives to link to. --RuleTheWiki (talk) 11:01, 14 October 2016 (UTC)
Non-notable article finder bot
More than 1,000 pages are created every day. Our new page patrollers work hard, yet some articles which are clearly non-notable still survive deletion. Special:NewPages shows only articles up to one month old, even if we click "oldest".

Some pages can only be deleted through AFD. Users tag them for speedy deletion, and administrators remove the speedy deletion tag with an edit summary suggesting that "the article has no notability but can be deleted in AFD". In some cases the article is never taken to AFD, as the user who tagged it for speedy deletion has moved on to other new pages, or is busy with their personal life (they are volunteers). And as the AFD template stays for two or three days before being removed by administrators, other new page patrollers who focus on pages two days old see the speedy deletion tag, but sometimes don't notice that an administrator removed it with a suggestion of AFD. A few of these articles pass the one-month limit and survive on English Wikipedia.

Some articles are prodded for deletion and the PROD is removed after two or three days. If anybody then notices that the article is not notable, it will be taken to AFD.

And some articles where the PROD is removed survive because the article is long and well written, with paragraphs, an infobox template, categories, seemingly reliable sources and good English (only extra research can show that the article is not notable). That means spending our internet bandwidth.

As the proverb goes, this is finding a needle in a haystack: finding these articles among five million is a nightmare. We don't have the time, energy or eternal life for it, nor does any other editor.

I am damn sure that there are thousands of such articles among the five million. Only a bot can find them.
This is what the bot will do (Wikimedia Commons has a comparable bot for Flickr):

- The bot will check articles which are more than six months old. If an article was speedily deleted before and recreated by the same user, but was not deleted after recreation, the bot will put a notice on the talk page of the article. Volunteers will check the deletion log and see whether the article was speedily deleted before.
- For articles which are a minimum of six months old, have fewer than 50 edits in the edit history and have been edited by fewer than 15 editors, the bot will run a Google News search with the article name inside quotation marks ("_____"), and a Google Books search for the article name. If both results are unsatisfactory, the bot will put a notice on the talk page of the article (if the Google News results suggest the article is not notable, but the Google Books search shows good results, the bot won't tag the article's talk page). Volunteers will then check the notability of the article. The bot will not make more than 30 edits per day.
The problem is that many good articles are unsourced and badly written; after checking on the internet, editors decide that the article is notable. Meanwhile, some articles whose subjects have no notability at all are wonderfully written. Thank you. Marvellous Spider-Man 13:13, 22 August 2016 (UTC)
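For what it's worth, the mechanical part of these criteria (age, edit count, distinct editors) is easy to compute; only the Google News/Books step would need an external search API. A rough sketch of the first filters, assuming pywikibot:

```python
import datetime
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def passes_first_filters(page, max_edits=50, max_editors=15):
    """True if the article is over six months old with few edits/editors."""
    created = page.oldest_revision.timestamp
    if (datetime.datetime.utcnow() - created).days < 183:
        return False
    revs = list(page.revisions())           # revision metadata only
    if len(revs) >= max_edits:
        return False
    return len({r.user for r in revs}) < max_editors
```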
User:Marvellous Spider-Man, for something like this, could you apply the above rules to a set of articles manually and see what happens - pretend you're a bot. Then experiment with other rules, refine them, and gain experience with the data. Once an algorithm is established that works (not many false positives), codifying it becomes a lot less uncertain, because there is evidence the algorithm will work and datasets to compare with and test against. You could use the "view random article" feature and keep track of results in columns and see what washes out at the end: Column 1: Is it six months old? Column 2: Does it have fewer than 50 edits? etc.. -- GreenC 13:56, 29 August 2016 (UTC)
- I think Green Cardamom's suggestion is great. With a bot where the algorithm isn't immediately obvious, the proposed algorithm should definitely be tested manually first. Enterprisey (talk!) 22:45, 3 September 2016 (UTC)
- We need to be very careful about deletion bots, especially ones that look for borderline cases such as articles that didn't meet the speedy deletion criteria but do merit deletion. We need to treasure the people who actually write Wikipedia content and try to reduce the number of incorrect deletion tags that they have to contend with. Anything that speeds up sloppy deletion tagging and reduces the accuracy threshold for deletion tags is making the problem worse.
- More specifically, we have lots of very notable articles that 15 or fewer editors have edited. I'd be loathe to see "number of editors" become a metric that starts to be used in our deletion processes. Aside from the temptation on article creators to leave in a typo to attract an edit from gnomes like me; and put in a high level category to attract an edit from one of our categorisers; we'd then expect a new type of gaming the system from spammers and particular enthusiasts as groups of accounts start editing each others articles. You do however have a point that some newpage patrollers will incorrectly tag articles for speedy deletion where AFD might have resulted in deletion. But I think the solution to that is better training for newpage patrol taggers and a userright that can be taken away from ones that make too many errors. ϢereSpielChequers 10:21, 4 September 2016 (UTC)
- Here's an anecdotal example of testing your proposed criteria. I have created about 11 articles. I tend to create articles for people and things that are notable but not famous. All of my articles have fewer than 50 edits and fewer than 15 editors, so they would all fail the proposed test, but all of the articles are likely to survive notability tests.* That's a 100% false positive rate for the initial filter. I haven't tested the Google search results, but searching for most of the names of articles I have created would lead to ambiguous search results that do not hit the person or thing that the article is about.
- I think it would be better to focus on a category of article that is known to have many AfD candidates, like music singles or articles about people that have no references.
- * (Except perhaps the articles for music albums, which the music project holds to a different standard from WP:GNG for some reason.) – Jonesey95 (talk) 16:33, 4 September 2016 (UTC)
- There exists a hand-picked set of articles that specifically address a perceived lack of notability that no one has bothered to take to AfD yet: articles tagged with {{Notability}} in category Category:All articles with topics of unclear notability. Someone (a bot perhaps) should systematically take these items to AfD. The metric here is far more reliable than anything suggested above: it's not guesswork based on who has edited how much and when, but actual human editors tagging the articles because they seem to lack notability and prompting (but overwhelmingly not resulting in) an AfD nomination. – Finnusertop (talk ⋅ contribs) 16:43, 4 September 2016 (UTC)
- Other good places to look for deletion candidates are Category:All unreferenced BLPs and Category:Articles lacking sources. – Jonesey95 (talk) 17:03, 4 September 2016 (UTC)
- Even if you ignored all articles tagged for notability that have subsequently been edited, you would risk swamping AFD with low quality deletion requests. Better to go through such articles manually, remove notability tags that are no longer correct, do at least a google search to see if there are sources out there and prod or AFD articles if that is appropriate. ϢereSpielChequers 16:34, 14 September 2016 (UTC)
Birmingham City Council, England
Birmingham City Council have changed their website, and all URLs in the format:
https://www.birmingham.gov.uk/cs/Satellite?c=Page&childpagename=Member-Services%2FPageLayout&cid=1223092734682&pagename=BCC%2FCommon%2FWrapper%2FWrapper
are dead and, if in references, need to be either marked {{Dead link}} or converted to archived versions.
Many short URLs, in the format:
http://www.birmingham.gov.uk/libsubs
are also not working, but should be checked on a case-by-case basis. *sigh* Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:54, 24 August 2016 (UTC)
- @Pigsonthewing: I believe Cyberpower678's InternetArchiveBot already handles archiving dead URLs and thus no new bot is needed. Pppery (talk) 15:03, 24 August 2016 (UTC)
- One month on, this doesn't seem to have happened; we still have over 200 dead links beginning http://www.birmingham.gov.uk/cs/ alone. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:33, 19 September 2016 (UTC)
- @Pigsonthewing: Did all pages get moved to the new URL format, or did they create an entirely new site and dump the old content? If the former, it may be helpful to first make a list of all the old URLs on Wikipedia, and then try to find the new locations for a few of them. That may help make the bot job easier if a good pattern can be found. Having the new URL format is helpful, but having real examples of the before and after for multiple pages should make it easier. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 06:23, 25 August 2016 (UTC)
The page for the first, long, URL I gave above, like many others I've looked for, appears not to have been recreated on the new site.
The page that was at:
http://www.birmingham.gov.uk/cs/Satellite?c=Page&childpagename=Parks-Ranger-Service%2FPageLayout&cid=1223092737719&pagename=BCC%2FCommon%2FWrapper%2FWrapper
(archived here) is now, with rewritten content, at:
https://www.birmingham.gov.uk/info/20089/parks/405/sutton_park
and clearly there is no common identifier in the two URLs. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:05, 27 August 2016 (UTC)
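Given that, the realistic bot task for the long-format links is tagging rather than mapping. A sketch, assuming pywikibot (its exturlusage generator finds pages by external link):

```python
import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
DEAD = re.compile(r'https?://(?:www\.)?birmingham\.gov\.uk/cs/Satellite\?[^\s\]<>"|}]+')

def tag_dead_links(page):
    """Append {{Dead link}} after each long-format URL not already tagged."""
    text = page.text
    new = DEAD.sub(
        lambda m: m.group(0) if text[m.end():].startswith('{{Dead link')
        else m.group(0) + '{{Dead link|date=September 2016}}', text)
    if new != text:
        page.text = new
        page.save(summary='Tagging dead Birmingham City Council links')

for page in site.exturlusage('birmingham.gov.uk/cs/Satellite', namespaces=[0]):
    tag_dead_links(page)
```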
Help with anniversary calendar at Portal:Speculative fiction
In order to more easily update the anniversary section of the calendar, I would like a bot that:
- Runs once per week
- Makes a list at Portal:Speculative fiction/Anniversaries/Working of mainspace articles listed within Category:Speculative fiction and its subcategories (the categories in the "Subcategories" section on the category page).
- Updates Portal:Speculative fiction/Anniversaries/Current with all mainspace articles currently linked from the anniversaries pages (there are pages for every day of the year in the format Portal:Speculative fiction/Anniversaries/January/January 1).
- Checks Portal:Speculative fiction/Anniversaries/Ignore for a list of articles marked to be ignored (this page will be updated manually unless we can figure out a good system where the bot can do the listing).
- Updates Portal:Speculative fiction/Anniversaries/Todo with all mainspace articles from step 2 that are not in the list in step 3 and not listed to be ignored in step 4.
I hope that makes sense. Anyone up to the task? Thanks in advance for your time. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 06:18, 25 August 2016 (UTC)
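In case it helps a volunteer get started, a rough sketch of steps 2-5, assuming pywikibot (with the category recursion depth capped; the tree is large, as discussed further down):

```python
import pywikibot

SITE = pywikibot.Site('en', 'wikipedia')
BASE = 'Portal:Speculative fiction/Anniversaries'

def linked_articles(title):
    """Titles of mainspace articles linked from a portal page."""
    return {p.title() for p in pywikibot.Page(SITE, title).linkedPages(namespaces=0)}

def update_todo():
    cat = pywikibot.Category(SITE, 'Category:Speculative fiction')
    tracked = {p.title() for p in cat.articles(recurse=2, namespaces=0)}  # step 2
    current = linked_articles(BASE + '/Current')                          # step 3
    ignored = linked_articles(BASE + '/Ignore')                           # step 4
    todo = sorted(tracked - current - ignored)                            # step 5
    page = pywikibot.Page(SITE, BASE + '/Todo')
    page.text = '\n'.join('* [[%s]]' % t for t in todo)
    page.save(summary='Updating anniversaries to-do list')

update_todo()
```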
- @Nihonjoe: I don't think we really need a bot to do this. I can update the pages every week semi-manually if you like. Just one thing, I'm a bit confused as to what the "ignore list" is meant to do? How do you plan on getting the articles to go on it? Omni Flames (talk) 22:05, 29 August 2016 (UTC)
- @Omni Flames: I figured a bot would be able to do it faster than a person. It shouldn't be too complicated a task, either, but it would be tedious (hence the bot request). I could do it manually myself, but it would take a lot of time. The ignore list would likely be updated manually, with pages determined to not be needed on the Todo list. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 22:10, 29 August 2016 (UTC)
- @Nihonjoe: Well, when I said manually, I didn't really mean manually. I meant more that I'd create the lists using a bot each week and paste it on myself. That would mean we wouldn't even need a BRFA or anything. However, we can do it fully-automatically if that suits you better. Omni Flames (talk) 22:58, 29 August 2016 (UTC)
- @Omni Flames: If that's easier, that's fine. I figured having a bot do it automatically would relieve someone of having to manually do something every week. I'm fine either way, though. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 17:15, 30 August 2016 (UTC)
- @Omni Flames: Just following up to see if you plan to do this. Thanks! ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 17:39, 8 September 2016 (UTC)
- @Nihonjoe: I'll see what I can do. I've had a lot on my plate lately. Omni Flames (talk) 08:49, 9 September 2016 (UTC)
- @Omni Flames: Okay, I appreciate any help. I'll follow up in a couple weeks. Thanks! ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 19:08, 9 September 2016 (UTC)
- @Omni Flames: Just following up as promised. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 19:58, 6 October 2016 (UTC)
- @Nihonjoe: Sorry, but I don't think I have time to do this at the moment. I've had a lot going on in real life at the moment and I haven't been very active on wiki recently. Hopefully you can find someone else to help you with this, sorry for the inconvenience. Omni Flames (talk) 09:46, 7 October 2016 (UTC)
- @Omni Flames: I can understand real life taking over. Thanks, anyway. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 16:22, 7 October 2016 (UTC)
- Anyone else interested? It should be a pretty quick job. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 16:22, 7 October 2016 (UTC)
- @Nihonjoe: I wrote some code for this, but I have run into a recursion issue when getting all the articles in the Category:Speculative fiction tree. The tree either has loops (Category:Foo in Category:Bar [in ...] in Category:Foo) or is very large. I fixed one loop (Category:Toho Monsters), but I don't have time to check the entire tree. If I increase the maximum number of recursions permitted, it will work if the tree doesn't have any loops. It has been tested on smaller, clean trees with success. — JJMC89 (T·C) 15:39, 21 October 2016 (UTC)
- @JJMC89: Is there a bot that can check for loops and output a list? That will make it easier to fix. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 02:20, 27 October 2016 (UTC)
- Not that I am aware of. If I can find time, I might be able to come up with something. — JJMC89 (T·C) 05:57, 31 October 2016 (UTC)
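The loop report itself doesn't need a full bot run; an iterative walk with an explicit stack (a sketch, assuming pywikibot) sidesteps the recursion limit and prints any category that turns out to be its own ancestor:

```python
import pywikibot

def find_category_loops(root_title):
    """Depth-first walk of the subcategory graph, reporting back-edges.
    Loops buried entirely inside an already-visited branch are skipped."""
    site = pywikibot.Site('en', 'wikipedia')
    loops, seen = [], set()
    stack = [(pywikibot.Category(site, root_title), [root_title])]
    while stack:
        cat, path = stack.pop()
        for sub in cat.subcategories():
            title = sub.title()
            if title in path:                 # cycle back to an ancestor
                loops.append(' -> '.join(path + [title]))
            elif title not in seen:
                seen.add(title)
                stack.append((sub, path + [title]))
    return loops

for loop in find_category_loops('Category:Speculative fiction'):
    print(loop)
```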
Coordinates format RfC: Infobox park
Per this RfC (see Help:Coordinates in infoboxes), could all articles using {{Infobox park}} which are also in Category:Pages using deprecated coordinates format be run through with AWB (minor fixes turned on) with this regex (entire text, case-sensitive, other options default)? This should affect roughly 2,000 pages (with no visual changes, aside from the minor fixes).
Find:
*\| ?lat_d *= ?([\-0-9\. ]+)(\n? *\| ?lat_m *= ?([0-9\. ]*))?(\n? *\| ?lat_s *= ?([0-9\. ]*))?(\n? *\| ?lat_NS *= ?([NnSs]?) ?)?\n? *\| ?long_d *= ?([\-0-9\. ]+)(\n? *\| ?long_m *= ?([0-9\. ]*))?(\n? *\| ?long_s *= ?([0-9\. ]*))?(\n? *\| ?long_EW *= ?([EeWw]?) ?)?(\n? *\| ?region *= ?(.*) ?)?(\n? *\| ?dim *= ?(.*) ?)?(\n? *\| ?scale *= ?(.*) ?)?(\n? *\| ?source *= ?(.*) ?)?(\n? *\| ?format *= ?(.*) ?)?(\n? *\| ?display *= ?(.*) ?)?(\n? *\| ?coords_type *= ?(.*) ?)?(\n? *\| ?coords *= ?.*)?
(There's a space at the beginning.)
Replace:
| coords = {{subst:Infobox coord/sandbox | lat_d = $1 | lat_m = $3 | lat_s = $5 | lat_NS = $7 | long_d = $8 | long_m = $10 | long_s = $12 | long_EW = $14 | region = $16 | dim = $18 | scale = $20 | source = $22 | format = $24 | display = $26 | type = $28 }}
Thanks, Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 15:22, 31 August 2016 (UTC)
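As a sanity check on the ~2,000 figure before any run, the affected pages can be listed by intersecting the infobox's transclusions with the tracking category; a sketch, assuming pywikibot:

```python
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
cat = pywikibot.Category(site, 'Category:Pages using deprecated coordinates format')
deprecated = {p.title() for p in cat.articles(namespaces=0)}

tpl = pywikibot.Page(site, 'Template:Infobox park')
for page in tpl.getReferences(only_template_inclusion=True, namespaces=0):
    if page.title() in deprecated:
        print(page.title())   # candidate for the find/replace above
```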
- (Pinging Mandruss and Jonesey95. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 15:23, 31 August 2016 (UTC))
- @Jc86035: Are you sure about |type=$28? That parameter is not deprecated in Infobox park. ―Mandruss ☎ 16:25, 31 August 2016 (UTC)
- Are there sample edits that show a few articles in which this regex replacement has already been done? – Jonesey95 (talk) 18:21, 31 August 2016 (UTC)
- @Mandruss and Jonesey95: The |type= is the parameter in {{Infobox coord/sandbox}} (substituted to create {{Coord}}); the |coords_type= parameter of Infobox park is put into it. I've done the replacement on 11 infoboxes (example, example), but without the rounding for latitude and longitude (which I have yet to test properly). Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 01:31, 1 September 2016 (UTC)
- @Jc86035: More things I don't understand. What is the rounding you refer to? Are you altering coordinates precision? And in your first example you are switching from signed decimal to unsigned, is that your intent? ―Mandruss ☎ 01:51, 1 September 2016 (UTC)
- @Mandruss: The precision in many coordinates – 7 digits – is rather high for parks, which are generally wider than 10 centimetres. Because the input has always been put through {{Infobox coord}} (I'm just substituting a variation with comments and empty parameters removed), there aren't any visual changes. I used Infobox coord as a wrapper because I didn't want to break anything in current uses. —Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 02:04, 1 September 2016 (UTC)
- Rounding improved to keep zeroes on at the end. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 02:14, 1 September 2016 (UTC)
- Very not happy with bot decreasing precision. Mill Ends Park --Tagishsimon (talk) 02:17, 1 September 2016 (UTC)
- @Tagishsimon: Well we could always take |area= into account, but the vast majority of parks don't need that level of precision. I'll build it in at some point. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 02:21, 1 September 2016 (UTC)
- Also, that one doesn't need conversion since it already uses |coords=. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 02:23, 1 September 2016 (UTC)
- @Jc86035: WP:COORDPREC suggests 5 d.p. for objects between about 0–37° latitude and about 8–75 m. If we're going to blindly reduce precision, I don't think we should go fewer than 5 d.p. for parks. I assume we're never going to increase precision. If you at some point take area into account, the only reasonable object size for this purpose would be the sqrt of area. Object size is always one-dimensional, which is why COORDPREC states it as m and km, not m² and km². ―Mandruss ☎ 02:57, 1 September 2016 (UTC)
@Mandruss, Jonesey95, and Tagishsimon: Rounding removed; probably wouldn't work in retrospect. I've already tested this configuration, so it should work. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 10:28, 4 September 2016 (UTC)
- Very good; thanks. We should probably do an exercise on precision sometime, but sensible to keep things as simple as they can as we progress the use of coord in infoboxes. --Tagishsimon (talk) 11:40, 4 September 2016 (UTC)
@Jc86035: I can write something up for this. Using an AWB custom module will be more flexible than using the regex above. Since the template conversions will be similar, I would like to file one BRFA for all of the templates that need to be converted. For each infobox, I'll just need a map of the parameters into {{subst:Infobox coord/sandbox}} if they differ from the above. (I don't need them all now, just when the template is ready.) — JJMC89 (T·C) 10:21, 5 September 2016 (UTC)
- @JJMC89: I'd prefer doing it in batches since this is likely to take at the very least four months (if no one decides to help), but I don't really mind doing it in one go. We also probably need to make another wrapper based on {{Geobox coor}}, since many infoboxes use that. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 10:28, 5 September 2016 (UTC)
- @Jc86035: That will be fine. One BRFA doesn't mean that they all need to be done at once. It just means that I will have approval to run the bot for all of them. Each infobox can then be processed once the conversion is done. — JJMC89 (T·C) 10:33, 5 September 2016 (UTC)
- @Jc86035, Jonesey95, and Mandruss: BRFA filed — JJMC89 (T·C) 22:45, 5 September 2016 (UTC)
Draft space redirects
An adminbot should create a fully protected redirect from Draft:A to A for each article A (including disambiguation pages). If Draft:A already exists, then there are three cases to consider.
- Draft:A is not a redirect. In this case, the adminbot will ignore it.
- Draft:A already redirects to A. In this case, the adminbot will fully protect it.
- Draft:A is a redirect to some target other than A. In this case, the adminbot will fully protect and retarget it to A.
63.251.215.25 (talk) 17:05, 2 September 2016 (UTC)
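Purely to make the three cases concrete (this would need an adminbot BRFA and, per the replies below, much wider discussion first), a sketch assuming pywikibot; the protection call requires an admin account:

```python
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def handle(article):
    draft = pywikibot.Page(site, 'Draft:' + article.title())
    if draft.exists() and not draft.isRedirectPage():
        return                                             # case 1: ignore
    if not draft.exists() or draft.getRedirectTarget() != article:
        draft.text = '#REDIRECT [[%s]]' % article.title()  # create, or case 3
        draft.save(summary='Pointing draft-space redirect at the article')
    # cases 2 and 3: fully protect the redirect
    draft.protect(reason='Protected draft-space redirect',
                  protections={'edit': 'sysop', 'move': 'sysop'})
```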
- What would the benefit of this bot be? Enterprisey (talk!) 17:31, 2 September 2016 (UTC)
- First off, I'm not the OP. I didn't edit while logged out. I'm also not a bot operator; I'm just watching this page because of my request above. Anyway, I personally wouldn't feel safe with an adminbot running around, especially if it were to malfunction. I'd feel much safer if the bot tagged a redirect and an admin could then see and fully protect it. I'm also not sure why a redirect from a draft would need to be fully protected, other than because of vandalism and edit-warring, and WP:AIV and WP:EWN already take care of that. And they don't preemptively protect pages from vandalism and edit-warring; they only do it if it's in progress. -- Gestrid (talk) 18:47, 2 September 2016 (UTC)
- Agree with Enterprisey. Why is this mass creation of millions of redirects helpful? Pppery (talk) 20:09, 2 September 2016 (UTC)
- Needs wider discussion. Clearly controversial. ~ Rob13Talk 19:19, 30 October 2016 (UTC)
Remove DEFAULTSORT keys that are no longer needed
Now that English Wikipedia is using UCA collation for categories (phabricator:T136150), there are a large number of DEFAULTSORT keys that are no longer needed. For example, it is no longer necessary to have DEFAULTSORT keys for titles that begin with diacritics, like Über or Łódź. (Those will automatically sort under U and L now.) Someone should write a bot to remove a lot of these unnecessary DEFAULTSORT keys (for example, when the title is the same as the DEFAULTSORT key except for diacritics). Kaldari (talk) 21:40, 6 September 2016 (UTC)
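The "same except for diacritics" test is simple to express; a sketch of the core check in plain Python (Ł is special-cased because it has no Unicode decomposition):

```python
import re
import unicodedata

def strip_diacritics(s):
    """'Über' -> 'Uber', 'Łódź' -> 'Lodz'."""
    s = s.replace('Ł', 'L').replace('ł', 'l')
    nfkd = unicodedata.normalize('NFKD', s)
    return ''.join(c for c in nfkd if not unicodedata.combining(c))

def defaultsort_redundant(title, wikitext):
    """True if the DEFAULTSORT key equals the title up to diacritics."""
    m = re.search(r'\{\{DEFAULTSORT:\s*([^}]*?)\s*\}\}', wikitext)
    return bool(m) and m.group(1) == strip_diacritics(title)

print(defaultsort_redundant('Łódź', '{{DEFAULTSORT:Lodz}}'))  # True
```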
- Not really. The sort keys are a useful part of the product. They also show that the matter has been attended to, and discourage people from making up incorrect sort keys. All the best: Rich Farmbrough, 21:29, 27 September 2016 (UTC).
- Needs wider discussion. In any event, this is likely to be controversial, as the sort keys are not doing active harm. This would need consensus. ~ Rob13Talk 19:18, 30 October 2016 (UTC)
DYK talk tag
Hi, I was wondering if it is possible to make a bot to update old DYK tags on the talk pages of articles that have appeared on DYK. At present most of the older tags use the old "hits check tool". That tool is dead, so I think it would be beneficial if a bot could change all DYK tags on talk pages to use the new and improved hit-counting tool. --BabbaQ (talk) 16:33, 10 September 2016 (UTC)
- Are there example talk pages showing the old (broken) and new (working) tools? -- GreenC 16:41, 10 September 2016 (UTC)
- Just my luck: today the old tool is working, but only for the page hits of that particular day; as soon as you try to search for anything or try other days, it gives an internal failure. It has not worked properly since January and is effectively not in use anymore. Talk:Elsa Collin shows the old tool, and Talk:Syster Sol shows the new and improved tool for hits. The old one will soon go into complete failure. In my opinion it would be wise to ask a bot to update the old DYK talk tags with the new tool. We are talking about several thousand pages that have the old tool. --BabbaQ (talk) 22:53, 10 September 2016 (UTC)
- In Talk:Syster Sol with a DYK of September 10 2016, the URL to tools.wmflabs.org/pageviews includes the date range 8/31/2016 -> 9/20/2016 .. that's 21 days (3 weeks) with the end date 10 days after the DYK, and the start date 11 days before the DYK. This will require more than an AWB replace, but a script to extract the DYK date and do date calculations. I know how to do it, but have other bot work before I commit, will keep it in mind. If anyone else wants to do it please go for it. -- GreenC 23:45, 10 September 2016 (UTC)
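The date arithmetic is the easy part; a sketch of building the link for the window described above (the query parameters are an assumption based on what the newer tags link to):

```python
from datetime import date, timedelta

def pageviews_link(article, dyk_date):
    """Link to the ~3-week pageviews window around the DYK date."""
    start = (dyk_date - timedelta(days=10)).isoformat()
    end = (dyk_date + timedelta(days=10)).isoformat()
    return ('https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org'
            '&pages={}&start={}&end={}'.format(
                article.replace(' ', '_'), start, end))

print(pageviews_link('Syster Sol', date(2016, 9, 10)))
# ...&pages=Syster_Sol&start=2016-08-31&end=2016-09-20
```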
I don't see anything to do in the example. The DYK boxes in Talk:Elsa Collin and Talk:Syster Sol don't specify any url or tool but just call {{DYK talk}} with the date the article was in DYK. Talk:Elsa Collin says:
{{DYK talk|18 July|2014|entry= ... that '''[[Elsa Collin]]''' ''(pictured)'' was the first woman at any Swedish university to be part of a student [[Spex (theatre)|spex show]]?}}
It produces:
A fact from Bot requests appeared on Wikipedia's Main Page in the Did you know column on 18 July 2014 (check views). The text of the entry was as follows:
Talk:Syster Sol says:
{{DYK talk|10 September|2016|entry= ... that singer '''[[Syster Sol]]''' ''(pictured)'' won the award for Best Reggae/Dancehall at the 2014 [[Kingsizegala]]?|nompage=Template:Did you know nominations/Syster Sol}}
It produces:
A fact from Bot requests appeared on Wikipedia's Main Page in the Did you know column on 10 September 2016 (check views). The text of the entry was as follows:
{{DYK talk}} uses the date to choose which tool to link on "check views". Both links currently work for me. https://tools.wmflabs.org/pageviews doesn't allow dates before 1 July 2015 so http://stats.grok.se is chosen for 18 July 2014. No tool is linked for dates before 10 December 2007 where the data at http://stats.grok.se starts. If the site dies completely then it can just be removed from {{DYK talk}}. @BabbaQ: Can you give an example page where an edit should be made to the page? Please check the wikitext of the page to see whether there is actually something to change. PrimeHunter (talk) 00:45, 11 September 2016 (UTC)
- BabbaQ, is this still a task you're interested in having a bot do? Enterprisey (talk!) 18:26, 29 October 2016 (UTC)
- Enterprisey. Yes. Mostly because it would be easier for anyone wanting to see more data about a DYK from the article's talk page. Since the old DYK talk-page template includes the old stats tool, it would be good if a bot could update all the old DYKs so the new tool is available for every separate DYK. I am not sure if I can be more specific, or whether it is possible to make it happen. BabbaQ (talk) 19:43, 1 November 2016 (UTC)
- [2], just to give an example, here is a link to the DYK stats for an old DYK. It goes to the old tool, which then fails with an internal server error. I think the project would benefit from the tool being updated from the old one to the new one in every DYK template on article talk pages. BabbaQ (talk) 19:48, 1 November 2016 (UTC)
- Enterprisey. Yes. Mostly because it would be easier for anyone wanting to see more data from the DYK at the articles talk pages. And since the old template for DYK for the talk pages does include the old Stats tool it would be good if a bot could update all the old DYKs so the new tool is available at every separate DYK. I am not sure if I can be more specific, and if it is possible to make it happen.BabbaQ (talk) 19:43, 1 November 2016 (UTC)
Missing WP:lead detection
I would like to see a bot that would flag up articles probably in need of a lead - articles with either (a) no text between the header templates and the first section header, or (b) no section headers at all and over say 10kB of text. The bot would place the articles in Category:Pages missing lead section, and possibly also tag them with {{lead missing}}: Noyster (talk), 13:22, 19 September 2016 (UTC)
- Sounds like a good idea to me, at least in case (a). I think it should tag them with the template (the template adds them to the category). At least one theoretical objection comes to my mind: it's technically possible to template the entire lead text from elsewhere, making it appear as a header template in wikitext but as a verbose lead in the actual text (in practice I've never seen this and it doesn't sound like a smart thing to do anyhow). – Finnusertop (talk ⋅ contribs) 21:59, 23 September 2016 (UTC)
- I thought that this had resulted in some script, but apparently not. --Edgars2007 (talk/contribs) 07:39, 25 September 2016 (UTC)
- Thanks Edgars2007 for linking to that discussion from last year. The main participants there Hazard-SJ, Casliber, Finnusertop, and Nyttend may have a view. If "tag-bombing" is objectionable then it may be less controversial to just add the articles to the category, so anyone wanting to supply missing leads can find material to choose from. Other topics to decide upon are minimum article size, and whether to include list articles: Noyster (talk), 15:44, 25 September 2016 (UTC)
- I'd be worried about this tagging a huge number of articles - I'd say excluding lists and maybe doing a trial run with sizable articles only (?6kb of prose?) might be a start, and see what comes up...? Cas Liber (talk · contribs) 18:22, 25 September 2016 (UTC)
- As I noted in that previous discussion, lists need to be carefully excluded. We have absolutely no business listening to anyone who says It's just that this is one of those things that has never been enforced much and the community has developed bad practices — aside from provisions required by WMF, community practice is the basis for all our policies and guidelines, and MOS must bow to community practice; people who insist on imposing the will of a few MOS editors on the community need to be shown the door. On the technical aspects of this proposal, I'm strongly opposed to having a bot add any visible templates; this is a CONTEXTBOT situation, and we shouldn't run the risk of messing up some perfectly fine articles because the bot's algorithm mistakenly thought they needed fuller intros. However, a hidden category would be fine, simply because it won't be visible to readers and won't impact anything; humans could then go through the bot-added category and make appropriate changes, including adding {{lead missing}} if applicable. Nyttend (talk) 21:31, 25 September 2016 (UTC)
- OK thanks commenters. Revised proposal: Bot to detect and categorise articles NOT having "List" as first word of the title AND either (a) no text between the header templates and the first section header, or (b) no section headers at all and over 6kB of text. The bot would place the articles in Category:Pages missing lead section: Noyster (talk), 18:43, 7 October 2016 (UTC)
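A sketch of the revised rule over raw wikitext (plain Python; the template-stripping is deliberately crude, and a real bot would want something sturdier):

```python
import re

FIRST_HEADING = re.compile(r'^==.*==\s*$', re.M)

def missing_lead(title, wikitext):
    """Skip 'List ...' titles; flag pages with no prose before the first
    heading, or with no headings at all in over 6 kB of text."""
    if title.startswith('List'):
        return False
    m = FIRST_HEADING.search(wikitext)
    if m is None:
        return len(wikitext) > 6 * 1024
    intro = wikitext[:m.start()]
    # Crudely drop maintenance templates and images; prose should remain.
    intro = re.sub(r'\{\{.*?\}\}|\[\[(?:File|Image):[^\n]*\]\]', '',
                   intro, flags=re.S)
    return not intro.strip()
```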
Bot that can remove unnecessary code
- User:Italia2006 and I have been discussing the removal of the unnecessary code present in the footballbox template for football (soccer) matches. User:SuperJew has also shown interest in this removal. The |report= parameter usually contains [http://www.goal.com/en/match/italy-vs-romania/1042828/preview Report], but just the bare link, without the brackets and the "Report" label at the end, produces the same visual appearance. Also, the entire |stack= parameter has been phased out, as it is not needed for the footballbox: it gives the same appearance with or without. Would it be possible for a bot to remove this unnecessary code?
- Example of the proposed change for all footballboxes. Footballbox with unnecessary code:
17 November 2010 International friendly | Italy | 1–1 | Romania | Klagenfurt, Austria |
20:30 CEST (UTC+02:00) | Marica 82' (o.g.) | Report | Marica 34' | Stadium: Wörthersee Stadion Attendance: 14,000 Referee: Thomas Einwaller (Austria) |
- Footballbox showing the same appearance as the first, minus the unnecessary code
17 November 2010 International friendly | Italy | 1–1 | Romania | Klagenfurt, Austria |
20:30 CEST (UTC+02:00) | Marica 82' (o.g.) | Report | Marica 34' | Stadium: Wörthersee Stadion Attendance: 14,000 Referee: Thomas Einwaller (Austria) |
Thanks. Vaselineeeeeeee★★★ 02:15, 23 September 2016 (UTC)
- The |report= change does not appear to work correctly if a reference or similar follows the link. Keith D (talk) 13:16, 23 September 2016 (UTC)
- @Keith D: Sorry, I'm not sure I understand what you mean? Can you provide an example? Vaselineeeeeeee★★★ 13:46, 23 September 2016 (UTC)
- @Vaselineeeeeeee: Generally, we don't make edits which are purely cosmetic and only change the wikitext of a page, not the page itself. See WP:COSMETICBOT. Is there any reason in particular you think this is necessary? Omni Flames (talk) 13:56, 23 September 2016 (UTC)
- @Omni Flames: I see. Although the appearance is the same to the reader, it makes things easier for us editors, especially those who edit football match results, for consistency within the project. It also makes things more efficient, as there is less unnecessary updating to do on the parameters in question. Maybe that's not a good enough reason, I don't know. @Italia2006: @SuperJew: anything to add? Vaselineeeeeeee★★★ 14:18, 23 September 2016 (UTC)
- @Vaselineeeeeeee: Regarding the report, I wouldn't recommend using a bot, since the bare link only works in cases of a single report. In cases such as 2018 FIFA World Cup qualification – AFC Third Round, which lists for each match both the FIFA report and the AFC report, it would have to stay manual, as it is now.
- The stack parameter was deprecated because it was getting rather ridiculous in appearance, and editors (especially new editors or IPs) who used it often used it wrong. To change it now would, I suppose, count as cosmetic, but as Vaselineeeeeeee said, it makes things easier for editors, especially new ones or IPs who are unfamiliar with it. I have often seen bots make changes such as switching between <br>, <br/> and <br /> (though I don't remember in what order). Isn't that a cosmetic change? --SuperJew (talk) 14:42, 23 September 2016 (UTC)
- Usually those cosmetic changes are done along side other, non-cosmetic changes. However, I did write this a while ago, but never really bothered to propose it to people. It's still in draft form mind you. Might be time to take a look at it again. Headbomb {talk / contribs / physics / books} 15:19, 23 September 2016 (UTC)
- Example as requested
17 November 2010 International friendly | Italy | 1–1 | Romania | Klagenfurt, Austria |
20:30 CEST (UTC+02:00) | Marica 82' (o.g.) | Report[1] | Marica 34' | Stadium: Wörthersee Stadion Attendance: 14,000 Referee: Thomas Einwaller (Austria) |
- Footballbox showing the same appearance as the first, minus the unnecessary code
17 November 2010 International friendly | Italy | 1–1 | Romania | Klagenfurt, Austria |
20:30 CEST (UTC+02:00) | Marica 82' (o.g.) | http://www.goal.com/en/match/italy-vs-romania/1042828/preview [2] | Marica 34' | Stadium: Wörthersee Stadion Attendance: 14,000 Referee: Thomas Einwaller (Austria) |
- Keith D (talk) 17:56, 23 September 2016 (UTC)
- @Keith D: When we update footballbox results, we never add a reference using ref tags in the report parameter as the link to the match report is the reference for the match results. Thus, this would not be an issue. But as User:SuperJew pointed out, we may have problems when encountering a footballbox with two reports in one template. The stack parameter would still be helpful though. Vaselineeeeeeee★★★ 19:53, 23 September 2016 (UTC)
- I always add a reference to give full details of the report, as I was advised, or else you end up with a bare URL which should not be the case as per WP:ROT. Keith D (talk) 21:11, 23 September 2016 (UTC)
- @Keith D: Odd. I've never, ever seen that on any national football team results pages or club season articles... Vaselineeeeeeee★★★ 21:44, 23 September 2016 (UTC)
Any news on this bot? Could we get it just for the stack parameter, as a couple of us have pointed out fair reasons why it should be removed? Thanks. Vaselineeeeeeee★★★ 11:29, 7 October 2016 (UTC)
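For the |stack= half at least, the removal is mechanical. A sketch assuming mwparserfromhell, which copes with nested templates better than a regex would:

```python
import mwparserfromhell

def drop_stack(wikitext):
    """Strip the deprecated |stack= parameter from footballbox templates."""
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates():
        name = str(tpl.name).strip().lower()
        if name.startswith('footballbox') and tpl.has('stack'):
            tpl.remove('stack')
    return str(code)
```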
Match Bandcamp links to articles
Please can someone do this:
For all the results in [3] which have the format http[s]://[www.]XXX.bandcamp.com, match the ID ("XXX") to the corresponding article title's PAGENAMEBASE.
Example matches include:
- http://20minuteloop.bandcamp.com = 20minuteloop -> 20 Minute Loop
- http://aaronkent.bandcamp.com/album/winter-coats-summer-shorts = aaronkent -> Aaron Kent
- http://anagram.bandcamp.com = anagram -> Anagram (band)
- http://andremarques.bandcamp.com/ = andremarques -> André Marques (filmmaker)
- http://www.alvinpurple.bandcamp.com = alvinpurple -> Alvin Purple (band)
Matches may be individual people, bands, or record companies.
These are not matches:
- http://bennytipene.bandcamp.com/album/room-demo-live-complete = bennytipene != Benny Tipene discography
- http://beastwars.bandcamp.com/ = beastwars != Beastwars (album)
- http://battlecircus.bandcamp.com/album/battle-circus = battlecircus != File:Battle Circus (album).jpg
Then, discard any duplicates, and for each match, fetch the Wikidata ID for the article.
If I could have the results in a spreadsheet, CSV or Wiki table, with four columns (URL, ID, article name, Wikidata ID), that would be great.
I have proposed a corresponding property on Wikidata, and will upload the values there if and when it is created. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:49, 26 September 2016 (UTC)
- The requirement for only people, bands, or record companies is tricky. I guess with people you could look for certain categories like "YYYY births" or "living persons", though that's not precise. Are there similar universal categories for bands and record companies, or other methods, like common infoboxes, that would identify an article as likely being a band or record company? -- GreenC 21:39, 26 September 2016 (UTC)
- Thanks; I wouldn't go about it that way (if anyone wants to, then Wikidata's "instance of" would be the way to go), but by eliminating negative matches such as pages with "discography" "(album)" or "(song)" in the title; or files. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:44, 27 September 2016 (UTC)
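Along those lines, the matching and exclusion logic is small enough to sketch in plain Python; the negative-match list here is illustrative, not complete:

```python
import re
import unicodedata

BANDCAMP = re.compile(r'https?://(?:www\.)?([a-z0-9-]+)\.bandcamp\.com', re.I)
NEGATIVE = re.compile(r'discography|\((?:album|song|EP)\)', re.I)

def fold(title):
    """'André Marques (filmmaker)' -> 'andremarques'."""
    s = re.sub(r'\s*\([^)]*\)$', '', title)        # drop the disambiguator
    s = unicodedata.normalize('NFKD', s)
    s = ''.join(c for c in s if not unicodedata.combining(c))
    return re.sub(r'[^a-z0-9]', '', s.lower())

def is_match(url, title):
    m = BANDCAMP.match(url)
    if not m or title.startswith('File:') or NEGATIVE.search(title):
        return False
    return m.group(1).lower().replace('-', '') == fold(title)

print(is_match('http://20minuteloop.bandcamp.com', '20 Minute Loop'))   # True
print(is_match('http://beastwars.bandcamp.com/', 'Beastwars (album)'))  # False
```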
PR bot
This bot would likely be quite simple: basically it would act like Legobot does for RfCs. If you have signed up as a peer review volunteer, you would get notices of new PRs in whatever areas you choose; you would also set how many you are willing to receive per month. Iazyges Consermonor Opus meum 03:50, 29 September 2016 (UTC)
GA/FA bot
Acting much the same as my previous proposal: basically you would be able to sign up, giving both a maximum number of notices to receive per month and a subject area in which to receive them, again much like Legobot. Iazyges Consermonor Opus meum 03:52, 29 September 2016 (UTC)
Hatnote templates
Could a bot potentially modify articles and their sections which begin with indents and italicized text (i.e. ^\:+''([^\n'][^\n]+)''\n) to use {{Hatnote}} (or one of the more specific hatnote templates, if the article's message matches)? Beginning articles like that without the hatnote template tends to mess up Hovercards, which are currently a Beta feature. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 11:38, 29 September 2016 (UTC)
- Nihiltres has been doing hatnote cleanup of late. Maybe he's already doing this? --Izno (talk) 11:50, 29 September 2016 (UTC)
- I've done a fair amount of cleanup, but I'm only one person. I mostly use the search functionality to pick out obvious cases with regex by hand. Here's a list of handy searches I've used, copied from my sandbox:
- Special:Search/insource:"for other uses" -insource:/for other uses\./ -hastemplate:"hatnote"
- Special:Search/insource:"redirects"+insource:/:\s*''[^\n]*?\s*[Rr]edirects+(here|to+this+page)/
- Special:Search/insource:/:\s*''\s*[Ff]or [^\n]*?,?\s*see/
- Special:Search/insource:"this article is about" insource:/\s*This article is about/ -hastemplate:"about" -hastemplate:"hatnote"
- Special:Search/insource:/:\s*''\s*[Ff]or+[^\n]*?,?\s*see/
- I've avoided doing broad conversion to {{hatnote}} because there's more work than I can handle just with the cases that should use more specific templates like {{about}} or {{redirect}}. {{Hatnote}} itself ought to only be used as a fallback when there aren't any more-specific templates appropriate. Doing broad conversion would be relatively quick, but more work in the long run: most instances would still need conversion to more specific templates from the general one, and it'd be harder to isolate the "real" custom cases from those that were just mindlessly converted from manual markup to {{hatnote}}. Moreover, a bot would need to avoid at least one obvious false positive: proper use of definition list markup and italics together ought not to be converted … probably easy enough to avoid with a check for the previous line starting with a semicolon? Either way I'll encourage people to join me in fixing cases such as the ones listed in the searches mentioned. {{Nihiltres |talk |edits}} 23:09, 29 September 2016 (UTC)
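For whatever fallback conversion does end up happening, the core transform (with the definition-list guard mentioned above) is roughly the following; a live run would also restrict matches to the tops of articles and sections:

```python
import re

ITALIC_LINE = re.compile(r":+''([^'\n][^\n]*)''\s*$")

def convert_hatnotes(wikitext):
    """Convert ':'-indented all-italic lines to {{hatnote}}, skipping
    definition lists (previous line begins with ';')."""
    lines = wikitext.split('\n')
    for i, line in enumerate(lines):
        m = ITALIC_LINE.match(line)
        if m and (i == 0 or not lines[i - 1].startswith(';')):
            lines[i] = '{{hatnote|%s}}' % m.group(1)
    return '\n'.join(lines)
```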
Update NYTtopic IDs
{{NYTtopic}} currently stores values like people/r/susan_e_rice/index.html, for the URL http://topics.nytimes.com/top/reference/timestopics/people/r/susan_e_rice/index.html; these have been updated, and redirect to URLs like http://www.nytimes.com/topic/person/susan-rice; so the value stored should be person/susan-rice.
This applies to other patterns, like:
- organizations/t/taylor_paul_dance_co -> organization/paul-taylor-dance-company
We have around 640 of these IDs in templates, and many more as external wiki links or in citations.
The URL in the template will need to be changed at the same time - whether that's done first or last, there will be a period when the links don't work.
I've made the template capable of calling values from Wikidata; so another alternative would be to remove these values and add the new ones to Wikidata at the same time; otherwise, I'll copy them across later.
Non-templated links should also be updated; or better still converted to use the template. Can someone help, please? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 12:50, 29 September 2016 (UTC)
- @Pigsonthewing: How do you suggest a bot predict the correct pattern? It seems to vary quite a bit. ~ Rob13Talk 19:25, 30 October 2016 (UTC)
- @BU Rob13: A bot should not "predict", but follow the current link and note the URL to which it redirects. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:37, 30 October 2016 (UTC)
- @Pigsonthewing: If the URLs are redirecting properly, what's the point of this change? Is there a reason to believe the redirects will go away? ~ Rob13Talk 19:50, 30 October 2016 (UTC)
- There is no reason to suppose that they will be maintained indefinitely; however, the reason for the change is so that the template can be updated, to accept new values; and for compatibility with Wikidata, from where the values should ultimately be fetched. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:58, 30 October 2016 (UTC)
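A sketch of the follow-the-redirect lookup, assuming the requests library; the new ID is whatever follows /topic/ in the final URL:

```python
import requests

OLD_BASE = 'http://topics.nytimes.com/top/reference/timestopics/'

def new_topic_id(old_id):
    """Follow the NYT redirect; return the new-style ID, or None."""
    r = requests.head(OLD_BASE + old_id, allow_redirects=True, timeout=30)
    if r.status_code == 200 and '/topic/' in r.url:
        return r.url.split('/topic/', 1)[1]
    return None

print(new_topic_id('people/r/susan_e_rice/index.html'))  # 'person/susan-rice'
```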
Fixing hundreds of broken URLs - updating links to different server with the same reference number
I edit hundreds/thousands of wp articles relating to Somerset. Recently (for at least a week) all links to the Somerset Historic Environment Records have been giving a 404. I contacted the web team whose server hosts the database & they said: "as you may be aware what is now the ‘South West Heritage Trust’ is independent from Somerset County Council – some of their systems eg. HER have been residing on our servers since their move – and as part of our internal processes these servers are now being decommissioned. Their main website is now at http://www.swheritage.org.uk/ with the HER available at http://www.somersetheritage.org.uk/ . There are redirects in place on our servers that should be temporarily forwarding visitors to the correct website eg: http://webapp1.somerset.gov.uk/her/details.asp?prn=11000 should be forwarding you to http://www.somersetheritage.org.uk/record/11000 - this appears to be working for me, so unsure why it isn’t working for you".
According to this search there are 1,546 wp articles which include links to the database. Is there any quick/automated way to find and replace all of the links (currently http://webapp1.somerset.gov.uk ) with the new server name ( http://www.somersetheritage.org.uk/record/ ) while keeping the identical record number at the end? A complication is that two different formats of the URL previously worked, i.e. both /record/XXXXX and /her/details.asp?prn=XXXXXX.
I don't really fancy manually going through this many articles & wondered if there was a bot or other technique to achieve this?— Rod talk 14:52, 29 September 2016 (UTC)
- There are approximately 800 links in some 300 articles. The way I would recommend doing this is a find/replace in WP:AWB. --Izno (talk) 15:24, 29 September 2016 (UTC)
- I have never got AWB to work in any sensible way. Would you (or anyone else) be able to do this?— Rod talk 15:29, 29 September 2016 (UTC)
- @Rodw: I'll take a look at this and see what I can do when I get home tonight. It seems like it would be pretty easy to fix with AWB. Omni Flames (talk) 22:16, 29 September 2016 (UTC)
- @Omni Flames: Thanks for all help & advice - the broken links are now fixed.— Rod talk 07:11, 30 September 2016 (UTC)
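For the record, since similar migrations come up regularly: both old formats can be handled by a single regex that keeps the record number, e.g.:

```python
import re

OLD = re.compile(r'https?://webapp1\.somerset\.gov\.uk'
                 r'(?:/her/details\.asp\?prn=|/record/)(\d+)')

def fix_links(wikitext):
    """Point both old HER URL formats at the new server, keeping the number."""
    return OLD.sub(r'http://www.somersetheritage.org.uk/record/\1', wikitext)

print(fix_links('http://webapp1.somerset.gov.uk/her/details.asp?prn=11000'))
# http://www.somersetheritage.org.uk/record/11000
```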
Blocking IPs that only hit the spam blacklist
I've asked this for Procseebot (User:Slakr), but not gotten any response - maybe there are other solutions to this problem (I also am not sure whether it involves open proxies).
The spam blacklist is blocking certain sites which were spammed. One of the problems that we are currently facing is that there are what are likely spambots continuously hitting the spam blacklist. That involves a certain subset of attempted urls. These editors only hit the blacklist (I have yet to see even one of them with any constructive edits on their IP), and they do so continuously (hence my suspicion that these are spambots). It is good to see that the spam blacklist is doing its job; the problem is that sometimes the log becomes unreadable because these IPs hit the blacklist thousands of times, flooding the log (admin-eyes-only example).
When no-one is watching, it sometimes takes a long time before the IPs get blocked. I would therefore request that, when IPs without edits hit the blacklist for the specific set of urls, they get blocked as soon as they hit the blacklist (lengthy blocks - I tend to block for a month at first and a year the second time - with talkpage access withdrawn (see Special:Log/spamblacklist/175.44.6.189 and Special:Log/spamblacklist/175.44.5.169; admin-eyes-only - they hit their own talkpages just as happily, and that is not affected by a regular block)), with the talkpages subsequently tagged with {{spamblacklistblock}}. Are there any bots that could take on this task? Thanks. --Dirk Beetstra T C 06:14, 5 October 2016 (UTC)
To put a bit of more context on occurrence, User:Kuru and I (who I know follow the logs) have made 14 of these blocks in the last 24 hours. --Dirk Beetstra T C 10:45, 5 October 2016 (UTC)
- This is a really odd spambot; I think it is just one spammer accounting for about 30% of the hits on the blacklist log. They target a small group of core articles (Bulletin Board System, for example), and then a larger set of what appear to be completely random articles (usually low-traffic or even deleted articles). The links are obvious predatory spam for pharma, clothing, shoes, etc. This occurs daily, and the same bot has been active for at least two years. If blocked, they then often switch to attempting to add links to the IP's talk page. These all just seem to be probes to test the blacklist. Oddly, I can't seem to find any recent instance where they've been successful in avoiding the blacklist, so I don't know what would happen on success. Interesting problem. Kuru (talk) 15:55, 5 October 2016 (UTC)
- Filter 271 is set up to handle most cases of 'success'. It looks likely to be the same bot. The filter's worth a read if you think the articles are random. It hasn't been adjusted for a while but might need some adjustment in the NS3 department. Drop me a line if you want to discuss the filter further. Sorry, can't help with a blocking bot. -- zzuuzz (talk) 20:34, 5 October 2016 (UTC)
- @Zzuuzz: The filter would indeed catch those that pass the blacklist, I'll have a look through the results whether there is anything there related to these spammers. Nonetheless, the ones that keep hitting the blacklist should be pro-actively blocked, preferably on one of the first attempts. I tried to catch them with a separate filter, but the filter only triggers after the blacklist, so no hits there. --Dirk Beetstra T C 05:38, 6 October 2016 (UTC)
- That's a really interesting read; certainly the same spammer in some cases. Will have to spend some time digging through there to analyze the pattern. Thanks! Kuru (talk) 17:12, 6 October 2016 (UTC)
Ping. I've blocked 17 IPs this morning (count revised up from 10, then 13), which are responsible for a massive chunk (75%) of the total attempts to add blacklisted links. Can someone please pick this up? --Dirk Beetstra T C 03:38, 17 October 2016 (UTC)
- FWIW, I support someone making this bot. It'll need a BRFA, but I'm willing to oversee that. Headbomb {talk / contribs / physics / books} 14:08, 17 October 2016 (UTC)
please. --Dirk Beetstra T C 05:28, 26 October 2016 (UTC)
- How frequently would you want the bot checking for new hits? How do you suggest the bot know which subset of links are worthy of bot-blocking? Anomie⚔ 18:26, 26 October 2016 (UTC)
- @Anomie: 'Constantly' - these are about 200 attempts in one hour. It does not make sense to have an editor running around for 10 minutes and have 34 hits in the list before blocking (it would still flood); I would suggest that we attempt to get the IP blocked on the second hit at the latest. For that, I would suggest a quick poll of the blacklist every 1-2 minutes (last 10-20 entries or so).
- Regarding the subset, I'd strongly recommend that the bot maintains a blacklist akin to User:XLinkBot/RevertList where regexes can be inserted. The subset of urls is somewhat limited, and if new ones come up (which happens every now and then), the specific links, or a wider regex, can be added (e.g. for the url shorteners, the link needs to be specific, because not every url-shortener added is this spammer; for the cialis and ugg-shoes stuff the filter can be wider). (I do note that the IPs also have a strong tendency to hit pages within the pattern '*board*' and their own talkpages, but that may not be selective enough to filter on). --Dirk Beetstra T C 10:34, 27 October 2016 (UTC)
- I just went through my blocking log, and I see that I block up to 27 IPs A DAY (98 in <10 days; knowing that User:Kuru and User:Billinghurst also block these IPs). Still, the editor here manages to get almost 200 hits .. --Dirk Beetstra T C 10:43, 27 October 2016 (UTC)
- If you'd want to narrow it down, one could consider having two regex-based blacklists, one for links, one for typical pagenames - if an editor hits twice with a blacklisted link attempt on a blacklisted page, then block. And I have no problem with the bot working incrementally - 3 hours; 31 hours; 1 week; 1 month; 1 year (the IPs do tend to return). --Dirk Beetstra T C 11:31, 27 October 2016 (UTC)
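For whoever picks this up, a rough sketch of the polling loop described above (pywikibot, assuming an admin account; the URL patterns, poll interval and escalation ladder are placeholders, and it is an assumption that the spam blacklist log is queryable through the standard log-events API and exposes the hit URL in its parameters):

import re
import time
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
site.login()

# Placeholder patterns; in practice these would live on an on-wiki page
# (akin to User:XLinkBot/RevertList) so admins can update them live.
URL_PATTERNS = [re.compile(p) for p in (r'cialis', r'ugg')]
ESCALATION = ['3 hours', '31 hours', '1 week', '1 month', '1 year']

processed = set()   # log entry ids already handled
hits = {}           # IP -> number of matching blacklist hits seen

while True:
    # Poll the last ~20 entries of the (admin-only) spam blacklist log.
    for entry in site.logevents(logtype='spamblacklist', total=20):
        if entry.logid() in processed:
            continue
        processed.add(entry.logid())
        url = entry.params.get('url', '')   # assumption: the hit URL is in the log params
        if not any(p.search(url) for p in URL_PATTERNS):
            continue
        ip = entry.user()
        hits[ip] = hits.get(ip, 0) + 1
        if hits[ip] >= 2:                   # block on the second hit at the latest
            expiry = ESCALATION[min(hits[ip] - 2, len(ESCALATION) - 1)]
            site.blockuser(pywikibot.User(site, ip), expiry=expiry,
                           reason='Spambot repeatedly hitting the spam blacklist',
                           autoblock=False, allowusertalk=False)
    time.sleep(60)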
- @Beetstra: Please fill in User:AnomieBOT III/Spambot URI list. Anomie⚔ 13:23, 27 October 2016 (UTC)
- @Anomie: I have filled in the current domains. I will work backward to fill in some more. Thanks, lets see what happens. --Dirk Beetstra T C 13:47, 27 October 2016 (UTC)
Comment: The IP spam hits are the same cross-wiki, and I am whacking moles at Commons (most), Meta, enWS and MediaWiki; other sites unknown as I don't have rights there. The real primary issue is that the spambots are getting through our captcha defences to edit in the first place. Then we have the (fortunate?) situation that we have blacklisted these addresses and are able to identify the problem addresses. Some of the penetrating spambots are on static IPs, and some are in IP ranges, mostly Russian.
As this is a very specific and limited subset of the blacklist urls, we could also consider the blocking capability of filters themselves. It should be possible to utilise the test and challenge of an abuse filter to warn and then disallow an IP edit, or variation, and then block based on subsequent hits. Plenty of means to stop false positives. — billinghurst sDrewth 12:53, 27 October 2016 (UTC)
- @Billinghurst: That would mean that we would globally de-blacklist the urls and have a global filter, set to block, check for them. That is certainly an option, with two 'problems' - first, it is going to be heavy on the server (regex testing on urls is rather expensive for the AbuseFilter; though there is some pattern in the pages that are hit, it is far from perfect). Second, the meta spam blacklist is also used well beyond Wikimedia. Though it is not our responsibility, I am not sure the outside sites would like us to de-blacklist (so that they would all have to locally blacklist and/or set up their own AbuseFilters). I have entertained this idea on the meta blacklist, but I don't know whether de-blacklisting and using an AbuseFilter will gain much traction.
- I have considered setting up a local abuse filter to catch them, but the abuse filter does not trigger before the blacklist does (Special:AbuseFilter/791). That would only work if I locally whitelisted the urls (which would clear the blacklist hits) and had a local filter to stop the IPs (the blocking action is not available here on en.wikipedia, so I would just have to ignore the hits on the filter, or manually block all IPs tripping it).
- Or we use a bot to block these IPs on first sight. --Dirk Beetstra T C 13:20, 27 October 2016 (UTC)
- That being said, I would be in favour of a global solution to this problem .. --Dirk Beetstra T C 13:24, 27 October 2016 (UTC)
- I wasn't thinking of removing them from the blacklist. I see many examples of blacklisted urls in global logs, so it surprises me that this is the case. With regard to the lack of blocking capability in abuse filters, that is a choice, and maybe it needs review. The whole system is so antiquated and lacking in flexibility. :-/ — billinghurst sDrewth 23:16, 27 October 2016 (UTC)
- @Billinghurst: It would help if you could get filter 791 to work so that it catches the same edits as the blacklist (or a global variety of it) before the blacklist does. These editors don't show up (for these edits) in filter 271 either, though they obviously are there trying to make edits with non-blacklisted links. --Dirk Beetstra T C 04:04, 28 October 2016 (UTC)
Coding... (for the record). Code is mostly done, I believe, although it'll probably need to wait for next Thursday for phab:T149235 to be deployed here before I could start a trial. Anomie⚔ 13:57, 27 October 2016 (UTC)
- Why do you need to wait for the grant, I thought bot III was an adminbot, so it can see the spamblacklistlog? --Dirk Beetstra T C 14:10, 27 October 2016 (UTC)
- One of the pieces of security that OAuth (and BotPasswords) gives in case you want to use some tool with your admin account is that it limits which rights are actually available to the consumer, instead of it automatically getting access to all the rights your account has. The downside is that if there's not a grant for a right, you can't let the consumer use that right. It's easy enough to add grants to the configuration, as you can see in the patches on the linked task, but code deployment can take a little time. Anomie⚔ 20:18, 27 October 2016 (UTC)
- @Anomie: Thanks for the answer, I wasn't aware of that .. not running admin bots does not expose you to that. --Dirk Beetstra T C 04:04, 28 October 2016 (UTC)
BRFA filed Anomie⚔ 22:20, 1 November 2016 (UTC)
Bot for U1 and G6 speedy deletions
I feel like this would take a lot of workload off of administrators. As of the time I am writing this (17:54, 5 October 2016 (UTC)), there are 25 pages and 7 media files tagged for speedy deletion under these criteria. An administrator will have to personally delete each of these pages, even though it would be fairly simple for a bot to do this, as the bot would only have to look at the category of these pages, and check the edit history to make sure that the tag was added by the user. Sometimes, there are dozens of pages in this category, and they all create unnecessary work for admins. I am not an admin, or I would probably code it up myself, but because you have to be an admin in order to run an admin bot, I cannot. Thanks, Gluons12 talk 17:54, 5 October 2016 (UTC).
- U1 would probably be a good idea (wait 10 minutes or more for the user to rethink it?). However, G6 will probably be impossible to implement, as how would a bot know whether a deletion is controversial or not? Dat GuyTalkContribs 18:17, 5 October 2016 (UTC)
- I'd support a U1 bot if we restricted its scope to pages tagged by unblocked users, in their own userspace. Headbomb {talk / contribs / physics / books} 19:30, 5 October 2016 (UTC)
- I'd suggest they'd need to be the only author, otherwise anyone could get any page deleted by simply moving it to their userspace and adding a tag (the same principle applies to G6). This should make it a G7 bot instead. -- zzuuzz (talk) 19:35, 5 October 2016 (UTC)
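For illustration, the sole-author restriction suggested above is cheap to check. A conservative sketch of the U1 case (pywikibot; the category name is an assumption, and an actual adminbot would also need the tagger-is-unblocked check and the deletion call itself):

import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def eligible_for_u1(page):
    # Conservative check: the page is in the tagger's own userspace and the
    # tagger is the only author in the whole history.
    owner = page.title(with_ns=False).split('/')[0]
    tagger = page.latest_revision.user          # whoever made the last edit (the tag)
    authors = set(page.contributors())          # every username in the history
    return (page.namespace() in (2, 3)          # User: / User talk:
            and tagger == owner
            and authors == {owner})

# Category name is an assumption; adjust to wherever {{db-u1}} actually files pages.
cat = pywikibot.Category(site, 'Category:Candidates for speedy deletion by user')
for page in cat.articles(namespaces=[2, 3]):
    if eligible_for_u1(page):
        print('Would delete:', page.title())    # an adminbot would call page.delete() here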
- Some kinds of G6 are probably automatable ("empty dated maintenance categories") but not all. Le Deluge (talk) 22:00, 5 October 2016 (UTC)
- This tends to be suggested reasonably frequently, and typically gets shot down because there aren't backlogs in these categories and admins dealing with these categories haven't expressed any need for bot help. So your first step should be to get admins to agree that such a bot would be needed. Anomie⚔ 18:50, 6 October 2016 (UTC)
- I would very strongly oppose giving a bot the power to delete any article. There are too many chances for it to go wrong. There are almost never more than 1- or 2-day backlogs at Speedy Deletion. Getting admins to delete articles is not one of our problems. All too many of us admins enjoy it. DGG ( talk ) 02:05, 14 October 2016 (UTC)
Tropical Cyclone Bot
For several years, Wikipedians have updated current tropical cyclone information on articles. On occasion, discussions have been brought up regarding the practice, such as user errors in updating, edit conflicts, and more. However, I believe many of these issues could be addressed by creating a bot to automatically update this information. During Hurricane Sandy in 2012, such a bot was actually put through a trial by Legoktm, though the idea was eventually deferred and forgotten. I think it would definitely be a worthy endeavor, and I can confirm that this idea has received support from other WikiProject Tropical cyclone editors as well as myself. Dustin (talk) 04:34, 8 October 2016 (UTC)
G4 XFD deletion discussion locator bot
Would it be possible to build a bot that could home in on CSD-G4 tagged articles and determine whether the alleged link in the template to the XFD in question is actually there? So many times people tag an article as G4 but the recreated article is located under a new article space name, and it's aggravating when on CSD patrol to have to manually look through XFD logs to ensure that the article does in fact have an XFD and therefore does in fact qualify for CSD-G4 deletion. For example, if Example was afd'd, then recreated at Example (Wikipedia article), an alert user would deduce that this was a recreation of Example, but due to the way the G4 template works the article's afd would not show at Example (Wikipedia article) because that wasn't where it was when the afd closed as delete. Under this proposal, a bot programmed to monitor the G4 tags would notice this and automatically update the G4 template to link to the afd at Example, so that the admin arriving at Example (Wikipedia article) would have the proof required to act on the G4 template in a timely manner. Owing to their role in managing the affairs of an estate, I would propose the name for this bot - if it is decided to move forward with writing one - be ExecutorBot. TomStar81 (Talk) 03:16, 11 October 2016 (UTC)
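A sketch of the lookup such an 'ExecutorBot' could do, covering the common case where only a parenthetical disambiguator was added to the recreated title (pywikibot; find_prior_afd is a hypothetical helper, and renames, moves and second-nomination pages are ignored here):

import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def find_prior_afd(title):
    # Try the title itself, then the title with any trailing parenthetical
    # disambiguator stripped.
    candidates = [title, re.sub(r'\s*\([^)]*\)$', '', title)]
    for cand in candidates:
        afd = pywikibot.Page(site, 'Wikipedia:Articles for deletion/' + cand)
        if afd.exists():
            return afd
    return None

afd = find_prior_afd('Example (Wikipedia article)')
if afd:
    print('Prior AfD found:', afd.title())  # the bot would then fill this into the {{db-g4}} tag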
Ecozone moved to Biogeographic realm
The article Ecozone was moved to Biogeographic realm, in a standardisation of biogeographic terminology in WP. Now, the next step is to change some pages that are currently using the term "ecozone" instead of "biogeographic realm". Can a bot do it, please? The pages are:
- Template:Infobox ecoregion (and its articles);
- Category:Ecozones (with its subcategories, except subcategory Category: Ecozones of Canada and article Ecozones of Canada, in which the usage of "ecozone" must be maintained). Zorahia (talk) 01:08, 13 October 2016 (UTC)
- Not a good task for a bot. There's very little hope of gaining consensus for a bot changing article text. There's too many edge cases for this to work well. For instance, it's beyond the capability of a bot to avoid editing articles in these categories which discuss the history of the term, etc. ~ Rob13Talk 19:14, 30 October 2016 (UTC)
Fix redirects (specific task)
Could a bot replace all instances of these wrong redirects (where they are used in articles) with their targets? --XXN, 10:20, 15 October 2016 (UTC)
- Isn't there already a bot that does this? Dat GuyTalkContribs 10:22, 15 October 2016 (UTC)
- Double redirects are fixed on a regular basis, but these are "normal" redirects, so I'm not sure if anyone fixes them. At least, a while ago a user reported that some of these redirects are used (have incoming links). --XXN, 10:34, 15 October 2016 (UTC)
- ... is doing manually... --XXN, 21:01, 18 October 2016 (UTC)
- Done. XXN, 21:04, 19 October 2016 (UTC)
New task
Could a bot replace all instances of these wrong redirects (where they are used in articles) with their targets? Then I'll go to RFD with them. --XXN, 21:04, 19 October 2016 (UTC)
- @XXN: Is this still applicable? Do you believe there are over 500 pages? --Dat GuyTalkContribs 19:16, 26 October 2016 (UTC)
- Yep. Ran a query on the DB: there are 416 unique bad links in 287 unique pages, though there may be more than one unique bad link per page and more than one instance of the same link per page. --XXN, 21:36, 26 October 2016 (UTC)
- Is a bot needed here, or should it be done manually? Pinging Xaosflux since he helped me with my BRFAs. Dat GuyTalkContribs 21:40, 26 October 2016 (UTC)
- @DatGuy: isn't there already a bot that processes RfD deletes? Why would these need to be delinked first - THEN brought to RfD? Just take them to RfD now. — xaosflux Talk 22:58, 26 October 2016 (UTC)
- Hold the RfD first. The thing is, if you delink first, you are pre-empting the outcome of a (potential) RfD, which is an abuse of process. --Redrose64 (talk) 23:24, 26 October 2016 (UTC)
- @Xaosflux and Redrose64: After redirects are tagged for deletion, their redirect function gets broken and they become simple short pages. In a previous RFD discussion some users complained about "who is going to fix all the redlinks that will be created" by deletion of the listed pages. I don't think there is a bot that fixes such redirects in articles once they are deleted at RFD. At this moment it's easier to write a bot to replace such redirects than it will be after they are tagged or deleted. So this is why I came here with this request. --XXN, 09:36, 27 October 2016 (UTC)
--XXN, 17:04, 30 October 2016 (UTC)
- Some redirects were fixed in tens of pages at once by fixing them in templates.
- All done now. --XXN, 21:10, 5 November 2016 (UTC)
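For future requests of this kind, a sketch of the replacement pass such a bot could use (pywikibot; the sample title is hypothetical, first-letter case variations are not handled, and plain links deliberately take the target's spelling since these redirect titles are themselves malformed):

import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def bypass(redirect_title):
    # Replace every [[redirect]] / [[redirect|label]] in articles with its target.
    redir = pywikibot.Page(site, redirect_title)
    if not redir.isRedirectPage():
        return
    target = redir.getRedirectTarget().title()
    pattern = re.compile(r'\[\[%s(\|[^\]]*)?\]\]' % re.escape(redirect_title))
    for page in redir.backlinks(follow_redirects=False, namespaces=[0]):
        new = pattern.sub(lambda m: '[[%s%s]]' % (target, m.group(1) or ''), page.text)
        if new != page.text:
            page.text = new
            page.save(summary='Replacing redirect [[%s]] with its target per WP:BOTREQ' % redirect_title)

bypass('Example (disambiguation')   # hypothetical malformed redirect title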
Add protection templates to recently protected articles
We have bots that remove protection templates from pages (DumbBOT and MusikBot), but we don't have a bot right now that adds protection templates to recently protected articles. Lowercase sigmabot used to do this until it stopped working about two years ago. I generally think it's a good idea to add protection templates to protected articles, so people know (especially if you're logged in and autoconfirmed, because then you would otherwise have no idea a page was semi-protected). —MRD2014 (talk • contribs) 13:06, 18 October 2016 (UTC)
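A sketch of the core loop such a bot would need (pywikibot; the 50-entry window is arbitrary, the 'already tagged' test is crude, and it is an assumption that {{pp-protected}} detects the protection level and expiry by itself):

import pywikibot

site = pywikibot.Site('en', 'wikipedia')

# Scan the most recent protections and tag anything not yet carrying a pp template.
for entry in site.logevents(logtype='protect', total=50):
    if entry.action() != 'protect':          # skip unprotect/modify entries
        continue
    page = entry.page()
    if not page.exists() or page.isRedirectPage():
        continue
    if '{{pp-' in page.text.lower():         # crude "already tagged" check
        continue
    # Assumption: {{pp-protected}} works out the protection level/expiry itself.
    page.text = '{{pp-protected}}\n' + page.text
    page.save(summary='Bot: adding protection template to recently protected page')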
Helping to expand (blacklisted) url-shortened links by suggesting to user
I see many editors trying to add shortened or otherwise redirected urls (typically bit.ly, goo.gl, youtu.be, or google.com/url? ..) to pages, and then failing continuously because they do not expand/replace the link with the proper link (shortening services are routinely globally blacklisted). Some try repeatedly, likely becoming frustrated at their inability to save the page.
I think it would be of great help to editors that when they would hit the blacklist with a shortened url, a bot would pick up and post a message on their talkpage along the lines of "hi, I saw that you tried to add bit.ly/abcd and that you were blocked by the blacklist. URL shorteners are routinely blacklisted and hence cannot be added to any page in Wikipedia. You should therefore use the expanded url 'http://aaa.bc/adsfsdf/index.htm' instead. (sig etc.)" (the bot should take into account that the original link is blacklisted, but also that the target may be blacklisted - so if it fails saving the expanded link with http, it may try again without the http). --Dirk Beetstra T C 12:02, 20 October 2016 (UTC)
- Perhaps an edit filter set to warn would be better? Dat GuyTalkContribs 15:12, 21 October 2016 (UTC)
- @DatGuy: Sorry, that does not work. These links are globally blacklisted as standard, and the blacklist hits before the EditFilter. And in any case, the EditFilter cannot expand the links for the editor; they would get the same message as the spam blacklist is providing - expand your links. --Dirk Beetstra T C 15:51, 22 October 2016 (UTC)
- The user gets both MediaWiki:Spamprotectiontext and MediaWiki:Spamprotectionmatch. The first includes in our customized version:
- @DatGuy: Sorry, that does not work. These links are standard globally blacklisted, and the blacklist hits before the EditFilter. And in any case, the EditFilter cannot expand the links for the editor, they would get the same message as the spam blacklist is providing - expand your links. --Dirk Beetstra T C 15:51, 22 October 2016 (UTC)
- Note that if you used a redirection link or URL shortener (like e.g. 'goo.gl', 't.co', 'youtu.be', 'bit.ly'), you may still be able to save your changes by using the direct, non-shortened link - you generally obtain the non-shortened link by following the link, and copying the contents of the address bar of your web-browser after the page has loaded.
- The second shows the url, e.g.
- The following link has triggered a protection filter: bit.ly/xxx
- MediaWiki:Spamprotectiontext does not know the url but MediaWiki:Spamprotectionmatch gets it as $1. It would be possible to customize the message for some urls, e.g. by testing whether $1 starts with goo.gl, t.co, youtu.be, bit.ly. In such cases the message could display it as a clickable link with instructions. MediaWiki:Spamprotectiontext also has instructions but there the same long text is shown for all urls. PrimeHunter (talk) 23:32, 22 October 2016 (UTC)
Although it is probably still true that, even if the editor got an extensive explanation on their talkpage, this editor (and many others) simply would not read what the message is saying (and I see many of those), with a talkpage message they at least get the message twice.
There is a second side to the request - whereas many of the redirect insertions are in good faith (especially the youtu.be and google.com/url? ones), some of them are bad faith attempts to circumvent the blacklist (Special:Log/spamblacklist/148.251.234.14 is a spammer). It would be great to be able to track these spammers with this trick as well. --Dirk Beetstra T C 03:43, 23 October 2016 (UTC)
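The expansion step itself is straightforward; a sketch of how a bot could resolve the final target before composing the talk-page note (Python with the requests library; rate limiting, re-checking the expanded target against the blacklist, and the message template are left out, and the sample link is hypothetical):

import requests

def expand(short_url, timeout=10):
    # Follow HTTP redirects to the final target; some shorteners refuse HEAD,
    # so fall back to GET (body not downloaded thanks to stream=True).
    try:
        resp = requests.head(short_url, allow_redirects=True, timeout=timeout)
        if resp.status_code >= 400:
            resp = requests.get(short_url, allow_redirects=True, timeout=timeout, stream=True)
        return resp.url
    except requests.RequestException:
        return None

final = expand('http://bit.ly/abcd')   # hypothetical link taken from the blacklist log
if final:
    print('Suggest replacing the shortened link with:', final)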
BSicons
Could we have a bot that
1. creates a daily-updated log of uploads, re-uploads, page moves and edits in BSicons (Commons files with prefix File:BSicon_);
2. makes a list of Commons redirects with prefix File:BSicon_;
3. uses the list (as well as a list of exceptions, probably this Commons category and its children) to edit RDT code (both {{Routemap}} and {{BSrow}}/{{BS-map}}/{{BS-table}}) which uses those redirects, replacing the redirect name with the newer name (for instance, replacing (HUB83) with (HUBe) and (STRl) with (STRfq));
4. goes through Category:Pages using BSsplit instead of BSsrws and replaces \{\{BSsplit\|([^\|]+)\|([^\|]+)\|\1 \2 ([^\|\{\}]+)\}\} with {{BSsrws|$1|$2|$3}}; and
5. creates a list of BSicons with file size over 1 KB.
[Route diagram example omitted; caption: "The example diagram."]
This request is primarily for #2 and #3, since there've been a lot of page moves from confusing icon names recently and CommonsDelinker doesn't work for BSicons because they don't use file syntax. The others would be nice extras, but they're not absolutely necessary if no one wants to work on them. For clarity, an example of #3 would be changing
{{Routemap |map= CONTg\CONTg BHF!~HUB84\BHF!~HUB82 CONTf\CONTf }}
to
{{Routemap |map= CONTg\CONTg BHF!~HUBaq\BHF!~HUBeq CONTf\CONTf }}
(Pinging Useddenim, Lost on Belmont, Sameboat, AlgaeGraphix, Newfraferz87, Redrose64 and YLSS.) Jc86035 (talk) Use {{re|Jc86035}} to reply to me 08:59, 25 October 2016 (UTC)
- Point 1. should be all BSicon files, regardless of filetype, so that those (occasionally uploaded) .png files also get listed. Useddenim (talk) 10:48, 25 October 2016 (UTC)
- Updated request. Thanks. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 11:42, 25 October 2016 (UTC)
To further clarify, the regex for #3 is \n\{\{BS[^\}]+[\|\=]\s*$icon\s*\| for BS-map. I have no idea what it'd be for Routemap, but to the left of the icon ID could be one of "\n" (newline), "! !", "!~" and "\\" (escaped backslash); and to the right could be one of "\n", "!~", "~~", "!@", "__", "!_" and "\\". Jc86035 (talk) Use {{re|Jc86035}} to reply to me 06:21, 26 October 2016 (UTC)
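Putting the delimiter lists above together, a hedged sketch of the #3 replacement inside {{Routemap}} code (Python; untested against real diagrams, and the delimiter sets are taken verbatim from the comment above, with line starts/ends standing in for the newline cases):

import re

def replace_icon(wikicode, old, new):
    # Icon IDs in {{Routemap}} are delimited on the left by a line start, "! !",
    # "!~" or "\", and on the right by a line end, "!~", "~~", "!@", "__", "!_"
    # or "\"; replace only exact, fully delimited IDs.
    left = r'(^|! !|!~|\\)'
    right = r'(?=$|!~|~~|!@|__|!_|\\)'
    pattern = re.compile(left + re.escape(old) + right, flags=re.M)
    return pattern.sub(lambda m: m.group(1) + new, wikicode)

code = '{{Routemap\n|map=\nCONTg\\CONTg\nBHF!~HUB84\\BHF!~HUB82\nCONTf\\CONTf\n}}'
code = replace_icon(code, 'HUB84', 'HUBaq')
code = replace_icon(code, 'HUB82', 'HUBeq')
print(code)   # matches the "for clarity" example above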
Update all refs from sound.westhost.com to sound.whsites.net
This site has changed domains, so there's probably link rot on a lot of audio articles. 71.167.62.21 (talk) 11:50, 31 October 2016 (UTC)
Broken fpf.pt external links (English version)
Apparently, FPF.pt's English version was removed and, as a result, many external links are now broken. The good news is that it is easy to fix them by replacing, for example:
* [http://www.fpf.pt/en/Players/Search-international-players/Player/playerId/931970 National team data]
with
* [http://www.fpf.pt/pt/Jogadores/Pesquisar-Jogadores-Internacionais/Jogador/playerId/931970 National team data] {{pt icon}}
Notice the addition of {{pt icon}}. SLBedit (talk) 22:35, 31 October 2016 (UTC)
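For the bot operator, a sketch of the substitution, assuming every English player URL maps onto the Portuguese path with the same playerId (Python; the second capture keeps whatever link label was there):

import re

old = '* [http://www.fpf.pt/en/Players/Search-international-players/Player/playerId/931970 National team data]'
pattern = re.compile(
    r'\[http://www\.fpf\.pt/en/Players/Search-international-players/Player'
    r'/playerId/(\d+)([^\]]*)\]')
print(pattern.sub(
    r'[http://www.fpf.pt/pt/Jogadores/Pesquisar-Jogadores-Internacionais'
    r'/Jogador/playerId/\1\2] {{pt icon}}',
    old))
# -> * [http://www.fpf.pt/pt/Jogadores/.../playerId/931970 National team data] {{pt icon}}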
No one? This is important because these external links are used as sources for national team appearances and goals in BLP articles. SLBedit (talk) 19:58, 3 November 2016 (UTC)
Fix references to images and other elements as being on the left or right
Per Wikipedia:Manual of Style/Accessibility#Images #6, it is not appropriate to refer in article text to images and elements as being on the "left" or "right" side of the page, since this information is inaccurate for mobile users and irrelevant for visually impaired users. We should have a bot make the following substitutions:
"the <picture|diagram|image|table|box|...> <to|at|on> [the] <left|right>"
→ "the adjacent <picture|diagram|image|table|box|...>"
Thoughts? —swpbT 19:00, 2 November 2016 (UTC)
- Not a good task for a bot. Very much a WP:CONTEXTBOT. Consider, for example, "There are two boxes in this image. The box on the left is red, while the box on the right is blue." Anomie⚔ 20:56, 2 November 2016 (UTC)
- Withdrawn. Will pursue as an AWB task instead. —swpbT 16:48, 3 November 2016 (UTC)
Bot to notify editors when they add a duplicate template parameter
Category:Pages using duplicate arguments in template calls has recently been emptied of the 100,000+ pages that were originally in there, but editors continue to modify articles and inadvertently add duplicate parameters to templates. It would be great to have a bot, similar to ReferenceBot, to notify editors that they have caused a page to be added to that category. ReferenceBot, which notifies editors when they create certain kinds of citation template errors, has been successful in keeping the categories in Category:CS1 errors from overflowing.
Pinging A930913, the operator of ReferenceBot, in case this looks like a fun task. – Jonesey95 (talk) 21:13, 3 November 2016 (UTC)
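For whoever takes it on, a sketch of the detection-and-notify half on the ReferenceBot model (pywikibot; working out which revision actually introduced the duplicate is the hard part and is simplified here to blaming the latest editor, and the message text is a placeholder):

import pywikibot

site = pywikibot.Site('en', 'wikipedia')
cat = pywikibot.Category(site, 'Category:Pages using duplicate arguments in template calls')

notified = set()
for page in cat.articles():
    rev = page.latest_revision            # simplification: assume the newest edit caused it
    if rev.user in notified:
        continue
    talk = pywikibot.Page(site, 'User talk:%s' % rev.user)
    talk.text += ('\n\n== Duplicate template parameter ==\n'
                  'Hello! Your recent edit to [[%s]] appears to have introduced a '
                  'duplicate template parameter (see [[:Category:Pages using '
                  'duplicate arguments in template calls]]). Could you take a look? '
                  '~~~~' % page.title())
    talk.save(summary='Notifying about a duplicate template parameter')
    notified.add(rev.user)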