
Wikipedia:Bot requests/Archive 74


Copy coordinates from lists to articles

Virtually every one of the 3000-ish places listed in the 132 sub-lists of National Register of Historic Places listings in Virginia has an article, and with very few exceptions, both lists and articles have coordinates for every place. But the source database has lots of errors, so I've gradually been going through all the lists and manually correcting the coords. As a result, the lists are a lot more accurate, but because I haven't had time to fix the articles, tons of them (probably over 2000) now have coordinates that differ between article and list. For example, the article about the John Miley Maphis House says that its location is 38°50′20″N 78°35′55″W (38.83889, −78.59861), but the manually corrected coords on the list are 38°50′21″N 78°35′52″W (38.83917, −78.59778). Like most of the affected places, the Maphis House has coords that differ only slightly, but (1) ideally there should be no difference at all, and (2) some places have big differences, and either we should fix everything, or we'll have to have a rather pointless discussion of which errors are too small to fix.

Therefore, I'm looking for someone to write a bot to copy coords from each place's NRHP list to the coordinates section of {{infobox NRHP}} in each place's article. A few points to consider:

  • Some places span county lines (e.g. bridges over border streams), and in many of these cases, each list has separate coordinates to ensure that the marked location is in that list's county. For an extreme example, Skyline Drive, a scenic 105-mile-long road, is in eight counties, and all eight lists have different coordinates. The bot should ignore anything on the duplicates list; this is included in citation #4 of National Register of Historic Places listings in Virginia, but I can supply a raw list to save you the effort of distilling a list of sites to ignore.
  • Some places have no coordinates in either the list or the article (mostly archaeological sites for which location information is restricted), and the bot should ignore those articles.
  • Some places have coordinates only in the list or only in the article's {{Infobox NRHP}} (for a variety of reasons), but not in both. Instead of replacing information with blanks or blanks with information, the bot should log these articles for human review.
  • Some places might not have {{infobox NRHP}}, or in some cases (e.g. Newport News Middle Ground Light) it's embedded in another infobox, and the other infobox has the coordinates. If {{infobox NRHP}} is missing, the bot should log these articles for human review, while embedded-and-coordinates-elsewhere is covered by the previous bullet.
  • I don't know if this is the case in Virginia, but in some states we have a few pages that cover more than one NRHP-listed place (e.g. Zaleski Mound Group in Ohio, which covers three articles); if the bot produced a list of all the pages it edits, a human could go through the list, find any entries with multiple appearances, and check them for fixes.
  • Finally, if a list entry has no article at all, don't bother logging it. We can use WP:NRHPPROGRESS to find what lists have redlinked entries.
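The bullets above amount to a small decision table. A minimal Python sketch of how a bot might classify each list entry (the function name and return labels are hypothetical, not part of any existing bot):

```python
def classify(list_coords, article_coords, has_infobox_nrhp, on_duplicates_list):
    """Decide what to do with one NRHP list entry, following the
    bullets above. Coordinates are (lat, lon) tuples or None.
    Returns 'skip', 'log', 'copy', or 'no-op'."""
    if on_duplicates_list:
        return 'skip'      # multi-county sites: coords differ on purpose
    if list_coords is None and article_coords is None:
        return 'skip'      # e.g. restricted archaeological sites
    if not has_infobox_nrhp:
        return 'log'       # missing infobox: human review
    if (list_coords is None) != (article_coords is None):
        return 'log'       # coords on only one side: human review
    if list_coords != article_coords:
        return 'copy'      # copy the human-checked list coords over
    return 'no-op'
```

The ordering matters: the duplicates list and the both-empty case are checked before anything that could trigger an edit or a log entry.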

No discussion has yet been conducted for this idea; it's just something I've thought of. I've come here basically just to see if someone's willing to try this route, and if someone says "I think I can help", I'll start the discussion at WT:NRHP and be able to say that someone's happy to help us. Of course, I wouldn't ask you actually to do any coding or other work until after consensus is reached at WT:NRHP. Nyttend (talk) 00:55, 16 January 2017 (UTC)

Off-topic discussion
I'm a WikiProject NRHP member and I'd like to support what Nyttend is getting at. I support anyone considering Nyttend's question directly, but want to ask about a variation. Note, it's kind of unfortunate though that the source of coordinates is not identified by WikiProject NRHP editors, neither originally (when the source was probably the NRIS database) nor now. (Marking the source of coordinates, going forward, is under discussion at Wikipedia talk:WikiProject National Register of Historic Places#Coordinates conversions, and should we be footnoting coordinates?) Perhaps what Nyttend is getting at, and more, could be done by a bot which would make a three-way comparison of coordinates in A) individual articles to B) coordinates in NRHP county list-articles to C) coordinates in the downloadable NRIS database. The NRIS database is the original source of most of the coordinates that Nyttend has painstakingly improved upon, for places in Virginia. I believe them that they have gone through Virginia carefully and that wherever they have changed coordinates in the (B) county list-articles they have done that well. In other states it is much more random, and the coordinates might have been improved in an individual article OR in the county list-article. I personally have fixed coordinates in individual articles (A) but not in list-articles (B), working the opposite way from how Nyttend has done. Could a bot be programmed to make a three-way comparison? If A and B are the same as C, then mark them as being sourced from NRIS. If the state is Virginia, and just one out of A and B is different from C, then accept the change at the other place too and mark both A and B as being sourced by Nyttend's evaluation (using {{NRHPcoord}}) with "improvedby=Nyttend" parameter. If both A and B are different from C, then mark them as discrepancies (using template NRHPcoord with some suitable parameter). 
If either A or B already has been marked as improved, then improve the other one and copy the sourcing over. If the (C) NRIS coordinates cannot be found for a given site, then mark something else. I wonder, is it possible for someone to consider running this kind of three-way comparison (and would that be easier/better)? --doncram 02:52, 23 January 2017 (UTC)
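The proposed three-way comparison reduces to a short classifier. A hedged Python sketch of it (the labels and parameter names are illustrative only, not an existing template's vocabulary):

```python
def three_way(a, b, c, in_virginia):
    """Classify one site per the proposed A/B/C comparison.
    a = article coords, b = county-list coords, c = NRIS coords (or None)."""
    if c is None:
        return 'nris-missing'          # mark something else
    if a == c and b == c:
        return 'mark-source-nris'      # both unchanged from NRIS
    if a != c and b != c:
        return 'discrepancy'           # both changed: flag for review
    # exactly one of A/B differs from C
    if in_virginia:
        return 'propagate-improvement' # copy over, mark improvedby=Nyttend
    return 'needs-review'              # other states: improvement is random
```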
No, any three-way comparison is a big distraction. What we need is a bot that will copy human-checked coordinates from lists to articles (with exceptions to be provided by me) and nothing else; we can worry about the other stuff at another time. Nyttend (talk) 15:38, 23 January 2017 (UTC)
What about the coordinates in Virginia lists that were not improved or verified, though? I checked two lists and see that Nyttend changed all 8 sets of coordinates in one, and changed 1 out of 3 sets of coordinates in another. And what about coordinates in individual articles that were improved by another editor? (I don't know how many of these exist, but there will be some within the articles for the 2,995 Virginia NRHP sites.) I think bot editing has to be restricted to cases where the edit will clearly be making an improvement.
A lesser task would be for a bot to mark, using template:NRHPcoord, the specific coordinates in Virginia lists that Nyttend changed recently, if a bot can examine edits and see that the coordinates were changed by Nyttend. That would still be helpful. --doncram 17:03, 25 January 2017 (UTC)
I have confirmed coordinates for every site in the state, aside from a few for which I did not have information, and I logged all of those. Most items on which I changed nothing are items in which the original coordinates were already correct; aside from the items I logged, there's no possibility of the current coordinates being wrong, unless I made a typo or misread a map or something like that. The bot shouldn't worry about whether I changed anything. Nyttend (talk) 20:12, 27 January 2017 (UTC)

Bot for category history merges

Back in the days when the facility to move category pages wasn't available, Cydebot made thousands of cut-and-paste moves to rename categories per CFD discussions. In the process of the renames, a new category would be created under the new name by the bot, with the edit summary indicating that it was "Moved from CATEGORYOLDNAME" and identifying the editors of the old category to account for the attribution. An example is here.

This method of preserving attribution is rather crude and so it is desirable that the complete editing history of the category page be available for attribution. The process of recovering the deleted page histories has since been taken on by Od Mishehu who has performed thousands of history merges.

I suggest that an adminbot go through Cydebot's contribs log, identify the categories that were created by it (i.e., the first edit on the page should be by Cydebot), and

  1. undelete the category mentioned in Cydebot's edit summary,
  2. history-merge it into the new category using Special:MergeHistory, and
  3. delete the left-over redirect under CSD G6.

This bot task is not at all controversial. This is just an effort to fill in missing page histories. Obviously, there would be no cases of any parallel histories encountered - and even if there were, it wouldn't be an issue since Special:MergeHistory cannot be used for merging parallel histories - which is to say that there is no chance of any unintended history mess-up. This should be an easy task for a bot. 103.6.159.72 (talk) 10:52, 18 January 2017 (UTC)

There's one thing that I have overlooked above, though it is again not a problem. In some rare cases, it may occur that after the source page has been moved to the destination page, the source page may later have been recreated - either as a category redirect or as a real category. In such cases, just skip step #3 in the procedure described above. There will be edits at the source page that postdate the creation of the destination page, and hence by its design, Special:MergeHistory will not move these edits over - only the old edits that the bot has undeleted would be merged. (It may be noted that the MergeHistory extension turns the source page into a redirect only when all edits at the source are merged into the destination page, which won't be the case in such cases - this means that the source page that some guy recreated will remain intact.) All this is that simple. 103.6.159.72 (talk) 19:37, 18 January 2017 (UTC)
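The MergeHistory behaviour relied on here — only source revisions predating the destination page's first revision get moved — can be simulated in a few lines. A sketch under that assumption (a toy model, not the actual MediaWiki implementation):

```python
def merge_history(source_revs, dest_revs):
    """Simulate Special:MergeHistory as described above: only source
    revisions strictly older than the destination page's first revision
    are moved; later source edits stay behind.

    Revisions are (timestamp, summary) tuples.
    Returns (remaining_source, merged_dest)."""
    cutoff = min(ts for ts, _ in dest_revs)   # destination's creation time
    moved = [r for r in source_revs if r[0] < cutoff]
    remaining = [r for r in source_revs if r[0] >= cutoff]
    return remaining, sorted(dest_revs + moved)
```

So a source page recreated after the cut-and-paste move keeps its newer edits, exactly as the comment above describes.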
Is this even needed? I would think most if not all edits to category pages do not pass the threshold of originality to get copyright in the first place. Our own guidelines on where attribution is not needed reinforce this notion under US law, stating that duplicating material by other contributors that is sufficiently creative to be copyrightable under US law (as the governing law for Wikipedia) requires attribution. That same guideline also mentions that a list of authors in the edit summary is sufficient for proper attribution, which is what Cydebot has been doing for years. Avicennasis @ 21:56, 20 Tevet 5777 / 21:56, 18 January 2017 (UTC)
Cydebot doesn't do it any longer. Since 2011 or sometime 2015, Cydebot renames cats by actually moving the page. So for the sake of consistency, we could do this for the older cats also. The on-wiki practice, for a very long time, has been to do a history merge wherever it is technically possible. The guideline that edit summary is sufficient attribution is quite dated and something that's hardly ever followed. It's usually left as a worst-case option where a histmerge is not possible. History merge is the preferred method of maintaining attribution. Some categories like Category:Members of the Early Birds of Aviation do have some descriptive creative content. 103.6.159.72 (talk) 02:21, 19 January 2017 (UTC)
I'm not completely opposed to this, but I do think that we need to define which category pages are in scope for this. I suspect the vast majority of pages wouldn't need attribution, and we should be limiting the amount of pointless bot edits. Avicennasis @ 02:49, 21 Tevet 5777 / 02:49, 19 January 2017 (UTC)
It wasn't 2011 (it can't have been, since the ability to move category pages wasn't available to anybody until 22 May 2014, possibly slightly later, but certainly no earlier). Certainly Cydebot was still making cut-and-paste moves when I raised this thread on 14 June 2014; raised this thread; and commented on this one. These requests took some months to be actioned: checking Cydebot's move log, I find that the earliest true moves of Category: pages that were made by that bot occurred on 26 March 2015. --Redrose64 🌹 (talk) 12:07, 19 January 2017 (UTC)
Since we are already talking about using a bot, I think it makes sense to do them all (or else none at all) since that would come at no extra cost. Cherry-picking which ones the bot does is just a waste of human editors' time. The edits won't be completely "pointless" - it's good to be able to see full edit histories. Talking of pointless edits, I should remind people that there are bots around that perform hundreds of thousands of pointless edits. 103.6.159.84 (talk) 16:14, 19 January 2017 (UTC)
As to when it became technically possible, I did it on May 26, 2014. עוד מישהו Od Mishehu 05:32, 20 January 2017 (UTC)
~94,899 pages, by my count. Avicennasis @ 03:36, 23 Tevet 5777 / 03:36, 21 January 2017 (UTC)
That should keep a bot busy for a week or more. The Usercontribs module pulls the processing queue. Here's the setup in the API sandbox. Click "make request" to see the results of a query to get the first three. Though I've never written an admin-bot before, I may take a stab at this within the next several days. – wbm1058 (talk) 04:28, 21 January 2017 (UTC)
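For reference, the API-sandbox query described above boils down to a parameter set like the following (a sketch of one plausible setup: ucshow=new restricts to page-creating edits, and namespace 14 is Category:):

```python
from urllib.parse import urlencode

# Sketch of the query behind the API-sandbox link: list Cydebot's
# category-space page creations, oldest first, three at a time.
params = {
    "action": "query",
    "format": "json",
    "list": "usercontribs",
    "ucuser": "Cydebot",
    "ucnamespace": 14,   # Category: namespace
    "ucshow": "new",     # page-creating edits only
    "ucdir": "newer",    # oldest first
    "uclimit": 3,
}
url = "https://en.wikipedia.org/w/api.php?" + urlencode(params)
```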
The other major API modules to support this are Undelete, Mergehistory and Delete. This would be a logical second task for my Merge bot to take on. The PHP framework I use supports undelete and delete, but it looks like I'll need to add new functions for user-contribs and merge-history. In my RfA I promised to work the Wikipedia:WikiProject History Merge backlog, so it would be nice to take that off my back burner in a significant way. I'm hoping to leverage this into another bot task to clear some of the article-space backlog as well...
Coding... wbm1058 (talk) 13:06, 21 January 2017 (UTC)
My count is 89,894 pages. wbm1058 (talk) 00:58, 24 January 2017 (UTC)
@Wbm1058: Did you exclude the pages that have already been histmerged (by Od Mishehu and probably a few by other admins also)?— Preceding unsigned comment added by 103.6.159.67 (talkcontribs) 12:39, 24 January 2017 (UTC)
I was about to mention that. My next step is to check the deleted revisions for mergeable history. No point in undeleting if there is no mergeable history. Working on that now. – wbm1058 (talk) 14:40, 24 January 2017 (UTC)
Note this example of a past histmerge by Od Mishehu: Category:People from Stockport
Should this bot do that with its histmerges too? wbm1058 (talk) 21:51, 25 January 2017 (UTC)
Yes, when there is a list of users present (there were periods when the bot didn't do it, but most of the time it did). עוד מישהו Od Mishehu 22:24, 25 January 2017 (UTC)

Another issue: Sometimes, a category was renamed multiple times. For example, Category:Georgian conductors->Category:Georgian conductors (music)->Category:Conductors (music) from Georgia (country); this must be supported also for categories where the second rename was recent, e.g. Category:Visitor attractions in Washington (U.S. state)->Category:Visitor attractions in Washington (state)->Category:Tourist attractions in Washington (state). Back-and-forth renames must also be considered, for example, Category:Tornadoes in Hawaii->Category:Hawaii tornadoes->Category:Tornadoes in Hawaii; this also must be handled in cases where the second rename was recent, e.g. Category:People from San Francisco->Category:People from San Francisco, California->Category:People from San Francisco. עוד מישהו Od Mishehu 05:35, 26 January 2017 (UTC)

Od Mishehu, this is also something I noticed. I'm thinking the best way to approach this is to start with the oldest contributions, and then merge forward so the last merge would be into the newest, currently active, category. Is that the way you would manually do this? So I think I need to reverse the direction that I was processing this, and work forward from the oldest rather than backward from the newest. Category:Georgian conductors was created at 22:56, 23 June 2008 by a human editor; that's the first (oldest) set of history to merge. At 22:38, 7 June 2010 Cydebot moved Category:Conductors by nationality to Category:Conductors (music) by nationality per CFD at Wikipedia:Categories for discussion/Log/2010 May 24#Category:Conductors. At 00:12, 8 June 2010 Cydebot deleted page Category:Georgian conductors (Robot - Moving Category Georgian conductors to Category:Georgian conductors (music) per CFD at Wikipedia:Categories for discussion/Log/2010 May 24#Category:Conductors.) So we should restore both Category:Georgian conductors and Category:Georgian conductors (music) in order to merge the 5 deleted edits of the former into the history of the latter. The new category creation by Cydebot that would trigger this history restoration and merging is
  • 00:11, 8 June 2010 . . Cydebot (187 bytes) (Robot: Moved from Category:Georgian conductors. Authors: K********, E***********, O************, G*********, Cydebot)
However, if you look at the selection set I've been using, you won't find this new category creating edit: 8 June 2010 Cydebot contributions
It should slot in between these:
To find the relevant log item, I need to search the Deleted user contributions
I'm looking for the API that gets deleted user contributions. This is getting more complicated. – wbm1058 (talk) 16:38, 26 January 2017 (UTC)
OK, Deletedrevs can list deleted contributions for a certain user, sorted by timestamp. Not to be confused with Deletedrevisions. wbm1058 (talk) 17:18, 26 January 2017 (UTC)
After analyzing these some more, I think my original algorithm is fine. I don't think it should be necessary for the bot to get involved with the deleted user contributions. What this means is that only the most recent moves will be merged on the first pass, as my bot will only look at Cydebot's active contributions history. The first pass will undelete and merge the most recently deleted history, which will expose additional moves that my bot will see on its second pass through the contributions. I'll just re-run until my bot sees no more mergeable items. The first bot run will merge Category:Georgian conductors (music) into Category:Conductors (music) from Georgia (country). The second bot run will merge Category:Georgian conductors into Category:Conductors (music) from Georgia (country). The first bot run will merge Category:Visitor attractions in Washington (U.S. state) into Category:Tourist attractions in Washington (state), and there's nothing to do on the second pass (there is no mergeable history in Category:Visitor attractions in Washington (state)). The first pass would merge Category:Hawaii tornadoes into Category:Tornadoes in Hawaii – I just did that for testing. The second pass will see that Category:Tornadoes in Hawaii should be history-merged into itself. I need to check for such "self-merge" cases and report them (a "self-merge" is actually a restore of some or all of a page's deleted history)... I suppose I should be able to restore the applicable history (only the history that predates the page move). Category:People from San Francisco just needs to have the "self-merge" procedure performed, as Category:People from San Francisco, California has no mergeable history. Thanks for giving me these use-cases, very helpful.
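The repeat-until-done approach described above can be modelled on the rename chains Od Mishehu listed. A toy Python sketch (the data structures are hypothetical; each pass merges the newest still-deleted page of a chain into the live category, and re-running exposes the next one back):

```python
def run_passes(chains):
    """Toy model of the multi-pass plan: each chain lists page names
    oldest-first, with only the last entry still live. On each pass the
    bot can only 'see' the most recent deleted predecessor, merges it
    into the live page, and repeats until nothing mergeable remains."""
    passes, merges = 0, []
    pending = [list(chain) for chain in chains]
    while any(len(chain) > 1 for chain in pending):
        passes += 1
        for chain in pending:
            if len(chain) > 1:
                merges.append((passes, chain[-2], chain[-1]))
                del chain[-2]   # its history now lives in the last page
    return passes, merges
```

A doubly-renamed category therefore needs exactly two passes, matching the Georgian conductors example worked through above.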
I should mention some more analysis from a test run through the 89,893 pages in the selection set. 2369 of those had no deleted revisions, so I just skip them. HERE is a list of the first 98 of those. Of the remaining 87,524 pages, these 544 pages aren't mergeable, because the timestamp of the oldest edit isn't old enough, so I skip them too. Many of these have already been manually history-merged. That leaves 86,980 mergeable pages that my bot should history-merge on its first pass. An unknown number of additional merges to be done on the second pass, then hopefully a third pass will either confirm we're done or mop up any remaining – unless there are cats that have moved four times... wbm1058 (talk) 22:42, 26 January 2017 (UTC)
Some of the pages with no deleted revisions are the result of a category rename where the source category was changed into something else (a category redirect or disambiguation), and a history merge in those cases should be done (I just did one such merge, the third on the list of 99). However, this may be too difficult for a bot to handle; I can deal with those over time if you give me a full list. The first 2 on the list you gave are different - the bot didn't delete them (it did usually, but not always), and they were removed without deletion by Jc37 and used as new categories. I believe, based on the link to the CFD discussion at the beginning, that the answer to that would be in Wikipedia:Categories for discussion/Log/2015 January 1#Australian politicians. עוד מישהו Od Mishehu 05:34, 27 January 2017 (UTC)

This whole thing seems a waste of time (why do we need to see old revisions of category pages that were deleted years ago), but if you want to spend your time writing and monitoring a bot that does this, I won't complain; it won't hurt anything. I'm just concerned by the comments up above that point out a lot of not-so-straightforward cases, like the tornadoes in Hawaii and the visitor attractions in Washington. How will the bot know what information is important to preserve and what isn't? Nyttend (talk) 05:28, 27 January 2017 (UTC)

The reasons for it, in my opinion:
  1. While most categories have no copyrightable information, some do; on these, we legally need to maintain the history. While Cydebot did this well for categories which were renamed once, it didn't for categories which were renamed more than once. Do any of these have copyrightable information? It's impossible to know.
  2. If we nominate a category for deletion, we generally should inform its creator - even if the creation was over 10 years ago, as long as the creator is still active. With deleted history, it's difficult for a human admin to do this, and impossible for automated nomination tools (such as [[WP:TW|Twinkle]]) or non-admins.
עוד מישהו Od Mishehu 05:37, 27 January 2017 (UTC)
  1. Because writing a bot is fun, isn't it? As only programmers know. And especially if the bot's gonna perform hundreds of thousands of admin actions.
  2. Because m:wikiarchaeologists will go to any lengths to make complete editing histories of pages visible, even if it's quite trivial. Using a bot shows a far more moderate level of eccentricity than doing it manually would. Why do you think Graham87 imported thousands of old page revisions from nostwiki?
103.6.159.76 (talk) 08:59, 27 January 2017 (UTC)

I think it may be best to defer any bot processing of these on the first iteration of this. Maybe after a first successful run, we can come back and focus on an automated solution for these as well. It's still a lot to be left for manual processing. I'll work on the piece that actually performs the merges later today. – wbm1058 (talk) 13:49, 27 January 2017 (UTC)

@Wbm1058: For the pages that were copy-pasted without the source category being deleted, you can still merge them. Use of Special:MergeHistory ensures that only the edits that predate the creation of the destination category will be merged. 103.6.159.90 (talk) 08:32, 29 January 2017 (UTC)

BRFA filed I think this is ready for prime time. wbm1058 (talk) 01:17, 28 January 2017 (UTC)

Fix duplicate references in mainspace

Hi. Apologies if this is malformed. I'd like to see a bot that can do this without us depending on a helpful human with AWB chancing across the article. --Dweller (talk) Become old fashioned! 19:11, 26 January 2017 (UTC)

As a kind of clarification, if an article doesn't use named references because the editors of that article have decided not to, we don't want to require the use of named references to perform this kind of merging. In particular, AWB does not add named references if there are not already named references, in order to avoid changing the citation style. This is mentioned in the page linked above (which is an AWB subpage), but it is an important point for bot operators to keep in mind. — Carl (CBM · talk) 19:27, 26 January 2017 (UTC)
Been here bonkers years and never come across that, thanks! Wikipedia:Citing_sources#Duplicate_citations suggests finding other ways to fix duplicates. I don't know what those other ways are, but if that makes it too difficult, maybe the bot could only patrol articles that already make use of the refname parameter. --Dweller (talk) Become old fashioned! 19:57, 26 January 2017 (UTC)
It's easy enough for a bot to limit itself to articles with at least one named ref; a scan for that can be done at the same time as a scan for duplicated references, since both require scanning the article text. — Carl (CBM · talk) 20:26, 26 January 2017 (UTC)
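A minimal sketch of such a single-pass scan — one walk over the wikitext finds both whether any named refs exist and which ref bodies are duplicated (the regex is illustrative and deliberately simplistic; a production bot would want a real parser and would skip self-closing refs):

```python
import re
from collections import Counter

def duplicate_refs(wikitext):
    """Return (has_named_refs, duplicated_bodies) for an article's
    wikitext. Per the CITEVAR concern above, a bot could restrict
    itself to articles where has_named_refs is True."""
    has_named = re.search(r'<ref\s+name\s*=', wikitext, re.I) is not None
    bodies = re.findall(r'<ref(?:\s[^>/]*)?>(.*?)</ref>', wikitext,
                        re.I | re.S)
    counts = Counter(b.strip() for b in bodies)
    dupes = [b for b, n in counts.items() if n > 1]
    return has_named, dupes
```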
Smashing! Thanks for the expertise. --Dweller (talk) Become old fashioned! 20:44, 26 January 2017 (UTC)
Note: this is not what is meant by CITEVAR. It is perfectly fine to add names to references. All the best: Rich Farmbrough, 00:48, 5 February 2017 (UTC).

NB Your chart, above, is reading my signature as part of my username. Does that need a separate Bot request ;-) --Dweller (talk) Become old fashioned! 12:00, 1 February 2017 (UTC)

add birthdate and age to infoboxes

Here's a thought... How about a bot to add {{birth date and age}}/{{death date and age}} templates to biography infoboxes that just have plain text dates? --Zackmann08 (Talk to me/What I been doing) 18:13, 20 December 2016 (UTC)

These templates provide the dates in microformat, which follows ISO 8601. ISO 8601 only uses the Gregorian calendar, but many birth and death dates in Wikipedia use the Julian calendar. A bot can't distinguish which is which, unless the date is after approximately 1924, so this is not an ideal task to assign to a bot. (Another problem is that if the birth date is Julian and the death date is Gregorian the age computation could be wrong.) Jc3s5h (talk) 19:07, 20 December 2016 (UTC)
@Jc3s5h: that is a very valid point... One thought, the bot could (at least initially) focus on only people born after 1924 (or whichever year is decided). --Zackmann08 (Talk to me/What I been doing) 19:13, 20 December 2016 (UTC)
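The proposed cutoff rule is simple to state in code — emit the template only for dates safely in the Gregorian era, otherwise leave the plain-text date untouched. A sketch (the 1924 constant reflects the discussion above, not an established bot parameter):

```python
from datetime import date

# Per the discussion: after approximately 1924, dates can be assumed
# Gregorian; earlier dates might be Julian, which a bot can't detect.
GREGORIAN_SAFE_YEAR = 1924

def birth_template(d):
    """Return a {{birth date and age}} call when the calendar is
    unambiguous, or None so the plain-text date is left alone."""
    if d.year <= GREGORIAN_SAFE_YEAR:
        return None
    return "{{{{birth date and age|{}|{}|{}}}}}".format(d.year, d.month, d.day)
```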
Without comment on feasibility, I support this as useful for machine-browsing. The ISO 8601 format is useful even if the visual output of the page doesn't change. ~ Rob13Talk 08:22, 30 December 2016 (UTC)

I'll go for it. I am filing a BRFA after my wikibreak. -- Magioladitis (talk) 08:24, 30 December 2016 (UTC)

  • When I open the edit window, I just see a bunch of template clutter, so I would like to understand what the template is used for, who on WP uses it, and specifically what the purpose of microformat dates is. It strikes me that the infoboxes are sufficiently well labelled for any party to pull date metadata off them without recourse to additional templates. -- Ohc ¡digame! 23:13, 14 February 2017 (UTC)

Clean User Sandbox

Please could someone create a bot that cleans user Sandboxes. Rhinos17 (talk) 15:31, 18 April 2017 (UTC)

Wouldn't that be up to the individual users, unless they are otherwise violating Wikipedia policy? — Maile (talk) 17:06, 18 April 2017 (UTC)
It would make it easy, as new users like me may put test edits on there, and it saves them having to manually revert their edits every time on their main Sandbox. Rhinos17 (talk) 17:10, 18 April 2017 (UTC)
Users may experiment with something, and want to come back to it months later. So no robot should alter a user's sandbox. Jc3s5h (talk) 18:58, 18 April 2017 (UTC)
Declined Not a good task for a bot. I agree with Redrose64 Jamesjpk (talk) 23:04, 24 April 2017 (UTC)

Bot to help with FA/GA nomination process

Resolved

The process is as follows: (Pasted from FA nomination page):

Before nominating an article, ensure that it meets all of the FA criteria and that peer reviews are closed and archived. The featured article toolbox (at right) can help you check some of the criteria. {{FAC}} should be substituted at the top of the talk page of the nominated article; save the page. From the FAC template, click on the red "initiate the nomination" link or the blue "leave comments" link. You will see pre-loaded information; leave that text. If you are unsure how to complete a nomination, please post to the FAC talk page for assistance. Below the preloaded title, complete the nomination page, sign with ~~~~ and save the page.

Copy this text: Wikipedia:Featured article candidates/name of nominated article/archiveNumber (substituting Number), and edit this page (i.e., the page you are reading at the moment), pasting the template at the top of the list of candidates. Replace "name of ..." with the name of your nomination. This will transclude the nomination into this page. In the event that the title of the nomination page differs from this format, use the page's title instead.
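The subpage-naming step above is purely mechanical. A trivial sketch of it (the function name is hypothetical):

```python
def fac_subpage(article_title, archive_number):
    """Build the FAC nomination subpage title described above."""
    return "Wikipedia:Featured article candidates/{}/archive{}".format(
        article_title, archive_number)
```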

Maybe a bot could automate that process? Thanks. 47.17.27.96 (talk) 13:08, 16 January 2017 (UTC)

This was apparently copied here from WP:VPT; the original is here. --Redrose64 🌹 (talk) 21:34, 16 January 2017 (UTC)
I think that at WP:VPT, the IP was directed here - see here. TheMagikCow (talk) 19:40, 17 January 2017 (UTC)
There is some information that is required from the user, both with the FAC and GAN templates, that can't be inferred by a bot but requires human decision making. I don't think this would be that useful or feasible. BlueMoonset (talk) 21:35, 22 January 2017 (UTC)
Declined Not a good task for a bot. The GA nomination process is already largely automated. The FA nomination process requires a nominating statement, which is really the only thing not already made easy here. A bot wouldn't cut down on the work involved. ~ Rob13Talk 15:48, 3 March 2017 (UTC)

Can a useful bot be taken over and repaired.

(Was posted at WP:VPT; user:Fastily suggested to post here if there were no takers)
User:Theopolisme is fairly inactive (last edit May). He made User:Theo's Little Bot. Of late the bot has not been behaving very well on at least one of its tasks (Task 1 - reduction of non-free images in Category:Wikipedia non-free file size reduction requests). It typically starts at 06:00 and will drop out usually within a minute or two (although sometimes one is lucky and it runs for half an hour occasionally). Messages on talk pages and GitHub failed to contact the user. User:Diannaa and I both sent e-mails, and Diannaa did get a reply - he is very busy elsewhere, and hopes to maybe look over Xmas... In view of the important work it does, Diannaa suggested I ask at WP:VPT if there was someone who could possibly take the bot over? NB: See also Wikipedia:Bot requests#Update WikiWork factors Ronhjones  (Talk) 19:44, 25 December 2016 (UTC)

Now this should be a simple task. Doing... Dat GuyTalkContribs 12:39, 27 December 2016 (UTC)
@DatGuy: FWIW, I'm very rusty on python, but I tried running the bot off my PC (with all saves disabled of course), and the only minor error I encountered was resizer_auto.py:49: DeprecationWarning: page.edit() was deprecated in mwclient 0.7.0 and will be removed in 0.9.0, please use page.text() instead.. I did note that the log file was filling up, maybe after so long unattended, the log file is too big. Ronhjones  (Talk) 16:24, 28 December 2016 (UTC)
Are you sure? See [1]. When it tries to upload it, the file is corrupted. However, the file is fine on my local machine. Can you test it on the file? Feel free to use your main account, I'll ask to make it possible for you to upload files. As a side note, could you join ##datguy connect so we can talk more easily (text, no voice). Thanks. Dat GuyTalkContribs 16:33, 28 December 2016 (UTC)
Well just reading the files is one thing, writing them back is a whole new ball game! Commented out the "theobot.bot.checkpage" bit, changed en.wiki to test.wiki (2 places), managed to login OK, then it goes bad - see User:Ronhjones/Sandbox2 for screen grab. And every run adds two lines to my "resizer_auto.log" on the PC. Bit late now for any more. Ronhjones  (Talk) 01:44, 29 December 2016 (UTC)
Ah, just spotted the image files in the PC directory - 314x316 pixels, perfect sizing. Does that mean the bot's directory is filling up with thousands of old files? Just a thought. Ronhjones  (Talk) 01:49, 29 December 2016 (UTC)
See for yourself :). Weird thing for me is, I can upload it manually from the API sandbox on testwiki just fine. When the bot tries to do it via coding? CORRUPT! Dat GuyTalkContribs 10:28, 30 December 2016 (UTC)
25 GB of temp image files!! - is there a size limit per user on that server? Somewhere (in the back of my mind - I know not where - trouble with getting old..., and I could be very wrong) I read he was using a modified mwclient... My PC fails when it hits the line site.upload(open(file), theimage, "Reduce size of non-free image... and drops to the error routine. I tried to look up the syntax of that command (not a lot of documentation) and it does not seem to fully agree with his format. Ronhjones  (Talk) 23:29, 30 December 2016 (UTC)
OTOH, I just looked at the test image, have you cracked it? Ronhjones  (Talk) 23:31, 30 December 2016 (UTC)

BRFA filed. Dat GuyTalkContribs 09:19, 1 January 2017 (UTC)

@DatGuy: And approved I see - Is it now running? I'll stop the original running. I see it was that "open" statement that was the issue I had! Ronhjones  (Talk) 00:34, 3 January 2017 (UTC)
@DatGuy: Is this running? ~ Rob13Talk 11:36, 27 February 2017 (UTC)
Yes, although irregularly. I run it when I see more than one page in the category. Dat GuyTalkContribs 14:54, 27 February 2017 (UTC)
@DatGuy: Is there no way this could be hosted on Labs and run continuously? This task needs to be done fairly rapidly, as every image that hasn't been resized is essentially a copyright violation. Using high resolution images when low resolution images would work fails our own local non-free use criteria, but it also fails fair use under US copyright law (which is why we have the local criteria). The WMF could be served DMCA notices on these images if they aren't handled swiftly. ~ Rob13Talk 15:50, 3 March 2017 (UTC)
@BU Rob13: No, it can't. I'll run the file a lot more often; however, the module pyexiv2 uses a dll called libexiv2python. This runs fine on my Windows machine, but since dll files are Windows-specific, they do not work on Labs, which uses Linux. Dat GuyTalkContribs 23:08, 4 March 2017 (UTC)
@BU Rob13: Actually, I think I found a workaround. I'll try and make it run at 12 pm daily. Dat GuyTalkContribs 16:29, 5 March 2017 (UTC)

Missing BLP template

We need a bot that will search for all articles in Category:Living people that lack a {{BLP}} template (or an alternative) on the article's talk page, and add the missing template to those pages. --XXN, 21:21, 20 November 2016 (UTC)

Ideally, not {{BLP}} directly, but indirectly via {{WikiProject Biography|living=yes}}. But we once had a bot that did that, I don't know what happened to it. --Redrose64 (talk) 10:33, 21 November 2016 (UTC)
{{WikiProject Biography|living=yes}} adds the biography to Category:Biography articles of living people. TheMagikCow (talk) 18:48, 16 January 2017 (UTC)
Hi @Redrose64:, what was that bot's name? We faced such a need recently during the Wiki Loves Africa photo contest on Commons. Hundreds of pictures from a parent category were missing a certain template. I am planning to build a bot or adapt an existing one for similar cases.--African Hope (talk) 17:08, 4 February 2017 (UTC)
I'll code this: tag pages that have a living people category but no {{BLP}}, {{WikiProject Banner Shell}}, or {{WikiProject Biography}}. I might expand this to also catch pages that have a 'living people' category but do not have a |living= parameter. Dat GuyTalkContribs 17:28, 4 February 2017 (UTC)
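A first pass at the condition described above can be a plain text check (a sketch only; the function name and banner list are my assumptions, and a real bot would also need to handle template redirects):

```python
# Sketch: article is in Category:Living people, but the talk page carries
# none of the BLP-related banners. Banner names are illustrative.
BANNERS = ('{{blp', '{{wikiproject banner shell', '{{wikiproject biography')

def needs_blp_banner(article_categories, talk_wikitext):
    """True if the article is a BLP but its talk page has no BLP banner."""
    if 'Living people' not in article_categories:
        return False
    text = talk_wikitext.lower()
    return not any(b in text for b in BANNERS)
```

In practice the category membership would come from the API and the check would run only on talk pages, so false positives from banners inside comments stay rare.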
I don't recall. Maybe Rich Farmbrough (talk · contribs) knows? --Redrose64 🌹 (talk) 00:20, 5 February 2017 (UTC)
I have been fixing members of Category:Biography articles without living parameter along with User:Vami_IV for some time. Menobot ensures that most biographies get tagged. I also did a one-off to tag such biographies a couple of months ago. All the best: Rich Farmbrough, 00:32, 5 February 2017 (UTC).

Yobot was doing this task until recently. It's in my to-do list. -- Magioladitis (talk) 18:07, 5 March 2017 (UTC)

WP:COSMOLOGY banner cleanup following merger with WP:ASTRONOMY

Could someone replace all {{WP Cosmology}} with {{WP Astronomy|cosmology=yes}} following Wikipedia talk:WikiProject Astronomy#Should we merge WP:COSMOLOGY as a taksforce of WP:ASTRONOMY?. Import the |class= and |importance= parameters when possible, but existing |class= |importance= parameters for {{WP Astronomy}} should take precedence if they exist. Headbomb {t · c · p · b} 12:56, 5 May 2017 (UTC)

Some pages have both banners. All the best: Rich Farmbrough, 18:00, 5 May 2017 (UTC).
 Done All the best: Rich Farmbrough, 20:31, 5 May 2017 (UTC).

Bryan (3rd edition) parameter modification

There is a template called {{Bryan (3rd edition)}}; it was until recently called {{Bryan}}. However, that was a confusing name, because there is now another template that links to another edition in the Internet Archive, called {{Bryan (1903 edition)}}.

There is a further complication. {{Bryan (3rd edition)}} is in two volumes, but currently there is no volume parameter, so all the articles link to volume one in the Internet Archive even if the entry for the biography is in the second volume. The two volumes break down into:

  • volume 1: A–K (1886)
  • volume 2: L–Z (1889)

There is a category populated by the template, called Category:Wikipedia articles incorporating text from Bryan's Dictionary of Painters and Engravers, and so it should be possible to place a volume parameter into the instances of the template in article space based on the letter under which the articles appear in the category.

The template is now a wrapper around {{cite encyclopedia}}, so the parameter that is currently called article= would be better renamed to title=. If it is an unnamed parameter (never an option, but apparently used by some), it too should be renamed to title=.

Taking two examples to show what I mean:

under the letter A in the category: {{Bryan|article=APPIANI, Niccolò}} to {{Bryan (3rd edition)|title=APPIANI, Niccolò |volume=1}}
under the letter Z in the category: {{Bryan|Zimmermann, August}} to {{Bryan (3rd edition)|title=Zimmermann, August |volume=2}}

There is a third option which is no parameters at all in which case the volume as specified by the category should be added:

  • {{Bryan}} to {{Bryan (3rd edition)|volume=''n''}}

There are currently 989 articles in the category. Yes, I know that some will end up with the wrong volume number, but the number that are correct after such a transformation will be much closer to 100% than in the current situation, where about half are linking to the wrong volume.
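The letter-based volume assignment and the parameter rename described above can be sketched as two small helpers (illustrative only; function names are mine, and a production AWB/bot rule would also need to tolerate extra whitespace and nested templates):

```python
import re

def bryan_volume(title):
    # Volume 1 covers A-K (1886), volume 2 covers L-Z (1889)
    return 1 if title.strip().upper()[0] <= 'K' else 2

def convert_bryan(wikitext, volume):
    # {{Bryan|article=X}} or {{Bryan|X}} -> {{Bryan (3rd edition)|title=X |volume=n}}
    wikitext = re.sub(r'\{\{Bryan\|(?:article=)?([^|}]+)\}\}',
                      r'{{Bryan (3rd edition)|title=\1 |volume=%d}}' % volume,
                      wikitext)
    # bare {{Bryan}} gets only the volume parameter
    return wikitext.replace('{{Bryan}}',
                            '{{Bryan (3rd edition)|volume=%d}}' % volume)
```

The volume would be derived from the category letter the article is filed under, exactly as proposed.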

-- PBS (talk) 22:30, 21 April 2017 (UTC)

Other than the unnamed parameter issue, this should be fairly straightforward to code. I'm not sure why "some will end up with the wrong volume number" if it's alphabetical and split between K and L, but okay. Primefac (talk) 18:19, 26 April 2017 (UTC)
I'll try and fix this one this evening. All the best: Rich Farmbrough, 12:32, 27 April 2017 (UTC).
Rich Farmbrough, are you coding directly into the template (to add an "if no volume given, assign one")? Because I think that makes the most sense. Primefac (talk) 15:00, 27 April 2017 (UTC)
It's probably worth doing. I'm running through the articles first, looks like you have given explicit volume numbers to many of them. All the best: Rich Farmbrough, 20:11, 27 April 2017 (UTC).
Indeed it looks as if every use has a volume number. All the best: Rich Farmbrough, 20:19, 27 April 2017 (UTC).

Yes, this request can be closed; I've done the lot using AWB. -- PBS (talk) 21:03, 27 April 2017 (UTC)

Done checkY -- PBS (talk) 21:03, 27 April 2017 (UTC)

The list of pages to be tagged is available at User:Headbomb/sandbox6 and all associated talk pages of those in Category:Women in Red edit-a-thons, Category:Women_in_Red_headers, Category:Women_in_Red_redlink_lists, Category:Women in Red social media, Category:Women in Red templates and their sub categories (they are clean). Just add {{WIR}} at the top of the talk page. If you find \{\{WIR, the page should be skipped, however. User pages should also be skipped. Headbomb {t · c · p · b} 12:40, 19 April 2017 (UTC)

 Doing... All the best: Rich Farmbrough, 21:25, 23 April 2017 (UTC).
 Done All the best: Rich Farmbrough, 20:57, 26 April 2017 (UTC).

Could someone write a bot (or do this in AWB if that's easier; I don't care) that would look at mainspace pages to find redlinks that meet certain criteria?

  1. Link is red
  2. Link concludes with a parenthetical element that is not closed
  3. If the link were closed, the link would be blue

Whether or not the link is piped shouldn't matter. This might be a basic enough issue to allow the bot to fix it directly, but if you're concerned about CONTEXTBOT issues, I suppose the bot could just generate a list of pages on which this appears and the links on those pages that have this problem. I understand that this is potentially a massive project (you'd have to check every link on 6,913,862 articles), so actually I'll be somewhat surprised if it can be done. I was thinking of asking that this be added to BracketBot's workload, but it hasn't run since the middle of last year, and its owner is likewise inactive. Nyttend (talk) 02:08, 19 April 2017 (UTC)
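Detecting candidates is mostly a regex exercise; checking whether the closed form would be blue still needs a page-existence lookup against the API. A sketch of the detection step (pattern and names are my own, not BracketBot's):

```python
import re

# Wikilinks whose target opens a "(" that is never closed before "]]" or "|",
# e.g. [[John Smith (politician]] -- candidates for appending the missing ")".
UNCLOSED = re.compile(r'\[\[([^\[\]|]*\([^)\]|]*)(?:\|[^\[\]]*)?\]\]')

def unclosed_paren_links(wikitext):
    """Return the link targets with an unclosed parenthetical."""
    return [m.group(1) for m in UNCLOSED.finditer(wikitext)]
```

Each hit would then be tested with ")" appended; only if that title exists is the fix (or the report entry) emitted.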

I will try to do this tomorrow. All the best: Rich Farmbrough, 21:23, 25 April 2017 (UTC).
I ran this for a January dump, there were about 300 hits (including some false positives). I am fixing these now, so this can be considered  Done for now. All the best: Rich Farmbrough, 20:57, 26 April 2017 (UTC).

begin(aligned) and end(aligned) to begin(align) and end(align)

Resolved

Please replace all occurrences of the strings \begin{aligned} and \end{aligned} with \begin{align} and \end{align}, respectively. The former expressions produce errors in PNG-mode. The latter are OK. See [2] and [3]. Thx - DVdm (talk) 15:59, 4 April 2017 (UTC)
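For the record, the substitution is a one-line regex (a sketch; an AWB find-and-replace rule would use the same pattern):

```python
import re

def fix_aligned(wikitext):
    # aligned -> align: the latter renders correctly in PNG mode
    return re.sub(r'\\(begin|end)\{aligned\}', r'\\\1{align}', wikitext)
```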

This only affects a handful of articles, and can easily be done with AWB [4]. Headbomb {t · c · p · b} 18:00, 4 April 2017 (UTC)
@Headbomb: Yes, I have been looking at that too. Either the insource-regex search excludes anything inside "\displaystyle {", or the standard search also searches the meta-content of articles, including the styling and such—which we don't need here. So perhaps indeed the 7 hits are the only articles affected. If that's the case I'll be glad to fix them manually. - DVdm (talk) 18:51, 4 April 2017 (UTC)
 Done: edited the 7 articles. - DVdm (talk) 20:36, 4 April 2017 (UTC)

col → row

Need a bot to clean up an error I've made. I recently sorted out a specific table on a large number of articles in Category:Grand Tour cyclists. Artur Ershov for example before and after. I wrongly used "col" instead of "row". I need a bot to run through that cat and change ! scope="col" | —(new line)| Did not compete to ! scope="row" | —(new line)| Did not compete. BaldBoris 12:06, 31 March 2017 (UTC)
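The fix is a literal two-line replacement; as a regex rule it could look like this (a sketch, with the dash copied literally from the tables):

```python
import re

def fix_scope(wikitext):
    # ! scope="col" | ...(newline)| Did not compete  ->  scope="row"
    return re.sub(r'! scope="col" \| —\n\| Did not compete',
                  '! scope="row" | —\n| Did not compete',
                  wikitext)
```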

 Doing... All the best: Rich Farmbrough, 10:46, 26 April 2017 (UTC).
 Done All the best: Rich Farmbrough, 19:17, 26 April 2017 (UTC).

I have a long mixed list of articles and red links. Is there a bot that can add a category to each of the articles in the list but leave the red links untouched? Abyssal (talk) 14:36, 16 March 2017 (UTC)

@Abyssal: Technically, that's very easy. What's the list and category? ~ Rob13Talk 15:41, 16 March 2017 (UTC)
@BU Rob13: Thanks for the quick reply. The category is Category:Paleozoic southern paleotemperate deposits and the list follows: (snip) — Preceding unsigned comment added by Abyssal (talkcontribs) 15:49, 16 March 2017 (UTC)
List available here. Please don't drop large lists on this page. — JJMC89(T·C) 16:26, 16 March 2017 (UTC)
Sorry, I figured we'd just delete when we were done so that it would only be there for a bit. Abyssal (talk) 17:10, 16 March 2017 (UTC)
@Abyssal: I can do this - presumably skipping members of:
* Cambrian southern paleotemperate deposits‎
* Carboniferous southern paleotemperate deposits‎
* Devonian southern paleotemperate deposits‎
* Ordovician southern paleotemperate deposits‎
* Silurian southern paleotemperate deposits‎
All the best: Rich Farmbrough, 16:59, 20 April 2017 (UTC).

Linkfix: www.www. to www.

Resolved

We have 87 links in the form www.www.foo.bar which should really be www.foo.bar - the form www.www. is normally a fault. A simple link checker with text replacement would help.--Oneiros (talk) 13:49, 16 February 2017 (UTC)
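The replacement itself is trivial (sketch; a real run would work from the 87 search hits rather than doing blind text replacement):

```python
import re

def fix_double_www(text):
    # www.www.foo.bar -> www.foo.bar (the duplicated prefix is a fault)
    return re.sub(r'\bwww\.www\.', 'www.', text)
```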

OK - I will have a look into this. TheMagikCow (talk) 17:19, 16 February 2017 (UTC)
BRFA filed TheMagikCow (talk) 10:33, 18 February 2017 (UTC)

Bot to remove old warnings from IP talk pages

There is consensus for removing old warnings from IP talk pages. See Wikipedia talk:Criteria for speedy deletion/Archive 9#IP talk pages and Wikipedia:Village pump (proposals)/Archive 110#Bot blank and template really, really, really old IP talk pages.. This task has been done using AWB by BD2412 for several years now. Until around 2007, it was also being done by Tawkerbot.

I suggest that a bot should be coded up to remove all sections from IP talk pages that are older than 2 years, and add the {{OW}} template to the page if it doesn't already exist (placed at the top of the page, but below any WHOIS/sharedip templates). There are many reasons why this should be done by a bot. (i) Bot edits marked as minor do not cause the IPs to get a "You have new messages" notification when the IP talk page is edited. (ii) Blankings done using AWB also remove any WHOIS/sharedip templates, for which there is no consensus. (iii) This is the type of mundane task that should be done by bots. Human editors should not waste their time with this, but rather spend it on tasks that require some human intelligence. 103.6.159.93 (talk) 14:41, 23 January 2017 (UTC)

Needs wider discussion. These are pretty old discussions to support this sort of mass blanking of talk pages. If I recall correctly, an admin deleted a bunch of IP user talk pages a while back and this proved controversial. This needs a modern village pump discussion. ~ Rob13Talk 20:21, 24 January 2017 (UTC)
Here is one such discussion that I initiated. I think that two years is a bit too soon. Five years is reasonable. When I do these blankings with AWB, I typically go back seven, just because it is easy to skip any page with a date of 2010 or later on the page. I think some flexibility could be built in based on the circumstances. An IP address from which only one edit has ever been made, resulting in one comment or warning in response, is probably good for templating after no more than three years. I would add that I intentionally remove the WHOIS/sharedip templates, because, again, these are typically pages with nothing new happening in the past seven (and sometimes ten or eleven) years. We are not a permanent directory of IP addresses. bd2412 T 01:01, 25 January 2017 (UTC)
@BU Rob13: don't be silly. There has been consensus for this since 2006. Tawkerbot did it till 2007 and BD2412 has been doing it for years, without anyone disputing the need for doing it on his talk page. You correctly remember that MZMcBride used an unapproved bot to delete over 400,000 IP talk pages in 2010. That was obviously controversial, since there is consensus only for blankings, not for deletions. Any new discussion on this will only result in repetition of arguments. The only thing that needs discussion is the approach. 103.6.159.84 (talk) 04:29, 25 January 2017 (UTC)
  • I wrote above "remove all sections from IP talk pages that are older than 2 years". I realise that this was misunderstood. What I meant was: remove the sections in which the last comment is over 2 years old. This is a more moderate proposal. Do you agree with this, BD2412? 103.6.159.84 (talk) 04:29, 25 January 2017 (UTC)
    I have two thoughts on that. First, I think that going after individual sections, as opposed to 'everything but the top templates' is a much harder task to program. I suppose it would rely on the last date in a signature in the section, or on reading the page history. Secondly, I think that there are an enormous number of pages to deal with that would have all sections removed even under that criteria, so we may as well start with the easy task of identifying those pages and clearing everything off of them. If we were to go to a section-by-section approach, I would agree with a two year window. bd2412 T 04:35, 25 January 2017 (UTC)
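The "last date in a signature" approach is workable in practice, since en.wiki signature timestamps have a fixed format. A sketch (the cutoff and function names are placeholders, and the default C locale is assumed for month-name parsing):

```python
import re
from datetime import datetime, timedelta

# Standard en.wiki signature timestamps, e.g. "04:35, 25 January 2017 (UTC)"
SIG = re.compile(r'(\d{2}:\d{2}, \d{1,2} \w+ \d{4}) \(UTC\)')

def section_is_stale(section_text, now, max_age_days=730):
    # True only if every signature in the section is older than the cutoff;
    # sections with no parseable date are left for a human to judge.
    stamps = [datetime.strptime(s, '%H:%M, %d %B %Y')
              for s in SIG.findall(section_text)]
    if not stamps:
        return False
    return max(stamps) < now - timedelta(days=max_age_days)
```

Reading the page history, as suggested, would be the fallback for sections whose signatures cannot be parsed.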
As mentioned, deletion should NOT be done (and is also not requested). Deletion hides tracks that may be of interest: either discussions on the talkpage of an IP used by an editor years ago that have relevance to edits to mainspace pages (every now and then there are edits with a summary 'per discussion on my talk'), or evidence that certain IPs that behaved badly were actually warned. A company spamming in 2010 gets several warnings, sits still for 7 years, then someone else spams again; we might consider blacklisting with the reasoning 'you were warned in 2010, and now you are at it again'. It may be a different person behind a different IP, and the current editor may not even be aware of the situation of 2 1/2 years ago, but it is the same organisation that is responsible. If the talkpage 'exists', and we find the old IP that showed the behaviour, it is easy to find the warnings back; if it involves 15 IPs of which 2 were heavily warned, and those two pages are now also redlinks, we need someone with the admin bit to check deleted revisions on 15 talkpages - in other cases, anyone can do it.
Now, regarding blanking: what would be the arguments against archiving threads on talkpages where:
  1. the thread is more than X old (2 years?)
  2. the IP did not edit in the last Y days (1 year?)
We would just insert a custom-template in the header like {{IPtalkpage-autoarchive}} which is pointing to the automatically created archives and which provides a lot of explanation, and we have a specified bot that archives these pages as long as the conditions are met. Downside is only that it would preserve utter useless warnings (though, some editors reply to warnings and go in discussion, and are sometimes right, upon which the perceived vandalism is re-performed), upside is that it preserves also constructive private discussions.
(@BD2412: regarding your "I think that going after individual sections, as opposed to 'everything but the top templates' is a much harder task to program" - the former is exactly what our archiving bots do). --Dirk Beetstra T C 05:51, 25 January 2017 (UTC)
As far as I am aware, editors have previously opposed archiving of IP talk pages, and so this would require wider discussion at an appropriate forum first. Regarding removal of warnings by section, I don't think there is any need to bother about the time the IP last edited -- the whole point of removing old warnings is to ensure that the current (or future) users of the IPs don't see messages that were intended for someone who used the IP years ago. Ideally, a person who visits their IP talk should see only the messages intended for that person. 103.6.159.84 (talk) 06:19, 25 January 2017 (UTC)
That consensus could have changed - it may indeed need a wider community consensus. As I read the above thread, however, removal is not restricted to warnings only; it mentions remove the sections in which the last comment is over 2 years old, which would also include discussions. Now, archiving is not a must; one is allowed to simply delete old threads on one's 'own' talkpage.
Whether you archive or delete - in both cases the effect is the same: the thread that is irrelevant to the current user of the IP is not on the talkpage itself anymore. And with highly fluxional IPs, or with IPs that are used by multiple editors at the same time, it is completely impossible to address the 'right editor'; you will address all of them. On the other hand, some IPs stay for years with the same physical editor, and the messages that are deleted will be relevant to the current user of the page, even if they did not edit for years. And that brings me to the point whether the editor has been editing in the last year (or whichever time period one chooses) - if the IP is continuously editing there is a higher chance that the editor is the same than when an IP has not been editing for a year (though in both cases, the IP can be static or not static, thwarting that analysis and making it necessary to check on a case-by-case basis, which would preclude bot use). --Dirk Beetstra T C 10:53, 25 January 2017 (UTC)
I favour archiving using the {{wan}} template rather than blanking. It alerts future editors that there have been previous warnings. If the IP belongs to an organisation, they might just possibly look at the old warnings and discover that the things they are about to do were done before and were considered bad. SpinningSpark 12:07, 25 January 2017 (UTC)
I think that archiving for old IP talk pages is very problematic. One of the main reasons I am interested in blanking these pages is to reduce link load - the amount of dross on a "What links here" page that obscures the links from that page to other namespaces, which is particularly annoying when a disambiguator is trying to see whether relevant namespaces (mainspace, templates, modules, files) are clear of links to a disambiguation page. All archiving does for IP talk pages is take a group of random conversations - link load and all - and disassociate them from their relevant edit history, which is what a person investigating the IP address is most likely to need. This is very different from archiving article talk pages or wikispace talk pages, where we may need to look back at the substance of old discussions. bd2412 T 14:08, 25 January 2017 (UTC)
Agree with that. I also don't think archiving of IP talk pages is useful. In any case, it needs to be discussed elsewhere (though IMO it's unlikely to get consensus). There is no point in bringing it up within this bot request. 103.6.159.89 (talk) 15:59, 25 January 2017 (UTC)
I see the point of that, but that is also the reason why some people want to see what links to a page - where the discussions were. The thread above is rather unspecific, and suggests blanking ALL discussions, not only warnings. And those are the things that are sometimes of interest: plain discussions regarding a subject, or even discussions following a warning. If the talkpage discussions obscure your view, then you can choose to select incoming links per namespace.
@103.6.159.89: if there is no consensus to blank, but people are discussing whether it should be blanking or archiving or nothing, then there is no need for a discussion here - bots should simply not be doing this. I agree that the discussion about what should be done with it should be somewhere else. --Dirk Beetstra T C 03:29, 26 January 2017 (UTC)
You cannot choose to select incoming links per namespace if you need to see multiple namespaces at once to figure out the source of a problem. For example, sometimes a link reported on a "What links here" page cannot actually be found on the listed page, but is transcluded from another namespace (a template, a portal, a module, under strange circumstances possibly even a category or file), and you need to look at all the namespaces at once to determine the connection. It would be nice if the interface would allow that, but that would be a different technical request. bd2412 T 17:06, 28 January 2017 (UTC)
I agree that that is a different technical request. But the way this request is now written (to remove all sections from IP talk pages that are older than 2 years) I am afraid that important information could be wiped. I know the problems with the Wikimedia Development team (regarding feature requests etc., I have my own frustrations about that), but alternatives should be implemented with extreme care. I would be fine with removal of warnings (but not if those warnings result in discussion), but not with any other discussions, and I would still implement timing restrictions (not having edited for x amount of time, etc.). --Dirk Beetstra T C 07:32, 29 January 2017 (UTC)
If there is a really useful discussion on an IP talk page that has otherwise gone untouched for half a decade or more, then that discussion should be moved to a more visible location. We shouldn't be keeping important matters on obscure pages, and given the hundreds of thousands of existing IP talk pages, there isn't much that can be more obscure than the random set of numbers designating one of those. (Yes, I know they are not really random numbers, but for purposes of finding a particular one, they may as well be). bd2412 T 17:02, 5 February 2017 (UTC)
@BD2412: and how are you going to see that (and what is the threshold of importance)? When you archive it is at least still there, with blanking any discussion is 'gone'. --Dirk Beetstra T C 03:54, 9 February 2017 (UTC)
Then people will learn not to leave important discussions on IP talk pages with no apparent activity after half a decade or more. bd2412 T 04:13, 9 February 2017 (UTC)
You're kidding, right? Are we here to collaboratively create an encyclopedia, or are we here to teach people a lesson? --Dirk Beetstra T C 05:49, 9 February 2017 (UTC)
We are not here to create a permanent collection of random IP talk page comments. bd2412 T 00:33, 14 February 2017 (UTC)
But these are not a (collection of) random IP talk page comment(s). --Dirk Beetstra T C 09:55, 7 March 2017 (UTC)
I agree. There is no benefit in hiding the warning history of IPs and making editors search through the history to find it. The most common case of IPs that collect masses of warnings is schools (and other public institutions) who have a fixed IP address. Schools all too frequently get to the stage where they get blocked for years. The old warnings are a good measure of how well the school controls misuse of IT. It is bad for the encyclopaedia if this past history is not visible. When the block expires and the vandalism starts up again, it is perfectly obvious that it is not going to stop and the IP can be blocked again quickly. This is less likely to happen if the warnings have been wiped. As I said earlier, it is preferable to archive them so a reviewing admin can immediately see that there is past behaviour to look at. Dynamic IPs, on the other hand, rarely collect more than one or two warnings before the IP changes so we are really looking at a mostly non-problem there. SpinningSpark 16:35, 7 March 2017 (UTC)

If the problem is link load (as according to BD2412), then a better solution is a bot to unlink pages in old posts rather than remove the posts altogether. SpinningSpark 16:38, 7 March 2017 (UTC)

I'm fine with a bot doing that unlinking as well. For more complex pages, that makes sense. For the tens of thousands of pages containing nothing but a single warning perhaps five years old or more, this permanent record is useless, and possibly detrimental if an innocent user comes to edit from that IP address years later. bd2412 T 03:45, 26 April 2017 (UTC)
No, the record is not 'useless'. IP-hopping spammers and POV pushers have a tendency to stick around (I have spam cases documented that have been here for 10 years - 10 years of the same entity that is here to spam). Although they are under new IPs, they are the same physical person, and it is very important that we know that these people have been warned in the past. If someone now spams a certain site, and they got a good handful of warnings 7 years ago on several IPs, I may very well blacklist (and I know, it may be a new employee of the same company, or a different SEO hired; it is the company's responsibility that they do not spam Wikipedia). If I cannot find old warnings, I may resort to XLinkBot or manual revert and warn, which may result in more work. Old warnings and discussions can be extremely important. I still think that an enhancement of the system is more in order (there are many log/contribs/links pages that would benefit from namespace selection like our search system). --Dirk Beetstra T C 03:57, 7 May 2017 (UTC)

This has been here since January and it's obvious that this is no less controversial now than it was in January. As noted earlier, this needs wider discussion. I can't see BAG approving this without a wider discussion. Archiving. ~ Rob13Talk 05:17, 7 May 2017 (UTC)

Bot to delete emptied monthly maintenance categories

I notice that we have a bot, AnomieBOT, that automatically creates monthly maintenance categories (Femto Bot used to do it earlier). Going by the logs for a particular category, I find that it has been deleted and recreated about 10 times. While all recreations are by bots, the deletions are done by human administrators. Why so? Mundane, repetitive tasks like the deletion of such categories (under CSD G6) when they get emptied should be done by bots. This bot task is obviously non-controversial and non-contentious, since AnomieBOT will recreate the category if new pages appear in it. 103.6.159.93 (talk) 14:21, 23 January 2017 (UTC)

Needs wider discussion. It should be easy enough for AnomieBOT III to do this, but I'd like to hear from the admins who actually do these deletions regularly whether the workload is enough that they'd want a bot to handle it. Anomie 04:54, 24 January 2017 (UTC)
Are these already being tagged for CSD by a bot? I don't work CAT:CSD as much as I used to, but rarely see these in the backlog there now. — xaosflux Talk 14:08, 24 January 2017 (UTC)
I think they are tagged manually by editors. Anyway, this discussion has now shifted to WP:AN#Bot to delete emptied monthly maintenance categories, for the establishment of consensus as demanded by Anomie. 103.6.159.67 (talk) 14:12, 24 January 2017 (UTC)
Thanks for taking it there, 103.6.159.67. It looks like it's tending towards "support", if that keeps up I'll write the code once the discussion there is archived. I also see some good ideas in the comments, I had thought of the "only delete if there are no edits besides AnomieBOT" condition already but I hadn't thought of "... but ignore reverted vandalism" or "don't delete if the talk page exists". Anomie 03:21, 25 January 2017 (UTC)
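The conditions floated in that discussion compose into a simple predicate (a sketch only; argument names are mine, and "ignore reverted vandalism" would need real revision inspection rather than an author list):

```python
def safe_to_delete(is_empty, revision_authors, talk_page_exists,
                   bot_name='AnomieBOT'):
    # Delete only if the category is empty, has no talk page, and was
    # never edited by anyone but the creating bot.
    if not is_empty or talk_page_exists:
        return False
    return all(author == bot_name for author in revision_authors)
```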
@Xaosflux: No, {{Monthly clean-up category}} (actually {{Monthly clean-up category/core}}) automatically applies {{Db-g6}} if the category contains zero pages. Anomie 03:12, 25 January 2017 (UTC)
My experience shows these are safe to delete. They can even be recreated when needed (usually after a delayed reversion in a page edit history). -- Magioladitis (talk) 23:09, 24 January 2017 (UTC)
The question isn't if they're safe to delete, that's obvious. The question is whether the admins who actually process these deletions think it's worth having a bot do it since there doesn't seem to be any backlog. Anomie 03:12, 25 January 2017 (UTC)
Category:Candidates for uncontroversial speedy deletion is almost always empty when I drop by it. — xaosflux Talk 03:39, 25 January 2017 (UTC)

The AN discussion is archived now, no one opposed. I put together a task to log any deletions such a bot would make at User:AnomieBOT III/DatedCategoryDeleter test‎, to see if it'll actually catch anything. If it logs actual deletions it might make I'll make a BRFA for actually doing them. Anomie 14:45, 31 January 2017 (UTC)

@Anomie: Is it useful to keep this section unarchived at this point, or can it be archived to make it easier for botops to find open tasks? ~ Rob13Talk 15:45, 3 March 2017 (UTC)
There's probably no real need to keep it unarchived. Anomie 17:02, 3 March 2017 (UTC)

CSD/AfD/PROD template mover

Recently I have been deleting articles and stumbled upon pages where people hadn't placed deletion templates at the top of the page. I believe that a bot could handle such template moves. I would like to develop the bot myself, but I need some guidance from more experienced editors. I am really looking forward to the development of such a bot, because those template moves are very tedious to do. Cheers, FriyMan talk 11:01, 10 March 2017 (UTC)

Website suddenly took down a lot of its material, need archiving bot!

Per Wikipedia_talk:WikiProject_Academic_Journals#Urgent:_Beall.27s_list, several (if not) most links to https://scholarlyoa.com/ and subpages just went dead. Could a bot help with adding archive links to relevant citation templates (and possibly bare/manual links too)? Headbomb {talk / contribs / physics / books} 00:31, 19 January 2017 (UTC)

Cyberpower678, could you mark this domain as dead in IABot's database so that it will handle adding archive urls? — JJMC89(T·C) 01:13, 19 January 2017 (UTC)
@Cyberpower678: ? Headbomb {talk / contribs / physics / books} 10:50, 2 February 2017 (UTC)
Sorry, I never got the first ping. I'll mark it in a moment.—CYBERPOWER (Chat) 16:52, 2 February 2017 (UTC)
Only 61 urls were found in the DB with the domain.—CYBERPOWER (Chat) 17:39, 2 February 2017 (UTC)
@Cyberpower678: Well that's 61 urls that we needed! Would it be possible to have a list of those urls, or is that complicated? It would be really useful to project members to have those centralized in one place. Headbomb {talk / contribs / physics / books} 20:04, 13 February 2017 (UTC)
I would, but the DB is under maintenance right now.—CYBERPOWER (Be my Valentine) 20:06, 13 February 2017 (UTC)
I'll ping you next week then. Headbomb {talk / contribs / physics / books} 20:07, 13 February 2017 (UTC)
@Cyberpower678:. Headbomb {talk / contribs / physics / books} 21:00, 22 February 2017 (UTC)
The interface link will be made available soon, but...
Giant list removed. It may still be seen in the page history. Anomie 23:03, 24 February 2017 (UTC)
Cheers.—CYBERPOWER (Chat) 06:11, 24 February 2017 (UTC)
Please don't dump giant lists of stuff in this page. Put them in a subpage in your userspace and link to them instead. Thanks. Anomie 23:03, 24 February 2017 (UTC)

@Cyberpower678:, I may have been unclear in my request, but what I meant was: is it possible to have a consolidated list of the archived versions, and also to have a bot update the current citation templates (and bare links, if any) with the archived version? Headbomb {talk / contribs / physics / books} 18:19, 24 February 2017 (UTC)

@Headbomb: I'm not sure I fully understand. Searching by archive is not easy. I need to ping the DB directly. It likely wouldn't be accurate because sometimes an archive only shows up if it's trying to fix the dead link. It usually doesn't bother sniffing out an archive if the link isn't dead. So you would need to run the bot on those articles to get an accurate list of what has an archive and what doesn't.—CYBERPOWER (Around) 16:29, 4 March 2017 (UTC)

Redundant - these should be caught by IABot on its pass through - I don't see a specific need for a bot to be written just to post the archived versions to the talk page. References should be updated fine. Mdann52 (talk) 21:07, 5 May 2017 (UTC)

Non-free images used excessively

I'd like a few reports, if anyone's able to generate them.

1) All images in Category:Fair use images, defined recursively, which are used outside of the mainspace.

2) All images in Category:Fair use images, defined recursively, which are used on more than 10 pages.

3) All images in Category:Fair use images, defined recursively, which are used on any page that is not anywhere in the text of the file description page. i.e. If "File:Image1.jpg" was used on page "Abraham Lincoln" but the text "Abraham Lincoln" appeared nowhere on the file page.

If anyone can handle all or some of these, it would be much appreciated. Feel free to write to a subpage in my userspace. ~ Rob13Talk 20:29, 21 January 2017 (UTC)
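For report 3, the core check is a simple containment test between a file's description wikitext and the titles of the pages using it. A minimal sketch (function name illustrative; a real run would fetch file usage and description wikitext from the MediaWiki API, and this naive substring match will produce some false positives):

```python
def unmentioned_usages(description_text, using_pages):
    """Return the pages using a file whose titles do not appear
    anywhere in the file description page's wikitext."""
    text = description_text.lower()
    return [p for p in using_pages if p.lower() not in text]
```

For example, a file whose description mentions only "Abraham Lincoln" but which is also used on "Gettysburg Address" would be flagged for the latter page.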

No bot needed for tasks 1 and 2:
  1. https://tools.wmflabs.org/betacommand-dev/nfcc/NFCC9.html
  2. https://tools.wmflabs.org/betacommand-dev/nfcc/high_use_NFCC.html
Task 3 was done by User:BetacommandBot, but the bot and its master have since been blocked. Or perhaps it was User:FairuseBot. I'd very much like to see this task done by a bot again. – Finnusertop (talkcontribs) 20:42, 21 January 2017 (UTC)
Still need task 3 here. ~ Rob13Talk 15:46, 3 March 2017 (UTC)
@BU Rob13: Doing... Might take me a few days, but I'll at least do a one-time run over the parent category to start with, then the rest. I'm not sure if I'll be able to build something capable of running regularly for a while. FYI, I've planned this by comparing the image use with links on the page - this might lead to more false positives, however it's faster and less server-intensive than grabbing the whole page, as far as I can see. Mdann52 (talk) 19:19, 5 May 2017 (UTC)

Y Done In Wikipedia:Bots/Requests for approval/JJMC89 bot 12. Mdann52 (talk) 22:59, 10 May 2017 (UTC)

 Requesting immediate archiving...

For some time, the Russian soccer stats website Klisf.info has been inactive and unavailable. There are many links to this website (either as inline references or simple external links like the one from this article) and they should be tagged as dead links (at least). --XXN, 12:54, 22 January 2017 (UTC)

Is no bot operator interested in this task? This is important: there are a lot of articles based on a single dead Klisf.info link, which makes WP:VER problematic. I don't (yet) request removal of these links - just tag them as dead, and another bot will try to update them with a link to an archived version, if possible. The FOOTY wikiproject was notified some time ago, and there is nothing controversial. XXN, 13:55, 10 February 2017 (UTC)

There are 148 links. Listed here. All the best: Rich Farmbrough, 13:20, 27 April 2017 (UTC).

@XXN: klisf.info used robots.txt, so I don't think Archive.org will have any of its pages archived. Redhat101 Talk 23:52, 7 May 2017 (UTC)
Ok, then. Resolved by me; external links removed, klisf refs tagged as dead to be replaced. XXN, 13:51, 8 May 2017 (UTC)

Add https to ForaDeJogo.net

Please change foradejogo.net links to https. I have already updated its templates. SLBedit (talk) 19:24, 22 January 2017 (UTC)

@SLBedit: see User:Bender the Bot and its contribs. You can contact directly the bot operator, probably. XXN, 13:59, 10 February 2017 (UTC)
I'll keep it in mind. --bender235 (talk) 15:40, 10 February 2017 (UTC)
Seems that it's done. I'll manually fix the remaining 7 pages. --XXN, 09:18, 8 May 2017 (UTC)

 Requesting immediate archiving...

Replacement of NCBI template in gastropods articles

The template {{NCBI}}, now called {{NCBI taxid}}, was used many times in the Wikipedia:WikiProject Gastropods in the External links section. Now the template is deprecated and can be replaced with {{taxonbar}} template.

  • This applies to Category:Gastropods and its subcategories only.
  • This applies to External links section only.
  • When there is template taxonbar in the article, remove the NCBI template from External links section. For example like this: removing NCBI template when there is taxonbar already
  • When there is template NCBI in the External links section, remove the NCBI template and add taxonbar template to the end of External links section (above category).

Thanks. --Snek01 (talk) 20:52, 29 April 2017 (UTC)
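The two cases above amount to "remove the NCBI line, then append {{taxonbar}} if it isn't already present". A rough sketch of that logic (the regex is illustrative and simplified; a real bot would use a proper template parser such as mwparserfromhell, and would place the taxonbar above the categories rather than at the very end):

```python
import re

# Matches a list line containing an {{NCBI|...}} or {{NCBI taxid|...}} call.
NCBI_RE = re.compile(r"^\*?\s*\{\{NCBI(?: taxid)?\|[^}]*\}\}[ \t]*\n?", re.M)

def replace_ncbi(wikitext: str) -> str:
    """Drop NCBI template lines; add {{Taxonbar}} if none exists yet."""
    had_ncbi = bool(NCBI_RE.search(wikitext))
    out = NCBI_RE.sub("", wikitext)
    if had_ncbi and "{{taxonbar" not in out.lower():
        out = out.rstrip("\n") + "\n{{Taxonbar}}\n"
    return out
```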

There are about 545 of these, and a couple of hundred of non-gastropod ones.  Doing... All the best: Rich Farmbrough, 21:21, 5 May 2017 (UTC).
User:Snek01 What do you think of Pomacea maculata - I removed both NCBI external links. I'm not sure of the value of the other one, an apparent synonym. All the best: Rich Farmbrough, 22:00, 5 May 2017 (UTC).
You did it all right. --Snek01 (talk) 13:09, 6 May 2017 (UTC)

Y Done

Could someone have one of their bots update this page frequently? A bot once updated it, but stopped in 2014. MCMLXXXIX 16:59, 16 February 2017 (UTC)

Hi 1989. We have Special:LongPages. Is that insufficient? --MZMcBride (talk) 05:30, 17 February 2017 (UTC)
Yes. The list only shows articles while the page I referenced has talk pages. MCMLXXXIX 09:21, 17 February 2017 (UTC)
I have popped up a page on tool labs that lists the fifty longest talk pages that are not user talk page or sub-pages. Hope this helps. - TB (talk) 12:35, 17 February 2017 (UTC)

All,

I found a broken link to an NTSB.gov accident brief on the White spirit page. When fixing it, I noticed that the new URL is not too different from the old one, such that it could probably be fixed by a bot. I don't know how to check to see how many URLs of this form are on Wikipedia, though.

The old URL format was: http://www.ntsb.gov/aviationquery/brief2.aspx?ev_id=20020916X01610&ntsbno=NYC02LA181&akey=1

The new URL format is: http://www.ntsb.gov/_layouts/ntsb.aviation/brief2.aspx?ev_id=20020916X01610&ntsbno=NYC02LA181&akey=1

The only difference is changing "aviationquery" to "_layouts/ntsb.aviation".
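Since only the path segment changes and the query parameters carry over untouched, the fix is a plain substring substitution. A sketch (function name illustrative):

```python
# Retired and replacement path segments in NTSB accident-brief URLs.
OLD_SEGMENT = "ntsb.gov/aviationquery/"
NEW_SEGMENT = "ntsb.gov/_layouts/ntsb.aviation/"

def fix_ntsb_url(url: str) -> str:
    """Rewrite the old NTSB path segment; other URLs pass through unchanged."""
    return url.replace(OLD_SEGMENT, NEW_SEGMENT)
```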

Thanks! — Preceding unsigned comment added by 174.58.153.20 (talk) 09:58, 3 March 2017 (UTC)

Only five such links existed in articles, so I fixed them manually. I was able to tell by looking at Special:LinkSearch/ntsb.gov/aviationquery/, if you're curious. Someguy1221 (talk) 10:04, 3 March 2017 (UTC)
Thanks! And now I know about that search, too. 174.58.153.20 (talk) 21:55, 9 March 2017 (UTC)

Per this RFC closure, "magic links" that are going away should be replaced by their corresponding templates. Is there anyone here who would be willing do these replacements with a bot? It may be best to start a separate BRFA for conversion ISBN, RFC, and PMID (three tasks), with an option to merge them into a single task in order to keep watchlist noise to a minimum.

The relevant tracking categories are Category:Pages using ISBN magic links, Category:Pages using PMID magic links, and Category:Pages using RFC magic links. The prospective bot operator may want to dig up the actual string test that determines whether a magic link is currently applied so that only the existing magic links are converted to templates. There are roughly 380,000 pages affected. – Jonesey95 (talk) 03:43, 20 March 2017 (UTC)
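For illustration only - the authoritative magic-link grammar lives in the MediaWiki parser, which a bot operator should mirror exactly - the ISBN case could look roughly like this (PMID and RFC are analogous but simpler):

```python
import re

# Simplified approximation of the ISBN magic-link pattern: "ISBN" followed
# by an optional 978/979 prefix and ten digit groups, hyphen- or
# space-separated, ending in a digit or X.
ISBN_RE = re.compile(r"\bISBN\s+((?:97[89][- ]?)?(?:\d[- ]?){9}[\dXx])\b")

def convert_isbn_magic_links(wikitext: str) -> str:
    """Wrap bare ISBN magic links in the {{ISBN}} template."""
    return ISBN_RE.sub(lambda m: "{{ISBN|" + m.group(1) + "}}", wikitext)
```

A real task would also need to skip text already inside templates, links, and comments.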

Wikipedia:Bots/Requests for approval/Yobot 27. --Edgars2007 (talk/contribs) 07:49, 20 March 2017 (UTC)
That closure predates the RFC. A new BRFA would presumably be approved. Headbomb {talk / contribs / physics / books} 09:55, 20 March 2017 (UTC)
BRFA filed. Primefac (talk) 14:14, 24 March 2017 (UTC)

I resubmitted my BRFA. -- Magioladitis (talk) 22:08, 16 April 2017 (UTC)

Fixing autocorrection error

MS Office tends to autocorrect of the into oft he. For some reason, also Wikipedia is affected (see insource:/ oft he /). I guess the errors may be fixed using AWB. --Leyo 14:24, 6 April 2017 (UTC)

A full search of Wikipedia mainspace produces 25 returns--and regardless of such, wouldn't be good for a bot to do per WP:CONTEXTBOT. --Izno (talk) 15:18, 6 April 2017 (UTC)
The search result is incomplete. --Leyo 16:08, 6 April 2017 (UTC)
Unlikely, and you are welcome to prove your statement. The regex search performed using your search link timed out for me at a similar magnitude (21?), and quotations for our purposes here work because we are only reviewing for the exact phrase. --Izno (talk) 17:13, 6 April 2017 (UTC)
Google, which might lag, finds 75 or so. --Izno (talk) 17:14, 6 April 2017 (UTC)
I don't think that's due to autocorrect but due to poor keyboard layout generally, something to do with thumb rhythm on the spacebar for very short words. I make the same mistake several times a day. I did it just now while typing "time sa day", then again typing "I did itj ustn ow". I've had too much coffee. Ivanvector (Talk/Edits) 17:24, 6 April 2017 (UTC)
This looks like a perfect Wikipedia:Adopt-a-typo candidate. And searching for "oft he" (including the quotation marks) should do the trick. P.S. I just fixed all of those, but "ofthe" appears to have another 50 or so instances in article space. As with "oft he", not all of them are errors. – Jonesey95 (talk) 02:54, 7 April 2017 (UTC)
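The naive replacement itself is a one-liner, with the caveat raised above: WP:CONTEXTBOT applies, since some occurrences (quoted archaic text, for instance, where "oft" is a real word) are intentional, so human review of each hit is still needed:

```python
import re

def fix_ofthe(text: str) -> str:
    """Repair the 'oft he' keyboard slip back to 'of the'."""
    return re.sub(r"\boft he\b", "of the", text)
```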

A bot to upload OCR of Marathi books

I am using this https://github.com/tshrinivasan/OCR4wikisource to generate OCR of marathi books and it requires bot permission to upload them on mr.wikipedia. — Preceding unsigned comment added by Vaibhawvipul (talkcontribs) 12:01, 27 April 2017 (UTC)

@Vaibhawvipul: No Wikipedia can host copyright violations, nor can any wiki on Wikimedia servers in fact. Did you write those books? --Izno (talk) 12:30, 27 April 2017 (UTC)
@Izno: I will just be uploading books under CC license. Especially these https://commons.wikimedia.org/wiki/Category:Books_in_Marathi .
@Vaibhawvipul: Why would you upload OCR of these text to mr.WP if they would probably be better on Commons? --Izno (talk) 13:07, 27 April 2017 (UTC)
Agreed, best place is Commons, then the Marathi Wikisource. All the best: Rich Farmbrough, 14:59, 6 May 2017 (UTC).

Requesting bot for wikisource

I'm not sure exactly what to say here, at least in part because I'm not sure exactly what functions we are necessarily seeking a bot to do. But there is currently a discussion at wikisource:Wikisource:Scriptorium#Possible bot about trying to get some sort of bot which would be able to generate an output page roughly similar to Wikipedia:WikiProject Christianity#Popular pages and similar for the portals, author, and categories over there at wikisource. I as an individual am not among the most knowledgeable editors there. On that basis, I think it might be useful to get input from some of the more experienced editors there regarding any major issues which might occur to either a bot developer or them but not me. Perhaps the best way to do this would be to respond at the first linked to section above and for the developer to announce himself, perhaps in a separate subsection of the linked to thread there, to iron out any difficulties. John Carter (talk) 14:31, 6 February 2017 (UTC)

How about a bot to update (broken) sectional redirects?

When a section heading is changed, it breaks all redirects targeting that heading. Those redirects then incorrectly lead to the top of the page rather than to the appropriate section.

Is this desirable and feasible? If so, how would such a script work? The Transhumanist 22:14, 6 February 2017 (UTC)

This may turn out to be a WP:CONTEXTBOT. How often do people delete the section entirely, or split the section into two (then which should the bot pick?), or revise the section such that the redirect doesn't really apply anymore? Can the bot correctly differentiate these cases from cases where it can know what section to change the target to?
Such a script would presumably work by watching RecentChanges for edits that change a section heading, and then would check all redirects to the article to see if they targeted that section. It would probably want to delay before actually making the update in case the edit gets reverted or further edited. Anomie 22:29, 6 February 2017 (UTC)
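Once the bot has decided a retarget is safe, the edit itself is mechanical. A sketch under assumptions (a real bot would also normalize section fragments the way MediaWiki does - underscores, HTML entities - and handle {{R to section}} tagging):

```python
import re

# Matches "#REDIRECT [[Page#Section]]", capturing page and section.
REDIRECT_RE = re.compile(r"^#REDIRECT\s*\[\[([^#\]]+)#([^\]]+)\]\]", re.I)

def retarget(redirect_text, old_heading, new_heading):
    """Point a sectional redirect at the renamed heading, if it matches."""
    m = REDIRECT_RE.match(redirect_text)
    if m and m.group(2) == old_heading:
        return "#REDIRECT [[{}#{}]]".format(m.group(1), new_heading)
    return redirect_text
```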

User's recognized content list

List like Wikipedia:WikiProject Physics/Recognized content generated by User:JL-Bot/Project content seems very neat. Is it possible to generate and maintain the same list, tied to a user instead of a Wikiproject? For example, I can use it to have a list of DYK/GA/FAs credited to me in my user page. HaEr48 (talk) 03:51, 23 January 2017 (UTC)

Let's ping JLaTondre (talk · contribs) on this. Headbomb {talk / contribs / physics / books} 04:43, 23 January 2017 (UTC)
He replied in User talk:JL-Bot#Generating User-centric recognized content and said that he doesn't have time to add this new feature right now, and the way such a thing can be implemented is a bit different from JL-Bot's existing implementation. So probably we need new bots. HaEr48 (talk) 06:50, 9 February 2017 (UTC)

Creating a list of red-linked entries at Recent deaths

I request a bot to create and maintain a list consisting of red-linked entries grabbed from the Deaths in 2017 page, as and when they get added there. These entries, as you may know, are removed from the "Deaths in ... " pages if an article about the subject isn't created in a month's time. It would be useful to maintain a list comprising just the red entries (from which they are not removed on any periodic basis) for editors to go through. This would increase the chances of new articles being created. Preferably at Wikipedia:WikiProject Biography/Recent deaths red list, or in the bot's userspace to begin with. (In the latter case, the bot wouldn't need any BRFA approval.) 103.6.159.71 (talk) 12:54, 15 February 2017 (UTC)
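The first step for such a bot is extracting the linked names from each list entry; deciding which of them are red links would then take an API existence query, stubbed out here. A parsing sketch (regex illustrative, matching the usual "* [[Name]], age, description" entry format):

```python
import re

# Captures the target of the first wikilink on a bulleted list entry.
ENTRY_LINK_RE = re.compile(r"^\*\s*\[\[([^|\]]+)")

def linked_titles(list_wikitext):
    """Return the linked titles from 'Deaths in ...' style list entries."""
    titles = []
    for line in list_wikitext.splitlines():
        m = ENTRY_LINK_RE.match(line)
        if m:
            titles.append(m.group(1))
    return titles
```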

Check book references for self-published titles

This is in response to this thread at VPT. So here's the problem. We have a list of vanity publishers whose works should be used with extreme caution, or never (some of these publishers exclusively publish bound collections of Wikipedia articles). But actually checking if a reference added to Wikipedia is on this list is time consuming. However, it occurs to me that in some cases it should be simple to automate. At any Amazon webpage for a book, there is a line for the publisher, marked "publisher". On any GoogleBooks webpage, there is a similar line to be found in the metadata hiding in the page source. If an ISBN is provided in the reference, it can be searched on WorldCat to identify the publisher.

So it seems to me like a bot should be able to do the following:

1) Watch recent changes for anything that looks like a link or reference to a book, such as a "cite book" template, a number that looks like an ISBN, or a link to a website like Amazon or GoogleBooks
2) Follow the link (if to Amazon or GoogleBooks), or search the ISBN (if provided), to identify the publisher
3) Check the publisher against the list of vanity publishers
4) Any positive hits could then be automatically reported somewhere on Wikipedia. There could even be blacklisted publishers (such as those paper mirrors of Wikipedia I mentioned) that the bot could automatically revert, after we're sure there are few/no false positives

What do people think? Doable? Someguy1221 (talk) 00:13, 16 February 2017 (UTC)
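Step 3 of the proposal reduces to a lookup against the vanity-press list. A minimal sketch (the entries and matching rule are illustrative; real matching would need fuzzier handling of imprints, punctuation, and suffixes like "LLC" or "Press"):

```python
# Illustrative entries only; the real list lives on-wiki.
VANITY_PUBLISHERS = {"vdm publishing", "alphascript publishing"}

def is_vanity(publisher: str) -> bool:
    """Normalize a scraped publisher string and check it against the list."""
    return publisher.strip().lower() in VANITY_PUBLISHERS
```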

WP:UAAHP

Hi, is it possible for a bot, such as DeltaQuadBot, to remove stale reports at the UAA holding pen (those blocked and those with no action in seven days), like it does with blocked users and declined reports at WP:UAAB? If this is not possible I would be happy to create my own bot account and have it do this task instead. Thanks! Linguisttalk|contribs 22:23, 28 January 2017 (UTC)

You should ask DeltaQuad if she would consider adding it to her bot. Also, is this something that the UAA admins want? — JJMC89(T·C) 22:39, 28 January 2017 (UTC)
I haven't asked the UAA regulars but I'm sure this would be helpful. In fact, I'm almost the only one who cleans up the HP and it would be helpful to me. Linguisttalk|contribs 22:41, 28 January 2017 (UTC)
This would certainly be helpful if it could remove any report that is more than seven days old, where the account has not edited at all. This is the bulk of what gets put in the holding pen, so keeping it up to date would be quite simple if these types of reports were removed automatically. Beeblebrox (talk) 22:10, 19 February 2017 (UTC)

Replace br tags with plainlists in Infobox television

I would like a bot to replace br tags with plainlists in Infobox television. -- Magioladitis (talk) 10:31, 26 February 2017 (UTC)

Please provide a few diffs to give an idea of which parameters, etc. we're talking about. ~ Rob13Talk 11:21, 27 February 2017 (UTC)

Reformation

Protestant Reformation was moved to Reformation. I'd like a bot to replace the many piped links [[Protestant Reformation|Reformation]] by the simple link. --Gerda Arendt (talk) 10:14, 27 February 2017 (UTC)

Needs wider discussion. @Gerda Arendt: That's probably more trouble than it's worth. Such an edit would be a purely cosmetic fix to the wiki markup, which violates WP:COSMETICBOT. You could seek consensus that this is a useful bot task at a broad community venue, but I doubt that would be easy to build. The piped links don't hurt anything, so probably not worth your time. ~ Rob13Talk 11:25, 27 February 2017 (UTC)
If it's not easy - looked easy enough to me - I will just do the "cosmetics" myself. --Gerda Arendt (talk) 11:32, 27 February 2017 (UTC)
@Gerda Arendt: It's technically trivial, but there would need to be strong consensus for overriding COSMETICBOT in this instance. That would have to come at a venue like WP:VPT or something similar, probably. ~ Rob13Talk 11:41, 27 February 2017 (UTC)
As I said before: If it's not easy (which includes easy to achieve) I will do the cosmetics myself ;) --Gerda Arendt (talk) 11:45, 27 February 2017 (UTC)

Gerda Arendt It's not cosmetic to actually point to the correct article. -- Magioladitis (talk) 12:06, 27 February 2017 (UTC)

I did it for the four templates on the page, but agree that there, it didn't even fall under pipe linking. --Gerda Arendt (talk) 12:10, 27 February 2017 (UTC)
Piece of cake. -- Magioladitis (talk) 12:18, 27 February 2017 (UTC)

This may not be WP:COSMETICBOT, but it certainly is WP:NOTBROKEN. Headbomb {talk / contribs / physics / books} 12:27, 27 February 2017 (UTC)

It's misleading to pipe to a redirect when the correct term is shown on the text. Moreover, it only had to change 4 templates. -- Magioladitis (talk) 12:34, 27 February 2017 (UTC)
How does that mislead? How does bypassing a redirect change the visual output of the page (or anything else, for that matter). ~ Rob13Talk 12:35, 27 February 2017 (UTC)

Headbomb, see what happens when the mouse hovers over the link. Thanks for clarifying that Rob's argument was wrong. -- Magioladitis (talk) 12:37, 27 February 2017 (UTC)

Do you mean this link: Reformation? What the mouse hovering reveals is that we are complicated, but perhaps that needs to be shown a few more times. - Of course the visual appearance is the same, but why go via a redirect? In articles I write, I will not do that. --Gerda Arendt (talk) 12:46, 27 February 2017 (UTC)

Gerda Arendt did not ask for [[Protestant Reformation]] to change but only for [[Protestant Reformation|Reformation]]. -- Magioladitis (talk) 12:38, 27 February 2017 (UTC)

This is textbook WP:NOTBROKEN. I don't see what's so hard to understand about it. Headbomb {talk / contribs / physics / books} 12:40, 27 February 2017 (UTC)

NOTBROKEN reads: It is almost never helpful to replace [[redirect]] with [[target|redirect]]. It says nothing about replacing [[redirect|target]] with [[target]], because that would allow people to link through misspellings, invalid redirects, etc. The first case shows the correct title when the mouse hovers over the link; the second is misleading. -- Magioladitis (talk) 12:43, 27 February 2017 (UTC)

Related(?) comment: I try to imagine how my neighbors would react to a link like this one: [[FYROM|Republic of Macedonia]]. Hehe. -- Magioladitis (talk) 12:55, 27 February 2017 (UTC)

  • Whether or not changing all instances of [[Protestant Reformation|Reformation]] to [[Reformation]] by bot is WP:COSMETICBOT is subject to WP:CONSENSUS. I think Rob was right to point out that such consensus needs to be established before a bot can operate such a task. Thus far the discussion here shows no consensus either way. If my opinion were asked I'd side with those who adopt the COSMETICBOT approach.
  • Regarding the applicability of WP:NOTBROKEN it is true that the case is not explicitly mentioned in that guidance. Nonetheless I think the fact that "Protestant Reformation" shows up on mouseover is generally an advantage. Not all readers would in all contexts see the connection to the Protestantism-related topic when seeing Reformation in an article: the mouseover clarifies that even without clicking the bluelink. So "don't fix what isn't broken" applies as a general principle I think (whether or not the specific case is literally explained in the NOTBROKEN guidance). --Francis Schonken (talk) 13:19, 27 February 2017 (UTC)
    • My disagreement is on the pipe. If we agree that the correct place to link is "Protestant Reformation", then the link in the page should be [[Protestant Reformation]] (independently of the fact that this is a redirect). My example is meant to show that. In Greek articles I would use FYROM per the Manual of Style and avoid the use of RoM in all cases; this does not depend on the fact that the English Wikipedia uses RoM as the page title. -- Magioladitis (talk) 13:31, 27 February 2017 (UTC)
      • IMHO your argumentation is misleading. There's nothing "misleading" in [[Protestant Reformation|Reformation]] (whether with or without mouseover). The [[FYROM|Republic of Macedonia]] example is of no relevance: we're not discussing it, we're discussing changing all instances of [[Protestant Reformation|Reformation]] to [[Reformation]] by bot. If people need to understand the intricacies regarding FYROM / Republic of Macedonia before they can understand what's going on regarding (Protestant) Reformation I think that strengthens what I was trying to say above: not all people would upon reading "Reformation" automatically understand that in Wikipedia context this usually means "Protestant Reformation", in which case what shows up on mouseover is helpful. --Francis Schonken (talk) 13:45, 27 February 2017 (UTC)

One-off bot to ease archiving at WP:RESTRICT

This isn't urgent, or even 100% sure to be needed, but it looks likely based on this discussion that we will be moving listings at WP:RESTRICT to an archive page if the user in question has been inactive for two years or more. Some of the restrictions involve more than one user and would require a human to review them, but it would be awesome if a bot could determine that if a user listed there singly had not edited at all in two or more years it could automatically transfer their listing to the archive. There are also some older restrictions that involved a whole list of users (I don't think arbcom does that anymore), and in several of those cases all of the users are either blocked or otherwise totally inactive. This would only be needed once, just to reduce the workload to get the archive started. (the list is extremely long, which is why this was proposed to begin with) Is there a bot that could manage this? Beeblebrox (talk) 18:46, 4 February 2017 (UTC)
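The inactivity test itself is trivial once the bot has each user's last edit timestamp (available from the API's user-contributions query). A sketch, with the two-year cutoff approximated as 730 days:

```python
from datetime import datetime, timedelta

def is_archivable(last_edit: datetime, now: datetime) -> bool:
    """True if the listed user has not edited in roughly two years."""
    return (now - last_edit) >= timedelta(days=730)
```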

Ongoing would be better, and even bringing back "resurrected" users might be helpful too. All the best: Rich Farmbrough, 01:01, 5 February 2017 (UTC).
 Doing... All the best: Rich Farmbrough, 23:25, 13 February 2017 (UTC).
Awesome, the discussion was archived without a formal close, but consensus to do this is pretty clear. Beeblebrox (talk) 20:57, 15 February 2017 (UTC)
I don't mean to be a pest, and as always I note I know nothing about bot operations, but I am wondering if this is still happening at some point? Beeblebrox (talk) 06:28, 1 March 2017 (UTC)

RfD maintenance bot

There are several aspects to a redirect's history that admins need to check before closing a discussion as "delete" (WP:RFD#HARMFUL). A maintenance bot could speed up processing by noting (within the discussion) whether the redirect has a significant edit history (by count or bytes added) or previous moves from that page, etc. I'm thinking of something in line with the AIV clerking bot or the clerking bot that once worked the renames page, for instance. Previously discussed at Wikipedia talk:Redirects for discussion#Maintenance bot. czar 01:32, 7 March 2017 (UTC)
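The "significant edit history" flag could be computed from the revision list the API returns for the redirect. A sketch with placeholder thresholds (the numbers here are illustrative, not policy; a real clerk bot would also check the move log):

```python
def has_significant_history(revision_sizes, max_revs=2, max_bytes=500):
    """Flag a redirect whose history has more than a couple of revisions,
    or any revision above a small byte threshold."""
    return (len(revision_sizes) > max_revs
            or any(size > max_bytes for size in revision_sizes))
```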

Autoassess drafts

As a follow-up to the task to autoassess redirects, I would like to propose the similar autoassessment of drafts in draftspace. That is, drafts in draftspace tagged with WikiProjects as "stubs" or "C"-class having those parameters commented out so that the template can automatically and correctly autoassess the draft as draft-class. @Enterprisey czar 05:12, 1 March 2017 (UTC)

@Enterprisey: Could you handle this? Seems like a very easy extension of your existing task. Just comment out the code determining what is a redirect and instead pull a list of all pages in the draft namespace. ~ Rob13Talk 05:15, 1 March 2017 (UTC)
As an observation, not all WikiProject banners recognise Draft-class - all those with the vanilla "standard" scale do not (for example, {{WikiProject Beauty Pageants}}, {{WikiProject Community}}), and some of those with custom scales do not do so either (for example, {{WikiProject The Simpsons}}, {{WikiProject South Park}}). In such cases, if |class= is blank or absent, pages in draft space are automatically set to NA-class; this also happens when explicitly set to |class=draft. --Redrose64 🌹 (talk) 11:02, 1 March 2017 (UTC)
Sounds like a pretty good idea. I don't know, however, whether there currently are enough drafts with project banners that call them stubs or C-class articles to justify such a task (while reviewing, I certainly don't see many drafts with talk pages, let alone talk pages with banners), but hey, if people think it's useful, I can implement it. Bit busy IRL, so ping me here again for further responses. Enterprisey (talk!) 21:09, 1 March 2017 (UTC)
Re: projects that don't classify drafts/redirects, the way I see it, it's better to have a bot correctly tag the article as NA than to leave it with the wrong classification. Either way, someone from the project needs to evaluate whether the project banner still belongs on that talk page—the only difference then is whether the classification should stay NA or incorrectly stub/start/etc. czar 01:40, 7 March 2017 (UTC)
A bot shouldn't need to tag it NA explicitly (it's hard to think of any situation where |class=na is useful). If all the bot does is to blank out the value of the |class= parameter, the code within the subtemplates of {{WPBannerMeta}} (around which most WikiProject banners are built) will autodetect whether the page in subjectspace is a redirect, also which namespace the page is in, and pass a suitable value to the code specific to that WikiProject, which will then automatically set Redirect-Class, Draft-Class, or NA-Class as appropriate. If a WikiProject which didn't recognise Draft-Class later has its project banner amended so that drafts are recognised, there will be no need to send the bot around again.
When the article is moved to mainspace, hopefully its talk page will be moved too, and if |class= is still not set, then the page will automatically change from Draft-Class or NA-Class to Unassessed. --Redrose64 🌹 (talk) 09:40, 7 March 2017 (UTC)
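The "blank out the |class= value" step described above could be sketched as follows (the regex is illustrative and handles only the simple inline case; a real bot would use a template parser such as mwparserfromhell rather than pattern matching):

```python
import re

# Matches a class parameter explicitly set to stub/start/C, keeping the
# "|class=" prefix as group 1 and requiring the next "|" or "}" to follow.
CLASS_RE = re.compile(r"(\|\s*class\s*=\s*)(?:stub|start|c)\b(?=\s*[|}])",
                      re.IGNORECASE)

def blank_class(banner_wikitext: str) -> str:
    """Blank the class parameter so the banner autodetects Draft-class."""
    return CLASS_RE.sub(r"\1", banner_wikitext)
```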

Script works intermittently

Hi guys, I'm stuck.

I forked the redlink remover script above, with an eye toward possibly developing a bot from it in the future, after I get it to do what I need on pages one-at-a-time. But first I need to get it to run. Sometimes it works and sometimes it doesn't (mostly doesn't).

For example, the original worked on Chrome for AlexTheWhovian but not for me. But later, it started working for no apparent reason. I also had the fork I made (from an earlier version) working on two machines with Firefox. But I turned one off for the night. And in the morning, it worked on one machine and not the other.

The script I'm trying to get to work is User:The Transhumanist/OLUtils.js.

I'm thinking the culprit is a missing resource module or something.

Is there an easy way to track down what resources the script needs in order to work? Keep in mind I'm a newb. The Transhumanist 01:41, 11 February 2017 (UTC)

After some trial and error, I learned the following: in Firefox, if I run the Feb 28 2016 version of User:AlexTheWhovian/script-redlinks.js and if I use it to strip redlinks from a page (I didn't save the page), then I can load the 15:05, December 26, 2016 version and it works.

Does anyone have any idea why using one script (not just loading it) will cause another script to work? I'm really confused. The Transhumanist 05:33, 11 February 2017 (UTC)

Maybe one has dependencies that it doesn't load itself, instead relying on other scripts to load them. --Redrose64 🌹 (talk) 21:55, 11 February 2017 (UTC)
The author said it was stand-alone. (They are both versions of the same script.) I now have them both loaded, so I can more easily use the first one (User:The Transhumanist/redlinks Feb2016.js) to enable the other (User:The Transhumanist/OLUtils.js). Even the original author doesn't know why it isn't working.
What's the next step in solving this? The Transhumanist 06:46, 12 February 2017 (UTC)
You've changed the outer part: that's what I would suspect, maybe not loading the mw library properly. Possibly the best way is to make the changes step-by-step, with a browser restart between. (Or better still, binary chop.) All the best: Rich Farmbrough, 22:32, 13 February 2017 (UTC).
@Rich Farmbrough: Fixed it. Wasn't easy. And it was strange. It required another script to be loaded just to work -- turned out they shared a localStorage slot name. Translated the lingo line by line, and discovered it was working weird because of a function invocation being placed out of context, at the start of the script. There was also a function invocation missing from a conditional in the body of the script. The script now runs stand-alone (without the crutch of the other version being run). What a headache that was. The Transhumanist 03:31, 11 March 2017 (UTC)
@The Transhumanist: Good work! All the best: Rich Farmbrough, 14:35, 11 March 2017 (UTC).

Featured article candidate statistics

Since early last summer I've been manually collecting information about the featured article candidates process. I've been using this information to post statistical data, including

  • monthly summaries of who has done the most reviewing at FAC (example)
  • notes at FAC discussing the statistics (example)
  • raw data, in a form that could be cut and pasted into a spreadsheet (example)
  • summarized and massaged forms of the data (example)

A couple of weeks ago I proposed at WT:FAC that a bot should place a count of some reviewing and nominating statistics against nominators' names at FAC, as is done at WP:GAN. That would require the data to be accessible to a bot. There's no consensus to do this yet; the discussion is ongoing and an RfC is probably going to be required eventually to decide the question. Even if it's not done in exactly that form, most of the alternatives discussed there would also require a bot.

In addition, the data may be of interest to others, and it's not queryable at the moment. For example, the data (in combination with some other bot-accessible data) would allow one to answer these questions:

  • How many reviews does it take on average to promote a featured article?
  • What's the correlation between editor experience/edit count and chances of getting an article promoted?
  • How much does it help to have previously successfully nominated a featured article?
  • Does peer review help promote an article? How about A-class?
  • Which are the most prolific reviewers?
  • Are any reviewers strongly correlated with successful promotion of the articles they review?
  • What's the count of nominations and reviews for a given editor?

No doubt there are dozens of other questions that could be asked.

I don't know if this is exactly a bot request, but here's what I would like to ask for. I would like some way to make the data I harvest from FAC accessible to any editor to run this sort of query. This is probably initially a toolserver page or something like that. I suppose it could be done with a bot that watches a request page, but a toolserver page seems more natural to me. I have the data in mysql tables at the moment, and can run some of the above queries myself, but some of them require integrating with other data such as edit count, or WP:WBFAN. If someone can write such a tool, I can provide the data in whatever format is needed, and would do so on an ongoing basis. Is anyone interested in working on such a thing? Or is there a better page (maybe WP:VPT) to post this? Mike Christie (talk - contribs - library) 23:43, 15 March 2017 (UTC)
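As a sketch of the kind of query this data would support: the schema below (nominations and reviews tables, with the column names invented for illustration; the actual MySQL tables are not described here), run in an in-memory SQLite database, answers the "how many reviews does it take on average to promote a featured article" question from the list above.

```python
import sqlite3

# Hypothetical schema; table and column names are assumptions for
# illustration, not the structure of the real harvested FAC data.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE nominations (nom_id INTEGER PRIMARY KEY,
                              article TEXT, outcome TEXT);
    CREATE TABLE reviews (nom_id INTEGER, reviewer TEXT);
    INSERT INTO nominations VALUES (1, 'Foo', 'promoted'),
                                   (2, 'Bar', 'archived');
    INSERT INTO reviews VALUES (1, 'A'), (1, 'B'), (1, 'C'), (2, 'A');
""")

# Average number of reviews received by promoted nominations.
avg_reviews = con.execute("""
    SELECT AVG(cnt) FROM (
        SELECT COUNT(*) AS cnt
        FROM reviews r
        JOIN nominations n ON n.nom_id = r.nom_id
        WHERE n.outcome = 'promoted'
        GROUP BY r.nom_id)
""").fetchone()[0]
```

The correlation-style questions (editor experience, prior nominations) would need joins against external data such as edit counts, which is why a queryable toolserver page is more natural than one-off scripts.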

Replacement of the same PNG->SVG in multiple articles

Disclosure: I have asked the question at Wikipedia:AutoWikiBrowser/Tasks#Replacement of the same PNG->SVG in multiple articles before, but not gotten an answer. I have asked the question at Wikipedia:Reference desk/Computing#Replacement of the same PNG->SVG in multiple articles and have been referred here.

Dear Botmasters. I create SVG replacements for PNG images, as requested in Commons:Top 200 logo images that should use vector graphics. Of course I start with those most valuable, as judged by the number of wikipedia articles that use them. Once I have uploaded the SVG to Commons and tagged it as being an svg replacement for the png in question, the wikitext in those articles still needs to be updated. Is there a way to automate or semi-automate the replacement of these images in the articles' wikitext? The images I deal with are very common logos, used in 100s of articles spread over multiple wikipedias, in the case of icons sometimes several 1000s of articles. Doing this by hand is insane. Can you please help me? Thank you very much. — Preceding unsigned comment added by Lommes (talkcontribs) 20:13, 9 March 2017 (UTC)

See c:Commons:GlobalReplace. --Edgars2007 (talk/contribs) 07:44, 16 March 2017 (UTC)

Move GA reviews to the standard location

There are about 3000 articles in Category:Good articles that do not have a GA review at the standard location of Talk:<article title>/GA1. This is standing in the way of creating a list of GAs that genuinely do not have a GA review. Many of these pages have a pointer to the actual review location in the article milestones on the talk page, and these are the ones that could potentially be moved by bot.

There are two cases, the easier one is pages that have a /GA1 page but the substantive page has been renamed. An example is 108 St Georges Terrace whose review is at Talk:BankWest Tower/GA1. This just requires a page move and the milestones template updated. Note that there may be more than one review for a page (sometimes there are several failed reviews before a pass). GA reviews are identified in the milestones template with the field actionn=GAN and the corresponding review page is found at actionnlink=<review>. Multiple GA reviews are named /GA1, /GA2 etc but note that there is no guarantee that the review number corresponds to the n number in actionn.

The other case (older reviews, example 100,000-year problem) is where the review took place on the article talk page rather than a dedicated page. This needs a cut and paste to a /GA1 page and transcluding the review back onto the talk page. This probably needs to be semi-automatic, with some sanity checks by a human, at least for a test run (has the bot actually captured a review, is it a review of the target article, did it capture all of the review). SpinningSpark 08:30, 22 January 2017 (UTC)
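For the renamed-article case, planning the moves reduces to comparing each GAN entry's actionnlink against the standard location. A minimal sketch, with the function name mine, and with one simplifying assumption the request warns about: it numbers reviews in the order the GAN actions appear, which is not guaranteed to match the existing /GAn suffixes.

```python
def ga_review_moves(article_title, milestones):
    """Plan page moves for GA reviews not at Talk:<article>/GAn.

    milestones: (action, link) pairs taken from the article history
    template, e.g. [("GAN", "Talk:BankWest Tower/GA1")].
    """
    moves = []
    review_number = 0
    for action, link in milestones:
        if action != "GAN":
            continue  # only GA nominations carry review subpages
        review_number += 1
        target = f"Talk:{article_title}/GA{review_number}"
        if link != target:
            moves.append((link, target))
    return moves
```

For the 108 St Georges Terrace example, this yields a single move from Talk:BankWest Tower/GA1 to Talk:108 St Georges Terrace/GA1; the milestones template would still need updating separately.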

Discussion at Wikipedia talk:Good articles/Archive 14#Article incorrectly listed as GA here? and Wikipedia:Village pump (technical)/Archive 152#GA reviews SpinningSpark 08:37, 22 January 2017 (UTC)

Bumping to keep thread live. SpinningSpark 00:51, 25 March 2017 (UTC)

Remove 'image requested' category from biographies w/photo in infobox

Maybe this applies to a more general category of articles as well. Please see my comment on the WikiProject Biography talk page. ~Eliz81(C) 22:15, 26 March 2017 (UTC)

Alexa rankings

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
Whoops, didn't notice there was already a section on this. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 04:12, 28 March 2017 (UTC)

Would it be possible to have a bot periodically update Alexa rankings? I think User:OKBot used to do this but it got blocked due to a malfunction and never got fixed. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 05:02, 22 March 2017 (UTC)

See also #Bot to update Alexa ranks. --Edgars2007 (talk/contribs) 04:00, 26 March 2017 (UTC)

The above discussion is closed. Please do not modify it. Subsequent comments should be made in a new section.

Bot to mass move articles with year ranges

There should be a bot that moves all articles with a date range of the form "yyyy–yy" in parentheses in their titles to titles with date ranges of the form "yyyy–yyyy". Requiring four-digit end years is established policy per MOS:DATERANGE. GeoffreyT2000 (talk, contribs) 16:38, 28 March 2017 (UTC)

Any person or bot attempting to fulfill this request will have to be mindful of the exceptions allowed in the RFC outcome. For example, sports season articles, of which there are many (e.g. 1952–53 Boston Celtics season), should not be moved. – Jonesey95 (talk) 19:30, 28 March 2017 (UTC)
This is probably a "not done" due to WP:CONTEXTBOT. --Izno (talk) 02:14, 29 March 2017 (UTC)
Could somebody give link to RFC? We might do this step by step, for example at first we move people article and then move forward. --Edgars2007 (talk/contribs) 12:42, 30 March 2017 (UTC)
I agree. As I mentioned before, we need this bot, that can also correct it in the article's texts. I've corrected thousands of articles but it is too big of a task for a human. --ExperiencedArticleFixer (talk) 14:48, 30 March 2017 (UTC)
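To illustrate just the mechanical transformation (not the hard part, which is excluding titles the RFC exempts, such as sports season articles like 1952–53 Boston Celtics season), a sketch of expanding a parenthesised yyyy–yy range, with the function name mine:

```python
import re

# A two-digit end year inside parentheses, joined by an en dash.
RANGE_IN_PARENS = re.compile(r"\((\d{4})–(\d{2})\)")

def expand_title_range(title):
    m = RANGE_IN_PARENS.search(title)
    if not m:
        return title  # nothing to do
    start, end2 = m.group(1), m.group(2)
    # Assume the end year shares the start year's century, unless that
    # would make it earlier than the start (e.g. 1999-02 spans 1999
    # to 2002, not 1999 to 1902).
    end = start[:2] + end2
    if int(end) < int(start):
        end = str(int(start[:2]) + 1) + end2
    return title[:m.start()] + f"({start}–{end})" + title[m.end():]
```

Even with the century logic, a human (or at minimum a log review) would be needed for the exception classes, which is the WP:CONTEXTBOT concern raised above.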

Non-free image rationales linking to redirects

Could a bot produce a list (at user:Thryduulf/NFUR with redirects) of images in Category:Wikipedia non-free files where the description page includes a link to a redirect in the article namespace. Ideally I'd like this list as a sortable table with two columns, the first with a link to the file description page, the second with a link to the redirect. Each redirect should get a separate entry in the table (so if a file links to two redirects it gets two entries).

After the report is generated I may request further bot action (possibly as part of the bot request above), but what will depend on the scale and details of what is found here. Thanks. Thryduulf (talk) 16:41, 6 April 2017 (UTC)

Create a bot for formatting periods of time

We have thousands of navboxes violating WP:MoS, for example with - instead of – between years in timespans, or with spaced ndashes or mdashses instead of nonspaced ndashes between two years. --ExperiencedArticleFixer (talk) 18:15, 24 March 2017 (UTC)

WP:COSMETICBOT: which small horizontal line is used, and whether it is perfectly formatted, is not in any way a crucial or even important issue. Beeblebrox (talk) 19:44, 24 March 2017 (UTC)
That something is 'crucial' or not is beside the point. Replacing - with – (and similar changes) in navboxes when appropriate is a fine task for a bot, assuming it can avoid improperly dashing things that should not be dashed. Headbomb {talk / contribs / physics / books} 19:56, 24 March 2017 (UTC)
I agree that if we wanted to make sure we were absolutely consistent in which small horizontal line we used and how they were spaced, that is a perfect task for a bot. I just don't see it as being worth the effort to program and execute a task that makes a tiny visual difference that 99.99% of our readers won't notice and wouldn't care about if they did. Beeblebrox (talk) 20:13, 24 March 2017 (UTC)
Well, it is worse if a human has to do it! I personally have corrected thousands manually... So, you guys admit it is a perfect task for a bot, I'm proposing it. --ExperiencedArticleFixer (talk) 23:06, 24 March 2017 (UTC)
There's no "you guys", just me, and I'm not actually a bot operator. That being said, you say it's worse if a human has to do it, and I would argue that nobody has to do it and the encyclopedia suffers no harm of any kind from having the "wrong" small horizontal line somewhere. Bots are here to help improve Wikipedia and defend it from those who try to harm it, not to enforce slavish obedience to obscure punctuation rules made by the few persons who actually think it makes a difference whether we use a hyphen or a dash. Beeblebrox (talk) 22:10, 30 March 2017 (UTC)
The task is perfect for a bot, and "who cares" isn't a valid objection. If someone wants to code a bot for this, it'll be approved if it meets technical requirements. Headbomb {t · c · p · b} 16:49, 6 April 2017 (UTC)
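The replacement itself is a one-liner; everything difficult lives in deciding where it may run (the WP:CONTEXTBOT objection). A sketch of only the mechanical part, with the names mine:

```python
import re

# A year range joined by a hyphen, en dash, or em dash, possibly
# spaced; MOS:DATERANGE wants an unspaced en dash.
YEAR_RANGE = re.compile(r"\b(\d{4})\s*[-–—]\s*(\d{2}|\d{4})\b")

def fix_year_range(text):
    return YEAR_RANGE.sub(r"\1–\2", text)
```

A real run would have to restrict this to timespan contexts in navboxes (e.g. parenthesised terms of office) and skip anything like ISBNs, phone numbers, or template parameters where the hyphen is meaningful.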

Per Wikipedia:Redirects for discussion/Log/2017 March 20#Wikipedia:Wikipedia:Five pillars I'm requesting that the following task be performed by a bot:

Also, in the spirit of that request, I'm also requesting the following two tasks as I don't see them being controversial either:

Thanks! Steel1943 (talk) 20:58, 5 April 2017 (UTC)

Also, please do not replace any links on any page that starts with "Wikipedia:Redirects for d". Steel1943 (talk) 16:55, 6 April 2017 (UTC)
Actually, come to think of it, please only replace the links that appear in the "User talk:" namespace. (I'll add that to this request.) Steel1943 (talk) 16:59, 6 April 2017 (UTC)
  • As long as the bot works properly, I can't imagine why someone would oppose this request; it seems like a basic maintenance procedure that ought easily to be approved. Nyttend (talk) 16:30, 6 April 2017 (UTC)
@Nyttend: That's what I was thinking as well. Either way, I brought this request to your attention since I know per the discussion, you are against the redirects being deleted, but wasn't sure if you were against their current links being replaced. Steel1943 (talk) 16:42, 6 April 2017 (UTC)
Support, for what it's worth, ought to be entirely uncontroversial (see WP:BRINT, specifically "spelling errors and other mistakes should be corrected"). Whoever codes the bot may notice that I replaced about 800 of these links already with AWB, before admitting that I was not faster at it than a bot would be. Ivanvector (Talk/Edits) 17:20, 6 April 2017 (UTC)
Steel1943, there's a huge difference between "don't delete this redirect" and "don't change this redirect". It's like any other erroneous link that's been used a lot: it should be kept (in the deletion sense) because it's present in lots of places, but it shouldn't be kept (in the present-in-current-versions-of-pages sense) because it's an error. Nyttend (talk) 00:22, 7 April 2017 (UTC)
@Nyttend: I know; I was, more or less, ensuring that I wasn't assuming your stance on replacing the links. I thought there wouldn't be any controversy, but I pinged the participants in the discussion to make sure. Steel1943 (talk) 02:10, 7 April 2017 (UTC)
Sure, and thank you! Aside from everything else, it demonstrates that you're not trying to stack the deck. Nyttend (talk) 02:13, 7 April 2017 (UTC)

File PROD bot

Following the consensus at WT:PROD, PROD is being extended to files. The agreement is also that {{Deletable image-caption}} is to be placed by a bot on pages that use the file. The template should also be removed by the bot when the file is deleted or de-PROD'ed. This is essentially a revival of Sambot 11 with different categories. — Train2104 (t • c) 17:11, 24 March 2017 (UTC)

{{BOTREQ|doing}} Sounds like this might be fun to implement. I'm at WMCON in Berlin this week so coding time is limited, but I'll go ahead and "claim" this task. I've got Twinkle stuff to work on too in relation to the new file PROD process MusikAnimal talk 14:49, 27 March 2017 (UTC)
{{BOTREQ|coding}} MusikAnimal talk 13:16, 4 April 2017 (UTC)
@Train2104: I regret to say I'm going to have to back out of this. I will admit I claimed this without giving enough thought into the implementation, and for that I apologize. For me, this is an extremely challenging bot task, and unfortunately to do this right it would require more time than I have to offer. It really doesn't help that the syntax to add images is quite variable and in many ways unpredictable, as opposed to templates which have a more consistent structure. There are some Python libraries and frameworks out there to help with this, such as the MediaWiki Parser from Hell, and even AWB, so bot operators who work with these technologies may be able to do this bot task much easier than I can. Sorry to have misled you! I got somewhat far with my approach to parsing the image syntax, so if another botop needs a few pointers feel free to ping me. Best MusikAnimal talk 20:07, 8 April 2017 (UTC)

Non-free image bot

Could someone code a bot that does the following:

  1. Checks each non-free image on the site. (Category:Wikipedia non-free files)
  2. For each article the non-free image is used in, check if that article's name is somewhere within the file description page.
  3. If not, comment out that image from that specific article with the following edit summary "File lacks a non-free use rationale for this article; see WP:NFCC#10c"

This would be incredibly helpful. ~ Rob13Talk 14:50, 6 April 2017 (UTC)
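A hedged sketch of the two core operations (the helper names are mine, and the comment-out step only handles plain bracketed file links; infobox fields and galleries embed images without the [[File:...]] syntax and would need separate parsing):

```python
import re

def lacks_rationale(description_wikitext, article_title):
    # NFCC#10c asks that the rationale name the article; MediaWiki
    # treats underscores and spaces in titles as equivalent.
    normalised = article_title.replace("_", " ")
    return normalised not in description_wikitext.replace("_", " ")

def comment_out_image(article_wikitext, filename):
    # Handles only plain [[File:...]] uses on one line; captions with
    # nested links defeat the non-greedy match and need a real parser.
    pattern = re.compile(
        r"\[\[(?:File|Image):" + re.escape(filename) + r".*?\]\]")
    return pattern.sub(lambda m: "<!-- " + m.group(0) + " -->",
                       article_wikitext)
```

The naive string containment test also produces false positives (an article named in a rationale for a different use), so a production bot would want to parse the rationale templates rather than the raw page text.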

We had one once, but it died; I'm certain that I requested a replacement some time in the last two years, it's probably in this page's archives. --Redrose64 🌹 (talk) 15:27, 6 April 2017 (UTC)
Coding... — JJMC89(T·C) 08:34, 9 April 2017 (UTC)
BRFA filed. — JJMC89(T·C) 05:50, 10 April 2017 (UTC)

I discovered today in this article that Cornell University, the #1 site for looking up U.S. law code, changed the format for their URLs. Since I do not run a bot nor have the time to learn, I'm requesting someone to write a very simple script.

Here are example old and new URLs:

Old format looks like this:

http://www#.law.cornell.edu/uscode/uscode##/usc_sec_##_########^----###-.html
  • Note that I do not know if any old format links have just www. so assume that they can.
  • Similarly, assume www##. can exist just in case.
  • The ^ represents a letter that may or may not exist.

This is typical, which means a script would need to crawl through every page on the site, find URLs in the old format, and then:

  • Change http: to https:
  • Change www#.law to www.law
  • Change /uscode##/ to /text/#/ or /text/##/
    • Make sure to remove leading zeroes entirely, e.g. 11 stays 11, but 08 becomes 8
  • Remove usc_sec_##_ entirely.
  • Remove leading zeroes from ########
    • i.e. 00001101 becomes 1101
  • Include any letter following the 8 digit number but before ----
  • Remove ----###-.html entirely.

Probably also need to change the accessdate on the refs. - sarysa (talk) 00:40, 7 April 2017 (UTC)
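A direct transcription of the rewrite rules listed above into a single function (the function name is mine, and per the reply below the real link corpus has variations this pattern will not cover, so unmatched URLs are left alone):

```python
import re

# Old format: http://www#.law.cornell.edu/uscode/uscode##/
#             usc_sec_##_########^----###-.html
# where # is a digit and ^ an optional letter.
OLD_URL = re.compile(
    r"http://www\d*\.law\.cornell\.edu/uscode/uscode(\d{1,2})/"
    r"usc_sec_\d{1,2}_(\d{8})([A-Za-z]?)----\d{3}-\.html")

def convert_cornell_url(url):
    m = OLD_URL.match(url)
    if m is None:
        return None  # not an old-format link; leave it alone
    title = str(int(m.group(1)))                 # strip leading zeroes: "08" -> "8"
    section = str(int(m.group(2))) + m.group(3)  # "00001101" -> "1101", keep letter
    return f"https://www.law.cornell.edu/uscode/text/{title}/{section}"
```

int() handles both leading-zero removals, and the optional letter captured before the ---- run is appended to the section number as the rules require.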

Coding... Your example "old" ref is actually incorrect, and there's a ton of variation, but I count about 6000+ articles from which to base my regex, so I'll start looking into it. Primefac (talk) 20:15, 9 April 2017 (UTC)
BRFA filed. Primefac (talk) 14:53, 10 April 2017 (UTC)

Adoption Category Backlog

Hi! My name is Jamesjpk. Lately I have been patrolling the category Wikipedians seeking to be adopted in Adopt-a-user. In this category, people are listed because they want experienced users to help them grasp the rules of Wikipedia. The main problem with this category, however, is that there are many inactive users. It takes a long time to find users that need adoption and that are active enough for adoption. A guideline in this category is that a user should be removed if they have been inactive for over a month. The problem is that listings in this category are sometimes over a year old. I think a bot would help this situation.

Bot Steps:

1. Check if a page is a talk page or a user page.

If a page in the category is a talk page or a user page move onto step 2.
If a page in the category is not a talk page or a user page, then move onto step 2b

2. Check to see if the user has edited in the past month.

If the user has edited in the past month, then move onto the next user.
If the user has not edited in the past month, move onto step 3.
2b. Remove the {{adoptme}} tag from the page.
Move onto step 2c once the {{adoptme}} tag has been removed
2c. Put in the edit summary "This template should be moved to a user page or a user talk page"

3. Remove the {{adoptme}} tag from the page.

Move onto step 4 once the {{adoptme}} tag has been removed

4. Check if a page is a talk page or a user page.

If the page is a user page, then fill the edit summary with "See talk page" and go onto step 5.
If the page is a talk page, then remove the {{adoptme}} tag and put {{subst:Adoption notice expired for bots}} at the bottom of the page.

5. Move onto the user's talk page, and add {{subst:Adoption notice expired for bots}} at the bottom of the page. — Preceding unsigned comment added by Jamesjpk (talkcontribs) 19:47, 9 April 2017 (UTC)
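The steps above boil down to one decision per tagged page. A sketch of that decision as a pure function (the return strings and function name are mine; the actual tag removal and notice posting would be separate edit operations):

```python
USER_NS, USER_TALK_NS = 2, 3  # MediaWiki namespace numbers

def adoptme_action(namespace, days_since_last_edit):
    """Decide what to do with one page carrying {{adoptme}} (steps 1-5)."""
    if namespace not in (USER_NS, USER_TALK_NS):
        # Steps 2b/2c: wrong namespace, remove the tag and say so.
        return "remove-tag-wrong-namespace"
    if days_since_last_edit <= 30:
        return "skip"  # step 2: the user is still active
    if namespace == USER_NS:
        # Steps 4-5: summary points at the talk page; notice goes there.
        return "remove-tag-notify-talk"
    # Talk page itself: remove the tag and add the expiry notice here.
    return "remove-tag-add-notice"
```

The only external inputs the bot needs are the page's namespace and the user's last contribution timestamp, both available from the MediaWiki API.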

Doing... ProgrammingGeek talktome 12:04, 10 April 2017 (UTC)

BRFA filed ProgrammingGeek talktome 17:27, 12 April 2017 (UTC)
Just in case you haven't seen it, the policy is on the top of Category:Wikipedians seeking to be adopted in Adopt-a-user. It says

— Preceding unsigned comment added by The Transhumanist (talkcontribs) 01:31, 14 April 2017 (UTC)

Bot to update Alexa ranks

OKBot apparently used to do this, but blew up in April 2014 and has never been reactivated. It would be quite handy as there's a lot of articles that contain Alexa ranks and they do change frequently. Triptothecottage (talk) 05:35, 18 February 2017 (UTC)

@Triptothecottage: I've made a bot request on Wikidata (here), because it seems an easier solution to put the data there so it can be used on other projects as well. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
10:06, 15 April 2017 (UTC)
@Jc86035: Nice! Thanks! Triptothecottage (talk) 10:26, 15 April 2017 (UTC)

Remove notability tags from articles kept at AfD

Articles whose most recent AfD closed with a result of "keep" should have their notability tags removed if still present. I have done this to Raj Bhavan (Assam), After School (film), Machine (film), and Chakradhari (1954 film); maybe this could be a task for a bot. GeoffreyT2000 (talk, contribs) 04:45, 31 March 2017 (UTC)

In my experience with AfD the losing side may not appreciate a blanket removal. Probably needs an RfC to show consensus. -- GreenC 21:01, 14 April 2017 (UTC)
Agree. "Keep" includes "Keep for now" and is not a judgement about notability explicitly. All the best: Rich Farmbrough, 17:51, 20 April 2017 (UTC).

Bot to copy a message from a soft redirect user talk page

Recently a bot posted a message on the talk page of my public editing account. I have that page set as a soft-redirect to the talk page on my main account, so I was surprised to see that the bot couldn't pick up on this. Is there any way a bot could be made to identify messages posted to redirect pages and relocate them over to the main page? I know there aren't that many soft-redirect talk pages, but it still couldn't hurt.--Molandfreak (talk, contribs, email) 19:52, 3 April 2017 (UTC)

User:Femto Bot used to do this for me. I can look at re-activating that task. All the best: Rich Farmbrough, 17:53, 20 April 2017 (UTC).

Bot to "manually" categorize articles that are currently auto-categorized by an infobox

If the year of start is defined (for example, "2014"), Template:Infobox Asian comic series currently automatically categorizes all the articles it or its child templates (like Template:Infobox manhwa) are used in as Category:2014 comics debuts. I want to override this automatic categorization, so I asked about it on its talk page, which started this short discussion. I was told to request a bot which "manually" categorizes the articles that use these infoboxes, before the feature will be deleted, which is why I'm here. ~Mable (chat) 12:38, 26 March 2017 (UTC)

 Doing... ~ Rob13Talk 20:11, 30 March 2017 (UTC)
@Maplestrip: To clarify, based on the discussion, I should skip categorizing any pages that already have a webcomic debut category on them, correct? ~ Rob13Talk 20:12, 30 March 2017 (UTC)
@BU Rob13: That would be nice, if possible. Else, I'd have to manually remove them afterwards, which I can do, but it would be a waste of time of course. ~Mable (chat) 09:41, 31 March 2017 (UTC)
This is still on my "to-do"; waiting for my two pending bot tasks to be approved. May take a bit. ~ Rob13Talk 21:47, 9 April 2017 (UTC)
@Maplestrip:  Done semi-automatically with AWB. Note some of the edit summaries are screwed up and just said "Unicodifying" - I accidentally hit the scroll wheel while hovering over the edit summary box of AWB while using it to scroll through diffs for errors. That changes the edit summary to an old version, and it took me about 50 edits to catch the mistake. The actual edits themselves are fine, though, and you can take out the categorization from the template for start year. ~ Rob13Talk 02:04, 15 April 2017 (UTC)
@BU Rob13: Thank you! I'll be sure to request the template to be edited right away! :) ~Mable (chat) 07:39, 15 April 2017 (UTC)

WP 1.0 bot

Hi, there is a problem with WP 1.0 bot, which produces the various project assessment tables and logs of changes to article assessment. The bot has not been working since early February and both of the stated maintainers are no longer active. We need someone to get the bot operating again and possibly a couple of people who could take over the maintenance of the bot. Any offers? Keith D (talk) 23:51, 19 February 2017 (UTC)

@Keith D: Oh no ... that's a bit of a mess. Have you tried reaching out to the maintainers via email? In the absence of a handing over of existing code, a bot operator would need to start from scratch. Someone could definitely do it (not me ... but someone), but it would take longer. ~ Rob13Talk 00:13, 20 February 2017 (UTC)
Thanks, sorry for not putting brain in gear I had forgotten about e-mail. Keith D (talk) 13:08, 20 February 2017 (UTC)
@Keith D and BU Rob13: There is a formal procedure for adding maintainers/taking over an abandoned project: Tool Labs Abandoned tool policy. --Bamyers99 (talk) 14:47, 20 February 2017 (UTC)
Is there any update on this? The bot is still not running properly, although it does run if you start it manually. I understand that Theo has been contacted. Is there anyone with the skill to maintain this bot, which is quite critical for many WikiProjects. Thanks, Walkerma (talk) 15:39, 22 March 2017 (UTC)
Most of what it used to do can be done automatically by having a full set of WikiProject categories. Whether it has advanced since then I don't know.
All the best: Rich Farmbrough, 20:43, 24 April 2017 (UTC).

I request a bot to remove any wikilink at a page that redirects back to the same page or section. Thanks. Anythingyouwant (talk) 03:08, 22 February 2017 (UTC)

It isn't clear to me exactly what is being asked for. If a link points to a redirect that points to a different section in the original article, the link should not be removed. If anything, it should be replaced with a section link, see Wikipedia:Manual of Style/Linking#Section links (second paragraph). Anyway, in all cases care is needed for redirects that have recently been created from existing articles. Sometimes such redirects are controversial and will be reverted. Thincat (talk) 08:55, 22 February 2017 (UTC)
I said the same section, not a different section. Here is an example of what the bot would do. In that example, the redirect has existed for years (since 2013).Anythingyouwant (talk) 17:22, 22 February 2017 (UTC)

This is already part of CHECKWIKI. I can do them semi-automatically. -- Magioladitis (talk) 17:51, 22 February 2017 (UTC)

@Magioladitis: This is not a good idea. Redirects with possibilities. All the best: Rich Farmbrough, 21:16, 24 April 2017 (UTC).
True. I only fix those where the target actually matches the page title. -- Magioladitis (talk) 22:38, 24 April 2017 (UTC)

Undelete deleted draft namespace redirects

Consensus was achieved at Wikipedia:Village pump (proposals)/Archive 135#Draft Namespace Redirects for not deleting draft namespace redirects for pages moved to the article namespace from the draft namespace. A bot should therefore undelete all deleted draft namespace redirects resulting from moves by Captain Assassin!, KGirlTrucker81, and perhaps some other users with valid mainspace targets per WP:RDRAFT. GeoffreyT2000 (talk) 19:24, 20 April 2017 (UTC)

I don't know if I would agree that a consensus to not delete these redirects going forward amounts to consensus to undelete those already deleted. (Full disclosure: I'm sure that this would also affect hundreds of draft redirects that I have deleted.) bd2412 T 23:35, 24 April 2017 (UTC)

Episode list sublist template

Per the discussion at Wikipedia talk:WikiProject Television § Episode list sublist template, the usage of {{Episode list/sublist}} is to be updated.

  • From: {{Episode list/sublist|Article being transcluded TO}}
  • To: {{Episode list/sublist|Article being transcluded FROM}}

The following edits would first need to be made to Module:Episode list (which I can easily do as a template editor), but immediately after (this is where the bot request comes in), the following regular-expression find-and-replace would need to be executed by a bot on all articles that use the template (approximately 3,700), to cause as little disruption as possible between the module implementation and the replacement run.

  • Find: \{\{Episode list\/sublist\s*\|(.*)
  • Replace: {{Episode list/sublist|{{subst:PAGENAME}}

If you've any questions, don't hesitate to ask. Cheers. -- AlexTW 12:10, 3 April 2017 (UTC)
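A sketch of what the bot-side replacement amounts to once the page title is in hand (which is what {{subst:PAGENAME}} would supply on-wiki). The find pattern above captures to end of line; this version assumes the target article is the first unnamed parameter on its own line, and the function name is mine:

```python
import re

# First unnamed parameter of {{Episode list/sublist|...}}, stopping
# at a pipe, a closing brace, or end of line.
SUBLIST = re.compile(r"\{\{Episode list/sublist\s*\|[^|}\n]*")

def retarget_sublist(wikitext, page_title):
    # Swap the "transcluded TO" target for the page the row lives on
    # ("transcluded FROM"), per the new convention.
    return SUBLIST.sub(lambda _: "{{Episode list/sublist|" + page_title,
                       wikitext)
```

Running this on all ~3,700 transclusions immediately after the module change is what keeps the two conventions from being visible side by side.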

Alex, just a quick question - in your replace, shouldn't it be {{subst:PAGENAME}} instead of %%title%%? Or are we throwing scrib coding directly into the article? reping Primefac (talk) 16:55, 8 April 2017 (UTC)
@Primefac: [6] Missed the initial ping, yeah. Now, %%title%% seemed to work when I was testing out the find-and-replace in previews, but if {{subst:PAGENAME}} works better, then definitely! -- AlexTW 17:11, 8 April 2017 (UTC)
If it works, it works. BRFA filed. Primefac (talk) 17:33, 8 April 2017 (UTC)

Comment:  %%title%% always works, {{subst:PAGENAME}} does not work within ref tags. All the best: Rich Farmbrough, 21:46, 28 April 2017 (UTC).

Tagging run for Wikipedia:WikiProject Women in Red, part 2

See User:Headbomb/sandbox7 for details.

Basically, I'll be compiling lists of articles to be tagged (one per WiR Edit-a-thon, basically, to be tagged with the edit-a-thon specific template).

Right now I've done lists 1 and 2 myself. I've got lists 3/4/5 ready for a bot, with instructions, and the number of articles to be tagged is staggering (at least for manual runs). I need someone to do this tagging, because I can't.

Once tagging is done, I'll create new lists and work my way up to the most recent Edit-a-Thon (#41). Nothing is urgent, but if I could have someone to work with me over the next few days, that would be nice. Headbomb {t · c · p · b} 20:36, 20 April 2017 (UTC)

 Done All the best: Rich Farmbrough, 21:24, 25 April 2017 (UTC).
Rich Farmbrough You mind if I ping you when I compile the next lists? Headbomb {t · c · p · b} 23:15, 26 April 2017 (UTC)
That's fine. From the discussion, though, I think most of the later ones are substantially done. All the best: Rich Farmbrough, 06:29, 27 April 2017 (UTC).
Rich Farmbrough you missed 1 with {{WIR-3}} and quite a few with {{WIR-4}}. Not quite sure what caused this, but I fixed it. As for the other ones, the estimate in that discussion is off; I'm not sure most are tagged. In 5 editathons in 2015, there were 736 articles created/improved. If this scales, with 27 editathons in 2016, we're looking at ~4000 articles, rather than the current ~1900 tagged. Headbomb {t · c · p · b} 21:57, 28 April 2017 (UTC)

A bunch of ideas are being bounced around there; it would be helpful to have input on just how much bots could help with these types of things, like question redirects. The Transhumanist 21:43, 27 March 2017 (UTC)

Might as well create a bot to throw coins into a fountain. --Redrose64 🌹 (talk) 23:21, 27 March 2017 (UTC)
In the face of accelerating change? The Transhumanist 02:10, 31 March 2017 (UTC)
@The Transhumanist and Redrose64: It might be relatively easy to implement this feature using Attempto Controlled English. Jarble (talk) 04:31, 22 May 2017 (UTC)

Bot questions

There are bot questions at Wikipedia:Village pump (idea lab)/Archive 23#Question redirects. The Transhumanist 23:23, 28 March 2017 (UTC)

Add information on endangered species to Taxobox

I started trying to add the following to the critically-endangered plants on the List_of_critically_endangered_plants#Orchidales:

  • taxobox
| status = CR
| status_system = IUCN3.1
| status_ref = <ref name="iucn redlist">''IUCN database entry page for the species here''</ref>

So I got to Bulbophyllum moratii. In alphabetical order. And that's just the orchids.

It's pretty repetitive. I'd love to have a bot do at least some of this; the IUCN Redlist apparently has a decent API and gives a scientific species name to Zotero scrapers. Such a bot would also be really useful for updating the threat categorizations regularly. For real bonus points, it could also read some of the other status systems, as some endangered species have not been assessed by the IUCN (e.g.), but that would be icing. HLHJ (talk) 21:02, 21 April 2017 (UTC)
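The three parameters listed above are plain wikitext lines, so a bot could render them from the IUCN data once fetched. A minimal illustrative sketch (the function name and the ref-note argument are invented for illustration; the real ref would be a proper IUCN citation):

```python
def taxobox_status_params(ref_note, status="CR", status_system="IUCN3.1"):
    """Render the three conservation-status parameters from the list
    above as wikitext lines ready to paste into a taxobox."""
    ref = '<ref name="iucn redlist">%s</ref>' % ref_note
    return ("| status = %s\n"
            "| status_system = %s\n"
            "| status_ref = %s" % (status, status_system, ref))
```

The status value and system would come from whatever API lookup the bot performs; this only shows the final wikitext assembly step.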

The data is already (AIUI) in Wikidata. Why not modify the template to display it from there (and expend any bot effort in keeping Wikidata up-to-date, where the work benefits not one, but 295 Wikipedias)? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:27, 3 May 2017 (UTC)

Specific archival bot that posts to TP of thread's original poster

We Teahouse folks would like a new archival bot, see here for the details. The Teahouse is intended for newbies, who might not understand what archival does, and could therefore lose track of their questions if they do not log in in time. The idea is to post to their talk page when a thread gets archived with a link to the correct archive.

The "archival" part is already handled by existing bots, but I convinced myself that the "post to OP's page" part is nontrivial, because determining the OP is hard (posts regularly get refactored, thread titles change, etc.). As far as I know that second task cannot be handled by any existing bot, but I know very little about WP bots, so I post here in the hope of a pleasant surprise. TigraanClick here to contact me 17:23, 4 May 2017 (UTC)

I wonder if this would need to look at timings - so if someone has edited after the last comment, perhaps the note could be considered redundant. All the best: Rich Farmbrough, 20:56, 5 May 2017 (UTC).
@Rich Farmbrough: Once the OP is determined, it shouldn't be too hard to check whether the last post in the thread (match a "User talk:" + timestamp sequence) is theirs and refrain from posting the archival notice, with the assumption that it was a "thank you" or similar. (That was your intention, right?) But in some cases it could be a request for clarification that went unanswered, and then not posting the note is bad. I agree it is something to think about. TigraanClick here to contact me 11:40, 10 May 2017 (UTC)
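The "match a user link plus timestamp" idea above can be sketched like this (a rough heuristic only — real threads get refactored, as noted, so this is illustrative, not bot-ready; the function names are invented):

```python
import re

# One signature = a User/User talk link followed on the same line by the
# standard "HH:MM, D Month YYYY (UTC)" timestamp.
SIG = re.compile(
    r"\[\[[Uu]ser(?:[ _]talk)?:([^\]|#/]+)[^\n]*?"
    r"(\d{2}:\d{2}, \d{1,2} [A-Z][a-z]+ \d{4} \(UTC\))"
)

def signatures(thread_wikitext):
    """Return (username, timestamp) pairs in order of appearance."""
    return [(m.group(1).strip(), m.group(2)) for m in SIG.finditer(thread_wikitext)]

def needs_archive_note(thread_wikitext):
    """Heuristic from the discussion above: the OP is the first signer;
    skip the talk-page note when the OP is also the last signer."""
    sigs = signatures(thread_wikitext)
    if not sigs:
        return None  # no OP found - leave for a human
    return sigs[0][0] != sigs[-1][0]
```

As Tigraan notes, the "OP signed last" case can also be an unanswered follow-up question, so a real bot would need more context than this check alone.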

Encode all URLs containing -{ (dash curly-bracket)

It was announced in TechNews that -{ in URLs should be encoded to %2D{ due to code fixes to the preprocessor. Ideally it would encode to %2D%7B because the opening curly-bracket is a reserved character in CS1|2. -- GreenC 02:52, 9 May 2017 (UTC)

This mw:page implies that the problems have been fixed on en.WP. DePiep may know more. – Jonesey95 (talk) 03:13, 9 May 2017 (UTC)
That's correct, I've encoded them as %2D{ in URLs on enwiki, so kept the curly bracket. Other lang wikis to be done (see the linked mw page); I need AWB access in those. -DePiep (talk) 09:07, 9 May 2017 (UTC)
  • So, some 9 wikis left need those edits in URL and regular code (like chemical names), ~1000–1500 pages altogether. I planned to do this by WP:AWB, also using the visual check (for false positives listed). That requires AWB permission on each wiki. Or is there an easier bot route? -DePiep (talk) 09:17, 9 May 2017 (UTC)
If it was me I would go with AWB, since it gives future capabilities to run AWB there, whereas a bot gives only perms for a one-time run. If you want a bot though I could write one in awk in about 10 minutes. -- GreenC 16:35, 10 May 2017 (UTC)
Thanks. AWB is preferred because there are false positives, hard to exclude (e.g. in REGEX strings, in protected pages/edit requests, ...). Propose: Consider closed here. -DePiep (talk) 19:41, 10 May 2017 (UTC)
But Help:CS1 says that the { should be escaped when it occurs in URLs used in citation template parameters. So only when a "-{" occurs in a URL in a citation template parameter do you need to escape both of those characters. If the "-{" occurs in a URL outside of that, it is sufficient to just escape "-". But of course, it won't hurt and is safe to escape both those characters. SSastry (WMF) (talk) 18:11, 11 May 2017 (UTC)
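The two cases described in this thread boil down to a one-line substitution either way. A sketch (function name invented for illustration):

```python
def encode_dash_brace(url, in_cs1=False):
    """Escape the "-{" sequence per the thread above: "%2D{" is enough in
    plain wikitext; inside CS1|2 citation parameters the "{" is a reserved
    character too, so both characters get percent-encoded."""
    return url.replace("-{", "%2D%7B" if in_cs1 else "%2D{")
```

As SSastry notes, encoding both characters everywhere is also safe, so a bot could simply always use the `in_cs1=True` form.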

"de-amazon" bot

Per this discussion, many links to Amazon.com can be replaced by more neutral links, like the magic ISBN. Can someone please consider this concept and see whether we can come to a consensus on what is suitable to replace? I presume that replacement should primarily take place in the citations and further-reading sections - in external links sections most can probably be summarily removed (though some external links(-sections) tend to be actually further reading sections). --Dirk Beetstra T C 10:28, 23 April 2017 (UTC)

I've had a stab at several manual replacements. The citations, {{cite book}}, need 3 fixes (I think):
  • remove the |asin= parameter or the |id={{ASIN|...}} parameter where a populated |isbn= or |oclc= exist.
(an example of the id= usage)
  • fetch the ISBN from Amazon using the ASIN, populate |isbn= & remove |asin=.
  • fetch the OCLC number from Worldcat using the title, populate |oclc= & remove |asin=.
OCLC contains more than just books, so other cite templates may need to be looked at too. Hope that helps, Cabayi (talk) 13:12, 23 April 2017 (UTC)
Note that some Amazon links contain ISBNs or are followed by ISBNs. The links can be removed and replaced with "ISBN nnnnnnnnnn", or simply removed and replaced with the text content of the link. If an ISBN is present after an Amazon link to a book's title, this regex has worked well for me:
/\[https*:\/\/[a-z]+\.amazon[a-z\:\/\.\d\-\?\&\=\+\_\%\#\,]+\s+(.+?(?=[\]]))\]/gi, '$1'
Jonesey95 (talk) 14:23, 23 April 2017 (UTC)
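For anyone scripting this, Jonesey95's pattern above drops into Python's `re` module almost verbatim (the pattern is reproduced as posted; only the wrapper function is invented):

```python
import re

# Jonesey95's regex above, unchanged apart from Python syntax: it matches a
# bracketed external link to any amazon domain and keeps only the link text.
AMAZON_LINK = re.compile(
    r"\[https*:\/\/[a-z]+\.amazon[a-z\:\/\.\d\-\?\&\=\+\_\%\#\,]+\s+(.+?(?=[\]]))\]",
    re.IGNORECASE,
)

def strip_amazon_links(wikitext):
    """Replace [https://www.amazon... Title] with plain "Title"."""
    return AMAZON_LINK.sub(r"\1", wikitext)
```

Per the surrounding discussion, a bot would only do this where an ISBN (or other identifier) is present to take the link's place.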
An ASIN is not necessarily also an ISBN. If it is entirely numeric, has 13 digits, and begins either 978 or 979, then it's an ISBN. If it is 10 characters long, and is either entirely numeric or consists of 9 digits followed by the letter X, then it's an ISBN. --Redrose64 🌹 (talk) 23:06, 23 April 2017 (UTC)
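Redrose64's rules translate directly into a small check (a sketch; the function name is invented, and this only tests the format, not the ISBN check digit):

```python
def asin_is_isbn(asin):
    """Apply the rules above: a 13-digit ASIN starting 978/979, or a
    10-character ASIN that is all digits or 9 digits plus a final X,
    is really an ISBN."""
    if len(asin) == 13 and asin.isdigit() and asin[:3] in ("978", "979"):
        return True
    if len(asin) == 10 and (asin.isdigit() or
                            (asin[:9].isdigit() and asin[9] == "X")):
        return True
    return False
```

Amazon-native identifiers (the "B0..." style) fail both tests, which is exactly the set that would need an external ISBN/OCLC lookup.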
See also Category:CS1 maint: ASIN uses ISBN. – Jonesey95 (talk) 23:09, 23 April 2017 (UTC)
While it's a Keep, it's as a reference/link of last resort to be avoided where more non-commercial alternatives such as {{OCLC}}, |oclc=, {{ISBN}}, or |isbn= exist. This request is not for simple removal of ASINs but for their replacement where possible. Cabayi (talk) 14:28, 25 April 2017 (UTC)
I agree there, doesn't amazon also already have ISBN numbers on the product description pages? - Knowledgekid87 (talk) 13:20, 26 April 2017 (UTC)
For books, yes. Hence the suggested step above,
  • fetch the ISBN from Amazon using the ASIN, populate |isbn= & remove |asin=.
Cabayi (talk) 13:58, 26 April 2017 (UTC)
Amazon has a valuable resource for users: actual text that often allows them to verify a statement without access to the physical book. ISBN, ASIN and OCLC have no such advantage. The majority of Amazon links I have seen on Wiki are designed for this usage. This usage is entirely free and non-commercial. Rjensen (talk) 13:54, 26 April 2017 (UTC)
Rjensen, {{ISBN}} links to Special:BookSources which links to Google, Amazon (multiple Amazon sites not just the .com site), local libraries and online indexes. That seems more inclusive & open than just one Amazon link to me. Cabayi (talk) 14:02, 26 April 2017 (UTC)

As I sort of stated in the WT:WPSPAM discussion, I'd support a bot to replace all replaceable Amazon links with ISBN links to Special:BookSources. I'd also consider having the bot replace {{ASIN}} with {{ISBN}} or another template containing the identifiers if an ISBN and other identifiers can be found, and leave other instances of {{ASIN}} where no ISBN can be found alone. Jo-Jo Eumerus (talk, contributions) 14:59, 26 April 2017 (UTC)

There is a unique function helpful to users and chosen by Wiki editors that vanishes if the Amazon link is replaced by Special:BookSources, which is hard to use and does NOT tell us if there is free text to read. The issue is not the existence of the book, it is the existence of an extensive excerpt. Rjensen (talk) 15:27, 26 April 2017 (UTC)
Amazon is just one of many book sellers online, one of many who print book excerpts. We don't favor one commercial book seller over another. If the citation is to the book excerpt, i.e. the contents of the book, then it should be a book citation with a page number. Amazon product page content is not stable anyway. -- GreenC 15:58, 26 April 2017 (UTC)
it is not true that Amazon is "one of many who print book excerpts" Removing the link hurts our readers. Rjensen (talk) 05:28, 27 April 2017 (UTC)
And those excerpts are still available through the ISBN for those that have an ISBN. And I'd like to see how many excerpts are only available on Amazon (and have that number put into context with other links). --Dirk Beetstra T C 06:34, 27 April 2017 (UTC)
Comment I've done web scraping of Amazon product pages and they are blocking bots. I got around it with a VPN that changes IP every couple page checks, but it's really slow. -- GreenC 15:58, 26 April 2017 (UTC)
@Green Cardamom: I saw http://search.cpan.org/dist/Net-Amazon/lib/Net/Amazon.pm (which is for perl, which I generally use on Wikipedia bots). That seems to be capable of hooking into their 'REST interface'. Maybe the main site is blocked for bots, and they have an 'API' for bot interfacing? --Dirk Beetstra T C 04:00, 27 April 2017 (UTC)
@Beetstra: Yes they have API (plural) but I never looked at it. I just did, took a while but found this page [7] for how to register and sending a SOAP request which the Perl library uses. Using their 'Product Advertising API' to de-Amazon Wikipedia would be a worthwhile thing. -- GreenC 04:44, 27 April 2017 (UTC)
Found a Amazon PHP Library if any PHP programmers are interested. -- GreenC 05:02, 27 April 2017 (UTC)
...and an Amazon Python Library for the trifecta. -- GreenC 05:15, 27 April 2017 (UTC)
Down the rabbit hole.. Alternatives to Amazon API. -- GreenC 05:26, 27 April 2017 (UTC)
I read that using the Amazon API requires an account(s) and you have to prove the API is being used for the purpose of selling Amazon products -- thus its name 'Product Advertising API' -- by providing a link to a website where the API results will be deployed. Thus using the API to de-Amazon Wikipedia would almost surely require subterfuge hiding its true intent. -- GreenC 15:33, 27 April 2017 (UTC)
@Green Cardamom: Now don't tell me that we use this API to FILL the references? If so, we are exactly at the point where we should be.
Anyway, all others that can legitimately be de-amazonized could be done by bot. We can do the others by hand if no automated process can be defined. --Dirk Beetstra T C 04:10, 1 May 2017 (UTC)
Screwed up ping ... the mediawiki software should just understand who I want to ping :-). @Green Caramom:. --Dirk Beetstra T C 04:11, 1 May 2017 (UTC)
@Beetstra: As you know, this edit did not notify Green Cardamom (talk · contribs) - but this one will not have done either. --Redrose64 🌹 (talk) 11:27, 1 May 2017 (UTC)
There are other ways: LibraryThing, WorldCat, OpenLibrary - have good APIs. -- GreenC 14:10, 1 May 2017 (UTC)

There are about 5,000 such links. I started removing them, but there are relatively few that are "well defined" - then we lost power so I will have to start from scratch. All the best: Rich Farmbrough, 20:38, 12 May 2017 (UTC).

Arbitrary break

Primefac is an idiot and doesn't know how to read. Ignore this section.
  • Comment that original linked thread is almost 4:1 in opposition against blacklisting Amazon. Personal feelings aside, I'm seeing little to no support to remove/replace Amazon via bot, and anything else is going to fall afoul of WP:CONTEXTBOT. Primefac (talk) 18:16, 26 April 2017 (UTC)
The opposition is to blacklisting Amazon because it's a convenient link that beats nothing, and can be used to populate templates automatically. But everyone also agrees the Amazon link is inappropriate when we have an ISBN. Headbomb {t · c · p · b} 18:20, 26 April 2017 (UTC)
I am with Headbomb here. We regularly run into sites where the linking is sub-optimal, but where the editor finds great convenience in using the link (I am sure that the reasoning here can also be applied to google-books, though that is less aimed at selling, and many other book sites). On all those there is a sauce of spammers that do add (commercial links to) their books to Wikipedia articles in the hope to sell some of their books because readers are following their commercial links. Many of such links could be 'neutralized' through ISBN. It is a bit less convenient, but the functionality, the convenience, is still there (one step more, you go to Special:ISBN, choose Amazon/Google/whatever). If we can run this on Amazon, we can likely also run this on many other online-book retailers. --Dirk Beetstra T C 04:00, 27 April 2017 (UTC)
Primefac, the closing summary you gave at Wikipedia:Templates for discussion/Log/2017 April 13#Template:ASIN, while accurately recording the Keep, overlooked the strong sentiment expressed that it should be kept only as a source of last resort. This discussion here is not about blind removal of ASINs, but their removal where better, more neutral, less commercial alternatives already exist, or their replacement where the alternatives can be found. Please don't simplify the contending positions to the point where their essence is lost. Cabayi (talk) 08:59, 27 April 2017 (UTC)
You're exactly right, I rather thoroughly misread this thread and the non-!voting implications of the original linked discussion. Consider myself fully trouted. Primefac (talk) 11:58, 27 April 2017 (UTC)

Parameter titles

Could someone write a bot which replaces all parameter aliases of {{Extra chronology}}, {{Extra track listing}}, {{Extra album cover}}, {{Audiosample}}, {{Infobox album}}, {{Singles}}, {{Infobox single}} and {{Infobox song}} with the preferred parameter names? (For the templates without TemplateData, preferred should be either the outermost parameter in a nested set or – if the parameter is not a proper noun – the lowercase version, with underscores not spaces. For {{Singles}} the preferred parameters are probably singlen and singlendate, although I haven't asked yet.) {{Infobox single}} and {{Infobox song}} are currently being merged, so it may help to replace or substitute one of them in the process. I have not discussed this, although per MOS:INFOBOX lowercase parameters are preferred. Thanks, Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
17:30, 10 May 2017 (UTC)
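The alias-to-preferred-name replacement requested above amounts to a table-driven rewrite of parameter names. A deliberately tiny sketch (the alias map here is invented for illustration; the real list would come from each template's TemplateData, and a real bot would have to respect nested templates, which this does not):

```python
import re

# Hypothetical alias map - the real one would be built per template.
ALIASES = {
    "Artist": "artist",
    "Last single": "last_single",
    "This single": "this_single",
    "Next single": "next_single",
}

def rename_params(wikitext):
    """Rewrite "| Alias =" to the preferred lowercase/underscore name.
    A sketch only: it does not handle nested templates or pipes in values."""
    def repl(m):
        name = m.group(1).strip()
        return "| %s =" % ALIASES.get(name, name)
    return re.sub(r"\|\s*([^=|{}\n]+?)\s*=", repl, wikitext)
```

Unknown parameter names pass through unchanged, so a partial map is safe to run.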

@Jc86035: Would this not be a case of WP:COSMETICBOT? Might be worth getting this added to WP:AWB instead, so it's caught while doing other tasks. Mdann52 (talk) 23:02, 10 May 2017 (UTC)
@Mdann52: Because of the template merger every instance of {{Infobox single}} and {{Infobox song}} will have to be gone through (it technically wouldn't count as cosmetic because e.g. |writer= would be changed to |written= and so would the displayed text), so some of the other templates could be dealt with then. Would doing it semi-manually with AWB be allowed for the remaining {{Infobox album}} transclusions? Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
01:37, 11 May 2017 (UTC)
Doing... @Jc86035: I'll get on this later today - thanks for the additional explanation. If you could start a discussion on the merger and replacement, that world be great. Mdann52 (talk) 07:05, 11 May 2017 (UTC)
@Mdann52: The RfC to merge the templates has ended, so the templates could be merged as soon as Ojorojo thinks the merged template ({{Infobox single/sandbox}}, which would probably replace the contents of {{Infobox song}}) is ready. I've added notices on both templates' talk pages as well as WT:SONGS. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
07:56, 11 May 2017 (UTC)

I'm currently cleaning Infobox album with AWB. I've got about 3500 articles left to do. I'll add this change to my code, and do it for those articles while I'm removing defunct fields and fixing the formatting mess that a lot of them are in. It will make the bot run easier as I'm having to do a lot of manual fixes. - X201 (talk) 07:51, 11 May 2017 (UTC)

@X201: Are you doing this for the subtemplates as well? It would be very helpful if those were fixed as well. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
07:56, 11 May 2017 (UTC)
@Jc86035: Not at the moment, I could though. I've been concentrating on clearing Category:Pages using infobox album with unknown parameters. I haven't looked at the sub-templates (although I've fixed any that were causing errors). What needs fixing in them apart from the Capital First Letter? - X201 (talk) 08:31, 11 May 2017 (UTC)
@X201: Other than lowercase letters and underscores, I'm not sure although {{Audiosample}} might have some values of |type= (aka |background=) which aren't correct (e.g. "khaki" instead of "single"). I nominated that for merging with {{Extra music sample}} so it's probably best taken care of later. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
09:49, 11 May 2017 (UTC)
I've got the lowercase code working for Infobox album, I'll add the sub-templates and fix them as I go. - X201 (talk) 20:52, 11 May 2017 (UTC)
@X201: Note that {{Extra chronology}} could potentially use some parameter fixing, if we're deprecating the old parameters (they don't function identically to the new ones; I've corrected the TemplateData). Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
08:52, 12 May 2017 (UTC)

@Mdann52 and X201: I don't think another BRFA is necessary; AnomieBOT seems to handle this well enough. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
09:56, 19 May 2017 (UTC)

Convert refbegin blocks to use unordered lists

The objective is to change all references in a block starting with {{Refbegin}} in combination with the parameter "indent=yes" and ending with the template {{refend}}. For these blocks, all lines in between starting with : should be changed to *. This converts them from an incomplete 'definition list' format into an unordered list format, which is better for screen readers (accessibility) and is more semantic. As can be seen on Template:Refbegin/testcases#V2_indentation, the styling for both has been synchronised, and once this conversion has taken place, the old method will be deprecated completely and will no longer style with hanging indentation. —TheDJ (talkcontribs) 12:09, 12 May 2017 (UTC)

Refinement. Should skip the page if the contents of the block contain a line starting with ; —TheDJ (talkcontribs) 18:51, 12 May 2017 (UTC)
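The conversion rule plus the refinement above can be sketched roughly as follows (an illustrative sketch only, not the actual bot code; it treats the page as a list of lines and bails out on ";" lines as requested):

```python
def convert_refbegin_block(lines):
    """Convert ":"-indented lines between {{Refbegin|indent=yes}} and
    {{refend}} into "*" list items; return None (skip the page) if any
    line in the block starts with ";", per the refinement above."""
    out, in_block = [], False
    for line in lines:
        low = line.lower()
        if "{{refbegin" in low and "indent=yes" in low.replace(" ", ""):
            in_block = True
        elif "{{refend" in low:
            in_block = False
        elif in_block:
            if line.startswith(";"):
                return None
            if line.startswith(":"):
                line = "*" + line[1:]
        out.append(line)
    return out
```

Jc3s5h's multi-paragraph concern below would need an extra skip rule (e.g. a block where a ":" line is followed by a continuation line), which this sketch does not attempt.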

Please provide examples of this "problem" in existing articles. My concern is that editors may have created references with the bibliographic information in a first paragraph and explanatory information in a second paragraph, like this:

References

  • Astronomical Applications Department. (2011 July 2). Multiyear Computer Interactive Almanac Version 2.2.2. Washington: U.S. Naval Observatory.
Calculate longitude of Sun using apparent geocentric ecliptic of date.

In such a case, the bot would convert one reference, consisting of the bibliographic information plus an explanation, into two references, the latter of which wouldn't make any sense. Jc3s5h (talk) 12:34, 12 May 2017 (UTC)

@Jc3s5h: This is the largest set of article that would be part of this change: https://tools.wmflabs.org/bambots/TemplateParam.php?action=paramlinks&wiki=enwiki&template=Refbegin&param=indent —TheDJ (talkcontribs) 13:15, 12 May 2017 (UTC)
I sampled a few of the articles in the list TheDJ provided, and it appears they follow the instructions at {{Refbegin}}. There is no mention at {{Refbegin}} or the associated talk page of this change. I believe you should seek consensus at Template talk:Refbegin, with a notice of the discussion at WT:Citing sources and Help talk:Citation Style 1 before requesting a bot. Jc3s5h (talk) 13:31, 12 May 2017 (UTC)
Template_talk:Refbegin#Using_a_bot_to_convert_hanging_indent_uses_of_this_template_to_proper_lists the request for feedback has been open for more than a week now, and there do not seem to be significant objections. —TheDJ (talkcontribs) 17:53, 20 May 2017 (UTC)
In that discussion you said the bot would skip 2013 Bulgarian protests against the first Borisov cabinet. How would the bot determine which articles are to be skipped? Thincat (talk) 18:05, 20 May 2017 (UTC)

CCI bot

WP:CCI is totally overwhelmed with close to seven years' backlog of long-term copyright violation investigations. At this point, I think it would make the most sense to take a triage approach because there is no way that all these articles will be manually cleared. There are a couple of ways the task might be approached:

  • A bot that runs the current text of affected articles (without regard to listed diffs) through the Earwig tool, then either (1) posts a list of articles ranked by likely percent violation based on Earwig or (2) posts in the respective CCIs the likely percent violations. Then we process starting with the worst likely violations on an article-by-article basis - either using the list or using the posts throughout the CCIs.
  • A bot that runs the text of listed diffs through the Earwig tool (or modified Earwig tool) somehow. The Earwig tool is not set up to review diffs so I don't know how this could be done. Perhaps we could work with Earwig/WMF to modify the tool. Then the bot posts in the respective CCIs the likely percent violations. Then we could process starting with the worst likely violations on a diff-by-diff basis. (This approach has the advantage of being more likely to identify copyright violations because it would work from content before it is modified by later users.... but the more it has been transformed by later users, the greater the odds that the copyvio is no longer present.)

Thoughts? This is a major problem and volunteer manpower is apparently insufficient to fix it. We now have Copypatrol running so I think copyright problems are getting caught sooner and we won't have the same level of massive investigations in the future. If bot-assisted review can identify the worst violations perhaps we can declare CCI bankruptcy, wipe things out, and start from scratch. Calliopejen1 (talk) 23:31, 24 May 2017 (UTC)
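The triage loop in the first option is essentially "score each article slowly, then rank worst-first". A sketch (the `check` callable stands in for whatever queries the Earwig detector — its real interface is not specified here — and the delay reflects the rate-limit concern raised below):

```python
import time

def triage(articles, check, delay=60):
    """Rank articles by likely copyvio percentage, worst first.
    `check` is a stand-in for one Earwig-style confidence lookup per
    title; the detector has a daily quota, hence the per-query delay."""
    scores = {}
    for title in articles:
        scores[title] = check(title)
        time.sleep(delay)  # keep the query rate low
    return sorted(scores, key=scores.get, reverse=True)
```

Processing would then start from the front of the returned list, article by article.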

Automated querying of the tool is possible, though I ask that anyone attempting to do so be careful to keep the query rate low. There's a hard limit on the number of checks we can perform per day, and trying to put all of those articles through the detector would surely exceed that. It should be fine if done very slowly, and judging by the age of the backlog, limiting the speed isn't necessarily a bad thing... As for the diff idea, I like it. That is worth implementing, though I can't make any promises soon. — Earwig talk 03:53, 26 May 2017 (UTC)

FA by length bot

{{User:ClueBot III/ArchiveNow}} I would like a bot to keep this page up to date. It does not have to happen more than once a week. It finds all pages that have been FACed, sorts them by length, and refreshes the page. Using the prosecounter tool found in the DYK check would be a bonus. For instance, the current leader in the table has well over 200,000 characters, but only about 56,000 of prose. Maury Markowitz (talk) 02:23, 1 May 2017 (UTC)
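The prose-vs-total distinction above (200,000 characters total but ~56,000 of prose) comes from stripping non-prose markup before counting. A very rough sketch in the spirit of the prosesize/DYK-check tools (not their actual algorithm, and it ignores nested templates):

```python
import re

def prose_size(wikitext):
    """Very rough prose-character count: drop footnotes, templates and
    inline markup before counting characters."""
    text = re.sub(r"(?s)<ref[^>/]*?>.*?</ref>|<ref[^>]*/>", "", wikitext)  # footnotes
    text = re.sub(r"(?s)\{\{.*?\}\}", "", text)                    # (non-nested) templates
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)  # wikilinks -> label
    text = re.sub(r"'{2,}", "", text)                              # bold/italic markup
    return len(text)
```

The real tools also exclude infoboxes, tables and reference sections; this only illustrates why the two counts diverge so much.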

FYI, you can get the list here. Not a solution, of course. I don't see any reason, why number of articles differs in category page and Petscan results, though. --Edgars2007 (talk/contribs) 14:32, 1 May 2017 (UTC)
Doing... Actually, it'd be REALLY easy to use the Petscan tool to keep this up to date. I'll work on it. I'm in a python class this week and need a project.--v/r - TP 13:25, 8 May 2017 (UTC)
@TParis: fyi: Wikipedia talk:Database reports#Featured articles by length. --Edgars2007 (talk/contribs) 12:54, 19 May 2017 (UTC)
Yes, thank you.--v/r - TP 14:01, 19 May 2017 (UTC)
@Edgars2007: I have an initial bot running at User:TParis/Featured_Article_By_Length. I'm going to watch it for a couple of days before submitting it to BRFA.--v/r - TP 16:30, 19 May 2017 (UTC)
BRFA filed--v/r - TP 22:24, 19 May 2017 (UTC)

Bot to mark user pages with {{Blocked user}}

{{archive now}}

After a certain period of time, say 60 days, the bot would replace the contents of the user page with the {{Blocked user}} tag, and mark that page patrolled to remove it from the queue. If it were a CU block, then the bot would use the CU block tag. d.g. L3X1 (distænt write) )evidence( 17:42, 7 June 2017 (UTC)

Unless there is demonstrated consensus for such a task, no BAG member will ever approve this. Do you have a link to a discussion that establishes such consensus? If not, I suggest starting one at WP:VPR (or WP:AN). Headbomb {t · c · p · b} 17:46, 7 June 2017 (UTC)
Thanks, I opened a section at VPR. d.g. L3X1 (distænt write) )evidence( 20:37, 7 June 2017 (UTC)
Unfortunately, it seems that this proposal wasn't supported: Wikipedia:Village_pump_(proposals)/Archive_140#Bot to mark user pages with .7B.7BBlocked user.7D.7D RETRACTED. No consensus, ready for archival. XXN, 11:32, 1 July 2017 (UTC)

Tagging categories for renaming

{{User:ClueBot III/ArchiveNow}} Tag the subcategories of Category:Women's organizations by country for renaming. See Wikipedia:Categories for discussion/Log/2017 June 25. An example of the rename is Category:Women's organisations in Afghanistan to Category:Women's organisations based in Afghanistan. The template {{Cfr}} needs to be added to each of the 118 subcategories. Tim! (talk) 16:27, 6 July 2017 (UTC)

According to the main article (Dnipro). The subcategories and subsubcategories also--Unikalinho (talk) 03:56, 25 July 2017 (UTC)

N Not done: This needs to go to WP:CFD. — JJMC89(T·C) 04:18, 25 July 2017 (UTC)

{{archive now}}

Incorrectly-substituted templates on various pages

There are over 400,000 pages which contain {{{subst|}}} and various unsubstituted parser functions due to malformed substitutions before the introduction of safesubst: (about 200 templates, many in Category:TestTemplates, still contain the deprecated code). Should these be fixed? Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
15:02, 28 May 2017 (UTC)

Table formatting fixes for song and album infoboxes

There are some transclusions of {{Infobox album}}, {{Infobox song}}, {{Extra chronology}} and so on, which have infoboxes nested incorrectly and end up displaying incorrectly. Example: Symphony (Clean Bandit song). To fix this, a bot would go through pages in Category:Music infoboxes with malformed table placement and move pairs of brackets around, or add |misc= (for infoboxes song/album), to make the tables display properly.

This should be done after the merger of {{Infobox single}} into {{Infobox song}}, as the category will not fill up for some time and AnomieBOT is going to be substituting basically every page using any of the templates affected after the merger is complete. Some parameters in current use will be deprecated, although this shouldn't affect the bot's operation. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
16:09, 22 May 2017 (UTC)

Actually, per the below example deleting the incorrectly-nested template after substitution, the bot might need to go through every page transcluding the templates mentioned in the category plus {{Infobox single}}, before the templates are added to AnomieBOT's substitution list. This is because the parameters would be replaced automatically in the subst through Module:Unsubst-infobox using a lot of Module:String searches, and this has the potential for data loss if the templates are used incorrectly. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
16:15, 22 May 2017 (UTC)

Examples

One of the most extreme cases of this problem would be having to make this edit (the level of nesting caused template errors).

Before

| Misc         =
{{Extra chronology
| Artist       = [[Zara Larsson]] singles
| Type         = singles
| Last single  = "[[So Good (Zara Larsson song)|So Good]]"<br/>(2017)
| This single  = "'''Symphony'''"<br/>(2017)
| Next single  = "Don't Let Me Be Yours"<br/>(2017)
{{External music video|{{YouTube|aatr_2MstrI|"Symphony"}}}}}}

After

| Misc         =
{{Extra chronology
| Artist       = [[Zara Larsson]] singles
| Type         = singles
| Last single  = "[[So Good (Zara Larsson song)|So Good]]"<br/>(2017)
| This single  = "'''Symphony'''"<br/>(2017)
| Next single  = "Don't Let Me Be Yours"<br/>(2017)
}}{{External music video|{{YouTube|aatr_2MstrI|"Symphony"}}}}
BRFA filed; separate task still required for auxiliary templates nested within each other. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
08:41, 29 May 2017 (UTC)

Speedy AFC decline bot

A few of us on IRC were chatting about drafts and how much of a pain it can be if someone resubmits a draft without actually changing anything. Could we get a bot that scans through the 0 days ago cat and auto-declines drafts that were resubmitted without any changes? The bot would check to see if the edit immediately preceding a submission was a draft decline (Example). Primefac (talk) 00:45, 26 March 2017 (UTC)
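The check described — "the edit immediately preceding a submission was a decline, and nothing else changed" — reduces to comparing the draft body across the last two revisions. A toy sketch (the revision tuples and banner-stripping are simplified stand-ins for real MediaWiki revision data, and the substring tests on edit summaries are an invented heuristic):

```python
def is_unchanged_resubmit(history):
    """`history` is oldest-first [(edit_summary, text), ...]. Flag a
    submission whose only change since the last decline is the
    resubmission banner itself."""
    if len(history) < 2:
        return False
    prev_comment, prev_text = history[-2]
    cur_comment, cur_text = history[-1]
    resubmitted = "submit" in cur_comment.lower()
    declined = "declin" in prev_comment.lower()
    # ignore the AfC submission banner when comparing the draft body
    body = lambda t: t.replace("{{AFC submission}}", "").strip()
    return resubmitted and declined and body(prev_text) == body(cur_text)
```

As Rob's point below makes clear, even a correct implementation of this check is contentious, since it lets a bot re-apply a human reviewer's possibly-wrong decline.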

Doing... ProgrammingGeek talktome 21:10, 9 April 2017 (UTC)
@Primefac: An AfC reviewer gets a review wrong. What happens then? ~ Rob13Talk 21:47, 9 April 2017 (UTC)
Funny how four AFC reviewers can be chatting about it and miss the bleeding obvious. You make a very good and completely valid point. Thanks Rob. Primefac (talk) 23:31, 9 April 2017 (UTC)
Pinging ProgrammingGeek. Hopefully this saves you some work. Primefac (talk) 02:36, 10 April 2017 (UTC)
Thanks Primefac. To be honest I hadn't started yet. ProgrammingGeek talktome 12:00, 10 April 2017 (UTC)
@Primefac and ProgrammingGeek: .. I guess this is still a question of how often you re-decline an unchanged resubmitted AFC vs. how often one gets approved after an unchanged resubmittal. I mean, I guess that reviewers get it wrong every now and then, but that is then easier solved by the bot refusing to edit-war (do not auto-decline to 'self'). --Dirk Beetstra T C 12:12, 10 April 2017 (UTC)
If you are going for a bot like this, I would suggest the decline template says something like "If you believe the reasons this draft was declined are incorrect, please follow the instructions at X, explaining why. If you resubmit the draft without making any changes it may be speedily rejected by a bot without further human review." (note I am not proposing this exact wording be used). Thryduulf (talk) 16:24, 10 April 2017 (UTC)
More importantly, this needs discussion since it's not uncontroversial. No bot operator could take this up until the community vetted the idea, likely at WT:AFC with a notice at relevant other projects. ~ Rob13Talk 03:11, 7 May 2017 (UTC)
If a bot did this, it would probably need a expanded template message, not just the default AfC one, that explains what the bot did and repeats the previous decline reason. (Just an observation.) Enterprisey (talk!) 21:48, 30 May 2017 (UTC)

{{This is a redirect}} was deprecated on October 3, 2016 and replaced with {{Redirect category shell}}. {{This is a redirect}} has over 250,000 transclusions. If we can get a bot to convert {{This is a redirect}} to {{Redirect category shell}}, it would be very helpful. —MRD2014 📞 contribs 02:25, 13 April 2017 (UTC)

@MRD2014 and Paine Ellsworth: wouldn't it be easier to convert {{This is a redirect}} into a wrapper for {{redirect category shell}} and then subst out of existence? It looks like the main difference between the two is one takes multiple named params and the other just takes one. Primefac (talk) 15:53, 13 April 2017 (UTC)
To editors MRD2014 and Primefac: it might help to know that Christian75 has an AWB setup for the conversion. Perhaps there is a need to create a temporary category for {{This is a redirect}} that will hold all of its transclusions in one place?  Paine Ellsworth  put'r there  12:36, 14 April 2017 (UTC)
Paine Ellsworth, we don't need a category; just use Special:WhatLinksHere or the tools page linked above. Christian75, if you've got the AWB settings I can submit a BRFA using it. Primefac (talk) 12:40, 14 April 2017 (UTC)
The settings I used in AWB were "find and replace - advanced settings". E.g. replace "{{redr|from plural|from move}}" with "{{Redirect category shell|{{R from plural}}{{R from move}}}}", or replace "{{Redr|move|nick}}" with "{{Redirect category shell|{{R from move}}{{R from nickname}}}}" (newlines removed here). And then I had 155 of them (just counted), and when I saw a new combination I added it to the list. But I stopped using it, because I was advised to get a bot account before I continued. Christian75 (talk) 13:08, 14 April 2017 (UTC)
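For what it's worth, the per-combination find/replace list could be collapsed into one generic substitution. A rough Python sketch (the alias table mapping short parameter names like "move" to full rcat names is an illustrative subset I made up, not the template's real mapping, and a real bot would want a proper wikitext parser):

```python
import re

# Illustrative subset of Redr's parameter aliases; the real template
# supports many more short forms.
ALIASES = {"move": "from move", "nick": "from nickname"}

REDR = re.compile(r"\{\{[Rr]edr\|([^{}]*)\}\}")

def convert_redr(wikitext):
    """Rewrite {{Redr|a|b}} as {{Redirect category shell|{{R a}}{{R b}}}}."""
    def repl(match):
        parts = [p.strip() for p in match.group(1).split("|") if p.strip()]
        rcats = "".join("{{R %s}}" % ALIASES.get(p, p) for p in parts)
        return "{{Redirect category shell|%s}}" % rcats
    return REDR.sub(repl, wikitext)
```

This reproduces both of the replacements quoted above from a single rule, so new combinations would not need to be added by hand.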
You actually went and did manual find/replace with no regex? A bit wasteful, but it works I guess. I'll see if I can convert (using your model as a template) to a wrapper, but if not I'll submit a BRFA (though they're a little backlogged at the mo). Primefac (talk) 00:16, 15 April 2017 (UTC)
To editors MRD2014, Paine Ellsworth and Christian75: I've wrapped it (basically just safesubsted all the #ifs). I'm going to let it percolate through the system for a bit before getting to the autosubst phase. On that note: Anomie, given that it's going to be substed one way or another, I assume it's not a huge deal if AnomieBOT substs templates on 250k pages? Primefac (talk) 03:20, 15 April 2017 (UTC)
There's also the issue of protected redirects, AnomieBOT's template subster is not running under the adminbot account. – Train2104 (t • c) 03:27, 15 April 2017 (UTC)

The issue here is 250k pages with the template. While I'm just pulling numbers out of thin air, I think leaving a hundred or two for manual cleanup isn't really a huge deal. Primefac (talk) 13:37, 15 April 2017 (UTC)

250k isn't a problem, although I should probably add some code to rotate things so the 250k won't block every other template being substed. Just make sure it substs correctly in all cases before you trigger the bot to start doing it. Anomie 13:55, 16 April 2017 (UTC) For the record, the bot change was made about an hour after that comment. Anomie 11:51, 25 April 2017 (UTC)
BD2412 has been doing some of those replacements, but it seems like they haven't been doing it correctly. —MRD2014 📞 contribs 02:02, 25 April 2017 (UTC)
I'm just poking around with it to work out the best process. One snag I have hit: what to do with "This is a redirect" parameters like "e0=Recommend using this as the wikilink when referring to The Guardian in a citation"? For the time being, I have just moved the message outside the template at Ir. Times and Guard., but that seems wrong. bd2412 T 15:37, 29 April 2017 (UTC)
To editor bd2412: The new {{Rcat shell}} has an |h= parameter that does the same thing as the |e0= param in the deprecated template. Just be sure to end it with a pipe symbol or the redirect will be sorted to Category:Miscellaneous redirects.  Paine Ellsworth  put'r there  10:19, 30 April 2017 (UTC)
Thanks. I'll follow the example in your edit to Ir. Times. Cheers! bd2412 T 13:38, 30 April 2017 (UTC)

I've been fiddling with this. There are also a number of deprecated px parameters still around. All the best: Rich Farmbrough, 16:31, 29 April 2017 (UTC).

I've noted that some diligent workers who are converting these have been converting subject pages but not the associated talk pages. If a bot does all the subject pages and "thinks" it is finished, there will still be thousands of transclusions of the deprecated template on any associated talk pages that exist.  Paine Ellsworth  put'r there  10:19, 30 April 2017 (UTC)

This is the first I am hearing of a talk page component to the task. It would have been nice to have that included in the request. I'm not sure how to begin to address this, except to ask if someone can generate a list of non-matching talk pages. I can't imagine that there are too many. The vast majority of redirects have no talk page. bd2412 T 15:51, 30 April 2017 (UTC)
Don't know about "vast majority", bd2412, since the subject-page redirects I've tagged in the past few years have had existing talk pages about half the time. (1) if the talk page existed as a redirect, then I tagged it with Redr, (2) if the talk page existed as a talk page, then I topped it with the {{Talk page of redirect}} template and any appropriate project banners (sometimes all these had were Old rfd templates or similar), (3) if the talk page was redlinked on the subject page, I didn't go there – in other words, I don't think talk pages should be created just to tag 'em with rcats, project banners or anything else. The exceptions were redirect talk pages with deletion discussion templates and template /doc, /sandbox and maybe /testcases talk pages, the discussions of which needed to be centralized to the main template talk pages (and BTW, many of these are also tagged with the deprecated template).  Paine Ellsworth  put'r there  16:21, 30 April 2017 (UTC)
Also not using 1=, for example. All the best: Rich Farmbrough, 18:00, 30 April 2017 (UTC).
And not honouring the layout in the documentation...
Here is a list of suitable replacements for some of the un-named parameters. All the best: Rich Farmbrough, 15:24, 1 May 2017 (UTC).
Rich, I'm a little curious why |1= is necessary when it may be omitted and functionality remains the same, and why "Redirect from..." is used when "R from" will do the job? Again, it's no biggie, just curious if I'm missing something.  Paine Ellsworth  put'r there  17:23, 1 May 2017 (UTC)
"Redirect from..." is clearer. Obviously it's more typing for manual work. As to the 1=, it ensures that the parameter is treated as {{{1}}} even if there is an "=" sign in it, or there is a previous unnamed parameter due to some error; e.g. {{Redirect category shell|XXXX|1=YYY}} will display the YYY.
All the best: Rich Farmbrough, 19:20, 1 May 2017 (UTC).
BRFA filed All the best: Rich Farmbrough, 22:01, 1 May 2017 (UTC).
You are far more experienced than I am, Rich, so this is not meant as argumentative; however, exclusion of |N= in template arguments is a time-honored, traditional shortcut, and the redundancy of "Redirect from" as opposed to that of "R from" seems much less palatable when there are several rcats used to tag a redirect. Just sayin'.  Paine Ellsworth  put'r there  03:13, 2 May 2017 (UTC)
Well if there is consensus to use "R from" we can do that, and I do agree that the meaning of "R" is probably clearer when there is a container than when there isn't. As for the "1=" I am not wedded to it, indeed I can now see a (probably) better way of dealing with in this case, so I will change that from here on. All the best: Rich Farmbrough, 07:21, 2 May 2017 (UTC).
I am unclear about the status of this task. Are there portions remaining where manual repair is required? bd2412 T 12:57, 3 May 2017 (UTC)
I'm curious too, since the Redirect category shell's transclusions have risen to more than 241,000, about eight or ten times the number when this began; however, This is a redirect's transclusions have only been reduced to just over 232,000, which is just about 30,000 or so lower than when this began. How is it that the new template's usage has increased so dramatically while the deprecated template's usage has not decreased much? Is the server playing catch-up?  Paine Ellsworth  put'r there  17:22, 3 May 2017 (UTC)
That is certainly something that can happen when large numbers of changes are involved. Is it possible that substing is causing pages to still register as both? bd2412 T 01:00, 4 May 2017 (UTC)
There were certainly some that had been substed that showed up in what-links-here. I suspect that the entire thing will be substed before there's any chance to use this as an opportunity to make other improvements. All the best: Rich Farmbrough, 19:02, 4 May 2017 (UTC).

Yep. We now have 250k pages like this:

{{redirect category shell|{{R from move}}{{R hatnote}}{{R nick}}{{R p}}}}

Well I tried. All the best: Rich Farmbrough, 12:58, 6 May 2017 (UTC).

OK, looks like all is not lost.
One additional question:
What should happen if only one template is contained in {{Redirect category shell}}?
All the best: Rich Farmbrough, 13:47, 6 May 2017 (UTC).
In cases such as {{This is a redirect|from move}} the subst. is the same as for two or more templates, i.e., {{Redirect category shell|{{R from move}}}}. If more rcats are needed, they will be added later.  Paine Ellsworth  put'r there  07:22, 7 May 2017 (UTC)
If more needs to be done in terms of manual insertions, I'm glad to help. Cheers! bd2412 T 20:51, 12 May 2017 (UTC)

User:Paine Ellsworth - pages such as Chu Chung shing have parameters such as "e2" - what should happen to these? In the hundred thousand or so that have been subst'ed, any such parameters have been thrown away, I think. All the best: Rich Farmbrough, 19:47, 22 May 2017 (UTC).

Thank you, Rich! Of all the |e= parameters, the |e0=, the topnote or hatnote, is probably the most needed. I've been handling these on an individual basis, using some of the e1= through eN= parameters and discarding others. None are critical, and even if an e0 gets omitted, it's not all that bad. The |h= parameter in the new template can take the place of any of them when you think the information should remain on the redirect. Best to you!  Paine Ellsworth  put'r there  03:14, 23 May 2017 (UTC)
AnomieBOT took care of a bunch by substing them, but we still have about 57,000 remaining, and a lot of those redirects transcluding {{Redr}} are fully protected, so AnomieBOT cannot edit & substitute those. —MRD2014 talk contribs 23:30, 31 May 2017 (UTC)

Commons deletion notices

Am looking for someone willing to build a bot that provides notices to Wikiprojects when images used by that Wikiproject are put up for deletion on Commons. Details here. Anyone able to help? Would love to see the output added to here. Best Doc James (talk · contribs · email) 09:34, 2 June 2017 (UTC)

@Doc James: Changes to Article Alerts isn't really a BOTREQ thing, it's one for Hellknowz (talk · contribs) and/or Headbomb (talk · contribs). --Redrose64 🌹 (talk) 20:48, 2 June 2017 (UTC)
Okay, posted here Doc James (talk · contribs · email) 20:58, 2 June 2017 (UTC)

Fix curly quotes to straight quotes

Per MOS:QUOTEMARKS, all curly quotes (single and double) should be changed to straight quotes. It would be nice if a bot could do that. Llightex (talk) 21:11, 31 May 2017 (UTC)

This would probably be denied per WP:CONTEXTBOT because of ʻOkina and Prime (symbol) and other similar marks. – Jonesey95 (talk) 00:54, 1 June 2017 (UTC)
Llightex, the next best thing is semi-automated. AWB fixes curly quotes through either its typo fixing or general fixes (I can't remember which). The Transhumanist 22:31, 3 June 2017 (UTC)
@Jonesey95: @The Transhumanist: What about merely replacing double quotes, such as using regular expressions to replace text in quotes such as “ ” to " "? Llightex (talk) 14:34, 4 June 2017 (UTC)
Same answer. – Jonesey95 (talk) 23:24, 4 June 2017 (UTC)

A bot to say hello and invite people to the Teahouse!

I want a bot to make persons who have joined Wikipedia feel welcomed — Preceding unsigned comment added by TalismanOnline (talkcontribs) 09:55, 5 June 2017 (UTC)

Not done There is already User:HostBot. Dat GuyTalkContribs 10:00, 5 June 2017 (UTC)

Portal:Indigenous peoples in Canada

There has been a consensus-backed move of Portal:Aboriginal peoples in Canada to Portal:Indigenous peoples in Canada. We are hoping a bot can help us do this non-manually. A request for portal image change has been fulfilled (Template talk:Portal#Portal:Indigenous peoples in Canada).--Moxy (talk) 16:51, 6 June 2017 (UTC)

Google Books citation bot

There are hundreds of bare-link citations for Google Books in Wikipedia, and I'd like to convert them into proper citations. Would it be possible to automate this task using User:Quadell's Google Books bot? Jarble (talk) 21:07, 17 May 2017 (UTC)
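As a starting point, the mechanical first step of such a bot, pulling the volume id out of a bare Google Books link so the book's metadata can then be looked up, is straightforward. A sketch (filling in title/author fields would additionally need the Books API or page scraping, which is not shown):

```python
from urllib.parse import urlparse, parse_qs

def gbooks_volume_id(url):
    """Extract the ?id= volume identifier from a bare Google Books URL.

    Returns None when the URL carries no id parameter.
    """
    query = parse_qs(urlparse(url).query)
    return query.get("id", [None])[0]
```

The returned id is the stable key a citation-filling step would query for title, author and year.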

This bot was created several years ago, so I hope it is still being maintained. Jarble (talk) 21:09, 17 May 2017 (UTC)

@Jarble: Looks like the bot hasn't run for several years. I'll look into this tonight if I get chance. Mdann52 (talk) 16:46, 25 May 2017 (UTC)
@Mdann52: Has the bot been repaired yet? Jarble (talk) 18:57, 3 June 2017 (UTC)
@Jarble: Still in progress unfortunately. Having to rewrite from scratch :( Mdann52 (talk) 20:32, 8 June 2017 (UTC)

Stub Marker Bot

A bot that will mark pages as stubs according to what the guidelines say. 2602:306:34AB:2830:25D6:CB16:249B:3AAE (talk) 02:04, 11 June 2017 (UTC)

Although AWB removes stub tags from pages bigger than 500 words, it only adds a stub tag if the article has fewer than 300 characters - see Wikipedia:AutoWikiBrowser/General_fixes#Tagger (Tagger). Between 300 characters and 500 words it neither adds nor removes a stub tag, leaving it to editorial discretion. -- John of Reading (talk) 05:41, 11 June 2017 (UTC)
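Those AWB thresholds amount to a simple three-way rule, sketched here in Python (the 300-character and 500-word cutoffs are the ones quoted above; how a real tagger counts "words" and "characters" may differ in detail):

```python
def stub_tag_action(article_text):
    """Mirror AWB's tagger rule: tag very short pages, untag long ones,
    and leave the middle ground to editorial discretion."""
    chars = len(article_text)
    words = len(article_text.split())
    if chars < 300:
        return "add stub tag"
    if words > 500:
        return "remove stub tag"
    return "leave as is"
```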

Bot edit to improve the efficiency of data scraping off of Wikipedia

Hello,

This is a generalized bot request based on a more specific request that I posted to https://en.wikipedia.org/wiki/Talk:United_States_presidential_election,_2016

Could someone please consider writing a bot to add extra "–" (meaning "no result available") to the table rows that are missing them? A global replace of "||||" with "||–||" might do the trick.

This would help me and other data scrapers to extract data more quickly and easily.

An example is shown below in the "=== Results by state ===" section of https://en.wikipedia.org/wiki/United_States_presidential_election,_2016#Results:

old:

| align=left|[[United States presidential election in Alabama, 2016|Alabama]] || WTA ||729,547||34.36%||–||1,318,255||62.08%||9||44,467||2.09%||–||9,391||0.44%||–||||||–|| 21,712 ||1.02%||–||588,708||27.72%||2,123,372||AL||Official<ref>{{cite web|title=State of Alabama: Canvass of Results|url=http://www.alabamavotes.gov/downloads/election/2016/general/2016-Official-General-Election-Results-Certified-2016-11-29.pdf|date=November 29, 2016|accessdate=December 1, 2016}}</ref>

new:

| align=left|[[United States presidential election in Alabama, 2016|Alabama]] || WTA ||729,547||34.36%||–||1,318,255||62.08%||9||44,467||2.09%||–||9,391||0.44%||–||–||–||–|| 21,712 ||1.02%||–||588,708||27.72%||2,123,372||AL||Official<ref>{{cite web|title=State of Alabama: Canvass of Results|url=http://www.alabamavotes.gov/downloads/election/2016/general/2016-Official-General-Election-Results-Certified-2016-11-29.pdf|date=November 29, 2016|accessdate=December 1, 2016}}</ref>
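One caveat with a single global replace: runs of three or more consecutive empty cells overlap, so "||||||" only gets one dash on the first pass. Repeating the replacement until the row stops changing handles that, as in this Python sketch:

```python
def fill_empty_cells(row):
    """Replace every empty table cell (||||) with ||–||, looping
    because consecutive empty cells produce overlapping matches."""
    previous = None
    while previous != row:
        previous = row
        row = row.replace("||||", "||–||")
    return row
```

On the Alabama row above, the loop turns the "||||||" run into "||–||–||", matching the "new" version shown.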

Below is a list of rows in the results by state section that would benefit from the "||||" to "||–||" replacement, as a normal row should have 25 elements:

  • # row: 3 row header: Alabama data elements: 23
  • # row: 4 row header: Alaska data elements: 23
  • # row: 11 row header: District of Columbia data elements: 23
  • # row: 12 row header: Florida data elements: 23
  • # row: 14 row header: Hawaii data elements: 23
  • # row: 17 row header: Indiana data elements: 23
  • # row: 23 row header: Maine, 1st data elements: 24
  • # row: 24 row header: Maine, 2nd data elements: 24
  • # row: 29 row header: Mississippi data elements: 23
  • # row: 32 row header: Nebraska (at-lg) data elements: 23
  • # row: 33 row header: Nebraska, 1st data elements: 23
  • # row: 34 row header: Nebraska, 2nd data elements: 23
  • # row: 35 row header: Nebraska, 3rd data elements: 23
  • # row: 36 row header: Nevada data elements: 21
  • # row: 38 row header: New Jersey data elements: 23
  • # row: 41 row header: North Carolina data elements: 23
  • # row: 42 row header: North Dakota data elements: 23
  • # row: 44 row header: Oklahoma data elements: 21
  • # row: 45 row header: Oregon data elements: 23
  • # row: 49 row header: South Dakota data elements: 21
  • # row: 55 row header: Washington data elements: 23
  • # row: 58 row header: Wyoming data elements: 23

Thanks very much for your very valuable work, and please let me know if you have any questions.

Sincerely and gratefully,

-Chris Krenn (democracygps)

PS. Note: I was unable to quickly import the "=== Results by state ===" section of https://en.wikipedia.org/wiki/United_States_presidential_election,_2016#Results using the google sheets importer. I had more luck with a Wolfram Mathematica importer, but this importer treated "||||" as a single delimiter instead of two. I would be happy to request Mathematica to update their import code, but I would also prefer not to spend another several hundred dollars to benefit from this change.

PPS. I know that 2016 election data is also available in Wikidata, but it would require significantly more time and effort to extract Wikidata into a table than it did to use the existing table in Wikipedia. — Preceding unsigned comment added by Democracygps (talkcontribs) 17:42, 11 June 2017 (UTC)

Bot to remove redirecting links.

There should be a bot that can change links that go to a redirect page so that they point to the actual page. I've recently noticed a lot of redirects, and with such a bot there would be no need to keep redirect-only pages. 2A00:23C4:9E0A:1400:DDA1:5E75:984:6CD1 (talk) 17:04, 11 June 2017 (UTC)

Do you mean soft redirects? They're intentional. All other redirects automatically go to the target page. Dat GuyTalkContribs 17:29, 11 June 2017 (UTC)

Fixing bare URLs

There are now hundreds of thousands of Wikipedia articles with bare URL citations. There was a previous discussion in 2013 about creating a bot to clean up some of these references, but this task is still unfinished. Would it still be feasible to automate this cleanup process? Jarble (talk) 19:52, 11 June 2017 (UTC)

It's potentially feasible but non-trivial. There is now WP:ReFill (formerly Reflinks, discussed in 2013). -- GreenC 03:10, 12 June 2017 (UTC)
@Green Cardamom: There are many open-source citation generators that could make it easier to automate this task. Jarble (talk) 22:12, 12 June 2017 (UTC)
I will note IABot does this to an extent for links it tries to archive. But it won't touch them if it's not adding archives to them. To an extent, IABot's tool can be used and set to add archives to all non-dead references, in which case it will auto-convert them.—CYBERPOWER (Message) 01:36, 13 June 2017 (UTC)

Military Times Hall of Fame

The site has re-organized its directory/file structure (example is for Bernhard Jetter, fixed):

Old and wrong: http://militarytimes.com/citations-medals-awards/recipient.php?recipientid=277

New and right: http://valor.militarytimes.com/recipient.php?recipientid=277

Is there a bot that can fix this?

--Georgia Army Vet Contribs Talk 00:34, 20 June 2017 (UTC)
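Assuming only the host and path prefix changed (as in the example pair above), the rewrite is a plain string substitution; a Python sketch:

```python
OLD_PREFIX = "http://militarytimes.com/citations-medals-awards/"
NEW_PREFIX = "http://valor.militarytimes.com/"

def fix_militarytimes_url(url):
    """Move a citation link from the old directory layout to the new host;
    other URLs pass through unchanged."""
    if url.startswith(OLD_PREFIX):
        return NEW_PREFIX + url[len(OLD_PREFIX):]
    return url
```

A bot would run this over every external link matching the old prefix, leaving all other links alone.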

 Done. Paul Clemens (United States Army) has a dead link to conflict=49. — JJMC89(T·C) 03:49, 21 June 2017 (UTC)
Thanks.--Georgia Army Vet Contribs Talk 13:31, 21 June 2017 (UTC)

Tagging article talkpages with WikiProject

Hello,
WikiProject WP:MAFIA has recently been re-activated. I was thinking of getting a bot's assistance for tagging the talk pages of articles that fall within the scope of the project. Would that be possible? I would prefer to do it myself if possible. Kindly ping me when replying. Thanks.
PS: This was previously requested at User talk:Meno25#Request for bot. Meno25 politely suggested that I post the request here.
usernamekiran(talk) 23:33, 28 June 2017 (UTC)

@Usernamekiran: The person who codes a bot is almost always the one who runs it. They're the person most familiar with the code, which is needed to debug, etc. If you mean to create your own bot, I'd recommend WP:AWB as a way to do it. If you mean to get someone else to do the task for you within certain parameters (e.g. list of categories, tagging all pages within those categories), then I can do that. I've done WikiProject tagging several times in the past with BU RoBOT. ~ Rob13Talk 23:36, 28 June 2017 (UTC)
Hi Rob. I meant, I would like to be kept in the loop lol. :-)
How much time will it take you to start working on the task? —usernamekiran(talk) 23:41, 28 June 2017 (UTC)
@Usernamekiran: It's a task I already have written, essentially. You just need to link me to the WikiProject's template, tell me if you want class auto-assessed (see User:BU RoBOT/autoassess for information on what that means), and give me a list of categories (not defined recursively; I need a list of every single category you want tagged to prevent issues). ~ Rob13Talk 00:18, 29 June 2017 (UTC)
@BU Rob13: sure. That will take some time, though. I will contact you on your talkpage for further communication. Thanks a lot again. —usernamekiran(talk) 00:24, 29 June 2017 (UTC)

AfD data collection bot

I was working on collecting some data regarding AfD and realized there is an opportunity for a bot to do some good work in collecting data. The functions of this bot would be to:

  • Look at daily AfD pages logged at Wikipedia:Articles_for_deletion/Log. For each daily log, determine three pieces of data:
    1. The number of AfDs listed
    2. The average number of discernible !votes per AfD
    3. How many AfDs were relisted once, twice, three or more times.
  • Output the data into a date sorted table on a page of the bot creator's choosing, looking something like this:
AfD Log Date Number of AfDs Average # votes Relisted Relisted twice Relisted > twice
2017 June 1 46 7.2 9 5 2
2017 June 2 67 6.8 17 18 7

I'd like to see the bot run through all the logs dating back forever if possible, but if that is too onerous a couple of years would do. Further, after putting together this data, I'd like to see the bot continue to function, such that on a daily basis it looks for the log file that is one day past the last complete log file (all AfDs closed), determines if all AfDs in that new log file have all been closed, and if so then runs the same numbers for that log file and appends them to the data page. I'm finding an increasing need for data to help evaluate various items of interest to Wikipedia maintenance. Another example of such a data bot output (in this case, at typical backlog areas) can be found at User:EsquivalienceBot/Backlog. I would ask Esquivalience, but that editor has semi-retired and is not very active anymore. --Hammersoft (talk) 17:59, 29 June 2017 (UTC)
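The per-discussion counting is the easy half of this request. A rough Python sketch (the bolded-!vote pattern is an approximation, since real AfD wikitext is messier, and I'm assuming relists can be spotted by the standard boilerplate phrase "Relisted to generate a more thorough discussion"):

```python
import re

# Approximate patterns: bolded !votes and the standard relist boilerplate.
VOTE = re.compile(r"'''\s*(?:speedy\s+)?(keep|delete|merge|redirect)", re.IGNORECASE)
RELIST = re.compile(r"Relisted to generate a more thorough discussion")

def afd_stats(discussion_wikitext):
    """Count discernible bolded !votes and relists in one AfD discussion."""
    return {
        "votes": len(VOTE.findall(discussion_wikitext)),
        "relists": len(RELIST.findall(discussion_wikitext)),
    }
```

A bot would apply this to every transcluded discussion on a daily log page, then average the vote counts and bucket the relist counts into the table columns proposed above.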

Correcting redirection

Hi Seasons' Greetings,

I need semi-automatic bot support to correct redirections from Legitimacy (law) to Legitimacy (family law). For example, a [[Legitimacy (law)|bastard son]] link in one of the articles needs to be changed to [[Legitimacy (family law)|bastard son]], or [[Legitimacy (law)#History|illegitimate]] needs to be changed to [[Legitimacy (family law)#History|illegitimate]].

You can find pages needing correction at https://en.wikipedia.org/w/index.php?title=Special:WhatLinksHere/Legitimacy_(law)&limit=250&from=0 (there may be around 500). Please make corrections only where you are sure from the context that the link is about family law. The remainder I will do manually.
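For the subset where a human has confirmed the family-law sense, the mechanical retargeting (including #section links) is a one-line substitution. A Python sketch; note that piped links keep their display text, while unpiped links would change their visible label, which is one reason the context review still matters:

```python
import re

# Match the link target only when followed by '#', '|' or ']]',
# so surrounding prose is never touched.
LINK = re.compile(r"\[\[Legitimacy \(law\)(?=[#|\]])")

def retarget(wikitext):
    """Point [[Legitimacy (law)...]] links at Legitimacy (family law)."""
    return LINK.sub("[[Legitimacy (family law)", wikitext)
```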

Links from the following pages are not related to family law, so they need not be included:

Thanks for support, and regards

Mahitgar (talk) 04:39, 1 July 2017 (UTC)

A bot won't be able to employ discretion in telling which links should be changed. If human judgment is needed, you'll have to manually compile a list of articles that need changing. Maybe using a tool like AWB to make those edits would be a better way? (In the mean time you could go ahead and convert Legitimacy (law) into a disambiguation page, so that other editors will know that the link should be avoided.) --Paul_012 (talk) 07:20, 1 July 2017 (UTC)
Declined Not a good task for a bot. per WP:CONTEXTBOT. — JJMC89(T·C) 20:20, 1 July 2017 (UTC)
User has started an RfC at Talk:Legitimacy (criminal law)#RfC requesting consensus to move article to Legitimacy (law). To me, it reads like a WP:RM. --Redrose64 🌹 (talk) 20:26, 1 July 2017 (UTC)

Remove DMOZ comments

A large number of pages include the string ", or submit your link to the relevant category at the Open Directory Project (dmoz.org) and link back to that category using the {{dmoz}} template" as part of some boilerplate text. As DMOZ tells us, that is no longer possible. The string should be removed, like this, or like this, please. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:22, 4 July 2017 (UTC)

Migrate from deprecated WikiProject Caribbean country task forces

Four of the task forces for WikiProject Caribbean have graduated to full-fledged WikiProjects with their own banner templates, and the task force parameters have been deprecated: {{WikiProject Cuba}}, {{WikiProject Grenada}}, {{WikiProject Haiti}}, and {{WikiProject Trinidad and Tobago}}. We need a bot to go through the existing transclusions of {{WikiProject Caribbean}} and perform the following changes:

  • If none of the four task forces above are assigned, leave the {{WikiProject Caribbean}} template as it is.
  • If any of the four task forces above are assigned, remove the relevant task force parameters and add the relevant WikiProject banners. If there are no other task force parameters remaining after 1 or more have been migrated, remove the {{WikiProject Caribbean}} template.

Please also migrate the task-force importance parameters if they exist, for example |cuba-importance=. If there isn't a task-force importance parameter, just leave the importance blank. The |class=, |category=, |listas=, and |small= parameters should be copied from {{WikiProject Caribbean}} template if they exist. Kaldari (talk) 18:58, 4 July 2017 (UTC)
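To make the rules above concrete, here is a rough Python sketch of the per-banner transformation. The parameter names (cuba, grenada, haiti, tt) are my assumptions about how the task forces are flagged, and a real bot would use a proper wikitext parser rather than regexes:

```python
import re

# Assumed task-force parameter names -> graduated banner templates.
GRADUATED = {
    "cuba": "WikiProject Cuba",
    "grenada": "WikiProject Grenada",
    "haiti": "WikiProject Haiti",
    "tt": "WikiProject Trinidad and Tobago",
}

def migrate_banner(banner):
    """Split graduated task forces out of one {{WikiProject Caribbean}} call.

    Returns the trimmed banner text plus the list of new banners to append.
    """
    new_banners = []
    for param, project in GRADUATED.items():
        if re.search(r"\|\s*%s\s*=\s*yes" % param, banner):
            imp = re.search(r"\|\s*%s-importance\s*=\s*(\w+)" % param, banner)
            importance = imp.group(1) if imp else ""
            new_banners.append("{{%s|importance=%s}}" % (project, importance))
            banner = re.sub(r"\|\s*%s(?:-importance)?\s*=\s*\w*" % param, "", banner)
    return banner, new_banners
```

A full bot would also copy |class=, |category=, |listas= and |small= into each new banner, and drop {{WikiProject Caribbean}} entirely when no task-force parameters remain, per the rules above.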

@BU Rob13: Any interest in tackling this one? It's slightly different than the WikiProject Central America migration since only four of the task forces have graduated. Kaldari (talk) 19:02, 4 July 2017 (UTC)
@Kaldari: Let's see if someone else bites first. I'm going to be on vacation soon plus I have some important life stuff coming up. Ping me in late August if this still hasn't been done and I'll take a look. ~ Rob13Talk 06:40, 5 July 2017 (UTC)
@BU Rob13: Any thoughts on picking this up? Kaldari (talk) 05:53, 5 September 2017 (UTC)
@Kaldari: I'll try to get to this next weekend if I can. ~ Rob13Talk 12:40, 5 September 2017 (UTC)

Bot to add the type of business

In some articles the type of business is provided (https://en.wikipedia.org/wiki/Slack_Technologies) but not in others. The info is often provided in the main text. I think there are 3 types of business: Startup; Public; Private.

Thanks! — Preceding unsigned comment added by 81.164.122.167 (talk) 13:08, 5 July 2017 (UTC)

Declined Not a good task for a bot. This is a classic case of WP:CONTEXTBOT - a text can mention startup without the business being one, or talking about how it historically was a startup but has expanded and no longer can be considered one. TheMagikCow (T) (C) 11:55, 6 July 2017 (UTC)

IPs have been adding extra reviews like this in the template in some articles lately, while the guidelines clearly say to add only ten reviews. Can a bot revert these edits that add more than ten reviews? TheAmazingPeanuts (talk) 02:37, 5 July 2017 (UTC)

Is it possible to link to the guidelines so that the specific wording is clear? For a bot to remove these could be controversial, and might require further discussion. We could also have an issue with WP:CONTEXTBOT here, especially as the review added may be from a more important review site and the review to remove - to take the count down to 10 again - should be the least significant review website, which may not be the one added. And if the issue of 10 really is that much of an issue in every case (a bot would trim every such case) perhaps it would be better to sort the issue at source, by programming the template to only show 10 scores. TheMagikCow (T) (C) 17:20, 6 July 2017 (UTC)
@TheMagikCow: You're wondering where it says to only add ten reviews in the template; it says it right here. TheAmazingPeanuts (talk) 15:29, 6 July 2017 (UTC)
But it also says "but you can add more in exceptional circumstances." so you could not just trim to 10 by a BOT. Keith D (talk) 21:22, 6 July 2017 (UTC)
@Keith D: Maybe not, but my issue is that some editors keep adding The Needle Drop in the template because of this video, and it doesn't help that The Needle Drop is not a reliable source (WP:ALBUMAVOID). See here, here and here for example. But I realized that this isn't the place to report this kind of issue, so sorry if I'm wasting your time. TheAmazingPeanuts (talk) 18:18, 6 July 2017 (UTC)
Whilst these edits may be unconstructive, it is not to say that a bot would be useful here. The bot would replace all cases where there are over 10 reviews - and as the specific guideline says there are circumstances where more than 10 reviews can be present. A bot can't tell the context (WP:CONTEXTBOT) so would actually do harm to pages. Removing excess reviews from pages where they don't meet policy is good, but not a task for a bot. I think you should talk to the editor and explain the policy of no more than 10 to them. If they continue, a block may be needed as a preventative measure. TheMagikCow (T) (C) 11:20, 7 July 2017 (UTC)
@TheMagikCow: There has been more than one editor adding more than 10 reviews. If an editor adds a review which is not a reliable source, I just revert the edit with an edit summary explaining that the edit is not constructive. But sometimes they don't respond. TheAmazingPeanuts (talk) 07:08, 7 July 2017 (UTC)

User:WP 1.0 bot, 2.0

Could someone create a replacement for User:WP 1.0 bot? Neither operator is active (one's last real activity ended two years ago, and the other has made eight edits in the last three and a half years), and as the bot's making some small errors (see [8]) that don't at all justify a block, it would be nice if we had an alternate bot doing the same thing yet run by an operator who could respond to requests for code tweaks. Note that a link to the bot's source code is posted on its userpage, so I suspect that it won't be much work to create a new bot. Nyttend (talk) 23:11, 4 July 2017 (UTC)

There is a statement at User:WP_1.0_bot/Web/Guide#About_the_Wikipedia_Release_Version_tools that "Wikipedia Release Version Tools" will replace 1.0 bot. @CBM: you wrote that in 2009; please could you update it? – Fayenatic London 21:34, 6 July 2017 (UTC)
Note to whoever may do an update: please revise the code where it specifies the category containing the assessment categories, from category:Wikipedia 1.0 assessments to Category:WikiProject assessments. (See CFD 2017 Feb 17 and Wikipedia_talk:Version_1.0_Editorial_Team/Index#Bot_is_down_again.) – Fayenatic London 08:19, 7 July 2017 (UTC)

This bot has been operating erratically for several years now, and once again it's not functioning correctly as of this post. We've had to beg people to make fixes and it's getting very tiring. Many projects depend on this bot to spot incorrect assessments and a multitude of other tasks. The last editors in charge of the bot seem to have vanished. Brad 00:02, 8 July 2017 (UTC)

After magic links are disabled, it might be useful to inform unaware editors, when they make edits that would currently result in the generation of magic links, that the use of templates is required to make ISBN and DOI links; sort of like how users are told to sign with four tildes and not to link to disambiguation pages. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 14:20, 7 July 2017 (UTC)

Magic links are currently created for ISBN, PMID, and RFC (not DOI). FYI. AFAIK. – Jonesey95 (talk) 15:10, 7 July 2017 (UTC)
You may want to start a quick village pump discussion to make the eventual BRFA run more smoothly, Jc86035. I imagine you'd get a quick SNOW support. This is obviously helpful. ~ Rob13Talk 00:33, 8 July 2017 (UTC)
@Jonesey95 and BU Rob13: I've started a discussion at Wikipedia:Village pump (proposals)#Magic links reminder bot. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me
08:18, 10 July 2017 (UTC)

Mass importance assessment for WikiProject Thailand

I see several bots are approved for WikiProject tagging, but I'm not sure which ones do assessment, so asking here. (Previously asked Anomie but he's busy.)

This is a request on behalf of WikiProject Thailand that importance parameters be added to the project banner ({{WikiProject Thailand}} or redirects) according to this list. If it's feasible, please also perform auto-stub assessment. Since it's been some time since the list was compiled, please skip any conflicts that are encountered. Discussion and project approval is here. --Paul_012 (talk) 15:53, 22 June 2017 (UTC)

I will have a look into this and see how feasible it is with the MW API. TheMagikCow (T) (C) 15:24, 28 June 2017 (UTC)
Paul 012 It looks very do-able - Coding.... Should have this done soon. TheMagikCow (T) (C) 16:12, 28 June 2017 (UTC)
BRFA filed Wikipedia:Bots/Requests_for_approval/TheMagikBOT_4 TheMagikCow (T) (C) 10:09, 29 June 2017 (UTC)
Thanks a lot, TheMagikCow. FYI, I've just removed five red links from the list. Also, I probably should have mentioned—just to make sure—that the bot should follow redirects for this task, as some pages may have been moved since the list was compiled. --Paul_012 (talk) 14:40, 29 June 2017 (UTC)
All  Done! TheMagikCow (T) (C) 18:42, 12 July 2017 (UTC)
Thanks a lot! --Paul_012 (talk) 21:57, 12 July 2017 (UTC)

List Duplicates

Hi all! I was wondering what your thoughts are on a bot removing items that are duplicated in lists (e.g. aluminium chloride fluoride in Dictionary of chemical formulas). Two non-blank rows have exactly the same wiki code, and I can't see a situation where that occurs intentionally. I am totally aware that this may be straying into some serious WP:CONTEXTBOT territory, so this may be better suited to the semi-automatic AWB. Has anybody got some thoughts? TheMagikCow (T) (C) 14:05, 12 July 2017 (UTC)
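The duplicate detection described above could be sketched roughly as follows. This is an illustrative sketch only, not code from any existing bot; it assumes each list entry occupies a single line, whereas real wikitext table rows can span several lines.

```python
def find_duplicate_rows(wikitext):
    """Return (first_line, dup_line, text) for non-blank lines whose
    wiki code is byte-identical to an earlier line."""
    seen = {}
    duplicates = []
    for number, line in enumerate(wikitext.splitlines(), start=1):
        stripped = line.strip()
        if not stripped:
            continue  # blank rows are ignored, per the request above
        if stripped in seen:
            duplicates.append((seen[stripped], number, stripped))
        else:
            seen[stripped] = number
    return duplicates

rows = "* [[Aluminium chloride fluoride]]\n\n* [[Aluminium chloride fluoride]]"
print(find_duplicate_rows(rows))  # first and third lines are identical
```

Given the CONTEXTBOT concern, a list like this would feed a semi-automatic AWB pass rather than unattended removal.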

Unclosed <!-- comment tags

This might be covered in some sort of database report already; if so, please tell me.

Could someone write a bot to go through everything (presumably via a database dump) and tag for cleanup all pages that have an unclosed comment tag (i.e. an instance of <!-- doesn't have a corresponding -->), and perhaps also all pages that have a closing tag with no opening tag? {{cleanup|reason=Broken comment tagging}} or something of the sort should suffice. This isn't a WP:CONTEXTBOT issue, since the bot won't be deciding where to add the missing tag; it's merely reporting on the problem. The bot should check the code, not the rendered text; it should ignore pages where the coding is visible, perhaps because someone intentionally used the character entity reference (like I did here) or because someone inserted nowiki tags, or zero-width spaces, or something like that in order to prevent the coding from working. Nyttend (talk) 02:26, 15 July 2017 (UTC)
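The unbalanced-comment check described above could be sketched like this over a database dump. The function name is illustrative; a real bot would also need to skip nowiki tags and character entity references, as noted above.

```python
import re

def comment_balance(page_text):
    """Return (unclosed_opens, unmatched_closes) for HTML comment markers.

    A simple left-to-right scan: each --> closes the most recent open
    <!-- if one exists, otherwise counts as an unmatched close.
    """
    opens = unmatched_closes = 0
    for token in re.findall(r'<!--|-->', page_text):
        if token == '<!--':
            opens += 1
        elif opens:
            opens -= 1
        else:
            unmatched_closes += 1
    return opens, unmatched_closes
```

Pages where either count is non-zero would be the candidates for a {{cleanup}} tag.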

Nyttend: Does this report (Wikipedia:WikiProject Check Wikipedia/List of errors #5) meet your needs? – Jonesey95 (talk) 14:45, 15 July 2017 (UTC)
Yes, it does. Thanks! Nyttend (talk) 20:33, 15 July 2017 (UTC)

Remove references sections from pages with no references

Would it be possible to have a bot do something like this? At least for some cases. Or at least create a list of pages that have references sections and no references and then treat them manually. -- Magioladitis (talk) 09:01, 16 July 2017 (UTC)

Possible, I imagine, but not desirable. I sometimes add a reflist template and section to an article with no references when I tag it as unreferenced. That way, when someone adds a reference, it appears in the correct location, with a header. – Jonesey95 (talk) 12:51, 16 July 2017 (UTC)
Jonesey95 Do you add an empty section tag too? Since we have bots to add sections when missing, having an empty references section does not seem right though. -- Magioladitis (talk) 12:53, 16 July 2017 (UTC)
I add exactly the text that you removed in your diff above, so that the section is in the right place, formatted correctly, and ready for the first reference to be added to the article. Without the section in place, a new reference would appear below the stub template, which is not desirable. – Jonesey95 (talk) 13:16, 16 July 2017 (UTC)
Jonesey95 OK... -- Magioladitis (talk) 13:17, 16 July 2017 (UTC)
Jonesey95 Since it's not clear what is to be done, we can discuss it somewhere else. I've been tagging empty references sections till someone told me that it's better to remove them instead. -- Magioladitis (talk) 13:33, 16 July 2017 (UTC)
Jonesey95 In 2013... T100741 -- Magioladitis (talk) 13:34, 16 July 2017 (UTC)
An empty References section is much different from, and much more useful than, another empty section (e.g. Reception or Early life). – Jonesey95 (talk) 13:48, 16 July 2017 (UTC)
OK. This is your opinion. I have already marked the discussion here as "not done" since the tasks should be uncontroversial and this is obviously not the case. Thanks for your comments. -- Magioladitis (talk) 13:52, 16 July 2017 (UTC)
Any empty ref section will hopefully help convince people to add refs. Doc James (talk · contribs · email) 05:26, 17 July 2017 (UTC)
It tends to be newer users who have yet to fully grasp wikisyntax. Often I have seen <ref> tags in a page with no {{reflist}} section. Somebody who does not understand why the refs are not showing at the bottom would, understandably, be frustrated. Why would we want to make the problem worse? We should not make it harder to create pages with proper referencing - in fact I would advocate putting in a references section with a reflist. TheMagikCow (T) (C) 07:46, 17 July 2017 (UTC)

TheMagikCow In fact the reflist will auto-show when a ref is added, and we have bots to add missing references sections. In the "yes references section, no refs" scenario there is an empty section shown at the bottom of a page. No big deal though; that's why I cancelled my request. -- Magioladitis (talk) 11:27, 17 July 2017 (UTC)

An empty references section is grossly misleading to readers, especially those with screen readers, as they are told in the TOC that there are references when there are, in fact, none. That is a much graver crime than making life slightly more difficult for editors. If the concern is that someone puts in <ref></ref> tags and wonders where their citation went, that's rather a non-argument given that articles will display such references even in the absence of a references section. They just won't display them in the references section. The solution to that is to have a bot add missing references sections when they are found (CW Error #03), not to prevent a bot from removing such empty and misleading references sections. Headbomb {t · c · p · b} 12:17, 17 July 2017 (UTC)

However, I'll add that this is clearly in need of wider input than what can be gained at WP:BOTREQ. If someone is interested in coding this bot, they should take it to WP:VPR to gain consensus first before they start coding. Headbomb {t · c · p · b} 12:24, 17 July 2017 (UTC)

Rewriting arguments of template:NFL predraft so they can be used with template:convert

At the moment, template:NFL predraft only displays imperial units. I would like to change it to also display metric units, using template:convert. Please see User:Spike/Sandbox/Template:NFL predraft for my suggested new version of the template. Unfortunately, many arguments of template:NFL predraft are given using fractional values in a format which is not allowed in template:convert: They use template:frac, template:fraction or Unicode characters like "⅞", see, e.g., Eric_Berry#2010_NFL_Draft. So, I would like to request a bot which performs a string substitution on all arguments of template:NFL predraft which use template:frac, template:fraction or Unicode fraction characters such that the arguments obtain the form "x+y/z". E.g., "{{frac|1|3|8}}" should become "1+3/8", "2{{fraction|1|2}}" should become "2+1/2", "5¼" should become "5+1/4", and so on. I have done some searches, see e.g. [9] and [10], and I believe that give or take some false positives, there should be around 700 cases. Spike (talk) 22:54, 20 July 2017 (UTC)
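The substitution requested above could be sketched as follows. This is only an illustrative sketch, not bot code: the function name is hypothetical, only the two- and three-argument forms of {{frac}}/{{fraction}} are handled, and the Unicode map covers just a few common glyphs rather than every fraction character that may appear.

```python
import re

UNICODE_FRACTIONS = {'¼': '1/4', '½': '1/2', '¾': '3/4', '⅜': '3/8', '⅞': '7/8'}

def normalize_fractions(value):
    """Rewrite {{frac}}/{{fraction}} calls and common Unicode fraction
    characters into the x+y/z form accepted by {{convert}}."""
    def repl(m):
        whole, parts = m.group(1), m.group(2).split('|')
        if len(parts) == 3 and not whole:
            return '{}+{}/{}'.format(*parts)   # {{frac|1|3|8}} -> 1+3/8
        if len(parts) == 2:
            frac = '{}/{}'.format(*parts)
            return whole + '+' + frac if whole else frac  # 2{{fraction|1|2}} -> 2+1/2
        return m.group(0)  # unexpected form: leave untouched
    value = re.sub(r'(\d*)\{\{(?:frac|fraction)\|([^}]+)\}\}', repl, value)
    for glyph, frac in UNICODE_FRACTIONS.items():
        value = re.sub(r'(\d)' + glyph, r'\1+' + frac, value)  # 5¼ -> 5+1/4
        value = value.replace(glyph, frac)                     # bare ⅞ -> 7/8
    return value
```

A real run would apply this only to the relevant arguments of {{NFL predraft}}, not to the whole page text.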

Before taking on this bot task, I'd recommend consensus be established for making the change to add {{convert}} in the first place in a discussion that includes members of WP:NFL. You may have an easier time of it if your sandbox version used {{#iferror:}} to avoid displaying errors before articles are updated. Anomie 12:13, 21 July 2017 (UTC)
Thanks very much for the iferror suggestion. I have implemented it. Regarding the consensus: I have asked for comments on the template talk page under Template_talk:NFL_predraft#Proposal_to_add_metric_values and on the NFL project talk page under Wikipedia_talk:WikiProject_National_Football_League#Proposal_to_add_metric_values_to_Template:NFL_predraft four days ago and I have not received any answers until now. Is there a time frame after which I could interpret zero negative comments as a kind of silent consent? Or does someone have to actively say "do it!"? Spike (talk) 13:13, 21 July 2017 (UTC)

Read created articles and insert them to my META userpage

The bot should call the website [11] and insert the articles there to my userpage located at the meta-wiki [12]. --Mathmensch (talk) 13:31, 21 July 2017 (UTC)

Automatically replacing Template:Unreferenced with Template:Refimprove

There are several articles with references in Wikipedia that still contain {{unreferenced}} tags. Can we configure one of Wikipedia's bots to automatically replace these tags in articles like this one? Jarble (talk) 19:31, 22 July 2017 (UTC)

It would need to be a bit more complicated than that, for example if someone had adequately referenced an unreferenced article you wouldn't want to simply move to refimprove. ϢereSpielChequers 22:20, 22 July 2017 (UTC)
I think this is what AWB does. -- Magioladitis (talk) 22:26, 22 July 2017 (UTC)
I think this is best done semi-automatically with AWB. You can pre-parse the category for {{unreferenced}} to find those that contain ref tags, then go through with genfixes. AWB genfixes by default will change {{unreferenced}} to {{refimprove}} in those cases, but the editor should check whether there's adequate sourcing to remove the tag entirely. ~ Rob13Talk 03:00, 23 July 2017 (UTC)

Can a bot fix comma spacing errors?

I have recently undertaken the fixing of comma spacing errors, which are a great annoyance. In most instances in English text (as in many other languages), a comma has a space after it. What we are supposed to have is:

Smith formed Smithco, Inc., in Provo, Utah.

Instead, we often see spacing errors like:

Smith formed Smithco,Inc., in Provo,Utah.

or:

Smith formed Smithco ,Inc., in Provo ,Utah.

or even:

Smith formed Smithco , Inc. , in Provo , Utah.

On the other hand, there are rare occasions where a comma should not be followed by a space. For example, many URLs contain commas (and would be broken if a space was added), as do some templates or template structures (such as "display=inline,title" and "display=inline,source" statements, or basically anything following "display="). Many filenames also contain bad comma spacing, which ideally should be fixed by renaming the file, but should be skipped by automated means that do not have the capacity to do this, since "fixing" the spacing error will break the link to the file. Finally, large numbers in the thousands and above take a comma (1,000, 1,000,000, etc.), and certain chemical and mathematical formulas intentionally contain a comma not followed by a space. So far as I know, there is never a situation where there is supposed to be a space before a comma. In the course of carrying out my spacing fixes, I have also noticed an unusual number of instances of external link text beginning with a stray comma , like this.

Basically, what I am looking for is a bot smart enough to:

  1. fix errant spacing around commas like the examples above, but
  2. ignore commas in URLs and as part of template structures (but not in regular text that happens to be in a template)
  3. eliminate commas that occur in the space between a URL and the text of a formatted external link
  4. ignore commas in large numbers, mathematical and chemical formulas
  5. ignore but make a record of commas in filenames.

If this is not possible, the next best thing would be some setup where a list of possible errors meeting these characteristics can be assembled for manual checking and repair. bd2412 T 02:07, 10 July 2017 (UTC)
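The core of rules 1 and 4 above can be sketched with two regex passes. This is an illustrative sketch only: it fixes spacing before a comma and adds the missing space when a letter follows, and it leaves digit-comma-digit (large numbers) alone, but it does not implement the URL, template, or filename exclusions (rules 2, 3, and 5), which is exactly the CONTEXTBOT problem that makes a fully automated run risky.

```python
import re

def fix_comma_spacing(text):
    """Fix common comma spacing errors in plain prose.

    WARNING (per the request above): this naive version would also
    'fix' commas inside URLs, templates, and filenames, so it must
    not be run on raw wikitext without further filtering.
    """
    text = re.sub(r'(?<=\S) +,', ',', text)      # "Provo ,Utah" -> "Provo,Utah"
    text = re.sub(r',(?=[A-Za-z])', ', ', text)  # "Provo,Utah"  -> "Provo, Utah"
    return text
```

Letters-only lookahead is what keeps "1,000,000" intact, since the character after those commas is a digit.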

I don't think that a bot could do this task, but here's an insource search (with a LOT of false positives) that you could start working on. – Jonesey95 (talk) 02:58, 10 July 2017 (UTC)
Thanks, but there are too many errors to be fixed without some kind of automation of some aspect of the task. bd2412 T 03:28, 10 July 2017 (UTC)
I think this would be a perfect job for a bot. It's "Look for pattern X, Y, or Z, and replace it with pattern A". ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 04:09, 10 July 2017 (UTC)
I just don't see how a bot could determine whether adding a space to "a,b" would be a correct action. I suspect, however, that someone could create a regex that said "show me '[a-z],[a-z]' in article space, but not inside of square brackets, curly braces, or tags", do an insource or database search, and create an article list for a discerning human to traverse with AWB. – Jonesey95 (talk) 13:57, 10 July 2017 (UTC)
A list would be helpful. I can't help thinking that there must be some portion of the task that a bot could undertake. Suppose we start with instances where there is an extra blank space before the comma? There are no URLs or mathematical formulas that will contain that, nor should there be any template calls affected by it. bd2412 T 14:17, 10 July 2017 (UTC)
You can search for pages like this yourself. Here's an insource search for 'letter space comma letter'. The search times out, but I get only 188 pages (some of which have math formulas, and some of which are templates; go figure). You should be able to fix all of those on your own with a simple regex find and replace script, checking each one before saving, of course. – Jonesey95 (talk) 05:14, 11 July 2017 (UTC)
This is some serious WP:CONTEXTBOT territory. I'd be most comfortable with AWB doing something like this. There's no way an operator could consider all edge cases. ~ Rob13Talk 14:36, 10 July 2017 (UTC)
Yeah, WP:CONTEXTBOT applies, unless some advanced filtering can be used or something. Headbomb {t · c · p · b} 19:57, 10 July 2017 (UTC)
I'd be glad to do this by AWB if a list could be generated with the high probability fixes, and excluding the issues referenced above. bd2412 T 00:41, 11 July 2017 (UTC)
IMO this seems like a really bad idea WRT WP:COSMETICBOT. Standing aside from that, the rule set for determining what should be fixed feels more like a human eyes-brain task. No objection to a bot adding a maintenance template that applies a hidden category so that editors can sort through the problem. Hasteur (talk) 12:35, 11 July 2017 (UTC)
WP:COSMETICBOT deals with Wikitext changes that are not visible to the reader. Incorrect spacing is glaringly visible to the reader, and diminishes the credibility of the encyclopedia. bd2412 T 12:46, 11 July 2017 (UTC)
BD2412, this is less of an issue with COSMETIC and more of an issue with CONTEXT (I think the above uses of COSMETIC might have been either mistaken or errors). As mentioned previously, there are so many edge cases, rules, and exceptions, that while the majority of rules could be programmed in for AWB use, at the end of the day there should still be a human checking the edit and clicking "save". Primefac (talk) 19:13, 11 July 2017 (UTC)
Ok, then - what can we do? If the best we can do is to come up with a list of highly probable errors to be manually checked, let's do that. I'll run the list if someone can make one. bd2412 T 19:25, 11 July 2017 (UTC)
I would say that perhaps an AWB regex search would be the way to go. There is some good information here. TheMagikCow (T) (C) 14:07, 12 July 2017 (UTC)
@BD2412: Yeah, the best search will definitely be using AWB to go through a database dump using a regex search. You'd want to search for three things: a letter, comma, then letter; a space, comma, then letter; and a space, comma, then space. Note you would want to use letters (not all characters) due to commas being valid with no spaces between many numbers. This isn't hard to do with regex, but let me know if you're unfamiliar with it and I can write something up quick. ~ Rob13Talk 22:42, 15 July 2017 (UTC)
Is it possible to do a search like that which excludes commas occurring in URLs? That seems to be the biggest source of false positives. bd2412 T 22:45, 15 July 2017 (UTC)
Try tacking (?<!https?://[^ \|\{\}\[\]\<\>]*) on the end of your regular expression. That should exclude most urls. -- John of Reading (talk) 05:20, 16 July 2017 (UTC)
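One caveat with that pattern: a variable-length lookbehind like `(?<!https?://[^ \|\{\}\[\]\<\>]*)` works in AWB's .NET regex engine but is rejected by Python's `re` module, which requires fixed-width lookbehinds. A Python-side equivalent, sketched here for illustration (function name hypothetical), is to locate the URL spans first and discard comma matches that fall inside them:

```python
import re

URL_RE = re.compile(r'https?://[^ \|\{\}\[\]\<\>]*')
COMMA_RE = re.compile(r'[A-Za-z],[A-Za-z]')

def comma_errors_outside_urls(text):
    """Return start offsets of letter,letter matches not inside a URL."""
    url_spans = [m.span() for m in URL_RE.finditer(text)]
    hits = []
    for m in COMMA_RE.finditer(text):
        comma_pos = m.start() + 1
        if not any(start <= comma_pos < end for start, end in url_spans):
            hits.append(m.start())
    return hits
```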
Isn't there an option to search on visible text only? Then nothing in <ref>...</ref> would matter, and we would have very few false positives, once numbers are excluded. — JFG talk 09:10, 25 July 2017 (UTC)
Comma spacing errors in references (particularly in book titles) also do need to be fixed, though. Perhaps these can be separated into distinct projects with different approaches. bd2412 T 12:05, 25 July 2017 (UTC)

Automated Unblock Bot

Could there be a bot that notifies a user who has made an unblock request that he is already unblocked? In many cases I have seen, the person who requests unblock is already unblocked or may be autoblocked. Besides providing a note to blocked users, it should notify them on their talk pages. Thank you. Abeer Mehr (talk) 19:01, 27 July 2017 (UTC)

Such a bot is possible, but approval would require evidence that such a bot is also wanted by the community and/or by the admins who handle the unblock requests. Anomie 19:47, 27 July 2017 (UTC)
How often does this occur? Is this something that is causing admins problems? Hasteur (talk) 19:58, 27 July 2017 (UTC)
There was one user, whose name I have forgotten, who was requesting to be unblocked, and he was unaware of the fact that he was auto-blocked. What do you think? Should there be one? Abeer Mehr (talk) 05:14, 28 July 2017 (UTC)
@Anomie: We can also ask the community if they need it. Or not?
I'd personally be against a bot responding to such unblock requests instead of admins. Admins want to see these requests because they sometimes indicate sock puppetry. I've caught socks based on auto block unblock requests before. ~ Rob13Talk 05:21, 28 July 2017 (UTC)
If it notifies an admin, will it be all right? I have no problem with that. Thank you, BU Rob13. Abeer Mehr (talk) 05:34, 28 July 2017 (UTC)
If the bot were to be approved it would need administrative privileges and therefore an Administrator has to run it. I think this is a very bad idea as very few AdminBots get approved and then with a great amount of scrutiny and community consensus. Hasteur (talk) 05:39, 28 July 2017 (UTC)
Hasteur, I will take full responsibility, so don't worry about that. But the thing to consider is whether it is a good idea. If it's not, kindly explain. Abeer Mehr (talk) 05:45, 28 July 2017 (UTC)
You are not an admin and I would be challenging the Bot Coder and Bot Operator very closely as adminBots have the great potential to be abused and go off the rails. Let's lay out a hypothetical. User A has been causing mischief on the site to the point that they have been blocked. User A logs out and continues causing mischief via IP address. User B registered for their account at home, but when they log in to do some spare time editing at work, they become auto-blocked because of User A's antics. User B doesn't have a block on them, but they are caught by the auto-block of User A. Without a lot of work and fuzzy logic, the bot is not going to be able to determine what's going on. When User B makes their request to be unblocked, they don't want a bot to respond to them, they want a human being to help them and try to figure out what's going on. Hasteur (talk) 05:55, 28 July 2017 (UTC)
You mean this is a bad idea, User:Hasteur? If not, tell me: what should I do? Abeer Mehr (talk) 05:58, 28 July 2017 (UTC)
@Hasteur: I think you're mistaken. Declining an unblock request is an admin action, but it does not require admin tools. This would not need to be an admin bot in the sense of having access to the tools. Having said that, I do think this isn't a good idea, Abeer Mehr. I've continued thinking about this since I last commented and I'm now more concerned about something else. An admin confronted with an unblock request of a long-time editor who is caught in an autoblock knows to grant them the IP block exempt user right, which will allow them to edit through the block. A bot could not do that. For this reason (and my previous thinking, to a lesser extent), I think this idea wouldn't work out well. Either way, thank you for thinking about ideas for automated tools to make lives easier. ~ Rob13Talk 06:38, 28 July 2017 (UTC)
@BU Rob13: Thank you for replying. I think I have requested an uncommon thing, which was subject to attention. I will consider it as  Not done, but no problem; I will return soon with great ideas. I had requested adminbot work. But thank you for your assistance. Abeer Mehr (talk) 06:46, 28 July 2017 (UTC)

Bot to tell about Bare References

OK. So I am back with another idea. Some editors cite sources in articles but often forget to write the title or access date, and an error message is then displayed in the References section. Notifying editors that they have missed something is what the bot should do. Like DPL Bot tells editors about disambiguation links, this bot would tell them that they have not written the title of something. How is the idea? Share. ABEER!!(wanna talk?) 10:44, 28 July 2017 (UTC)

We had a bot at one point which would auto-generate titles of URLs for references with only a bare URL. Maybe that one should simply be rebooted, or a similar bot coded. It's not something we need to tell users about IMO. --Izno (talk) 12:20, 28 July 2017 (UTC)
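The title-filling idea could be sketched minimally: fetch the bare URL and pull its <title> element into the citation. The function below shows only the parsing step (no network access); the name is hypothetical, and a production bot would also need to handle character encodings, dead links, and the actual citation-template edit.

```python
import re

def extract_title(html):
    """Return the contents of the first <title> element, or None."""
    m = re.search(r'<title[^>]*>(.*?)</title>', html,
                  re.IGNORECASE | re.DOTALL)
    return m.group(1).strip() if m else None
```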
OK. Waiting for other administrators' reviews. @Izno: You may search for it, and if it is no longer running, let's work on it together. ABEER!!(wanna talk?) 13:29, 28 July 2017 (UTC)
@BU Rob13: share your reviews. ABEER!!(wanna talk?) 07:17, 30 July 2017 (UTC)
ReferenceBot used to do tasks like this, but it has been idle for a few months. – Jonesey95 (talk) 11:41, 30 July 2017 (UTC)