Wikipedia talk:Bots/Requests for approval/Archive 4
This is an archive of past discussions on Wikipedia:Bots. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
"Request for Bot Approval" seems to be broken
Hi. Wikipedia:Bots/Requests for approval/Polbot 3 is a request for bot approval that's been open for about a month now. The last time a BAG member edited it was June 16. It's been tagged "attention requested" for about two weeks now, but no action has been taken. It has had unanimous approval from those who have commented. I'm considering withdrawing the request and just running the bot. This seems to me to be an "Ignore all rules" case: there seems to be community consensus to run it, the bot will help the encyclopedia, and the only thing stopping it from running is a rule. Are there any objections to this? If so, what do you recommend I do? (I mean no disrespect to the BAG members, by the way.) All the best, – Quadell (talk) (random) 14:19, 9 July 2007 (UTC)
- Withdrawn. It's not broken, this one just fell through the cracks. It's been taken care of. – Quadell (talk) (random) 14:51, 9 July 2007 (UTC)
- In case it helps, I made the original suggestion for a bot to handle this sort of stuff, and would gladly help monitor the edits this bot makes, at least in a trial period, to help get any bugs out of the system. I could also help handle any complaints, if there were any. On a more general note, is this lack of timely responses a seasonal thing to do with summer, or people being out of/in school/university? Or is it just that the BAG members all happen to be busy? Carcharoth 14:26, 9 July 2007 (UTC)
Scrub the above. It's been approved for a trial period of 50 edits. I'll discuss with Quadell. Carcharoth 14:31, 9 July 2007 (UTC)
Those links in the list of approved
Those links displayed as [1], [2], etc. seem to be the same as the flag log links. Why do we have two links to the same thing? Andre (talk) 20:07, 12 July 2007 (UTC)
Editing one's own userspace
Do I have to request approval to make edits to my own userspace, like updating the list of album covers with disputed fair use claims or the list of album covers without fair use rationale? Jogers (talk) 14:28, 19 July 2007 (UTC)
- No ∆ 14:42, 19 July 2007 (UTC)
- Great. Thanks. Jogers (talk) 14:54, 19 July 2007 (UTC)
BoxCrawler
Freak gave me approval but it's still listed under "trial", can this be remedied? Adam McCormick 02:07, 25 July 2007 (UTC)
- Will do; he approved it but didn't use the template. — Madman bum and angel (talk – desk) 02:21, 25 July 2007 (UTC)
Feasibility study before I actually do all that work
Per [1], I would like to get an idea, before I actually go to the trouble of writing the bot, of the likelihood of a bot that updates scores and team records on individual sports team season pages (2007 Indianapolis Colts season, for instance) being approved in principle. I realize that a large part of whether or not my bot gets approved would depend on the particular details of its implementation, but before I go to the work of writing it I'd like to know whether or not the general idea of such a bot is acceptable. Kurt Weber 16:24, 27 July 2007 (UTC)
- If the consensus is that it is OK to have scores on those type of articles, then yes, I would approve a bot that automatically updated the scores. —METS501 (talk) 16:28, 27 July 2007 (UTC)
- And it wouldn't just update scores, either--it'd also update standings, won-loss records, etc. Kurt Weber 16:45, 27 July 2007 (UTC)
- Yeah, same goes for those as for scores. —METS501 (talk) 16:46, 27 July 2007 (UTC)
Clarification on whether I need approval
I'm interested in creating a crawler just to copy all the page content of the translated pages on Wikipedia:Translation/*/Completed_Translations, and I think I have the same question as Symmetric above. If I'm just screen scraping this small section of Wikipedia (maybe 200 pages), do I need to get permission? I might use the "edit this page" button, but just to get the wiki-style page text (not the full HTML). I wouldn't actually edit anything. Thanks! Sedatesnail 19:47, 2 August 2007 (UTC)
- Nope, you don't need permission. I would recommend Special:Export, however. —METS501 (talk) 20:12, 2 August 2007 (UTC)
- To get the wikitext, use http://en.wikipedia.org/wiki/Main_Page?action=raw, as it reduces output and load. Matt/TheFearow (Talk) (Contribs) (Bot) 21:04, 2 August 2007 (UTC)
- Nice! Thanks for the info and the Special:Export page. Sedatesnail 21:17, 2 August 2007 (UTC)
- Yeah, ?action=raw and ?action=render are useful. There's also ?action=raw&templates=expand to also include the wikicode of the transcluded templates. Matt/TheFearow (Talk) (Contribs) (Bot) 21:33, 2 August 2007 (UTC)
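For anyone who wants a concrete starting point, here is a minimal sketch (modern Python, standard library only) of fetching raw wikitext via the ?action=raw form discussed above; the page title and User-Agent string are placeholders rather than anything prescribed, and the templates=expand behaviour is as described in the reply above.
    # Minimal sketch: fetch the raw wikitext of a page via ?action=raw.
    # Per the note above, templates=expand also inlines transcluded templates.
    import urllib.parse, urllib.request

    def fetch_wikitext(title, expand_templates=False):
        url = "https://en.wikipedia.org/wiki/" + urllib.parse.quote(title) + "?action=raw"
        if expand_templates:
            url += "&templates=expand"
        req = urllib.request.Request(url, headers={"User-Agent": "ExampleScraper/0.1 (replace with your contact info)"})
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode("utf-8")

    print(fetch_wikitext("Wikipedia:Sandbox")[:200])
Special:Export, mentioned above, is generally the better choice when you want many pages at once, since it can return them in a single XML document.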
CorenSearchBot (ne CorenGoogleBot)
Can I get approval to run the bot in supervised, semiautomatic mode to finish debugging it? (Basically, it performs one new page check, and submits the edits for my approval before doing them).
It wouldn't break Google's TOS since, strictly speaking, the searches aren't automated. Admittedly, that's splitting semantic hairs, but I doubt they'd object. :-) — Coren (talk) 00:14, 5 August 2007 (UTC)
- Did you send an email through OTRS, per the instructions on the BRFA? If so, I'd happily approve a small trial. If not, do so and then I will. That way there's a better chance of getting Google's approval. Matt/TheFearow (Talk) (Contribs) (Bot) 00:21, 5 August 2007 (UTC)
- Yes. Jessica Barrett has been nice enough to assist me with this. At any rate, if I fail to secure Google's permission in a timely fashion, I'll switch to another search engine (although I feel that'd be a loss). — Coren (talk) 00:27, 5 August 2007 (UTC)
- Actually, it'd still be automated; it's just that the results would be reviewed. I'm not trying to Wikilawyer here; I'm just pointing out the fallacy. It really doesn't matter to me either way. — madman bum and angel 00:59, 5 August 2007 (UTC)
- I believe that interpretation to be correct. Traditionally, the remedy has been to just use Yahoo (well, at least AFAIK). I'm pretty sure that the folks at Google wouldn't actually mind but thinking that and having their legal department clear it are two very different things. Unfortunately, the latter requires going through the proper channels which may or may not really be a problem considering that Google seems to be pretty pro-WP, even for someone with a 'don't be evil' mantra. S up? 01:12, 5 August 2007 (UTC)
- Oh, I agree. But, mind you, their no-automated-search policy has arguably a very different target than what my bot is... At any rate, I don't think this should be a problem for a short trial run and I wouldn't start the 'bot "for real" without an official okay from Google (or changing the search engine).
- In any case, I'm taking personal responsibility should Google get annoyed— for that matter, the searches will come from an IP under my control so I alone am responsible. If there is wrist slapping to be done, it'll be my wrists. However unlikely that might be. — Coren (talk) 01:17, 5 August 2007 (UTC)
- The problem is, if you run it and Google complains to the Foundation, then it is us, the BAG, who get in trouble. The bot has to abide by the policies of other sites it accesses. Until Google gives you the OK, or you switch to another engine, I will not approve the request, even for trial. Matt/TheFearow (Talk) (Contribs) (Bot) 06:29, 5 August 2007 (UTC)
- I suppose. Hm. Well, the actual search is neatly isolated in a function, it wouldn't take me long to swap it for testing— I'll switch to yahoo for the trial run at least. — Coren (talk) 14:32, 5 August 2007 (UTC)
- That was long and painful. :-) I'm using Yahoo with a proper App ID even. :-) — Coren (talk) 15:17, 5 August 2007 (UTC)
Joining the BAG
As part of the BAG policy when joining, I must notify certain pages about my request for BAG membership. Please see my request at Wikipedia talk:Bots/Approvals group#Nomination to join the BAG. Thank you for your time, — E talkbots 12:09, 10 August 2007 (UTC)
Flagging User:ClueBot
I would like to get a higher EPM for ClueBot or perhaps just use a maxlag= value. I was told by Wikihermit that I would need to get the bot flagged before I could have a higher EPM. Can I get User:ClueBot flagged? -- Cobi(t|c|b|cn) 01:52, 16 August 2007 (UTC)
- We actually don't give flags to anti-vandalism bots, so just feel free to go ahead and edit faster. You've got my permission to edit as fast as you need to revert the vandalism and warn the users in real-time. —METS501 (talk) 02:00, 16 August 2007 (UTC)
- Also, I think that the maxlag value isn't necessary here because reverting vandalism is of the utmost importance at all times. —METS501 (talk) 02:05, 16 August 2007 (UTC)
Can someone add my bot to the page, I can't edit the page for some reason. —Preceding unsigned comment added by Vandalstopperbot (talk • contribs) 11:01, 31 August 2007 (UTC)
- You're editing from the bot account. Since the page is semi-protected, the new account cannot edit it. You need to switch back to your actual account. It would be better for you to add it from your account & confirm that Cobi is the actual owner of the bot on the approval request. At the moment, the request & user page have only been edited from the bot account so there is no confirmation from the owner. -- JLaTondre 11:30, 31 August 2007 (UTC)
- Blocked by ST47. MaxSem 22:14, 31 August 2007 (UTC)
Request
Can someone add {{BRFA|AlptaSandBoxBot||Open}}
to Wikipedia:Bots/Requests for approval? Alpta 03:22, 6 September 2007 (UTC)
- Done. ~ Wikihermit 03:38, 6 September 2007 (UTC)
BAG Joining
Wikipedia talk:Bots/Approvals group. Thanks. CO2 21:39, 21 September 2007 (UTC)
Request, too
Can someone add {{BRFA|ChandlerMapBot||Open}}
to Wikipedia:Bots/Requests for approval? Thank you. --ChandlerMap 11:42, 22 September 2007 (UTC)
- Done. — E talkbots 12:15, 22 September 2007 (UTC)
Assistance requested
I've asked for assistance in getting a stable solution to the Sandbox reset problem, at Wikipedia:Bot owners' noticeboard#Critical - intro and sandbox bot missing. Please help. --Quiddity 20:25, 5 October 2007 (UTC)
Question
What is the policy on approval for assisted bots? For example, if I want to run solve_disambiguation.py only, with no modifications at all, do I need approval? Thanks. :) - cohesion 01:59, 6 October 2007 (UTC)
- You can't run a bot script on the main account (ie your account). Best to get a bot account (like user:CohesionBot), get it flagged, then run it. Carbon Monoxide 03:11, 6 October 2007 (UTC)
Request that the bot status be rescinded
How does one make a request for removal of bot status? User:JAnDbot has been making questionable removals of valid interwiki links, such as this one. On following up at the owner's talk page (on the Czech WP), it turns out that others have been noticing and commenting on the same problem across several languages. --EncycloPetey 13:01, 9 October 2007 (UTC)
- If talking doesn't work, then WP:BOWN, WP:AN or WP:BN may be in order. —METS501 (talk) 22:25, 10 October 2007 (UTC)
Bad bot!
There is a bot that is clearly flouting Wikipedia:Bot policy#Spell checking bots - GurchBot. Look at this diff to Golden West Network - I suggest something be done if it is being run in an 'unattended fashion' per the bot policy. Auroranorth 12:57, 12 October 2007 (UTC)
- According to Gurch on the bot's BRFA, Yes, I check every edit before I save it, so the bot is manually attended, which is a permitted way to run a typo-fix bot. It would be nice if it had the standard template on its userpage to make it easier to block in an emergency (although that seems unlikely to happen), rather than a redirect that means an inattentive admin might accidentally block its operator instead... --ais523 13:03, 12 October 2007 (UTC)
- I think it's not really that handy if a bot userpage redirects to its owner's page, but talk pages are OK IMHO. Auroranorth 13:06, 12 October 2007 (UTC)
How can I run a bot using WP:AWB?
Do I have to run it from my computer? Thanks in advance.--Tasc0 22:01, 14 October 2007 (UTC)
- Yes, you do. Just install the software, get approved (only applies to Wikipedia) and go for it. – Mike.lifeguard | talk 19:11, 17 October 2007 (UTC)
- Thanks for the reply. But I don't think I'm going to create the bot, at least not using WP:AWB, because I really don't feel like installing the .NET Framework. If I could do it without running the software from my computer, I might.--Tasc0 20:33, 17 October 2007 (UTC)
Checking here before I take the plunge: bot idea...
Is there a bot that can crawl a subject's lists (basic topic list, topic list, etc.), and check the talk page of each article listed on those lists for the existence of the subject's wikiproject banner, adding it to those pages where it doesn't yet exist?
If not, would there be any objections to the creation of such a bot?
The Transhumanist 06:01, 19 October 2007 (UTC)
- Yeah similar bots exist. Should be no problem. You should support the Banner shell. :: maelgwn - talk 07:04, 19 October 2007 (UTC)
- User:Kingbotk/P may be useful. Dihydrogen Monoxide (H2O) 23:41, 19 October 2007 (UTC)
Minor new task for OrphanBot and ImageRemovalBot
To make it easier for my bots to remove images, it would be useful for the bots to canonicalize image links in the articles they edit: remove unnecessary spaces, convert underscores to spaces, and convert percent-encoded characters to UTF-8. Do I need to get approval for this? --Carnildo 21:51, 23 October 2007 (UTC)
- IAR and just do it, it's not a big deal. βcommand 22:01, 23 October 2007 (UTC)
- I would not say that IAR applies, but I would say there's precedent. Almost all AWB bots do general fixes like these in addition to their requested tasks. Have at 'er. — madman bum and angel 22:19, 23 October 2007 (UTC)
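For illustration only (this is not the bots' actual code), the canonicalization described above might look something like this in Python:
    # Collapse extra whitespace, turn underscores into spaces, and decode
    # percent-encoded characters, as described in the request above.
    import re
    from urllib.parse import unquote

    def canonicalize_image_name(name):
        name = unquote(name)                       # e.g. %C3%A9 -> é
        name = name.replace("_", " ")              # underscores to spaces
        name = re.sub(r"\s+", " ", name).strip()   # collapse and trim spaces
        return name

    print(canonicalize_image_name("Some__example%20image.jpg"))  # -> "Some example image.jpg"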
Approval Threshold
Some questions on requesting bot approval:
1. I have a few tools on the go that grab various items through index.php and api.php and based on the results grab some more. They are currently read-only. Since one human-click can result in a few hundred page requests, they are definitely "automated tools" - but do they qualify as bots needing approval? My information from a support ticket I placed is that "you can request pages as fast as you want as long as you obey the maxlag= parameter", which I do.
2. Now I want to modify one of my tools to pump some output into a sub-page of my User or User-talk page. Does that now qualify as a bot needing approval? If so, would I need approval before even trying the code? Can I use my own ID for this or do I need to create a FranaBot?
3. In the event the answer to either above is yes, you do need approval, a follow-up question: NOT to be flippant or disrespectful, but given that I could almost certainly make my packets identical to IE7's, why would I care if you approve or disapprove? The obvious answer being that if I cause disruption, I'm tossed out of the project, but are there any other answers?
4.(added) If I distribute the read-only tools to other users, am I now crossing the approval threshold?
Thanks for your help. Franamax 17:40, 27 October 2007 (UTC)
- Hi. In general, read-only bots do not need approval, as a bot flag will do nothing for them. If you are going to be regularly grabbing a significant amount of content, however, I would recommend getting a Toolserver account or hosting them on someone else's account (if they permit you to). If you want to output some content when you click a button, I would say, no, that doesn't need approval. If it is frequent, or happens without your approval (i.e. makes edits automatically), then I would have a separate account make the edits. You can start by just doing it on your own account so we can see what it's like, and then we can advise you further. —METS501 (talk) 18:16, 27 October 2007 (UTC)
- If your tool is read-only and then posts a response of some sort in user space after human intervention (making just a few edits when that happens), then it's not a bot per the policy and you can indeed do that under your own account. If you are going to request a lot of pages, however, you might want to work from a database dump instead. It puts no load on WP and it's liable to be much faster for you as well.
As for why you'd want or need approval, it's actually fairly simple: any account that starts making a lot of edits very quickly without a bot flag is liable to be blocked on sight as a rogue bot by admins watching the RC page. When the account has the bot flag, then its edits do not clutter up the RC (unless one asks for bot edits), and the presence of the bot flag shows approval (to avoid twitchy admins). —Preceding unsigned comment added by Coren (talk • contribs) 18:33, 27 October 2007 (UTC)
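As a concrete illustration of "obeying the maxlag= parameter" mentioned above, here is a rough sketch of a polite read-only API call in Python (standard library only); the retry interval, User-Agent string and example query are placeholders, not anything prescribed.
    # Sketch of a read-only API call that honours maxlag: when the servers report too
    # much replication lag, the API returns a "maxlag" error and the client backs off.
    import json, time, urllib.parse, urllib.request

    API = "https://en.wikipedia.org/w/api.php"
    HEADERS = {"User-Agent": "ExampleReadOnlyTool/0.1 (replace with your contact info)"}

    def api_get(params, maxlag=5, retries=5):
        params = dict(params, format="json", maxlag=str(maxlag))
        url = API + "?" + urllib.parse.urlencode(params)
        for _ in range(retries):
            req = urllib.request.Request(url, headers=HEADERS)
            with urllib.request.urlopen(req) as resp:
                data = json.loads(resp.read().decode("utf-8"))
            if data.get("error", {}).get("code") == "maxlag":
                time.sleep(5)      # lagged: wait and retry instead of hammering the site
                continue
            return data
        raise RuntimeError("gave up waiting for replication lag to drop")

    print(api_get({"action": "query", "meta": "siteinfo"}))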
Thanks both for your response. More specific questions:
1. crPatrol is a handkerchief an admin gave me to sniff: it scans Newpages for recently re-created pages and reports them. I'm tweaking this to also check variant capitalizations of the article name to spot the more enterprising re-adders. I want to push the results to my Talk page, as there are indications that the Newpage patrollers may find it useful. Eventually I'm thinking about a 10-minute incremental scan, therefore unattended.
- a) if I understand, the scan every 10 minutes and results upload would need approval? But possibly could be conducted under my own account creds?
- b) estimating 3K articles/day, maybe 15K page requests and 30MB data per day to run, does that seem like a relatively minimal server load?
- c) api.php won't yield the Newpage log so I have to scrape html. Any suggestions? I did try hard to RTFM but I don't see it anywhere.
2. wpW5 will eventually do all of who-what-when-where-why. Right now it can take a text fragment and, if it is in the existing article, tell within 3 minutes when and by whom it first appeared (30 seconds is more accurate). I get the revision history by scraping html from index.php - I can get 5000/request that way, at the expense of bandwidth and loading the server. api.php will do it far more efficiently but &prop=revisions is limited to 50/time for non-bots. It would make way more sense for both client time and server load to use the api.php interface, but only if I can use the bot limit of 500 revisions per request.
- a) Can I request bot status for this purpose? Seems rather unconventional.
- b) I'm hoping to eventually distribute this tool widely. Even if I could get a bot login for it, I assume I wouldn't be able to hand it out with the bot identity embedded?
- c) Is there a realistic path to getting the api.php?prop=revisions non-bot limit relaxed? At least when &rvprop does not include content?
Thanks for the other tips. I like that Toolserver, I could get my awk and grep back! Franamax 04:58, 28 October 2007 (UTC)
- Alright, in order:
- (a) It would be best to get approval, yes, and this is potentially useful enough that you might want the target in WP space even; but you really don't want to have bot behavior on a real account. Admins are always twitchy about bots, and never hesitate to block (since no human is harmed).
(b) Nope. That's well within reasonable range.
(c) Actually, using action=query&list=recentchanges will get new pages: they are type="1" in the response.
- (a) Yes, and the bot limit is the reason why.
(b) No, every operator would need their own bot account and separate approval. The BAG doesn't just approve bots, it also approves bot operators. Don't want to flag a vandal and all that.
(c) I don't expect so. The devs have placed the bot/non-bot distinction there; it's not likely you'll be able to convince them otherwise.
- For the record, the watch-for-new articles function seems to be a good idea. — Coren (talk) 05:35, 28 October 2007 (UTC)
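For readers following along, here is a rough sketch of the recentchanges query described above; the parameter names (rctype=new and friends) follow the present-day MediaWiki API and may differ in detail from the 2007 interface, and the namespace and limit values are only examples.
    # Ask the API for recently created pages via list=recentchanges.
    import json, urllib.parse, urllib.request

    API = "https://en.wikipedia.org/w/api.php"
    params = urllib.parse.urlencode({
        "action": "query",
        "list": "recentchanges",
        "rctype": "new",        # page creations only
        "rcnamespace": "0",     # article namespace
        "rclimit": "50",
        "format": "json",
    })
    req = urllib.request.Request(API + "?" + params,
                                 headers={"User-Agent": "ExampleNewPageWatcher/0.1 (contact info here)"})
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read().decode("utf-8"))

    for change in data["query"]["recentchanges"]:
        print(change["timestamp"], change["title"])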
A couple suggestions. For #1, I'd recommend using #en.wikipedia on irc.wikimedia.org to get notification of new pages' creation; it's very efficient and puts NO more load on the server when you're just watching the site, as is appropriate. For #2, I'd use the Toolserver. This is not a bot function, it is a script function. There are already several existing scripts that perform this function as well. If you're confident of your script's efficiency and stability, I'd be willing to have my account host it. — madman bum and angel 19:14, 29 October 2007 (UTC)
- A quick followup here: I'm really happy with my wpW5 tool for exploring article history, most recently pushing it into spam investigation. The big problem was acquiring article history; the API limit of &prop=revisions&limit=50 made it more feasible to do things the hard way, and it took up to 60 seconds that I couldn't shave away. Then I decided to get serious about wpW5-v0.5 and figure out the best way to go: I created a bugzilla account to request an enhancement, started gathering my presentation, and hey, api.php now says the limit for non-bot &prop=revisions is 500!! So the road ahead has now opened up; that's 20-40 efficient calls to get all history for the worst-case articles I've seen yet.
- If anyone out there saw this thread and made a change to the API limits in response, thank you very much. If it just happened for other reasons, well, ain't life grand! I don't care how it changed, I think I'm on the trail of a power-tool that is going to improve the life of every Wikipedian (Windows users anyway). Give me any sentence from any article, I'll tell you when it got there, now I can GUARANTEE in less than 2 minutes.
- However, I disagree with the API change to &prop=revisions&limit=nnn: I think there is a quantitative difference in the appropriate &limit value depending on whether &rvprop includes "content". Specifically, including the actual content involves a join between the "revision" and "text" tables and a dramatic increase in DB and network activity. There should be a distinction in the allowed maximum value of &limit=nnn depending on whether &rvprop contains "content".
- For those of you wondering at my incoherence, write it off to happiness :) Franamax (talk) 00:01, 16 December 2007 (UTC)
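To make the &prop=revisions discussion above concrete, here is a sketch of batched history retrieval without content (so each request stays cheap); continuation uses the present-day API's "continue" mechanism, which differs from the query-continue style in use back then, and the article title is only an example.
    # Pull revision metadata in batches of 500 via prop=revisions, following "continue".
    import json, urllib.parse, urllib.request

    API = "https://en.wikipedia.org/w/api.php"
    HEADERS = {"User-Agent": "ExampleHistoryTool/0.1 (replace with your contact info)"}

    def revision_history(title):
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "ids|timestamp|user",   # metadata only; omitting "content" keeps it cheap
            "rvlimit": "500",
            "format": "json",
        }
        while True:
            req = urllib.request.Request(API + "?" + urllib.parse.urlencode(params), headers=HEADERS)
            with urllib.request.urlopen(req) as resp:
                data = json.loads(resp.read().decode("utf-8"))
            page = next(iter(data["query"]["pages"].values()))
            for rev in page.get("revisions", []):
                yield rev
            if "continue" not in data:
                break
            params.update(data["continue"])   # carries rvcontinue into the next request

    for rev in revision_history("Golden West Network"):
        print(rev["timestamp"], rev.get("user", "?"))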
Stepwise approval and other questions
Hi! I'm thinking about building something called User:ChoreBot. It's a way for users to pledge to do some regular cleanup task and get reminders. A few questions.
- I'm going to start out with just emailing users who register with ChoreBot. Although that's not quite a page write, I figure that's still enough to require approval. Is that the case?
- At some point (perhaps in a couple of weeks) I'll probably add an option for talk-page notification. Would you like me to include that in this request? Or leave it until I'm sure I'm going to implement it?
- Any other suggestions? I'm an experienced developer, but this is my first Wikipedia bot.
Thanks, William Pietri 01:10, 4 November 2007 (UTC)
- Typically talk-page notification is better than email, and if you do want to use email it should be through the Email This User interface, so that editors wanting to be reminded of chores need not disclose their email addresses to you. — xaosflux Talk 05:33, 4 November 2007 (UTC)
- Thanks for the feedback. In this case, I'm inclined to start with email notification, as the goal is to lure people back to work on Wikipedia even when they aren't on the site. But I'd like to include both eventually. I definitely agree about not revealing email addresses, though, and do plan to use the "Email This User" functionality from the get-go. Thanks, William Pietri 06:31, 4 November 2007 (UTC)
- Ok. Since nobody else has any strong opinions, I'll fill out the bot request for both email and talk page writes. Thanks! William Pietri 06:12, 6 November 2007 (UTC)
Sanity checking bot
I'm considering writing a bot that would loop through all the articles containing a certain infobox, reading the infobox parameters and making a list of likely errors such as:
- missing unit of measure, or a unit indicating the wrong type of value (eg. a force in a "power" field)
- incorrect conversion, if given in more than one unit of measure (eg. this)
- implausibly high or low value (eg. this factor of 10 error); this could include checking the ratio of two parameters expected to be related
My initial planned target is Template:Infobox Locomotive (in about 700 articles, the two errors above are from maybe 20 I've looked at closely enough that I'd have noticed), but it could be adapted to almost any template used to display numerical data, and I'm open to suggestions as to which ones would be worth doing.
Since checks of this sort only indicate that a value is probably wrong, not what it should be, the bot would not be able to correct them itself, and would hence be a read-only bot; according to earlier discussion it would hence not need formal approval, but I'd like to know if you think this is a good idea, and whether it has been tried before, before I take the time to actually write it.--QuantumEngineer 00:25, 10 November 2007 (UTC)
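To give a feel for what such a check might look like, here is a very rough sketch applied to a value already extracted from an infobox; the field, unit lists and thresholds are invented for illustration and would need tuning per template.
    # Hypothetical per-field sanity check for a locomotive-style "power" parameter.
    import re

    POWER_UNITS = ("hp", "kW", "horsepower", "kilowatt")
    FORCE_UNITS = ("lbf", "kN")

    def check_power_field(value):
        problems = []
        number = re.search(r"\d[\d,.]*", value or "")
        if not number:
            return ["no numeric value"]
        if not any(unit in value for unit in POWER_UNITS):
            problems.append("missing or unexpected unit of measure")
        if any(unit in value for unit in FORCE_UNITS):
            problems.append("looks like a force, not a power")
        magnitude = float(number.group(0).replace(",", ""))
        if magnitude < 10 or magnitude > 20000:
            problems.append("implausibly high or low value")
        return problems

    print(check_power_field("3,000 lbf"))   # flags a force where a power was expected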
- Seems to be a decent idea, and I don't see an issue with it "writing" to a log or other report page either. To judge its usefulness, I'd suggest that you start on the Project_talk: page of one of the projects that makes heavy use of the infobox you want to check. Thanks, — xaosflux Talk 03:24, 10 November 2007 (UTC)
- You may want to write to the article's talk page. Rich Farmbrough, 16:35 13 November 2007 (GMT).
- Since this is likely to be used on long series of individually not very important articles, it might be a long time before anyone read such a talk page note; I hence think posting all the results for one infobox together on an appropriate WikiProject for its subject would make more sense.
- While it would be technically fairly easy to have the bot leave a note on the talk page saying what appears to be wrong (or even tag the dubious information as such in the article itself), my aim is to get mistakes corrected where possible, rather than merely tagged as doubtful and forgotten about, which requires that the results be read and acted on by a human.
- I don't rule out this eventually becoming a read/write bot, but I'm inclined to first get it working as read-only and see what it finds.--QuantumEngineer 22:50, 13 November 2007 (UTC)
- Good idea. It can also check if the page has been replaced with "does this work?" 1 != 2 04:56, 28 December 2007 (UTC)
Antivandal bots
As someone who frequently RC patrols, and often speaks to others, I would like to make a comment about the various new antivandal bots that are being approved at the current time. I feel the approvals process is not strict enough in this regard, and that approving bots that aren't as good as existing bots may in fact have a negative overall effect. The simple fact of the matter is that, at the moment, we have at least four approved antivandal bots that, for the most part, just race each other to the same reverts. However, these bots are not equally reliable in giving correct warnings and reports to AIV as each other. Given MartinBot is currently shut down, the best bot by far in terms of reliability and accuracy of warnings that we have at the moment is ClueBot. After monitoring the contributions of all the bots for some time, I can conclusively say that it is by far the most consistent with its warning levels and reports to AIV. Furthermore, it uses a standard format of warning templates which mean that other scripts and bots can detect which warning level a user has previously received very easily. However, ClueBot is not the fastest bot. Looking at its status on IRC, it reveals that VoABot II has beaten it to a revert on over 2000 occasions that it recognised recently, and Counter Vandalism Bot has beaten it on 250 occasions. Since both of these bots are not as consistent with their warnings, this is counterproductive. Furthermore, VoABot II has increased response speed so much recently, that it is even causing many rollback failures to admins (notably DerHexer who is one of the most active in this regard). Again, I believe this is counterproductive, because I do not believe a bot is going to be able to judge the appropriate warning as effectively as a person.
There are further disadvantages that also stem from having various different antivandal bots racing each other. One such example, as I have seen on a number of occasions recently, is that Counter Vandalism Bot reverts and gives a level four vandalism warning to a dynamic IP from which there have been no edits for the best part of a month, and should be given a level one or two warning. ClueBot then reverts that IP on a separate page, detects it has just received a level four warning, and reports it to AIV. This may then be turned down at AIV because the IP has only made two edits. Thus, in such cases, Counter Vandalism Bot is actually worsening the effectiveness of ClueBot.
Now, unlike some, I am not against antivandalism bots by any stretch of the imagination. In fact, I very strongly support them. However, I would make some strong recommendations which I think would improve the effectiveness of these bots. Firstly, and likely controversially, I think they should operate at a slight delay (only a matter of seconds) so that, if any actual person were going to revert a particular edit, they would have a chance to do so before the bots. Although this would mean that vandalism would appear on pages for a slightly longer period, I think the net result would be positive due to a greater accuracy of warnings, which would see people learning from their errors or indeed being blocked more promptly, as appropriate. Secondly, and regardless of whether the first recommendation is adopted, I think the various antivandal bots should have differing time delays so as to ensure that the most reliable bot at warning users (ClueBot) always makes a revert if it recognises something, and the other bots only revert later if ClueBot has missed the vandalism. I think this would be greatly advantageous, and would probably not require much of a time delay to effect.
Finally, I think a far greater consideration needs to be made before approving new antivandal bots. The criteria should not simply be "can it revert vandalism and warn users?" New antivandal bots should exhibit a high reliability (and very low rate of false positives) at reverting vandalism that existing bots would not revert anyway. They should also warn and report users to AIV at least as accurately as existing bots. There is no benefit to approving a new bot that will just race the existing ones to the same reverts, and makes more errors in warning users. I hope all these recommendations will be considered, as I have spent some time thinking about this and indeed watching the current antivandal bots in action. Will (aka Wimt) 21:07, 12 November 2007 (UTC)
- I do echo wimt's sentiments, and did discuss the issue with him a couple of weeks ago. I'd suggest that there would be no problems caused to the wiki if we were to (temporarily) remove the approval of (or block) the offending bots for the period of time required to fix their issues. Those who run "successful" antivandalism bots (Cobi, myself, others) can aid those who are trying to fix their own bots (myself - by email only...). On the issue of antivandalism bots, I shall be trying to bring MartinBot into commission again within the next few weeks (recent problems were caused by it not "knowing" users, hence it would revert admins etc. When I have the time, Misza has kindly agreed to send me his userlists for an IRC recent changes reporting bot). Thanks, Martinp23 21:43, 12 November 2007 (UTC)
- For what it's worth, I would be more than unwilling to approve an AV bot at this time unless something really innovative came along, and I'm all in favor of an AV bot behavior standard. In particular:
- Make reporting uniform in behavior and a little more cooperative. Perhaps with shared bot-specific warning templates that are machine parsable so that they can be certain not to duplicate (and escalate) warnings
- Introduce standard delays to improve ordering
- Share whitelists/blacklists (where applicable)
- I don't think it's reasonable to block the 'bots at this time, but how about a notice on every operator's page to meet on some suitable workshop subpage? — Coren (talk) 23:00, 12 November 2007 (UTC)
- Bot-specific warnings may be the way to go here, using a standard code that the bots could write around. As for reporting to AIV, I think this should be based on warnings per time period, rather than the number of levels. The blocks are still done by admins, who can see what to do. — xaosflux Talk 23:40, 12 November 2007 (UTC)
- I must say I agree here with Wimt - I've been frustrated by a lack of streamlining in antivandal bots, and it really would help, I think. The next antivandal BRFA will be interesting. Dihydrogen Monoxide 08:59, 13 November 2007 (UTC)
- Further to this, there is nonetheless value in having a number of different bots working on different frameworks and different logic (although following policy on warning levels seems a no-brainer). They make the overall system more resilient. Rich Farmbrough, 16:28 13 November 2007 (GMT).
- Well, no one wants to deny them their own heuristics and internal methods. But they should share blacklists and whitelists, and they should have the same means of warning and reporting. — madman bum and angel 16:50, 13 November 2007 (UTC)
(undent)
I'm going to go ahead and draft up a specification for behavior of AV bots today, and we'll poke all the AV bot operators to chime in and contribute. My objective is to have a document we can point to as the "minimally correct" set of behavior that must be implemented for an AV bot to be approved, and which currently approved bots should conform to as quickly as reasonably possible.
Anyone want to tackle the four templates? I'm thinking {{uw-avb1}}, {{uw-avb2}}, {{uw-avb3}} and {{uw-avb4}}, which should minimally take the name of the tagging bot, the article and a timestamp as arguments (like {{uw-avb2|bot=ClueBot|time=~~~~~|page=c}}).
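To sketch how machine-parsable warnings could be consumed (purely hypothetical, since the {{uw-avb*}} templates above are only a proposal), a bot could scan the user's talk page for the highest prior level before picking its next warning, rather than blindly escalating:
    # Hypothetical: find the highest existing {{uw-avbN|...}} warning and escalate by one,
    # assuming the warnings are left on the talk page in a parsable form.
    import re

    WARNING_RE = re.compile(r"\{\{uw-avb([1-4])\s*\|", re.IGNORECASE)

    def next_warning_level(talk_page_wikitext):
        levels = [int(level) for level in WARNING_RE.findall(talk_page_wikitext)]
        return min(max(levels, default=0) + 1, 4)

    sample = "{{uw-avb1|bot=ExampleBot|time=12:00|page=Foo}}\n{{uw-avb2|bot=OtherBot|time=12:05|page=Bar}}"
    print(next_warning_level(sample))   # -> 3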
- Good work, Coren. Once a final draft is in place, I suggest that we should post unobtrusive notices around (Village Pump, Administrators' Noticeboard, CVU) to encourage development of consensus as all contributors, especially administrators, are affected by the behavior of anti-vandalism bots. — madman bum and angel 19:42, 13 November 2007 (UTC)
One time request for AMbot
I would like to be able to help out another administrator who has requested that AMbot be permitted to remove a number of user talk pages from a category, which they were put in through an improper template substitution. As this is a one-time task, and it is very similar to what is approved in Wikipedia:Bots/Requests_for_approval/AMbot, I was hoping to seek simple approval though this talk page rather than submitting a new formal request. For more details, please see the request at [2] concerning Category:User-specific Welcome templates. --After Midnight 0001 21:10, 23 November 2007 (UTC)
- While I'll speedily approve this as being close enough to your bot's scope, I think it would be better to have Category:User-specific Welcome templates go through CFD before mass-depopulating it, unless a clear consensus (not just a request) to remove these pages has been established. — xaosflux Talk 21:24, 23 November 2007 (UTC)
- I really do think that the user talk pages found their way here in error. The other pages in the category would still remain there. The category is only meant to contain the templates, not the pages that have been "templated". I'll ask the requestor to come here to comment also. --After Midnight 0001 21:30, 23 November 2007 (UTC)
- A note from the editor that made the 'error', or a link to where they admit it was an error, would be sufficient as well. This should use very clear edit summaries too, as it will trigger the "New Messages" flag on all those pages. — xaosflux Talk 21:47, 23 November 2007 (UTC)
- (ec) My request was to remove all pages in the user talk namespace from Category:User-specific Welcome templates, which is designed to include subpages in the user namespace. The user talk pages are in the category due to the mass substing of {{Anonymous anonymous welcome}} in late September. See, for instance, [3] and [4]. Since the template did not have <noinclude> tags around the category, every user talk page was categorised during the process of substing. I've added the <noinclude> tags to User:Anonymous anonymous/welcome, which is where the template was moved, so this should be a one-time task. I hope this explanation sheds more light on the purpose of my original request. – Black Falcon (Talk) 21:52, 23 November 2007 (UTC)
- Thanks, sounds good enough to me. — xaosflux Talk 23:13, 23 November 2007 (UTC)
Please flag ClueBot.
A patch of mine to MediaWiki has just gone live which enables bots to mark their edits as non-bot. ClueBot is already set up to do this, so it should start working as soon as it is flagged. ClueBot needs to be flagged because unflagged rollbackers are limited to 5 reverts per minute, which ClueBot regularly goes over during morning hours. Thanks. -- Cobi(t|c|b) 06:23, 13 January 2008 (UTC)
- ... but it also has the option to hide them? Wasn't the original consensus to grant it to ClueBot and VoABot made under the rationale that their edits will always be shown? Wasn't there a discussion about the potential for abuse? And, no offense, but it seems a little strange that you would propose the policy, and, within a few days of it changing, propose an expansion of that same policy without posting for consensus at WP:VP/T or a similar venue, but rather on the talk page of B/RFA (instead of opening a new permission request for the bot like most people have to do)? --slakr\ talk / 19:50, 13 January 2008 (UTC)
- Hmm, how do we handle the edit-summary-when-using-rollback issue? Previously, grabbing each version of the page was used to try to prevent multiple vandals from injecting vandalism and having a bot revert to a vandalized state. Do we want bots using rollback calls directly? -- Tawker (talk) 20:07, 13 January 2008 (UTC)
- No, I think you misunderstand, Slakr. It will look like a normal edit, whether or not it is flagged with this new patch. Appending &bot=0 to the edit page or to the rollback link (which ClueBot currently does) will make it look like a normal edit, in both Recent Changes/Watchlists and in the page history. If &bot=0 is appended, it is exactly as if it weren't flagged at all, except rollback limits aren't so strict that they hinder the operation of the bot during peak hours. And, Tawker, ClueBot already does use rollback. But, it uses its own edit summary so it looks exactly like the bot's manual revert. Rollback is just more efficient. Thanks, Cobi. (At an untrusted computer, thus the IP instead of logging in) 24.211.185.44 (talk) 22:49, 13 January 2008 (UTC)
- Given that the patch has gone live the only issue against giving a flag to Cluebot is that it may revert vandalism too quickly for each edit to be manually checked. Obviously, we could apply the same standard to every bot that it not be given a flag because we can't manually check every edit. Ludicrous of course, so I fully support giving a flag to Cluebot and don't honestly feel "community input" is needed on this issue, seeing as I expect a lot of the masses would immediately oppose without fully understanding how it is implemented (probably I'm just full of bad faith ;)), and as this is a form of bot flagging where the only thing that changes is the max rollback rate - recent changes is still used (which is desirable). Very little change is actually made to the status quo. Martinp23 22:05, 14 January 2008 (UTC)
Even though I'm a non-member of BAG, I also approve this. If we're making it easier for the bot, let's make it easier all the way. To any bystander this will look like an ordinary rollback, but will allow the bot to run faster. Миша13 20:56, 15 January 2008 (UTC)
- Should other anti vandal bot operators be asked to move their Bots to this system? Flag the Bots but have them mark their edits as non-Bot ones so they show in recent changes? WjBscribe 00:23, 18 January 2008 (UTC)
This sounds good. And when using rollback the bot still has its own custom edit summary, right? So, it's flagged in order to go faster, turns off "bot" for the edit so it appears in RC, and the rollback has a custom edit summary saying why the reversion was made, where to complain, etc? --kingboyk (talk) 18:28, 21 January 2008 (UTC)
Regarding the Python Wikipedia Bot
I'm wondering, do I need approval to use this bot? It's listed on the semi-bot pages and I did not understand if the software listed there need always approval from the community. Please confirm.--Ivo Emanuel Gonçalves talk / contribs (join WP:PT) 03:59, 15 January 2008 (UTC)
- Pywikipedia needs approval. βcommand 04:00, 15 January 2008 (UTC)
- pywikipedia can be used to create user assisted tools as well, I don't see why that would need approval. BJTalk 04:05, 15 January 2008 (UTC)
- Hm, I meant in the sense of using it to help clean up vandalism in the English Wikipedia. Sorry for not making that clear. Alright, I'll be asking for approval later in the future.--Ivo Emanuel Gonçalves talk / contribs (join WP:PT) 04:15, 15 January 2008 (UTC)
- It is a python library, you need to code something on top of it to do anything(it does include some scripts). BJTalk 04:26, 15 January 2008 (UTC)
A comics bot
I've only just started using AWB, but I was wondering if it was possible to set up a bot account to use it to do some tasks for WP:COMICS. Could a bot account using AWB run through the comic stub categories and unstub long articles automatically, and could it replace {{comicsproj}} with {{comicsproj}}? I don't want to request a bot account until I understand what I can do with it and if it is necessary. Appreciate the help, this is all a bit new to me. Hiding T 13:06, 17 January 2008 (UTC)
- All the +bot flag does is hide the edits from the recent changes list, which is needed if it operates at high speed. BJTalk 13:37, 17 January 2008 (UTC)
- I thought it would also allow AWB to run automated? Am I wrong on that score? Hiding T 13:09, 18 January 2008 (UTC)
- You're kind of right yes. To use AWB in an automated way you have to be added to the bots list on the AWB checkpage; to get so added on EN you need to have a bot approval (mostly). --kingboyk (talk) 18:25, 21 January 2008 (UTC)
NOEDITSECTION
I've removed it on WP:BRFA, because I found it intensely annoying. Why was it added? I can see potential ugliness from the new "first section" edit headers, but can they not be disabled in each transcluded request rather than making it far more difficult to do anything with WP:BRFA? Martinp23 02:38, 12 February 2008 (UTC)
- Good call, that was annoying. I did not notice that magic word. βcommand 02:40, 12 February 2008 (UTC)
- Hmm I expect the original reasoning was to stop users pressing the (+) button to make a new request. I expect we can just revert and educate in cases where that happens though. Martinp23 02:43, 12 February 2008 (UTC)
Assessmentbot
This is just to run an idea past you guys before actually doing something about it. :-)
Problem
Some of the Country Projects have huge numbers of Unassessed articles - Spain has 14030, France has 10857. France has another 11610 Stubs waiting to be assessed for Importance. Past experience has shown that no one's much interested in doing anything about that when there are thousands to be done - I've assessed several thousand articles myself; it's tedious, back-breaking work - but that if you can get it down to zero, then people are quite good at keeping on top of it after that. Assessing for Class greatly slows down assessment, as you have to grab the article before moving to the Talk page, whereas you can assess for Importance just based on the article name. So a way of assessing for Class automagically would help considerably. When you look at them, you find that most of these Unassessed articles are transwiki'd communes and villages.
Proposal
Two bots, to be run as a one-off per Project. The first is a read-only bot that reads in a list of the Unassessed articles in a Project, and counts article length, number of .jpg links, <ref> tags, headings and stub templates, and looks for a "town" infobox such as Template:Infobox_CityIT - if present, it extracts the population of the place. Then I would manually set up lists to feed into auto-mode Kingbot for assessment in the Project banner on Talk pages, thus:
- Redirects - <100 bytes; would be handled manually as they quite often represent article moves gone wrong, and it's worth catching them
- Stubs - anything <1750 bytes, anything <2250 bytes without a .jpg (gathered material), anything <2750 bytes without either a .jpg, 3+ headings or 4+ inline refs.
- Starts - between 5000 and 10000 bytes with 3+ headings and <10 inline refs, or between 3500 and 5000 with a jpg, 3+ headings and <10 inline refs.
- Low importance - population <5000 people
- Mid importance - population >20000 people but <100000
- Based on my experience with the Italy Project, I'd guess that those rules might assess about 60% of Unassessed articles by Class and about 30% by Importance - so a useful amount of work, but still leaving large grey areas for human assessors. I think the above rules are pretty conservative and could be tweaked upwards with experience. Manual intervention should allow me to catch any articles that are obviously of >Mid importance - and I like to just eyeball things anyway. ;-/
- My main interest in this is for one-off Project assessment, so it would be a non-mainspace bot, but a logical expansion would be to use similar rules to apply "Project" stub tags on the article itself, perhaps with more conservative rules (<1250 bytes?, <1000 bytes?). I know the stub sorters get a bit twitchy about that kind of thing, and it wouldn't be my main focus, but how would people feel about that? FlagSteward (talk) 12:05, 12 February 2008 (UTC)
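A rough sketch of how the thresholds proposed above might be encoded, assuming the read-only pass has already gathered the per-article counts (length in bytes, .jpg links, headings, inline refs) and the infobox population; anything matching no rule is left for a human assessor.
    # The classification rules from the proposal above; None means "leave for a human".
    def assess_class(length, jpgs, headings, refs):
        if length < 100:
            return None          # likely a redirect; handled manually per the proposal
        if length < 1750:
            return "Stub"
        if length < 2250 and jpgs == 0:
            return "Stub"
        if length < 2750 and jpgs == 0 and headings < 3 and refs < 4:
            return "Stub"
        if 5000 <= length <= 10000 and headings >= 3 and refs < 10:
            return "Start"
        if 3500 <= length < 5000 and jpgs >= 1 and headings >= 3 and refs < 10:
            return "Start"
        return None

    def assess_importance(population):
        if population is None:
            return None
        if population < 5000:
            return "Low"
        if 20000 < population < 100000:
            return "Mid"
        return None

    print(assess_class(length=1600, jpgs=0, headings=1, refs=0), assess_importance(3200))  # Stub Low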
Review of RfBA BetacommandBot Task 8
In the light of the discussion going on at Wikipedia:Administrators' noticeboard/Incidents#BetacommandBot "rating" articles and leaving notes about it, it may be necessary to review at some point whether there is community consensus for this task. I don't know what the procedure is, but I thought that the people running RfBA should at least be aware of this discussion. -- Jitse Niesen (talk) 20:04, 10 November 2007 (UTC)
- I'm guessing the procedure (if required, although from my glance at ANI it won't be) would be to run another BRFA, and to invite broader discussion than this one received. If the community approves the task, BAG approves it. If not, no bot. Dihydrogen Monoxide 02:53, 11 November 2007 (UTC)
- Oh. I complained to him about that recently. Everywhere I go I see those infernal messages, and it seems to me that 1) they're not needed 2) judicious use of templates could have the same effect. Didn't know it had been raised on ANI...
- SkiersBot has been doing a similar thing but his was somewhat worse, as I posted elsewhere on this page I believe. --kingboyk (talk) 17:22, 24 January 2008 (UTC)
- Per this conversation, I created this process Wikipedia:Requests_for_comment/User_conduct#Use_of_bot_privileges, but in the specific case of BCB, another user already started WP:AN/B, so that would be the correct forum for all BCB editing issues IMHO. If they reach consensus, then it should probably go to a WP:BOTS/Subpage for the BAG to review the bot situation and tell the crats whether or not this bot has community backing, etc. MBisanz talk 01:52, 26 February 2008 (UTC)
- Looks good, though I am not convinced it would get much use. I think it would belong better at WP:BRfC like WP:BRFA. I think with the general slow pace of things around BAG it will be hard to get many opinions about this. Is there a more formal discussion of the proposal going down? -- maelgwn - talk 03:54, 26 February 2008 (UTC)
- Well, the shortcuts can point anywhere; right now the only discussion is at Wikipedia:BN#Bot_change and it's more about my motives than the proposal. I think I hit Wikipedia:Village_pump_(proposals)#-BOT_forum with no comments, so I guess here is as good a place as any. MBisanz talk 04:00, 26 February 2008 (UTC)
Code publishing?
Just out of curiosity, why don't we publish the code of the bots for community review? What are the pros and cons of publishing vs. not? Lawrence § t/e 14:09, 4 February 2008 (UTC)
- It's up to every programmer.
- Pros : Allow others to check and add improvements, allow others to use it.
- Cons : Allow others to use it (sigh, depends on the bot), exploits, abuses. You have to clean your code before publishing it. You can no longer claim to be the only one running this particular bot.
- NicDumZ ~ 14:24, 4 February 2008 (UTC)
- Would the open source requirements for WP extend to bots, and require code release if asked for? Lawrence § t/e 14:25, 4 February 2008 (UTC)
- It seems very hard to enforce as a policy. It's better to discuss it with individual bot operators, and convince them to release their bot code. — Carl (CBM · talk) 14:50, 4 February 2008 (UTC)
- It's always been that coders are requested to make their code open source, but we don't force it. βcommand 15:10, 4 February 2008 (UTC)
- I'm wondering, though, if on a going-forward basis the disclosure of source code could be made a prerequisite to bot flagging. For bots with sensitive code (vandal bots), the code could be sent to OTRS. I'm imagining that the efficiency of most bots would increase, since more eyes would find more ways to improve the code. Also, when a person left the project, it would be rather easy to replace them, as opposed to waiting until someone new can re-code a new bot that looks like it's doing the same thing (as happened to either an archiver or a vandal bot, I believe). MBisanz talk 19:56, 4 February 2008 (UTC)
- I'm opposed to forcing users to publish their code. Like I said, feel free to request it. As for archiving bots, we have not had any issues; there is one that comes stock in pywikipedia. As for anti-vandal bots, you might ask Cobi, but the AVBs that Tawker and co. used to run were rendered useless by changes in MediaWiki. βcommand 20:04, 4 February 2008 (UTC)
- This would be a good idea. Lawrence § t/e 20:00, 4 February 2008 (UTC)
(outdent) The vandal bots aren't sensitive and ClueBot is already open source. I have two objections to releasing my code: first, I would have to clean it up as mentioned above, and second, I don't want to be liable for fixing more than one copy of the code. If I leave and a botop that knows what they're doing takes over my code that is fine, but I don't want 10 clones of BJBot screwing up everywhere. BJTalk 21:33, 4 February 2008 (UTC)
- I'm not suggesting current bots be required to release code, only that new bots, as part of the approval process should be required to release it, at least to the BAG mailing list (if that actually exists) or OTRS. And of course, approval for 1 bot to do 1 task wouldn't empower any other bots to do that task, even using the identical code, so if someone installed it and started using it, I'd say they should be blocked on sight. What about even archiving code somewhere? I seem to remember that Gurch had a bot updating the ARBCOM elections and took a lot of heat when he needed to travel during the elections. If there was a common repository, it would've been easier to fix. MBisanz talk 21:44, 4 February 2008 (UTC)
- I'm opposed to a "private release" to some local instance, because bot ops from other projects would benefit a lot from a public release.
- Bjweeks, I think that the second point you raised is easy to avoid. You can license your code, or simply request that when you're still active, no one has the right to run another BJBot on en: (and on every project where you are) : Since BAG would request, for approval of new bots, the source, I don't see how a clone of BJBot could run here :) . Also, bot ops are responsible for their edits, whoever wrote the script : you shouldn't worry about an outdated version of your code running somewhere else, the other bot op should :þ
- From my personal experience, I'd say that those curious to look at a source of a bot don't really care about how messy the code is : They just want to know roughly how it works, or will clean the code by themselves if they want to reuse it elsewhere. But sure, right, I do hate submitting dirty code, and that's maybe why I mentioned that point. :( NicDumZ ~ 22:33, 4 February 2008 (UTC)
- I'll start cleaning up my code for release; some of the code I need to bring back into working order on enwiki first. I'm nowhere near as dedicated as Betacommand and others, so long absences from me are a possibility, and that is why I should start publishing my code. BJTalk 23:54, 4 February 2008 (UTC)
- Also, documentation is almost nonexistent in my code, which makes it harder to reuse. BJTalk 23:56, 4 February 2008 (UTC)
- Though it is still under development, I have published the code for User:XLinkBot (see User:XLinkBot/Code). I don't see any harm in others seeing the code; if they want to use it, feel free, and any comments are also welcome (though Versageek and I will be the ones to decide whether a suggestion is really implemented, unless there is strong opposition to certain features). Also, be aware that the code there may not be totally up to date, as I am still developing the wiki interface of the bots (something that did not exist at all in User:AntiSpamBot; that bot only listened to IRC). There is no explanation in the code, and it is far from 'clean', but at the moment it works 'properly' (which does not mean that no improvement is possible).
- In a way, it would be good to see whether spammers will now try to 'use' the code to circumvent the bots (they would need to see exactly where and how it uses perlwikipedia, and to understand the inner workings of Wikipedia); that leads to a nice way of improving the code of said bots. --Dirk Beetstra T C 15:39, 6 February 2008 (UTC)
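For readers unfamiliar with what "publishing bot code" looks like in practice, here is a minimal, hypothetical sketch of a read-only script querying the standard MediaWiki API in Python. It is not the code of XLinkBot, BJBot, or any other bot discussed above; the bot name, user-agent string, and page title are placeholders, and only well-known api.php parameters are used.

# Hypothetical, minimal read-only bot script using the public MediaWiki API.
# Not the code of any bot discussed above; for illustration only.
import requests

API = "https://en.wikipedia.org/w/api.php"  # standard API endpoint
HEADERS = {"User-Agent": "ExampleBot/0.1 (operator contact on talk page)"}  # placeholder

def fetch_wikitext(title):
    """Fetch the current wikitext of a page via action=query / prop=revisions."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "titles": title,
        "format": "json",
        "formatversion": "2",
    }
    resp = requests.get(API, params=params, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    page = resp.json()["query"]["pages"][0]
    return page["revisions"][0]["slots"]["main"]["content"]

if __name__ == "__main__":
    # Print the first 500 characters of a page as a smoke test.
    print(fetch_wikitext("Wikipedia:Bots/Requests for approval")[:500])

A published script along these lines lets other operators audit or take over a task without re-implementing it from scratch, which is the benefit argued for above.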
I suggest that we discuss whether it is appropriate to encourage bot code to be released under GFDL.
GFDL-published bot code should of course be approved before anybody is allowed to run it, and it should only be run by approved bot operators. Currently used bots may not be released under GFDL, either because some parts of the code are taken from a non-free source, or because the developer does not want to release the code. However, this can be treated similarly to how we treat copyrighted/free images. For instance: a proprietary bot should not be allowed if a free (and approved) one is available that does the same task. A long-term goal may be to use only GFDL bots. Oceanh (talk) 22:22, 25 February 2008 (UTC).
- I think the GPL is better suited for programs than the GFDL, though. The ClueBots are released under the GPL. -- Cobi(t|c|b) 22:28, 25 February 2008 (UTC)
- The GFDL isn't any more suited for code than the GPL is suited for documents. --Carnildo (talk) 22:29, 25 February 2008 (UTC)
- Thanks. The point is to publish under a suitable GNU license, and I agree that the GPL is probably better suited. If the ClueBots are already released under the GPL, that's a good start! We don't need to rely on proprietary bots doing tasks that GPL bots already do. And these bots may be an inspiration for other would-be bot programmers. I can see some problems with "version explosion", which should be avoided. And also, since bots are powerful tools, certain skills and insights are required to make good bots, or good modifications of existing bots. Oceanh (talk) 22:46, 25 February 2008 (UTC).
BRFA table...
Hey, is it just me, or is the bot that normally maintains the table near the top of the BRFA page MIA? (You know, the one that shows when each request was last edited, and when it was last edited by BAG, etc.) SQLQuery me! 13:52, 15 February 2008 (UTC)
Unregistered bot
User:AtidrideBot registered on Dec 10, 2007 and thus far has made a single drive-by edit to the editabuse template [5]. Two things: one, the edit itself seemed to target a bot owner; two, the user's name looks like a bot's, even though it isn't flagged as such. MBisanz talk 20:52, 25 February 2008 (UTC)
- Block and point to WP:BRFA βcommand 20:57, 25 February 2008 (UTC)
- I notified, but did not block, since I wanted to see what would happen. They've repeated the edit to the template (and I've reverted) [6]. They've also told me "why" on my talk page [7] and gone and moved the BC subpage to a formal noticeboard title [8]. Could the owner of the WP:SPA please stand up? Only registered users would know of BC and BCB's image work. I agree a block is called for, BC, but I'd like a less involved user to confirm it for me. MBisanz talk 14:16, 26 February 2008 (UTC)
Kddankbot
I posted this on the main talk, but could someone go ahead and approve this? Geoff Plourde (talk) 07:56, 8 March 2008 (UTC)
NFCC BRFA
I'd like to suggest reopening it. It really wasn't open for long enough at all. I'd suggest a minimum term of 7 days to gauge community consensus. If the bot fails, then so be it - clearly the community doesn't want it. We should therefore, perhaps, get over it and not try to push a bot which seems to have support within BAG through the system. Thoughts? Martinp23 23:32, 7 March 2008 (UTC)
- That is not a good idea. Most of the issues have been addressed. It's just a simple BCBot clone. βcommand 23:47, 7 March 2008 (UTC)
- Well, maybe my questions can be dealt with without reopening it. 1. Will it respect a NoImageBots tag once I create one on the model of NoBots? 2. Will you consider adding a tagging notification feature for the phase 4 implementation? MBisanz talk 02:17, 8 March 2008 (UTC)
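As an illustration of what honoring such an opt-out tag could involve, here is a hypothetical Python sketch of an exclusion check along the lines of the {{bots}}/{{nobots}} convention. The bot name and the NoImageBots template are placeholders (the latter is only proposed in the comment above), and the parsing is deliberately simplified rather than a definitive implementation.

# Hypothetical sketch: check whether a page opts out of this bot's edits via
# {{nobots}}, a proposed {{NoImageBots}}, or {{bots|allow=...}}/{{bots|deny=...}}.
import re

BOT_NAME = "ExampleBot"  # placeholder bot account name

def allowed_to_edit(wikitext, bot_name=BOT_NAME):
    """Return False if the page's wikitext opts out of edits by this bot."""
    # Blanket opt-out templates.
    if re.search(r"\{\{\s*(nobots|NoImageBots)\s*\}\}", wikitext, re.I):
        return False
    # {{bots|deny=all}} or {{bots|deny=Foo,ExampleBot,Bar}}
    deny = re.search(r"\{\{\s*bots\s*\|[^}]*deny\s*=\s*([^}|]+)", wikitext, re.I)
    if deny:
        names = [n.strip().lower() for n in deny.group(1).split(",")]
        if "all" in names or bot_name.lower() in names:
            return False
    # {{bots|allow=...}} restricts edits to the listed bots only.
    allow = re.search(r"\{\{\s*bots\s*\|[^}]*allow\s*=\s*([^}|]+)", wikitext, re.I)
    if allow:
        names = [n.strip().lower() for n in allow.group(1).split(",")]
        return "all" in names or bot_name.lower() in names
    return True

A bot would call a check like this on each page's wikitext before saving, and simply skip the page when it returns False.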
- I'm thinking about the questions of the community at large. Whether or not it is a clone, if they don't want it we won't run it. Martinp23 07:04, 8 March 2008 (UTC)
- That's one I don't have an answer to. I guess it's just not an issue we've faced that frequently. I see there is a debate on the boards about Cambridge U's computer security dept having a role account. And I think there is some BOCES-style program that operates what I can only guess is a role account. And I discovered that User:WP 1.0 bot can be run by any user from a web interface. MBisanz talk 07:15, 8 March 2008 (UTC)
- I'm OK with re-opening it, but I'd prefer not to have another tarpit-slugfest. Would you be OK with me cleaning up the proposal a bit to get rid of any offensive terminology? (I'm not sure I have to ask -- I'm not BAG, so in theory I shouldn't be editing closed discussions.) I'd also like to make it clear from the get-go: this is a discussion on whether or not the bot should run on the English Wikipedia. While feature requests are often nice and very helpful, ofttimes the operators are not the bot developers. We never ask people running interwiki.py to please have the bot leave a message on the talk page of the article it's modifying, for instance. In this instance, I'd like to treat this bot in the same manner, as a feature-frozen program. Now, this is not to say that bugs can't be addressed by the upstream developer (Betacommand), as in any other normal software package (AWB, Pywikipedia, etc.). Also, I would like to keep discussion about Betacommand to a minimum, should this discussion be re-opened. While yes, he is the developer of this software, this isn't about Betacommand. If we can stick to these points, I see no issues with re-opening the BRFA. SQLQuery me! 11:17, 8 March 2008 (UTC)
- Furthermore, I think that, at this time, any serious concerns about the bot's username would probably be best discussed in a username RFC, as the bot is, at present, an existing user and, so far as I can tell, not a blatant violation of the username policy. SQLQuery me! 11:19, 8 March 2008 (UTC)
- This request should NOT be reopened because it will simply become another flame war which none of the bot operators are interested in dealing with. This bot is the same as BCBot. It is a clone. This is why it was speedy approved. Certain members of the community have proven that they are not capable of discussing BCBot in a reasonable way or allowing it to be discussed reasonably by others. --uǝʌǝsʎʇɹnoɟʇs(st47) 12:05, 8 March 2008 (UTC)
- The request definitely needs to be reopened with a minimum amount of time for community discussion. The "approval" for this bot was simply ridiculous, the account was given bot status almost immediately and despite ongoing discussion. When someone attempted to add more discussion (something that should never be discouraged on Wikipedia), they were reverted and the page was protected. This just screams to be done again. —Locke Cole • t • c 11:08, 9 March 2008 (UTC)
- Re-opening it would be a minimum. Regarding the current WP:RfAr#BetacommandBot, one of the arbitrators' comments involves there being "too many issues" to make a manageable RfAr case out of it. Re-opening the BRFA would at least split off one of these issues, via normal community process. And please, BAG members, take some time to give those raising concerns the impression that their concerns are heard. I'm not saying that that didn't happen, but maybe there was a perception problem. Concerns were heard at WP:ANI, but looking solely at Wikipedia:Bots/Requests for approval/Non-Free Content Compliance Bot gives the impression that, for a hot issue, those making decisions preferred not to give too much attention to objections and went straight for the "speedy" decision. --Francis Schonken (talk) 14:09, 9 March 2008 (UTC)
- This bot was approved 10 months ago. The NFCC bot is just a clone of that. There is no reason to re-open it. βcommand 15:21, 9 March 2008 (UTC)
- Re. "there is no reason to re-open it", there is: that reason is WP:RfAr#BetacommandBot, and the suggestions given there by arbitrators. --Francis Schonken (talk) 15:54, 9 March 2008 (UTC)
- So you're saying FT2's comments about supporting the actions of BAG are a reason to re-open? This is just a clone of an existing bot, and BAG's actions were taken to avoid a flame war. βcommand 16:24, 9 March 2008 (UTC)
- Bypassing community discussion is unacceptable. That alone is reason to re-open it and allow for a reasonable amount of discussion. —Locke Cole • t • c 21:27, 9 March 2008 (UTC)
- If BAG will not permit re-opening the discussion, there is a simple solution: create a separate place for such discussions. --BrownHairedGirl (talk) • (contribs) 02:32, 14 March 2008 (UTC)
- Re. "there is no reason to re-open it", there is: that reason is WP:RfAr#BetacommandBot, and the suggestions given there by arbitrators. --Francis Schonken (talk) 15:54, 9 March 2008 (UTC)
- this bot was approved 10 months ago. the NFCC bot is just a clone of that. there is no reason to re-open it. βcommand 15:21, 9 March 2008 (UTC)
KevinBot
KevinBot is no longer active and can be de-flagged. I may reactivate it sometime in the future but I'm just too busy these days. Kevin Rector (talk) 01:18, 14 March 2008 (UTC)