User Details
- User Since: Oct 25 2014, 1:38 AM
- MediaWiki User: RobinHood70
Mar 14 2024
LOL, no, let's not put entire documents into Lua. That would be...bad. Thanks for the response!
Mar 13 2024
Okay, so using Cargo as an example (and forgive me if there are any syntax issues here, because I only looked at the docs quickly), if you have the following on a page, presumably split up by other text:
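Something roughly like this, where the table and field names are just placeholders for illustration:

```wikitext
{{#cargo_store:_table=Books
|title=Some Book
|author=Some Author
}}

...other text on the page...

{{#cargo_query:tables=Books
|fields=title,author
|where=author="Some Author"
}}
```

The interesting case being, of course, what happens when the query and the store aren't guaranteed to be processed in page order.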
Thanks! Do you happen to know how either of them handle possible out-of-order processing? If not, or if it would take us too far off-topic, no worries, it's just idle curiosity.
I apologize, it wasn't intended as an ultimatum, just a statement of my understanding of how things stand. If, indeed, Variables or other extensions can be made to work in some way that's compatible with parallelism, which I think is what you're suggesting when you say to continue working on the patch, I'd be very interested to hear what you have in mind. If my understanding of the direction the patch was taking is correct, moving the variable storage from Parser to ParserOutput would indeed be a more modern approach, but it still wouldn't solve the parallelism problem. How do you ensure that a variable is set before being read with Parsoid?
My understanding of the problem is that parallelism is the main issue. That's something no patch can address, since it's a fundamental design difference between Parsoid and the legacy parser. Documents/templates are being used as state machines and therefore expect linear parsing. You can't set a variable to X, then read that variable before it's been set. That obviously won't work.
Feb 22 2024
Just to maybe give some ideas, what we do in our homebrew extension is to have all variables local to the frame (template) they're created in, but then we also have a #inherit function to pull in a variable from the parent or higher frame (climbing up until it finds the variable or runs out of frames). This is tremendously useful to act as a pseudo-setting, like setting "showweight" to 1 on the page, then repeatedly calling a template that can optionally show a weight column. It really adds up when there are dozens or even hundreds of calls where you don't have to add |showweight=1 every time. While much less used, we also have a #return function that sets variables on the frame one higher than where the #return was called from, assuming it's not already a top-level frame.
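As a rough sketch of how that looks in practice (function and template names here are simplified stand-ins for our actual syntax, so don't take them literally):

```wikitext
<!-- On the article page: set the variable once, local to this frame -->
{{#define:showweight|1}}
{{Item Row|name=Sword}}
{{Item Row|name=Shield}}

<!-- In Template:Item Row: climb the parent frames until showweight is found -->
{{#inherit:showweight}}
{{#if:{{{showweight|}}}|...weight column markup...}}
```

Every call to the template picks up the page-level setting without the caller having to pass |showweight=1 each time.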
Dec 5 2023
We also have multiple wikis with similar problems, but using a custom extension similar to Variables. Ours modifies the PPTemplateFrame variables directly (allowing basic scoping), and other functions store information in ParserOutput as parser functions are parsed.
Mar 18 2023
That would be ideal, and I can see at least one potential side-benefit to that approach, although it would depend on exactly how Parsoid works. I'm thinking specifically of dynamic functions. With an approach like you're outlining, the document would inherently be aware of whether parts of it change based on other parts, so if you had a dynamic function that had no effect on any other part of the document, that bit could remain uncached and updated at every refresh, while the remainder of the document could still be drawn from the parser cache.
Oct 27 2022
Thanks for the update, ssastry! Just for clarification, when I talk about maintaining state, I mean that if, in a template or even just a regular document, you can define a variable in some fashion (as in Variables, Loops, etc.) and it will take effect in the document only from that point forward. Then, if changed again later in the document, the change will only take effect after that. In concept, it turns wikitext itself into a primitive programming language, which is what Variables, Arrays, and Loops are essentially doing. Obviously, this will break in a non-linear environment, but if a linear option is maintained, I think that would likely allow those extensions to continue to work as they are, at least for the time being.
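To make that concrete using the Variables extension's own #vardefine/#var functions, a page like this depends entirely on top-to-bottom evaluation:

```wikitext
{{#vardefine:x|first}}
{{#var:x}}   <!-- renders "first" -->
{{#vardefine:x|second}}
{{#var:x}}   <!-- now renders "second" -->
```

If the two halves of the page were parsed in parallel or out of order, there would be no guarantee that either #var sees the value defined above it, which is exactly the breakage I mean.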
Oct 18 2022
Sep 28 2022
Sorry for the necropost, but since this is something people are likely to land on when supporting older MW versions, I thought I should mention the solution that worked for me, which I found in the wild on the internet.
Aug 29 2022
Yeah, it almost certainly is. About the only benefit to this one is that I link to existing examples. I'm not all that familiar with Phabricator, so feel free to merge this with or mark it as a duplicate, whatever's normally done.
Aug 28 2022
Aug 24 2022
Jun 21 2021
Jun 20 2021
Also, I'm not seeing anything that looks like a vote, but I'm not sure if I've ever actually voted in Phabricator before, so I might just be missing it. Can someone grant me permissions, just to be sure? Thanks!
While I personally prefer the spaced-out version, I'm also a big fan of following specifications unless they're significantly hampering the project. So, if PSR-12 says no space, then I'd go with that.
Jun 18 2021
I was chatting about this in the MediaWiki Discord server with another user by the name of "skizzerz", and they had an interesting idea: what about implementing the legacy parser as a content model? That would allow wikis to use one or the other on a per-namespace and even per-page basis, as needed, potentially even mixing and matching if transclusions can be handled correctly between the two. The legacy parser could be defined as always being linear, while Parsoid would not. That doesn't solve the underlying issue of allowing Variables (et al) to work with Parsoid, but it does, at least, provide a method for wikis to move forward with either or both of the legacy parser and Parsoid, potentially moving over gradually as time and resources permit. I've never looked at content handlers at anything but a surface level myself, so I don't know if this is actually a viable solution, but it seems like it might be a good way forward if it can be made to work without too much effort.
May 23 2021
I thought it might be helpful to the discussion to provide a link to the template documentation: https://en.uesp.net/wiki/UESPWiki:MetaTemplate
May 22 2021
Sorry, I had my extensions mixed up. I've been meaning "Semantic" whenever I said "Scribunto", though both are something we may end up using at some point. I corrected it in my latest message, but wasn't going to go back and edit every message I might've said it in.
The current version of the extension was written by a hobby programmer (albeit one who had remarkable insight into how the MW parser works) somewhere between MW 1.10 and 1.15, and has been hacked only enough to keep it working as the years have gone by. One of the main modules is actually written entirely in global space! So, probably not something we'd want to release as is, even if we could. More importantly, however, it's her code, so it's not really ours to decide what to do with. We do have the code publicly available if you want to have a look at it, but not "public" public, as in intended for others to use. https://github.com/uesp/uesp-wikimetatemplate
TemplateSandbox, as I understand it, still requires that the sandboxed templates be saved each time you make a change. What we're doing is injecting values during Show Preview as the template code gets processed, so you can literally just preview/fix/preview/fix until you're sure the template is working correctly. Once it is, you can save it without having to copy anything anywhere, because you're working on the normal template page itself. This is much faster and more convenient than any kind of sandboxed process I've used on other wikis.
May 21 2021
May 10 2021
Re-reading, I realize now that my initial comment came out much more acerbic than I intended, and undoubtedly set the wrong tone for my later comments, so I apologize for that. I also picked up this comment, which I'd somehow missed the first time around.
Yup, here it is: https://github.com/uesp/uesp-wikimetatemplate
I'm not trying to make "charged statements", and I apologize if I'm coming across that way. I'm just presenting my personal view and the views from a few other wikis that I've dealt with. I'm not trying to say it's representative of anyone else, and I'm not trying to be contentious. I'm just saying that not everybody's at the same place you are.
I wasn't trying to be difficult or insulting; I was just presenting the reality that I, personally, have seen. As a user of mostly small- to medium-sized wikis, most of which lag a fair bit behind the current version, I think my view of things is very different from yours where, as you say, you're supporting primarily large wikis with very different needs. All I was trying to say, really, was that while we may not have the page count/page views of the larger wikis, there is nevertheless a set of wikis that will prefer the legacy parser for one reason or the other. It may not even be for technical reasons such as ours, it might be simply a matter of processing power, preference, or whatever else. As I said in the beginning, and as has been on display throughout this thread, the attitude here is clearly "this is the direction we're going, get on board", and from the perspective of these smaller wikis that are behind by several versions, that's practically an overnight shift and it comes as a slap in the face.
Thank you for the explanation and the offer to engage with our requirements. Perhaps as the migration to Parsoid continues, better solutions will present themselves, but for us, right now the reality is that we've got 1700+ templates affecting over 75k content pages (not to mention those that affect non-content pages, like talk and redirect pages, bringing us to 300k pages overall). Having had our custom extension in place for 12 years now, most of our templates rely on it at this point, and we have only a handful of template coders to maintain them. So, hopefully you can understand that migrating to something that will essentially break all of that isn't just a pain point for us, it's simply not a viable option. While I would hope that this isn't the case, the reality may well have to be that we stop upgrading at whatever the latest version is that will support our needs.
We're not interested in migrating to Parsoid, nor were we aware that this was to become integrated rather than simply yet another extension, so wikis like ours likely ignored any calls for feedback, if they were aware of them at all. I certainly don't recall seeing anything about it in the few things I pay attention to/mailing lists I'm subscribed to, but that could well be what I just mentioned...the assumption that this was an optional component.
I'm not angry, really, so much as I see this sort of WMF-centric thinking from the developers often, and I think there needs to be some better feedback mechanism than simply trusting wfDeprecated() and the like to tell the developers what's in use and what's not. The reality outside of WMF wikis is that most lag several versions behind the current. Just browsing around, I easily found wikis between 1.25 and 1.33; I found none at 1.34 or above. So, deprecating something in 1.34 and then removing it in 1.35 or 1.36 because nobody complained or was logged as using the feature is not really a good plan for wikis like these. I think, if nothing else, there needs to be some kind of communication of planned deprecations/removals that allows extension developers who may not be at the current version to be made aware of breaking changes in advance and be able to say "Hey, we're still using this. We need a path forward."
May 9 2021
The attitude I'm seeing here from WMF is rather concerning. Essentially, it's "it's been decided". Well, that's lovely for WMF, but what about the rest of the wikis out there who maybe don't use (or perhaps even want/need) Parsoid, who don't use the Visual Editor, who don't use Flow, etc.? What do we do?
May 2 2021
Nov 1 2019
Oct 23 2019
Good start, but there's a standard function to add title and namespace (the main concern being that 'ns' is the standard key for namespace). I'm not really a PHP programmer, but I believe the first couple of lines should look like this:
$res = [];
ApiQueryBase::addTitleInfo( $res, $title );
Oct 15 2019
Okay, fair enough on both counts. After reporting this, I found a bunch more modules that also allow empty prop values and produce no meaningful output as a result, so if anything's done about it at all, it should probably be a larger project. For the count option, you're right, that would have to be a new option of some kind if it wasn't going to be a breaking change. I believe most database engines can count grouped records well enough as long as the relevant fields are indexed, but MW supports such a wide array of database engines that it would need testing on the whole lot, so that's probably a bigger change than I was thinking.
Oct 13 2019
Just to add a bit more info, it occurred to me to try the equivalent query in the API, and it works fine there, producing the expected:
<root><tplarg><title>1</title><part><name index="1"/><value/></part></tplarg></root>
Oct 11 2019
I just checked, and I'm thinking of the old manually documented examples (Template:ApiEx). They all pointed at enwiki. I finished the API portion of my project quite some time ago, so I haven't had much call to look at the live docs since then.
Oct 10 2019
I hadn't actually thought to check that, Reedy. Oops! That's odd, though. I could swear most of the examples used to work. Did they maybe point to en-wiki or something? Or am I just remembering the old manual documentation? <shrug>
Sep 27 2019
Mar 5 2019
Feb 25 2019
That was fast! It looks like there's another report here that just came in recently. I hadn't noticed it before I posted.
Feb 24 2019
Jun 19 2018
Mar 15 2017
Feb 2 2017
Jan 4 2017
Good to know. Thanks for putting me onto that; at some point in the future, I'll likely strip out all the coding for older versions and add features like assertuser in their place. At the moment, it does me no good, though, since most of the sites I'm targeting are using anywhere between 1.19 and 1.26. Outside MediaWiki sites themselves, I rarely come across sites that are on 1.28+.
Dec 30 2016
Now that I've understood what's going on, and adjusted my bot to compensate, no, but it was an unexpected point of failure, and the error message was uniquely unhelpful in figuring out the real problem.
Dec 20 2016
I should add that the API layer is nominally complete now, give or take a couple of new modules like clientlogin that I'll tackle later on...so I'm probably finished with the ream of bug reports now. :)
Unfortunately, it's nothing that would be useful to you for your testing procedures. I've been developing a C# bot framework that implements about 98% of the API. So, as I've been going through each module to determine what the inputs and results are for all of them, I've been noting the discrepancies.
Dec 19 2016
Anomie: That fix is filed under the wrong task.
Dec 17 2016
Dec 16 2016
Yeah, I'd thought of that. It's a kludge, but you could potentially just add the type and leave the existing information alone. A client could then read the type, and from that, they'd have the key to read in the value.
Aug 7 2016
Jul 28 2016
Just a note on this: based on the tests in PreprocessorTest.php, it seems MW breaks convention and deliberately allows tags with spaces.
Jul 26 2016
Yes. I realize it's not going to be a priority, but if the code is going to check for stupidly named hooks at all, which it already does, I think it should at least cover off basic correct syntax by excluding spaces, single-quotes, double-quotes, equals signs, and slashes. That would be the easy change, since it's just a matter of typing the extra characters into the Regex (and maybe making that a static Regex or whatever PHP supports, rather than repeating the same one in every function). Using the XML spec would probably be even better from a purely technical standpoint, but is probably overkill in this context.
Jul 25 2016
Jul 13 2016
I agree with Nicolas_Raoul here. There's nothing wrong with the app; the server should not be storing useless, deleted categories that have neither pages on the wiki nor any category tags associated with them. As I said almost two years ago, this is a resource leak, and a malicious user could theoretically add to the table indefinitely, not that that's likely.
Nov 15 2015
Nov 12 2015
Oct 29 2015
Oct 27 2015
Sorry, I only clued in to how old this task was, and that there was a newer one, shortly after commenting. That's certainly a novel idea.
Oct 26 2015
Actually, a List module would probably make even more sense. (D'oh!)
Behaviourally, this would make more sense as a Meta module. That still leaves it as a query module, for whatever internal reasons there are for that, but gets it out of the prop space where it really doesn't belong at all. I'm not 100% sure of this, but it might even still be able to inherit from ApiQueryImageInfo, with little or no change.
Oct 15 2015
Sep 11 2015
Looking around some more, it seems this isn't a MediaWiki-specific issue at all. This Reddit thread gives some insight: https://www.reddit.com/r/chrome/comments/3j1oqk/beta_or_canary_users_have_versions_later_than_44/
I'm getting this error repeatedly on mediawiki.org. Nothing loads at all on most attempts; I just get the above error. Sometimes, I'll get the requested page, but with no CSS at all. So far, I'm not getting the error on any other site, including the original Commons link. Exiting and restarting the browser *might* help, but that might just be a fluke.
Jul 13 2015
I'm not sure if normalizing on its own is sufficient. There might need to be a "normalized" block as well, like there is with a page query, so you can map the input value to the output value.
Jun 18 2015
Jun 17 2015
@TTO: Yes, I can see where you're coming from. One of the pitfalls of working in a different language is that I see the JSON as data to be parsed, not a fully realized data object, as it obviously would be when working in JavaScript. As Anomie says, though, even in JS, it becomes usage-dependent as to which way is easiest to deal with. In the end, at least for me, the difference is minor, so if the decision is to go back to an object, and possibly remove the id field as redundant, I can deal with that.
@TTO: Not at all. I was assuming that people would map the JSON to a more useful collection. It never occurred to me that people would actually work with the JSON directly without parsing it. In the context of parsing it, in most languages, parsing a key-value pair is more work (albeit only slightly) than having all the data in the same place (i.e., one element of the array), so it seemed to me that the logical way to go was to convert it to an array.
Jun 16 2015
Jun 14 2015
I'm not convinced that this is fixed from what I see in JobQueueDB.php in the latest nightly. I won't claim to understand all the code, but I think jobs that have been attempted, but failed, are still never being recycled. I can confirm this behaviour as of 1.22, but don't currently have a wiki higher than that where I've been able to confirm it. It seems to be related to either the claimTTL setting (as I originally speculated) or the fact that the job_token/job_token_timestamp fields are still populated in a failed job.
Jun 1 2015
May 27 2015
Actually, I just figured it out. It's the Recent Changes setting that controls it. In earlier versions (confirmed up to 1.22), the preferences menu text makes it appear that it only controls Recent Changes when, in fact, it appears to control several things, including logs, as seen in the more modern preferences menu.