User Details
- User Since
- Feb 11 2015, 5:47 AM
- Availability
- Available
- LDAP User
- Bugreporter
- MediaWiki User
- GZWDer
Yesterday
By the way, we should also purge all notifications (probably including ones already read) once the temporary account expires.
Currently all Wikidata properties are usable in Commons Metadata (though some are missing UI support). In a (long-term) future "fediverse" of Wikibase instances, any Wikibase instance should be able to use any properties/items from a predefined set of other Wikibase instances, e.g. Wikibase A can use properties from Wikibase B, C, and itself. So I don't think the number of federated properties is a problem.
Wed, Sep 25
Add a column to a CentralAuth global table and use that to denote the expiry of the temp
(1) For wikis not using CentralAuth, we still need an alternative. So we may at some point need to add a column to both the local and CentralAuth user tables (for consistency on non-CentralAuth wikis), where on SUL wikis the local column is never used.
(2) Alternatively (as an Option 5), we can create a new dedicated table for the expiry date and status of temporary accounts. A schema change is still needed, but it is easier and the table may be a bit smaller.
(3) If we want to look up the expiry date and status of a temporary account from any wiki, in Option 2 we can just select one wiki to store the expiration information. The more logical choice is loginwiki or metawiki. loginwiki obviously has a smaller logging table, so querying it may be quicker, but I am not sure about the long-term future of loginwiki (it may be completely replaced after T120484: Create password-authentication service for use by CentralAuth). I still prefer Option 2 (in my opinion 2>3,4,5>>>1).
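To make the Option 5 idea concrete, here is a minimal sketch of what a dedicated expiry table and its purge query could look like. All table and column names (temp_account_expiry, tae_*) are hypothetical, illustrated with SQLite rather than the production MariaDB:

```python
import sqlite3

# Hypothetical schema for a dedicated temporary-account expiry table
# (Option 5). Names are illustrative, not an agreed-upon design.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temp_account_expiry (
    tae_user   INTEGER PRIMARY KEY,          -- FK to the (central) user id
    tae_expiry TEXT NOT NULL,                -- MW-style TS_MW timestamp
    tae_status INTEGER NOT NULL DEFAULT 0    -- 0 = active, 1 = expired
);
-- Index so a purge job can scan for accounts past their expiry cheaply.
CREATE INDEX tae_expiry_idx ON temp_account_expiry (tae_status, tae_expiry);
""")

conn.execute("INSERT INTO temp_account_expiry VALUES (101, '20240101000000', 0)")
expired = conn.execute(
    "SELECT tae_user FROM temp_account_expiry "
    "WHERE tae_status = 0 AND tae_expiry < ?",
    ("20250101000000",),
).fetchall()
print(expired)  # [(101,)]
```

Because the table holds only one narrow row per temporary account, it stays small and the purge scan is a pure index range read.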
I have plans to work on improving horizontal scaling of MediaWiki (in general), but that work will start at least two years from now.
Yeah, what to do in the near term (i.e. 2024-2025) is described in T297633#7646661. However, there is another point to consider: s8 handles around 100,000 queries per second, so it is worthwhile to analyze the read pattern of the Wikidata database. Some reads are unnecessary or have alternatives (e.g. the item ID of a local page can be read from the local page_props table instead of from Wikidata's wb_items_per_site table).
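As a sketch of that last point: the client wiki already stores the connected item ID as the wikibase_item page property, so the lookup can stay entirely local instead of hitting s8. The schema below is heavily simplified (SQLite, only the relevant columns), but the query shape matches:

```python
import sqlite3

# Simplified stand-in for a client wiki's page_props table; the real table
# has more columns, but the 'wikibase_item' property is how the connected
# item ID is stored locally.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE page_props (pp_page INTEGER, pp_propname TEXT, pp_value TEXT);
INSERT INTO page_props VALUES (42, 'wikibase_item', 'Q64');
""")

def local_item_id(page_id):
    """Read the connected item ID from the local DB, avoiding a cross-wiki
    query against Wikidata's wb_items_per_site."""
    row = db.execute(
        "SELECT pp_value FROM page_props "
        "WHERE pp_page = ? AND pp_propname = 'wikibase_item'",
        (page_id,),
    ).fetchone()
    return row[0] if row else None

print(local_item_id(42))  # Q64
```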
Tue, Sep 24
One issue is that gb_address in the replicas will no longer have an index, which will significantly degrade Cloud (labs) users' queries filtering on that column. For the related block_target table, we introduced alternative views that have indexes on the IP address column and do not include autoblocks. However, since this was a breaking change, existing tools need to be switched to the new views.
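A rough sketch of the alternative-view approach described above, with simplified placeholder table/column names. On the real replicas the view would be backed by an index on the address column; SQLite cannot index a view, so this only illustrates the autoblock filtering:

```python
import sqlite3

# Illustrative only: a view over a simplified block table that hides
# autoblocks (whose targets are private), so Cloud tools can keep
# filtering by address. Names are placeholders, not the real schema.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE block_target (bt_id INTEGER, bt_address TEXT, bt_auto INTEGER);
INSERT INTO block_target VALUES (1, '192.0.2.1', 0), (2, NULL, 1);

CREATE VIEW block_target_ipindex AS
    SELECT bt_id, bt_address FROM block_target WHERE bt_auto = 0;
""")
rows = db.execute(
    "SELECT bt_id FROM block_target_ipindex WHERE bt_address = '192.0.2.1'"
).fetchall()
print(rows)  # [(1,)]
```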
One point to consider is that user_email_token_expires and user_password_expires may at some point be moved to another table, database, or even cluster: T352823/T183420. So it may become impossible to join them with other tables. Furthermore, the time at which a temporary account expires should be considered public, while these two fields are not.
Mon, Sep 23
Another idea is to introduce some sort of flat (RocksDB-like) secondary item store, so clients accessing Wikidata data can bypass the Wikidata database completely. This does not reduce the size of the Wikidata database, but it will reduce the number and frequency of queries; it does not solve the issue completely, but it may mitigate many of the problems.
Querying wb_items_per_site in the forward direction (item to page) can be replaced with queries to the new secondary storage; accessing the statements of items can also use such storage. Page-to-item queries should use the page_props table instead.
Such secondary storage can naturally be split into multiple shards, since we only need to support key-to-value queries.
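The sharding idea above can be sketched in a few lines: because every access is key-to-value, the shard can be chosen purely by hashing the key, with no cross-shard queries. Everything here is illustrative (in-memory dicts standing in for RocksDB instances; the key format is made up):

```python
import hashlib

# Minimal sketch of a sharded key-value secondary store. In reality each
# shard would be a RocksDB instance (or similar) on a separate host.
N_SHARDS = 4
shards = [{} for _ in range(N_SHARDS)]

def shard_for(key: str) -> dict:
    # Stable hash so every client routes a key to the same shard.
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return shards[h % N_SHARDS]

def put(key: str, value: str) -> None:
    shard_for(key)[key] = value

def get(key: str):
    return shard_for(key).get(key)

# Hypothetical key layout: item-scoped lookups such as sitelinks.
put("Q64:sitelinks:enwiki", "Berlin")
print(get("Q64:sitelinks:enwiki"))  # Berlin
```

Adding capacity is then just adding shards and rehashing (or using consistent hashing), which is exactly the property a pure key-value workload buys us.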
Per T375087#10159309
Having a dedicated revision backend will make several tasks easier, e.g. T189412: Granular protection for Wikidata items and T217324: Have a more fine-grained history for property values on item pages. But there is much more to consider. For example, it would be bad to introduce a mandatory third-party database as a requirement for installing Wikibase.
Sun, Sep 22
Fri, Sep 20
https://meta.wikimedia.org/wiki/Special:CentralAuth?target=IKhitron+IA: This account is an interface admin but not an admin on hewiki.
This is because content models can be changed by admins and, on ruwiki, by engineers (of which you are one), but not by interface admins. This is not a regression.
Thu, Sep 19
Wed, Sep 18
Tue, Sep 17
Alternatively, to support use cases similar to Wikitech account migration, the current migration mechanism can be moved to a new dedicated extension.
Mon, Sep 16
Another issue: since Wikitech will get CentralAuth on October 1, if someone has SUL username XXX and LDAP/Wikitech username YYY, and there is no LDAP/Wikitech user named XXX, an account XXX will be created on Wikitech as soon as the user is globally logged in and views Wikitech. Such users should be renamed to something like XXX~wikitechwiki before YYY can be renamed to XXX and connected to its SUL account.
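The rename ordering above can be sketched as a small planning function. All names and the step strings are hypothetical; this only illustrates that the auto-created local XXX must be moved out of the way first:

```python
# Sketch of the migration ordering described above (names hypothetical):
# before the LDAP account YYY can take over the SUL name XXX, any clashing
# auto-created local account XXX must be renamed aside first.
def plan_migration(sul_name, ldap_name, local_accounts):
    steps = []
    if ldap_name != sul_name:
        if sul_name in local_accounts:
            # The auto-created local account blocks the rename.
            steps.append(f"rename {sul_name} -> {sul_name}~wikitechwiki")
        steps.append(f"rename {ldap_name} -> {sul_name}")
    steps.append(f"attach {sul_name} to SUL")
    return steps

print(plan_migration("XXX", "YYY", {"XXX", "YYY"}))
# ['rename XXX -> XXX~wikitechwiki', 'rename YYY -> XXX', 'attach XXX to SUL']
```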
For translatewiki.org only, it also uses override files like MessagesQqf.php to override the RTL setting (see https://www.mediawiki.org/wiki/Manual:Adding_and_removing_languages#Right_to_left_languages). Of course, the long-term solution is described in the third point of T359761#9810409.
Sun, Sep 15
Sat, Sep 14
Are https://wikitech.wikimedia.org/wiki/Wikitech:Rename_requests and this task really necessary? We already have ways to connect LDAP and SUL accounts with different names (in Bitu).
Another option is that we could simply not track or update private-wiki usages.
If we only track JSON usage centrally and not locally, we should also make sure third-party (non-WMF, standalone) wikis do not get this (unused) globaljsonlinks table, or any other table created after they install JsonConfig with default settings and run update.php.
Such issues will be gone after 24 hours.
Fri, Sep 13
See also T161859#10144077 for four cases where we can safely connect LDAP and SUL accounts; these may be processed automatically.
We should also mention that if you reset your Wikitech password during this transitional period, then for LDAP users with a known SUL connection the password will be temporary and removed on SUL migration. In some cases your Wikitech email will be changed to match SUL as well (T161859#10144077).
Some thoughts:
(1) LDAP-SUL connection
- Many people created LDAP accounts for Toolforge use and have an LDAP-SUL connection via Striker. They may not be aware that Bitu uses a different connection (T371595). So we could import SUL user connections from Striker for users that do not yet have one in Bitu.
- Another potential import is matching against the SUL database. We can safely connect an LDAP account and a SUL account with the same username if they have the same confirmed email address. Also, SUL account A and LDAP account B can be connected if they share a confirmed email address that is not used by any other SUL or LDAP account.
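The two matching rules in the last bullet can be sketched as follows. The data shapes (dicts of username to confirmed email) are hypothetical stand-ins for the real SUL and LDAP directories:

```python
# Sketch of the two safe-matching rules above:
#   1. same username + same confirmed email           -> connect
#   2. confirmed email unique on both sides           -> connect
def safe_connections(sul, ldap):
    """sul/ldap: dicts of username -> confirmed email (or None)."""
    pairs = []
    # Rule 1: same username with the same confirmed email.
    for name, email in sul.items():
        if email and ldap.get(name) == email:
            pairs.append((name, name))
    # Rule 2: an email held by exactly one SUL and exactly one LDAP account.
    def by_email(accounts):
        index = {}
        for name, email in accounts.items():
            if email:
                index.setdefault(email, []).append(name)
        return index
    sul_e, ldap_e = by_email(sul), by_email(ldap)
    for email, sul_names in sul_e.items():
        ldap_names = ldap_e.get(email, [])
        if len(sul_names) == 1 and len(ldap_names) == 1:
            pair = (sul_names[0], ldap_names[0])
            if pair not in pairs:
                pairs.append(pair)
    return pairs

print(safe_connections(
    {"Alice": "a@example.org", "Bob": "b@example.org"},
    {"Alice": "a@example.org", "bobby": "b@example.org"},
))  # [('Alice', 'Alice'), ('Bob', 'bobby')]
```

Note that rule 2 deliberately refuses to match when an email is shared by several accounts on either side, which is what makes the automatic processing safe.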
make it specific to JsonConfig consumption, call it globaljsonlinks_*
Note:
- Chart is not the only .tab consumer. See also T153966: Track Commons Dataset usage across wikis (what links here).
- JsonConfig previously did not need to create any database tables, so this task may bring a schema change to the extension. Since JsonConfig is deployed on every Wikimedia wiki (whereas GlobalUsage is only on Commons), we may want to somehow prevent this table from being created on other wikis. Alternatively, we could create a dedicated extension for global JSON usage tracking, or reuse GlobalUsage (though with different tables) for it.
Note: after moswiki is created, cleanupTitles should be run on wikis that still have MOS: pages in the main namespace (the second list in T363538#10123348) so that such pages can be recovered.
Thu, Sep 12
Which kind of support do you want (label or monolingual text)? Note that if there are users actively building a test wiki, the better way is to finish translating 13% of the MediaWiki core messages on translatewiki, so that the language support is added to MediaWiki core and thus becomes available everywhere on Wikidata.
I don't think betafeatures_user_counts should be considered private since such information is provided in Special:Preferences for every user.
Wed, Sep 11
On Chinese Wikipedia, the ArbCom is purposefully a group without the Access to nonpublic personal data policy requirement, let alone CheckUser or Oversight access (for now its members are elected independently).
Tue, Sep 10
For the archive (and filearchive) tables, please note that the archive table is currently replicated to the Cloud replicas, so the table itself should not be considered private: what is private is the content of deleted pages/revisions, not the existence of such pages/revisions.
Mon, Sep 9
If you mean "don't run tests if they're not current, until a user manually requests it", I worry that that would make Functions even harder to understand for non-technical users.
What I propose: when a user views a function page without a cached result, (1) the page shows the tests that do not yet have a result and provides a button to refresh; (2) a request is sent to run the tests asynchronously. Optionally we could use some push technology (such as polling or WebSockets) so that when a result becomes available it is shown on the function page, but such requests must query results only, not invoke the tests themselves.
The issue is that the API request will time out. E.g. when viewing https://www.wikifunctions.org/wiki/Z10786, there is a synchronous API request to https://www.wikifunctions.org/w/api.php?action=wikilambda_perform_test&format=json&wikilambda_perform_test_zfunction=Z10786&wikilambda_perform_test_zimplementations=Z10789%7CZ10807&wikilambda_perform_test_ztesters=Z10787%7CZ10805%7CZ10808%7CZ10860%7CZ10861%7CZ14225%7CZ14266%7CZ14267%7CZ14268%7CZ14269%7CZ14270%7CZ14271%7CZ14272%7CZ14273%7CZ14274&uselang=en, which cannot be completed within one minute and is thus killed by the server. If the requests were run asynchronously with a persistent cache, at least you could see a result immediately the next time.
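The asynchronous pattern proposed above can be sketched like this: a page view only reads the result cache and enqueues missing runs, while a background worker executes the tests. All names are illustrative; this is not the WikiLambda API, and the worker's "PASS" stands in for a real test run:

```python
import queue
import threading

# result_cache is the persistent cache keyed by (function, impl, tester);
# jobs is the queue the background worker drains.
result_cache = {}
jobs = queue.Queue()

def view_function_page(function, impls, testers):
    """Return cached results and enqueue whatever is missing.
    Never runs a test synchronously, so the page view cannot time out."""
    shown, pending = {}, []
    for key in ((function, i, t) for i in impls for t in testers):
        if key in result_cache:
            shown[key] = result_cache[key]
        else:
            pending.append(key)
            jobs.put(key)
    return shown, pending

def worker():
    while True:
        key = jobs.get()
        result_cache[key] = "PASS"  # stand-in for actually running the test
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

shown, pending = view_function_page("Z10786", ["Z10789"], ["Z10787"])
print(shown, len(pending))  # {} 1  -- first view: nothing cached yet
jobs.join()                 # worker has filled the cache by now
shown, pending = view_function_page("Z10786", ["Z10789"], ["Z10787"])
```

The second view returns the cached result with no pending work, which is exactly the "see a result immediately next time" behaviour; a polling or WebSocket channel would only ever read result_cache, never trigger runs.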
Note that the sunset of Mathoid will mean users on Chrome<63 cannot see math formulas properly, since native MathML is only supported from Chrome 109 onward and Chrome<63 users do not get JavaScript (to run MathJax). However, since we may drop Grade C support for Chrome<70 as part of T367821: Discovery: Deprecation of TLS 1.2, it is not much of a concern.
Sun, Sep 8
Not a MediaWiki issue, since the image was not uploaded via UploadWizard or any known tool, so there was no automatic extraction of EXIF data.
Sat, Sep 7
For clarification, what I mean by "caching" is different from the current cache of evaluation results. Currently, if you view a function page that has not been viewed before, it runs all test combinations synchronously and caches each evaluation result; when viewing the tests again, the cached evaluation results are used. What I prefer instead is a persistent cache of test results.
In my opinion, the current design of wikilambda_perform_test is really, really bad. I have described the issues in T374306: Do not run tests synchronously in wikilambda_perform_test.
Thu, Sep 5
With some confidence I can say that any ticket older than two years is already being filtered out of team workboards (or the workboard is so big that it is effectively ignored), so it is effectively treated exactly the same as a declined ticket.
Teams should always use a dedicated team project to organize their ongoing (current) or short-term (foreseeable-future) work. This is different from component projects, which represent all issues and tasks about a specific piece of software. Any task not in a team project should be considered backlog and may be kept open indefinitely, potentially to be picked up one day.
Wed, Sep 4
Did Special:CreateLocalAccount work?
Tue, Sep 3
The solutions proposed in this task are:
- Deprioritize translation pages in search results
- Redesign search results so that all translations of a single page are "folded" into one item (this requires a redesign of the API and UI, which is a huge breaking change)
Dumps should be disabled until they no longer cause db lag.
Alternatively, we could introduce dedicated app servers and DB replicas for dumps.