Enable [[Project:Terminology gadget|Terminology gadget]] by default?

Screenshot from the gadget

Hello all!

About a year ago, I started developing the Terminology gadget, which is a tool to help translators keep terminology consistent for their language.

Now I would like to request that it is enabled as a default gadget, i.e. that it is turned on for everyone automatically. Since it is a gadget, anyone who doesn't like it can of course disable it via their preferences.

Any comments/objections?

Jon Harald Søby (talk)12:58, 23 September 2022

It's quite good for me and looks to me like a component of the Translate extension.

Cigaryno (talk)16:32, 23 September 2022

Yes, as long as it can be disabled (at least as long as we don't refresh the page, or in preferences for experienced translators that don't need it all the time but can feed the terminology separately using another window or tab).

  • It may cause input trouble in some cases, or could provide unrelated hints when there are lexical ambiguities, without the possibility of making an appropriate choice among the suggestion(s) given; or the terminology data itself may be defective, needing additions or fixes.
  • The terminology (even if it is correct) may also contain extra terms to contextually remove (to avoid repetitions in the translated text).
  • In some cases, a repeated term may also be contextually replaced by a locally-unambiguous pronoun, or rewritten in another form (active vs. passive, noun vs. verb or participle) that is easier to read, making also long sentences easier to understand while preserving the intended meaning.
  • Finally, the terminology list will always be defective for grammatical derivations of terms (plural/gender/case forms, conjugated verbs) and required contextual transforms (such as mutations/agglutination/elisions, as well as capitalisation): it should just offer hints, not requirements.

Also, it does not seem that the data provided in the terminology lists is used to feed the normal search engine: I've never seen search results returning terms from these lists, only those from normal wiki pages and translation units. Likewise, the terminology gadget still does not propose results from the local search engine (when showing hints, or when adding terms to terminology lists).

The terminology gadget also does not allow separating the proposed terms with some free text disambiguating their context of use: it tries term-for-term lexical correspondences, not lemma-for-lemma.

It could also start looking up lexical entries or labels of elements from Wikidata (possibly correlating them with sitelinks to Wiktionary or Wikipedia for the relevant languages). But this could also be integrated into the local search engine, if the gadget can properly use the already-configured local search engine to make these extra searches.

So in the end, the gadget remains quite simplistic. More effort should be put into the search engine, showing more contextual hints in the right pane, below the message documentation. This right pane is also the place where, later, Abstract Wikipedia (also working with Wikifunctions and Wikidata, and possibly other open data sources) could provide its own hints (in addition to existing translation tools).

In fact I have not found that gadget very useful and productive so far. I have, however, massively used the local search engine (to perform coherence checks when there were multiple choices, or to find possibly related discussions), and the search engines of Wikimedia wikis and of development project sites (outside MediaWiki itself).

It should also be noted that projects not directly related to MediaWiki use their own terminologies with distinctive terms. The gadget currently makes no difference depending on the project we are translating for within the Translate UI. Once again the problem is that it shows no context to allow proper choices: translating something without taking into account the context of use is full of traps! That's why the right pane, showing the documentation for each translation unit and other relevant info, is still the best place.

So for me this gadget is just a transitional tool, here to fill a gap for a "fast startup": for languages that are already very advanced, it is not needed at all. And in my opinion it is more a pollution than a real help when working on translation reviews. It may help early translators, but not reviewers (we need more reviewers, but to attract them we need more efficient search tools to help them: so please continue working on the right pane of the Translate UI, which will help not just Translatewiki.net itself, but also all other translation tools for wikis in Wikimedia or elsewhere, using the Translate UI or extensions based on it!).

Verdy p (talk)17:29, 23 September 2022
 

I forgot I had to enable this and thought it was already a default feature. I support making it default; it is useful.

Bgo eiu (talk)23:19, 23 September 2022
 

I would add though, related to the concerns above - it would be useful if it were possible to maintain a list or table for a given term, rather than a single string, to account for complexities in how the word is supposed to inflect, or differences such as the inclusion of diacritics where in the context of a sentence it could be ambiguous. For example, in English there is "page" and "pages," in Punjabi this is صفحہ and صفحے in the direct case, but in the oblique case (followed by a postposition), we have to use صفحے and صفحیاں for singular and plural respectively.
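The "list or table for a given term" idea could be sketched as a small data structure; this is a hypothetical shape for illustration (the field names, the `pnb` language code and the `formFor` helper are my own invention, not the gadget's actual format):

```javascript
// Hypothetical richer glossary entry: one named form per grammatical
// context, instead of a single translation string per term.
const pageEntry = {
  source: 'page',
  language: 'pnb',
  forms: {
    'direct.singular':  'صفحہ',
    'direct.plural':    'صفحے',
    'oblique.singular': 'صفحے',   // used before a postposition
    'oblique.plural':   'صفحیاں'  // used before a postposition
  },
  note: 'Oblique forms are required when a postposition follows.'
};

// A lookup helper a gadget could use to show the contextually right hint:
function formFor(entry, grammaticalCase, number) {
  return entry.forms[grammaticalCase + '.' + number];
}

console.log(formFor(pageEntry, 'oblique', 'plural')); // صفحیاں
```

Storing forms under explicit keys (rather than a free-text list) would also let the gadget filter hints by the grammatical context it can detect.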

Bgo eiu (talk)18:33, 2 October 2022

That's in fact more complicated than that in various languages, which also require special agglutination (of prefixes, suffixes, infixes), mutations or reductions (e.g. transforming "de le" into "du" in French, though those terms would not be in the glossary), contextual elision (frequent in French or Italian for pronouns, articles, and particles), or deagglutination (think of German verbs whose prefix can be detached and moved to the end of the sentence, as in "(Something) angehen" (infinitive) becoming "Ich gehe (something) an." (indicative)). In French and English we can also have variation over whether some words or prefixes should be glued together, attached by a hyphen, or detached. There are also well-known words with several accepted orthographies (depending on whether an orthographic reform was applied or not).

Now consider the case of conjugating verbs: there are too many forms (and listing them exhaustively requires a lot of data: see for example the conjugation rules in the French Wiktionary, and even those are still not exhaustive, because the feminine or plural variations of participles, which depend on other words, are not shown). Complex rules also exist for Slavic languages. Verbs are usually listed by their infinitive (but this is not the case for Latin, where traditionally they are listed by their first-person present indicative, or by the first two persons and the infinitive; there are also many defective verbs for which only some persons or tense moods exist, so this is not a general rule...).

If we start enumerating all forms, there's no well-defined order. In my opinion we should just list the main lemma entry found in dictionaries, e.g. in the Wiktionary, which could be linked to a lexical Wikidata item that lists all the other forms. (Attaching links to other languages will remain very difficult, as it is hard to bind lemmas across languages; it is simpler to bind specific forms to their main lemma entry within the same language, though there are complex exceptions there too.)

Verdy p (talk)19:39, 2 October 2022

Attaching lexemes is a good idea, at least for now we may even just link these in the notes section, I had not thought of this.

What I had in mind though was that in including a limited selection of forms rather than a full conjugation table, it would be helpful to parse that grammatical information more easily. The nature of translating for software interfaces necessarily limits the contexts in which different words are used, making particular forms more used than others. For example, a typical Punjabi verb has over 100 possible forms. However, for some we may only need imperatives and for others we may only need imperfect participles. There are also some considerations for underlying meaning, for example, for the verb 'to put' I am only using the politest imperative form پایو rather than پاؤ which sounds forceful or پا which is just rude. So I would indicate that this particular form is the pertinent one, along with four perfect participle conjugations and a conjunctive participle form. For other verbs though, like 'to search,' I would make it more polite if the interface is telling the user to search as کھوجیو but normal politeness for the user telling the software to search as کھوجو. (Even though the software is not a person, being rude or informal with the software would not be read well.)

Bgo eiu (talk)19:39, 3 October 2022
 

I would say that the gadget expects users to know how to inflect a given word in their own language by themselves. But in any case, both fields in the gadget can use normal wikitext, including templates, so if you make a template that can show inflections of words there, feel free to go ahead and use it. Modules are not available on Translatewiki at the moment, and even if they were, I don't know if there's any way for them to access Wikidata lexemes the same way modules in Wikimedia projects can, so using lexeme data could be tricky. That would be beyond the purview of the gadget, though.

Jon Harald Søby (talk)13:43, 14 October 2022

I think just linking forms in the notes is sufficiently helpful that not much else is needed now that I've thought about it.

It is not really a matter of knowing how to inflect words but rather a reminder of which inflections make sense given the context. This is not always obvious from the English source strings, and a number of inflections are exclusively colloquial or exclusively written, or specific to a dialect. Potohari/Mirpuri Punjabi dialects for example allow for future tense while "standard" Punjabi does not, and in the Doabi dialect I am familiar with verbs can have negative conjugations which aren't used in other dialects. So there are various reasons why linking additional context might be helpful, especially for "low resource" languages which don't have as much precedent for software translations.

Bgo eiu (talk)15:09, 14 October 2022

Agreed, English can be confusing there, especially since the present tense, infinitive and imperative are almost always identical in English. Those ambiguities should be noted in the message documentation, though, since they vary by message (and not by word form), and knowing which one is intended is useful for many/most languages.

Jon Harald Søby (talk)00:27, 15 October 2022

Hi, The gadget doesn't seem to work in the current version of Seamonkey. It does work in Firefox. Any clue? Thanks a lot!

PiefPafPier (talk)11:29, 15 October 2022

Hi! I downloaded SeaMonkey to see what was wrong, and there was a regex it didn't like. I fixed that, but now there's another regex it doesn't like. Since SeaMonkey has minuscule use and the current version is based on a Firefox version from 2018, I don't think it's worthwhile to spend more time trying to fix it – I'd recommend using a more modern browser instead, or waiting until they update it to a more current Firefox version.

Jon Harald Søby (talk)13:08, 17 October 2022

Wow, then it's lagging behind a lot! Thank you for looking into this.

PiefPafPier (talk)14:05, 17 October 2022

If it were based on Firefox from 2018, it would support ES6 (ECMAScript 6, i.e. ECMA standard 262 for Javascript, published in 2015) and would then have everything necessary for regexps. Most probably SeaMonkey uses a very antique Javascript engine that does not even support ES6, so lots of scripts won't work, because ES6 is the base standard for all modern browsers (even IE11 had support for the same level of regexps later standardized in ES6).

This also means that SeaMonkey is most probably not secure at all, does not implement many sandboxing features needed today on the web (for your security or privacy), and will be extremely slow when using polyfills (for emulating regexps). For these reasons, you should consider replacing that web browser with a more maintained one: today there remain only three major web browser bases: Safari/WebKit (the required base on iOS and macOS), Firefox, and Chromium (for most other browsers, including Chrome, Android, Edge, Vivaldi, Opera...), and three base Javascript engines (today only Safari lags a bit behind, but it is still supported). For mobile environments, only Safari on iOS and Chromium on Android. Firefox is now mostly used on Linux, but various well-known distributions have decided to drop it in favor of a Chromium-based web browser (including for their desktop environments, e.g. with KDE).
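For readers wondering what the ES6 regexp requirement concretely changes, here is a minimal illustration, runnable in any ES2018+ engine (note that property escapes like `\p{L}` actually arrived in ES2018, slightly after the ES6 `u` flag):

```javascript
// Without the "u" flag, a regexp sees UTF-16 code units; the non-BMP
// emoji U+1F600 is two code units, so "." cannot match it as one unit.
console.log(/^.$/.test('😀'));   // false (two code units)
console.log(/^.$/u.test('😀'));  // true  (one code point)

// Unicode property escapes (ES2018) require the "u" flag:
console.log(/\p{L}/u.test('ß')); // true: ß has the Letter property
```

An engine without this support either throws on such patterns or silently matches the wrong spans, which is what breaks Unicode-heavy UIs like Translate.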

Verdy p (talk)16:17, 17 October 2022

I use Linux and FOSS. I like Firefox on Android, but on PCs I think it's too simplistic. The security of SeaMonkey doesn't worry me that much, because they are still backporting fixes. I never use it for tricky stuff. The rendering engine upgrade is long overdue though, and is becoming annoying. But I'm patient, I don't want to lose my proggie yet. :-)

PiefPafPier (talk)19:24, 17 October 2022

Use this browser as you want for other sites. It's just not usable for TranslateWiki.net (or for the Translate UI extension on other MediaWiki sites). The priority for now in MediaWiki was to offer better accessibility with modern and secure browsers on the most common desktop and mobile systems, and it's a fact that today there's a large and accelerating move towards modern browsers that conform at least to ECMAScript 6.

SeaMonkey probably does not have support for Unicode-based regular expressions (standardized in ECMAScript 6 many years ago), which are REQUIRED for correct management of translations. The Translate UI cannot be reliably used with an old browser that only supports 8-bit charsets in its regular expression engine: it would support them if it used, for example, ICU inside its Javascript implementation (almost all other browsers have integrated ICU, even those that still do not fully support the more advanced HTML5 parsers and renderers, or various accessibility and security features in HTML, DOM or Javascript). There's NO easy way to integrate a fast and reliable regexp engine rewritten in Javascript if the engine does not support Unicode regexps based on Unicode code points (rather than isolated bytes of UTF-8-encoded sources), if it still does not support UTF-16 in Javascript strings, if it does not recognize non-BMP characters, or if it makes incorrect string normalizations or case conversions. Some polyfills exist, but they are not reliable or maintained, as almost nobody uses such old non-conforming Javascript engines!

There are SO MANY sites that assume ECMAScript 6 in order to support Unicode, and a very vast majority of the web now uses Unicode everywhere (MediaWiki has required this support for even longer, and will no longer port anything to old browsers, or to old servers or databases that still do not support it after more than 20 years!). SeaMonkey is thus a dinosaur (just like old Internet Explorer with all its tweaks and bugs; but at least the last supported versions of IE had full Unicode support and met the ES6 requirements for regular expressions).

Verdy p (talk)09:42, 18 October 2022

For what it's worth: https://www.lambdatest.com/web-technologies/es6-support-on-firefox-60
Firefox 60.8 being the engine of SeaMonkey, and bugs aside, ES6 is supported.

I don't experience weird things on this site. They do fix the worst bugs for the time being. I've compiled SeaMonkey against my system libraries, not the bundled ones, including ICU.

PiefPafPier (talk)10:58, 18 October 2022

No: the fact that SeaMonkey uses Firefox as the base for its HTML/CSS parser and renderer does not imply that SeaMonkey uses a Javascript engine complying with ES6. The Javascript engine is completely decoupled from the HTML/CSS parser and renderer. Even though there's an interface between both engines (for access to the HTML DOM, to CSS properties from Javascript, or for handling HTML events in Javascript), this does not imply any dependency; they are completely independent. This means that the Firefox rendering engine does not depend at all on ES6 compliance; it does not even use the Javascript engine itself, and may use its own regular expression engine (possibly even a different one for the CSS engine, with specific tuning for performance).

The Firefox Javascript engine has changed a lot (and has in fact been completely rewritten) for performance, so that it can still compete with the performance needed today and matched by other browsers (notably the Javascript engine used by Chrome/Chromium, which uses an internal JIT compiler to native code running in a sandboxed VM). But Firefox has also recently changed its old API for plugins (including Javascript engines) for security reasons, and this probably breaks compatibility with existing extensions made for SeaMonkey, so that it could not easily replace the old Javascript engine with the new one based on the new plugin API (which now also supports most plugins made for Chrome/Chromium, and uses much better handling of events, with asynchronous behavior and completions, multithreading, and scripts running in a separate sandboxed VM, instead of the old system based on synchronous callbacks inside the same thread, which was really slow and extremely unsafe).


The exact statement from SeaMonkey is “Under the hood, SeaMonkey uses much of the same Mozilla Firefox source code.” (this does not mean it uses all its code).

Here is an example of a Javascript polyfill needed by SeaMonkey, even for navigating its own GitHub repository:

There's a (slow) fix there demonstrating that SeaMonkey does NOT respect the ES6 requirements for regexps in its embedded Javascript engine!

You'll see in that polyfill that ES6-compliant regexps using Unicode properties on Unicode code points do not work (such as the line commented out at https://github.com/JustOff/github-wc-polyfill/blob/master/bootstrap.js#L475, which uses UCD properties like "\\p{Emoji}" and numerical code point references like "\\u{1f300}" for characters outside the Unicode BMP: these are required for modern languages too, including Chinese, mathematical symbols, and almost all emojis, as well as for languages supported in MediaWiki whose scripts are written entirely outside the BMP, such as Gothic, or newly-encoded Asian and African scripts, all of them needing characters in "supplementary planes"). These characters may be supported in the HTML/CSS parser and renderer without being really supported in the Javascript engine (historically Javascript used only a flat 16-bit encoding, and had limited character properties defined only in the BMP, treating all non-BMP characters as "opaque" surrogates without any other properties; so nothing was done in its embedded regexp engine, and in fact it did not even have full UCD support for all characters of the BMP, only basic POSIX properties on ASCII, possibly extended to ISO 8859-1, but with bugs when another locale was in use: the Javascript engine was not using UTF-16 exclusively when working with the HTML DOM and CSS; these old APIs were broken!).

This means that the regexp engine used in SeaMonkey DOES NOT conform to ES6, and that SeaMonkey does not really support Unicode: it treats strings only as opaque vectors of 16-bit "characters" without properties, and it also uses some "custom" non-standard flags in regexps. ES6 added the required support of Unicode code points, not just the basic interface with opaque 16-bit encodings and basic character properties on ASCII characters; it also added the required support of Unicode code point ranges (not just 16-bit code unit ranges), character properties from the Unicode UCD (at least for general categories), and Unicode normalizations. Regexp engines complying with these rules are obviously more complex to integrate, but this has been done in the real Javascript engine of real Firefox, as well as in those of Chrome/Chromium, Edge, Safari and most other browsers... (Chromium-based Javascript engines use an optimized version; other browsers generally use the ICU library for these purposes. Writing a *performant* Unicode-compliant regexp engine is very challenging, especially with Unicode extensions; other projects had similar difficulties, notably Perl, which was the first to offer full Unicode compliance in PCRE, even before it was standardized in ES6 for Javascript, though ES6 does not require all PCRE extensions.)

Note that even if you think that non-BMP characters are not needed for your language when navigating the web (e.g. you don't know Chinese, or the Asian and African languages needing them for their scripts), this is no longer true, at least on many social networks and many sites that now use many emojis. On MediaWiki as well, emojis are accepted in page names and user names, and are even used now in some messages (notably for the mobile interface).

There are specific rules that may apply to emojis or other non-BMP characters that must match in Javascript regexps, or when transforming content. MediaWiki has full support for non-BMP characters, except on some legacy wikis installed with old versions of MediaWiki on old database engines that were never converted to full Unicode encodings and remain limited to BMP characters only. These wikis are broken, causing various issues each time a non-BMP character is entered, because the character validates inside MediaWiki but fails to be stored and retrieved correctly to/from the underlying database. This is a documented problem that the administrators of these wikis must handle themselves, because it requires offline database maintenance, and possibly a very long time to reindex all content for the new encoding. This is not the case for Wikimedia wikis, whose database storage was updated many years ago to use UTF-8 or UTF-16 everywhere, rather than the old UCS-2 encoding supporting only BMP characters (or multibyte-encoded characters needing more than 2 bytes in the database encoding). If such wikis are not upgraded, any attempt to retrieve and parse the stored wiki content causes a fatal server-side error as soon as there's a single non-BMP character in it (and MediaWiki itself silently accepts to store these contents, which then make the page uneditable or silently truncated!). In other words, MediaWiki requires full Unicode support for all 16 supplementary planes on the server side, and these characters must not break the Javascript regexp rules used by the UI on the client side.

Verdy p (talk)11:31, 18 October 2022

Well, I do see the character for Japan here, and this page looks scripted to me: https://unicode-table.com/en/search/?q=%F0%9F%97%BE

The implementation of JavaScript in SeaMonkey is called SpiderMonkey, and I have no doubt that it's the same version from Firefox 60, but still being patched. That's not an ideal situation, of course.

PiefPafPier (talk)18:02, 18 October 2022
Edited by author.
Last edit: 06:31, 19 October 2022

Seeing the characters for Japan in the HTML/CSS parser and renderer is not a sufficient condition. We are talking about Javascript support of regular expressions.

Once again, the regular expression engine used in Javascript (with ES6) is absolutely never needed to parse HTML and render text, or to support fonts with supplementary characters outside the BMP. You could disable Javascript entirely in the browser and you would still see the same Japanese text. (However, you would not interact much with web sites using only HTML and CSS, at least not with client-side scripts, but only via HTML form submissions processed on the server, or with active media "objects" like audio and video which embed their own interaction with the user and communication with the remote servers, without actually modifying the HTML and CSS structure of the document you're viewing; you would navigate via HTML hyperlinks or form submission buttons, all data validation would be performed by the server, and navigation would be really slow, forcing whole HTML pages to be reloaded to reflect the new states implied by user interactions.)

This suggests that the Terminology gadget should perform a Javascript regexp capability test to see if the browser conforms to ES6, so that it can disable itself and instruct users to use another (ES6-compliant) browser. Other non-ES6-compliant web browsers still sold and used today are found on "smart TVs" (not the costly models with Android, but most Chinese TV brands, using very limited browsers like Opera Mini), and on some very cheap smartphones recycled and refurbished for poor markets, usable only to make phone calls and not to use web apps or browse the web (though modern cheap smartphones, made in China, are probably sold at better prices than old refurbished smartphones, whose main interest is not web use but extended battery life, when the web is not used, there's no Wi-Fi and no 3G+ mobile data network, and there are no energy-intensive displays).
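A capability test of the kind suggested here could be as small as the sketch below (my own illustration, not the gadget's actual code; building the pattern via the `RegExp` constructor keeps unsupporting engines from choking on the literal at parse time):

```javascript
// Returns true if the Javascript engine supports Unicode-aware regexps:
// the ES6 "u" flag plus ES2018 \p{...} property escapes.
function supportsUnicodeRegExp() {
  try {
    new RegExp('\\p{L}', 'u'); // throws SyntaxError on old engines
    return true;
  } catch (e) {
    return false;
  }
}

if (!supportsUnicodeRegExp()) {
  // A real gadget would disable itself and show a localized warning here.
  console.warn('This gadget requires a browser with modern regexp support.');
}
```

Running the check once at startup, before any pattern literals are evaluated, is what lets the gadget degrade gracefully instead of failing with a syntax error.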

(Old Nokia, Blackberry and Windows Phones have not been sold for years, and are probably now all defunct, having lost all support years ago and not even being able to run any apps due to very insufficient storage space, even just for updating their embedded apps; there's no chance of getting any newer browser on these devices, whose batteries are also probably long dead.)

Verdy p (talk)18:12, 18 October 2022

TL;DR Lost my interest in this thread. Off-topic parts can be removed. thx

PiefPafPier (talk)04:56, 19 October 2022

Note that there's a new error message available since today (to be translated) that warns users when their web browser's Javascript engine is not compatible with ECMAScript 6 (ES6).

It should now be displayed for such gadgets.

Verdy p (talk)06:11, 21 October 2022
Edited by author.
Last edit: 16:21, 21 October 2022

Go ahead and #!$% people %#$ who do NOTHING wrong, dude.

PiefPafPier (talk)06:54, 21 October 2022
Edited by author.
Last edit: 16:09, 21 October 2022

Insults are very unwelcome. I did not insult you or anyone, but your reply was effectively doing SOMETHING wrong here, which is not accepted (read the contributor terms, guidelines, and help pages).

I just tried to give some explanations and fix some incorrect assumptions, so that you could eventually make your choice. We are not supporting any browser development here, so this is not the place to bring such terms. And the message I just showed was not created by me; it shows that I was right and that existing MediaWiki developers have no time to support non-ES6-compliant browsers (unless you want to find solutions and support them to help that project).

Verdy p (talk)14:42, 21 October 2022

Both "piss off" and "rumble" are characterisations of the other person's actions or intent which are unnecessary because they do not convey additional information, and therefore can be perceived as personal attacks. I'm not sure what's going on here, but perhaps we can all tone down?

Nemo (talk)15:10, 21 October 2022

Yes, but "piss off" is really insulting, directed at someone, while "rumbling" describes an attitude. And I gave the same argument as you when I signaled it to Nick before your reply, Nemo. I admit that PiefPafPier is not pleased with this situation, but what I posted just before his reply revealed an effective fact, which may also be related to this extension and recent discussions (and was not directed at all against him or anyone). It also just says that existing developers have no time to invest for now in trying to support all browsers (and for an indefinitely long time), when they already have lots of other priorities for many other people. If that user wants to change that and has the technical capabilities and enough time, he should help another way, by explaining what fixes he found that may work (but probably this is not needed: gadgets are extensions that can be disabled and replaced by user gadgets that they develop, test and fix themselves).

Verdy p (talk)16:18, 21 October 2022

"Pissed off" may also be an accurate description of how someone feels, while "rumbling" is often a subjective judgement of someone else's actions.

Nemo (talk)16:46, 21 October 2022

That's completely the opposite of what I perceive, and any translation would treat the former as a strong slang word, not the latter, which is very weak and frequently used by people about themselves. I've seen that term frequently used by official support staff and moderators in forums, and by developers that receive offensive comments, when those supporters are treated as being "lazy", or as being unable to support a request which is postponed or abandoned later when stalled without possible action for too long (when no one was interested in providing a patch, maintenance or review, but users just complained and rejected other proposed solutions/alternatives). Here he was offered an alternative that is well supported (he has the choice of supported browsers for his platform; we are not forcing them). I understand his disappointment. But how would you describe the reaction he unexpectedly had above? See for example how Quora describes it (and what moderation action they may take if someone acts like this); there are similar rules on moderated and reviewed social networks (like 4chan, Reddit) or support forums (Microsoft, Oracle), but unfortunately not on most others (Twitter and Facebook do nothing unless there's a violation of law, or a violation of their censorship of some topics, even if those are perfectly legal and respectable, including very well-known artworks in museums)...

Verdy p (talk)21:08, 21 October 2022

As a side note: you are free to code in any version of JavaScript you want, but there is a user in this thread who thinks it should be ES6 compliant. Don't listen to him, because that would mean you will have to fix it to support SeaMonkey, as we have to regard the opinion of lambdatest.com and Mozilla more than anonymous user 'Verdy p' of translatewiki.net.

PiefPafPier (talk)04:43, 22 October 2022

I am not anonymous. And I'm one of the oldest contributors of translatewiki.net; I have done a lot of maintenance work and support to help many languages start up or develop themselves, and added much missing documentation. But this site is not the support or developer site for any web browser. Translatewiki.net is an open project, and like all projects (free or not) it has limitations, most of them based on the available human resources and the time they can contribute, plus the technical skills needed and sometimes money for the infrastructure. Most users here (including admins) are not paid for the work they have been doing for you and for all others for many years, as a best effort only, and without any contract with you. You are free to contribute, but if you can't, don't blame others if they can't help you more, when they explain their limits and propose possible alternatives that most users accept without problem and that are available to you. If you don't like these alternatives, then there's nothing more we can do.

It's a fact that SeaMonkey does not support ES6 regexps (no support for non-BMP characters: it only supports "character classes" whose characters are single UTF-16 code units, and treats a non-BMP character as a pair of separate "surrogate" code units, each counting as one "character" in regexp character classes; likewise no support for "\u{}", "\p{}" and "\P{}", or the "u" flag, which are part of ES6): https://gitlab.com/seamonkey-project/seamonkey-2.53-mozilla/-/blob/2_53_14_final/js/src/irregexp/RegExpEngine.cpp

You should ask the SeaMonkey developers/support for such an extension and revision of their regexp engine (note that SeaMonkey already has optional support for ICU, but it is not used by its regexp engine). Without it, transforming regexps is not straightforward: e.g. transforming ES6 regexps like /😀+/ or /[a😀]/ is not trivial; without ES6 support for Unicode code points (not code units), these regexps will match more things than expected. The same affects regexps containing Chinese characters from the Supplementary Ideographic Plane, or any attempt to match words written in Shavian, Deseret, and some modern scripts like Osage, Warang Citi, Adlam, Wancho and Toto, or the existing MediaWiki translations for the Gothic Wikipedia.
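To make the surrogate pitfall concrete, here is how those two example regexps behave in an ES6-compliant engine; the surprising matches happen precisely when the "u" flag is omitted:

```javascript
// 😀 is U+1F600, encoded as the surrogate pair \uD83D\uDE00 in UTF-16.
// With "u", the "+" quantifier applies to the whole code point:
console.log(/^😀+$/u.test('😀😀'));  // true
// Without "u", "+" applies only to the trailing surrogate \uDE00,
// so a second full emoji breaks the match:
console.log(/^😀+$/.test('😀😀'));   // false

// Without "u", the class [a😀] is really [a\uD83D\uDE00] and matches
// a lone surrogate, which is almost never what the author intended:
console.log(/[a😀]/.test('\uD83D'));  // true (surprising)
console.log(/[a😀]/u.test('\uD83D')); // false
```

A code-unit-only engine can only ever give the "without u" behavior, which is why mechanical translation of such patterns for it is unreliable.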

It's up to participants in and supporters of the SeaMonkey project to ask for ES6 compliance and to help there if they can. But if they don't, it is not surprising that this web browser is not even listed on https://caniuse.com/?search=es6 and that it has low usage. (Non-ES6 browsers include IE11 and Opera Mini, both of which also have limitations on this site, and IE11 is even no longer supported; Opera Mini remains only inside some old cheap smart TVs and a few appliances that should also not be used with Translatewiki.net.) We don't really need all ES6 features in MediaWiki, but MediaWiki now depends on more and more components that need them (some support may still be present to emulate what is missing, but it will be either slow or limited, and it is normal for these wikis to inform users with messages about things that do not work, and that will likely not work soon if this requires heavy development and testing, and costly maintenance that current developers have no way to sustain for long when they have many more things to do, fix and maintain elsewhere).

Verdy p (talk)08:53, 22 October 2022
 

Side note 2: Mind the insane edit and re-edit history of this thread.

PiefPafPier (talk)09:19, 22 October 2022

I would support this, as this indeed provides help to maintain consistency across Chinese languages.

Lakejason0 (talk)12:51, 27 October 2022