Please note actions by [[User:Liuxinyu970226]]
The thing with zh locales is that one would really like to use automation, given that most differences come not from sentence structure but from word choice. Split locales mean split storage, and as long as some newcomer is obsessed with getting the percentage to 100%, we get a whole lot of redundancy.
A relevant system that performs mostly automated conversion across zh variants is MediaWiki's LanguageConverter.
I don't think LanguageConverter should be implemented here, since it would only get trickier and trickier (please believe me, the /zh pages on MediaWiki.org are really bad, as per phab:T106131#1481796).
I should have clarified that I am not talking about the mediawiki.org site but the software component.
Yes, the commonly used form of LC, which operates on nearly-parsed wikitext, is bad for most uses other than HTML documents. Yet this does not rule out the possibility of a more constrained interface[1], like a "dumb" LC (e.g. translate($text, $variant)), with a separate text scanner passing the text that needs conversion to LC.
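A "dumb" LC of this kind could be little more than a table-driven substitution pass, with no knowledge of wikitext or HTML. The following is a minimal sketch of that idea; the tiny variant tables and the function name `translate` are illustrative stand-ins, not MediaWiki's actual conversion tables or API.

```python
# Sketch of a "dumb" LC: pure text-to-text conversion, no markup awareness.
# The tables below are toy examples; a real deployment would load the full
# zh conversion tables that MediaWiki's LanguageConverter maintains.
VARIANT_TABLES = {
    "zh-hans": {"軟件": "软件", "訊息": "讯息"},
    "zh-hant": {"软件": "軟件", "讯息": "訊息"},
}

def translate(text: str, variant: str) -> str:
    """Apply longest-match-first word substitution for the given variant."""
    table = VARIANT_TABLES.get(variant, {})
    # Longer source words first, so compound terms win over their parts.
    for source in sorted(table, key=len, reverse=True):
        text = text.replace(source, table[source])
    return text

print(translate("軟件訊息", "zh-hans"))  # -> 软件讯息
```

The caller (here, the hypothetical text scanner) decides *what* gets converted; the dumb LC only decides *how*, which is what keeps the interface constrained.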
Some extra release-time work will be needed if TWN is used this way with language conversion, as it is unlikely that anyone would want to run some ./do_ui_language_generation.sh on what they have downloaded. Technical difficulties can be overcome as experiments and PoCs appear; it is mainly the consensus that needs to converge.
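The release-time step could amount to expanding a single zh source locale into baked per-variant message files, so downstream users never need to run a generator themselves. This is a sketch under that assumption; `generate_variants` and the identity `convert` hook are made up for illustration, with the real converter plugged in where the lambda sits.

```python
# Hypothetical release-time expansion: one shared zh source -> one message
# dict per variant, ready to be written out as ordinary locale files.
import json

def generate_variants(source_messages: dict, variants, convert) -> dict:
    """Produce a per-variant copy of the messages via the convert hook."""
    return {
        v: {key: convert(text, v) for key, text in source_messages.items()}
        for v in variants
    }

messages = {"save": "保存文件"}
# Identity stand-in; a real build would pass the dumb LC's translate().
out = generate_variants(messages, ["zh-hans", "zh-hant"],
                        convert=lambda text, v: text)
print(json.dumps(out, ensure_ascii=False))
```

Since the expansion happens once at release time, runtime consumers see plain, fully materialized locales and need no conversion machinery at all.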
Notes
1. An interesting way of making LC HTML-dumb is through <code>-{}-blah...</code>.