TheWikipediaLibrary - json-based i18n files in addition to existing gettext
We're working to address a longstanding issue that prevented some of our content from being translatable via translatewiki. We're moving messages from the database into JSON files over the course of a few PRs, with the idea that these would be picked up by translatewiki. Our first take on this is one JSON file per message type and language, and we've started with the descriptions of the partners who provide resources for the library. You can see there that we've got the descriptions for all partners together in a file for each language. If we move forward with this, we'll add new files for other fields as we take that content out of the database. Before we proceed any further, I'd like to check whether this approach is workable on your end, or whether we need to organize the data differently. Here's our first PR to make this move, so you can see what it looks like currently: https://github.com/WikipediaLibrary/TWLight/pull/654
Feel free to ignore the python, which does the job of extracting the info and creating those initial json files, and just focus on the json.
Feedback is welcome!
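For concreteness, here is a minimal sketch of the per-language layout described above, written in Python like the extraction script. The partner IDs, field names, and file path are made up for illustration, not taken from the PR:

```python
import json

# Hypothetical contents of one per-language file, e.g. a partner-descriptions
# file for English: every partner's field in a single file, keyed by
# "<partner_id>_<field_name>".
descriptions = {
    "101_short_description": "<p>Example short description for partner 101</p>",
    "102_short_description": "<p>Example short description for partner 102</p>",
}

# ensure_ascii=False keeps non-Latin text readable in the output file.
serialized = json.dumps(descriptions, ensure_ascii=False, indent=4, sort_keys=True)
print(serialized)
```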
Thanks a lot! JSON is generally easier to manage than PO.
One issue I immediately see is that you output non-ASCII characters as escapes, such as
"101_short_description": "<p>\u0627\u0644\u0645\u0646\u0647\u0644", etc. According to the convention documented at mw:Localisation file format, it's preferable to use the actual letters and not escapes. Will it be possible to do it? Since humans won't have to deal with these files very often, it's probably not a blocker, although User:Abijeet Patro and User:Nike should correct me if I'm wrong. That said, humans may have to deal with it; for example, sometimes it's too difficult to find things in translatewiki itself and it's more convenient to grep through the source.
Moreover, perhaps you don't need to convert the translated files at all: maybe it will be enough to add en.json and qqq.json and let the translatewiki import/export scripts generate the JSON files with the translations.
I agree that using plain UTF-8 would be better. The "\uNNNN" escape can cause problems: every character from the supplementary planes has to be encoded as a surrogate pair, and the convention is not universal, since some tools expect the long form "\U00NNNNNN" instead. Escapes are also overlong, and some tools may mishandle leading zeroes and generate ambiguities unless they use a strictly conforming JSON parser. No such problems arise with UTF-8: the only escapes needed are for quotation marks and backslashes inside the actual text, plus a few ASCII controls like TAB and NEWLINE, which is much simpler to parse anyway.
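In Python, which the TWLight export script uses, the difference comes down to `json.dumps`'s `ensure_ascii` flag. A small illustration, using the Arabic example quoted above and a supplementary-plane character:

```python
import json

text = "المنهل"  # the Arabic string quoted earlier in the thread

# Default behaviour: every non-ASCII character becomes a \uNNNN escape,
# and supplementary-plane characters require surrogate pairs.
print(json.dumps(text))        # "\u0627\u0644\u0645\u0646\u0647\u0644"
print(json.dumps("𐍈"))         # "\ud800\udf48"  (U+10348 as a surrogate pair)

# ensure_ascii=False emits the actual letters instead.
print(json.dumps(text, ensure_ascii=False))   # "المنهل"
print(json.dumps("𐍈", ensure_ascii=False))    # "𐍈"
```

Either form round-trips through a conforming parser; the difference is only in how readable and greppable the files are.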
This is all extremely useful. We should be able to emit plain UTF-8 in our initial output and add a JSON linter to our CI/CD process to make sure the files remain valid.
On the creation of files: it's actually easiest for us to create them all regardless, but we can update the script to cull the empty ones. We already have a couple of existing translations, so it sounds like the ideal process might be to create qqq, en, and then any languages for which we do have translations?
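A sketch of that culling step, assuming the translations live in a dict of per-language message dicts. The function name and data are hypothetical, not TWLight's actual code:

```python
import json
from pathlib import Path

def write_language_files(translations, out_dir):
    """Write one JSON file per language, skipping languages that have no
    translated messages; 'en' and 'qqq' are always written so translatewiki
    gets the source strings and the message documentation."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for lang, messages in translations.items():
        # Drop keys with empty values, then skip languages left empty.
        messages = {k: v for k, v in messages.items() if v}
        if not messages and lang not in ("en", "qqq"):
            continue
        path = out / f"{lang}.json"
        path.write_text(
            json.dumps(messages, ensure_ascii=False, indent=4, sort_keys=True) + "\n",
            encoding="utf-8",
        )

# Hypothetical data: only en, qqq, and ar files should be written.
write_language_files(
    {
        "en": {"101_short_description": "<p>Example</p>"},
        "qqq": {"101_short_description": "Short description of partner 101"},
        "ar": {"101_short_description": "<p>المنهل</p>"},
        "fr": {"101_short_description": ""},
    },
    "i18n",
)
```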
The JSON structure appears to be good and we should be able to process it on translatewiki.net. It will require a small change on our end to read the new JSON file. This change will also be needed if more JSON files are added in the future. You can also use the banana-i18n format for strings in the JSON.
Could we skip the keys that don't have any definition, such as "24_description" and "78_description"? These keys appear to be machine-generated; can we assume that they will not change?
I see that (almost) all the strings in the English file end with "\n". Can this be added programmatically? It may be difficult for translators to recognize and add it at the end of every translation they make.
Regarding creation of the language / translation files: only create the language files for which you have translations. When importing the messages, translatewiki.net will take care of reading them.
One more thing: translatewiki.net will add a @metadata key to each JSON file. Example: https://github.com/wikimedia/mediawiki-extensions-TwoColConflict/blob/master/i18n/af.json#L2. I hope that this is not an issue.
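For illustration, an exported file would then begin with that bookkeeping key, which any consumer can filter out when loading messages. The author name and message value below are made up:

```python
import json

# Shape of a translatewiki-exported file, following the TwoColConflict example
# linked above; the authors list is illustrative.
exported = {
    "@metadata": {"authors": ["ExampleTranslator"]},
    "101_short_description": "<p>Example</p>",
}

# Consumers should skip the @metadata bookkeeping key when loading messages.
messages = {k: v for k, v in exported.items() if not k.startswith("@")}
print(sorted(messages))  # ['101_short_description']
```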
Yeah, I can see that we need to normalize those trailing newlines. Would stripping them out everywhere (instead of adding them) be a good solution?
We can add that metadata key to start with.
And yes, those keys will not change. We may add new ones over time, but won't change what's there.
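A sketch of that stripping approach, assuming any newline the templates actually need gets re-added at render time rather than stored in the message. The function name is ours, not TWLight's:

```python
def normalize_messages(messages):
    """Strip trailing whitespace/newlines from every message, so that
    translators never have to reproduce an invisible trailing '\\n' by hand."""
    return {key: value.rstrip() for key, value in messages.items()}

# Hypothetical example: both values end up without a trailing newline.
cleaned = normalize_messages({
    "101_short_description": "<p>Example one</p>\n",
    "102_short_description": "<p>Example two</p>",
})
print(cleaned["101_short_description"])  # <p>Example one</p>
```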
Abijeet Patro (talk)
Thanks for the feedback! We have updated our GitHub PR with all the suggestions you made: https://github.com/WikipediaLibrary/TWLight/pull/654. Let us know if you have any additional feedback.
Opened: https://phabricator.wikimedia.org/T279530 to track this
Our team was just discussing this some more today and we realized that we may be able to accommodate the content covered by this change with our existing ugettext workflow. Feel free to de-prioritize this task while we verify. We should know in a week or so. I cross-posted this to the phab task as well.