WordPress ships in dozens of languages because thousands of volunteers translate it string by string.

That model helped WordPress scale for years, but it now leaves too many plugins and themes waiting in review queues that move slowly or do not move at all. Even in the few locales with enough volunteers to move translations quickly, the process still consumes an enormous amount of volunteer time.

The current process is slow and unreliable

The current workflow depends on human approval for every translated string. When reviewer capacity is healthy, projects move along. When it is thin, strings can sit in limbo for months.

If you write a WordPress.org plugin or theme and want it available in other languages, you depend on the Polyglots team (unless you add translation packs manually). Volunteers submit translation suggestions on translate.wordpress.org, and then an editor (PTE, GTE, CLPTE, and so on) reviews and approves them. Nothing goes live until someone with the right role approves it.

PTEs handle a specific project in a specific locale. GTEs have broader access across all projects for a locale. Both roles are volunteer positions, and both become bottlenecks when there are more strings than reviewers. I was a GTE for Brazilian Portuguese for years, so I have seen that backlog up close. And I still see it, since I keep following the local team's work.

Some locales have hundreds of projects waiting for review, and many suggestions need cleanup before approval. That makes the queue slower, discourages contributors, and leaves plugin and theme authors waiting far longer than they should.

AI translations should be the default

WordPress.org can keep human oversight without requiring humans to approve every single string. AI can produce the first version, publish it, and leave editor time for corrections and edge cases.

In my opinion, here is how we should approach it:

  • For now, auto-translation should be opt-in. Authors who want their project translated can enable it explicitly, while authors who are not yet comfortable with it can keep the current path.
  • Once a project opts in, new strings should be translated into every active locale as soon as they are submitted. Smaller languages would no longer wait behind long queues just to get a basic localized version online.
  • Editors (especially GTEs) would still matter, but their job would change. They would review flagged strings, correct mistakes, maintain glossaries, and guide terminology for each locale. That is a better use of volunteer time than approving thousands of routine strings one by one.
  • Any logged-in user should be able to flag a string as wrong, awkward, or inconsistent. Flagged strings can go to the locale editors for review, much like moderation queues in other community systems (such as the WP.org forums).

The quality argument

The bar we’re comparing against is not professional translators; it’s volunteers with varying levels of skill and familiarity with the project.

Good AI translation already does better on consistency than much of that queue work. With the right context, a model can follow a glossary, reuse established terminology, and keep style choices stable across thousands of strings. That matters in WordPress, where the same UI terms appear everywhere.
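One way to make the glossary point concrete: after a string is machine-translated, an automatic check can verify that established glossary terms were rendered consistently, and route violations to editors. The sketch below is a naive substring check with an invented pt_BR glossary; a real system would need morphology-aware matching, but the idea of enforcing terminology mechanically stands.

```python
# Hypothetical pt_BR glossary: English UI terms mapped to the established
# translation each one must use (entries invented for illustration).
GLOSSARY = {
    "dashboard": "painel",
    "settings": "configurações",
}


def glossary_violations(original: str, translation: str) -> list[str]:
    """Return glossary terms present in the original string whose
    required translation is missing from the translated output."""
    source = original.lower()
    target = translation.lower()
    return [
        term for term, required in GLOSSARY.items()
        if term in source and required not in target
    ]


# Consistent output passes; a drifting synonym gets flagged for review.
print(glossary_violations("Dashboard Settings", "Painel de configurações"))  # []
print(glossary_violations("Dashboard Settings", "Painel de opções"))  # ['settings']
```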

WordPress.org should judge AI output against the quality it already publishes through the current queue. That baseline is uneven, and anyone who has reviewed community submissions at scale already knows it.

As a former GTE, I spent a significant amount of time rejecting or fixing human-submitted translations with grammar issues, inconsistent terminology, or awkward phrasing. Volunteer work varies a lot depending on the contributor’s skill, the project context, and the amount of time available.

Note: After publishing this, I found that Matt opened a very similar discussion on the Make/Polyglots blog back in February: Where are we going?.