Translate your Help Scout Docs with Transifex

Help Scout is popular help desk software with a built-in knowledge base solution called Docs. If you use Docs, you can now translate it with Transifex Live. Not only can you provide great email support, but you can also offer answers to your users 24/7 in their native language.

Multilingual Help Scout Docs

Translating your Help Scout Docs with Transifex Live is simple. Just add the Transifex Live JavaScript snippet in your site’s settings, save the articles, translate, and publish. Best of all, when you use Transifex Live, you only need to set up one knowledge base to offer it in multiple languages.
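For reference, the snippet is just a couple of script tags, typically placed in your page's head. Here's a sketch of what it looks like; the API key is a placeholder, and you'd grab the real one from your Transifex Live project settings:

<script type="text/javascript">
  window.liveSettings = { api_key: "YOUR_API_KEY" };
</script>
<script type="text/javascript" src="//cdn.transifex.com/live.js"></script>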

Check out our sample Help Scout knowledge base that was translated with Transifex Live, or get started by following the instructions in the documentation.

P.S. If you haven’t heard, we’ve also built a Transifex Live-based WordPress plugin for translating your WordPress site.

Translating Your Node.js App with Transifex

Node.js is a server-side environment for delivering scalable JavaScript apps and services. It’s quickly gaining popularity, having been used by Netflix, PayPal, LinkedIn, and others. In this post, we’ll show you how you can use Transifex to easily localize your Node.js apps.

Setting Up Node.js for Localization

While there are several modules for localizing in Node.js, we recommend the i18n-node module. i18n-node stores localization data in standard JSON, making it easy to pass text to and from Transifex.

i18n-node

i18n-node is a simple translation module for integrating localization into a Node.js app. Each locale stores its translations in a separate JSON file, which you’ll use to send localization data to and from Transifex. You can add i18n-node to your project using npm:

$ npm install i18n

Load i18n along with the rest of your modules:

// load modules
var http = require('http'),
      i18n = require('i18n'),
      ...

To configure i18n, specify the locales you want to include in your project and the directory where the localization files will be stored. You can also specify a default locale, a unique extension for newly created localization files, and more. A full list of options is available on the i18n-node GitHub page.

In this example, we’ll store English, Spanish, and German language files in the locales directory under the current directory:

i18n.configure({
	locales:['en', 'es', 'de'],
	directory: __dirname + '/locales',
	defaultLocale: 'en',
	extension: '.json'
});

i18n-node supports both shorthand locale codes and IETF language tags. For example, to specify British English, change “en” to “en_GB”.

You can reference a localized string by using the global i18n keyword. For example, if you have a variable that stores the string “Hello”, you can replace it with a localized string using i18n.__(keyword):

// Hard coded string
// var greeting = 'Hello';

// Localized string
var greeting = i18n.__('Hello');

The exception is when responding to HTTP requests, in which case you call i18n.init() to attach the translation helpers to the request and response objects. For example, let’s create a simple web server that displays “Hello” to the user:

app = http.createServer(function(request, response) {
	i18n.init(request, response);
	response.end(response.__('Hello'));
});

app.listen(3000, '127.0.0.1');
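i18n.init() picks the locale from the request’s Accept-Language header, so once the Spanish and German translations are filled in (we’ll do that below), you can check the behavior with curl. A quick sketch of the expected exchange:

$ curl -H "Accept-Language: es" http://127.0.0.1:3000/
Hola
$ curl -H "Accept-Language: de" http://127.0.0.1:3000/
Hallo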

i18n automatically populates the default locale file using the keywords provided in your code. When the app starts, i18n creates three new files:

  • en.json, which contains the default keys
  • es.json, which will contain the Spanish translations
  • de.json, which will contain the German translations
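After the first run, en.json contains each keyword as both key and value; the translation files start out with the same shape until real translations replace the values. For our single-string example, en.json looks like this:

{
	"Hello": "Hello"
}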

Further down, we’ll show you how to connect your app to Transifex using the Transifex client.

RequireJS

RequireJS provides its own i18n implementation that can also be imported into Transifex. If your Node.js app uses RequireJS, add the i18n.js plugin to your project.

Similar to i18n-node, RequireJS stores translations as key-value pairs. However, rather than a flat JSON file like i18n-node uses, RequireJS defines the key-value set as a JavaScript object passed to define().

RequireJS searches for localization files in the “nls” folder under the project directory. Additional languages are stored in separate directories and are enabled by adding the language to the main file.
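As a sketch (the module and folder names here are illustrative), a root bundle in nls/greetings.js holds the default strings in a “root” object and flags each extra language, while nls/es/greetings.js supplies the Spanish strings:

// nls/greetings.js
define({
	"root": {
		"Hello": "Hello"
	},
	"es": true,
	"de": true
});

// nls/es/greetings.js
define({
	"Hello": "Hola"
});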

For more details on localizing with RequireJS, see the RequireJS documentation on i18n.

Sending the Source File to Transifex

The Transifex Client makes it easy to move translations between your app and Transifex. If you don’t already have the Transifex Client installed, follow the instructions here.

Start by creating a new Transifex project using the JSON Key-Value (.json) file format. The en.json file will act as our source file. Once you’ve created the project, use the Transifex Client to create a local repository in your app directory:

$ cd myapp
$ tx init

tx init creates a .tx directory in the current folder with a basic config file. We’ll set up the configuration by specifying the directory where localized files are stored, setting en.json as the initial localization resource, setting English as the source language, and specifying standard JSON as the file type.

Transifex_project_slug specifies the URL slug for your project, while default_resource_slug specifies the URL slug for the default resource. You can find both of these in your project page.

[main]
host = https://www.transifex.com

[Transifex_project_slug.default_resource_slug]
file_filter = locales/<lang>.json
source_file = locales/en.json
source_lang = en
type = KEYVALUEJSON

Finally, push any local changes to Transifex using tx push -s. The -s flag tells the Transifex Client to update the project with the source file.

$ tx push -s
Pushing translations for resource nodejs-test.enjson:
Pushing source file (locales/en.json)
Done.

The project page should reflect the updated source file:

Transifex project page

Localizing in Transifex

Now that your source file has been pushed to your project, you can use Transifex to start localizing. We’ll show you how to add new languages, update translations, then push the changes back to your Node.js application.

Adding Existing Languages

If your app already contains translations, you can push those translations to Transifex using the Transifex Client. Use tx push with the -t flag instead of the -s flag to push any existing translations to your project.

$ tx push -t
Pushing 'de' translations (file: locales/de.json)
Pushing 'es' translations (file: locales/es.json)
Done.

For smaller projects, it may be easier to create a new translation and enter any existing translations as comments. For more information on how you can do this, see the Transifex Editor tutorial.

Adding New Languages

Begin by clicking “Edit Languages” in the Project Languages table:

Project languages

In the popup window, enter the languages that you want to add to your project. In this case, we’ll add Spanish (es) and German (de). As you type, the search box will filter based on the name of the language. You can also specify locales such as Latin American Spanish (es_419) or Austrian German (de_AT). When you’re ready, click Apply.

Add languages

Adding Translations

Your new languages will appear in your project dashboard. Since no work has been done on them, they’ll both appear at 0% completion. You’ll also see that no translators have been assigned to the new languages. You can learn more about adding translators through the People page.

Adding translations

Let’s start by filling in our German translation. Clicking on the German language will bring you to the resource page. The en.json source file is shown along with its category and completion. From here, you can also assign and manage the project’s team members.

Transifex resources

Click on the en.json resource, then click Translate in the popup window:

Translation popup

This will bring you to the Transifex Editor, where you can modify and review your translations. The left-hand panel shows the strings provided by your resource file. In this case, we only have the one string, “Hello”. The right-hand panel provides tools for entering and reviewing translations. Additionally, you can view suggested translations, read through a history of translations, add comments or instructions, and more. You can learn more about the Transifex Editor through the Editor tutorial.

When you’re ready to translate, select “Hello” so that it appears in the right-hand panel under “Untranslated String”. Type “Hallo” in the translation box underneath the string and press the Tab key or the Save button. The translation will be marked as unreviewed until it can be verified by a reviewer or manager. In the meantime, do the same for the Spanish translation by switching to the Spanish resource and entering “Hola” in the translation box.

Transifex translation editor

The project dashboard will now show the languages as pending review. Next, we’ll sync the new translations back to Node.js.

Syncing Changes Back to Node.js

Using the Transifex Client, you can pull changes back to your Node.js app with tx pull:

$ tx pull -a
New translations found for the following languages: de, es
Pulling new translations for resource nodejs-test.enjson (source: locales/en.json)
 -> de: locales/de.json
 -> es: locales/es.json
Done.

You can verify the results by checking your es.json and de.json files. If the process was successful, you should see a new key-value pair for each language.

$ cat locales/es.json locales/de.json
{
	"Hello": "Hola"
}{
	"Hello": "Hallo"
}

With that, your Node.js app is ready to go! If you’re using i18n-node, you can test your configuration using i18n’s setLocale() function:

i18n.configure({...});
i18n.setLocale('es');
app = http.createServer(...);
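As a quick sanity check, assuming the es.json pulled above, the Spanish string should resolve outside of an HTTP request too:

i18n.setLocale('es');
console.log(i18n.__('Hello')); // prints "Hola"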

Retrieving Project Information through Node.js

There are other ways to interact with your Transifex project through Node.js. node-transifex is a community module that provides an easy interface to the Transifex API. After adding your project name and user credentials to node-transifex, you can use the module’s built-in methods to pull data about your Transifex project. For instance, as a project owner, you can use the languageSetMethod() function to return a list of the project’s languages along with the coordinators, translators, and reviewers for each language.

node-transifex also allows statistics gathering, such as the percentage of items that have been translated and the percentage of items that have been reviewed. You can gather statistics for the entire project, or for a specific language.

Install node-transifex using npm:

$ npm install transifex

Then, initialize the module with your project name and credentials:

var Transifex = require("transifex");

var transifex = new Transifex({
    project_slug: "myProject",
    credential: "username:password"
});
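As an illustration, here’s roughly how the languageSetMethod() call mentioned above might look; node-transifex follows Node’s usual error-first callback style, but check the project docs for the exact signature:

// List the project's languages along with the coordinators,
// translators, and reviewers for each one (project owners only)
transifex.languageSetMethod("myProject", function(err, data) {
    if (err) throw err;
    console.log(data);
});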

For a list of available functions, see the node-transifex project page.

Coming Soon: A New, Simplified URL Structure for Transifex

Early next week, we’ll be releasing a new URL structure for Transifex. This update will make navigating around easier, and you’ll be able to quickly tell which Organization or Project you’re in. There’s nothing you need to do for now, but we recommend that you read on for the details.

The current URL structure

Let’s say we have an Organization named “We love our Users”. It has a project called “Transifex Rocks” and a resource named “My Resource”. Here’s how you’d navigate to each page with the current URL structure:

Organization Dashboard: www.transifex.com/organization/we-love-our-users
Project page: www.transifex.com/projects/p/transifex-rocks
Resource page: www.transifex.com/projects/p/transifex-rocks/my-resource

As you can see, it’s not exactly easy to tell which Organization a Project belongs to just by looking at the URL. So we simplified things!

The new URL structure

With the new, simplified URL structure, here’s how things will look:

Organization Dashboard: www.transifex.com/we-love-our-users
Project page: www.transifex.com/we-love-our-users/transifex-rocks
Resource page: www.transifex.com/we-love-our-users/transifex-rocks/my-resource

Slowly, we see a pattern take shape:

www.transifex.com/Organization-Slug/Project-Slug/Resource-Slug

All URLs in Transifex will follow the paradigm above. It’ll make it easier for you to understand which Organization, Project and/or Resource page you’re visiting at any given time.

(Slugs are the unique names we use within Transifex to identify individual entities such as Organizations, Projects and Resources.)

What about the API?

None of the changes will affect the API, so there’s nothing you need to do or change.

What about existing bookmarks and links?

Existing URLs will be redirected to the new URLs through the 31st of January 2016.

In order to keep our code base clean and not have it slow down our development speed, we won’t be supporting the redirects beyond that point. But until then, your bookmarks and other links are safe, and there won’t be any broken links.

We still recommend you update the links in your bookmark manager and those pointing from your website to your Transifex project page as soon as possible. This gets rid of the redirect, making pages load faster each time you or someone else clicks on a link to Transifex.

Thank you for taking the time to read through everything. Have a great day!

Elasticsearch at Transifex

We recently announced Translation Memory 3.0, which, at its core, uses Elasticsearch.

This blog post highlights some of the more interesting things we learnt using Elasticsearch in production. Enjoy!

Resilience

Elasticsearch isn’t our source of truth and, at least for now, that’s a good thing! While building Translation Memory 3.0, we saw many patches, videos, and interviews discussing Elasticsearch’s ability to deal with failure. As it stands, Elasticsearch, especially in a scalable production environment, needs a little bit of support.

Elasticsearch has introduced some really cool roles for nodes within a cluster. A node can be a Master, Worker, Data or Tribe node. Roles define a node’s responsibilities within a cluster. For example, a Worker node is responsible for receiving queries, aggregating the results, and returning them to the caller.

Master nodes are the most important when it comes to resilience. These nodes are solely responsible for managing cluster-level operations and master eligibility. To prevent a split-brain, the cluster must be able to ‘see’ a quorum of Master-eligible nodes. Therefore, nodes within a cluster containing 3 Masters (the minimum number for a meaningful quorum) can only receive requests if they are able to communicate with 2 Master-eligible nodes. This can be seen as a pessimistic approach towards the split-brain problem, but it ensures your data stays consistent across the entire cluster and nodes don’t start their own little club.

To set a node as a master, simply set these variables in your elasticsearch.yml configuration file:

node.data: false
node.master: true
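On the Elasticsearch 1.x releases we were running, you’d also pin the quorum explicitly so each node knows how many master-eligible peers it must see. A sketch for a cluster with three master-eligible nodes (quorum of two):

discovery.zen.minimum_master_nodes: 2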

Shards

We use the routing parameter with all of our queries. When provided, it allows Elasticsearch to route both indexing and search requests to a single shard, ensuring you’re not sending requests to the entire cluster. I strongly suggest you look at your data and identify appropriate routing variables (e.g. user ID, organisation ID, etc.). This section assumes you’ve done the same.
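As a sketch of what a routed search looks like (the index and field names here are illustrative, not our actual schema), with an organisation ID as the routing value:

$ curl -XPOST 'localhost:9200/tm/_search?routing=42' -d '{
    "query": { "match": { "source_text": "Hello" } }
}'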

If you’ve just started with Elasticsearch, then your first question is probably going to be “How many shards?”

This can’t be answered by anyone else and it requires a good deal of testing and a little bit of foresight. Each shard in Elasticsearch is a fully-fledged Lucene index. Too few or too many and you’re going to bring your cluster to a crawl. The general idea is to fire up a single-server cluster (try to use the same hardware you’re going to be using in production), add some real-world data and then start running the same kinds of queries you’ll run in production until the queries become too slow for your use-case.

At this point, you can simply divide your total amount of data by the amount in your test cluster, giving you a rough number.

However, this method will only give you the number of shards you need at that moment. If you expect your dataset to grow, then you’re also going to have to consider that rate and adjust accordingly. But ultimately, and this is something we accepted, if Elasticsearch isn’t your source-of-truth, then you have the freedom to make mistakes. If you’re of the same opinion, then make sure you use aliases with your indexes. These will ensure you can build another index in parallel and then switch your queries over once it’s complete.
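Switching an alias is a single atomic call. A sketch with hypothetical index names, rebuilding tm_v2 in parallel and then pointing the tm alias at it:

$ curl -XPOST 'localhost:9200/_aliases' -d '{
    "actions": [
        { "remove": { "index": "tm_v1", "alias": "tm" } },
        { "add":    { "index": "tm_v2", "alias": "tm" } }
    ]
}'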

Know your data

It’s probably obvious, but the best thing you can do is to know your data and the queries you’ll be running over it. Elasticsearch will happily begin to accept documents from the very start without you configuring a single thing. Don’t do this. Make sure you know what it means to create an index or a document type, and the implications these decisions have on the final product.

The mapping of your documents is incredibly important, and I can’t stress enough how vital it is that you know what it means for a field to be stored or indexed, and what the various analysis options do. The Elasticsearch documentation is the best place to find out about these kinds of things.
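To make that concrete, here’s a sketch of an explicit mapping in the 1.x syntax; the index, type, and field names are made up for illustration:

$ curl -XPUT 'localhost:9200/tm' -d '{
    "mappings": {
        "translation": {
            "properties": {
                "source_text": { "type": "string", "analyzer": "standard" },
                "locale":      { "type": "string", "index": "not_analyzed" },
                "length":      { "type": "integer" }
            }
        }
    }
}'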

Indexing

This is where we spent most of our energy. When we started working on Translation Memory 3.0, we nailed down some hard requirements. The most critical of these was the system’s ability to work in real time.

Elasticsearch Translation Memory

We looked at a bunch of different options but couldn’t find anything to suit our needs. We use Postgres, and we wanted something that would work well with it. Around this time, we found out that the Elasticsearch River API was being deprecated, and the elasticsearch-jdbc plugin didn’t fit either.

So, we made our own. We created a library called Hermes to asynchronously receive and process event notifications emitted by Postgres’ NOTIFY.

We’re going to be talking about the things we’ve created with Hermes in a later blog post. If you want to learn more then head over to the repository and take a look through the docs.

Filtering

Designing your queries can be tricky, and great care must be taken when deciding the best approach. One thing to note when querying, however, is filters. Filters provide a way to reduce the dataset that you ultimately need to query. In addition, they can be cached for even better performance!

In our case, determining the similarity between two strings in Transifex is done using the Levenshtein distance algorithm. This algorithm, especially at scale, can be very costly. So, we save the length of each string in our index along with a bunch of other metadata. When we need to look for similar strings, we calculate the length boundaries and then filter out anything which doesn’t meet the criteria – greatly reducing the strings we actually have to perform ‘work’ over.
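Sketched in the 1.x filtered-query syntax (field names are again illustrative): for an 11-character input with a hypothetical ±2 length boundary, the cached range filter discards everything outside 9-13 characters before any scoring happens:

$ curl -XPOST 'localhost:9200/tm/_search?routing=42' -d '{
    "query": {
        "filtered": {
            "filter": { "range": { "length": { "gte": 9, "lte": 13 } } },
            "query":  { "match": { "source_text": "Hello world" } }
        }
    }
}'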

Monitoring

Marvel. I can’t recommend it enough. We used a mix of different graphing and monitoring tools but none came close to the clarity and convenience of Marvel. We used it during development and saw the value in it straight away. If your cluster is important to you and its continued running is critical to your business, then do yourself a favour and try it out!

Have you used Elasticsearch in production? What were the challenges you faced? Let us know in the comments below.

4 Critical Elements of Website Localization

Companies looking to go global must tailor their approach to each target locale to create a successful customer experience. Let’s dive into 4 elements to keep in mind when localizing your website.

1. Branding

What’s one of the first things your customer, or potential customer, will see upon entering your site? Your logo and tagline. You would like to think that your brand and messaging would be able to move seamlessly across borders and be understood by everyone. But that’s not always the case. When you’re globalizing a brand, it’s always a good idea to double check and make sure your tagline or logo won’t translate into something offensive or be misunderstood.

Consider these less-than-stellar examples of branding gone awry:

  • Clairol launched a curling iron called “Mist Stick” in Germany, even though “Mist” is German slang for manure.
  • Mercedes-Benz entered the Chinese market under the brand name “Bensi,” which means “rush to die” – probably not something you want to be associated with as an automotive company.
  • The American Dairy Association replicated its “Got Milk?” campaign in Spanish-speaking countries, where it was translated into “Are You Lactating?”

When translating your messaging, logo, tagline, etc., you should be aware of the pronunciation and spelling, and double-check that the actual meaning is culturally appropriate.

2. Space

The average number of characters per word differs by language. German uses 11.66 characters per word on average, while Swedish averages 8.51 and Croatian 7.06.

Word count average by language
Image source.

You might be asking, so what? Can a difference of a few characters really change your web design layout? Short answer – yes, it can. When you are dealing with multiple languages, the layout of text displayed on your website will need some tweaking between languages. If your button has space for 6 characters and another language requires 10, something will have to give.

One way to tweak your text is to play with font size. Look at Amazon’s homepage in Japanese and German. German tends to use more characters per word, so Amazon used a smaller font, while the Japanese text is laid out in a larger font. Both use almost the same amount of horizontal space, but the vertical space varies.

Amazon's homepage in Japanese and German
Japanese (left), German (right)

3. Culture-specific design

If your website uses graphics, imagery, photos, etc., let’s assume you had them created for your initial audience. Now, if you are going global, you should make sure that the imagery you used on your source-language website will work in your new locales. If it doesn’t, you need to find replacements that will fit. If you adapt your images to a specific locale, you will create a better relationship with your users. If your images are unrelatable, you could ruffle some feathers.

For instance, you wouldn’t want to display hamburgers or beef steaks if India is your target zone. Nor would you want to show someone with muddy boots indoors if you are targeting your website towards a Japanese audience. You should also be aware that gestures don’t have the same meaning globally. For instance, if your image has a person doing a “V” sign facing inward, you might think it means “Peace, man,” but in the UK, it’s the equivalent of giving someone the middle finger in the US. And the “thumbs up” gesture as a way of saying “good” in one culture can become an entirely different, and unpleasant, phrase in another culture.

Dang it Bieber.

Color associations aren’t universal either. In North America, green is often associated with nature or wealth, while in some Asian cultures, green is often tied to sickness.

4. Simple (sometimes)

Sometimes good design is built on simple forms, lines, text, and composition; other times it is structured around complex forms and dense delivery of information. A well-localized site means staying in tune with local design norms.

Most Americans prefer relatively simple designs; Google performs very well in the US. Naver, on the other hand, is a very successful search engine in South Korea that outperforms Google there (leaving Google with a market share of 36.9%). Why is that? One reason is Naver’s understanding of local design norms.

Naver vs. Google
Naver (left), Google (right)

Google’s homepage is a stark contrast to Naver’s. Naver’s homepage is much busier, using lots of images, sections, and banners. Unlike Google’s page layout with a single search box, Naver provides insights and trends into what people are searching for, as well as the most frequently searched keywords. It’s tailored for a South Korean audience and reflects an understanding of what its users like and want.

Want another example of culturally adjusted web design? Take a peek at the website of Rakuten, a Japanese e-commerce company.

Rakuten's homepage in Japanese, Austrian, and American-English
Japanese (left), Austrian (center), American (right)

Not only has Rakuten localized their site into 13 different languages, but they also have a different design that works for the target culture. Rakuten Japan uses several colors and displays a lot of information above the page fold, while Rakuten USA uses more white space.

Research has repeatedly found that people’s visual preferences vary widely. Some countries, such as China, Singapore, and Malaysia, prefer more colorfulness, while countries such as Denmark, Switzerland, and Sweden prefer less. What this means is that if your website is consistent with cultural design norms, it will look familiar and create a better user experience.

Hammering it home

The one-size-fits-all approach doesn’t apply when it comes to localization. You might be asking yourself: why not just make one site and run its text through Google Translate? Why is good website usability so important? Because a good customer experience is what will bring your users back; localization is an investment in a better future.

Why Is Localization So Dang Hard

This post initially appeared on Medium.


With so many software frameworks and development environments supporting the ability to internationalize software, why is localization still so difficult?

I18n, L10n, translation — what does it all mean?

Technically, the most direct way to build a global application is to 1) somehow figure out which locale the user wants and then 2) give the user a user interface specific to the locale requested. Prior to the Internet, software developers often built separate applications for every locale. Localization files had to be distributed with the application (usually on separate floppy disks; yes, we are going back that far), and the user had to pick the right floppy disk per language; this process was fairly awful. With the advent of the Internet and the proliferation of computer access globally, it has become common and easier to support multiple languages in the same app.

The problem that arose from relying on the user to pick application versions based on language was partially solved by operating systems. OS developers built in the capability for the user to pick their locale during configuration. This advance hid the problem from most users, whose operating system was set up for them. While this has been amazing progress for the user, for the software developer who is building the user interface, these changes did not go far enough.

Standards? Where did they fall short?

When learning about globalization, you can find a plethora of documentation on globalization standards. However, when it comes to actually implementing translations in a product or website, there is little guidance. The good news here is that the mechanism to display different languages in a software product, or what is commonly referred to as internationalisation (i18n), is a well understood software engineering problem. As a result, most development environments or web frameworks support i18n. Unfortunately, there is a downside.

Software developers tend to be a fairly disagreeable bunch, and so they disagreed about the *right* way to support i18n. It is in this lack of “universality” where the standards fall short. Each programming language implements a slightly different form of i18n with a slightly different approach. Some languages avoid this altogether and leave it up to frameworks and libraries to solve.

File formats had to become the standard

In the absence of clear guidelines, the software development community has had to find a way to manage translation assets. For this reason, they turned to file formats to specify the integration method. In some cases, the programming language simply adopted a well-known file format as a “method” of integration. Oh, you are using a PHP framework? Well, then you must be using PO files for managing your translations. However, there are a few key issues with a file-based approach.

  1. Version management is a nightmare. Developers often make multiple copies of translation files when building applications. This can lead to significant confusion around which set of files is the most current from the translators. Or even worse… software development projects sometimes have last-minute text changes. Those changes often result in generating even more translation files.
  2. Process agility is sacrificed. In a file-based approach, the translation file needs to be completed with all translations and generally blocks the development process. On large software projects, having external blocks waiting for translators to complete can often slow even the nimblest development teams. Evidence for this can be seen in the fact that many software startups bypass any localization efforts completely in an effort to keep their development velocity high.
  3. We forgot DRY! Often with the file-based approach, translation management tends to organize the translation files around a particular project, product, or website. After a few iterations, translators are translating the exact same text copy again and again. If there is no process in place to limit this effect, it can spiral out of control in time and cost, just the same way that real code does when we neglect DRY principles.

Looking for a better way


It was in this environment that Dimitris Glezos found himself when working with the Fedora Linux project in 2007. Back then, translation projects had grown so large and unmanageable that Red Hat developers were desperate for help. Dimitris came up with the idea for Transifex.

“The idea is that Transifex will act as a proxy/mediator for translation commits.”

Fast forward to 2015: Transifex is part of the cloud technologies landscape, but has this completely solved the problem? We’ve made great progress, but there is always more to be done.

This approach does gain some ground on versioning and agility. However, we have also added some new issues. Clearly, using the cloud to manage our translation files is just one step in solving this problem. Dimitris’ idea of needing a proxy/mediator between translators and developers still persists even today. Transifex’s developer-centric approach aims to ease management, storage, collaboration, and real-time monitoring so that companies can launch products and content in multiple languages without slowing down development speed; thus, solving these translation issues.

Taking a leap forward


Part of the problem with globalization is that, generally speaking, we’ve been going about it the wrong way. We’ve been focusing on translation management as an engineering problem and have been building developer-first solutions. But in order to take a leap forward, we need to solve the potentially harder, more people-focused issue and make translation efforts truly seamless for the individual. Here are three key aspects of doing this:

  1. Using software tools as an enabler. Software tools should enable us to build on a global level — they shouldn’t be used to define boundaries. There will always be cases in whatever approach we take where issues arise. Our tools should be capable of helping guide us past those issues, and smart enough that they don’t come up again once they are solved.
  2. Appropriate context for everyone based on their role. Here are some examples. People performing the translator role need to see the text copy in the context of the website or application, and not in some localization file format. Translation project managers need to see a dashboard of timelines and cost so they can have the appropriate context for their role. And finally, developers should not need to spend time digging through the UI for translatable strings…this should happen seamlessly as part of their build process.
  3. Keeping translation cycles quick. Agile methods have transformed our approach to developing software. No longer do we spend months in dingy, poorly lit rooms building an application before validating it with product experts or even users. Translation projects can benefit the same way. By allowing for shorter cycles and more transparency, not only will timelines be reduced, but overall quality will likely improve as well. This approach lets the process fit us rather than forcing us to fit the process.

The world is just a big community

With the growth of the Internet, especially in countries outside of the US and Europe, we are quickly finding that the world as a whole is a community unto itself. Even though it is not practical for the entire world to agree on a single language, it *is* practical to expect our software and processes to make this as transparent as possible. When we are building new software products or websites we shouldn’t even need to *choose* for whom to make our product available. It should simply be available to all.

Where to find more information

The future is not as far as you might think!

  1. See how Transifex Cloud APIs help streamline file-based translation approaches https://www.transifex.com/product/
  2. See how Transifex Live is helping to create translation context for websites http://docs.transifex.com/live/quick-start-guide
  3. See how Transifex Command Line Client is automating software build processes http://docs.transifex.com/integrations/best-practices
  4. See Dimitris’ letter to the Fedora community https://lists.fedoraproject.org/pipermail/i18n/2007-July/000671.html

What is Translation Memory?

As you translate more and more content, whether it’s strings in an app or text on a website, you’ll encounter content that’s similar to or even the same as what’s been translated before. For example, an app may contain both the phrase “Download file” as well as “Download all files.”

In our new infographic, we explain what Translation Memory is and how it can help you handle this kind of similar content.

Infographic: What is Translation Memory

An Opinionated Guide to WordPress Plugin Development

This guide runs you through the basics of setting up a PHP development environment for plugin development. Many steps in this guide are intended for Mac OS X systems running WordPress on a virtual Ubuntu server. If you are attempting to follow this on a different platform, the steps might differ.

What you will need

For starters, let’s download a bunch of things:

  • The Bitnami WordPress virtual appliance image
  • A VM host such as VirtualBox
  • The NetBeans IDE
  • Firefox with the FireFTP plugin

Getting WordPress running on your local Virtual Machine

The first task is to get WordPress up and running locally from the image. Following the steps from Bitnami, you should be able to easily import the image.
One key item to note at this point…make sure you are connected to a network with internet access. The VM will set itself up with ‘Bridged’ networking initially, so it needs an external DNS. I’ll address working offline later.

Also reference the following FAQ from Bitnami.

The cloud setup will be slightly different, so be sure you are on the ‘Virtual Appliance’ FAQ.

WordPress dev

If you chose VirtualBox

After you have installed and launched VirtualBox, go to File > Import Appliance, and navigate to the folder of your extracted VM image and import it.

In the next window, allocate at least 2048 MB of RAM.

Next up: starting the VM!

If you get a “WRITE SAME failed. Manually zeroing” error, just wait. It will eventually boot to the OS.

Note: If the default login credentials don’t work, try bitnami for the username and password. Also, if prompted to change the password, change it to whatever you like.

Set up a repository and Git Client

If you don’t have any existing plugin code, create a new project on GitHub. Then generate your plugin boilerplate here: http://wppb.me/.

The boilerplate generator will create 2 directories: one for your assets, which will be displayed on the WordPress plugin page, and another for your plugin. I recommend creating two separate projects for these directories. Right now, we are only focused on the plugin code.

Getting the IDE setup, and sFTP access

At this point, go ahead and install NetBeans; you’ll also need to set up Firefox with the FireFTP plugin.
We are SO close to being able to do something productive!

Just a few more quick setup tasks:

  • Shell into the VM as ‘bitnami’ user (you can use console or set up SSH).
    • Enable SSH in the VM
    • The VM’s IP address can be found using ifconfig (the inet address)
  • Make a symbolic link to the WordPress plugin directory as it’s somewhat hidden in Bitnami’s special paths.
    ln -s /opt/bitnami/apps/wordpress/htdocs/wp-content/plugins/ plugins
  • Now, set up your sFTP connection. I recommend just using the IP address displayed on the VM console screen, and the ‘bitnami’ user. (Note: You can create separate users for security purposes…but it’s easy to lose track if you have many VMs.)
  • While still inside the FireFTP console, navigate on the left pane to your boilerplate plugin (WPPB.me) and copy the plugin directory over to the VM.
  • Finally! You can now log in to WordPress and activate your plugin! The default WordPress login info is ‘user’ / ‘bitnami’

Advanced topics

Getting unit tests setup

You’ll need to set up PHPUnit and WP-CLI on the VM.

To install PHPUnit:

wget https://phar.phpunit.de/phpunit.phar
chmod +x phpunit.phar
sudo mv phpunit.phar /usr/local/bin/phpunit

For additional info: https://github.com/sebastianbergmann/phpunit#installation.

To install WP-CLI (similar to PHPUnit):

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

For additional info: http://wp-cli.org/.

If you have existing tests…simply go to your plugin directory and run the WP-CLI initialization script:

sudo ./bin/install-wp-tests.sh wordpress_test <mysql root user> <mysql root user password> localhost

Now, if everything is good and the stars aligned for you…simply run:

phpunit

If you are working on a project that needs tests set up, there are a few more steps which I’m not going to cover here. Instead, please refer to the WP-CLI docs here.
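For orientation, the WP-CLI route typically starts by scaffolding the test harness from inside the VM; the plugin slug here is a placeholder, and exact flags may vary by WP-CLI version:

wp scaffold plugin-tests my-plugin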

Working offline

When you are not connected to a network, this setup doesn’t work, since the WordPress Virtual Machine can’t get an IP address to support a local connection. In this case, you will need to set up ‘host-only networking’. For VirtualBox, this is fairly straightforward…although you do need to take some extra configuration steps.

Unfortunately, the VirtualBox documentation isn’t completely clear. I’ll break it down quickly here:

  • Create the host-only loopback device. The management console will default to ‘vboxnet0’, which is fine. Be sure to turn the DHCP server on. This will prevent you from having to change networking on the guest.
  • Now you will be able to flip from bridged to host-only in your Virtual Machine’s settings panel.
  • Be sure to reboot the VM so that the virtual hardware settings are updated and you will get a new IP address on the virtual machine’s console.

If you found this guide useful, please check out our WordPress plugin that helps you to quickly translate your website or blog: https://wordpress.org/plugins/transifex-live-integration/

:octocat:fin

Taking the Headache out of Translating WordPress Sites and Other Web Content

When it comes to localization, many people don’t know where to start. Publishing and content platforms are not always internationalized – you couldn’t translate your content even if you wanted to. And even if a publishing platform did come with multilingual support, you often had to localize content in roundabout ways, for example by duplicating pages or copy-and-pasting content. This became exponentially more challenging when you had lots of frequently-updated content.

Not fun, to say the least.

Our vision for Transifex Live

A year ago, we introduced Transifex Live. Our vision was to do for localization what Google Analytics did for website analytics: make localization easy and accessible for content creators of all types. With a single snippet of JavaScript, Transifex Live enabled you to translate websites and online documentation without any platform or engineering dependencies, or complex system integrations.

By eliminating the need to internationalize code, manage files, and track frequently changing web content, Transifex Live made it possible for anyone to create multilingual sites and reach customers in their native language. Today, we’re excited to take our vision a step further with our Transifex Live integration with WordPress.

Transifex Live Multilingual WordPress Plugin

WordPress is by far the most popular Content Management System out there, owning nearly half of the CMS market [1]. We use it for the Transifex website (product tour, blog, etc.), as do many of our users. But it doesn’t offer multilingual support out-of-the-box [2].

After seeing several members of our community build Transifex Live plugins for WordPress, we decided to join forces and create an official Transifex Live WordPress plugin. A big thanks to Brian Miyaji from ThemeBoy and Ayebare Mucunguzi from Zanto!


With the plugin, you can:

  • Integrate Transifex Live into your WordPress site without modifying any themes or templates.
  • Auto-detect the browser locale and load the matching translations, e.g. visitors from France will see your content in French.
  • Automatically identify new/updated content and pages on your site.
  • Choose where the language selector appears: top left, top right, bottom left, or bottom right corner of your site.
  • Customize the language switcher by choosing your own color scheme.

Unlike other existing solutions for translating WordPress sites or blogs, the Transifex Live plugin doesn’t require you to create a separate post for each language, insert language tags manually, or host separate instances for each language. Plus, by using Transifex Live together with the WordPress plugin, you can also use all the other translation and collaboration tools in the Transifex platform, including Translation Memory, Glossary, and translation ordering, to name a few.

To get the Transifex Live WordPress translation plugin, visit the WordPress Plugin directory or GitHub repository.

There’s more…

We didn’t stop after our WordPress plugin. If you go to our documentation, you’ll now find Integration Guides that walk you through how to use Transifex Live to translate content on a number of CMS, documentation, help desk, e-commerce, and blogging platforms. You don’t use only one tool for your company or organization, so we didn’t stop at one integration guide :)

Currently, there are guides for:

And just like using Transifex Live with WordPress, using Transifex Live with these platforms means there are no complex system integrations or files to deal with. You also get the ability to translate in context and publish translations without waiting for deploys. Best of all, keeping translations up to date is no longer a challenge.

We hope you find the WordPress plugin and Integration Guides helpful in getting started with localizing your web content with Transifex Live. This is just the start!

[1] http://trends.builtwith.com/cms
[2] https://codex.wordpress.org/Multilingual_WordPress

Translation Memory 3.0

Today, we’re proud to announce a completely revamped version of our Translation Memory (TM) service for all our users. A TM database is a tool translators use to find similarities between what they are currently translating and previous translations. By leveraging previously translated content, translators can do their job faster and more accurately.

So… what’s new with Translation Memory 3.0?

Real-time suggestions

Firstly, we completely re-engineered the way content is stored and the speed at which it’s done. Transifex now presents translation suggestions in real time. This means that if many translators are translating content concurrently, they will immediately see each other’s translations presented as suggestions if they are deemed similar. This is especially handy if you are primarily translating with your user community.

Improved suggestions

We’ve also significantly improved the accuracy and relevancy of suggestions, regardless of the source language. The following suggestions were derived from the same piece of text, “Project Languages”. Take a look.

Translation Memory 2.0 vs 3.0

As you can see, TM 3.0 is able to provide significantly improved and relevant suggestions. A translator can now completely translate the string using the provided suggestions.

Additionally, the TM now only considers the most recent version of a translation for suggestions.

Premium features

If you are on the Premium plan or above, you also get access to the following features.

Delete TM suggestions

You can now remove unwanted entries from your Translation Memory by clicking the “x” icon next to a suggestion. Once deleted, a suggestion will no longer be shown for similar translations.

Delete Translation Memory Entries

Another way to delete TM suggestions is by navigating to Project > Manage > Translation Memory. There, you’ll be able to search for a specific term in your source content and see the existing TM entries for strings containing that term. Like before, hit the “x” icon to remove an unwanted entry.

Translation Memory Search and Delete

Multiple TM Groups

In the past, an organization could only have a single multi-project TM group. Now, you can enjoy multiple TM groups (groups are managed under Dashboard > Settings > Translation Memory). This will improve consistency of translations across projects, since translators will get 100% matches from a unified pool of strings.

Note: Those on the Plus plan can still create a single TM group within their organization.

See TM leverage

Just because a file has 750 words doesn’t mean all of those words need to be translated from scratch. With TM 3.0, you can see reports that show how much of the content being translated matches what’s already been translated before. For example, if 450 out of the 750 words in your file have a 94-75% Translation Memory match, you know the job shouldn’t take as long as translating all 750 words from scratch.

Translation Memory Leverage

We hope you like the new TM!