WhatsApp Faces EUR 0.25m Fine in Germany If It Fails to Translate Terms of Use

The scaling of apps and SaaS across the globe can happen breathtakingly fast. WhatsApp, whose pace of user growth eclipsed that of Google, Facebook, and Twitter, now has a billion monthly active users around the world.

Exponential global growth inevitably requires a sophisticated approach to product internationalization. Beyond the user interface and marketing material, whose localization drives user growth, more mundane content such as legal disclaimers also requires translation.

WhatsApp is learning this the hard way after losing a lawsuit lodged by the Federation of German Consumer Organizations (VZBV) before the Kammergericht Berlin, the highest state court in Berlin. The ruling (in German) is available here.

The case is straightforward. On its WhatsApp.com website, the US-based messenger service markets its products to German consumers. The website’s content is almost exclusively in German, ergo specifically targeted at German users. However, the localized website has one crucial omission. WhatsApp has chosen not to translate the Terms of Use from English into German.

“Everyday English is widespread in Germany but not the type of English used in legal texts, contracts and commercial documents”―Berlin State Court

A click on the German-language button “Datenschutz & AGB” (privacy policy and terms of use) leads to WhatsApp’s 6,397-word Terms of Use. In English. Consumer watchdog VZBV complained that the Terms of Use “contain technical legal language” and are “largely incomprehensible to consumers in Germany.” Well, of course, because they are not in German.

The court found that “everyday English is widespread in Germany but not the type of English used in legal texts, contracts and commercial documents.” According to the VZBV press release, “The court noted that no customer should have to face ‘an extensive, complex set of rules with a very large number of clauses’ in a foreign language.”

We would argue that most consumers, even outside the translation industry and even outside Europe, would agree.

It is unclear why WhatsApp went to court and did not simply translate its Terms of Use. We reached out to WhatsApp but have not received a response as of press time.

The State Court ruled that the verdict cannot be appealed. WhatsApp may still contest this and file a so-called Application for Leave to Appeal with the German Federal Court of Justice. It is as yet unclear whether WhatsApp will lodge an appeal.

If the Federal Court of Justice dismisses the case or if WhatsApp decides not to appeal, spending USD 1,500 or so on a translation seems preferable to the alternative. Should WhatsApp still refuse to comply with the law, the State Court will fine the company EUR 0.25m. Failing to pay the fine would result in a prison sentence of up to six months for the company’s Chief Executive Officer.

Jan Koum, make sure you get that translation done before you travel to Germany.

Original article published here: Slator

Facebook ditches Bing, 800M users now see its own AI text translations

Machine learning is helping accomplish Facebook’s mission of connecting the world across language barriers. Facebook now serves 2 billion text translations per day across 40 languages in 1,800 directions (a direction being an ordered pair such as French to English), and 800 million users, almost half of all Facebook users, see translations each month.

That’s all based on Facebook’s own machine learning translation system. In 2011, it started working with Microsoft Bing to power translations, but it has since been working to transition to its own system. In December 2015, Facebook finally completed the shift, and now exclusively uses its own translation tech.

Alan Packer, Facebook’s director of engineering for language technology, revealed this progress today at MIT Technology Review’s EmTech Digital conference in San Francisco. The conference has a big focus on artificial intelligence, machine learning, and other cutting-edge ways to parse data.

[Update: After his talk, I sat down with Packer, who told me about Facebook ditching Bing-powered translation. This article has been updated to add info from him.]

Earlier, Pinterest’s head of product Jack Chou revealed that just six months after launching its visual search feature, Pinterest sees 130 million visual searches every month. The product was built by a small team of four, and allows people to search using a source image instead of just text. Pinterest also now has 50 million buyable pins from 20 million merchants.

Facebook’s ability to not only translate but understand the content of text and images could lead to big advances in the relevancy of the News Feed. Packer explained that if Facebook can understand a post is asking for recommendations of hotels in Paris, it could surface that to friends it knows recently visited Paris, suggest a particular friend to ask or recommend making a related search for public posts of recommendations.

Facebook was an early pioneer of online translation, building a crowdsourcing tool to get users around the world to translate its interface’s text into their local tongues. In 2011, Facebook began using automated systems to translate users’ posts and comments in the News Feed.

Last year, Facebook acquired Wit.ai, a startup applying natural language understanding to text and voice to power new user interfaces. Google and Microsoft/Skype have also been aggressively pursuing translation tools to unite the world across borders.

Facebook initially turned to Bing because “we didn’t have our own technology but saw that there was value in it. We did a deal, turned it on, and got a lot of usage,” Packer tells me. The problem was that Bing was built to translate properly written website text, not the way humans talk to each other. Packer says Bing “didn’t do well on slang, idioms, and metaphors. We really needed to train on our own data.”

So Facebook looked at the languages most in need of translation and got cranking on building a version of the tech that did better than Microsoft’s. “We did our own internal bake-off. When we could show it was better than Bing [for a specific language-to-language translation], we would turn Bing off and replace it with our own service.”

Now, translation is fully rolled out across 1,800 different translation permutations. When Facebook is confident a translation is accurate, it automatically shows the translated text by default with an option to “See Original”; it only shows the opt-in “See Translation” button when it thinks the translation might have errors.
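As a rough illustration of that confidence gating, here is a minimal sketch in Python; the function names and the threshold are our own assumptions, not Facebook’s actual system:

```python
# Hypothetical sketch of confidence-gated translation display.
# The 0.9 threshold and all names are illustrative assumptions,
# not Facebook's real API or cutoff.

CONFIDENCE_THRESHOLD = 0.9

def render_post(original_text: str, translation: str, confidence: float) -> dict:
    """Decide whether to show a translation by default or behind a button."""
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: show the translation, let users opt back out.
        return {"body": translation, "button": "See Original"}
    # Lower confidence: keep the original, let users opt in.
    return {"body": original_text, "button": "See Translation"}

print(render_post("Bonjour à tous", "Hello everyone", 0.95))
# -> {'body': 'Hello everyone', 'button': 'See Original'}
```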

Packer tells me other Facebook teams from anti-spam and policy enforcement to acquisitions like Instagram are now considering how they could integrate translation.

The motive is obvious, socially conscious and lucrative. Facebook’s mission is to make the world more open and connected. I asked Packer how translation plays into that, and he said, “The mission of the translation team is removing language as a barrier to making the world more open and connected.”

While he didn’t have concrete numbers, Packer says that access to the translation product leads users to “have more friends, more friends of friends, and get exposed to more concepts and cultures.” And the company knows it’s grown important to users, because “when we turned it off for some people, they went nuts!” The more people across the world that Facebook users can connect with, the longer they’ll spend on the social network, and the more revenue-earning ads they’ll see.

We’re rapidly approaching an era of AI haves and have-nots. Companies that lack the engineering prowess to parse the meaning of their content or information won’t be able to deliver it to users as effectively. Giants like Google, Facebook, and Microsoft could flourish while others, more strapped for the cash and resources to invest in research, stumble.

A 3D Map of the Brain Shows How We Understand Language

A 3D view of a person’s cerebral cortex. The color of each voxel indicates which category of words it is selective for: green voxels, for example, are mostly selective for visual and tactile concepts, while red voxels are mostly selective for social concepts.

Each day, as we talk to friends, family, and coworkers, and consume podcasts, movies, and other media, we are ceaselessly bombarded by the spoken word. Yet somehow, our brains are able to piece out the meaning of these words, allowing us to seamlessly, in most cases, go about our days, understanding, remembering, and responding when necessary.

But what’s going on in our brains that allows us to understand this endless stream? A group of scientists set out to map how the brain represents the meaning of spoken language, word by word. Their results, published today in the journal Nature, not only represent a first-of-its-kind directory, or ‘semantic atlas’, of how the meaning of language is grouped in the brain, but also show that we use a broad range of brain regions, challenging the belief that language is limited to a few brain areas and involves only the left hemisphere.

Word meaning, also known as semantics, focuses on the relationship between words and phrases and what they mean, their connotation. Words like purchase, sale, item, store, card, and package, for example, all have something in common: you go to a store to purchase an item using your debit card. To understand semantics in the brain, previous studies used single words or phrases to see what’s going on.

But Jack Gallant, a neuroscientist at UC Berkeley, and his colleagues, including lead author Alex Huth, wanted to know how our brains map out a more natural, narrative story. They asked seven participants to listen to a few hours of The Moth Radio Hour, a storytelling program. They hooked each participant up to an fMRI machine, which measures changes in blood flow and volume caused by the activity of neurons in the brain, so they could see exactly which areas of the brain lit up and when.

Afterwards, they used transcripts of the narratives along with the fMRI scans, first to work out which words corresponded to the lit-up areas of the brain, and second to build a model that predicts brain activity based on the words the person heard.
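A minimal sketch of this kind of voxel-wise encoding model, assuming word-derived feature vectors and ridge regression on synthetic data (the study’s actual features and fitting details differ):

```python
# Sketch of a voxel-wise encoding model: predict fMRI activity from
# features of the words a listener heard. Synthetic data throughout;
# the feature choice and model are illustrative, not the study's
# exact pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 1000, 300, 50

# Stand-in for word-embedding features aligned to each fMRI timepoint.
word_features = rng.standard_normal((n_timepoints, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
voxel_activity = (word_features @ true_weights
                  + 0.1 * rng.standard_normal((n_timepoints, n_voxels)))

# Fit one regularized linear model per voxel (Ridge fits all at once).
model = Ridge(alpha=1.0).fit(word_features[:800], voxel_activity[:800])

# On held-out listening time: how well do word features predict voxels?
r2 = model.score(word_features[800:], voxel_activity[800:])
print(f"held-out R^2: {r2:.2f}")
```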

An interactive model of how we understand language

By putting together information from all seven participants, with the help of a statistical model, the researchers created a brain atlas: a 3D model of the brain showing which brain areas lit up at the same time across all the participants. They also created an interactive version of the atlas, which can be found here.

Participants were scanned using functional magnetic resonance imaging (fMRI) while they listened to hours of stories from The Moth Radio Hour; using a computer algorithm, the researchers then created an atlas of the brain’s response to different word categories. Image credit: Alexander Huth

What does the model tell us about language?

The researchers found that word meaning is distributed across the cerebral cortex, in 100 different areas spanning both hemispheres. Looking at all the participants’ scans as a whole, they found that certain regions of the brain are associated with certain word meanings. For example, hearing language about people tends to activate one area of the brain, language about places another, and language about numbers a third, demonstrating a basic organizing principle for how the brain handles different components of language.

While the results were quite similar across individuals, this doesn’t mean the researchers have created a definitive atlas for language. First, the study looked at only seven participants, all from the same part of the world and all speakers of English. It also used just one source of input: a series of spoken, engaging narrative stories. The researchers are eager to learn how things like experience, native language, and culture will alter the map.

Further, Gallant says, they think the map could also change with the setting or with a person’s mental state: if a person read the story instead of hearing it, for instance, or if, instead of an engaging story, the content were tedious cramming for an exam.

Gallant says these would likely change the map as well, and he plans to learn how in future research.

How will this map help?

Many brain disorders and injuries affect language areas in the brain, says Gallant. This atlas could become the basis for understanding how brains and their maps change after a brain injury that affects language, such as a stroke, or how they differ in language disorders like dyslexia and social language disorders like autism.

For now, Gallant says, this study helps to show the importance of not just brain anatomy but the physiology or function behind that anatomy. “In neuroscience we know a lot about the anatomy of the brain, down to single synapses, but what we really want to know about is the function.” The combination, he says, is the key to really understanding the brain.

With much more research, this idea could be used as a sort of decoding mechanism: a map of fMRI data could be used to “read” what words a person is reading, hearing, or thinking, which could allow people with communication disorders like ALS to better communicate.
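One simple way to turn an encoding model like the one sketched above into a crude decoder (our illustration, not the study’s method) is to compare the observed brain pattern against the pattern the model predicts for each candidate word, and pick the best match:

```python
# Crude decoding sketch built on a fitted encoding model: score each
# candidate word by how well its predicted voxel pattern matches the
# observed scan. Synthetic data; illustrative only, not the study's method.
import numpy as np

rng = np.random.default_rng(1)
n_features, n_voxels = 300, 50
weights = rng.standard_normal((n_features, n_voxels))  # learned encoding weights

# Feature vectors for a small, hypothetical candidate vocabulary.
candidates = {w: rng.standard_normal(n_features) for w in ["store", "hotel", "number"]}

# Simulate a scan recorded while the person heard "hotel".
observed = candidates["hotel"] @ weights + 0.1 * rng.standard_normal(n_voxels)

def match(word: str) -> float:
    """Correlation between the word's predicted voxel pattern and the scan."""
    return float(np.corrcoef(candidates[word] @ weights, observed)[0, 1])

print("decoded word:", max(candidates, key=match))  # expected: 'hotel'
```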

Original article published here: Popular Science