The Bargain That Is WhatsApp & 4 More Stories You Need To Know Today

SUCH A DEAL! — There’s been no shortage of opinions on what Facebook paid for WhatsApp, ranging from “I don’t get it” to “You don’t get it.” But the only opinion that matters has now weighed in, and, in his view, WhatsApp was cheap. “I just think that by itself it’s worth more than $19 billion,” Facebook CEO Mark Zuckerberg proclaimed Monday at Mobile World Congress in Barcelona. “The reality is there are very few services that reach a billion people in the world.” The reality is that WhatsApp isn’t one of them — it has around 465 million users. But Zuck thinks it can be a billion-member platform, and, again, that’s all that matters.

 

_____

 

DIMON IN THE ROUGH — The Financial Times (subscription required) reports that JP Morgan Chase is set to fire “several thousand” more employees, above and beyond a recently announced round of up to 15,000 job cuts. The reason, per the FT: better tech at branches and plummeting mortgage applications. Official word may come as early as today, when CEO Jamie Dimon speaks at the bank’s annual Investor Day, his first address to shareholders since the bank’s record $13 billion settlement with the Feds over allegations of mortgage chicanery. The bank employs more than 250,000 people.

 

_____

 

HI, FIVE — The Samsung Galaxy S5 got a nice enough reception from the tech press, which tossed around words like “refined” and “elegant.” Samsung’s newest flagship smartphone boasts some welcome features, like water resistance, fingerprint sensing, a built-in heart rate monitor, pedometer and fitness tracker. But the low-key kudos award goes to BGR Executive Editor Zach Epstein, who tweeted: “Galaxy S5 is a nice iteration. Good job focusing on refinement vs feature spam but no BUY ME features.” Let’s hope, for Samsung’s sake, that’s not literally true. The S5 will be available in 150 countries on April 11.

 

_____

 

BAD DAY FOR BITCOIN — The virtual currency that’s beginning to attract mainstream attention is facing an “existential crisis” after a leaked document, allegedly from one of the companies that act as banks for the crypto-currency, revealed it had been hacked for years. Missing from the Bitcoin exchange in question, Mt. Gox, are a total of 744,408 coins worth some $350 million. Bitcoin lost 17% of its value in the 24 hours after the revelation, but has since stabilized (to the extent Bitcoin ever does). Mt. Gox was once the biggest of the Bitcoin exchanges and has been offline since late Monday. Six other big exchanges — Coinbase, Kraken, Bitstamp, BTC China, Blockchain and Circle — sought to isolate the problem: “This tragic violation of the trust of users of Mt. Gox was the result of one company’s actions and does not reflect the resilience or value of bitcoin and the digital currency industry. As with any new industry, there are certain bad actors that need to be weeded out, and that is what we’re seeing today.”

 

_____

 

ELEVATOR PITCH — In what should come as a surprise to nobody — seriously, people — the person behind @GSElevator isn’t a Goldman Sachs employee sharing overheard 1 percent disdain for the rest of us. The New York Times‘ Andrew Ross Sorkin blew the lid off this three-year-old prank “after several weeks of reporting,” outing the tweeter as John Lefevre. The Texas-based bond executive didn’t actually hear anyone at Goldman Sachs (New York/London/Hong Kong, not Texas) say things like: “I never give money to homeless people. I can’t reward failure in good conscience.” He tells Sorkin his parody was aimed broadly at Wall Street, not Goldman Sachs per se. The Wall Street bank was circumspect, telling the Times: “We are pleased to report that the official ban on talking in elevators will be lifted effective immediately.” Lefevre’s last tweet was Feb. 15, leaving his 628,000 followers in the lurch for tone-deaf white-shoe-firm humor. Worry not! Lefevre has a book deal. Of course.

 

 

Is Google+ the future? At least Google believes it is!

It’s common currency in internet punditry circles that Google won the battle to dominate search while Facebook won the battle for social, and that Google+ is just a failed competitor to Facebook. But Google hasn’t given up.

It has been clear for a while now that, to make up for the fact that not very many people actively use Google+ as a social network, Google is turning it into a platform on which the rest of Google’s web services are evolving—something that has the effect of making people use Google+ by default. Results from Google+ already clutter search results. YouTube’s commenting system has been replaced by Google+. Chat and Talk, once stand-alone services, were combined into Hangouts and incorporated into Google+.

In a revealing interview with the Indian business newspaper Mint, Steve Grove, a Google+ exec who inks deals with content providers and influential figures, makes it clear that this is just the beginning. Grove tells Mint that “the reason for that is that Google+ is kind of like the next version of Google.”

Why? According to Grove:

There’s a lot of great value here, because Search also shows results from Google+ and this is going to bring more people into Google+; people are going to see that there’s a lot of value in logging into our services, before doing a search.

We’ve written before about how Facebook’s strategy for getting users in emerging markets is to convince people new to the internet that Facebook basically is the internet. Google’s strategy looks a bit like the obverse of this: convince people already on the internet that the internet runs on Google+.

But when you look at it longer-term, Google’s strategy is actually very similar to Facebook’s. New internet users, such as the hundreds of millions expected to come online in India in the coming years, will find that being on Google’s social network is increasingly a prerequisite for using Google’s other services. Roping those new users into Google+ from the get-go is the company’s best chance for coming from behind and defeating Facebook’s dominance in social media. And that clearly seems to be Google’s goal, given how much effort it’s pouring into the network. “We focused a lot on Google+ here [in India], and it’s already very active, and people are getting on board on their own,” Grove said.

Facebook wants to know why you didn’t publish that status update you started writing.

 

A couple of months ago, a friend of mine asked on Facebook:

Do you think that facebook tracks the stuff that people type and then erase before hitting <enter>? (or the “post” button)

Good question.

We spend a lot of time thinking about what to post on Facebook. Should you argue that political point your high school friend made? Do your friends really want to see yet another photo of your cat (or baby)? Most of us have, at one time or another, started writing something and then, probably wisely, changed our minds.

Unfortunately, the code in your browser that powers Facebook still knows what you typed—even if you decide not to publish it. It turns out that the things you explicitly choose not to share aren’t entirely private.

Facebook calls these unposted thoughts “self-censorship,” and insights into how it collects these nonposts can be found in a recent paper written by two Facebookers. Sauvik Das, a Ph.D. student at Carnegie Mellon and summer software engineer intern at Facebook, and Adam Kramer, a Facebook data scientist, have put online an article presenting their study of the self-censorship behavior collected from 5 million English-speaking Facebook users. (The paper was also published at the International Conference on Weblogs and Social Media.) It reveals a lot about how Facebook monitors our unshared thoughts and what it thinks about them.

The study examined aborted status updates, posts on other people’s timelines, and comments on others’ posts. To collect the text you type, Facebook sends code to your browser. That code automatically analyzes what you type into any text box and reports metadata back to Facebook.
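
To make the mechanism concrete, here is a minimal TypeScript sketch of how a page could detect this kind of self-censorship while reporting only metadata. It is an illustration under assumed names, not Facebook’s actual code: the #composer element, the #post-button element, the /log/self-censorship endpoint, and the payload shape are all hypothetical.

```typescript
// Minimal sketch (hypothetical names throughout), NOT Facebook's code:
// watch a compose box and report a metadata-only flag if the user
// types something but never posts it.
const composer = document.querySelector<HTMLTextAreaElement>("#composer")!;

let typedSomething = false;

// Any nontrivial keystroke in the box marks the session as "content entered".
composer.addEventListener("input", () => {
  if (composer.value.trim().length > 0) {
    typedSomething = true;
  }
});

// Posting clears the flag, so only abandoned text counts as self-censorship.
document.querySelector("#post-button")?.addEventListener("click", () => {
  typedSomething = false;
});

// If the user leaves the page with abandoned text, send back a flag only.
// Note that this handler has full access to composer.value, so shipping
// the actual text instead would be a one-line change.
window.addEventListener("beforeunload", () => {
  if (typedSomething) {
    navigator.sendBeacon(
      "/log/self-censorship",                // hypothetical endpoint
      JSON.stringify({ selfCensored: true }) // metadata, not the text
    );
  }
});
```

That last comment is the crux of the argument later in this piece: whether the boolean or the full text goes over the wire is a choice in the reporting code, not a technical limitation.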

Storing text as you type isn’t uncommon on other websites. For example, if you use Gmail, your draft messages are automatically saved as you type them. Even if you close the browser without saving, you can usually find a (nearly) complete copy of the email you were typing in your Drafts folder. Facebook is using essentially the same technology here. The difference is that Google is saving your messages to help you. Facebook users don’t expect their unposted thoughts to be collected, nor do they benefit from it.
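
For contrast, a Gmail-style draft autosave might look like the following sketch (again with hypothetical names, and saving locally rather than to a server for simplicity). The point of the contrast is that here the content itself is stored, and it is stored for the user’s benefit.

```typescript
// Minimal sketch of draft autosave (hypothetical names), in the spirit
// of the Gmail behavior described above: persist what the user typed
// so an accidental tab close doesn't lose the draft.
const draftBox = document.querySelector<HTMLTextAreaElement>("#draft")!;

let saveTimer: number | undefined;

draftBox.addEventListener("input", () => {
  // Debounce: wait until typing pauses for a second before saving.
  window.clearTimeout(saveTimer);
  saveTimer = window.setTimeout(() => {
    // The draft text itself is saved, to be restored next visit.
    localStorage.setItem("draft-autosave", draftBox.value);
  }, 1000);
});

// On load, restore any previously saved draft.
const saved = localStorage.getItem("draft-autosave");
if (saved !== null) {
  draftBox.value = saved;
}
```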

It is not clear to the average reader how this data collection is covered by Facebook’s privacy policy. In Facebook’s Data Use Policy, under a section called “Information we receive and how it is used,” it’s made clear that the company collects information you choose to share or when you “view or otherwise interact with things.” But nothing suggests that it collects content you explicitly don’t share. Typing and deleting text in a box could be considered a type of interaction, but I suspect very few of us would expect that data to be saved. When I reached out to Facebook, a representative told me that the company believes this self-censorship is a type of interaction covered by the policy.

In their article, Das and Kramer claim to send back only information indicating whether you self-censored, not what you typed. The Facebook rep I spoke with agreed that the company isn’t collecting the text of self-censored posts. But collecting it is certainly technologically possible, and it’s clear that Facebook is interested in the content of your self-censored posts. Das and Kramer’s article closes with the following: “we have arrived at a better understanding of how and where self-censorship manifests on social media; next, we will need to better understand what and why.” This implies that Facebook wants to know what you are typing in order to understand it. The same code Facebook uses to check for self-censorship can tell the company what you typed, so the technology to collect the data it wants already exists.

It is easy to connect this to all the recent news about NSA surveillance. On the surface, it’s similar enough. An organization is collecting metadata—that is, everything but the content of a communication—and analyzing it to understand people’s behavior. However, there are some important differences. While it may be uncomfortable that the NSA has access to our private communications, the agency is monitoring things we have actually put online. Facebook, on the other hand, is analyzing thoughts that we have intentionally chosen not to share.

This may be closer to the recent revelation that the FBI can turn on a computer’s webcam without activating the indicator light to monitor criminals. People surveilled through their computers’ cameras aren’t choosing to share video of themselves, just as people who self-censor on Facebook aren’t choosing to share their thoughts. The difference is that the FBI needs a warrant but Facebook can proceed without permission from anyone.

Why does Facebook care anyway? Das and Kramer argue that self-censorship can be bad because it withholds valuable information. If someone chooses not to post, they claim, “[Facebook] loses value from the lack of content generation.” After all, Facebook shows you ads based on what you post. Furthermore, they argue that it’s not fair if someone decides not to post because he doesn’t want to spam his hundreds of friends—a few people could be interested in the message. “Consider, for example, the college student who wants to promote a social event for a special interest group, but does not for fear of spamming his other friends—some of who may, in fact, appreciate his efforts,” they write.

This paternalistic view isn’t abstract. Facebook studies this because the more its engineers understand about self-censorship, the more precisely they can fine-tune their system to minimize self-censorship’s prevalence. This goal—designing Facebook to decrease self-censorship—is explicit in the paper.

So Facebook considers your thoughtful discretion about what to post to be bad, because it withholds value from Facebook and from other users. Facebook monitors those unposted thoughts to better understand them, in order to build a system that minimizes this deliberate behavior.