Read/Write Web


Atlassian's Geeky Software Carves Out A Big Home With Developers

Tue, 04/15/2014 - 17:54



If you're not a developer, you're not going to understand Atlassian's success. Atlassian employs no salespeople, yet it's doing over $200 million in annual sales, according to a recent report in The Wall Street Journal.

While enterprise software companies struggle to make their wares more consumer-friendly, Atlassian builds software that only a developer could love: It's geeky, not super intuitive and frankly somewhat unpleasant to use for a business user like myself.

Yet it's now worth $3.3 billion. How's that?

Of The Developer, For The Developer

Atlassian co-founder Scott Farquhar told The Wall Street Journal that "These days, people are making decisions based on how good the products are." The definition of "good" may not be the same for developers as it is for the average business user, however.

Wikis, issue tracking systems, Git code hosting, etc.—these are not tools your head of marketing really wants to use. I should know: Every time I have to fill out a JIRA request to get content changed on my company's website, a little part of me dies inside.

Then again, I'm not Atlassian's target market. The developer is. And developers love Atlassian.

In the world of developers, the definition of "ease of use" differs. This is a world that still thinks fondly of the command line. Even among this crowd, however, Twitter's Chris Aniszczyk posits that Atlassian's software may not be the best, but rather the best of a bad lot:

@mjasay best option from the crap pile and they have an great a la carte model where you don't have to buy into the whole stack

— Chris Aniszczyk (@cra) April 15, 2014

I'll take Chris' word since I'm not much of a developer tools power user myself, but it's his latter argument that I find so compelling: Atlassian succeeds, in part, because it treats its developer audience with serious respect.

Giving Tribute To Developers

This reason behind Atlassian's success is echoed by Fintan Ryan of Strand Weaving, who suggests Atlassian tools are "the best of a limited bunch, and relatively configurable."

While the first part of Ryan's comment suggests Atlassian doesn't deserve much credit, it's the second half that really sets Atlassian apart. Developers don't want unnecessary frills that get in the way of productivity. This same desire is what has driven GitHub, AWS and other developer-focused software to succeed. 

That group of tools developers love is a very small club. As it turns out, it's very hard to develop tools a wide array of developers want to use. 

Not only does Atlassian support the things developers already do, but as Operational Results web developer Cody Nolden notes, Atlassian's tools may actually expose problems in team workflows:

They’re very configurable and can match whatever workflow your team uses. I’ve found that when I struggle to use Atlassian tools it’s because of more underlying struggles as a team not knowing what process we follow and we haven’t configured accordingly.

Ultimately, Atlassian succeeds not because it's the best tool among a bad bunch, but because it respects developers' time and concerns. Tools like JIRA are intentionally not flashy. They're utilitarian, not because Atlassian lacks creativity, but because the company cares more about what developers want than what marketing or sales or other groups within a company may want. This shows not only in the software itself, but also in how it's sold: Atlassian is salesperson-free, over-the-web, and costs a reasonable amount of money.

That's a great strategy for appealing to developers.

With Gnip, Twitter Is Ready To Sell Your Tweets

Tue, 04/15/2014 - 17:04



Gnip was once a neutral provider of social data, but that neutrality is gone now that it's in the hands of Twitter.

Twitter on Tuesday announced the acquisition of social data analytics startup Gnip, which is one of the only companies with access to Twitter’s firehose of data—all the tweets and activity streams on Twitter since the platform launched in 2006. The terms of the deal were not disclosed.

Twitter will bring both the revenue and data streams from Gnip in-house, exerting full control over our tweets and how they’re used. 

Gnip has worked with Twitter for years. It’s one of a handful of certified partner companies Twitter works with to handle its data. In fact, selling the firehose, that treasure trove of Twitter data, to Gnip and other analytics providers was one of the first ways Twitter made money. (Topsy and DataSift still have access to Twitter's firehose as well.)

With the Gnip acquisition, no longer is there a man in the middle that deals your data to advertisers and other folks relying on your personal information to sell you things. Now, Twitter can deliver that data directly to buyers, effectively making you a product. 

Twitter Owns All The Data

With complete access to Gnip’s entire data set, Twitter can sell much more than just its own data: The analytics company has exclusive access to all Foursquare and Tumblr data, and it also works with Facebook and Google+. 

And Twitter wants Gnip to expand its offerings. Jana Messerschmidt, VP of global business development for Twitter, wrote in a blog post:

Together we plan to offer more sophisticated data sets and better data enrichments, so that even more developers and businesses big and small around the world can drive innovation using the unique content that is shared on Twitter ... And with the help of Gnip’s Boulder-based team, we will be extending our data platform — through Gnip and our existing public APIs — even further.

It will be interesting to see if Gnip’s other partners will sever access to their information. While the majority of Gnip’s data comes from managed public API access, a handful of companies like Tumblr and Foursquare allow Gnip complete access, and now that access belongs to Twitter. 

Hopefully this signals to companies and interested users that Twitter is better prepared to provide more in-depth data, rather than arbitrary statistics, like the conversation surrounding #Sochi2014 during the Winter Olympics. But even if it can, it’s going to make you pay for it. 

After Heartbleed, "Forward Secrecy" Is More Important Than Ever

Tue, 04/15/2014 - 16:35



Internet users have spent the last week changing their passwords and checking their online accounts for potential hacks resulting from Heartbleed, a bug in the open-source security software OpenSSL that left nearly two-thirds of the Web vulnerable to malicious attacks. 

See Also: Protect Yourself Against Heartbleed, The Web's Security Disaster

Heartbleed has caused security nightmares for dozens of websites, especially since companies initially thought it was impossible to steal private certificate keys from servers. That assessment was quickly debunked—just ask the 900 Canadians who had their taxpayer data stolen by hackers over a six-hour period after the bug was publicly announced. 

There is a silver lining to the madness, however: If websites are using encryption called perfect forward secrecy, there is no way for hackers to retroactively decrypt your data, even if they get control of your server’s private key. 

What's Wrong With HTTPS?

First, let's get to know HTTPS, the connection that protects your data on most secure websites.

When you’re on a secure website using traditional HTTPS encryption, your username, password, and all other personal communications are supposedly safe from being intercepted and decrypted by hackers (or the NSA). OpenSSL made it possible for websites to deliver that secure connection, locking down the data sent to and from the browser and server.

Normally, when a secure connection is created, the browser and server agree on a session key using the website’s long-term private key—that one private key protects the setup of millions of sessions, not just yours. Since only the holder of the private key can “unlock” the exchange that produced your session key, your information is secure. But by exploiting the Heartbleed vulnerability, an attacker could steal the website’s private key and then decrypt the information protected by your session key. 

That’s not all: Any traffic recorded from HTTPS servers can be retroactively decrypted using private keys exposed by Heartbleed. An eavesdropper who has quietly been recording a site’s encrypted traffic could now unlock all of it.

Why Does Forward Secrecy Matter?

Now we know why HTTPS isn't good enough to stop Heartbleed. So what can websites do about it?

Perfect forward secrecy is an encryption technique that prevents people from “unlocking” your private information history, even if they get their hands on the server’s private key. With forward secrecy, a new temporary session key is created each time you access a secure website, instead of relying on one master key. Essentially, it creates ephemeral encryption—where the keys disappear—so hackers can't decrypt your data like they would with HTTPS. 
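The ephemeral-key idea can be sketched with a toy Diffie-Hellman exchange. This is an illustration only; real TLS forward-secrecy suites use vetted DHE/ECDHE parameters, not numbers this small:

```python
import secrets

# Toy finite-field Diffie-Hellman exchange, for illustration only.
# Real TLS uses standardized groups or elliptic curves (DHE/ECDHE).
P = 2**127 - 1   # a Mersenne prime; far too small for real security
G = 3

def ephemeral_handshake():
    # Each side draws a FRESH random secret for this session only.
    a = secrets.randbelow(P - 2) + 1   # client's ephemeral secret
    b = secrets.randbelow(P - 2) + 1   # server's ephemeral secret
    A = pow(G, a, P)                   # public values exchanged in the clear
    B = pow(G, b, P)
    # Both sides compute the same shared secret; a and b are then
    # discarded, so no stored key can ever reproduce this session key.
    assert pow(B, a, P) == pow(A, b, P)
    return pow(B, a, P)

# Two handshakes yield unrelated keys: compromising the server's
# long-term certificate key reveals nothing about past sessions.
key1, key2 = ephemeral_handshake(), ephemeral_handshake()
```

Throwing the exponents away after each session is the whole trick: there is simply no long-lived secret left for an attacker to steal.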

“Forward secrecy gives you client and server that use a different method for agreeing on a session key,” said Timo Hirvonen, senior researcher at security firm F-Secure. “The main point there is the key that is used for decrypting that session is a short-lived key used only for that session.”

If we compare security to messaging apps, forward secrecy would be similar to Snapchat—once you’re done with the session, your key disappears. Websites that had enabled forward secrecy kept hackers from retroactively unlocking previously recorded sessions, even after Heartbleed exposed their private keys.

But it’s not just software vulnerabilities users have to worry about. Documents released by Edward Snowden reveal the National Security Agency vacuums up troves of encrypted data with the hopes of one day being able to crack it. Luckily for users, forward secrecy even prevents NSA agents from reading your email—a fear that no doubt pushed companies to rethink their encryption methods. (The NSA reportedly knew about Heartbleed before it was made public, a claim the agency flatly denies.) 

Who Uses Forward Secrecy? 

Forward secrecy is over 20 years old, but most websites don’t implement it. According to SSL Labs, over half of the most popular websites on the Web don't implement forward secrecy, and just 42% of popular websites have some forward secrecy suites enabled.

Google, ever a pioneer in securing user information, made forward secrecy the default for its HTTPS services in November 2011. The company then published its work on OpenSSL with the hopes that other companies would follow suit.

Unfortunately, it took two more years for other tech giants to get on board.

In mid-to-late 2013, Facebook, Microsoft, and Twitter began expanding security to include forward secrecy, and earlier this year (just one week before Heartbleed was made public), Yahoo announced it would implement forward secrecy across many of its properties. Apparently it didn’t move quickly enough: Yahoo Mail was one of the biggest services affected by Heartbleed.

One of the main reasons websites don’t use forward secrecy, according to Hirvonen, is because there is a performance penalty—it requires more CPU resources. If you think of a server like a human, enabling perfect forward secrecy requires more brain power than what it takes to enable HTTPS encryption. 

Network engineer Vincent Bernat notes that forward secrecy can use up to 30% more CPU than traditional HTTPS security. 

“Configuration shouldn’t be that difficult,” Hirvonen said. “It’s more about the CPU resources, hardware requirements, and the impact on performance.”

It also takes someone with knowledge of configuring website encryption to deploy forward secrecy. Assuming you have the desire and skill to implement it, you have to configure your server to prefer forward secrecy, placing the two most common cipher suites at the top of your list. Help Net Security provides a tutorial here.

Heartbleed reminded us all that our secure data is never as secure as we think it is on the Internet, and it will still be a while before the mess created by Heartbleed is entirely cleaned up. 

As we’ve learned, however, there are some simple but significant steps that can improve how users are protected, and most of the big tech companies are leading the charge. Hopefully Heartbleed can act as a catalyst to prompt more websites to adopt forward secrecy and make the Web—and our data—safe from harm. 

Lead image courtesy of Alonis on Flickr

Developers, Check Your Amazon Bills For Bitcoin Miners

Tue, 04/15/2014 - 14:47



Amazon Web Services gives developers access to massive computing capability. Now hackers have found ways to hijack some accounts and use that power to make money on someone else's dime.

Joe Moreno’s bill for Amazon Web Services is usually about $5 a month. But last Thursday, he learned his AWS credentials had been compromised. An unknown person started renting computing power from Amazon on his account, racking up more than $5,300 in charges on servers in Amazon data centers as far away as Tokyo, São Paulo, Sydney, and Singapore.

It appeared that he was running processes that "mined" Bitcoin—creating units of the digital currency in exchange for processing transactions.

We Have Met The Enemy, And He Is Us

Given the timing of the attack, Moreno initially thought the Heartbleed bug was to blame, until he tracked down the breach and realized it was his own error.

In addition to developers' usernames and passwords for their accounts, AWS uses "access keys" which are easier to include in software. And that's the problem—developers include them in software, including copies of the software they store in public source-code repositories like GitHub.

Moreno had uploaded code to a GitHub repository, inadvertently including his Amazon credentials. 

You might think this is an isolated case, but a security expert in Australia discovered almost 10,000 AWS credentials in a search of GitHub last month.

Ty Miller, founder of security testing firm Threat Intelligence, found exposed credentials for Amazon, Google's Cloud Platform, and Microsoft Azure in GitHub repositories, but the largest number were for Amazon.

"These credentials are likely to provide full access to the AWS account," Miller told ReadWrite. That means hackers could delete or add data and start new computing processes that could perform just about any task.

Amazon appears to be aware of the problem. The company specifically warns developers against including their credentials in code that they upload. But it's not clear how Amazon can police the problem.

Amazon For Nothing And Your Servers For Free

Moreno discovered the breach to his account after receiving email from Amazon asking him to update his credit-card information. Moreno, a former software developer at Apple, logged in and noticed the charges. He immediately contacted Amazon.

"Your AWS credentials have been compromised," the Amazon representative said. Bitcoin mining was a common goal of these hackers, though the AWS computing resources could be used for all kinds of money-making schemes.

When software consultant Ted Howard learned of Moreno’s experience, he commiserated. On April 5, he had learned that his Amazon account had been hacked.

“I immediately changed my password, disabled my access key and created a new key," he told ReadWrite.

Howard also believes the breach was likely his fault. After checking his GitHub repository, he found that he had committed a file that contained his AWS access key.

“I seem to be incapable of escaping my own stupidity,” he said. But the unintentional publication of AWS credentials appears to be a common problem. It even happened to security researcher Rich Mogull in January.

Keys To The Computing Kingdom

Howard thought his immediate problem was over, though he still had the bill to settle with Amazon.

But on Friday, after communicating with Moreno, he discovered yet another security breach on his AWS account, despite the steps he had already taken to secure it.  

After Moreno’s Amazon troubles came to light, Howard logged back into his own Amazon account and saw that 13 new EC2 instances in Oregon had been started—on April 9, days after he learned of $6,000 in fraudulent charges on his account.

“Of course I changed my password and disabled my new access key," he said. "This time I didn't even bother creating a new one.”

Since he hadn't used the new access key anywhere, or uploaded or shared it anywhere, he was worried.

“Whether it's related to Heartbleed is anyone's guess,” Howard said. "It's possible that they still accepted requests with the old access key after I killed it. Perhaps the attacker was notified of the new key somehow. I really have no idea."

Amazon: "We've Been Seeing More And More Of This"

Later on Friday, Amazon told Howard that the hacker may have used a feature called "Spot Requests" on his account before he reset his credentials. He checked out his account and found many of them.

As an Amazon developer, you can bid on unused computing resources via Spot Requests, and when Amazon accepts the price you set, it automatically starts the designated computing resources. Amazon told Howard he had to check each of Amazon's geographical regions for such requests, as deleting one would not affect instances in any other region.

"The nefarious way to use this is to set up a ton of requests with a high max price," Howard said. "Even once all the credentials are changed, this request is still present, so new instances continue to be spun up and down over time. This is apparently what happened to me."

That's what an Amazon representative told Moreno the day he discovered the breach. The Amazon employee also told Moreno to check his EC2 spot instances in other regions, and predicted he would see high-end instances running. Which he did.
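That region-by-region cleanup can be sketched in Python. The `make_client` interface below is assumed for illustration; in practice you would use the AWS SDK's EC2 client for each region:

```python
# Hypothetical sketch of the cleanup: spot requests live per region, so
# cancelling them in one region leaves requests elsewhere untouched.
# `make_client` is an assumed interface, not the real AWS SDK.
REGIONS = ["us-east-1", "us-west-2", "eu-west-1", "ap-northeast-1",
           "ap-southeast-1", "ap-southeast-2", "sa-east-1"]

def cancel_all_spot_requests(make_client):
    """Walk every region and cancel each outstanding spot request.

    make_client(region) must return an object exposing
    list_spot_requests() -> iterable of request IDs, and cancel(req_id).
    """
    cancelled = []
    for region in REGIONS:
        client = make_client(region)
        for req_id in client.list_spot_requests():
            client.cancel(req_id)
            cancelled.append((region, req_id))
    return cancelled
```

The point of iterating over every region is exactly the trap Howard fell into: a lingering request in Oregon survives any cleanup you do elsewhere.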

Like Howard, Moreno changed his password, but took the extra precaution of removing his code from GitHub. That's not a trivial process: Because of the way repositories are backed up, his old keys may still be discoverable.

A helpful GitHub tutorial explains how to purge files from your repositories permanently and avoid accidental commits in the future.

Plugging The Holes

Recently, Amazon has changed the way it generates credentials, Moreno and Howard both said. To allow programs to access AWS resources, you used to need an access key ID and a secret access key—strings of characters generated by Amazon. In the past, you could log into your account and retrieve the secret key at any time. That's no longer the case.

"If you lose [the secret key], you must disable and generate a new access key," Howard said.

An Amazon guide for managing AWS credentials suggests removing, or not generating, an access key for your root account, and using AWS Identity Access Management (IAM) to create temporary security credentials for applications that interact with AWS resources. It also explains how to manage IAM access keys.

“We take security very seriously at AWS, and we provide many resources, guidelines and mechanisms to help customers configure AWS services and develop applications using security best practices," an AWS spokesperson said. "When we become aware of potentially exposed credentials, we proactively notify the affected customers and provide guidance on how to secure their access keys.”

It seems that Amazon could do more, however. If security researchers can easily scan public sites like GitHub and find access keys, couldn't Amazon do the same, and save itself and its customers from these incidents by immediately deactivating the keys?

How To Protect Yourself

It may go without saying, but if you've uploaded code to GitHub, you might want to check whether you inadvertently included your credentials for anyone, including hackers, to access.
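One quick check is a pattern scan of your working tree before you push. The sketch below looks only for strings matching the well-known `AKIA` access-key-ID format, so treat it as a heuristic, not a complete audit:

```python
import pathlib
import re

# AWS access key IDs are 20 characters starting with "AKIA". This is
# a heuristic only; it won't catch secret keys or other providers.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_keys(root="."):
    """Return (path, match) pairs for anything that looks like a key ID."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for match in ACCESS_KEY_RE.finditer(text):
            hits.append((str(path), match.group()))
    return hits
```

Running something like this before every push is far cheaper than purging a key from a repository's history after the fact.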

"I'm sure many developers have made the mistake I've made," Moreno said. He and Howard offer the following advice.

  • Use two-factor authentication. Although this would not have helped either Howard or Moreno, additional security through a second type of authentication helps protect email and other accounts which might also hold your cloud keys. Take advantage of it.
  • Never hardcode your cloud computing credentials. Even if you're using a private source-code repository, that may change in the future. You may decide to contribute code to an open-source project, for example. "After looking through my code, I see that I had hardcoded my credentials and then commented out that code, later, when I moved the credentials to a preferences file," Moreno said. But, even that isn't good enough since preferences files are usually checked into repositories with code.
  • Use Identity Access Management. This feature from Amazon lets you create individual accounts that have limited privileges. If you want to create an application that stores its data in S3, for example, you can create an account that only has access to one S3 bucket. "If that app got compromised or those credentials got accidentally checked in to GitHub, then only that particular S3 bucket would be exposed," Howard said.
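The "never hardcode" advice above usually translates to reading keys from the environment at runtime. A minimal sketch, assuming the conventional AWS environment-variable names:

```python
import os

def load_aws_credentials():
    # Pull keys from the environment instead of embedding them in source
    # or in a preferences file that gets committed alongside the code.
    key_id = os.environ.get("AWS_ACCESS_KEY_ID")
    secret = os.environ.get("AWS_SECRET_ACCESS_KEY")
    if not key_id or not secret:
        raise RuntimeError(
            "AWS credentials not found in environment; refusing to run")
    return key_id, secret
```

Because the values never appear in any tracked file, there is nothing for a repository push to leak.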

And if that doesn't stop a hack, you'll still want to gather information about what happened. Mogull explained in a post how to take a snapshot and apply forensic analysis to a hack.

The most important advice Howard offers?

"Don't put your Amazon credentials into source code and then share that source code in a public place like GitHub!"

It seems obvious. But it's clear that thousands of developers haven't taken this obvious step.

Update: After we published this story, Joe Moreno received this email from Amazon:

Hello Joseph,

I have good news!  As a one-time exception, we've approved a credit of the charges for April for the amount of $5,360.23. This credit will offset the amount of your compromised resources!

Please make sure to monitor your AWS usage periodically to avoid unexpected charges. By selecting Bills in your Account Billing console, you can see current and past usage activity by service and region.

Photo courtesy of Joe Moreno

Why Pinterest Is The Google Competitor You Weren't Expecting

Tue, 04/15/2014 - 12:05



There are now nearly one billion "Place Pins" on Pinterest, the company said in an email Monday. And with that announcement, Pinterest moves one step closer to becoming a true search engine alternative to Google.

See also: Pinterest 'Place Pins' Put Travelers On The Map

Now, Pinterest's Place Pins aren't going to replace Google Maps anytime soon—or ever. But for users that would rather graze than pinpoint one exact spot, Place Pins are great for browsing various locales around the globe.

Place Pins are enhanced Pinterest images, better known as “pins,” with the addition of location metadata. Powered by Foursquare data, Place Pins let you give a pin a physical address that you and other users can find on a map. Pinboards can collect arbitrary travel hotspots, like this board of world beaches, along with their physical locations recorded on a map. 
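Conceptually, a Place Pin is an ordinary pin plus location metadata. A hypothetical sketch of that structure (field names are illustrative, not Pinterest's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pin:
    image_url: str
    description: str

@dataclass
class PlacePin(Pin):
    # Location metadata layered on top of a normal pin; in Pinterest's
    # case the venue data comes from Foursquare.
    latitude: float = 0.0
    longitude: float = 0.0
    address: Optional[str] = None

beach = PlacePin(image_url="https://example.com/beach.jpg",
                 description="Bondi Beach",
                 latitude=-33.8908, longitude=151.2743,
                 address="Bondi Beach, NSW, Australia")
```

The extra fields are what let a map view, or a location-aware search, pick Place Pins out of the billions of ordinary ones.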

A Very Pinteresting Search Dilemma

Pinterest's visual search engine is powered by millions of individuals that curate and organize its content according to what users deem most relevant. But with billions of pins, that’s a ton of data—and users simply can’t organize all of it alone. 

Place Pins are just one of the ways Pinterest is working on surfacing that data—by tying topic-specific metadata to various pins to make them show up in more relevant searches. Thanks to Rich Pins, which appear as normal pins with auto-generated captions, Pinterest can categorize those pins into verticals for movies, recipes, articles, products and places.

Before Pinterest and other Visual Web networks came along, we generally thought of the Web as a place where text begot text—you input some text, press search, and get a bunch of relevant results, also in text form. On Pinterest, however, a text-based search leads to relevant image results—without losing any of that context in the transfer. 

So far, Pinterest is trying to improve search by going vertical and providing more metadata for different types of pins, including locations. That way—ideally—searching for pins about backpacking through Europe won’t result in a bunch of European-made backpacks. 

See also: Why Pinterest Needs To Update Visual Search Stat

You might be thinking, "So what? Google has a visual search engine." But what makes Pinterest unique is that it's not just a visual search engine; it’s a user-curated one. That means to guarantee accurate search results, Pinterest needs to nudge users into actually using Rich Pins.

A Scheme To Get Rich Pins Quick

Pinterest’s Place Pins milestone is proof that users are adopting the Rich Pin feature in droves. Keep in mind, however, Place Pins are only five months old; Foursquare and Pinterest announced the partnership in late November 2013. 

The month before Place Pins was revealed, Pinterest filed a trademark infringement suit against PinTrips, a Pinterest clone built specifically for flight search. The suit was notable because Pinterest revealed in the legal complaint just how many pinners use Pinterest for travel: 

“Pinterest users have posted more than 660 million PINS in Pinterest’s ‘Travel’ category to date. Many people use Pinterest as a travel-planning tool.”

That number is nothing to sneeze at, but it’s assumed the ‘Travel’ category was built up over the course of Pinterest's five-year existence. Meanwhile, Place Pins have neared the one-billion mark in fewer than five months. 

We also have some data about Article Pins, another kind of Rich Pin. Last fall, the company said five million of its daily pins included article metadata. However, Place Pins are opt-in, while article metadata attaches itself automatically when a user pins from a news site. 

See also: Pinterest Cofounder Evan Sharp: How The Visual Web Helps You See The Future

If one thing is certain, it’s that Pinterest is not here to compete against Google. Pinterest's search weaknesses are Google's strengths, and likewise, what Google is bad at doing Pinterest is really, really good at. As co-founder Evan Sharp told ReadWrite:

“People don’t think about searching 'living room inspiration' on Google. They literally don’t do that because the results don’t work, and they become accustomed to not searching that. But on Pinterest that can be a really fruitful and valuable thing to search.”

In its exploration of visual search, Pinterest is attempting to meet its user base's unique set of requirements by creating a search engine that solves different problems—or at least solves them differently. So Pinterest may never reach Google's level of popularity, but when it comes to exploring the world through search, Pinterest's plan is looking awfully good.  

Photo by Kellee Gunderson

Mozilla Names Former Exec Chris Beard As Interim CEO

Mon, 04/14/2014 - 19:07



Mozilla has a new leader, at least in the short term. The custodian of the Firefox browser named former vice president of products and chief marketing officer Chris Beard as its interim chief executive officer.

Beard, who most recently was an executive-in-residence at the venture capital firm Greylock Partners, takes over the top spot at Mozilla from former chief technology officer Brendan Eich. Mozilla appointed Eich as CEO at the end of March, although his tenure was short-lived after a firestorm of controversy around his support of Proposition 8, the California initiative that banned gay marriage in the state until it was overturned by the courts.

Beard started at Mozilla in 2004 as VP of products before becoming chief innovation officer and later chief marketing officer. Even after leaving Mozilla in June 2013, he remained an advisor to the company. Beard will also join Mitchell Baker, Reid Hoffman and Katharina Borchert on Mozilla's board of directors.

Baker summed up the introduction of Beard on the company's official blog:

Mozilla is building these kinds of alternatives for the world. It’s why we’re here. It’s why we gather together to focus on our shared mission and goals. We intend to use recent events as a catalyst to develop and expand Mozilla’s leadership. Appointing Chris as our interim CEO is a first step in this process. Next steps include a long-term plan for the CEO role, adding board members who can help Mozilla succeed and continuing our efforts to actively support each Mozillian to reach his or her full potential as a leader.

Beard is the second top Mozilla executive to be named interim CEO, following Jay Sullivan. Sullivan, Mozilla's chief operating officer, held the CEO role after Gary Kovacs resigned in the spring of 2013, and left Mozilla after Eich was named CEO.

Mozilla still has plenty of work to do to reestablish its leadership. It also needs two more board members after three left when Eich was made CEO. Mozilla also needs to find a new chief technology officer to replace Eich. Li Gong is set to take over the role of COO this year.

This Vending Machine Uses Arduino To Tweet-Shame Your Sweets

Mon, 04/14/2014 - 14:37



Editor's note: This post was originally published by our partners at PopSugar Tech.

Meet the vending machine that tweet-shames candy bar buyers. Would you think twice about your sweet treat if you knew that an automated dispensary would tell the world about it?

A UK-based group of creative crafters called Nottingham Hackspace has revamped its snack dispensary into a tweeting machine that keeps its members accountable for what they eat. After a successful pledge drive, the group was able to buy the vending machine off eBay. The hackers then enhanced the purchase with Arduino, an open source electronics prototyping platform.

Using Arduino, they've modified the cash payment system with a small reader onto which members tap their cards. Each card contains an RFID (radio-frequency identification) chip that wirelessly transmits the member's identity and card balance to the vending machine.

The machine also communicates with the Nottingham Hackspace server, Holly, which then tweets your candy bar purchase. We're thinking that we'll need to get one of those in our office.
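Server side, the purchase flow might look roughly like this hypothetical sketch (function and parameter names are stand-ins, not the group's actual code):

```python
def handle_purchase(member, balance_cents, item, price_cents, tweet):
    """Deduct the item price and announce the purchase.

    `tweet` is an injected callable standing in for the server's
    Twitter client; in the real setup that role belongs to Holly.
    All names here are illustrative.
    """
    if balance_cents < price_cents:
        return balance_cents, None   # insufficient funds: no tweet
    message = f"{member} just bought a {item} from the vending machine!"
    tweet(message)
    return balance_cents - price_cents, message

sent = []
remaining, msg = handle_purchase("alice", 500, "Twix", 120, sent.append)
```

Passing the tweeting function in, rather than calling Twitter directly, keeps the accountability logic testable without a live account.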

Image courtesy of YouTube user Computerphile

More stories from PopSugar Tech:

The Little Gadget That Goes Grocery Shopping For You
7 Lesser-Known Subreddits to Bookmark
Xbox Is Recruiting Hollywood to Make the Next House of Cards
The Most Epic Minecraft Tributes on the Web
You Will Never Text and Drive Again After Watching This PSA

Who Should Buy Google Glass?

Mon, 04/14/2014 - 13:04



For one day only, Google will put its futurewear optics on sale to the general public. On Tuesday, April 15 at 9am ET, Google will be "opening up some spots in the Glass Explorer Program,” allowing any U.S. resident to buy the before-its-time wearable computer known as Google Glass.

At $1,500, the device’s early adopter tax remains as steep as ever, although this time around, Google will toss in a pair of its handsome new prescription-compatible frames or the original shades that shipped with the very first Glass kits a year ago (much to the chagrin of previous Glass buyers who only scored the awkward shades).

After a year of sating early adopters’ collective appetite, it’s not clear who out there still hasn’t had a shot at buying Glass. But for anyone still “waitlisted,” the opportunity sounds like a direct ticket to Glasstown. Here’s Google’s logic behind the one-day sale, as explained on its Glass Google+ account:

Our Explorers are moms, bakers, surgeons, rockers, and each new Explorer has brought a new perspective that is making Glass better. But every day we get requests from those of you who haven’t found a way into the program yet, and we want your feedback too. So in typical Explorer Program fashion, we’re trying something new.

Unfortunately, beyond the sky-high price, one big caveat remains: Glass is open to U.S. residents with a U.S. shipping address only.


Who Should Buy Google Glass

Public perception of Glass vacillates between intense curiosity and something resembling a cartoonish eye-roll, but I maintain that Glass is a remarkably cool technology. The device remains cost-prohibitive for a large swath of the population, but if you, your organization or your company can stomach the price, there are still valid reasons to consider signing up come Tuesday.

Developers: Ostensibly, the majority of folks who’ve signed up to date are building something cool with Glass. Due to limitations of the device, its adopters or perhaps our collective imaginations, there’s still plenty of room for innovation. For every mind-blowing Glass app out there, there are twenty that do something incredibly boring.

Like virtual reality, augmented reality directly alters the way we interact with the world around us—and that’s really, really cool. If Google can lower the barriers to entry, getting Glass into the hands of a more geographically and socioeconomically diverse population, I suspect innovation would soar. In the meantime, Glass needs more people who can build things outside the box.

Science: There are so many incredible applications for the sciences that I don’t even know where to start. Medicine, astrophysics, geology, botany, science education—there’s a lot of possibility here.

Even on a more meta level, we don’t yet know how augmented reality changes the way we think, both from a social perspective and a neural imaging one. Researchers, we need you!

Photographers/Artists: Silicon Valley can only take us so far. Glass's portability, low profile (well, in some situations anyway) and hands-free operation are transformative for experimental photographers. You really can interact with people and environments in a totally different way by capturing moments with Glass. And while photographers are already a thriving Glass-wearing tribe, interactive, experimental artists should consider turning the wearable into a sociotechnological medium.

Real Explorers: Google throws this term around about all Glass owners, but we need more real explorers. You know, the people who trek to the world’s most fascinating, extreme and remote locales. Hey, even I went on a hike or two. Explorers of the world: Get a Patagonia sponsorship and go forth!

Google Glass Remains An Expensive Habit 

Google first made Glass available only to attendees of its 2012 Google I/O developers conference (including press, like us), where the device was first revealed. After a long wait, Google then opened Glass sales to successive waves of so-called “Explorers” through a hashtag contest known as #ifihadglass before allowing each existing Explorer to dole out three “invites” each. Roughly a month later, Google quietly slipped a Glass waitlist onto its website, though it’s unclear how many of those orders have been filled.

After all of those rounds of availability, it’s hard to imagine who out there still hasn’t had a chance to buy Glass. Still, encounters with Google’s augmented reality visor remain rare outside of San Francisco, the device’s indigenous habitat.

We look forward to seeing what happens when (and if) Glass takes its Silicon Valley blinders off.

Windows Phone 8.1: The Good, The Bad & The Ugly [Review]

Mon, 04/14/2014 - 12:01



Windows Phone is finally all grown up. Well, almost.

To date, Microsoft’s mobile operating system has been an attractive, if somewhat frustrating, member of the Windows family. Windows Phone did all the things a smartphone operating system should—like make calls and texts, download the most popular apps and listen to music and take pictures.

But the experience was limited and rough at the edges. Notifications and settings were often difficult to find, many popular apps weren't available and the hub-and-tiles interface was stagnant and boring, not to mention identical across phones from different carriers and manufacturers.

Then there were the little things that frustrated users. Instead of little pockets of delight, which both Apple and Google offer users, Windows Phone offered little pockets of frustration. Why, for instance, was it impossible to adjust the phone ringer and notification volume without changing music-playback volume? Why did it take three updates for Microsoft to add a screen auto-lock option? Why was the file manager basically non-existent?

With Windows Phone 8.1, Microsoft has begun to polish the rough edges of its rough-and-ready smartphone OS.

The Good

Two aspects of Windows Phone 8.1 pop out immediately: customizable backgrounds and color themes, and the new Cortana personal assistant.

See also: The New "One Microsoft" Is—Finally—Poised For The Future

The background to that single homescreen is finally customizable. The standard black background is gone—it can now be “light or dark” as a theme across the operating system—and users can now choose from among 21 highlight color options that permeate the entire design of the phone. Users can now place their own photos behind the homescreen, where they're overlaid by hubs and tiles but visible with a parallax motion when scrolling up and down. 

Windows Phone 8.1 still just has the two homescreens—the standard hub-and-tiles plus the apps and settings screen to the right. But the new customization makes for a nice, if relatively minor, change.

Cortana, by contrast, is a major feature update for Windows Phone. It's essentially a mobile extension of the Bing search engine. It can also perform local functions on the phone not associated with Internet search. 

Cortana set up

Natural language search works very well with Cortana, thanks to excellent voice recognition software. Microsoft Research has worked for years on neural networks for speech recognition as well as creating a hybrid model where speech is both recognized locally on the device and in the cloud. This is an area of strength for Microsoft, and it shows in Cortana.

Cortana is essentially Microsoft’s answer to Google Now (and, to a lesser extent, Apple’s Siri). Google Now learns what you like, where you are and what you do through your search history and behavior with the device (like looking up directions). It then auto-delivers contextual information to your phone without your actually searching for it.

Cortana is very similar in that it tracks your interests (such as the people you interact with most, news subjects, etc.) in a notebook that updates itself. Users can also set reminders and calendar integrations within Cortana. 

Cortana interests and home screen

Microsoft wants users to believe that it has put the “person” back into personal assistant and that Cortana will behave like an actual human assistant would. The entire system learns from itself and Microsoft promises that its core intelligence will grow once a multitude of people use it.

Here are some other good bits introduced in Windows Phone 8.1:

  • Action Center: Just like iOS and Android, Microsoft has instituted a pull-from-top notification bar in Windows Phone that shows current messages and allows for quick access to settings.
  • Word Flow Keyboard: This is a Swype-like keyboard input—the kind of thing that's long been a staple of Android. It's not new or special as a technology, but a welcome addition to Windows Phone.
  • Volume Control: Finally, separate volume controls for the ringer/notifications and media playback.
  • Storage Sense: WP 8.1 offers a new file management system with SD card support.
  • Dual-SIM: The OS will support the use of two SIM cards, for travelers who change carriers at the border or who would otherwise carry work and personal phones.
  • Battery Saver: This feature provides app-by-app management and monitoring, much the way that Google does with Android.
  • Windows Store: A refresh allows users to buy apps once and share them across Microsoft devices. It also provides automatic app updates.
  • Internet Explorer 11: While IE 11 is not special, it does have a host of improvements in Windows Phone 8.1 including the ability to tag websites as live tiles on the homescreen, a private browsing mode, pre-rendering of websites for performance enhancements and Cortana integration.
The Bad

Even though users can now customize their own homescreens, Microsoft has no plans to let individual phone makers re-skin the operating system the way that Android manufacturers can add a customized interface to a device or let users install their own launchers.

It's one thing to give users a nominal amount of customization; it's another entirely to fully hand over the keys of the operating system to developers and manufacturers (and, by extension, users). Microsoft is still in full control of the user interface and experience of Windows Phone.

Cortana has no hands-free voice activation. In later versions of Android on certain devices (like the Moto X or Nexus 5), you can activate voice search or device commands without touching the device by saying, “OK Google.” With Cortana, you have to press the search button on the phone or the microphone button within the app to start a search.

See also: Why Microsoft's Universal Windows App Store Is Huge For Developers—And Consumers

The default email app for Windows Phone is Outlook. While it allows for multiple inboxes and third-party integration, any filtering you may have set up in those other inboxes does not carry over to the Windows Phone email app. All emails enter one stream, regardless of how you might have them organized or separated in your other email accounts.

If you have a Nokia device, Windows Phone will now give you two different maps apps. You get Microsoft Maps, with the new Local Scout integration for nearby shopping plus improved navigation and Cortana integration. The device will also have Nokia Here maps, which is almost the same thing but with better navigation controls. No other mobile operating system ships with two different default maps products.

Quiz: Which is Nokia Here and which is Microsoft Maps?

Wi-Fi Sense is not an intuitive feature and may just be downright dangerous. As Microsoft describes it, Wi-Fi Sense "automatically attempts to connect to over a million free Wi-Fi hotspots around the world and to all of your friends’ Wi-Fi networks."

But not every Wi-Fi hotspot is created equal, and some are potentially malicious. Nor do your friends necessarily want you poaching off the Wi-Fi in their homes, which carries its own risks. Windows Phone 8.1 has several data-optimization techniques built in to save on cellular plans and improve performance, but Wi-Fi Sense may just be a bad idea. 

The Ugly

So, let’s just get this out there: Microsoft can iterate on its “Metro” interface as much as it wants, but the fact of the matter is that the standard interface for Windows Phone devices is, by definition, limiting. It limits customization, it limits app organization, it limits choice.

By extension, the hubs-and-tiles-only approach limits the popularity of Windows Phone. If you don’t like it (a matter of individual taste), then you are not going to like any Windows Phone ever made, Nokia or otherwise.

Microsoft made some good strides in customization in Windows Phone 8.1, but it's all essentially window dressing. Microsoft learned from the rollout of Windows 8 that people do not like being forced into one standard interface—for instance, the way Microsoft originally hid the desktop and eliminated the Start menu.

If Microsoft let users skip past the hub-and-tiles homescreen and the apps drawer/settings screen to reach more customizable homescreens, it would go a long way toward giving them an experience on par with what people expect from iOS and Android.

Because, let’s face it, a lot of Windows Phone 8.1 has been borrowed or copied from iOS and Android in the first place. Cortana is an imitation of Google Now (sometimes better, sometimes worse) and, to a certain extent, Siri. The pull-down notifications in Action Center are extremely similar to those in Android. Battery Saver and Word Flow are longstanding Android features, while automatic app updates have been on both iOS and Android since the middle of 2013 or before. 

So why not flip the script on force-feeding Metro to Windows Phone and let it act as its own type of Start menu with layers of richness underneath? 

The Stage Is Set To Expand Windows Phone

WP 8.1 addresses much of what Microsoft, developers and consumers wanted from the mobile operating system. Coupled with Microsoft’s evolved device strategy and its soon-to-be-finalized acquisition of Nokia, the latest version of its mobile OS establishes a decent foundation for the company's next steps. 

See also: Introducing Cortana, Plus 8 Other Things To Know About Windows Phone 8.1

Microsoft is confident that its Windows Phone devices can make gains on both iOS and Android. To that end, it has finally brought WP 8.1 largely up to par with its rival OSes while maintaining its own unique aspects. Now, like everything else the company has done to align itself with the mobile world it joined so late, Windows Phone just needs to catch on with the buying audience.

The developer preview for Windows Phone 8.1 is available today and instructions on how to download can be found here.

7 Heartbleed Myths Debunked

Mon, 04/14/2014 - 11:34



People are on edge thanks to Heartbleed, a coding mistake that inadvertently laid waste to the security of many big online services.

See also: What You Need To Know About Heartbleed

The revelation this week shocked the world, and new reports about Heartbleed only seem to inspire more worries, not fewer. The unfortunate result is a lot of misinformation going around. 

Care to join me in a little debunking session? Here are some of the doozies I heard this week, and why they’re not true.

Myth #1: Heartbleed Is A Virus

This OpenSSL bug is not a virus. It's a flaw, a simple coding error in the open-source encryption protocol used by many websites and other servers.

When it works as it should, OpenSSL helps ensure networked communication is protected from eavesdropping. (One clue that a website may be using it is an “HTTPS” in the Web address, with the extra “s”—although other forms of security do the same thing.) 

So it’s a bug, a security hole that was accidentally left open, allowing others to surveil a communication or login event, as well as pull confidential data or other records out. 

Myth #2: The Bug Only Affects Websites

Potential security breaches of servers and routers are massive issues, since they allow the greatest amount of data to leak. And so websites, online services and network servers tend to get the lion’s share of press. But they’re not the only potential targets. 

The clients that communicate with those servers—i.e. your phones, laptops and other devices used to jump online or connect to other networks—are at risk too due to what’s increasingly being called “reverse Heartbleed.” What that means is that the data stored in your device’s memory could be up for grabs.

See also: Heartbleed—What's Next? Check Your Clients, Routers, Virtual Machines And VPNs

“Typically on the client, the memory is allocated just to that process that’s running. So you don’t necessarily get access to all the processes,” David Chartier, CEO of Codenomicon—the Finnish security firm that co-discovered Heartbleed—told ReadWrite. “[But] you can still leak contents of emails, documents and logins.” 

The idea of unauthorized access to accounts and system settings can be particularly disconcerting for smart home users. I reached out to startups like SmartThings and Revolv, as well as Zonoff—the company powering Staples Connect’s smart home system—and iControl, which supplies the technology for services like Time Warner Cable, ADT, Comcast, Cox, Rogers and others. 

SmartThings and Revolv have both patched the bug by updating their software to the latest version of OpenSSL. iControl reported back to me, saying that it doesn’t use OpenSSL. At press time, Zonoff wasn’t available for comment. 

Myth #3: Hackers Can Use It To Remote Control Your Phones

By all indications so far, a hacker can’t use Heartbleed to tunnel in directly and take control of your smartphone. What’s at stake is the data stored in its memory, at least on devices that haven't been patched with the latest version of OpenSSL. 

Even if it were possible, iPhones and most Androids are immune to Heartbleed, with one big exception—Android 4.1.1. Google, however, has patches going out to cover this version of its mobile operating system. Overall, the fact that iOS and Android are largely unaffected has to come as a relief, particularly given recent iOS security concerns on other fronts.  

Of course, the apps these phones run might be another story. BlackBerry acknowledged that BBM for iOS and Android, for example, is vulnerable to Heartbleed. Attackers still wouldn't have been able to get into the device memory itself, but they might have been able to listen in on insecure chats in progress. 

Myth #4: Windows XP Users Are Screwed Because Microsoft Abandoned Them

Completely false. Sure, Microsoft said it would stop supporting Windows XP just as Heartbleed panic swept across the land. But Windows does not use OpenSSL.

That’s great news for the loads of PCs out there that still run the 14-year-old Windows operating system—which, at press time, made up more than a quarter of all running desktops. Otherwise, they would be stranded with Heartbleed, with no hope of a security update. 

See also: Goodnight, Windows XP: Microsoft Terminates A Surprisingly Durable Operating System

People running XP, indeed all Windows users, get the company’s own encryption component called Secure Channel, also known as SChannel, and it's not susceptible to this particular security bug. However, it’s worth noting that XP users won’t get any further software support or updates for SChannel either. 

The exceptions are Windows Azure users running Linux in Microsoft's cloud service. These distributions rely on OpenSSL, so Microsoft urges these users to contact the distribution providers for the updated software. As for Mac OS X, Apple has officially declared it is not vulnerable to Heartbleed. 

Myth #5: All Of Our Banks Are Open For Heartbleeding

The security flaw is serious, but it can't pry open the virtual vaults at our top banks. In fact, American Banker, a news site for bank technologies, reports that no major banks are susceptible.

These companies have all announced that they don’t use OpenSSL, so they aren’t at risk:

  • Bank of America
  • Capital One Financial
  • JPMorgan Chase
  • Citigroup
  • TD Bank
  • U.S. Bancorp
  • Wells Fargo
  • PNC Financial Services Group 

Of course, there are many more banks and credit unions out there, which is why the Federal Financial Institutions Examination Council (FFIEC) urged "financial institutions to incorporate patches on systems and services, applications, and appliances using OpenSSL and upgrade systems as soon as possible to address the vulnerability." 

Furthermore, CNET’s check of highly trafficked websites shows that PayPal is not vulnerable to Heartbleed either. Neither are these major retailers, where people may store debit or credit card information: 

  • eBay
  • Groupon
  • Target
  • TripAdvisor
  • Walmart

(Looks like Target learned a thing or two from its major security breach late last year.) 

So no, the Heartbleed glitch doesn't throw open the doors of these banks and major stores, at least not directly. However, just because these sites and accounts aren’t subject to these hacks, it doesn’t mean that data is entirely safe. (See below.) 

Myth #6: My ____ Site/Service Wasn’t At Risk Or Issued A Patch! I’m Safe Now. 

Not quite. Heartbleed is insidious because it leaves no trace. That means there’s no way to tell if your information was stolen previously from a site or service that has now fixed it. 

As for places that weren’t vulnerable to begin with, your accounts there may still be at risk, if that login information was stored or sent somewhere that was breached. 

Here’s what it boils down to: You’ll want to change passwords everywhere, but hold off for now on sites that haven’t patched the hole yet. But be sure to do it once they’ve updated their software. And don’t forget to check your credit, account statements and online activity to make sure no unauthorized entries appear. 

Myth #7: NSA Has Been Using Heartbleed To Spy On Us

Citing unnamed sources, Bloomberg accused the National Security Agency of knowing about Heartbleed and keeping it quiet. But that's not all. The agency wasn’t simply aware of the bug, says the report—it allegedly exploited the flaw for two years, using it to spy on Americans. 

In light of the PRISM revelations, it’s all too easy to believe. Even before the accusation, suspicions were high that the NSA was involved, with plenty of tweets flooding Twitter questioning the agency's knowledge. It was as if a chorus of "Of course the NSA's involved" rang throughout the Web. 

But the NSA flatly denies it. Not only did it say it didn't use the security hole, it claimed ignorance of its existence. 

Sure, there's no way to know whether the NSA is being honest in its denial; the agency's credibility isn't exactly at an all-time high. Still, there's no hard proof that it actually exploited Heartbleed for surveillance. So for now, it's going in the "myth" pile. 

See also: NSA Accused Of Exploiting Heartbleed For At Least Two Years

It's difficult to imagine any federal authority or agency not being aware of such a serious security weakness that affects so many. But it's not totally impossible. Just ask the Canada Revenue Agency. That government branch, which also used OpenSSL, had to shut down parts of its website temporarily because it was found to be vulnerable to Heartbleed as well. This came just weeks before the Canadian tax deadline, to boot.  

Have you heard any Heartbleed myths or untruths? Deposit them in the comments, so we can all debunk them. 

Images courtesy of Flickr users David Goehring (feature image), Lee Davy (malware), greyweed (Android zombie), Anonymous Account (bank vault), and Tony Fischer (spies). 

How Codenomicon Found The Heartbleed Bug Now Plaguing The Internet

Sun, 04/13/2014 - 14:27


See also: What You Need To Know About Heartbleed, A Really Major Bug That Short-Circuits Web Security

You know that song lyric about the first cut being the deepest? It’s complete rubbish. Heartbleed taught us all that. Because the more we learn about this online data-security wound, the deeper that threat seems to go. 

Discovered independently by Google engineer Neel Mehta and the Finnish security firm Codenomicon, Heartbleed has been called “one of the most serious security problems to ever affect the modern web.” I spoke with Codenomicon CEO David Chartier, who led the Finnish team that named and outed Heartbleed, to find out more about how his team discovered it, and how deep those vulnerabilities could go. (I've requested an interview with Mehta via Google, but as of this writing, have had no response so far.)

We All Bleed For Heartbleed

Codenomicon's David Chartier

Heartbleed actually started out really small. In fact, it was just a slight, accidental gaffe committed by one coder. Had it been caught immediately, it would have required just filling in a missing bit of code. But it wasn’t. And now, that error has propagated to compromise much of the Internet. 

The main problem is that it affects OpenSSL, a widespread open-source encryption library used by as much as two-thirds of Web servers. The other issue is that it went largely undetected for two years—plenty of time to propagate across the Web and leave sites, services and accounts big and small open to infiltration. (As the National Security Agency has reportedly done, although the White House has denied that.)

See also: Heartbleed—What's Next? Check Your Clients, Routers, VPNs And Virtual Machines

The initial flood of news reports focused on the hackability of user logins, financial information, emails, photos, medical records, and much more. But Heartbleed’s reach could be bigger than anyone imagined. The OpenSSL flaw affects any server or client that uses it, and that means it could span a huge number of things—including routers and phones, as well as citywide or municipal infrastructure, such as emergency services, transit and utilities. 

How Heartbleed Surfaced

Codenomicon first discovered Heartbleed—originally known by the infinitely less catchy name “CVE-2014-0160”—during a routine test of its software. In effect, the researchers pretended to be outside hackers and attacked the firm itself to test it. 

“We developed a product called Safeguard, which automatically tests things like encryption and authentication,” Chartier said. “We started testing the product on our own infrastructure, which uses OpenSSL. And that’s how we found the bug.” 

The engineers found they could burrow in despite the cryptographic security layer, and were shocked at how much was up for grabs. They could access memory and encryption certificates, and pull user data and other records. “This is when we understood that this is a super significant bug,” Chartier said. 

The revelation was startling, not only because of the access this hole could allow, but because of its insidious nature, Chartier said. “On top of that, we couldn’t find any forensic trail that we were taking this data.” The hack was completely untraceable.

See also: Protect Yourself Against Heartbleed, The Web's Security Disaster

But how did something this egregious and widespread go undetected for two years? The error is buried deep in the code. The only reason Chartier’s team found the glitch is that Codenomicon runs a rigorous testing process with a very large number of test cases to find weaknesses, just like hardcore hackers do, Chartier explained.

“The vulnerabilities you find after many tests are often more interesting than the ones you find right away," he said. "When you find one that’s difficult, it’s more interesting [to hackers] because they can write an exploit, and it will take more time to be found.” 

The odds of finding it were slight, yet Google's Mehta discovered the Heartbleed bug almost simultaneously. Chartier chalks it up to happenstance. “Google’s one of the leading companies in the world, and it's constantly testing for vulnerabilities,” he said. It takes security testing so seriously, it even offers a bounty for exploits on projects like Chrome. 

But not every company takes security that seriously. 

A Fail To Remember

Codenomicon, being a Finnish company, alerted the Finnish National Security Cyber Center of its findings. Commonly referred to as “CERT,” the group urged the OpenSSL Project to provide an update and release it to the public.

Since then, the news has been circulating in both tech and mainstream media outlets, and Chartier has been impressed with how online communities have disseminated the Heartbleed information. “We’re better off today than we were a week ago, because of getting the word out there,” he said. “It’s making the Internet safer and more secure.” 

Unfortunately, the Web is not where this problem ends. Other networks also need to apply the software update on both server and client devices. This includes gadgets like phones, computers and other communication devices. It also includes numerous other technologies in the broader world, particularly as they relate to the Internet of Things. 

See also: The Internet Of Things Might Try To Kill You

Because Heartbleed affects OpenSSL, which is widely adopted, it can affect an extensive range of categories, including connected homes, citywide transportation, emergency services, power grids and other utilities—pretty much any large-scale connected system. This can make locking all of them down tricky. 

Organizations must update to the patched version of OpenSSL, revoke encryption certificates that authenticate their sites and issue new ones. But systems that haven't gone through security and system testing may not be set up to handle update protocols efficiently. “There’s a lot of stuff out there that was built a long time ago,” said Chartier. “It wasn’t built for the type of stuff that’s coming out today.”
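One practical first step in that audit is checking the installed OpenSSL version string: releases 1.0.1 through 1.0.1f carry the bug, while 1.0.1g and later, along with the older 1.0.0 and 0.9.8 branches, do not. A minimal Python sketch (the `heartbleed_status` helper is illustrative, not a standard tool) that classifies the output of the `openssl version` command:

```python
import re

def heartbleed_status(version_string: str) -> str:
    """Classify an `openssl version` string against the Heartbleed range.

    OpenSSL 1.0.1 through 1.0.1f are vulnerable; 1.0.1g and later,
    and the 1.0.0/0.9.8 branches, are not.
    """
    m = re.match(r"OpenSSL 1\.0\.1([a-z]?)", version_string)
    if m and (m.group(1) == "" or m.group(1) <= "f"):
        return "vulnerable: upgrade to OpenSSL 1.0.1g or later"
    return "not affected by Heartbleed"

# Sample version strings as printed by `openssl version`:
print(heartbleed_status("OpenSSL 1.0.1f 6 Jan 2014"))
print(heartbleed_status("OpenSSL 1.0.1g 7 Apr 2014"))
```

In practice the string would come from running `openssl version` on each host; note that a patched library alone isn't enough, since services must also be restarted against it and certificates reissued, as described above.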

Security tests are essential for critical infrastructure, but unfortunately, there’s still a lot of room for improvement. “A lot of companies do a little performance testing, to see if [software] does what it’s supposed to,” he said. “But they don’t do enough security testing.”

Chartier thinks it could take up to a year or two before all or most of the old versions of OpenSSL out there get updated. In the meantime, things may get tricky.

At this point, many—though not all—of the largest vulnerable sites on the Web have patched OpenSSL against Heartbleed. With some of the smaller service providers and businesses, it may take a little more time. The most prudent users may want to assume that their data was compromised, and change those passwords on every site and service that has been secured. The Codenomicon chief recommends going through each provider, one by one, and “finding out if they used OpenSSL, and if they patched it.” 

As for the companies and organizations, Chartier urges them to adopt more stringent security standards. “You need to put this type of testing into your build cycle,” he said. That’s the best chance at mitigating the risk—so threats don’t penetrate quite so deeply. 

Feature collage by Adriana Lee for ReadWrite using images courtesy of Flickr user Marjan Krebelj; heart lock image by Flickr user Alonis; David Chartier image courtesy of Codenomicon

NSA Accused Of Exploiting Heartbleed For At Least Two Years, But Agency Denies

Fri, 04/11/2014 - 19:30



The National Security Agency exploited the massive security vulnerability called Heartbleed for the past two years, and used the flaw in OpenSSL to intercept private data, according to a new report from Bloomberg.

Heartbleed is a newly-discovered flaw in OpenSSL that makes your private information—usernames, passwords, bank statements, etc.—vulnerable to potential hackers, and apparently, government snoops.

Update: The NSA denied any knowledge of Heartbleed before it was made public this week. The National Security Council released an official statement on Friday afternoon, saying claims that the NSA had prior knowledge of Heartbleed are wrong. 

Statement: NSA was not aware of the recently identified Heartbleed vulnerability until it was made public.

— NSA/CSS (@NSA_PAO) April 11, 2014

Image by Flickr user Erokism

Appellate Court Reverses Conviction Of Hacker “Weev”

Fri, 04/11/2014 - 17:35



Today a federal appeals court reversed and vacated the conviction of Andrew “Weev” Auernheimer. 

Weev received a 41-month prison sentence in March 2013 after being convicted of violating the Computer Fraud and Abuse Act and committing identity fraud for his actions in 2010 when he exposed a security hole at AT&T by hacking into the company’s public server and releasing thousands of iPad customer emails to Gawker.

See also: Hacker Crackdown: Blame AT&T's Crappy Security, Not Weev

Weev’s trial was seen as a landmark case for the vaguely worded CFAA, a 1986 law that defines "unauthorized access" to computers so broadly that prosecutors can use the law to charge relatively harmless acts as federal felonies.

The Third Circuit Court of Appeals in New Jersey chose not to address issues related to the CFAA raised in Auernheimer's appeal, and instead vacated his conviction on the simplest possible grounds—that he was tried in the wrong court:

Although this appeal raises a number of complex and novel issues that are of great public importance in our increasingly interconnected age, we find it necessary to reach only one that has been fundamental since our country’s founding: venue.

It's not immediately clear whether federal prosecutors will pursue charges in another state, or if Fifth Amendment protections against "double jeopardy"—i.e., being tried twice for the same crime—would prevent that.

Lead image courtesy of pinguino k on Flickr

Heartbleed—What's Next? Check Your Clients, Routers, Virtual Machines And VPNs

Fri, 04/11/2014 - 17:23



What we thought was secure—Web servers, routers, virtual machines, virtual private networks, and even client software—isn't so safe, after all.

See also: What You Need To Know About Heartbleed, A Really Major Bug That Short-Circuits Web Security

Just about everything that relies on the open source cryptographic library OpenSSL is compromised by the Heartbleed bug, which can leak the contents of memory on these servers and devices and compromise your security.

Heartbleed can expose data in random 64KB “heartbeats,” and while each leak is limited to 64KB of memory at a time, an attacker can keep connecting to collect more data, which can include sensitive data like passwords, private encryption keys, and website cookies.
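The over-read described above can be illustrated with a small, purely conceptual Python sketch (this is not the actual OpenSSL code, just a simulation of the missing bounds check):

```python
# Conceptual simulation of the Heartbleed over-read: the vulnerable
# server trusts the length field in the heartbeat request and echoes
# back that many bytes, even when the real payload is shorter, leaking
# whatever sits next to the payload in memory.

def handle_heartbeat(request_payload: bytes, claimed_length: int,
                     process_memory: bytes) -> bytes:
    # Vulnerable behavior: copy `claimed_length` bytes starting at the
    # payload's position, without checking the payload's actual length.
    start = process_memory.find(request_payload)
    return process_memory[start:start + claimed_length]

# Memory adjacent to the payload happens to hold a secret.
memory = b"PING" + b"secret-session-key..."

honest = handle_heartbeat(b"PING", 4, memory)    # echoes just the payload
leak = handle_heartbeat(b"PING", 64, memory)     # leaks adjacent memory
print(honest)
print(leak)
```

An honest request gets back only its own 4-byte payload; a malicious one claiming 64 bytes receives the payload plus whatever secrets follow it in memory, which is why repeated requests can eventually sweep up passwords and private keys.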

While the Heartbleed bug was initially known to compromise secure Web servers, the list of affected devices has extended to routers and other products from Cisco and Juniper Networks, virtualization software from VMware, OpenVPN's private networking software, some Oracle software (though the full extent is not yet clear), and may extend to devices like phones.

And then there’s the Trojan horse, "Reverse Heartbleed."

Vulnerable From Within

Heartbleed's blade cuts across both servers and clients. It can be used in reverse, by tricking a website's server into connecting to you, according to Brad Buda, CTO and founder of Meldium, a San Francisco-based firm that sells account and password protection software.

Meldium has created a website where you can test whether your client’s security has the reverse Heartbleed vulnerability.

“Many organizations have hosts which initiate outbound SSL connections (pulling updates, fetching images, or pinging webhook URLs),” the site states. “These hosts are often on a separate infrastructure (with different SSL dependencies) within the organization firewall. These hosts may be vulnerable to the reverse Heartbleed attack.”

The post lists potentially vulnerable clients, including traditional clients and open agents.

  • Traditional clients include browsers, applications that use http APIs, and applications loaded onto a computer via DVD, such as your friendly word processor or office application, plus mobile apps on iOS and Android. All of these clients can be affected, if they haven't updated their OpenSSL.
  • Open agents are clients an attacker can drive remotely; these agents are used by social networks, file sharing applications like Dropbox, and web spiders. Until yesterday, Pinterest was vulnerable, but its security team was "very responsive," patching the flaw and working "with us to polish the test tool," Buda said.

To understand how open agents might work, consider Facebook and Twitter. Though neither is vulnerable, they both have user interfaces that easily illustrate how Heartbleed can exploit client vulnerabilities.

An attacker can trick an open agent into fetching a URL that’s malicious in some way. This threat may take time to uncover, Buda said, because people are only looking at the problem from the point of view of the secure Web server, and are not actively searching throughout their infrastructure for vulnerabilities.

Meticulous Testing

Any software that runs OpenSSL—including servers and clients—can be problematic. It’s not built into any of the major Web browsers like Chrome or Firefox, but it is used in iPhone applications and back-end server applications. Reddit, for one, moved fast to patch its servers when Heartbleed first came to light, but it was still vulnerable to the bug. 

“You need to look at every part of the system that can talk to the outside,” Buda said.
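As a first pass at that advice, you can check whether each host's OpenSSL build falls in the publicly documented affected range. The sketch below is an assumption-laden heuristic, not a verdict: versions 1.0.1 through 1.0.1f were affected and 1.0.1g carried the fix, but many distributions backport patches without bumping the version string, so a match here only means "investigate further."

```python
# Rough classifier for whether an OpenSSL version string falls in the
# Heartbleed-affected range (1.0.1 through 1.0.1f). The older 0.9.8 and
# 1.0.0 branches never had the heartbeat bug, and 1.0.1g is fixed.
# Distributions sometimes backport fixes without changing the version,
# so treat a True result as "needs checking," not proof.
import re

def possibly_vulnerable(version: str) -> bool:
    m = re.match(r"1\.0\.1([a-z]?)$", version.strip())
    if not m:
        return False          # not on the 1.0.1 branch at all
    letter = m.group(1)
    return letter <= "f"      # 1.0.1 (no letter) through 1.0.1f

for v in ["1.0.1e", "1.0.1g", "0.9.8y", "1.0.1"]:
    print(v, possibly_vulnerable(v))
```

You would feed it the version reported by `openssl version` on each host, remembering that clients bundling their own copy of the library need to be checked separately from the system install.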

See also: Protect Yourself Against Heartbleed, The Web's Security Disaster

Since Meldium published its Reverse Heartbleed tool, people have been using it to help illuminate the sites that still need patching. Buda admits the user interface for the Reverse Heartbleed tool itself needs a little fixing, but in general, you "press the big blue button" and the tool will generate a URL with malicious code. If you copy and paste that URL into an agent (like a Facebook or Twitter status update), the agent will try to fetch the URL. You'll know you're safe if you receive an SSL connection error.

Servers are typically thought of as the defensive perimeter while the inside is considered safe, but Buda said you need to examine every part of your system that communicates with outside computers, servers, or systems.

When Buda heard about Heartbleed, he said Meldium tested its own servers.

"We were vulnerable to the normal attack and patched it right away," he said. "It turned out that patch covered us."

But in researching Heartbleed on Twitter, Buda saw a tweet that suggested the attack could theoretically be reversed. "I can't claim credit for inventing this," Buda said. "We wanted to be the first to have a working exploit," though he built it with the hopes that the community would use it and help root out all the systems that need to be patched.

Router Vulnerabilities

Routers, which are used in both public and private networks, including homes, can also be breached.

As many as 65 different Cisco products are known to be vulnerable to the Heartbleed bug, and others are still being evaluated. Many of the company's most popular products, including Webex Messenger, Jabber client, Cisco IOS XR, Telepresence System 1100, Video Surveillance Media Server Software and Unified Operations Manager, were found to be susceptible to Heartbleed.

“Multiple Cisco products incorporate a version of the OpenSSL package affected by a vulnerability that could allow an unauthenticated, remote attacker to retrieve memory in chunks of 64 kilobytes from a connected client or server,” according to the Cisco alert.

Juniper Networks also alerted customers of products that are compromised, though you need an account to log in to get the information.

Virtualization Opens Many Doors To Heartbleed

VMware, for its part, lists more than 20 products that may be vulnerable to Heartbleed, including ESXi 5.5, vCloud Networking and Security, and the VMware Horizon View virtual desktop client for several operating systems, including Windows, Macintosh, iOS, and Android.

Citrix is still evaluating how its products are affected. Netscaler is safe, as are released versions of Citrix XenServer. However, some virtualization products are vulnerable, including Citrix XenMobile App Controller, and Citrix advises users of its Web Interface to check whether deployed servers are vulnerable.

"Other Citrix products, including GoToAssist, GoToMeeting, GoToTraining, GoToWebinar, OpenVoice, ShareFile, as well as our Citrix Labs products (Convoi, Talkboard), are not vulnerable," Citrix writes.


From the Heartbleed website:

As long as the vulnerable version of OpenSSL is in use it can be abused. Fixed OpenSSL has been released and now it has to be deployed. Operating system vendors and distribution, appliance vendors, independent software vendors have to adopt the fix and notify their users. Service providers and users have to install the fix as it becomes available for the operating systems, networked appliances and software they use.

"So many websites and applications are using this protocol, and until someone fixes it, the vulnerability is still open," said Amtel CEO PJ Gupta. "Every company has to fix it, and once you fix the code, you need to change your passwords."

But will fixing the code and changing one's passwords be enough? Software developer Dave Winer thinks not.

"It's hard to imagine something worse happening. And I think we're late responding to it," Winer wrote on his blog. "If this were a single system so compromised, the right technique would be to go offline and not come back until all the connection points were patched or verified to not need patching, it's risky that we all keep using the net." 

India Starts Paying Big For Software—Will China Follow?

Fri, 04/11/2014 - 15:14



India has never paid much for software, but that may be changing, according to a new Gartner report.

With the Indian software economy growing at a hearty 10% clip, there are signs that indicate India is growing out of the rampant software piracy that has long characterized the market. The question is whether or not this trend bodes well for vendors hoping to do business in other hot economies, too—like China.

India Is A Tough Slog For Software Companies

India has long been a difficult market for software companies. While Asia-Pacific ("APAC") accounts for roughly 10% to 20% of revenues for established companies like IBM, Oracle and Adobe, "in general [India] makes up a very small percentage of APAC [revenue]," with the bulk of that revenue concentrated in a few large accounts, as Chirag Mehta, SAP's VP of Product Management and Business Development, notes.

Part of the problem is piracy. Western nations, where piracy rates hover between 20% and 30%, have long traditions of intellectual property protection. In the APAC region, including India, this simply hasn't been the case. The tension between U.S. and Indian notions of intellectual property has simmered to a boil in recent years, particularly with regard to patents.

The irony is that things are actually getting better. India's rate of software piracy has been in steady decline for at least a decade, dropping pretty consistently from 74% in 2005 to 69% in 2008 to 63% in 2011. The question is why, and whether India's growing willingness to pay for software suggests other rising economies in the East will also pay for software.

India's Software Boom

Though 63% of PC software in the country is pirated, India actually spends quite a bit of money on software. India's software revenue totaled upwards of $4.7 billion in 2013, a 10% increase from the year prior, according to Gartner. That's incredibly fast, especially when compared to other fast-moving software economies like Russia (8.9% growth), Brazil (7.8%), China (7.0%) and South Africa (6.3%).

That revenue isn't evenly split. It tends to concentrate in larger firms offering complex, proprietary software that isn't easily replicated by local software firms, as Gartner's data shows:

So complexity sells, but what else? According to Gartner research director Bhavish Sood, "recent advances in IT communications infrastructure in the country has opened up new avenues for local consumption of IT software and associated services." These investments point to an infrastructure that can more easily support a variety of network-based services like software-as-a-service, or SaaS, which is impossible to pirate.

For companies that deliver SaaS, this suggests a potential gold rush could be coming, as APAC leads the world in cloud-based software adoption:

IBM and other software companies are betting big on India's ability to prove this general APAC trend.

So What About China?

India's march toward paid software might bode well for those that want to do business in China. Definitely, maybe.

China's piracy rate, which was declining before it stalled in 2010, today roughly coincides with India's piracy rate 10 years ago. It's unlikely this will change overnight.

Like India, China has an abundance of highly skilled engineers. As such, China, like India, tends to use open-source software heavily, rarely paying for support subscriptions. But also, like their Indian counterparts, Chinese enterprises seem inclined to pay for complex, proprietary enterprise software that's more advanced than domestic firms have yet developed. 

But the question isn't from whom China gets its software. The question is whether China will pay for that software at all. 

The answer seems to be "yes, China will pay for software," even if Chinese buyers are paying more for the technical support than the actual software.

While China would seem like a boon for open source vendors that sell support subscriptions, the requirements for competing in China are tough for non-Chinese companies. Thanks to government support and a deep understanding of local business requirements, Chinese software companies were able to quickly take control of the market through the 1990s and 2000s. And these Chinese companies offer multi-tiered, 24/7 support through large teams of customer support personnel, which is awfully tough for outsiders to match.

The other ways to make money in China's software market involve selling hardware appliances (e.g., Huawei's SAP HANA appliance) or hosted services, which at least one vendor has been doing there since 2007.

Complex, Cloud Or Hardware

Making money in China is somewhat similar to making money in India: The software has to either be more complex than the local market has been able to build on its own, or it must be baked into hardware (through appliances), or the network (through SaaS).

Given that the legacy licensed-software model is in retreat everywhere, the stars may be aligning to allow more Western software or software service vendors to break into India and China in a significant way. This will require considerable investment in understanding these local markets, of course, but the economies of China and India may make the effort worthwhile.

Image courtesy of Shutterstock.

Sonos Lets Google Put It To Work, Chromecast-Style

Fri, 04/11/2014 - 15:02



Just when we were looking at how to turn a Chromecast into a Sonos, Google turned the Sonos into a Chromecast.

Sonos and Google's new partnership, touted on Google's fairly elaborate launch page, means that you can now stream music to a Sonos speaker from the Google Play Music app. That feature launched as an Android exclusive, but now Google Play Music will pop up as an option on the Sonos app for iOS too. Apparently you'll still need the Sonos app installed on Android, and of course you'll need to be on the same Wi-Fi network that your Sonos rocks out on.

Control Sonos With Google Play Music

If you haven't used a Sonos, this probably all sounds confusing. Historically, Sonos speakers are controlled only by the Sonos Controller app. To populate the Sonos app with music, you connect it to the music services you use, like Pandora or Spotify.

Doing so ends up circumventing those streaming-music apps entirely; instead, the Sonos app controls those accounts, playlists and stations. It's nice because everything is in one place. But even though Sonos has a fully redesigned app just around the corner, its controller still can't capture the native experience of a fun app interface like Beats.

Google's Chromecast comes at the same problem from the opposite direction: You can control a Chromecast through a special "cast" button that must be added into an existing app. If you want to watch video with Google's Chromecast, an app like Netflix or Google Play turns into the controller. By contrast, to play music over a Sonos speaker, you had only one controller: the Sonos meta-app.

At least, that was the case until Thursday. Now, if you want to listen to music with a Sonos, Google Play Music can turn into the controller, too.

Google, Why Not Just Buy Sonos?

The result—playing music from Google's service through a Sonos speaker—might be the same. But being able to control a Sonos system with a non-Sonos app is a meaningful move for both Sonos, the defending champion of streaming home audio hardware, and Google, a company with a growing appetite for defining the smart home through acquisitions like Nest and innovations like the $35 Chromecast.

It also proves that Google isn't playing favorites with the Chromecast, its own digital streaming device that's been adding more and more support for prominent music apps like Pandora, Rhapsody and Rdio. Priced at $35, the hardware isn't raking in any profits for Google.

The same strategy is evident across its well-priced mobile Nexus line and its budget-level Chromebook computers, which all function—with the exception of the Chromebook Pixel—as very reasonably priced hardware platforms for Google's multimedia and app store, Google Play.

With its newly open approach to music streaming, the Sonos digital home hi-fi family sure would look good next to the Chromecast in that line-up. Maybe this is just the beginning of a beautiful friendship.

The New "One Microsoft" Is—Finally—Poised For The Future

Fri, 04/11/2014 - 14:34



Microsoft has completely overhauled its corporate kernel.

The stodgy old enterprise company whose former CEO once called open source Linux a “cancer” is gone. So is its notorious tendency to keep developers and consumers within its walled gardens. The “One Microsoft” goal that sounded like more gaseous corporate rhetoric at its debut last summer now looks much closer to reality.

You can see the proof both in Microsoft's technology and the way it talks about it. For instance, Microsoft spent the last couple of years reworking the core of Windows and turning it into one platform. No longer are there different kernels for Windows 8, Windows Phone or Windows RT … it's now all just One Windows.

See also: Why Microsoft's Universal Windows App Store Is Huge For Developers—And Consumers

As goes the Windows kernel, so goes the entire company. Microsoft finally appears to have aimed all its guns outside the company rather than at internal rivals. Now it needs to rebuild its empire upon this new reality.

Platforms And Pillars

Microsoft is defined by platforms and pillars. The platforms—Windows and its cloud service Azure—are what developers build on and customers use. The pillars—Office, Xbox, Surface, OneDrive, Nokia and enterprise—are what people buy and use every day. In the old Microsoft, the company wanted you on its products, in its cloud, on its machines. The new Microsoft is far more egalitarian, going to where the people are instead of trying to herd them into its corral.

New Microsoft CEO Satya Nadella described the transformation in his keynote address at the company’s Build developer conference last week:

We started out as a company that was focused on developers. We were a tools company before we were an Office company before we were a Windows company.... We have that proliferation of what I talk about as ubiquitous computing and ambient intelligence or this mobile-first, cloud-first world where Windows is prevalent.

Joe Belfiore

Ubiquitous computing and ambient intelligence is Nadella’s geek-speak for “devices and services,” the principle Microsoft has ostensibly been organized around for a few years—but which it has only recently started to deliver on.

If you are buying or building a Windows device, Microsoft has lowered the barriers of entry. If you're building or buying an app for a Windows device, Microsoft has torn down the vertical barriers that locked you into (or out of) the system. If you want to use a Microsoft app, you can find it on whatever platform or device you are using, not just on Windows. Running behind everything is Microsoft’s Azure cloud and services.

“The computing facts have changed,” Joe Belfiore, VP of Windows Phone program management and design at Microsoft, said in an interview with ReadWrite. “Especially that we now have one cloud, that we expect to have great infrastructure that we hook all of our devices to and that we expect to have a series of devices that work well together through that cloud that have similar capabilities. So we have tried to evolve our organization and our technology to line up with that.”

Ubiquitous Computing: Microsoft’s New Device Strategy

Microsoft’s long-time devices strategy was to build its operating system—Windows—and then license it to manufacturing partners. Only rarely did Microsoft make its own hardware (the Xbox is the most significant example), and it made billions upon billions of dollars selling Windows for more than two decades.

That Windows cash cow may soon be dead—partly thanks to market changes (the decline of the PC) and partly by Microsoft's own hand.

You can't overstate the importance of the latter, even if the market has forced Microsoft's hand. Microsoft will now give away Windows licenses for free on devices with screens smaller than nine inches. That means any manufacturer wanting to build Windows Phone smartphones or Windows RT tablets doesn't have to pay the "Windows tax." Microsoft still reserves the right to charge a fee for larger tablets and PCs.

Microsoft is doing two big things here. It's eliminated the financial barrier to entry for its least popular (and least lucrative) platforms—Windows Phone and RT—while holding onto its PC revenue base. It's a move designed to ease Microsoft's transition away from PC-centrism to a mobile focus. Microsoft knows as well as anybody that PC growth is declining and shows no sign of recovering.

“Our strategy was to build the most compelling user experience we could pretty vertically. So, we launched on just one mid-tier chip in a small number of countries with some success, user appreciation. Then we had to try and figure out how to scale it,” Belfiore said.

Nokia Lumia 930 on Fatboy wireless charging pillow

Microsoft has also made it easier to use its Windows on less-powerful devices. At Mobile World Congress in Barcelona earlier this year, Microsoft announced a new Qualcomm Reference Program that will help manufacturers build Windows Phone 8.1 smartphones on lower-end hardware. The company also announced lower requirements for Windows 8.1 so that it can run on hardware with as little as 1GB of RAM and 16GB of storage.

All this shows that Microsoft is serious about being a devices company. Microsoft has learned from the way that Google dominated the world with Android; its executives understand that to win these days, a platform needs to run on commodity hardware and has to be effectively free. Windows Phone may not be open source the way the Android Open Source Project is, but Microsoft has cleared the barriers it had put around Windows to make it easier to spread everywhere. Combined with the manufacturing arm of Nokia, Microsoft's device strategy is now fully developed.

“There is this virtuous cycle that we are looking to create where volume increases where more partners sign on, more apps and in more countries so now you can access more customers,” Belfiore said. “The process has been about scripting from how you get from Point A to Point B. The timing of bringing on the Qualcomm Reference Design and getting compatibility with Android apps and going with all of these other partners has been in combination with our technology getting to all of these price points.”

Crossing Platforms And Climbing Out Of The Walled Garden

Perhaps the final great act of outgoing long-time Microsoft CEO Steve Ballmer was to make the call to open the company’s other cash cow—Office—to the iPad. While Office for iPad is basically just another product introduction by Microsoft, the fact that it is decoupling Office from Windows 8 touchscreen machines represents a significant change for the company.

Office for iPad also speaks to Microsoft’s desire to build a truly robust “services” division with a suite of products that can be used on any device, anywhere. If Microsoft owns and builds an app, it wants the audience for that app to be as large as possible. If that means bringing Office to the iPad and Android and Skype to every gadget imaginable, the same logic presumably holds for OneNote, OneDrive, Bing, Internet Explorer, Outlook and so forth.

In other words, Microsoft has adapted to the smartphone and tablet industry with new hardware requirements and OS licensing. Now it's time to do the same with its app ecosystem. The biggest example of this new desire to provide a robust cross-platform services suite is the Nokia X, the Android-based smartphone Nokia released at Mobile World Congress this year.

See also: Why The Nokia X Makes Sense For Both Nokia And Microsoft

“Essentially the story is that Microsoft wants to connect the next billion people to the cloud,” Jussi Nevanlinna, the VP of product marketing for mobile phones at Nokia, said in an interview with ReadWrite in February. “What we bring is very wide reach. We have access to these consumers.... We are a volume platform to connect the next billion people to Microsoft’s cloud and services.”

Conversations with both Nokia and Microsoft employees since the release of the Nokia X strongly suggest that Microsoft will keep the Nokia X around even after it finalizes its acquisition of Nokia. This could all be a bunch of hot air and good public relations training, but I've heard the same thing from a number of unrelated executives. The general thought is that Microsoft wants people using its services, and if Android or iOS can be that vehicle, then so be it.

Asked if the Nokia X Android line running Microsoft services would continue, Hans Henrik Lund, VP of product marketing for smart devices at Nokia, told me at Build that it would. “Of course there will. Because again it makes sense in the fact that we can get users on to Microsoft services instead of Google services,” Lund said.

Microsoft has come to realize that keeping people tied up into its own walled garden has become counterproductive in a world where people have multiple devices running different operating systems. Google has long had that same ethos, building its search engine and core apps for the Web on any device. For instance, Google Chrome and Gmail are both extremely popular on iOS devices like the iPad. Microsoft is behind, but it is making the same type of move toward "ubiquity" of its services.

“Consumers hate ecosystems,” Lund said. “Ecosystems are not created for consumers, per se. They would love to be able to mix and match as long as their content goes with them on any device. The minimum that we can provide is to do that for our own ecosystem.”

Ambient Intelligence: The Azure Backbone

Microsoft's EVP of Cloud & Enterprise Scott Guthrie

When talking about “services” at Microsoft, what that actually entails is two-fold: the front-facing consumer apps like Office and the cloud-based backend represented by Azure and Microsoft’s server business.

Of course, these are not mutually exclusive. These apps are supported by Microsoft’s Azure-based cloud just like Google’s core apps are supported by Google’s cloud and Apple’s apps are supported by iCloud. 

See also: Azure Is Helping Microsoft Catch Up In The Cloud

Microsoft Azure has certainly grown up in the last two years. Now, the company wants to make Azure a core tool for cross-platform development of websites, apps and games. At Build, Microsoft closely tied the integration of Azure to its Visual Studio integrated developer environment and wants to help developers automate everything on the backend of their systems. 

With the recent updates to Azure (there were 44 different product feature updates at Build), Microsoft has fully entered the cloud platform wars. Azure will be popular with enterprise vendors and app developers that have long trusted Microsoft services, but it remains to be seen if the vast majority of consumer developers will follow suit.

Amazon is—by far—the runaway leader in cloud services. At a recent informal survey at a developer event, 10 out of 10 developers asked about what cloud they use said Amazon Web Services. 

A More Open Microsoft

One of the more common misconceptions about Microsoft over the years has been that it is anti-open source. This has not exactly been true. Microsoft has long donated tools and developer resources and employee hours to open source projects (like HTML5, for instance).

Still, much of Microsoft’s bad reputation among open source enthusiasts comes from its battle with the Netscape browser in the late 1990s, its battle with Sun Microsystems in the early 2000s and Ballmer’s remarks on the open source “cancer” of Linux. Over its history, Microsoft has opposed some of the biggest creators and advocates and products of open source software ever created. That is a reputation that may be hard to live down.

Microsoft is not going to open source the Windows or Windows Phone source code, the way that Google does with the Android Open Source Project or Chromium. But Microsoft’s stance on Linux softened and it started contributing to the platform in 2012. 

See also: WinJS: 5 Things You Need To Know About Microsoft's New Open-Source JavaScript Library

As part of Microsoft's shift, the company has started providing more open source tools to the software development community. It signed an agreement with Novell to help keep non-commercial free software developers from being sued, for instance. Microsoft took a step further at Build when it open sourced its WinJS library and tools, created the .Net Foundation and opened its .Net Compiler Platform Roslyn to developers. The Roslyn announcement, in particular, garnered what was perhaps the biggest cheer from the audience at Build.

The new .Net Foundation and open sourcing WinJS and Roslyn are positive moves for Microsoft. But that doesn’t mean that developers inherently trust Redmond because of a few new open source projects. Developers rightly see the new .Net Foundation as a way to get people into its development ecosystem and then sell them Azure licenses.

To be fair to Microsoft, though, that's how the cloud business works these days: provide free software and tools while charging for the cloud. Microsoft is far from the only company to take this tack.

“I'm not trying to tell you that Microsoft is suddenly our warm, fuzzy friend. As even Scott Hanselman would gladly admit, they're ultimately trying to sell you software licenses or Azure services,” said a commenter that goes by the name John Booty on developer forum Hacker News after the announcement. “But it seems to me that they're using open source the right way in order to achieve that goal, as opposed to the bad old ‘embrace and extend / embrace and extinguish’ days at Microsoft.”

Another commenter named David Gerard summed up developers’ caution of Microsoft succinctly, “I think one would be exceedingly foolish not to exercise the very greatest of caution. This is Microsoft we're talking about—they have a track record of profound fuckery.”

Time will tell if Microsoft’s overtures to the open source community are a real and altruistic form of doing business or are more of the same from Redmond, like putting lipstick on a pig. But Microsoft's words and actions over the last couple of years, culminating at Build, are very different from its historic fight against the open source movement. However you view it, it's a big change and a measure of the new Microsoft.

Now It’s Time For Microsoft To Build

It's difficult to overstate the cumulative effect of all these changes at Microsoft. After years of confusion, false starts, falling behind and frustration, Microsoft has finally gotten itself on the right track. It's reorganized itself and its products for modern computing, the mobile world and the cloud.

Of course, finally aligning itself in the proper direction doesn’t mean that Microsoft will automatically be successful. Consumer and developer choice can be fickle. Right now, people prefer Google and Android, Apple and iOS. The hub-and-tile “Metro” interface of Windows 8 hasn't struck a chord with consumers across the world, and developers are reluctant to embrace a platform that hasn't shown much beyond marginal growth. Microsoft can do and say all the right things and still fail.

At the same time, you have to give Microsoft credit for realizing that its ship was headed for the iceberg and correcting its course. It has the platforms, it has the tools, it has the devices and portals. Now it just needs to build upon the new foundation it worked so hard to create.

What ComiXology Can Do For Amazon

Fri, 04/11/2014 - 14:09



Amazon on Thursday announced it will acquire digital comics platform ComiXology for an undisclosed sum. But why does the world's biggest online retailer care so much about comic books?

Well, that's because the deal—and ComiXology, as a whole—isn't just about comics. ComiXology is pioneering the art of digital storytelling, and attempting to bring these tools to the masses. With Amazon, ComiXology gets a big boost towards its goal of adding a third dimension to the two-dimensional world of books, comics and graphic novels.

Why Amazon Cares About Comics

Smartphones and tablets are great because they can store a veritable library of books on relatively lightweight devices, which is so much better than having to lug around pounds of paper. But ComiXology believes electronic devices can do so much more than simply replicate the experience of reading a physical book.

With special tools like "Guided View," ComiXology lets its authors and artists select how they want their stories to feel and how they want them to be read. Like movie directors, they can choose the speed and order of every shot, and add special effects like "pan," "zoom" and "fade" whenever they'd like.

Though ComiXology has been touting its immersive technologies for years, it only recently gave people the ability to create, stylize and publish their own books through a platform called "ComiXology Submit," released in 2012. Like ComiXology as a whole, ComiXology Submit was free for all authors and allowed them to create and sell their works across a slew of mobile operating systems, including iOS, Android, Windows, Kindle Fire and the Web.

Amazon Wants To Be The Dominant Bookstore Again

Until Amazon started tackling the living room recently, the company was known mainly for two things: its retail site and its Kindle e-readers.

With ComiXology under its belt, Amazon has a chance to make Kindle owners very happy.

At its core, ComiXology lets artists design, create and publish their digital works to their liking, and it makes those tools easy for anyone to use. But with Amazon, this technology no longer needs to be limited to just comic books.

Amazon and Apple have competed fiercely in the e-reader/tablet space, and now in the living room/TV space. With its vast library of books and e-books for purchase, Amazon's main website already rivals the iBookstore. But thanks to ComiXology, Amazon can now take on the entire iBooks platform, which about two years ago started letting authors customize their works, from design to publication, with a tool called "iBooks Author."

The ComiXology acquisition is an important move to make Amazon's ecosystem feel more complete. Before, Amazon users were limited to their roles as customers—Amazon supplied the books, and users bought them. Now, though, Kindle users can also be Kindle creators. And if Amazon improves its Kindle software, those devices could potentially pull off more impressive visual effects than any other simple e-reader, or any book from the iBookstore, or from any digital bookstore for that matter.

And that's the key right there. Amazon has always been able to sell books, but now it has the means to provide unique digital experiences specifically for its platform. So if you want your ebook to give readers a cinematic experience, you'll have to use Amazon.

And whereas Apple prohibits iBooks authors from publishing their original works elsewhere, Amazon has a real opportunity to attract artists with more flexible licensing agreements, and of course, a platform that allows artists to express their true creativity. Now it's just a matter of execution.

Lead image courtesy of Amazon

Building A Raspberry Pi VPN Part Two: Creating An Encrypted Client Side

Fri, 04/11/2014 - 13:03



Welcome to Part Two of ReadWrite's Raspberry Pi VPN server tutorial!

By now, it's pretty apparent that turning your Raspberry Pi into a virtual private network (VPN) server is an all-evening activity. But as security flaws further compromise our Internet lives, it feels increasingly worth it to have a secure server on your side. That way, you're free to write emails and transfer data without worrying about who might be intercepting it as it travels from your computer to the Web.

See also: Building A Raspberry Pi VPN Part One: How And Why To Build A Server 

If you’ve followed the steps from Part One of this tutorial, you’ve got a fully functional VPN server on your Raspberry Pi. You can use this to connect securely to your home network wherever there’s an unencrypted wireless connection. You can also access shared files and media you keep stored on your home network. 

Only, you can’t access those files just yet. We’ve created keys for clients (computers and devices) to use, but we haven’t told the clients where to find the server, how to connect, or which key to use. 

If you remember, we created a separate client key for each of the devices to which we want to grant VPN access. We called them Client1, Client2 and Client3.

It'd be a lot of trouble to generate a new configuration file for each client from scratch, which is why we'll use an ingenious script written by Eric Jodoin of the SANS Institute. Instead of our building a file for each client by hand, this script will do it for us.

Following The Script

The script will read our default settings when it generates a file for each client. The first thing we need to do, then, is create a text file to hold those default settings.

nano /etc/openvpn/easy-rsa/keys/default.txt 

Fill in the blank text file with the following: 

client
dev tun
proto udp
remote <YOUR PUBLIC IP ADDRESS HERE> 1194
resolv-retry infinite
nobind
persist-key
persist-tun
mute-replay-warnings
ns-cert-type server
key-direction 1
cipher AES-128-CBC
comp-lzo
verb 1
mute 20

It should look like the screenshot below, except it should show your public IP address. (I deleted my own public IP address from the screenshot, since that's information you shouldn't share around.) Local static IP addresses, on the other hand, look much the same for everyone: they usually start with "192.168."

Now, if you don’t have a static public IP address, you need to use a dynamic domain name system (DDNS) service to give yourself a domain name to put in place of the IP address. I recommend using the free service DNS Dynamic, which lets you pick a name of your choice. Then on your Pi, you need to run DDclient to update your DDNS registry automatically. I wrote a full tutorial for how to do this here.
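For reference, a ddclient setup is only a few lines of configuration. Everything below is a placeholder sketch: the login, password and hostname are made up, and the actual protocol and server values must come from your own DDNS provider's documentation.

```text
# /etc/ddclient.conf -- illustrative values only
protocol=dyndns2                  # update protocol spoken by many DDNS providers
use=web, web=checkip.dyndns.com   # discover your public IP via a web service
server=www.dnsdynamic.org         # placeholder: your provider's update server
login=you@example.com             # placeholder account name
password='your-ddns-password'     # placeholder password
yourname.dnsdynamic.com           # placeholder hostname to keep updated
```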

As always, press Control+X, then Y and Enter, to save and exit the nano editor.

See also: 5 Pointers To Supercharge Your Raspberry Pi Projects

Next, we need to create the actual script file. The script will run from a shell file, an executable text file commonly used to automate tasks on Linux.

nano /etc/openvpn/easy-rsa/keys/ 

Here’s the script Jodoin wrote. Copy and paste it into your blank shell file. 

You still need to give this script permission to run. First, go to the folder it’s in: 

cd /etc/openvpn/easy-rsa/keys/

And then make it executable. If you remember from Part One, permissions in Linux are expressed as three-digit octal numbers. Mode 700 means "owner can read, write, and execute."
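If octal modes are unfamiliar, you can see how they behave on a throwaway file (the filename here is purely for illustration):

```shell
# Experiment with octal permission modes on a scratch file
touch scratch.sh
chmod 700 scratch.sh        # rwx for the owner, nothing for group/other
stat -c '%a' scratch.sh     # prints 700
chmod 600 scratch.sh        # rw- for the owner only
stat -c '%a' scratch.sh     # prints 600
rm scratch.sh
```

Note that `stat -c` is the GNU coreutils form, which is what Raspbian ships with; BSD/macOS `stat` uses different flags.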

chmod 700

Finally, execute the script with: 


As the script runs, it'll ask you to input the names of the existing clients for which you generated keys earlier. Example: “Client1.” Be sure to name only clients that already exist.

If all goes well, you should see this line appear:

Done! Client1.ovpn Successfully Created.

Repeat this step for each existing client. 
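If you have more than a couple of clients, a small loop can save some typing. This is only a hypothetical convenience sketch: it assumes you saved Jodoin's script under the name MakeOVPN.sh (substitute whatever filename you actually used) and that the script reads each client name from standard input, as its interactive prompt suggests.

```shell
# Hypothetical: generate an .ovpn file for each existing client in one pass.
# MakeOVPN.sh is an assumed filename; adjust it to match your setup.
cd /etc/openvpn/easy-rsa/keys/
for client in Client1 Client2 Client3; do
  printf '%s\n' "$client" | ./MakeOVPN.sh
done
```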

The last thing to do is connect to your Raspberry Pi so you can download the files from it. You need to use an SCP (Secure Copy Protocol) client to do this. For Windows, I recommend WinSCP. For Mac, I’ve been using Fugu.

Note: If your SCP client can't get permission to access the folder, you’ll need to grant yourself read/write access to it. Back on the Raspberry Pi, write:

chmod 777 -R /etc/openvpn

Be sure to undo this when you’re done copying files! Set the permissions back to 600 so that only the Pi's owner can read and write them:

chmod 600 -R /etc/openvpn

Copy the client file onto your device and you’re done.

Working With Client Software

Okay, the hard part is over. From here, we need to load the client files we generated earlier into a VPN client with a graphical user interface. For your PC, Android, or iOS mobile device, you can download OpenVPN Connect. There isn't a version for the Mac, so I tried both Tunnelblick and Viscosity.

Tunnelblick is free, while Viscosity costs $9 after a free 30-day trial. In either case, let's walk through how to set up a Mac computer as a client.

In my case, my Mac is the fifth device I want to connect to the VPN server, so the file I generated with the above script is named Client5.ovpn.

Download the version of Tunnelblick that works for your version of OS X. I'm using Mavericks, so I downloaded the beta. The installer popping up in a bunch of languages looked odd to me, but it's the legitimate download.

Then, it'll ask if you already have a configuration file you want to use. I did: my Client5.ovpn file.


It will then ask if your configuration file is in .ovpn format or Tunnelblick's native .tblk format. If you select .ovpn, it'll walk you through converting the file. I did this by moving Client5.ovpn into a folder Tunnelblick provided, then changing the folder's name to Client5.tblk.
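On the command line, that conversion boils down to wrapping the file in a folder whose name ends in .tblk (the Client5 filenames below just follow this article's example):

```shell
# Build a Tunnelblick configuration bundle by hand:
# a .tblk is simply a folder containing the OpenVPN client file.
mkdir Client5.tblk
cp Client5.ovpn Client5.tblk/
ls Client5.tblk/    # the bundle now contains Client5.ovpn
```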

Now you're all set to connect. Click the Tunnelblick icon on the top right of your screen and select Client5. 

It will ask you for a pass phrase. This is the same pass phrase we generated in the last tutorial, back when we were creating keys for each client.

If you get the password right, it'll look like this! 

Try out your new connection at a coffee shop, the local library, or anywhere there's unencrypted Wi-Fi. You may still be using the public connection, but over the VPN, your data is anything but out in the open.

Illustration and screenshots by ReadWrite

Facebook, If You're Serious About Privacy Controls, Let Me Control Them

Fri, 04/11/2014 - 11:49



Facebook wants people to stop getting frustrated with the company’s privacy settings. Well, good luck with that.

Almost any change Facebook makes to privacy controls triggers outcries and accusations that the social network is continuing to erode any remaining confidence people might have in sharing their data with it—and justifiably so. Yet Facebook just can't stop trying to win over hearts and minds. If it really wants to succeed, though, it needs to become a lot more transparent, and more lenient, about how it vacuums up data, what sort of data it keeps and what it does with it.

Facebook has faced lawsuits for sketchy privacy policies, and recently closed down a controversial advertising product that used people’s likeness in ads. In 2011, the Federal Trade Commission settled with Facebook after the social network failed to keep its privacy promises, and the FTC reminded Facebook of those promises when it cleared the Facebook-WhatsApp acquisition on Thursday.

The company is hoping to change this negative perception. At a roundtable with reporters this past Tuesday, Facebook highlighted a few changes people will start to see in their news feed.

See Also: Why Facebook's WhatsApp Deal Is Bad For Users

“We haven’t communicated as well as we could have,” Mike Nowack, privacy product manager at Facebook, told reporters. “[Feedback] has led us to think about privacy not just as controls or settings, but as a set of experiences that help people feel comfortable sharing what they want with who they want.”

For instance, Facebook is testing a minor tweak that would change the look of a drop-down menu that lets you select with whom you share, making “Public” and “Friends” the two prominent options (Facebook says those choices are the most popular). Facebook also told us that it runs 4,000 privacy surveys a day to better understand what people like or dislike about their current settings in order to make changes accordingly.

The biggest change is letting users control who sees their past cover photos, one of the items Facebook deems publicly available information—that is, data that’s visible to anyone in the world. Previously, anyone could view all your past cover photos.

While it’s smart of Facebook to be proactive about educating users on privacy controls and anticipating backlash, these changes don't go nearly far enough. The company is still missing some key features that would prove it really takes privacy seriously.

You Can’t Not Be Public

It’s easy for strangers to find you on Facebook, thanks to publicly available information—the data you give to Facebook that the social network then shares with the world. This includes your name, profile photo, cover photo, gender, and networks such as your school or workplace. 

According to Facebook, it’s necessary for this information to be public: “These are pieces of information that both help disambiguate you from other people in the world, but help you get the best experience to find other people,” Raylene Yung, an engineering manager on Facebook’s privacy team, told me. “They’ve been a part of the site for as long as it’s existed.”

When Facebook was still a small and growing social network, it made sense for your personal information to be public so new friends or family that signed up for the service would be able to find you. But now, with over one billion users, many people have established their small piece of the social experience and don’t need to field any additional friend requests, while others just don’t want to be found at all.

Public information proves to be a difficult obstacle for many people who have experienced online harassment or stalking. I’ve personally been a target of Facebook stalking—in college I was harassed by a stranger who sent me numerous messages and a friend request; I eventually blocked him and the harassment stopped.

When asked on Tuesday about potential safety issues regarding public information, Facebook officials emphasized the blocking policy and said that people who feel harassed should report it to Facebook. Of course, once blocked or reported, harassers can simply create a pseudonymous account and find you once again.

To make users feel completely secure, Facebook should give them the opportunity to opt out of search, or to choose what part, if any, of their data is publicly visible.

Facebook killed a privacy setting that did just this last fall. It eliminated users’ ability to block people from searching them, effectively forcing everyone into Graph Search—the massive, practically endless, natural-language search that contains all the public data of every Facebook user. I was one of the people who had the setting checked because I didn’t want to appear in unwanted searches, and I was disappointed when the setting disappeared.

Luckily, you can tailor your settings to allow only “Friends of Friends” to send you friend requests, or “Only Friends” to send you messages, small but significant settings that deter unwanted contact.

Restricting cover photo viewing is a step in the right direction, but restricting or eliminating all required public information would boost confidence in users that Facebook is taking concerns seriously. 

Multi-App Strategy: What Does It Do With That Data?

When Facebook acquired WhatsApp earlier this year, the prospect of Facebook getting its hands on even more of your data irked numerous users.

Those fears could have legs: a report published by data analytics company SiSense examined an average WhatsApp conversation and found that the data potentially collected from WhatsApp is significantly more personal and meaningful than what Facebook gleans from its flagship application.

SiSense analyzed a conversation from one of its own employees, Jennifer, to illustrate the potential data Facebook can mine from WhatsApp. The analysis showed that Jennifer regularly talks about food (specifically desserts), that she is most active around 8 p.m., and that she regularly discusses populism and conservative politics.

Having access to these intimate conversations creates a more substantial profile based on what people say, not what they like—a profile that Facebook can then monetize. Although WhatsApp claims it will remain independent of Facebook and stay free from advertising, the company’s privacy policy says it may share personal data with third-party services “to the extent that it is reasonably necessary to perform, improve or maintain the WhatsApp Service.”

Clearly the FTC is concerned about the potential privacy flaws, too. The agency sent letters to WhatsApp and Facebook alongside the acquisition approval, reiterating that their responsibility is to consumers first.

We want to make clear that, regardless of the acquisition, WhatsApp must continue to honor these promises to consumers. Further, if the acquisition is completed and WhatsApp fails to honor these promises, both companies could be in violation of Section 5 of the Federal Trade Commission (FTC) Act and, potentially, the FTC’s order against Facebook. 

Instagram’s privacy policies have also drawn ire from users. The company was forced to revise them in 2012 after controversial wording put users in an uproar.

And Facebook wants to bring even more of Instagram’s data in-house. The company is testing Facebook Places in lieu of Foursquare’s location services on the app that lets users geo-tag their photos. Instead of feeding precious data to a separate social network, Facebook wants to make sure it keeps tabs not just on photos, but location as well.

A Focus On Anonymity

Facebook has long been tied to your real identity. In fact, Mark Zuckerberg famously said, “Having two identities for yourself is an example of a lack of integrity.”

See Also: Facebook Just Killed A Privacy Setting, So It's A Good Time To Do Your Own Checkup

Those tides may be changing, however. In an interview with Bloomberg earlier this year, Zuckerberg said a number of applications created under the Facebook Creative Labs umbrella will allow users to log in anonymously—an unprecedented move for the social network.

Recent rumors that Facebook is interested in acquiring Secret, an anonymous social network where people post photos and text updates, give more credence to the speculation that Zuckerberg and company are indeed pushing for more guarded privacy options. 

It could be just the spark Facebook needs to turn public sentiment back in its favor.

While anonymity may be bad for Facebook—its business is knowing as much about you as possible and using that information to sell advertising—it might prove a useful compromise for Facebook's longstanding critics.

Lead image by Taylor Hatmaker for ReadWrite