Wednesday 26 March 2014

How satellites tracked down flight MH370 – but why we still can’t find the plane


Yesterday morning, the Malaysian prime minister confirmed that Malaysia Airlines flight 370 crashed in the south Indian Ocean, killing all 239 people on board. Curiously, though, despite the PM’s confidence, this conclusion is based entirely on second-hand information provided by UK satellite company Inmarsat. There is still no sign of debris from MH370, and investigators still have absolutely no idea what happened after the final “All right, good night” message from the cockpit. If you’ve been following the news, you probably have two questions: How did Inmarsat narrow down MH370’s location from two very broad swaths across central Asia and the Indian Ocean? And if we know where the plane crashed into the ocean, why haven’t we found it yet?

How Inmarsat tracked down flight MH370



After flight MH370’s communication systems were disabled (it’s still believed that they were disabled manually by the pilots, but we don’t know why), the only contact made by the plane was a series of pings to Inmarsat 4-F1, a communications satellite that orbits about 22,000 miles above the Indian Ocean.

The initial Inmarsat report, which placed MH370 along two possible arcs, was based on a fairly rudimentary analysis of ping latency. Inmarsat 4-F1 sits almost perfectly stationary above the equator, at 64 degrees east longitude. By calculating the latency of MH370’s hourly satellite pings, Inmarsat could work out how far away the plane was from the satellite — but it couldn’t say whether the plane went north or south.
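
To make the idea concrete, here is a minimal sketch of the range-from-latency principle. This is my own illustration, not Inmarsat’s actual burst-timing analysis, and the 246 ms round-trip time is a made-up number: one latency measurement pins the plane to a ring of constant range centred under the satellite, which is why the early analysis could only produce long arcs.

    import math

    C_KM_S  = 299_792.458   # speed of light, km/s
    R_EARTH = 6_371.0       # mean Earth radius, km
    R_ORBIT = 42_164.0      # geostationary orbital radius (from Earth's centre), km

    def slant_range_km(round_trip_ms):
        """Plane-to-satellite distance implied by a ping's round-trip time."""
        return C_KM_S * (round_trip_ms / 1000.0) / 2.0

    def ring_radius_km(slant_range):
        """Ground distance from the sub-satellite point to the ring of possible positions."""
        cos_theta = (R_EARTH**2 + R_ORBIT**2 - slant_range**2) / (2 * R_EARTH * R_ORBIT)
        return R_EARTH * math.acos(cos_theta)

    s = slant_range_km(246)                # hypothetical 246 ms round trip
    print(f"slant range: {s:.0f} km")      # ~36,875 km from the satellite
    print(f"ring radius: {ring_radius_km(s):.0f} km from the sub-satellite point")  # ~3,500 km

The calculation says nothing about where on that ring the plane is, which is exactly the north-versus-south ambiguity the Doppler analysis had to resolve.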


To work out which direction flight MH370 took, Inmarsat, working with the UK’s Air Accidents Investigation Branch (AAIB), says it used some clever analysis of the Doppler effect. The Doppler effect describes the change in frequency (the Doppler shift) of a sound, light, or radio source as it moves towards or away from an observer. The most common example is the change in frequency of a police or fire truck siren as it passes you. Radio waves, such as the pings transmitted by flight MH370, are also subject to the Doppler effect.

Basically, Inmarsat 4-F1’s position wobbles slightly over the course of its daily orbit. This wobble, if you know what you’re looking for, creates enough variation in the Doppler shift that objects moving north and south produce slightly different frequencies. (If the satellite didn’t wobble, the Doppler shift would be identical for both routes.) Inmarsat says that it looked at the satellite pings of other flights that have taken similar paths, and confirmed that the Doppler shift measurements for MH370’s pings show an “extraordinary matching” for the southern projected arc over the Indian Ocean. “By yesterday [we] were able to definitively say that the plane had undoubtedly taken the southern route,” said Inmarsat’s Chris McLaughlin.
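
For a sense of the scale involved, here is a toy calculation with assumed numbers (the carrier frequency, speeds, and wobble are my own placeholders, not Inmarsat’s data). For speeds far below the speed of light, the Doppler shift is roughly f × v/c, where v is the closing speed between transmitter and receiver. A jet at cruise speed shifts an L-band signal by about a kilohertz, and the satellite’s small wobble adds or subtracts a handful of hertz depending on whether the plane is heading north or south; a tiny difference, but a measurable one.

    # Toy Doppler arithmetic (assumed numbers, not Inmarsat's data).
    F_CARRIER_HZ = 1.6e9          # rough L-band carrier frequency (assumed)
    C_M_S        = 299_792_458.0  # speed of light, m/s

    def doppler_shift_hz(radial_speed_m_s):
        """Approximate shift for closing speeds much smaller than the speed of light."""
        return F_CARRIER_HZ * radial_speed_m_s / C_M_S

    plane_radial = 230.0   # hypothetical closing speed between plane and satellite, m/s
    wobble       = 2.0     # hypothetical extra radial speed from the satellite's wobble, m/s

    print(f"baseline shift:        {doppler_shift_hz(plane_radial):.0f} Hz")
    print(f"southbound-style ping: {doppler_shift_hz(plane_radial + wobble):.0f} Hz")
    print(f"northbound-style ping: {doppler_shift_hz(plane_radial - wobble):.0f} Hz")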


So, where is flight MH370?




At this point, if we assume that Inmarsat knows what it’s doing, we know with some certainty that flight MH370’s last satellite ping originated from around 2,500 kilometers (1,500 miles) off the west coast of Australia. Because we know how much fuel the Boeing 777 was carrying, we know that it probably ran out of fuel sometime after that last ping, crashing into the Indian Ocean. Assuming the plane was flying at around 450 knots (517 mph, 833 kph), the potential crash zone is huge.
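
Some rough arithmetic (my own, with an assumed endurance figure) shows just how huge. If the plane could have flown for up to another hour after that final ping before exhausting its fuel, in an unknown direction, then each point on the ping arc carries an uncertainty disc hundreds of kilometres across, and the arc itself stretches for thousands of kilometres.

    import math

    CRUISE_KM_H    = 833.0   # ~450 knots, from the article
    EXTRA_FLIGHT_H = 1.0     # assumed: up to an hour between the last ping and fuel exhaustion

    radius_km = CRUISE_KM_H * EXTRA_FLIGHT_H
    disc_area_km2 = math.pi * radius_km ** 2

    print(f"uncertainty radius around a single arc point: ~{radius_km:.0f} km")
    print(f"area of that disc alone: ~{disc_area_km2:,.0f} square km")   # roughly 2.2 million km^2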

The southern Indian Ocean is one of the most inhospitable and remote places on Earth. Its distance from major air and naval bases makes it one of the worst possible places to carry out a search and rescue mission. Even if satellite imagery purports to show debris from flight 370, waves, weather, and ocean currents mean that the debris will be constantly moving. “We’re not searching for a needle in a haystack,” said Mark Binskin, vice chief of the Australian Defence Force. “We’re still trying to define where the haystack is.”

Multiple nations are sending search-and-rescue aircraft and ships to the region to look for flight 370, and the US is deploying its Towed Pinger Locator — a device that can locate black boxes at depths of up to 20,000 feet (6,100 meters). The flight data recorder (FDR) and cockpit voice recorder (CVR) generally only have enough battery power to keep pinging for a month or so, so time is of the essence.




Thursday 27 February 2014

D-Wave, disentangled: Google explains the present and future of quantum computing...






The performance and quantum nature of the D-Wave 2 processor continue to be a topic of discussion with every new data release, and the performance figures that Google released in late January were no exception. The company has now followed up those figures with a second blog post that describes its own interpretation of the results, what it intends to test next, and what the future of the program is likely to be.
The key question at the heart of the D-Wave enigma is whether or not the system is actually performing quantum annealing. In theory, the D-Wave 2 processor could be an excellent simulation of a quantum computer — possibly the best simulation ever built — but still, ultimately, an approximation of what the real thing would offer. The only way to determine whether or not the D-Wave performs true quantum annealing is to find a test case in which the D-Wave 2 outperforms even the best classical (meaning, standard) computers.
Google’s last set of data indicated that while the D-Wave 2 outperformed off-the-shelf classical software by huge margins, hand-tuned classical computer configurations running on Nvidia GPUs were capable of competing with the quantum computer in a number of specific benchmarks. According to Google’s engineers, this close performance is an artifact of the primitive state of current quantum annealers.
The D-Wave 2 is limited by what’s called “sparse connectivity,” as shown below.
Image: The D-Wave 2’s qubit connectivity layout
Note that while each sub-group of eight qubits is tightly linked to its adjacent partners, the blocks themselves connect in far fewer places. This limits the performance of the quantum annealer because it limits the number of states that the quantum computer can test in order to find the ideal solution to the problem. This is a separate problem from the number of qubits in the system (up to 509 out of a possible 512 in this machine) — it’s an issue of how interconnected the 509 functional qubits are.
According to Google, it’s this sparse connectivity that’s allowing classical computers to keep pace with D-Wave’s quantum system. The company writes: “For each solver, there are problems for which the classical solver wins or at least achieves similar performance. But the inverse is also true. For each classical solver, there are problems for which the hardware does much better.”
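
To put a number on “sparse,” here is a back-of-the-envelope count, assuming the Chimera-style layout D-Wave has described publicly: an 8x8 grid of eight-qubit cells, each cell internally wired as a complete bipartite K4,4 block, with four couplers joining each pair of horizontally or vertically adjacent cells. These figures are for an ideal 512-qubit chip; the machine Google tested has 509 working qubits, so the real counts are slightly lower.

    # Rough coupler count for an idealised 512-qubit Chimera-style layout (assumed).
    GRID = 8             # 8x8 grid of unit cells
    QUBITS_PER_CELL = 8  # 4 + 4 qubits per cell

    cells  = GRID * GRID
    qubits = cells * QUBITS_PER_CELL              # 512

    intra_cell = cells * (4 * 4)                  # K4,4 -> 16 couplers inside each cell
    inter_cell = 2 * (GRID * (GRID - 1)) * 4      # 4 couplers per adjacent cell pair,
                                                  # counted across rows and columns
    total = intra_cell + inter_cell

    complete_graph = qubits * (qubits - 1) // 2   # what full connectivity would need

    print(f"qubits:                {qubits}")
    print(f"intra-cell couplers:   {intra_cell}")
    print(f"inter-cell couplers:   {inter_cell}")
    print(f"total couplers:        {total}")
    print(f"fully connected graph: {complete_graph}")
    print(f"fraction of full connectivity: {total / complete_graph:.2%}")

That works out to roughly 1,500 couplers where a fully connected 512-qubit system would need more than 130,000 — about one percent of full connectivity.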

Echoes of the past

The current debate over the merits of quantum annealing and the relative performance of classical computers versus D-Wave’s system is somewhat similar to the debates over digital vs. analog computing in the mid-20th century. From this end of history, it may look as though digital technology was an unstoppable wave that simply buried older, more primitive methods of computation — but this glosses over historical fact.
Image: Popular Science. Doctor Frances Baurer with the analog computer (Project Cyclone)
Electronic analog computers were initially far faster than their digital counterparts. They operated in parallel, whereas digital systems performed operations sequentially. They were also capable of higher precision (a 0.1% margin of error, compared with roughly 1% for the first digital systems). In the end, digital computers won out over analog — but the two types of systems co-existed at various levels for several decades.
Just as early digital systems were matched or outperformed in many respects by well-developed analog computers, it’s possible that D-Wave’s first quantum computing efforts can be matched or exceeded by well-tuned classical systems. In fact, given the hundreds of billions of dollars poured into the development of modern computers, it would be astonishing if scientists invented a new computing solution capable of beating conventional equipment in all respects in just a handful of years.

Thursday 20 February 2014

Facebook to buy WhatsApp for $19 billion in deal shocker


SAN FRANCISCO:  Facebook Inc will buy fast-growing mobile-messaging startup WhatsApp for $19 billion in cash and stock in a landmark deal that places the world's largest social network closer to the heart of mobile communications and may bring younger users into the fold.

The transaction involves $4 billion in cash, $12 billion in stock and $3 billion in restricted stock that vests over several years. The WhatsApp deal is worth more than Facebook raised in its own IPO and underscores the social network's determination to win the market for messaging.

Founded by a Ukrainian immigrant who dropped out of college, Jan Koum, and a Stanford alumnus, Brian Acton, WhatsApp is a Silicon Valley startup fairy tale, rocketing to 450 million users in five years and adding another million daily.

"No one in the history of the world has ever done something like this," Facebook Chief Executive Mark Zuckerberg said on a conference call on Wednesday.

Zuckerberg, who famously closed a $1 billion deal to buy photo-sharing service Instagram over a weekend in mid-2012, revealed on Wednesday that he proposed the tie-up over dinner with CEO Koum just 10 days earlier, on the night of February 9.

WhatsApp was the leader among a wave of smartphone-based messaging apps that are now sweeping across North America, Asia and Europe. Although WhatsApp has adhered strictly to its core functionality of mimicking texting, other apps, such as Line in Japan or Tencent Holdings Ltd's WeChat, offer games or even e-commerce on top of their popular messaging features.

The deal provides Facebook entree to new users, including teens who eschew the mainstream social networks but prefer WhatsApp and rivals, which have exploded in size as private messaging takes off.

"People are calling them 'Facebook Nevers,'" said Jeremy Liew, a partner at Lightspeed and an early investor in Snapchat.

How the service will pay for itself is not yet clear.

Zuckerberg and Koum on the conference call did not say how the company would make money beyond a $1 annual fee, which is not charged for the first year. "The right strategy is to continue to focus on growth and product," Zuckerberg said.

Zuckerberg and Koum said that WhatsApp will continue to operate independently, and promised to continue its policy of no advertising.

"Communication is the one thing that you have to use daily, and it has a strong network effect," said Jonathan Teo, an early investor in Snapchat, another red-hot messaging company that flirted year ago with a multibillion dollar acquisition offer from Facebook.

"Facebook is more about content and has not yet fully figured out communication."

PRICE TAG

Even so, many balked at the price tag.

Facebook is paying $42 per user with the deal, dwarfing its own $33 per user cost of acquiring Instagram. By comparison, Japanese e-commerce giant Rakuten just bought messaging service Viber for $3 per user, in a $900 million deal.
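
A quick back-of-the-envelope check of those per-user figures. The WhatsApp user count comes from this article; the Instagram and Viber counts (roughly 30 million and 300 million at the time of their respective deals) are my own assumptions.

    deals = {                      # (price in USD, approximate users at acquisition)
        "WhatsApp / Facebook":  (19_000_000_000, 450_000_000),
        "Instagram / Facebook": (1_000_000_000,   30_000_000),   # assumed user count
        "Viber / Rakuten":      (900_000_000,    300_000_000),   # assumed user count
    }

    for name, (price, users) in deals.items():
        print(f"{name}: ~${price / users:.0f} per user")   # ~$42, ~$33, ~$3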

Rick Summer, an analyst with Morningstar, warned that while investors may welcome the addition of such a high-growth asset, it may point to an inherent weakness in the social networking company that has seen growth slow in recent quarters.

"This is a tacit admission that Facebook can't do things that other networks are doing," he said, pointing to the fact that Facebook had photo-sharing and messaging before it bought Instagram and WhatsApp.

"They can't replicate what other companies are doing so they go out and buy them. That's not all together encouraging necessarily and I think deals like these won't be the last one and that is something for investors to consider."

Venture capital firm Sequoia Capital, which invested in WhatsApp in February 2011 and led three rounds of financing altogether, holds a stake worth roughly $3 billion of the $19 billion valuation, according to people familiar with the matter.

"Goodness gracious, it's a good deal for WhatsApp," said Teo, the early investor in Snapchat.

Facebook pledged a break-up fee of $1 billion in cash and $1 billion in stock if the deal falls through.

Facebook was advised by Allen & Co, while WhatsApp enlisted Morgan Stanley for the deal.

Shares in Facebook slid 2.5 percent to $66.36 after hours, from a close of $68.06 on the Nasdaq.

"No matter how you look at it this is an expensive deal and a very big bet and very big bets either work out or they perform quite poorly," Summer said. "Given the relative size, the enterprise valuations this is a very significant deal and it may not be the last one." – Reuters

Thursday 23 January 2014

Who makes the most reliable hard drives?

A few months ago we asked and answered one of computing’s oldest questions: How long do hard drives actually last? That story missed one vital piece of information, though — who makes the most reliable hard drives? Well, we can now answer that question too.
Just like last time, this information comes from Backblaze, an all-you-can-eat online backup company. Backblaze currently has around 28,000 hard drives powered up and constantly spinning, storing a total of around 80,000 terabytes (80 petabytes) of user data. As you can imagine, it is very much in Backblaze’s interests to ensure that it buys reliable hard drives. Every time a drive fails, it takes considerable time and effort to pull the drive, slot in a new one, and rebuild the RAID array.

Which hard drive manufacturer is the most reliable?

Backblaze breaks down its data in two ways — by manufacturer, and by specific drive. The data is fairly complex, but we’ll try to break it down into morsels of easy-to-digest, actionable information.
As of the end of December 2013, Backblaze had 12,765 Seagate drives, 12,956 Hitachi drives, and 2,838 Western Digital drives. These drives are not all the same age — some are almost four years old, while many were installed in the past year. The uneven mix is because Backblaze basically buys whatever drive offers the most competitive dollar-per-gigabyte ratio, with reliability being a secondary factor. For most of the last four years, Seagate and Hitachi have offered the best price-per-gig, with Western Digital Red drives only now becoming a viable option for Backblaze.



Image: Hard drive annual failure rate, broken down by maker (Hitachi, Seagate, Western Digital) and size
As you can see from the graph above, Hitachi drives are by far the most reliable. Even though most of Backblaze’s Hitachi drives are now older than two years, they only have an annual failure rate of around 1%. The “annual failure rate” is the chance of a drive dying within a 12-month period. After three years of being powered up 24/7, 96.9% of Hitachi drives are still running.
Western Digital is slightly worse, but still impressive: After three years of operation, 94.8% of Western Digital drives are still running. Backblaze lists the annual failure rate of the WD drives at around 3% (I don’t think the numbers quite add up, but I could be wrong).
Seagate drives are not very reliable at all. As you can see in the second graph below, Seagate drives are fine for the first year, but failures quickly start building up after 18 months. By the end of the third year, just 73.5% of Backblaze’s Seagate drives are still running. This equates to an annual failure rate of 8-9%.
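
For the curious, here is how those survival and annual-failure figures relate under a simple constant-failure-rate model. This is my own sketch; Backblaze computes its rates from drive-days, so the numbers won’t line up exactly, which may be why the WD figures above don’t quite add up.

    def survival(years, annual_failure_rate):
        """Fraction of drives still alive after `years` at a constant annual failure rate."""
        return (1 - annual_failure_rate) ** years

    def implied_afr(survival_fraction, years):
        """Annual failure rate implied by a given multi-year survival fraction."""
        return 1 - survival_fraction ** (1 / years)

    print(f"Hitachi, ~1% AFR:      {survival(3, 0.01):.1%} alive after 3 years")  # ~97.0%
    print(f"WD, 94.8% survive:     {implied_afr(0.948, 3):.1%} implied AFR")      # ~1.8%, not 3%
    print(f"Seagate, 73.5% survive: {implied_afr(0.735, 3):.1%} implied AFR")     # ~9.8%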




Image: Hard drive failure rate, plotted by month
In Backblaze’s words: “If the price were right, we would be buying nothing but Hitachi drives. They have been rock solid, and have had a remarkably low failure rate.”

Which single hard drive is the most reliable? (And which is the least?)

In general, then, if you want a reliable hard drive you should go for a Hitachi or Western Digital. If you’re looking for a specific drive model that has good longevity, the numbers break down interestingly.
The two best drives, each with a 0.9% annual failure rate over more than two years of service, are the Hitachi GST Deskstar 5K3000 and the Hitachi Deskstar 7K3000. Get one of these drives and you’re almost guaranteed (97-98%) to make it through three years without a dead drive. If you want a 4TB drive, the Hitachi Deskstar 5K4000 is your best bet — it has a slightly higher failure rate, but one that’s still below WD and Seagate’s offerings.
As far as poor reliability goes, Seagate has some nasty offenders. The 1.5TB Seagate Barracuda 7200 (an old drive now) has a very high chance of failing after three or four years. Even the newer 3TB Seagate Barracuda has a pretty high failure rate, at 9.8% per year.
Backblaze also notes that some drives (the Western Digital Green 3TB and Seagate Barracuda LP 2TB) start producing errors as soon as they’re slotted into a storage pod. Backblaze thinks this is due to the large amount of vibration caused by thousands of other hard drives. (It also suspects that these drives’ aggressive power-saving spin-down behavior causes a lot of extra wear.)
Hit up Backblaze’s website for a full list of hard drives and their statistics.

Samsung and Toshiba

Unfortunately, Backblaze doesn’t have a statistically significant number of Samsung or Toshiba drives installed. Even if it did, Samsung’s hard drive division was acquired by Seagate in 2011, so it’s hard to say whether an older, pre-acquisition Samsung drive would be more or less reliable than a post-acquisition drive. Toshiba/Fujitsu still has a reasonable wedge (~10%) of the market share pie, but we’ll have to wait for another study to see how those drives compare to Seagate, Western Digital, and Hitachi.
On the topic of acquisitions, you may also remember that Western Digital acquired Hitachi GST almost two years ago. If we compare Hitachi drives from before and after the acquisition, the annual failure rate seems to stay the same (around 1%). It would seem that Western Digital and Hitachi have the reliable hard drive business sewn up — and this is before we’ve had a chance to see what WD/HGST’s helium-filled hard drive can do!

Wednesday 22 January 2014

Sugar-powered biobattery has 10 times the energy storage of lithium: Your smartphone might soon run on enzymes



As you probably know, from sucking down cans of Coke and masticating on candy, sugar — glucose, fructose, sucrose, dextrose — is an excellent source of energy. Biologically speaking, sugar molecules are energy-dense, easy to transport, and cheap to digest. There is a reason why almost every living cell on Earth generates its energy (ATP) from glucose. Now, researchers at Virginia Tech have successfully created a sugar-powered fuel cell that has an energy storage density of 596 amp-hours per kilogram — or “one order of magnitude” higher than lithium-ion batteries. This fuel cell is refillable with a solution of maltodextrin, and its only byproducts are electricity and water. The chief researcher, Y.H. Percival Zhang, says the tech could be commercialized in as little as three years.
Now, it’s not exactly news that sugar is an excellent energy source. As a species, we’ve probably known about it since before we were even Homo sapiens. The problem is, unless you’re a living organism or some kind of incendiary device, extracting that energy is difficult. In nature, an enzymatic pathway is used — a production line of tailor-made enzymes that meddle with the glucose molecules until they become ATP. Because it’s easy enough to produce enzymes in large quantities, researchers have tried to create fuel cells that use artificial “metabolism” to break down glucose into electricity (biobatteries), but it has historically proven very hard to find the right pathway for maximum efficiency, and to keep the enzymes in the right place over a long period of time.
Now, however, Zhang and friends at Virginia Tech appear to have built a high-density fuel cell that uses an enzymatic pathway to create a lot of electricity from glucose. There doesn’t seem to be much information on how stable this biobattery is over multiple refills, but if Zhang thinks it could be commercialized in three years, that’s a very good sign. Curiously, the research paper says that the enzymes are non-immobilized — meaning Zhang found a certain battery chemistry that doesn’t require the enzymes to be kept in place… or, alternatively, that it will only work for a very short time.
Image: Energy densities of various battery types. “15% Maltodextrin”, in dark blue, is the battery being discussed here.
The Virginia Tech biobattery uses 13 enzymes, plus air (it’s an air-breathing biobattery), to produce nearly 24 electrons from a single glucose unit. This equates to a power output of 0.8 mW/cm², a current density of 6 mA/cm², and an energy storage density of 596 Ah/kg. This last figure is impressive, at roughly 10 times the energy density of the lithium-ion batteries in your mobile devices. [Research paper: doi:10.1038/ncomms4026 - "A high-energy-density sugar biobattery based on a synthetic enzymatic pathway"]
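
As a rough sanity check on that headline number, the charge works out neatly if you assume (as the chart suggests, though the article doesn’t spell it out) that the 596 Ah/kg refers to a 15% maltodextrin solution, with each anhydroglucose unit of about 162 g/mol giving up 24 electrons. Those assumptions are mine, not statements from the paper.

    FARADAY             = 96_485.0   # coulombs per mole of electrons
    ELECTRONS_PER_UNIT  = 24         # electrons harvested per glucose unit (from the paper)
    GLUCOSE_UNIT_KG_MOL = 0.16214    # kg/mol, anhydroglucose unit in maltodextrin (assumed)
    SOLUTION_FRACTION   = 0.15       # 15% maltodextrin by weight (assumed basis for 596 Ah/kg)

    coulombs_per_mol = ELECTRONS_PER_UNIT * FARADAY
    ah_per_kg_sugar  = coulombs_per_mol / 3600 / GLUCOSE_UNIT_KG_MOL
    ah_per_kg_soln   = ah_per_kg_sugar * SOLUTION_FRACTION

    print(f"~{ah_per_kg_sugar:.0f} Ah per kg of maltodextrin")   # ~3,970 Ah/kg
    print(f"~{ah_per_kg_soln:.0f} Ah per kg of 15% solution")    # ~595 Ah/kg, close to the quoted 596
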
If Zhang’s biobatteries pan out, you might soon be recharging your smartphone by pouring in a solution of 15% maltodextrin. That battery would not only be very safe (it produces water and electricity), but very cheap to run and very green. This seems to fit in perfectly with Zhang’s homepage, which talks about how his main goals in life are replacing crude oil with sugar, and feeding the world.
The other area in which biobatteries might be useful is powering implanted devices, such as pacemakers — or, in the future, subcutaneous sensors and computers. Such a biobattery could feed on the glucose in your bloodstream, providing an endless supply of safe electricity for the myriad implants that futuristic technocrats will surely have.