Google to Host Terabytes of Open-Source Science Data
Topic: High Tech Developments
12:58 pm EST, Jan 20, 2008
Sources at Google have disclosed that the humble domain, http://research.google.com, will soon provide a home for terabytes of open-source scientific datasets. The storage will be free to scientists and access to the data will be free for all. The project, known as Palimpsest and previewed to the scientific community at the Science Foo camp at the Googleplex last August, missed its original launch date this week, but will debut soon.
Building on the company's acquisition of Trendalyzer, the data visualization technology from the oft-lauded, TED-presenting Gapminder team, Google will also offer algorithms for examining and probing the data. The new site will have YouTube-style annotation and commenting features.
We hear that Google is hunting for cool datasets, so if you have one, it might pay to get in touch with them.
The web is more interesting when you can build apps that easily interact with your friends and colleagues. But with the trend towards more social applications also comes a growing list of site-specific APIs that developers must learn.
OpenSocial provides a common set of APIs for social applications across multiple websites. With standard JavaScript and HTML, developers can create apps that access a social network's friends and update feeds.
It was only a few months ago that I wrote:
Now is the time for all good men to quit their silos and adopt an open standard.
What a brilliant move by Google. (I suppose as a launch partner, I'm biased, but still: what a brilliant move.) That $15 billion Facebook valuation got a lot of abuse over the past few weeks, but in a way I thought it made sense. Obviously, there was risk involved, but if you thought that Facebook had a reasonable shot at becoming "the social operating system of the Web", then it was probably worth making the bet -- particularly given that Microsoft had other reasons to invest. A company that runs the web's "social operating system" could easily be worth $50B or $100B. But that seems entirely impossible now, just a few days later, thanks to OpenSocial.
Not everyone is so sanguine:
“Open Social” sounds a lot like an “Open Marriage” – on the surface some may think this sounds fun but after thinking about it for a minute you quickly realize it’s a bad idea.
NYT observes:
Developers may not see the advantage to writing programs that run across such remarkably different networks.
But Eric Schmidt is reassuring:
"If you are of a certain age, you sort of dismiss this as college kids or teenagers. But it is very real."
This man wants to control the Internet. And you should let him.
...
A system of linked computers like the Internet is obviously a network, but so are jetliners, human bodies, and even bacterial cells. They’re all networks because they are made up of lots and lots of parts that work together. Robust networks have parts that continue to work together smoothly even if conditions fluctuate unpredictably. In the case of the Internet, a million people may try to send e-mail at once. Doyle knows, however, that networks that look perfectly sound can be headed for collapse with little warning. Control theorists have pondered living things for decades, but until recently they lacked the mathematical tools to analyze them as they would a technological system. Doyle and his colleagues have created some of those tools.
“This may well turn out to be a watershed in terms of widespread awareness of the vulnerability of modern society,” said Linton Wells II, the principal deputy assistant secretary of defense for networks and information integration at the Pentagon. “It has gotten the attention of a lot of people.”
Most of this article consists of explaining the concept of a botnet attack. So I went searching for more substantive material.
Arbor Networks offers a brief summary; some of the tidbits show up almost verbatim in the Markoff piece. A more formal analysis is apparently still in the works.
This one is for Tom:
[Estonia] has a nearly model economy, based in large part on the teachings of Milton Friedman, who favored free markets unfettered by state control.
Kevin Poulsen thinks the press is overexcited:
Here's THREAT LEVEL's top-10 favorite phrases that have surfaced in the media so far (exclamation points added):
Cyberwarfare! Online Combat! Cyber 'Nuclear Winter'! Massive Panic! Online Terrorism! A New Battle Tactic! Unprecedented! An Internet Riot! Cyber-Terrorism! Havoc!
Details may not be forthcoming just yet:
I do not want to disclose facts that could damage operations that the people on the ground are working on. Again, this is the sad fact of working with incidents like this: you just can't correct misconceptions in real time, no matter how much you would like to.
Digital Imaging, Reimagined | MIT Technology Review's TR10
Topic: High Tech Developments
9:58 pm EDT, Apr 3, 2007
Baraniuk and Kelly, both professors of electrical and computer engineering at Rice University, have developed a camera that doesn't need to compress images. Instead, it uses a single image sensor to collect just enough information to let a novel algorithm reconstruct a high-resolution image.
At the heart of this camera is a new technique called compressive sensing. A camera using the technique needs only a small percentage of the data that today's digital cameras must collect in order to build a comparable picture. Baraniuk and Kelly's algorithm turns visual data into a handful of numbers that it randomly inserts into a giant grid. There are just enough numbers to enable the algorithm to fill in the blanks, as we do when we solve a Sudoku puzzle. When the computer solves this puzzle, it has effectively re-created the complete picture from incomplete information.
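The "Sudoku" idea above can be made concrete with a toy 1-D sketch: take far fewer random linear measurements than signal samples, then fill in the blanks by exploiting sparsity. This is my own minimal illustration with made-up parameters, using greedy Orthogonal Matching Pursuit for the reconstruction step; it is not Baraniuk and Kelly's actual algorithm, and their camera computes the projections optically with binary micromirror patterns rather than a Gaussian matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5           # signal length, measurements (m << n), sparsity

# A k-sparse signal: a few random spikes in an otherwise zero vector.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random measurement matrix: each measurement is one "random projection"
# of the whole signal (the camera uses pseudorandom binary patterns).
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                      # only m = 64 numbers recorded, not n = 256

def omp(A, y, k):
    """Greedy reconstruction: repeatedly pick the column of A most
    correlated with the residual, then re-fit by least squares over
    all columns chosen so far."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With these ratios (64 measurements for a 5-sparse, 256-sample signal) the greedy solver recovers the spikes essentially exactly; the point is that 64 well-chosen random numbers suffice where naive sampling would need 256.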
Compressed sensing is a new framework for acquiring sparse signals, based on the revelation that a small number of linear projections (measurements) of the signal contain enough information for its reconstruction. The foundation of compressed sensing is built on the availability of noise-free measurements. However, measurement noise is unavoidable in analog systems and must be accounted for. We demonstrate that measurement noise is the crucial factor that dictates the number of measurements needed for reconstruction. To establish this result, we evaluate the information contained in the measurements by viewing the measurement system as an information-theoretic channel. Combining the capacity of this channel with the rate-distortion function of the sparse signal, we lower bound the rate-distortion performance of a compressed sensing system. Our approach concisely captures the effect of measurement noise on the performance limits of signal reconstruction, thus enabling us to benchmark the performance of specific reconstruction algorithms.
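The abstract's channel argument can be sketched in one line (my notation, not necessarily the authors'): if each noisy measurement is a channel use with capacity $C$, then $M$ measurements convey at most $MC$ bits about the signal, so reconstructing it to distortion $D$ requires

```latex
% Hedged sketch of the counting argument; M, C, R(D) are illustrative symbols.
M \cdot C \;\ge\; R(D),
\qquad
C = \tfrac{1}{2}\log_2\!\bigl(1 + \mathrm{SNR}\bigr)
\ \text{bits per measurement (AWGN case)},
```

where $R(D)$ is the rate-distortion function of the $k$-sparse source. This is consistent with the standard noiseless result that on the order of $k \log(N/k)$ random measurements suffice for an $N$-sample, $k$-sparse signal.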
Compressive Sensing is an emerging field based on the revelation that a small group of non-adaptive linear projections of a compressible signal or image contains enough information for reconstruction and processing. Our new digital image/video camera directly acquires random projections of a scene without first collecting the pixels/voxels. The camera architecture employs a digital micromirror array to optically calculate linear projections of the scene onto pseudorandom binary patterns. Its key hallmark is its ability to obtain an image or video with a single detection element (the "single pixel") while measuring the scene fewer times than the number of pixels/voxels. Since the camera relies on a single photon detector, it can also be adapted to image at wavelengths where conventional CCD and CMOS imagers are blind.
Alberto Mujica, President and CEO of Reputation Technologies, feels the support of groups like these furthers the development of a more secure internet: "The MIT Spam Conference will gather some of the smartest people to have thought about, and worked on, this problem. Their support is a good thing for all of us who depend on email as a communication medium."
The proceedings are available, but, bizarrely, only as an ISO image.
Basically, they're talking about taking YouTube peer-to-peer and monetizing it.
Neokast is a live video streaming platform for video distribution over the Internet. The video files available through Neokast are both user-generated and solicited from professional film makers by Neokast Productions LLC. Users can upload live video streams from any video capture device, and they can also upload prerecorded (archived) video content from their computer. Content publishers can choose to make their content available on their own website, on the Neokast website, or both. Live video streams and archived videos can be viewed as they are uploaded, in real-time, by any number of viewers through any normal DSL or cable Internet connection. Content publishers have the option of offering their content as free live streams, free archived streams, On-Demand, and Pay-Per-View. Users have the option to charge admission to their content and to participate in the sharing of advertising revenue generated by their content. There is no limit to the video length. The videos can be viewed in HD quality, depending on the capabilities of the content publisher and viewer. Due to advanced streaming protocols within the Neokast technology, the cost of providing the Neokast service is far lower than any other video distribution service in existence.
By expanding its scope and enhancing its capabilities, Neokast's patent-pending technology has the potential to revolutionize the market in global communications. As the world moves further towards globalization, it has become increasingly desirable from a social, political, and economic standpoint that the ability to transmit and receive information is not bounded by region, and that it does not belong solely to media conglomerates and political networks. With Neokast providing the ability to stream events in real time, anyone can be a field reporter, every band can reach a global audience, every sporting event can be viewed from anywhere in the world, and everyone can have a forum in which to be heard. But the possibilities do not end there. With the expansion of Wi-Fi networks, the growing availability of high-speed internet access, and the integration of computers with television, Neokast offers viewers a portal through which they can continuously view the most interesting and significant world events.
In retrospect it was so obvious -- !! -- that the worldwide shortage of field reporters was due to the lack of a viable P2P video distribution system. Why didn't someone fix this before now?
On Cringely's blog, the CEO responds to critics, talking about the illegality of rival services. Well, then, who is he kidding about "every sporting event can be viewed from anywhere in the world"? For most professional events, video distribution rights are exclusively negotiated and highly valued. This doesn't add up.
Bob Cringely joins the chorus of people who don't get it. But at least he doesn't get it in a different way.
With Cisco it always comes down to routers and how to get people to buy new ones. That's evident in Cisco's purchase this week of WebEx, where we can expect Cisco to strongly push video services on those two million WebEx customers, straining the system and forcing hardware upgrades. It's not about Microsoft; it's about the routers.
Bob's analysis of the economics of multicast is, well, ... let's just say I'd recommend Kevin Almeroth's analysis instead.