Current Topic: Technology

Topic: Technology
7:10 pm EST, Mar 7, 2009
IBM high-performance computing (HPC) clustered solutions offer significant price/performance advantages for many high-performance workloads by combining low-cost servers with innovative, readily available open source and commercial software. Today, some businesses are using their own resources to build Linux and Windows clusters from commodity hardware, standard interconnects and networking technology, open source software, and in-house or third-party applications. Any savings realized from the potentially lower acquisition cost of these systems can be offset by the expense and complexity of assembling, integrating, testing and managing clusters built from disparate, piece-part components. IBM has designed the IBM System Cluster 1350 to help address these challenges. Our clients benefit from IBM’s extensive experience with HPC to help minimize complexity and risk. Using advanced Intel® Xeon®, AMD™ Opteron™ and IBM POWER6™-based server nodes, proven cluster management software, and optional high-speed interconnects, the Cluster 1350 offers the best of IBM and third-party technologies. As a result, clients can speed up installation of an HPC cluster, simplify its management and support, and reduce mean time to payback.
IBM System Cluster 1350

Georgia State University's IBM Cluster 1350 Supercomputer

Topic: Technology
7:09 pm EST, Mar 7, 2009
Georgia State’s new supercomputer allows for cutting-edge research

ATLANTA – Eric Hurst wants to know who is really in control of our country, and the Georgia State doctoral student in political science is using the university’s new supercomputer to get to the bottom of it. Georgia State recently purchased an IBM System Cluster 1350 supercomputer through a partnership program between IBM and the Southeastern Universities Research Association (SURA), a consortium of more than 60 research institutions. Able to make three trillion calculations per second, the equivalent of 320 desktop computers, Georgia State’s new supercomputer is the latest addition to the school’s expanding inventory of supercomputing resources that will benefit researchers in various disciplines.
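A quick back-of-the-envelope check of those two figures (both numbers come from the article; treating them as simple per-second rates is an assumption for illustration):

```python
# Sanity check of the article's comparison: 3 trillion calculations/second
# spread across 320 desktop-equivalents. Both figures are the article's;
# interpreting them as raw rates is an assumption for illustration.
cluster_rate = 3e12        # calculations per second
desktop_equivalents = 320

per_desktop = cluster_rate / desktop_equivalents
print(f"Implied per-desktop rate: {per_desktop:.3g} calculations/second")
# ~9.4 billion calculations per second per desktop-equivalent, a plausible
# figure for a multi-core desktop CPU of that era.
```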
Georgia State University's IBM Cluster 1350 Supercomputer

Cumulo - Computing in the Cloud

Topic: Technology
3:12 pm EST, Mar 7, 2009
Introducing Cumulo BLAST: Turnkey BLAST Servers with Cumulo SAASi and Amazon EC2

BLAST is the most important data mining tool for bioinformatics applications. It's provided by the National Center for Biotechnology Information (NCBI). Although there are various free BLAST servers available around the world, there is still a need for biologists to have their own BLAST servers. Typically, the NCBI and other public BLAST servers can get bogged down with requests, and turnaround times can be slow. Having your own BLAST servers can significantly improve the throughput of your BLAST pipelines. Cumulo BLAST addresses this need by offering a turnkey BLAST server built using Cumulo SAASi and Amazon EC2. You simply start one or more of the Amazon EC2 instances that we supply, and you instantly have a BLAST server that can handle web service requests. You can start as many servers as you need to increase throughput to any desired level.
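The announcement doesn't show the EC2 side, so here is a minimal sketch of the general pattern it describes (launching instances from a prebuilt server image and scaling out by count), using boto3 rather than Cumulo's own tooling; the AMI ID, key pair, and instance type are placeholders, not Cumulo's actual values.

```python
# Minimal sketch: launch N instances of a prebuilt BLAST-server image and
# scale throughput by raising the count. The AMI ID, key pair, and instance
# type are placeholders, not Cumulo's actual values.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

def launch_blast_servers(count: int):
    """Start `count` instances of a hypothetical BLAST-server AMI."""
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder: the vendor-supplied image
        InstanceType="m5.large",           # placeholder size
        KeyName="my-keypair",              # placeholder key pair
        MinCount=count,
        MaxCount=count,
    )
    for inst in instances:
        inst.wait_until_running()
        inst.reload()                      # refresh so public_dns_name is populated
        print("BLAST server up at", inst.public_dns_name)
    return instances

if __name__ == "__main__":
    launch_blast_servers(2)   # start as many as needed for the desired throughput
```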
Cumulo - Computing in the Cloud

Bluish Coder: Distributed Erlang and Firewalls

Topic: Technology
7:48 pm EST, Mar 6, 2009
Sunday, November 27, 2005

Distributed Erlang and Firewalls

I have two Erlang nodes communicating with each other, one of which is behind a firewall. The Erlang distribution mechanism uses a couple of TCP ports that need to be opened in the firewall. The first is port 4369, used by 'epmd'. The second port is dynamically assigned, which makes it difficult to configure the firewall.
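The dynamic port can be pinned to a fixed range with the kernel application parameters inet_dist_listen_min and inet_dist_listen_max, so the firewall only needs 4369 plus that range open. Below is a small sketch for verifying the holes from outside the firewall; the host name and the port range are placeholder values, not from the post.

```python
# Reachability check for the ports distributed Erlang needs through a firewall:
# 4369 for epmd, plus whatever fixed range the node's distribution listener has
# been pinned to via the kernel parameters inet_dist_listen_min/max.
# The host and the 9100-9105 range below are placeholders.
import socket

HOST = "erlang-node.example.com"            # placeholder: node behind the firewall
PORTS = [4369] + list(range(9100, 9106))    # epmd plus an assumed pinned range

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "open" if port_open(HOST, port) else "blocked/closed"
    print(f"{HOST}:{port} -> {state}")
```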
Bluish Coder: Distributed Erlang and Firewalls

Test Setup, Flash SSDs and Access Time - Review Tom's Hardware : Accelerate Your Hard Drive By Short Stroking

Topic: Technology
4:35 pm EST, Mar 5, 2009
Although short stroking doesn’t get hard drives anywhere near the access times of flash SSDs, we found that their access times still decrease by 40% in the case of the Ultrastar 15K450 SAS HDDs, and by an amazing 50% in the case of the Deskstar 7K1000.B SATA drives. The advantages are similar when the drives are configured in RAID modes. Since no future hard drive will be able to significantly shorten today’s access times, short stroking is an excellent technique for improving performance in a very noticeable way. Even the desktop 7K1000.B shows access times that are quicker than those of 10,000 RPM drives.
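A rough way to see why confining data to the outer zone helps this much: seek time is often approximated as a fixed settle overhead plus a term that grows with the square root of seek distance, so shrinking the usable stroke shrinks the variable term much faster than the total. The constants below are illustrative assumptions, not measurements of the drives in the review, and rotational latency (which short stroking does not change) is left out.

```python
# Toy model of why short stroking cuts the seek portion of access time:
# seek time ~ settle overhead + k * sqrt(seek distance), a common rough model.
# The constants are made up for illustration, not measured values for the
# Ultrastar or Deskstar drives in the review.
import random

SETTLE_MS = 1.0        # assumed head-settle overhead
FULL_SEEK_MS = 12.0    # assumed full-stroke seek time on top of the settle

def seek_time(distance_fraction: float) -> float:
    """Seek time for a seek spanning `distance_fraction` of the full stroke."""
    return SETTLE_MS + FULL_SEEK_MS * distance_fraction ** 0.5

def avg_seek(stroke_fraction: float, samples: int = 100_000) -> float:
    """Average seek time when all data lives in the outer `stroke_fraction`."""
    total = 0.0
    for _ in range(samples):
        a = random.random() * stroke_fraction
        b = random.random() * stroke_fraction
        total += seek_time(abs(a - b))
    return total / samples

full = avg_seek(1.0)
short = avg_seek(0.15)   # only the outer ~15% of the stroke is used
print(f"full stroke: {full:.2f} ms, short stroked: {short:.2f} ms "
      f"({100 * (1 - short / full):.0f}% lower)")
# With these assumed constants the reduction comes out around 50%, the same
# ballpark as the review's measured 40-50%.
```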
Tom's Hardware reduced access times on high-end SAS and SATA drives by as much as 40-50% by formatting only the outer 10-20% of the platters, to minimize the seek distance of the read heads. Amusing. Test Setup, Flash SSDs and Access Time - Review Tom's Hardware : Accelerate Your Hard Drive By Short Stroking

Twitpay

Topic: Technology
3:28 pm EST, Mar 5, 2009
Today, we’re taking Twitpay out of beta and putting it out there for everyone to use. (If you don’t like to read long blog posts: we’re turning on “real money” powered by Amazon Payments. We’re excited. Twitpay is awesome.)
Congrats to Twitpay for going live with real money! Twitpay

Can You Buy a Silicon Valley? Maybe.

Topic: Technology
1:52 am EST, Mar 1, 2009
However, even that is an interesting prospect. Suppose to be on the safe side it would cost a million dollars per startup. If you could get startups to stick to your town for a million apiece, then for a billion dollars you could bring in a thousand startups. That probably wouldn't push you past Silicon Valley itself, but it might get you second place. For the price of a football stadium, any town that was decent to live in could make itself one of the biggest startup hubs in the world.
Can You Buy a Silicon Valley? Maybe.

How FriendFeed uses MySQL to store schema-less data - Bret Taylor's blog

Topic: Technology
12:00 am EST, Mar 1, 2009
Background

We use MySQL for storing all of the data in FriendFeed. Our database has grown a lot as our user base has grown. We now store over 250 million entries and a bunch of other data, from comments and "likes" to friend lists.

As our database has grown, we have tried to iteratively deal with the scaling issues that come with rapid growth. We did the typical things, like using read slaves and memcache to increase read throughput and sharding our database to improve write throughput. However, as we grew, scaling our existing features to accommodate more traffic turned out to be much less of an issue than adding new features. In particular, making schema changes or adding indexes to a database with more than 10-20 million rows completely locks the database for hours at a time. Removing old indexes takes just as much time, and not removing them hurts performance because the database will continue to read and write to those unused blocks on every INSERT, pushing important blocks out of memory.

There are complex operational procedures you can do to circumvent these problems (like setting up the new index on a slave, and then swapping the slave and the master), but those procedures are so error prone and heavyweight that they implicitly discouraged our adding features that would require schema/index changes. Since our databases are all heavily sharded, the relational features of MySQL like JOIN have never been useful to us, so we decided to look outside of the realm of RDBMS.
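The post goes on to describe the pattern FriendFeed settled on: each entity is stored as an opaque blob keyed by a UUID, and anything that needs to be queried gets its own narrow index table maintained by application code, so new "indexes" are just new tables that can be backfilled without locking the main one. Here is a simplified sketch of that idea, with JSON in place of their zlib-compressed pickles, a plain DB-API cursor (e.g. MySQLdb or PyMySQL) in place of their sharding layer, and table layouts that are assumptions rather than their exact schema.

```python
# Sketch of the schema-less-over-MySQL pattern: an opaque blob table keyed by
# UUID, plus hand-maintained index tables for the attributes you query on.
# Simplifications: JSON instead of compressed pickles, no sharding layer, and
# a table layout that is an assumption rather than FriendFeed's exact schema.
import json
import uuid

SCHEMA = """
CREATE TABLE IF NOT EXISTS entities (
    id      BINARY(16) NOT NULL,
    updated TIMESTAMP  NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    body    MEDIUMBLOB,
    PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS index_user (
    user_id   CHAR(32)   NOT NULL,
    entity_id BINARY(16) NOT NULL,
    PRIMARY KEY (user_id, entity_id)
);
"""  # reference DDL; run each statement separately with your MySQL driver

def save_entity(cursor, entity: dict) -> bytes:
    """Store an entity as a blob and maintain the user index table by hand."""
    entity_id = uuid.uuid4().bytes
    cursor.execute(
        "INSERT INTO entities (id, body) VALUES (%s, %s)",
        (entity_id, json.dumps(entity).encode()),
    )
    cursor.execute(
        "INSERT INTO index_user (user_id, entity_id) VALUES (%s, %s)",
        (entity["user_id"], entity_id),
    )
    return entity_id

def entities_for_user(cursor, user_id: str) -> list[dict]:
    """Look up IDs in the index table, then fetch and decode the blobs."""
    cursor.execute("SELECT entity_id FROM index_user WHERE user_id = %s", (user_id,))
    ids = [row[0] for row in cursor.fetchall()]
    if not ids:
        return []
    placeholders = ",".join(["%s"] * len(ids))
    cursor.execute(f"SELECT body FROM entities WHERE id IN ({placeholders})", ids)
    return [json.loads(row[0]) for row in cursor.fetchall()]
```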
How FriendFeed uses MySQL to store schema-less data - Bret Taylor's blog

Force of Good: a blog by Lance Weatherby

Topic: Technology
6:21 pm EST, Feb 28, 2009
Could Atlanta Buy A Silicon Valley? The Answers

In an essay whose URL ends in "maybe", Paul Graham wrote an interesting piece about how a city could go about buying a Silicon Valley. Towards the end he poses a series of questions any city should ask to decide whether the scheme will work for it. Here is my take on the answers.
Much discussion also at: http://news.ycombinator.com/item?id=498431 Force of Good: a blog by Lance Weatherby

Web λ.0 - Functional programming for the Web: Erlang tips and tricks: nodes

Topic: Technology
3:11 pm EST, Feb 28, 2009
The title might be a bit exaggerated, but if you have just started your adventure with Erlang, I would like to offer a couple of hints that can spare you a serious headache. The basic tool for working with Erlang is its REPL shell, started in terminal mode with the erl command. The name REPL comes from the read, eval, print, loop cycle, and among functional languages it is a commonly used term for an interactive shell.
Lots of tips/tricks here re: cookies, shell, name/sname, etc. that will save you Erlang headaches. Web λ.0 - Functional programming for the Web: Erlang tips and tricks: nodes