bit-tech.net | Feature - How AI in Games Works

Topic: Technology
9:26 pm EDT, Mar 13, 2009
Chris Jurney, a senior programmer for Relic, offered the example of the state machines in its RTS, Dawn of War 2, to illustrate this.

"The AI for Dawn of War 2 has roughly three main layers: the computer player, the squad and the entity," says Jurney. "The squad and the entities are both hierarchical finite state machines, and we have roughly 20 states at the squad level and 20 at the entity level. The states at the squad level pretty much map directly to orders that can be issued by the user."

"For example, if you issue a capture order to a squad, the squad will enter the SquadStateCapture state," continues Jurney. "This state might find that it’s not close enough yet to capture the point, so it will sub-state to SquadStateMove. In this state, the formation movement system will kick in and start issuing Move orders to individual entities, and the entities receive these commands and enter the StateMove state. Inside that, paths are found, dynamics are applied and, generally, the individuals try to look smart as they perform their orders."

Some states are also shared among multiple character types where relevant, and most characters spend most of their time in just a few standard states, with the others waiting in the wings for extraordinary circumstances.
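As a rough illustration of how a squad state can "sub-state" into movement and drive entity states, here is a minimal two-layer sketch in shell. The state names follow the quote; the distance counter and the update loop are invented for illustration and have nothing to do with Relic's actual code.

```shell
# Hypothetical two-layer state machine: a squad-level state drives
# entity-level states, as in the quote above.
squad=SquadStateCapture
entity=StateIdle
dist=3                          # invented: steps away from the capture point

step() {
  case "$squad" in
    SquadStateCapture)
      if [ "$dist" -gt 0 ]; then
        squad=SquadStateMove    # not close enough yet: sub-state to Move
      else
        squad=SquadStateDone    # in range: capture succeeds
      fi ;;
    SquadStateMove)
      entity=StateMove ;;       # issue Move orders to the entities
  esac
  case "$entity" in
    StateMove)
      dist=$((dist - 1))        # stand-in for pathfinding and dynamics
      if [ "$dist" -eq 0 ]; then
        entity=StateIdle
        squad=SquadStateCapture # movement finished: re-evaluate capture
      fi ;;
  esac
}

while [ "$squad" != SquadStateDone ]; do step; done
echo "point captured"
```

The point of the hierarchy is visible even in this toy: the squad layer only decides *what* to do (capture vs. move), while the entity layer handles *how* (the per-step movement), so either layer can grow states without touching the other.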

blog dds: 2009.03.04 - Parallelizing Jobs with xargs

Topic: Technology
3:58 am EDT, Mar 13, 2009
The xargs -P flag can also be useful for parallelizing commands that depend on a large number of high-latency systems. Only a week ago I spent hours writing a script that would resolve IP addresses into host names in parallel. (Yes, I know about the logresolve.pl script that comes with the Apache web server distribution, but the speedup it provides leaves a lot to be desired.) Had I known about the xargs -P option, I would have finished my task in minutes.
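A minimal sketch of that task using xargs -P (the file name ips.txt and the concurrency level of 16 are assumptions; getent hosts asks the system resolver for the name matching each address):

```shell
# Hypothetical input: one IP address per line (substitute your own list).
printf '127.0.0.1\n' > ips.txt

# -P 16 keeps up to 16 resolver processes in flight at once; -n 1 hands
# each invocation a single address, so slow lookups overlap instead of
# queueing behind one another.
xargs -P 16 -n 1 getent hosts < ips.txt
```

Because each lookup is dominated by network latency rather than CPU, the wall-clock time drops roughly in proportion to the -P value, which is exactly the speedup the post describes.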

Amazon Web Services Blog: Announcing Amazon EC2 Reserved Instances

Topic: Technology
9:15 pm EDT, Mar 12, 2009
Taking these requirements into account, we've created a new EC2 pricing model, which we call Reserved Instances. After you purchase such an instance for a one-time fee, you have the option to launch an EC2 instance of a certain instance type, in a particular availability zone, for a period of either 1 or 3 years. Your launch is guaranteed to succeed; there's no chance of encountering any transient limitations in EC2 capacity. You have no obligation to run the instances full time, so you'll pay even less if you choose to turn them off when you are not using them. Steady-state usage costs, when computed on an hourly basis over the term of the reservation, are significantly lower than those for the on-demand model. For example, an on-demand EC2 Small instance costs 10 cents per hour. Here's the cost breakdown for a reserved instance (also check out the complete EC2 pricing info):

  Term     One-time Fee   Hourly Usage   Effective 24/7 Cost
  1 Year   $325           $0.030         $0.067
  3 Year   $500           $0.030         $0.049

Every one of the EC2 instance types is available at a similar savings. We've preserved the flexibility of the on-demand model and have given you a new and more cost-effective way to use EC2. Think of the one-time fee as somewhat akin to acquiring hardware, and the hourly usage as similar to operating costs.
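The "effective 24/7 cost" column can be reproduced by amortizing the one-time fee over every hour of the term and adding the hourly usage charge (8,760 hours per year):

```shell
# effective hourly cost = one-time fee / hours in term + hourly usage
awk 'BEGIN {
  printf "1-year: $%.3f/hour\n", 325 / (1 * 8760) + 0.030   # $0.067
  printf "3-year: $%.3f/hour\n", 500 / (3 * 8760) + 0.030   # $0.049
}'
```

Both figures match the table, and the comparison against the $0.10/hour on-demand Small instance shows the break-even point: the reservation pays off well before the instance runs anywhere near 24/7.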

Computational Complexity: Math books you can actually read

Topic: Technology
7:39 pm EDT, Mar 11, 2009
So, which books are just right: hard enough to have things of interest in them, but not so hard that you can't read them? Demanding that you be able to read a book `cover-to-cover' is rather demanding, and also ambiguous: do you need to understand everything? I'll define it to mean `read/understood over 90% of the book'.

SpringerLink - Journal Article - Caffeine Eating Bacteria

Topic: Technology
11:41 am EDT, Mar 11, 2009
Abstract: A strain of Serratia marcescens showing the ability to degrade caffeine and other methylxanthines was isolated from soil under coffee cultivation. Growth was observed only with xanthines methylated at the 7 position (caffeine, 1,3,7-dimethylxanthine; paraxanthine, 1,7-dimethylxanthine; theobromine, 3,7-dimethylxanthine; and 7-methylxanthine). Paraxanthine and theobromine were released in liquid medium when caffeine was used as the sole source of carbon and nitrogen. When paraxanthine or theobromine were used, 3-methylxanthine, 7-methylxanthine, and xanthine were detected in the liquid medium. Serratia marcescens did not grow with theophylline (1,3-dimethylxanthine), 1-methylxanthine, or 3-methylxanthine, and poor growth was observed with xanthine. Methyluric acid formation from methylxanthines was tested in cell-free extracts by measuring dehydrogenase reduction of tetrazolium salt in a native polyacrylamide electrophoresis gel. Activity was observed for all methylxanthines, even those with which no bacterial growth was observed. Our results suggest that in this strain of S. marcescens caffeine is degraded to theobromine (3,7-dimethylxanthine) and/or paraxanthine (1,7-dimethylxanthine), and subsequently to 7-methylxanthine and xanthine. Methyluric acid formation could not be confirmed.

How to Write Parallel Programs

Topic: Technology
11:37 am EDT, Mar 11, 2009
How to Write Parallel Programs: A First Course

Caffeine Producing Transgenic Tobacco: A Novel Pest Control Strategy

Topic: Technology
9:54 pm EDT, Mar 10, 2009
The Nara Institute of Science and Technology in Japan recently reported research on the development of caffeine-producing transgenic tobacco plants tolerant to tobacco cutworms (Spodoptera litura). Previously, the researchers isolated genes encoding three distinct N-methyltransferases and demonstrated in vitro production of the recombinant enzymes responsible for caffeine yield. They also published a review of the metabolic engineering of the caffeine biosynthetic pathway utilizing both gene silencing and over-expression approaches. The application of this research supported further efforts to employ transgenic caffeine-expressing plants as insect repellents.
This sure would mess up the organic branding...

Southern Light Rail - SLR Overview

Topic: Technology
3:54 pm EDT, Mar 10, 2009
Southern Light Rail (SLR) is a Georgia Tech non-profit corporation providing National Lambda Rail (NLR) access to the Georgia Research Alliance universities, to other universities in the Southern region of the United States, and to governmental and private-sector organizations involved in university research initiatives. SLR will reach out to facilitate dark-fiber connectivity to Southern universities and research centers, helping to enable more collaborative research activities between and within universities.

Thinking in Scala vs Erlang : Caoyuan's Blog

Topic: Technology
11:46 am EDT, Mar 10, 2009
Briefly: if each actor/process only does light message processing, especially on binaries, Erlang is faster than Scala; if each actor/process does heavy work, for example processing strings/text, Erlang is slower than Scala.

How to download ALL TED Talks at once, using Bash

Topic: Technology
11:14 pm EDT, Mar 9, 2009
aurynn: ok. here we go. first part:

    for a in $( seq 1 34 ); do lynx -dump http://www.ted.com/index.php/talks/list/page/$a > $a.html; done

aurynn: it fetches the lists of talks - 34 pages of them - and stores each page as .html (which is bad, as the lynx dumps are not actually HTML)

aurynn: 2nd part:

    cat *.html | perl -ne 'print if s{^\s*\d+\.\s+(http://www.ted.com/index.php/talks/[^/]+\.html)\s*$}{$1\n}' | sort | uniq > big.list

aurynn: it generates a list of all the talk pages.

aurynn:

    cat big.list | while read URL; do OUTPUT=$( echo $URL | sed 's#.*/#out/#;s#html$#txt#' ); lynx -dump "$URL" > "$OUTPUT"; echo $URL; done

this fetches all the talk pages and stores them as .txt

    for a in *.txt; do grep -q -E '\[[0-9]+\]Video to desktop \(Zipped MP4\)' $a || echo $a; done

this lists the .txt files which don't have a "Video to desktop (Zipped MP4)" link (two talks, both of which seem to be someone singing; i removed those .txt files)

finally:

    for a in *.txt; do POS=$( grep -E '\[[0-9]+\]Video to desktop \(Zipped MP4\)' $a | sed 's/^.*\[//;s/\].*//' ); grep -E "^[[:space:]]*$POS\.[[:space:]]http" $a | sed "s/.*http/wget -O $a.zip http/;s/txt.zip/zip/"; done > runme.sh

this generates runme.sh, which wgets all the videos and stores them as .zip