Let's Make a Game
The boxer has his hand, well, up his butt. He has just thrown a punch, a hard one—too hard, because the forward momentum nearly threw him flat on his face. He saved himself by pulling his right arm back behind him as a counterbalance, a tactic that worked, except for one thing: as that arm came around front again, it grazed his posterior, causing his hand to stick where the sun don’t shine.
Now the fighter’s opponent should be able to clobber him. But these fighters are characters in a video game, one in the earliest stages of development. The game still has glitches, so the opponent is standing on the opposite side of the computer screen solipsistically shadowboxing. He shows no signs of wanting to engage in combat. In fact, both fighters bear little resemblance to real people. They are stick figures, made up of small orange rectangles attached where we humans have joints.
Don’t expect these boxers to be showing up on an Xbox any time soon. But once its creators, computer science professor Odest “Chad” Jenkins and his former student Pawel Wrotek ’05, further develop it, the game might help you not with boxing but with housework. Why? It’s all about artificial intelligence. Eventually, Jenkins and Wrotek plan to invite an actual heavyweight boxer to come into the lab, don a body suit studded with sensors, and teach the computer how to move and fight. If all goes as expected, the pixelated boxer on the computer screen will start to learn how to literally think like a champ, with the same instincts, acumen, and prowess.
Now take the same software and put it inside a robot. This time, you’re the one with the sensor-studded body suit. Pick up a broom or fire up the vacuum and start cleaning the house. The robot will first learn to imitate you, then grasp your preferences and patterns of behavior, and will even detect how you react when the telephone rings or the baby cries. “It’s too hard to program a computer to do what you want it to do,” Jenkins says. “It’s much easier to demonstrate and have the robot follow.”
So that boxer with his hand up his rear? In the not-too-far-off future, you may be having him answer your doorbell, put away the dishes, and fold your laundry.
If it’s true that learning should be fun, the twenty-seven members of Brown’s computer science faculty are doing a lot of learning. Most of these professors work in highly esoteric and complicated areas of research. They are well acquainted with such things as stochastic optimization, cryptography, and natural language processing.
You will also find, though, that many of these scientists spend a great deal of time designing and playing games. Games are where the rubber meets the road, so to speak, where theoretical ideas get transformed into practical applications that may one day transform our everyday lives.
Take Amy Greenwald, for instance. Greenwald, one of the department’s star junior faculty members, specializes in devising optimal decision-making algorithms and artificial-intelligence heuristic search techniques. But day-to-day, she spends a significant amount of time preparing for a tournament called the Trading Agent Competition, in which teams of pretend travel agents take part in twenty-eight simultaneous competitions: eight for hotel rooms, eight for flights, and twelve for concert and entertainment tickets. The real competitors, however, are not the human beings but their computer software. Balancing forecasts of supply and demand, assessments of customer preferences, and judgments of what the other players will bid based on their behavior at previous auctions, the software places bids on behalf of each team. The winner is the one whose travel agency makes the wisest bids and therefore earns the highest profit.
Think of what such a computer program could accomplish on eBay. After you told the computer your preferences and needs, it would find the auctions that satisfy those criteria and enter simultaneous bids in all of them. Because the program would also be analyzing the habits of other bidders, its final bid would be just high enough to beat the competition, and not one cent more. And if such a program could do this on eBay, just think of the impact more sophisticated versions might have in other areas—Wall Street or international finance, for example—where game theory really matters. Greenwald, a rare female in a male-dominated field, calls her software RoxyBot. In this year’s tournament, it trounced the competition.
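The core of that bidding idea can be sketched in a few lines. This is a deliberately simplified illustration, not RoxyBot’s actual code: the function names are invented, and the “forecast” here is just the highest rival bid seen so far, a stand-in for the statistical models real trading agents use.

```python
# Toy sketch of "bid just enough to beat the predicted competition,
# capped at what the item is worth to you." All names are illustrative.

def predict_rival_bid(history):
    """Forecast the top rival bid from past auctions.

    The simplest possible forecast: the highest bid observed so far.
    A real agent would model the other bidders' behavior statistically.
    """
    return max(history) if history else 0.0

def choose_bid(valuation, history, increment=0.01):
    """Bid one increment above the forecast, never above our valuation."""
    forecast = predict_rival_bid(history)
    return min(valuation, forecast + increment)

# A traveler values a hotel room at $120; rivals have bid up to $95,
# so the agent bids $95.01 -- just high enough, and not one cent more.
print(choose_bid(120.00, [80.00, 95.00, 87.50]))
```

The cap at `valuation` is what keeps the agent from “winning” an auction at a loss, while the tiny increment is the “not one cent more” behavior the article describes.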
Chad Jenkins began playing video games after his father, a warden at a federal prison, brought home an Atari console. Jenkins was eight years old. “It was like this magic black box,” he recalls. “I was just fascinated.” Twenty years later, for all his ambition to push the outer limits of his academic specialty, artificial intelligence, Jenkins just wants to develop the perfect video game. What holds back such games is their limited ability to learn. Right now the range of actions you can take as a player depends on the number of maneuvers programmed into the game’s software. Kick, punch, jump, duck, and shoot are pretty much all you can do. But imagine the action if the game’s software could learn new moves from human players. That such a breakthrough in the world of gaming could lead to the development of robots capable of being educated by humans can seem beside the point to Jenkins, who seems not to have completely shed the perspective of an eight-year-old boy. “Ever since I was a kid playing those video games,” he says, “I’ve known I could do it better.”
In other words, if you really want to know what the future of computer technology will look like, skip the journal articles and the academic conferences. Sit down with these professors and play games with them. That’s where the seeds of technological breakthroughs are being sown.
I am a troll scurrying around the labyrinthine ground floor of a medieval castle. I head up a cobblestone ramp into an atrium and fire off a few blasts at the dozens of other trolls running around. This video game, known as the Cube, presents me with formidable foes. They have rocket launchers; I have a handgun. Within seconds, one of them fires off a burst. “You are fragged,” the computer screen reads. Translation: I am dead.
I challenge computer science grad student Yanif Ahmad ScM ’04 to do better. He finds a way to replace the handgun with a rocket launcher, which I consider unfair. Then again, he has spent the last several years of his life developing this program. Not that it makes much difference, anyway. He manages to kill a few enemy trolls, but then one of his foes sneaks up behind him and blows him away. He didn’t last much longer than I did. “Right now, there are too many trolls in too small a space,” Ahmad explains. “Even I can’t cope in there.”
What’s different about this game is the level of complexity Ahmad is striving for. He is trying to write code that will let hundreds of thousands of users all across the globe battle one another simultaneously.
To achieve such a real-time gaming environment, the slightest move by one player must register instantly on individual computers all over the world. The technology for doing that today requires an individual player’s computer to relay a command to a central computer server, which in turn sends it out to everyone else’s computer. This takes time—too much time, in fact, for the Cube to work. Having a player in Beijing beam a request for data to a server in Providence and then wait to get an answer back wastes milliseconds, time you simply do not have if you want the Cube to achieve its full potential. Besides, hundreds of thousands of users asking for updates on other players’ maneuvers every few milliseconds would overload the entire system.
How quickly information moves among computers is a fundamental problem of our age. As Ahmad’s adviser, assistant professor Ugur Cetintemel, describes it, “We have reached the point where no computer, no matter how powerful, will ever be able to keep up with all the information out there.” This is why the Cube experiment is so critical to the future of computing. It’s not just the Cube that needs to relay massive amounts of information in real time; computers on Wall Street must constantly update the market data they send to investment companies. News sites can update their Web sites every few minutes at most; more frequent updates would crash their servers. Getting information out faster in such businesses can give them a huge competitive advantage.
There are more mundane advantages as well. Supermarkets may soon be attaching sensors to every item they sell to help them continually track and update their inventories. They are looking at scenarios that would allow them to track the precise whereabouts of their products, so that the act of putting one product in your grocery cart would trigger price adjustments to items you haven’t reached yet. If you were stocking up on celery sticks, for example, the market might immediately drop the price of dip to encourage you to spend the extra money. Right now no computer server out there has the processing power to handle so much data quickly enough. But if only Ahmad could get that Cube working right.
Cetintemel’s solution is what he calls an “overlay network.” Instead of a centralized server, the Cube will use every computer in the game as a kind of mini server sharing the responsibility for moving data. If Player A moves his troll forward, his own computer will send this information to several other computers, each of which relays it to several other computers, and so on and so on. Every computer becomes, in effect, a server. Similarly, if you want updates of news beamed to you by CNN every second, you will have to give over some of your computing power to serve the larger needs of the network.
The problem is that every computer—hundreds of thousands of them in this example—will have to think like a server, making split-second decisions about the most efficient way to route data. If, for example, Player A’s computer in the Cube senses that Player B’s computer has a slow processor, it will send a signal out to the other computers to make sure that Player B’s computer is pushed to the fringes of the network. Player B’s computer will also need to be able to tell if Player A turns his machine off, and then send out a message informing the network that A is AWOL and a replacement needs to be found immediately. What you wind up with is a highly dynamic, supremely powerful, Borg-like network that is constantly reconfiguring itself to deliver information at a rate approaching the speed of light. “No one has ever achieved speeds like this before,” says Ahmad. “Whether this is feasible at all, I don’t know, but we’re going to give it a shot.”
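The relay scheme described above can be sketched as a simple flood through a peer-to-peer network: each machine forwards an update to a handful of peers, which forward it onward until everyone has it. This is a toy illustration only; the function names and fixed fan-out are invented here, and a real overlay like the Cube’s would also measure peer speed and route around slow or dead machines.

```python
from collections import deque

def relay(origin, peers, fanout=3):
    """Flood an update from `origin` through the network, `fanout`
    peers at a time, and return the set of machines it reached.

    `peers` maps each node to the nodes it knows about. Each node acts
    as a mini server, relaying to a few neighbors rather than one
    central machine pushing to everyone.
    """
    delivered = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for peer in peers.get(node, [])[:fanout]:
            if peer not in delivered:
                delivered.add(peer)  # first delivery wins; no re-sends
                queue.append(peer)
    return delivered

# Six players: A's move reaches all of them in two hops, with no
# single machine ever sending more than a couple of copies.
network = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
           "D": [], "E": [], "F": []}
print(sorted(relay("A", network)))  # ['A', 'B', 'C', 'D', 'E', 'F']
```

The point of the design is visible even in the toy: the load of distributing one update is spread across the players’ machines instead of concentrated on a single server.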
It’s spring break, and the campus is deserted. Even the athletes are taking a week off. But holed up in a computer lab, Dan Grollman ScM ’05 and three other grad students are at work training a dog. Grollman, the group’s captain, presses a button on a store-bought Logitech joystick. A few feet away, a five-inch-high robotic poodle rises to attention. Its platinum head, complete with two purely decorative plastic ears, mechanically turns back and forth. It seems to be sensing something. Grollman pushes the joystick, and the robot moves forward, its legs motoring in little circles as it walks in a motion more ducklike than doglike. It reaches a plastic pink ball, then swings its front right leg forward. The ball moves. The poodle swings its leg forward again. The ball moves again.
In a few weeks, Grollman and his team will travel to Atlanta for the RoboCup competition, where four of their reprogrammed robotic dogs will face off in a soccer game against canine squads from other universities around the country. (They wound up losing all sixteen matches.) For now, the game is basically what Grollman calls a “canine free-for-all”—the dogs are programmed to sense the location of the ball, go toward it, and then kick it. With luck the ball will move in the direction of the opponent’s goal. In an example of the level of optimism that drives scientific researchers, RoboCup’s organizers believe that in fifty years similar androids will be able to compete with that year’s World Cup champions—and beat them.
Before that’s possible, Grollman says, he will need to perfect “a dog that can learn new tricks.” As with a real dog, this means repeating a lot of actions over and over so that the robot can collect data on what its sensors detect. The data it records while racing toward the ball, for example, form what might be called Snapshot 1. By recording such things as the amount of light overhead, the rotation of its legs, and the distance between it and the ball, and by joining that data with the actions it took in response to these stimuli, the dog will be able to react the same way when it encounters another constellation of stimuli similar to Snapshot 1. Over time the dog will have created so many snapshots that it will be able to parse the data and “reason” its way to reacting to its environment. This constant analyzing and reanalyzing of snapshots will eventually give the dog a kind of intelligence that will let it improvise even when its opponent plays out a maneuver that’s totally unexpected.
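The snapshot idea amounts to what machine-learning researchers call nearest-neighbor learning: store (sensor reading, action) pairs, then handle a new situation by reusing the action from the most similar stored snapshot. The sketch below is a hedged illustration of that principle, not the team’s actual software; a real robot records tens of thousands of sensor values per snapshot, not three.

```python
import math

# Each snapshot pairs a sensor vector with the action taken at that
# moment, e.g. (light level, leg rotation, distance to ball).
snapshots = []

def record(sensors, action):
    """Store one snapshot of what the sensors saw and what was done."""
    snapshots.append((sensors, action))

def react(sensors):
    """React to a new situation by copying the action from the most
    similar snapshot (smallest Euclidean distance in sensor space)."""
    _, best_action = min(
        (math.dist(stored, sensors), action)
        for stored, action in snapshots
    )
    return best_action

# Two training snapshots: far from the ball -> walk; touching it -> kick.
record((0.8, 0.2, 1.5), "walk toward ball")
record((0.8, 0.0, 0.1), "kick")

# A new constellation of stimuli, close to the second snapshot:
print(react((0.7, 0.1, 0.2)))  # kick
```

With enough snapshots, the lookup starts to resemble the improvisation the article describes: any new situation lands near something the dog has already seen and acted on.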
The work is similar to what Chad Jenkins is doing with his boxers. Through their games, both are trying to achieve the same serious goal: to bridge the gap between robots and humans. Both Grollman’s dog and Jenkins’s boxer think like any other computer, in binary combinations of 0’s and 1’s. Because humans don’t think that way, the two researchers are trying to write software that translates human reasoning patterns into 0’s and 1’s, the vocabulary of computer code. In the case of the dog and the boxer, this happens through a process known as “dimensionality reduction.” The software sifts through all the data the robot or boxer has collected from the environment and picks out the information that was the basis for its human master’s decision.
When Grollman tells his dog to advance on the ball, the machine actually collects more than 20,000 pieces of data about its position and environment at that precise moment. Assuming Grollman succeeds in his quest, the dog will then be able to select the half dozen or so stimuli that most likely motivated the human being in the first place. The pooch will ignore, for example, that it was three feet away from a wall or that the light overhead was dim. Instead, it will know that the information pertaining to its distance from the ball, the goal, and its closest opponent was pertinent to its human master and therefore must be the most important data.
Jenkins sees this kind of learning, however stilted, as the only way machines can become fully integrated into our everyday lives. What good is a housecleaning android that tidies up your house according to information preprogrammed into its software by its manufacturer? It needs to be able to internalize your preferences and, even more important, to react as you would in situations it hasn’t encountered before. “Right now,” Jenkins says, “we have programmers that sit down and manually program what they think a robot should do, but it takes a lot of time. You can get a really big advantage in programming a robot by letting humans be humans and having computers just observe and learn from them.”
This may all be a pipe dream. It might even create robot Frankensteins. One thing is for certain, though: Jenkins, Grollman, and their colleagues will have a lot of fun trying to achieve it.
Lawrence Goodman is the BAM’s staff writer.