Pac-Man Proved NP-Hard By Computational Complexity Theory

The classic ’80s arcade game turns out to be equivalent to the travelling salesman problem, according to a new analysis of the computational complexity of video games



In the last few years, a few dedicated mathematicians have begun to study the computational complexity of video games. Their goal is to determine the inherent difficulty of the games and how they might be related to each other and other problems.

Today, Giovanni Viglietta at the University of Pisa in Italy reveals a body of Herculean work in this area in which he classifies a large number of games from the 1980s and ’90s, including Pac-Man, Doom, Tron and many others.

Viglietta’s work involves several steps. The first is to determine the class of computational complexity to which the game belongs. Next, he works out whether knowing how to solve the game also allows you to solve many other problems in the same class, a property that complexity theorists call ‘hardness’. Finally, he determines whether the game is complete, meaning that it is one of the ‘hardest’ in its class.

His approach is relatively straightforward. He first works through a number of proofs showing that any video game with specific game-playing properties falls into a certain complexity class.

He then classifies the games according to the game-playing properties they have.

For instance, one type of game involves a player moving through a landscape visiting a number of locations. He calls this ‘location traversal’ and an example would be a game in which certain items are strewn around a landscape and the goal is to collect them all.

Some location traversal games allow each location to be visited only once. So-called ‘single-use path’ games might include downhill races.

He then uses graph theory to prove that any game exhibiting both location traversal and single-use paths is NP-hard, the same complexity class as the travelling salesman problem.

It turns out that Pac-Man falls into this category (the proof involves distributing power pills around the maze in a way that enforces single use paths).
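Viglietta’s proof gadgets are game-specific (for Pac-Man, the power-pill placement), but the underlying graph question is easy to state: if every location must be visited and each may be visited only once, you are asking whether the maze’s graph has a Hamiltonian path, a classic NP-hard problem. Here is a minimal brute-force checker as an illustration of that problem, not Viglietta’s construction:

```python
from itertools import permutations

def has_hamiltonian_path(nodes, edges):
    """Return True if some ordering visits every node exactly once,
    moving only along the given (undirected) edges."""
    edge_set = {frozenset(e) for e in edges}
    return any(
        all(frozenset(pair) in edge_set for pair in zip(order, order[1:]))
        for order in permutations(nodes)
    )

# A tiny four-location "maze" laid out as a corridor: solvable.
print(has_hamiltonian_path("ABCD", [("A", "B"), ("B", "C"), ("C", "D")]))  # True

# Disconnect the corridor and no visit-everything-once tour exists.
print(has_hamiltonian_path("ABCD", [("A", "B"), ("C", "D")]))  # False
```

The factorial blow-up of trying every ordering is exactly what NP-hardness warns about; no known algorithm avoids it in general.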

He shows how games fall into other complexity categories too. For example, games that feature pressure pads to open and close doors are PSPACE-hard if each door is controlled by two pressure plates. Doom falls into this category.

And so on.

The resulting list is impressive. Here are a few of his results:

Boulder Dash (First Star Software, 1984) is NP-hard.
Deflektor (Vortex Software, 1987) is in L.
Prince of Persia (Brøderbund, 1989) is PSPACE-complete.
Tron (Bally Midway, 1982) is NP-hard.

For the full list and reasoning, see the paper below.

That’s clearly been a labour of love for Viglietta, given the title of his paper: “Gaming Is A Hard Job, But Someone Has To Do It!”
Interestingly, he says this kind of analysis is unnecessary for modern games. “Most recent commercial games incorporate Turing-equivalent scripting languages that easily allow the design of undecidable puzzles as part of the gameplay,” he says.
In a way, that makes these older games all the more charming still.



Posted by plates55 - January 26, 2012 at 4:23 pm


File/Drive Wiping

Occasionally there is some discussion of computer forensic examinations and how good the software and the examiners are.  A majority of computer forensic examiners are well trained and have good experience.  The same can be said for the tools they use.  There is a variety of commercial and open source forensic tools.  Probably the best-known computer/network forensic tool is EnCase.  A single copy/license for EnCase runs over a couple thousand dollars.  This is a good tool – bottom line.

If you have some interest in computer forensics, here is a site to look at – Forensic Focus

The following Forensic Focus article was interesting – Is Your Client An Attorney? Be Aware Of Possible Constraints On Your Investigation, Part 2

Single Pass is Good

That said, EnCase and other forensic tools do have limits.  In years past, I have played with (and tested) a variety of software, including data encryption and file/drive wiping.  The following was true for all the open source wiping tools I tested on standard hard drives: WHEN PROPERLY USED, nothing was recoverable.  This was for a single-pass write.  Many of the wiping tools also have multiple-pass write options, some with up to 35 passes!  Don’t try that on a large hard drive; it will take a LONG TIME.  I would suggest that you only use multiple passes on single files.
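As a sketch of what a single-pass wipe does (my own toy code, not one of the tools I tested; a real wiping tool handles many more edge cases), the idea is simply: overwrite every byte, force the write to disk, then delete the file.

```python
import os

CHUNK = 1 << 20  # overwrite in 1 MiB chunks so large files don't eat RAM

def single_pass_wipe(path):
    """Overwrite a file's contents in place with zeros (one pass),
    flush the write to disk, then delete the file.  Illustration only:
    journaling filesystems, temp copies, and SSD wear leveling can
    leave data elsewhere, so use a vetted tool for anything serious."""
    remaining = os.path.getsize(path)
    with open(path, "r+b") as f:
        while remaining > 0:
            n = min(CHUNK, remaining)
            f.write(b"\x00" * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # push the overwrite to the platter before unlinking
    os.remove(path)

# Usage sketch:
# single_pass_wipe("secret.txt")
```

This is also why verification matters: the overwrite only covers the file’s current blocks, not every copy the operating system may have made along the way.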

Note:  With solid state drives (SSDs), there were some previous problems with certain SSDs not being wiped as expected.  If you use them, I would suggest reviewing the reports, as well as verifying wipes on them with a disk/hex editor.  The following link is to the “Anti-Forensics” Web site and its 2009 article stating that single-pass wiping is good enough.

Yes, it is a bit geeky, but it provides lots of good information.  The author also addresses the common belief that data can be recovered even after it has been overwritten.  What this belief usually refers to is some sort of microscopic examination of the physical storage platters.  That process is extremely costly and time-consuming, and the chance of finding the smoking gun is doubtful at best.

The Problem with Wiping Files

The problem is that most operating systems keep various records, temp files, caches, file/folder pointers, and registry entries that a user doesn’t know or think about.  These residual items can show what was once on a system, even when the original data is long gone and unrecoverable.  They can paint a possible picture.  I assume that this was the case based on reading a recent Prenda case filing that described some sort of forensic examination (Case 2:11-CV-03072, Boy Racer v. Named Doe).

Based on the document, I believe Prenda obtained some sort of consent from the owner for the analysis.  If the examiner had found the “smoking gun” on the hard drive, we would have seen the Doe settle (dismissed with prejudice), or it would likely have gone to trial.  Since all we see in the amended complaint is weak circumstantial evidence, I don’t believe the examiner found any movies, just pointers to such movies.

26. In a recent examination of the Macintosh computer used by Defendant during the times of his infringements, an updated version of Vuze appears in the “Applications” folder.  Through further inspection of Defendant’s computer, Plaintiff’s agents found Mp4 converter, StreamMe, and ServeToMe software that could enable an individual to convert a full-length video to a mobile device-compatible format; Toast10, which allows an individual to burn DVDs on Mac computers from videos downloaded over the Internet; and OmniDiskSweeper, a Mac utility program that helps users quickly identify and delete potentially infringing videos on one’s Mac computer in furtherance of evading liability for copyright infringement.

Just A Tool

Now I know the Trolls will say I’m telling people to use these tools to destroy evidence – I’m not.  This post is an attempt to dispel some rumors and give people accurate information.  I laugh at the suggestion that because someone has these tools, they are up to no good and guilty of being a pirate, thief, etc.  These are tools, plain and simple – the same as a handgun: what you do with it determines whether it is used for good or bad.  If you have ever donated or sold a computer, I hope and pray you wiped the hard drive first.


Posted by plates55 - January 24, 2012 at 8:21 am


MIT Genius Stuffs 100 Processors Into Single Chip

WESTBOROUGH, Massachusetts — Call Anant Agarwal’s work crazy, and you’ve made him a happy man.

Agarwal directs the Massachusetts Institute of Technology’s vaunted Computer Science and Artificial Intelligence Laboratory, or CSAIL. The lab is housed in the university’s Stata Center, a Dr. Seussian hodgepodge of forms and angles that nicely reflects the unhindered-by-reality visionary research that goes on inside.

Agarwal and his colleagues are figuring out how to build the computer chips of the future, looking a decade or two down the road. The aim is to do research that most people think is nuts. “If people say you’re not crazy,” Agarwal tells Wired, “that means you’re not thinking far out enough.”

Agarwal has been at this a while, and periodically, when some of his pie-in-the-sky research becomes merely cutting-edge, he dons his serial entrepreneur hat and launches the technology into the world. His latest commercial venture is Tilera. The company’s specialty is squeezing cores onto chips — lots of cores. A core is a processor, the part of a computer chip that runs software and crunches data. Today’s high-end computer chips have as many as 16 cores. But Tilera’s top-of-the-line chip has 100.

The idea is to make servers more efficient. If you pack lots of simple cores onto a single chip, you’re not only saving power. You’re shortening the distance between cores.


Today, Tilera sells chips with 16, 32, and 64 cores, and it’s scheduled to ship that 100-core monster later this year. Tilera provides these chips to Quanta, the huge Taiwanese original design manufacturer (ODM) that supplies servers to Facebook and, according to reports, Google. Quanta servers sold to the big web companies don’t yet include Tilera chips, as far as anyone is admitting. But the chips are on some of the companies’ radar screens.

Agarwal’s outfit is part of an ever-growing movement to reinvent the server for the internet age. Facebook and Google are now designing their own servers for their sweeping online operations. Startups such as SeaMicro are cramming hundreds of mobile processors into servers in an effort to save power in the web data center. And Tilera is tackling this same task from a different angle, cramming the processors into a single chip.

Tilera grew out of a DARPA- and NSF-funded MIT project called RAW, which produced a prototype 16-core chip in 2002. The key idea was to combine a processor with a communications switch. Agarwal calls this creation a tile, and many of these tiles can be built into a single piece of silicon, creating what’s known as a “mesh network.”

“Before that you had the concept of a bunch of processors hanging off of a bus, and a bus tends to be a real bottleneck,” Agarwal says. “With a mesh, every processor gets a switch and they all talk to each other…. You can think of it as a peer-to-peer network.”
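A rough way to see the advantage (a simplified model assumed here for illustration, not Tilera’s actual mesh protocol): in a 2-D mesh with dimension-ordered routing, a message’s cost is the Manhattan distance between tiles, so even on a 100-core grid the worst case is a handful of hops, while a bus forces every transfer through one shared, contended link.

```python
def mesh_hops(a, b):
    """Hop count between two tiles in a 2-D mesh using
    dimension-ordered (X-then-Y) routing: the Manhattan distance."""
    (ax, ay), (bx, by) = a, b
    return abs(ax - bx) + abs(ay - by)

# On a 10x10 grid (100 cores), the worst case is corner to corner:
print(mesh_hops((0, 0), (9, 9)))  # 18 hops

# Neighboring tiles talk in a single hop, with no global contention.
print(mesh_hops((4, 4), (4, 5)))  # 1 hop
```

In this toy model the worst-case path grows with the side length of the grid (about the square root of the core count), whereas a bus degrades with every core added to it.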

What’s more, Tilera made a critical improvement to the cache memory that’s part of each core. Agarwal and company made the cache dynamic, so that every core has a consistent copy of the chip’s data. This Dynamic Distributed Cache makes the cores act like a single chip so they can run standard software. The processors run the Linux operating system and programs written in C++, and a large chunk of Tilera’s commercialization effort focused on programming tools, including compilers that let programmers recompile existing programs to run on Tilera processors.

The end result is a 64-core chip that handles more transactions and consumes less power than an equivalent batch of x86 chips. A 400-watt Tilera server can replace eight x86 servers that together draw 2,000 watts. Facebook’s engineers have given the chip a thorough tire-kicking, and Tilera says it has a growing business selling its chips to networking and videoconferencing equipment makers. Tilera isn’t naming names, but claims one of the top two videoconferencing companies and one of the top two firewall companies.

An Army of Wimps

There’s a running debate in the server world over what are called wimpy nodes. Startups SeaMicro and Calxeda are carving out a niche for low-power servers based on processors originally built for cellphones and tablets. Carnegie Mellon professor Dave Andersen calls these chips “wimpy.” The idea is that building servers with more but lower-power processors yields better performance for each watt of power. But some have downplayed the idea, pointing out that it only works for certain types of applications.

Tilera takes the position that wimpy cores are okay, but wimpy nodes — aka wimpy chips — are not.

Keeping the individual cores wimpy is a plus because a wimpy core is low power. But if your cores are spread across hundreds of chips, Agarwal says, you run into problems: inter-chip communications are less efficient than on-chip communications. Tilera gets the best of both worlds by using wimpy cores but putting many cores on a chip. But it still has a ways to go.

There’s also a limit to how wimpy your cores can be. Google’s infrastructure guru, Urs Hölzle, published an influential paper on the subject in 2010. He argued that in most cases brawny cores beat wimpy cores. To be effective, he argued, wimpy cores need to be no less than half the power of higher-end x86 cores.

Tilera is boosting the performance of its cores. The company’s most recent generation of data center server chips, released in June, are 64-bit processors that run at 1.2 to 1.5 GHz. The company also doubled DRAM speed and quadrupled the amount of cache per core. “It’s clear that cores have to get beefier,” Agarwal says.

The whole debate, however, is somewhat academic. “At the end of the day, the customer doesn’t care whether you’re a wimpy core or a big core,” Agarwal says. “They care about performance, and they care about performance per watt, and they care about total cost of ownership, TCO.”

Tilera’s performance per watt claims were validated by a paper published by Facebook engineers in July. The paper compared Tilera’s second generation 64-core processor to Intel’s Xeon and AMD’s Opteron high end server processors. Facebook put the processors through their paces on Memcached, a high-performance database memory system for web applications.

According to the Facebook engineers, a tuned version of Memcached on the 64-core Tilera TILEPro64 yielded at least 67 percent higher throughput than low-power x86 servers. Taking power and node integration into account as well, a TILEPro64-based S2Q server with 8 processors handled at least three times as many transactions per second per Watt as the x86-based servers.

Despite the glowing words, Facebook hasn’t thrown its arms around Tilera. The stumbling block, cited in the paper, is the limited amount of memory the Tilera processors support. Thirty-two-bit cores can only address about 4GB of memory. “A 32-bit architecture is a nonstarter for the cloud space,” Agarwal says.
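The 4GB ceiling mentioned above is plain pointer arithmetic: a 32-bit address can name at most 2^32 bytes, while even a terabyte of byte-addressable DRAM fits comfortably within a 64-bit address.

```python
GIB = 2 ** 30

# A 32-bit pointer can name 2**32 distinct byte addresses:
print(2 ** 32 // GIB)  # 4 GiB, the cap the Facebook engineers ran into

# One terabyte (2**40 bytes) needs only 40 address bits,
# far inside what a 64-bit pointer provides.
print((2 ** 40 - 1).bit_length())  # 40
```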

Tilera’s 64-bit processors change the picture. These chips support as much as a terabyte of memory. Whether the improvement is enough to seal the deal with Facebook, Agarwal wouldn’t say. “We have a good relationship,” he says with a smile.

While Intel Lurks

Intel is also working on many-core chips, and it expects to ship a specialized 50-core processor, dubbed Knights Corner, in the next year or so as an accelerator for supercomputers. Unlike the Tilera processors, Knights Corner is optimized for floating point operations, which means it’s designed to crunch the large numbers typical of high-performance computing applications.

In 2009, Intel announced an experimental 48-core processor code-named Rock Creek and officially labeled the Single-chip Cloud Computer (SCC). The chip giant has since backed off of some of the loftier claims it was making for many-core processors, and it focused its many-core efforts on high-performance computing. For now, Intel is sticking with the Xeon processor for high-end data center server products.

Dave Hill, who handles server product marketing for Intel, takes exception to the Facebook paper. “Really what they compared was a very optimized set of software running on Tilera versus the standard image that you get from the open source running on the x86 platforms,” he says.

The Facebook engineers ran over a hundred different permutations in terms of the number of cores allocated to the Linux stack, the networking stack and the Memcached stack, Hill says. “They really kinda fine tuned it. If you optimize the x86 version, then the paper probably would have been more apples to apples.”

Tilera’s roadmap calls for its next generation of processors, code-named Stratton, to be released in 2013. The product line will expand the number of processors in both directions, down to as few as four and up to as many as 200 cores. The company is going from a 40-nm to a 28-nm process, meaning it can cram more circuits into a given area. The chip will have improvements to its interfaces, memory, I/O and instruction set, and will have more cache memory.

But Agarwal isn’t stopping there. As Tilera churns out the 100-core chip, he’s leading a new MIT effort dubbed the Angstrom project. It’s one of four DARPA-funded efforts aimed at building exascale supercomputers. In short, it’s aiming for a chip with 1,000 cores.


Posted by plates55 - January 23, 2012 at 2:38 pm


Could Your Car Be Hacked?

As soon as things get smart, something stupid also happens: they become vulnerable to attack. This was the case (though over-hyped, perhaps) of printers that cybersecurity researchers warned could be hijacked and theoretically set on fire. And now, argues Willie D. Jones of IEEE Spectrum, it could be the fate of our latest smart devices: our cars.

Cars are dangerous enough, without the problem of a cyberattack thrown in the mix. But unfortunately, researchers are coming up with several ways cars could be vulnerable to hackers. Wi-Fi, cellular, and Bluetooth connections exist in cars to help us communicate or be entertained as we drive, but a few research groups have already shown how these channels can be hijacked by someone with malicious intent.

One research team at UC San Diego and the University of Washington demonstrated an attack that could allow criminals to locate cars’ GPS coordinates, override their security systems, unlock their doors, and start their engines–in other words, a carjacker’s dream come true. An even worse scenario envisioned by one researcher: a hack that would disable your brakes while you’re driving on the highway.

All this doesn’t merely exist in the academic journals of a few university white-hat hackers. Some of this stuff has already happened. Jones points to a September report from McAfee that spoke of an instance where a disgruntled employee at a Texas car dealership was able to shut off the engines of 100 cars at once. A recent blog post from McAfee goes into detail on several other hacks, most of them white-hat, that would seem like the purview of science fiction, were it not for the fact that they’re real. Fiction has already been made fact, per McAfee: “In the movie ‘Live Free or Die Hard,’ actor Justin Long portrays a computer hacker who social-engineers the call center agent into remotely starting the car. That was Hollywood; yet at the recent Black Hat USA conference, security researchers Don Bailey and Mat Solnik expanded on earlier research to locate and attack car telematics systems.”

As in so many things, if Hollywood has imagined it, some clever hacker is already probably making it happen. Says McAfee’s Jimmy Shah: “As devices get smarter and more connected, we’re going to see more attacks targeted at them.” Let’s just hope we get smarter, too–smart enough to guard against these attacks before they happen.


Posted by plates55 - January 13, 2012 at 10:30 am


Electricity from the Air

California company Makani Power is developing a flying wind turbine designed to generate cheap renewable power.



Flying windmill: Multiple exposures show the flight pattern of the Makani Airborne Wind Turbine, built by Makani Power. The electricity-generating glider is attached to the ground by a carbon-fiber tether. The craft flies “crosswind,” or perpendicular to the direction of the wind, as a kite does. In early tests, prototypes have generated five kilowatts of electric power. Larger versions with 88-foot wingspans able to generate 600 kilowatts of power are planned.

Credit: Makani


Posted by plates55 - December 28, 2011 at 9:56 am


Spherical hexapod robot walks like a crab, dances like the Bogle (video)

For those who like robots and all things robotic, this is something that should not be missed.


Posted by plates55 - December 12, 2011 at 3:45 pm


The Engadget Show is live tomorrow with Boeing, the Tokyo Motor Show and the year’s best gadgets




We’ll be dashing through the proverbial tech snow, laughing all the way at 6PM ET tomorrow. We’re gonna tour the new Boeing 787 Dreamliner, take a trip to Tokyo Motor Show and check out the best gadgets of 2011.

Best of all, you can join us live! If you’re in New York City, we’ve got a few extra tickets left over. If you’d like to attend, email jon dot turi at engadget dot com including your full name and confirmation that you can show up. Everyone else can follow along from home right here.

Subscribe to the Show:

[iTunes] Subscribe to the Show directly in iTunes (M4V).
[Zune] Subscribe to the Show directly in the Zune Marketplace (M4V).
[RSS M4V] Add the Engadget Show feed (M4V) to your RSS aggregator and have it delivered automatically.


Posted by plates55 - December 12, 2011 at 3:42 pm

