By BBC News Online technology correspondent Mark Ward

Scientists have found a way to coerce computers into doing science without the consent of their owners.

By exploiting basic functions of web servers, a group of US scientists has been able to make the machines carry out a small part of a much larger computation. The researchers believe that the technique could be used to turn the web into a powerful distributed computer.

But they said their technique for covert computation is cumbersome, and needs to be refined before it can be more widely used.

Parasites abound

Until now, enrolling your computer in a distributed computing project to search for extraterrestrial life has involved downloading special software that runs while the computer is otherwise idle. But all this could change as US scientists subvert an error-checking procedure used by all web servers into a means of carrying out computations.

When any information is sent across the internet, it is split up into small chunks, or packets, that travel, often independently of each other, to their common destination. Each packet is stamped with information about its source and destination, as well as a checksum value computed from the data it carries.

When a web server receives a packet, it performs a quick calculation of its own over the data and compares the result with the checksum stamped on the packet. If the two values differ, the packet has been corrupted during transit. Corrupted packets are discarded.

One of the internet's basic standards, called the Transmission Control Protocol (TCP), governs this error-checking process. Scientists from the physics and computer science departments at the University of Notre Dame in Indiana are using this error-checking procedure to carry out "parasitic computing".
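The error check described above can be sketched in a few lines of Python. This is a simplified 16-bit Internet-style checksum (the real TCP checksum also covers header fields and a pseudo-header); the payload values are invented for illustration:

```python
def ones_complement_sum(words):
    """Fold a list of 16-bit words into a 16-bit one's complement sum."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # wrap any carry back in
    return total

def checksum(words):
    """Internet-style checksum: complement of the one's complement sum."""
    return ~ones_complement_sum(words) & 0xFFFF

def packet_is_intact(data_words, stamped_checksum):
    """What the receiver does: recompute the checksum over the data it
    actually received and compare; a mismatch means the packet was
    corrupted in transit and should be discarded."""
    return checksum(data_words) == stamped_checksum

payload = [0x4865, 0x6C6C]        # sender's data, as 16-bit words
stamp = checksum(payload)         # value stamped on the packet
assert packet_is_intact(payload, stamp)               # arrives cleanly
assert not packet_is_intact([0x4865, 0x6C6D], stamp)  # one bit flipped
```

Parasitic computing rests on the observation that this comparison is itself a small computation the server performs for free on every packet it receives.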
In the journal Nature, physicist Albert-Laszlo Barabasi and colleagues show how to subvert this error checking to carry out more complex calculations, forcing a web server to take part, inadvertently, in a distributed science project.

Short salesmen

The scientists tested their ideas by using web servers to find the correct solution to an instance of a mathematical conundrum known as the "travelling salesman problem". This involves working out the shortest route that a fictional salesman would have to take to visit all the locations on a hypothetical map.

The more locations on the map, the more potential routes there are, and the longer it would take any single computer to crank through all the possible combinations. But by sharing the job of working out which route is shortest, the total time it takes to solve any particular travelling salesman problem can be vastly reduced.

Professor Barabasi and his colleagues used one computer to generate possible solutions to a travelling salesman problem, and then used parasitic computing to make many web servers perform the calculations on each candidate solution. Because of the way TCP works, only valid solutions to their travelling salesman problem were returned to the researchers. All the others, because they produced invalid checksums, were discarded.

The scientists said their technique needed refinement, because it takes far longer for a web server to carry out a calculation on their behalf than it does to check that a packet of data arrived intact. Widespread use could slow down the rate at which a web server receives data.

"Parasitic computing represents an ethically challenging alternative for cluster computing, as it uses resources without the consent of the computer's owner," the researchers wrote in their paper.

But they said it has the potential to harness far more computers than take part voluntarily in projects such as SETI@home, which searches radio telescope data for signs of alien life.
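The division of labour described above can be sketched in Python. The four-city map and its distances are invented for illustration; in the actual experiment each test was smuggled to a remote server inside a packet's checksum, whereas here the "discard invalid candidates" step is simulated locally:

```python
from itertools import permutations

# Hypothetical four-city map (distances invented for illustration).
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def tour_length(tour):
    """Total length of a round trip visiting the cities in order."""
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def candidates(n):
    """Every possible tour, with city 0 fixed as the starting point."""
    return [(0,) + p for p in permutations(range(1, n))]

def filter_tours(bound):
    """Keep only tours no longer than the bound. In the experiment this
    test was delegated: candidates failing it produced invalid checksums
    and were silently discarded by the remote servers, so only acceptable
    tours ever 'came back' to the researchers."""
    return [t for t in candidates(4) if tour_length(t) <= bound]

best = min(candidates(4), key=tour_length)  # the shortest route overall
```

Lowering the bound step by step until only one family of equally short tours survives is one simple way the generating computer could home in on the shortest route without ever measuring a tour itself.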