Overview
The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference.[1] They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor; however, since the advent of public key cryptography the term trapdoor has acquired a different meaning. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under ARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970.[2]

A backdoor in a login system might take the form of a hard-coded user and password combination which gives access to the system. A famous example of this sort of backdoor was used as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hardcoded password (his dead son's name) which gave the user access to the system, and to undocumented parts of the system (in particular, a video game–like simulation mode and direct interaction with the artificial intelligence).
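In code, such a backdoor can be as small as one extra comparison in the credential check. The following C sketch is purely illustrative (the function names and the "joshua" password, echoing the WarGames plot, are invented for this example), but it shows why the hidden credential is invisible to anyone who only inspects the password database:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical illustration of a hard-coded backdoor credential.
     * The account name and password below are invented for this sketch. */

    /* Stand-in for the legitimate check against the credential store. */
    static int verify_against_password_file(const char *user, const char *pass)
    {
        (void)user; (void)pass;
        return 0;
    }

    static int check_login(const char *user, const char *pass)
    {
        if (verify_against_password_file(user, pass))
            return 1;
        /* The backdoor: a credential baked into the binary, so it never
         * appears in the password file or in any account listing. */
        return strcmp(user, "maint") == 0 && strcmp(pass, "joshua") == 0;
    }

    int main(void)
    {
        puts(check_login("maint", "joshua") ? "access granted" : "access denied");
        return 0;
    }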
An attempt to plant a backdoor in the Linux kernel, exposed in November 2003, showed how subtle such a code change can be.[3] In this case, a two-line change appeared to be a typographical error, but actually gave the caller of the sys_wait4 function root access to the system.[4]
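Public write-ups of the incident report that the inserted code looked roughly like the fragment below (a reconstruction for illustration, not the verbatim patch). At a glance it resembles an error-handling check, but the single "=" in "current->uid = 0" is an assignment rather than a comparison, so a process calling sys_wait4 with that particular option combination silently has its user ID set to 0, that is, root:

    /* Reconstruction of the reported 2003 change, not the verbatim patch. */
    if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
            retval = -EINVAL;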
Although the prevalence of backdoors in systems using proprietary software (software whose source code is not publicly available) is not widely acknowledged, they are nevertheless frequently exposed. Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission.
It is also possible to create a backdoor without modifying the source code of a program, or even modifying its object code after compilation. This can be done by rewriting the compiler so that it recognizes, during compilation, code that should trigger inclusion of a backdoor in the compiled output. When the compromised compiler finds such code, it compiles it as normal, but also inserts a backdoor (perhaps a password recognition routine). When a user later supplies that password, they gain access to some (likely undocumented) aspect of program operation. This attack was first outlined by Ken Thompson in his famous paper Reflections on Trusting Trust (see below).
Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a broadband-connected PC running insecure versions of Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony/BMG rootkit distributed silently on millions of music CDs through late 2005, are intended as DRM measures and, in that case, also as data-gathering agents, since both surreptitious programs they installed routinely contacted central servers.
A traditional backdoor is a symmetric backdoor: anyone who finds the backdoor can in turn use it. The notion of an asymmetric backdoor was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology: Crypto '96. An asymmetric backdoor can only be used by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g., via publishing, or via discovery and disclosure through reverse engineering). Also, it is computationally intractable to detect the presence of an asymmetric backdoor under black-box queries. This class of attacks has been termed kleptography; the attacks can be carried out in software, hardware (for example, smartcards), or a combination of the two. The theory of asymmetric backdoors is part of a larger field now called cryptovirology.
An experimental asymmetric backdoor in RSA key generation exists. This OpenSSL RSA backdoor, designed by Young and Yung, utilizes a twisted pair of elliptic curves and has been made available.
Reflections on Trusting Trust
Ken Thompson's Reflections on Trusting Trust[5] was the first major paper to describe black-box backdoor issues, and it points out that trust is relative. It described a very clever backdoor mechanism based upon the fact that people only review source (human-written) code, and not compiled machine code. A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job.

Thompson's paper described a modified version of the Unix C compiler that would:
- Put an invisible backdoor in the Unix login command when it noticed that the login program was being compiled, and, as a twist,
- Also add this feature undetectably to future compiler versions upon their compilation, so that the backdoor persists even though it appears in no source file.
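A heavily simplified sketch of the two triggers might look like the following. This is a hypothetical illustration only: a real compiler matches on parsed program structure rather than raw strings, and Thompson's actual implementation differed in detail. The emit_* stubs merely stand in for code generation:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical, heavily simplified sketch of the two triggers. */

    static void emit_normally(const char *src)        { (void)src; puts("emit: ordinary code"); }
    static void emit_login_backdoor(const char *src)  { (void)src; puts("emit: code + hidden password check"); }
    static void emit_self_replicator(const char *src) { (void)src; puts("emit: code + both triggers"); }

    void compile(const char *source)
    {
        if (strstr(source, "void compile("))      /* trigger 2: the compiler itself */
            emit_self_replicator(source);
        else if (strstr(source, "login("))        /* trigger 1: the login program */
            emit_login_backdoor(source);
        else
            emit_normally(source);
    }

    int main(void)
    {
        compile("int login(const char *user, const char *pass) { /* ... */ }");
        return 0;
    }

The crucial point is that once a compiler binary carries both triggers, they can be deleted from the compiler's source code, and every future compiler built with that binary will still contain them.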
An attack of this kind was discovered in August 2009 by Sophos Labs: the W32/Induc-A virus infected the program compiler for Delphi, a Windows programming language. The virus introduced its own code into the compilation of new Delphi programs, allowing it to infect and propagate to many systems without the knowledge of the software programmer. An attack that propagates by building its own Trojan horse can be especially hard to discover. It is believed that the Induc-A virus had been propagating for at least a year before it was discovered.[7]
Once a system has been compromised with a backdoor or Trojan horse, such as the Trusting Trust compiler, it is very hard for the "rightful" user to regain control of the system. However, several practical weaknesses in the Trusting Trust scheme have been suggested. For example, a sufficiently motivated user could painstakingly review the machine code of the untrusted compiler before using it. There are ways to hide the Trojan horse, such as subverting the disassembler; but there are ways to counter that defense, too, such as writing your own disassembler from scratch, so that the infected compiler will not recognize it. Such proposals are generally impractical, however. A user with serious concerns that the compiler was compromised would be better off avoiding it altogether than reviewing the binary in detail using only tools verified to be untainted, and a user without such concerns could not practically be expected to undertake the vast amount of work required.
David A. Wheeler has proposed a counter to this attack using an approach he calls "diverse double-compiling", which uses techniques adapted from compiler bootstrapping. This involves recompiling the compiler's source with a second, independently written and generated "trusted" compiler; using the binary produced by that step to recompile the original compiler's source again; and then comparing the binary from this second compilation with the binary obtained by having the original compiler recompile itself directly.[8]
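A minimal sketch of the check could be driven as follows. The compiler names are placeholders (cc-suspect is the compiler under test, cc-trusted an independently written compiler, cc.c the suspect compiler's source), and deterministic compilation is assumed:

    #include <stdlib.h>

    /* Hypothetical driver for a diverse double-compiling check.
     * "cc-suspect", "cc-trusted", and "cc.c" are placeholder names. */
    int main(void)
    {
        /* Stage 1: build the suspect compiler's source with the trusted
         * compiler.  The result may differ in its bytes from the suspect
         * binary, but it should be functionally equivalent. */
        if (system("cc-trusted cc.c -o stage1") != 0) return 1;

        /* Stage 2: use the stage-1 binary to rebuild the same source. */
        if (system("./stage1 cc.c -o stage2") != 0) return 1;

        /* Reference: let the suspect compiler rebuild itself directly. */
        if (system("cc-suspect cc.c -o self") != 0) return 1;

        /* With honest, deterministic compilers, stage2 and self should be
         * bit-for-bit identical; any difference suggests that one of the
         * compiler binaries is not a faithful translation of the source. */
        return system("cmp stage2 self");
    }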