# Claude Shannon

Claude Shannon was a famous American mathematician, electrical engineer and cryptographer, often called *the father of information theory.*

Claude Elwood Shannon (1916–2001) was an outstanding student. After receiving two bachelor’s degrees in 1936 (one in electrical engineering and one in mathematics) from the University of Michigan, he began graduate study at the Massachusetts Institute of Technology (MIT), where he obtained a master’s degree in electrical engineering and his Ph.D. in mathematics in 1940. While at MIT, he worked on Vannevar Bush’s *differential analyzer* (a mechanical analog computer, designed to solve differential equations by integration).

While studying the complicated circuits of the differential analyzer, Shannon saw that Boole’s concepts could be applied there to great utility. In 1938, in the *Transactions of the American Institute of Electrical Engineers*, he published a paper drawn from his 1937 master’s thesis, *A Symbolic Analysis of Relay and Switching Circuits*. This paper earned Shannon the *Alfred Noble Prize* of the American engineering societies in 1940. Some have called Shannon’s thesis *possibly the most important, and also the most famous, master’s thesis of the century*.

In his paper, Shannon proved that Boolean algebra and binary arithmetic could be used to simplify the arrangement of the electromechanical relays then used in telephone routing switches. He then turned the concept upside down and proved that it should also be possible to use arrangements of relays to solve Boolean algebra problems. Exploiting this property of electrical switches to do logic is the basic concept that underlies all electronic digital computers. Shannon’s work became the foundation of practical digital circuit design when it became widely known among the electrical engineering community during and after WWII. The theoretical rigor of Shannon’s work completely replaced the ad hoc methods that had previously prevailed.
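Shannon’s correspondence is easy to see in miniature: switches wired in series conduct only when both are closed (AND), switches in parallel conduct when either is closed (OR), and a normally-closed contact acts as NOT. The particular circuit below is an illustrative sketch of our own, not one from Shannon’s thesis, showing how an algebraic identity lets a designer replace relay contacts with fewer ones:

```python
from itertools import product

def circuit_original(a, b):
    # Two parallel branches, four relay contacts in total:
    # (A in series with B) in parallel with (A in series with NOT-B)
    return (a and b) or (a and not b)

def circuit_simplified(a, b):
    # Boolean algebra: A·B + A·B' = A·(B + B') = A,
    # so a single contact on relay A behaves identically
    return a

# Verify the two networks have the same switching behaviour
# for every combination of open/closed switches
for a, b in product([False, True], repeat=2):
    assert circuit_original(a, b) == circuit_simplified(a, b)
print("equivalent: the four-contact network reduces to one switch")
```

The exhaustive truth-table check at the end is exactly the kind of equivalence argument Shannon’s thesis made possible to do on paper, before any relay was wired.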

In 1940, Shannon became a National Research Fellow at the Institute for Advanced Study in Princeton, New Jersey. At Princeton, Shannon had the opportunity to discuss his ideas with influential scientists and mathematicians such as Hermann Weyl and John von Neumann. Shannon worked freely across disciplines, and began to shape the ideas that would become information theory.

During WWII, Shannon worked on fire-control systems and cryptography at Bell Labs. In 1943, he came into contact with the famous British mathematician and cryptanalyst Alan Turing, who was then in Washington to share with the US Navy’s cryptanalytic service the methods used by the British Government Code and Cypher School to break the German ciphers. Turing showed Shannon his seminal 1936 paper *On Computable Numbers, with an Application to the Entscheidungsproblem*, which defined what is now known as the *universal Turing machine*. The paper impressed Shannon, as many of its ideas complemented his own.

In 1948 Shannon published another seminal paper—A Mathematical Theory of Communication. In this paper he defined the subject of information theory and proposed a linear schematic model of a communications system, which was a new idea. Communication was then thought of as requiring electromagnetic waves to be sent down a wire; the idea that one could transmit pictures, words, sounds, and so on by sending a stream of 1s and 0s down a wire was revolutionary. Introducing the word *bit* in print for the first time, Shannon showed that adding extra bits to a signal allowed transmission errors to be corrected. He was the person who saw that the binary digit was the fundamental element in all of communications. That was really his discovery, and from it the whole communications revolution has sprung.
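The claim that extra bits allow errors to be corrected can be demonstrated with the simplest possible scheme—a 3× repetition code. This toy code is an illustration of the principle, not a construction from Shannon’s paper (which proves far stronger results about how little redundancy is actually needed):

```python
def encode(bits):
    # Send each bit three times: redundancy added to the signal
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # Majority vote over each triple corrects any single bit-flip
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

message = [1, 0, 1, 1]
sent = encode(message)          # 12 bits on the wire instead of 4
sent[4] ^= 1                    # the channel flips one bit in transit
assert decode(sent) == message  # the receiver still recovers the message
```

The cost is threefold bandwidth for a modest guarantee; Shannon’s theorem showed that cleverer codes can approach the channel capacity with arbitrarily low error, which is why the result startled the engineers of 1948.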

The ideas in Shannon’s paper were soon picked up by communication engineers and mathematicians around the world. They were elaborated upon, extended, and complemented with new related ideas. The subject thrived and grew to become a well-rounded and exciting chapter in the annals of science.

Shannon’s later work looked at ideas in artificial intelligence. In 1950 he published a groundbreaking paper on computer chess, entitled *Programming a Computer for Playing Chess*, which led to the first full game played by the Los Alamos *MANIAC* computer in 1956. In the same year, 1950, he created the electronic mouse *Theseus*, which could solve maze problems. It was a magnetic mouse controlled by a relay circuit that enabled it to move around a maze of 25 squares. The maze configuration was flexible and could be modified at will. The mouse was designed to search through the corridors until it found the target. Having traveled through the maze, the mouse could then be placed anywhere it had been before and, because of its prior experience, go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location and then proceed to the target, adding the new knowledge to its memory and thus learning. Shannon’s mouse appears to have been the first learning device of its kind.
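Theseus was built from relays, but its two-phase behaviour—search an unknown maze once, then replay the remembered route without searching—can be sketched in software. The maze layout and the breadth-first search below are illustrative assumptions, not a reconstruction of Shannon’s actual relay logic:

```python
from collections import deque

# A hypothetical 5x5 maze (25 squares, like Theseus's): '#' is a wall,
# 'S' the start, 'T' the target.
MAZE = [
    "S.#..",
    ".#...",
    "...#.",
    "#.#..",
    "....T",
]

def neighbours(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 5 and 0 <= nc < 5 and MAZE[nr][nc] != "#":
            yield nr, nc

def explore(start, target):
    """First run: search the corridors, recording every square visited
    and the square it was reached from (the mouse's 'memory')."""
    parents, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == target:
            break
        for nxt in neighbours(*cell):
            if nxt not in parents:
                parents[nxt] = cell
                frontier.append(nxt)
    return parents

def path_from_memory(memory, target):
    """Second run: no searching; just walk the remembered route."""
    path, cell = [], target
    while cell is not None:
        path.append(cell)
        cell = memory[cell]
    return path[::-1]

memory = explore((0, 0), (4, 4))
route = path_from_memory(memory, (4, 4))
print(f"direct route from memory: {len(route) - 1} moves")
```

The separation between `explore` and `path_from_memory` is the point: after one exploration, the stored map replaces the search, which is the sense in which the mouse "learned."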

Shannon also applied his inventive genius to other areas, e.g. building a two-seater version of his beloved unicycle, which, it is safe to say, no one was anxious to share with him. A later invention, a unicycle with an off-centre hub, would bring people out into the corridors to watch him ride it, bobbing up and down like a duck.