Computer programming
From Wikipedia, the free encyclopedia
"Programming" redirects here. For other uses, see Programming (disambiguation).
Overview
There is an ongoing debate on the extent to which the writing of programs is an art, a craft or an engineering discipline.[1] In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution (the criteria for "efficient" and "evolvable" vary considerably). The discipline differs from many other technical professions in that programmers, in general, do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers." Because the discipline covers many areas, which may or may not include critical applications, it is debatable whether licensing is required for the profession as a whole. In most cases, the discipline is self-governed by the entities which require the programming, and sometimes very strict environments are defined (e.g. United States Air Force use of AdaCore and security clearance). However, representing oneself as a "Professional Software Engineer" without a license from an accredited institution is illegal in many parts of the world.
Another ongoing debate is the extent to which the programming language used in writing computer programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir–Whorf hypothesis[2] in linguistics and cognitive science, which postulates that a particular spoken language's nature influences the habitual thought of its speakers. Different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.
History
See also: History of programming languages.

Wired control panel for an IBM 402 Accounting Machine.
In the late 1880s, Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards..."[7] To process these punched cards, first known as "Hollerith cards," he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information processing industry. In 1896 he founded the Tabulating Machine Company (which later became the core of IBM). The addition of a control panel (plugboard) to his 1906 Type I Tabulator allowed it to do different jobs without having to be physically rebuilt. By the late 1940s, there were a variety of control panel programmable machines, called unit record equipment, available to perform data-processing tasks.

Data and instructions could be stored on external punched cards, which were kept in order and arranged in program decks.
In 1954, FORTRAN was invented; it was the first high level programming language to have a functional implementation, as opposed to just a design on paper.[8][9] (A high-level language is, in very general terms, any programming language that allows the programmer to write programs in terms that are more abstract than assembly language instructions, i.e. at a level of abstraction "higher" than that of an assembly language.) It allowed programmers to specify calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text, or source, is converted into machine instructions using a special program called a compiler, which translates the FORTRAN program into machine language. In fact, the name FORTRAN stands for "Formula Translation". Many other languages were developed, including some for commercial programming, such as COBOL. Programs were mostly still entered using punched cards or paper tape. (See computer programming in the punch card era). By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. (Usually, an error in punching a card meant that the card had to be discarded and a new one punched to replace it.)
As time has progressed, computers have made giant leaps in the area of processing power. This has brought about newer programming languages that are more abstracted from the underlying hardware. Although these high-level languages usually incur greater overhead, the increase in speed of modern computers has made the use of these languages much more practical than in the past. These increasingly abstracted languages typically are easier to learn and allow the programmer to develop applications much more efficiently and with less source code. However, high-level languages are still impractical for a few programs, such as those where low-level hardware control is necessary or where maximum processing speed is vital.
Throughout the second half of the twentieth century, programming was an attractive career in most developed countries. Some forms of programming have been increasingly subject to offshore outsourcing (importing software and services from other countries, usually at a lower wage), making programming career decisions in developed countries more complicated, while increasing economic opportunities in less developed areas. It is unclear how far this trend will continue and how deeply it will impact programmer wages and opportunities.[citation needed]
Modern programming
Quality requirements
Whatever the approach to software development may be, the final program must satisfy some fundamental properties. The following properties are among the most relevant (a short code sketch after this list illustrates the first two):
- Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).
- Robustness: how well a program anticipates problems not due to programmer error. This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services and network connections, and user error.
- Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose, or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface.
- Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behaviour of the hardware and operating system, and availability of platform specific compilers (and sometimes libraries) for the language of the source code.
- Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or customizations, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.
- Efficiency/performance: the amount of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes correct disposal of some resources, such as cleaning up temporary files and lack of memory leaks.
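As a small illustration of the reliability and robustness properties above, here is a minimal Python sketch; the function, its name, and the sample data are invented for the example and are not part of the article:

```python
def average(values):
    """Return the arithmetic mean of a sequence of numbers.

    Robustness: reject bad input explicitly instead of failing unpredictably.
    Reliability: guard against division by zero for an empty sequence and avoid
    off-by-one mistakes by iterating over the sequence directly.
    """
    if not isinstance(values, (list, tuple)):
        raise TypeError("values must be a list or tuple of numbers")
    if len(values) == 0:
        raise ValueError("cannot average an empty sequence")  # avoids division by zero
    total = 0.0
    for v in values:          # no manual index arithmetic, so no off-by-one error
        total += v
    return total / len(values)

print(average([2, 4, 6]))     # 4.0
```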
Readability of source code
In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.
Readability is important because programmers spend the majority of their time reading, trying to understand and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study[10] found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability.[11] Some of these factors include:
- Different indentation styles (whitespace)
- Comments
- Decomposition
- Naming conventions for objects (such as variables, classes, procedures, etc.)
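As a hedged illustration of these factors, the following Python sketch shows the same computation written twice: once with poor readability and once with descriptive names, comments, and named constants. The function names and constants are invented for the example:

```python
# Harder to read: cryptic names and an unexplained constant.
def cv(x):
    return x * 9 / 5 + 32

# Easier to read: descriptive names, a docstring, and named constants.
FREEZING_POINT_F = 32
SCALE_FACTOR = 9 / 5

def celsius_to_fahrenheit(degrees_celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return degrees_celsius * SCALE_FACTOR + FREEZING_POINT_F
```

Both functions compute the same result, but the second can be understood and safely modified without reverse-engineering the arithmetic.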
Algorithmic complexity
The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problem. For this purpose, algorithms are classified into orders using so-called Big O notation, O(n), which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
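As a rough illustration (not taken from the article), the following Python sketch contrasts an O(n^2) and an O(n) approach to the same hypothetical problem, detecting duplicates in a list:

```python
def has_duplicates_quadratic(items):
    """O(n^2): compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """O(n) expected time: remember what has been seen in a hash set."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicates_quadratic([1, 2, 3, 2]))  # True
print(has_duplicates_linear([1, 2, 3, 4]))     # False
```

For small inputs the difference is negligible, but as the input grows the quadratic version slows down roughly with the square of the input size, which is exactly the distinction Big O notation captures.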
Methodologies
The first step in most formal software development processes is requirements analysis, followed by testing to determine value, modeling, implementation, and failure elimination (debugging). Many differing approaches exist for each of those tasks. One approach popular for requirements analysis is use case analysis. Nowadays many programmers use forms of Agile software development, in which the various stages of formal software development are integrated into short cycles that take a few weeks rather than years. There are many approaches to the software development process.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.
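As a small, hedged illustration of the stylistic difference (Python is used here only because it supports both styles; the values are invented), the same computation written imperatively and functionally:

```python
numbers = [1, 2, 3, 4, 5]

# Imperative style: describe *how* to compute, step by step, mutating state.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional style: describe *what* to compute as an expression over the data.
total_functional = sum(n * n for n in numbers if n % 2 == 0)

assert total == total_functional == 20
```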
Measuring language usage
It is very difficult to determine which modern programming languages are the most popular. Some languages are very popular for particular kinds of applications (e.g., COBOL is still strong in the corporate data center, often on large mainframes, FORTRAN in engineering applications, scripting languages in web development, and C in embedded applications), while some languages are regularly used to write many different kinds of applications. Also, many applications use a mix of several languages in their construction and use.
Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language,[12] the number of books teaching the language that are sold (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).
Debugging

An actual computer bug: a moth that was found in a relay and "debugged" in 1947
Debugging is often done with IDEs like Eclipse, Kdevelop, NetBeans, Code::Blocks, and Visual Studio. Standalone debuggers like gdb are also used, and these often provide less of a visual environment, usually using a command line.
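The article mentions gdb as a command-line debugger; as a language-neutral sketch of the same basic workflow (set a breakpoint, step through code, inspect values), here is a minimal example using Python's built-in debugger pdb. The function and its deliberate bug are invented for illustration and are not from the article:

```python
def buggy_sum(values):
    total = 0
    for i in range(1, len(values)):   # bug: skips the first element
        total += values[i]
    return total

if __name__ == "__main__":
    breakpoint()                      # drops into the pdb debugger (Python 3.7+)
    print(buggy_sum([1, 2, 3]))       # prints 5, not the expected 6
```

At the (Pdb) prompt, commands such as next, step, and p walk through the code and print variable values, which is analogous to the command-line workflow of debuggers like gdb.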
Programming languages
Main articles: Programming language and List of programming languages
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones.
Allen Downey, in his book How To Think Like A Computer Scientist, writes:
- The details look different in different languages, but a few basic instructions appear in just about every language:
- input: Get data from the keyboard, a file, or some other device.
- output: Display data on the screen or send data to a file or other device.
- arithmetic: Perform basic arithmetical operations like addition and multiplication.
- conditional execution: Check for certain conditions and execute the appropriate sequence of statements.
- repetition: Perform some action repeatedly, usually with some variation.
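A minimal Python sketch (not drawn from Downey's book; the program and its values are invented for illustration) showing all five of these basic instruction types in one short program:

```python
# input: get data from the keyboard
count = int(input("How many squares? "))

# repetition: perform an action repeatedly, with variation
for n in range(1, count + 1):
    # arithmetic: a basic arithmetical operation
    square = n * n

    # conditional execution: check a condition and act on it
    if square % 2 == 0:
        label = "even"
    else:
        label = "odd"

    # output: display data on the screen
    print(f"{n} squared is {square} ({label})")
```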
Programmers
Main article: Programmer
See also: Software developer and Software engineer.
Computer programmers are those who write computer software. Their jobs usually involve designing, writing, testing, debugging, and maintaining programs.
See also
Main article: Outline of computer programming
- ACCU
- Association for Computing Machinery
- Computer networking
- Computer programming in the punch card era
- Computer science
- Computing
- Hello world program
- List of basic computer programming topics
- List of computer programming topics
- Programming paradigms
- Software engineering
Central processing unit
From Wikipedia, the free encyclopedia
"CPU" redirects here. For other uses, see CPU (disambiguation).
An Intel 80486DX2 CPU from above
A central processing unit (CPU) is the portion of a computer system that carries out the instructions of a computer program and is the primary element performing the computer's functions. On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations, the CPU is housed in a single chip called a microprocessor. Since the 1970s, the microprocessor class of CPUs has almost completely overtaken all other CPU implementations.
The CPU itself is an internal component of the computer. Modern CPUs are small and square and contain multiple metallic connectors or pins on the underside. The CPU is inserted directly into a CPU socket, pin side down, on the motherboard.
Each motherboard will support only a specific type or range of CPU so that one has to check the motherboard manufacturer's specifications before attempting to replace or upgrade a CPU. Modern CPUs also have an attached heat sink and small fan that go directly on top of the CPU to help dissipate heat.
Two typical components of a CPU are the arithmetic logic unit (ALU), which performs arithmetic and logical operations, and the control unit (CU), which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.
History
Main article: History of general purpose CPUs

EDVAC, one of the first electronic stored-program computers
The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so the machine could be finished sooner. On June 30, 1945, before ENIAC was completed, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It outlined the design of a stored-program computer that would eventually be completed in August 1949.[2] EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the computer's memory.
Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.[clarification needed]
While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.
As a digital device, a CPU is limited to a set of discrete states, and requires some kind of switching elements to differentiate between and change states. Prior to commercial development of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational, and they eventually cease to function due to slow contamination of their cathodes that occurs in the course of normal operation. If a tube's vacuum seal leaks, as sometimes happens, cathode contamination is accelerated. Usually, when a tube failed, the CPU would have to be diagnosed to locate the failed component so it could be replaced. Therefore, early electronic (vacuum tube based) computers were generally faster but less reliable than electromechanical (relay based) computers.
Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely.[1] In the end, tube based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rate). Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.
Overview
The control unit
The control unit of the CPU contains circuitry that uses electrical signals to direct the entire computer system to carry out stored program instructions. The control unit does not execute program instructions; rather, it directs other parts of the system to do so. The control unit must communicate with both the arithmetic/logic unit and memory.
Discrete transistor and integrated circuit CPUs

CPU, core memory, and external bus interface of a DEC PDP-8/I. Made of medium-scale integrated circuits
During this period, a method of manufacturing many transistors in a compact space gained popularity. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip." At first only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based upon these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo guidance computer, usually contained transistor counts numbering in multiples of ten. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. As microelectronic technology advanced, an increasing number of transistors were placed on ICs, thus decreasing the quantity of individual ICs needed for a complete CPU. MSI and LSI (medium- and large-scale integration) ICs increased transistor counts to hundreds, and then thousands.
In 1964 IBM introduced its System/360 computer architecture which was used in a series of computers that could run the same programs with different speed and performance. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM utilized the concept of a microprogram (often called "microcode"), which still sees widespread usage in modern CPUs.[3] The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is still continued by similar modern computers like the IBM zSeries. In the same year (1964), Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8. DEC would later introduce the extremely popular PDP-11 line that originally was built with SSI ICs but was eventually implemented with LSI components once these became practical. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits.[4]
Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to both the increased reliability as well as the dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were obtained during this period. Additionally while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like SIMD (Single Instruction Multiple Data) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc.
Microprocessors
Main article: Microprocessor
Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size as a result of being implemented on a single die means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU has increased dramatically. This widely observed trend is described by Moore's law, which has proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity to date.
While the complexity, size, construction, and general form of CPUs have changed drastically over the past sixty years, it is notable that the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As the aforementioned Moore's law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.
Operation
The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback.
The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The location in program memory is determined by a program counter (PC), which stores a number that identifies the current position in the program. After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units.[5] Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below).
The instruction that the CPU fetches from memory is used to determine what the CPU is to do. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA).[6] Often, one group of numbers in the instruction, called the opcode, indicates which operation to perform. The remaining parts of the number usually provide information required for that instruction, such as operands for an addition operation. Such operands may be given as a constant value (called an immediate value), or as a place to locate a value: a register or a memory address, as determined by some addressing mode. In older designs the portions of the CPU responsible for instruction decoding were unchangeable hardware devices. However, in more abstract and complicated CPUs and ISAs, a microprogram is often used to assist in translating instructions into various configuration signals for the CPU. This microprogram is sometimes rewritable so that it can be modified to change the way the CPU decodes instructions even after it has been manufactured.
After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are connected so they can perform the desired operation. If, for instance, an addition operation was requested, the arithmetic logic unit (ALU) will be connected to a set of inputs and a set of outputs. The inputs provide the numbers to be added, and the outputs will contain the final sum. The ALU contains the circuitry to perform simple arithmetic and logical operations on the inputs (like addition and bitwise operations). If the addition operation produces a result too large for the CPU to handle, an arithmetic overflow flag in a flags register may also be set.
The final step, writeback, simply "writes back" the results of the execute step to some form of memory. Very often the results are written to some internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but cheaper and larger, main memory. Some types of instructions manipulate the program counter rather than directly produce result data. These are generally called "jumps" and facilitate behavior like loops, conditional program execution (through the use of a conditional jump), and functions in programs.[7] Many instructions will also change the state of digits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, one type of "compare" instruction considers two values and sets a number in the flags register according to which one is greater. This flag could then be used by a later jump instruction to determine program flow.
After the execution of the instruction and writeback of the resulting data, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If the completed instruction was a jump, the program counter will be modified to contain the address of the instruction that was jumped to, and program execution continues normally. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline", which in fact is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline.
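To make the four steps concrete, here is a deliberately simplified Python sketch of a toy machine; the instruction encoding, register count, and opcode names are invented for illustration and do not correspond to any real instruction set:

```python
# Toy machine: each instruction is a tuple (opcode, operand1, operand2, dest).
# Opcodes: "LOADI" loads an immediate value, "ADD" adds two registers,
# "JUMP" sets the program counter, "HALT" stops execution.
program = [
    ("LOADI", 5, None, 0),    # R0 <- 5
    ("LOADI", 7, None, 1),    # R1 <- 7
    ("ADD",   0, 1,    2),    # R2 <- R0 + R1
    ("HALT",  None, None, None),
]

registers = [0] * 4
pc = 0                         # program counter

while True:
    instruction = program[pc]            # fetch: read from program memory
    pc += 1                              # advance the program counter
    opcode, a, b, dest = instruction     # decode: split into opcode and operands

    if opcode == "LOADI":                # execute, then writeback to a register
        registers[dest] = a
    elif opcode == "ADD":
        registers[dest] = registers[a] + registers[b]
    elif opcode == "JUMP":
        pc = a                           # jumps modify the program counter directly
    elif opcode == "HALT":
        break

print(registers)   # [5, 7, 12, 0]
```

A real CPU performs these steps in hardware, overlapping them across many instructions, but the fetch, decode, execute, and writeback structure is the same.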
Design and implementation
Main article: CPU design
Integer range
The way a CPU represents numbers is a design choice that affects the most basic ways in which the device functions. Some early digital computers used an electrical model of the common decimal (base ten) numeral system to represent numbers internally. A few other computers have used more exotic numeral systems like ternary (base three). Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage.[8]
Related to number representation is the size and precision of numbers that a CPU can represent. In the case of a binary CPU, a bit refers to one significant place in the numbers a CPU deals with. The number of bits (or numeral places) a CPU uses to represent numbers is often called "word size", "bit width", "data path width", or "integer precision" when dealing with strictly integer numbers (as opposed to floating point). This number differs between architectures, and often within different parts of the very same CPU. For example, an 8-bit CPU deals with a range of numbers that can be represented by eight binary digits (each digit having two possible values), that is, 2^8 or 256 discrete numbers. In effect, integer size sets a hardware limit on the range of integers the software run by the CPU can utilize.[9]
Integer range can also affect the number of locations in memory the CPU can address (locate). For example, if a binary CPU uses 32 bits to represent a memory address, and each memory address represents one octet (8 bits), the maximum quantity of memory that CPU can address is 2^32 octets, or 4 GiB. This is a very simple view of CPU address space, and many designs use more complex addressing methods like paging in order to locate more memory than their integer range would allow with a flat address space.
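The arithmetic behind these figures can be checked directly; a short Python sketch using the same example bit widths as above:

```python
# Number of distinct values an n-bit integer can represent.
bits = 8
print(2 ** bits)                      # 256 discrete values for an 8-bit CPU

# Maximum memory addressable with 32-bit addresses, one octet (byte) per address.
address_bits = 32
max_bytes = 2 ** address_bits
print(max_bytes)                      # 4294967296 octets
print(max_bytes / (1024 ** 3))        # 4.0 GiB
```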
Higher levels of integer range require more structures to deal with the additional digits, and therefore more complexity, size, power usage, and general expense. It is not at all uncommon, therefore, to see 4- or 8-bit microcontrollers used in modern applications, even though CPUs with much higher range (such as 16, 32, 64, even 128-bit) are available. The simpler microcontrollers are usually cheaper, use less power, and therefore generate less heat, all of which can be major design considerations for electronic devices. However, in higher-end applications, the benefits afforded by the extra range (most often the additional address space) are more significant and often affect design choices. To gain some of the advantages afforded by both lower and higher bit lengths, many CPUs are designed with different bit widths for different portions of the device. For example, the IBM System/370 used a CPU that was primarily 32 bit, but it used 128-bit precision inside its floating point units to facilitate greater accuracy and range in floating point numbers.[3] Many later CPU designs use similar mixed bit width, especially when the processor is meant for general-purpose usage where a reasonable balance of integer and floating point capability is required.
Clock rate
Main article: Clock rate
The clock rate is the speed at which a microprocessor executes instructions. Every computer contains an internal clock that regulates the rate at which instructions are executed and synchronizes all the various computer components. The CPU requires a fixed number of clock ticks (or clock cycles) to execute each instruction. The faster the clock, the more instructions the CPU can execute per second.
Most CPUs, and indeed most sequential logic devices, are synchronous in nature.[10] That is, they are designed and operate on assumptions about a synchronization signal. This signal, known as a clock signal, usually takes the form of a periodic square wave. By calculating the maximum time that electrical signals can move in various branches of a CPU's many circuits, the designers can select an appropriate period for the clock signal.
This period must be longer than the amount of time it takes for a signal to move, or propagate, in the worst-case scenario. In setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism. (see below)
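As a hedged numerical illustration of the relationship between clock period and worst-case propagation delay described above (the delay and margin figures are invented for the example), the maximum clock rate is roughly the reciprocal of the chosen clock period:

```python
# Hypothetical worst-case propagation delay through the slowest path, in seconds.
worst_case_delay = 2e-9            # 2 nanoseconds (example value only)
safety_margin = 1.25               # keep the clock period comfortably above the worst case

clock_period = worst_case_delay * safety_margin
max_clock_rate_hz = 1 / clock_period
print(f"{max_clock_rate_hz / 1e6:.0f} MHz")   # 400 MHz for this example
```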
However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided in order to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue as clock rates increase dramatically is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does heat dissipation, causing the CPU to require more effective cooling solutions.
One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. One notable CPU design that uses extensive clock gating is that of the IBM PowerPC-based Xbox 360; it utilizes clock gating to reduce the power requirements of the aforementioned video game console.[11] Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire asynchronous CPUs have been built without utilizing a global clock signal. Two notable examples of this are the ARM-compliant AMULET and the MIPS R3000-compatible MiniMIPS. Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers.[12]
Parallelism
Main article: Parallel computing
The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates on and executes one instruction on one or two pieces of data at a time.
This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction is executed at a time, the entire CPU must wait for that instruction to complete before proceeding to the next instruction. As a result, the subscalar CPU gets "hung up" on instructions which take more than one clock cycle to complete execution. Even adding a second execution unit (see below) does not improve performance much; rather than one pathway being hung up, now two pathways are hung up and the number of unused transistors is increased. This design, wherein the CPU's execution resources can operate on only one instruction at a time, can only possibly reach scalar performance (one instruction per clock). However, the performance is nearly always subscalar (less than one instruction per cycle).
Attempts to achieve scalar and better performance have resulted in a variety of design methodologies that cause the CPU to behave less linearly and more in parallel. When referring to parallelism in CPUs, two terms are generally used to classify these design techniques. Instruction level parallelism (ILP) seeks to increase the rate at which instructions are executed within a CPU (that is, to increase the utilization of on-die execution resources), and thread level parallelism (TLP) aims to increase the number of threads (effectively individual programs) that a CPU can execute simultaneously. Each methodology differs both in the ways in which they are implemented, as well as the relative effectiveness they afford in increasing the CPU's performance for an application.[13]
Instruction level parallelism
Main articles: Instruction pipelining and Superscalar
One of the simplest methods used to accomplish increased parallelism is instruction pipelining, in which the fetch, decode, execute, and writeback stages of several instructions are overlapped so that each stage of the processor is working on a different instruction at any given time. Pipelining does, however, introduce the possibility for a situation where the result of the previous operation is needed to complete the next operation; a condition often termed data dependency conflict. To cope with this, additional care must be taken to check for these sorts of conditions and delay a portion of the instruction pipeline if this occurs. Naturally, accomplishing this requires additional circuitry, so pipelined processors are more complex than subscalar ones (though not very significantly so). A pipelined processor can become very nearly scalar, inhibited only by pipeline stalls (an instruction spending more than one clock cycle in a stage).
Further improvement on instruction pipelining led to superscalar designs, in which multiple execution units are provided and a dispatcher attempts to issue more than one instruction per clock cycle. Most of the difficulty in the design of a superscalar CPU architecture lies in creating an effective dispatcher. The dispatcher needs to be able to quickly and correctly determine whether instructions can be executed in parallel, as well as dispatch them in such a way as to keep as many execution units busy as possible. This requires that the instruction pipeline is filled as often as possible and gives rise to the need in superscalar architectures for significant amounts of CPU cache. It also makes hazard-avoiding techniques like branch prediction, speculative execution, and out-of-order execution crucial to maintaining high levels of performance. By attempting to predict which branch (or path) a conditional instruction will take, the CPU can minimize the number of times that the entire pipeline must wait until a conditional instruction is completed. Speculative execution often provides modest performance increases by executing portions of code that may not be needed after a conditional operation completes. Out-of-order execution somewhat rearranges the order in which instructions are executed to reduce delays due to data dependencies. In the case of Single Instruction, Multiple Data (where a large amount of data of the same type has to be processed), modern processors can disable parts of the pipeline, so that when a single instruction is executed many times the CPU skips the fetch and decode phases, greatly increasing performance on certain occasions, especially in highly repetitive workloads such as video creation software and photo processing.
In the case where a portion of the CPU is superscalar and part is not, the part which is not suffers a performance penalty due to scheduling stalls. The Intel P5 Pentium had two superscalar ALUs which could accept one instruction per clock each, but its FPU could not accept one instruction per clock. Thus the P5 was integer superscalar but not floating point superscalar. Intel's successor to the P5 architecture, P6, added superscalar capabilities to its floating point features, and therefore afforded a significant increase in floating point instruction performance.
Both simple pipelining and superscalar design increase a CPU's ILP by allowing a single processor to complete execution of instructions at rates surpassing one instruction per cycle (IPC).[15] Most modern CPU designs are at least somewhat superscalar, and nearly all general purpose CPUs designed in the last decade are superscalar. In later years some of the emphasis in designing high-ILP computers has been moved out of the CPU's hardware and into its software interface, or ISA. The strategy of the very long instruction word (VLIW) causes some ILP to become implied directly by the software, reducing the amount of work the CPU must perform to boost ILP and thereby reducing the design's complexity.
Thread-level parallelism
Another strategy of achieving performance is to execute multiple programs or threads in parallel. This area of research is known as parallel computing. In Flynn's taxonomy, this strategy is known as Multiple Instructions-Multiple Data or MIMD.
One technology used for this purpose was multiprocessing (MP). The initial flavor of this technology is known as symmetric multiprocessing (SMP), where a small number of CPUs share a coherent view of their memory system. In this scheme, each CPU has additional hardware to maintain a constantly up-to-date view of memory. By avoiding stale views of memory, the CPUs can cooperate on the same program and programs can migrate from one CPU to another. To increase the number of cooperating CPUs beyond a handful, schemes such as non-uniform memory access (NUMA) and directory-based coherence protocols were introduced in the 1990s. SMP systems are limited to a small number of CPUs while NUMA systems have been built with thousands of processors. Initially, multiprocessing was built using multiple discrete CPUs and boards to implement the interconnect between the processors. When the processors and their interconnect are all implemented on a single silicon chip, the technology is known as a multi-core microprocessor.
It was later recognized that finer-grain parallelism existed within a single program. A single program might have several threads (or functions) that could be executed separately or in parallel. Some of the earliest examples of this technology implemented input/output processing such as direct memory access as a separate thread from the computation thread. A more general approach to this technology was introduced in the 1970s when systems were designed to run multiple computation threads in parallel. This technology is known as multi-threading (MT). This approach is considered more cost-effective than multiprocessing, as only a small number of components within a CPU are replicated to support MT, as opposed to the entire CPU in the case of MP. In MT, the execution units and the memory system including the caches are shared among multiple threads. The downside of MT is that the hardware support for multithreading is more visible to software than that of MP, and thus supervisor software like operating systems have to undergo larger changes to support MT. One type of MT that was implemented is known as block multithreading, where one thread is executed until it is stalled waiting for data to return from external memory. In this scheme, the CPU quickly switches to another thread which is ready to run, with the switch often done in one CPU clock cycle, as in the UltraSPARC technology. Another type of MT is known as simultaneous multithreading, where instructions of multiple threads are executed in parallel within one CPU clock cycle.
For several decades from the 1970s to early 2000s, the focus in designing high performance general purpose CPUs was largely on achieving high ILP through technologies such as pipelining, caches, superscalar execution, out-of-order execution, etc. This trend culminated in large, power-hungry CPUs such as the Intel Pentium 4. By the early 2000s, CPU designers were thwarted from achieving higher performance from ILP techniques due to the growing disparity between CPU operating frequencies and main memory operating frequencies as well as escalating CPU power dissipation owing to more esoteric ILP techniques.
CPU designers then borrowed ideas from commercial computing markets such as transaction processing, where the aggregate performance of multiple programs, also known as throughput computing, was more important than the performance of a single thread or program.
This reversal of emphasis is evidenced by the proliferation of dual and multiple core CMP (chip-level multiprocessing) designs and notably, Intel's newer designs resembling its less superscalar P6 architecture. Later designs in several processor families exhibit CMP, including the x86-64 Opteron and Athlon 64 X2, the SPARC UltraSPARC T1, IBM POWER4 and POWER5, as well as several video game console CPUs like the Xbox 360's triple-core PowerPC design, and the PS3's 7-core Cell microprocessor.
Data parallelism
Main articles: Vector processor and SIMD
A less common but increasingly important paradigm of CPUs (and indeed, computing in general) deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device.[16] As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction. Using Flynn's taxonomy, these two schemes of dealing with data are generally referred to as SIMD (single instruction, multiple data) and SISD (single instruction, single data), respectively. The great utility in creating CPUs that deal with vectors of data lies in optimizing tasks that tend to require the same operation (for example, a sum or a dot product) to be performed on a large set of data. Some classic examples of these types of tasks are multimedia applications (images, video, and sound), as well as many types of scientific and engineering tasks. Whereas a scalar CPU must complete the entire process of fetching, decoding, and executing each instruction and value in a set of data, a vector CPU can perform a single operation on a comparatively large set of data with one instruction. Of course, this is only possible when the application tends to require many steps which apply one operation to a large set of data.
Most early vector CPUs, such as the Cray-1, were associated almost exclusively with scientific research and cryptography applications. However, as multimedia has largely shifted to digital media, the need for some form of SIMD in general-purpose CPUs has become significant. Shortly after inclusion of floating point execution units started to become commonplace in general-purpose processors, specifications for and implementations of SIMD execution units also began to appear for general-purpose CPUs. Some of these early SIMD specifications like HP's Multimedia Acceleration eXtensions (MAX) and Intel's MMX were integer-only. This proved to be a significant impediment for some software developers, since many of the applications that benefit from SIMD primarily deal with floating point numbers. Progressively, these early designs were refined and remade into some of the common, modern SIMD specifications, which are usually associated with one ISA. Some notable modern examples are Intel's SSE and the PowerPC-related AltiVec (also known as VMX).[17]
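A hedged software analogy of the scalar-versus-vector distinction, using Python with NumPy (a library not mentioned in the article; the arrays are invented for the example): the explicit loop applies the operation one element at a time, while the vectorized expression states one operation over the whole array, much as a SIMD instruction operates on many data elements at once.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])

# Scalar (SISD-like) view: one addition per loop iteration.
scalar_sum = np.empty_like(a)
for i in range(len(a)):
    scalar_sum[i] = a[i] + b[i]

# Vector (SIMD-like) view: one expression over all elements at once.
vector_sum = a + b

print(np.array_equal(scalar_sum, vector_sum))  # True
```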
Performance
The performance or speed of a processor depends on the clock rate (generally given in multiples of hertz) and the instructions per clock (IPC), which together are the factors for the instructions per second (IPS) that the CPU can perform.[18] Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches, whereas realistic workloads consist of a mix of instructions and applications, some of which take longer to execute than others. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in MIPS calculations. Because of these problems, various standardized tests such as SPECint have been developed to attempt to measure the real effective performance (often called a "benchmark" for this purpose) in commonly used applications.
Processing performance of computers is increased by using multi-core processors, which essentially means plugging two or more individual processors (called cores in this sense) into one integrated circuit.[19] Ideally, a dual-core processor would be nearly twice as powerful as a single-core processor. In practice, however, the performance gain is far less, only about fifty percent,[19] due to imperfect software algorithms and implementation.
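The relationship described above can be sketched numerically in Python; all figures below are invented for illustration and do not describe any particular processor:

```python
# Instructions per second = instructions per clock * clock rate.
clock_rate_hz = 3.0e9        # hypothetical 3 GHz processor
ipc = 1.5                    # hypothetical sustained instructions per clock
ips = ipc * clock_rate_hz
print(f"{ips / 1e9:.1f} billion instructions per second")          # 4.5

# A second core rarely doubles throughput; apply the roughly 50% gain quoted above.
single_core = ips
dual_core_estimate = single_core * 1.5
print(f"dual-core estimate: {dual_core_estimate / 1e9:.2f} billion instructions per second")
```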
Motherboard
From Wikipedia, the free encyclopedia
In personal computers, a motherboard is the central printed circuit board (PCB) in many modern computers and holds many of the crucial components of the system, providing connectors for other peripherals. The motherboard is sometimes alternatively known as the mainboard, system board, or, on Apple computers, the logic board.[1] It is also sometimes casually shortened to mobo.[2]
Motherboard for an Acer desktop personal computer, showing the typical components and interfaces that are found on a motherboard. This model was made by Foxconn in 2008 and follows the ATX layout (known as the "form factor") usually employed for desktop computers. It is designed to work with AMD's Athlon 64 processor.
History
Prior to the advent of the microprocessor, a computer was usually built in a card-cage case or mainframe with components connected by a backplane consisting of a set of slots themselves connected with wires; in very old designs the wires were discrete connections between card connector pins, but printed circuit boards soon became the standard practice. The central processing unit, memory and peripherals were housed on individual printed circuit boards which plugged into the backplane.
During the late 1980s and 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard (see below). In the late 1980s, motherboards began to include single ICs (called Super I/O chips) capable of supporting a set of low-speed peripherals: keyboard, mouse, floppy disk drive, serial ports, and parallel ports. As of the late 1990s, many personal computer motherboards supported a full range of audio, video, storage, and networking functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics typically retained only the graphics card as a separate component.
The early pioneers of motherboard manufacturing were Micronics, Mylex, AMI, DTK, Hauppauge, Orchid Technology, Elitegroup, DFI, and a number of Taiwan-based manufacturers.
The most popular computers, such as the Apple II and IBM PC, had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Usually intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment.
The term mainboard is applied to devices with a single board and no additional expansions or capability. In modern terms this would include embedded systems and controlling boards in televisions, washing machines, etc. A motherboard specifically refers to a printed circuit board with expansion capability.
Overview
A motherboard, like a backplane, provides the electrical connections by which the other components of the system communicate, but unlike a backplane, it also connects the central processing unit and hosts other subsystems and devices.
A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables, although in modern computers it is increasingly common to integrate some of these peripherals into the motherboard itself.
An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard.
Modern motherboards include, at a minimum:
- sockets (or slots) in which one or more microprocessors may be installed[3]
- slots into which the system's main memory is to be installed (typically in the form of DIMM modules containing DRAM chips)
- a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses
- non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS
- a clock generator which produces the system clock signal to synchronize the various components
- slots for expansion cards (these interface to the system via the buses supported by the chipset)
- power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards.[4]
Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat.
[edit] CPU sockets
Main article: CPU socket
A CPU socket or slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor). It is a special type of integrated circuit socket designed for very high pin counts. A CPU socket provides many functions, including a physical structure to support the CPU, support for a heat sink, facilitating replacement (as well as reducing cost), and most importantly, forming an electrical interface both with the CPU and the PCB. CPU sockets are found in most desktop and server computers (laptops typically use surface-mount CPUs), particularly those based on the Intel x86 architecture. A CPU socket type and motherboard chipset must support the CPU series and speed.

[edit] Integrated peripherals
As an example of the range of peripherals that can be integrated, the ECS RS485M-M,[6] a typical modern budget motherboard for computers based on AMD processors, has on-board support for a very large range of peripherals:
- disk controllers for a floppy disk drive, up to 2 PATA drives, and up to 6 SATA drives (including RAID 0/1 support)
- integrated graphics controller supporting 2D and 3D graphics, with VGA and TV output
- integrated sound card supporting 8-channel (7.1) audio and S/PDIF output
- Fast Ethernet network controller for 10/100 Mbit networking
- USB 2.0 controller supporting up to 12 USB ports
- IrDA controller for infrared data communication (e.g. with an IrDA-enabled cellular phone or printer)
- temperature, voltage, and fan-speed sensors that allow software to monitor the health of computer components
[edit] Peripheral card slots
A typical motherboard of 2009 will have a different number of connections depending on its standard. A standard ATX motherboard will typically have one PCI-E 16x connection for a graphics card, two conventional PCI slots for various expansion cards, and one PCI-E 1x (which will eventually supersede PCI). A standard EATX motherboard will have one PCI-E 16x connection for a graphics card, and a varying number of PCI and PCI-E 1x slots. It can sometimes also have a PCI-E 4x slot. (This varies between brands and models.)
Some motherboards have two PCI-E 16x slots, to allow more than 2 monitors without special hardware, or use a special graphics technology called SLI (for Nvidia) and Crossfire (for ATI). These allow 2 graphics cards to be linked together, to allow better performance in intensive graphical computing tasks, such as gaming and video editing.
As of 2007, virtually all motherboards come with at least four USB ports on the rear, with at least two connections on the board internally for wiring additional front ports that may be built into the computer's case. An Ethernet port is also usually included; this is the standard connection for attaching the computer to a network or a broadband modem. A sound chip is almost always included on the motherboard, to allow sound output without the need for any extra components. This allows computers to be far more multimedia-capable than before. Some motherboards contain video outputs on the back panel for integrated graphics solutions (either embedded in the motherboard, or combined with the microprocessor, such as Intel HD Graphics). A separate graphics card may still be used.
[edit] Temperature and reliability
Main article: Computer cooling
Motherboards are generally air cooled with heat sinks often mounted on larger chips, such as the Northbridge, in modern motherboards. If the motherboard is not cooled properly, the computer can crash. Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on their heat sinks, due to rising clock speeds and power consumption. Most motherboards have connectors for additional case fans as well. Newer motherboards have integrated temperature sensors to detect motherboard and CPU temperatures, and controllable fan connectors which the BIOS or operating system can use to regulate fan speed. Some computers (which typically have high-performance microprocessors, large amounts of RAM, and high-performance video cards) use a water-cooling system instead of many fans.

Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fan-less designs. This typically requires the use of a low-power CPU, as well as careful layout of the motherboard and other components to allow for heat sink placement.
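As a rough illustration of the kind of temperature-to-fan-speed mapping such firmware or driver software might apply, the sketch below ramps a fan's PWM duty cycle linearly between two temperature thresholds. The threshold values and the simulated sensor readings are invented for illustration; they do not correspond to any particular board's behaviour.

```c
#include <stdio.h>

/* Sketch of a simple fan curve: map a temperature reading to a PWM duty
 * cycle.  The endpoints below are assumed values, not any board's real
 * configuration. */
static int fan_duty_for_temp(int temp_c)
{
    const int low_c = 40, high_c = 80;       /* assumed curve endpoints      */
    const int min_duty = 30, max_duty = 100; /* percent PWM duty cycle       */

    if (temp_c <= low_c)
        return min_duty;
    if (temp_c >= high_c)
        return max_duty;
    /* Linear ramp between the two endpoints. */
    return min_duty + (max_duty - min_duty) * (temp_c - low_c) / (high_c - low_c);
}

int main(void)
{
    const int readings[] = { 35, 45, 60, 75, 90 };  /* simulated sensor data */
    for (int i = 0; i < (int)(sizeof readings / sizeof readings[0]); i++)
        printf("%2d C -> %3d%% fan duty\n",
               readings[i], fan_duty_for_temp(readings[i]));
    return 0;
}
```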
A 2003 study[7] found that some spurious computer crashes and general reliability issues, ranging from screen image distortions to I/O read/write errors, can be attributed not to software or peripheral hardware but to aging capacitors on PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation.[8]
- For more information on premature capacitor failure on PC motherboards, see capacitor plague.
[edit] Form factor
Main article: Comparison of computer form factors
Motherboards are produced in a variety of sizes and shapes called computer form factors, some of which are specific to individual computer manufacturers. However, the motherboards used in IBM-compatible systems are designed to fit various case sizes. As of 2007[update], most desktop computer motherboards use one of these[which?] standard form factors—even those found in Macintosh and Sun computers, which have not traditionally been built from commodity components. The current desktop PC form factor of choice is ATX. A case's motherboard and PSU form factor must all match, though some smaller form factor motherboards of the same family will fit larger cases. For example, an ATX case will usually accommodate a microATX motherboard.

Laptop computers generally use highly integrated, miniaturized and customized motherboards. This is one of the reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop component requires the replacement of the entire motherboard, which is usually more expensive than a desktop motherboard due to the large number of integrated components.
[edit] Bootstrapping using the BIOS
Main article: booting
Motherboards contain some non-volatile memory to initialize the system and load an operating system from some external peripheral device. Microcomputers such as the Apple II and IBM PC used ROM chips, mounted in sockets on the motherboard. At power-up, the central processor would load its program counter with the address of the boot ROM and start executing ROM instructions, displaying system information on the screen and running memory checks, which would in turn start loading the operating system from an external or peripheral device (such as a disk drive). If none is available, the computer can perform tasks from other memory stores or display an error message, depending on the model and design of the computer and the version of the BIOS.

Most modern motherboard designs use a BIOS, stored in an EEPROM chip soldered or socketed to the motherboard, to bootstrap an operating system. When power is first applied to the motherboard, the BIOS firmware tests and configures memory, circuitry, and peripherals. This Power-On Self Test (POST) may include testing some of the following devices (a simplified sketch of such a check loop follows the list):
- video adapter
- cards inserted into slots, such as conventional PCI
- floppy drive
- thermistors, voltages, and fan speeds for hardware monitoring
- CMOS used to store BIOS setup configuration
- keyboard and mouse
- network controller
- optical drives: CD-ROM or DVD-ROM
- SCSI hard drive
- IDE, EIDE, or SATA hard disk
- security devices, such as a fingerprint reader or the state of a latch switch to detect intrusion
- USB devices, such as a memory storage device
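A highly simplified, table-driven sketch of such a check loop is shown below. The device list and check functions are stand-ins invented for illustration; real firmware probes the hardware directly and in a vendor-specific order.

```c
#include <stdio.h>

/* Sketch of a POST-style check loop.  The checks here are placeholders
 * that always report success; real firmware tests the hardware itself. */
typedef int (*post_check)(void);

static int check_memory(void)        { return 1; }
static int check_video_adapter(void) { return 1; }
static int check_keyboard(void)      { return 1; }
static int check_boot_disk(void)     { return 1; }

static const struct {
    const char *name;
    post_check  run;
} post_table[] = {
    { "main memory",   check_memory },
    { "video adapter", check_video_adapter },
    { "keyboard",      check_keyboard },
    { "boot disk",     check_boot_disk },
};

int main(void)
{
    int ok = 1;
    for (int i = 0; i < (int)(sizeof post_table / sizeof post_table[0]); i++) {
        int passed = post_table[i].run();
        printf("POST: %-13s %s\n", post_table[i].name, passed ? "ok" : "FAILED");
        if (!passed)
            ok = 0;
    }
    if (ok)
        printf("handing control to the boot loader...\n");
    return ok ? 0 : 1;
}
```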
[edit] See also
- Accelerated Graphics Port (AGP)
- Backplane
- BIOS
- Central Processing Unit
- Chipset
- Computer case
- Conventional PCI
- Daughterboard
- Front-side bus
- Industry Standard Architecture (ISA)
- List of motherboard manufacturers
- Offboard
- Overclocking
- PCI Express
- Single-board computer
[edit] References
- ^ Paul Miller. "Apple sneaks new logic board into whining MacBook Pros" (2006). Engadget. http://www.engadget.com/2006/07/08/apple-sneaks-new-logic-board-into-whining-macbook-pros/. Retrieved 2008-10-23.
- ^ "mobo". Webopedia. http://www.webopedia.com/TERM/M/mobo.html. Retrieved 2008-10-23.
- ^ In the case of CPUs in BGA packages, such as the VIA C3, the CPU is directly soldered to the motherboard.
- ^ As of 2007[update], some graphics cards (e.g. GeForce 8 and Radeon R600) require more power than the motherboard can provide, and thus dedicated connectors have been introduced to attach them directly to the power supply. (Note that most disk drives also connect to the power supply via dedicated connectors.)
- ^ "Golden Oldies: 1993 mainboards". http://redhill.net.au/b/b-93.html. Retrieved 2007-06-27.
- ^ "RS485M-M (V1.0)". http://www.ecs.com.tw/ECSWebSite/Products/ProductsDetail.aspx?DetailID=654&CategoryID=1&DetailName=Feature&MenuID=46&LanID=9. Retrieved 2007-06-27.
- ^ c't Magazine, vol. 21, pp. 216-221. 2003.
- ^ Yu-Tzu Chiu, Samuel K. Moore "Faults & Failures: Leaking Capacitors Muck up Motherboards" (2003-02-19) IEEE Spectrum accessed 2008-03-10
- ^ See the capacitor lifetime formula at [1].
[edit] External links
![]() | Wikimedia Commons has media related to: Computer motherboards |
- List of motherboard manufacturers and links to BIOS updates
- What is a motherboard?
- The Making of a Motherboard: ECS Factory Tour
- The Making of a Motherboard: Gigabyte Factory Tour
- Motherboards at the Open Directory Project
- Front Panel I/O Connectivity Design Guide - v1.3 (pdf file) (February 2005)
Mouse (computing)
From Wikipedia, the free encyclopedia
Contents[hide] |
[edit] Naming
The first known publication of the term "mouse" as a pointing device is in Bill English's 1965 publication "Computer-Aided Display Control".[1]

The online Oxford Dictionaries entry for mouse states that the plural for the small rodent is mice, while the plural for the small, computer-connected device is either mice or mouses. However, in the usage section of the entry it states that the more common plural is mice, and the first recorded use of the term in the plural (1984) is mice as well.[2] The fourth edition of The American Heritage Dictionary of the English Language endorses both computer mice and computer mouses as correct plural forms for computer mouse. Some authors of technical documents may prefer either mouse devices or the more generic pointing devices. The plural mouses treats mouse as a "headless noun".
[edit] Early mice

Early mouse patents. From left to right: Opposing track wheels by Engelbart, Nov. 1970, U.S. Patent 3,541,541. Ball and wheel by Rider, Sept. 1974, U.S. Patent 3,835,464. Ball and two rollers with spring by Opocensky, Oct. 1976, U.S. Patent 3,987,685
Independently, Douglas Engelbart at the Stanford Research Institute invented the first mouse prototype in 1963,[4] with the assistance of his colleague Bill English. They christened the device the mouse as early models had a cord attached to the rear part of the device looking like a tail and generally resembling the common mouse.[5] Engelbart never received any royalties for it, as his patent ran out before it became widely used in personal computers.[6]
The invention of the mouse was just a small part of Engelbart's much larger project, aimed at augmenting human intellect.[7]

The first computer mouse, held by inventor Douglas Engelbart, showing the wheels that make contact with the working surface
Just a few weeks before Engelbart presented his demo in 1968, a mouse had already been developed and published by the German company Telefunken. Unlike Engelbart's mouse, the Telefunken model used a ball, as seen in most later models up to the present day. From 1970 it was shipped as a component of, and sold together with, Telefunken computers. Some models from 1972 are still well preserved.[10]
The second marketed integrated mouse, shipped as part of a computer and intended for personal computer navigation, came with the Xerox 8010 Star Information System in 1981. However, the mouse remained relatively obscure until the appearance of the Apple Macintosh, which included an updated version of the original Lisa mouse. In 1984, PC columnist John C. Dvorak dismissively commented on the newly released computer with a mouse: "There is no evidence that people want to use these things".[11][12]
[edit] Variants
[edit] Mechanical mice
![]() Operating an opto-mechanical mouse. |
Bill English, builder of Engelbart's original mouse,[13] invented the ball mouse in 1972 while working for Xerox PARC.[14]
The ball mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required.

The ball mouse has two freely rotating rollers, located 90 degrees apart. One roller detects the forward–backward motion of the mouse and the other the left–right motion. Opposite the two rollers is a third one (white, in the photo, at 45 degrees) that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc, however, has a pair of light beams, located so that a given beam becomes interrupted, or again starts to pass light freely, when the other beam of the pair is about halfway between changes. Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are approximately in quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, and via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the screen.
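As a rough illustration of the quadrature scheme just described, the following sketch decodes the two phase-shifted sensor signals of a single encoder wheel into a signed step count. The simulated sample sequence stands in for real sensor readings; it is not the circuitry or timing of any particular mouse.

```c
#include <stdio.h>

/* Quadrature decoding: two optical sensors (A and B) produce square waves
 * roughly 90 degrees out of phase.  Which one changes first indicates the
 * direction of rotation; every valid transition is one count. */

/* Lookup table indexed by (previous AB state << 2) | current AB state.
 * +1 = one step forward, -1 = one step backward, 0 = no change or invalid. */
static const int qdec[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0
};

int main(void)
{
    /* Simulated sensor samples: A is bit 1, B is bit 0.  The sequence
     * 00 -> 01 -> 11 -> 10 -> 00 is one full forward cycle. */
    const int samples[] = { 0, 1, 3, 2, 0, 1, 3, 2, 0, 2, 3, 1, 0 };
    int prev = samples[0];
    long position = 0;

    for (int i = 1; i < (int)(sizeof samples / sizeof samples[0]); i++) {
        int cur = samples[i];
        position += qdec[(prev << 2) | cur];  /* forward counts minus backward */
        prev = cur;
    }
    printf("net counts: %ld\n", position);
    return 0;
}
```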
The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately.
Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975.[15][16]
Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse.[17][18] Instead of a ball, it had two wheels rotating at off axes. Keytronic later produced a similar product.[19]
Modern computer mice took form at the École polytechnique fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard.[20] This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s.[21] In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design.[22] Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent;"[22] though optical mice from Mouse Systems had incorporated microprocessors by 1984.[23]
Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug-compatible with an analog joystick. The "Color Mouse", originally marketed by Radio Shack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example.
[edit] Optical and Laser mice
Main article: Optical mouse
Optical mice make use of one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, rather than internal moving parts as does a mechanical mouse. A laser mouse is an optical mouse that uses coherent (laser) light.

[edit] Inertial and gyroscopic mice
Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm". Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use.[24] In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture.[edit] 3D mice
Also known as bats,[25] flying mice, or wands,[26] these devices generally function through ultrasound and provide at least three degrees of freedom. Probably the best known example would be 3DConnexion/Logitech's SpaceMouse from the early 1990s.

In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station.[27] Despite a certain appeal, it was finally discontinued because it did not provide sufficient resolution.
A recent consumer 3D pointing device is the Wii Remote. While primarily a motion-sensing device (that is, it can determine its orientation and direction of movement), Wii Remote can also detect its spatial position by comparing the distance and position of the lights from the IR emitter using its integrated IR camera (since the nunchuk accessory lacks a camera, it can only tell its current heading and orientation). The obvious drawback to this approach is that it can only produce spatial coordinates while its camera can see the sensor bar.
A mouse-related controller called the SpaceBall™ [28] has a ball placed above the work surface that can easily be gripped. With spring-loaded centering, it sends both translational as well as angular displacements on all six axes, in both directions for each.
In November 2010 a German company called Axsotic introduced a new concept of 3D mouse called the 3D Spheric Mouse. This new concept of a true six-degree-of-freedom input device uses a ball that can be rotated about three axes without any limitations.[29]
[edit] Tactile mice
In 2000, Logitech introduced the "tactile mouse", which contained a small actuator that made the mouse vibrate. Such a mouse can augment user interfaces with haptic feedback, such as giving feedback when crossing a window boundary. Surfing by touch requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice[30] but never marketed.

[edit] Connectivity and communication protocols
To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses.

While the electrical interface and the format of the data transmitted by commonly available mice is currently standardized on USB, in the past it varied between different manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer.
Mouse use in DOS applications became more common after the introduction of the Microsoft mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a Microsoft compatible driver (even if the mouse hardware itself was incompatible with Microsoft's). An interesting footnote is that the Microsoft driver standard communicates mouse movements in standard units called "mickeys".
[edit] Serial interface and protocol
Standard PC mice once used the RS-232C serial port via a D-subminiature connector, which provided power to run the mouse's circuits as well as data on mouse movements. The Mouse Systems Corporation version used a five-byte protocol and supported three buttons. The Microsoft version used an incompatible three-byte protocol and only allowed for two buttons. Due to the incompatibility, some manufacturers sold serial mice with a mode switch: "PC" for MSC mode, "MS" for Microsoft mode.[31]

[edit] PS/2 interface and protocol
For more details on this topic, see PS/2 connector.
With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 interface for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN connector in lieu of the former 5-pin connector. In default mode (called stream mode) a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets.[32] For any motion, button press or button release event, a PS/2 mouse sends, over a bi-directional serial port, a sequence of three bytes with the following format (a decoding sketch follows the table):

| | Bit 7 | Bit 6 | Bit 5 | Bit 4 | Bit 3 | Bit 2 | Bit 1 | Bit 0 |
|---|---|---|---|---|---|---|---|---|
| Byte 1 | YV | XV | YS | XS | 1 | MB | RB | LB |
| Byte 2 | X movement | | | | | | | |
| Byte 3 | Y movement | | | | | | | |

Here XS and YS are the sign bits of the 9-bit two's-complement X and Y movement values, XV and YV are overflow flags, and LB, MB and RB are the states of the left, middle and right buttons.
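The sketch below decodes one such 3-byte stream-mode packet into button states and signed movement deltas, following the bit layout in the table above. It is a simplified illustration of the packet format, not a complete PS/2 host driver.

```c
#include <stdio.h>
#include <stdint.h>

struct ps2_event {
    int left, middle, right;   /* button states          */
    int dx, dy;                /* signed movement deltas */
    int x_overflow, y_overflow;
};

/* Decode one 3-byte stream-mode packet (see the table above).
 * Byte 1: YV XV YS XS 1 MB RB LB; bytes 2 and 3 hold the low 8 bits
 * of the 9-bit two's-complement X and Y movement values. */
static struct ps2_event decode_ps2_packet(const uint8_t p[3])
{
    struct ps2_event e;
    e.left       = p[0] & 0x01;
    e.right      = (p[0] >> 1) & 0x01;
    e.middle     = (p[0] >> 2) & 0x01;
    e.x_overflow = (p[0] >> 6) & 0x01;
    e.y_overflow = (p[0] >> 7) & 0x01;
    /* Combine the sign bits (XS, YS) with the low bytes. */
    e.dx = p[1] - ((p[0] & 0x10) ? 256 : 0);
    e.dy = p[2] - ((p[0] & 0x20) ? 256 : 0);
    return e;
}

int main(void)
{
    /* Example packet: left button down, small move right and up. */
    const uint8_t packet[3] = { 0x09, 0x05, 0x03 };
    struct ps2_event e = decode_ps2_packet(packet);
    printf("L=%d M=%d R=%d dx=%d dy=%d\n", e.left, e.middle, e.right, e.dx, e.dy);
    return 0;
}
```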
A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of "IntelliMouse" and "PS/2"). It initially operates in standard PS/2 format, for backwards compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five).[33]
The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, such that an ordinary PS/2 driver can handle them.[34]
Mouse vendors also use other extended formats, often without providing public documentation.
For 3D (or six-degree-of-freedom) input, vendors have made many extensions both to the hardware and to software. In the late 1990s Logitech created ultrasound-based tracking which gave 3D input accurate to a few millimetres, which worked well as an input device but failed as a profitable product. In 2008, Motion4U introduced its "OptiBurst" system using IR tracking for use as a Maya (graphics software) plugin.
[edit] Apple Desktop Bus

Apple Macintosh Plus mice, 1986: beige mouse (left) and platinum mouse (right)
[edit] USB
The industry-standard USB protocol and its connector have become widely used for mice; USB is currently among the most popular connection types.[35]

[edit] Cordless or wireless
Cordless or wireless mice transmit data via infrared radiation (see IrDA) or radio (including Bluetooth). The receiver is connected to the computer through a serial or USB port. The newer nano receivers were designed to be small enough to remain connected in a laptop or notebook computer during transport, while still being large enough to easily remove.[36]

[edit] Operation
A mouse typically controls the motion of a cursor in two dimensions in a graphical user interface (GUI). Clicking or hovering (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the cursor hovers over this icon might cause a text editing program to open the file in a window. (See also point-and-click.)

Users can also employ mice gesturally, meaning that a stylized motion of the mouse cursor itself, called a "gesture", can issue a command or map to a specific action. For example, in a drawing program, moving the mouse in a rapid "x" motion over a shape might delete the shape.
Gestural interfaces occur more rarely than plain pointing and clicking, and people often find them more difficult to use because they require finer motor control. However, a few gestural conventions have become widespread, including the drag-and-drop gesture (sketched in code after this list), in which:
- The user presses the mouse button while the mouse cursor hovers over an interface object
- The user moves the cursor to a different location while holding the button down
- The user releases the mouse button
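As a rough sketch of how an application might recognize this sequence from raw button and motion events, the following uses a tiny state machine. The event types and structure here are invented, toolkit-neutral stand-ins, not the API of any particular GUI framework.

```c
#include <stdio.h>

/* Hypothetical, toolkit-neutral mouse events for illustration only. */
enum event_type { BUTTON_DOWN, MOUSE_MOVE, BUTTON_UP };

struct mouse_event {
    enum event_type type;
    int x, y;
};

/* Minimal drag-and-drop recognizer: press over an object starts a
 * potential drag, movement while held drags it, release drops it. */
static void handle_event(const struct mouse_event *ev)
{
    static int dragging = 0;
    static int start_x, start_y;

    switch (ev->type) {
    case BUTTON_DOWN:                 /* step 1: press over an object */
        dragging = 1;
        start_x = ev->x;
        start_y = ev->y;
        break;
    case MOUSE_MOVE:                  /* step 2: move while holding   */
        if (dragging)
            printf("dragging object from (%d,%d) to (%d,%d)\n",
                   start_x, start_y, ev->x, ev->y);
        break;
    case BUTTON_UP:                   /* step 3: release = drop       */
        if (dragging)
            printf("dropped at (%d,%d)\n", ev->x, ev->y);
        dragging = 0;
        break;
    }
}

int main(void)
{
    const struct mouse_event events[] = {
        { BUTTON_DOWN, 10, 10 }, { MOUSE_MOVE, 40, 25 },
        { MOUSE_MOVE, 80, 60 },  { BUTTON_UP,  80, 60 },
    };
    for (int i = 0; i < (int)(sizeof events / sizeof events[0]); i++)
        handle_event(&events[i]);
    return 0;
}
```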
Other uses of the mouse's input occur commonly in special application-domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate, so that all sides can be examined.
When mice have more than one button, software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed configuration) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button.
Different ways of operating the mouse cause specific things to happen in the GUI:
- Click: pressing and releasing a button.
- (left) Single-click: clicking the main button.
- (left) Double-click: clicking the button two times in quick succession counts as a different gesture than two separate single clicks.
- (left) Triple-click: clicking the button three times in quick succession.
- Right-click: clicking the secondary button.
- Middle-click: clicking the ternary button.
- Drag: pressing and holding a button, then moving the mouse without releasing. Documentation usually specifies "drag with the right mouse button" when the less common right button is intended, and plain "drag" when the more commonly used left button is meant.
- Button chording (a.k.a. Rocker navigation).
- Combination of right-click then left-click.
- Combination of left-click then right-click or keyboard letter.
- Combination of left or right-click and the mouse wheel.
- Clicking while holding down a modifier key.
- Rollover
- Selection
- Menu traversal
- Drag and drop
- Pointing
- Goal crossing
[edit] Multiple-mouse systems
Some systems allow two or more mice to be used at once as input devices. 16-bit era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer. The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around.

Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time generally results in seemingly random movements of the cursor. However, the advantage of this support lies not in simultaneous use, but in simultaneous availability for alternate use: for example, a laptop user editing a complex document might use a handheld mouse for drawing and manipulation of graphics, but when editing a section of text, use a built-in trackpad to allow movement of the cursor while keeping his hands on the keyboard. Windows' multiple-device support means that the second device is available for use without having to disconnect or disable the first.
DirectInput originally allowed access to multiple mice as separate devices, but Windows NT based systems could not make use of this. When Windows XP was introduced, it provided a feature called "Raw Input" that offers the ability to track multiple mice independently, allowing for programs that make use of separate mice. Though a program could, for example, draw multiple cursors if it was a fullscreen application, Windows still supports just one cursor and keyboard.
As of 2009, Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support unlimited numbers of cursors and keyboards through Multi-Pointer X.
There have also been propositions of having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications.[37]
[edit] Buttons
Main article: Mouse button
Mouse buttons are microswitches which can be pressed ("clicked") in order to select or interact with an element of a graphical user interface.

The three-button scrollmouse has become the most commonly available design. As of 2007 (and roughly since the late 1990s), users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button is located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software.
[edit] Mouse speed
The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed incorrectly as dots per inch (DPI) – the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi).[15] If the default mouse-tracking condition involves moving the cursor by one screen pixel or dot per reported step, then the CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI as reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves with mouse movement. However, software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI. Current[update] software can change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement from the last stop-point. In most software[specify] this setting is named "speed", referring to "cursor precision". However, some software[specify] names this setting "acceleration", but this term is in fact incorrect. The mouse acceleration, in the majority of mouse software, refers to the setting allowing the user to modify the cursor acceleration: the change in speed of the cursor over time while the mouse movement is constant.

For simple software, when the mouse starts to move, the software will count the number of "counts" received from the mouse and will move the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor will move slowly on the screen, with good precision. When the movement of the mouse passes the value set for the "threshold", the software will start to move the cursor more quickly, with a greater rate factor. Usually, the user can set the value of the second rate factor by changing the "acceleration" setting.
Operating systems sometimes apply acceleration, referred to as "ballistics", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response.[38]
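A minimal sketch of this style of threshold-based acceleration is shown below; the specific threshold values are illustrative defaults, not Microsoft's actual settings.

```c
#include <stdio.h>
#include <stdlib.h>

/* Classic threshold-based pointer "ballistics": mouse deltas above a
 * first threshold are doubled, and above a second threshold doubled
 * again.  Applied to each axis independently, which makes the overall
 * response noticeably nonlinear. */
static int accelerate(int delta, int threshold1, int threshold2)
{
    int magnitude = abs(delta);
    int scaled = delta;

    if (magnitude > threshold1)
        scaled *= 2;
    if (threshold2 > 0 && magnitude > threshold2)
        scaled *= 2;
    return scaled;
}

int main(void)
{
    const int t1 = 6, t2 = 10;   /* assumed thresholds, in mouse counts */
    const int deltas[] = { 2, 5, 7, 12, -3, -15 };

    for (int i = 0; i < (int)(sizeof deltas / sizeof deltas[0]); i++)
        printf("raw %3d -> cursor %4d\n",
               deltas[i], accelerate(deltas[i], t1, t2));
    return 0;
}
```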
[edit] Mousepads
Main article: Mousepad
Engelbart's original mouse did not require a mousepad;[39] the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice, starting with the steel-roller ball mouse, have required a mousepad for optimal performance.

The mousepad, the most common mouse accessory, is used chiefly in conjunction with mechanical mice, because in order to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist.
Most optical and laser mice do not require a pad. Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or laser tracking, for example, a transparent or reflective surface.
[edit] In the marketplace
Around 1981 Xerox included mice with its Xerox Star, based on the mouse used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use.[40]

The Macintosh design, commercially successful and technically influential, led many other vendors to begin producing mice or including them with their other computer products (by 1986, Atari ST, Commodore Amiga, Windows 1.0, GEOS for the Commodore 64, and the Apple IIGS).[41] The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers.
In November 2008, Logitech built their billionth mouse.[42]
[edit] Use in gaming
Mice often function as an interface for PC-based computer games and sometimes for video game consoles.

[edit] First-person shooters
Due to the cursor-like nature of the crosshairs in first-person shooters (FPS), a combination of mouse and keyboard provides a popular way to play FPS games. Players use the X-axis of the mouse for looking (or turning) left and right, leaving the Y-axis for looking up and down. Many gamers prefer this over a gamepad or joypad, primarily in FPS games, because it provides a higher resolution for input, letting them make small, precise motions in the game more easily. The left button usually controls primary fire. If the game supports multiple fire modes, the right button often provides secondary fire from the selected weapon. The right button may also provide bonus options for a particular weapon, such as allowing access to the scope of a sniper rifle or allowing the mounting of a bayonet or silencer.

Gamers can use a scroll wheel for changing weapons or for controlling scope-zoom magnification. On most FPS games, programming may also assign more functions to additional buttons on mice with more than three controls. A keyboard usually controls movement (for example, WASD for moving forward, left, backward and right, respectively) and other functions such as changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag (latency) will give a player an advantage over players with less accurate or slower mice.
An early player technique, circle strafing, involves continuously strafing while aiming and shooting at an opponent by walking in a circle around the opponent, keeping the opponent at the center of the circle. Players achieve this by holding down a key for strafing while continuously aiming the mouse towards the opponent.
Games using mice for input have such a degree of popularity that many manufacturers, such as Logitech, Cyber Snipa, Razer USA Ltd and SteelSeries, make peripherals such as mice and keyboards specifically for gaming. Such mice may feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shape, and other features such as adjustable CPI.
Many games, such as first- or third-person shooters, have a setting named "invert mouse" or similar (not to be confused with "button inversion", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control-configuration.
After id Software's Doom, the game that popularized FPS games but which did not support vertical aiming with a mouse (the y-axis served for forward/backward movement), competitor 3D Realms' Duke Nukem 3D became one of the first games that supported using the mouse to aim up and down. This and other games using the Build engine had an option to invert the Y-axis. The "invert" feature actually made the mouse behave in a manner that users now[update] regard as non-inverted (by default, moving mouse forward resulted in looking down). Soon after, id Software released Quake, which introduced the invert feature as users now[update] know it. Other games using the Quake engine have come on the market following this standard, likely due to the overall popularity of Quake.
[edit] Home consoles
In 1988, the VTech Socrates educational video game system featured a wireless mouse with an attached mouse pad as an optional controller used for some games. In the early 1990s the Super Nintendo Entertainment System video game system featured a mouse in addition to its controllers. The Mario Paint game in particular used the mouse's capabilities, as did its successor on the Nintendo 64. Sega released official mice for their Genesis/Mega Drive, Saturn and Dreamcast consoles. NEC sold official mice for its PC Engine and PC-FX consoles. Sony Computer Entertainment released an official mouse product for the PlayStation console, and included one along with the Linux for PlayStation 2 kit; however, users can attach virtually any USB mouse to the PlayStation 2 console. The PlayStation 3 also fully supports USB mice, and the Wii later gained similar support through a software update.

[edit] See also
[edit] Notes
- ^ Oxford English Dictionary, "mouse", sense 13
- ^ "Definition for Mouse". 2011. http://oxforddictionaries.com/definition/mouse. Retrieved 2011-07-06.
- ^ Ferranti-Packard: Pioneers in Canadian Electrical Manufacturing, Norman R. Ball, John N. Vardalas, McGill-Queen's Press, 1993
- ^ The computer mouse turns 40. Retrieved 16 April 2009.
- ^ ""Mouses" vs "mice"". alt.usage.english fast-access FAQ. http://alt-usage-english.org/excerpts/fxmouses.html. Retrieved 2006-06-11.
- ^ Maggie, Shiels (2008-07-17). "Say goodbye to the computer mouse". BBC News. http://news.bbc.co.uk/1/hi/technology/7508842.stm. Retrieved 2008-07-17.
- ^ "Evolving Collective Intelligence" by Engelbart, Landau and Clegg
- ^ "Retrieved 31 December 2006". Invent.org. 1925-01-30. http://www.invent.org/hall_of_fame/53.html. Retrieved 2010-05-29.
- ^ Retrieved 31 December 2006[dead link]
- ^ http://www.heise.de/newsticker/Auf-den-Spuren-der-deutschen-Computermaus--/meldung/136901
- ^ John C. Dvorak, San Francisco Examiner, 19 February 1984
- ^ "25 Years of Macintosh". AAPLinvestors. http://aaplinvestors.net/2009/01/10/25-years-of-macintosh/. Retrieved 2010-05-29.
- ^ "Doug Engelbart: Father of the Mouse (interview)". http://www.superkids.com/aweb/pages/features/mouse/mouse.html. Retrieved 2007-09-08.
- ^ . Byte. pp. 58–68.
- ^ a b "The Xerox Mouse Commercialized". Making the Macintosh: Technology and Culture in Silicon Valley. http://library.stanford.edu/mac/primary/images/hawley1.html.
- ^ "Hawley Mark II X063X Mouses". oldmouse.com. http://www.oldmouse.com/mouse/hawley/.
- ^ "Honeywell mechanical mouse". Archived from the original on 2007-04-28. http://web.archive.org/web/20070428032201/http://www.bergen.org/AAST/Projects/Engineering_Graphics/_EG2001/mouse/improvements.html#honeywell. Retrieved 2007-01-31.
- ^ "Honeywell mouse patent". http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=4628755.PN.&OS=PN/4628755&RS=PN/4628755. Retrieved 2007-09-11.
- ^ "Keytronic 2HW73-1ES Mouse". http://www.keytronic.com/home/products/specs/2hw73-1es.htm. Retrieved 2007-01-31.
- ^ "Retrieved 31 December 2006". News.softpedia.com. 1970-11-17. http://news.softpedia.com/news/Of-Mice-and-Men-and-PCs-43129.shtml. Retrieved 2010-05-29.
- ^ "Inventions, computer mouse – the CNN site". Archived from the original on April 24, 2005. http://web.archive.org/web/20050424150438/http://www.cnn.com/SPECIALS/2004/explorers/interactive/profiles/computer.mouse/content.html. Retrieved 2006-12-31.
- ^ a b "Computer mouse inventor dies in Vaud". World Radio Switzerland. 2009-10-14. http://worldradio.ch/wrs/news/wrsnews/computer-mouse-inventor-dies-in-vaud.shtml?16283. Retrieved 2009-10-28.
- ^ Denise Caruso (May 14, 1984). "People". InfoWorld (InfoWorld Media Group, Inc.) 6 (20): 16. ISSN 0199-6649. http://books.google.com/?id=sy4EAAAAMBAJ&pg=PA16&dq=optical-mouse+kirsch+microprocessor&cd=1#v=onepage&q=optical-mouse%20kirsch%20microprocessor.
- ^ Fresh Patents – Highly Sensitive Inertial Mouse. Retrieved 31 December 2006.
- ^ Doug A. Bowman, Ernst Kruijff and Ivan Poupyrev (2005). 3D user interfaces. Addison-Wesley. p. 111. ISBN 9780201758672. http://books.google.com/?id=It5QAAAAMAAJ&q=bat+3d-mouse&dq=bat+3d-mouse.
- ^ Stephen F. Krar and Arthur Gill (2003). Exploring advanced manufacturing technologies. Industrial Press Inc. pp. 8–6–4. ISBN 9780831131500. http://books.google.com/?id=TGkfsC77pdwC&pg=PT247&dq=flying-mouse+3d+wand.
- ^ "Retrieved 31 December 2006". Byte.com. http://www.byte.com/art/9602/sec17/art6.htm. Retrieved 2010-05-29.
- ^ "Space Ball". Vrlogic.com. http://www.vrlogic.com/html/3dconnexion/space_ball.html. Retrieved 2010-05-29.
- ^ "axsotic". axsotic.com. http://www.axsotic.com. Retrieved 2011-02-09.
- ^ Computer based platform for tactile actuator analysis. Actuator'06, Bremen. 14–16 June 2006
- ^ FreeDOS-32 – Serial Mouse driver[dead link]
- ^ Computer Engineering Tips – PS/2 Mouse Interface
- ^ Retrieved 31 December 2006[dead link]
- ^ "Retrieved 31 December 2006". Win.tue.nl. http://www.win.tue.nl/~aeb/linux/kbd/scancodes-13.html. Retrieved 2010-05-29.
- ^ Jon Gan (November 2007). "USB: A Technological Success Story". HWM (SPH Magazines): 114. ISSN 0219-5607. http://books.google.com/?id=MesDAAAAMBAJ&pg=RA1-PA49&dq=imac+usb+1998&cd=14#v=onepage&q=imac%20usb%201998.
- ^ Lisa Johnston. What Is a Nano Wireless Receiver? About.com. Accessed 2010-09-03
- ^ "Design and implementation of the double mouse system for a Window environment". IEEE Xplore. http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/7568/20620/00953558.pdf. Retrieved 2010-05-29.
- ^ "Pointer ballistics for Windows XP". Windows Hardware Developer Center Archive. Microsoft Corporation. 2002. http://www.microsoft.com/whdc/archive/pointer-bal.mspx. Retrieved April 29, 2010.
- ^ Eric "Unit24" Guy. "Corepad Victory & Deskpad XXXL". http://www.gruntville.com/reviews/mousepads/corepad_roundup/index.php. Retrieved 2007-10-03.
- ^ Andrew Chan (Nov. 2004). "The Macintosh Phenomenon: Celebrating Twenty Years of the World's Most Adored Desktop Computers". HWM: 74–77. http://books.google.com/books?id=o-oDAAAAMBAJ&pg=PA76.
- ^ Stephen A. Booth (Jan. 1987). "Colorful New Apple". Popular Mechanics 164 (1): 16. ISSN 0032-4558. http://books.google.com/books?id=GOMDAAAAMBAJ&pg=PA16.
- ^ Shiels, Maggie (2008-12-03). "Logitech's billionth mouse". BBC News. http://news.bbc.co.uk/1/low/technology/7751627.stm. Retrieved 2010-05-29.
[edit] References
- Agilent Technologies (2004). ADNS-2610 Optical Mouse Sensor. (pdf format) Retrieved 2004-11-16.
- Squeak Wiki (16 March 2004). FAQ: Mouse Buttons. Revision 24. Retrieved 2004-11-17.
- Inertial mouse system, United States Patent 4787051
[edit] External links
![]() | Wikimedia Commons has media related to: Computer mouse |
- Timeline of Mouse History (Macworld)
- Interview with Doug Engelbart on 40th Anniversary of the Mouse
- Primary Material on the Apple Mouse
- Of Mice and Zen: Product Design and Invisible Innovation, by Alex Soojung-Kim Pang (PDF)
- MouseSite including 1968 demonstration
- Father of the Mouse the mouse history page at the Doug Engelbart Institute website
- Mouse Interrupts in DOS
- The PS/2 mouse interface: Detailed description of the data protocol, including the Microsoft Intellimouse wheel-and-five-buttons extensions
- Howstuffworks.com article on how computer mice work
- "First ever computer mouse demo". BBC News. 8 December 2008. http://news.bbc.co.uk/1/hi/technology/7772376.stm.
- CC BY-SA photos of Hawley Mark II Mouse
Computer speaker
From Wikipedia, the free encyclopedia
Computer speakers range widely in quality and in price. The computer speakers typically packaged with computer systems are small, plastic, and have mediocre sound quality. Some computer speakers have equalization features such as bass and treble controls.
The internal amplifiers of powered computer speakers require an external power source, usually an AC adapter. More sophisticated computer speakers can have a subwoofer unit to enhance bass output, and these units usually include the power amplifiers both for the bass speaker and for the small satellite speakers.
Some computer displays have rather basic speakers built in. Laptops come with integrated speakers, but the restricted space available in laptops means these speakers usually produce low-quality sound.
For some users, a lead connecting the computer's sound output to an existing stereo system is practical. This normally yields much better results than small low-cost computer speakers. Computer speakers can also serve as an economy amplifier for MP3 players for those who do not wish to use headphones, although some models of computer speakers have headphone jacks of their own.
Contents[hide] |
[edit] Common features
Features vary by manufacturer, but may include the following:
- An LED power indicator.
- A 3.5 mm headphone jack.
- Controls for volume, and sometimes bass and treble.
- A remote volume control.
[edit] Cost cutting measures and technical compatibility
In order to cut costs, computer speakers (unless designed for premium sound performance) often lack an AM/FM tuner and other built-in sources of audio. However, the male 3.5 mm plug can be jury-rigged with "female 3.5 mm TRS to female stereo RCA" adapters to work with stereo system components such as CD/DVD-Audio/SACD players (although computers have CD-ROM drives of their own with audio CD support), audio cassette players, turntables, etc. Despite being designed for computers, computer speakers are electrically compatible with the aforementioned stereo components. There are even models of computer speakers that have stereo RCA input jacks.
[edit] Major computer speaker companies

The base of a Harman Kardon speaker.
- Altec Lansing
- Bose Corporation
- Creative Labs
- Cyber Acoustics
- Dell
- Edifier
- General Electric
- Harman Kardon
- Hewlett-Packard
- JBL
- Klipsch
- Logitech
Webcam
From Wikipedia, the free encyclopedia

Typical low-cost webcam used with many personal computers
![]() | Wikimedia Commons has media related to: Webcams |
Animated set of x-ray images of a webcam. Images acquired using an industrial computed tomograph.
A webcam is a video camera that feeds its images in real time to a computer or computer network. The most popular use of webcams is the establishment of video links, permitting computers to act as videophones or videoconference stations. This common use as a video camera for the World Wide Web gave the webcam its name. Other popular uses include security surveillance and computer vision.
Webcams are known for their low manufacturing cost and flexibility,[1] making them the lowest cost form of videotelephony. They have also become a source of security and privacy issues, as some built-in webcams can be remotely activated via spyware.
Contents[hide] |
[edit] History
[edit] Early development
The first webcam, developed in 1991, was pointed at the Trojan Room coffee pot in the Cambridge University Computer Science Department. The camera was finally switched off on August 22, 2001, and the final image it captured can still be viewed at its homepage.[2][3] The oldest webcam still operating is FogCam at San Francisco State University, which has been running continuously since 1994.[4]

[edit] Connectix QuickCam
The first known commercial webcam, the QuickCam, entered the marketplace in 1994, created by the U.S. computer hardware and software company Connectix, which later sold its product line to another U.S. company, Logitech, in 1998. The QuickCam was originally the design of Jon Garber, who wanted to call it the 'Mac-camera', but was vetoed by Connectix's marketing department, which saw the possibility of it one day becoming a cross-platform product. It became Connectix's first Microsoft Windows product 14 months later, when QuickCam for Windows was launched in October 1995. The Macintosh QuickCam had shipped earlier, in August 1994, and could only provide 320 x 240 pixel resolution with a grayscale colour depth of 16 shades at 60 frames per second, dropping to 15 frames per second when switched to 256 shades of gray (8-bit).[5]

The QuickCam had earlier started as a graduate-degree research project in the early 1990s between various California and East Coast universities, and was originally designed with an RS-232 serial port connector color CCD camera. Both the Apple and Windows software versions were sponsored by DARPA and the U.S. Department of Veterans Affairs. The Windows software version was compiled under both Microsoft Visual Studio and Borland C/C++ compilers for both Windows 3.11 and Windows 95. Videoconferencing via computers already existed, and at the time client-server based videoconferencing software such as CU-SeeMe had started to become popular.
The initial QuickCam model was available only for the Apple Macintosh, connecting to it via its serial port, and was sold at a cost of $100. In 2010, Time Magazine designated QuickCam as one of the top computer devices of all time.[6]
[edit] Later developments
One of the most widely reported-on webcam sites was JenniCam, created in 1996, which allowed Internet users to observe the life of its namesake constantly, in the same vein as the reality TV series Big Brother, launched four years later.[7] More recently, the website Justin.tv has shown a continuous video and audio stream from a mobile camera mounted on the head of the site's star. Other cameras are mounted overlooking bridges, public squares, and other public places, their output made available on a public web page in accordance with the original concept of a "webcam". Aggregator websites have also been created, providing thousands of live video streams or up-to-date still pictures, allowing users to find live video streams based on location or other criteria.

Around the turn of the 21st century, computer hardware manufacturers began building webcams directly into laptop and desktop screens, thus eliminating the need to use an external USB or Firewire camera. Gradually webcams came to be used more for telecommunication, or videotelephony, between two people, or among a few people, than for offering a view on a Web page to an unknown public.
The term 'webcam' may also be used in its original sense of a video camera connected to the Web continuously for an indefinite time, rather than for a particular session, generally supplying a view for anyone who visits its web page over the Internet. Some of them, for example those used as online traffic cameras, are expensive, rugged professional video cameras.
For less than US$100 (retail), Minoru makes a 3D webcam which produces videos and photos in anaglyph 3D with a resolution of up to 1280x480 pixels. Both sender and receiver of the images must use 3D glasses to see the three-dimensional effect.[8]
[edit] Uses
[edit] Videocalling and conferencing
As webcam capabilities have been added to instant messaging, text chat services such as AOL Instant Messenger, and VoIP services such as Skype, one-to-one live video communication over the Internet has now reached millions of mainstream PC users worldwide. Improved video quality has helped webcams encroach on traditional video conferencing systems. New features such as automatic lighting controls, real-time enhancements (retouching, wrinkle smoothing and vertical stretch), automatic face tracking and autofocus assist users by providing substantial ease-of-use, further increasing the popularity of webcams.

Webcam features and performance can vary by program, computer operating system, and also by the computer's processor capabilities. For example, 'high-quality video' is principally available to users of certain Logitech webcams if their computers have dual-core processors meeting certain specifications. Video calling support has also been added to several popular instant messaging programs.
[edit] Video security
Webcams are also used as security cameras. Software is available to allow PC-connected cameras to watch for movement and sound, recording both when they are detected; these recordings can then be saved to the computer, e-mailed or uploaded to the Internet. In one well-publicised case,[9] a stolen computer e-mailed out images of the burglar who stole it, allowing the owner to give police a clear picture of the burglar's face even after the computer had been stolen.

[edit] Video clips and stills
Webcams can be used to take video clips and still pictures. Various software tools in wide use can be employed for this, such as PicMaster (for use with Windows operating systems), Photo Booth (Mac), or Cheese (with Unix systems).

[edit] Input control devices
Special software can use the video stream from a webcam to assist or enhance a user's control of applications and games. Video features, including faces, shapes, models and colors, can be observed and tracked to produce a corresponding form of control. For example, the position of a single light source can be tracked and used to emulate a mouse pointer; a head-mounted light would allow hands-free computing and would greatly improve computer accessibility. This can also be applied to games, providing additional control, improved interactivity and immersiveness.

FreeTrack is a free webcam motion-tracking application for Microsoft Windows that can track a special head-mounted model in up to six degrees of freedom and output data to mouse, keyboard, joystick and FreeTrack-supported games. By removing the IR filter of the webcam, IR LEDs can be used, which have the advantage of being invisible to the naked eye, removing a distraction from the user. TrackIR is a commercial version of this technology.
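As a rough sketch of the single-light-source tracking idea described above, the following finds the centroid of the brightest pixels in a small grayscale frame and maps it to screen coordinates. The frame data is a made-up stand-in for a webcam capture, and real applications such as FreeTrack use considerably more sophisticated multi-point tracking.

```c
#include <stdio.h>
#include <stdint.h>

#define W 8
#define H 6
#define SCREEN_W 1920
#define SCREEN_H 1080

/* Find the centroid of all pixels above a brightness threshold and map
 * it into screen coordinates -- a crude way to let a single bright light
 * (e.g. a head-mounted LED seen by a webcam) steer the cursor. */
static int track_bright_spot(uint8_t frame[H][W], int *sx, int *sy)
{
    const uint8_t threshold = 200;
    long sum_x = 0, sum_y = 0, count = 0;

    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (frame[y][x] >= threshold) {
                sum_x += x;
                sum_y += y;
                count++;
            }
    if (count == 0)
        return 0;                     /* no light source visible */
    *sx = (int)(sum_x * SCREEN_W / (count * (W - 1)));
    *sy = (int)(sum_y * SCREEN_H / (count * (H - 1)));
    return 1;
}

int main(void)
{
    uint8_t frame[H][W] = { 0 };      /* stand-in for a captured frame */
    frame[2][5] = 255;                /* a small bright blob           */
    frame[2][6] = 230;
    frame[3][5] = 220;

    int sx, sy;
    if (track_bright_spot(frame, &sx, &sy))
        printf("move cursor to (%d, %d)\n", sx, sy);
    return 0;
}
```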
The EyeToy for the PlayStation 2 (The updated PlayStation 3 equivalent is the PlayStation Eye) and similarly the Xbox Live Vision Camera and the Kinect AKA 'Project Natal' for the Xbox 360 and Xbox Live are color digital cameras that have been used as control input devices by some games.
Small webcam-based PC games are available as either standalone executables or inside web browser windows using Adobe Flash.
[edit] Technology
Image sensors can be CMOS or CCD, the former being dominant for low-cost cameras, although CCD cameras do not necessarily outperform CMOS-based cameras in the low-price range. Most consumer webcams can provide VGA-resolution video at a frame rate of 30 frames per second. Many newer devices can produce video in multi-megapixel resolutions, and a few can run at high frame rates; the PlayStation Eye, for example, can produce 320×240 video at 120 frames per second.
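In practice an application requests a resolution and frame rate from the driver and then checks what was actually granted, since cameras silently fall back to the modes they support. A minimal sketch with the OpenCV Python bindings, using illustrative values:

```python
# Ask a webcam for a specific resolution and frame rate, then read back
# what the driver actually granted.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

print("width :", cap.get(cv2.CAP_PROP_FRAME_WIDTH))
print("height:", cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print("fps   :", cap.get(cv2.CAP_PROP_FPS))
cap.release()
```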
Support electronics are present to read the image from the sensor and transmit it to the host computer. The camera pictured to the right, for example, uses a Sonix SN9C101 to transmit its image over USB. Some cameras, such as mobile phone cameras, use a CMOS sensor with supporting electronics "on die", i.e. the sensor and the support electronics are built on a single silicon chip to save space and manufacturing costs. Most webcams feature built-in microphones to make video calling and videoconferencing more convenient.
The USB video device class (UVC) specification allows for interconnectivity of webcams to computers even without proprietary drivers installed. Microsoft Windows XP SP2, Linux[10] and Mac OS X (since October 2005) have UVC drivers built in and do not require extra drivers, although they are often installed in order to add additional features.
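Because UVC support is built into these operating systems, a UVC webcam is usable as soon as the kernel registers it. On a Linux system (an assumption for this sketch), the registered video capture devices, including UVC webcams, can be listed without any vendor software:

```python
# List the video capture devices the Linux kernel has registered
# (UVC webcams appear here without any extra vendor driver installed).
from pathlib import Path

for name_file in sorted(Path("/sys/class/video4linux").glob("*/name")):
    device = "/dev/" + name_file.parent.name           # e.g. /dev/video0
    print(device, "-", name_file.read_text().strip())  # human-readable camera name
```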
[edit] Privacy
Many users do not wish for the continuous exposure for which webcams were originally intended, but rather prefer privacy. Such privacy is lost when Trojan horse programs allow malicious hackers to activate the webcam without the user's knowledge, providing the hackers with a live video and audio feed.[citation needed] Cameras such as Apple's older external iSight cameras include lens covers to thwart this. Some webcams have built-in hardwired LED indicators that light up whenever the camera is active; it is not clear whether these indicators can be circumvented when webcams are surreptitiously activated by spyware.

In mid-January 2005, some search engine queries were published in an online forum[11] which allowed anyone to find thousands of Panasonic- and Axis-made high-end web cameras, provided that they had a web-based interface for remote viewing. Many such cameras run on their default configuration, which requires no password login or IP address verification, making them visible to anyone.
Some laptop computers have built-in webcams, which present both privacy and security issues, as such cameras cannot normally be physically disabled if hijacked by a Trojan horse program or other similar spyware. In the 2010 Robbins v. Lower Merion School District "WebcamGate" case, plaintiffs charged that two suburban Philadelphia high schools secretly spied on students—by surreptitiously and remotely activating iSight webcams embedded in school-issued MacBook laptops the students were using at home—and thereby infringed on their privacy rights. School authorities admitted to secretly snapping over 66,000 photographs, including shots of students in the privacy of their bedrooms, some showing teenagers in various states of undress.[12][13] The school board involved quickly disabled its laptop spyware program after parents filed lawsuits against the board and various individuals.[14][15]
[edit] Effects on modern society
Webcams allow for inexpensive, real-time video chat and webcasting, in both amateur and professional pursuits. They are frequently used in online dating. YouTube is a popular website hosting many videos made using webcams. News websites such as the BBC can also produce professional live news videos.[16]

Webcams can also encourage telecommuting, where people can work from home utilizing the Internet, rather than having to travel to their office.
Webcam use can also have negative consequences. On March 23, 2007, a man named Kevin Whitrick committed suicide live on the Internet on a chat room website.[17]
[edit] Sign language communications via webcam
- Main articles: Video Relay Service, a telecommunication service for deaf, hard-of-hearing and speech-impaired (mute) individuals communicating with hearing persons at a different location, and Video Remote Interpreting, used where deaf/hard-of-hearing/mute persons are in the same location as their hearing parties
[edit] 21st century improvements
Significant improvements in video call quality of service for the deaf occurred in the United States in 2003 when Sorenson Media Inc. (formerly Sorenson Vision Inc.), a video compression software coding company, developed its VP-100 model stand-alone videophone specifically for the deaf community. It was designed to output its video to the user's television in order to lower the cost of acquisition, and to offer remote control and a powerful video compression codec for unequaled video quality and ease of use with video relay services. Favourable reviews quickly led to its popular usage at educational facilities for the deaf, and from there to the greater deaf community.[20]

Coupled with similar high-quality videophones introduced by other electronics manufacturers, the availability of high-speed Internet, and sponsored video relay services authorized by the U.S. Federal Communications Commission in 2002, VRS services for the deaf underwent rapid growth in that country.[20]
[edit] Present day usage
Using such video equipment, deaf, hard-of-hearing and speech-impaired people can today communicate among themselves and with hearing individuals using sign language. The United States and several other countries compensate companies to provide 'Video Relay Services' (VRS). Telecommunication equipment can be used to talk to others via a sign language interpreter, who uses a conventional telephone at the same time to communicate with the deaf person's party. Video equipment is also used for on-site sign language interpretation via Video Remote Interpreting (VRI). The relatively low cost and widespread availability of 3G mobile phone technology with video calling capabilities have given deaf and speech-impaired users a greater ability to communicate with the same ease as others. Some wireless operators have even started free sign language gateways.

Sign language interpretation services via VRS or VRI are useful where one of the parties is deaf, hard-of-hearing or speech-impaired (mute). In such cases the interpretation flow is normally within the same principal language, such as French Sign Language (LSF) to spoken French, Spanish Sign Language (LSE) to spoken Spanish, British Sign Language (BSL) to spoken English, and American Sign Language (ASL) also to spoken English (since BSL and ASL are completely distinct from each other), and so on.
Multilingual sign language interpreters, who can also translate across principal languages (such as to and from SSL, to and from spoken English), are available as well, albeit less frequently. Such activities involve considerable effort on the part of the translator, since sign languages are distinct natural languages with their own construction, semantics and syntax, different from the aural version of the same principal language.
With video interpreting, sign language interpreters work remotely with live video and audio feeds, so that the interpreter can see the deaf or mute party, and converse with the hearing party, and vice versa. Much like telephone interpreting, video interpreting can be used for situations in which no on-site interpreters are available. However, video interpreting cannot be used for situations in which all parties are speaking via telephone alone. VRS and VRI interpretation requires all parties to have the necessary equipment. Some advanced equipment enables interpreters to control the video camera remotely, in order to zoom in and out or to point the camera toward the party that is signing.
- Further information: Language interpretation – Sign language
[edit] Videotelephony terminology
Videophone calls (also called videocalls or video chat)[21] differ from videoconferencing in that they expect to serve individuals, not groups. However, that distinction has become increasingly blurred with technology improvements such as increased bandwidth and sophisticated software clients that allow for multiple parties on a call. In general everyday usage the term videoconferencing is now frequently used instead of videocall for point-to-point calls between two units. Both videophone calls and videoconferencing are also now commonly referred to as a video link.

Webcams are popular, relatively low-cost devices which can provide live video and audio streams via personal computers, and can be used with many software clients for both video calls and videoconferencing.[22]
A videoconference system is generally more expensive than a videophone and offers greater capabilities. A videoconference (also known as a videoteleconference) allows two or more locations to communicate via live, simultaneous two-way video and audio transmissions. This is often accomplished by the use of a multipoint control unit (a centralized distribution and call management system) or by a similar non-centralized multipoint capability embedded in each videoconferencing unit. Again, technology improvements have circumvented traditional definitions by allowing multiple-party videoconferencing via web-based applications.[23][24] A separate article is devoted to videoconferencing.
A telepresence system is a high-end videoconferencing system and service usually employed by enterprise-level corporate offices. Telepresence conference rooms use state-of-the-art room designs, video cameras, displays, sound systems and processors, coupled with high-to-very-high-capacity bandwidth transmissions.
Typical uses of the various technologies described above include videocalling or videoconferencing on a one-to-one, one-to-many or many-to-many basis for personal, business, educational, deaf Video Relay Service and tele-medical, diagnostic and rehabilitative use or services. New services utilizing videocalling and videoconferencing, such as personal videocalls to inmates incarcerated in penitentiaries, and videoconferencing to resolve airline engineering issues at maintenance facilities, are being created or are evolving on an ongoing basis.
[edit] See also
- List of webcams for use on personal computers
- Robbins v. Lower Merion School District: U.S. legal action by parents against a high school board for invasion of privacy
- Telepresence
- Video camera
- Video Relay Service: for deaf and hard-of-hearing users
- Videoconferencing
- Videophone
- Videotelephony
[edit] References
- ^ Solomon Negash, Michael E. Whitman. Editors: Solomon Negash, Michael E. Whitman, Amy B. Woszczynski, Ken Hoganson, Herbert Mattord. Handbook of Distance Learning for Real-Time and Asynchronous Information Technology Education, Idea Group Inc (IGI), 2008, p. 17, ISBN 1599049643, ISBN 9781599049649. Note costing: "students had the option to install a webcam on their end (a basic webcam costs about $40.00) to view the class in session."
- ^ CoffeeCam, University of Cambridge.
- ^ Spiegel CoffeeCam
- ^ "Happy Birthday Fogcam" by Anjuli Elais in Golden Gate XPress, September 30, 2004
- ^ Edwards, Benj. History of Video Calls: From Fantasy to Flops to Facetime, PC World Magazine, June 17, 2010.
- ^ Ha, Peter. Computing: Connectix QuickCam, Time Magazine, October 25, 2010.
- ^ "Plug pulled on live website seen by millions" by Oliver Burkeman in The Guardian, January 3, 2004
- ^ http://crave.cnet.co.uk/gadgets/3d-photos-minoru-3d-webcam-hands-on-49303012/
- ^ Serial burglar caught on webcam BBC News, February 16, 2005, retrieved January 3, 2006.
- ^ Linux 2 6 26 – Linux Kernel Newbies
- ^ "Google exposes web surveillance cams" by Kevin Poulsen, The Register, January 8, 2005, retrieved September 5, 2006
- ^ Doug Stanglin (February 18, 2010). "School district accused of spying on kids via laptop webcams". USA Today. http://content.usatoday.com/communities/ondeadline/post/2010/02/school-district-accused-of-issuing-webcam-laptops-to-spy-on-students/1. Retrieved February 19, 2010.
- ^ "Initial LANrev System Findings", LMSD Redacted Forensic Analysis, L-3 Services – prepared for Ballard Spahr (LMSD's counsel), May 2010. Retrieved August 15, 2010.
- ^ Holmes, Kristin E. (August 31, 2010). "Lower Merion School District ordered to pay plaintiff's lawyer $260,000". Philadelphia Inquirer. http://www.philly.com/inquirer/local/pa/20100831_Lower_Merion_School_District_ordered_to_pay_plaintiff_s_lawyer__260_000.html. Retrieved September 20, 2010.
- ^ ".". Main Line Media News. September 18, 2010. http://mainlinemedianews.com/articles/2010/08/31/main_line_times/news/doc4c7cfdad3e059461146296.txt. Retrieved September 20, 2010.
- ^ Live radio studio webcams
- ^ Kevin Whitrick commits suicide while broadcasting video
- ^ Bell Laboratories RECORD (1969) A collection of several articles on the AT&T Picturephone (then about to be released) Bell Laboratories, Pg.134–153 & 160–187, Volume 47, No. 5, May/June 1969;
- ^ a b New Scientist. Telephones Come To Terms With Sign Language, New Scientist, 19 August 1989, Vol.123, Iss.No.1678, pp.31.
- ^ a b Fitzgerald, Thomas J. For the Deaf, Communication Without the Wait, The New York Times, December 18, 2003.
- ^ PC Magazine. Definition: Video Calling, PC Magazine website. Retrieved 19 August 2010,
- ^ Solomon Negash, Michael E. Whitman. Editors: Solomon Negash, Michael E. Whitman, Amy B. Woszczynski, Ken Hoganson, Herbert Mattord. Handbook of Distance Learning for Real-Time and Asynchronous Information Technology Education, Idea Group Inc (IGI), 2008, pg. 17, ISBN 1-59904-964-3, ISBN 978-1-59904-964-9. Note costing: "....students had the option to install a webcam on their end (a basic webcam costs about $40.00) to view the class in session."
- ^ Lawson, Stephen. Vidyo Packages Conferencing For Campuses, IDG News Service, February 16, 2010. Retrieved via Computerworld.com's website, February 18, 2010
- ^ Jackman, Elizabeth. New Video Conferencing System Streamlines Firefighter Training, Peoria Times, Peoria, AZ, February 19, 2010. Retrieved February 19, 2010;
Computer fan
From Wikipedia, the free encyclopedia

A 3D illustration of four 80 mm fans, a type of fan commonly used in personal computers (sometimes as a set, or mixed with other fan sizes).
[edit] Usage
As processors, graphics cards, RAM and other components in computers have increased in speed and power consumption, the amount of heat produced by these components as a side-effect of normal operation has also increased. These components need to be kept within a specified temperature range to prevent overheating, instability, malfunction and damage leading to a shortened component lifespan.

While in earlier personal computers it was possible to cool most components using natural convection (passive cooling), many modern components require more effective active cooling. To cool these components, fans are used to move heated air away from the components and draw cooler air over them. Fans attached to components are usually used in combination with a heatsink to increase the area of heated surface in contact with the air, thereby improving the efficiency of cooling.
In the IBM compatible PC market, the computer's power supply unit (PSU) almost always uses an exhaust fan to expel warm air from the PSU. Active cooling on CPUs started to appear on the Intel 80486, and by 1997 was standard on all desktop processors.[1] Chassis or case fans, usually one exhaust fan to expel heated air from the rear and optionally an intake fan to draw cooler air in through the front, became common with the arrival of the Pentium 4 in late 2000.[1] A third vent fan in the side of the PC, often located over the CPU, is also common. The graphics processing unit (GPU) on many modern graphics cards also requires a heatsink and fan. In some cases, the northbridge chip on the motherboard has another fan and heatsink. Other components such as the hard drives and RAM may also be actively cooled, though as of 2007 this remains relatively unusual. It is not uncommon to find five or more fans in a modern PC.
[edit] Cooling fan applications
[edit] Case mount
Used to aerate the case of the computer. The components inside the case cannot dissipate heat efficiently if the surrounding air is too hot. Case fans move air through the case, usually drawing cooler outside air in through the front (where it may also be drawn over the internal hard drive racks) and expelling it through the rear. There may be a third fan in the side or top of the case to draw outside air into the vicinity of the CPU, which is usually the largest single heat source. Standard case fans are 80 mm, 92 mm or 120 mm in width and length. As case fans are often the most readily visible form of cooling on a PC, decorative fans are widely available and may be lit with LEDs, made of UV-reactive plastic, and covered with decorative grilles. Decorative fans and accessories are popular with case modders. Air filters are often used over intake fans, to prevent dust from entering the case.

A power supply (PSU) fan often plays a double role, not only keeping the PSU itself from overheating, but also removing warm air from inside the case. PSUs with two fans are also available, which typically have a fan on the inside to supply case air into the PSU and a second fan on the back to expel the heated air.
[edit] CPU fan
Used to cool the CPU (central processing unit) heatsink.
- See computer spot cooling.
[edit] Graphics card fan
Used to cool the graphics processing unit or the memory on graphics cards. These fans were not necessary on older cards because of their low power dissipation, but most modern graphics cards, especially those designed for 3D graphics and gaming, need their own dedicated cooling fans. Some of the higher-powered cards can produce more heat than the CPU (up to 289 watts[2]), so effective cooling is especially important. Passive coolers for new video cards, however, are not unheard of, such as the Thermalright HR-03.
[edit] Chipset fan
Used to cool the northbridge of a motherboard's chipset, which may be necessary for system bus overclocking.
[edit] Other types of fans
Other, less commonly encountered fans may include:
- PCI slot fan: A fan mounted in one of the PCI slots, usually to supply additional cooling to the PCI and/or graphics cards.
- Hard disk fan: A fan mounted next to or on a hard disk drive. This may be desirable on faster-spinning (e.g. 10,000 RPM) hard disks with greater heat production.
- CD burner fan: Some internal CD and/or DVD burners included cooling fans.
[edit] Aesthetics
Many gamers, case modders, and enthusiasts utilize fans that are illuminated with colored LED lights to enhance the computer's visual appeal.
[edit] Physical characteristics
The width and height of these usually square fans are measured in millimeters; common sizes include 60 mm, 80 mm, 92 mm and 120 mm. Fans with a round frame are also available; these are usually designed so that a larger fan can be used than the mounting holes would otherwise allow (for example, a 120 mm fan with 90 mm mounting holes). The amount of airflow which fans generate is typically measured in cubic feet per minute (CFM), and the speed of rotation is measured in revolutions per minute (RPM). Computer enthusiasts often choose fans which have a higher CFM rating but produce less noise (measured in decibels, dB), and some fans come with an adjustable RPM setting to produce less noise when the computer does not require additional airflow. Fan speed may be controlled manually (with a simple potentiometer, for example), thermally, or by the computer's hardware or software. It is also possible to run many 12 V fans from the 5 V supply, at the expense of airflow, but with reduced noise levels.

The other consideration when choosing a computer fan is static pressure. A fan with high static pressure is more effective at forcing air through restricted spaces, such as the gaps between the blades of a radiator or heatsink; enthusiasts therefore often prioritize static pressure over CFM when choosing a fan for use with a heatsink. The relative importance of static pressure depends on the degree to which the airflow is restricted by geometry (static pressure becomes more important as the spacing between heatsink blades decreases). Static pressure is usually measured in either mm Hg or mm H2O.
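As a rough illustration of the airflow penalty of undervolting mentioned above, the sketch below applies the common approximations that a DC fan's speed scales roughly with supply voltage and that airflow scales roughly with speed. The rated figures are invented for illustration, and many real 12 V fans will not start at all at 5 V.

```python
# Rough estimate of what undervolting a 12 V case fan does to speed and airflow,
# using the crude approximations: speed ~ voltage, airflow ~ speed.
RATED_VOLTAGE = 12.0   # volts
RATED_RPM = 1600.0     # rated fan speed (illustrative figure)
RATED_CFM = 50.0       # rated airflow (illustrative figure)

for volts in (12.0, 7.0, 5.0):
    speed_ratio = volts / RATED_VOLTAGE   # linear approximation, ignores stall behaviour
    rpm = RATED_RPM * speed_ratio
    cfm = RATED_CFM * speed_ratio         # affinity approximation: airflow scales with speed
    print(f"{volts:4.1f} V -> ~{rpm:4.0f} RPM, ~{cfm:4.1f} CFM")
```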
The type of bearing used in a fan can affect its performance and noise output. Most computer fans use one of the following bearing types:
- Sleeve bearings use two surfaces lubricated with oil or grease as a friction contact. Sleeve bearings are less durable as the contact surfaces can become rough and/or the lubricant dry up, eventually leading to failure. Sleeve bearings may be more likely to fail at higher temperatures, and may perform poorly when mounted in any orientation other than vertical. The lifespan of a sleeve bearing fan may be around 40,000 hours at 50 °C. Fans that use sleeve bearings are generally cheaper than fans that use ball bearings, and are quieter at lower speeds early in their life, but can grow noisier as they age.[3]
- Rifle bearings are similar to sleeve bearings, but are quieter and have almost as long a lifespan as ball bearings. The bearing has a spiral groove in it that pumps fluid from a reservoir. This allows them to be safely mounted horizontally (unlike sleeve bearings), since the fluid being pumped lubricates the top of the shaft.[4] The pumping also ensures sufficient lubricant on the shaft, reducing noise and increasing lifespan.
- Ball bearings: Though generally more expensive, ball bearing fans do not suffer the same orientation limitations as sleeve bearing fans, are more durable especially at higher temperatures, and are quieter than sleeve bearing fans at higher rotation speeds. The lifespan of a ball bearing fan may be around 63,000 hours at 50 °C.[3]
- Fluid bearings have the advantages of near-silent operation and high life expectancy (comparable to ball bearings). However, fans using fluid bearings tend to be the most expensive. The enter bearing fan is a variation of the fluid bearing fan, developed by EverFlow Technology Corporation.[5]
- Magnetic bearings or maglev bearings, in which the fan is repelled from the bearing by magnetism.
[edit] Fan sizing
Fans are available in a wide variety of sizes and capacities, the most common being 120 mm for gaming and other high-performance computers and 80 mm for smaller, everyday machines, although sizes vary. In general, the faster a fan spins, the more noise it produces. Within a given physical size, capacity is roughly proportional to current draw, and for a given airflow a larger fan will be quieter than a smaller one.
[edit] Fan connector
The standard connectors for computer fans are:
- 3-pin Molex connector KK Family
- This connector is used when connecting a fan to the motherboard or other circuit board. It is a small thick rectangular in-line female connector with two tabs on the outer-most edge of one long side. The size and spacing of the pin sockets is identical to a standard 3-pin female IC connector. The three pins are used for ground, +12 V power, and a tachometer signal. Molex Part number of receptacle is 22-01-3037. Molex Part number of individual crimp contacts is 08-55-0101.
- 4-pin Molex connector KK Family
- This is a special variant of the Molex KK connector with four pins but with the locking/polarisation features of a 3-pin connector. The additional pin is used for a pulse-width modulation (PWM) signal to provide variable speed control (see the sketch after this list).[6] These can be plugged into 3-pin headers, but will lose their fan speed control. Molex part number of the receptacle is 22-01-3047; Molex part number of the individual crimp contacts is 08-55-0101.
- 4-pin Molex connector
- This connector is used when connecting the fan directly to the power supply. It consists of two wires (red/12V and black/ground) leading to and splicing into a large in-line 4-pin male-to-female Molex connector.
- Dell, Inc. proprietary
- This connector is an expansion of a simple 3-pin female IC connector by adding two tabs to the middle of the connector on one side and a lock-tab on the other side. The size and spacing of the pin sockets is identical to a standard 3-pin female IC connector and 3-pin Molex connector. Some models have the wiring of the white wire (speed sensor) in the middle, whereas the standard 3-pin Molex requires the white wire as pin #3, thus compatibility issues may exist.
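The PWM signal on the fourth pin of a 4-pin fan connector is nominally a square wave of roughly 25 kHz whose duty cycle sets the fan speed. The sketch below drives such a signal from a Raspberry Pi GPIO pin using the RPi.GPIO library; the pin choice and wiring are assumptions for illustration, software PWM at 25 kHz is jittery, and a hardware PWM channel or a dedicated fan controller is preferable in practice.

```python
# Drive the PWM control wire of a 4-pin fan from a Raspberry Pi GPIO pin.
# Assumed wiring for illustration: fan PWM wire on BCM pin 18, fan powered
# from a separate 12 V supply sharing a common ground with the Pi.
import time
import RPi.GPIO as GPIO

PWM_PIN = 18          # BCM numbering (assumed wiring)
PWM_FREQ_HZ = 25000   # 4-pin fans nominally expect a ~25 kHz control signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(PWM_PIN, GPIO.OUT)
pwm = GPIO.PWM(PWM_PIN, PWM_FREQ_HZ)
pwm.start(30)                      # start at 30% duty cycle (slow and quiet)

try:
    for duty in (30, 60, 100):     # step the fan speed up
        pwm.ChangeDutyCycle(duty)
        time.sleep(5)
finally:
    pwm.stop()
    GPIO.cleanup()
```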
[edit] Alternatives
If a fan is not desirable because of noise, reliability, or environmental concerns, there are some alternatives:
- Very rarely, machines such as ultra-silent home theatre PCs can rely on passive cooling alone and do not require a case fan to keep computer components at ordinary operating temperatures. More commonly (as in simple business and home machines), a power supply fan alone is sufficient to cool the machine.[7]
- Undervolting and/or underclocking to reduce power dissipation
- Larger heatsinks (for example, some motherboards have northbridge fans; others have larger, more costly heatsinks)
- Natural convection cooling: carefully designed, correctly oriented, and sufficiently large CPU coolers can dissipate up to 100 W by natural convection alone
- More unusual solutions, e.g. heatpipes bonded to the metal case, water cooling, or refrigeration
- Motherboards submerged in liquid oil receive excellent convection cooling and are protected from humidity and water without the need for heatsinks or fans. Special care must be taken to ensure compatibility with adhesives and sealants used on the motherboard and ICs. This solution is used in some external environments, such as wireless equipment located in the wild.[citation needed]
- Ionic wind cooling is being researched, whereby air is moved by ionizing it between two electrodes. This replaces the fan and has the advantage of having no moving parts. [1] and [2]