Saturday, October 17, 2009

Internet

The Internet is a global system of interconnected computer networks that use the standardized Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private and public, academic, business, and government networks of local to global scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies. The Internet carries a vast array of information resources and services, most notably the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail. In addition it supports popular services such as online chat, file transfer and file sharing, gaming, commerce, social networking, publishing, video on demand, and teleconferencing and telecommunications. Voice over Internet Protocol (VoIP) applications allow person-to-person communication via voice and video.

The origins of the Internet reach back to the 1960s, when the United States funded research projects at its military agencies to build robust, fault-tolerant, distributed computer networks. This research, together with a period of civilian funding of a new U.S. backbone by the National Science Foundation, spawned worldwide participation in the development of new networking technologies, led to the commercialization of an international network in the mid-1990s, and resulted in the popularization of countless applications in virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population uses the services of the Internet.

The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system: a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.[1] The term Internet, when referring to this global system, has traditionally been treated as a proper noun and written with an initial capital letter. There is a trend to regard it as a generic term or common noun and thus write it as "the internet", without the capital.
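
The distinction is easy to see in code. The short sketch below is only an illustration added here (the host name is a stand-in): the first half does the Internet's job, resolving a name and locating a TCP endpoint, while the second half uses HTTP, the Web's protocol, to fetch a hypertext document over that infrastructure.

    # Illustrative sketch only: the Internet provides connectivity (DNS, TCP),
    # while the Web (HTTP documents linked by URLs) is one service carried on it.
    import socket
    import urllib.request

    host = "example.com"  # stand-in host name, used purely for illustration

    # Internet layer: resolve the name to an address and TCP port.
    address = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)[0][4]
    print("TCP endpoint:", address)

    # Web layer: HTTP, one of many services that run on top of TCP/IP.
    with urllib.request.urlopen("http://" + host + "/") as response:
        print("HTTP status:", response.status)
        print(response.read(80))  # first bytes of the hypertext document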

The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead.[2][3] ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO. Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

Professor Leonard Kleinrock with one of the first ARPANET Interface Message Processors at UCLA

At the IPTO, Licklider got Lawrence Roberts to start a project to make a network, and Roberts based the technology on the work of Paul Baran,[4] who had written an exhaustive study for the United States Air Force that recommended packet switching (as opposed to circuit switching) to achieve better network robustness and disaster survivability. UCLA professor Leonard Kleinrock had provided the theoretical foundations for packet networks in 1962, and later, in the 1970s, for hierarchical routing, concepts which have been the underpinning of the development towards today's Internet.

After much work, the first two nodes of what would become the ARPANET were interconnected between UCLA's School of Engineering and Applied Science and SRI International (SRI) in Menlo Park, California, on October 29, 1969. The ARPANET was one of the predecessor networks of today's Internet. Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service, referred to in the UK as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976.

A plaque commemorating the birth of the Internet at Stanford University

X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period. Vinton Cerf and Robert Kahn developed the first description of the TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP, written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems. The first TCP/IP-based wide-area network was operational by January 1, 1983, when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a 56 kilobit/second university network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.
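
As a rough sketch of what "hosts speaking TCP/IP" means in practice (this example is added for illustration and is not part of the original article), two programs on one machine can exchange bytes over a TCP connection like this:

    # Minimal, self-contained sketch of two endpoints exchanging data over TCP/IP:
    # an echo server runs on localhost in a background thread, a client connects.
    import socket
    import threading

    def echo_once(server_sock):
        conn, _ = server_sock.accept()      # wait for one client connection
        with conn:
            conn.sendall(conn.recv(1024))   # echo the client's bytes back

    server = socket.create_server(("127.0.0.1", 0))  # OS picks a free port
    port = server.getsockname()[1]
    threading.Thread(target=echo_once, args=(server,), daemon=True).start()

    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"hello over TCP/IP")
        print(client.recv(1024))            # b'hello over TCP/IP'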

The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year, and the link was made in the summer of 1989. Other commercial electronic mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISPs) were created: UUNET, PSINet and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, Compuserve and JANET, were interconnected with the growing Internet. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication network allowed for great ease of growth, although the rapid growth of the Internet was due primarily to the availability of an array of standardized commercial routers from many companies, the availability of commercial Ethernet equipment for local-area networking, and the widespread implementation and rigorous standardization of TCP/IP on UNIX and virtually every other common operating system.

This NeXT Computer was used by Berners-Lee at CERN and became the world's first Web server.

Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan-European organisation for particle research, publicized the new World Wide Web project. The Web was invented by English scientist Tim Berners-Lee in 1989. An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100 percent per year, with a brief period of explosive growth in 1996 and 1997.[5] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. [6] Using various statistics, Advanced Micro Devices estimated the population of Internet users to be 1.5 billion as of January 2009.[7]

Computer Viruses

A computer virus is a computer program that can copy itself and infect a computer. The term "virus" is also commonly but erroneously used to refer to other types of malware, adware, and spyware programs that do not have the reproductive ability. A true virus can only spread from one computer to another (in some form of executable code) when its host is taken to the target computer; for instance because a user sent it over a network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive. Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by another computer.[1][2]
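
Since a classic file-infecting virus works by modifying its host files, one simple line of defence is to record a cryptographic hash of each file and flag any later change. The sketch below is only an illustration of that idea, not a description of how any particular antivirus product works.

    # Illustrative integrity check: a changed digest means the file was modified
    # since the baseline was recorded (for example, by a file-infecting virus).
    import hashlib
    from pathlib import Path

    def file_digest(path):
        """Return the SHA-256 hex digest of a file's contents."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def changed_files(baseline):
        """Compare current digests against a recorded baseline of {name: digest}."""
        return [name for name, digest in baseline.items()
                if file_digest(name) != digest]

    # Hypothetical usage: record a baseline once, then re-check later.
    # baseline = {"program.exe": file_digest("program.exe")}
    # print(changed_files(baseline))   # [] if nothing has been modified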

The term "computer virus" is sometimes used as a catch-all phrase to include all types of malware. Malware includes computer viruses, worms, trojan horses, most rootkits, spyware, dishonest adware, crimeware, and other malicious and unwanted software, including true viruses. Viruses are sometimes confused with computer worms and Trojan horses, which are technically different. A worm can exploit security vulnerabilities to spread itself to other computers without needing to be transferred as part of a host, and a Trojan horse is a program that appears harmless but has a hidden agenda. Worms and Trojans, like viruses, may cause harm to either a computer system's hosted data, functional performance, or networking throughput, when they are executed. Some viruses and other malware have symptoms noticeable to the computer user, but many are surreptitious.

Most personal computers are now connected to the Internet and to local area networks, facilitating the spread of malicious code. Today's viruses may also take advantage of network services such as the World Wide Web, e-mail, Instant Messaging, and file sharing systems to spread.

The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s.[3] Creeper was an experimental self-replicating program written by Bob Thomas at BBN in 1971.[4] Creeper used the ARPANET to infect DEC PDP-10 computers running the TENEX operating system. Creeper gained access via the ARPANET and copied itself to the remote system where the message, "I'm the creeper, catch me if you can!" was displayed. The Reaper program was created to delete Creeper.[5]

A program called "Rother J" was the first computer virus to appear "in the wild" — that is, outside the single computer or lab where it was created.[citation needed] Written in 1981 by Richard Skrenta, it attached itself to the Apple DOS 3.3 operating system and spread via floppy disk.[6] This virus was created as a practical joke when Richard Skrenta was still in high school. It was injected in a game on a floppy disk. On its 50th use the Elk Cloner virus would be activated, infecting the computer and displaying a short poem beginning "Elk Cloner: The program with a personality."

The first PC virus in the wild was a boot sector virus dubbed (c)Brain[7], created in 1986 by the Farooq Alvi Brothers, operating out of Lahore, Pakistan, reportedly to deter piracy of the software they had written[citation needed]. However, analysts have claimed that the Ashar virus, a variant of Brain, possibly predated it based on code within the virus.[original research?]

Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently. PCs of the era would attempt to boot first from a floppy if one had been left in the drive. Until floppy disks fell out of use, this was the most successful infection strategy and boot sector viruses were the most common in the wild for many years.[8]

Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase in bulletin board system (BBS) use, modem use, and software sharing. Bulletin board-driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSs.[citation needed] Within the "pirate scene" of hobbyists trading illicit copies of retail software, traders in a hurry to obtain the latest applications were easy targets for viruses.[original research?]

Macro viruses have become common since the mid-1990s. Most of these viruses are written in the scripting languages for Microsoft programs such as Word and Excel and spread throughout Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected e-mail, those viruses which did took advantage of the Microsoft Outlook COM interface.[citation needed]

Some old versions of Microsoft Word allow macros to replicate themselves with additional blank lines. If two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus unique from the "parents."[9]

A virus may also send a web address link as an instant message to all the contacts on an infected machine. If the recipient, thinking the link is from a friend (a trusted source) follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating.

Cross-site scripting viruses emerged recently, and were academically demonstrated in 2005.[10] Since 2005 there have been multiple instances of cross-site scripting viruses in the wild, exploiting websites such as MySpace and Yahoo.
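
Cross-site scripting spreads because user-supplied text is placed into a page without being neutralised, so any embedded script runs in other visitors' browsers. The snippet below is an illustrative defence sketch added here (the payload is invented): escaping the text turns would-be script into harmless characters.

    # Illustrative sketch: escaping user-supplied text before inserting it into
    # HTML prevents an embedded <script> tag from executing for other visitors.
    import html

    user_comment = '<script>spreadToContacts()</script>'   # invented payload
    safe_comment = html.escape(user_comment)
    print(safe_comment)   # &lt;script&gt;spreadToContacts()&lt;/script&gt;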

What Is a Computer?

A computer is a machine that manipulates data according to a set of instructions.

Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[1] Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into a wristwatch, and can be powered by a watch battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". The embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are however the most numerous.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore computers ranging from a mobile phone to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.

The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards though, the word began to take on its more familiar meaning, describing a machine that carries out computations.[3]

The history of the modern computer begins with two separate technologies—automated calculation and programmability—but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when.[4] This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[5] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour,[6][7] and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed to compensate for the changing lengths of day and night throughout the year.[5]

The Renaissance saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine.[8] Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.

In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Prior uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..."[9] To process these punched cards he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded to be the father of modern computer science. In 1936 Turing provided an influential formalisation of the concept of the algorithm and computation with the Turing machine. Of his role in the modern computer, Time Magazine in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine." [10]
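
A Turing machine is simple enough to simulate in a few lines; the toy sketch below (added for illustration, with a made-up rule table) steps a machine that walks right along its tape turning 0s into 1s until it reaches a blank, which is enough to show the state-plus-tape-plus-rules idea behind the quotation.

    # A toy Turing machine simulator (illustrative only; the transition table is
    # an invented example). State, tape, head position and a transition function
    # are all that is needed to carry out an algorithm, which is Turing's insight.
    def run_turing_machine(tape, transitions, state="start", blank="_"):
        tape = list(tape)
        head = 0
        while state != "halt":
            if head >= len(tape):
                tape.append(blank)            # extend the tape as needed
            symbol = tape[head]
            state, write, move = transitions[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape)

    # Example machine: walk right, turning every 0 into a 1, halt at the blank.
    rules = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }
    print(run_turing_machine("0101", rules))   # prints "1111_"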

Konrad Zuse was the inventor of the program-controlled computer, building the first working machine of this kind in 1941 and, in 1955, the first computer based on magnetic storage.[11]

George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication including complex arithmetic and programmability.[12]


A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:

EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.
Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging.
  • Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, making it by this measure the world's first operational computer.[13]
  • The non-programmable Atanasoff–Berry Computer (1941) which used vacuum tube based computation, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be much more compact than its peers (being approximately the size of a large desk or workbench), since intermediate results could be stored and then fed back into the same set of computation elements.
  • The secret British Colossus computers (1943),[14] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. It was used for breaking German wartime codes.
  • The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
  • The U.S. Army's Ballistic Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or "Baby"), while the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper—EDVAC—was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.
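
The essence of the stored-program idea, instructions and data held in the same memory and executed by a fetch-decode-execute loop, can be shown with a toy machine. The sketch below is an illustration with an invented three-instruction set, not a model of any real processor.

    # Toy stored-program machine (illustrative; the instruction set is invented).
    # Program and data share one memory; the loop fetches, decodes and executes.
    def run(memory):
        acc, pc = 0, 0                       # accumulator and program counter
        while True:
            op, arg = memory[pc]             # fetch the instruction at pc
            pc += 1
            if op == "LOAD":                 # decode and execute
                acc = memory[arg]
            elif op == "ADD":
                acc += memory[arg]
            elif op == "STORE":
                memory[arg] = acc
            elif op == "HALT":
                return memory

    # Memory cells 0-3 hold the program, cells 4-6 hold data (5, 7, result).
    memory = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 5, 7, 0]
    print(run(memory)[6])                    # prints 12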

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953.[15] In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, and they started to appear as a replacement for mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.

Modern smartphones are fully-programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.

Why Some Women Take Risks

Women with more of the hormone testosterone tend to behave more like men when taking financial risks, according to a new study. Known as the male sex hormone, testosterone occurs in both men and women, but at higher levels in men. It has long been associated with competitiveness and dominance, reduction of fear, and with risky behaviour like gambling and alcohol use.
Previous research in England showed that higher levels of testosterone seem able to boost short-term success in finance. Researchers there tested male traders morning and evening and found that those with high levels of testosterone in the morning were more likely to make an unusually big profit that day.
Zingales and his team tested the testosterone levels of more than 500 MBA students, male and female, and asked them to choose between accepting a guaranteed monetary award or a risky lottery with a higher potential payout. Students had to choose repeatedly between the lottery and a fixed payment at increasing values.
In general, men had higher levels of testosterone and were more likely to choose the risky lottery than women. But it also turned out that women with higher levels of testosterone were almost seven times more likely to take risks than women with lower hormone levels. On the other hand, there was no difference in risk-taking between those with relatively low levels of testosterone: 90% of women and 31% of men.
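
The choice task described above, repeated decisions between a risky lottery and a sure payment that rises step by step, can be sketched in a few lines. The simulation below uses invented numbers and a deliberately simple decision rule, so it illustrates the format of the task rather than the study's actual protocol or data.

    # Illustrative simulation of the choice task: at each step the subject picks
    # either a fixed sure payment or a 50/50 lottery. The sure amount at which she
    # switches from lottery to fixed payment approximates her certainty equivalent
    # (all numbers and the decision rule are invented for this sketch).
    def switch_point(lottery=(0, 200), risk_tolerance=0.8, steps=range(10, 201, 10)):
        expected_value = sum(lottery) / 2          # 100 for a 50/50 gamble on 0 or 200
        for sure_amount in steps:
            # A more risk-tolerant subject demands a larger sure amount before switching.
            if sure_amount >= risk_tolerance * expected_value:
                return sure_amount                 # first sure payment preferred to the lottery
        return None                                # never switched: always chose the lottery

    print(switch_point(risk_tolerance=0.6))   # 60  (more risk-averse: switches early)
    print(switch_point(risk_tolerance=0.9))   # 90  (more risk-tolerant: holds out longer)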

Friday, September 25, 2009

H1N1 virus


Influenza A (H1N1) virus is a subtype of influenzavirus A and the most common cause of influenza (flu) in humans. Some strains of H1N1 are endemic in humans and cause a small fraction of all influenza-like illness and a large fraction of all seasonal influenza. H1N1 strains caused roughly half of all human flu infections in 2006.[1] Other strains of H1N1 are endemic in pigs (swine influenza) and in birds (avian influenza).

In June 2009, the World Health Organization declared that flu due to a new strain of swine-origin H1N1 was responsible for the 2009 flu pandemic. This strain is often called "swine flu" by the public media.

Influenza A virus strains are categorized according to two proteins found on the surface of the virus: hemagglutinin (H) and neuraminidase (N). All influenza A viruses contain hemagglutinin and neuraminidase, but the structures of these proteins differ from strain to strain, due to rapid genetic mutation in the viral genome.

Influenza A virus strains are assigned an H number and an N number based on which forms of these two proteins the strain contains. There are 16 H and 9 N subtypes known in birds, but only H1, H2 and H3, and N1 and N2 are commonly found in humans.[3]

In the 2009 flu pandemic, the virus isolated from patients in the United States was found to be made up of genetic elements from four different flu viruses – North American swine influenza, North American avian influenza, human influenza, and a swine influenza virus typically found in Asia and Europe – "an unusually mongrelised mix of genetic sequences."[13] This new strain appears to be the result of reassortment of human influenza and swine influenza viruses from four different strains of subtype H1N1.

Preliminary genetic characterization found that the hemagglutinin (HA) gene was similar to that of swine flu viruses present in U.S. pigs since 1999, but the neuraminidase (NA) and matrix protein (M) genes resembled versions present in European swine flu isolates. The six genes from American swine flu are themselves mixtures of swine flu, bird flu, and human flu viruses.[14] While viruses with this genetic makeup had not previously been found to be circulating in humans or pigs, there is no formal national surveillance system to determine what viruses are circulating in pigs in the U.S.[15]

On June 11, 2009, the WHO declared an H1N1 pandemic, moving the alert level to phase 6, marking the first global pandemic since the 1968 Hong Kong flu.[16]


Love


Love is any of a number of emotions and experiences related to a sense of strong affection[1] and attachment. The word love can refer to a variety of different feelings, states, and attitudes, ranging from generic pleasure ("I loved that meal") to intense interpersonal attraction ("I love my boyfriend"). This diversity of uses and meanings, combined with the complexity of the feelings involved, makes love unusually difficult to consistently define, even compared to other emotional states.

As an abstract concept, love usually refers to a deep, ineffable feeling of tenderly caring for another person. Even this limited conception of love, however, encompasses a wealth of different feelings, from the passionate desire and intimacy of romantic love to the nonsexual emotional closeness of familial and platonic love[2] to the profound oneness or devotion of religious love.[3] Love in its various forms acts as a major facilitator of interpersonal relationships and, owing to its central psychological importance, is one of the most common themes in the creative arts.


The English word "love" can have a variety of related but distinct meanings in different contexts. Often, other languages use multiple words to express some of the different concepts that English relies mainly on "love" to encapsulate; one example is the plurality of Greek words for "love." Cultural differences in conceptualizing love thus make it doubly difficult to establish any universal definition.[4]

Although the nature or essence of love is a subject of frequent debate, different aspects of the word can be clarified by determining what isn't love. As a general expression of positive sentiment (a stronger form of like), love is commonly contrasted with hate (or neutral apathy); as a less sexual and more emotionally intimate form of romantic attachment, love is commonly contrasted with lust; and as an interpersonal relationship with romantic overtones, love is commonly contrasted with friendship, although other definitions of the word love may be applied to close friendships in certain contexts.

There are two types of love: impersonal love and interpersonal love. A person can be said to love a country, principle, or goal if they value it greatly and are deeply committed to it. Similarly, compassionate outreach and volunteer workers' "love" of their cause may sometimes be borne not of interpersonal love, but of impersonal love coupled with altruism and strong political convictions. People can also "love" material objects, animals, or activities if they invest themselves in bonding or otherwise identifying with those things. If sexual passion is also involved, this condition is called paraphilia.[7]

Interpersonal love refers to love between human beings. It is a more potent sentiment than a simple liking for another. Unrequited love refers to those feelings of love that are not reciprocated. Interpersonal love is most closely associated with interpersonal relationships. Such love might exist between family members, friends, and couples. There are also a number of psychological disorders related to love, such as erotomania.

Throughout history, philosophy and religion have done the most speculation on the phenomenon of love. In the last century, the science of psychology has written a great deal on the subject. In recent years, the sciences of evolutionary psychology, evolutionary biology, anthropology, neuroscience, and biology have added to the understanding of the nature and function of love.