22/07/2012

360º 3-DIMENSIONAL HOLOGRAPHIC DISPLAYS

Microsoft has shown off a prototype research project called the "Vermeer Interactive Display", which it describes as a '3D volumetric/light field display'. The project combines Microsoft's Kinect motion-sensing technology with the display so that you can directly 'touch' and interact with the virtual image being projected.

It creates an image between two facing parabolic mirrors, producing the optical illusion of a color 3D image floating above them that can be viewed from all the way around. This technology could eventually see the light of day in PC gaming, which would give PCs something consoles simply couldn't match in the near future.
Imagine a next-gen RTS game that used one of these as a controller, or an adventure game that used it as an additional, interactive display. Maybe a Fallout-style Pip-Boy display, where you press buttons off to one side on the interactive surface. All it would take is one serious, thinking-outside-the-box developer to pick this up and it could really take off.
The ZCam is a video camera, produced by 3DV Systems, that captures depth information (used to build the 3D model) along with video. The technology is based on the time-of-flight principle: 3D depth data is generated by sending pulses of infra-red light into the scene and detecting the light reflected from the surfaces of objects. From the time taken for a light pulse to travel to the target and back, the distance can be calculated and used to build up 3D depth information for every object in the scene.
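
The distance arithmetic behind time-of-flight sensing is simple enough to sketch. The Python snippet below is only an illustration (the 20 ns round-trip time is a made-up example, not a ZCam specification): it converts a measured round-trip time into a range using the speed of light.

    # Time-of-flight ranging: distance = speed_of_light * round_trip_time / 2
    C = 299_792_458.0  # speed of light in a vacuum, m/s

    def distance_from_round_trip(round_trip_seconds):
        """Range to the surface that reflected the pulse, in meters."""
        return C * round_trip_seconds / 2.0

    # Example: a pulse returning after roughly 20 nanoseconds
    # corresponds to a surface about 3 meters away.
    print(distance_from_round_trip(20e-9))   # ~3.0 m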

THE SIXTH SENSE TECHNOLOGY

At the TED conference (the name stands for Technology, Entertainment, Design), researchers from the Massachusetts Institute of Technology revealed something unbelievable: a working prototype of a multifunctional device that could become part of our lives within five to ten years. The setup, named "SixthSense", requires the user to wear only colorful finger caps, which are perceived by the multifunctional device. It is a wearable gestural interface created by Pranav Mistry, a PhD candidate in the Fluid Interfaces Group at the MIT Media Lab.
The SixthSense prototype comprises a pocket projector, a mirror and a camera contained in a pendant-like, wearable device. Both the projector and the camera are connected to a mobile computing device in the user’s pocket. The projector projects visual information enabling surfaces, walls and physical objects around us to be used as interfaces; while the camera recognizes and tracks users' hand gestures and physical objects using computer-vision based techniques. The software program processes the video stream data captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tips of the user’s fingers. The movements and arrangements of these fiducials are interpreted into gestures that act as interaction instructions for the projected application interfaces. SixthSense supports multi-touch and multi-user interaction.
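
As a rough illustration of the kind of colored-marker tracking described above, here is a minimal Python/OpenCV sketch (not Mistry's released code). The HSV color range, camera index and area threshold are assumptions chosen for the example; the real SixthSense software tracks markers on several fingertips and maps their motion to gestures.

    import cv2
    import numpy as np

    # Assumed HSV range for one colored finger cap (tune per marker and lighting).
    LOWER = np.array([100, 120, 70])     # example: a blue marker
    UPPER = np.array([130, 255, 255])

    cap = cv2.VideoCapture(0)            # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)

        # Largest blob of the marker color is taken as the fingertip
        # (OpenCV 4.x return signature for findContours).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            blob = max(contours, key=cv2.contourArea)
            if cv2.contourArea(blob) > 200:          # ignore small noise
                m = cv2.moments(blob)
                cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
                cv2.circle(frame, (cx, cy), 8, (0, 255, 0), 2)

        cv2.imshow("marker tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:              # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()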

The SixthSense prototype contains a number of demonstration applications.
  • The map application lets the user navigate a map displayed on a nearby surface, using hand gestures to zoom and pan.
  • The drawing application lets the user draw on any surface by tracking the fingertip movements of the user’s index finger.
  • SixthSense also implements augmented reality, projecting information onto objects the user interacts with. For example, a paper newspaper can be augmented with projected dynamic live information.
A 'framing' gesture takes a picture of the scene. The user can stop by any surface or wall and flick through the photos he or she has taken. The system recognizes a user's free-hand gestures as well as icons/symbols drawn in the air with the index finger, for example:
  • Drawing a magnifying glass symbol takes the user to the map application while an ‘@’ symbol lets the user check his mail.
  • The gesture of drawing a circle on the user’s wrist projects an analog watch.
SixthSense prototypes cost approximately $350 to build (not including the computer), the main cost being the micro-projector. Mistry announced in November 2009 that the source code would be released as open source; it can be found in the SixthSense Google Code and SixthSense GitHub repositories.

DO PARALLEL UNIVERSES REALLY EXIST?


In 1954, a young Princeton University doctoral candidate, Hugh Everett III, came up with a radical idea: that there exist parallel universes exactly like our universe. These universes are all related to ours; indeed, they branch off from ours, and our universe has branched off from others. Within these parallel universes, our wars have had different outcomes from the ones we know. Species that are extinct in our universe have evolved and adapted in others. In other universes, we humans may have become extinct. The thought boggles the mind, and yet it is still comprehensible. Notions of parallel universes or dimensions that resemble our own have appeared in works of science fiction and have been used as explanations for metaphysics.
With his Many-Worlds theory, Everett was attempting to answer a rather sticky question in quantum physics: why does quantum matter behave erratically? The quantum level is the smallest that science has detected so far. The study of quantum physics began in 1900, when the physicist Max Planck first introduced the concept to the scientific world. Planck's study of radiation yielded some unusual findings that contradicted classical physical laws. These findings suggested that there are other laws at work in the universe, operating on a deeper level than the one we know.
Heisenberg Uncertainty Principle
In fairly short order, physicists studying the quantum level noticed some peculiar things about this tiny world. For one, the particles that exist on this level have a way of taking different forms arbitrarily. For example, scientists have observed photons (tiny packets of light) acting as both particles and waves. Even a single photon exhibits this shape-shifting.
This has come to be known as the Heisenberg Uncertainty Principle. The physicist Werner Heisenberg suggested that just by observing quantum matter, we affect the behavior of that matter. Thus, we can never be fully certain of the nature of a quantum object or of its attributes, such as velocity and location. This idea is supported by the Copenhagen interpretation of quantum mechanics. Proposed by the Danish physicist Niels Bohr, this interpretation says that a quantum particle doesn't exist in one state or another, but in all of its possible states at once. The sum total of possible states of a quantum object is called its wave function, and the condition of an object existing in all of its possible states at once is called superposition. According to Bohr, when we observe a quantum object, we affect its behavior: observation breaks the object's superposition and essentially forces it to choose one state from its wave function. This accounts for why physicists have taken opposite measurements from the same quantum object: the object "chose" different states during different measurements. Bohr's interpretation was widely accepted, and still is by much of the quantum community. Lately, however, Everett's Many-Worlds theory has been getting some serious attention.
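
For readers who like a formula, the idea of a wave function and superposition can be written compactly in standard Dirac notation. This is a generic two-state sketch, not tied to any particular experiment; the amplitudes alpha and beta are placeholders:

    \[
      |\psi\rangle \;=\; \alpha\,|0\rangle \;+\; \beta\,|1\rangle,
      \qquad |\alpha|^2 + |\beta|^2 = 1 .
    \]

Here |0⟩ and |1⟩ are the two possible states, and the wave function |ψ⟩ holds both at once. On measurement, outcome |0⟩ appears with probability |α|² and |1⟩ with probability |β|². In the Copenhagen picture the superposition then collapses to the observed state; in Everett's picture, described next, both outcomes persist, one in each branch.
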
Many Worlds Theory
Young Hugh Everett agreed with much of what the highly respected physicist Niels Bohr had suggested about the quantum world. He agreed with the idea of superposition, as well as with the notion of wave functions. But Everett disagreed with Bohr in one vital respect.
To Everett, measuring a quantum object does not force it into one comprehensible state or another. Instead, a measurement taken of a quantum object causes an actual split in the universe: the universe is literally duplicated, splitting into one universe for each possible outcome of the measurement. For example, say a quantum object can show up either as a particle or as a wave. When a physicist measures it, there are two possible outcomes: it will be measured either as a particle or as a wave. This distinction makes Everett's Many-Worlds theory a competitor to the Copenhagen interpretation as an explanation for quantum mechanics.
When a physicist measures the object, the universe splits into two distinct universes to accommodate each of the possible outcomes. So a scientist in one universe finds that the object has been measured in wave form. The same scientist in the other universe measures the object as a particle. This also explains how one particle can be measured in more than one state.
As unsettling as it may sound, Everett's Many-Worlds interpretation has implications beyond the quantum level. If an action has more than one possible outcome, then -- if Everett's theory is correct -- the universe splits when that action is taken. This holds true even when a person chooses not to take an action. This means that if you have ever found yourself in a situation where death was a possible outcome, then in a universe parallel to ours, you are dead. This is just one reason that some find the Many-Worlds interpretation disturbing.
Another disturbing aspect of the Many-Worlds interpretation is that it undermines our concept of time as linear. Imagine a time line showing the history of the Vietnam War. Rather than a straight line showing noteworthy events progressing onward, a time line based on the Many-Worlds interpretation would show each possible outcome of each action taken. From there, each possible outcome of the actions taken would be further chronicled.
But a person cannot be aware of his other selves, or even of his own death, in parallel universes. So how could we ever know if the Many-Worlds theory is correct? Assurance that the interpretation is theoretically possible came in the late 1990s from a thought experiment -- an imagined experiment used to theoretically prove or disprove an idea -- called quantum suicide.
This thought experiment renewed interest in Everett's theory, which was for many years considered rubbish. Once the thought experiment showed Many-Worlds to be possible in principle, physicists and mathematicians began to investigate the implications of the theory in depth. But the Many-Worlds interpretation is not the only theory that seeks to explain the universe, nor is it the only one that suggests there are universes parallel to our own.
Parallel Universes: Split or String?
The Many-Worlds theory and the Copenhagen interpretation aren't the only competitors trying to explain the basic level of the universe. In fact, quantum mechanics isn't even the only field within physics searching for such an explanation. The theories that have emerged from the study of subatomic physics still remain theories. This has caused the field of study to be divided in much the same way as the world of psychology. Theories have adherents and critics, as do the psychological frameworks proposed by Carl Jung, Albert Ellis and Sigmund Freud.
Since their science was developed, physicists have been engaged in reverse engineering the universe: they have studied what they could observe and worked backward toward smaller and smaller levels of the physical world. By doing this, physicists are attempting to reach the final and most basic level. It is this level, they hope, that will serve as the foundation for understanding everything else. Following his famous Theory of Relativity, Albert Einstein spent the rest of his life looking for the one final level that would answer all physical questions. Physicists refer to this phantom theory as the Theory of Everything. Quantum physicists believe that they are on the trail of finding that final theory. But another field of physics believes that the quantum level is not the smallest level, and so it could not provide the Theory of Everything. These physicists turn instead to a theoretical subquantum level, described by string theory, for the answers. What's amazing is that, through their theoretical investigation, these physicists, like Everett, have also concluded that there are parallel universes.
String theory was developed by a number of physicists and has been popularized by the Japanese-American physicist Michio Kaku, a co-founder of string field theory. The theory says that the essential building blocks of all matter, as well as all of the physical forces in the universe -- like gravity -- exist on a subquantum level. These building blocks resemble tiny rubber bands -- or strings -- that make up quarks (quantum particles), which in turn make up electrons, atoms, cells and so on. Exactly what kind of matter is created by the strings, and how that matter behaves, depends on the vibration of these strings. It is in this manner that our entire universe is composed. And according to string theory, this composition takes place across 11 separate dimensions.
Like the Many-Worlds theory, string theory demonstrates that parallel universes exist. According to the theory, our own universe is like a bubble that exists alongside similar parallel universes. Unlike the Many-Worlds theory, string theory supposes that these universes can come into contact with one another. String theory says that gravity can flow between these parallel universes. When these universes interact, a Big Bang like the one that created our universe occurs.
While physicists have managed to create machines that can detect quantum matter, the subquantum strings have yet to be observed, which makes them -- and the theory built on them -- entirely theoretical. The theory has been discredited by some, although others believe it is correct.
So do parallel universes really exist? According to the Many-Worlds theory, we can't truly be certain, since we cannot be aware of them. The string theory has already been tested at least once -- with negative results. Dr. Kaku still believes parallel dimensions do exist, however.
Einstein didn't live long enough to see his quest for the Theory of Everything taken up by others. Then again, if Many-Worlds is correct, Einstein's still alive in a parallel universe. Perhaps in that universe, physicists have already found the Theory of Everything.

QUANTUM TELEPORTER

Quantum teleportation has been performed successfully on small objects, according to a study. “We were able to perform a quantum teleportation experiment for the first time ever outside a university laboratory,” said Rupert Ursin, a researcher at the Institute for Experimental Physics at the University of Vienna in Austria. In quantum teleportation it is the quantum state of an object that is destroyed and recreated, not the object itself; quantum teleportation therefore cannot teleport animate or inanimate matter in its physical entirety. The process creates a replica of the original state at a new position, and the original state ceases to exist once the replica has been created.
Quantum teleportation is a process by which quantum information can be transmitted exactly from one location to another without that information travelling through the intervening space. It is useful for quantum information processing; however, it does not by itself transmit classical information, and therefore cannot be used for communication at a speed faster than light. Quantum teleportation also differs from teleportation in the science-fiction sense: it does not transport the system itself, nor does it rearrange particles to copy the form of an object.
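
The textbook single-qubit teleportation protocol can be simulated on a classical computer in a few lines. The Python/NumPy sketch below only illustrates the protocol's logic (the amplitudes 0.6 and 0.8 are arbitrary); it is not a model of the optical experiments described here. Alice entangles the message qubit with her half of a shared Bell pair, measures, sends two classical bits, and Bob applies the corresponding corrections.

    import numpy as np

    # Basis states and gates for a toy simulation of qubit teleportation.
    zero = np.array([1, 0], dtype=complex)
    one  = np.array([0, 1], dtype=complex)
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

    def kron(*ops):
        out = np.array([1.0 + 0j])
        for op in ops:
            out = np.kron(out, op)
        return out

    # Arbitrary state to teleport: alpha|0> + beta|1>.
    alpha, beta = 0.6, 0.8
    psi = alpha * zero + beta * one

    # Qubit order: 0 = message (Alice), 1 = Alice's half of the Bell pair, 2 = Bob's half.
    bell = (kron(zero, zero) + kron(one, one)) / np.sqrt(2)
    state = np.kron(psi, bell)

    state = kron(CNOT, I) @ state        # Alice: CNOT (qubit 0 controls qubit 1)
    state = kron(H, I, I) @ state        # Alice: Hadamard on qubit 0

    # Simulate Alice measuring her two qubits.
    state = state.reshape(2, 2, 2)                     # axes: qubit 0, 1, 2
    probs = np.sum(np.abs(state) ** 2, axis=2)
    rng = np.random.default_rng(1)
    m0, m1 = np.unravel_index(rng.choice(4, p=probs.ravel()), (2, 2))

    # Bob's qubit after the measurement, then his classically-controlled fix-up.
    bob = state[m0, m1, :]
    bob = bob / np.linalg.norm(bob)
    if m1 == 1:
        bob = X @ bob
    if m0 == 1:
        bob = Z @ bob

    print("classical bits sent:", m0, m1)
    print("Bob's qubit amplitudes:", bob)   # matches (0.6, 0.8) up to a global phase

Whatever measurement outcome the random draw produces, Bob's corrected qubit ends up in the original state, which is the whole point of the protocol.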

Work in 1998 verified the initial results, and in August 2004 the distance of teleportation was increased to 600 meters using optical fiber. The longest distance yet claimed to be achieved for quantum teleportation is 143 km (89 mi), in May 2012, between the two Canary Islands of La Palma and Tenerife off the Atlantic coast of North Africa. In April 2011, experimenters reported that they had demonstrated teleportation of wave packets of light up to a bandwidth of 10 MHz while preserving strongly nonclassical superposition states.
In this experiment, researchers in Australia and Japan were able to transfer quantum information from one place to another without having to physically move it. It was destroyed in one place and instantly resurrected in another, “alive” again and unchanged. This is a major advance, as previous teleportation experiments were either very slow or caused some information to be lost. The team employed a mind-boggling set of quantum manipulation techniques to achieve this, including squeezing, photon subtraction, entanglement and homodyne detection. The results pave the way for high-speed, high-fidelity transmission of information, according to Elanor Huntington, a professor at the University of New South Wales in Australia who was part of the study. “If we can do this, we can do just about any form of communication needed for any quantum technology,” she said in a news release.
Instead of using ones and zeroes, quantum computers store data as qubits, which can represent one and zero simultaneously. This superposition enables a quantum computer to work through many possibilities at once. The new, faster teleportation process means scientists can move blocks of this quantum information around within a computer or across a network, Huntington said. Optics researcher Philippe Grangier at the Institut d’Optique in Palaiseau, France, said it was a major breakthrough.
“It shows that the controlled manipulation of quantum objects has progressed steadily and achieved objectives that seemed impossible just a few years ago,” he wrote in an editorial that accompanies the study.

21/07/2012

ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) is the field within computer science that seeks to explain and to emulate, through mechanical or computational processes, some or all aspects of human intelligence. Included among these aspects of intelligence are the ability to interact with the environment through sensory means and the ability to make decisions in unforeseen circumstances without human intervention. Typical areas of research in AI include game playing, natural language understanding and synthesis, computer vision, problem solving, learning, and robotics.

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and '90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
For difficult problems, most of these algorithms can require enormous computational resources; they suffer a "combinatorial explosion", in which the amount of memory or computer time required becomes astronomical once the problem grows beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.
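
As a concrete (and entirely hypothetical) illustration of the combinatorial explosion, the Python sketch below brute-forces the shortest tour through a handful of "cities". The search space is (n−1)! orderings, so the same exhaustive method that is instant for 6 cities becomes hopeless long before n reaches 20.

    import itertools
    import math
    import random

    def shortest_tour(dist):
        """Exhaustively try every ordering of cities 1..n-1, starting from city 0."""
        n = len(dist)
        best, best_order = float("inf"), None
        for order in itertools.permutations(range(1, n)):
            tour = (0,) + order
            length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
            if length < best:
                best, best_order = length, tour
        return best, best_order

    random.seed(0)
    n = 6
    dist = [[0 if i == j else random.randint(1, 9) for j in range(n)] for i in range(n)]
    print(shortest_tour(dist))                      # fine for 6 cities

    for m in (6, 10, 14, 18):
        print(f"{m} cities -> {math.factorial(m - 1):,} orderings to check")
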
Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to guess.
Knowledge representation
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
Among the most difficult problems in knowledge representation are:
1. Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem; a small toy sketch of default rules with exceptions appears after this list.
2. The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering — they must be built, by hand, one complicated concept at a time. A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.
3. The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed", or an art critic can take one look at a statue and instantly realize that it is a fake. These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.
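
As promised above, here is a minimal, entirely toy Python sketch of default reasoning with exceptions: the program assumes the usual "working assumptions" about birds unless more specific knowledge overrides them, which is the behaviour the qualification problem makes hard to capture in pure logic.

    # Default properties for the general concept, plus exception lists.
    DEFAULTS = {"bird": {"flies": True, "sings": True, "size": "fist-sized"}}
    EXCEPTIONS = {
        "penguin": {"flies": False, "sings": False, "size": "medium"},
        "ostrich": {"flies": False, "size": "large"},
    }

    def describe(kind, category="bird"):
        """Start from the category's defaults, then let specific knowledge win."""
        properties = dict(DEFAULTS[category])
        properties.update(EXCEPTIONS.get(kind, {}))
        return properties

    print(describe("sparrow"))   # falls through to the defaults
    print(describe("penguin"))   # exceptions override the defaults

Real systems need this kind of exception handling for essentially every commonsense rule, which is why the exception list never stays this short.
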
Planning
Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.
In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be. However, if the agent is not the only actor, it must periodically ascertain whether the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.
Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.

Learning

Machine learning has been central to AI research from the beginning. In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine". Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
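
To make the supervised-learning terms concrete, here is a small illustrative Python/NumPy example; the data are synthetic and the linear model is an assumption made for the demo. Regression fits a function from example inputs to outputs and then predicts outputs for new inputs.

    import numpy as np

    # Synthetic supervised-learning data: y is roughly 3x + 2 plus noise.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=50)
    y = 3.0 * x + 2.0 + rng.normal(0.0, 1.0, size=50)

    # Least-squares regression: fit w and b in y ~ w*x + b from the examples.
    A = np.column_stack([x, np.ones_like(x)])
    (w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

    print(f"learned w = {w:.2f}, b = {b:.2f}")        # should land near 3 and 2
    print("prediction for a new input x = 4:", w * 4 + b)

Classification works the same way except that the outputs are category labels rather than numbers, and reinforcement learning replaces the labelled examples with rewards and punishments.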
Natural language processing
Natural language processing gives machines the ability to read and understand the languages that humans speak. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as Internet texts. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.
A common method of processing and extracting meaning from natural language is through semantic indexing. Increases in processing speed and the falling cost of data storage make indexing large volumes of abstractions of the user's input much more efficient.
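
One simple form of semantic indexing is latent semantic indexing: build a term-document matrix and use a low-rank factorization so that documents about related topics land near each other even when they share few exact words. The Python/NumPy sketch below uses a made-up four-document corpus purely for illustration.

    import numpy as np

    # Illustrative corpus; a real index would hold far more documents and terms.
    docs = [
        "quantum teleportation of quantum states",
        "machine learning for language processing",
        "language models and machine translation",
        "teleportation of photons over optical fiber",
    ]
    vocab = sorted({word for doc in docs for word in doc.split()})
    counts = np.array([[doc.split().count(word) for word in vocab] for doc in docs],
                      dtype=float)

    # Latent semantic indexing: project documents into a k-dimensional "topic" space.
    U, S, Vt = np.linalg.svd(counts, full_matrices=False)
    k = 2
    doc_vecs = U[:, :k] * S[:k]

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(doc_vecs[0], doc_vecs[3]))   # both teleportation documents: similar
    print(cosine(doc_vecs[0], doc_vecs[1]))   # unrelated topics: close to zero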

Motion and manipulation

The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion - where the robot moves while maintaining physical contact with an object).
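
A toy version of motion and path planning on a grid map shows the flavour of the problem. The Python sketch below runs breadth-first search on a hand-written occupancy grid (the map and the start/goal cells are invented for the example); real planners additionally handle continuous space, robot geometry and uncertainty.

    from collections import deque

    def shortest_path(grid, start, goal):
        """Breadth-first search on a 4-connected grid; 0 = free cell, 1 = obstacle."""
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            r, c = frontier.popleft()
            if (r, c) == goal:
                break
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                    came_from[(nr, nc)] = (r, c)
                    frontier.append((nr, nc))
        if goal not in came_from:
            return None                          # goal unreachable
        path, node = [], goal
        while node is not None:
            path.append(node)
            node = came_from[node]
        return path[::-1]

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]
    print(shortest_path(grid, (0, 0), (3, 3)))   # list of cells from start to goal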

Perception

Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.
Social intelligence
Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as to early philosophical enquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response for those emotions.
Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, in an effort to facilitate human-computer interaction, an intelligent machine might want to be able to display emotions—even if it does not actually experience them itself—in order to appear sensitive to the emotional dynamics of human interaction.
Creativity
A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are artificial intuition and artificial imagination.

General intelligence

Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.
Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

A Relational approach

 In "Computing Machinery and Intelligence" (1997), Turing addresses the question of which functions are essential for intelligence with a proposal for what has come to be the generally accepted test for machine intelligence. An human interrogator is connected by terminal to two subjects, one a human and the other a machine. If the interrogator fails as often as he or she succeeds in determining which is the human and which the machine, the machine could be considered as having intelligence. The Turing Test is not based on the completion of tasks or the solution of problems by the machine, but on the machine's ability to relate to a human being in conversation. Discourse is unique among human activities in that it subsumes all other activities within itself. Turing predicted that by the year 2000, there would be computers that could fool an interrogator at least thirty percent of the time. This, like most predictions in AI, was overly optimistic. No computer has yet come close to passing the Turing Test.
The Turing Test uses relational discourse to demonstrate intelligence. However, Turing also notes the importance of being in relationship for the acquisition of knowledge or intelligence. He estimates that the programming of the background knowledge needed for a restricted form of the game would take at a minimum three hundred person-years to complete. This is assuming that the appropriate knowledge set could be identified at the outset. Turing suggests that rather than trying to imitate an adult mind, computer scientists should attempt to construct a mind that simulates that of a child. Such a mind, when given an appropriate education, would learn and develop into an adult mind. One AI researcher taking this approach is Rodney Brooks of the Massachusetts Institute of Technology, whose lab has constructed several robots, including Cog and Kismet, that represent a new direction in AI in which embodiedness is crucial to the robot's design. Their programming is distributed among the various physical parts; each joint has a small processor that controls movement of that joint. These processors are linked with faster processors that allow for interaction between joints and for movement of the robot as a whole. These robots are designed to learn tasks associated with human infants, such as eye-hand coordination, grasping an object, and face recognition through social interaction with a team of researchers. Although the robots have developed abilities such as tracking moving objects with the eyes or withdrawing an arm when touched, Brooks's project is too new to be assessed. It may be no more successful than Lenat's Cyc in producing a machine that could interact with humans on the level of the Turing Test. However Brooks's work represents a movement toward Turing's opinion that intelligence is socially acquired and demonstrated.
The Turing Test makes no assumptions as to how the computer arrives at its answers; there need be no similarity in internal functioning between the computer and the human brain. However, an area of AI that shows some promise is that of neural networks, systems of circuitry that reproduce the patterns of neurons found in the brain. Current neural nets are limited, however. The human brain has billions of neurons and researchers have yet to understand both how these neurons are connected and how the various neurotransmitting chemicals in the brain function. Despite these limitations, neural nets have reproduced interesting behaviors in areas such as speech or image recognition, natural-language processing, and learning. Some researchers, including Hans Moravec and Raymond Kurzweil, see neural net research as a way to reverse engineer the brain. They hope that once scientists can design nets with a complexity equal to the human brain, the nets will have the same power as the brain and develop consciousness as an emergent property. Kurzweil posits that such mechanical brains, when programmed with a given person's memories and talents, could form a new path to immortality, while Moravec holds out hope that such machines might some day become our evolutionary children, capable of greater abilities than humans currently demonstrate.

VOYAGER

Each Voyager spacecraft weighs 773 kilograms, of which 105 kilograms are scientific instruments. The identical Voyager spacecraft are three-axis-stabilized systems that use celestial or gyro-referenced attitude control to maintain pointing of the high-gain antennas toward Earth. The prime mission science payload consisted of 10 instruments (11 investigations, including radio science).
The 3.66-meter-diameter high-gain antenna (HGA) is attached to the hollow, ten-sided polygonal electronics bus, with a spherical tank inside containing hydrazine propellant.
The Voyager Golden Record is attached to one of the bus sides. The angled square panel to the right is the optical calibration target and excess heat radiator. The three radioisotope thermoelectric generators (RTGs) are mounted end-to-end on the lower boom.
Two 10-meter whip antennas, which study planetary radio astronomy and plasma waves, extend from the spacecraft's body diagonally below the magnetometer boom. The 13 metre long Astromast tri-axial boom extends diagonally downwards left and holds the two low-field magnetometers (MAG); the high-field magnetometers remain close to the HGA.
The instrument boom extending upwards holds, from bottom to top: the cosmic ray subsystem (CRS) left, and Low-Energy Charged Particle (LECP) detector right; the Plasma Spectrometer (PLS) right; and the scan platform that rotates about a vertical axis.
The scan platform comprises: the Infrared Interferometer Spectrometer (IRIS) (largest camera at top right); the Ultraviolet Spectrometer (UVS) just above the IRIS; the two Imaging Science Subsystem (ISS) vidicon cameras to the left of the UVS; and the Photopolarimeter System (PPS) under the ISS.
Only five investigation teams are still supported, though data is collected for two additional instruments. The Flight Data Subsystem (FDS) and a single eight-track digital tape recorder (DTR) provide the data handling functions.
The FDS configures each instrument and controls instrument operations. It also collects engineering and science data and formats the data for transmission. The DTR is used to record high-rate Plasma Wave Subsystem (PWS) data. The data is played back every six months.
The Imaging Science Subsystem, made up of a wide angle and a narrow angle camera, is a modified version of the slow scan vidicon camera designs that were used in the earlier Mariner flights. The Imaging Science Subsystem consists of two television-type cameras, each with 8 filters in a commandable Filter Wheel mounted in front of the vidicons. One has a low resolution 200 mm wide-angle lens with an aperture of f/3 (wide angle camera), while the other uses a higher resolution 1500 mm narrow-angle f/8.5 lens (narrow angle camera).
Unlike the other onboard instruments, operation of the cameras is not autonomous, but is controlled by an imaging parameter table residing in one of the spacecraft computers, the Flight Data Subsystem (FDS). Modern spacecraft (post 1990) typically have fully autonomous cameras.
The computer command subsystem (CCS) provides sequencing and control functions. The CCS contains fixed routines such as command decoding and fault detection and corrective routines, antenna pointing information, and spacecraft sequencing information. The computer is an improved version of that used in the Viking orbiter. The custom-built CCS systems on both craft are identical. There is only a minor software modification for one craft that has a scientific subsystem the other lacks.
The Attitude and Articulation Control Subsystem (AACS) controls the spacecraft orientation, maintains the pointing of the high-gain antenna towards Earth, controls attitude maneuvers, and positions the scan platform. The custom built AACS systems on both craft are identical.
It is widely reported on the web that the Voyager spacecraft were controlled by a version of the RCA CDP1802 "COSMAC" microprocessor, but such claims are not substantiated by primary references. The CDP1802 was used in the later Galileo spacecraft. The Voyager systems were based on RCA CD4000 radiation-hardened sapphire-on-silicon (SOS) custom chips, and some TI 54L ICs.
Uplink communications are via S band at a 16 bit/s command rate, while an X band transmitter provides downlink telemetry at 160 bit/s normally and 1.4 kbit/s for playback of high-rate plasma wave data. All data is transmitted from and received at the spacecraft via the 3.7 m high-gain antenna.
Electrical power is supplied by three radioisotope thermoelectric generators (RTGs). They are powered by plutonium-238 (distinct from the Pu-239 isotope used in nuclear weapons) and provided approximately 470 W at 30 volts DC when the spacecraft was launched. Plutonium-238 decays with a half-life of 87.74 years, so RTGs using Pu-238 lose 1 − 0.5^(1/87.74) ≈ 0.79% of their power output per year.
In 2011, 34 years after launch, such an RTG would inherently produce 470 W × 2^(−34/87.74) ≈ 359 W, about 76% of its initial power. Additionally, the thermocouples that convert heat into electricity also degrade, reducing available power below this calculated level.
By October 7, 2011 the power generated by Voyager 1 and Voyager 2 had dropped to 267.9 W and 269.2 W respectively, about 57% of the power at launch. The level of power output was better than pre-launch predictions based on a conservative thermocouple degradation model. As the electrical power decreases, spacecraft loads must be turned off, eliminating some capabilities.
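
The decay arithmetic above is easy to check with a few lines of Python. This is a back-of-the-envelope sketch using the launch figure of 470 W and the Pu-238 half-life quoted here; it deliberately ignores thermocouple degradation, which is why the measured 2011 values are lower than the prediction.

    P0 = 470.0           # combined RTG electrical output at launch, watts
    HALF_LIFE = 87.74    # half-life of plutonium-238, years

    def decay_only_power(years_since_launch):
        """Power predicted by radioactive decay alone (no thermocouple wear)."""
        return P0 * 2.0 ** (-years_since_launch / HALF_LIFE)

    print(f"annual loss from decay: {(1 - 0.5 ** (1 / HALF_LIFE)) * 100:.2f}%")  # ~0.79%
    print(f"after 34 years: {decay_only_power(34):.0f} W")                       # ~359 W
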
The Voyager primary mission was completed in 1989, with the close flyby of Neptune by Voyager 2. The Voyager Interstellar Mission (VIM) is a mission extension, which began when the two spacecraft had already been in flight for over 12 years. The Heliophysics Division of the NASA Science Mission Directorate conducted a Heliophysics Senior Review in 2008. The panel found that the VIM "is a mission that is absolutely imperative to continue" and that VIM "funding near the optimal level and increased DSN (Deep Space Network) support is warranted."
As of the present date, the Voyager 2 and Voyager 1 scan platforms, including all of the platform instruments, have been powered down. The ultraviolet spectrometer (UVS) on Voyager 1 was active until 2003, when it too was deactivated. Gyro operations will end in 2015 for Voyager 2 and 2016 for Voyager 1. Gyro operations are used to rotate the probe 360 degrees six times per year to measure the magnetic field of the spacecraft, which is then subtracted from the magnetometer science data.
The two Voyager spacecraft continue to operate, with some loss in subsystem redundancy, but retain the capability of returning scientific data from a full complement of Voyager Interstellar Mission (VIM) science instruments.
Both spacecraft also have adequate electrical power and attitude control propellant to continue operating until around 2025, after which there may not be available electrical power to support science instrument operation. At that time, science data return and spacecraft operations will cease.
Voyager 1 and 2 both carry with them a golden record that contains pictures and sounds of Earth, along with symbolic directions for playing the record and data detailing the location of Earth. The record is intended as a combination time capsule and interstellar message to any civilization, alien or far-future human, that may recover either of the Voyager craft. The contents of this record were selected by a committee that included Timothy Ferris and was chaired by Carl Sagan.



OUTER SPACE

Outer space, or simply space, is the void that exists between celestial bodies, including the Earth. It is not completely empty, but consists of a hard vacuum containing a low density of particles: predominantly a plasma of hydrogen and helium, as well as electromagnetic radiation, magnetic fields, and neutrinos. Observations and theory suggest that it also contains dark matter and dark energy. The baseline temperature, as set by the background radiation left over from the Big Bang, is only 3 Kelvin (K); in contrast, temperatures in the coronae of stars can reach over a million Kelvin. Plasma with an extremely low density (less than one hydrogen atom per cubic meter) and high temperature (millions of Kelvin) in the space between galaxies accounts for most of the baryonic (ordinary) matter in outer space; local concentrations have condensed into stars and galaxies. Intergalactic space takes up most of the volume of the Universe, but even galaxies and star systems consist almost entirely of empty space.
There is no firm boundary where space begins. However, the Kármán line, at an altitude of 100 km (62 mi) above sea level, is conventionally used as the start of outer space for the purpose of space treaties and aerospace record keeping. The framework for international space law was established by the Outer Space Treaty, which was passed by the United Nations in 1967. This treaty precludes any claims of national sovereignty and permits all states to explore outer space freely. In 1979, the Moon Treaty made the surfaces of objects such as planets, as well as the orbital space around these bodies, the jurisdiction of the international community. Additional resolutions regarding the peaceful uses of outer space have been drafted by the United Nations, but these have not precluded the deployment of weapons into outer space, including the live testing of anti-satellite weapons.
Humans began the physical exploration of space during the twentieth century with the advent of high-altitude balloon flights, followed by the development of single- and multi-stage rocket launchers. Earth orbit was achieved by Yuri Gagarin in 1961, and unmanned spacecraft have since reached all of the known planets in the Solar System. Achieving orbit requires a minimum velocity of 28,400 km/h (17,600 mph), much faster than any conventional aircraft. Outer space represents a challenging environment for human exploration because of the dual hazards of vacuum and radiation. Microgravity has a deleterious effect on human physiology, resulting in muscle atrophy and bone loss. As yet, space travel has been limited to low Earth orbit and the Moon for manned flight, and to the vicinity of the Solar System for unmanned craft; the remainder of outer space remains inaccessible to humans other than by passive observation with telescopes.
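
The quoted orbital speed follows from the usual circular-orbit formula v = sqrt(GM/r). Here is a quick Python check with rounded constants; the 200 km altitude is an arbitrary low-Earth-orbit example, not a figure from the text.

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of the Earth, kg
    R_EARTH = 6.371e6    # mean radius of the Earth, m

    def circular_orbit_speed(altitude_m):
        """Speed needed to stay in a circular orbit at the given altitude."""
        return math.sqrt(G * M_EARTH / (R_EARTH + altitude_m))

    v = circular_orbit_speed(200e3)                  # a low Earth orbit at ~200 km
    print(f"{v:.0f} m/s  =  {v * 3.6:,.0f} km/h")    # roughly 7,800 m/s, ~28,000 km/h
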
Outer space is the closest natural approximation to a perfect vacuum. It has effectively no friction, allowing stars, planets and moons to move freely along their ideal orbits. However, even the deep vacuum of intergalactic space is not devoid of matter, as it contains a few hydrogen atoms per cubic meter. By comparison, the air we breathe contains about 10^25 molecules per cubic meter. The sparse density of matter in outer space means that electromagnetic radiation can travel great distances without being scattered: the mean free path of a photon in intergalactic space is about 10^23 km, or 10 billion light years. In spite of this, extinction, which is the absorption and scattering of photons by dust and gas, is an important factor in galactic and intergalactic astronomy.
Stars, planets and moons retain their atmospheres by gravitational attraction. Atmospheres have no clearly delineated boundary: the density of atmospheric gas gradually decreases with distance from the object until it becomes indistinguishable from the surrounding environment. The Earth's atmospheric pressure drops to about 3.2 × 10^−2 Pa at 100 kilometres (62 miles) of altitude, compared to 100 kPa for the International Union of Pure and Applied Chemistry (IUPAC) definition of standard pressure. Beyond this altitude, isotropic gas pressure rapidly becomes insignificant when compared to radiation pressure from the Sun and the dynamic pressure of the solar wind. The thermosphere in this range has large gradients of pressure, temperature and composition, and varies greatly due to space weather.
On the Earth, temperature is defined in terms of the kinetic activity of the surrounding atmosphere. However, the temperature of the vacuum cannot be measured in this way. Instead, the temperature is determined by measurement of the radiation. All of the observable Universe is filled with photons that were created during the Big Bang, known as the cosmic microwave background radiation (CMB). (There is quite likely a correspondingly large number of neutrinos, called the cosmic neutrino background.) The current black body temperature of the background radiation is about 3 K (−270 °C; −454 °F). Some regions of outer space contain highly energetic particles that have a much higher temperature than the CMB, such as the corona of the Sun, where temperatures can range over 1.2–2.6 MK.
Outside of a protective atmosphere and magnetic field, there are few obstacles to the passage through space of energetic subatomic particles known as cosmic rays. These particles have energies ranging from about 10^6 eV up to an extreme 10^20 eV for ultra-high-energy cosmic rays. The peak flux of cosmic rays occurs at energies of about 10^9 eV, with approximately 87% protons, 12% helium nuclei and 1% heavier nuclei. In the high energy range, the flux of electrons is only about 1% of that of protons. Cosmic rays can damage electronic components and pose a health threat to space travelers.
Contrary to popular belief, a person suddenly exposed to a vacuum would not explode, freeze to death or die from boiling blood. However, sudden exposure to very low pressure, such as during a rapid decompression, could cause pulmonary barotrauma—a rupture of the lungs, due to the large pressure differential between inside and outside of the chest. Even if the victim's airway is fully open, the flow of air through the windpipe may be too slow to prevent the rupture. Rapid decompression can rupture eardrums and sinuses, bruising and blood seep can occur in soft tissues, and shock can cause an increase in oxygen consumption that leads to hypoxia.
As a consequence of rapid decompression, any oxygen dissolved in the blood would empty into the lungs to try to equalize the partial pressure gradient. Once the deoxygenated blood arrives at the brain, humans and animals lose consciousness after a few seconds and die of hypoxia within minutes. Blood and other body fluids boil when the pressure drops below 6.3 kPa, a condition called ebullism. The steam may bloat the body to twice its normal size and slow circulation, but tissues are elastic and porous enough to prevent rupture. Ebullism is slowed by the pressure containment of blood vessels, so some blood remains liquid. Swelling and ebullism can be reduced by containment in a flight suit. Shuttle astronauts wear a fitted elastic garment called the Crew Altitude Protection Suit (CAPS), which prevents ebullism at pressures as low as 2 kPa. Space suits are needed at 8 km (5.0 mi) to provide enough oxygen for breathing and to prevent water loss, while above 20 km (12 mi) they are essential to prevent ebullism. Most space suits use around 30–39 kPa of pure oxygen, about the same as on the Earth's surface. This pressure is high enough to prevent ebullism, but evaporation of blood could still cause decompression sickness and gas embolisms if not managed.
Because humans are optimized for life in Earth gravity, exposure to weightlessness has been shown to have deleterious effects on health. Initially, more than 50% of astronauts experience space motion sickness. This can cause nausea and vomiting, vertigo, headaches, lethargy, and overall malaise. The duration of space sickness varies, but it typically lasts for 1–3 days, after which the body adjusts to the new environment. Longer-term exposure to weightlessness results in muscle atrophy and deterioration of the skeleton, or spaceflight osteopenia. These effects can be minimized through a regimen of exercise. Other effects include fluid redistribution, slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass, nasal congestion, sleep disturbance, and puffiness of the face.
For long duration space travel, radiation can pose an acute health hazard. Exposure to radiation sources such as high-energy, ionizing cosmic rays can result in fatigue, nausea, vomiting, as well as damage to the immune system and changes to the white blood cell count. Over longer durations, symptoms include an increase in the risk of cancer, plus damage to the eyes, nervous system, lungs and the gastrointestinal tract. On a round-trip Mars mission lasting three years, nearly the entire body would be traversed by high energy nuclei, each of which can cause ionization damage to cells. Fortunately, most such particles are significantly attenuated by the shielding provided by the aluminum walls of a spacecraft, and can be further diminished by water containers and other barriers. However, the impact of the cosmic rays upon the shielding produces additional radiation that can affect the crew. Further research will be needed to assess the radiation hazards and determine suitable countermeasures.
There is no clear boundary between Earth's atmosphere and space, as the density of the atmosphere gradually decreases as the altitude increases. There are several standard boundary designations, namely:
  • The Fédération Aéronautique Internationale has established the Kármán line at an altitude of 100 km (62 mi) as a working definition for the boundary between aeronautics and astronautics. This is used because at an altitude of roughly 100 km (62 mi), as Theodore von Kármán calculated, a vehicle would have to travel faster than orbital velocity in order to derive sufficient aerodynamic lift from the atmosphere to support itself.
  • The United States designates people who travel above an altitude of 50 miles (80 km) as astronauts.
  • NASA's mission control uses 76 mi (122 km) as their re-entry altitude (termed the Entry Interface), which roughly marks the boundary where atmospheric drag becomes noticeable (depending on the ballistic coefficient of the vehicle), thus leading shuttles to switch from steering with thrusters to maneuvering with air surfaces.
In 2009, scientists at the University of Calgary reported detailed measurements with an instrument called the Supra-Thermal Ion Imager (an instrument that measures the direction and speed of ions), which allowed them to establish a boundary at 118 km (73 mi) above Earth. The boundary represents the midpoint of a gradual transition over tens of kilometers from the relatively gentle winds of the Earth's atmosphere to the more violent flows of charged particles in space, which can reach speeds well over 268 m/s (600 mph).