Sunday, May 2, 2010

COLLECTIVE INTELLIGENCE --- OUR ONLY HOPE FOR SURVIVING THE SINGULARITY


========================================

Over the next several decades there will be an explosion in the rate of technical development. The change is expected to be so great that many call it the “Singularity.” It will drastically transform our --- economy --- society --- values --- bodies --- and minds --- in ways that could be very good --- or very bad.

This explosion will be fueled by the ever-increasing power of computers. Within two decades machine intelligence is likely to vastly surpass all the powers of the human brain. This will produce superintelligences that can perform --- learning --- understanding --- mathematical --- scientific --- programming --- engineering --- robotic --- manufacturing --- and --- human interfacing --- related mental tasks much faster, better, and less expensively than humans. (For reasons why superintelligence will probably happen so soon, see http://fora.humanityplus.org/index.php?/topic/31-human-level-artificial-intelligence-and-its-consequences-are-near/ )

This superintelligence will enable breathtaking advances in many other technologies, including: --- nano-electronics --- quantum computing --- networking --- brain science --- brain manipulation, interfacing, & augmentation --- medicine --- life extension --- biotechnology --- genetic engineering --- synthetic biology --- nanotechnology --- molecular & self-organizing manufacturing --- robotics --- nano-robotics --- energy --- sensor networks --- surveillance --- weaponry --- cyber crime & warfare --- and --- interactive virtual worlds, friends, and lovers --- ones more detailed, interactive, and exciting than those that are real.

The Singularity will NOT occur in a vacuum.

It will NOT occur in a realm of pure science, engineering, or philosophy. It will NOT occur in one instant, one year, or one decade.

Instead, it WILL occur --- over multiple decades --- in the real world --- one dominated by struggles for --- personal --- corporate --- political --- and national --- survival, money, and power. How the Singularity’s wildly transformative technologies will be developed and deployed will be decided largely by collective entities --- by corporations --- governments --- political parties --- militaries --- bureaucracies --- interest groups --- criminal gangs --- the media --- and public opinion.

The increasing rate and degree of change made possible by the Singularity --- and the power it could give a very few to benefit --- or harm --- very many --- will tend to make the world much less stable --- and much more difficult for human institutions to govern.

In three to five decades the Singularity could drastically increase the world’s production of food and necessities ---and/or--- replace almost all human work for pennies an hour in a way that would prevent most people from earning a living. It could create a relatively evenly shared plenty --- or --- extremely concentrated wealth and power. It could enable a plurality of free, networked voices and collaboration --- or --- enable machines to watch everything people do, say, and think --- and punish those who disobey. It could greatly lengthen life and health --- or --- create synthetic life forms that accidentally wreak global havoc and death. It could create machines that greatly empower individual humans and their minds --- or --- enable superintelligences --- controlled by one group, one person, or one system of machines --- to hack into --- and take control of --- virtually all the machines upon which humans depend --- so as to enslave or kill most, or all, of humanity.

We cannot stop the advent of superintelligence. Too many people already know how much --- technological --- economic --- political --- and --- military --- advantage can be gained by the nations and corporations that are first to substantially deploy it. It cannot be stopped because computer technology and our understanding of intelligence are already so advanced --- that most of the world’s leading nations and technology companies could develop it --- within roughly a decade --- if they tried.

It is arguable that we will actually --- need --- superintelligence. It is possible that without it we might not be able to deal with many of the problems the world already is facing. It is also possible that if we were smart enough --- as a species --- we could learn how to use it relatively safely to create tremendous benefits for mankind.

Given the complex, and rapidly-changing mix of choices, promises, and threats the Singularity will present --- if humanity is to have any chance of surviving well through this century --- we must harness the coming explosion of technology --- itself --- to vastly increase our collective intelligence, wisdom, and responsibility.

If we --- as a species --- are intelligent enough to design machines that think much more efficiently than we do --- then why can’t we also design technology to enable groups of humans --- when connected by the internet --- and augmented by machine superintelligences --- to think together much, much more intelligently, fairly, and responsibly?

Current computer and internet hardware is already powerful enough to substantially increase humanity’s collective intelligence. And with the technology of the coming decades we will be able to increase our collective intelligence much, much more.

The major barriers will not be technological. They will come --- from human nature --- from religious, cultural, and national values --- and from selfish interests.

Seen from a system-wide viewpoint --- the current collective intelligence of many human institutions is stunningly stupid. Here are just a few examples:

-The 2008 collapse of the world’s financial markets was caused by: --- the false meme that average American residential real estate values never drop (although they had done so twice within the previous eighty years) --- a system of short-term incentives that rewarded people for taking breathtakingly irresponsible risks with other people’s money --- and --- by financial rating agencies and legislators that were --- in effect --- bribed to ignore such dangerous risks.

-America’s Social Security Trust Fund has been an obvious, worthless, sham for decades. It has no net worth. It is nothing but IOU’s from the federal government to itself. The trillions of dollars of Treasury bonds the fund holds have no more value to the federal government than the blank paper it is free --- at any time --- to print into new notes for equal amounts and try to sell --- (i.e. borrow with). And yet our public forum is so dysfunctional that most politicians, media voices, and citizens have acted for decades as if the trust fund had many trillions of dollars of --- actual --- worth. The second President Bush hinted the trust fund’s bonds had little worth --- during his ill-fated attempt to reform Social Security --- but, for political reasons, he was not willing to drive home just what an obvious, and harmful, lie both political parties had been telling the American people for decades.

-America’s federal government has been so shortsighted that it has run up --- over many decades and under both political parties --- fifty to eighty trillion dollars of unfunded obligations that will come due in the next few decades. These obligations are highly likely --- given most current economic predictions --- to throw our country into an economic crisis much, much deeper than the one we are currently in. That is, unless the federal government has the political will to substantially raise taxes or reduce the benefits it has promised under many entitlement programs and pension agreements. Most political observers believe our government will make such difficult changes --- but only after our economy has been so drastically harmed by this expected debt crisis that our government will be absolutely forced to do so.

As these examples --- and thousands of others that could be listed --- indicate --- many of society's current collective systems are not intelligent enough to deal with many of our current problems.

If this is true --- it is almost certain that such systems will not be smart and wise enough to deal well with the much more disruptive choices, changes, and challenges the Singularity will bring.

SO, HOW CAN WE RAISE THE COLLECTIVE INTELLIGENCE OF HUMAN SOCIETY AND ITS INSTITUTIONS TO BEST DEAL WITH THE SINGULARITY?

This is the subject I would like to see discussed under this topic.

I will start by adding some of my own thoughts below. But I look forward to hearing yours.

[Currently, comments can be placed under the corresponding text at http://fora.humanityplus.org/index.php?/topic/70-collective-intelligence-our-only-hope-for-surviving-the-singularity/ and http://www.int4um.blogspot.com/ ]

==================

P.S.

For some thinking on how to make democratic government more intelligent with today’s technology, go to:

MIT's Center for Collective Intelligence at http://cci.mit.edu/ , including their experiment in collective intelligence using climate change as the test subject at http://www.climatecollaboratorium.org/web/guest

the “Personal Democracy Forum” at http://personaldemocracy.com/ , and in particular their anthology of essays on the subject, “It's Time to Reboot America,” at http://rebooting.personaldemocracy.com/files/Rebooting_America.pdf

For a video describing one currently proposed collective intelligence system see http://www.youtube.com/watch?v=ue-ibFH9zTA (Thanks to TransAlchemy for pointing this out under the Sousveillance topic)

For a more futurist discussion of how the Singularity might affect human governance, read Goertzel and Bugaj’s article on “Sousveillance” at http://fora.humanityplus.org/index.php?/topic/27-sousveillance-and-artificial-general-intelligence/

Tuesday, April 27, 2010

DARPA’s 2-liter, 1 kW, 10^14-synapse AGI brain

DARPA’s Defense Sciences Office (DSO) is supporting the Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, project. Its goal, according to its April 8, 2008 BAA (Broad Agency Announcement), is to create a system with roughly the same number of neurons (they want 10^10) and the same number of synapses (they want 10^14) as the human brain --- one that will fit in a volume of 2 liters or less, and will draw less than one kilowatt of electric power.
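The BAA's targets imply some striking per-device budgets. The following back-of-the-envelope sketch uses only the figures quoted above; the derived numbers are simple arithmetic, not anything stated in the BAA:

```python
# Back-of-the-envelope arithmetic implied by the SyNAPSE targets.
neurons = 1e10        # target neuron count
synapses = 1e14       # target synapse count
power_w = 1000.0      # <= 1 kW power budget
volume_cm3 = 2000.0   # <= 2 liters

synapses_per_neuron = synapses / neurons   # 1e4, roughly cortical fan-out
watts_per_synapse = power_w / synapses     # 1e-11 W, i.e. 10 picowatts each
synapses_per_cm3 = synapses / volume_cm3   # 5e10 devices per cubic centimetre
```

At roughly 10 picowatts per synapse, the power target is orders of magnitude below what simulating synapses on conventional digital hardware costs, which is the point of going neuromorphic.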

The SyNAPSE BAA says:

“The vision for the anticipated DARPA SyNAPSE program is the enabling of electronic neuromorphic machine technology that is scalable to biological levels. Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations….

and

“Architectures will support critical structures and functions observed in biological systems such as connectivity, hierarchical organization, core component circuitry, competitive self-organization, and modulatory/reinforcement systems. As in biological systems, processing will necessarily be maximally distributed, nonlinear, and inherently noise- and defect-tolerant.”


Giulio Tononi, who has developed “An information integration theory of consciousness” (described at http://www.biomedcentral.com/1471-2202/5/42 ), is working on the SyNAPSE project. As is stated in “Cognitive computing: Building a machine that can learn from experience” (at http://www.physorg.com/news148754667.html ), Tononi is part of a team that will be developing a prototype neuromorphic AGI, with roughly the power of a small mammal's brain, for the SyNAPSE project.

“Tononi, professor of psychiatry at the UW-Madison School of Medicine and Public Health and an internationally known expert on consciousness, is part of a team of collaborators from top institutions who have been awarded a $4.9 million grant from the Defense Advanced Research Projects Agency (DARPA) for the first phase of DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

“Tononi and scientists from Columbia University and IBM will work on the "software" for the thinking computer, while nanotechnology and supercomputing experts from Cornell, Stanford and the University of California-Merced will create the "hardware." Dharmendra Modha of IBM is the principal investigator.

'The idea is to create a computer capable of sorting through multiple streams of changing data, to look for patterns and make logical decisions.

“There's another requirement: The finished cognitive computer should be as small as the brain of a small mammal and use as little power as a 100-watt light bulb. It's a major challenge. But it's what our brains do every day.”


One of the keys to making the types of compact, low-power, extremely powerful supercomputers SyNAPSE envisions within the coming decade is the “memristor.”

This is because memristors enable a synapse to be modeled much more compactly than was ever before possible. A memristor is a type of resistor whose resistance can be varied by changing the magnitude or direction of the current passed through it, and whose resistance is retained until the next time it is changed. Hewlett-Packard is currently the world’s leading developer of memristor technology and is an important part of DARPA’s SyNAPSE program.
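To make the “remembered resistance” idea concrete, here is a toy numerical sketch of the linear ion-drift model HP has used to describe its devices: the internal state moves in proportion to the current through the device, and the resistance interpolates between an “on” and an “off” value. The parameter values (R_ON, R_OFF, and the drift constant K) are illustrative assumptions, not figures from any HP datasheet:

```python
# Toy simulation of a linear ion-drift memristor model: resistance depends
# on the charge that has flowed through the device, and the state persists
# when the current stops. Parameter values are illustrative only.
R_ON, R_OFF = 100.0, 16_000.0  # fully-doped / undoped resistance (ohms) -- assumed
K = 1e4                        # drift term mu_v * R_ON / D**2, in 1/coulomb -- assumed

def simulate(currents, dt=1e-3, w=0.5):
    """Step the internal state w (0..1) once per current sample; return
    the final state and the resistance trace R(w) = R_ON*w + R_OFF*(1-w)."""
    rs = []
    for amps in currents:
        w = min(1.0, max(0.0, w + K * amps * dt))  # dw/dt = K * i(t), clamped
        rs.append(R_ON * w + R_OFF * (1.0 - w))
    return w, rs

# Positive current drives the device toward low resistance; zero current
# leaves the state untouched -- the "memory" in memristor.
w_on, rs = simulate([1e-2] * 10)           # 10 ms of +10 mA pulses
w_idle, _ = simulate([0.0] * 10, w=w_on)   # no current: state is remembered
```

The clamped state variable is what makes the device synapse-like: its “weight” is set by the history of current through it, with no separate memory cell required.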

An article at http://www.newscientist.com/article/mg20327151.600-memristor-minds-the-future-of-artificial-intelligence.html?full=true&print=true states the following about the role of memristors in the SyNAPSE project:

“So now we've found [memristors], might a new era in artificial intelligence be at hand? The Defense Advanced Research Projects Agency certainly thinks so. DARPA is a US Department of Defense outfit with a strong record in backing high-risk, high-pay-off projects - things like the internet. In April last year, it announced the Systems of Neuromorphic Adaptive Plastic Scalable Electronics Program, SyNAPSE for short, to create "electronic neuromorphic machine technology that is scalable to biological levels".

“Williams's team from Hewlett-Packard is heavily involved. Late last year, in an obscure US Department of Energy publication called SciDAC Review, his colleague Greg Snider set out how a memristor-based chip might be wired up to test more complex models of synapses. He points out that in the human cortex synapses are packed at a density of about 10^10 per square centimetre, whereas today's microprocessors only manage densities 10 times less. "That is one important reason intelligent machines are not yet walking around on the street," he says.

'Snider's dream is of a field he calls "cortical computing" that harnesses the possibilities of memristors to mimic how the brain's neurons interact. It's an entirely new idea. "People confuse these kinds of networks with neural networks," says Williams. But neural networks - the previous best hope for creating an artificial brain - are software working on standard computing hardware. "What we're aiming for is actually a change in architecture," he says.

'The first steps are already being taken. Williams and Snider have teamed up with Gail Carpenter and Stephen Grossberg at Boston University, who are pioneers in reducing neural behaviours to systems of differential equations, to create hybrid transistor-memristor chips designed to reproduce some of the brain's thought processes. Di Ventra and his colleague Yuriy Pershin have gone further and built a memristive synapse that they claim behaves like the real thing (www.arxiv.org/abs/0905.2935).

'The electronic brain will be a time coming. "We're still getting to grips with this chip," says Williams. Part of the problem is that the chip is just too intelligent - rather than a standard digital pulse it produces an analogue output that flummoxes the standard software used to test chips. So Williams and his colleagues have had to develop their own test software. "All that takes time," he says.”


Two recent articles point to the progress HP is making in developing memristors. This progress is so impressive that memristors may well become the major form of the long-anticipated “universal” memory (i.e., memory that can be used substantially like SRAM, DRAM, and flash are today). But first, ways will have to be found to increase the number of times a memristor's value can be changed far beyond the write endurance of flash memory. People at HP currently claim to be confident they can achieve such increases.


An April 7, 2010 NYTimes article (at http://www.nytimes.com/2010/04/08/science/08chips.html ) reported Hewlett-Packard has been making significant progress on memristor technology. In part it said:

“they had devised a new method for storing and retrieving information from a vast three-dimensional array of memristors. The scheme could potentially free designers to stack thousands of switches in a high-rise fashion, permitting a new class of ultradense computing devices even after two-dimensional scaling reaches fundamental limits”

“The most advanced transistor technology today is based on minimum feature sizes of 30 to 40 nanometers…and Dr. Williams said that H.P. now has working 3-nanometer memristors that can switch on and off in about a nanosecond, or a billionth of a second.

'He said the company could have a competitor to flash memory in three years that would have a capacity of 20 gigabytes a square centimeter.”
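A rough geometric sanity check on that capacity claim: an ideal crossbar memory cell occupies about 4F² of chip area (a standard idealization, not an HP figure), so 3-nanometer features leave ample headroom above 20 gigabytes per square centimeter:

```python
# Geometric sanity check on the claimed 20 GB per square centimetre,
# assuming an idealized 4*F**2 crossbar cell (a textbook approximation,
# not a figure from HP).
F = 3e-9                              # feature size, in metres
cell_area_m2 = 4 * F ** 2             # idealized crossbar cell area
cells_per_cm2 = 1e-4 / cell_area_m2   # 1 cm^2 = 1e-4 m^2; about 2.8e12 cells
claimed_bits_per_cm2 = 20 * 8e9       # 20 gigabytes, expressed in bits
headroom = cells_per_cm2 / claimed_bits_per_cm2   # roughly 17x margin
```

The claim therefore sits comfortably inside what the geometry allows, leaving room for addressing overhead and imperfect packing.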


An April 9, 2010 article from EETimes (at http://www.eetimes.com/showArticle.jhtml?articleID=224202453 ) stated:

“Hewlett-Packard has demonstrated memristors ("memory resistors") cast in an architecture that can be dynamically changed between logic operations and memory storage. The configurable architecture demonstrates "stateful logic" that HP claims could someday obsolete the dedicated central-processing unit (CPU) by enabling dynamically changing circuits to maintain a constant memory of their state…

“… HP showed that memristive devices could use stateful logic to perform material implication—a "complete" operator that can be interconnected to create any logical operation, much as early supercomputers were made from NAND gates. Bertrand Russell espoused material implication in Principia Mathematica, the seminal primer on logic he co-authored with Alfred Whitehead, but until now engineers have largely ignored the concept.

“HP realized the material implication gate with one regular resistor connected to two memristive devices used as digital switches (low resistance for "on" and high resistance for "off"). By using three memristors, HP could have realized a NAND gate and thus re-created the conditions under which earlier supercomputers were conceived. But HP claims that material implication is better than NAND for memristive devices, because material implication gates can be cast in an architecture that uses them as either memory or logic, enabling a device whose function can be dynamically changed.”
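The “complete operator” point is easy to verify in a few lines: material implication (IMPLY), together with a constant false input, can express NOT, and from those any other gate, including NAND. The functions below are plain Boolean stand-ins for the memristive gates described in the article, not a model of the devices themselves:

```python
# Material implication (IMPLY) plus a constant "false" input is
# functionally complete -- the property HP relies on to build arbitrary
# logic from memristive gates.
def imp(p, q):
    """p IMPLY q, i.e. (not p) or q -- the primitive operation."""
    return (not p) or q

def not_(p):
    """NOT from IMPLY and a constant False: (p -> False) == not p."""
    return imp(p, False)

def nand(p, q):
    """NAND, the classic universal gate, from two IMPLY operations."""
    return imp(p, not_(q))

def or_(p, q):
    """OR from IMPLY and NOT: ((not p) -> q) == p or q."""
    return imp(not_(p), q)
```

Since NAND alone can build any Boolean circuit, deriving it from IMPLY shows why the article calls material implication a “complete” operator.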
All these articles indicate that advances in memristors might well hasten the day when human-level AGIs are created.



For more information on the SyNAPSE project, see the following two links.
IBM also has part of the SyNAPSE contract, as is discussed in the last half of http://www-03.ibm.com/press/us/en/pressrelease/28842.wss

For DSO’s current brief summary of the project see http://www.darpa.mil/dso/thrusts/bio/biologically/synapse/index.htm

DARPA IPTO projects likely to advance AGI

Here is a summary of projects of DARPA’s IPTO (Information Processing Techniques Office), taken from its web site. It shows that this office within DARPA is funding a lot of projects that are likely to speed the advance of AI.

That is particularly true if this work is combined with the type of deep learning DARPA is proposing (described in one of my posts above), or with DARPA’s neuromorphic computing project.

(I have capitalized the portions of text that seem most relevant to the development of AI. Apologies to those who view all caps as screaming. In the limited word processor offered in this forum, it seems the most efficient way to let readers scan highlighted text.)

============================================================
Cognitive Systems @ http://www.darpa.mil/ipto/thrust_areas/thrust_cs.asp
============================================================
COGNITIVE COMPUTING IS THE DEVELOPMENT OF COMPUTER TECHNIQUES TO EMULATE HUMAN PERCEPTION, INTELLIGENCE AND PROBLEM SOLVING. Cognitive systems offer some important advantages over conventional computing approaches. For example, COGNITIVE SYSTEMS CAN LEARN FROM EVENTS THAT OCCUR IN THE REAL WORLD and so are better suited to applications that require EXTRACTING AND ORGANIZING INFORMATION IN COMPLEX UNSTRUCTURED SCENARIOS than conventional computing systems, which must have the right models built in a priori in order to be effective. Because many of the challenges faced by military commanders involve vast amounts of data from sensors, databases, the Web and human sources, IPTO is creating cognitive systems that CAN LEARN AND REASON TO STRUCTURE MASSIVE AMOUNTS OF RAW DATA INTO USEFUL, ORGANIZED KNOWLEDGE WITH A MINIMUM OF HUMAN ASSISTANCE. IPTO is implementing cognitive technology in systems that support warfighters in the decision-making, management, and understanding of complexity in traditional and emergent military missions. These cognitive systems WILL UNDERSTAND WHAT THE USER IS REALLY TRYING TO DO AND PROVIDE PROACTIVE INTELLIGENCE, ASSISTANCE AND ADVICE. Finally, the increasing complexity, rigidity, fragility and vulnerability of modern information technology has led to ever-growing manpower requirements for IT support. The incorporation of COGNITIVE CAPABILITIES IN INFORMATION SYSTEMS WILL ENABLE THEM TO SELF-MONITOR, SELF-CORRECT, AND SELF-DEFEND AS THEY EXPERIENCE SOFTWARE CODING ERRORS, HARDWARE FAULTS AND CYBER-ATTACK.

Programs

Advanced Soldier Sensor Information System and Technology (ASSIST)
---------------------------------------------------------------------------------------------------------
The main goal of the program is to enhance battlefield awareness via exploitation of soldier-collected information. The program will demonstrate advanced technologies and an integrated system for processing, digitizing and disseminating key data and knowledge captured by and for small squad leaders.

Bootstrapped Learning (BL)
---------------------------------------------------------------------------------------------------------
THE BOOTSTRAPPED LEARNING PROGRAM SEEKS TO MAKE INSTRUCTABLE COMPUTING A REALITY. THE "ELECTRONIC STUDENT" WILL LEARN FROM A HUMAN TEACHER WHO USES SPOKEN LANGUAGE, GESTURES, DEMONSTRATION, AND MANY OTHER METHODS ONE WOULD FIND IN A HUMAN MENTORED RELATIONSHIP. FURTHERMORE, IT WILL BUILD UPON LEARNED CONCEPTS AND APPLY THAT KNOWLEDGE ACROSS DIFFERENT FIELDS OF STUDY.

EMBEDDING BL TECHNOLOGY IN COMPUTING SYSTEMS WILL ELIMINATE THE NEED FOR TRAINED PROGRAMMERS IN MANY PRACTICAL SETTINGS, significantly accelerating human-machine instruction, and making possible on-the-fly upgrades by domain experts rather than computer experts. Target applications include a variety of field-trainable military systems, such as human-instructable unmanned aerial vehicles. However, BL technology is being developed and tested against a portfolio of training tasks across very diverse domains, thus it can be applied to any programmable, automated system. As such systems have become ubiquitous, and their operation inaccessible to the layperson, there is also the strong prospect of societal adoption and benefit.

Brood of Spectrum Supremacy (BOSS)
---------------------------------------------------------------------------------------------------------
The goal of the Brood of Spectrum Supremacy (BOSS) program is to provide a radio frequency (RF) spectrum analogue to night vision capabilities for the tactical warfighter, with a particular focus on RF-rich urban operations. The program is intended to apply collaborative processing capabilities for software-defined radios to specific military applications.

Cyber Trust (CT)
---------------------------------------------------------------------------------------------------------
The Cyber Trust program will create the technology and techniques to enable trustworthy information systems by:
1. Developing hardware, firmware, and microkernel architectures as necessary to provide foundational security for operating systems and applications.
2. Developing tools to find vulnerabilities in complex open source software.
3. Developing scalable formal methods to formally verify complex hardware/software.

Integrated Learning (IL)
---------------------------------------------------------------------------------------------------------
The Integrated Learning program SEEKS TO ACHIEVE REVOLUTIONARY ADVANCES IN MACHINE LEARNING BY CREATING SYSTEMS THAT OPPORTUNISTICALLY ASSEMBLE KNOWLEDGE FROM MANY DIFFERENT SOURCES IN ORDER TO LEARN. THE GOAL IS TO MOVE BEYOND THE CURRENT STATISTICALLY-ORIENTED PARADIGMS VIA THE INTEGRATION OF EXISTING LEARNING, REASONING, AND KNOWLEDGE REPRESENTATION TECHNOLOGIES INTO A COHERENT ARTIFACT THAT WILL BE ABLE TO LEARN MUCH MORE QUICKLY AND ROBUSTLY IN A WIDER RANGE OF APPLICATIONS. The program is FOCUSED UPON LEARNING MODELS OF ACTION FROM VERY SPARSE DATA, which will provide the ability to develop more effective military decision/planning support systems at lower costs. Target applications include military airspace management and medical logistics.

LANdroids
---------------------------------------------------------------------------------------------------------
Communications are essential to warfighters - they enable warfighters to share situational awareness and to stay coordinated with each other and command. Communications are important for voice and data and the importance for data traffic will only increase in the future. The problem is that urban settings hinder communications. Buildings, walls, vehicles, etc., create obstacles that impact the manner in which radio signals propagate. The net result is unreliable communications in these settings, which can leave warfighters, sensors, etc., without the benefit of reach back to command or each other.

This program will help to solve the urban communications problem by CREATING INTELLIGENT AUTONOMOUS ROBOTIC RADIO RELAY NODES, CALLED LANDROIDS (LOCAL AREA NETWORK DROIDS), WHICH WORK TO ESTABLISH AND MAINTAIN MESH NETWORKS THAT SUPPORT VOICE AND DATA TRAFFIC. Through autonomous movement and intelligent control algorithms, LANdroids can mitigate many of the communications problems present in urban settings, e.g., relaying signals into shadows and making small adjustments to reduce multi-path effects.

LANdroids will be pocket-sized and inexpensive. The concept of operations is that warfighters will carry several LANdroids, which they drop as needed during deployment. The LANdroids then form the mesh network and work to maintain it - establishing a communications infrastructure that supports the warfighters in that region.
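As a toy illustration of why a dropped relay node restores communications, the sketch below models each node as a point with a fixed radio range, links nodes that are in range of each other, and checks reachability over the resulting “in range” graph with a breadth-first search. The range figure and the algorithm are assumptions for illustration only, not anything from the LANdroids program:

```python
# Toy model of the mesh-relay idea behind LANdroids: nodes with a fixed
# radio range form an ad-hoc graph, and a BFS checks whether two endpoints
# are still connected through the mesh. Not DARPA's control algorithm.
from collections import deque

RANGE_M = 30.0  # usable radio range in metres -- an assumed figure

def connected(positions, src, dst, radio_range=RANGE_M):
    """Return True if src can reach dst through nodes within radio range."""
    def in_range(a, b):
        (ax, ay), (bx, by) = positions[a], positions[b]
        return (ax - bx) ** 2 + (ay - by) ** 2 <= radio_range ** 2

    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for other in positions:           # expand to every unseen in-range node
            if other not in seen and in_range(node, other):
                seen.add(other)
                queue.append(other)
    return False

# Two warfighters 50 m apart are out of direct range; one droid dropped
# midway bridges the gap and restores the link.
nodes = {"squad": (0, 0), "command": (50, 0)}
direct = connected(nodes, "squad", "command")
nodes["droid"] = (25, 0)
relayed = connected(nodes, "squad", "command")
```

The intelligence DARPA describes goes further, of course: the droids would move themselves to keep graphs like this one connected as conditions change.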

Machine Reading (MR)
---------------------------------------------------------------------------------------------------------
The Machine Reading Program WILL BUILD A UNIVERSAL TEXT ENGINE THAT CAPTURES KNOWLEDGE FROM NATURALLY OCCURRING TEXT AND TRANSFORMS IT INTO THE FORMAL REPRESENTATIONS USED BY ARTIFICIAL INTELLIGENCE (AI) REASONING SYSTEMS. The Machine Reading Program will create an automated reading system that SERVES AS A BRIDGE BETWEEN KNOWLEDGE CONTAINED IN NATURAL TEXTS AND THE FORMAL REASONING SYSTEMS THAT NEED SUCH KNOWLEDGE.
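As a caricature of what such a text engine does, the sketch below maps simple sentences onto subject/relation/object triples of the kind a formal reasoner could consume. Real machine-reading systems use statistical NLP, not a single regular expression; this only shows the shape of the text-to-formal-representation bridge:

```python
# Minimal caricature of "machine reading": turn natural-language sentences
# into formal (subject, relation, object) triples. A single regex stands in
# for what is, in real systems, a large statistical NLP pipeline.
import re

ISA_PATTERN = re.compile(r"(\w+) (?:is|was) (?:a|an) (\w+)")

def extract(sentences):
    """Return an 'isa' triple for each sentence matching the toy pattern."""
    triples = []
    for sentence in sentences:
        m = ISA_PATTERN.search(sentence)
        if m:
            triples.append((m.group(1), "isa", m.group(2)))
    return triples

facts = extract(["Socrates is a man", "Rex was a dog"])
# facts == [("Socrates", "isa", "man"), ("Rex", "isa", "dog")]
```

Triples in this form are exactly what classical AI reasoners (description logics, Prolog-style rule engines) take as input, which is the bridge the program description refers to.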

Personalized Assistant that Learns (PAL)
---------------------------------------------------------------------------------------------------------
The mission of the PAL program is TO RADICALLY IMPROVE THE WAY COMPUTERS SUPPORT HUMANS BY ENABLING SYSTEMS THAT ARE COGNITIVE, I.E., COMPUTER SYSTEMS THAT CAN REASON, LEARN FROM EXPERIENCE, BE TOLD WHAT TO DO, EXPLAIN WHAT THEY ARE DOING, REFLECT ON THEIR EXPERIENCE, AND RESPOND ROBUSTLY TO SURPRISE. MORE SPECIFICALLY, PAL WILL DEVELOP A SERIES OF PROTOTYPE COGNITIVE SYSTEMS THAT CAN ACT AS AN ASSISTANT FOR COMMANDERS AND STAFF. Successful completion of this program will usher in a new era of computational support for a broad range of human activity.

Current software systems - in the military and elsewhere - are plagued by brittleness and the inability to deal with changing and novel situations - and must therefore be painstakingly programmed for every contingency. If PAL succeeds it could result in software systems that could learn on their own - that could adapt to changing situations without the need for constant reprogramming. PAL technology could drastically reduce the money spent by DoD on information systems of all kinds.

This is the FIRST BROAD-BASED RESEARCH PROGRAM IN COGNITIVE SYSTEMS SINCE THE STRATEGIC COMPUTING INITIATIVE FUNDED BY DARPA IN THE 1980S. Since then, there have been significant developments in the technologies needed to enable cognitive systems, such as machine learning, reasoning, perception, and multi-modal interaction. Improvements in processors, memory, sensors and networking have also dramatically changed the context of cognitive systems research. It is now time to encourage the various areas to come together again by focusing on a common application problem: a Personalized Assistant that Learns.

Developing cognitive systems that learn to adapt to their user could dramatically improve a wide range of military operations. The development and application of intelligent systems to support military decision-making may provide dramatic advances for traditional military roles and missions. The technologies developed under the PAL program are intended to make military decision-making more efficient and more effective at all levels.

For example, today's command centers require hundreds of staff members to support a relatively small number of key decision-makers. If PAL succeeds, and develops a new capability for "cognitive assistants," those assistants could eliminate the need for large command staffs - enabling smaller, more mobile, less vulnerable command centers.

Self-Regenerative Systems (SRS)
---------------------------------------------------------------------------------------------------------
The goal of the SRS program is to develop technology for building military computing systems that provide critical functionality at all times, in spite of damage caused by unintentional errors or attacks. All current systems suffer eventual failure due to the accumulated effects of errors or attacks. The SRS program aims to develop technologies enabling military systems to learn, regenerate themselves, and automatically improve their ability to deliver critical services. If successful, self-regenerative systems will show a positive trend in reliability, actually exceeding initial operating capability and approaching a theoretical optimal performance level over long time intervals.

Situation Aware Protocols in Edge Network Technologies (SAPIENT)
---------------------------------------------------------------------------------------------------------
The mission of the Situation Aware Protocols in Edge Network Technologies (SAPIENT) program is to create a new generation of adaptive communication systems that achieve new levels of functionality through situation-awareness.

Transfer Learning (TL)
---------------------------------------------------------------------------------------------------------
The TRANSFER LEARNING PROGRAM SEEKS TO SOLVE THE PROBLEM OF REUSING KNOWLEDGE DERIVED IN ONE DOMAIN TO HELP EFFECT SOLUTIONS IN ANOTHER DOMAIN. Adaptive systems, systems that respond to changes in their environment, stand to benefit significantly from the application of TL technology. Today's adaptive systems need to be trained for every new situation they encounter. This requires building new training data, which is the most expensive and most limiting aspect of deploying such systems. The TL PROGRAM ADDRESSES THIS SHORTCOMING BY IMBUING ADAPTIVE SYSTEMS WITH THE ABILITY TO ENCAPSULATE WHAT THEY HAVE LEARNED AND APPLY THIS KNOWLEDGE TO NEW SITUATIONS. Thus, rather than having to be retrained for each new context, TL enables systems to leverage what they have already learned in order to be effective much sooner and with less effort spent on training. Early applications of TL technology include adaptive ISR systems, robotic vision and manipulation, and automated population of databases from unstructured text.




============================================================
Command & Control @ http://www.darpa.mil/ipto/thrust_areas/thrust_cc.asp
============================================================
Command and control is the exercise of authority and direction by a properly designated commander over assigned and attached forces in the accomplishment of a mission. Without question the missions faced by our warfighters today (such as counter-insurgency) and the operational environments (such as cities) are more complex and dangerous than ever before. While following their rules of engagement, warfighters must make rapid decisions based on limited observables interpreted in the context of the evolving situation. Command and control systems must augment the observables within constrained timelines and present actionable results to the warfighter. IPTO ENABLES WARFIGHTER SUCCESS BY CREATING TECHNOLOGIES AND SYSTEMS THAT PROVIDE TAILORED, CONSISTENT, PREDICTIVE SITUATION AWARENESS ACROSS ALL COMMAND ELEMENTS, AND CONTINUOUS SYNCHRONIZATION OF SENSING, STRIKE, COMMUNICATIONS, AND LOGISTICS TO MAXIMIZE THE EFFECTIVENESS OF MILITARY OPERATIONS WHILE MINIMIZING UNDESIRABLE SIDE EFFECTS. In counter-insurgency operations, targets of interest are often not known until a significant event (e.g., the detonation of an IED) occurs. In those instances, reliably and quickly determining the origin of the devices/vehicles becomes the key to preventing subsequent attacks. IPTO is creating systems that collect wide area observables in the absence of any strong a priori cues, analyze the prior time history of events and track insurgent activities to their point of origin.

Programs


Conflict Modeling, Planning, and Outcomes Experimentation (COMPOEX)
---------------------------------------------------------------------------------------------------------
DARPA's Conflict Modeling, Planning, and Outcomes Experimentation (COMPOEX) program is developing a suite of tools that will help military commanders and their civilian counterparts to plan, analyze and conduct complex campaigns. "Complex" here refers to those operations - often of long duration and large scale - that require careful consideration of not only traditional military actions, but also political, social, and economic actions and ramifications.

Deep Green (DG)
---------------------------------------------------------------------------------------------------------
The Deep Green concept is an innovative approach to using simulation to support ongoing military operations while they are being conducted. The basic approach is to MAINTAIN A STATE-SPACE GRAPH OF POSSIBLE FUTURE STATES. SOFTWARE AGENTS USE INFORMATION ON THE TRAJECTORY OF THE ONGOING OPERATION, RATHER THAN A PRIORI STAFF ESTIMATES OF HOW THE BATTLE MIGHT UNFOLD, AS WELL AS SIMULATION TECHNOLOGIES, TO ASSESS THE LIKELIHOOD OF REACHING SOME SET OF POSSIBLE FUTURE STATES. THE LIKELIHOOD, UTILITY, AND FLEXIBILITY OF POSSIBLE FUTURE NODES IN THE STATE-SPACE GRAPH ARE COMPUTED AND EVALUATED TO FOCUS THE PLANNING EFFORTS. This notion is called anticipatory planning and involves the generation of options (either manual or semi-automated) ahead of "real time," before the options are needed. In addition, the Deep Green concept provides mechanisms for adaptive execution, which can be described as "late binding," or choosing a branch in the state-space graph at the last moment to maintain flexibility. By using information acquired from the ongoing operation, rather than assumptions made during the planning phase, commanders and staffs can make more informed choices and focus on building options for futures that are becoming more likely.
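The state-space-graph idea can be sketched concretely: propagate reach probabilities forward from the current state, then rank possible futures by likelihood-weighted utility to decide where to focus planning. The states, transition probabilities, and utilities below are invented for illustration:

```python
# Hypothetical state-space graph: node -> list of (successor, transition probability).
graph = {
    "now":     [("advance", 0.6), ("hold", 0.4)],
    "advance": [("seize_objective", 0.7), ("ambushed", 0.3)],
    "hold":    [("reinforced", 0.5), ("flanked", 0.5)],
}
# Invented utilities for the leaf futures.
utility = {"seize_objective": 1.0, "ambushed": -0.8, "reinforced": 0.4, "flanked": -0.5}

def node_likelihoods(graph, root):
    """Propagate reach probabilities forward from the current state."""
    like = {root: 1.0}
    frontier = [root]
    while frontier:
        node = frontier.pop()
        for succ, p in graph.get(node, []):
            like[succ] = like.get(succ, 0.0) + like[node] * p
            frontier.append(succ)
    return like

like = node_likelihoods(graph, "now")
# Focus planning effort on the futures with the best likelihood-weighted utility.
scores = {n: like[n] * u for n, u in utility.items()}
best_branch = max(scores, key=scores.get)
```

"Late binding" then corresponds to deferring the choice among children of the current node until execution, re-scoring as fresh trajectory information arrives.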

Heterogeneous Airborne Reconnaissance Team (HART)
---------------------------------------------------------------------------------------------------------
The complexity of counter-insurgency operations especially in the urban combat environment demands multiple sensing modes for agility and for persistent, ubiquitous coverage. The HART system implements collaborative control of reconnaissance, surveillance and target acquisition (RSTA) assets, so that the information can be made available to warfighters at every echelon.

Persistent Operational Surface Surveillance and Engagement (POSSE)
---------------------------------------------------------------------------------------------------------
The POSSE program is building a REAL-TIME, ALL-SOURCE EXPLOITATION SYSTEM TO PROVIDE INDICATIONS AND WARNINGS OF INSURGENT ACTIVITY DERIVED FROM AIRBORNE AND GROUND-BASED SENSORS. Envisioning a day when our sensors can be integrated into a cohesive "ISR Force," the program is building AN INTEGRATED SUITE OF SIGNAL PROCESSING, PATTERN ANALYSIS, AND COLLECTION MANAGEMENT SOFTWARE that will increase reliability, reduce manpower, and speed up responses.

Predictive Analysis for Naval Deployment Activities (PANDA)
---------------------------------------------------------------------------------------------------------
The current CONOPS for achieving situation awareness in the maritime domain calls for close monitoring of those entities we already have reason to be concerned about (i.e., those we already suspect are threats or that carry cargoes that could be dangerous in the wrong hands). PANDA will ADVANCE TECHNOLOGIES AND DEVELOP AN ARCHITECTURE THAT WILL ALERT WATCHSTANDERS TO ANOMALOUS SHIP BEHAVIOR AS IT OCCURS, allowing them to detect potentially dangerous behavior before it causes harm. These technologies and systems will be transitioned to various partners and customers throughout the development process, ensuring that the end product meets the needs of the services and watchstanders. Participants will work closely with the transition partners to aid in this process.
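At its simplest, alerting on anomalous ship behavior means comparing a new track report against a vessel's own history. The sketch below flags a speed report more than three standard deviations from the vessel's norm; the AIS-style numbers and the threshold are illustrative assumptions, and real maritime anomaly detection models far richer behavior (routes, rendezvous, loitering):

```python
import statistics

# Hypothetical track history: recent speed reports (knots) for one vessel.
history = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7]

def is_anomalous(speed, history, threshold=3.0):
    """Flag a report whose speed deviates more than `threshold` sigma from the vessel's norm."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(speed - mu) > threshold * sigma

# is_anomalous(2.5, history) -> True (sudden loitering); is_anomalous(12.0, history) -> False
```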

Urban Leader Tactical Response, Awareness & Visualization (ULTRA-Vis)
---------------------------------------------------------------------------------------------------------
Current military operations are focusing efforts on urban and asymmetric warfare, as well as distributed operations, but small unit leaders lack the capability to issue commands and share mission-relevant information beyond line-of-sight in an urban environment. Several factors limit mission effectiveness and tempo of operations:
1. Leaders communicate by shouting and hand signals;
2. Teams operate within earshot and line-of-sight;
3. Intra-squad radios are hard to hear; and
4. Leaders must stop to use handheld displays.
Military operations in the urban terrain (extensive areas with hostile forces, non-combatant populations, and complex infrastructure) require special capabilities and agility to conduct close-combat operations under highly dynamic, adverse conditions. In short, tactical leaders need the ability to adapt on the move, coordinate small unit actions and execute commands across a wider area of engagement. SIGNIFICANT TACTICAL ADVANTAGES COULD BE REALIZED THROUGH THE SMALL UNIT LEADER'S ABILITY TO INTUITIVELY GENERATE/ROUTE COMMANDS AND TIMELY ACTIONABLE COMBAT INFORMATION TO THE APPROPRIATE TEAM OR INDIVIDUAL WARFIGHTER IN A READILY UNDERSTOOD FORMAT THAT AVOIDS INFORMATION OVERLOAD.

============================================================
High Productivity Computing @ http://www.darpa.mil/ipto/thrust_areas/thrust_hpc.asp
============================================================
IPTO is DEVELOPING THE HIGH-PRODUCTIVITY, HIGH-PERFORMANCE COMPUTER HARDWARE AND THE ASSOCIATED SOFTWARE TECHNOLOGY BASE REQUIRED TO SUPPORT FUTURE CRITICAL NATIONAL SECURITY NEEDS FOR COMPUTATIONALLY-INTENSIVE AND DATA-INTENSIVE APPLICATIONS. THESE TECHNOLOGIES WILL LEAD TO NEW MULTI-GENERATION PRODUCT LINES OF COMMERCIALLY VIABLE, SUSTAINABLE COMPUTING SYSTEMS FOR A BROAD SPECTRUM OF SCIENTIFIC AND ENGINEERING APPLICATIONS, including both supercomputer and embedded computing. The goal is to ensure accessibility and usability of high end computing to a wide range of application developers, not just computational science experts. This is ESSENTIAL FOR MAINTAINING THE NATION'S STRENGTH IN SUPERCOMPUTING, BOTH FOR ULTRA LARGE-SCALE ENGINEERING APPLICATIONS AND FOR SURVEILLANCE AND RECONNAISSANCE DATA ASSIMILATION AND EXPLOITATION. ONE OF THE MAJOR CHALLENGES CURRENTLY FACING THE DOD IS THE PROHIBITIVELY HIGH COST, TIME, AND EXPERTISE REQUIRED TO BUILD LARGE COMPLEX SOFTWARE SYSTEMS. POWERFUL NEW APPROACHES AND TOOLS ARE NEEDED TO ENABLE THE RAPID AND EFFICIENT PRODUCTION OF NEW SOFTWARE, INCLUDING SOFTWARE THAT CAN BE EASILY CHANGED TO ADDRESS NEW REQUIREMENTS AND ADAPTED TO PLATFORM AND ENVIRONMENTAL PERTURBATIONS. Computing capabilities must progress dramatically if U.S. forces are to exploit an ever-increasing diversity, quantity, and complexity of sensor and other types of data. Doing so both in command centers and on the battlefield will require significantly increasing performance and significantly decreasing power and size requirements.

Programs [no descriptions were available for these programs at the time of writing]


Architecture-Aware Compiler Environment (AACE)
---------------------------------------------------------------------------------------------------------

Disruptive Manufacturing Technology, Software Producibility (DMT-SWP)
---------------------------------------------------------------------------------------------------------

High Productivity Computing Systems (HPCS)
---------------------------------------------------------------------------------------------------------


============================================================
Language Processing @ http://www.darpa.mil/ipto/thrust_areas/thrust_lp.asp
============================================================
At present, the exploitation of foreign language speech and text is slow and labor intensive and as a result, the availability, quantity and timeliness of information from foreign-language sources is limited. IPTO is creating NEW TECHNOLOGIES AND SYSTEMS FOR AUTOMATING THE TRANSCRIPTION AND TRANSLATION OF FOREIGN LANGUAGES. These language processing capabilities will enable our military to exploit large volumes of speech and text in multiple languages, thereby increasing situational awareness at all levels of command. In particular, IPTO is AUTOMATING THE CAPABILITY TO MONITOR FOREIGN LANGUAGE MEDIA AND TO EXPLOIT FOREIGN LANGUAGE NEWS BROADCASTS with one-way (foreign-language-to-English) translation technologies. IPTO is also DEVELOPING HAND-HELD, TWO-WAY (FOREIGN-LANGUAGE-TO-ENGLISH AND ENGLISH-TO-FOREIGN-LANGUAGE) SPEECH-TO-SPEECH TRANSLATION SYSTEMS that enable the warfighter on the ground to communicate directly with local populations in their native language. Finally, IPTO is creating TECHNOLOGIES TO EXPLOIT THE INFORMATION CONTAINED IN HARD-COPY DOCUMENTS AND DOCUMENT IMAGES RESIDENT ON MAGNETIC AND OPTICAL MEDIA CAPTURED IN THE FIELD. Making full use of all of the information extracted from foreign-language sources REQUIRES THE CAPABILITY TO AUTOMATICALLY COLLATE, FILTER, SYNTHESIZE, SUMMARIZE, AND PRESENT RELEVANT INFORMATION IN TIMELY AND RELEVANT FORMS. IPTO is DEVELOPING NATURAL LANGUAGE PROCESSING SYSTEMS TO ENHANCE LOCAL, REGIONAL AND GLOBAL SITUATIONAL AWARENESS AND ELIMINATE THE NEED FOR TRANSLATORS AND SUBJECT MATTER EXPERTS AT EVERY MILITARY SITE WHERE FOREIGN-LANGUAGE INFORMATION IS OBTAINED.

Programs


Global Autonomous Language Exploitation (GALE)
---------------------------------------------------------------------------------------------------------
The goal of the GALE (Global Autonomous Language Exploitation) program is to DEVELOP AND APPLY COMPUTER SOFTWARE TECHNOLOGIES TO ABSORB, TRANSLATE, ANALYZE, AND INTERPRET HUGE VOLUMES OF SPEECH AND TEXT IN MULTIPLE LANGUAGES, eliminating the need for linguists and analysts, and automatically providing relevant, concise, actionable information to military command and personnel in a timely fashion. Automatic processing "engines" will convert and distill the data, delivering pertinent, consolidated information in easy-to-understand forms to military personnel and monolingual English-speaking analysts in response to direct or implicit requests.
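Architecturally, GALE chains processing "engines": transcription, translation, and distillation. The sketch below shows only that pipeline shape with trivial stand-in functions; each real engine is a large statistical system, and the toy lexicon and query matching here are purely illustrative:

```python
# Hypothetical stand-ins for GALE's three "engines". Each real engine is a
# large statistical system; these one-liners only show how outputs chain.
def transcribe(broadcast):
    """Speech -> foreign-language text (stand-in: the text is already given)."""
    return broadcast["speech_text"]

def translate(text):
    """Foreign-language text -> English (stand-in: a two-word toy lexicon)."""
    lexicon = {"hola": "hello", "mundo": "world"}
    return " ".join(lexicon.get(word, word) for word in text.split())

def distill(docs, query):
    """Keep only the documents relevant to an analyst's request."""
    return [d for d in docs if query in d]

broadcast = {"speech_text": "hola mundo"}
english = translate(transcribe(broadcast))          # "hello world"
report = distill([english, "unrelated line"], "hello")
```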

Multilingual Automatic Document Classification Analysis and Translation (MADCAT)
---------------------------------------------------------------------------------------------------------
The United States has a compelling need for reliable information affecting military command, soldiers in the field, and national security. Currently, our warfighters encounter foreign language images in many forms, including, but not limited to, graffiti, road signs, printed media, and captured records in the form of paper and computer files. Given the quantity of foreign language material, it is difficult to interpret the salient pieces of information, much of which is either ignored or analyzed too late to be of any use. The mission of the Multilingual Automatic Document Classification Analysis and Translation (MADCAT) Program is to AUTOMATICALLY CONVERT FOREIGN LANGUAGE TEXT IMAGES INTO ENGLISH TRANSCRIPTS, thus eliminating the need for linguists and analysts while automatically providing relevant, distilled actionable information to military command and personnel in a timely fashion.

Spoken Language Communication and Translation System for Tactical Use (TRANSTAC)
---------------------------------------------------------------------------------------------------------
Today, phrase-based translation devices are being tactically deployed. These one-way devices translate English input into pre-recorded phrases in target languages. While such systems are useful in many operational settings, the inability to translate foreign speech into English is a significant limitation. The mission of the Spoken Language Communication and Translation System for Tactical Use (TRANSTAC) program is to demonstrate capabilities to rapidly develop and field TWO-WAY TRANSLATION SYSTEMS THAT ENABLE SPEAKERS OF DIFFERENT LANGUAGES TO SPONTANEOUSLY COMMUNICATE WITH ONE ANOTHER IN REAL-WORLD TACTICAL SITUATIONS.


============================================================
Sensors & Processing @ http://www.darpa.mil/ipto/thrust_areas/thrust_sp.asp
============================================================
U.S. forces and sensors are increasingly networked across service, location, domain (land, sea and air), echelon, and platform. This trend increases responsiveness, flexibility and combat effectiveness, but also increases the inherent complexity of sensor and information management. IPTO is CREATING SYSTEMS THAT CAN DERIVE HIGH-LEVEL INFORMATION FROM SENSOR DATA STREAMS (FROM BOTH MANNED AND UNMANNED SYSTEMS), PRODUCE MEANINGFUL SUMMARIES OF COMPLEX DYNAMIC SITUATIONS, AND SCALE TO THOUSANDS OF SOURCES. Future battlefields will continue to be populated with targets that use mobility and concealment as key survival tactics, and high-value targets will range from quiet submarines, to mobile missile/artillery, to specific individual insurgents. IPTO develops and demonstrates system CONCEPTS THAT COMBINE NOVEL APPROACHES TO SENSING, SENSOR PROCESSING, SENSOR FUSION, AND INFORMATION MANAGEMENT TO ENABLE PERVASIVE AND PERSISTENT SURVEILLANCE OF THE BATTLESPACE AND DETECTION, IDENTIFICATION, TRACKING, ENGAGEMENT AND BATTLE DAMAGE ASSESSMENT FOR HIGH-VALUE TARGETS IN ALL WEATHER CONDITIONS AND IN ALL POSSIBLE COMBAT ENVIRONMENTS. Finally, warfighters in the field must concentrate on observing their immediate environment but at the same time must maintain awareness of the larger battlespace picture, and as a result they are susceptible to being swamped by too much detail. IPTO is creating system approaches that can exploit context and advanced information display/presentation techniques to overcome these challenges.

Programs

Autonomous Real-time Ground Ubiquitous Surveillance - Imaging System (ARGUS-IS)
---------------------------------------------------------------------------------------------------------
The mission of the Autonomous Real-time Ground Ubiquitous Surveillance - Imaging System (ARGUS-IS) program is to provide military users a flexible and responsive capability to find, track and monitor events and activities of interest on a continuous basis in areas of interest.

The overall objective is to increase situational awareness and understanding, enabling the ability to find and fix critical events in a large area in enough time to influence them. ARGUS-IS provides military users an "eyes-on" persistent wide area surveillance capability to support tactical users in a dynamic battlespace or urban environment.

FOPEN Reconnaissance, Surveillance, Tracking and Engagement Radar (FORESTER)
---------------------------------------------------------------------------------------------------------
The Foliage Penetration Reconnaissance, Surveillance, Tracking and Engagement Radar (FORESTER) is a joint DARPA/Army program to develop and demonstrate an advanced airborne UHF radar capable of detecting people and vehicles moving under foliage. FORESTER will provide robust, wide-area, all-weather, persistent stand-off coverage of moving vehicles and dismounted troops under foliage, filling the surveillance gap that currently exists.

Multispectral Adaptive Networked Tactical Imaging System (MANTIS)
---------------------------------------------------------------------------------------------------------
The MANTIS program will develop, integrate and demonstrate A SOLDIER-WORN VISUALIZATION SYSTEM, CONSISTING OF A HEAD-MOUNTED MULTISPECTRAL SENSOR SUITE WITH A HIGH RESOLUTION DISPLAY AND A HIGH PERFORMANCE VISION PROCESSOR (ASIC), CONNECTED TO A SOLDIER-WORN POWER SUPPLY AND RADIO. The helmet-mounted MANTIS Vision Processor will provide the soldier with digitally fused, multispectral video imagery in real time from the Visible/Near Infrared (VNIR), the Short Wave Infrared (SWIR) and the Long Wave Infrared (LWIR) helmet-mounted sensors via the high resolution visor display. The processor adaptively fuses the digital imagery from the multispectral sensors, providing the highest-context, best nighttime imagery in real time under varying battlefield conditions. The system also ALLOWS THE VIDEO IMAGERY TO BE RECORDED AND PLAYED BACK ON DEMAND AND ALLOWS THE OVERLAY OF BATTLEFIELD INFORMATION. MANTIS will exploit the existing soldier radio network and PROVIDE SOLDIER-TO-SOLDIER SHARING OF VIDEO CLIPS VIEWED AS PICTURE-IN-PICTURE ON THEIR HELMET MOUNTED DISPLAYS. MANTIS WILL "regain the nighttime advantage" and "EXPLOIT THE NET" TO PROVIDE THE INDIVIDUAL SOLDIER WITH UNPRECEDENTED SITUATIONAL AWARENESS.
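The real MANTIS processor is a custom ASIC doing far more sophisticated adaptive fusion; the sketch below only illustrates the general idea of weighting each spectral band by how much information it carries. The toy frames and the variance-based weighting are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical co-registered 8x8 frames from the three MANTIS bands.
vnir, swir, lwir = (rng.random((8, 8)) for _ in range(3))

def fuse(bands):
    """Adaptive fusion sketch: weight each band by its contrast (variance)."""
    weights = np.array([band.var() for band in bands])
    weights = weights / weights.sum()      # normalize so weights sum to 1
    return sum(w * band for w, band in zip(weights, bands))

frame = fuse([vnir, swir, lwir])   # one fused image for the visor display
```

A production fuser would compute such weights per region rather than per frame, so a band that is informative in one part of the scene (e.g., LWIR on a warm body at night) dominates only there.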

NetTrack (NT)
---------------------------------------------------------------------------------------------------------
PERSISTENT RECONNAISSANCE, SURVEILLANCE, TRACKING AND TARGETING OF EVASIVE VEHICLES IN CLUTTERED ENVIRONMENTS.

Quint Networking Technology (QNT)
---------------------------------------------------------------------------------------------------------
In a network centric battle space, U.S. Forces must exploit distributed sensor platforms to rapidly and precisely find, fix, track, and engage static and moving targets in real time. There are several relevant thrusts to time critical targeting and strike areas within the Services. One aspect of these thrusts is to use data links to fully integrate tactical UAVs, dismounted ground forces and weapon control into the future network centric warfare environment.

The Quint Networking Technology (QNT) is a modular network data link program focused on providing a multi-band modular capability to close the seams between five nodes - aircraft, UCAVs, weapons, tactical UAVs and dismounted ground forces. The specific intended QNT hardware users are weapons, air control forces on the ground (dismounted) and tactical UAVs. These three are the focal points of the QNT effort, with the other two elements using hardware and waveforms from established programs. The assumption is that these two established platform types provide a starting point for building capability for the other three elements.

Standoff Precision ID in 3-D (SPI-3D)
---------------------------------------------------------------------------------------------------------
The SPI-3D program will develop and demonstrate the ability to provide precision geolocation of ground targets combined with high-resolution 3D imagery at useful standoff ranges. These dual capabilities will be provided using a sensor package composed of commercially available components. It will be capable of providing "optical quality precision at radar standoff ranges" and have the ability to overcome limited weapons effects obscuration, and penetrate moderate foliage. The figure below shows the operational concept of the SPI-3D system.

Urban Reasoning and Geospatial Exploitation Technology (URGENT)
---------------------------------------------------------------------------------------------------------
The recognition of targets in urban environments poses unique operational challenges for the warfighter. Historically, target recognition has focused on conventional military objects, with particular emphasis on military vehicles such as tanks and armored personnel carriers. In many cases, these threats exhibit unique signatures and are relatively geographically isolated from densely populated areas. The same cannot be said of today's asymmetric threats, which are embedded in urban areas, thereby forcing U.S. Forces to engage enemy combatants in cities with large civilian populations. Under these conditions, even the most common urban objects can have tactical significance: trash cans can contain improvised explosive devices, doors can conceal snipers, jersey barriers can block troop ingress, roof tops can become landing zones, and so on. Today's urban missions involve analyzing a multitude of urban objects in the area of regard. As military operations in urban regions have grown, the need to identify urban objects has become an important requirement for the military. URGENT WILL ENABLE UNDERSTANDING THE LOCATIONS, SHAPES, AND CLASSIFICATIONS OF OBJECTS FOR A BROAD RANGE OF PRESSING URBAN MISSION PLANNING ANALYTICAL QUERIES (E.G., FINDING ALL ROOF TOP LANDING ZONES ON THREE STORY BUILDINGS CLEAR OF VERTICAL OBSTRUCTIONS AND VERIFYING INGRESS ROUTES WITH MAXIMUM COVER FOR GROUND TROOPS). IN ADDITION, URGENT WILL ENABLE AUTOMATED TIME-SENSITIVE SITUATION ANALYSIS (E.G., ALERTING FOR VEHICLES FOUND ON A ROAD SHOULDER AFTER DARK AND ESTIMATING DAMAGE TO A BUILDING EXTERIOR AFTER AN EXPLOSION) THAT WILL MAKE A SIGNIFICANT POSITIVE IMPACT ON URBAN OPERATIONS.
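The kind of mission-planning query described above amounts to filtering a database of classified urban objects by spatial and semantic predicates. The object schema and attribute names below are invented for illustration:

```python
# Hypothetical object database: classified urban objects with 3-D attributes.
objects = [
    {"type": "rooftop", "stories": 3, "clear_of_obstructions": True},
    {"type": "rooftop", "stories": 3, "clear_of_obstructions": False},
    {"type": "rooftop", "stories": 5, "clear_of_obstructions": True},
    {"type": "jersey_barrier", "stories": 0, "clear_of_obstructions": True},
]

def landing_zones(objects):
    """Mission-planning query: rooftop LZs on three-story buildings, clear of vertical obstructions."""
    return [o for o in objects
            if o["type"] == "rooftop"
            and o["stories"] == 3
            and o["clear_of_obstructions"]]

# landing_zones(objects) -> one matching rooftop
```

The hard part URGENT targets is upstream of this query: producing the classified, attributed objects from raw sensor data in the first place.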

Vehicle and Dismount Exploitation Radar (VADER)
---------------------------------------------------------------------------------------------------------
VADER is a RADAR SYSTEM DESIGNED TO ENABLE THE SURVEILLANCE AND TRACKING OF GROUND VEHICLES AND DISMOUNTS from a Warrior (or similar) unmanned aerial vehicle (UAV) platform. VADER will PROVIDE REAL-TIME DATA PRODUCTS TO COMMAND ECHELONS AT WHICH THE REAL-TIME INFORMATION WILL BE IMMEDIATELY ACTIONABLE. For example, a warfighter could use the Warrior UAV with VADER installed to monitor a road, track a vehicle to a stop, OBSERVE DISMOUNT MOTION NEAR THE VEHICLE, CHARACTERIZE CERTAIN MOTIONS (LIKE SOMEONE CARRYING A HEAVY LOAD), AND MEASURE A GROUND DISTURBANCE AFTER THE VEHICLE DEPARTS.

Video and Image Retrieval and Analysis Tool (VIRAT)
---------------------------------------------------------------------------------------------------------
The overall goal of the Video and Image Retrieval and Analysis Tool (VIRAT) program is to produce A SCALABLE AND EXTENSIBLE END-TO-END SYSTEM THAT ENABLES MILITARY ANALYSTS TO OBTAIN GREATER VALUE FROM AERIAL VIDEO COLLECTED IN COMBAT ENVIRONMENTS.


============================================================
Emerging Technologies @ http://www.darpa.mil/ipto/thrust_areas/thrust_ep.asp
============================================================
IPTO is EXPLORING SEVERAL EMERGING INFORMATION PROCESSING TECHNOLOGIES INCLUDING NOVEL USES OF MODELING AND SIMULATION TO CREATE NEW BATTLE COMMAND PARADIGMS; REVOLUTIONARY APPROACHES TO POWER, SIZE AND PROGRAMMABILITY AS ENABLERS FOR COMPUTING AT THE EXASCALE; COMPUTATIONAL SOCIAL SCIENCE AS THE FOUNDATION FOR BETTER UNDERSTANDING OF THE WORLD FACED BY THE WARFIGHTER; ADVANCED SENSING ARCHITECTURES INCLUDING NEW SENSING MODALITIES TO COUNTER DIFFICULT THREATS; AUTOMATED STORAGE, INDEXING, ANALYSIS, CORRELATION, SEARCH, AND RETRIEVAL OF MULTIMEDIA DATA; AND TECHNIQUES TO ENABLE INFORMATION SHARING ACROSS ORGANIZATIONAL BOUNDARIES AND ADMINISTRATIVE/SECURITY DOMAINS.

Programs


Advanced Speech Encoding (ASE)
---------------------------------------------------------------------------------------------------------
Speech is the most natural form of human-to-human communication. However, THE MILITARY IS OFTEN FORCED TO OPERATE IN ENVIRONMENTS WHERE SPEECH IS DIFFICULT. For example, the quality and intelligibility of the acoustic signal can be severely degraded by the HARSH ACOUSTIC NOISE BACKGROUNDS that are common in military environments. In addition, many situations require warfighters to operate in silence and in a stealth mode so that their presence and intent are not compromised. THE ADVANCED SPEECH ENCODING (ASE) PROGRAM WILL DEVELOP TECHNOLOGY THAT WILL ENABLE COMMUNICATION IN THESE CHALLENGING MILITARY ENVIRONMENTS.

Information Theory for Mobile Ad-Hoc Networks (ITMANET)
---------------------------------------------------------------------------------------------------------
The mission of the Information Theory for Mobile Ad-Hoc Networks (ITMANET) program is TO DEVELOP AND EXPLOIT MORE POWERFUL INFORMATION THEORY CONCERNING MOBILE WIRELESS NETWORKS. The hypothesis of this program is that a specific challenge problem --- better understanding of MANET capacity limits --- will lead to actionable implications for network design and deployment. The anticipated byproducts of a more evolved theory include new separation theorems to inform wireless network "layering" as well as new protocol ideas.
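A feel for what "more powerful information theory" means at the single-link level comes from Shannon's AWGN capacity formula, C = B log2(1 + SNR), which upper-bounds any one link; MANET capacity theory asks the much harder multi-node, mobile analogue of this question. A minimal sketch (the bandwidth and SNR numbers are arbitrary):

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of a single AWGN link: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 1 MHz link at 20 dB SNR (linear SNR = 100):
c = awgn_capacity(1e6, 100)   # about 6.66 Mbit/s
```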

Integrated Crisis Early Warning System (ICEWS)
---------------------------------------------------------------------------------------------------------
The Integrated Crisis Early Warning System (ICEWS) program seeks to DEVELOP A COMPREHENSIVE, INTEGRATED, AUTOMATED, GENERALIZABLE, AND VALIDATED SYSTEM TO MONITOR, ASSESS, AND FORECAST NATIONAL, SUB-NATIONAL, AND INTERNATIONAL CRISES IN A WAY THAT SUPPORTS DECISIONS ON HOW TO ALLOCATE RESOURCES TO MITIGATE THEM. ICEWS will provide Combatant Commanders (COCOMs) with a powerful, systematic capability to anticipate and respond to stability challenges in the Area of Responsibility (AOR); allocate resources efficiently in accordance with the risks they are designed to mitigate; and track and measure the effectiveness of resource allocations toward end-state stability objectives, in near-real time.
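Forecasting systems of this kind are often built on statistical models that map observed indicators to a crisis probability. The sketch below uses a logistic model with invented weights and indicator names; it illustrates the shape of such a model, not anything ICEWS actually uses:

```python
import math

# Invented instability indicators and weights for illustration only.
weights = {"protest_events": 0.8, "economic_shock": 1.2, "leadership_change": 0.6}
bias = -3.0

def crisis_probability(indicators):
    """Logistic score over observed indicator counts (illustrative, not ICEWS's model)."""
    z = bias + sum(weights[k] * v for k, v in indicators.items())
    return 1 / (1 + math.exp(-z))

low = crisis_probability({"protest_events": 0, "economic_shock": 0, "leadership_change": 0})
high = crisis_probability({"protest_events": 3, "economic_shock": 1, "leadership_change": 1})
```

In a real system the weights would be fit to historical event data and validated out-of-sample, which is where the "validated" in the program's goal statement carries most of the difficulty.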

RealWorld
---------------------------------------------------------------------------------------------------------
The RealWorld program exploits technology innovation to PROVIDE EVERY WARFIGHTER WITH THE ABILITY TO OPEN A LAPTOP COMPUTER AND RAPIDLY CREATE A MISSION-SPECIFIC SIMULATION IN A RELEVANT GEO-SPECIFIC 3-D WORLD. Currently, major simulation programs are time consuming, expensive, and require graduate-level expertise in computer programming. RealWorld will remove these barriers and, for the first time, PUT THE TACTICAL ADVANTAGE OF REAL-TIME SIMULATION DIRECTLY INTO THE HANDS OF THE WARFIGHTER.

DARPA’s Mind’s Eye project likely to advance AI


The DARPA “Mind’s Eye” program is another example of an ambitious AI program that is likely to get us closer to human-level AI. The program will be run out of DARPA's TCTO, the Transformational Convergence Technology Office.

The Mind’s Eye program --- to reach its goals --- has to be able to:

-maintain a fairly large invariant ontology of objects, motions, humans, weapons, military behaviors, scenes, and scenarios that it recognizes across many different instantiations, forms, views, scales, and lighting conditions;
-perform visual scene recognition and understanding;
-understand the behaviors of the entities it is seeing;
-map such understandings into a larger, higher-level representation and understanding of what is taking place around it;
-presumably combine audio and visual recognition, since sound is an important source of information on a battlefield;
-carry out complex goal pursuit and attention focusing, to decide what to look at, what to track, and where to spend its optical and computational resources; and
-communicate in natural language, or by some other method create concise reports for human consumption and receive commands from humans.

In sum, this project would require quite an advanced set of AI capabilities to function well.

The following is quoted from a short PDF at
https://www.fbo.gov/download/ef9/ef9960d732bf796e6557916b4adf3ea9/DARPA_Minds_Eye_Industry_Day_Announcement_15March2010_(2).pdf , intended to spark interest in attending a meeting at which the project will be discussed in more detail. The BAA for this project does not appear to have been posted yet.

The Mind’s Eye program seeks to develop in machines a capability that currently exists only in animals: visual intelligence. Humans in particular perform a wide range of visual tasks with ease, which no current artificial intelligence can do in a robust way. Humans have inherently strong spatial judgment and are able to learn new spatiotemporal concepts directly from the visual experience. Humans can visualize scenes and objects, as well as the actions involving those objects. Humans possess a powerful ability to manipulate those imagined scenes mentally to solve problems. A machine‐based implementation of such abilities would be broadly applicable to a wide range of applications.

This program pursues the capability to learn generally applicable and generative representations of action between objects in a scene directly from visual inputs, and then reason over those learned representations. A key distinction between this research and the state of the art in machine vision is that the latter has made continual progress in recognizing a wide range of objects and their properties—what might be thought of as the nouns in the description of a scene. The focus of Mind’s Eye is to add the perceptual and cognitive underpinnings for recognizing and reasoning about the verbs in those scenes, enabling a more complete narrative of action in the visual experience.
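The nouns-versus-verbs distinction can be illustrated with a toy example: given tracked positions for two already-recognized objects, even a crude rule can label the action between them. The tracks, the distance rule, and the verb labels below are invented for illustration and are nowhere near the learned, generative representations the program seeks:

```python
# Hypothetical tracks: (x, y) positions per frame for two scene objects.
person = [(0, 0), (1, 0), (2, 0), (3, 0)]
vehicle = [(6, 0), (6, 0), (6, 0), (6, 0)]

def describe(track_a, track_b):
    """Toy 'verb' recognizer: label the action from the relative motion of two tracks."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    d_start = dist(track_a[0], track_b[0])
    d_end = dist(track_a[-1], track_b[-1])
    if d_end < d_start:
        return "approach"
    if d_end > d_start:
        return "move_away"
    return "remain"

# describe(person, vehicle) -> "approach"
```

The point of the program is that real verbs ("carry," "exchange," "bury") require learned spatiotemporal concepts, not hand-written rules like this one.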

One of the desired military capabilities resulting from this new form of visual intelligence is a smart camera, with sufficient visual intelligence that it can report on activity in an area of observation. A camera with this kind of visual intelligence could be employed as a payload on a broad range of persistent stare surveillance platforms, from fixed surveillance systems, which would conceivably benefit from abundant computing power, to camera‐equipped perch‐and‐stare micro air vehicles, which would impose extreme limitations on payload size and available computing power. For the purpose of this research, employment of this capability on man‐portable unmanned ground vehicles (UGVs) is assumed. This provides a reasonable yet challenging set of development constraints, along with the potential to transition the technology to an objective ground force capability.

Mind’s Eye strongly emphasizes fundamental research. It is expected that technology development teams will draw equally from the state of the art in cognitive systems, machine vision, and related fields to develop this new visual intelligence. To guide this transformative research toward operational benefits, the program will also feature flexible and opportunistic systems integration. This integration will leverage proven visual intelligence software to develop prototype smart cameras. Integrators will contribute an economical level of effort during the technology development phase, supporting participation in phase I program events (PI meetings, demonstrations, and evaluations) as well as development of detailed systems integration concepts that will be considered by DARPA at appropriate times for increased effort in phase II systems integration.

DARPA's deep learning program could advance AGI.

Below is a link to a DARPA request for proposals for a program to perform deep learning. It asks for a system that can automatically learn patterns of many different types from visual, auditory, and textual data with little human guidance, using automatically learned hierarchical invariant representations of the general type described in the first few paragraphs of "THE SOFTWARE" section of the above post.

This is the type of project which, if the right people got the funding, could really help advance AGI. It seems like Numenta, Poggio's group, or Hinton could all submit compelling responses to this proposal. The request says DARPA is interested in sponsoring multiple teams, and in disseminating much of what is learned to the public to advance the computing arts.

Start reading at page four of
http://www.darpa.mil/ipto/solicit/baa/BAA-09-40_PIP.pdf
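The core idea behind this kind of hierarchical representation learning can be illustrated with a toy autoencoder: a network trained, with no labels, to compress its input through a small hidden layer and reconstruct it, so the hidden units come to encode the recurring patterns in the data. This is only a minimal sketch of the general technique, not code from the DARPA program; the bit-pattern data, layer sizes, and learning rate are all invented for illustration.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n_in, n_hid = 6, 3
# tied-weight autoencoder: encode h = s(W x + b), decode y = s(W^T h + c)
W = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
b = [0.0] * n_hid
c = [0.0] * n_in

# toy inputs built from two underlying "features": left half on, right half on
data = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1],
        [1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0]]

def encode(x):
    return [sigmoid(sum(W[j][i] * x[i] for i in range(n_in)) + b[j])
            for j in range(n_hid)]

def decode(h):
    return [sigmoid(sum(W[j][i] * h[j] for j in range(n_hid)) + c[i])
            for i in range(n_in)]

lr = 0.5
for step in range(2000):
    x = random.choice(data)
    h = encode(x)
    y = decode(h)
    # output-layer deltas for squared error with a sigmoid output
    dy = [(y[i] - x[i]) * y[i] * (1 - y[i]) for i in range(n_in)]
    # hidden-layer deltas, backpropagated through the tied decoder weights
    dh = [sum(dy[i] * W[j][i] for i in range(n_in)) * h[j] * (1 - h[j])
          for j in range(n_hid)]
    for j in range(n_hid):
        for i in range(n_in):
            # tied weights receive gradient from both encode and decode paths
            W[j][i] -= lr * (dy[i] * h[j] + dh[j] * x[i])
        b[j] -= lr * dh[j]
    for i in range(n_in):
        c[i] -= lr * dy[i]

# total squared reconstruction error over the training patterns
err = sum(sum((decode(encode(x))[i] - x[i]) ** 2 for i in range(n_in))
          for x in data)
print(round(err, 3))
```

In the deep-learning systems the BAA describes, layers like this one are stacked: each layer learns features of the layer below it, yielding progressively more abstract and invariant representations.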

Other voices predicting AI by 2020

In the main post above I stated human-level AI could probably be built within roughly a decade, by 2020.

That is much sooner than the conventional wisdom in the AI community. But there are some very knowledgeable people who share my guess of approximately 2020. And some of them have considerable resources to throw at the problem.

In a Google Tech Talk, recorded in May 2006, Doug Lenat mentioned in passing that Sergey Brin, one of the two founders of Google, had said AI could be built by 2020. Doug Lenat is head of Cycorp, the corporate continuation of one of the largest and longest-running big-AI projects. Lenat's talk is at http://video.google.com/videoplay?docid=-7704388615049492068 . It provides a good overview of Cycorp's Cyc system, and has an amusing introduction of Doug by Peter Norvig, co-author of one of the leading textbooks on AI and Google's director of research.

In response to Lenat’s statement about Brin’s projection, I did a brief web search to see if I could find exactly what Brin had said about achieving AI by 2020. I was unable to find any other reference to the quote. But I did find the following information relevant to Google’s pursuit of AI and to the 2020 estimate.

As was cited on multiple web sites --- including http://www.naturalsearchblog.com/archives/2007/02/20/google-developing-artificial-intelligence-ai-brave-new-world --- Google's Larry Page said at the 2007 conference of the American Association for the Advancement of Science that researchers at Google were working on developing artificial intelligence. He said human brain algorithms actually weren't all that complicated and could likely be approximated with sufficient computational power. He said, "We have some people at Google (who) are really trying to build artificial intelligence and to do it on a large scale. It's not as far off as people think."

According to http://www.alexandriaarchive.org/blog/index.php?s=brin : Sergey Brin is reported to have said that the perfect search engine would "look like the mind of God". Similar ideas, but less extravagantly worded, have come from Marissa Mayer, Google's VP of Search Products and User Experience, when she talked about how Google's massive data stores and sophisticated algorithms are acting more and more like "intelligence".

In 2008 Nicholas Carr --- who served as executive editor of the Harvard Business Review, and who has written extensively on information technology --- wrote a book entitled The Big Switch: Rewiring the World, From Edison to Google. A review of it, at http://computersight.com/computers/the-big-switch-rewiring-the-world-from-edison-to-google-by-nicholas-carr/ , says:
“the book discussed the future of computing. The main discussion was with Google founders, Larry Page and Sergey Brin, about their dream of what their search engine will do in the coming years. According to Page and Brin, artificial intelligence is the main goal of those behind the future of Google. Google wants to link the human brain with the computer to share its search engine. The author also spoke about advancements Microsoft and other Computer Scientists want for the future of computing. …According to Carr, in 2020, Google’s dream may come true.”
At http://www.forbes.com/2008/01/11/google-carr-computing-tech-enter-cx_ag_0111computing.html , Andy Greenberg of Forbes.com interviews Carr about his book. Below is an excerpt:
[AG]Looking further ahead at Google's intentions, you write in The Big Switch that Google's ultimate plan is to create artificial intelligence. How does this follow from what the company's doing today?

[NC] It's pretty clear from what [Google co-founders] Larry Page and Sergey Brin have said in interviews that Google sees search as essentially a basic form of artificial intelligence. A year ago, Google executives said the company had achieved just 5% of its complete vision of search. That means, in order to provide the best possible results, Google's search engine will eventually have to know what people are thinking, how to interpret language, even the way users' brains operate.

Google has lots of experts in artificial intelligence working on these problems, largely from an academic perspective. But from a business perspective, artificial intelligence's effects on search results or advertising would mean huge amounts of money.

[AG] You've also suggested that Google wants to physically integrate search with the human brain.

[NC]This may sound like science fiction, but if you take Google's founders at their word, this is one of their ultimate goals. The idea is that you no longer have to sit down at a keyboard to locate information. It becomes automatic, a sort of machine-mind meld. Larry Page has discussed a scenario where you merely think of a question, and Google whispers the answer into your ear through your cellphone.

[AG] What would an ultra-intelligent Google of the future look like?


[NC]I think it's pretty clear that Google believes that there will eventually be an intelligence greater than what we think of today as human intelligence. Whether that comes out of all the world's computers networked together, or whether it comes from computers integrated with our brains, I don't know, and I'm not sure that Google knows. But the top executives at Google say that the company's goal is to pioneer that new form of intelligence. And the more closely that they can replicate or even expand how peoples' mind works, the more money they make.

[AG] You don't seem very optimistic about a future where Google is smarter than humans.

[NC] I think if Google's users were aware of that intention, they might be less enthusiastic about the prospect than the mathematicians and computer scientists at Google seem to be. A lot of people are worried about what a superior intelligence would mean for human beings.

I'm not talking about Google robots walking around and pushing humans into lines. But Google seems intent on creating a machine that's able to do a lot of our thinking for us. When we begin to rely on a machine for memory and decision making, you have to wonder what happens to our free will.
At http://www.latimes.com/news/printedition/asection/la-oe-keen12jul12,1,1010933.story?ctrack=1&cset=true , the following was reported in 2007 about Google CEO Eric Schmidt:
“By 2012, he wants Google to be able to tell all of us what we want. This technology, what Google co-founder Larry Page calls the "perfect search engine," might not only replace our shrinks but also all those marketing professionals whose livelihoods are based on predicting — or guessing — consumer desires.”
The article also says
“iGoogle is growing into a tightly-knit suite of services — personalized homepage, search engine, blog, e-mail system, mini-program gadgets, Web-browsing history, etc. — that together will create the world's most intimate information database. On iGoogle, we all get to aggregate our lives, consciously or not, so artificially intelligent software can sort out our desires. It will piece together our recent blog posts, where we've been online, our e-commerce history and cultural interests. It will amass so much information about each of us that eventually it will be able to logically determine what we want to do tomorrow and what job we want.”
http://www.computerworld.com.au/article/196706/bt_futurist_ai_entities_will_win_nobel_prizes_by_2020/ is an article about Ian Pearson, chief futurologist at British Telecom. In it he says:
“We will probably make conscious machines sometime between 2015 and 2020, I think. But it probably won't be like you and I. It will be conscious and aware of itself and it will be conscious in pretty much the same way as you and I, but it will work in a very different way. It will be an alien. It will be a different way of thinking from us, but nonetheless still thinking”
In response to the interviewer pointing out that

“…as soon as machines become intelligent, according to Moore's Law they will soon surpass humans. By the way, BT's 2006 technology timeline predicts that AI entities will be awarded with Nobel prizes by 2020, and soon after robots will become mentally superior to humans. What comes after that: the super intelligence or God 2.0? “

Pearson responds

“I think that I would certainly still go along with those time frames for superhuman intelligence, but I won't comment on God 2.0. I think that we still should expect a conscious computer smarter than people by 2020. I still see no reason why that it is not going to happen in that time frame. But I don't think we will understand it. The reason is because we don't even understand how some of the principal functions of consciousness should work. “
Of course, Microsoft Research is also putting a lot of effort into artificial intelligence research. A March 2, 2009 New York Times article at http://www.nytimes.com/2009/03/02/technology/business-computing/02compute.html , reports on some of Microsoft’s efforts in the field. Among other interesting things it says:
“Craig Mundie, the chief research and strategy officer at Microsoft, expects to see computing systems that are about 50 to 100 times more powerful than today’s systems by 2013.

“Most important, the new chips will consume about the same power as current chips, making possible things like a virtual assistant with voice- and facial-recognition skills that are embedded into an office door.

“We think that in five years’ time, people will be able to use computers to do things that today are just not accessible to them,” Mr. Mundie said during a speech last week. “You might find essentially a medical doctor in a box, so to speak, in the form of your computer that could help you with basic, nonacute care, medical problems that today you can get no medical support for.”

“With such technology in hand, Microsoft predicts a future filled with a vast array of devices much better equipped to deal with speech, artificial intelligence and the processing of huge databases.”
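As a rough arithmetic check (my own, not Mundie's or the Times'): a 50x to 100x gain in computing power between 2009 and 2013 implies an annual growth factor well above the conventional Moore's-law doubling pace, which suggests Mundie is counting architectural gains such as many-core parallelism, not just transistor scaling.

```python
# What annual growth factor does a 50x-100x jump over 4 years (2009-2013) imply?
years = 4
for factor in (50, 100):
    annual = factor ** (1 / years)
    print(f"{factor}x over {years} years -> {annual:.2f}x per year")

# For comparison, doubling every ~18 months (a common Moore's-law reading)
# over the same 4 years gives only about:
moore = 2 ** (48 / 18)
print(f"Moore's-law baseline over 4 years: {moore:.1f}x")
```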
So, in sum, there is good reason to believe there will be an explosion in AI in the next ten years.

More progress on generalized AI techniques

In my first post above on this topic I said:
--------------------------------
“WILL IT WORK?
“============

“The answer is most probably, yes, because ...
“...
“...many of the learning, inferencing, and inference control mechanisms deployed in more narrow applications can be generalized to have applicability to AGI. “
--------------------------------

As evidence of the above statement, I am attaching a link to a lecture by Pedro Domingos of the University of Washington on what he views as a highly generalized AI learning and inferencing system using Markov logic networks: http://videolectures.net/bsciw08_domingos_mlwuv/ . This representation shares many features with the hypergraph representation in OpenCogPrime by Goertzel et al.
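The core idea of a Markov logic network can be sketched in a few lines: attach weights to first-order formulas, and make a world's probability proportional to the exponential of the total weight of its satisfied groundings. The minimal sketch below uses the classic smoking/cancer example from Domingos's papers, with illustrative weights of my own choosing, and brute-force enumeration in place of the real system's scalable inference algorithms.

```python
import math
from itertools import product

people = ["A", "B"]
# weighted first-order formulas (weights are illustrative, not from the lecture)
W_SMOKING_CAUSES_CANCER = 1.5   # Smokes(x) -> Cancer(x)
W_FRIENDS_SMOKE_ALIKE = 1.1     # Friends(x,y) & Smokes(x) -> Smokes(y)

def world_weight(world):
    """exp of the total weight of satisfied ground formulas in this world."""
    total = 0.0
    for x in people:
        # grounding of Smokes(x) -> Cancer(x)
        if (not world[("Smokes", x)]) or world[("Cancer", x)]:
            total += W_SMOKING_CAUSES_CANCER
        for y in people:
            if x == y:
                continue
            # grounding of Friends(x,y) & Smokes(x) -> Smokes(y)
            if (not (world[("Friends", x, y)] and world[("Smokes", x)])) \
                    or world[("Smokes", y)]:
                total += W_FRIENDS_SMOKE_ALIKE
    return math.exp(total)

atoms = ([("Smokes", p) for p in people] +
         [("Cancer", p) for p in people] +
         [("Friends", x, y) for x in people for y in people if x != y])

def marginal(query, evidence):
    """P(query | evidence) by summing over all possible worlds."""
    num = den = 0.0
    for values in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if any(world[a] != v for a, v in evidence.items()):
            continue
        w = world_weight(world)
        den += w
        if world[query]:
            num += w
    return num / den

p = marginal(("Cancer", "A"), {("Smokes", "A"): True})
print(round(p, 3))  # sigmoid(1.5) ≈ 0.818, since only one formula touches Cancer(A)
```

Brute-force enumeration is exponential in the number of ground atoms; the point of the systems Domingos describes is precisely to do this kind of inference without enumerating worlds.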

There are some other interesting lectures from the same conference at http://videolectures.net/bsciw08_whistler/

“How Long Till Human-Level AI? What Do the Experts Say?”

Regarding this topic --- there is a very good article entitled “How Long Till Human-Level AI? What Do the Experts Say?” written by Ben Goertzel, Seth Baum, and Ted Goertzel at http://hplusmagazine.com/articles/ai/how-long-till-human-level-ai

To me its most important information is in the figure entitled “When Will Milestones of AGI be Achieved without New Funding”. It indicates that, of the 21 attendees at the AGI 2009 conference who answered the survey, 42% think AGIs capable of passing the Turing Test will be created within ten to twenty years.

Oddly, that is slightly more than the 38% who think AGIs would achieve the human-like capabilities of a 3rd grader within the same time frame. This might reflect the influence on many of the attendees of the famous ELIZA experiment, a quasi Turing Test that actually managed to fool some people into thinking they were reading text generated by a human doctor --- using mid-1960s computers.

I have always assumed the Turing Test would be administered by humans who understood human psychiatry, brain function, and artificial intelligence sufficiently well that they would be able to smoke out a sub-human intelligence relatively quickly.

In fact, I am the person quoted in that article for giving my reasons why I thought it would be more difficult to make a computer pass the Turing Test than to possess many of the other useful intellectual capabilities of a powerful human mind --- as quoted in the paragraph that follows:

“One observed that “making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many, complex behaviors of a human mind — many of which would actually be hindrances when it comes to creating Nobel-quality science.” He observed “humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all which would probably impede scientific ability, rather that promote it.” To successfully emulate a human, a computer might have to disguise many of its abilities, masquerading as being less intelligent — in certain ways — than it actually was. There is no compelling reason to spend time and money developing this capacity in a computer.”


I thought the idea --- suggested in one of the survey questions mentioned in the article --- that AGI might be funded with 100 billion dollars is a little rich. I understand, however, that such a large figure was picked to --- in effect --- ask people how fast they thought AGI would be developed if money were virtually no obstacle.

I think AGI could be developed over ten years for well under 500 million dollars if the right people were administering and working on the project. (This does not count all the other money that is already likely to be invested in electronics, computer science, and more narrow AI in the coming decade.) Unfortunately, it would be hard for the government to know who the right people, and what the right approaches, were for such a project. But I believe a well-designed project aimed at achieving human-level AGI almost certainly could succeed in ten years with only 2 to 4 billion dollars of funding over that period. Such a project would fund multiple teams with, say, 10 to 30 million dollars each to start, and then increasingly allocate funding over time to the teams and approaches that produced the most promising results.
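The staged-funding scheme just described can be sketched as a simple simulation: equal seed grants, then each later tranche allocated in proportion to demonstrated progress, so funding concentrates on the strongest teams. Every number below --- team count, grant sizes, tranche sizes, and the "true quality" model of each team --- is invented purely for illustration.

```python
import random

random.seed(1)

# Hypothetical staged-funding sketch: seed many teams modestly, then shift
# each later tranche of money toward the teams showing the best results.
n_teams = 10
true_quality = [random.uniform(0.0, 1.0) for _ in range(n_teams)]  # unknown to the funder
budgets = [20.0] * n_teams            # $20M starting grants (illustrative)
tranches = [200.0, 300.0, 400.0]      # later rounds of funding, in $M

for tranche in tranches:
    # observed progress: true quality times money received so far, plus noise
    progress = [q * b * random.uniform(0.7, 1.3)
                for q, b in zip(true_quality, budgets)]
    total = sum(progress)
    # allocate the next tranche proportionally to demonstrated progress
    for i in range(n_teams):
        budgets[i] += tranche * progress[i] / total

best = max(range(n_teams), key=lambda i: true_quality[i])
print("best team's final budget ($M):", round(budgets[best], 1))
print("total program cost ($M):", round(sum(budgets), 1))
```

Because each round's allocation is proportional to quality-times-budget, good teams compound their advantage, which is the intended rich-get-richer behavior of the scheme described above.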

2 to 4 billion dollars over ten years would be totally within the funding capacity of multiple government agencies.

Developing AGI in that time frame would be exceptionally valuable to America --- because it would give us a tremendous chance to save our economy before it is bled to death --- by our trade imbalance with the rapidly developing world --- and --- by the many tens of trillions of dollars in health care and other unfunded benefits America owes its seniors and government workers.