Published: 2013-10-06 | Source: assignment.cc
Brain Cell Computer
Introduction:
These days it is widely understood that our thinking ability comes from the biological and psychological processes of the brain. These processes involve billions of brain cells working together, shaped by years of learning and experience, to recognize patterns in the information carried into the brain from the senses. Before the theory of evolution, most people believed that our thinking ability was a mystical experience somehow separate from the material world; our souls were thought to be made of a heavenly essence.
In the human brain each nerve cell works like a tiny chemical computer. Modern computer processors can process information millions of times faster than a single brain cell, but because a microprocessor can only process a single stream of information at a time, computers need to become another million times more powerful before they can recognize patterns as effectively as the billions of cells in the human brain.
The power of computers is rapidly increasing, and with new technologies like three-dimensional chips, parallel processing, and photonic computing, thinking computers are probably only decades away from becoming a reality. Unlike human brains, computers are precise in their calculations, and their memory does not fade over time.
Many people say that machines will only ever be tools for humans to use and believe that computers will never be able to think like humans. People who argue against the possibility of thinking computers are usually those who do not have a strong understanding of science and technology or those who claim that human intelligence requires some kind of heavenly essence.
Denial of the possibility of machine intelligence will not prevent it from happening. The desire for military superiority and higher profits will ensure that development of this technology continues.
In this paper I am going to explain the history and evolution of AI, the different types of competitions in AI, and the role those competitions play. I will also give a general review of AI.
History and Evolution of AI:
Computers provided the technology necessary for AI, but it was not until the early 1950s that the link between human intelligence and machines was really observed.
In late 1955, Newell and Simon developed the Logic Theorist, considered by many to be the first AI program. The program represented each problem as a tree model and attempted to solve problems by selecting the branch most likely to lead to the correct conclusion. The impact that the Logic Theorist made on both the public and the field of AI makes it a crucial stepping stone in the development of the field.
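The branch-selection idea can be sketched as a best-first search over a problem tree. This is a modern reconstruction for illustration only, not Newell and Simon's actual program; the toy goal and the scoring function here are invented assumptions:

```python
import heapq

def best_first_search(root, successors, score, is_goal):
    """Expand the most promising node first, as ranked by a heuristic score.

    `successors(node)` yields child nodes; `score(node)` estimates how
    promising the node is (lower = more promising).
    """
    frontier = [(score(root), root)]
    seen = {root}
    while frontier:
        _, node = heapq.heappop(frontier)  # take the best-scoring branch
        if is_goal(node):
            return node
        for child in successors(node):
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (score(child), child))
    return None  # no branch led to the goal

# Toy example: build the string "abc" one letter at a time.
goal = "abc"
result = best_first_search(
    "",
    successors=lambda s: [s + c for c in "abc"] if len(s) < 3 else [],
    score=lambda s: sum(1 for x, y in zip(s, goal) if x != y) + (3 - len(s)),
    is_goal=lambda s: s == goal,
)
print(result)  # -> abc
```

The essential point is that the heuristic prunes the tree: branches that look unpromising are simply never expanded.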
John McCarthy, considered to be the father of AI, organized a conference in 1956 to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. McCarthy invited them to Dartmouth College in New Hampshire for "The Dartmouth Summer Research Project on Artificial Intelligence." Because of McCarthy, the field would be known as artificial intelligence from then on. Although not a huge success, the Dartmouth conference did bring together the founders of AI and served to lay the groundwork for the future of AI research.
The first version of a new program, the General Problem Solver (GPS), was tested in 1957.
McCarthy was busy developing a major breakthrough in AI history, and in 1958 he announced his new development: the LISP language, which is still used today. LISP stands for List Processing, and it was soon adopted as the language of choice among most AI developers.
The Massachusetts Institute of Technology (MIT) received a 2.2 million dollar grant from the United States government in 1963 to be used in researching Machine-Aided Cognition (artificial intelligence). The grant, from the Department of Defense's Advanced Research Projects Agency (ARPA), was intended to ensure that the US would stay ahead of the Soviet Union in technological advancement. The project served to increase the pace of development in AI research by drawing computer scientists from around the world.
After a few years many programs had been developed; one notable example was SHRDLU, part of the micro-worlds project. This consisted of research and programming in small, simplified worlds, such as one containing a limited number of geometric shapes.
Turing Test:
The brilliant British mathematician Alan Turing played a great role in the development of the computer. The imitation game known as the Turing test was devised by Turing as a method for deciding whether a computer program is intelligent. In other words: can computers think?
The Turing test takes place between two subjects and an interrogator. The interrogator communicates with the two subjects via a computer terminal and must decide which one is a human being and which is a computer program, without seeing either of the subjects or speaking to them. The human being helps the interrogator to make the correct identification, whereas the computer program attempts to trick the interrogator into making the wrong identification. If the computer succeeds in this, it is said to be exhibiting intelligence.
[Example of Turing Test]
For example, the interrogator may ask both subjects to do a mathematical calculation, expecting that the computer will get it correct, and faster and more accurately than the human. To counter this strategy the computer needs to know when it should fail to get the correct answer so as to appear human. To probe the subjects' identities on the basis of emotion, the interrogator may ask both subjects to respond to a poem, which requires the computer to have knowledge of the emotional makeup of human beings in order to produce a convincing response.
The great advantage of the Turing test is that it allows the interrogator to evaluate almost all of the evidence that we would assume to constitute thinking. Alan Turing died in 1954, a decade before conversation simulators such as ELIZA emerged; it is indeed unfortunate that he did not live to witness his test being performed.
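Conversation simulators in the ELIZA tradition relied on simple keyword pattern matching rather than genuine understanding. A minimal sketch of the idea follows; the rules below are illustrative inventions, not ELIZA's actual script:

```python
import re

# Keyword pattern -> response template; \1 echoes part of the user's input.
RULES = [
    (r".*\bI am (.*)", "How long have you been \\1?"),
    (r".*\bmy (\w+)", "Tell me more about your \\1."),
    (r".*\bbecause (.*)", "Is that the real reason?"),
]

def respond(utterance):
    """Return the response of the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return match.expand(template)
    return "Please go on."  # default when no keyword rule fires

print(respond("I am feeling tired"))   # -> How long have you been feeling tired?
print(respond("It rains a lot here"))  # -> Please go on.
```

A program this shallow can still fool an unwary interrogator for a while, which is exactly why the Turing test puts the burden of judgement on sustained questioning.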
Competitions in AI:
Abbadingo:
The Abbadingo One Deterministic Finite Automata (DFA) Learning Competition was organized by two of the authors (Lang and Pearlmutter) and consisted of a set of challenge problems posted to the internet, with token cash prizes of $1024.
The organizers had the following goals:
1). Encourage the development of new and better algorithms.
2). Encourage learning theorists to implement some of their ideas and gather empirical data concerning their performance on complex problems which lie beyond the proven bounds, particularly in the direction of sparser training data.
3). Encourage empiricists to test their favourite methods on specific target concepts with high Kolmogorov (descriptive/algorithmic) complexity, under strict experimental conditions that permit comparison of results between different groups by eliminating the possibility of hill climbing on test set performance.
The first goal, the development of new and better algorithms, was clearly achieved: both Rodney Price and Hugues Juille made useful contributions to the state of the art in DFA induction.
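The object being learned in Abbadingo is a deterministic finite automaton that must label strings consistently with the training data. A minimal sketch of such a machine and the consistency check, using an invented toy target (real competition entries used far more sophisticated induction algorithms, such as evidence-driven state merging):

```python
# A DFA over alphabet {0, 1}: a start state, a set of accepting states,
# and a transition table. This toy machine accepts binary strings
# containing an even number of 1s.
START = "even"
ACCEPT = {"even"}
DELTA = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(string):
    """Run the DFA over the string and report whether it ends accepting."""
    state = START
    for symbol in string:
        state = DELTA[(state, symbol)]
    return state in ACCEPT

def consistent(dfa_accepts, labelled_sample):
    """Check a candidate DFA against training data: (string, label) pairs."""
    return all(dfa_accepts(s) == label for s, label in labelled_sample)

sample = [("", True), ("1", False), ("11", True), ("101", True), ("10", False)]
print(consistent(accepts, sample))  # -> True
```

The hard part of the competition was not running a DFA but inferring one from sparse labelled strings, which is why the organizers emphasized sparser training data in goal 2.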
CoNLL:
CoNLL stands for the Conference on Computational Natural Language Learning. The shared tasks of CoNLL-2004 and CoNLL-2005 concerned the recognition of semantic roles for the English language, based on PropBank predicate-argument structures. Other tasks were named entity recognition (2002-2003), clause identification (2001), and so on.
Two years of dependency parsing in the CoNLL shared task brought an enormous boost to the development of dependency parsers for multiple languages. Yet we have only vague ideas about the strengths and weaknesses of different methods for languages with different typological characteristics, even though nineteen different languages have been covered by many different parsing and learning approaches.
DARPA:
DARPA stands for the Defense Advanced Research Projects Agency. Dr Anthony Tether, the director of DARPA, said that DARPA attracts people who look at problems in a different way and find solutions creatively. Tether also said that DARPA is a place where the risk and the payoff are both high (a $1,000,000 grand prize) and success may provide dramatic advances for traditional military roles and missions.
DARPA was the driving force behind technological advances such as the development of the F-117 stealth aircraft and the B-2 bomber. More recently, DARPA's research led to the creation of unmanned aircraft such as the Global Hawk and the Predator, which now carry out surveillance, reconnaissance and precision bombing missions.
DARPA is constantly aware of the threats to national security and the technological opportunities facing the USA. As a result, DARPA currently emphasizes research in eight strategic thrusts:
1). Detection, Precision ID, Tracking and Destruction of Elusive Surface Targets.
2). Location and Categorization of Underground Structures.
3). Networked Manned and Unmanned Systems.
4). Robust, Self-forming Tactical Networks.
5). Assured Use of Space.
6). Cognitive Computing.
7). Bio-Revolution.
8). Force Multipliers for Urban Area Operations.
The Agency's ability to adapt rapidly to changing environments and to seek and embrace opportunity, both in technology and in processes, while maintaining its historically proven principles, makes DARPA a unique research and development organization.
On Monday, March 8th, 2004, DARPA held a competition in which an autonomous car (unmanned and not controlled by remote control) would drive itself from Los Angeles to Las Vegas over varied terrain including sand, natural obstacles, cattle guards, erosion gullies, etc.
No one succeeded in that race, which was a shame; no team managed to build a car that could drive itself over the full distance within the time limit. But the very next year, in 2005, several autonomous vehicles took part and completed the course, unmanned and without remote control, and the winning team claimed the $2 million prize.
Loebner Prize:
The first Loebner contest was held on the 8th of November 1991 in Boston's Computer Museum. According to Walter Daelemans, a contributor to the Conference on Computational Natural Language Learning (CoNLL) and to the EMNLP and ACL conferences, the state of the art in the computational language learning field can be divided into two types of work: the methodological and the engineering. Daelemans said, "There has been progress in theory and methodology, but that is not sufficient."
His argument was against the engineering work: great progress has been made in engineering, he said, but most often with incremental progress on specific tasks as the result, rather than increased understanding of how language can be learned from data. I agree with his view.
In 1998 MegaHAL was introduced, using a technique different from those of previous contestants in the Loebner contest. The purpose of MegaHAL was to demonstrate a different method of simulating conversation. It was also able to give replies in different languages.
NIST Open Machine Translation (MT):
The National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S. Department of Commerce. NIST's mission is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.
NIST carries out its mission through four cooperative programs. The NIST Laboratories conduct research that advances the nation's technology infrastructure and is needed by U.S. industry to rapidly improve products and services. The Baldrige National Quality Program promotes performance excellence among U.S. manufacturers, service companies, educational institutions and health care providers; it conducts outreach programs and manages the annual Malcolm Baldrige National Quality Award, which recognizes performance excellence and quality achievement. The Hollings Manufacturing Extension Partnership is a nationwide network of local centres offering both technical and business assistance to smaller manufacturers. Finally, the Technology Innovation Program is planned to provide cost-shared awards to industry, universities and consortia for research on potentially revolutionary technologies that address critical national and societal needs.
Review of "Artificial Intelligence: A General Survey":
Cambridge University professor Sir James Lighthill was a famous hydrodynamicist with a recent interest in applications to biology. His review of AI was written at the request of Brian Flowers, head of the Science Research Council of Great Britain, the main funding body for British university scientific research.
In the Lighthill Report, AI research falls into three categories.
Category A
This is advanced automation, or applications. Lighthill approves of this category in principle. Category A contains activities that are obviously applied, but also activities that study the structure of intelligent behaviour, such as computer chess playing, which is often pursued in order to observe such behaviour.
Category C
This consists of the study of the central nervous system, and of computer modelling in support of both neurophysiology and psychology.
Category B
This is characterized as "building robots" and as the "bridge" between Category A and Category C. According to Lighthill, a robot here is a program or device built neither to serve a useful purpose nor to study the central nervous system, which excludes Unimates and similar machines referred to as industrial robots. Lighthill's "bridge" definition implies that work in Category B is worthwhile only insofar as it contributes to the other categories.
A serious consequence of these three categories is that most AI researchers lose intellectual contact with Lighthill immediately, because the categories leave no place for what is, or should be, their main scientific activity: studying the structure of information and the structure of the problem-solving process independently of applications and independently of its realization in animals or humans. This study is based on different ideas (see the Lighthill report).
According to John McCarthy, AI's contribution to practical applications has been significant but so far mostly peripheral to the central ideas and problems of AI. Thus the LISP language for symbolic computing was developed for use in AI, but has had applications to symbolic computation in other areas, for example physics. Some ideas, like recursive function definition and conditional expressions, have been used in programming languages. However, the ideas that have been used elsewhere are not specifically AI and might have been developed without AI in mind. Time-sharing, for example, the first proposals for which had AI motivations, and techniques of picture processing that were first developed in AI laboratories have been used elsewhere. Even the current work on assembly using vision might have been developed without AI in mind.
Contribution of AI
The contribution of AI to neurophysiology has been small and mostly of a negative character, i.e. showing that certain mechanisms are not well defined or are inadequate to produce the outcomes proposed by neurologists. Twenty years of experience in programming machines to learn and solve problems makes it implausible that cell assemblies per se would learn much without some additional organization. Physiologists today would be unlikely to propose such a theory; however, showing that something is unlikely to work is not a positive contribution.
There will be more interaction between AI researchers and neurophysiologists as soon as the neurophysiologists are ready to compare information-processing models of higher-level functions with physiological data. There is little contact at the nerve-cell level because almost any proposed model of the neuron is a universal computing element, so that there is no connection between the structure of the neuron and what higher-level processes are possible.
On the other side, the impact of AI research on psychology has been larger, as attested by various psychologists. First of all, psychologists have come to use models in which complex internal data structures that cannot be observed directly are attributed to animals and humans. Psychologists use these models because they exhibit behaviour that cannot be exhibited by models conforming to the tenets of behaviourism, which allow only connections between externally observable variables. Information-processing models in psychology have also induced dissatisfaction with psychoanalytic theories and theories of emotional behaviour. Namely, information-processing models of emotional states can yield predictions that can be compared with experiment or experience in more obvious ways than can the vague models of psychoanalysis and its offspring.
Computerized telephone services are using voice recognition to automate bookings and bill payments, as computers are already being programmed to understand speech. As this technology continues to improve, natural-sounding voices and a growing range of responses will make it increasingly difficult to tell the difference between a computer and a human operator.
Computer users who can now type keywords into Internet search engines to find all of the available information on a particular subject will in the future have search technologies that can read through web pages, extract relevant words and phrases, and compile them into reports that provide the most accurate, readable and direct answers to people's questions.
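The keyword-extraction step described above can be sketched with simple term-frequency scoring. This is a toy illustration under an assumed stopword list, not how any real search engine ranks text:

```python
import re
from collections import Counter

# Assumed stopword list: common words that carry little topical meaning.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "are", "on"}

def top_keywords(text, n=3):
    """Rank words by how often they occur, ignoring stopwords."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

page = ("Search engines index pages so that search queries match pages. "
        "Engines rank pages by relevance to the queries.")
print(top_keywords(page))  # -> ['pages', 'search', 'engines']
```

Real systems go far beyond raw frequency (weighting rare terms more heavily, using link structure, and so on), but the basic move of reducing a page to its most telling words is the same.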
Evaluation Tools:
[http://www.nist.gov/speech/tools/index.html]
What we have had, and what we could have in 50 years' time:
Looking at AI since Turing's 1950 paper, many approaches have been tentatively rejected, including automaton models, random search, sequence extrapolation and many others. Different heuristics have been developed to reduce various kinds of tree search; some are specific to particular applications, but others are general. Good progress has been made on the kinds of information that can be represented in the memory of a computer, though a fully general representation is not yet available. The problems of perception in speech and vision have been emphasised, and recognition has been found feasible in many cases. A beginning has been made on understanding the semantics of natural language.
Progress has been slower than predicted but has continued nonetheless. Artificial intelligence problems that had begun to seem impossible in 1970 have been solved, and there are now successful commercial products, for example optical character recognition, industrial robotics, speech recognition, data mining and Google's search engine, to name a few. In other areas, such as robotics, tremendous progress has been made: in 1970 the robot Shakey could not reliably cross a room in 8 hours, but by 1995 the VaMP robot car of Mercedes-Benz and Ernst Dickmanns was driving on the Autobahn in traffic at up to 180 km/h.
Some discussion and conclusion:
John McCarthy's opinion on Lighthill's review was that present AI research suffers from some major deficiencies, apart from the fact that any scientists would achieve more if they were smarter and worked harder.
McCarthy says that when someone programs a computer to do something no computer has done before, or writes a paper pointing out that the computer did it, the paper often gives no direction to the identification and study of intellectual mechanisms and contains no coherent account of how the program works. For example, the SIGART newsletter prints the games from ACM computer chess tournaments just as though the programs were human players and their innards were inaccessible. People want to know: if a program missed the right move, what was its "thinking" at that time? From this experience we also need analysis of the standard of play and of how future programs can recognise this level and play better.
Mathematicians are often attracted to AI problem solving by the intrinsic interest of theories that can be expressed mathematically. Unfortunately for the mathematicians, many plausible mathematical theories, like control theory and statistical decision theory, have turned out to have little relation to AI. More recently, the problems of theorem proving and representation have led to interesting mathematical problems in logic and the mathematical theory of computation.
So, through this paper we have seen the evolution of AI, its contributions, and the different competitions. I conclude that even if John McCarthy said AI was at a moderate level, it is my belief that AI's progress has gone far beyond the moderate level, that competition plays an important role in the AI community, and that AI gives its researchers the chance to do more research and achieve more success in the field. AI is huge and there are many more treasures to hunt. The race continues...
Bibliography
This bibliography lists the books and web references I consulted in writing this essay; they helped me in different ways and gave me proper guidance.
BOOKS:-
1) Luger G. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, fifth edition. England: Addison Wesley, 2005.
2) Steels and Campbell J. Progress in Artificial Intelligence. England: Ellis Horwood Limited, 1985.
References:-
Abbadingo - DFA(Deterministic Finite Automata) learning competition Accessed on 18th Mar 2008, [http://abbadingo.cs.unm.edu/] [http://algoval.essex.ac.uk/rep/fst/CanberraDFA.pdf]
Butterman E and Travis C “ Evolution of AI” Accessed on 22 Mar 2008 [http://www.graduatingengineer.com/articles/feature/04-09-01a.html]
CoNLL – Conference on Computational Natural Language Learning
Accessed on 24th Mar 2008 [http://ifarm.nl/signll/conll/]
DUC- Document Understanding Conferences. New York 2007 [http://www-nlpir.nist.gov/projects/duc/pubs.html]
History of AI. Accessed on 25th Mar 2008 [http://en.wikipedia.org/wiki/History_of_artificial_intelligence]
[http://library.thinkquest.org/2705/history.html]
Hall J., McDonald R. and Nilsson J. The CoNLL Shared Task on Dependency Parsing. University of Edinburgh. Accessed on 27th Mar 2008. [http://acl.ldc.upenn.edu/D/D07/D07-1096.pdf]
Kolmogorov_complexity, Webmaster, Accessed on 30th Mar 2008. [http://en.wikipedia.org/wiki/Kolmogorov_complexity].
McCarthy J. “ Review of Artificial Intelligence: A General Survey”. 2000. Stanford University. Accessed on 25th Mar 2008. [http://www.formal.stanford.edu/jmc/reviews/lighthill/lighthill.html]
NIST- General Information. 2001,Webmaster. Accessed on 30th Mar 2008.[http://www.nist.gov/public_affairs/general2.htm]
Parker M and Parker G. CEC- 2007 Xpilot–AI competition. Accessed on
20th Mar 2008.[http://www.xpilot-ai.org/competition/index.html]
Stewart R. "Shattering the Sacred Myths". Jan 2006. Ingrams. Accessed on 25th Mar 2008. [http://www.evolutionarymetaphysics.net/advancing_technology.html]
The DARPA Grand Challenge: Commemorative. Accessed on 24th Mar 2008. [http://www.darpa.mil/grandchallenge04/program.pdf]
Turing Test, university of California. USA. Accessed On 24th Mar 2008. [http://crl.ucsd.edu/~saygin/papers/MMTT.pdf] [http://www.alanturing.net/turing_archive/graphics/turingtest.gif]