The aim of the research is to give robots social skills similar to those of humans

One argument for why robots will never fully measure up to people is that they lack human-like social skills. But researchers are experimenting with new methods to give robots the social skills to interact better with humans. Two new studies provide evidence of progress in this kind of research.

One experiment was carried out by researchers from the Massachusetts Institute of Technology (MIT). The team developed a machine learning system for self-driving vehicles that is designed to learn the social characteristics of other drivers.

The researchers studied driving situations to learn how other drivers on the road were likely to behave. Since not all human drivers act the same way, the data was meant to teach the driverless car to avoid dangerous situations.

The researchers say the technology uses tools borrowed from the field of social psychology. In this experiment, scientists created a system that attempted to decide whether a person's driving style is more selfish or selfless. In road tests, self-driving vehicles equipped with the system improved their ability to predict what other drivers would do by up to 25 percent.
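
The article does not describe the MIT model itself, which drew on the social psychology measure known as Social Value Orientation. As a rough illustration of what scoring a driver on a selfish-to-selfless scale might look like, here is a minimal sketch; the feature names, weights, and scaling are all hypothetical, and the real system learns such estimates from driving data rather than applying fixed rules like these.

    # Hypothetical sketch: score a driver on a selfish-to-selfless scale
    # from a few observed driving features. The features and weights below
    # are invented for illustration only.
    def svo_score(gap_accepted_m, yields_per_merge, tailgating_time_s):
        """Return a score in [0, 1], where 0 is selfless and 1 is selfish."""
        raw = (0.5 * (1.0 - min(gap_accepted_m / 30.0, 1.0))   # accepts small gaps -> more selfish
               + 0.3 * (1.0 - min(yields_per_merge, 1.0))      # rarely yields -> more selfish
               + 0.2 * min(tailgating_time_s / 10.0, 1.0))     # tailgates -> more selfish
        return max(0.0, min(raw, 1.0))

    # A driver who takes an 8-meter gap, yields 10 percent of the time,
    # and tailgates for 6 seconds scores toward the selfish end.
    print(svo_score(gap_accepted_m=8, yields_per_merge=0.1, tailgating_time_s=6))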

In one test, the self-driving car was observed making a left-hand turn. The study found the system could cause the vehicle to wait before making the turn if it predicted that the oncoming drivers were acting selfishly and might behave unsafely. But when oncoming vehicles were judged to be selfless, the self-driving car could make the turn without delay because it saw less risk of unsafe behavior.
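
The behavior described above amounts to a simple gate on the predicted driving style. The sketch below is illustrative only, assuming scores like those from the hypothetical svo_score helper above and an invented threshold; the actual system reasons probabilistically over predicted trajectories rather than applying a fixed cutoff.

    # Minimal sketch of the left-turn decision described above.
    # SELFISH_THRESHOLD is an invented parameter, not a value from the study.
    SELFISH_THRESHOLD = 0.6

    def should_wait_before_turn(oncoming_scores):
        """Wait if any oncoming driver is predicted to act selfishly."""
        return any(score > SELFISH_THRESHOLD for score in oncoming_scores)

    print(should_wait_before_turn([0.2, 0.3]))   # False: judged selfless, turn without delay
    print(should_wait_before_turn([0.2, 0.8]))   # True: likely selfish driver ahead, wait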

Wilko Schwarting is the lead author of a report describing the research. He told MIT News that any robot working with or operating around humans needs to be able to effectively learn people's intentions to better understand their behavior.

"People's tendencies to be collaborative or competitive often spill over into how they behave as drivers,"Schwarting said.He added that the MIT experiments sought to understand whether a system could be trained to measure and predict such behaviors.

The system was designed to understand the right behaviors to use in different driving situations. For example, even the most selfless driver should know that quick and decisive action is sometimes needed to avoid danger, the researchers noted.

The MIT team plans to expand its research model to include other things that a self-driving vehicle might need to deal with. These include predictions about pedestrians moving around traffic, as well as bicycles and other objects found in driving environments.

The researchers say they believe the technology could also be used in vehicles with human drivers. It could act as a warning system against other drivers judged to be behaving aggressively.

Another social experiment involved a game competition between humans and a robot. Researchers from Carnegie Mellon University tested whether a robot's "trash talk" would affect humans playing a game against the machine. To "trash talk" is to talk about someone in a negative or insulting way, usually to get them to make a mistake.

A humanoid robot named Pepper was programmed to say things to a human opponent like "I have to say you are a terrible player." Another robot statement was, "Over the course of the game, your playing has become confused."

The study involved each of the humans playing a game with the robot 35 different times. The game, called Guards and Treasures, is used to study decision making. The study found that players criticized by the robot generally performed worse in the games than those who received praise.

One of the lead researchers was Fei Fang, an assistant professor at Carnegie Mellon's Institute for Software Research. She said in a news release that the study represents a departure from most human-robot experiments. "This is one of the first studies of human-robot interaction in an environment where they are not cooperating," Fang said.

The research suggests that humanoid robots have the ability to affect people socially, just as humans do. Fang said this ability could become more important in the future, when machines and humans are expected to interact regularly.

"We can expect home assistants to be cooperative,"she said."But in situations such as online shopping,they may not have the same goals as we do."
