

Source: 博亚体育app官网 | Published: 2023-04-23 18:27



“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Isaac Asimov’s precept formed the moral underpinning of his futuristic fiction; but 75 years after he first articulated his three laws of robotics, the first and crucial principle is being overtaken by reality.

True, there are as yet no killer androids rampaging across the battlefield. But there are already defensive systems in place that can be programmed to detect and fire at threats — whether incoming missiles or approaching humans. The Pentagon has tested a swarm of miniature drones — raising the possibility that commanders could in future send clouds of skybots into enemy territory equipped to gather intelligence, block radar or — aided by face recognition technology — carry out assassinations. From China to Israel, Russia to Britain, many governments are keen to put rapid advances in artificial intelligence to military use.


This is a source of alarm to researchers and tech industry executives. Already under fire for the impact that disruptive technologies will have on society, they have no wish to see their commercial innovations adapted to devastating effect. Hence this week’s call from the founders of robotics and AI companies for the UN to take action to prevent an arms race in lethal autonomous weapons systems. In an open letter, they underline their concern that such technology could permit conflict “at a scale greater than ever”, could help repressive regimes quell dissent, or that weapons could be hacked “to behave in undesirable ways”.

Their concerns are well-founded, but attempts to regulate these weapons are fraught with ethical and practical difficulties. Those who support the increasing use of AI in warfare argue that it has the potential to lessen suffering, not only because fewer front line troops would be needed, but because intelligent weapon systems would be better able to minimise civilian casualties. Targeted strikes against militants would obviate the need for indiscriminate bombing of the kind seen in Falluja or, more recently, Mosul. And there would be many less contentious uses for AI — say, driverless convoys on roads vulnerable to ambush.

At present, there is a broad consensus among governments against deploying fully autonomous weapons — systems that can select and engage targets with no meaningful human control. For the US military, this is a moral red line: there must always be a human operator responsible for a decision to kill. For others in the debate, it is a practical consideration — autonomous systems could behave unpredictably or be vulnerable to hacking.

It becomes far harder to draw boundaries between systems with a human “in the loop” — in full control of a single drone, for example — and those where humans are “on the loop”, supervising and setting parameters for a broadly autonomous system. In the latter case — which might apply to anti-aircraft systems now, or to future drone swarms — it is arguable whether human oversight would amount to effective control in the heat of battle.

Existing humanitarian law helps to an extent. The obligations to distinguish between combatants and civilians, and to avoid indiscriminate attacks and weapons that cause unnecessary suffering, still apply; and commanders must take responsibility when they deploy robots just as they do for the actions of servicemen and women.

But the AI industry is right to call for clearer rules, no matter how hard it may be to frame and enforce them. Killer robots may remain the stuff of science fiction, but self-operating weapons are a fast-approaching reality.


