Covariant's RFM-1 robot implements a very interesting capability: when it runs into a situation it cannot handle during a task, it stops and waits for a language instruction from a human operator, for example "move the gripper up 2 cm", "switch to a larger gripper", or "wait". In effect, the company has introduced human assistance for the cases that current AI algorithms cannot yet handle on their own. This approach does not free the robot from human supervision, but it also does not demand much human effort, and the work is arguably even more significant from a research perspective.
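To make that pattern concrete, here is a minimal Python sketch of such a stop-and-ask-a-human fallback loop. All names (RobotPolicy, StepResult, run_task) are hypothetical; this is not Covariant's implementation, only an illustration of the control flow described above.

```python
# A minimal sketch of the stop-and-ask-a-human fallback loop described above.
# All names here are hypothetical; this is not Covariant's implementation.

from dataclasses import dataclass

@dataclass
class StepResult:
    success: bool
    reason: str = ""

class RobotPolicy:
    """Stand-in for a learned manipulation policy."""

    def attempt(self, task: str) -> StepResult:
        # A real system would run perception, grasp planning, and execution here.
        return StepResult(success=False, reason="cannot get a stable grip")

def run_task(policy: RobotPolicy, task: str, max_attempts: int = 3) -> bool:
    """Try the task; on failure, pause and ask the operator for a language hint."""
    for _ in range(max_attempts):
        result = policy.attempt(task)
        if result.success:
            return True
        # The robot stops and waits for an instruction such as
        # "move the gripper up 2 cm" or "switch to a larger gripper".
        hint = input(f"Robot is stuck ({result.reason}). Instruction? ")
        task = f"{task} [operator hint: {hint}]"
    return False
```

The design choice worth noting is that the operator only contributes a short natural-language correction rather than taking over teleoperation, which is why the human effort stays low.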
Original article:
For example, show it an image of a bin filled with sports equipment, and tell it to pick up the pack of tennis balls. The robot can then grab the item, generate an image of what the bin will look like after the tennis balls are gone, or create a video showing a bird’s-eye view of how the robot will look doing the task.
If the model predicts it won’t be able to properly grasp the item, it might even type back, “I can’t get a good grip. Do you have any tips?” A response could advise it to use a specific number of the suction cups on its arms to give it a better grasp—eight versus six, for example.
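The sketch below models the request/response shapes that this paragraph describes: an image plus a text instruction go in, and the model may answer with a motor action, a predicted image or video, or a clarifying question. The class and field names are assumptions made for illustration, not Covariant's actual API.

```python
# A hypothetical schema for the multimodal exchange described above; not
# Covariant's API, only an illustration of the input/output modes mentioned.

from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class Request:
    image: bytes        # e.g. a photo of the bin of sports equipment
    instruction: str    # e.g. "pick up the pack of tennis balls"

@dataclass
class Response:
    kind: Literal["action", "image", "video", "question"]
    action: Optional[str] = None   # motor command to execute
    media: Optional[bytes] = None  # generated image or video frames
    text: Optional[str] = None     # e.g. "I can't get a good grip. Any tips?"

def handle(response: Response) -> None:
    """Route the model's reply: execute actions, show media, or relay questions."""
    if response.kind == "question":
        # An operator reply might suggest using eight suction cups instead of six.
        print("Robot asks:", response.text)
    elif response.kind == "action":
        print("Executing:", response.action)
    else:
        print(f"Received generated {response.kind} ({len(response.media or b'')} bytes)")

handle(Response(kind="question", text="I can't get a good grip. Do you have any tips?"))
```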
This represents a leap forward, Chen told me, in robots that can adapt to their environment using training data rather than the complex, task-specific code that powered the previous generation of industrial robots. It’s also a step toward worksites where managers can issue instructions in human language without concern for the limitations of human labor. (“Pack 600 meal-prep kits for red pepper pasta using the following recipe. Take no breaks!”)
Lerrel Pinto, a researcher who runs the general-purpose robotics and AI lab at New York University and has no ties to Covariant, says that even though roboticists have built basic multimodal robots before and used them in lab settings, deploying one at scale that’s able to communicate in this many modes marks an impressive feat for the company.
To outpace its competitors, Covariant will have to get its hands on enough data for the robot to become useful in the wild, Pinto told me. Warehouse floors and loading docks are where it will be put to the test, constantly interacting with new instructions, people, objects, and environments.
“The groups which are going to train good models are going to be the ones that have either access to already large amounts of robot data or capabilities to generate those data,” he says.
Covariant says the model has a “human-like” ability to reason, but it has its limitations. During the demonstration, in which I could see a live feed of a Covariant robot as well as a chat window to communicate with it, Chen invited me to prompt the model with anything I wanted. When I asked the robot to “return the banana to Tote Two,” it struggled with retracing its steps, leading it to pick up a sponge, then an apple, then a host of other items before it finally accomplished the banana task.
“It doesn’t understand the new concept,” Chen said by way of explanation, “but it’s a good example—it might not work well yet in the places where you don’t have good training data.”
The company’s new model embodies a paradigm shift rippling through the robotics world. Rather than teaching a robot how the world works manually, through instructions like physics equations and code, researchers are teaching it in the same way humans learn: through millions of observations.
The result “really can act as a very effective flexible brain to solve arbitrary robot tasks,” Chen said.
Tags: Covariant, robot, data, RFM
From: https://www.cnblogs.com/devilmaycry812839668/p/18167817