
Meta Learning: A Comprehensive Collection of Papers, Videos, and Book Resources

Date: 2023-06-23 12:33:25  Views: 50
Tags: Shot, comprehensive, Few, Meta, 2019, 2018, Learning






Meta Learning, also called Learning to Learn, covers Zero-Shot/One-Shot/Few-Shot learning, Model-Agnostic Meta-Learning (MAML), and Meta Reinforcement Learning. Following deep learning, deep reinforcement learning, and generative adversarial networks, meta learning has become another important research branch of artificial intelligence and a recent research hotspot; UC Berkeley in particular has done a large amount of work in this area.
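
Since MAML appears throughout the list below, here is a minimal sketch of its bi-level update in its first-order form, on toy 1-D linear-regression tasks. The task family, names, and hyperparameters are all illustrative assumptions, not taken from any paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss(w, a, x):
    # mean squared error for the task y = a * x
    return np.mean((w * x - a * x) ** 2)

def task_grad(w, a, x):
    # gradient of the task loss with respect to w
    return np.mean(2 * (w * x - a * x) * x)

w = 0.0                  # meta-parameter
alpha, beta = 0.1, 0.05  # inner / outer learning rates

for step in range(500):
    a = rng.uniform(0.5, 1.5)                   # sample a task (its true slope)
    x_support = rng.uniform(-1, 1, size=10)     # data for inner adaptation
    x_query = rng.uniform(-1, 1, size=10)       # held-out data for the outer step

    # inner loop: one gradient step of adaptation to this task
    w_adapted = w - alpha * task_grad(w, a, x_support)

    # outer loop (first-order MAML): update the meta-parameter using
    # the gradient evaluated at the adapted point
    w -= beta * task_grad(w_adapted, a, x_query)

# after meta-training, w sits near the mean of the task slopes (about 1.0),
# i.e. an initialization from which one gradient step adapts well to any task
```

Full MAML differentiates through the inner step (a second-order term); the first-order variant shown here simply drops that term, which the "On First-Order Meta-Learning Algorithms" paper below studies in detail.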

Classic papers, code, books, blogs, video tutorials, datasets, and other resources are collected here for anyone who needs them.

The content was compiled from the web; original resource address: https://github.com/ZHANGHeng19931123/awesome-video-object-detection

Table of Contents

Classic Papers and Code

Books

Blogs

Video Tutorials

Datasets

Workshops

Notable Researchers

Classic Papers and Code

The detailed resource list follows.

Zero-Shot / One-Shot / Few-Shot Learning

Siamese Neural Networks for One-shot Image Recognition, (2015), Gregory Koch, Richard Zemel, Ruslan Salakhutdinov.

Prototypical Networks for Few-shot Learning, (2017), Jake Snell, Kevin Swersky, Richard S. Zemel.

Gaussian Prototypical Networks for Few-Shot Learning on Omniglot (2017), Stanislav Fort.

Matching Networks for One Shot Learning, (2016), Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra.

Learning to Compare: Relation Network for Few-Shot Learning, (2017), Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H.S. Torr, Timothy M. Hospedales.

One-shot Learning with Memory-Augmented Neural Networks, (2016), Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, Timothy Lillicrap.

Optimization as a Model for Few-Shot Learning, (2016), Sachin Ravi and Hugo Larochelle.

An embarrassingly simple approach to zero-shot learning, (2015), B Romera-Paredes, Philip H. S. Torr.

Low-shot Learning by Shrinking and Hallucinating Features, (2017), Bharath Hariharan, Ross Girshick.

Low-shot learning with large-scale diffusion, (2018), Matthijs Douze, Arthur Szlam, Bharath Hariharan, Hervé Jégou.

Low-Shot Learning with Imprinted Weights, (2018), Hang Qi, Matthew Brown, David G. Lowe.

One-Shot Video Object Segmentation, (2017), S. Caelles, K.K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, L. Van Gool.

One-Shot Learning for Semantic Segmentation, (2017), Amirreza Shaban, Shray Bansal, Zhen Liu, Irfan Essa, Byron Boots.

Few-Shot Segmentation Propagation with Guided Networks, (2018), Kate Rakelly, Evan Shelhamer, Trevor Darrell, Alexei A. Efros, Sergey Levine.

Few-Shot Semantic Segmentation with Prototype Learning, (2018), Nanqing Dong, Eric P. Xing.

Dynamic Few-Shot Visual Learning without Forgetting, (2018), Spyros Gidaris, Nikos Komodakis.

Feature Generating Networks for Zero-Shot Learning, (2017), Yongqin Xian, Tobias Lorenz, Bernt Schiele, Zeynep Akata.

Meta-Learning Deep Visual Words for Fast Video Object Segmentation, (2019), Harkirat Singh Behl, Mohammad Najafi, Anurag Arnab, Philip H.S. Torr.
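
At inference time, the Prototypical Networks approach listed above reduces to a simple rule: average each class's support embeddings into a prototype, then classify queries by nearest prototype. A minimal numpy sketch — the 2-D "embeddings" here are synthetic stand-ins for a learned encoder's output:

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # one prototype per class: the mean of that class's support embeddings
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # squared Euclidean distance to each prototype; nearest prototype wins
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# toy 2-way 3-shot episode with 2-D "embeddings"
support = np.array([[0., 0.], [0., 1.], [1., 0.],    # class 0
                    [5., 5.], [5., 6.], [6., 5.]])   # class 1
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, n_classes=2)

queries = np.array([[0.5, 0.5], [5.5, 5.5]])
print(classify(queries, protos))   # -> [0 1]
```

In the actual paper the encoder is trained episodically so that this nearest-prototype rule works well on classes never seen during training; the Gaussian variant above additionally learns a per-prototype covariance.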

Model-Agnostic Meta-Learning (MAML)

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, (2017), Chelsea Finn, Pieter Abbeel, Sergey Levine.

Adversarial Meta-Learning, (2018), Chengxiang Yin, Jian Tang, Zhiyuan Xu, Yanzhi Wang.

On First-Order Meta-Learning Algorithms, (2018), Alex Nichol, Joshua Achiam, John Schulman.

Meta-SGD: Learning to Learn Quickly for Few-Shot Learning, (2017), Zhenguo Li, Fengwei Zhou, Fei Chen, Hang Li.

Gradient Agreement as an Optimization Objective for Meta-Learning, (2018), Amir Erfan Eshratifar, David Eigen, Massoud Pedram.

Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace, (2018), Yoonho Lee, Seungjin Choi.

A Simple Neural Attentive Meta-Learner, (2018), Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, Pieter Abbeel.

Personalizing Dialogue Agents via Meta-Learning, (2019), Zhaojiang Lin, Andrea Madotto, Chien-Sheng Wu, Pascale Fung.

How to train your MAML, (2019), Antreas Antoniou, Harrison Edwards, Amos Storkey.

Learning to learn by gradient descent by gradient descent, (2016), Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas.

Unsupervised Learning via Meta-Learning, (2019), Kyle Hsu, Sergey Levine, Chelsea Finn.

Few-Shot Image Recognition by Predicting Parameters from Activations, (2018), Siyuan Qiao, Chenxi Liu, Wei Shen, Alan Yuille.

One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning, (2018), Tianhe Yu, Chelsea Finn, Annie Xie, Sudeep Dasari, Pieter Abbeel, Sergey Levine.

MetaGAN: An Adversarial Approach to Few-Shot Learning, (2018), Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, Yangqiu Song.

Fast Parameter Adaptation for Few-shot Image Captioning and Visual Question Answering, (2018), Xuanyi Dong, Linchao Zhu, De Zhang, Yi Yang, Fei Wu.

CAML: Fast Context Adaptation via Meta-Learning, (2019), Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, Shimon Whiteson.

Meta-Learning for Low-resource Natural Language Generation in Task-oriented Dialogue Systems, (2019), Fei Mi, Minlie Huang, Jiyong Zhang, Boi Faltings.

MIND: Model Independent Neural Decoder, (2019), Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan.

Toward Multimodal Model-Agnostic Meta-Learning, (2018), Risto Vuorio, Shao-Hua Sun, Hexiang Hu, Joseph J. Lim.

Alpha MAML: Adaptive Model-Agnostic Meta-Learning, (2019), Harkirat Singh Behl, Atılım Güneş Baydin, Philip H. S. Torr.

Online Meta-Learning, (2019), Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine.
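
Among the papers above, "On First-Order Meta-Learning Algorithms" introduces Reptile, whose update is simple enough to sketch in a few lines: adapt to a sampled task with ordinary SGD, then move the meta-parameters a fraction of the way toward the adapted parameters — no second derivatives and no explicit outer gradient. A toy numpy sketch (the quadratic task family and all hyperparameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def adapt(w, a, lr=0.1, steps=5):
    # a few SGD steps on the task loss (w - a)^2
    for _ in range(steps):
        w = w - lr * 2 * (w - a)
    return w

w = 0.0        # meta-parameter
epsilon = 0.1  # Reptile outer step size

for _ in range(1000):
    a = rng.uniform(-1.0, 1.0)    # sample a task (its optimum)
    w_task = adapt(w, a)          # inner-loop adaptation
    w += epsilon * (w_task - w)   # Reptile update: move toward the adapted weights

# w settles near the mean task optimum (about 0.0), an initialization
# from which a few SGD steps reach any individual task's optimum quickly
```

Despite looking like joint training, the paper shows this update implicitly maximizes the within-task gradient inner product, which is what makes the resulting initialization fast to fine-tune.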

Meta Reinforcement Learning

Generalizing Skills with Semi-Supervised Reinforcement Learning, (2017), Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine.

Guided Meta-Policy Search, (2019), Russell Mendonca, Abhishek Gupta, Rosen Kralev, Pieter Abbeel, Sergey Levine, Chelsea Finn.

End-to-End Robotic Reinforcement Learning without Reward Engineering, (2019), Avi Singh, Larry Yang, Kristian Hartikainen, Chelsea Finn, Sergey Levine.

Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables, (2019), Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, Sergey Levine.

Task-Agnostic Dynamics Priors for Deep Reinforcement Learning, (2019), Yilun Du, Karthik Narasimhan.

Meta Reinforcement Learning with Task Embedding and Shared Policy, (2019), Lin Lan, Zhenguo Li, Xiaohong Guan, Pinghui Wang.

NoRML: No-Reward Meta Learning, (2019), Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, Chelsea Finn.

Actor-Critic Algorithms for Constrained Multi-agent Reinforcement Learning, (2019), Raghuram Bharadwaj Diddigi, Sai Koti Reddy Danda, Prabuchandran K. J., Shalabh Bhatnagar.

Adaptive Guidance and Integrated Navigation with Reinforcement Meta-Learning, (2019), Brian Gaudet, Richard Linares, Roberto Furfaro.

Watch, Try, Learn: Meta-Learning from Demonstrations and Reward, (2019), Allan Zhou, Eric Jang, Daniel Kappler, Alex Herzog, Mohi Khansari, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Sergey Levine, Chelsea Finn.

Options as responses: Grounding behavioural hierarchies in multi-agent RL, (2019), Alexander Sasha Vezhnevets, Yuhuai Wu, Remi Leblond, Joel Z. Leibo.

Learning latent state representation for speeding up exploration, (2019), Giulia Vezzani, Abhishek Gupta, Lorenzo Natale, Pieter Abbeel.

Beyond Exponentially Discounted Sum: Automatic Learning of Return Function, (2019), Yufei Wang, Qiwei Ye, Tie-Yan Liu.

Learning Efficient and Effective Exploration Policies with Counterfactual Meta Policy, (2019), Ruihan Yang, Qiwei Ye, Tie-Yan Liu.

Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning, (2019), Georgios Papoudakis, Filippos Christianos, Arrasy Rahman, Stefano V. Albrecht.

Learning to Discretize: Solving 1D Scalar Conservation Laws via Deep Reinforcement Learning, (2019), Yufei Wang, Ziju Shen, Zichao Long, Bin Dong.

Books

Hands-On Meta Learning with Python: Meta learning using one-shot learning, MAML, Reptile, and Meta-SGD with TensorFlow, (2019), Sudharsan Ravichandiran.

Blogs

Berkeley Artificial Intelligence Research blog

Meta-Learning: Learning to Learn Fast

Meta-Reinforcement Learning

How to train your MAML: A step by step approach

An Introduction to Meta-Learning

From zero to research — An introduction to Meta-learning

What’s New in Deep Learning Research: Understanding Meta-Learning

Video Tutorials

Chelsea Finn: Building Unsupervised Versatile Agents with Meta-Learning

Sam Ritter: Meta-Learning to Make Smart Inferences from Small Data

Model Agnostic Meta Learning by Siavash Khodadadeh

Meta Learning by Siraj Raval

Meta Learning by Hugo Larochelle

Meta Learning and One-Shot Learning

Datasets

A list of the most commonly used datasets:

Omniglot

mini-ImageNet

ILSVRC

FGVC aircraft

Caltech-UCSD Birds-200-2011

Check several other datasets by Google here.
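
All of the datasets above are typically consumed as N-way K-shot episodes: sample N classes, then K support and Q query examples per class. A dataset-agnostic sampler sketch in pure Python (function and variable names are illustrative):

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way, k_shot, q_query, seed=None):
    """Return (support, query) index lists for one N-way K-shot episode."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = rng.sample(sorted(by_class), n_way)  # pick N classes at random
    support, query = [], []
    for c in classes:
        picked = rng.sample(by_class[c], k_shot + q_query)
        support += picked[:k_shot]   # K labelled examples per class
        query += picked[k_shot:]     # Q held-out examples per class
    return support, query

# toy label array standing in for a real dataset: 5 classes, 20 examples each
labels = [i // 20 for i in range(100)]
s, q = sample_episode(labels, n_way=3, k_shot=5, q_query=2, seed=0)
print(len(s), len(q))   # -> 15 6
```

For Omniglot or mini-ImageNet the `labels` list would come from the dataset's class annotations; the support/query split is what the episodic training loops in the papers above iterate over.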

Workshops

MetaLearn 2017

MetaLearn 2018

MetaLearn 2019

Notable Researchers

Chelsea Finn, UC Berkeley

Pieter Abbeel, UC Berkeley

Erin Grant, UC Berkeley

Raia Hadsell, DeepMind

Misha Denil, DeepMind

Adam Santoro, DeepMind

Sachin Ravi, Princeton University

David Abel, Brown University

Brenden Lake, Facebook AI Research


From: https://blog.51cto.com/u_13046751/6537645
