
The stuff that makes you know 90% of what matters today


The technical papers that show you the key under-the-hood technologies in AI - 2024-05-10

1. The Annotated Transformer (Attention is All You Need - https://arxiv.org/pdf/1706.03762)

https://nlp.seas.harvard.edu/annotated-transformer/

The Transformer has been on a lot of people's minds over the last five years. This post presents an annotated version of the paper in the form of a line-by-line implementation. It reorders and deletes some sections from the original paper and adds comments throughout. The document itself is a working notebook and should be a completely usable implementation. Code is available here (https://github.com/harvardnlp/annotated-transformer/).

 

2. The First Law of Complexodynamics

https://scottaaronson.blog/?p=762

https://scottaaronson.blog/

The blog of Scott Aaronson - "If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel"

 

3. The Unreasonable Effectiveness of Recurrent Neural Networks

https://karpathy.github.io/2015/05/21/rnn-effectiveness/

"We'll train RNNs to generate text character by character and ponder the question "how is that even possible?"

BTW, together with this post I am also releasing code that allows you to train character-level language models based on multi-layer LSTMs. (https://github.com/karpathy/char-rnn)
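For a feel of what "character by character" means in practice, here is a minimal PyTorch sketch of a char-level language model with a sampling loop. It is an illustrative analogue, not Karpathy's char-rnn (which is Lua/Torch); the layer sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class CharLM(nn.Module):
    def __init__(self, vocab_size, hidden_size=128, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):                  # x: (B, T) character ids
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state                     # logits for the next character

@torch.no_grad()
def sample(model, start_idx, length, temperature=1.0):
    """Generate characters one at a time, feeding each prediction back in."""
    x, state, out = torch.tensor([[start_idx]]), None, [start_idx]
    for _ in range(length):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        x = torch.multinomial(probs, 1).view(1, 1)
        out.append(x.item())
    return out

model = CharLM(vocab_size=65)                 # e.g. the Tiny Shakespeare alphabet
print(sample(model, start_idx=0, length=20))  # gibberish until trained
```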

 

4. Understanding LSTM Networks

https://colah.github.io/posts/2015-08-Understanding-LSTMs/

 

5. Recurrent Neural Network Regularization

https://arxiv.org/pdf/1409.2329.pdf

Presents a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units.
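The key idea, as I read the paper: apply dropout to the non-recurrent connections only (between stacked layers and on inputs/outputs), never to the hidden-to-hidden recurrence. A minimal sketch in PyTorch, whose `dropout` argument on nn.LSTM already has exactly these between-layer semantics:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(
    input_size=650, hidden_size=650, num_layers=2,
    dropout=0.5,         # between stacked layers only, never on h_{t-1} -> h_t
    batch_first=True,
)
drop = nn.Dropout(0.5)   # also applied to the embedded inputs and final outputs

x = torch.randn(20, 35, 650)   # (batch, steps, features), the paper's medium LSTM sizes
y, _ = lstm(drop(x))
y = drop(y)
```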

 

6. Keeping Neural Networks Simple by Minimizing the Description Length of the Weights

https://www.cs.toronto.edu/~hinton/absps/colt93.pdf

Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases.

 

7. Pointer Networks

https://arxiv.org/pdf/1506.03134.pdf

Introduces a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence.
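A hedged sketch of the core trick (the names and shapes are mine, not the paper's code): the pointer network reuses additive attention, but instead of compressing the scores into a context vector, the softmax over input positions *is* the output distribution, so the network "points" at its own input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pointer_step(dec_state, enc_states, W1, W2, v):
    """dec_state: (B, H); enc_states: (B, T, H); returns (B, T)."""
    # additive attention score for every input position
    scores = v(torch.tanh(W1(enc_states) + W2(dec_state).unsqueeze(1))).squeeze(-1)
    return F.softmax(scores, dim=-1)   # probability of pointing at each input token

B, T, H = 2, 5, 16
W1, W2, v = nn.Linear(H, H), nn.Linear(H, H), nn.Linear(H, 1)
p = pointer_step(torch.randn(B, H), torch.randn(B, T, H), W1, W2, v)   # (2, 5)
```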

 

8. ImageNet Classification with Deep Convolutional Neural Networks

https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf

AlexNet

code: https://github.com/ulrichstern/cuda-convnet

 

9. Order Matters: Sequence to Sequence for Sets

https://arxiv.org/pdf/1511.06391

The order in which we organize input and/or output data matters significantly when learning an underlying model.

 

10. GPipe: Easy Scaling with Micro-Batch Pipeline Parallelism

https://arxiv.org/pdf/1811.06965

Introduces GPipe, a pipeline parallelism library that allows scaling any network that can be expressed as a sequence of layers.
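A toy sketch of the micro-batching idea only, not the GPipe library itself: the mini-batch is split into micro-batches so that different pipeline stages can work on different micro-batches concurrently. Here everything runs sequentially on one device purely to show the data flow:

```python
import torch
import torch.nn as nn

stages = [nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10)]  # "a sequence of layers"
batch = torch.randn(64, 32)

outputs = []
for micro in batch.chunk(8):    # 8 micro-batches of size 8
    x = micro
    for stage in stages:        # on real hardware, each stage lives on its own device
        x = stage(x)
    outputs.append(x)
logits = torch.cat(outputs)     # same result as one large forward pass
```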

 

11. Deep Residual Learning for Image Recognition

https://arxiv.org/pdf/1512.03385

ResNet
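The paper's basic building block, sketched in PyTorch for the identity-shortcut case (the paper also uses projection shortcuts when shapes change; channel counts here are placeholders): the stacked layers learn a residual F(x) and the shortcut adds x back, so the block computes F(x) + x.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(out + x)   # identity shortcut, then post-addition ReLU
```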

 

12. Multi-scale Context Aggregation by Dilated Convolution

https://arxiv.org/pdf/1511.07122

A new convolutional network module that is specifically designed for dense prediction.
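A small sketch of the idea (channel counts are placeholders): stacking 3x3 convolutions with dilations 1, 2, 4, ... grows the receptive field exponentially without pooling or striding, so spatial resolution is preserved for dense prediction.

```python
import torch
import torch.nn as nn

context = nn.Sequential(
    nn.Conv2d(16, 16, 3, padding=1, dilation=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=2, dilation=2), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=4, dilation=4), nn.ReLU(),
)
x = torch.randn(1, 16, 64, 64)
print(context(x).shape)   # torch.Size([1, 16, 64, 64]) -- resolution preserved
```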

 

13. Neural Message Passing for Quantum Chemistry

https://arxiv.org/pdf/1704.01212

Message Passing Neural Networks (MPNNs)
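A hedged sketch of one message-passing step in the MPNN framework (a dense adjacency matrix, a linear message function, and a GRU update are simplifications standing in for the paper's general M and U functions): each node sums messages from its neighbours, then updates its state.

```python
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.M = nn.Linear(dim, dim)    # message function
        self.U = nn.GRUCell(dim, dim)   # update function

    def forward(self, h, adj):
        """h: (N, dim) node states; adj: (N, N) 0/1 adjacency matrix."""
        m = adj @ self.M(h)             # sum of messages from neighbours
        return self.U(m, h)             # new node states

step = MessagePassingStep(8)
h = torch.randn(5, 8)                   # e.g. 5 atoms in a molecule
adj = (torch.rand(5, 5) > 0.5).float()
h = step(h, adj)
```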

 

14. Attention Is All You Need

https://arxiv.org/pdf/1706.03762

Attention and Transformer
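The paper's core equation, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, in a few lines (single head, no masking, for illustration only):

```python
import math
import torch
import torch.nn.functional as F

def attention(q, k, v):
    """q, k, v: (batch, seq, d_k); returns (batch, seq, d_k)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # scaled dot products
    return F.softmax(scores, dim=-1) @ v                      # weighted sum of values

q = k = v = torch.randn(1, 4, 8)
print(attention(q, k, v).shape)   # torch.Size([1, 4, 8])
```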

 

15. Neural Machine Translation by Jointly Learning to Align and Translate

https://arxiv.org/pdf/1409.0473

Allows a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
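A sketch of that additive soft-search (variable names echo the paper's s, h, and alpha; the code itself is mine): score every source annotation against the decoder state, softmax into alignment weights, and average into a context vector.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_search(s_prev, enc_h, Wa, Ua, va):
    """s_prev: (B, H) decoder state; enc_h: (B, T, H) encoder annotations."""
    e = va(torch.tanh(Wa(s_prev).unsqueeze(1) + Ua(enc_h))).squeeze(-1)  # (B, T)
    alpha = F.softmax(e, dim=-1)                  # soft alignment over source words
    return (alpha.unsqueeze(-1) * enc_h).sum(1)   # (B, H) context vector

B, T, H = 2, 6, 32
Wa, Ua, va = nn.Linear(H, H), nn.Linear(H, H), nn.Linear(H, 1)
context = soft_search(torch.randn(B, H), torch.randn(B, T, H), Wa, Ua, va)
```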

 

16. Identity Mappings in Deep Residual Networks

https://arxiv.org/pdf/1603.05027

Analyzes the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block when identity mappings are used as the skip connections and after-addition activation.

code: https://github.com/KaimingHe/resnet-1k-layers
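The "pre-activation" block the paper arrives at, sketched in PyTorch (channel counts are placeholders): BN and ReLU move before each convolution and nothing follows the addition, leaving a clean identity path from block to block.

```python
import torch.nn as nn

class PreActBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv1(self.relu(self.bn1(x)))   # BN -> ReLU -> conv
        out = self.conv2(self.relu(self.bn2(out)))
        return out + x                              # no activation after the addition
```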

 

17. A Simple Neural Network Module for Relational Reasoning

https://arxiv.org/pdf/1706.01427

Uses Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning.
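A hedged sketch of the RN module (MLP sizes are placeholders): a shared MLP g scores every ordered pair of objects, the pair outputs are summed, and a second MLP f reads the sum, i.e. RN(O) = f(sum_ij g(o_i, o_j)).

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim, hidden=64, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, out_dim))

    def forward(self, objects):
        """objects: (B, N, D) -- e.g. CNN feature-map cells or LSTM states."""
        B, N, D = objects.shape
        oi = objects.unsqueeze(2).expand(B, N, N, D)   # object i in each pair
        oj = objects.unsqueeze(1).expand(B, N, N, D)   # object j in each pair
        pairs = torch.cat([oi, oj], dim=-1)            # all N*N ordered pairs
        return self.f(self.g(pairs).sum(dim=(1, 2)))   # sum over pairs, then f

rn = RelationNetwork(obj_dim=16)
print(rn(torch.randn(2, 8, 16)).shape)   # torch.Size([2, 10])
```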

 

18. Variational Lossy Autoencoder

https://arxiv.org/pdf/1611.02731

Presents a simple but principled method to learn such global representations by combining a Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE, and PixelRNN/CNN.
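A rough structural sketch of my reading of that combination (not the paper's code, and far smaller): a VAE whose decoder is autoregressive, so the ELBO trades off what the global code z must carry against what the decoder can already model locally.

```python
import torch
import torch.nn as nn

class TinyVLAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)               # -> mean, log-variance
        self.dec = nn.LSTM(1 + z_dim, 32, batch_first=True)  # autoregressive decoder
        self.out = nn.Linear(32, 1)

    def forward(self, x):                                    # x: (B, x_dim)
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        # predict x_t from the previous values x_<t plus the global code z
        prev = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        inp = torch.cat([prev.unsqueeze(-1),
                         z.unsqueeze(1).expand(-1, x.size(1), -1)], dim=-1)
        recon = self.out(self.dec(inp)[0]).squeeze(-1)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1)
        return ((recon - x) ** 2).sum(-1) + kl               # negative ELBO (Gaussian)

loss = TinyVLAE()(torch.randn(4, 64)).mean()
```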

 

19. Relational Recurrent Neural Networks

https://arxiv.org/pdf/1806.01822

Relational Memory Core (RMC), which employs multi-head dot-product attention to allow memories to interact.
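A hedged sketch of the central step (slot counts and sizes are placeholders; the real RMC adds gating and an MLP around this): the memory slots attend over the concatenation of memory and the new input, letting memories interact with each other and with incoming information.

```python
import torch
import torch.nn as nn

mem_slots, dim, heads = 4, 32, 2
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)

memory = torch.randn(1, mem_slots, dim)   # (B, slots, dim)
x = torch.randn(1, 1, dim)                # new input at this timestep
mem_and_input = torch.cat([memory, x], dim=1)
updated, _ = attn(query=memory, key=mem_and_input, value=mem_and_input)
```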

 

20. Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton

https://arxiv.org/pdf/1405.6903

 

21. Neural Turing Machines

https://arxiv.org/pdf/1410.5401

 

22. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin

https://arxiv.org/pdf/1512.02595

 

23. Scaling Laws for Neural Language Models

https://arxiv.org/pdf/2001.08361

 

24. A Tutorial Introduction to the Minimum Description Length Principle

https://arxiv.org/pdf/math/0406077

 

25. Machine Super Intelligence

https://www.vetta.org/documents/Machine_Super_Intelligence.pdf

 

26. Kolmogorov Complexity and Algorithmic Randomness

https://www.lirmm.fr/~ashen/kolmbook-eng-scan.pdf

 

27. CS231n Convolutional Neural Networks for Visual Recognition

https://cs231n.github.io/

