
45 Hand-Picked Transformer Papers: Models, Architectures, and Training Methods in One Post!

2023-08-16 17:55 | Author: 深度之眼官方账号

Today, let's talk about the Transformer.

Thanks to the explosive popularity of ChatGPT, large models have arguably been the hottest research direction in AI this year. The Transformer, the foundational work behind these large models, is back in the spotlight, with new results coming out one after another. Your senior's blunt verdict: the field is fiercely competitive.

For students just starting out in AI, the Transformer is a must-learn topic; for those working in other areas of AI, it is all the more an essential foundation to master.

So this time I have put together a collection of Transformer-related papers for you: 23 papers on models, 10 on architectures, 8 on post-pretraining processing, and 4 on training methods, to help newcomers get up to speed quickly and to help everyone else organize their own knowledge map.

The paper list is as follows:

Scan the QR code to add 小享 and reply "精选45" to get all 45 papers plus the accompanying code collection for free.

1. Models (23)

GPT

Improving Language Understanding by Generative Pre-Training

GPT-2

Language Models are Unsupervised Multitask Learners

GPT-3

Language Models are Few-Shot Learners

GPT-3.5

Models referred to as "GPT-3.5"

GPT-4

GPT-4 Technical Report

GPT-NeoX

GPT-NeoX-20B: An Open-Source Autoregressive Language Model

GPT-J

Pretrained Models

Gopher

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

AlphaCode

Competition-Level Code Generation with AlphaCode

RETRO

Improving Language Models by Retrieving from Trillions of Tokens

Chinchilla

Training Compute-Optimal Large Language Models

Flamingo

Flamingo: a Visual Language Model for Few-Shot Learning

Gato

A Generalist Agent

Anthropic LM

A General Language Assistant as a Laboratory for Alignment

PaLM

PaLM: Scaling Language Modeling with Pathways

GLaM

GLaM: Efficient Scaling of Language Models with Mixture-of-Experts

LaMDA

LaMDA: Language Models for Dialog Applications

LLaMA

LLaMA: Open and Efficient Foundation Language Models

Switch

Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity

BLOOM

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Galactica

Galactica: A Large Language Model for Science

OPT

OPT: Open Pre-trained Transformer Language Models

GLM-130B

GLM-130B: An Open Bilingual Pre-trained Model

2. Architectures (10)

Multi-Query Attention

Fast Transformer Decoding: One Write-Head is All You Need
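
For readers who have not met multi-query attention before, here is a minimal NumPy sketch of the core idea in the paper above: every query head shares a single key/value head, which shrinks the KV cache and speeds up decoding. The shapes and function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_query_attention(q, k, v):
    """q: (heads, seq, d_head); k, v: (seq, d_head) -- one K/V head shared by all query heads."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)        # (heads, seq, seq)
    weights = softmax(scores, axis=-1)
    return weights @ v                   # (heads, seq, d_head)

# Toy usage: 4 query heads, sequence length 8, head dimension 16.
q = np.random.randn(4, 8, 16)
k = np.random.randn(8, 16)   # only one key head to cache
v = np.random.randn(8, 16)   # only one value head to cache
out = multi_query_attention(q, k, v)   # (4, 8, 16)
```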

Sparse Attention

Generating Long Sequences with Sparse Transformers
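
As a quick illustration of the kind of pattern this paper studies, the sketch below builds a fixed sparse attention mask in plain NumPy: each query attends only to a local window plus periodic strided positions instead of the full O(n²) set. The window and stride values are illustrative assumptions.

```python
import numpy as np

def sparse_attention_mask(seq_len, window=4, stride=4):
    """Boolean (seq, seq) mask; True means key j is visible to query i (causal)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    causal = j <= i                          # no attending to the future
    local = (i - j) < window                 # recent local window
    strided = (j % stride) == (stride - 1)   # periodic "summary" columns
    return causal & (local | strided)

mask = sparse_attention_mask(16)
# Usage: scores = np.where(mask, scores, -np.inf) before the softmax.
```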

Mixture of Experts

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

Unified Scaling Laws for Routed Language Models

Efficient Large Scale Language Modeling with Mixtures of Experts
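
To make the mixture-of-experts idea behind these three papers concrete, here is a minimal top-1 routing sketch in NumPy, in the spirit of Switch Transformers: a learned router sends each token to its single best-scoring expert, and that expert's output is scaled by the gate probability. The single-matrix "experts" and the omission of load-balancing losses are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    def __init__(self, dim, num_experts, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.standard_normal((dim, num_experts)) * 0.02
        # Each "expert" is just one linear map here, for brevity.
        self.experts = [rng.standard_normal((dim, dim)) * 0.02 for _ in range(num_experts)]

    def __call__(self, x):
        """x: (tokens, dim). Each token is processed by exactly one expert."""
        gate_probs = softmax(x @ self.router, axis=-1)   # (tokens, num_experts)
        top1 = gate_probs.argmax(axis=-1)                # chosen expert per token
        out = np.zeros_like(x)
        for e, w in enumerate(self.experts):
            picked = top1 == e
            if picked.any():
                # Scale by the gate probability, as in Switch Transformer routing.
                out[picked] = (x[picked] @ w) * gate_probs[picked, e][:, None]
        return out

layer = MoELayer(dim=16, num_experts=4)
y = layer(np.random.randn(10, 16))   # (10, 16): same shape, but each token uses only one expert
```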

FlashAttention

FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
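
FlashAttention is ultimately about IO-aware GPU kernels, but its mathematical core, computing exact softmax attention block by block with a running ("online") softmax so the full score matrix is never materialized at once, can be sketched in plain NumPy. The block size and the non-causal setting below are illustrative assumptions; the real kernel's tiling and CUDA details are out of scope here.

```python
import numpy as np

def blockwise_attention(q, k, v, block=4):
    """Exact (non-causal) softmax attention, streamed over key/value blocks."""
    seq, d = q.shape
    m = np.full(seq, -np.inf)       # running row-wise max of the scores
    l = np.zeros(seq)               # running softmax denominator
    acc = np.zeros((seq, d))        # running weighted sum of values
    for start in range(0, seq, block):
        kb, vb = k[start:start+block], v[start:start+block]
        s = q @ kb.T / np.sqrt(d)                 # scores for this block only
        m_new = np.maximum(m, s.max(axis=1))
        p = np.exp(s - m_new[:, None])
        rescale = np.exp(m - m_new)               # correct the previous partial sums
        l = l * rescale + p.sum(axis=1)
        acc = acc * rescale[:, None] + p @ vb
        m = m_new
    return acc / l[:, None]

q, k, v = (np.random.randn(8, 16) for _ in range(3))
out = blockwise_attention(q, k, v)
# Matches the naive full-matrix implementation up to floating-point error.
```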

Encoder + Decoder

Attention Is All You Need

Parallel Attention

PaLM: Scaling Language Modeling with Pathways

RoPE

RoFormer: Enhanced Transformer with Rotary Position Embedding
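
The rotary idea itself fits in a few lines: pairs of feature dimensions are rotated by an angle that grows with the token's position, so the dot product between a query and a key depends only on their relative offset. The sketch below uses the common "split-half" layout and base 10000; treat both as conventional assumptions rather than the paper's exact code.

```python
import numpy as np

def rope(x, base=10000.0):
    """x: (seq_len, dim) with even dim; returns the position-rotated features."""
    seq_len, dim = x.shape
    half = dim // 2
    # One frequency per feature pair; the angle grows linearly with position.
    inv_freq = base ** (-np.arange(half) / half)      # (half,)
    angles = np.outer(np.arange(seq_len), inv_freq)   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # A 2-D rotation applied to each (x1_i, x2_i) pair.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q = np.random.randn(8, 16)
q_rot = rope(q)   # same shape; apply to both queries and keys before the dot product
```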

ALiBi

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
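
ALiBi drops position embeddings entirely: each attention head adds a penalty proportional to the query-key distance to the raw scores, with a different slope per head, which is what lets models extrapolate to inputs longer than those seen in training. The slope schedule below follows the paper's default for a power-of-two head count; the rest is a simplified sketch.

```python
import numpy as np

def alibi_bias(num_heads, seq_len):
    """Returns a (num_heads, seq_len, seq_len) additive bias for causal attention."""
    # Geometric slope schedule: 2^-1, 2^-2, ..., 2^-8 for 8 heads (the paper's default).
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)   # (H,)
    pos = np.arange(seq_len)
    # bias[i, j] = -(i - j): zero on the diagonal, more negative for distant keys.
    rel = -(pos[:, None] - pos[None, :])
    return slopes[:, None, None] * rel[None, :, :]

bias = alibi_bias(num_heads=8, seq_len=16)
# Usage (per head h): scores = q @ k.T / np.sqrt(d) + bias[h], then causal mask + softmax.
```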

3. Post-Pretraining Processing (8)

RLHF with the PPO Algorithm

Deep Reinforcement Learning from Human Preferences

Learning to summarize from human feedback

Constitutional AI

Constitutional AI: Harmlessness from AI Feedback

Minerva

Solving Quantitative Reasoning Problems with Language Models

Codex

Evaluating Large Language Models Trained on Code

FeedME (SFT)

Training language models to follow instructions with human feedback

Fine-Tuning Language Models from Human Preferences

FLAN

Finetuned Language Models Are Zero-Shot Learners

4. Training Methods (4)

Hyperparameter Setting

Training Compute-Optimal Large Language Models

Scaling Laws for Neural Language Models
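
A back-of-the-envelope way to see what "compute-optimal" means in practice: a commonly cited rule of thumb distilled from the Chinchilla paper is roughly 20 training tokens per parameter, with training compute of roughly 6·N·D FLOPs for N parameters and D tokens. The constants below are approximations used for illustration, not exact values from the papers.

```python
def chinchilla_estimate(num_params, tokens_per_param=20, flops_per_param_token=6):
    """Rough compute-optimal token budget and training FLOPs for a given model size."""
    tokens = num_params * tokens_per_param
    flops = flops_per_param_token * num_params * tokens
    return tokens, flops

tokens, flops = chinchilla_estimate(70e9)   # a 70B-parameter model
print(f"~{tokens / 1e12:.1f}T tokens, ~{flops:.2e} training FLOPs")
# -> roughly 1.4T tokens and ~6e23 FLOPs, in line with the Chinchilla setup.
```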

Pretraining with Human Feedback

Pretraining Language Models with Human Preferences

MuP

Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer

Scan the QR code to add 小享 and reply "精选45" to get all 45 papers plus the accompanying code collection for free.

