GPT-v1: Improving Language Understanding by Generative Pre-Training