Transformer architecture
Differences between the encoder and the decoder (a code sketch follows below)
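A minimal sketch of the contrast, using PyTorch's built-in layers (my own illustration, not code from the original post): the encoder applies bidirectional self-attention over the source, while the decoder adds a causal mask to its self-attention and a cross-attention step over the encoder output.

import torch
import torch.nn as nn

d_model, nhead = 512, 8
encoder_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
decoder_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)

src = torch.randn(2, 10, d_model)   # source sequence (batch, src_len, d_model)
tgt = torch.randn(2, 7, d_model)    # target sequence (batch, tgt_len, d_model)

memory = encoder_layer(src)          # encoder: every position attends to every other position
causal_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
out = decoder_layer(tgt, memory, tgt_mask=causal_mask)   # decoder: masked self-attention, then cross-attention over memory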
Embeddings from Language Model (ELMO)
A context-based pretraining model used to generate contextualized word vectors; this part explains how ELMO works and walks through several common questions about it (see the sketch after this paragraph).
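An illustrative simplification (my own sketch, not the original ELMO implementation): a bidirectional LSTM over token embeddings, so the vector produced for each token depends on its sentence context instead of being a fixed lookup as in word2vec or GloVe.

import torch
import torch.nn as nn

vocab_size, emb_dim, hidden = 10000, 128, 256
embed = nn.Embedding(vocab_size, emb_dim)
bilstm = nn.LSTM(emb_dim, hidden, num_layers=2, bidirectional=True, batch_first=True)

token_ids = torch.randint(0, vocab_size, (1, 6))   # one sentence of 6 token ids
contextual, _ = bilstm(embed(token_ids))           # (1, 6, 2*hidden): one context-dependent vector per token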
Bidirectional Encoder Representations from Transformers (BERT)
BERT is essentially the Encoder stack of the original Transformer; two…
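A minimal usage sketch, assuming the HuggingFace transformers package and the public "bert-base-chinese" checkpoint (my choice for illustration): BERT returns one contextual vector per token from its stack of Transformer encoder layers.

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")

inputs = tokenizer("深度学习", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (batch, seq_len, hidden_size)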
1. Import the model libraries, define the parameters, and preprocess the data
import numpy as np                                   # numerical utilities
import torch                                         # core PyTorch
from torchvision.datasets import FashionMNIST        # the FashionMNIST image dataset
import torchvision.transforms as transforms          # image preprocessing transforms
from torch.utils.data import DataLoader              # batched data loading
import torch.nn.functional as F                      # functional ops (activations, losses)
im…
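The snippet above is cut off; a hedged sketch of how this step typically continues, relying on the imports already shown (the batch size, normalization choice, and data path are my assumptions, not values from the original):

batch_size = 64
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

transform = transforms.Compose([
    transforms.ToTensor(),    # HxW uint8 image -> 1xHxW float tensor in [0, 1]
])

train_data = FashionMNIST(root="./data", train=True, download=True, transform=transform)
test_data = FashionMNIST(root="./data", train=False, download=True, transform=transform)

train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=False)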
Images Speak in Images: A Generalist Painter for In-Context Visual Learning
GitHub - baaivision/Painter: Painter & SegGPT Series: Vision Foundation Models from BAAI
What it can do: both the inputs and the outputs are images, and different tasks share the same output image format…