Feng Ge's original Transformer Large Language Model (LLM) Foundations video tutorial:
https://www.bilibili.com/video/BV1X92pBqEhV
Course Introduction
This course covers an introduction to the Transformer, an overview of the Transformer architecture, and a detailed walkthrough of that architecture, including the input layer, positional encoding, multi-head attention, the feed-forward network, the encoder layer, the decoder layer, and the output layer, as well as the built-in Transformer implementation in PyTorch 2 and a hand-written Transformer implementation based on PyTorch 2.
Transformer Large Language Model (LLM) Foundations - Transformer Architecture in Detail - Layer Normalization (LayerNorm) Explained and Implemented
Layer normalization (LayerNorm) normalizes each input sample across its feature dimensions, which keeps the network stable during training.
Formula: LayerNorm(x) = γ · (x − μ) / √(σ² + ε) + β, where μ and σ² are the mean and variance of x computed over the feature dimension (d_model), ε is a small constant that prevents division by zero, and γ and β are learnable scale and shift parameters.
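For example, for a single position with features x = [1, 2, 3, 4]: μ = 2.5 and σ² = 1.25, so with γ = 1, β = 0 and ignoring ε, the normalized output is approximately [-1.342, -0.447, 0.447, 1.342], i.e. zero mean and unit variance along the feature dimension.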
Advantages:
Stability: layer normalization keeps the range of the activations from becoming too large or too small, which makes the training process more stable.
Independent of batch size: unlike batch normalization, which depends on the batch size, layer normalization normalizes each sample on its own, which makes it well suited to sequence models (a short sketch follows below).
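To make the batch-size point concrete, here is a minimal sketch (not from the original tutorial) using PyTorch's built-in nn.LayerNorm: the same sample is normalized identically whether it appears alone or inside a larger batch, because the statistics are computed per sample over the feature dimension.

import torch
import torch.nn as nn

torch.manual_seed(0)
ln = nn.LayerNorm(8)                                   # normalize over the last (feature) dimension
sample = torch.randn(1, 8)                             # a single sample with 8 features
batch = torch.cat([sample, torch.randn(3, 8)], dim=0)  # the same sample inside a batch of 4

out_alone = ln(sample)                                 # normalized on its own
out_in_batch = ln(batch)[:1]                           # normalized together with other samples
print(torch.allclose(out_alone, out_in_batch))         # True: the result does not depend on the batch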
Code implementation:
import torch
import torch.nn as nn

# Embeddings, PositionalEncoding, MultiHeadAttention and create_sequence_mask
# are defined in earlier parts of this series (input layer, positional encoding,
# multi-head attention).

# Layer normalization
class LayerNorm(nn.Module):
    # features: the word-embedding dimension (512); eps: tiny constant preventing division by zero
    def __init__(self, features, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(features))   # learnable scale parameter
        self.beta = nn.Parameter(torch.zeros(features))   # learnable shift parameter
        self.eps = eps

    def forward(self, x):
        """
        Forward pass.
        Args:
            x: input tensor [batch_size, seq_len, d_model], e.g. [3, 5, 512],
               coming from multi-head self-attention or from the feed-forward network
        Returns:
            the normalized tensor
        """
        mean = x.mean(-1, keepdim=True)                     # mean over the feature dimension [3, 5, 1]
        variance = x.var(-1, unbiased=False, keepdim=True)  # population variance, as in nn.LayerNorm [3, 5, 1]
        x = (x - mean) / torch.sqrt(variance + self.eps)    # normalize [3, 5, 512]
        return self.gamma * x + self.beta


if __name__ == '__main__':
    vocab_size = 2000       # vocabulary size
    embedding_dim = 512     # word-embedding dimension
    embeddings = Embeddings(vocab_size, embedding_dim)
    embed_result = embeddings(torch.tensor([[1999, 2, 99, 4, 5],
                                            [66, 2, 3, 22, 5],
                                            [66, 2, 3, 4, 5]]))
    print("embed_result.shape:", embed_result.shape)
    print("embed_result", embed_result)

    positional_encoding = PositionalEncoding(embedding_dim)
    result = positional_encoding(embed_result)
    print("result:", result)
    print("result.shape:", result.shape)

    # Test self-attention
    # query = key = value = result
    # mask = create_sequence_mask(5)
    # dropout = nn.Dropout(0.1)
    # attention_output, attention_weights = self_attention(query, key, value, mask, dropout)
    # print("attention_output.shape:", attention_output.shape)    # [3, 5, 512]
    # print("attention_weights.shape:", attention_weights.shape)  # [3, 5, 5]

    mha = MultiHeadAttention(d_model=512, num_heads=8)
    print(mha)
    mask = create_sequence_mask(5)
    result = mha(result, result, result, mask)
    print("result.shape:", result.shape)  # [3, 5, 512]

    # Test the feed-forward network
    # ffn = FeedForward(d_model=512, d_ff=2048)
    # ffn_result = ffn(result)
    # print('ffn_result:', ffn_result.shape)

    # Test layer normalization
    ln = LayerNorm(features=512)
    ln_result = ln(result)
    print("ln_result:", ln_result.shape)

Running result:
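As a quick sanity check (not part of the course code), the hand-written LayerNorm above can be compared against PyTorch 2's built-in nn.LayerNorm. Since gamma and beta are initialized to ones and zeros, matching nn.LayerNorm's default weight and bias, the two outputs should agree when the same eps is used:

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(3, 5, 512)                  # [batch_size, seq_len, d_model]

custom_ln = LayerNorm(features=512)         # the class defined above
builtin_ln = nn.LayerNorm(512, eps=1e-6)    # built-in implementation with the same eps

print(torch.allclose(custom_ln(x), builtin_ln(x), atol=1e-5))  # expected: True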