
Deep Learning in Phase Measuring Deflectometry [Source Code + Tutorial Included]


张小明

Front-end Development Engineer


Blogger profile: skilled in data collection and processing, modeling and simulation, program design, simulation code, and paper writing and supervision; open to exchanging experience on graduation theses and journal papers.



(1) Single-frame deformed-fringe phase retrieval based on an improved U-Net

Phase measuring deflectometry is a high-precision technique for three-dimensional measurement of specular (mirror-like) surfaces: it analyzes the deformed fringe images reflected by the surface under test to obtain surface slope information, from which the three-dimensional topography is reconstructed. The traditional phase-shifting approach must capture multiple phase-shifted fringe images to compute the phase accurately, which severely limits measurement efficiency and cannot meet the needs of dynamic measurement. Single-frame phase retrieval methods need only one image, but the conventional Fourier-transform method has limited accuracy and loses edge detail.

This work proposes a single-frame deformed-fringe phase retrieval method based on an improved U-Net, using the strong nonlinear fitting capability of deep learning to predict the wrapped phase distribution directly from a single deformed fringe image. The architecture follows the U-Net encoder-decoder structure. The encoder path extracts multi-scale features of the deformed fringe image through successive convolution and downsampling operations, capturing both the local modulation characteristics and the global distribution pattern of the fringes. The decoder path progressively restores the spatial resolution of the phase map through upsampling and skip connections; the skip connections pass shallow encoder features directly to the corresponding decoder layers, preserving high-frequency fringe detail and keeping the edges of the phase map sharp. Tailored to the phase-retrieval task, residual connections and an attention mechanism are introduced: the residual connections let the network learn the residual of the mapping from input fringes to target phase, which speeds up convergence, while the attention mechanism makes the network adaptively focus on regions with pronounced fringe deformation, improving local phase accuracy.

The training set is produced with the conventional ten-step phase-shifting method and contains a large number of deformed fringe images paired with high-precision wrapped phase maps. Experiments show that the trained network predicts the wrapped phase accurately from a single deformed fringe image, with accuracy approaching the ten-step phase-shifting method while improving measurement efficiency by an order of magnitude.
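Since the training labels come from the ten-step phase-shifting method, a minimal sketch of that label-generation step may help. The relations below are the standard N-step phase-shifting formulas for the fringe model I_n = A + B·cos(φ + 2πn/N); the function name and array shapes are illustrative assumptions, not code from the original project.

    import numpy as np

    def n_step_phase_shift(images):
        # images: (N, H, W) stack; frame n carries phase shift 2*pi*n/N.
        n = images.shape[0]
        deltas = 2 * np.pi * np.arange(n) / n
        s = np.tensordot(np.sin(deltas), images, axes=1)   # sum_n I_n * sin(delta_n)
        c = np.tensordot(np.cos(deltas), images, axes=1)   # sum_n I_n * cos(delta_n)
        wrapped_phase = np.arctan2(-s, c)                  # wrapped phase in (-pi, pi]
        modulation = (2.0 / n) * np.sqrt(s ** 2 + c ** 2)  # local fringe amplitude B
        return wrapped_phase, modulation

With a (10, H, W) stack this produces both label types used in this article: the wrapped phase maps for the phase network here, and the modulation maps for the quality network in section (2).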

(2) Deep-learning-based single-frame modulation measurement and quality assessment

Modulation is an important quality indicator in structured-light measurement: it reflects the contrast and signal-to-noise ratio of the fringe image, and phase results in low-modulation regions are unreliable and must be identified and filtered out during data processing. Traditional modulation computation likewise requires multiple phase-shifted images, so it faces the same efficiency bottleneck as phase retrieval.

This work extends the deep-learning approach to single-frame modulation measurement, designing a dedicated network that predicts the modulation distribution directly from a single deformed fringe image. The modulation network also adopts the improved U-Net architecture but is optimized for the characteristics of the task: modulation is a non-negative real quantity describing the local fringe amplitude, so its data distribution differs from that of the phase. A ReLU activation at the output layer guarantees non-negative predictions, and the loss function combines mean squared error with a structural similarity (SSIM) term, the former constraining numerical accuracy and the latter the spatial structure of the predicted map. High-precision modulation images computed by the multi-step phase-shifting algorithm serve as training labels, establishing the mapping from deformed fringe images to modulation images. Training uses data augmentation, including random cropping, rotation, and brightness adjustment, to improve generalization across measurement conditions.

Experiments show that the network predicts the modulation distribution accurately from a single image, in close agreement with the modulation computed by the ten-step phase-shifting method, providing an efficient basis for data quality assessment and the identification of abnormal regions. Together, the phase network and the modulation network form a complete single-frame fringe analysis pipeline that significantly improves the measurement efficiency of phase measuring deflectometry.
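The augmentation strategy requires that geometric transforms be applied identically to the fringe image and its label, while brightness changes affect the two label types differently. Below is a minimal paired-augmentation sketch; the crop size, rotation set, and gain range are illustrative assumptions rather than the original training configuration.

    import numpy as np

    def augment_pair(fringe, label, label_scales_with_intensity=False, crop=224, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        h, w = fringe.shape
        # Identical random crop for image and label.
        top = int(rng.integers(0, h - crop + 1))
        left = int(rng.integers(0, w - crop + 1))
        fringe = fringe[top:top + crop, left:left + crop]
        label = label[top:top + crop, left:left + crop]
        # Identical rotation by a random multiple of 90 degrees.
        k = int(rng.integers(0, 4))
        fringe, label = np.rot90(fringe, k), np.rot90(label, k)
        # Brightness jitter: wrapped phase is intensity-invariant, but modulation
        # scales linearly with intensity, so a modulation label must follow the gain.
        gain = rng.uniform(0.8, 1.2)
        fringe = fringe * gain
        if label_scales_with_intensity:
            label = label * gain
        return fringe.copy(), label.copy()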

(3) Dynamic 3D reconstruction of specular surfaces based on orthogonal composite grating encoding

Dynamic three-dimensional measurement of specular surfaces requires obtaining phase information in two orthogonal directions from a single frame. Traditionally this is achieved with orthogonal composite grating encoding, i.e., superimposing horizontal and vertical fringes in the same frame. Demodulating the composite fringes, however, suffers from spectral aliasing and color crosstalk, and the two-dimensional Fourier-transform method struggles to separate the phase information of the two directions accurately, which severely degrades measurement precision. This work proposes a deep-learning demodulation method for orthogonal composite fringes, training a neural network to predict the wrapped phase in both directions directly from a single composite fringe image. The architecture uses a dual-branch output design: a shared encoder extracts the common features of the composite fringe image, and two independent decoder branches predict the horizontal and vertical phase distributions, respectively. The shared encoder lets the network learn the mutual relationship and constraints between the fringes of the two directions, so that information from one direction assists the demodulation of the other and improves overall prediction accuracy.
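To make the orthogonal composite encoding concrete, the sketch below synthesizes a single-frame composite fringe image by superimposing two sinusoidal carriers; the carrier frequencies, amplitudes, and background level are illustrative assumptions.

    import numpy as np

    def make_composite_fringe(phase_x, phase_y, fx=8, fy=8):
        # phase_x / phase_y: (H, W) phase terms of the fringes varying along x and y.
        h, w = phase_x.shape
        y, x = np.mgrid[0:h, 0:w]
        fringe_x = 0.25 * np.cos(2 * np.pi * fx * x / w + phase_x)  # carrier varying along x
        fringe_y = 0.25 * np.cos(2 * np.pi * fy * y / h + phase_y)  # carrier varying along y
        return 0.5 + fringe_x + fringe_y  # background plus both carriers, in [0, 1]

Training pairs for the dual-branch network can then be built by feeding the composite image as the input and the two wrapped phase maps as the targets of the respective decoder branches.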

Full source code: the improved U-Net with attention gates, the phase, modulation, and dual-branch networks, the loss functions, the dataset wrapper, and the training routines.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset


class DoubleConvBlock(nn.Module):
    """Two 3x3 conv + BN + ReLU layers, the basic U-Net building block."""
    def __init__(self, in_channels, out_channels):
        super(DoubleConvBlock, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.conv(x)


class AttentionBlock(nn.Module):
    """Attention gate: weights encoder features x by a decoder gating signal g."""
    def __init__(self, F_g, F_l, F_int):
        super(AttentionBlock, self).__init__()
        self.W_g = nn.Sequential(
            nn.Conv2d(F_g, F_int, kernel_size=1),
            nn.BatchNorm2d(F_int)
        )
        self.W_x = nn.Sequential(
            nn.Conv2d(F_l, F_int, kernel_size=1),
            nn.BatchNorm2d(F_int)
        )
        self.psi = nn.Sequential(
            nn.Conv2d(F_int, 1, kernel_size=1),
            nn.BatchNorm2d(1),
            nn.Sigmoid()
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        g1 = self.W_g(g)
        x1 = self.W_x(x)
        psi = self.relu(g1 + x1)
        psi = self.psi(psi)
        return x * psi


class ImprovedUNet(nn.Module):
    """U-Net with attention-gated skip connections (sections 1 and 2)."""
    def __init__(self, in_channels=1, out_channels=1):
        super(ImprovedUNet, self).__init__()
        self.encoder1 = DoubleConvBlock(in_channels, 64)
        self.encoder2 = DoubleConvBlock(64, 128)
        self.encoder3 = DoubleConvBlock(128, 256)
        self.encoder4 = DoubleConvBlock(256, 512)
        self.bottleneck = DoubleConvBlock(512, 1024)
        self.upconv4 = nn.ConvTranspose2d(1024, 512, kernel_size=2, stride=2)
        self.attention4 = AttentionBlock(512, 512, 256)
        self.decoder4 = DoubleConvBlock(1024, 512)
        self.upconv3 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
        self.attention3 = AttentionBlock(256, 256, 128)
        self.decoder3 = DoubleConvBlock(512, 256)
        self.upconv2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.attention2 = AttentionBlock(128, 128, 64)
        self.decoder2 = DoubleConvBlock(256, 128)
        self.upconv1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.attention1 = AttentionBlock(64, 64, 32)
        self.decoder1 = DoubleConvBlock(128, 64)
        self.final_conv = nn.Conv2d(64, out_channels, kernel_size=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        e1 = self.encoder1(x)
        e2 = self.encoder2(self.pool(e1))
        e3 = self.encoder3(self.pool(e2))
        e4 = self.encoder4(self.pool(e3))
        b = self.bottleneck(self.pool(e4))
        d4 = self.upconv4(b)
        e4 = self.attention4(d4, e4)
        d4 = self.decoder4(torch.cat([d4, e4], dim=1))
        d3 = self.upconv3(d4)
        e3 = self.attention3(d3, e3)
        d3 = self.decoder3(torch.cat([d3, e3], dim=1))
        d2 = self.upconv2(d3)
        e2 = self.attention2(d2, e2)
        d2 = self.decoder2(torch.cat([d2, e2], dim=1))
        d1 = self.upconv1(d2)
        e1 = self.attention1(d1, e1)
        d1 = self.decoder1(torch.cat([d1, e1], dim=1))
        return self.final_conv(d1)


class PhaseRetrievalNetwork(nn.Module):
    """Coarse U-Net phase prediction plus residual refinement; tanh scales output to (-pi, pi)."""
    def __init__(self):
        super(PhaseRetrievalNetwork, self).__init__()
        self.unet = ImprovedUNet(in_channels=1, out_channels=1)
        self.residual_refine = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1)
        )

    def forward(self, fringe_image):
        coarse_phase = self.unet(fringe_image)
        combined = torch.cat([fringe_image, coarse_phase], dim=1)
        refined_phase = coarse_phase + self.residual_refine(combined)
        return torch.tanh(refined_phase) * np.pi


class ModulationRetrievalNetwork(nn.Module):
    """Modulation prediction; the ReLU output enforces non-negativity (section 2)."""
    def __init__(self):
        super(ModulationRetrievalNetwork, self).__init__()
        self.unet = ImprovedUNet(in_channels=1, out_channels=1)

    def forward(self, fringe_image):
        modulation = self.unet(fringe_image)
        return F.relu(modulation)


class DualBranchPhaseNetwork(nn.Module):
    """Shared encoder with two decoder branches for orthogonal composite fringes (section 3)."""
    def __init__(self):
        super(DualBranchPhaseNetwork, self).__init__()
        self.shared_encoder = nn.Sequential(
            DoubleConvBlock(1, 64), nn.MaxPool2d(2),
            DoubleConvBlock(64, 128), nn.MaxPool2d(2),
            DoubleConvBlock(128, 256), nn.MaxPool2d(2),
            DoubleConvBlock(256, 512)
        )
        self.horizontal_decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2),
            DoubleConvBlock(256, 256),
            nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2),
            DoubleConvBlock(128, 128),
            nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2),
            DoubleConvBlock(64, 64),
            nn.Conv2d(64, 1, kernel_size=1)
        )
        self.vertical_decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2),
            DoubleConvBlock(256, 256),
            nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2),
            DoubleConvBlock(128, 128),
            nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2),
            DoubleConvBlock(64, 64),
            nn.Conv2d(64, 1, kernel_size=1)
        )

    def forward(self, composite_fringe):
        shared_features = self.shared_encoder(composite_fringe)
        horizontal_phase = torch.tanh(self.horizontal_decoder(shared_features)) * np.pi
        vertical_phase = torch.tanh(self.vertical_decoder(shared_features)) * np.pi
        return horizontal_phase, vertical_phase


class SSIMLoss(nn.Module):
    """Structural similarity loss over a Gaussian window; returns 1 - mean SSIM."""
    def __init__(self, window_size=11):
        super(SSIMLoss, self).__init__()
        self.window_size = window_size
        self.channel = 1
        self.window = self._create_window(window_size, self.channel)

    def _create_window(self, window_size, channel):
        gauss = torch.tensor(
            [np.exp(-(x - window_size // 2) ** 2 / (2 * 1.5 ** 2)) for x in range(window_size)],
            dtype=torch.float32  # keep the window float32 so it matches the network outputs
        )
        gauss = gauss / gauss.sum()
        window = gauss.unsqueeze(1) * gauss.unsqueeze(0)  # separable 2D Gaussian
        window = window.unsqueeze(0).unsqueeze(0).repeat(channel, 1, 1, 1)
        return window

    def forward(self, pred, target):
        window = self.window.to(pred.device)
        mu1 = F.conv2d(pred, window, padding=self.window_size // 2, groups=self.channel)
        mu2 = F.conv2d(target, window, padding=self.window_size // 2, groups=self.channel)
        mu1_sq = mu1 ** 2
        mu2_sq = mu2 ** 2
        mu1_mu2 = mu1 * mu2
        sigma1_sq = F.conv2d(pred * pred, window, padding=self.window_size // 2, groups=self.channel) - mu1_sq
        sigma2_sq = F.conv2d(target * target, window, padding=self.window_size // 2, groups=self.channel) - mu2_sq
        sigma12 = F.conv2d(pred * target, window, padding=self.window_size // 2, groups=self.channel) - mu1_mu2
        C1 = 0.01 ** 2
        C2 = 0.03 ** 2
        ssim = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
        return 1 - ssim.mean()


class CombinedPhaseLoss(nn.Module):
    """Weighted sum of MSE (numerical accuracy) and SSIM (spatial structure)."""
    def __init__(self, alpha=0.5):
        super(CombinedPhaseLoss, self).__init__()
        self.mse = nn.MSELoss()
        self.ssim = SSIMLoss()
        self.alpha = alpha

    def forward(self, pred, target):
        mse_loss = self.mse(pred, target)
        ssim_loss = self.ssim(pred, target)
        return self.alpha * mse_loss + (1 - self.alpha) * ssim_loss


class OrthogonalConstraintLoss(nn.Module):
    """Penalizes correlation between the two predicted phase maps."""
    def __init__(self, lambda_orth=0.1):
        super(OrthogonalConstraintLoss, self).__init__()
        self.lambda_orth = lambda_orth

    def forward(self, phase_h, phase_v):
        correlation = torch.mean(phase_h * phase_v)
        return self.lambda_orth * torch.abs(correlation)


class FringeDataset(Dataset):
    """Pairs of fringe images and phase labels, with optional modulation labels."""
    def __init__(self, fringe_images, phase_images, modulation_images=None):
        self.fringe_images = fringe_images
        self.phase_images = phase_images
        self.modulation_images = modulation_images

    def __len__(self):
        return len(self.fringe_images)

    def __getitem__(self, idx):
        fringe = torch.from_numpy(self.fringe_images[idx]).float().unsqueeze(0)
        phase = torch.from_numpy(self.phase_images[idx]).float().unsqueeze(0)
        if self.modulation_images is not None:
            modulation = torch.from_numpy(self.modulation_images[idx]).float().unsqueeze(0)
            return fringe, phase, modulation
        return fringe, phase


def train_phase_network(model, train_loader, val_loader, epochs, learning_rate):
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    criterion = CombinedPhaseLoss(alpha=0.7)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=5)
    for epoch in range(epochs):
        model.train()
        train_loss = 0
        for fringe, phase in train_loader:
            optimizer.zero_grad()
            pred_phase = model(fringe)
            loss = criterion(pred_phase, phase)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
        model.eval()
        val_loss = 0
        with torch.no_grad():
            for fringe, phase in val_loader:
                pred_phase = model(fringe)
                loss = criterion(pred_phase, phase)
                val_loss += loss.item()
        scheduler.step(val_loss)  # lower the learning rate when validation loss plateaus
    return model


def train_dual_branch_network(model, train_loader, epochs, learning_rate):
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    phase_criterion = CombinedPhaseLoss()
    orth_criterion = OrthogonalConstraintLoss()
    for epoch in range(epochs):
        model.train()
        for composite, phase_h, phase_v in train_loader:
            optimizer.zero_grad()
            pred_h, pred_v = model(composite)
            loss_h = phase_criterion(pred_h, phase_h)
            loss_v = phase_criterion(pred_v, phase_v)
            loss_orth = orth_criterion(pred_h, pred_v)
            total_loss = loss_h + loss_v + loss_orth
            total_loss.backward()
            optimizer.step()
    return model


if __name__ == "__main__":
    # Smoke test: one forward pass of each network on dummy data.
    phase_net = PhaseRetrievalNetwork()
    modulation_net = ModulationRetrievalNetwork()
    dual_branch_net = DualBranchPhaseNetwork()
    dummy_fringe = torch.randn(4, 1, 256, 256)
    phase_output = phase_net(dummy_fringe)
    modulation_output = modulation_net(dummy_fringe)
    phase_h, phase_v = dual_branch_net(dummy_fringe)
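A hypothetical end-to-end wiring of the components above, assuming the definitions from the code block and pre-computed arrays train_fringes, train_phases, val_fringes, and val_phases of shape (N, 256, 256); the batch size, epoch count, and learning rate are illustrative:

    train_loader = DataLoader(FringeDataset(train_fringes, train_phases), batch_size=8, shuffle=True)
    val_loader = DataLoader(FringeDataset(val_fringes, val_phases), batch_size=8)
    model = train_phase_network(PhaseRetrievalNetwork(), train_loader, val_loader,
                                epochs=100, learning_rate=1e-4)

The same pattern applies to the modulation network (swap in ModulationRetrievalNetwork and modulation labels) and to the dual-branch network via train_dual_branch_network with (composite, phase_h, phase_v) batches.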


