
pytorch

[Pytorch] Inputting specified model parameters to optimizer
params_to_optimize = [param for name, param in self.model.named_parameters() if self.check_name_validity(TRAIN_WEIGHT_LIST, name)]
self.opt = optim.AdamW(params_to_optimize, lr=config['lr'], weight_decay=config['w_decay'])
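A minimal, self-contained sketch of the same idea, assuming a toy nn.Sequential model and a hypothetical TRAIN_WEIGHT_LIST of name substrings standing in for self.check_name_validity:

import torch.nn as nn
import torch.optim as optim

# toy stand-in model; TRAIN_WEIGHT_LIST holds name substrings of the layers to train
model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2))
TRAIN_WEIGHT_LIST = ['1.']  # hypothetical: only the second layer ('1.weight', '1.bias')

params_to_optimize = [param for name, param in model.named_parameters()
                      if any(key in name for key in TRAIN_WEIGHT_LIST)]
opt = optim.AdamW(params_to_optimize, lr=1e-4, weight_decay=1e-2)

Parameters that are filtered out are simply never handed to the optimizer, so they keep their current values during training.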
How to count the number of parameters of a NN and measure FLOPs required for the NN
from fvcore.nn import FlopCountAnalysis
def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad)
ResNet = ConvNet(use_pretrained=True, feature_extract=False, resent_model=saved_args.resnet_model)
N_param = count_parameters(ResNet) / 1e6
input_tensor = torch.randn(1, 3, 320, 640)
flops = FlopCountAnalysis(ResNet, input_tensor..
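A self-contained sketch of both measurements, using torchvision's resnet18 as a stand-in for the ConvNet above (fvcore's FlopCountAnalysis reports the total via .total()):

import torch
from fvcore.nn import FlopCountAnalysis
from torchvision.models import resnet18  # stand-in for the ConvNet above

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

model = resnet18()
n_param = count_parameters(model) / 1e6      # parameter count in millions
input_tensor = torch.randn(1, 3, 320, 640)   # dummy input at the target resolution
flops = FlopCountAnalysis(model, input_tensor)
print(f'{n_param:.2f}M params, {flops.total() / 1e9:.2f} GFLOPs')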
Initialize specific parameters of a NN with pre-trained ones and stop them from learning
# pre-trained & target state dict
file_name = './saved_models/nuscenes_CVT_model1100/saved_chk_point_27.pt'
pre_state_dict = torch.load(file_name, map_location=torch.device('cpu'))['model_state_dict']
target_state_dict = model.state_dict()
# copy from pre-trained to target
for name, param in pre_state_dict.items():
    if 'shallow' in name:
        if name.replace("module.", "") in target_state_dict:
            ..
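A hedged sketch of the full copy-and-freeze pattern the excerpt starts; the 'module.' prefix handling and the 'shallow' filter come from the excerpt, while the toy model, the in-memory checkpoint, and the freezing step are assumptions:

import torch
import torch.nn as nn

# hypothetical model with a 'shallow' block whose weights we want to copy and freeze
model = nn.Sequential()
model.add_module('shallow', nn.Linear(8, 8))
model.add_module('head', nn.Linear(8, 2))

# hypothetical checkpoint saved from a DataParallel-wrapped version of the same model
pre_state_dict = {'module.shallow.weight': torch.randn(8, 8),
                  'module.shallow.bias': torch.zeros(8)}
target_state_dict = model.state_dict()

# copy from pre-trained to target, stripping the DataParallel 'module.' prefix
for name, param in pre_state_dict.items():
    key = name.replace('module.', '')
    if 'shallow' in name and key in target_state_dict:
        target_state_dict[key].copy_(param)
model.load_state_dict(target_state_dict)

# stop the copied parameters from learning
for name, param in model.named_parameters():
    if 'shallow' in name:
        param.requires_grad = False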
[Pytorch] Transformer w/o self-attention implementation compatible with TensorRT
class TransFormer(nn.Module):
    def __init__(self, dim, heads, dim_head, drop=0.1, qkv_bias=True):
        super(TransFormer, self).__init__()
        self.dim_head = dim_head
        self.scale = dim_head ** -0.5
        self.heads = heads
        self.to_q = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
        self.to_k = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
        self.to_v = nn.Line..
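The class above is cut off; as a sketch of the general technique the title points at, attention can be written with only nn.Linear, matmul, and softmax (ops that convert cleanly to TensorRT) rather than nn.MultiheadAttention. The forward pass below is an assumption, not the post's exact code:

import torch
import torch.nn as nn

class SimpleAttention(nn.Module):
    # attention built from Linear / matmul / softmax only (TensorRT-friendly ops)
    def __init__(self, dim, heads, dim_head, qkv_bias=True):
        super().__init__()
        self.heads, self.dim_head = heads, dim_head
        self.scale = dim_head ** -0.5
        self.to_q = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
        self.to_k = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
        self.to_v = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
        self.proj = nn.Linear(heads * dim_head, dim)

    def forward(self, x, context):
        # x: (b, n, dim) queries, context: (b, m, dim) keys/values
        b, n, _ = x.shape
        m = context.shape[1]
        q = self.to_q(x).view(b, n, self.heads, self.dim_head).transpose(1, 2)
        k = self.to_k(context).view(b, m, self.heads, self.dim_head).transpose(1, 2)
        v = self.to_v(context).view(b, m, self.heads, self.dim_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, self.heads * self.dim_head)
        return self.proj(out)

x = torch.randn(2, 100, 128)    # queries
ctx = torch.randn(2, 400, 128)  # keys/values, e.g. image features
out = SimpleAttention(dim=128, heads=4, dim_head=32)(x, ctx)  # -> (2, 100, 128)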
Feature Pyramid Network (FPN) pytorch implementation
class FPN(nn.Module):
    def __init__(self, dim, sizes, channels):
        '''
        dim : target dimension
        sizes = [57, 113, 225, 450]
        channels = [1024, 512, 256, 64]
        '''
        super(FPN, self).__init__()
        self.sizes = sizes
        self.channels = channels
        self.dim_reduce, self.merge = nn.ModuleDict(), nn.ModuleDict()
        for idx, size in enumerate(sizes):
            self.dim_reduce[str(size)] = nn.Conv2d(channels[idx], dim, kernel_size=1,..
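The constructor above is truncated; here is a minimal sketch of a top-down FPN with the same ingredients (1x1 lateral convs to a common dim, upsample-and-add, 3x3 merge convs). The channel list follows the excerpt, but the forward logic is an assumption based on the standard FPN recipe:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    def __init__(self, dim=256, channels=(1024, 512, 256, 64)):
        super().__init__()
        # 1x1 convs bring every backbone level to the same target dimension
        self.dim_reduce = nn.ModuleList(nn.Conv2d(c, dim, kernel_size=1) for c in channels)
        # 3x3 convs smooth each merged level
        self.merge = nn.ModuleList(nn.Conv2d(dim, dim, kernel_size=3, padding=1) for _ in channels)

    def forward(self, feats):
        # feats: coarsest (smallest) to finest (largest) feature maps
        x = self.dim_reduce[0](feats[0])
        outs = [self.merge[0](x)]
        for i in range(1, len(feats)):
            lateral = self.dim_reduce[i](feats[i])
            x = F.interpolate(x, size=lateral.shape[-2:], mode='nearest') + lateral
            outs.append(self.merge[i](x))
        return outs

# small demo sizes (the post's 57/113/225/450 maps would work the same way)
feats = [torch.randn(1, c, s, s) for c, s in zip((1024, 512, 256, 64), (8, 16, 32, 64))]
outs = SimpleFPN()(feats)  # four maps, each with dim=256 channels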
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
Solution: pip install protobuf==3.20.*
Pytorch how to use nn.ModuleDict with zip for iteration
class TEST(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(10, 10, 1)
    def forward(self, x):
        return self.conv(x)

num_res = 2
BEVEncoder = nn.ModuleDict()
UpSampler = nn.ModuleDict()
for _ in range(num_res):
    IsSelfAttn = True if _ == 0 else False
    BEVEncoder[str(_)] = TEST()
    if (_ == 0):
        UpSampler[str(_)] = None
    else:
        UpSampler[str(_)] = TEST()
for (_, enc), (_, up) in sort..
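The final loop is cut off; presumably it walks the two ModuleDicts in lockstep. A hedged sketch of that pattern, zipping the sorted items() of both dicts and skipping the None up-sampler at the first level:

import torch
import torch.nn as nn

BEVEncoder = nn.ModuleDict({'0': nn.Conv2d(10, 10, 1), '1': nn.Conv2d(10, 10, 1)})
UpSampler = nn.ModuleDict({'0': None, '1': nn.Conv2d(10, 10, 1)})  # ModuleDict accepts None entries

x = torch.randn(1, 10, 8, 8)
for (enc_key, enc), (up_key, up) in zip(sorted(BEVEncoder.items()), sorted(UpSampler.items())):
    x = enc(x)
    if up is not None:   # first level has no up-sampler
        x = up(x)

Sorting by the string keys keeps the two dicts aligned even if they were filled in different orders.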
[Pytorch] Loading specific keys for NN initialization
This is from the answers in https://discuss.pytorch.org/t/how-to-load-part-of-pre-trained-model/1113/16 (How to load part of a pre-trained model?)
After model_dict.update(pretrained_dict), the model_dict may still have keys that the pretrained model doesn't have, which will cause an error. Assume the following situation:
pretrained_dict: ['A', 'B', 'C', 'D']
model_dict: ['A', 'B', 'C', 'E']
After pretrained_ ..
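A sketch of the partial-loading recipe that thread converges on (filter the pre-trained dict down to keys present in the current model, update, then load); the shape check is an extra safeguard, not part of the quoted answer:

import torch
import torch.nn as nn

# hypothetical: a checkpoint whose architecture only partially matches the current model
pretrained = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8))
model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 4))   # last layer differs

pretrained_dict = pretrained.state_dict()
model_dict = model.state_dict()

# 1. keep only keys that exist in the current model and whose shapes match
pretrained_dict = {k: v for k, v in pretrained_dict.items()
                   if k in model_dict and v.shape == model_dict[k].shape}
# 2. overwrite the matching entries
model_dict.update(pretrained_dict)
# 3. load; keys missing from the checkpoint keep their fresh initialization
model.load_state_dict(model_dict)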
Image Augmentation (Photometric) Method
class PhotoMetricDistortion:
    """Apply photometric distortion to image sequentially, every transformation
    is applied with a probability of 0.5. The position of random contrast is in
    second or second to last.
    1. random brightness
    2. random contrast (mode 0)
    3. convert color from BGR to HSV
    4. random saturation
    5. random hue
    6. convert color from HSV to BGR
    7. random contrast (mode 1)
    8. randomly s..
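A compact sketch of the first two steps on that list (random brightness, then random contrast in mode 0), assuming a float image in [0, 255] and the usual 0.5 probability per step; the HSV-based steps follow the same structure:

import numpy as np

def random_brightness_contrast(img, brightness_delta=32, contrast_range=(0.5, 1.5)):
    # img: float array in [0, 255]; each step fires with probability 0.5
    img = img.astype(np.float32)
    if np.random.rand() < 0.5:   # 1. random brightness
        img = img + np.random.uniform(-brightness_delta, brightness_delta)
    if np.random.rand() < 0.5:   # 2. random contrast (mode 0)
        img = img * np.random.uniform(*contrast_range)
    return np.clip(img, 0, 255)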
Torch.Tensor can generate 'nan' elements
nn.Parameter(torch.Tensor(batch, h_dim))
The tensor above often produces nan elements, so it is recommended to create tensors as follows instead:
nn.Parameter(torch.rand(batch, h_dim))
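A small check that illustrates the point; exact results vary per run because torch.Tensor allocates uninitialized memory:

import torch
import torch.nn as nn

uninit = nn.Parameter(torch.Tensor(4, 16))   # uninitialized memory: arbitrary values, may contain nan/inf
sampled = nn.Parameter(torch.rand(4, 16))    # uniform samples in [0, 1): always well-defined

print(torch.isnan(uninit).any())    # occasionally True
print(torch.isnan(sampled).any())   # always False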