
Deep Learning

How to count the number of parameters of a NN and measure FLOPs required for the NN

    from fvcore.nn import FlopCountAnalysis

    def count_parameters(model):
        return sum(p.numel() for p in model.parameters() if p.requires_grad)

    ResNet = ConvNet(use_pretrained=True, feature_extract=False, resent_model=saved_args.resnet_model)
    N_param = count_parameters(ResNet) / 1e6
    input_tensor = torch.randn(1, 3, 320, 640)
    flops = FlopCountAnalysis(ResNet, input_tensor..
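The preview cuts off mid-call. As a self-contained sketch of the same pattern (a torchvision ResNet-18 stands in for the ConvNet wrapper, which is not shown here; note that fvcore counts one multiply-add as one FLOP):

    import torch
    import torchvision
    from fvcore.nn import FlopCountAnalysis

    def count_parameters(model):
        # trainable parameters only
        return sum(p.numel() for p in model.parameters() if p.requires_grad)

    model = torchvision.models.resnet18()        # stand-in for the ConvNet wrapper above
    print(count_parameters(model) / 1e6, "M trainable params")

    input_tensor = torch.randn(1, 3, 320, 640)   # (batch, channels, height, width)
    flops = FlopCountAnalysis(model, input_tensor)
    print(flops.total() / 1e9, "GFLOPs")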
Denoising Diffusion Probabilistic model

(1) https://towardsdatascience.com/diffusion-model-from-scratch-in-pytorch-ddpm-9d9760528946
    "Diffusion Model from Scratch in Pytorch": implementation of Denoising Diffusion Probabilistic Models (DDPM) (towardsdatascience.com)

(2) https://jang-inspiration.com/ddpm-1
    https://jang-inspiration.com/ddpm-2
    "[Paper review] DDPM: Denoising Diffusion Probabilistic Model": the DDPM loss function, the meaning of each of its terms, and sampling and T..
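For quick reference, the closed-form forward (noising) process those posts walk through, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, fits in a few lines. A minimal sketch assuming the paper's linear beta schedule (not code from the linked posts):

    import torch

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)        # linear schedule from the DDPM paper
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative product up to step t

    def q_sample(x0, t, noise):
        # q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)
        a = alpha_bars[t].sqrt().view(-1, 1, 1, 1)
        s = (1.0 - alpha_bars[t]).sqrt().view(-1, 1, 1, 1)
        return a * x0 + s * noise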
Initialize specific parameters of a NN with pre-trained ones and stop them from learning

    # pre-trained & target state dict
    file_name = './saved_models/nuscenes_CVT_model1100/saved_chk_point_27.pt'
    pre_state_dict = torch.load(file_name, map_location=torch.device('cpu'))['model_state_dict']
    target_state_dict = model.state_dict()

    # copy from pre-trained to target
    for name, param in pre_state_dict.items():
        if 'shallow' in name:
            if name.replace("module.", "") in target_state_dict:
                ..
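A self-contained sketch of the full pattern, assuming (as in the snippet) that the reusable parameters all contain 'shallow' in their name and the checkpoint keys carry a DataParallel "module." prefix; the Net class and the fake checkpoint below are only illustrative:

    import torch
    import torch.nn as nn

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.shallow = nn.Linear(8, 8)   # to be initialized from the checkpoint
            self.deep = nn.Linear(8, 8)      # keeps training from scratch

    model = Net()
    # stand-in for torch.load(file_name, map_location='cpu')['model_state_dict']
    pre_state_dict = {'module.shallow.weight': torch.randn(8, 8),
                      'module.shallow.bias': torch.zeros(8)}

    target_state_dict = model.state_dict()
    for name, param in pre_state_dict.items():
        key = name.replace("module.", "")    # strip the DataParallel prefix
        if 'shallow' in name and key in target_state_dict:
            target_state_dict[key] = param
    model.load_state_dict(target_state_dict)

    # stop the copied parameters from learning
    for name, param in model.named_parameters():
        if 'shallow' in name:
            param.requires_grad = False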
[Pytorch] Transformer w/o self-attention implementation compatible with TensorRT

    class TransFormer(nn.Module):
        def __init__(self, dim, heads, dim_head, drop=0.1, qkv_bias=True):
            super(TransFormer, self).__init__()
            self.dim_head = dim_head
            self.scale = dim_head ** -0.5
            self.heads = heads
            self.to_q = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.to_k = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.to_v = nn.Line..
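The preview stops at to_v. Below is a minimal sketch of how the forward pass of such a module can be written with only view/permute/matmul/softmax, the ops that tend to convert cleanly to TensorRT (as opposed to einops-style rearranges). The class name and the output projection are my assumptions, not the post's exact code:

    import torch
    import torch.nn as nn

    class CrossAttention(nn.Module):
        def __init__(self, dim, heads, dim_head, qkv_bias=True):
            super().__init__()
            self.heads, self.dim_head = heads, dim_head
            self.scale = dim_head ** -0.5
            self.to_q = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.to_k = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.to_v = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.proj = nn.Linear(heads * dim_head, dim)

        def forward(self, q_in, kv_in):
            b, n, _ = q_in.shape
            m = kv_in.shape[1]
            # (b, seq, heads*dim_head) -> (b, heads, seq, dim_head)
            q = self.to_q(q_in).view(b, n, self.heads, self.dim_head).permute(0, 2, 1, 3)
            k = self.to_k(kv_in).view(b, m, self.heads, self.dim_head).permute(0, 2, 1, 3)
            v = self.to_v(kv_in).view(b, m, self.heads, self.dim_head).permute(0, 2, 1, 3)
            attn = torch.matmul(q, k.transpose(-2, -1)) * self.scale
            attn = attn.softmax(dim=-1)
            out = torch.matmul(attn, v)                       # (b, heads, n, dim_head)
            out = out.permute(0, 2, 1, 3).reshape(b, n, -1)   # (b, n, heads*dim_head)
            return self.proj(out)

Usage, e.g. CrossAttention(64, 4, 16)(torch.randn(2, 100, 64), torch.randn(2, 50, 64)), gives a (2, 100, 64) output.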
Feature Pyramid Network (FPN) pytorch implementation

    class FPN(nn.Module):
        def __init__(self, dim, sizes, channels):
            '''
            dim : target dimension
            sizes = [57, 113, 225, 450]
            channels = [1024, 512, 256, 64]
            '''
            super(FPN, self).__init__()
            self.sizes = sizes
            self.channels = channels
            self.dim_reduce, self.merge = nn.ModuleDict(), nn.ModuleDict()
            for idx, size in enumerate(sizes):
                self.dim_reduce[str(size)] = nn.Conv2d(channels[idx], dim, kernel_size=1,..
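Since the preview cuts off inside __init__, here is a minimal top-down FPN sketch of the same idea, assuming coarse-to-fine inputs; FPNSketch and its layer names are mine, not the post's:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FPNSketch(nn.Module):
        # 1x1 lateral convs reduce every level to `dim` channels, then each level
        # is upsampled and merged with the next finer one
        def __init__(self, dim, channels):
            super().__init__()
            self.lateral = nn.ModuleList(nn.Conv2d(c, dim, kernel_size=1) for c in channels)
            self.merge = nn.ModuleList(nn.Conv2d(dim, dim, kernel_size=3, padding=1) for _ in channels[1:])

        def forward(self, feats):
            # feats: coarsest to finest, e.g. channels = [1024, 512, 256, 64]
            x = self.lateral[0](feats[0])
            outs = [x]
            for lat, mrg, f in zip(self.lateral[1:], self.merge, feats[1:]):
                x = F.interpolate(x, size=f.shape[-2:], mode='nearest') + lat(f)
                x = mrg(x)
                outs.append(x)
            return outs

    feats = [torch.randn(1, c, s, s) for c, s in zip([1024, 512, 256, 64], [57, 113, 225, 450])]
    outs = FPNSketch(dim=128, channels=[1024, 512, 256, 64])(feats)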
"If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0."

Solution:

    pip install protobuf==3.20.*
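The same error message also lists a slower fallback for when downgrading is not an option: forcing the pure-Python protobuf implementation. A sketch (the variable must be set before any generated *_pb2 module is imported):

    import os
    # must run before importing protobuf / any generated *_pb2 module
    os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"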
Pytorch how to use nn.ModuleDict with zip for iteration

    class TEST(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(10, 10, 1)

        def forward(self, x):
            return self.conv(x)

    num_res = 2
    BEVEncoder = nn.ModuleDict()
    UpSampler = nn.ModuleDict()
    for _ in range(num_res):
        IsSelfAttn = True if _ == 0 else False
        BEVEncoder[str(_)] = TEST()
        if (_ == 0):
            UpSampler[str(_)] = None
        else:
            UpSampler[str(_)] = TEST()

    for (_, enc), (_, up) in sort..
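The preview is cut at `sort..`; assuming it resolves to sorted(...items()), a self-contained sketch of the lockstep iteration (a None entry is allowed in an nn.ModuleDict, so levels without an upsampler can be skipped):

    import torch
    import torch.nn as nn

    class TEST(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(10, 10, 1)
        def forward(self, x):
            return self.conv(x)

    BEVEncoder = nn.ModuleDict({'0': TEST(), '1': TEST()})
    UpSampler = nn.ModuleDict({'0': None, '1': TEST()})   # first level: no upsampler

    x = torch.randn(1, 10, 8, 8)
    # sorted() fixes the key order; zip pairs each encoder with its upsampler
    for (_, enc), (_, up) in zip(sorted(BEVEncoder.items()), sorted(UpSampler.items())):
        x = enc(x)
        if up is not None:
            x = up(x)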
[Pytorch] Loading specific keys for NN initialization

This is from the answers in https://discuss.pytorch.org/t/how-to-load-part-of-pre-trained-model/1113/16

"How to load part of a pre-trained model? After model_dict.update(pretrained_dict), the model_dict may still have keys that pretrained_model doesn't have, which will cause an error. Assume the following situation: pretrained_dict: ['A', 'B', 'C', 'D'], model_dict: ['A', 'B', 'C', 'E']. After pretrained_..
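The pattern the linked answer converges on, as a minimal sketch (two nn.Linear layers stand in for the real checkpoint and model):

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 4)                          # stand-in for the real network
    pretrained_dict = nn.Linear(4, 4).state_dict()   # stand-in for torch.load(...)

    model_dict = model.state_dict()
    # 1. keep only keys the current model also has (drops 'D' in the thread's example)
    pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
    # 2. overwrite the matching entries; model-only keys (e.g. 'E') keep their init
    model_dict.update(pretrained_dict)
    # 3. load the merged state dict
    model.load_state_dict(model_dict)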
Image Frustum to Global 3D

    # generate camera frustum
    h, w = self.cfg['image']['h'], self.cfg['image']['w']
    n_cam, dim, downsampled_h, downsampled_w = feat.size()

    # Depth grid
    depth_grid = torch.arange(1, 65, 1, dtype=torch.float)
    depth_grid = depth_grid.view(-1, 1, 1).expand(-1, downsampled_h, downsampled_w)
    n_depth_slices = depth_grid.shape[0]

    # x and y grids
    x_grid = torch.linspace(0, w - 1, downsampled_w, dtype=torch.f..
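The preview ends before the "to Global 3D" part. A minimal sketch of that step under standard pinhole conventions: scale pixel coordinates by depth, unproject with the inverse intrinsics, then apply a camera-to-global transform. The sizes match the snippet above, but `intrinsics` and `cam_to_global` are placeholder names for matrices the real code would load:

    import torch

    D, H, W = 64, 20, 40                       # depth slices, downsampled feature size
    depth = torch.arange(1, 65, dtype=torch.float).view(D, 1, 1).expand(D, H, W)
    x = torch.linspace(0, 639, W).view(1, 1, W).expand(D, H, W)
    y = torch.linspace(0, 319, H).view(1, H, 1).expand(D, H, W)

    # pixel coords scaled by depth: K^-1 @ (u*d, v*d, d) gives camera-frame points
    pix = torch.stack((x * depth, y * depth, depth), dim=-1)      # (D, H, W, 3)
    intrinsics = torch.eye(3)                                     # placeholder K
    cam_pts = pix @ torch.inverse(intrinsics).T                   # camera frame

    cam_to_global = torch.eye(4)                                  # placeholder extrinsics
    hom = torch.cat((cam_pts, torch.ones_like(cam_pts[..., :1])), dim=-1)
    global_pts = (hom @ cam_to_global.T)[..., :3]                 # (D, H, W, 3)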
Deformable DETR attention operation cuda build

CUDA operation:

    cd ./models/ops
    sh ./make.sh

    # unit test (should see all checking is True)
    python test.py

Requirement:

    pip install -r requirements.txt

Point Pillars:

    cd ops
    python setup.py develop