
Deep Learning

[Pytorch] Transformer w/o self-attention implementation compatible with TensorRT

    class TransFormer(nn.Module):
        def __init__(self, dim, heads, dim_head, drop=0.1, qkv_bias=True):
            super(TransFormer, self).__init__()
            self.dim_head = dim_head
            self.scale = dim_head ** -0.5
            self.heads = heads
            self.to_q = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.to_k = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.to_v = nn.Line…
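The preview cuts off before the forward pass. As a hedged sketch (my completion, not the post's actual code), the usual trick for TensorRT compatibility is to express attention with only view/permute/matmul/softmax, avoiding einsum ops that often fail to export through ONNX; the output projection below is also an assumption:

    import torch
    import torch.nn as nn

    class TransFormer(nn.Module):
        # hypothetical completion of the truncated class above
        def __init__(self, dim, heads, dim_head, drop=0.1, qkv_bias=True):
            super().__init__()
            self.heads, self.dim_head = heads, dim_head
            self.scale = dim_head ** -0.5
            self.to_q = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.to_k = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.to_v = nn.Linear(dim, heads * dim_head, bias=qkv_bias)
            self.proj = nn.Linear(heads * dim_head, dim)   # assumed output projection
            self.drop = nn.Dropout(drop)

        def forward(self, x):
            b, n, _ = x.shape
            # reshape to (b, heads, n, dim_head) using view/permute only
            q = self.to_q(x).view(b, n, self.heads, self.dim_head).permute(0, 2, 1, 3)
            k = self.to_k(x).view(b, n, self.heads, self.dim_head).permute(0, 2, 1, 3)
            v = self.to_v(x).view(b, n, self.heads, self.dim_head).permute(0, 2, 1, 3)
            # plain matmul + softmax: exports cleanly to ONNX/TensorRT
            attn = ((q @ k.transpose(-2, -1)) * self.scale).softmax(dim=-1)
            out = (attn @ v).permute(0, 2, 1, 3).reshape(b, n, self.heads * self.dim_head)
            return self.drop(self.proj(out))

    # usage: TransFormer(64, 8, 8)(torch.randn(2, 16, 64)).shape -> (2, 16, 64)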
Feature Pyramid Network (FPN) pytorch implementation

    class FPN(nn.Module):
        def __init__(self, dim, sizes, channels):
            '''
            dim : target dimension
            sizes = [57, 113, 225, 450]
            channels = [1024, 512, 256, 64]
            '''
            super(FPN, self).__init__()
            self.sizes = sizes
            self.channels = channels
            self.dim_reduce, self.merge = nn.ModuleDict(), nn.ModuleDict()
            for idx, size in enumerate(sizes):
                self.dim_reduce[str(size)] = nn.Conv2d(channels[idx], dim, kernel_size=1, …
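The excerpt stops inside __init__. As a rough sketch of where such a module usually goes (the 3x3 merge convs and the forward pass below are my assumption, not the post's code), the top-down pathway reduces each level to dim channels, upsamples the coarser map, adds it in, and smooths the sum:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FPN(nn.Module):
        def __init__(self, dim, sizes, channels):
            super().__init__()
            self.sizes = sizes
            self.dim_reduce, self.merge = nn.ModuleDict(), nn.ModuleDict()
            for idx, size in enumerate(sizes):
                # 1x1 conv brings every level to the common dimension
                self.dim_reduce[str(size)] = nn.Conv2d(channels[idx], dim, kernel_size=1)
                # 3x3 conv smooths the merged map (assumed)
                self.merge[str(size)] = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

        def forward(self, feats):
            # feats: dict keyed by str(size), deepest (smallest) level first
            out, prev = {}, None
            for size in self.sizes:
                x = self.dim_reduce[str(size)](feats[str(size)])
                if prev is not None:
                    # top-down pathway: upsample the coarser map and add it
                    x = x + F.interpolate(prev, size=x.shape[-2:], mode='nearest')
                prev = x
                out[str(size)] = self.merge[str(size)](x)
            return out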
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.

Solution:

    pip install protobuf==3.20.*
Pytorch how to use nn.ModuleDict with zip for iteration

    class TEST(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(10, 10, 1)

        def forward(self, x):
            return self.conv(x)

    num_res = 2
    BEVEncoder = nn.ModuleDict()
    UpSampler = nn.ModuleDict()
    for _ in range(num_res):
        IsSelfAttn = True if _ == 0 else False
        BEVEncoder[str(_)] = TEST()
        if (_ == 0):
            UpSampler[str(_)] = None
        else:
            UpSampler[str(_)] = TEST()

    for (_, enc), (_, up) in sort…
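The last line is truncated at sort…; the title suggests the two dicts are iterated in lockstep by sorting their items and zipping them. A sketch of that pattern, with a placeholder input tensor:

    x = torch.randn(1, 10, 8, 8)
    for (key_e, enc), (key_u, up) in zip(sorted(BEVEncoder.items()),
                                         sorted(UpSampler.items())):
        x = enc(x)
        if up is not None:   # entry '0' holds None, so no upsampling there
            x = up(x)

Sorting by key keeps the '0', '1', ... entries aligned between the two ModuleDicts; for more than ten entries an integer sort key would be needed.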
[Pytorch] Loading specific keys for NN initialization

This is from the answers in https://discuss.pytorch.org/t/how-to-load-part-of-pre-trained-model/1113/16 ("How to load part of pre-trained model?"):

After model_dict.update(pretrained_dict), the model_dict may still have keys that the pretrained model doesn't have, which will cause an error. Assume the following situation:

    pretrained_dict: ['A', 'B', 'C', 'D']
    model_dict:      ['A', 'B', 'C', 'E']

After pretrained_…
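The pattern the thread converges on: filter the checkpoint down to keys the current model actually has, merge, and load (the checkpoint path below is a placeholder):

    pretrained_dict = torch.load('pretrained.pth')   # placeholder path
    model_dict = model.state_dict()
    # keep only keys present in the current model (drops 'D' in the example above)
    pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
    # overwrite the matching entries; 'E' keeps its fresh initialization
    model_dict.update(pretrained_dict)
    model.load_state_dict(model_dict)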
Image Frustum to Global 3D

    # generate camera frustum
    h, w = self.cfg['image']['h'], self.cfg['image']['w']
    n_cam, dim, downsampled_h, downsampled_w = feat.size()

    # Depth grid
    depth_grid = torch.arange(1, 65, 1, dtype=torch.float)
    depth_grid = depth_grid.view(-1, 1, 1).expand(-1, downsampled_h, downsampled_w)
    n_depth_slices = depth_grid.shape[0]

    # x and y grids
    x_grid = torch.linspace(0, w - 1, downsampled_w, dtype=torch.f…
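The excerpt ends at the x/y grids. A hedged sketch of how such a frustum is typically stacked and lifted to global 3D (the intrinsics K and extrinsics T below are placeholders, and the unprojection follows the common Lift-Splat-style recipe rather than the post's exact code):

    import torch

    # standalone stand-in shapes
    h, w, downsampled_h, downsampled_w = 900, 1600, 28, 50

    depth_grid = torch.arange(1, 65, 1, dtype=torch.float)
    depth_grid = depth_grid.view(-1, 1, 1).expand(-1, downsampled_h, downsampled_w)
    n_depth_slices = depth_grid.shape[0]

    x_grid = torch.linspace(0, w - 1, downsampled_w, dtype=torch.float)
    x_grid = x_grid.view(1, 1, -1).expand(n_depth_slices, downsampled_h, -1)
    y_grid = torch.linspace(0, h - 1, downsampled_h, dtype=torch.float)
    y_grid = y_grid.view(1, -1, 1).expand(n_depth_slices, -1, downsampled_w)

    # frustum of (u, v, d) points, shape (D, H, W, 3)
    frustum = torch.stack((x_grid, y_grid, depth_grid), dim=-1)

    # unproject: scale pixel coords by depth, apply K^-1, then cam-to-global T
    K, T = torch.eye(3), torch.eye(4)                 # placeholder calibration
    pts = torch.cat((frustum[..., :2] * frustum[..., 2:3], frustum[..., 2:3]), dim=-1)
    cam = pts @ torch.inverse(K).T                    # camera coordinates
    cam_h = torch.cat((cam, torch.ones_like(cam[..., :1])), dim=-1)
    global_pts = (cam_h @ T.T)[..., :3]               # global coordinates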
Deformable DETR attention operation CUDA build

CUDA operation:

    cd ./models/ops
    sh ./make.sh
    # unit test (should see all checking is True)
    python test.py

Requirement:

    pip install -r requirements.txt

Point Pillars:

    cd ops
    python setup.py develop
Image Augmentation (Photometric) Method

    class PhotoMetricDistortion:
        """Apply photometric distortion to the image sequentially; every
        transformation is applied with a probability of 0.5. The position of
        random contrast is second or second to last.

        1. random brightness
        2. random contrast (mode 0)
        3. convert color from BGR to HSV
        4. random saturation
        5. random hue
        6. convert color from HSV to BGR
        7. random contrast (mode 1)
        8. randomly swap channels
        """
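A minimal sketch of the first two steps (random brightness, then random contrast), assuming float numpy images and the delta/alpha ranges commonly used in mmdetection:

    import numpy as np

    def random_brightness(img, delta=32):
        # step 1: add a uniform offset with probability 0.5
        if np.random.randint(2):
            img = img + np.random.uniform(-delta, delta)
        return img

    def random_contrast(img, lower=0.5, upper=1.5):
        # steps 2/7: scale pixel values with probability 0.5
        if np.random.randint(2):
            img = img * np.random.uniform(lower, upper)
        return img

    img = np.random.randint(0, 256, (64, 64, 3)).astype(np.float32)
    img = random_contrast(random_brightness(img))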
Torch.Tensor can generate 'nan' elements

    nn.Parameter(torch.Tensor(batch, h_dim))

The tensor above often produces nan elements, because torch.Tensor(batch, h_dim) allocates uninitialized memory. It is therefore recommended to create the parameter like this instead:

    nn.Parameter(torch.rand(batch, h_dim))
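A quick check that shows the difference (the nan result is machine-dependent, since torch.Tensor returns whatever happened to be in memory):

    import torch

    t = torch.Tensor(4, 8)         # uninitialized memory: may contain nan/inf
    print(torch.isnan(t).any())    # sometimes True
    r = torch.rand(4, 8)           # uniform samples in [0, 1): always finite
    print(torch.isnan(r).any())    # always False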
A collection of web pages on using Pytorch DDP (Distributed Data Parallel)

1. PyTorch tutorial (overview): https://pytorch.org/tutorials/beginner/dist_overview.html
   "PyTorch Distributed Overview": the overview page for the torch.distributed package, which categorizes the distributed documents into different topics.
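For reference, a minimal DDP sketch (not from the linked pages; the script and module names are placeholders) showing the usual init/wrap/step sequence when launched with torchrun:

    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process
        dist.init_process_group(backend='nccl')
        local_rank = int(os.environ['LOCAL_RANK'])
        torch.cuda.set_device(local_rank)

        model = DDP(nn.Linear(10, 10).cuda(local_rank), device_ids=[local_rank])
        opt = torch.optim.SGD(model.parameters(), lr=0.1)

        x = torch.randn(4, 10).cuda(local_rank)
        loss = model(x).sum()
        loss.backward()            # gradients are all-reduced across processes
        opt.step()

        dist.destroy_process_group()

    if __name__ == '__main__':
        main()

    # launch: torchrun --nproc_per_node=2 ddp_min.py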