
MLNLP (Machine Learning Algorithms and Natural Language Processing) is a well-known NLP community in China and abroad, reaching NLP master's and PhD students, university faculty, and industry researchers.
The community's vision is to promote exchange between the academic and industrial natural language processing and machine learning communities and the broader group of enthusiasts worldwide, and in particular to help beginners make progress.
Reposted from | 極市平臺
The ECCV 2022 papers, organized by research direction, are being continuously updated in the 極市 community; 54 papers have been compiled so far. Project repository: https://github.com/extreme-assistant/ECCV2022-Paper-Code-Interpretation
Below are the ECCV 2022 papers added this week, covering detection, segmentation, image processing, video understanding, neural network architecture design, unsupervised learning, transfer learning, and other directions.
- Detection
- Segmentation
- Image Processing
- Video Processing
- Image/Video Retrieval and Understanding
- Estimation
- Object Tracking
- Text Detection and Recognition
- GAN / Generative / Adversarial
- Neural Network Architecture Design
- Data Processing
- Model Training / Generalization
- Model Compression
- Model Evaluation
- Semi-supervised / Self-supervised Learning
- Multimodal / Cross-modal Learning
- Few-shot Learning
- Transfer Learning / Domain Adaptation
- Reinforcement Learning
1
『Detection』
2D Object Detection
[1] Point-to-Box Network for Accurate Object Detection via Single Point Supervision
paper:https://arxiv.org/abs/2207.06827
code:https://github.com/ucas-vg/p2bnet

[2] You Should Look at All Objects
paper:https://arxiv.org/abs/2207.07889
code:https://github.com/charlespikachu/yslao
[3] Adversarially-Aware Robust Object Detector
paper:https://arxiv.org/abs/2207.06202
code:https://github.com/7eu7d7/robustdet

3D Object Detection
[1] Rethinking IoU-based Optimization for Single-stage 3D Object Detection
paper:https://arxiv.org/abs/2207.09332
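For readers unfamiliar with the IoU quantity that such detection objectives build on, below is a minimal, generic sketch of axis-aligned 2D IoU in Python. It is illustrative only: the rotated 3D-box IoU used by single-stage 3D detectors is considerably more involved, and nothing here (including the function name `iou_2d`) is taken from the paper above.

```python
def iou_2d(box_a, box_b):
    """Axis-aligned 2D IoU between two boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou_2d((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.1429
```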

Human-Object Interaction Detection
[1] Towards Hard-Positive Query Mining for DETR-based Human-Object Interaction Detection
paper:https://arxiv.org/abs/2207.05293
code:https://github.com/muchhair/hqm

Image Anomaly Detection
[1] DICE: Leveraging Sparsification for Out-of-Distribution Detection
paper:https://arxiv.org/abs/2111.09805
code:https://github.com/deeplearning-wisc/dice
2
『Segmentation』
Instance Segmentation
[1] Box-supervised Instance Segmentation with Level Set Evolution
paper:https://arxiv.org/abs/2207.09055
[2] OSFormer: One-Stage Camouflaged Instance Segmentation with Transformers
paper:https://arxiv.org/abs/2207.02255
code:https://github.com/pjlallen/osformer

Semantic Segmentation
[1] 2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds
paper:https://arxiv.org/abs/2207.04397
code:https://github.com/yanx27/2dpass
Video Object Segmentation
[1] Learning Quality-aware Dynamic Memory for Video Object Segmentation
paper:https://arxiv.org/abs/2207.07922
code:https://github.com/workforai/qdmn

3
『Image Processing』
Super-Resolution
[1] Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks
paper:https://arxiv.org/abs/2203.03844
code:https://github.com/zysxmu/ddtb
Image Denoising
[1] Deep Semantic Statistics Matching (D2SM) Denoising Network
paper:https://arxiv.org/abs/2207.09302
Image Restoration / Image Enhancement / Image Reconstruction
[1] Semantic-Sparse Colorization Network for Deep Exemplar-based Colorization
paper:https://arxiv.org/abs/2112.01335

[2] Geometry-aware Single-image Full-body Human Relighting
paper:https://arxiv.org/abs/2207.04750

[3] Multi-Modal Masked Pre-Training for Monocular Panoramic Depth Completion
paper:https://arxiv.org/abs/2203.09855
[4] PanoFormer: Panorama Transformer for Indoor 360 Depth Estimation
paper:https://arxiv.org/abs/2203.09283
[5] SESS: Saliency Enhancing with Scaling and Sliding
paper:https://arxiv.org/abs/2207.01769
[6] RigNet: Repetitive Image Guided Network for Depth Completion
paper:https://arxiv.org/abs/2107.13802

Image Outpainting
[1] Outpainting by Queries
paper:https://arxiv.org/abs/2207.05312
code:https://github.com/kaiseem/queryotr

Style Transfer
[1] CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer
paper:https://arxiv.org/abs/2207.04808
code:https://github.com/JarrentWu1031/CCPL
4
『Video Processing』
[1] Improving the Perceptual Quality of 2D Animation Interpolation
paper:https://arxiv.org/abs/2111.12792
code:https://github.com/shuhongchen/eisai-anime-interpolator

[2] Real-Time Intermediate Flow Estimation for Video Frame Interpolation
paper:https://arxiv.org/abs/2011.06294
code:https://github.com/MegEngine/arXiv2020-RIFE
5
『Image/Video Retrieval and Understanding』
Action Recognition
[1] ReAct: Temporal Action Detection with Relational Queries
paper:https://arxiv.org/abs/2207.07097
code:https://github.com/sssste/react
[2] Hunting Group Clues with Transformers for Social Group Activity Recognition
paper:https://arxiv.org/abs/2207.05254
Video Understanding
[1] GraphVid: It Only Takes a Few Nodes to Understand a Video
paper:https://arxiv.org/abs/2207.01375
[2] Deep Hash Distillation for Image Retrieval
paper:https://arxiv.org/abs/2112.08816
code:https://github.com/youngkyunjang/deep-hash-distillation

Video Retrieval
[1] TS2-Net: Token Shift and Selection Transformer for Text-Video Retrieval
paper:https://arxiv.org/abs/2207.07852
code:https://github.com/yuqi657/ts2_net

[2] Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval
paper:https://arxiv.org/abs/2112.01832
6
『Estimation』
Pose Estimation
[1] Category-Level 6D Object Pose and Size Estimation using Self-Supervised Deep Prior Deformation Networks
paper:https://arxiv.org/abs/2207.05444
code:https://github.com/jiehonglin/self-dpdn

Depth Estimation
[1] Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches
paper:https://arxiv.org/abs/2207.04718
7
『Object Tracking』
[1] Towards Grand Unification of Object Tracking
paper:https://arxiv.org/abs/2207.07078
code:https://github.com/masterbin-iiau/unicorn

8
『Text Detection and Recognition』
[1] Dynamic Low-Resolution Distillation for Cost-Efficient End-to-End Text Spotting
paper:https://arxiv.org/abs/2207.06694
code:https://github.com/hikopensource/davar-lab-ocr
9
『GAN / Generative / Adversarial』
[1] Eliminating Gradient Conflict in Reference-based Line-Art Colorization
paper:https://arxiv.org/abs/2207.06095
code:https://github.com/kunkun0w0/sga

[2] WaveGAN: Frequency-aware GAN for High-Fidelity Few-shot Image Generation
paper:https://arxiv.org/abs/2207.07288
code:https://github.com/kobeshegu/eccv2022_wavegan
[3] FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs
paper:https://arxiv.org/abs/2207.08630
code:https://github.com/iceli1007/fakeclr

[4] UniCR: Universally Approximated Certified Robustness via Randomized Smoothing
paper:https://arxiv.org/abs/2207.02152
10
『Neural Network Architecture Design』
Neural Architecture Search (NAS)
[1] ScaleNet: Searching for the Model to Scale
paper:https://arxiv.org/abs/2207.07267
code:https://github.com/luminolx/scalenet

[2] Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning
paper:https://arxiv.org/abs/2203.02651
code:https://github.com/sseung0703/ekg
[3] EAGAN: Efficient Two-stage Evolutionary Architecture Search for GANs
paper:https://arxiv.org/abs/2111.15097
code:https://github.com/marsggbo/EAGAN
11
『Data Processing』
Normalization
[1] Fine-grained Data Distribution Alignment for Post-Training Quantization
paper:https://arxiv.org/abs/2109.04186
code:https://github.com/zysxmu/fdda
12
『Model Training / Generalization』
Noisy Labels
[1] Learning with Noisy Labels by Efficient Transition Matrix Estimation to Combat Label Miscorrection
paper:https://arxiv.org/abs/2111.14932
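As background for transition-matrix approaches such as the one above, the sketch below shows the classic forward-correction loss, where a known or estimated noise transition matrix T, with T[i, j] = P(noisy label j | clean label i), maps clean-class probabilities to noisy-class probabilities before the cross-entropy is taken. This is a generic textbook formulation for illustration only, not the estimation procedure proposed in the paper; the symmetric-noise toy matrix and the name `forward_corrected_ce` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def forward_corrected_ce(logits, noisy_labels, T):
    """Cross-entropy under a label-noise transition matrix T (forward correction)."""
    clean_probs = F.softmax(logits, dim=1)   # model's clean-class posterior
    noisy_probs = clean_probs @ T            # predicted noisy-label distribution
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

# Toy usage: 10 classes with symmetric 20% label noise, batch of 8
num_classes, noise_rate = 10, 0.2
T = torch.full((num_classes, num_classes), noise_rate / (num_classes - 1))
T.fill_diagonal_(1.0 - noise_rate)
logits = torch.randn(8, num_classes)
noisy_labels = torch.randint(0, num_classes, (8,))
print(forward_corrected_ce(logits, noisy_labels, T).item())
```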

13
『Model Compression』
Knowledge Distillation
[1] Knowledge Condensation Distillation
paper:https://arxiv.org/abs/2207.05409
code:https://github.com/dzy3/kcd
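For context, the snippet below is a minimal sketch of the classic knowledge-distillation objective: a temperature-softened KL term on teacher/student logits plus a hard-label cross-entropy term. It illustrates the general technique only and is not the knowledge-condensation method proposed in the paper above; the temperature T=4.0 and weight alpha=0.5 are arbitrary illustrative choices.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic distillation loss: soft-target KL term + hard-label CE term."""
    # Temperature-softened distributions; the KL term is scaled by T^2 so its
    # gradient magnitude stays comparable to the hard-label term
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a batch of 8 samples and 10 classes
s, t = torch.randn(8, 10), torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(kd_loss(s, t, y).item())
```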
14
『Model Evaluation』
[1] Hierarchical Latent Structure for Multi-Modal Vehicle Trajectory Forecasting
paper:https://arxiv.org/abs/2207.04624
code:https://github.com/d1024choi/hlstrajforecast
15
『Semi-supervised / Unsupervised / Self-supervised Learning』
[1] FedX: Unsupervised Federated Learning with Cross Knowledge Distillation
paper:https://arxiv.org/abs/2207.09158

[2] Synergistic Self-supervised and Quantization Learning
paper:https://arxiv.org/abs/2207.05432
code:https://github.com/megvii-research/ssql-eccv2022
[3] Contrastive Deep Supervision
paper:https://arxiv.org/abs/2207.05306
code:https://github.com/archiplab-linfengzhang/contrastive-deep-supervision
[4] Dense Teacher: Dense Pseudo-Labels for Semi-supervised Object Detection
paper:https://arxiv.org/abs/2207.02541

[5] Image Coding for Machines with Omnipotent Feature Learning
paper:https://arxiv.org/abs/2207.01932
16
『Multimodal / Cross-modal Learning』
Vision-Language
[1] Contrastive Vision-Language Pre-training with Limited Resources
paper:https://arxiv.org/abs/2112.09331
code:https://github.com/zerovl/zerovl
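As a reference point for contrastive vision-language pre-training, the sketch below shows the generic CLIP-style symmetric InfoNCE objective over a batch of matched image/text embeddings. It is a standard formulation given for illustration only, not the resource-limited training recipe of the paper above; the embedding dimension, batch size, and temperature are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over matched image/text embedding pairs."""
    # L2-normalize, then compute pairwise cosine-similarity logits
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    # The i-th image matches the i-th text, so the targets are the diagonal indices
    targets = torch.arange(image_emb.size(0))
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage: a batch of 4 image/text embedding pairs of dimension 32
img, txt = torch.randn(4, 32), torch.randn(4, 32)
print(clip_style_loss(img, txt).item())
```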
Cross-modal
[1] Cross-modal Prototype Driven Network for Radiology Report Generation
paper:https://arxiv.org/abs/
code:https://github.com/markin-wang/xpronet

17
『Few-shot Learning』
[1] Learning Instance and Task-Aware Dynamic Kernels for Few Shot Learning
paper:https://arxiv.org/abs/2112.03494

18
『Transfer Learning / Domain Adaptation』
[1] Factorizing Knowledge in Neural Networks
paper:https://arxiv.org/abs/2207.03337
code:https://github.com/adamdad/knowledgefactor
[2] CycDA: Unsupervised Cycle Domain Adaptation from Image to Video
paper:https://arxiv.org/abs/2203.16244
19
『Reinforcement Learning』
[1] Target-absent Human Attention
paper:https://arxiv.org/abs/2207.01166
code:https://github.com/neouyghur/sess

