
迁移学习 Transfer Learning


Everything about Transfer Learning.

Contents: Papers · Tutorials · Research areas · Theory · Survey · Code · Dataset & benchmark


If you find this repo useful, please cite it as follows:

```
@misc{
  howpublished = {\url{}},
  title = {Everything about Transfer Learning and Domain Adaptation},
  author = {Wang, Jindong and others}
}
```

License: MIT. More related repos: Activity recognition · Machine learning

NOTE: You can open the code directly in GitHub Codespaces on the web and run it without downloading anything!

0.Papers (论文)

Awesome transfer learning papers (迁移学习文章汇总)

Latest papers (all papers are also archived in the doc/ folder):

Latest papers (2021-10-26)

- BMVC-21 [SILT: Self-supervised Lighting Transfer Using Implicit Image Decomposition] - Lighting transfer using implicit image decomposition - 用隐式图像分解进行光照迁移
- [Domain Adaptation in Multi-View Embedding for Cross-Modal Video Retrieval] - Domain adaptation for cross-modal video retrieval - 用领域自适应进行跨模态的视频检索
- [Age and Gender Prediction using Deep CNNs and Transfer Learning] - Age and gender prediction using transfer learning - 用迁移学习进行年龄和性别预测
- [Domain Adaptation for Rare Classes Augmented with Synthetic Samples] - Domain adaptation for rare classes - 稀疏类的domain adaptation
- WACV-22 [AuxAdapt: Stable and Efficient Test-Time Adaptation for Temporally Consistent Video Semantic Segmentation] - Test-time adaptation for video semantic segmentation - 测试时adaptation用于视频语义分割
- NeurIPS-21 [Unsupervised Domain Adaptation with Dynamics-Aware Rewards in Reinforcement Learning] - Domain adaptation in reinforcement learning - 在强化学习中应用domain adaptation
Latest papers (2021-10-21)

- WACV-21 [Domain Generalization through Audio-Visual Relative Norm Alignment in First Person Action Recognition] - Domain generalization by audio-visual alignment - 通过音频-视频对齐进行domain generalization
- BMVC-21 [Dynamic Feature Alignment for Semi-supervised Domain Adaptation] - Dynamic feature alignment for semi-supervised DA - 动态特征对齐用于半监督DA
Latest papers (2021-10-19)

- NeurIPS-21 [FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling] [知乎解读] [code] - Curriculum pseudo labeling with a unified codebase, TorchSSL - 半监督方法FlexMatch和统一算法库TorchSSL
Latest papers (2021-10-14)

- [Rethinking supervised pre-training for better downstream transferring] - Rethinking pre-training for better finetuning - 重新思考预训练以便更好finetune
- [Music Sentiment Transfer] - Music sentiment transfer learning - 迁移学习用于音乐sentiment
Latest papers (2021-10-11)

- NeurIPS-21 [Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data] - Source-free domain adaptation using contrastive learning - 无源域数据的DA,利用对比学习
- [Understanding Domain Randomization for Sim-to-real Transfer] - Understanding domain randomization for sim-to-real transfer - 对强化学习中的sim-to-real transfer进行理论上的分析
- [Dynamically Decoding Source Domain Knowledge For Unseen Domain Generalization] - Ensemble learning for domain generalization - 用集成学习进行domain generalization
- [Scale Invariant Domain Generalization Image Recapture Detection] - Scale-invariant domain generalization - 尺度不变的domain generalization
Latest papers (2021-09)

- IEEE TIP-21 [Joint Clustering and Discriminative Feature Alignment for Unsupervised Domain Adaptation] - Clustering and discriminative alignment for DA - 聚类与判定式对齐用于DA
- IEEE TNNLS-21 [Entropy Minimization Versus Diversity Maximization for Domain Adaptation] - Entropy minimization versus diversity maximization for DA - 熵最小化与diversity最大化
- [Adversarial Domain Feature Adaptation for Bronchoscopic Depth Estimation] - Adversarial domain adaptation for bronchoscopic depth estimation - 用对抗领域自适应进行支气管镜的深度估计
- EMNLP-21 [Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning] - Few-shot intent detection using pretraining and finetuning - 用迁移学习进行少样本意图检测
- EMNLP-21 [Non-Parametric Unsupervised Domain Adaptation for Neural Machine Translation] - UDA for machine translation - 用领域自适应进行机器翻译
- [KroneckerBERT: Learning Kronecker Decomposition for Pre-trained Language Models via Knowledge Distillation] - Kronecker decomposition and knowledge distillation for compressing pre-trained language models - 用Kronecker分解和知识蒸馏来进行语言模型的压缩
- [Cross-Region Domain Adaptation for Class-level Alignment] - Cross-region domain adaptation for class-level alignment - 跨区域的领域自适应用于类级别的对齐
- [Unsupervised domain adaptation for cross-modality liver segmentation via joint adversarial learning and self-learning] - Domain adaptation for cross-modality liver segmentation - 使用domain adaptation进行肝脏的跨模态分割
- [CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation] - Cross-domain transformer for domain adaptation - 基于transformer进行domain adaptation
- ICCV-21 [Shape-Biased Domain Generalization via Shock Graph Embeddings] - Domain generalization based on shape information - 基于形状进行domain generalization
- [Domain and Content Adaptive Convolution for Domain Generalization in Medical Image Segmentation] - Domain generalization for medical image segmentation - 领域泛化用于医学图像分割
- [Class-conditioned Domain Generalization via Wasserstein Distributional Robust Optimization] - Domain generalization with Wasserstein DRO - 使用Wasserstein DRO进行domain generalization
- [FedZKT: Zero-Shot Knowledge Transfer towards Heterogeneous On-Device Models in Federated Learning] - Zero-shot transfer in heterogeneous federated learning - 零次迁移用于联邦学习
- [Fishr: Invariant Gradient Variances for Out-of-distribution Generalization] - Invariant gradient variances for OOD generalization - 不变梯度方差,用于OOD
- [How Does Adversarial Fine-Tuning Benefit BERT?] - Examining how adversarial fine-tuning helps BERT - 探索对抗性finetune如何帮助BERT
- [Contrastive Domain Adaptation for Question Answering using Limited Text Corpora] - Contrastive domain adaptation for QA - QA任务中应用对比domain adaptation
Latest papers (2021-08)

- [Robust Ensembling Network for Unsupervised Domain Adaptation] - Ensembling network for domain adaptation - 集成嵌入网络用于domain adaptation
- [Federated Multi-Task Learning under a Mixture of Distributions] - Federated multi-task learning - 联邦多任务学习
- [Fine-tuning is Fine in Federated Learning] - Finetuning in federated learning - 在联邦学习中进行finetune
- [Federated Multi-Target Domain Adaptation] - Federated multi-target DA - 联邦学习场景下的多目标DA
- [Learning Transferable Parameters for Unsupervised Domain Adaptation] - Learning transferable parameters for DA - 学习适用于迁移部分的参数做UDA任务
- MICCAI-21 [A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis] - A benchmark of transfer learning for medical images - 一个详细的迁移学习用于医学图像的benchmark
- [TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation] - Vision transformer for domain adaptation - 用视觉transformer进行DA
- CIKM-21 [AdaRNN: Adaptive Learning and Forecasting of Time Series] [Code] [知乎文章] [Video] - A new perspective on using transfer learning for time series analysis - 一种新的建模时间序列的迁移学习视角
- TKDE-21 [Unsupervised Deep Anomaly Detection for Multi-Sensor Time-Series Signals] - Anomaly detection using semi-supervised and transfer learning - 半监督学习用于无监督异常检测
- SemDIAL-21 [Generating Personalized Dialogue via Multi-Task Meta-Learning] - Generating personalized dialogue using multi-task meta-learning - 用多任务元学习生成个性化的对话
- ICCV-21 [BiMaL: Bijective Maximum Likelihood Approach to Domain Adaptation in Semantic Scene Segmentation] - Bijective MMD for domain adaptation - 双射MMD用于语义分割
- [A Survey on Cross-domain Recommendation: Taxonomies, Methods, and Future Directions] - A survey on cross-domain recommendation - 跨领域的推荐的综述
- [A Data Augmented Approach to Transfer Learning for Covid-19 Detection] - Data augmentation for transfer learning for COVID-19 - 迁移学习使用数据增强,用于COVID-19
- MM-21 [Few-shot Unsupervised Domain Adaptation with Image-to-class Sparse Similarity Encoding] - Few-shot DA with image-to-class sparse similarity encoding - 小样本的领域自适应
- [Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning] - Prototype transfer and structure regularization - 原型的迁移学习
- [Finetuning Pretrained Transformers into Variational Autoencoders] - Finetuning transformers into VAEs - 把transformer迁移到VAE
- [Pre-trained Models for Sonar Images] - Pre-trained models for sonar images - 针对声纳图像的预训练模型
- [Domain Adaptor Networks for Hyperspectral Image Recognition] - Finetuning for hyperspectral image recognition - 针对高光谱图像识别的迁移学习
Latest papers (2021-07)

- CVPR-21 [Efficient Conditional GAN Transfer With Knowledge Propagation Across Classes] - Transferring conditional GANs to unseen classes - 通过知识传递,迁移预训练的conditional GAN到新类别
- CVPR-21 [Ego-Exo: Transferring Visual Representations From Third-Person to First-Person Videos] - Transfer learning from third-person to first-person video - 从第三人称视频迁移到第一人称
- [Toward Co-creative Dungeon Generation via Transfer Learning] - Game scene generation with transfer learning - 用迁移学习生成游戏场景
- [Transfer Learning in Electronic Health Records through Clinical Concept Embedding] - Transfer learning in electronic health records - 迁移学习用于医疗记录管理
- CVPR-21 [Conditional Bures Metric for Domain Adaptation] - A new metric for domain adaptation - 提出一个新的metric用于domain adaptation
- CVPR-21 [Wasserstein Barycenter for Multi-Source Domain Adaptation] - Using the Wasserstein barycenter for multi-source domain adaptation - 利用Wasserstein Barycenter进行DA
- CVPR-21 [Generalized Domain Adaptation] - A general definition of domain adaptation - 一个更抽象更一般的domain adaptation定义
- CVPR-21 [Reducing Domain Gap by Reducing Style Bias] - Style-invariant training for adaptation and generalization - 通过训练图像对style无法辨别来进行DA和DG
- CVPR-21 [Uncertainty-Guided Model Generalization to Unseen Domains] - Uncertainty-guided generalization - 基于不确定性的domain generalization
- CVPR-21 [Adaptive Methods for Real-World Domain Generalization] - Adaptive methods for domain generalization - 动态算法,用于domain generalization
- 20210716 ICML-21 [Continual Learning in the Teacher-Student Setup: Impact of Task Similarity] - Investigating task similarity in teacher-student learning - 调研在continual learning下teacher-student learning问题的任务相似度
- 20210716 BMVC-extend [Exploring Dropout Discriminator for Domain Adaptation] - Using multiple discriminators for domain adaptation - 用分布估计代替点估计来做domain adaptation
- 20210716 TPAMI-21 [Lifelong Teacher-Student Network Learning] - Lifelong distillation - 持续的知识蒸馏
- 20210716 MICCAI-21 [Few-Shot Domain Adaptation with Polymorphic Transformers] - Few-shot domain adaptation with polymorphic transformers - 用多模态transformer做少样本的domain adaptation
- 20210716 InterSpeech-21 [Speech2Video: Cross-Modal Distillation for Speech to Video Generation] - Cross-modal distillation for video generation - 跨模态蒸馏用于语音到video的生成
- 20210716 ICML-21 workshop [Leveraging Domain Adaptation for Low-Resource Geospatial Machine Learning] - Using domain adaptation for geospatial ML - 用domain adaptation进行地理空间的机器学习
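Several of the semi-supervised papers above (e.g., FlexMatch) build on confidence-based pseudo-labeling: keep a model's prediction on an unlabeled sample only when the prediction is confident enough. A minimal NumPy sketch of that idea (the threshold and the toy probabilities below are illustrative, not from any specific paper):

```python
import numpy as np

def pseudo_labels(probs, threshold=0.95):
    """Keep only samples whose max predicted probability exceeds
    the confidence threshold; return their indices and hard labels."""
    conf = probs.max(axis=1)               # per-sample confidence
    keep = np.where(conf >= threshold)[0]  # confident samples only
    return keep, probs[keep].argmax(axis=1)

# Toy predicted class probabilities for 4 unlabeled samples
probs = np.array([
    [0.98, 0.01, 0.01],   # confident -> pseudo-label 0
    [0.40, 0.35, 0.25],   # uncertain -> discarded
    [0.02, 0.96, 0.02],   # confident -> pseudo-label 1
    [0.60, 0.30, 0.10],   # uncertain -> discarded
])
keep, labels = pseudo_labels(probs, threshold=0.95)
print(keep.tolist(), labels.tolist())   # [0, 2] [0, 1]
```

FlexMatch's contribution is to make this threshold per-class and curriculum-based rather than fixed; the fixed-threshold version above is the simpler FixMatch-style baseline.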

1.Introduction and Tutorials (简介与教程)

Want to quickly learn transfer learning? See the tutorials below. (想尽快入门迁移学习?看下面的教程。)
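The recipe most introductory tutorials start from is feature-extractor fine-tuning: freeze a model pretrained on a source task and retrain only a small head on the target task. A minimal NumPy sketch of that recipe (the "backbone" here is a stand-in frozen linear map, and all data and dimensions are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: a frozen linear map.
# In practice this would be e.g. a CNN backbone trained on the source task.
W_backbone = rng.normal(size=(10, 5))

def features(x):
    return np.tanh(x @ W_backbone)   # frozen forward pass, never updated

# Small labeled target dataset (synthetic)
X_target = rng.normal(size=(20, 10))
y_target = (X_target[:, 0] > 0).astype(float)

# "Fine-tuning" reduced to fitting only a new head on the frozen features
F = features(X_target)
head, *_ = np.linalg.lstsq(F, y_target, rcond=None)

preds = (F @ head > 0.5).astype(float)
print("train accuracy:", (preds == y_target).mean())
```

With a deep-learning framework, the same idea is "set `requires_grad=False` on the backbone and train only the new classification layer".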

2.Transfer Learning Areas and Papers (研究领域与相关论文)

3.Theory and Survey (理论与综述)

Here are some articles on transfer learning theory, along with survey papers.

Survey (综述文章):

Theory (理论文章):
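Much of this theory bounds target-domain error in terms of a distance between the source and target distributions; Maximum Mean Discrepancy (MMD) is one such distance, and it also appears in many of the methods listed above. A small NumPy sketch of the (biased) squared-MMD estimator with an RBF kernel (the bandwidth `gamma` and the sample data are illustrative):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimator of squared MMD between samples X and Y
    under an RBF kernel: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
# Same distribution -> MMD^2 near zero; shifted domain -> large MMD^2
same = rbf_mmd2(rng.normal(size=(100, 3)), rng.normal(size=(100, 3)))
shifted = rbf_mmd2(rng.normal(size=(100, 3)),
                   rng.normal(loc=2.0, size=(100, 3)))
print(same < shifted)   # True: shifted domains are farther apart
```

Many deep DA methods minimize exactly this quantity between source and target feature batches as an extra loss term.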

4.Code (代码)

Unified codebases for:

More: see HERE and HERE for an instant run using Google’s Colab.

5.Transfer Learning Scholars (著名学者)

Here are some transfer learning scholars and labs.


Please note that this list is far from complete. A full list can be seen here. Transfer learning is an active field; if you know of other scholars who should be listed, please add them here.

6.Transfer Learning Thesis (硕博士论文)

Here are some popular theses on transfer learning.

Download: here (extraction code: txyz).

7.Datasets and Benchmarks (数据集与评测结果)

Please see HERE for the popular transfer learning datasets and benchmark results.


8.Transfer Learning Challenges (迁移学习比赛)

Applications (迁移学习应用)

See HERE for transfer learning applications.


Other Resources (其他资源)

Contributing (欢迎参与贡献)

If you are interested in contributing, please refer to HERE for contribution instructions.

[Notes] This GitHub repo can be used under the corresponding licenses. I want to emphasize that it may contain some PDFs or theses, which were downloaded by me and can only be used for academic purposes. The copyrights of these materials are owned by the corresponding publishers or organizations. All of this is for better academic research. If any of the authors or publishers have concerns, please contact me to delete or replace the materials.