
迁移学习 Transfer Learning



Everything about Transfer Learning.

Contents: Papers · Tutorials · Research areas · Theory · Survey · Code · Dataset & benchmark · Thesis · Scholars · Contests · Journal/conference · Applications · Others · Contributing

@Misc{transferlearning.xyz,
  howpublished = {\url{http://transferlearning.xyz}},
  title = {Everything about Transfer Learning and Domain Adaptation},
  author = {Wang, Jindong and others}
}

License: MIT. Related repos: Activity recognition, Machine learning.


NOTE: You can open the code directly in GitHub Codespaces on the web and run it without downloading anything! Also try github.dev.

0. Papers (论文)

Awesome transfer learning papers (迁移学习文章汇总)

Latest papers (all papers are also collected in doc/awesome_papers.md):

Latest papers (2021-12-01):

- NeurIPS-21 [On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources](https://arxiv.org/abs/2111.13822) - Theory and algorithm of domain-invariant learning for transfer learning - 对invariant representation的理论和算法
- WACV-22 [Semi-supervised Domain Adaptation via Sample-to-Sample Self-Distillation](https://arxiv.org/abs/2111.14353) - Sample-level self-distillation for semi-supervised DA - 样本层次的自蒸馏用于半监督DA
- [ROBIN: A Benchmark for Robustness to Individual Nuisances in Real-World Out-of-Distribution Shifts](https://arxiv.org/abs/2111.14341) - A benchmark for robustness to individual OOD nuisances - 一个OOD的benchmark
- ICML-21 workshop [Towards Principled Disentanglement for Domain Generalization](https://arxiv.org/abs/2111.13839) - Principled disentanglement for domain generalization - Principled解耦用于domain generalization
Latest papers (2021-11):

- NeurIPS-21 workshop [CytoImageNet: A large-scale pretraining dataset for bioimage transfer learning](https://arxiv.org/abs/2111.11646) - A large-scale dataset for bioimage transfer learning - 一个大规模的生物图像数据集用于迁移学习
- NeurIPS-21 workshop [Component Transfer Learning for Deep RL Based on Abstract Representations](https://arxiv.org/abs/2111.11525) - Deep transfer learning for RL - 深度迁移学习用于强化学习
- NeurIPS-21 workshop [Maximum Mean Discrepancy for Generalization in the Presence of Distribution and Missingness Shift](https://arxiv.org/abs/2111.10344) - MMD for covariate shift - 用MMD来解决covariate shift问题
- [Combined Scaling for Zero-shot Transfer Learning](https://arxiv.org/abs/2111.10050) - Scaling up for zero-shot transfer learning - 增大训练规模用于zero-shot迁移学习
- [Federated Learning with Domain Generalization](https://arxiv.org/abs/2111.10487) - Federated domain generalization - 联邦学习+domain generalization
- [Semi-Supervised Domain Generalization in Real World: New Benchmark and Strong Baseline](https://arxiv.org/abs/2111.10221) - Semi-supervised domain generalization - 半监督+domain generalization
- MICCAI-21 [Domain Generalization for Mammography Detection via Multi-style and Multi-view Contrastive Learning](https://arxiv.org/abs/2111.10827) - Domain generalization for mammography detection - 领域泛化用于乳房X射线检查
- [On Representation Knowledge Distillation for Graph Neural Networks](https://arxiv.org/abs/2111.04964) - Knowledge distillation for GNN - 适用于GNN的知识蒸馏
- BMVC-21 [Domain Attention Consistency for Multi-Source Domain Adaptation](https://arxiv.org/abs/2111.03911) - Multi-source domain adaptation using attention consistency - 用attention一致性进行多源的domain adaptation
- [Action Recognition using Transfer Learning and Majority Voting for CSGO](https://arxiv.org/abs/2111.03882) - Using transfer learning and majority voting for action recognition - 使用迁移学习和多数投票进行动作识别
- [Open-Set Crowdsourcing using Multiple-Source Transfer Learning](https://arxiv.org/abs/2111.04073) - Open-set crowdsourcing using multiple-source transfer learning - 使用多源迁移进行开放集的crowdsourcing
- [Improved Regularization and Robustness for Fine-tuning in Neural Networks](https://arxiv.org/abs/2111.04578) - Improved regularization and robustness for finetuning - 针对finetune提高其正则和鲁棒性
- [TimeMatch: Unsupervised Cross-Region Adaptation by Temporal Shift Estimation](https://arxiv.org/abs/2111.02682) - Temporal domain adaptation
- NeurIPS-21 [Modular Gaussian Processes for Transfer Learning](https://arxiv.org/abs/2110.13515) - Modular Gaussian processes for transfer learning - 在迁移学习中使用modular Gaussian过程
- [Estimating and Maximizing Mutual Information for Knowledge Distillation](https://arxiv.org/abs/2110.15946) - Global and local mutual information maximization for knowledge distillation - 局部和全局互信息最大化用于蒸馏
- [On Label Shift in Domain Adaptation via Wasserstein Distance](https://arxiv.org/abs/2110.15520) - Using Wasserstein distance to solve label shift in domain adaptation - 在DA领域中用Wasserstein distance去解决label shift问题
- [Xi-Learning: Successor Feature Transfer Learning for General Reward Functions](https://arxiv.org/abs/2110.15701) - General reward function transfer learning in RL - 在强化学习中general reward function的迁移学习
- [C-MADA: Unsupervised Cross-Modality Adversarial Domain Adaptation framework for medical Image Segmentation](https://arxiv.org/abs/2110.15823) - Cross-modality domain adaptation for medical image segmentation - 跨模态的DA用于医学图像分割
- [Deep Transfer Learning for Multi-source Entity Linkage via Domain Adaptation](https://arxiv.org/abs/2110.14509) - Domain adaptation for multi-source entity linkage - 用DA进行多源的实体链接
- [Temporal Knowledge Distillation for On-device Audio Classification](https://arxiv.org/abs/2110.14131) - Temporal knowledge distillation for on-device audio classification - 时序知识蒸馏用于设备端的语音识别
- [Transferring Domain-Agnostic Knowledge in Video Question Answering](https://arxiv.org/abs/2110.13395) - Domain-agnostic learning for VQA - 在VQA任务中进行迁移学习
Latest papers (2021-10):

- BMVC-21 [SILT: Self-supervised Lighting Transfer Using Implicit Image Decomposition](https://arxiv.org/abs/2110.12914) - Lighting transfer using implicit image decomposition - 用隐式图像分解进行光照迁移
- [Domain Adaptation in Multi-View Embedding for Cross-Modal Video Retrieval](https://arxiv.org/abs/2110.12812) - Domain adaptation for cross-modal video retrieval - 用领域自适应进行跨模态的视频检索
- [Age and Gender Prediction using Deep CNNs and Transfer Learning](https://arxiv.org/abs/2110.12633) - Age and gender prediction using transfer learning - 用迁移学习进行年龄和性别预测
- [Domain Adaptation for Rare Classes Augmented with Synthetic Samples](https://arxiv.org/abs/2110.12216) - Domain adaptation for rare classes - 稀疏类的domain adaptation
- WACV-22 [AuxAdapt: Stable and Efficient Test-Time Adaptation for Temporally Consistent Video Semantic Segmentation](https://arxiv.org/abs/2110.12369) - Test-time adaptation for video semantic segmentation - 测试时adaptation用于视频语义分割
- NeurIPS-21 [Unsupervised Domain Adaptation with Dynamics-Aware Rewards in Reinforcement Learning](https://arxiv.org/abs/2110.12997) - Domain adaptation in reinforcement learning - 在强化学习中应用domain adaptation
- WACV-21 [Domain Generalization through Audio-Visual Relative Norm Alignment in First Person Action Recognition](https://arxiv.org/abs/2110.10101) - Domain generalization by audio-visual alignment - 通过音频-视频对齐进行domain generalization
- BMVC-21 [Dynamic Feature Alignment for Semi-supervised Domain Adaptation](https://arxiv.org/abs/2110.09641) - Dynamic feature alignment for semi-supervised DA - 动态特征对齐用于半监督DA
- NeurIPS-21 [FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling](https://arxiv.org/abs/2110.08263) [知乎解读](https://zhuanlan.zhihu.com/p/422930830) [code](https://github.com/TorchSSL/TorchSSL) - Curriculum pseudo labeling with a unified codebase, TorchSSL - 半监督方法FlexMatch和统一算法库TorchSSL
- [Rethinking supervised pre-training for better downstream transferring](https://arxiv.org/abs/2110.06014) - Rethinking pre-training for better downstream finetuning - 重新思考预训练以便更好finetune
- [Music Sentiment Transfer](https://arxiv.org/abs/2110.05765) - Music sentiment transfer learning - 迁移学习用于音乐sentiment
- NeurIPS-21 [Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data](http://arxiv.org/abs/2110.03374) - Source-free domain adaptation using contrastive learning - 无源域数据的DA,利用对比学习
- [Understanding Domain Randomization for Sim-to-real Transfer](http://arxiv.org/abs/2110.03239) - Understanding domain randomization for sim-to-real transfer - 对强化学习中的sim-to-real transfer进行理论上的分析
- [Dynamically Decoding Source Domain Knowledge For Unseen Domain Generalization](http://arxiv.org/abs/2110.03027) - Ensemble learning for domain generalization - 用集成学习进行domain generalization
- [Scale Invariant Domain Generalization Image Recapture Detection](http://arxiv.org/abs/2110.03496) - Scale-invariant domain generalization - 尺度不变的domain generalization
Latest papers (2021-09):

- IEEE TIP-21 [Joint Clustering and Discriminative Feature Alignment for Unsupervised Domain Adaptation](https://ieeexplore.ieee.org/abstract/document/9535218) - Clustering and discriminative alignment for DA - 聚类与判定式对齐用于DA
- IEEE TNNLS-21 [Entropy Minimization Versus Diversity Maximization for Domain Adaptation](https://ieeexplore.ieee.org/abstract/document/9537640) - Entropy minimization versus diversity maximization for DA - 熵最小化与diversity最大化
- [Adversarial Domain Feature Adaptation for Bronchoscopic Depth Estimation](https://arxiv.org/abs/2109.11798) - Adversarial domain adaptation for bronchoscopic depth estimation - 用对抗领域自适应进行支气管镜的深度估计
- EMNLP-21 [Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning](https://arxiv.org/abs/2109.06349) - Few-shot intent detection using pretraining and finetuning - 用迁移学习进行少样本意图检测
- EMNLP-21 [Non-Parametric Unsupervised Domain Adaptation for Neural Machine Translation](https://arxiv.org/abs/2109.06604) - UDA for machine translation - 用领域自适应进行机器翻译
- [KroneckerBERT: Learning Kronecker Decomposition for Pre-trained Language Models via Knowledge Distillation](https://arxiv.org/abs/2109.06243) - Using Kronecker decomposition and knowledge distillation to compress pre-trained language models - 用Kronecker分解和知识蒸馏来进行语言模型的压缩
- [Cross-Region Domain Adaptation for Class-level Alignment](https://arxiv.org/abs/2109.06422) - Cross-region domain adaptation for class-level alignment - 跨区域的领域自适应用于类级别的对齐
- [Unsupervised domain adaptation for cross-modality liver segmentation via joint adversarial learning and self-learning](https://arxiv.org/abs/2109.05664) - Domain adaptation for cross-modality liver segmentation - 使用domain adaptation进行肝脏的跨模态分割
- [CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation](https://arxiv.org/abs/2109.06165) - Cross-domain transformer for domain adaptation - 基于transformer进行domain adaptation
- ICCV-21 [Shape-Biased Domain Generalization via Shock Graph Embeddings](https://arxiv.org/abs/2109.05671) - Domain generalization based on shape information - 基于形状进行domain generalization
- [Domain and Content Adaptive Convolution for Domain Generalization in Medical Image Segmentation](https://arxiv.org/abs/2109.05676) - Domain generalization for medical image segmentation - 领域泛化用于医学图像分割
- [Class-conditioned Domain Generalization via Wasserstein Distributional Robust Optimization](https://arxiv.org/abs/2109.03676) - Domain generalization with Wasserstein DRO - 使用Wasserstein DRO进行domain generalization
- [FedZKT: Zero-Shot Knowledge Transfer towards Heterogeneous On-Device Models in Federated Learning](https://arxiv.org/abs/2109.03775) - Zero-shot transfer in heterogeneous federated learning - 零次迁移用于联邦学习
- [Fishr: Invariant Gradient Variances for Out-of-distribution Generalization](https://arxiv.org/abs/2109.02934) - Invariant gradient variances for OOD generalization - 不变梯度方差,用于OOD
- [How Does Adversarial Fine-Tuning Benefit BERT?](https://arxiv.org/abs/2108.13602) - Examining how adversarial fine-tuning helps BERT - 探索对抗性finetune如何帮助BERT
- [Contrastive Domain Adaptation for Question Answering using Limited Text Corpora](https://arxiv.org/abs/2108.13854) - Contrastive domain adaptation for QA - QA任务中应用对比domain adaptation
Latest papers (2021-08):

- [Robust Ensembling Network for Unsupervised Domain Adaptation](https://arxiv.org/abs/2108.09473) - Ensembling network for domain adaptation - 集成嵌入网络用于domain adaptation
- [Federated Multi-Task Learning under a Mixture of Distributions](https://arxiv.org/abs/2108.10252) - Federated multi-task learning - 联邦多任务学习
- [Fine-tuning is Fine in Federated Learning](http://arxiv.org/abs/2108.07313) - Finetuning in federated learning - 在联邦学习中进行finetune
- [Federated Multi-Target Domain Adaptation](http://arxiv.org/abs/2108.07792) - Federated multi-target DA - 联邦学习场景下的多目标DA
- [Learning Transferable Parameters for Unsupervised Domain Adaptation](https://arxiv.org/abs/2108.06129) - Learning partial transfer parameters for DA - 学习适用于迁移部分的参数做UDA任务
- MICCAI-21 [A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis](https://arxiv.org/abs/2108.05930) - A benchmark of transfer learning for medical images - 一个详细的迁移学习用于医学图像的benchmark
- [TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation](https://arxiv.org/abs/2108.05988) - Vision transformer for domain adaptation - 用视觉transformer进行DA
- CIKM-21 [AdaRNN: Adaptive Learning and Forecasting of Time Series](https://arxiv.org/abs/2108.04443) [Code](https://github.com/jindongwang/transferlearning/tree/master/code/deep/adarnn) [知乎文章](https://zhuanlan.zhihu.com/p/398036372) [Video](https://www.bilibili.com/video/BV1Gh411B7rj/) - A new perspective on using transfer learning for time series analysis - 一种新的建模时间序列的迁移学习视角
- TKDE-21 [Unsupervised Deep Anomaly Detection for Multi-Sensor Time-Series Signals](https://arxiv.org/abs/2107.12626) - Anomaly detection using semi-supervised and transfer learning - 半监督学习用于无监督异常检测
- SemDIAL-21 [Generating Personalized Dialogue via Multi-Task Meta-Learning](https://arxiv.org/abs/2108.03377) - Generating personalized dialogue using multi-task meta-learning - 用多任务元学习生成个性化的对话
- ICCV-21 [BiMaL: Bijective Maximum Likelihood Approach to Domain Adaptation in Semantic Scene Segmentation](https://arxiv.org/abs/2108.03267) - Bijective maximum likelihood for domain adaptation - 双射MMD用于语义分割
- [A Survey on Cross-domain Recommendation: Taxonomies, Methods, and Future Directions](https://arxiv.org/abs/2108.03357) - A survey on cross-domain recommendation - 跨领域的推荐的综述
- [A Data Augmented Approach to Transfer Learning for Covid-19 Detection](https://arxiv.org/abs/2108.02870) - Data augmentation for transfer learning for COVID-19 detection - 迁移学习使用数据增强,用于COVID-19
- MM-21 [Few-shot Unsupervised Domain Adaptation with Image-to-class Sparse Similarity Encoding](https://arxiv.org/abs/2108.02953) - Few-shot DA with image-to-class sparse similarity encoding - 小样本的领域自适应
- [Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning](https://arxiv.org/abs/2108.02959) - Prototype transfer and structure regularization - 原型的迁移学习
- [Finetuning Pretrained Transformers into Variational Autoencoders](https://arxiv.org/abs/2108.02446) - Finetuning transformers into VAEs - 把transformer迁移到VAE
- [Pre-trained Models for Sonar Images](http://arxiv.org/abs/2108.01111) - Pre-trained models for sonar images - 针对声纳图像的预训练模型
- [Domain Adaptor Networks for Hyperspectral Image Recognition](http://arxiv.org/abs/2108.01555) - Finetuning for hyperspectral image recognition - 针对高光谱图像识别的迁移学习
Latest papers (2021-07):

- CVPR-21 [Efficient Conditional GAN Transfer With Knowledge Propagation Across Classes](https://openaccess.thecvf.com/content/CVPR2021/html/Shahbazi_Efficient_Conditional_GAN_Transfer_With_Knowledge_Propagation_Across_Classes_CVPR_2021_paper.html) - Transferring conditional GANs to unseen classes - 通过知识传递,迁移预训练的conditional GAN到新类别
- CVPR-21 [Ego-Exo: Transferring Visual Representations From Third-Person to First-Person Videos](https://openaccess.thecvf.com/content/CVPR2021/html/Li_Ego-Exo_Transferring_Visual_Representations_From_Third-Person_to_First-Person_Videos_CVPR_2021_paper.html) - Transfer learning from third-person to first-person video - 从第三人称视频迁移到第一人称
- [Toward Co-creative Dungeon Generation via Transfer Learning](http://arxiv.org/abs/2107.12533) - Game scene generation with transfer learning - 用迁移学习生成游戏场景
- [Transfer Learning in Electronic Health Records through Clinical Concept Embedding](https://arxiv.org/abs/2107.12919) - Transfer learning in electronic health records - 迁移学习用于医疗记录管理
- CVPR-21 [Conditional Bures Metric for Domain Adaptation](https://openaccess.thecvf.com/content/CVPR2021/html/Luo_Conditional_Bures_Metric_for_Domain_Adaptation_CVPR_2021_paper.html) - A new metric for domain adaptation - 提出一个新的metric用于domain adaptation
- CVPR-21 [Wasserstein Barycenter for Multi-Source Domain Adaptation](https://openaccess.thecvf.com/content/CVPR2021/html/Montesuma_Wasserstein_Barycenter_for_Multi-Source_Domain_Adaptation_CVPR_2021_paper.html) - Using the Wasserstein barycenter for multi-source domain adaptation - 利用Wasserstein Barycenter进行DA
- CVPR-21 [Generalized Domain Adaptation](https://openaccess.thecvf.com/content/CVPR2021/html/Mitsuzumi_Generalized_Domain_Adaptation_CVPR_2021_paper.html) - A general definition for domain adaptation - 一个更抽象更一般的domain adaptation定义
- CVPR-21 [Reducing Domain Gap by Reducing Style Bias](https://openaccess.thecvf.com/content/CVPR2021/html/Nam_Reducing_Domain_Gap_by_Reducing_Style_Bias_CVPR_2021_paper.html) - Style-invariant training for adaptation and generalization - 通过训练图像对style无法辨别来进行DA和DG
- CVPR-21 [Uncertainty-Guided Model Generalization to Unseen Domains](https://openaccess.thecvf.com/content/CVPR2021/html/Qiao_Uncertainty-Guided_Model_Generalization_to_Unseen_Domains_CVPR_2021_paper.html) - Uncertainty-guided generalization - 基于不确定性的domain generalization
- CVPR-21 [Adaptive Methods for Real-World Domain Generalization](https://openaccess.thecvf.com/content/CVPR2021/html/Dubey_Adaptive_Methods_for_Real-World_Domain_Generalization_CVPR_2021_paper.html) - Adaptive methods for domain generalization - 动态算法,用于domain generalization
- 20210716 ICML-21 [Continual Learning in the Teacher-Student Setup: Impact of Task Similarity](https://arxiv.org/abs/2107.04384) - Investigating task similarity in teacher-student learning - 调研在continual learning下teacher-student learning问题的任务相似度
- 20210716 BMVC-extend [Exploring Dropout Discriminator for Domain Adaptation](https://arxiv.org/abs/2107.04231) - Using multiple discriminators for domain adaptation - 用分布估计代替点估计来做domain adaptation
- 20210716 TPAMI-21 [Lifelong Teacher-Student Network Learning](https://arxiv.org/abs/2107.04689) - Lifelong distillation - 持续的知识蒸馏
- 20210716 MICCAI-21 [Few-Shot Domain Adaptation with Polymorphic Transformers](https://arxiv.org/abs/2107.04805) - Few-shot domain adaptation with polymorphic transformers - 用多模态transformer做少样本的domain adaptation
- 20210716 InterSpeech-21 [Speech2Video: Cross-Modal Distillation for Speech to Video Generation](https://arxiv.org/abs/2107.04806) - Cross-modal distillation for video generation - 跨模态蒸馏用于语音到video的生成
- 20210716 ICML-21 workshop [Leveraging Domain Adaptation for Low-Resource Geospatial Machine Learning](https://arxiv.org/abs/2107.04983) - Using domain adaptation for geospatial ML - 用domain adaptation进行地理空间的机器学习
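Several of the papers above (e.g. the MMD-for-covariate-shift and AdaRNN entries) build on maximum mean discrepancy (MMD), a kernel two-sample statistic that measures how far apart the source and target feature distributions are. As a rough illustration of the idea, here is a minimal NumPy sketch of the biased squared-MMD estimator with an RBF kernel; the function names, sample sizes, and bandwidth are illustrative, not taken from any paper above:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y.
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * d2)

def mmd2_biased(X, Y, gamma=1.0):
    # Biased estimate of squared MMD between samples X (source) and Y (target):
    # mean within-source kernel + mean within-target kernel - 2 * cross term.
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 2))
target_same = rng.normal(0.0, 1.0, size=(200, 2))     # same distribution
target_shift = rng.normal(3.0, 1.0, size=(200, 2))    # mean-shifted domain

same = mmd2_biased(source, target_same)
shifted = mmd2_biased(source, target_shift)
print(same, shifted)  # the shifted domain yields a larger discrepancy
```

Deep adaptation methods typically use such a discrepancy as a training loss between source and target feature batches, rather than as a one-off test.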

1. Introduction and Tutorials (简介与教程)

Want to quickly get started with transfer learning? See the tutorials below. (想尽快入门迁移学习?看下面的教程。)


2. Transfer Learning Areas and Papers (研究领域与相关论文)


3. Theory and Survey (理论与综述)

Here are some articles on transfer learning theory, along with surveys.

Survey (综述文章):

Theory (理论文章):


4. Code (代码)

Unified codebases for:

More: see HERE and HERE for an instant run using Google Colab.
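Much of the code collected here revolves around finetuning a pretrained model: keep the backbone frozen, attach a fresh task head, and train only the head on target data (the "linear probe" flavor of finetuning). A minimal, framework-free sketch of that idea, with synthetic features standing in for a real frozen backbone (all names and numbers are illustrative, not from any specific codebase listed here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: in practice these would be
# penultimate-layer activations extracted from a pretrained network.
features = rng.normal(size=(300, 16))
true_w = rng.normal(size=16)
labels = (features @ true_w > 0).astype(float)  # synthetic target-task labels

# Train only a new linear head (logistic regression) on the frozen features.
w = np.zeros(16)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features @ w)))           # sigmoid predictions
    w -= 0.1 * features.T @ (p - labels) / len(labels)  # gradient step

acc = float(((features @ w > 0) == (labels > 0.5)).mean())
print(f"linear-probe accuracy: {acc:.2f}")
```

Full finetuning differs only in that the backbone parameters are also updated, usually with a smaller learning rate than the new head.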


5. Transfer Learning Scholars (著名学者)

Here are some transfer learning scholars and labs.

The full list, along with representative works, is available here.

Please note that this list is far from complete. Transfer learning is an active field; if you know of scholars who should be included, please add them here.


6. Transfer Learning Thesis (硕博士论文)

Here are some popular theses on transfer learning.

Available here (extraction code: txyz).


7. Datasets and Benchmarks (数据集与评测结果)

Please see HERE for the popular transfer learning datasets and benchmark results.

Commonly used public datasets, together with published experimental results on them, are collected there.


8. Transfer Learning Challenges (迁移学习比赛)


Journals and Conferences

See here for a full list of related journals and conferences.


Applications (迁移学习应用)

See HERE for transfer learning applications.



Other Resources (其他资源)


Contributing (欢迎参与贡献)

If you are interested in contributing, please refer to HERE for contribution instructions.


[Notes] This GitHub repo can be used under the corresponding licenses. I want to emphasize that it may contain some PDFs or theses, which I downloaded and which may only be used for academic purposes. The copyrights of these materials are owned by the corresponding publishers or organizations. All of this is intended to support academic research. If any authors or publishers have concerns, please contact me to delete or replace the materials.