Gang Niu (Senior Research Scientist at RIKEN)


Publications


[ Books, Preprints, Conference Papers, Journal Articles, Theses ]

An asterisk (*) beside authors' names indicates equal contributions.


Books

  1. M. Sugiyama, H. Bao, T. Ishida, N. Lu, T. Sakai, and G. Niu.
    Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach,
    320 pages, Adaptive Computation and Machine Learning series, The MIT Press, 2022.
    (My name is missing from the author list at all retailers due to a system issue with the distributor Penguin Random House:
    its information system can store at most 5 authors in the metadata feeds that retailers receive.)

  2. K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato (Eds.).
    Proceedings of 39th International Conference on Machine Learning (ICML 2022),
    27723 pages, Proceedings of Machine Learning Research, vol. 162, 2022.


Preprints (no review)

  1. R. Gao, F. Liu, K. Zhou, G. Niu, B. Han, and J. Cheng.
    Local reweighting for adversarial training.
    [ arXiv ]

  2. Y. Cao, L. Feng, S. Shu, Y. Xu, B. An, G. Niu, and M. Sugiyama.
    Multi-class classification from single-class data with confidences.
    [ arXiv ]

  3. X. Xia, T. Liu, B. Han, M. Gong, J. Yu, G. Niu, and M. Sugiyama.
    Instance correction for learning with open-set noisy labels.
    [ arXiv ]

  4. C. Chen*, J. Zhang*, X. Xu, T. Hu, G. Niu, G. Chen, and M. Sugiyama.
    Guided interpolation for adversarial training.
    [ arXiv ]

  5. J. Zhu*, J. Zhang*, B. Han, T. Liu, G. Niu, H. Yang, M. Kankanhalli, and M. Sugiyama.
    Understanding the interaction of adversarial training with noisy labels.
    [ arXiv ]

  6. S. Wu*, X. Xia*, T. Liu, B. Han, M. Gong, N. Wang, H. Liu, and G. Niu.
    Multi-class classification from noisy-similarity-labeled data.
    [ arXiv ]

  7. J. Zhang*, B. Han*, G. Niu, T. Liu, and M. Sugiyama.
    Where is the bottleneck of adversarial learning with unlabeled data?
    [ arXiv ]

  8. F. Liu, J. Lu, B. Han, G. Niu, G. Zhang, and M. Sugiyama.
    Butterfly: A panacea for all difficulties in wildly unsupervised domain adaptation.
    [ arXiv ]

  9. C.-Y. Hsieh, M. Xu, G. Niu, H.-T. Lin, and M. Sugiyama.
    A pseudo-label method for coarse-to-fine multi-label learning with limited supervision.
    [ OpenReview ]

  10. M. Xu, B. Li, G. Niu, B. Han, and M. Sugiyama.
    Revisiting sample selection approach to positive-unlabeled learning: Turning unlabeled data into positive rather than negative.
    [ arXiv ]

  11. M. Xu, G. Niu, B. Han, I. W. Tsang, Z.-H. Zhou, and M. Sugiyama.
    Matrix co-completion for multi-label classification with missing features and labels.
    [ arXiv ]


Conference Papers (full review)

  1. J. Xu, Y. Ren, X. Wang, L. Feng, Z. Zhang, G. Niu, and X. Zhu.
    Investigating and mitigating the side effects of noisy views for self-supervised clustering algorithms in practical multi-view scenarios.
    In Proceedings of 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024), to appear.
    [ paper ]

  2. A. Wuerkaixi, S. Cui, J. Zhang, K. Yan, B. Han, G. Niu, L. Fang, C. Zhang, and M. Sugiyama.
    Accurate forgetting for heterogeneous federated continual learning.
    In Proceedings of 12th International Conference on Learning Representations (ICLR 2024), 19 pages, Vienna, Austria, May 7--11, 2024.
    [ paper, OpenReview ]

  3. S. Chen, G. Niu, C. Gong, O. Koc, J. Yang, and M. Sugiyama.
    Robust similarity learning with difference alignment regularization.
    In Proceedings of 12th International Conference on Learning Representations (ICLR 2024), 22 pages, Vienna, Austria, May 7--11, 2024.
    [ paper, OpenReview ]

  4. T. Fang, N. Lu, G. Niu, and M. Sugiyama.
    Generalizing importance weighting to a universal solver for distribution shift problems.
    In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), pp. 24171--24190, New Orleans, Louisiana, USA, Dec 10--16, 2023.
    (This paper was selected for spotlight presentation; spotlights : acceptance : submissions = 378 : 3218 : 12343)
    [ paper, OpenReview ]

  5. J. Zhu, G. Yu, J. Yao, T. Liu, G. Niu, M. Sugiyama, and B. Han.
    Diversified outlier exposure for out-of-distribution detection via informative extrapolation.
    In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), pp. 22702--22734, New Orleans, Louisiana, USA, Dec 10--16, 2023.
    [ paper, OpenReview ]

  6. W. Wang, L. Feng, Y. Jiang, G. Niu, M.-L. Zhang, and M. Sugiyama.
    Binary classification with confidence difference.
    In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), pp. 5936--5960, New Orleans, Louisiana, USA, Dec 10--16, 2023.
    [ paper, OpenReview ]

  7. J. Xu, S. Chen, Y. Ren, X. Shi, H.-T. Shen, G. Niu, and X. Zhu.
    Self-weighted contrastive learning among multiple views for mitigating representation degeneration.
    In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), pp. 1119--1131, New Orleans, Louisiana, USA, Dec 10--16, 2023.
    [ paper, OpenReview ]

  8. M.-K. Xie, J.-H. Xiao, H.-Z. Liu, G. Niu, M. Sugiyama, and S.-J. Huang.
    Class-distribution-aware pseudo-labeling for semi-supervised multi-label learning.
    In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), pp. 25731--25747, New Orleans, Louisiana, USA, Dec 10--16, 2023.
    [ paper, OpenReview ]

  9. P. Yang, M.-K. Xie, C.-C. Zong, L. Feng, G. Niu, M. Sugiyama, and S.-J. Huang.
    Multi-label knowledge distillation.
    In Proceedings of 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), pp. 17271--17280, Paris, France, Oct 2--6, 2023.
    [ paper ]

  10. J. Tang, S. Chen, G. Niu, M. Sugiyama, and C. Gong.
    Distribution shift matters for knowledge distillation with webly collected images.
    In Proceedings of 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), pp. 17470--17480, Paris, France, Oct 2--6, 2023.
    [ paper ]

  11. R. Dong*, F. Liu*, H. Chi, T. Liu, M. Gong, G. Niu, M. Sugiyama, and B. Han.
    Diversity-enhancing generative network for few-shot hypothesis adaptation.
    In Proceedings of 40th International Conference on Machine Learning (ICML 2023), PMLR, vol. 202, pp. 8260--8275, Honolulu, Hawaii, USA, Jul 24--30, 2023.
    [ paper ]

  12. H. Wei, H. Zhuang, R. Xie, L. Feng, G. Niu, B. An, and Y. Li.
    Mitigating memorization of noisy labels by clipping the model prediction.
    In Proceedings of 40th International Conference on Machine Learning (ICML 2023), PMLR, vol. 202, pp. 36868--36886, Honolulu, Hawaii, USA, Jul 24--30, 2023.
    [ paper ]

  13. Z. Wei, L. Feng, B. Han, T. Liu, G. Niu, X. Zhu, and H. Shen.
    A universal unbiased method for classification from aggregate observations.
    In Proceedings of 40th International Conference on Machine Learning (ICML 2023), PMLR, vol. 202, pp. 36804--36820, Honolulu, Hawaii, USA, Jul 24--30, 2023.
    [ paper ]

  14. S. Xia*, J. Lv*, N. Xu, G. Niu, and X. Geng.
    Towards effective visual representations for partial-label learning.
    In Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023), pp. 15589--15598, Vancouver, British Columbia, Canada, Jun 18--22, 2023.
    [ paper ]

  15. T. Ishida, I. Yamane, N. Charoenphakdee, G. Niu, and M. Sugiyama.
    Is the performance of my deep network too good to be true? A direct approach to estimating the Bayes error in binary classification.
    In Proceedings of 11th International Conference on Learning Representations (ICLR 2023), 22 pages, Kigali, Rwanda, May 1--5, 2023.
    (This paper was selected for oral presentation; orals : acceptance : submissions = 90 : 1579 : 4966)
    [ paper, OpenReview ]

  16. J. Zhou*, J. Zhu*, J. Zhang, T. Liu, G. Niu, B. Han, and M. Sugiyama.
    Adversarial training with complementary labels: On the benefit of gradually informative attacks.
    In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pp. 23621--23633, New Orleans, Louisiana, USA, Nov 28--Dec 9, 2022.
    [ paper, OpenReview ]

  17. S. Chen, C. Gong, J. Li, J. Yang, G. Niu, and M. Sugiyama.
    Learning contrastive embedding in low-dimensional space.
    In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pp. 6345--6357, New Orleans, Louisiana, USA, Nov 28--Dec 9, 2022.
    [ paper, OpenReview ]

  18. Y. Cao, T. Cai, L. Feng, L. Gu, J. Gu, B. An, G. Niu, and M. Sugiyama.
    Generalizing consistent multi-class classification with rejection to be compatible with arbitrary losses.
    In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pp. 521--534, New Orleans, Louisiana, USA, Nov 28--Dec 9, 2022.
    [ paper, OpenReview ]

  19. S. Yang, E. Yang, B. Han, Y. Liu, M. Xu, G. Niu, and T. Liu.
    Estimating instance-dependent Bayes-label transition matrix using a deep neural network.
    In Proceedings of 39th International Conference on Machine Learning (ICML 2022), PMLR, vol. 162, pp. 25302--25312, Baltimore, Maryland, USA, Jul 17--23, 2022.
    [ paper ]

  20. J. Wei, H. Liu, T. Liu, G. Niu, M. Sugiyama, and Y. Liu.
    To smooth or not? When label smoothing meets noisy labels.
    In Proceedings of 39th International Conference on Machine Learning (ICML 2022), PMLR, vol. 162, pp. 23589--23614, Baltimore, Maryland, USA, Jul 17--23, 2022.
    [ paper ]

  21. R. Gao, J. Wang, K. Zhou, F. Liu, B. Xie, G. Niu, B. Han, and J. Cheng.
    Fast and reliable evaluation of adversarial robustness with minimum-margin attack.
    In Proceedings of 39th International Conference on Machine Learning (ICML 2022), PMLR, vol. 162, pp. 7144--7163, Baltimore, Maryland, USA, Jul 17--23, 2022.
    [ paper ]

  22. D. Cheng, T. Liu, Y. Ning, N. Wang, B. Han, G. Niu, X. Gao, and M. Sugiyama.
    Instance-dependent label-noise learning with manifold-regularized transition matrix estimation.
    In Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), pp. 16630--16639, New Orleans, Louisiana, USA, Jun 19--24, 2022.
    [ paper ]

  23. H. Wang, R. Xiao, Y. Li, L. Feng, G. Niu, G. Chen, and J. Zhao.
    PiCO: Contrastive label disambiguation for partial label learning.
    In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), 18 pages, Online, Apr 25--29, 2022.
    (This paper was selected for oral presentation; orals : acceptance : submissions = 55 : 1094 : 3391)
    (In addition, this paper received an Outstanding Paper Honorable Mention)
    [ paper, OpenReview ]

  24. H. Chi*, F. Liu*, W. Yang, L. Lan, T. Liu, B. Han, G. Niu, M. Zhou, and M. Sugiyama.
    Meta discovery: Learning to discover novel classes given very limited data.
    In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), 20 pages, Online, Apr 25--29, 2022.
    (This paper was selected for spotlight presentation; spotlights : acceptance : submissions = 176 : 1094 : 3391)
    [ paper, OpenReview ]

  25. Y. Yao, T. Liu, B. Han, M. Gong, G. Niu, M. Sugiyama, and D. Tao.
    Rethinking class-prior estimation for positive-unlabeled learning.
    In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), 21 pages, Online, Apr 25--29, 2022.
    [ paper, OpenReview ]

  26. J. Wei, Z. Zhu, H. Cheng, T. Liu, G. Niu, and Y. Liu.
    Learning with noisy labels revisited: A study using real-world human annotations.
    In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), 23 pages, Online, Apr 25--29, 2022.
    [ CIFAR-N Dataset, paper, OpenReview ]

  27. F. Zhang, L. Feng, B. Han, T. Liu, G. Niu, T. Qin, and M. Sugiyama.
    Exploiting class activation value for partial-label learning.
    In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), 17 pages, Online, Apr 25--29, 2022.
    [ paper, OpenReview ]

  28. J. Zhu, J. Yao, B. Han, J. Zhang, T. Liu, G. Niu, J. Zhou, J. Xu, and H. Yang.
    Reliable adversarial distillation with unreliable teachers.
    In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), 15 pages, Online, Apr 25--29, 2022.
    [ paper, OpenReview ]

  29. N. Lu, Z. Wang, X. Li, G. Niu, Q. Dou, and M. Sugiyama.
    Federated learning from only unlabeled data with class-conditional-sharing clients.
    In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), 22 pages, Online, Apr 25--29, 2022.
    [ paper, OpenReview ]

  30. Y. Zhang, M. Gong, T. Liu, G. Niu, X. Tian, B. Han, B. Schölkopf, and K. Zhang.
    CausalAdv: Adversarial robustness through the lens of causality.
    In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), 20 pages, Online, Apr 25--29, 2022.
    [ paper, OpenReview ]

  31. X. Xia, T. Liu, B. Han, M. Gong, J. Yu, G. Niu, and M. Sugiyama.
    Sample selection with uncertainty of losses for learning with noisy labels.
    In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), 23 pages, Online, Apr 25--29, 2022.
    [ paper, OpenReview ]

  32. Y. Bai*, E. Yang*, B. Han, Y. Yang, J. Li, Y. Mao, G. Niu, and T. Liu.
    Understanding and improving early stopping for learning with noisy labels.
    In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp. 24392--24403, Online, Dec 6--14, 2021.
    [ paper, OpenReview ]

  33. Q. Wang*, F. Liu*, B. Han, T. Liu, C. Gong, G. Niu, M. Zhou, and M. Sugiyama.
    Probabilistic margins for instance reweighting in adversarial training.
    In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp. 23258--23269, Online, Dec 6--14, 2021.
    [ paper, OpenReview ]

  34. Y. Yao, T. Liu, M. Gong, B. Han, G. Niu, and K. Zhang.
    Instance-dependent label-noise learning under a structural causal model.
    In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp. 4409--4420, Online, Dec 6--14, 2021.
    [ paper, OpenReview ]

  35. L. Feng, S. Shu, Y. Cao, L. Tao, H. Wei, T. Xiang, B. An, and G. Niu.
    Multiple-instance learning from similar and dissimilar bags.
    In Proceedings of 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2021), pp. 374--382, Online, Aug 14--18, 2021.
    [ paper ]

  36. S. Chen, G. Niu, C. Gong, J. Li, J. Yang, and M. Sugiyama.
    Large-margin contrastive learning with distance polarization regularizer.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 1673--1683, Online, Jul 18--24, 2021.
    [ paper ]

  37. H. Yan, J. Zhang, G. Niu, J. Feng, V. Y. F. Tan, and M. Sugiyama.
    CIFS: Improving adversarial robustness of CNNs via channel-wise importance-based feature selection.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 11693--11703, Online, Jul 18--24, 2021.
    [ paper ]

  38. R. Gao*, F. Liu*, J. Zhang*, B. Han, T. Liu, G. Niu, and M. Sugiyama.
    Maximum mean discrepancy test is aware of adversarial attacks.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 3564--3575, Online, Jul 18--24, 2021.
    [ paper ]

  39. X. Li, T. Liu, B. Han, G. Niu, and M. Sugiyama.
    Provably end-to-end label-noise learning without anchor points.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 6403--6413, Online, Jul 18--24, 2021.
    [ paper ]

  40. X. Du*, J. Zhang*, B. Han, T. Liu, Y. Rong, G. Niu, J. Huang, and M. Sugiyama.
    Learning diverse-structured networks for adversarial robustness.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 2880--2891, Online, Jul 18--24, 2021.
    [ paper ]

  41. A. Berthon, B. Han, G. Niu, T. Liu, and M. Sugiyama.
    Confidence scores make instance-dependent label-noise learning possible.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 825--836, Online, Jul 18--24, 2021.
    [ paper ]

  42. Y. Zhang, G. Niu, and M. Sugiyama.
    Learning noise transition matrix from only noisy labels via total variation regularization.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 12501--12512, Online, Jul 18--24, 2021.
    [ paper ]

  43. L. Feng, S. Shu, N. Lu, B. Han, M. Xu, G. Niu, B. An, and M. Sugiyama.
    Pointwise binary classification with pairwise confidence comparisons.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 3252--3262, Online, Jul 18--24, 2021.
    [ paper ]

  44. N. Lu*, S. Lei*, G. Niu, I. Sato, and M. Sugiyama.
    Binary classification from multiple unlabeled datasets via surrogate set classification.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 7134--7144, Online, Jul 18--24, 2021.
    [ paper ]

  45. Y. Cao, L. Feng, Y. Xu, B. An, G. Niu, and M. Sugiyama.
    Learning from similarity-confidence data.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 1272--1282, Online, Jul 18--24, 2021.
    [ paper ]

  46. S. Wu*, X. Xia*, T. Liu, B. Han, M. Gong, N. Wang, H. Liu, and G. Niu.
    Class2Simi: A noise reduction perspective on learning with noisy labels.
    In Proceedings of 38th International Conference on Machine Learning (ICML 2021), PMLR, vol. 139, pp. 11285--11295, Online, Jul 18--24, 2021.
    [ paper ]

  47. J. Zhang, J. Zhu, G. Niu, B. Han, M. Sugiyama, and M. Kankanhalli.
    Geometry-aware instance-reweighted adversarial training.
    In Proceedings of 9th International Conference on Learning Representations (ICLR 2021), 29 pages, Online, May 3--7, 2021.
    (This paper was selected for oral presentation; orals : acceptance : submissions = 53 : 860 : 2997)
    [ paper, OpenReview ]

  48. A. Jacovi, G. Niu, Y. Goldberg, and M. Sugiyama.
    Scalable evaluation and improvement of document set expansion via neural positive-unlabeled learning.
    In Proceedings of 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021), pp. 581--592, Online, Apr 19--23, 2021.
    [ paper ]

  49. Q. Wang, B. Han, T. Liu, G. Niu, J. Yang, and C. Gong.
    Tackling instance-dependent label noise via a universal probabilistic model.
    In Proceedings of 35th AAAI Conference on Artificial Intelligence (AAAI 2021), pp. 10183--10191, Online, Feb 2--9, 2021.
    [ paper ]

  50. X. Xia, T. Liu, B. Han, N. Wang, M. Gong, H. Liu, G. Niu, D. Tao, and M. Sugiyama.
    Part-dependent label noise: Towards instance-dependent label noise.
    In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pp. 7597--7610, Online, Dec 6--12, 2020.
    (This paper was selected for spotlight presentation; spotlights : acceptance : submissions = 280 : 1900 : 9454)
    [ paper ]

  51. T. Fang*, N. Lu*, G. Niu, and M. Sugiyama.
    Rethinking importance weighting for deep learning under distribution shift.
    In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pp. 11996--12007, Online, Dec 6--12, 2020.
    (This paper was selected for spotlight presentation; spotlights : acceptance : submissions = 280 : 1900 : 9454)
    [ paper ]

  52. Y. Yao, T. Liu, B. Han, M. Gong, J. Deng, G. Niu, and M. Sugiyama.
    Dual T: Reducing estimation error for transition matrix in label-noise learning.
    In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pp. 7260--7271, Online, Dec 6--12, 2020.
    [ paper ]

  53. L. Feng, J. Lv, B. Han, M. Xu, G. Niu, X. Geng, B. An, and M. Sugiyama.
    Provably consistent partial-label learning.
    In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pp. 10948--10960, Online, Dec 6--12, 2020.
    [ paper ]

  54. C. Wang, B. Han, S. Pan, J. Jiang, G. Niu, and G. Long.
    Cross-graph: Robust and unsupervised embedding for attributed graphs with corrupted structure.
    In Proceedings of 20th IEEE International Conference on Data Mining (ICDM 2020), pp. 571--580, Online, Nov 17--20, 2020.
    [ paper ]

  55. J. Zhang*, X. Xu*, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli.
    Attacks which do not kill training make adversarial learning stronger.
    In Proceedings of 37th International Conference on Machine Learning (ICML 2020), PMLR, vol. 119, pp. 11278--11287, Online, Jul 12--18, 2020.
    [ paper ]

  56. B. Han, G. Niu, X. Yu, Q. Yao, M. Xu, I. W. Tsang, and M. Sugiyama.
    SIGUA: Forgetting may make learning with noisy labels more robust.
    In Proceedings of 37th International Conference on Machine Learning (ICML 2020), PMLR, vol. 119, pp. 4006--4016, Online, Jul 12--18, 2020.
    [ paper ]

  57. L. Feng*, T. Kaneko*, B. Han, G. Niu, B. An, and M. Sugiyama.
    Learning with multiple complementary labels.
    In Proceedings of 37th International Conference on Machine Learning (ICML 2020), PMLR, vol. 119, pp. 3072--3081, Online, Jul 12--18, 2020.
    [ paper ]

  58. Y.-T. Chou, G. Niu, H.-T. Lin, and M. Sugiyama.
    Unbiased risk estimators can mislead: A case study of learning with complementary labels.
    In Proceedings of 37th International Conference on Machine Learning (ICML 2020), PMLR, vol. 119, pp. 1929--1938, Online, Jul 12--18, 2020.
    [ paper ]

  59. Q. Yao, H. Yang, B. Han, G. Niu, and J. T. Kwok.
    Searching to exploit memorization effect in learning with noisy labels.
    In Proceedings of 37th International Conference on Machine Learning (ICML 2020), PMLR, vol. 119, pp. 10789--10798, Online, Jul 12--18, 2020.
    [ paper ]

  60. J. Lv, M. Xu, L. Feng, G. Niu, X. Geng, and M. Sugiyama.
    Progressive identification of true labels for partial-label learning.
    In Proceedings of 37th International Conference on Machine Learning (ICML 2020), PMLR, vol. 119, pp. 6500--6510, Online, Jul 12--18, 2020.
    [ paper ]

  61. T. Ishida, I. Yamane, T. Sakai, G. Niu, and M. Sugiyama.
    Do we need zero training loss after achieving zero training error?
    In Proceedings of 37th International Conference on Machine Learning (ICML 2020), PMLR, vol. 119, pp. 4604--4614, Online, Jul 12--18, 2020.
    [ paper ]

  62. N. Lu, T. Zhang, G. Niu, and M. Sugiyama.
    Mitigating overfitting in supervised classification from two unlabeled datasets: A consistent risk correction approach.
    In Proceedings of 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), PMLR, vol. 108, pp. 1115--1125, Online, Aug 26--28, 2020.
    [ paper ]

  63. C. Li, M. E. Khan, Z. Sun, G. Niu, B. Han, S. Xie, and Q. Zhao.
    Beyond unfolding: Exact recovery of latent convex tensor decomposition under reshuffling.
    In Proceedings of 34th AAAI Conference on Artificial Intelligence (AAAI 2020), pp. 4602--4609, New York, New York, USA, Feb 7--12, 2020.
    [ paper ]

  64. L. Xu, J. Honda, G. Niu, and M. Sugiyama.
    Uncoupled regression from pairwise comparison data.
    In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pp. 3992--4002, Vancouver, British Columbia, Canada, Dec 8--14, 2019.
    [ paper ]

  65. X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama.
    Are anchor points really indispensable in label-noise learning?
    In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pp. 6838--6849, Vancouver, British Columbia, Canada, Dec 8--14, 2019.
    [ paper ]

  66. Y.-G. Hsieh, G. Niu, and M. Sugiyama.
    Classification from positive, unlabeled and biased negative data.
    In Proceedings of 36th International Conference on Machine Learning (ICML 2019), PMLR, vol. 97, pp. 2820--2829, Long Beach, California, USA, Jun 9--15, 2019.
    [ paper ]

  67. T. Ishida, G. Niu, A. K. Menon, and M. Sugiyama.
    Complementary-label learning for arbitrary losses and models.
    In Proceedings of 36th International Conference on Machine Learning (ICML 2019), PMLR, vol. 97, pp. 2971--2980, Long Beach, California, USA, Jun 9--15, 2019.
    [ paper ]

  68. X. Yu, B. Han, J. Yao, G. Niu, I. W. Tsang, and M. Sugiyama.
    How does disagreement help generalization against label corruption?
    In Proceedings of 36th International Conference on Machine Learning (ICML 2019), PMLR, vol. 97, pp. 7164--7173, Long Beach, California, USA, Jun 9--15, 2019.
    [ paper ]

  69. N. Lu, G. Niu, A. K. Menon, and M. Sugiyama.
    On the minimal supervision for training any binary classifier from only unlabeled data.
    In Proceedings of 7th International Conference on Learning Representations (ICLR 2019), 18 pages, New Orleans, Louisiana, USA, May 6--9, 2019.
    [ paper, OpenReview ]

  70. T. Ishida, G. Niu, and M. Sugiyama.
    Binary classification from positive-confidence data.
    In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), pp. 5917--5928, Montreal, Quebec, Canada, Dec 2--8, 2018.
    (This paper was selected for spotlight presentation; spotlights : acceptance : submissions = 168 : 1011 : 4856)
    [ paper ]

  71. B. Han*, J. Yao*, G. Niu, M. Zhou, I. W. Tsang, Y. Zhang, and M. Sugiyama.
    Masking: A new perspective of noisy supervision.
    In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), pp. 5836--5846, Montreal, Quebec, Canada, Dec 2--8, 2018.
    [ paper ]

  72. B. Han*, Q. Yao*, X. Yu, G. Niu, M. Xu, W. Hu, I. W. Tsang, and M. Sugiyama.
    Co-teaching: Robust training of deep neural networks with extremely noisy labels.
    In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), pp. 8527--8537, Montreal, Quebec, Canada, Dec 2--8, 2018.
    [ paper ]

  73. S.-J. Huang, M. Xu, M.-K. Xie, M. Sugiyama, G. Niu, and S. Chen.
    Active feature acquisition with supervised matrix completion.
    In Proceedings of 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2018), pp. 1571--1579, London, UK, Aug 19--23, 2018.
    [ paper ]

  74. W. Hu, G. Niu, I. Sato, and M. Sugiyama.
    Does distributionally robust supervised learning give robust classifiers?
    In Proceedings of 35th International Conference on Machine Learning (ICML 2018), PMLR, vol. 80, pp. 2029--2037, Stockholm, Sweden, Jul 10--15, 2018.
    [ paper ]

  75. H. Bao, G. Niu, and M. Sugiyama.
    Classification from pairwise similarity and unlabeled data.
    In Proceedings of 35th International Conference on Machine Learning (ICML 2018), PMLR, vol. 80, pp. 452--461, Stockholm, Sweden, Jul 10--15, 2018.
    [ paper ]

  76. R. Kiryo, G. Niu, M. C. du Plessis, and M. Sugiyama.
    Positive-unlabeled learning with non-negative risk estimator.
    In Advances in Neural Information Processing Systems 30 (NeurIPS 2017), pp. 1674--1684, Long Beach, California, USA, Dec 4--9, 2017.
    (This paper was selected for oral presentation; orals : acceptance : submissions = 40 : 678 : 3240)
    [ paper ]

  77. T. Ishida, G. Niu, W. Hu, and M. Sugiyama.
    Learning from complementary labels.
    In Advances in Neural Information Processing Systems 30 (NeurIPS 2017), pp. 5644--5654, Long Beach, California, USA, Dec 4--9, 2017.
    [ paper ]

  78. H. Shiino, H. Sasaki, G. Niu, and M. Sugiyama.
    Whitening-free least-squares non-Gaussian component analysis.
    In Proceedings of 9th Asian Conference on Machine Learning (ACML 2017), PMLR, vol. 77, pp. 375--390, Seoul, Korea, Nov 15--17, 2017.
    (This paper received the Best Paper Runner-up Award)
    [ paper ]

  79. T. Sakai, M. C. du Plessis, G. Niu, and M. Sugiyama.
    Semi-supervised classification based on classification from positive and unlabeled data.
    In Proceedings of 34th International Conference on Machine Learning (ICML 2017), PMLR, vol. 70, pp. 2998--3006, Sydney, Australia, Aug 6--11, 2017.
    [ paper ]

  80. G. Niu, M. C. du Plessis, T. Sakai, Y. Ma, and M. Sugiyama.
    Theoretical comparisons of positive-unlabeled learning against positive-negative learning.
    In Advances in Neural Information Processing Systems 29 (NeurIPS 2016), pp. 1199--1207, Barcelona, Spain, Dec 5--10, 2016.
    [ paper ]

  81. H. Sasaki, G. Niu, and M. Sugiyama.
    Non-Gaussian component analysis with log-density gradient estimation.
    In Proceedings of 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016), PMLR, vol. 51, pp. 1177--1185, Cadiz, Spain, May 9--11, 2016.
    [ paper ]

  82. T. Zhao, G. Niu, N. Xie, J. Yang, and M. Sugiyama.
    Regularized policy gradients: Direct variance reduction in policy gradient estimation.
    In Proceedings of 7th Asian Conference on Machine Learning (ACML 2015), PMLR, vol. 45, pp. 333--348, Hong Kong, China, Nov 20--22, 2015.
    [ paper ]

  83. M. C. du Plessis, G. Niu, and M. Sugiyama.
    Class-prior estimation for learning from positive and unlabeled data.
    In Proceedings of 7th Asian Conference on Machine Learning (ACML 2015), PMLR, vol. 45, pp. 221--236, Hong Kong, China, Nov 20--22, 2015.
    [ paper ]

  84. M. C. du Plessis, G. Niu, and M. Sugiyama.
    Convex formulation for learning from positive and unlabeled data.
    In Proceedings of 32nd International Conference on Machine Learning (ICML 2015), PMLR, vol. 37, pp. 1386--1394, Lille, France, Jul 6--11, 2015.
    [ paper ]

  85. M. C. du Plessis, G. Niu, and M. Sugiyama.
    Analysis of learning from positive and unlabeled data.
    In Advances in Neural Information Processing Systems 27 (NeurIPS 2014), pp. 703--711, Montreal, Quebec, Canada, Dec 8--13, 2014.
    [ paper ]

  86. G. Niu, B. Dai, M. C. du Plessis, and M. Sugiyama.
    Transductive learning with multi-class volume approximation.
    In Proceedings of 31st International Conference on Machine Learning (ICML 2014), PMLR, vol. 32, no. 2, pp. 1377--1385, Beijing, China, Jun 21--26, 2014.
    [ paper ]

  87. M. C. du Plessis, G. Niu, and M. Sugiyama.
    Clustering unclustered data: Unsupervised binary labeling of two datasets having different class balances.
    In Proceedings of the 2013 Conference on Technologies and Applications of Artificial Intelligence (TAAI 2013), pp. 1--6, Taipei, Taiwan, Dec 6--8, 2013.
    (This paper received the Best Paper Award)
    [ paper ]

  88. G. Niu, W. Jitkrittum, B. Dai, H. Hachiya, and M. Sugiyama.
    Squared-loss mutual information regularization: A novel information-theoretic approach to semi-supervised learning.
    In Proceedings of 30th International Conference on Machine Learning (ICML 2013), PMLR, vol. 28, no. 3, pp. 10--18, Atlanta, Georgia, USA, Jun 16--21, 2013.
    [ paper ]

  89. G. Niu, B. Dai, M. Yamada, and M. Sugiyama.
    Information-theoretic semi-supervised metric learning via entropy regularization.
    In Proceedings of 29th International Conference on Machine Learning (ICML 2012), pp. 89--96, Edinburgh, Scotland, Jun 26--Jul 1, 2012.
    [ paper ]

  90. T. Zhao, H. Hachiya, G. Niu, and M. Sugiyama.
    Analysis and improvement of policy gradient estimation.
    In Advances in Neural Information Processing Systems 24 (NeurIPS 2011), pp. 262--270, Granada, Spain, Dec 12--17, 2011.
    [ paper ]

  91. M. Yamada, G. Niu, J. Takagi, and M. Sugiyama.
    Computationally efficient sufficient dimension reduction via squared-loss mutual information.
    In Proceedings of 3rd Asian Conference on Machine Learning (ACML 2011), PMLR, vol. 20, pp. 247--262, Taoyuan, Taiwan, Nov 13--15, 2011.
    [ paper ]

  92. G. Niu, B. Dai, L. Shang, and M. Sugiyama.
    Maximum volume clustering.
    In Proceedings of 14th International Conference on Artificial Intelligence and Statistics (AISTATS 2011), PMLR, vol. 15, pp. 561--569, Fort Lauderdale, Florida, USA, Apr 11--13, 2011.
    [ paper ]

  93. B. Dai, B. Hu, and G. Niu.
    Bayesian maximum margin clustering.
    In Proceedings of 10th IEEE International Conference on Data Mining (ICDM 2010), pp. 108--117, Sydney, Australia, Dec 14--17, 2010.
    [ paper ]

  94. G. Niu, B. Dai, Y. Ji, and L. Shang.
    Rough margin based core vector machine.
    In Proceedings of 14th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2010), LNCS, vol. 6118, pp. 134--141, Hyderabad, India, Jun 21--24, 2010.
    [ paper ]

  95. B. Dai and G. Niu.
    Compact margin machine.
    In Proceedings of 14th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2010), LNCS, vol. 6119, pp. 507--514, Hyderabad, India, Jun 21--24, 2010.
    [ paper ]


Journal Articles

  1. Y. Gao, D. Wu, J. Zhang, G. Gan, X.-T. Xia, G. Niu, and M. Sugiyama.
    On the effectiveness of adversarial training against backdoor attacks.
    IEEE Transactions on Neural Networks and Learning Systems, to appear.
    [ link ]

  2. J. Lv, B. Liu, L. Feng, N. Xu, M. Xu, B. An, G. Niu, X. Geng, and M. Sugiyama.
    On the robustness of average losses for partial-label learning.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 5, pp. 2569--2583, 2024.
    [ link ]

  3. H. Wang, R. Xiao, Y. Li, L. Feng, G. Niu, G. Chen, and J. Zhao.
    PiCO+: Contrastive label disambiguation for robust partial label learning.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 5, pp. 3183--3198, 2024.
    [ link ]

  4. S. Chen, C. Gong, X. Li, J. Yang, G. Niu, and M. Sugiyama.
    Boundary-restricted metric learning.
    Machine Learning, vol. 112, no. 12, pp. 4723--4762, 2023.
    [ link ]

  5. S. Yang, S. Wu, E. Yang, B. Han, Y. Liu, M. Xu, G. Niu, and T. Liu.
    A parametrical model for instance-dependent label noise.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 12, pp. 14055--14068, 2023.
    [ link ]

  6. L. Feng, S. Shu, Y. Cao, L. Tao, H. Wei, T. Xiang, B. An, and G. Niu.
    Multiple-instance learning from unlabeled bags with pairwise similarity.
    IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 11, pp. 11599--11609, 2023.
    [ link ]

  7. T. Zhao, S. Wu, G. Li, Y. Chen, G. Niu, and M. Sugiyama.
    Learning intention-aware policies in deep reinforcement learning.
    Neural Computation, vol. 35, no. 10, pp. 1657--1677, 2023.
    [ link ]

  8. C. Gong, Y. Ding, B. Han, G. Niu, J. Yang, J. You, D. Tao, and M. Sugiyama.
    Class-wise denoising for robust learning under label noise.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 2835--2848, 2023.
    [ link ]

  9. T. Zhao, Y. Wang, W. Sun, Y. Chen, G. Niu, and M. Sugiyama.
    Representation learning for continuous action spaces is beneficial for efficient policy learning.
    Neural Networks, vol. 159, pp. 137--152, 2023.
    [ link ]

  10. S. Wu, T. Liu, B. Han, J. Yu, G. Niu, and M. Sugiyama.
    Learning from noisy pairwise similarity and unlabeled data.
    Journal of Machine Learning Research, vol. 23, no. 307, pp. 1--34, 2022.
    [ link ]

  11. Z. Wang, J. Jiang, B. Han, L. Feng, B. An, G. Niu, and G. Long.
    SemiNLL: A framework of noisy-label learning by semi-supervised learning.
    Transactions on Machine Learning Research, 07/2022, 25 pages, 2022.
    [ link ]

  12. J. Zhang*, X. Xu*, B. Han, T. Liu, L. Cui, G. Niu, and M. Sugiyama.
    NoiLIn: Improving adversarial training and correcting stereotype of noisy labels.
    Transactions on Machine Learning Research, 06/2022, 25 pages, 2022.
    [ link ]

  13. Y. Pan, I. W. Tsang, W. Chen, G. Niu, and M. Sugiyama.
    Fast and robust rank aggregation against model misspecification.
    Journal of Machine Learning Research, vol. 23, no. 23, pp. 1--35, 2022.
    [ link ]

  14. W. Xu, G. Niu, A. Hyvärinen, and M. Sugiyama.
    Direction matters: On influence-preserving graph summarization and max-cut principle for directed graphs.
    Neural Computation, vol. 33, no. 8, pp. 2128--2162, 2021.
    [ link ]

  15. T. Sakai, G. Niu, and M. Sugiyama.
    Information-theoretic representation learning for positive-unlabeled classification.
    Neural Computation, vol. 33, no. 1, pp. 244--268, 2021.
    [ link ]

  16. H. Sasaki, T. Kanamori, A. Hyvärinen, G. Niu, and M. Sugiyama.
    Mode-seeking clustering and density ridge estimation via direct estimation of density-derivative-ratios.
    Journal of Machine Learning Research, vol. 18, no. 180, pp. 1--45, 2018.
    [ link ]

  17. T. Sakai, G. Niu, and M. Sugiyama.
    Semi-supervised AUC optimization based on positive-unlabeled learning.
    Machine Learning, vol. 107, no. 4, pp. 767--794, 2018.
    [ link ]

  18. H. Sasaki, V. Tangkaratt, G. Niu, and M. Sugiyama.
    Sufficient dimension reduction via direct estimation of the gradients of logarithmic conditional densities.
    Neural Computation, vol. 30, no. 2, pp. 477--504, 2018.
    [ link ]

  19. M. C. du Plessis*, G. Niu*, and M. Sugiyama.
    Class-prior estimation for learning from positive and unlabeled data.
    Machine Learning, vol. 106, no. 4, pp. 463--492, 2017.
    [ link ]

  20. H. Sasaki, Y.-K. Noh, G. Niu, and M. Sugiyama.
    Direct density-derivative estimation.
    Neural Computation, vol. 28, no. 6, pp. 1101--1140, 2016.
    [ link ]

  21. G. Niu, B. Dai, M. Yamada, and M. Sugiyama.
    Information-theoretic semi-supervised metric learning via entropy regularization.
    Neural Computation, vol. 26, no. 8, pp. 1717--1762, 2014.
    [ link ]

  22. D. Calandriello, G. Niu, and M. Sugiyama.
    Semi-supervised information-maximization clustering.
    Neural Networks, vol. 57, pp. 103--111, 2014.
    [ link ]

  23. M. Sugiyama, G. Niu, M. Yamada, M. Kimura, and H. Hachiya.
    Information-maximization clustering based on squared-loss mutual information.
    Neural Computation, vol. 26, no. 1, pp. 84--131, 2014.
    [ link ]

  24. G. Niu, B. Dai, L. Shang, and M. Sugiyama.
    Maximum volume clustering: A new discriminative clustering approach.
    Journal of Machine Learning Research, vol. 14 (Sep), pp. 2641--2687, 2013.
    [ link ]

  25. T. Zhao, H. Hachiya, G. Niu, and M. Sugiyama.
    Analysis and improvement of policy gradient estimation.
    Neural Networks, vol. 26, pp. 118--129, 2012.
    [ link ]

  26. Y. Ji, J. Chen, G. Niu, L. Shang, and X. Dai.
    Transfer learning via multi-view principal component analysis.
    Journal of Computer Science and Technology, vol. 26, no. 1, pp. 81--98, 2011.
    [ link ]


Theses

  1. Gang Niu.
    Discriminative methods with imperfect supervision in machine learning (204 pages).
    Doctoral Thesis, Department of Computer Science, Tokyo Institute of Technology, Sep 2013.

  2. Gang Niu.
    Support vector learning based on rough set modeling (71 pages in Chinese).
    Master's Thesis, Department of Computer Science and Technology, Nanjing University, May 2010.