Gang Niu (Research Scientist at RIKEN-AIP)


Publications


[ Conference Papers, Journal Articles, Workshop Presentations, Theses ]


Preprints

  • J. Zhang, B. Han, G. Niu, T. Liu, and M. Sugiyama.
    Where is the bottleneck of adversarial learning with unlabeled data?
    [ arXiv ]

  • A. Jacovi, G. Niu, Y. Goldberg, and M. Sugiyama.
    Scalable evaluation and improvement of document set expansion via neural positive-unlabeled learning.
    [ arXiv ]

  • N. Lu, T. Zhang, G. Niu, and M. Sugiyama.
    Mitigating overfitting in supervised classification from two unlabeled datasets: A consistent risk correction approach.
    [ arXiv ]

  • W. Xu, G. Niu, A. Hyvärinen, and M. Sugiyama.
    Direction matters: On influence-preserving graph summarization and max-cut principle for directed graphs.
    [ arXiv ]

  • Y. Pan, W. Chen, G. Niu, I. W. Tsang, and M. Sugiyama.
    Fast and robust rank aggregation against model misspecification.
    [ arXiv ]

  • F. Liu, J. Lu, B. Han, G. Niu, G. Zhang, and M. Sugiyama.
    Butterfly: A panacea for all difficulties in wildly unsupervised domain adaptation.
    [ arXiv ]

  • C.-Y. Hsieh, M. Xu, G. Niu, H.-T. Lin, and M. Sugiyama.
    A pseudo-label method for coarse-to-fine multi-label learning with limited supervision.
    [ OpenReview ]

  • M. Xu, B. Li, G. Niu, B. Han, and M. Sugiyama.
    Revisiting sample selection approach to positive-unlabeled learning: Turning unlabeled data into positive rather than negative.
    [ arXiv ]

  • B. Han, G. Niu, J. Yao, X. Yu, M. Xu, Q. Yao, I. W. Tsang, and M. Sugiyama.
    Pumpout: A meta approach to robust deep learning with noisy labels.
    [ arXiv ]

  • M. Kato, L. Xu, G. Niu, and M. Sugiyama.
    Alternate estimation of a classifier and the class-prior from positive and unlabeled data.
    [ arXiv ]

  • M. Xu, G. Niu, B. Han, I. W. Tsang, Z.-H. Zhou, and M. Sugiyama.
    Matrix co-completion for multi-label classification with missing features and labels.
    [ arXiv ]

  • T. Sakai, G. Niu, and M. Sugiyama.
    Information-theoretic representation learning for positive-unlabeled classification.
    [ arXiv ]


Conference Papers (full review)

  1. C. Li, M. E. Khan, Z. Sun, G. Niu, B. Han, S. Xie, and Q. Zhao.
    Beyond unfolding: Exact recovery of latent convex tensor decomposition under reshuffling.
    In Proceedings of 34th AAAI Conference on Artificial Intelligence (AAAI'20), to appear.
    [ paper ]

  2. L. Xu, J. Honda, G. Niu, and M. Sugiyama.
    Uncoupled regression from pairwise comparison data.
    In Advances in Neural Information Processing Systems 32 (NeurIPS'19), to appear.
    [ paper ]

  3. X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama.
    Are anchor points really indispensable in label-noise learning?
    In Advances in Neural Information Processing Systems 32 (NeurIPS'19), to appear.
    [ paper ]

  4. Y.-G. Hsieh, G. Niu, and M. Sugiyama.
    Classification from positive, unlabeled and biased negative data.
    In Proceedings of 36th International Conference on Machine Learning (ICML'19), PMLR, vol. 97, pp. 2820--2829, Long Beach, California, USA, Jun 9--15, 2019.
    [ paper ]

  5. T. Ishida, G. Niu, A. K. Menon, and M. Sugiyama.
    Complementary-label learning for arbitrary losses and models.
    In Proceedings of 36th International Conference on Machine Learning (ICML'19), PMLR, vol. 97, pp. 2971--2980, Long Beach, California, USA, Jun 9--15, 2019.
    [ paper ]

  6. X. Yu, B. Han, J. Yao, G. Niu, I. W. Tsang, and M. Sugiyama.
    How does disagreement help generalization against label corruption?
    In Proceedings of 36th International Conference on Machine Learning (ICML'19), PMLR, vol. 97, pp. 7164--7173, Long Beach, California, USA, Jun 9--15, 2019.
    [ paper ]

  7. N. Lu, G. Niu, A. K. Menon, and M. Sugiyama.
    On the minimal supervision for training any binary classifier from only unlabeled data.
    In Proceedings of 7th International Conference on Learning Representations (ICLR'19), 18 pages, New Orleans, Louisiana, USA, May 6--9, 2019.
    [ paper, OpenReview ]

  8. T. Ishida, G. Niu, and M. Sugiyama.
    Binary classification for positive-confidence data.
    In Advances in Neural Information Processing Systems 31 (NeurIPS'18), pp. 5917--5928, Montreal, Quebec, Canada, Dec 2--8, 2018.
    (This paper was selected for a spotlight presentation; there were 168 spotlights among 1011 accepted papers out of 4856 submissions)
    [ paper ]

  9. B. Han, J. Yao, G. Niu, M. Zhou, I. W. Tsang, Y. Zhang, and M. Sugiyama.
    Masking: A new perspective of noisy supervision.
    In Advances in Neural Information Processing Systems 31 (NeurIPS'18), pp. 5836--5846, Montreal, Quebec, Canada, Dec 2--8, 2018.
    [ paper ]

  10. B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. W. Tsang, and M. Sugiyama.
    Co-teaching: Robust training of deep neural networks with extremely noisy labels.
    In Advances in Neural Information Processing Systems 31 (NeurIPS'18), pp. 8527--8537, Montreal, Quebec, Canada, Dec 2--8, 2018.
    [ paper ]

  11. W. Hu, G. Niu, I. Sato, and M. Sugiyama.
    Does distributionally robust supervised learning give robust classifiers?
    In Proceedings of 35th International Conference on Machine Learning (ICML'18), PMLR, vol. 80, pp. 2029--2037, Stockholm, Sweden, Jul 10--15, 2018.
    [ paper ]

  12. H. Bao, G. Niu, and M. Sugiyama.
    Classification from pairwise similarity and unlabeled data.
    In Proceedings of 35th International Conference on Machine Learning (ICML'18), PMLR, vol. 80, pp. 452--461, Stockholm, Sweden, Jul 10--15, 2018.
    [ paper ]

  13. S.-J. Huang, M. Xu, M.-K. Xie, M. Sugiyama, G. Niu, and S. Chen.
    Active feature acquisition with supervised matrix completion.
    In Proceedings of 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'18), pp. 1571--1579, London, UK, Aug 19--23, 2018.
    [ paper ]

  14. R. Kiryo, G. Niu, M. C. du Plessis, and M. Sugiyama.
    Positive-unlabeled learning with non-negative risk estimator.
    In Advances in Neural Information Processing Systems 30 (NeurIPS'17), pp. 1674--1684, Long Beach, California, USA, Dec 4--9, 2017.
    (This paper was selected for an oral presentation; there were 40 orals among 678 accepted papers out of 3240 submissions)
    [ paper ]

  15. T. Ishida, G. Niu, W. Hu, and M. Sugiyama.
    Learning from complementary labels.
    In Advances in Neural Information Processing Systems 30 (NeurIPS'17), pp. 5644--5654, Long Beach, California, USA, Dec 4--9, 2017.
    [ paper ]

  16. H. Shiino, H. Sasaki, G. Niu, and M. Sugiyama.
    Whitening-free least-squares non-Gaussian component analysis.
    In Proceedings of 9th Asian Conference on Machine Learning (ACML'17), PMLR, vol. 77, pp. 375--390, Seoul, Korea, Nov 15--17, 2017.
    (This paper was selected for Best Paper Runner-up Award)
    [ paper ]

  17. T. Sakai, M. C. du Plessis, G. Niu, and M. Sugiyama.
    Semi-supervised classification based on classification from positive and unlabeled data.
    In Proceedings of 34th International Conference on Machine Learning (ICML'17), PMLR, vol. 70, pp. 2998--3006, Sydney, Australia, Aug 6--11, 2017.
    [ paper ]

  18. G. Niu, M. C. du Plessis, T. Sakai, Y. Ma, and M. Sugiyama.
    Theoretical comparisons of positive-unlabeled learning against positive-negative learning.
    In Advances in Neural Information Processing Systems 29 (NeurIPS'16), pp. 1199--1207, Barcelona, Spain, Dec 5--10, 2016.
    [ paper ]

  19. H. Sasaki, G. Niu, and M. Sugiyama.
    Non-Gaussian component analysis with log-density gradient estimation.
    In Proceedings of 19th International Conference on Artificial Intelligence and Statistics (AISTATS'16), PMLR, vol. 51, pp. 1177--1185, Cadiz, Spain, May 9--11, 2016.
    [ paper ]

  20. T. Zhao, G. Niu, N. Xie, J. Yang, and M. Sugiyama.
    Regularized policy gradients: Direct variance reduction in policy gradient estimation.
    In Proceedings of 7th Asian Conference on Machine Learning (ACML'15), PMLR, vol. 45, pp. 333--348, Hong Kong, China, Nov 20--22, 2015.
    [ paper ]

  21. M. C. du Plessis, G. Niu, and M. Sugiyama.
    Class-prior estimation for learning from positive and unlabeled data.
    In Proceedings of 7th Asian Conference on Machine Learning (ACML'15), PMLR, vol. 45, pp. 221--236, Hong Kong, China, Nov 20--22, 2015.
    [ paper ]

  22. M. C. du Plessis, G. Niu, and M. Sugiyama.
    Convex formulation for learning from positive and unlabeled data.
    In Proceedings of 32nd International Conference on Machine Learning (ICML'15), PMLR, vol. 37, pp. 1386--1394, Lille, France, Jul 6--11, 2015.
    [ paper ]

  23. M. C. du Plessis, G. Niu, and M. Sugiyama.
    Analysis of learning from positive and unlabeled data.
    In Advances in Neural Information Processing Systems 27 (NeurIPS'14), pp. 703--711, Montreal, Quebec, Canada, Dec 8--13, 2014.
    [ paper ]

  24. G. Niu, B. Dai, M. C. du Plessis, and M. Sugiyama.
    Transductive learning with multi-class volume approximation.
    In Proceedings of 31st International Conference on Machine Learning (ICML'14), PMLR, vol. 32, no. 2, pp. 1377--1385, Beijing, China, Jun 21--26, 2014.
    [ paper ]

  25. M. C. du Plessis, G. Niu, and M. Sugiyama.
    Clustering unclustered data: Unsupervised binary labeling of two datasets having different class balances.
    In Proceedings of 2013 Conference on Technologies and Applications of Artificial Intelligence (TAAI'13), pp. 1--6, Taipei, Taiwan, Dec 6--8, 2013.
    (This paper was selected for Best Paper Award)
    [ paper ]

  26. G. Niu, W. Jitkrittum, B. Dai, H. Hachiya, and M. Sugiyama.
    Squared-loss mutual information regularization: A novel information-theoretic approach to semi-supervised learning.
    In Proceedings of 30th International Conference on Machine Learning (ICML'13), PMLR, vol. 28, no. 3, pp. 10--18, Atlanta, Georgia, USA, Jun 16--21, 2013.
    [ paper ]

  27. G. Niu, B. Dai, M. Yamada, and M. Sugiyama.
    Information-theoretic semi-supervised metric learning via entropy regularization.
    In Proceedings of 29th International Conference on Machine Learning (ICML'12), pp. 89--96, Edinburgh, Scotland, Jun 26--Jul 1, 2012.
    [ paper ]

  28. T. Zhao, H. Hachiya, G. Niu, and M. Sugiyama.
    Analysis and improvement of policy gradient estimation.
    In Advances in Neural Information Processing Systems 24 (NeurIPS'11), pp. 262--270, Granada, Spain, Dec 12--17, 2011.
    [ paper ]

  29. M. Yamada, G. Niu, J. Takagi, and M. Sugiyama.
    Computationally efficient sufficient dimension reduction via squared-loss mutual information.
    In Proceedings of 3rd Asian Conference on Machine Learning (ACML'11), PMLR, vol. 20, pp. 247--262, Taoyuan, Taiwan, Nov 13--15, 2011.
    [ paper ]

  30. G. Niu, B. Dai, L. Shang, and M. Sugiyama.
    Maximum volume clustering.
    In Proceedings of 14th International Conference on Artificial Intelligence and Statistics (AISTATS'11), PMLR, vol. 15, pp. 561--569, Fort Lauderdale, Florida, USA, Apr 11--13, 2011.
    [ paper ]

  31. B. Dai, B. Hu, and G. Niu.
    Bayesian maximum margin clustering.
    In Proceedings of 10th IEEE International Conference on Data Mining (ICDM'10), pp. 108--117, Sydney, Australia, Dec 14--17, 2010.
    [ paper ]

  32. G. Niu, B. Dai, Y. Ji, and L. Shang.
    Rough margin based core vector machine.
    In Proceedings of 14th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'10), LNCS, vol. 6118, pp. 134--141, Hyderabad, India, Jun 21--24, 2010.
    [ paper ]

  33. B. Dai and G. Niu.
    Compact margin machine.
    In Proceedings of 14th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'10), LNCS, vol. 6119, pp. 507--514, Hyderabad, India, Jun 21--24, 2010.
    [ paper ]


Journal Articles

  1. H. Sasaki, T. Kanamori, A. Hyvärinen, G. Niu, and M. Sugiyama.
    Mode-seeking clustering and density ridge estimation via direct estimation of density-derivative-ratios.
    Journal of Machine Learning Research, vol. 18, no. 180, pp. 1--45, 2018.
    [ link ]

  2. T. Sakai, G. Niu, and M. Sugiyama.
    Semi-supervised AUC optimization based on positive-unlabeled learning.
    Machine Learning, vol. 107, no. 4, pp. 767--794, 2018.
    [ link ]

  3. H. Sasaki, V. Tangkaratt, G. Niu, and M. Sugiyama.
    Sufficient dimension reduction via direct estimation of the gradients of logarithmic conditional densities.
    Neural Computation, vol. 30, no. 2, pp. 477--504, 2018.
    [ link ]

  4. M. C. du Plessis, G. Niu, and M. Sugiyama.
    Class-prior estimation for learning from positive and unlabeled data.
    Machine Learning, vol. 106, no. 4, pp. 463--492, 2017.
    (MCdP and GN contributed equally to this work; GN is the corresponding author)
    [ link ]

  5. H. Sasaki, Y.-K. Noh, G. Niu, and M. Sugiyama.
    Direct density-derivative estimation.
    Neural Computation, vol. 28, no. 6, pp. 1101--1140, 2016.
    [ link ]

  6. G. Niu, B. Dai, M. Yamada, and M. Sugiyama.
    Information-theoretic semi-supervised metric learning via entropy regularization.
    Neural Computation, vol. 26, no. 8, pp. 1717--1762, 2014.
    [ link ]

  7. D. Calandriello, G. Niu, and M. Sugiyama.
    Semi-supervised information-maximization clustering.
    Neural Networks, vol. 57, pp. 103--111, 2014.
    [ link ]

  8. M. Sugiyama, G. Niu, M. Yamada, M. Kimura, and H. Hachiya.
    Information-maximization clustering based on squared-loss mutual information.
    Neural Computation, vol. 26, no. 1, pp. 84--131, 2014.
    [ link ]

  9. G. Niu, B. Dai, L. Shang, and M. Sugiyama.
    Maximum volume clustering: A new discriminative clustering approach.
    Journal of Machine Learning Research, vol. 14 (Sep), pp. 2641--2687, 2013.
    [ link ]

  10. T. Zhao, H. Hachiya, G. Niu, and M. Sugiyama.
    Analysis and improvement of policy gradient estimation.
    Neural Networks, vol. 26, pp. 118--129, 2012.
    [ link ]

  11. Y. Ji, J. Chen, G. Niu, L. Shang, and X. Dai.
    Transfer learning via multi-view principal component analysis.
    Journal of Computer Science and Technology, vol. 26, no. 1, pp. 81--98, 2011.
    [ link ]


Workshop Presentations (selected)

  1. G. Niu.
    Robust learning against label noise.
    Presented at The All-RIKEN Workshop 2019, Wako, Japan, Dec 5--6, 2019.
    (This was an award speech)

  2. G. Niu.
    When weakly-supervised learning meets deep learning.
    Presented at 3rd IJCAI BOOM, Stockholm, Sweden, Jul 13, 2018.
    (This was an invited talk)

  3. G. Niu.
    When deep learning meets weakly-supervised learning.
    Presented at Deep Learning: Theory, Algorithms, and Applications, Tokyo, Japan, Mar 19--22, 2018.
    (This was an invited talk)
    [ slides, video ]

  4. G. Niu.
    Statistical learning from weak supervision.
    Presented at 1st IRCN Retreat 2018, Yokohama, Japan, Mar 17--18, 2018.
    (This was an invited talk)

  5. G. Niu.
    Recent advances on positive-unlabeled (PU) learning.
    Presented at 30th IBISML (joint with PRMU and CVIM), Tokyo, Japan, Sep 15--16, 2017.
    (This was an invited talk)
    [ slides ]

  6. G. Niu (presented by Tomoya Sakai).
    Positive-unlabeled learning with application to semi-supervised learning.
    Presented at Microsoft Research Asia Academic Day 2017, Yilan, Taiwan, May 26, 2017.

  7. G. Niu, B. Dai, M. Yamada, and M. Sugiyama.
    Information-theoretic semi-supervised metric learning via entropy regularization.
    Presented at 21st MLSS, Kyoto, Japan, Aug 27--Sep 7, 2012.

  8. G. Niu, B. Dai, L. Shang, and M. Sugiyama.
    Maximum volume clustering.
    Presented at 18th MLSS, Bordeaux, France, Sep 4--17, 2011.


Theses

  1. Gang Niu.
    Discriminative methods with imperfect supervision in machine learning (204 pages).
    Doctoral Thesis, Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan, Sep 2013.

  2. Gang Niu.
    Support vector learning based on rough set modeling (71 pages, in Chinese).
    Master's Thesis, Department of Computer Science and Technology, Nanjing University, Nanjing, China, May 2010.