Publications
[ Conference Papers, Journal Articles, Workshop Presentations, Theses ]
An asterisk (*) beside authors' names indicates equal contributions.
Preprints
T. Fang*, N. Lu*, G. Niu, and M. Sugiyama.
Rethinking importance weighting for deep learning under distribution shift.
[ arXiv ]
S. Wu, X. Xia, T. Liu, B. Han, M. Gong, N. Wang, H. Liu, and G. Niu.
Class2Simi: A new perspective on learning with label noise.
[ arXiv ]
X. Xia, T. Liu, B. Han, N. Wang, M. Gong, H. Liu, G. Niu, D. Tao, and M. Sugiyama.
Parts-dependent label noise: Towards instance-dependent label noise.
[ arXiv ]
Y. Yao, T. Liu, B. Han, M. Gong, J. Deng, G. Niu, and M. Sugiyama.
Dual T: Reducing estimation error for transition matrix in label-noise learning.
[ arXiv ]
S. Wu*, X. Xia*, T. Liu, B. Han, M. Gong, N. Wang, H. Liu, and G. Niu.
Multi-class classification from noisy-similarity-labeled data.
[ arXiv ]
Y. Yao, T. Liu, B. Han, M. Gong, G. Niu, M. Sugiyama, and D. Tao.
Towards mixture proportion estimation without irreducibility.
[ arXiv ]
A. Berthon, B. Han, G. Niu, T. Liu, and M. Sugiyama.
Confidence scores make instance-dependent label-noise learning possible.
[ arXiv ]
J. Zhang*, B. Han*, G. Niu, T. Liu, and M. Sugiyama.
Where is the bottleneck of adversarial learning with unlabeled data?
[ arXiv ]
A. Jacovi, G. Niu, Y. Goldberg, and M. Sugiyama.
Scalable evaluation and improvement of document set expansion via neural positive-unlabeled learning.
[ arXiv ]
W. Xu, G. Niu, A. Hyvärinen, and M. Sugiyama.
Direction matters: On influence-preserving graph summarization and max-cut principle for directed graphs.
[ arXiv ]
Y. Pan, W. Chen, G. Niu, I. W. Tsang, and M. Sugiyama.
Fast and robust rank aggregation against model misspecification.
[ arXiv ]
F. Liu, J. Lu, B. Han, G. Niu, G. Zhang, and M. Sugiyama.
Butterfly: A panacea for all difficulties in wildly unsupervised domain adaptation.
[ arXiv ]
C.-Y. Hsieh, M. Xu, G. Niu, H.-T. Lin, and M. Sugiyama.
A pseudo-label method for coarse-to-fine multi-label learning with limited supervision.
[ OpenReview ]
M. Xu, B. Li, G. Niu, B. Han, and M. Sugiyama.
Revisiting sample selection approach to positive-unlabeled learning: Turning unlabeled data into positive rather than negative.
[ arXiv ]
M. Kato, L. Xu, G. Niu, and M. Sugiyama.
Alternate estimation of a classifier and the class-prior from positive and unlabeled data.
[ arXiv ]
M. Xu, G. Niu, B. Han, I. W. Tsang, Z.-H. Zhou, and M. Sugiyama.
Matrix co-completion for multi-label classification with missing features and labels.
[ arXiv ]
T. Sakai, G. Niu, and M. Sugiyama.
Information-theoretic representation learning for positive-unlabeled classification.
[ arXiv ]
Conference Papers (full review)
J. Zhang*, X. Xu*, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli.
Attacks which do not kill training make adversarial learning stronger.
In Proceedings of 37th International Conference on Machine Learning (ICML 2020),
to appear.
[ paper ]
B. Han, G. Niu, X. Yu, Q. Yao, M. Xu, I. W. Tsang, and M. Sugiyama.
SIGUA: Forgetting may make learning with noisy labels more robust.
In Proceedings of 37th International Conference on Machine Learning (ICML 2020),
to appear.
[ paper ]
L. Feng*, T. Kaneko*, B. Han, G. Niu, B. An, and M. Sugiyama.
Learning with multiple complementary labels.
In Proceedings of 37th International Conference on Machine Learning (ICML 2020),
to appear.
[ paper ]
Y.-T. Chou, G. Niu, H.-T. Lin, and M. Sugiyama.
Unbiased risk estimators can mislead: A case study of learning with complementary labels.
In Proceedings of 37th International Conference on Machine Learning (ICML 2020),
to appear.
[ paper ]
Q. Yao, H. Yang, B. Han, G. Niu, and J. T. Kwok.
Searching to exploit memorization effect in learning with noisy labels.
In Proceedings of 37th International Conference on Machine Learning (ICML 2020),
to appear.
[ paper ]
J. Lv, M. Xu, L. Feng, G. Niu, X. Geng, and M. Sugiyama.
Progressive identification of true labels for partial-label learning.
In Proceedings of 37th International Conference on Machine Learning (ICML 2020),
to appear.
[ paper ]
T. Ishida, I. Yamane, T. Sakai, G. Niu, and M. Sugiyama.
Do we need zero training loss after achieving zero training error?
In Proceedings of 37th International Conference on Machine Learning (ICML 2020),
to appear.
[ paper ]
N. Lu, T. Zhang, G. Niu, and M. Sugiyama.
Mitigating overfitting in supervised classification from two unlabeled datasets: A consistent risk correction approach.
In Proceedings of 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020),
to appear.
[ paper ]
C. Li, M. E. Khan, Z. Sun, G. Niu, B. Han, S. Xie, and Q. Zhao.
Beyond unfolding: Exact recovery of latent convex tensor decomposition under reshuffling.
In Proceedings of 34th AAAI Conference on Artificial Intelligence (AAAI 2020),
pp. 4602–4609, New York, New York, USA, Feb 7–12, 2020.
[ paper ]
L. Xu, J. Honda, G. Niu, and M. Sugiyama.
Uncoupled regression from pairwise comparison data.
In Advances in Neural Information Processing Systems 32 (NeurIPS 2019),
pp. 3992–4002, Vancouver, British Columbia, Canada, Dec 8–14, 2019.
[ paper ]
X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama.
Are anchor points really indispensable in label-noise learning?
In Advances in Neural Information Processing Systems 32 (NeurIPS 2019),
pp. 6838–6849, Vancouver, British Columbia, Canada, Dec 8–14, 2019.
[ paper ]
Y.-G. Hsieh, G. Niu, and M. Sugiyama.
Classification from positive, unlabeled and biased negative data.
In Proceedings of 36th International Conference on Machine Learning (ICML 2019),
PMLR, vol. 97, pp. 2820–2829, Long Beach, California, USA, Jun 9–15, 2019.
[ paper ]
T. Ishida, G. Niu, A. K. Menon, and M. Sugiyama.
Complementary-label learning for arbitrary losses and models.
In Proceedings of 36th International Conference on Machine Learning (ICML 2019),
PMLR, vol. 97, pp. 2971–2980, Long Beach, California, USA, Jun 9–15, 2019.
[ paper ]
X. Yu, B. Han, J. Yao, G. Niu, I. W. Tsang, and M. Sugiyama.
How does disagreement help generalization against label corruption?
In Proceedings of 36th International Conference on Machine Learning (ICML 2019),
PMLR, vol. 97, pp. 7164–7173, Long Beach, California, USA, Jun 9–15, 2019.
[ paper ]
N. Lu, G. Niu, A. K. Menon, and M. Sugiyama.
On the minimal supervision for training any binary classifier from only unlabeled data.
In Proceedings of 7th International Conference on Learning Representations (ICLR 2019),
18 pages, New Orleans, Louisiana, USA, May 6–9, 2019.
[ paper,
OpenReview ]
T. Ishida, G. Niu, and M. Sugiyama.
Binary classification for positive-confidence data.
In Advances in Neural Information Processing Systems 31 (NeurIPS 2018),
pp. 5917–5928, Montreal, Quebec, Canada, Dec 2–8, 2018.
(This paper was selected for spotlight presentation;
there were 168 spotlights among the 1011 papers accepted from 4856 submissions.)
[ paper ]
B. Han*, J. Yao*, G. Niu, M. Zhou, I. W. Tsang, Y. Zhang, and M. Sugiyama.
Masking: A new perspective of noisy supervision.
In Advances in Neural Information Processing Systems 31 (NeurIPS 2018),
pp. 5836–5846, Montreal, Quebec, Canada, Dec 2–8, 2018.
[ paper ]
B. Han*, Q. Yao*, X. Yu, G. Niu, M. Xu, W. Hu, I. W. Tsang, and M. Sugiyama.
Co-teaching: Robust training of deep neural networks with extremely noisy labels.
In Advances in Neural Information Processing Systems 31 (NeurIPS 2018),
pp. 8527–8537, Montreal, Quebec, Canada, Dec 2–8, 2018.
[ paper ]
W. Hu, G. Niu, I. Sato, and M. Sugiyama.
Does distributionally robust supervised learning give robust classifiers?
In Proceedings of 35th International Conference on Machine Learning (ICML 2018),
PMLR, vol. 80, pp. 2029–2037, Stockholm, Sweden, Jul 10–15, 2018.
[ paper ]
H. Bao, G. Niu, and M. Sugiyama.
Classification from pairwise similarity and unlabeled data.
In Proceedings of 35th International Conference on Machine Learning (ICML 2018),
PMLR, vol. 80, pp. 452–461, Stockholm, Sweden, Jul 10–15, 2018.
[ paper ]
S.-J. Huang, M. Xu, M.-K. Xie, M. Sugiyama, G. Niu, and S. Chen.
Active feature acquisition with supervised matrix completion.
In Proceedings of 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2018),
pp. 1571–1579, London, UK, Aug 19–23, 2018.
[ paper ]
R. Kiryo, G. Niu, M. C. du Plessis, and M. Sugiyama.
Positive-unlabeled learning with non-negative risk estimator.
In Advances in Neural Information Processing Systems 30 (NeurIPS 2017),
pp. 1674–1684, Long Beach, California, USA, Dec 4–9, 2017.
(This paper was selected for oral presentation;
there were 40 orals among the 678 papers accepted from 3240 submissions.)
[ paper ]
T. Ishida, G. Niu, W. Hu, and M. Sugiyama.
Learning from complementary labels.
In Advances in Neural Information Processing Systems 30 (NeurIPS 2017),
pp. 5644–5654, Long Beach, California, USA, Dec 4–9, 2017.
[ paper ]
H. Shiino, H. Sasaki, G. Niu, and M. Sugiyama.
Whitening-free least-squares non-Gaussian component analysis.
In Proceedings of 9th Asian Conference on Machine Learning (ACML 2017),
PMLR, vol. 77, pp. 375–390, Seoul, Korea, Nov 15–17, 2017.
(This paper was selected for Best Paper Runner-up Award)
[ paper ]
T. Sakai, M. C. du Plessis, G. Niu, and M. Sugiyama.
Semi-supervised classification based on classification from positive and unlabeled data.
In Proceedings of 34th International Conference on Machine Learning (ICML 2017),
PMLR, vol. 70, pp. 2998–3006, Sydney, Australia, Aug 6–11, 2017.
[ paper ]
G. Niu, M. C. du Plessis, T. Sakai, Y. Ma, and M. Sugiyama.
Theoretical comparisons of positive-unlabeled learning against positive-negative learning.
In Advances in Neural Information Processing Systems 29 (NeurIPS 2016),
pp. 1199–1207, Barcelona, Spain, Dec 5–10, 2016.
[ paper ]
H. Sasaki, G. Niu, and M. Sugiyama.
Non-Gaussian component analysis with log-density gradient estimation.
In Proceedings of 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016),
PMLR, vol. 51, pp. 1177–1185, Cadiz, Spain, May 9–11, 2016.
[ paper ]
T. Zhao, G. Niu, N. Xie, J. Yang, and M. Sugiyama.
Regularized policy gradients: Direct variance reduction in policy gradient estimation.
In Proceedings of 7th Asian Conference on Machine Learning (ACML 2015),
PMLR, vol. 45, pp. 333–348, Hong Kong, China, Nov 20–22, 2015.
[ paper ]
M. C. du Plessis, G. Niu, and M. Sugiyama.
Class-prior estimation for learning from positive and unlabeled data.
In Proceedings of 7th Asian Conference on Machine Learning (ACML 2015),
PMLR, vol. 45, pp. 221–236, Hong Kong, China, Nov 20–22, 2015.
[ paper ]
M. C. du Plessis, G. Niu, and M. Sugiyama.
Convex formulation for learning from positive and unlabeled data.
In Proceedings of 32nd International Conference on Machine Learning (ICML 2015),
PMLR, vol. 37, pp. 1386–1394, Lille, France, Jul 6–11, 2015.
[ paper ]
M. C. du Plessis, G. Niu, and M. Sugiyama.
Analysis of learning from positive and unlabeled data.
In Advances in Neural Information Processing Systems 27 (NeurIPS 2014),
pp. 703–711, Montreal, Quebec, Canada, Dec 8–13, 2014.
[ paper ]
G. Niu, B. Dai, M. C. du Plessis, and M. Sugiyama.
Transductive learning with multi-class volume approximation.
In Proceedings of 31st International Conference on Machine Learning (ICML 2014),
PMLR, vol. 32, no. 2, pp. 1377–1385, Beijing, China, Jun 21–26, 2014.
[ paper ]
M. C. du Plessis, G. Niu, and M. Sugiyama.
Clustering unclustered data: Unsupervised binary labeling of two datasets having different class balances.
In Proceedings of 2013 Conference on Technologies and Applications of Artificial Intelligence (TAAI 2013),
pp. 1–6, Taipei, Taiwan, Dec 6–8, 2013.
(This paper was selected for Best Paper Award)
[ paper ]
G. Niu, W. Jitkrittum, B. Dai, H. Hachiya, and M. Sugiyama.
Squared-loss mutual information regularization: A novel information-theoretic approach to semi-supervised learning.
In Proceedings of 30th International Conference on Machine Learning (ICML 2013),
PMLR, vol. 28, no. 3, pp. 10–18, Atlanta, Georgia, USA, Jun 16–21, 2013.
[ paper ]
G. Niu, B. Dai, M. Yamada, and M. Sugiyama.
Information-theoretic semi-supervised metric learning via entropy regularization.
In Proceedings of 29th International Conference on Machine Learning (ICML 2012),
pp. 89–96, Edinburgh, Scotland, Jun 26–Jul 1, 2012.
[ paper ]
T. Zhao, H. Hachiya, G. Niu, and M. Sugiyama.
Analysis and improvement of policy gradient estimation.
In Advances in Neural Information Processing Systems 24 (NeurIPS 2011),
pp. 262–270, Granada, Spain, Dec 12–17, 2011.
[ paper ]
M. Yamada, G. Niu, J. Takagi, and M. Sugiyama.
Computationally efficient sufficient dimension reduction via squared-loss mutual information.
In Proceedings of 3rd Asian Conference on Machine Learning (ACML 2011),
PMLR, vol. 20, pp. 247–262, Taoyuan, Taiwan, Nov 13–15, 2011.
[ paper ]
G. Niu, B. Dai, L. Shang, and M. Sugiyama.
Maximum volume clustering.
In Proceedings of 14th International Conference on Artificial Intelligence and Statistics (AISTATS 2011),
PMLR, vol. 15, pp. 561–569, Fort Lauderdale, Florida, USA, Apr 11–13, 2011.
[ paper ]
B. Dai, B. Hu, and G. Niu.
Bayesian maximum margin clustering.
In Proceedings of 10th IEEE International Conference on Data Mining (ICDM 2010),
pp. 108–117, Sydney, Australia, Dec 14–17, 2010.
[ paper ]
G. Niu, B. Dai, Y. Ji, and L. Shang.
Rough margin based core vector machine.
In Proceedings of 14th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2010),
LNCS, vol. 6118, pp. 134–141, Hyderabad, India, Jun 21–24, 2010.
[ paper ]
B. Dai and G. Niu.
Compact margin machine.
In Proceedings of 14th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2010),
LNCS, vol. 6119, pp. 507–514, Hyderabad, India, Jun 21–24, 2010.
[ paper ]
Journal Articles
H. Sasaki, T. Kanamori, A. Hyvärinen, G. Niu, and M. Sugiyama.
Mode-seeking clustering and density ridge estimation via direct estimation of density-derivative-ratios.
Journal of Machine Learning Research, vol. 18, no. 180, pp. 1–45, 2018.
[ link ]
T. Sakai, G. Niu, and M. Sugiyama.
Semi-supervised AUC optimization based on positive-unlabeled learning.
Machine Learning, vol. 107, no. 4, pp. 767–794, 2018.
[ link ]
H. Sasaki, V. Tangkaratt, G. Niu, and M. Sugiyama.
Sufficient dimension reduction via direct estimation of the gradients of logarithmic conditional densities.
Neural Computation, vol. 30, no. 2, pp. 477–504, 2018.
[ link ]
M. C. du Plessis*, G. Niu*, and M. Sugiyama.
Class-prior estimation for learning from positive and unlabeled data.
Machine Learning, vol. 106, no. 4, pp. 463–492, 2017.
[ link ]
H. Sasaki, Y.-K. Noh, G. Niu, and M. Sugiyama.
Direct density-derivative estimation.
Neural Computation, vol. 28, no. 6, pp. 1101–1140, 2016.
[ link ]
G. Niu, B. Dai, M. Yamada, and M. Sugiyama.
Information-theoretic semi-supervised metric learning via entropy regularization.
Neural Computation, vol. 26, no. 8, pp. 1717–1762, 2014.
[ link ]
D. Calandriello, G. Niu, and M. Sugiyama.
Semi-supervised information-maximization clustering.
Neural Networks, vol. 57, pp. 103–111, 2014.
[ link ]
M. Sugiyama, G. Niu, M. Yamada, M. Kimura, and H. Hachiya.
Information-maximization clustering based on squared-loss mutual information.
Neural Computation, vol. 26, no. 1, pp. 84–131, 2014.
[ link ]
G. Niu, B. Dai, L. Shang, and M. Sugiyama.
Maximum volume clustering: A new discriminative clustering approach.
Journal of Machine Learning Research, vol. 14 (Sep), pp. 2641–2687, 2013.
[ link ]
T. Zhao, H. Hachiya, G. Niu, and M. Sugiyama.
Analysis and improvement of policy gradient estimation.
Neural Networks, vol. 26, pp. 118–129, 2012.
[ link ]
Y. Ji, J. Chen, G. Niu, L. Shang, and X. Dai.
Transfer learning via multi-view principal component analysis.
Journal of Computer Science and Technology, vol. 26, no. 1, pp. 81–98, 2011.
[ link ]
Workshop Presentations (selected)
G. Niu.
Robust learning against label noise.
Presented at The All-RIKEN Workshop 2019, Wako, Japan, Dec 5–6, 2019.
(This was an award speech)
G. Niu.
When weakly supervised learning meets deep learning.
Presented at 3rd IJCAI BOOM, Stockholm, Sweden, Jul 13, 2018.
(This was an invited talk)
G. Niu.
When deep learning meets weakly supervised learning.
Presented at Deep Learning: Theory, Algorithms, and Applications, Tokyo, Japan, Mar 19–22, 2018.
(This was an invited talk)
[ slides,
video ]
G. Niu.
Statistical learning from weak supervision.
Presented at 1st IRCN Retreat 2018, Yokohama, Japan, Mar 17–18, 2018.
(This was an invited talk)
G. Niu.
Recent advances on positive-unlabeled (PU) learning.
Presented at 30th IBISML (joint with PRMU and CVIM), Tokyo, Japan, Sep 15–16, 2017.
(This was an invited talk)
[ slides ]
G. Niu (presented by Tomoya Sakai).
Positive-unlabeled learning with application to semi-supervised learning.
Presented at Microsoft Research Asia Academic Day 2017, Yilan, Taiwan, May 26, 2017.
G. Niu, B. Dai, M. Yamada, and M. Sugiyama.
Information-theoretic semi-supervised metric learning via entropy regularization.
Presented at 21st MLSS, Kyoto, Japan, Aug 27–Sep 7, 2012.
G. Niu, B. Dai, L. Shang, and M. Sugiyama.
Maximum volume clustering.
Presented at 18th MLSS, Bordeaux, France, Sep 4–17, 2011.
Theses
Gang Niu.
Discriminative methods with imperfect supervision in machine learning (204 pages).
Doctoral Thesis, Department of Computer Science, Tokyo Institute of Technology, Sep 2013.
Gang Niu.
Support vector learning based on rough set modeling (71 pages in Chinese).
Master's Thesis, Department of Computer Science and Technology, Nanjing University, May 2010.
