Bamdev Mishra

Applied Machine Learning Researcher

Publications

[Preprints]   [Journals]   [Conferences]   [Workshop papers]   [PhD thesis]   [Talks]   [Posters]   [Patents]   [Miscellaneous]

A list of my publications is also available on Google Scholar.

Please contact me if you can’t find something.

Preprints and technical reports

Z. Huang, W. Huang, P. Jawanpuria, BM, “Federated Learning on Riemannian Manifolds with Differential Privacy,” arXiv preprint arXiv:2404.10029, 2024 [arXiv:2404.10029].

A. Han, BM, P. Jawanpuria, A. Takeda, “A Framework for Bilevel Optimization on Riemannian Manifolds,” arXiv preprint arXiv:2402.03883, 2024 [arXiv:2402.03883].

S. Utpala, A. Han, P. Jawanpuria, and BM, “Rieoptax: Riemannian Optimization in JAX,” arXiv preprint arXiv:2210.04840, 2022 [arXiv:2210.04840][GitHub].

BM, N T V Satya Dev, H. Kasai, and P. Jawanpuria, “Manifold optimization for optimal transport”, arXiv preprint arXiv:2103.00902, 2021 [arXiv:2103.00902][MOT].

BM, H. Kasai, and P. Jawanpuria, “Riemannian optimization on the simplex of positive definite matrices”, optimization for machine learning (OPT 2020) workshop at NeurIPS 2020 [arXiv:1906.10436][manifold factory file available on Manopt].

BM, K. Adithya Apuroop, and R. Sepulchre, “A Riemannian geometry for low-rank matrix completion”, Technical report, 2012 [arXiv:1211.1550] [qGeomMC Matlab code webpage]. The content also appears as Chapter 4 of my PhD thesis.

Journal articles

A. Han, P. Jawanpuria, and BM, “Riemannian optimization,” to appear as a chapter in the Encyclopedia of Optimization, 2024.

P. Jawanpuria, BM, and K. Gurumoorthy, “Revisiting stochastic submodular maximization with cardinality constraint: A bandit perspective,” accepted to TMLR, 2024 [OpenReview].

A. Han, BM, P. Jawanpuria, and J. Gao, “Differentially private Riemannian optimization,” accepted to MLJ, 2023 [Published version][arXiv:2205.09494].

A. Han, BM, P. Jawanpuria, and J. Gao, “Nonconvex-nonconcave min-max optimization on Riemannian manifolds,” accepted to TMLR, 2023 [TMLR openreview].

A. Han, BM, P. Jawanpuria, P. Kumar, and J. Gao, “Riemannian Hamiltonian methods for min-max optimization on manifolds,” accepted to SIOPT, 2023 [arXiv:2204.11418][Matlab codes][Python GitHub][Personal pdf copy][Presentation].

S. Utpala, A. Han, P. Jawanpuria, and BM, “Improved Differentially Private Riemannian Optimization: Fast Sampling and Variance Reduction,” TMLR, 2023 [OpenReview.net].

A. Han, BM, P. Jawanpuria, and J. Gao, “Riemannian block SPD coupling manifold and its application to optimal transport”, accepted to MLJ (ACML journal track special issue), Oral at ACML, 2022 [Publisher’s version][arXiv:2201.12933][Codes].

H. Kasai, P. Jawanpuria, and BM, “A Riemannian approach to low-rank tensor learning”, Published as a chapter in Tensors for Data Processing, 2021 [Personal copy of chapter][Publisher version][GitHub].

H. Sato, H. Kasai, and BM, “Riemannian stochastic variance reduced gradient algorithm”, accepted to SIOPT, 2019 [Personal copy][Publisher copy][arXiv:1702.05594]. A shorter version is “Riemannian stochastic variance reduced gradient on Grassmann manifold”, Technical report, 2016 [arXiv:1605.07367] [Matlab codes]. A still shorter version was accepted to the 9th NeurIPS workshop on optimization for machine learning (OPT2016) held in Barcelona.

BM, H. Kasai, P. Jawanpuria, and A. Saroop, “A Riemannian gossip approach to subspace learning on Grassmann manifold”, accepted to Machine Learning, 2019 [Personal copy][Publisher copy][arXiv:1705.00467][Matlab codes][Video]. It is an extension of the technical report [arXiv:1605.06968]. A shorter version was earlier accepted to the 9th NeurIPS workshop on optimization for machine learning (OPT2016) held in Barcelona.

P. Jawanpuria, B. Arjun, A. Kunchukuttan, and BM, “Learning multilingual word embeddings in latent metric space: a geometric approach”, accepted to TACL [Publisher’s Pdf][arXiv:1808.08773][GitHub].

Y. Shi, BM, and W. Chen, “Topological interference management with user admission control via Riemannian optimization”, accepted to IEEE Transactions on Wireless Communications, 2017 [arXiv:1607.07252][Publisher’s pdf].

BM and R. Sepulchre, “Riemannian preconditioning”, SIAM Journal on optimization, 26(1), pp. 635 – 660, 2016 [arXiv:1405.6055] [Publisher’s pdf].

Y. Sun, J. Gao, X. Hong, BM, and B. Yin, “Heterogeneous tensor decomposition for clustering via manifold optimization”, Transactions on Pattern Analysis and Machine Intelligence, 38(3) pp. 476 – 489, 2016 [arXiv:1504.01777] [Publisher’s pdf].

N. Boumal, BM, P.-A. Absil, and R. Sepulchre, “Manopt, a Matlab toolbox for optimization on manifolds”, Journal of Machine Learning Research 15(Apr), pp. 1455 – 1459, 2014 [Publisher’s pdf] [arXiv:1308.5200] [Webpage].
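To give a flavour of the kind of computation toolboxes like Manopt automate, here is a minimal pure-Python sketch of Riemannian gradient descent on the unit sphere (illustrative only; this is not the Manopt API, and the step size and iteration count are arbitrary choices). It minimizes f(x) = −xᵀAx over the sphere, whose minimizer is the dominant eigenvector of a symmetric matrix A.

```python
import math

def matvec(A, x):
    # Matrix-vector product for a matrix given as a list of rows.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def normalize(x):
    n = math.sqrt(dot(x, x))
    return [xi / n for xi in x]

def riemannian_gd_sphere(A, x0, step=0.1, iters=500):
    """Riemannian gradient descent for f(x) = -x^T A x on the unit sphere."""
    x = normalize(x0)
    for _ in range(iters):
        egrad = [-2 * g for g in matvec(A, x)]               # Euclidean gradient
        coef = dot(egrad, x)
        rgrad = [g - coef * xi for g, xi in zip(egrad, x)]   # project onto tangent space
        x = normalize([xi - step * g for xi, g in zip(x, rgrad)])  # retract back to sphere
    return x

A = [[3.0, 1.0], [1.0, 2.0]]
x = riemannian_gd_sphere(A, [1.0, 0.5])
# x converges to the dominant eigenvector of A; the Rayleigh quotient
# dot(x, matvec(A, x)) approaches the largest eigenvalue (5 + sqrt(5)) / 2.
```

The two manifold-specific ingredients, projecting the Euclidean gradient onto the tangent space and retracting the update back onto the sphere, are exactly what such toolboxes supply for a library of manifolds, so the user only writes the cost function.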

BM, G. Meyer, S. Bonnabel, and R. Sepulchre, “Fixed-rank matrix factorizations and Riemannian low-rank optimization”, Computational Statistics 29(3 – 4), pp. 591 – 621, 2014 [Publisher’s pdf] [arXiv:1209.0430] [Matlab codes supplied with the qGeomMC package].

BM, G. Meyer, F. Bach, and R. Sepulchre, “Low-rank optimization with trace norm penalty”, SIAM Journal on Optimization 23 (4), pp. 2124 – 2149, 2013 [Publisher’s pdf] [arXiv:1112.2318] [Matlab code webpage].

Conference proceedings

A. Han, P. Jawanpuria, and BM, “Riemannian coordinate descent algorithms on matrix manifolds,” accepted to ICML 2024.

P. Manupriya, P. Jawanpuria, BM, K. Gurumoorthy, and S. Jagarlapudi, “Submodular framework for structured-sparse optimal transport,” accepted to ICML 2024.

N. Mishra, BM, P. Jawanpuria, and P. Kumar, “A Gauss-Newton Approach for Min-Max Optimization in Generative Adversarial Networks,” accepted to IJCNN 2024 [arXiv].

A. Han, BM, P. Jawanpuria, and J. Gao, “Generalized Bures-Wasserstein geometry for positive definite matrices,” accepted Geometric Science of Information, 2023 [arXiv:2110.10464][Codes].

I. Mishra, A. Dasgupta, P. Jawanpuria, BM, and P. Kumar, “Light-weight deep extreme multilabel classification,” IJCNN, 2023 [arXiv:2304.11045][Published personal copy]. It received the poster award at IJCNN 2023.

A. Han, BM, P. Jawanpuria, and J. Gao, “Riemannian accelerated gradient methods via extrapolation,” AISTATS, 2023 [arXiv:2208.06619].

S. Banerjee, BM, P. Jawanpuria, and M. Shrivastava, “Generalised spherical text embedding”, accepted to ICON, 2022 [arXiv:2211.16801][ACL Anthology].

A. R. Chaudhuri, P. Jawanpuria, and BM, “ProtoBandit: Efficient prototype selection via multi-armed bandits”, accepted (oral) to ACML, 2022 [arXiv:2210.01860][Presentation]. The published version of this paper (Roy Chaudhuri et al., 2022) contains an erratum in Theorem 5: ProtoBandit scales as O(k^3|S|) instead of O(k|S|). The per-iteration ν0 of ProtoBandit and the overall ν (across all iterations) are related as ν0 = ν/(k(1−1/e−ϵ)), not ν0 = ν(1−1/e−ϵ) as stated in the published version. We had earlier missed the factor k in the denominator; consequently, an extra factor of k appears in the numerator in Theorem 5. All changes are highlighted in red in the arXiv version.
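For readability, the corrected relation from the erratum can be restated in display form (this merely rewrites the relation stated above; the derivation is in the arXiv version):

```latex
\nu_0 \;=\; \frac{\nu}{k\left(1 - \tfrac{1}{e} - \epsilon\right)}
\qquad \text{and not} \qquad
\nu_0 \;=\; \nu\left(1 - \tfrac{1}{e} - \epsilon\right),
```

so the missing factor of \(k\) in the denominator propagates into Theorem 5, giving the corrected \(O(k^3 |S|)\) scaling.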

A. Han, BM, P. Jawanpuria, and J. Gao, “On Riemannian optimization over positive definite matrices with the Bures-Wasserstein geometry”, accepted to NeurIPS, 2021 [arXiv:2106.00286][Codes].

P. Jawanpuria, N T V Satya Dev, and BM, “Efficient robust optimal transport: formulations and algorithms”, accepted to IEEE CDC, 2021 [arXiv:2010.11852][Code]. A short version was presented at the optimization for machine learning (OPT 2020) workshop at NeurIPS 2020.

K. S. Gurumoorthy, P. Jawanpuria, and BM, “SPOT: A framework for selection of prototypes using optimal transport”, accepted to ECML PKDD Applied Data Science Track, 2021 [arXiv:2103.10159][SPOTGreedy code][Python codes at GitHub][Presentation].

P. Jawanpuria, M. Meghwanshi, and BM, “A simple approach to learning unsupervised multilingual embeddings”, accepted to EMNLP 2020 as short paper [arXiv:2004.05991][Presentation][Video].

P. Jawanpuria, N T V Satya Dev, A. Kunchukuttan, and BM, “Learning geometric word meta-embeddings”, Proceedings of the 5th Workshop on Representation Learning for NLP (RepL4NLP-2020), co-located with ACL 2020 [arXiv:2004.09219][Presentation][Video].

P. Jawanpuria, M. Meghwanshi, and BM, “Geometry-aware domain adaptation for unsupervised alignment of word embeddings”, accepted to ACL 2020 as short paper [arXiv:2004.08243][Matlab codes][Presentation][Video].

P. Jawanpuria, M. Meghwanshi, and BM, “Low-rank approximations of hyperbolic embeddings”, accepted to CDC 2019 [Personal copy][arXiv:1903.07307][Matlab codes][Presentation].

A. R. Yelundur, V. Chaoji, and BM, “Detection of Review Abuse via Semi-Supervised Binary Multi-Target Tensor Decomposition”, accepted at KDD (applied data science track) 2019 [arXiv:1905.06246][Video][Personal copy]. An earlier version was “E-commerce anomaly detection: a Bayesian semi-supervised tensor decomposition approach using natural gradients”, arXiv preprint arXiv:1804.03836, 2018 [arXiv:1804.03836][Presentation video].

H. Kasai, P. Jawanpuria, and BM, “Riemannian adaptive stochastic gradient algorithms on matrix manifolds”, ICML 2019 [Slides][Video at 1:01:29][Publisher’s copy][arXiv:1902.01144][Matlab codes].

H. Kasai and BM, “Inexact trust-region algorithms on Riemannian manifolds”, accepted to NeurIPS 2018 [GitHub][Publisher’s copy].

M. Nimishakavi, P. Jawanpuria, and BM, “A dual framework for low-rank tensor completion”, accepted to NeurIPS 2018 [arXiv:1712.01193][Matlab codes page][Publisher’s copy][Presentation]. A shorter version got accepted to the NeurIPS workshop on Synergies in Geometric Data Analysis, 2017.

M. Nimishakavi, BM, Manish Gupta, and Partha Talukdar, “Inductive framework for multi-aspect streaming tensor completion with side information”, accepted to CIKM 2018 [arXiv:1802.06371][Personal copy][GitHub].

S. Mahadevan, BM, and S. Ghosh, “A unified framework for domain adaptation using metric learning on manifolds”, accepted to ECML-PKDD 2018 [Personal copy][Publisher’s pdf][arXiv:1804.10834][GitHub].

H. Kasai and BM, “Riemannian joint dimensionality reduction and dictionary learning on symmetric positive definite manifold”, accepted to EUSIPCO 2018 [Personal copy][Publisher copy].

H. Kasai, H. Sato, and BM, “Riemannian stochastic recursive gradient algorithm”, accepted to ICML, 2018 [Publisher’s pdf][HK’s presentation video].

P. Jawanpuria and BM, “A unified framework for structured low-rank matrix learning”, accepted to ICML, 2018 [Publisher’s pdf][arXiv:1704.07352][Matlab codes page]. A shorter version got accepted to 10th NeurIPS Workshop on Optimization for Machine Learning, 2017.

H. Kasai, H. Sato, and BM, “Riemannian stochastic quasi-Newton algorithm with variance reduction and its convergence analysis”, accepted to AISTATS, 2018 [arXiv:1703.04890][Publisher’s pdf].

Y. Shi, BM, and Wei Chen, “A sparse and low-rank optimization framework for network topology control in dense fog-RAN,” accepted to the IEEE 85th Vehicular Technology Conference, 2017. The earlier title was “A sparse and low-rank optimization framework for index coding via Riemannian optimization”, Technical report, 2016 [arXiv:1604.04325] [Matlab codes]. A shorter version of the work presenting a unified approach, titled “Sparse and low-rank decomposition for big data systems via smoothed Riemannian optimization,” also appeared in the 9th NeurIPS workshop on optimization for machine learning (OPT2016) held in Barcelona.

BM and R. Sepulchre, “Scaled stochastic gradient descent for low-rank matrix completion”, accepted to the 55th IEEE Conference on Decision and Control, 2016 [Publisher’s copy] [arXiv:1603.04989] [Matlab codes]. A different version was accepted to the internal Amazon machine learning conference (AMLC) 2016, a platform for internal Amazon researchers to present their work.

H. Kasai and BM, “Low-rank tensor completion: a Riemannian manifold preconditioning approach”, accepted to ICML 2016 [Publisher’s copy] [Supplementary material] [arXiv:1605.08257]. A shorter version appeared at the 8th NeurIPS workshop on optimization for machine learning (OPT2015). An earlier version was titled “Riemannian preconditioning for tensor completion”, arXiv:1506.02159, 2015 [arXiv:1506.02159] [Matlab code webpage].

R. Liégeois, BM, M. Zorzi, and R. Sepulchre, “Sparse plus low-rank autoregressive identification in neuroimaging time series”, Accepted for publication in the proceedings of the 54th IEEE Conference on Decision and Control, 2015 [Publisher’s pdf] [arXiv:1503.08639] [Matlab code webpage].

BM and R. Sepulchre, “R3MC: A Riemannian three-factor algorithm for low-rank matrix completion”, Proceedings of the 53rd IEEE Conference on Decision and Control, pp. 1137 – 1142, 2014 [arXiv:1306.2672] [R3MC Matlab code webpage] [Publisher’s pdf].

BM and B. Vandereycken, “A Riemannian approach to low-rank algebraic Riccati equations”, Proceedings of the 21st International Symposium on Mathematical Theory of Networks and Systems, Extended abstract, pp. 965 – 968, 2014 [Publisher’s pdf][arXiv:1312.4883] [Matlab code webpage].

BM, G. Meyer, and R. Sepulchre, “Low-rank optimization for distance matrix completion”, Proceedings of the 50th IEEE Conference on Decision and Control, pp. 4455 – 4460, 2011 [Publisher’s pdf] [arXiv:1304.6663] [Matlab code webpage].

Workshop papers

M. Meghwanshi, P. Jawanpuria, A. Kunchukuttan, H. Kasai, and BM, “McTorch, a manifold optimization library for deep learning”, arXiv preprint arXiv:1810.01811, 2018 [arXiv:1810.01811][GitHub][Presentation]. Also accepted at the MLOSS workshop at NeurIPS 2018.

M. Bhutani, P. Jawanpuria, H. Kasai, and BM, “Low-rank geometric mean metric learning”, accepted to the Geometry in Machine Learning (GiMLi) workshop in ICML, 2018 [arXiv:1806.05454].

M. Bhutani and BM, “A two-dimensional decomposition approach for matrix completion through gossip”, arXiv preprint arXiv:1711.07684, 2017 [arXiv:1711.07684]. A shorter version got accepted to the NeurIPS workshop on Emergent Communication, 2017.

V. Badrinarayanan, BM, and R. Cipolla, “Symmetry-invariant optimization in deep networks”, Technical report, arXiv:1511.01754, 2015 [arXiv:1511.01754] [Matlab code webpage]. A shorter version appears in “Understanding symmetries in deep networks”, 8th NeurIPS workshop on optimization for machine learning (OPT2015) [arXiv:1511.01029].

PhD thesis


BM, “A Riemannian approach to large-scale constrained least-squares with symmetries”, PhD thesis, University of Liège, 2014 [Orbi:173257] [Thesis presentation]. The PhD jury included Lieven De Lathauwer, Quentin Louveaux, Louis Wehenkel, Philippe Toint, Francis Bach (co-supervisor), and Rodolphe Sepulchre (supervisor).

Talks

Gave a keynote talk at the EE research days at IIT Kanpur, 2023, on ML for mobile devices at Microsoft, India: scenarios, challenges, and opportunities.

Gave a talk at ICIAM 2023 on “min-max optimization on manifolds” [Presentation].

Gave a talk at NISER on “Prototype selection using optimal transport and multi-armed bandits”, 2023.

Invited speaker at CyPhySS-2022 symposium, Bengaluru, 2022. The talk was on optimal transport and Riemannian optimization. Flyer here.

Invited by DRIEMS College, Bhubaneswar, to present a webinar on machine learning.

Was invited as a mentor to the LogML summer school 2021. Had a great time working with the attendees, who presented wonderful short works.

Gave a talk on manifold optimization at the machine learning summer school 2019.

Gave a couple of talks at GIAN 2019 at IIT Bhubaneswar. The talks focused on industrial machine learning problems and the role of geometric optimization.

Gave a talk on manifold optimization: applications and algorithms at IIIT Hyderabad, 2018.

Gave a talk on the role of machine learning in Office India at Microsoft IDC, 2018.

Gave a seminar on dimensionality reduction techniques using matrix factorization at IIT Bhubaneswar on 16 Jan 2018.

Gave a seminar in the numerical analysis group at the University of Geneva on 07 October 2016.

Gave a lecture at the IFCAM summer school on large-scale optimization at IISc on 07 July 2016. The talk focused on manifold optimization: applications and algorithms.

On 30 September 2015, gave a seminar talk on Manopt: a Matlab toolbox for optimization on manifolds in the department of systems and control engineering at IIT Bombay. Gave a second seminar talk on the same topic at the School of Electrical Sciences, IIT Bhubaneswar on 12 October 2015.

In Workshop on low-rank optimization and applications, Hausdorff Center for Mathematics, Bonn, June 2015, presented Riemannian preconditioning.

On 04 June 2015, gave a seminar talk on a Riemannian approach to large-scale constrained least-squares with symmetries at UCLouvain, Belgium as part of the big data seminar series.

In TCMM Workshop 2014, together with Nicolas Boumal, presented the Manopt toolbox.

In Dolomites Workshop 2013, presented the work on “Fixed-rank optimization on Riemannian quotient manifolds”.

In Benelux meeting on systems and control 2013, presented the work on “Tuning metrics for low-rank matrix completion” [Book of Abstracts: Page 157].

In ISMP 2012, presented the work on “Fixed-rank matrix factorizations and the design of invariant optimization algorithms” [Book of Abstracts: Page 145].

In Benelux meeting on systems and control 2012, presented the work on “Low-rank optimization on the set of low-rank non-symmetric matrices” [Book of Abstracts: Page 103].

In Benelux meeting on systems and control 2011, presented the work on “Manifold based optimization techniques for low-rank distance matrix completion” [Book of Abstracts: Page 75].

Posters

Presented three posters on Structured low-rank learning, a dual framework for tensor completion, and a two-dimensional gossip approach to matrix completion at NeurIPS workshops, 2017.

Presented three posters on RSVRG, gossip for matrix completion, and low-rank and sparse decomposition at the optimization for machine learning workshop (OPT2016) at NeurIPS, Barcelona, 2016.

Along with Hiroyuki Kasai, presented poster on low-rank tensor completion: a Riemannian preconditioning approach at ICML 2016.

Scaled SGD for matrix completion at internal Amazon machine learning conference, Seattle, 2016.

Two posters at the NeurIPS workshop on optimization for machine learning, 2015. One on the tensor completion and the other on symmetry-invariant optimization in deep networks.

Riemannian preconditioning for tensor completion in Low-rank Optimization and Applications workshop, Hausdorff Center for Mathematics, Bonn, 2015.

Riemannian preconditioning in IAP DySCO study day, 2014.

Trace norm minimization in IAP DySCO study day, 2012.

Patents


A patent filed on discovering troubleshooting guides for support engineers.


A patent filed on computing competitive reference prices of third-party unique products.

A patent filed on a two-dimensional distributed computing approach for matrix completion through gossip [US Patent].

Miscellaneous


I maintain a list of fixed-rank algorithms and geometries here. Note that the list has not been updated since 2018.