Journal of Machine Learning Research, Volume 22 (2021)
- Krishnakumar Balasubramanian, Tong Li, Ming Yuan:
On the Optimality of Kernel-Embedding Based Goodness-of-Fit Tests. 1:1-1:45 - Gilles Blanchard, Aniket Anand Deshmukh, Ürün Dogan, Gyemin Lee, Clayton Scott:
Domain Generalization by Marginal Transfer Learning. 2:1-2:55 - Stefano Tracà, Cynthia Rudin, Weiyu Yan:
Regulating Greed Over Time in Multi-Armed Bandits. 3:1-3:99 - Erich Merrill, Alan Fern, Xiaoli Z. Fern, Nima Dolatnia:
An Empirical Study of Bayesian Optimization: Acquisition Versus Partition. 4:1-4:25 - Carlos Alberto Gomez-Uribe, Brian Karrer:
The Decoupled Extended Kalman Filter for Dynamic Exponential-Family Factorization Models. 5:1-5:25 - Fadhel Ayed, Marco Battiston, Federico Camerlenghi, Stefano Favaro:
Consistent estimation of small masses in feature sampling. 6:1-6:28 - Viktor Bengs, Róbert Busa-Fekete, Adil El Mesaoudi-Paul, Eyke Hüllermeier:
Preference-based Online Learning with Dueling Bandits: A Survey. 7:1-7:108 - Benjamin Lu, Johanna Hardin:
A Unified Framework for Random Forest Prediction Error Estimation. 8:1-8:41 - Defeng Sun, Kim-Chuan Toh, Yancheng Yuan:
Convex Clustering: Model, Theoretical Guarantee and Efficient Algorithm. 9:1-9:32 - Bumeng Zhuo, Chao Gao:
Mixing Time of Metropolis-Hastings for Bayesian Community Detection. 10:1-10:89 - Yunxiao Chen, Zhiliang Ying, Haoran Zhang:
Unfolding-Model-Based Visualization: Theory, Method and Applications. 11:1-11:51 - Shenglong Zhou, Naihua Xiu, Hou-Duo Qi:
Global and Quadratic Convergence of Newton Hard-Thresholding Pursuit. 12:1-12:45 - Xiao Di, Yuan Ke, Runze Li:
Homogeneity Structure Learning in Large-scale Panel Data with Heavy-tailed Errors. 13:1-13:42 - Maryam Aziz, Emilie Kaufmann, Marie-Karelle Riviere:
On Multi-Armed Bandit Designs for Dose-Finding Trials. 14:1-14:38 - Jagdeep Singh Bhatia:
Simple and Fast Algorithms for Interactive Machine Learning with Random Counter-examples. 15:1-15:30 - Shih-Yuan Yu, Sujit Rokka Chhetri, Arquimedes Canedo, Palash Goyal, Mohammad Abdullah Al Faruque:
Pykg2vec: A Python Library for Knowledge Graph Embedding. 16:1-16:6 - Nikola B. Kovachki, Andrew M. Stuart:
Continuous Time Analysis of Momentum Methods. 17:1-17:40 - Gaoxia Jiang, Wenjian Wang, Yuhua Qian, Jiye Liang:
A Unified Sample Selection Framework for Output Noise Filtering: An Error-Bound Perspective. 18:1-18:66 - Alexandre d'Aspremont, Mihai Cucuringu, Hemant Tyagi:
Ranking and synchronization from pairwise measurements via SVD. 19:1-19:63 - Guillaume Maillard, Sylvain Arlot, Matthieu Lerasle:
Aggregated Hold-Out. 20:1-20:55 - Lei Yang, Jia Li, Defeng Sun, Kim-Chuan Toh:
A Fast Globally Linearly Convergent Algorithm for the Computation of Wasserstein Barycenters. 21:1-21:37 - Purnamrita Sarkar, Y. X. Rachel Wang, Soumendu Sundar Mukherjee:
When random initializations help: a study of variational inference for community detection. 22:1-22:46 - Giulio Galvan, Matteo Lapucci, Chih-Jen Lin, Marco Sciandrone:
A Two-Level Decomposition Framework Exploiting First and Second Order Information for SVM Training Problems. 23:1-23:38 - Riikka Huusari, Hachem Kadri:
Entangled Kernels - Beyond Separability. 24:1-24:40 - Yunwen Lei, Ting Hu, Ke Tang:
Generalization Performance of Multi-pass Stochastic Gradient Descent with Convex Loss Functions. 25:1-25:41 - Tuhin Sarkar, Alexander Rakhlin, Munther A. Dahleh:
Finite Time LTI System Identification. 26:1-26:61 - Hamid Eftekhari, Moulinath Banerjee, Yaacov Ritov:
Inference In High-dimensional Single-Index Models Under Symmetric Designs. 27:1-27:63 - Julian Zimmert, Yevgeny Seldin:
Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits. 28:1-28:49 - Wanrong Zhang, Sara Krehbiel, Rui Tuo, Yajun Mei, Rachel Cummings:
Single and Multiple Change-Point Detection with Differential Privacy. 29:1-29:36 - Oliver Kroemer, Scott Niekum, George Konidaris:
A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms. 30:1-30:82 - Tianyu Wang, Marco Morucci, M. Usaid Awan, Yameng Liu, Sudeepa Roy, Cynthia Rudin, Alexander Volfovsky:
FLAME: A Fast Large-scale Almost Matching Exactly Approach to Causal Inference. 31:1-31:41 - Fei Lu, Mauro Maggioni, Sui Tang:
Learning interaction kernels in heterogeneous systems of agents from multiple trajectories. 32:1-32:67 - Tijana Zrnic, Aaditya Ramdas, Michael I. Jordan:
Asynchronous Online Testing of Multiple Hypotheses. 33:1-33:39 - Imtiaz Ahmed, Xia Ben Hu, Mithun P. Acharya, Yu Ding:
Neighborhood Structure Assisted Non-negative Matrix Factorization and Its Application in Unsupervised Point-wise Anomaly Detection. 34:1-34:32 - Melkior Ornik, Ufuk Topcu:
Learning and Planning for Time-Varying MDPs Using Maximum Likelihood Estimation. 35:1-35:40 - Carlos Villacampa-Calvo, Bryan Zaldivar, Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato:
Multi-class Gaussian Process Classification with Noisy Inputs. 36:1-36:52 - Zhao Tang Luo, Huiyan Sang, Bani K. Mallick:
A Bayesian Contiguous Partitioning Method for Learning Clustered Latent Variables. 37:1-37:52 - Umit Kose, Andrzej Ruszczynski:
Risk-Averse Learning by Temporal Difference Methods with Markov Risk Measures. 38:1-38:34 - Guillaume Tauzin, Umberto Lupo, Lewis Tunstall, Julian Burella Pérez, Matteo Caorsi, Anibal M. Medina-Mardones, Alberto Dassatti, Kathryn Hess:
giotto-tda: A Topological Data Analysis Toolkit for Machine Learning and Data Exploration. 39:1-39:6 - Anton Bakhtin, Yuntian Deng, Sam Gross, Myle Ott, Marc'Aurelio Ranzato, Arthur Szlam:
Residual Energy-Based Models for Text. 40:1-40:41 - Henning Lange, Steven L. Brunton, J. Nathan Kutz:
From Fourier to Koopman: Spectral Methods for Long-term Time Series Prediction. 41:1-41:38 - Wenlong Mou, Yi-An Ma, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan:
High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm. 42:1-42:41 - Rahul Parhi, Robert D. Nowak:
Banach Space Representer Theorems for Neural Networks and Ridge Splines. 43:1-43:40 - Jason M. Altschuler, Enric Boix-Adserà:
Wasserstein barycenters can be computed in polynomial time in fixed dimension. 44:1-44:19 - Ye Tian, Yang Feng:
RaSE: Random Subspace Ensemble Classification. 45:1-45:93 - T. Tony Cai, Hongzhe Li, Rong Ma:
Optimal Structured Principal Subspace Estimation: Metric Entropy and Minimax Rates. 46:1-46:45 - Soon Hoe Lim:
Understanding Recurrent Neural Networks Using Nonequilibrium Response Theory. 47:1-47:48 - Behzad Azmi, Dante Kalise, Karl Kunisch:
Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression. 48:1-48:32 - Damek Davis, Dmitriy Drusvyatskiy, Lin Xiao, Junyu Zhang:
From Low Probability to High Confidence in Stochastic Convex Optimization. 49:1-49:38 - Nguyen Thi Kim Hue, Monica Chiogna:
Structure Learning of Undirected Graphical Models for Count Data. 50:1-50:53 - Junlong Zhu, Qingtao Wu, Mingchuan Zhang, Ruijuan Zheng, Keqin Li:
Projection-free Decentralized Online Learning for Submodular Maximization over Time-Varying Networks. 51:1-51:42 - Alper Atamtürk, Andrés Gómez, Shaoning Han:
Sparse and Smooth Signal Estimation: Convexification of L0-Formulations. 52:1-52:43 - Weiwei Li, Jan Hannig, Sayan Mukherjee:
Subspace Clustering through Sub-Clusters. 53:1-53:37 - Xinming Yang, Lingrui Gan, Naveen N. Narisetty, Feng Liang:
GemBag: Group Estimation of Multiple Bayesian Graphical Models. 54:1-54:48 - Minjie Wang, Genevera I. Allen:
Integrative Generalized Convex Clustering Optimization and Feature Selection for Mixed Multi-View Data. 55:1-55:73 - Charlie Frogner, Sebastian Claici, Edward Chien, Justin Solomon:
Incorporating Unlabeled Data into Distributionally Robust Learning. 56:1-56:46 - George Papamakarios, Eric T. Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan:
Normalizing Flows for Probabilistic Modeling and Inference. 57:1-57:64 - Zhe Fei, Yi Li:
Estimation and Inference for High Dimensional Generalized Linear Models: A Splitting and Smoothing Approach. 58:1-58:32 - Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Anand D. Sarwate:
Predictive Learning on Hidden Tree-Structured Ising Models. 59:1-59:82 - Jonathan Tuck, Shane T. Barratt, Stephen P. Boyd:
A Distributed Method for Fitting Laplacian Regularized Stratified Models. 60:1-60:37 - Yunwen Lei, Yiming Ying:
Stochastic Proximal AUC Maximization. 61:1-61:45 - Mariusz Kubkowski, Jan Mielniczuk, Pawel Teisseyre:
How to Gain on Power: Novel Conditional Independence Tests Based on Short Expansion of Conditional Mutual Information. 62:1-62:57 - Nicolás García Trillos, Franca Hoffmann, Bamdad Hosseini:
Geometric structure of graph Laplacian embeddings. 63:1-63:55 - Botao Hao, Boxiang Wang, Pengyuan Wang, Jingfei Zhang, Jian Yang, Will Wei Sun:
Sparse Tensor Additive Regression. 64:1-64:43 - Yanqing Zhang, Xuan Bi, Niansheng Tang, Annie Qu:
Dynamic Tensor Recommender Systems. 65:1-65:35 - Haishan Ye, Luo Luo, Zhihua Zhang:
Approximate Newton Methods. 66:1-66:41 - Trambak Banerjee, Qiang Liu, Gourab Mukherjee, Wenguang Sun:
A General Framework for Empirical Bayes Estimation in Discrete Linear Exponential Family. 67:1-67:46 - Chirag Gupta, Sivaraman Balakrishnan, Aaditya Ramdas:
Path Length Bounds for Gradient Descent and Flow. 68:1-68:63 - Shujie Ma, Liangjun Su, Yichong Zhang:
Determining the Number of Communities in Degree-corrected Stochastic Block Models. 69:1-69:63 - Lasse Petersen, Niels Richard Hansen:
Testing Conditional Independence via Quantile Regression Based Partial Copulas. 70:1-70:47 - Tao Luo, Zhi-Qin John Xu, Zheng Ma, Yaoyu Zhang:
Phase Diagram for Two-layer ReLU Neural Networks at Infinite-width Limit. 71:1-71:47 - Erhan Bayraktar, Ibrahim Ekren, Xin Zhang:
Prediction against a limited adversary. 72:1-72:33 - Michael Muehlebach, Michael I. Jordan:
Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives. 73:1-73:50 - Benjamin Charlier, Jean Feydy, Joan Alexis Glaunès, François-David Collin, Ghislain Durif:
Kernel Operations on the GPU, with Autodiff, without Memory Overflows. 74:1-74:6 - Jorge Pérez, Pablo Barceló, Javier Marinkovic:
Attention is Turing-Complete. 75:1-75:35 - Alain Celisse, Martin Wahl:
Analyzing the discrepancy principle for kernelized spectral filter learning algorithms. 76:1-76:59 - Yasuhiro Fujita, Prabhat Nagarajan, Toshiki Kataoka, Takahiro Ishikawa:
ChainerRL: A Deep Reinforcement Learning Library. 77:1-77:14 - Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T. H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, Titouan Vayer:
POT: Python Optimal Transport. 78:1-78:8 - Chris Mingard, Guillermo Valle Pérez, Joar Skalse, Ard A. Louis:
Is SGD a Bayesian sampler? Well, almost. 79:1-79:64 - Zengfeng Huang, Xuemin Lin, Wenjie Zhang, Ying Zhang:
Communication-Efficient Distributed Covariance Sketch, with Application to Distributed PCA. 80:1-80:38 - Maxime Cauchois, Suyash Gupta, John C. Duchi:
Knowing what You Know: valid and validated confidence sets in multiclass and multilabel prediction. 81:1-81:42 - Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Sahand Sharifzadeh, Volker Tresp, Jens Lehmann:
PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings. 82:1-82:6 - Rishabh Dudeja, Daniel Hsu:
Statistical Query Lower Bounds for Tensor PCA. 83:1-83:51 - Jiyuan Tu, Weidong Liu, Xiaojun Mao, Xi Chen:
Variance Reduced Median-of-Means Estimator for Byzantine-Robust Distributed Inference. 84:1-84:67 - Ohad Shamir:
Gradient Methods Never Overfit On Separable Data. 85:1-85:20 - Andreas C. Damianou, Neil D. Lawrence, Carl Henrik Ek:
Multi-view Learning as a Nonparametric Nonlinear Inter-Battery Factor Analysis. 86:1-86:51 - Patrick Kreitzberg, Oliver Serang:
On Solving Probabilistic Linear Diophantine Equations. 87:1-87:24 - Can M. Le:
Edge Sampling Using Local Network Information. 88:1-88:29 - Feifei Wang, Junni L. Zhang, Yichao Li, Ke Deng, Jun S. Liu:
Bayesian Text Classification and Summarization via A Class-Specified Topic Model. 89:1-89:48 - Tomer Galanti, Sagie Benaim, Lior Wolf:
Risk Bounds for Unsupervised Cross-Domain Mapping with IPMs. 90:1-90:42 - Tingting Zhao, Alexandre Bouchard-Côté:
Analysis of high-dimensional Continuous Time Markov Chains using the Local Bouncy Particle Sampler. 91:1-91:41 - Anastasis Kratsios, Cody B. Hyndman:
NEU: A Meta-Algorithm for Universal UAP-Invariant Feature Representation. 92:1-92:51 - Zhengrong Xing, Peter Carbonetto, Matthew Stephens:
Flexible Signal Denoising via Flexible Empirical Bayes Shrinkage. 93:1-93:28 - Xiaoyi Mai, Romain Couillet:
Consistent Semi-Supervised Graph Regularization for High Dimensional Data. 94:1-94:48 - Hanyuan Hang, Zhouchen Lin, Xiaoyu Liu, Hongwei Wen:
Histogram Transform Ensembles for Large-scale Regression. 95:1-95:87 - Kai Puolamäki, Emilia Oikarinen, Andreas Henelius:
Guided Visual Exploration of Relations in Data Sets. 96:1-96:32 - Alberto Maria Metelli, Matteo Pirotta, Daniele Calandriello, Marcello Restelli:
Safe Policy Iteration: A Monotonically Improving Approximate Policy Iteration Approach. 97:1-97:83 - Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan:
On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift. 98:1-98:76 - Lin Liu, Rajarshi Mukherjee, James M. Robins, Eric Tchetgen Tchetgen:
Adaptive estimation of nonparametric functionals. 99:1-99:66 - Matthias Feurer, Jan N. van Rijn, Arlind Kadra, Pieter Gijsbers, Neeratyoy Mallik, Sahithya Ravi, Andreas Müller, Joaquin Vanschoren, Frank Hutter:
OpenML-Python: an extensible Python API for OpenML. 100:1-100:5 - Baoxun Wang, Zhen Xu, Huan Zhang, Kexin Qiu, Deyuan Zhang, Chengjie Sun:
LocalGAN: Modeling Local Distributions for Adversarial Response Generation. 101:1-101:29 - Gunwoong Park, Sang Jun Moon, Sion Park, Jong-June Jeon:
Learning a High-dimensional Linear Structural Equation Model via l1-Regularized Regression. 102:1-102:41 - Guodong Zhang, Xuchan Bao, Laurent Lessard, Roger B. Grosse:
A Unified Analysis of First-Order Methods for Smooth Games via Integral Quadratic Constraints. 103:1-103:39 - Joseph D. Janizek, Pascal Sturmfels, Su-In Lee:
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks. 104:1-104:54 - James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth:
Pathwise Conditioning of Gaussian Processes. 105:1-105:47 - Gérard Ben Arous, Reza Gheissari, Aukosh Jagannath:
Online stochastic gradient descent on non-convex losses from high-dimensional inference. 106:1-106:51 - Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Michael Auli, Armand Joulin:
Beyond English-Centric Multilingual Machine Translation. 107:1-107:48 - Zhu Li, Jean-Francois Ton, Dino Oglic, Dino Sejdinovic:
Towards a Unified Analysis of Random Fourier Features. 108:1-108:51 - Ronan Perry, Gavin Mischler, Richard Guo, Theo Lee, Alexander Chang, Arman Koul, Cameron Franz, Hugo Richard, Iain Carmichael, Pierre Ablin, Alexandre Gramfort, Joshua T. Vogelstein:
mvlearn: Multiview Machine Learning in Python. 109:1-109:7 - Jacob Montiel, Max Halford, Saulo Martiello Mastelini, Geoffrey Bolmier, Raphaël Sourty, Robin Vaysse, Adil Zouitine, Heitor Murilo Gomes, Jesse Read, Talel Abdessalem, Albert Bifet:
River: machine learning for streaming data in Python. 110:1-110:8 - Steven Siwei Ye, Oscar Hernan Madrid Padilla:
Non-parametric Quantile Regression via the K-NN Fused Lasso. 111:1-111:38 - Xun Qian, Zheng Qu, Peter Richtárik:
L-SVRG and L-Katyusha with Arbitrary Sampling. 112:1-112:47 - Ashia C. Wilson, Ben Recht, Michael I. Jordan:
A Lyapunov Analysis of Accelerated Methods in Optimization. 113:1-113:34 - Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan Alistarh, Daniel M. Roy:
NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization. 114:1-114:43 - Michael R. Metel, Akiko Takeda:
Stochastic Proximal Methods for Non-Smooth Non-Convex Constrained Sparse Optimization. 115:1-115:36 - Victor Hamer, Pierre Dupont:
An Importance Weighted Feature Selection Stability Measure. 116:1-116:57 - Shaofeng Deng, Shuyang Ling, Thomas Strohmer:
Strong Consistency, Graph Laplacians, and the Stochastic Block Model. 117:1-117:44 - Chidubem Arachie, Bert Huang:
A General Framework for Adversarial Label Learning. 118:1-118:33 - Gérard Biau, Maxime Sangnier, Ugo Tanielian:
Some Theoretical Insights into Wasserstein GANs. 119:1-119:45 - Wei Wang, Matthew Stephens:
Empirical Bayes Matrix Factorization. 120:1-120:40 - Vikram Krishnamurthy, George Yin:
Langevin Dynamics for Adaptive Inverse Reinforcement Learning of Stochastic Gradient Algorithms. 121:1-121:49 - Kyriakos Axiotis, Maxim Sviridenko:
Sparse Convex Optimization via Adaptively Regularized Hard Thresholding. 122:1-122:47 - George Wynne, François-Xavier Briol, Mark Girolami:
Convergence Guarantees for Gaussian Process Means With Misspecified Likelihoods and Smoothness. 123:1-123:40 - Jingyi Jessica Li, Yiling Elaine Chen, Xin Tong:
A flexible model-free prediction-based framework for feature ranking. 124:1-124:54 - Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou:
Bandit Convex Optimization in Non-stationary Environments. 125:1-125:45 - Molei Liu, Yin Xia, Kelly Cho, Tianxi Cai:
Integrative High Dimensional Multiple Testing with Heterogeneity under Data Sharing Constraints. 126:1-126:26 - Ismael Lemhadri, Feng Ruan, Louis Abraham, Robert Tibshirani:
LassoNet: A Neural Network with Feature Sparsity. 127:1-127:29 - Rohit Agrawal, Thibaut Horel:
Optimal Bounds between f-Divergences and Integral Probability Metrics. 128:1-128:59 - Niladri S. Chatterji, Philip M. Long:
Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime. 129:1-129:30 - Steve Hanneke:
Learning Whenever Learning is Possible: Universal Learning under General Stochastic Processes. 130:1-130:116 - Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters:
MushroomRL: Simplifying Reinforcement Learning Research. 131:1-131:5 - Adriano Pastore, Michael Gastpar:
Locally Differentially-Private Randomized Response for Discrete Distribution Learning. 132:1-132:56 - Alberto Bietti, Alekh Agarwal, John Langford:
A Contextual Bandit Bake-off. 133:1-133:49 - Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels:
An Inertial Newton Algorithm for Deep Learning. 134:1-134:31 - Antoine Dedieu, Hussein Hazimeh, Rahul Mazumder:
Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives. 135:1-135:47 - Liam Hodgkinson, Robert Salomone, Fred Roosta:
Implicit Langevin Algorithms for Sampling From Log-concave Densities. 136:1-136:30 - Tong Wang, Qihang Lin:
Hybrid Predictive Models: When an Interpretable Model Collaborates with a Black-box Model. 137:1-137:38 - Yunzhang Zhu, Renxiong Liu:
An algorithmic view of L2 regularization and some path-following algorithms. 138:1-138:62 - Jianqing Fan, Bai Jiang, Qiang Sun:
Hoeffding's Inequality for General Markov Chains and Its Applications to Statistical Learning. 139:1-139:35 - Fanghui Liu, Lei Shi, Xiaolin Huang, Jie Yang, Johan A. K. Suykens:
Generalization Properties of hyper-RKHS and its Applications. 140:1-140:38 - Johan Alenlöv, Arnoud Doucet, Fredrik Lindsten:
Pseudo-Marginal Hamiltonian Monte Carlo. 141:1-141:45 - Jesús Arroyo, Avanti Athreya, Joshua Cape, Guodong Chen, Carey E. Priebe, Joshua T. Vogelstein:
Inference for Multiple Heterogeneous Networks with a Common Invariant Subspace. 142:1-142:49 - Henning Petzka, Cristian Sminchisescu:
Non-attracting Regions of Local Minima in Deep and Wide Neural Networks. 143:1-143:34 - Swati Gupta, Vijay Kamble:
Individual Fairness in Hindsight. 144:1-144:35 - Viet Huynh, Nhat Ho, Nhan Dam, XuanLong Nguyen, Mikhail Yurochkin, Hung Bui, Dinh Q. Phung:
On efficient multilevel Clustering via Wasserstein distances. 145:1-145:43 - Krishnakumar Balasubramanian:
Nonparametric Modeling of Higher-Order Interactions via Hypergraphons. 146:1-146:35 - Meiling Hao, Lianqiang Qu, Dehan Kong, Liuquan Sun, Hongtu Zhu:
Optimal Minimax Variable Selection for Large-Scale Matrix Linear Regression Model. 147:1-147:39 - Wooseok Ha, Kimon Fountoulakis, Michael W. Mahoney:
Statistical guarantees for local graph clustering. 148:1-148:54 - Zebin Yang, Aijun Zhang:
Hyperparameter Optimization via Sequential Uniform Designs. 149:1-149:47 - Tian Tong, Cong Ma, Yuejie Chi:
Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent. 150:1-150:63 - László Györfi, Roi Weiss:
Universal consistency and rates of convergence of multiclass prototype algorithms in metric spaces. 151:1-151:25 - Antonio Blanca, Zongchen Chen, Daniel Stefankovic, Eric Vigoda:
Hardness of Identity Testing for Restricted Boltzmann Machines and Potts models. 152:1-152:56 - Kyohei Atarashi, Satoshi Oyama, Masahito Kurihara:
Factorization Machines with Regularization for Sparse Feature Interactions. 153:1-153:50 - Yikun Zhang, Yen-Chi Chen:
Kernel Smoothing, Mean Shift, and Their Learning Theory with Directional Data. 154:1-154:92 - Licong Lin, Edgar Dobriban:
What Causes the Test Error? Going Beyond Bias-Variance via ANOVA. 155:1-155:82 - Eric Lybrand, Rayan Saab:
A Greedy Algorithm for Quantizing Neural Networks. 156:1-156:38 - Takuo Matsubara, Chris J. Oates, François-Xavier Briol:
The Ridgelet Prior: A Covariance Function Approach to Prior Specification for Bayesian Neural Networks. 157:1-157:57 - Takeru Matsuda, Masatoshi Uehara, Aapo Hyvärinen:
Information criteria for non-normalized models. 158:1-158:33 - Niladri S. Chatterji, Philip M. Long, Peter L. Bartlett:
When Does Gradient Descent with Logistic Loss Find Interpolating Two-Layer Networks? 159:1-159:48 - Antoine Grosnit, Alexander I. Cowen-Rivers, Rasul Tutunov, Ryan-Rhys Griffiths, Jun Wang, Haitham Bou-Ammar:
Are We Forgetting about Compositional Optimisers in Bayesian Optimisation? 160:1-160:78 - Tim van Erven, Wouter M. Koolen, Dirk van der Hoeven:
MetaGrad: Adaptation using Multiple Learning Rates in Online Learning. 161:1-161:61 - Krikamol Muandet, Motonobu Kanagawa, Sorawit Saengkyongam, Sanparith Marukatat:
Counterfactual Mean Embeddings. 162:1-162:71 - Ivan Stelmakh, Nihar B. Shah, Aarti Singh:
PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review. 163:1-163:66 - Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, Hugo Larochelle:
Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program). 164:1-164:20 - Charles H. Martin, Michael W. Mahoney:
Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning. 165:1-165:73 - Ryan R. Curtin, Marcus Edel, Rahul Ganesh Prabhu, Suryoday Basak, Zhihao Lou, Conrad Sanderson:
The ensmallen library for flexible numerical optimization. 166:1-166:6 - Daniel J. Luckett, Eric B. Laber, Siyeon Kim, Michael R. Kosorok:
Estimation and Optimization of Composite Outcomes. 167:1-167:40 - Jeffrey W. Miller:
Asymptotic Normality, Concentration, and Coverage of Generalized Posteriors. 168:1-168:53 - Mingrui Liu, Hassan Rafique, Qihang Lin, Tianbao Yang:
First-order Convergence Theory for Weakly-Convex-Weakly-Concave Min-max Problems. 169:1-169:34 - Bin Gu, Xiyuan Wei, Shangqian Gao, Ziran Xiong, Cheng Deng, Heng Huang:
Black-Box Reductions for Zeroth-Order Gradient Algorithms to Achieve Lower Query Complexity. 170:1-170:47 - Hongwei Sun, Qiang Wu:
Optimal Rates of Distributed Regression with Imperfect Kernels. 171:1-171:34 - Fadoua Balabdaoui, Charles R. Doss, Cécile Durot:
Unlinked Monotone Regression. 172:1-172:60 - Jing Dong, Xin T. Tong:
Replica Exchange for Non-Convex Optimization. 173:1-173:59 - Vishakha Patil, Ganesh Ghalme, Vineet Nair, Y. Narahari:
Achieving Fairness in the Stochastic Multi-Armed Bandit Problem. 174:1-174:31 - Stefano Peluchetti, Stefano Favaro:
Doubly infinite residual neural networks: a diffusion process approach. 175:1-175:48 - Uri Stemmer:
Locally Private k-Means Clustering. 176:1-176:30 - Xin Bing, Florentina Bunea, Seth Strimas-Mackey, Marten H. Wegkamp:
Prediction Under Latent Factor Regression: Adaptive PCR, Interpolating Predictors and Beyond. 177:1-177:50 - Tineke Blom, Mirthe M. van Diepen, Joris M. Mooij:
Conditional independences and causal relations implied by sets of equations. 178:1-178:62 - Yuetian Luo, Garvesh Raskutti, Ming Yuan, Anru R. Zhang:
A Sharp Blockwise Tensor Perturbation Bound for Orthogonal Iteration. 179:1-179:48 - Trambak Banerjee, Gourab Mukherjee, Debashis Paul:
Improved Shrinkage Prediction under a Spiked Covariance Structure. 180:1-180:40 - Janis Klaise, Arnaud Van Looveren, Giovanni Vacanti, Alexandru Coca:
Alibi Explain: Algorithms for Explaining Machine Learning Models. 181:1-181:7 - Pascal Klink, Hany Abdulsamad, Boris Belousov, Carlo D'Eramo, Jan Peters, Joni Pajarinen:
A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning. 182:1-182:52 - Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker:
Benchmarking Unsupervised Object Representations for Video Sequences. 183:1-183:61 - Martin Binder, Florian Pfisterer, Michel Lang, Lennart Schneider, Lars Kotthoff, Bernd Bischl:
mlr3pipelines - Flexible Machine Learning Pipelines in R. 184:1-184:7 - HanQin Cai, Keaton Hamm, Longxiu Huang, Deanna Needell:
Mode-wise Tensor Decompositions: Multi-dimensional Generalizations of CUR Decompositions. 185:1-185:36 - Andrew K. Massimino, Mark A. Davenport:
As You Like It: Localization via Paired Comparisons. 186:1-186:39 - Rasmus Bonnevie, Mikkel N. Schmidt:
Matrix Product States for Inference in Discrete Probabilistic Models. 187:1-187:48 - Michael Thomas Smith, Mauricio A. Álvarez, Neil D. Lawrence:
Differentially Private Regression and Classification with Sparse Gaussian Processes. 188:1-188:41 - Saber Salehkaleybar, Arsalan Sharif-Nassab, S. Jamaloddin Golestani:
One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them. 189:1-189:47 - Changyue Song, Kaibo Liu, Xi Zhang:
Collusion Detection and Ground Truth Inference in Crowdsourcing for Labeling Tasks. 190:1-190:45 - Yann Issartel:
On the Estimation of Network Complexity: Dimension of Graphons. 191:1-191:62 - Fei Wang, Ling Zhou, Lu Tang, Peter X. K. Song:
Method of Contraction-Expansion (MOCE) for Simultaneous Inference in Linear Models. 192:1-192:32 - Majid Noroozi, Marianna Pensky, Ramchandra Rimal:
Sparse Popularity Adjusted Stochastic Block Model. 193:1-193:36 - Keith D. Levin, Fred Roosta, Minh Tang, Michael W. Mahoney, Carey E. Priebe:
Limit theorems for out-of-sample extensions of the adjacency and Laplacian spectral embeddings. 194:1-194:59 - Pierre Humbert, Batiste Le Bars, Laurent Oudre, Argyris Kalogeratos, Nicolas Vayatis:
Learning Laplacian Matrix from Graph Signals with Sparse Spectral Representation. 195:1-195:47 - Ping Xu, Yue Wang, Xiang Chen, Zhi Tian:
COKE: Communication-Censored Decentralized Kernel Learning. 196:1-196:35 - Alexandre Bouchard-Côté, Andrew Roth:
Particle-Gibbs Sampling for Bayesian Feature Allocation Models. 197:1-197:105 - Tiffany M. Tang, Genevera I. Allen:
Integrated Principal Components Analysis. 198:1-198:71 - Jinshan Zeng, Shao-Bo Lin, Yuan Yao, Ding-Xuan Zhou:
On ADMM in Deep Learning: Convergence and Saturation-Avoidance. 199:1-199:67 - Joon Kwon:
Refined approachability algorithms and application to regret minimization with global costs. 200:1-200:38 - Yingfan Wang, Haiyang Huang, Cynthia Rudin, Yaron Shaposhnik:
Understanding How Dimension Reduction Tools Work: An Empirical Approach to Deciphering t-SNE, UMAP, TriMap, and PaCMAP for Data Visualization. 201:1-201:73 - Huafeng Liu, Liping Jing, Jingxuan Wen, Pengyu Xu, Jiaqi Wang, Jian Yu, Michael K. Ng:
Interpretable Deep Generative Recommendation Models. 202:1-202:54 - Gábor Lugosi, Jakub Truszkowski, Vasiliki Velona, Piotr Zwiernik:
Learning partial correlation graphs and graphical models by covariance queries. 203:1-203:41 - Peter L. Bartlett, Philip M. Long:
Failures of Model-dependent Generalization Bounds for Least-norm Interpolation. 204:1-204:15 - Zhiyan Ding, Qin Li:
Langevin Monte Carlo: random coordinate descent and variance reduction. 205:1-205:51 - Jeongho Kim, Jaeuk Shin, Insoon Yang:
Hamilton-Jacobi Deep Q-Learning for Deterministic Continuous-Time Systems with Lipschitz Continuous Controls. 206:1-206:34 - Lam M. Nguyen, Quoc Tran-Dinh, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk:
A Unified Convergence Analysis for Shuffling-Type Gradient Methods. 207:1-207:44 - Steffen Grünewälder, Azadeh Khaleghi:
Oblivious Data for Fairness with Kernels. 208:1-208:36 - Ian Covert, Scott M. Lundberg, Su-In Lee:
Explaining by Removing: A Unified Framework for Model Explanation. 209:1-209:90 - Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, Adish Singla:
Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks. 210:1-210:45 - Lydia T. Liu, Feng Ruan, Horia Mania, Michael I. Jordan:
Bandit Learning in Decentralized Matching Markets. 211:1-211:34 - Tolga Ergen, Mert Pilanci:
Convex Geometry and Duality of Over-parameterized Neural Networks. 212:1-212:63 - Jianyu Wang, Gauri Joshi:
Cooperative SGD: A Unified Framework for the Design and Analysis of Local-Update SGD Algorithms. 213:1-213:50 - Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, Przemyslaw Biecek:
dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python. 214:1-214:7 - Pawel Rosciszewski, Michal Martyniak, Filip Schodowski:
TensorHive: Management of Exclusive GPU Access for Distributed Machine Learning Workloads. 215:1-215:5 - Lili Zheng, Garvesh Raskutti, Rebecca Willett, Benjamin Mark:
Context-dependent Networks in Multivariate Time Series: Models, Methods, and Risk Bounds in High Dimensions. 216:1-216:88 - Lorenzo Dall'Amico, Romain Couillet, Nicolas Tremblay:
A Unified Framework for Spectral Clustering in Sparse Graphs. 217:1-217:56 - Zixin Zhong, Wang Chi Cheung, Vincent Y. F. Tan:
Thompson Sampling Algorithms for Cascading Bandits. 218:1-218:66 - Georgia Papadogeorgou, Zhengwu Zhang, David B. Dunson:
Soft Tensor Regression. 219:1-219:53 - Xi Chen, Victor Chernozhukov, Iván Fernández-Val, Scott Kostyshak, Ye Luo:
Shape-Enforcing Operators for Generic Point and Interval Estimators of Functions. 220:1-220:42 - Eitan Richardson, Yair Weiss:
A Bayes-Optimal View on Adversarial Examples. 221:1-221:28 - Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel Hsu, Anant Sahai:
Classification vs regression in overparameterized regimes: Does the loss function matter? 222:1-222:69 - Joseph De Vilmarest, Olivier Wintenberger:
Stochastic Online Optimization using Kalman Recursion. 223:1-223:55 - Leo L. Duan, David B. Dunson:
Bayesian Distance Clustering. 224:1-224:27 - Rui Wang, Yuesheng Xu:
Representer Theorems in Banach Spaces: Minimum Norm Interpolation, Regularized Learning and Semi-Discrete Inverse Problems. 225:1-225:65 - Yang Liu, Tao Fan, Tianjian Chen, Qian Xu, Qiang Yang:
FATE: An Industrial Grade Platform for Collaborative Learning With Data Protection. 226:1-226:6 - María Pérez-Ortiz, Omar Rivasplata, John Shawe-Taylor, Csaba Szepesvári:
Tighter Risk Certificates for Neural Networks. 227:1-227:40 - Tengyuan Liang:
How Well Generative Adversarial Networks Learn Distributions. 228:1-228:41 - Valerio Biscione, Jeffrey S. Bowers:
Convolutional Neural Networks Are Not Invariant to Translation, but They Can Learn to Be. 229:1-229:28 - Stéphane Chrétien, Mihai Cucuringu, Guillaume Lecué, Lucie Neirac:
Learning with semi-definite programming: statistical bounds based on fixed point analysis and excess risk curvature. 230:1-230:64 - Rick van Veen, Michael Biehl, Gert-Jan de Vries:
sklvq: Scikit Learning Vector Quantization. 231:1-231:6 - Jon Cockayne, Ilse C. F. Ipsen, Chris J. Oates, Tim W. Reid:
Probabilistic Iterative Methods for Linear Systems. 232:1-232:34 - Robert Sicks, Ralf Korn, Stefanie Schwaar:
A Generalised Linear Model Framework for β-Variational Autoencoders based on Exponential Dispersion Families. 233:1-233:41 - Jackson Loper, David M. Blei, John P. Cunningham, Liam Paninski:
A general linear-time inference method for Gaussian Processes on one dimension. 234:1-234:36 - Henry B. Moss, David S. Leslie, Javier Gonzalez, Paul Rayson:
GIBBON: General-purpose Information-Based Bayesian Optimisation. 235:1-235:49 - Cássio Fraga Dantas, Emmanuel Soubies, Cédric Févotte:
Expanding Boundaries of Gap Safe Screening. 236:1-236:57 - Massimo Fornasier, Lorenzo Pareschi, Hui Huang, Philippe Sünnen:
Consensus-Based Optimization on the Sphere: Convergence to Global Minimizers and Machine Learning. 237:1-237:55 - Haishan Ye, Tong Zhang:
DeEPCA: Decentralized Exact PCA with Linear Convergence Rate. 238:1-238:27 - Mert Gürbüzbalaban, Xuefeng Gao, Yuanhan Hu, Lingjiong Zhu:
Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo. 239:1-239:69 - Meng Liu, Youzhi Luo, Limei Wang, Yaochen Xie, Hao Yuan, Shurui Gui, Haiyang Yu, Zhao Xu, Jingtun Zhang, Yi Liu, Keqiang Yan, Haoran Liu, Cong Fu, Bora Oztekin, Xuan Zhang, Shuiwang Ji:
DIG: A Turnkey Library for Diving into Graph Deep Learning Research. 240:1-240:9 - Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste:
Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks. 241:1-241:124 - Jesús María Sanz-Serna, Konstantinos C. Zygalakis:
Wasserstein distance estimates for the distributions of numerical approximations to ergodic stochastic differential equations. 242:1-242:37 - Sifan Liu, Art B. Owen:
Quasi-Monte Carlo Quasi-Newton in Variational Bayes. 243:1-243:23 - Peter Koepernik, Florian Pfaff:
Consistency of Gaussian Process Regression in Metric Spaces. 244:1-244:27 - Takayuki Okuno, Akiko Takeda, Akihiro Kawana, Motokazu Watanabe:
On lp-hyperparameter Learning via Bilevel Nonsmooth Optimization. 245:1-245:47 - Emilie Kaufmann, Wouter M. Koolen:
Mixture Martingales Revisited with Applications to Sequential Tests and Confidence Intervals. 246:1-246:44 - Alden Green, Sivaraman Balakrishnan, Ryan J. Tibshirani:
Statistical Guarantees for Local Spectral Clustering on Random Neighborhood Graphs. 247:1-247:71 - Daren Wang, Zifeng Zhao, Kevin Z. Lin, Rebecca Willett:
Statistically and Computationally Efficient Change Point Localization in Regression Settings. 248:1-248:46 - Zhiqiang Xu, Ping Li:
On the Riemannian Search for Eigenvector Computation. 249:1-249:46 - Arkaprava Roy, Jana Schaich Borg, David B. Dunson:
Bayesian time-aligned factor analysis of paired multivariate time series. 250:1-250:27 - James-A. Goulet, Luong Ha Nguyen, Saeid Amiri:
Tractable Approximate Gaussian Inference for Bayesian Neural Networks. 251:1-251:23 - Jayanth Jagalur-Mohan, Youssef M. Marzouk:
Batch greedy maximization of non-submodular functions: Guarantees and applications to experimental design. 252:1-252:62 - Shao-Qun Zhang, Zhao-Yu Zhang, Zhi-Hua Zhou:
Bifurcation Spiking Neural Network. 253:1-253:21 - Zijian Guo, Prabrisha Rakshit, Daniel S. Herman, Jinbo Chen:
Inference for the Case Probability in High-dimensional Logistic Regression. 254:1-254:54 - Alex Luedtke, Incheoul Chung, Oleg Sofrygin:
Adversarial Monte Carlo Meta-Learning of Optimal Prediction Procedures. 255:1-255:67 - Jiaying Zhou, Jie Ding, Kean Ming Tan, Vahid Tarokh:
Model Linkage Selection for Cooperative Learning. 256:1-256:44 - Tianhui Zhou, Yitong Li, Yuan Wu, David E. Carlson:
Estimating Uncertainty Intervals from Collaborating Networks. 257:1-257:47 - Dennis Wei, Karthikeyan Natesan Ramamurthy, Flávio P. Calmon:
Optimized Score Transformation for Consistent Fair Classification. 258:1-258:78 - Chang Chen, Fei Deng, Sungjin Ahn:
ROOTS: Object-Centric Representation and Rendering of 3D Scenes. 259:1-259:36 - Xiaowu Dai, Michael I. Jordan:
Learning Strategies in Decentralized Matching Markets under Uncertain Preferences. 260:1-260:50 - Yuansi Chen, Peter Bühlmann:
Domain adaptation under structural causal models. 261:1-261:80 - Chong Liu, Yuqing Zhu, Kamalika Chaudhuri, Yu-Xiang Wang:
Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning. 262:1-262:44 - Constantin Christof:
On the Stability Properties and the Optimization Landscape of Training Problems with Squared Loss for Neural Networks and General Nonlinear Conic Approximation Schemes. 263:1-263:77 - Mihai Cucuringu, Apoorv Vikram Singh, Déborah Sulem, Hemant Tyagi:
Regularized spectral methods for clustering signed networks. 264:1-264:79 - Feicheng Wang, Lucas Janson:
Exact Asymptotics for Linear Quadratic Adaptive Control. 265:1-265:112 - Xiang Ge Luo, Giusi Moffa, Jack Kuipers:
Learning Bayesian Networks from Ordinal Data. 266:1-266:44 - Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, Yoshinobu Kawahara:
Reproducing kernel Hilbert C*-module and kernel mean embeddings. 267:1-267:56 - Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, Noah Dormann:
Stable-Baselines3: Reliable Reinforcement Learning Implementations. 268:1-268:8 - Chaim Baskin, Brian Chmiel, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson:
CAT: Compression-Aware Training for bandwidth reduction. 269:1-269:20 - Sammy Khalife, Douglas Soares Gonçalves, Youssef Allouah, Leo Liberti:
Further results on latent discourse models and word embeddings. 270:1-270:36 - William A. Clark, Maani Ghaffari, Anthony M. Bloch:
Nonparametric Continuous Sensor Registration. 271:1-271:50 - Ron Levie, Wei Huang, Lorenzo Bucci, Michael M. Bronstein, Gitta Kutyniok:
Transferability of Spectral Graph Convolutional Neural Networks. 272:1-272:59 - Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell:
On the Hardness of Robust Classification. 273:1-273:29 - Bin Liu, Xinsheng Zhang, Yufeng Liu:
Simultaneous Change Point Inference and Structure Recovery for High Dimensional Gaussian Graphical Models. 274:1-274:62 - Chin Pang Ho, Marek Petrik, Wolfram Wiesemann:
Partial Policy Iteration for L1-Robust Markov Decision Processes. 275:1-275:46 - Johannes Lederer, Michael Vogt:
Estimating the Lasso's Effective Noise. 276:1-276:32 - Carlo D'Eramo, Andrea Cini, Alessandro Nuara, Matteo Pirotta, Cesare Alippi, Jan Peters, Marcello Restelli:
Gaussian Approximation for Bias Reduction in Q-Learning. 277:1-277:51 - Masahiro Fujisawa, Issei Sato:
Multilevel Monte Carlo Variational Inference. 278:1-278:44 - Michael J. Neely:
Fast Learning for Renewal Optimization in Online Task Scheduling. 279:1-279:44 - Liren Yu, Jiaming Xu, Xiaojun Lin:
Graph Matching with Partially-Correct Seeds. 280:1-280:54 - Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu:
Contrastive Estimation Reveals Topic Posterior Information to Linear Models. 281:1-281:31 - Dhruv Kohli, Alexander Cloninger, Gal Mishne:
LDLE: Low Distortion Local Eigenmaps. 282:1-282:64 - Oskar Allerbo, Rebecka Jörnsten:
Non-linear, Sparse Dimensionality Reduction via Path Lasso Penalized Autoencoders. 283:1-283:28 - Thomas Kerdreux, Christophe Roux, Alexandre d'Aspremont, Sebastian Pokutta:
Linear Bandits on Uniformly Convex Sets. 284:1-284:23 - Chengchun Shi, Tianlin Xu, Wicher Bergsma, Lexin Li:
Double Generative Adversarial Networks for Conditional Independence Testing. 285:1-285:32 - Chengchun Shi, Shikai Luo, Hongtu Zhu, Rui Song:
An Online Sequential Test for Qualitative Treatment Effects. 286:1-286:51 - Zhengze Zhou, Lucas Mentch, Giles Hooker:
V-statistics and Variance Estimation. 287:1-287:48 - Marco C. Campi, Simone Garatti:
A Theory of the Risk for Optimization with Relaxation and its Application to Support Vector Machines. 288:1-288:38 - Luisa M. Zintgraf, Sebastian Schulze, Cong Lu, Leo Feng, Maximilian Igl, Kyriacos Shiarlis, Yarin Gal, Katja Hofmann, Shimon Whiteson:
VariBAD: Variational Bayes-Adaptive Deep RL via Meta-Learning. 289:1-289:39 - Nikola B. Kovachki, Samuel Lanthaler, Siddhartha Mishra:
On Universal Approximation and Error Bounds for Fourier Neural Operators. 290:1-290:76