stereoplegic's Collections: No backprop
• Fine-Tuning Language Models with Just Forward Passes (arXiv:2305.17333)
• HyperTuning: Toward Adapting Large Language Models without Back-propagation (arXiv:2211.12485)
• Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning (arXiv:2209.14624)
• Backpropagation-free Training of Deep Physical Neural Networks (arXiv:2304.11042)
• Lottery Tickets in Evolutionary Optimization: On Sparse Backpropagation-Free Trainability (arXiv:2306.00045)
• Gradients without Backpropagation (arXiv:2202.08587)
• Learning with Local Gradients at the Edge (arXiv:2208.08503)
• ZO-AdaMU Optimizer: Adapting Perturbation by the Momentum and Uncertainty in Zeroth-order Optimization (arXiv:2312.15184)
• DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training (arXiv:2310.02025)
• AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tuning (arXiv:2406.18060)
• Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity (arXiv:2406.02913)
• BBTv2: Towards a Gradient-Free Future with Large Language Models (arXiv:2205.11200)
• Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures (arXiv:2410.07698)
• ZOQO: Zero-Order Quantized Optimization (arXiv:2501.06736)
• Subspace-based Approximate Hessian Method for Zeroth-Order Optimization (arXiv:2507.06125)