Online Workshop Every Week
Join our free weekly interactive learning sessions.
Master AI/ML with instant feedback and personalized learning
"Cogito, ergo sum" (I think, therefore I am)
– René Descartes
Free Problems
Advanced Quantization Techniques
This problem set focuses on **advanced quantization techniques** used for efficient LLM inference, including:
- RTN (Round-to-Nearest) quantization
- AWQ vs GPTQ comparison
- GPTQ internals: Hessian approximation and Cholesky decomposition
- GWQ (Group-wise Quantization)
- llama.cpp quantization methods (q4, q5, qX_k)
You will answer **15 questions** mixing multiple-choice, math, and code.
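As a taste of the first technique listed, RTN can be sketched in a few lines of NumPy. This is a minimal per-tensor symmetric sketch (real kernels typically use per-group or per-channel scales, and the bit width and clipping convention here are illustrative assumptions):

```python
import numpy as np

def rtn_quantize(w, bits=4):
    """Round-to-nearest (RTN) quantization: scale weights onto the
    signed integer grid for the given bit width, then round."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for signed 4-bit
    scale = np.max(np.abs(w)) / qmax    # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.50, 0.33, 0.07], dtype=np.float32)
q, s = rtn_quantize(w, bits=4)
w_hat = dequantize(q, s)
print(q)  # integer codes; per-element error is bounded by scale / 2
```

GPTQ and AWQ improve on this baseline by using calibration data (Hessian information and activation-aware scaling, respectively) rather than rounding each weight independently.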
89 pts
Hard
90
quantization
machine learning
numerical computing

DeepSeek Models and Technologies
This problem set covers DeepSeek's recent open-source models and innovations, including V3, R1, R1-Zero, and Distill.
It tests your understanding of their Mixture-of-Experts (MoE) architecture, Multi-Head Latent Attention (MLA), reinforcement learning approaches, distillation strategies, and positional encoding methods (e.g., RoPE).
The tasks include conceptual, numerical, mathematical, and coding questions.
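One of the positional encoding methods mentioned, RoPE, can be illustrated with a short NumPy sketch. This is a simplified single-vector version (assumed even dimension and the standard base of 10000; production implementations operate on batched query/key tensors):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotary Position Embedding (RoPE): rotate each 2-D pair
    (x[2i], x[2i+1]) by the angle pos * theta_i."""
    d = x.shape[-1]
    i = np.arange(d // 2)
    theta = base ** (-2.0 * i / d)      # per-pair rotation frequency
    angle = pos * theta
    cos, sin = np.cos(angle), np.sin(angle)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

q = np.array([1.0, 0.0, 1.0, 0.0])
# Key property: the dot product of two rotated copies depends only on
# their relative offset, which is what makes RoPE a relative encoding.
a = rope(q, pos=3) @ rope(q, pos=5)     # offset 2
b = rope(q, pos=10) @ rope(q, pos=12)   # offset 2
print(np.isclose(a, b))  # True
```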
62 pts
Medium
94
deep learning
large language models
model architecture
Advanced Regularization Techniques
This problem set covers advanced regularization techniques in deep learning, including Dropout, Weight Decay, Label Smoothing, Mixup, and Stochastic Depth.
You will encounter conceptual, numerical, mathematical, and coding questions designed to test both your understanding and your implementation skills.
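As a warm-up for one of these techniques, label smoothing can be sketched in NumPy. This sketch uses the convention of spreading the smoothing mass eps over the K−1 incorrect classes; another common convention spreads eps uniformly over all K classes, so check which one a given framework uses:

```python
import numpy as np

def smooth_labels(targets, num_classes, eps=0.1):
    """Label smoothing: replace the one-hot target with (1 - eps)
    on the true class and eps / (K - 1) on every other class."""
    y = np.full((len(targets), num_classes), eps / (num_classes - 1))
    y[np.arange(len(targets)), targets] = 1.0 - eps
    return y

y = smooth_labels([2], num_classes=4, eps=0.1)
print(y)  # true class gets 0.9, the rest share the remaining 0.1
```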
48 pts
Medium
92
deep learning
regularization
overfitting
Inference Acceleration for Transformers
This problem set focuses on **inference acceleration techniques** in Transformers, including:
- KV cache storage, update, and quantization
- Speculative decoding (draft + target model)
- Continuous batching for dynamic request scheduling
- Prefill/Decode phase separation
You will answer **15 questions** mixing multiple-choice, math, and code.
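The first item, the KV cache, can be sketched as a toy single-head decode loop in NumPy: each step appends the new token's key and value to the cache instead of recomputing attention inputs for the whole prefix. Shapes and projections here are random stand-ins, not a real model:

```python
import numpy as np

def attend(q, K, V):
    """Scaled dot-product attention for one query vector."""
    scores = K @ q / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

rng = np.random.default_rng(0)
d = 8
K_cache = np.empty((0, d))   # one cached key per generated token
V_cache = np.empty((0, d))

for step in range(4):                  # decode loop
    k, v, q = rng.normal(size=(3, d))  # new token's projections (stand-ins)
    K_cache = np.vstack([K_cache, k])  # append instead of recompute
    V_cache = np.vstack([V_cache, v])
    out = attend(q, K_cache, V_cache)

print(K_cache.shape)  # grows by one row per decoded token
```

KV-cache quantization, covered in this set, compresses exactly these cached tensors; speculative decoding and continuous batching address the cost of the decode loop itself.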
38 pts
Medium
94
transformer models
inference optimization
natural language processing
Transformer Architectures Basics
This problem set covers different Transformer architectures (Encoder-only, Decoder-only, Encoder-Decoder) and various positional encoding strategies (Sinusoidal, Learned, Rotary, Relative).
You will encounter conceptual, numerical, mathematical, and coding questions designed to test both your understanding and implementation skills.
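Of the encoding strategies listed, the sinusoidal scheme is the easiest to write down. A minimal NumPy sketch, following the formulation from "Attention Is All You Need" (even model dimension assumed):

```python
import numpy as np

def sinusoidal_pe(seq_len, d_model, base=10000.0):
    """Sinusoidal positional encoding:
    PE[pos, 2i] = sin(pos / base^(2i/d)), PE[pos, 2i+1] = cos(same)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angle = pos / base ** (2 * i / d_model)
    pe = np.empty((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe

pe = sinusoidal_pe(seq_len=16, d_model=8)
print(pe.shape)  # (16, 8); row 0 is all sin(0)=0 and cos(0)=1
```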
47 pts
Medium
94
transformer architectures
attention mechanisms
deep learning
ResNet and U-Net Architectures
In this problem set, you will explore advanced convolutional neural network architectures, focusing on two influential designs: **ResNet** (Residual Networks) and **U-Net**. You will answer questions ranging from conceptual understanding of residual connections to detailed coding tasks involving forward passes and architectural modifications. This will test your ability to reason about modern deep learning architectures that are widely used in computer vision, medical imaging, and beyond.
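The core idea behind ResNet, the identity skip connection y = x + F(x), can be sketched without any framework. This toy block uses 1×1 convolutions only (real residual blocks use 3×3 spatial convolutions plus normalization), and zero-initializes the last weight so the block starts as the identity, a common trick for stable training:

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise (1x1) convolution as a channel-mixing matmul; x: (C, H, W)."""
    return np.einsum('oc,chw->ohw', w, x)

def residual_block(x, w1, w2):
    """Minimal residual block: y = x + F(x),
    with F = conv1x1 -> ReLU -> conv1x1."""
    h = np.maximum(conv1x1(x, w1), 0.0)  # ReLU
    return x + conv1x1(h, w2)            # identity skip connection

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 5, 5))           # (channels, height, width)
w1 = rng.normal(size=(4, 4)) * 0.1
w2 = np.zeros((4, 4))                    # zero-init => F(x) = 0 at start
y = residual_block(x, w1, w2)
print(np.allclose(y, x))  # True: the block begins as an identity map
```

U-Net's skip connections differ in kind: they concatenate encoder features into the decoder across a downsample/upsample path rather than adding them.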
39 pts
Medium
101
neural networks
residual connections
deep learning
Premium Problems
Knowledge Graphs
USA AI Olympiad
Explore competitive programming and AI contest preparation concepts
Grade 5 Math
Discover elementary mathematics concepts and learning paths
Featured PDFs
System Design Interview: An Insider's Guide Volume 2
116 questions
348 pts
System Design Interview: An Insider's Guide
108 questions
317 pts
UniCalli: A Unified Diffusion Framework for Column-Level Generation and Recognition of Chinese Calligraphy
10 questions
38 pts
The Principles of Deep Learning Theory
107 questions
418 pts
Featured Books
Acing the System Design Interview
153 questions
456 pts
Numerical Python: Scientific Computing and Data Science Applications with Numpy, SciPy and Matplotlib
190 questions
543 pts
Hands-On Machine Learning with Scikit-Learn and PyTorch
200 questions
554 pts
Deep Reinforcement Learning Hands-On - Third Edition
222 questions
720 pts
Featured Videos
Flow-Matching vs Diffusion Models explained side by side
10 questions
29 pts
Attention in transformers, step-by-step | Deep Learning Chapter 6
10 questions
30 pts
Knowledge Distillation: How LLMs train each other
10 questions
27 pts
Diffusion Model
10 questions
32 pts
Popular Topics
machine learning
56
deep learning
40
neural networks
35
reinforcement learning
33
system-design
28
grade5
27
optimization
14
large language models
13
attention mechanisms
13
combinatorics
13
system-architecture
13
natural language processing
12
aime problems
12
Number Sense
12
scalability
11
beginner
10
number theory
10
performance
10
transformers
9
capacity-planning
9