# FSRS Algorithm Research & Implementation Spec for Glidr
## Executive Summary
**Recommendation: Use FSRS via the `fsrs` Dart package (pub.dev) instead of implementing SM-19 from scratch.**
SM-19 is deeply entangled with SuperMemo's proprietary data collection infrastructure (SInc matrices, Recall matrices, forgetting curve fitting across thousands of users). It is not practically implementable as a standalone algorithm -- it requires continuous matrix refinement from historical data that a fresh app with 1300 questions simply does not have.
FSRS (Free Spaced Repetition Scheduler) is an open-source algorithm that captures the same theoretical foundation (the two-component memory model from SM-17/18/19) but packages it into a clean, self-contained formula set with sensible defaults. It is used by Anki, RemNote, and many other apps. A production-quality Dart package exists on pub.dev.
---
## Part 1: SM-19 Analysis (Why Not to Implement It)
### Core Concepts
SM-19 is built on the **Two Component Model of Memory**:
- **Stability (S)**: How long a memory can last. Defined as the time (in days) for retrievability to drop from 100% to 90%. Higher stability = longer-lasting memory.
- **Retrievability (R)**: The probability of recalling a memory at a given moment. Decays over time following a forgetting curve. R = f(time_elapsed, S).
The central formula is:
```
Int[n] = S[n-1] * SInc[D, S, R] * ln(1 - rFI/100) / ln(0.9)
```
Where SInc[D,S,R] is the **stability increase function** -- a 3D matrix indexed by Difficulty, Stability, and Retrievability.
### Why SM-19 Is Impractical for Glidr
1. **SInc[] is a matrix, not a formula**: SM-19 builds the SInc matrix empirically from user data. It requires thousands of repetition records across diverse difficulty/stability/retrievability combinations to populate. A new app starts with an empty matrix.
2. **Circular dependency**: Computing item difficulty requires the SInc matrix, but the SInc matrix requires difficulty estimates. SM-19 bootstraps this through iterative refinement over large datasets.
3. **Forgetting curve fitting**: SM-19 fits power-law forgetting curves per-user using 35 quantiles. This requires significant data collection infrastructure.
4. **Post-lapse stability**: Uses a separate PLS[Lapses, R] matrix that also needs data to populate.
5. **No published default parameters**: Unlike FSRS which ships with researched defaults, SM-19's matrices are proprietary to SuperMemo.
---
## Part 2: FSRS Algorithm (Complete Specification)
### Theoretical Foundation
FSRS uses the same two-component memory model as SM-19 but adds a third variable:
- **Stability (S)**: Time in days for R to decay from 100% to 90%. Range: [0, +inf)
- **Retrievability (R)**: Probability of recall at time t. Range: [0, 1]. Computed dynamically.
- **Difficulty (D)**: How hard the card is to remember. Range: [1, 10]. Stored per card.
### Per-Card State (What to Store)
```dart
class CardState {
  double stability;    // S: days (replaces SM-2's interval_days)
  double difficulty;   // D: 1.0-10.0 (replaces SM-2's easiness_factor)
  DateTime lastReview; // when last reviewed
  int state;           // 0=New, 1=Learning, 2=Review, 3=Relearning
  int lapses;          // count of "Again" ratings (for leech detection)
  int reps;            // total review count
  int learningStep;    // current step index in learning/relearning

  CardState({
    required this.stability,
    required this.difficulty,
    required this.lastReview,
    required this.state,
    required this.lapses,
    required this.reps,
    required this.learningStep,
  });
}
```
**Comparison to SM-2:**
| SM-2 | FSRS | Notes |
|------|------|-------|
| `repetitionNumber` | `reps` + `state` | FSRS tracks state machine position |
| `easinessFactor` (2.5 default) | `difficulty` (5.0 default) | Inverted: higher D = harder |
| `intervalDays` | `stability` | S is conceptually the interval at 90% retention |
### Grade System (4 Ratings)
| Rating | Value | Meaning | SM-2 Equivalent |
|--------|-------|---------|-----------------|
| Again | 1 | Failed to recall | Grade 1 (reset) |
| Hard | 2 | Recalled with serious difficulty | Grade 3 |
| Good | 3 | Recalled after hesitation | Grade 4 |
| Easy | 4 | Recalled effortlessly | Grade 5 |
This matches our existing 4-button UX exactly.
### Constants
```dart
const double DECAY = -0.5;
const double FACTOR = 19.0 / 81.0; // 0.2346...
```
### The 21 Default Parameters
```dart
const List<double> defaultParams = [
  0.2172,  // w[0]: S0 for Again (initial stability if first rating is Again)
  1.1771,  // w[1]: S0 for Hard
  3.2602,  // w[2]: S0 for Good
  16.1507, // w[3]: S0 for Easy
  7.0114,  // w[4]: D0 base (initial difficulty calculation)
  0.57,    // w[5]: D0 scaling factor
  2.0966,  // w[6]: Difficulty delta slope
  0.0069,  // w[7]: Difficulty mean reversion weight
  1.5261,  // w[8]: Stability increase base exponent (success)
  0.112,   // w[9]: Stability decay exponent (success)
  1.0178,  // w[10]: Retrievability response factor (success)
  1.849,   // w[11]: Post-lapse stability factor
  0.1133,  // w[12]: Post-lapse difficulty exponent
  0.3127,  // w[13]: Post-lapse stability exponent
  2.2934,  // w[14]: Post-lapse retrievability factor
  0.2191,  // w[15]: Hard penalty multiplier (< 1.0)
  3.0004,  // w[16]: Easy bonus multiplier (> 1.0)
  0.7536,  // w[17]: Short-term stability grade effect
  0.3332,  // w[18]: Short-term stability grade offset
  0.1437,  // w[19]: Short-term stability saturation
  0.2,     // w[20]: Forgetting curve shape parameter
];
```
### Formula 1: Retrievability (Forgetting Curve)
```
R(t, S) = (1 + FACTOR * t / S) ^ DECAY
```
Where `t` = days elapsed since last review, `S` = current stability.
When t = S, R = 0.9 (90% recall probability). This is the definition of stability.
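As a quick numeric sanity check of the curve (Python used here purely for arithmetic; the constants are copied from this spec):

```python
# Forgetting-curve sanity check with the spec's constants.
DECAY = -0.5
FACTOR = 19.0 / 81.0

def retrievability(t_days, stability):
    """R(t, S): recall probability t_days after the last review."""
    return (1.0 + FACTOR * t_days / stability) ** DECAY

print(retrievability(10.0, 10.0))  # t == S: ~0.9, by definition of stability
print(retrievability(30.0, 10.0))  # decays further as time passes
```

The constant FACTOR = 19/81 is chosen precisely so that R(S, S) = (1 + 19/81)^(-0.5) = (81/100)^0.5 = 0.9.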
### Formula 2: Next Interval from Desired Retention
```
interval = (S / FACTOR) * (desiredRetention ^ (1/DECAY) - 1)
```
With default desiredRetention = 0.9, this simplifies to interval = S (review when R drops to 90%).
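A small numeric sketch of the interval formula (Python for arithmetic only), showing how the retention target trades off against interval length:

```python
# Interval from desired retention, using the spec's constants.
DECAY = -0.5
FACTOR = 19.0 / 81.0

def next_interval(stability, desired_retention):
    return (stability / FACTOR) * (desired_retention ** (1.0 / DECAY) - 1.0)

print(next_interval(10.0, 0.90))  # ~10 days: interval equals S at the default target
print(next_interval(10.0, 0.95))  # shorter: review sooner for higher retention
print(next_interval(10.0, 0.80))  # longer: tolerate more forgetting
```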
### Formula 3: Initial Stability (First Review)
```
S0(grade) = w[grade - 1]
```
- Again: S0 = 0.2172 days (~5 hours)
- Hard: S0 = 1.1771 days
- Good: S0 = 3.2602 days
- Easy: S0 = 16.1507 days
### Formula 4: Initial Difficulty (First Review)
```
D0(grade) = clamp(w[4] - exp(w[5] * (grade - 1)) + 1, 1.0, 10.0)
```
- Again (G=1): D0 = w[4] - exp(0) + 1 = 7.0114
- Hard (G=2): D0 = w[4] - exp(w[5]) + 1 = 6.24
- Good (G=3): D0 = w[4] - exp(w[5]*2) + 1 = 4.88
- Easy (G=4): D0 = w[4] - exp(w[5]*3) + 1 = 2.48
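These values can be reproduced numerically (Python for arithmetic; `w4` and `w5` copied from the default parameter list above):

```python
import math

w4, w5 = 7.0114, 0.57  # w[4], w[5] from the default parameter list

def initial_difficulty(grade):
    """D0(grade) for grade in 1..4 (Again..Easy), clamped to [1, 10]."""
    d0 = w4 - math.exp(w5 * (grade - 1)) + 1.0
    return min(10.0, max(1.0, d0))

for grade, label in [(1, "Again"), (2, "Hard"), (3, "Good"), (4, "Easy")]:
    print(label, round(initial_difficulty(grade), 2))
# Again 7.01 / Hard 6.24 / Good 4.88 / Easy 2.48
```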
### Formula 5: Stability After Successful Recall (G = 2, 3, or 4)
```
S'_recall = S * (exp(w[8]) * (11 - D) * S^(-w[9]) * (exp(w[10] * (1 - R)) - 1) * hardPenalty * easyBonus + 1)
```
Where:
- `hardPenalty` = w[15] if G=2 (Hard), else 1.0
- `easyBonus` = w[16] if G=4 (Easy), else 1.0
- `R` = retrievability at time of review
- `D` = current difficulty
- `S` = current stability
Key properties:
- Higher D (harder) -> smaller stability increase
- Higher S (already stable) -> diminishing returns (saturation)
- Lower R (reviewed later, near forgetting) -> bigger stability boost
- Hard penalty reduces increase; Easy bonus amplifies it
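To see these properties concretely, here is a numeric sketch of Formula 5 (Python for arithmetic; weights copied from the defaults above; the example card values S=10, D=5, R=0.9 are arbitrary):

```python
import math

# w[8], w[9], w[10], w[15], w[16] from the default parameter list
w8, w9, w10, w15, w16 = 1.5261, 0.112, 1.0178, 0.2191, 3.0004

def stability_after_recall(s, d, r, grade):
    hard_penalty = w15 if grade == 2 else 1.0
    easy_bonus = w16 if grade == 4 else 1.0
    increase = (math.exp(w8) * (11.0 - d) * s ** (-w9)
                * (math.exp(w10 * (1.0 - r)) - 1.0)
                * hard_penalty * easy_bonus)
    return s * (increase + 1.0)

# Same card (S=10, D=5) reviewed exactly at R=0.9:
s_hard = stability_after_recall(10.0, 5.0, 0.9, 2)
s_good = stability_after_recall(10.0, 5.0, 0.9, 3)
s_easy = stability_after_recall(10.0, 5.0, 0.9, 4)
print(s_hard, s_good, s_easy)  # strictly increasing: Hard < Good < Easy
```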
### Formula 6: Stability After Lapse/Failure (G = 1, Again)
```
S'_lapse = min(S_forget, S)
S_forget = w[11] * D^(-w[12]) * ((S + 1)^w[13] - 1) * exp(w[14] * (1 - R))
```
Key properties:
- Post-lapse stability is always <= pre-lapse stability
- Higher difficulty -> lower post-lapse stability
- Higher previous stability -> somewhat higher post-lapse stability (not starting from zero)
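A numeric sketch showing the proportionality (Python for arithmetic; weights copied from the defaults above; the example card values are arbitrary):

```python
import math

# w[11]..w[14] from the default parameter list
w11, w12, w13, w14 = 1.849, 0.1133, 0.3127, 2.2934

def stability_after_lapse(s, d, r):
    s_forget = (w11 * d ** (-w12) * ((s + 1.0) ** w13 - 1.0)
                * math.exp(w14 * (1.0 - r)))
    return min(s_forget, s)

# A long-stable card keeps several days of stability after a lapse...
print(stability_after_lapse(100.0, 5.0, 0.9))  # ~6 days, not reset to day 1
# ...while a freshly learned card drops to well under a day:
print(stability_after_lapse(2.0, 5.0, 0.9))
```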
### Formula 7: Difficulty Update (After Any Review)
Three-step process:
```
1. delta_D = -w[6] * (grade - 3)
2. D' = D + delta_D * (10 - D) / 9
3. D'' = clamp(w[7] * D0(4) + (1 - w[7]) * D', 1.0, 10.0)
```
Step 1: Grade-based delta before damping (with w[6] = 2.0966: Again +4.19, Hard +2.10, Good 0, Easy -2.10)
Step 2: Linear damping (harder items change less, prevents D from exceeding 10)
Step 3: Mean reversion toward easiest difficulty (prevents D from drifting to extremes)
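The three steps can be run numerically (Python for arithmetic; weights copied from the defaults above; starting D=5 is arbitrary):

```python
import math

# w[4]..w[7] from the default parameter list
w4, w5, w6, w7 = 7.0114, 0.57, 2.0966, 0.0069

def update_difficulty(d, grade):
    d0_easy = w4 - math.exp(w5 * 3) + 1.0         # D0(4), the mean-reversion anchor
    delta = -w6 * (grade - 3)                     # step 1: grade-based delta
    d_damped = d + delta * (10.0 - d) / 9.0       # step 2: linear damping
    d_new = w7 * d0_easy + (1.0 - w7) * d_damped  # step 3: mean reversion
    return min(10.0, max(1.0, d_new))

print(update_difficulty(5.0, 1))  # Again: jumps up (~7.3)
print(update_difficulty(5.0, 3))  # Good: nearly unchanged
print(update_difficulty(5.0, 4))  # Easy: drops (~3.8)
```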
### Formula 8: Short-Term Stability (Same-Day Review)
For reviews happening on the same day (within learning/relearning steps):
```
S'_short = S * exp(w[17] * (grade - 3 + w[18])) * S^(-w[19])
```
For the small stabilities typical of learning steps, Good and Easy preserve or increase stability, while Again and Hard decrease it; at larger S the S^(-w[19]) saturation term damps the effect.
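A numeric sketch (Python for arithmetic; weights copied from the defaults above; S = 1 day is an arbitrary small learning-step stability):

```python
import math

# w[17]..w[19] from the default parameter list
w17, w18, w19 = 0.7536, 0.3332, 0.1437

def short_term_stability(s, grade):
    return s * math.exp(w17 * (grade - 3 + w18)) * s ** (-w19)

# For a small learning-step stability (S = 1 day):
print(short_term_stability(1.0, 1))  # Again: shrinks S sharply
print(short_term_stability(1.0, 2))  # Hard: shrinks S
print(short_term_stability(1.0, 3))  # Good: grows S
print(short_term_stability(1.0, 4))  # Easy: grows S most
```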
---
## Part 3: State Machine (Learning Steps)
FSRS uses a state machine with learning/relearning steps (like Anki):
### States
- **New (0)**: Never reviewed
- **Learning (1)**: In initial learning steps (e.g., 1min, 10min)
- **Review (2)**: Graduated to long-term review (intervals in days)
- **Relearning (3)**: Failed a review, going through relearning steps
### Default Steps
```dart
learningSteps: [Duration(minutes: 1), Duration(minutes: 10)]
relearningSteps: [Duration(minutes: 10)]
```
### Transitions
```
New --[any rating]--> Learning (step 0)
Learning --[Again]--> Learning (reset to step 0)
Learning --[Hard]--> Learning (stay at current step)
Learning --[Good]--> Learning (advance step) or Review (if past last step)
Learning --[Easy]--> Review (skip remaining steps)
Review --[Again]--> Relearning (step 0)
Review --[Hard/Good/Easy]--> Review (update S and D)
Relearning --[Again]--> Relearning (reset to step 0)
Relearning --[Hard]--> Relearning (stay at current step)
Relearning --[Good]--> Relearning (advance step) or Review (if past last step)
Relearning --[Easy]--> Review (skip remaining steps)
```
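The transition table above can be sketched as a pure state/step function (Python sketch; stability/difficulty updates from Part 2 are deliberately omitted, and `next_state` is an illustrative name, not the package API):

```python
# State and rating codes as defined in this spec.
NEW, LEARNING, REVIEW, RELEARNING = 0, 1, 2, 3
AGAIN, HARD, GOOD, EASY = 1, 2, 3, 4

def next_state(state, step, rating, n_steps):
    """Returns (state, step) after one review; S/D updates omitted."""
    if state == NEW:
        return (LEARNING, 0)              # any rating enters learning
    if state in (LEARNING, RELEARNING):
        if rating == AGAIN:
            return (state, 0)             # reset to first step
        if rating == HARD:
            return (state, step)          # repeat current step
        if rating == EASY or step + 1 >= n_steps:
            return (REVIEW, 0)            # graduate
        return (state, step + 1)          # advance one step
    # REVIEW state: only Again demotes the card
    return (RELEARNING, 0) if rating == AGAIN else (REVIEW, 0)

print(next_state(LEARNING, 1, GOOD, 2))  # past last step -> graduates to Review
```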
---
## Part 4: Implementation Plan for Glidr
### Option A: Use `fsrs` Dart Package (RECOMMENDED)
**Package**: `fsrs: ^2.0.0` on pub.dev
**Source**: https://github.com/open-spaced-repetition/dart-fsrs
Advantages:
- Production-tested, maintained by the open-spaced-repetition community
- Handles all edge cases (fuzzing, same-day reviews, short-term scheduling)
- Serialization built in (`toMap()` / `fromMap()`)
- ~200 stars, active development, FSRS v6 support
- MIT licensed
```dart
import 'package:fsrs/fsrs.dart';

// Initialize once
final scheduler = Scheduler(
  desiredRetention: 0.9,
  learningSteps: [Duration(minutes: 1), Duration(minutes: 10)],
  relearningSteps: [Duration(minutes: 10)],
  maximumInterval: 365, // cap at 1 year for exam prep
  enableFuzzing: true,
);

// Create card
final card = await Card.create();

// Review card (destructure the returned record)
final (card: updatedCard, :reviewLog) = scheduler.reviewCard(card, Rating.good);

// Get retrievability
final r = scheduler.getCardRetrievability(card);

// Serialize for storage
final cardMap = card.toMap();
final restored = Card.fromMap(cardMap);
```
### Option B: Implement Core FSRS (~150 lines of Dart)
If we want zero dependencies, implement only the core formulas above. Skip:
- Fuzzing (interval randomization to prevent clustering)
- Short-term stability formula (use learning steps instead)
- Parameter optimization (use defaults)
### Data Migration from SM-2
```dart
CardState migrateFromSM2({
  required int repetitionNumber,
  required double easinessFactor,
  required int intervalDays,
  required DateTime nextReviewDate,
}) {
  // Map EF (1.3-2.5+) to D (1-10), inverted:
  // EF 2.5 (easiest) -> D ~2.7, EF 1.3 (hardest) -> D ~6.7
  final difficulty = (11.0 - easinessFactor * 3.33).clamp(1.0, 10.0).toDouble();

  // Stability approximation: current interval is a reasonable proxy
  // since SM-2 intervals target ~90% retention (similar to FSRS default)
  final stability = intervalDays.toDouble().clamp(0.5, 36500.0).toDouble();

  return CardState(
    stability: stability,
    difficulty: difficulty,
    lastReview: nextReviewDate.subtract(Duration(days: intervalDays)),
    state: repetitionNumber >= 2 ? 2 : 1, // Review if graduated
    lapses: 0, // Unknown from SM-2 data
    reps: repetitionNumber,
    learningStep: 0,
  );
}
```
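A quick numeric check of the EF-to-D mapping used in the migration (Python for arithmetic; `ef_to_difficulty` is an illustrative helper, not part of the Dart code):

```python
def ef_to_difficulty(ef):
    """The clamp(11 - EF * 3.33, 1, 10) mapping from the migration sketch."""
    return min(10.0, max(1.0, 11.0 - ef * 3.33))

print(ef_to_difficulty(2.5))  # easiest SM-2 cards -> D ~2.7
print(ef_to_difficulty(1.3))  # hardest SM-2 cards -> D ~6.7
```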
### Recommended Configuration for SPL Exam Prep
```dart
final scheduler = Scheduler(
  desiredRetention: 0.90, // 90% target retention
  learningSteps: [
    Duration(minutes: 1),  // Show again in 1 min after first seeing a card
    Duration(minutes: 10), // Then 10 min
  ],
  relearningSteps: [
    Duration(minutes: 10), // Failed cards: review again in 10 min
  ],
  maximumInterval: 180, // Cap at 6 months (exam prep context)
  enableFuzzing: true,  // Prevent review clustering
);
```
---
## Part 5: Why FSRS Over SM-2
| Aspect | SM-2 | FSRS |
|--------|------|------|
| Memory model | Single variable (EF) | Three variables (D, S, R) |
| Forgetting curve | None (fixed intervals) | Power-law decay modeled |
| Optimal review timing | Fixed 90% target | Configurable retention target |
| Difficulty tracking | EF drifts with grades | D with mean reversion (stable) |
| Post-failure handling | Hard reset to day 1 | Proportional to prior stability |
| Early/late review | Not handled | Explicitly modeled via R |
| Research backing | 1987, no updates | 2022-2026, peer-reviewed, ML-optimized |
| Efficiency | Baseline | 20-30% fewer reviews for same retention |
The biggest practical improvement: **SM-2 resets to 1-day interval on any failure**, throwing away all prior learning. FSRS computes post-lapse stability proportional to prior stability, so a card you knew well for months that you temporarily forget gets a much better interval than a card you just learned.
---
## Sources
- SM-17/SM-19 (SuperMemo algorithm wiki): https://supermemo.guru/wiki/Algorithm_SM-17
- FSRS Algorithm Wiki: https://github.com/open-spaced-repetition/fsrs4anki/wiki/The-Algorithm
- FSRS Technical Explanation: https://expertium.github.io/Algorithm.html
- Implementing FSRS in 100 Lines: https://borretti.me/article/implementing-fsrs-in-100-lines
- Dart FSRS Package: https://pub.dev/packages/fsrs
- Dart FSRS Source: https://github.com/open-spaced-repetition/dart-fsrs