Spoke Plus Adaptive Engine — Full Design
1. Objective
Define the long-term intelligence layer that personalizes practice, review, and progression using existing learner telemetry and future model-driven policies.
This document intentionally separates current data foundations from future adaptive automation.
2. Adaptive Session Engine
2.1 Inputs
Existing data sources that can power adaptation:
- Session history (`sessions`, `vw_session_summary`)
- Item performance (`user_item_stats`, `vw_practice_most_wrong`)
- Review scheduling state (`user_item_state`, `vw_practice_due_items`)
- Skill progress (`user_skill_progress`)
2.2 Skill Weighting (Planned)
Session assembly should weight candidate skills by:
- Recent error concentration
- Time since last practice
- Mastery gap to threshold
- Course progression relevance
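The four signals above can be combined as a single candidate weight. The following is a minimal sketch, not the final policy: the field names, the two-week recency saturation, and the default weights are all illustrative assumptions.

```python
# Illustrative skill-weighting sketch; field names and weights are assumptions.
from dataclasses import dataclass

@dataclass
class SkillSignals:
    recent_error_rate: float      # errors / attempts in a recent window, 0..1
    days_since_practice: float    # time since the skill was last practiced
    mastery_gap: float            # max(0, threshold - current_mastery), 0..1
    progression_relevance: float  # 1.0 if the skill is on the current path

def skill_weight(s: SkillSignals,
                 w_err: float = 0.4, w_recency: float = 0.2,
                 w_gap: float = 0.3, w_path: float = 0.1) -> float:
    """Blend the four planned signals into one candidate weight in 0..1."""
    recency = min(s.days_since_practice / 14.0, 1.0)  # saturate at two weeks
    return (w_err * s.recent_error_rate
            + w_recency * recency
            + w_gap * s.mastery_gap
            + w_path * s.progression_relevance)
```

Skills with concentrated recent errors, long idle time, and a large mastery gap then sort to the top of session assembly.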
2.3 Error Frequency Scoring (Planned)
A score function should combine:
- Absolute wrong count
- Wrong/attempt ratio
- Recency of mistakes
This prevents overfitting to low-attempt noise and prioritizes persistent weaknesses.
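One way to combine the three factors while damping low-attempt noise is additive smoothing plus exponential recency decay. This is a sketch under assumptions: the virtual-attempt prior, the one-week half-life, and the log compression of the wrong count are illustrative choices, not the specified policy.

```python
# Illustrative error-score sketch; smoothing constant and decay are assumptions.
import math

def error_score(wrong: int, attempts: int,
                days_since_last_wrong: float,
                prior_attempts: int = 5) -> float:
    """Score persistent weaknesses while damping low-attempt noise.

    - Smoothed ratio: `prior_attempts` virtual correct attempts mean a 1/1
      item does not outrank an 8/10 item.
    - Recency factor: mistakes decay with a one-week half-life.
    - Volume factor: absolute wrong count, log-compressed.
    """
    if attempts == 0:
        return 0.0
    smoothed_ratio = wrong / (attempts + prior_attempts)
    recency = math.exp(-days_since_last_wrong * math.log(2) / 7.0)
    volume = math.log1p(wrong)
    return smoothed_ratio * recency * (1.0 + volume)
```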
2.4 Spaced Repetition Logic (Planned)
Adaptive scheduling should update each item's next review (`due_at`) from correctness history and confidence signals.
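A minimal SM-2-style update sketch, assuming an ease factor stored per item; the ease bounds, step sizes, and relearn interval are placeholder values, not the production scheduling policy.

```python
# Minimal SM-2-style interval sketch; ease bounds and step sizes are assumptions.
from datetime import datetime, timedelta
from typing import Optional, Tuple

def next_review(last_interval_days: float, ease: float, correct: bool,
                now: Optional[datetime] = None) -> Tuple[datetime, float, float]:
    """Return (due_at, new_interval_days, new_ease) from one review outcome."""
    now = now or datetime.utcnow()
    if correct:
        ease = min(ease + 0.1, 3.0)
        interval = max(last_interval_days * ease, 1.0)  # grow by updated ease
    else:
        ease = max(ease - 0.2, 1.3)
        interval = 1.0  # relearn tomorrow after a miss
    return now + timedelta(days=interval), interval, ease
```

Confidence signals (e.g., self-reported difficulty) could adjust the ease step, but that refinement is left open here.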
2.5 Review Queue Prioritization (Planned)
Queue order should prioritize:
- Overdue items
- High error-rate items
- Recently introduced unstable items
- Maintenance review items
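The four tiers above map naturally onto a sort key. A sketch under assumptions: the 0.3 error-rate cutoff, the 5-attempt "unstable" threshold, and the dict shape of an item are illustrative.

```python
# Illustrative priority ordering; tier cutoffs and tie-breakers are assumptions.
from datetime import datetime

def queue_key(item: dict, now: datetime):
    """Sort key for the planned tiers: overdue first, then high error rate,
    then recently introduced unstable items, then maintenance review."""
    overdue_days = (now - item["due_at"]).total_seconds() / 86400.0
    if overdue_days > 0:
        tier = 0                      # overdue
    elif item["error_rate"] >= 0.3:
        tier = 1                      # high error rate
    elif item["attempts"] < 5:
        tier = 2                      # recently introduced, still unstable
    else:
        tier = 3                      # maintenance review
    # Within a tier, most overdue and most error-prone first.
    return (tier, -overdue_days, -item["error_rate"])
```

Usage: `sorted(items, key=lambda i: queue_key(i, now))` yields the queue order.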
3. Weak Item Detection
3.1 Core Metric (Planned Policy on Existing Data)
Primary weakness metric: the `wrong / attempts` ratio (`error_rate`).
3.2 Thresholding Model (Planned)
Define policy thresholds to classify item health bands, for example:
- Stable
- At-risk
- Critical
Threshold logic should be configurable and monitored per cohort.
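A minimal banding sketch: the 0.2/0.4 cutoffs and the minimum-attempts guard are placeholder defaults standing in for the per-cohort configuration the text calls for.

```python
# Illustrative health banding; thresholds are assumed placeholders that
# should come from per-cohort configuration.
def health_band(wrong: int, attempts: int,
                at_risk: float = 0.2, critical: float = 0.4,
                min_attempts: int = 3) -> str:
    """Classify an item as stable / at-risk / critical from its error_rate."""
    if attempts < min_attempts:
        return "stable"  # not enough evidence to flag
    error_rate = wrong / attempts
    if error_rate >= critical:
        return "critical"
    if error_rate >= at_risk:
        return "at-risk"
    return "stable"
```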
3.3 Scheduling Coupling (Planned)
Weakness bands should influence `due_at` scheduling windows:
- Higher error-rate → shorter review interval
- Lower error-rate with stable streak → longer interval
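The coupling can be expressed as per-band interval multipliers; the multiplier values and the streak bonus below are assumptions for illustration.

```python
# Illustrative band-to-interval coupling; multipliers are assumptions.
def band_interval_days(base_interval_days: float, band: str,
                       streak: int = 0) -> float:
    """Shrink intervals for weak bands, stretch them for stable streaks."""
    multipliers = {"critical": 0.25, "at-risk": 0.5, "stable": 1.0}
    interval = base_interval_days * multipliers[band]
    if band == "stable" and streak >= 3:
        interval *= 1.5  # a stable streak earns a longer interval
    return max(interval, 1.0)  # never schedule closer than one day
```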
4. Smart Practice Mode
4.1 Source Views (Implemented Foundations)
Smart Practice can be assembled from:
- `vw_practice_most_wrong`
- `vw_practice_due_items`
4.2 Practice Composition (Planned)
Session generator should blend:
- Weakest items (error-focused)
- Due items (retention-focused)
- Small amount of medium-confidence reinforcement
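A minimal generator sketch for this blend: the three input lists stand in for the source views, and the 50/35/15 split is an assumed default, not a specified ratio.

```python
# Illustrative session composition; the mix ratios are assumptions and the
# input lists stand in for vw_practice_most_wrong / vw_practice_due_items.
def compose_session(weakest: list, due: list, reinforcement: list,
                    size: int = 20, mix=(0.5, 0.35, 0.15)) -> list:
    """Blend error-focused, retention-focused, and reinforcement items."""
    n_weak = round(size * mix[0])
    n_due = round(size * mix[1])
    session = weakest[:n_weak] + due[:n_due]
    session += reinforcement[: size - len(session)]
    # Backfill from the remaining pools if any source ran short.
    backfill = [i for i in weakest + due if i not in session]
    session += backfill[: size - len(session)]
    return session[:size]
```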
4.3 Dynamic Difficulty Adjustment (Planned)
During session execution, difficulty should react to live outcomes:
- Consecutive errors → easier prompts/supportive modes
- Consistent success → increased challenge/less scaffolding
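A small streak-based controller sketch for this behavior; the streak lengths (three correct to step up, two errors to step down) and the level bounds are illustrative assumptions.

```python
# Illustrative in-session difficulty controller; streak lengths and level
# bounds are assumptions.
class DifficultyController:
    def __init__(self, level: int = 2, min_level: int = 0, max_level: int = 4):
        self.level = level
        self.min_level, self.max_level = min_level, max_level
        self._streak = 0  # positive = correct run, negative = error run

    def record(self, correct: bool) -> int:
        """Record one outcome and return the (possibly adjusted) level."""
        if correct:
            self._streak = max(self._streak, 0) + 1
        else:
            self._streak = min(self._streak, 0) - 1
        if self._streak >= 3:            # consistent success: more challenge
            self.level = min(self.level + 1, self.max_level)
            self._streak = 0
        elif self._streak <= -2:         # consecutive errors: easier prompts
            self.level = max(self.level - 1, self.min_level)
            self._streak = 0
        return self.level
```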
5. Progression Rules
5.1 Unlock Validation (Planned Engine)
Unlock checks are expected to combine:
- Structural requirements
- Dependency requirements (after skill graph rollout)
- Mastery/session thresholds
5.2 Mastery Scoring (Planned)
Mastery should be computed from weighted, recency-aware correctness rather than from raw cumulative accuracy alone.
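One standard form of recency-aware correctness is an exponentially weighted accuracy; the 30-day half-life below is an assumed parameter, not a specified value.

```python
# Illustrative recency-weighted mastery; the half-life is an assumption.
import math

def mastery(history, half_life_days: float = 30.0) -> float:
    """history: iterable of (days_ago, correct) pairs.
    Returns weighted accuracy in 0..1; recent attempts dominate."""
    num = den = 0.0
    for days_ago, correct in history:
        weight = 0.5 ** (days_ago / half_life_days)
        num += weight * (1.0 if correct else 0.0)
        den += weight
    return num / den if den > 0 else 0.0
```

Note how two learners with the same 50% cumulative accuracy diverge: recent-correct history scores well above recent-wrong history.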
5.3 Minimum Sessions Required (Implemented Fields, Planned Enforcement Engine)
`skills.min_sessions_required` and `skills.min_mastery_required` already exist and are intended to feed the final progression validation policies.
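A minimal enforcement sketch over these existing fields; the dict stands in for a row from the skills table, and the function name is hypothetical.

```python
# Illustrative enforcement of the existing skills.min_* fields; the dict
# shape stands in for a skills-table row.
def meets_progression(skill_row: dict, sessions_completed: int,
                      mastery_score: float) -> bool:
    """Both implemented thresholds must be met before the skill unlocks."""
    return (sessions_completed >= skill_row["min_sessions_required"]
            and mastery_score >= skill_row["min_mastery_required"])
```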
6. Real-time Analytics Strategy
Planned analytics delivery:
- Near-real-time learner state summaries for dashboarding.
- Adaptive decision traces (why an item/skill was selected).
- Alerting on anomalous drops in cohort performance.
Operationally, this should integrate with existing API monitoring and reporting domains.
7. Long-term Machine Learning Roadmap
7.1 Phase A — Rule-Based Adaptation
- Deterministic policy engine using existing SQL views and thresholds.
- Configurable heuristics per level/course.
7.2 Phase B — Statistical Optimization
- Tune thresholds and intervals using observed retention and completion outcomes.
- Add cohort-level calibration by language pair and CEFR level.
7.3 Phase C — Predictive Personalization
- Train models to predict forgetting risk and next-best item sequencing.
- Introduce policy guardrails to keep behavior explainable and curriculum-safe.
7.4 Governance Requirements
- Human-auditable decision outputs.
- Versioned adaptive policy configurations.
- Rollback-safe deployment of adaptive strategies.