In higher education, we often measure learning through numbers: 89%, 72%, 98%. Yet those percentages rarely tell us what a student actually knows or can do. Two students might both earn an 89%: one truly mastered the material, while the other was simply better at turning work in on time. The number communicates very little about mastery.
“The points don’t matter” is more than a catchy phrase—it’s a call to re-center learning around evidence of achievement rather than arithmetic. Across the country, institutions are re-evaluating how they assess student learning, moving away from point accumulation and toward outcome-aligned rubrics that define and measure mastery directly.
Outcomes-based assessment (OBA) focuses on what students can demonstrate in relation to explicit Student Learning Outcomes (SLOs) rather than on how many points they accumulate. In an OBA framework, rubrics are designed around those outcomes, using descriptors such as Exceeds Proficiency, Proficient, Near Proficient, or Not Yet Proficient instead of numerical scores.
This model parallels competency-based education, authentic assessment, and rubric-based frameworks like the Association of American Colleges & Universities' VALUE rubrics, all of which emphasize performance, feedback, and application over rote recall. The goal is not to eliminate accountability, but to align measurement with meaning.
The National Institute for Learning Outcomes Assessment (NILOA) found that institutions are increasingly using authentic, embedded measures such as rubrics, portfolios, and capstones because they yield actionable information rather than mere compliance data (Jankowski et al., 2018).
A point-free rubric begins with the outcomes themselves. Each SLO—whether institution-level, program-level, or discipline-specific—is listed as a criterion. Performance levels are clearly defined through observable behaviors.
Example: Mastery Scale
| Performance Level | Description |
| --- | --- |
| Exceeds Proficiency | Demonstrates advanced application and insight; integrates multiple concepts independently. |
| Proficient | Demonstrates consistent, accurate, and appropriate use of knowledge and skills. |
| Near Proficient | Demonstrates partial understanding or inconsistent application; requires feedback and revision. |
| Not Yet Proficient | Demonstrates limited or inaccurate understanding; significant development needed. |
This format can be aligned with institutional frameworks, regional accreditors, or professional standards such as NAEYC (Early Childhood), AACN (Nursing), or CAEP (Teacher Education). The absence of points redirects attention toward criteria and performance descriptors, promoting both rigor and clarity.
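For programs that want to store or share such rubrics programmatically, the same structure can be expressed as plain data. The following is a minimal, hypothetical Python sketch (the SLO name and descriptors are invented for illustration), not a schema from any particular LMS or accreditor:

```python
# Hypothetical sketch: a point-free rubric as plain data.
# The SLO name and descriptors are invented for illustration.

LEVELS = ["Exceeds Proficiency", "Proficient",
          "Near Proficient", "Not Yet Proficient"]

rubric = {
    "SLO-1: Applies disciplinary concepts": {
        "Exceeds Proficiency": "Integrates multiple concepts independently.",
        "Proficient": "Applies concepts consistently and accurately.",
        "Near Proficient": "Applies concepts inconsistently; needs revision.",
        "Not Yet Proficient": "Applies concepts inaccurately or not at all.",
    },
    # ...each additional SLO becomes another criterion entry...
}

# Note that no level maps to a number: the descriptor itself is the measure.
```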
Without point totals, feedback becomes the currency of learning. Faculty describe how students met or did not meet each outcome and provide specific guidance for revision. Students resubmit work to demonstrate improvement, reinforcing the idea that learning is iterative rather than transactional.
This model echoes what NILOA identifies as the “authentic assessment trend”: evidence produced in the context of teaching and learning, used directly to enhance assignments and courses rather than to satisfy compliance requirements (Jankowski et al., 2018).
Faculty who move away from points often organize their courses around bundles or modules of outcomes. Students complete learning activities demonstrating each outcome. Instead of a running total of points, they earn mastery indicators for each SLO.
If institutions still require a final letter grade, mastery profiles can be converted transparently: for example, “Proficient or higher on all five outcomes = A; Proficient on four, Near on one = B.” The grade translation remains secondary to the learning evidence.
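To show how transparent such a translation can be, here is a minimal sketch in Python. The rule encoded below is simply the example above; the outcome names, thresholds, and fallback grade are assumptions, and any real course would publish its own translation table.

```python
# Hypothetical sketch of a transparent mastery-to-grade translation.
# Encodes the example rule: Proficient or higher on all five outcomes = A;
# Proficient on four with one Near Proficient = B.

RANK = {"Exceeds Proficiency": 3, "Proficient": 2,
        "Near Proficient": 1, "Not Yet Proficient": 0}

def letter_grade(profile: dict[str, str]) -> str:
    """Translate a mastery profile (SLO -> performance level) to a letter grade."""
    levels = [RANK[level] for level in profile.values()]
    proficient = sum(1 for r in levels if r >= RANK["Proficient"])
    near = sum(1 for r in levels if r == RANK["Near Proficient"])
    if proficient == len(levels):
        return "A"
    if proficient == len(levels) - 1 and near == 1:
        return "B"
    return "C or below (per the course's published translation table)"

profile = {
    "SLO-1": "Proficient", "SLO-2": "Exceeds Proficiency",
    "SLO-3": "Proficient", "SLO-4": "Proficient", "SLO-5": "Near Proficient",
}
print(letter_grade(profile))  # -> B
```

The point of writing the rule down this explicitly is that students can predict their grade from their mastery profile at any moment, which keeps the translation secondary to the evidence.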
This method aligns with standards-based and “specifications” grading models, where the learning contract is explicit and performance thresholds are transparent. The result is a system that values meeting learning goals, not collecting points.
NILOA’s research found that most changes resulting from assessment occur at the course and program levels, where faculty use results to adjust assignments and instruction—precisely the spaces where point-free rubrics thrive.
Adopting outcomes-based rubrics requires a cultural shift. Faculty accustomed to precise arithmetic grading may initially worry about subjectivity. Yet calibrated scoring sessions and shared rubrics increase reliability over time.
Technology presents both promise and friction. Learning management systems can store rubric results and aggregate mastery data, but as NILOA notes, many institutions struggle to integrate technology meaningfully into assessment practice. The solution lies not in software but in a shared vision: assessment as evidence for learning, not simply as recordkeeping.
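As a small illustration of the aggregation step, a rubric export can be tallied by outcome in a few lines. This is a hypothetical sketch; the record fields are assumptions, since actual LMS export formats vary.

```python
from collections import Counter, defaultdict

# Hypothetical records as they might come out of an LMS rubric export;
# the field names here are assumptions for this sketch.
results = [
    {"student": "s1", "slo": "SLO-1", "level": "Proficient"},
    {"student": "s2", "slo": "SLO-1", "level": "Near Proficient"},
    {"student": "s1", "slo": "SLO-2", "level": "Exceeds Proficiency"},
    {"student": "s2", "slo": "SLO-2", "level": "Proficient"},
]

# Tally how many students reached each performance level, per outcome.
by_slo: dict[str, Counter] = defaultdict(Counter)
for record in results:
    by_slo[record["slo"]][record["level"]] += 1

for slo, counts in sorted(by_slo.items()):
    print(slo, dict(counts))
# SLO-1 {'Proficient': 1, 'Near Proficient': 1}
# SLO-2 {'Exceeds Proficiency': 1, 'Proficient': 1}
```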
When the focus moves from collecting points to demonstrating mastery, students experience assessment as a process of growth. Faculty reclaim assessment as an extension of teaching. Programs gain valid, meaningful data on student learning outcomes.
“The points don’t matter” does not mean rigor doesn’t matter. It means rigor resides in clearly defined outcomes, transparent rubrics, and meaningful feedback. In this model, every point has a purpose—and when points disappear, purpose finally takes center stage.