AI grading scores a submission against the assignment’s rubric and returns:
  • a numeric grade (grade_ai),
  • a per-criterion breakdown,
  • a markdown rationale (grading_rationale),
  • optional inline annotations on the student’s text.
The grade is never committed to the gradebook automatically: a separate grade_manual field, when set, overrides the AI grade.

Calling the grader

await hk.submissions.gradeWithAi("sub_123");
The call is asynchronous: it returns immediately with ai_status: "pending". Wait for completion by polling or via a webhook.
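For the polling route, a small loop like the following works; this is a sketch, not part of the SDK — the helper name, interval, and attempt cap are ours, and the fetch function is injected so the loop is easy to test:

```javascript
// Poll until ai_status leaves "pending". `retrieve` is any async function
// returning the submission object, injected so the loop can be exercised
// without a live API.
async function waitForAiGrade(retrieve, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const sub = await retrieve();
    if (sub.ai_status !== "pending") return sub; // e.g. "ready"
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("AI grading did not finish in time");
}

// Usage with the client shown above:
// const sub = await waitForAiGrade(() => hk.submissions.retrieve("sub_123"));
```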

What you get back

{
  "id": "sub_123",
  "ai_status": "ready",
  "grade_ai": 8.5,
  "grading_rationale": "## Clarity (4/4)\nThe student opens with...\n\n## Accuracy (3/4)\n...",
  "grading_breakdown": [
    { "criterion": "clarity", "score": 4, "max": 4 },
    { "criterion": "accuracy", "score": 3, "max": 4 },
    { "criterion": "sources", "score": 1.5, "max": 2 }
  ],
  "evaluated_at": "2026-05-10T12:34:56Z"
}
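The numbers in this example are internally consistent: weighting each criterion's score/max by the rubric weights configured below (0.4, 0.4, 0.2) and scaling to 0–10 yields 8.5. If you want to spot-check responses the same way, here is a sketch; the weighted-sum formula is our assumption about how the grade is derived, not a documented guarantee:

```javascript
// Recompute an overall grade from a per-criterion breakdown.
// Assumes grade = sum of weight * (score / max), scaled to the rubric range.
function recomputeGrade(breakdown, weights, scale = { min: 0, max: 10 }) {
  const weighted = breakdown.reduce(
    (sum, c) => sum + (weights[c.criterion] ?? 0) * (c.score / c.max),
    0
  );
  return scale.min + weighted * (scale.max - scale.min);
}

const breakdown = [
  { criterion: "clarity", score: 4, max: 4 },
  { criterion: "accuracy", score: 3, max: 4 },
  { criterion: "sources", score: 1.5, max: 2 },
];
recomputeGrade(breakdown, { clarity: 0.4, accuracy: 0.4, sources: 0.2 });
// ≈ 8.5, matching grade_ai above (up to floating-point rounding)
```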

Configuring the grader

Per-assignment configuration lives on the Assignment:
await hk.assignments.configure(assignmentId, {
  ai_evaluator: {
    enabled: true,
    model: "claude-sonnet-4-6",
    rubric: {
      criteria: [
        { name: "clarity", weight: 0.4 },
        { name: "accuracy", weight: 0.4 },
        { name: "sources", weight: 0.2 },
      ],
      scale: { min: 0, max: 10 },
    },
    feedback: {
      tone: "encouraging",   // or "neutral" | "demanding"
      verbosity: "normal",   // or "short" | "detailed"
      language: "en",
    },
  },
});
For one-off rubric overrides on a single submission:
await hk.submissions.gradeWithAi(submissionId, {
  rubric: { /* override */ },
});
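The API presumably validates rubrics server-side, but a client-side guard catches weight typos before they ever reach configure. A minimal sketch (the helper name and tolerance are ours):

```javascript
// Guard against a rubric whose criterion weights don't sum to 1.
function assertWeightsSumToOne(criteria, epsilon = 1e-9) {
  const total = criteria.reduce((sum, c) => sum + c.weight, 0);
  if (Math.abs(total - 1) > epsilon) {
    throw new Error(`Rubric weights sum to ${total}, expected 1`);
  }
}

// assertWeightsSumToOne(rubric.criteria) before calling hk.assignments.configure(...)
```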

Human-in-the-loop

The recommended workflow:
// 1. AI grades
await hk.submissions.gradeWithAi(submissionId);

// 2. Teacher reviews in your UI; you show grade_ai + grading_rationale.

// 3. Teacher confirms (or edits)
await hk.submissions.update(submissionId, {
  grade_manual: 8.5,            // confirm
  grader_comments: "Solid.",
  human_review_status: "reviewed",
});
human_review_status flows pending → reviewed. You can require review per assignment (human_review: "required"); when set, the grade shown to students is null until a human marks the submission reviewed.
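The display rules above reduce to a small pure function. A sketch — the field names mirror the submission object, while `humanReviewRequired` stands in for the assignment's human_review setting:

```javascript
// The grade a student should see: grade_manual wins; otherwise the AI
// grade, but only once a required human review has actually happened.
function studentVisibleGrade(sub, humanReviewRequired) {
  if (sub.grade_manual != null) return sub.grade_manual;
  if (humanReviewRequired && sub.human_review_status !== "reviewed") return null;
  return sub.grade_ai ?? null;
}
```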

Re-grading

If you change the rubric or want a second opinion, call gradeWithAi again. The new result overwrites the previous AI grade; grade_manual is untouched.
await hk.submissions.gradeWithAi(submissionId);
You can also ask for the AI’s reasoning without a grade — useful when the teacher wants explanations on a hand-graded submission:
await hk.submissions.explainGrade(submissionId);
// → fills `grading_rationale` based on the existing manual grade

What AI grading is not

  • Not a replacement for the teacher on consequential grades.
  • Not stable across model versions: pin a model if you need reproducibility.
  • Not suitable for grading code, math proofs, or anything whose rubric can't be expressed in natural language. Use a custom grader and ingest the score via submissions.update({ grade_manual }) instead.

Auditing

Every AI grade carries the model version, the rubric snapshot, and the prompt fingerprint in submission.evaluation_meta. That makes appeals and audits feasible.
const sub = await hk.submissions.retrieve(submissionId, {
  expand: ["evaluation_meta"],
});

// sub.evaluation_meta = { model, rubric, prompt_hash, evaluated_at }
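A common first question in an appeal is whether the grade was produced under the rubric the assignment has now. One way to sketch that check against the snapshot in evaluation_meta — JSON comparison is a deliberate simplification that assumes stable key ordering in both objects:

```javascript
// Was this AI grade produced under the assignment's current rubric?
function gradedUnderCurrentRubric(evaluationMeta, currentRubric) {
  return JSON.stringify(evaluationMeta.rubric) === JSON.stringify(currentRubric);
}
```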