Manual Performance Checklist

This is a human-friendly checklist, not a replacement for k6, JMeter, or Gatling. Use it when you want to understand the app flow before you run a load test, or when you want to sanity-check a new baseline against the seeded ProjectTrace data.

Why This Exists

Automated performance tests tell you numbers. This checklist tells you what those numbers are actually measuring.

Use it to:

  • learn the app flow before testing
  • reproduce a bad result manually
  • confirm the seeded data and login still work
  • explain a performance run to someone new

How To Use It

For each scenario:

  1. Make sure the seeded backend is running.
  2. Log in with the canonical perf account.
  3. Walk the steps once by hand.
  4. Compare the feel of the app with the automated report later.

Scenarios

1. Login And Open Dashboard

Preconditions: ProjectTrace backend is up and the seeded perf user exists.
Steps: Log in, open the dashboard, and wait for the summary cards and recent activity.
Expected result: Login succeeds and the dashboard loads without errors.
Baseline target: Login p95 under 1 second; dashboard summary p95 under 750 ms.
Automated coverage: k6 smoke, JMeter smoke, Gatling smoke.
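The baseline targets above are p95 values. If it helps to see how a p95 falls out of raw samples, here is a minimal nearest-rank sketch (the sample latencies are made up, not real ProjectTrace measurements):

```python
import math

def p95(samples_ms):
    """Nearest-rank 95th percentile of a list of latencies in milliseconds."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-indexed nearest-rank position
    return ordered[rank - 1]

# Hypothetical login timings from ten manual walk-throughs, in ms.
login_samples = [310, 420, 380, 900, 510, 450, 470, 620, 530, 840]
assert p95(login_samples) <= 1000  # scenario target: login p95 under 1 second
```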

2. Browse Bugs

Preconditions: Logged in on a seeded dataset.
Steps: Open Bugs, apply a normal filter, and open a bug detail page.
Expected result: The list loads quickly and the detail view opens cleanly.
Baseline target: Bugs list p95 under 750 ms; bug detail p95 should stay within the same baseline band.
Automated coverage: k6 baseline, JMeter baseline, Gatling baseline.

3. Create A Bug And Add A Comment

Preconditions: Logged in with a user that can create and edit bugs.
Steps: Create a bug, update its status or severity, then add a comment.
Expected result: The bug is saved, updated, and the comment appears.
Baseline target: Create bug p95 under 1 second.
Automated coverage: k6 CRUD, JMeter CRUD, Gatling CRUD.
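The create → update → comment flow above can be sketched as a tiny in-memory shape. The field names are illustrative only, not ProjectTrace's actual API payloads:

```python
def create_bug(title, severity="medium"):
    """Illustrative shape of a new bug record; not ProjectTrace's real schema."""
    return {"title": title, "status": "open", "severity": severity, "comments": []}

bug = create_bug("Dashboard summary card shows stale totals")
bug["severity"] = "high"                                   # update step
bug["comments"].append("Reproduced on the seeded data.")   # comment step
```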
4. View A Requirement And Link A Test Case

Preconditions: Logged in and the seeded requirements/test cases exist.
Steps: Open a requirement, inspect its linked feature and epic, then link a test case.
Expected result: The requirement detail page shows the relationship and the link action succeeds.
Baseline target: View requirement detail p95 under 1 second; create test run p95 under 1 second.
Automated coverage: k6 volume, JMeter volume, Gatling volume.
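The relationship being checked here (requirement → feature → epic, plus linked test cases) can be modelled as a small sketch. All identifiers and field names are hypothetical, chosen only to mirror the steps above:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    feature: str                       # parent feature key (illustrative)
    epic: str                          # parent epic key (illustrative)
    test_cases: list = field(default_factory=list)

    def link_test_case(self, tc_id: str) -> None:
        """Link a test case exactly once, mirroring the manual link action."""
        if tc_id not in self.test_cases:
            self.test_cases.append(tc_id)

req = Requirement("REQ-101", feature="FEAT-7", epic="EPIC-2")
req.link_test_case("TC-55")
```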

5. Search And Filter

Preconditions: Seeded data is present across projects, requirements, test cases, and bugs.
Steps: Search bugs, requirements, test cases, and projects using a few different filters.
Expected result: Results narrow correctly and remain responsive.
Baseline target: Search-heavy reads should stay in the same baseline band as the list endpoints.
Automated coverage: k6 search, JMeter search, Gatling search.
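"Results narrow correctly" means stacked filters should combine with AND semantics: each added filter can only shrink the result set. A minimal sketch of that expectation (sample records are made up):

```python
bugs = [
    {"id": 1, "status": "open",   "severity": "high"},
    {"id": 2, "status": "closed", "severity": "low"},
    {"id": 3, "status": "open",   "severity": "low"},
]

def filter_bugs(items, **criteria):
    """AND all criteria together: every added filter narrows, never widens."""
    return [b for b in items if all(b.get(k) == v for k, v in criteria.items())]

open_bugs = filter_bugs(bugs, status="open")
open_high = filter_bugs(bugs, status="open", severity="high")
```

If adding a filter ever grows the result count in the real app, that is a correctness bug worth reporting before any performance run.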

6. Run A Longer Load Session

Preconditions: Smoke has already passed.
Steps: Run the longer load scenario and keep an eye on the HTML report.
Expected result: The app stays stable, percentiles stay within the baseline, and errors remain low.
Baseline target: Steady load should hold the target p95 values defined in the baseline guide.
Automated coverage: k6 baseline, JMeter baseline, Gatling baseline.
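A longer run is usually judged on two gates at once: p95 within the baseline and a low error rate. A minimal pass/fail sketch, with budget values assumed from the targets above rather than taken from the actual baseline guide:

```python
import math

def run_passes(latencies_ms, errors, requests,
               p95_budget_ms=1000, max_error_rate=0.01):
    """True if the steady-load run kept p95 within budget and errors low."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))   # nearest-rank p95 position
    p95 = ordered[rank - 1]
    error_rate = errors / requests
    return p95 <= p95_budget_ms and error_rate <= max_error_rate
```

The real k6/JMeter/Gatling reports compute these gates for you; this only shows what "within the baseline" is checking.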

Notes For Beginners

  • Smoke is the first check.
  • Load is the normal steady run.
  • Volume is about data size.
  • Spike is a short burst.
  • Stress is past the limit.
  • Soak is long and steady.
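The test types above differ mainly in virtual-user count, duration, and (for volume) dataset size. A hedged sketch of typical shapes; every number here is illustrative, not taken from the project's actual k6/JMeter/Gatling configs:

```python
# Illustrative profiles only; real values live in the load-test configs.
profiles = {
    "smoke":  {"vus": 1,   "duration_min": 1},     # first check: does it work at all
    "load":   {"vus": 50,  "duration_min": 10},    # normal steady run
    "volume": {"vus": 50,  "duration_min": 10,
               "seeded_rows": 1_000_000},          # data size, not traffic, is the variable
    "spike":  {"vus": 500, "duration_min": 2},     # short burst well above normal
    "stress": {"vus": 200, "duration_min": 15},    # deliberately past the expected limit
    "soak":   {"vus": 50,  "duration_min": 240},   # long and steady; watch for leaks
}
```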

If you can walk through the checklist manually, the automated reports will make more sense.