Every existing tool measures how fast engineers type. Mannar measures how well they think.
| Engineer | PRs | Commits | Weighted SLOC | AI Tokens | Tickets | Review Score | Trend |
|---|---|---|---|---|---|---|---|
| Sarah K. | 4 | 12 | 847 | 14.2k | 3 | 94 | +12% |
| Marcus L. | 6 | 18 | 1,204 | 32.8k | 5 | 91 | +8% |
| Priya R. | 3 | 9 | 623 | 8.1k | 2 | 88 | 0% |
| James W. | 2 | 7 | 412 | 51.3k | 1 | 72 | -5% |
| Anika D. | 5 | 14 | 956 | 18.7k | 4 | 96 | +18% |
| Chen Y. | 1 | 4 | 187 | 67.4k | 1 | 58 | -14% |
Six dimensions of engineering output, weighted for real-world impact.
Track pull requests opened, reviewed, and merged. See who ships and who unblocks others.
Commit volume with context. Frequency, size distribution, and consistency over time.
Not all lines are equal. Application logic outweighs config tweaks and auto-generated boilerplate.
Who gets 10x leverage from Claude Code and Copilot, and who burns tokens fighting them.
JIRA issues resolved, linked to commits and PRs. Full traceability from ticket to deploy.
Depth and thoroughness of code reviews. Rubber-stamp approvals score lower than substantive feedback.
Some engineers get 10x output from AI tools; others fight them. Token spend without weighted output is a red flag, not a badge.
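One plausible way to operationalize that ratio, as a sketch. The function name and the per-1k-token normalization are illustrative assumptions, not Mannar's published formula:

```python
def ai_leverage(weighted_sloc: float, ai_tokens: float) -> float:
    """Weighted output produced per 1,000 AI tokens consumed.

    High values suggest effective direction of AI tools; heavy token
    spend with little weighted output is the red-flag pattern.
    Illustrative metric only -- not Mannar's actual formula.
    """
    if ai_tokens <= 0:
        return 0.0
    return weighted_sloc / (ai_tokens / 1000)

# Using the scorecard rows above: Sarah ships 847 weighted SLOC on
# 14.2k tokens; Chen ships 187 on 67.4k.
sarah = ai_leverage(847, 14_200)  # ~59.6 weighted SLOC per 1k tokens
chen = ai_leverage(187, 67_400)   # ~2.8 -- tokens burned fighting the tool
```

The same numbers, two very different stories: raw token spend alone would rank Chen highest.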
Weighted code metrics mean config changes don't count the same as application logic, and auto-generated boilerplate scores lower than hand-crafted algorithms.
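A minimal sketch of how such weighting could work, assuming hypothetical category weights (Mannar's actual weighting model is not shown here):

```python
# Illustrative weights -- placeholders, not Mannar's real model.
WEIGHTS = {
    "application": 1.0,   # hand-written application logic
    "test": 0.6,          # test code
    "config": 0.2,        # YAML/JSON/config tweaks
    "generated": 0.05,    # auto-generated boilerplate, lockfiles
}

def weighted_sloc(changes: list[tuple[str, int]]) -> float:
    """Sum lines per change, scaled by each category's weight."""
    return sum(WEIGHTS.get(category, 1.0) * lines
               for category, lines in changes)

# A 300-line generated lockfile counts far less than 120 lines of
# application code: 120*1.0 + 300*0.05 + 40*0.2 = 143.0
weighted_sloc([("application", 120), ("generated", 300), ("config", 40)])
```

Under this scheme, padding a PR with generated files barely moves the score.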
From ideation to code to ship. Research time, review thoroughness, and unblocking others all count. Not just lines pushed to main.
Daily scorecard, weekly trends, and AI leverage ratios. All in one view.
Pull data from the tools your team already uses. Zero config.
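As a sketch of what pulling from an existing tool could look like for one source, here is an example against the public GitHub REST API (`GET /repos/{owner}/{repo}/pulls`). Mannar's own connectors are assumed, not shown:

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

def fetch_closed_prs(owner: str, repo: str, token: str) -> list[dict]:
    """Fetch closed pull requests via the GitHub REST API."""
    req = urllib.request.Request(
        f"{GITHUB_API}/repos/{owner}/{repo}/pulls?state=closed&per_page=100",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def merged_only(pulls: list[dict]) -> list[dict]:
    """Merged PRs carry a non-null merged_at timestamp;
    closed-without-merge PRs do not."""
    return [pr for pr in pulls if pr.get("merged_at")]
```

Equivalent read-only pulls against Jira, Copilot, and git hosting are the kind of integration the "zero config" claim implies.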
Join engineering teams measuring what matters in the AI era.