🔍

AuditLens

Document intelligence with deterministic, traceable scoring.

Page 1 of 9 — Landing & Brand
01 — 01 — HERO LANDING
app.example.com/auditlens

Beyond chatbots

Document intelligence with deterministic, traceable scoring.

AuditLens turns complex regulatory and technical documents into structured findings, gap analysis, and audit-ready reports. Multi-stage pipeline. Schema-constrained outputs. Scoring runs outside the LLM.

See how it works
PAGE 14 · §3.2.1 · 78 / 100 · SUBSTANTIALLY COMPLIANT · TRACEABLE · RE-RUNNABLE
01 — 02 — PROJECTS DASHBOARD
app.example.com/projects

Projects

Active projects

12

+3 this week

Runs this month

247

+18% vs Apr

Avg score

78.4

+2.1 pts

Findings flagged

1,284

9% non-compliant

Recent projects

📘
MiCAR Q1 Submission Review
47 docs · Anna K.
Completed
📗
ISO 9001 Site B Audit
32 docs · Marcus L.
Running
📕
Tender RFP-2024-127 Response Eval
18 docs · Layla H.
Failed
📓
SOC2 Type II Pre-audit
64 docs · Anna K.
Completed
📙
Pharma Annex 11 Review
29 docs · Marcus L.
Partial
01 — 03 — UPLOAD & MAP
app.example.com/projects/new

STEP 02 · MAP TO CHECKLIST

Map document sections to checklist items

Our parser identified 8 sections in your 47-page PDF. Map each to the relevant checklist category, or let auto-mapping run. ETA 3m 40s · 32 items will be evaluated.

§1 — Executive summary

Governance ▾

§2 — Authorisation conditions

Authorisation conditions ▾

§3 — Marketing communications

Disclosure ▾

§4 — White paper publication

Disclosure ▾

§5 — Conflict of interest policy

Governance ▾

§6 — Capital adequacy proof

Capital requirements ▾

§7 — Operational resilience plan

Operational resilience ▾

Annex A — Schedules & exhibits

Auto-map suggested · Governance ▾
01 — 04 — DOCUMENT ANALYSIS PIPELINE
app.example.com/projects/mica-q1/runs/r-7421
Run in progress · auto-saving
Run r-7421-a3b9

Document preview · being analyzed

PAGE 14 OF 47

Article 30 — Marketing communications

3.2.1 All marketing communications relating to a crypto-asset, or to a public offer of a crypto-asset, by an issuer or by any third party acting on behalf of an issuer, shall comply with the requirements of this Article.

ART. 4.2 · ANALYZING

All marketing communications shall be fair, clear and not misleading, and shall be clearly identifiable as such…

3.2.2 Information contained in marketing communications shall be consistent with the information contained in the white paper referred to in Article 6.

ANNEX II §3 · ANALYZING

Where applicable, marketing communications shall be approved by the competent authority of the home Member State…

3.2.3 Marketing communications shall include a clear and unambiguous statement that the white paper has been published and shall indicate the address of the website of the issuer.

SCHEDULE B ITEM 7 · ANALYZING

The issuer shall maintain a record of all marketing communications for a period of not less than five years…

PAGE 15 OF 47

Article 30 (cont.) · Pre-publication

3.2.4 The issuer shall ensure pre-publication review of all marketing materials by a designated compliance officer prior to dissemination.

3.2.5 The competent authority may request an issuer to amend marketing communications that do not meet the requirements set out in this Article.

3.2.6 Where a marketing communication is found to be misleading, the issuer shall be required to issue a corrective statement within ten business days.

CHUNK BOUNDARY · 1,847 tokens

Hash sha256:a3f9…b4c2 · cached for re-run efficiency · embedded in pgvector at 09:14:26 UTC
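The "cached for re-run efficiency" note above comes down to content-addressed caching: hash the chunk text, and a re-run that sees the same bytes under the same checklist version skips the LLM call. A minimal sketch with an in-memory dict standing in for the real cache (Redis/Postgres in production; helper names are assumptions):

```python
import hashlib

def chunk_cache_key(chunk_text: str, checklist_version: str) -> str:
    """Content-addressed key: identical text + identical checklist
    version -> cache hit on re-run, so only changed chunks cost tokens."""
    digest = hashlib.sha256(chunk_text.encode("utf-8")).hexdigest()
    return f"{checklist_version}:{digest}"

# In-memory stand-in for the persistent cache.
_cache: dict[str, dict] = {}

def evaluate_cached(chunk_text: str, checklist_version: str, evaluate):
    """Call the (expensive) evaluate function only on cache misses."""
    key = chunk_cache_key(chunk_text, checklist_version)
    if key not in _cache:
        _cache[key] = evaluate(chunk_text)
    return _cache[key]
```

Bumping the checklist version changes every key, which is exactly the behavior the run-history screen relies on when it offers cheap re-runs against v2.5.0.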

Pipeline · 7 stages

Parse document

1.2s

PDF parser · 47 pages · 124,418 chars extracted

Structure & section detection

2.4s

8 sections · 14 sub-sections · 3 annexes detected

Chunk & embed

4.7s

68 semantic chunks · pgvector · cache-friendly hashing

Map to checklist

3.1s

47 checklist items mapped · 0 unmapped

LLM evaluate · in progress

ETA 1m 50s

Schema-constrained · JSON Schema strict mode · 18/47 items done

Validation layer

pending

Consistency checks · cross-item conflicts · citation validation

Deterministic scoring

pending

Pure Python · runs outside LLM · reproducible
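The schema-constrained evaluation stage above could pass the provider's structured-output mode a JSON Schema along these lines, so the model cannot return free text. Field names are illustrative, not the product's actual schema:

```python
# Hypothetical JSON Schema for one checklist-item evaluation.
# Note factor_values carries raw 0-5 ratings only; the 0-100 score is
# computed later by the deterministic scoring stage, outside the LLM.
FINDING_SCHEMA = {
    "type": "object",
    "properties": {
        "item_id": {"type": "string"},
        "status": {
            "type": "string",
            "enum": ["compliant", "partial", "non_compliant", "not_applicable"],
        },
        "reasoning": {"type": "string", "minLength": 50},
        "source_refs": {"type": "array", "items": {"type": "string"}, "minItems": 1},
        "factor_values": {"type": "object"},
    },
    "required": ["item_id", "status", "reasoning", "source_refs", "factor_values"],
    "additionalProperties": False,
}
```

`additionalProperties: False` plus strict mode is what makes the output machine-checkable before it ever reaches the validation layer.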

Run metadata

Run ID · r-7421-a3b9
Checklist · MiCAR v2.4.1
Triggered by · Anna K.
Started · 2026-05-15 09:14:22 UTC
Document · MiCAR-Q1-Submission-v3.pdf
Size · 47 pages · 2.1 MB
01 — 05 — FINDINGS DASHBOARD
app.example.com/projects/mica-q1/runs/r-7421/results
78 / 100
⚠ SUBSTANTIALLY COMPLIANT · MiCAR v2.4.1

47 checklist items evaluated

34 compliant ✓ · 9 partial ⚠ · 3 non-compliant ✗ · 1 N/A

RUN r-7421-a3b9

Evaluated 2m 47s ago

Checklist tree

Governance · 8 items · 88/100 ✓
Disclosure · 12 items · 72/100 ⚠
12.1 White paper publication · 85 ✓
12.2 Marketing communications · 60 ⚠
12.3 Conflict of interest · 90 ✓
12.4 Risk warnings · 82 ✓
12.5 Record-keeping · 35 ✗
Operational resilience · 9 items · 70/100 ⚠
Authorisation conditions · 10 items · 95/100 ✓
Capital requirements · 8 items · 65/100 ⚠
12.2

Marketing communications

× 2.0 weight · ⚠ PARTIAL · 60/100

Category · Disclosure · Regulatory ref: MiCAR Article 30, Para 1-3

Source evidence

“All marketing communications shall be fair, clear and not misleading, and shall be clearly identifiable as such. The issuer shall ensure that information contained in marketing communications is consistent with the white paper published under Article 6.”

Provenance · Page 14, §3.2.1, Para 2 · MiCAR-Q1-Submission-v3.pdf · chunk hash sha256:a3f9…b4c2

LLM evaluation reasoning

The marketing communications policy is documented but lacks specific examples of approved channels and pre-publication review workflow. References to MiCAR Article 30 are present but not operationalized into internal procedures. Pre-publication compliance sign-off ownership is not specified.

model: claude-3.5-sonnet · schema-strict · 1,247 tokens · cached

Deterministic scoring · computed outside LLM

FACTOR                        WEIGHT   VALUE   SCORE
Coverage of regulatory text   0.30     4/5     24/30
Operationalization            0.40     2/5     16/40
Pre-publication controls      0.30     4/5     20/30
Final                                          60/100

Deterministic formula · computed outside LLM · reproducible across runs · scoring formula version 2.4.1.

Recommendations

Add concrete examples of approved channels (web, social, email) with format specs.
Document the pre-publication review workflow with named sign-off owners (Compliance Officer + Legal).
01 — 06 — RUN HISTORY
app.example.com/projects/mica-q1/runs

MiCAR Q1 Submission Review

Audit trail · all runs · every score is reproducible

247 total runs · 78.2 avg score · 2m 51s avg runtime · 12 re-runs requested this week

Checklist updated · v2.5.0 published 3 days ago

Re-run all completed runs to compare against new criteria. Cached chunks make re-runs cost-efficient.

All · Completed · Failed · Running
Checklist · All versions
Date · Last 30 days
Run ID
Triggered by
Started
Duration
Checklist
Score
Diff vs prev
Actions
r-7421-a3b9
AK
Anna K.
2026-05-15 09:14
2m 47s
v2.4.1
78
+6 · 3 items improved
View · Re-run
r-7384-c1e2
ML
Marcus L.
2026-05-13 16:42
2m 53s
v2.4.0
72
+4 · 2 items improved
View · Re-run
r-7341-d8a7
LH
Layla H.
2026-05-11 11:08
3m 02s
v2.4.0
76
+2 · 1 item improved
View · Re-run
r-7298-b2f4
AK
Anna K.
2026-05-08 14:21
2m 41s
v2.4.0
74
-1 · 2 items dropped
View · Re-run
r-7220-9a1c
ML
Marcus L.
2026-05-05 10:33
2m 58s
v2.3.2
75
+3 · 2 items improved
View · Re-run
r-7187-e4d0
LH
Layla H.
2026-05-02 17:55
3m 14s
v2.3.2
72
+5 · 4 items improved
View · Re-run
r-7102-7f2b
AK
Anna K.
2026-04-28 09:12
2m 49s
v2.3.2
67
baseline
View · Re-run
r-7045-2c3a
ML
Marcus L.
2026-04-22 13:47
2m 38s
v2.3.1
64
baseline
View · Re-run
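The "Diff vs prev" column above ("+6 · 3 items improved") can be derived purely from the two runs' per-item scores. A sketch, assuming the overall score is an unweighted mean of item scores (the real engine applies checklist weights):

```python
def score_diff(prev: dict[str, int], curr: dict[str, int]) -> tuple[int, int, int]:
    """Compare two runs' per-item scores.

    Returns (overall_delta, items_improved, items_dropped), matching the
    'Diff vs prev' column. Items present in only one run are ignored.
    """
    improved = sum(1 for k in curr if k in prev and curr[k] > prev[k])
    dropped = sum(1 for k in curr if k in prev and curr[k] < prev[k])
    delta = sum(curr.values()) // len(curr) - sum(prev.values()) // len(prev)
    return delta, improved, dropped
```

Because scoring is deterministic, this diff is attributable entirely to document or checklist changes, never to model variance.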
01 — 07 — REPORT BUILDER
app.example.com/projects/mica-q1/reports/new

Configure report

MiCAR Q1 Submission · Run r-7421-a3b9

Template

Detailed Compliance

Other: Executive Summary · Regulator Submission · Internal Review

Sections

Executive summary
Compliance scorecard
Detailed findings
Gap analysis
Recommendations
Source evidence appendix
Methodology
Audit trail
Glossary
Disclaimer

Branding

+
Ink black

Language

English · Arabic · French

Recipient access

Public link
Password protected
Specific emails only

Live preview

Page 1 of 47

AUDITLENS · COMPLIANCE REPORT

MiCAR Q1 Submission Review

Checklist v2.4.1 · Generated 2026-05-15 · Run r-7421-a3b9

78/100

SUBSTANTIALLY COMPLIANT

EXECUTIVE SUMMARY

The MiCAR-Q1-Submission-v3.pdf document was evaluated against 47 checklist items derived from MiCAR v2.4.1. The submission demonstrates substantial compliance with regulatory requirements, scoring 78/100 overall.

Key strengths: governance framework (88/100), authorisation conditions (95/100). Areas requiring attention: marketing communications operationalization, record-keeping procedures (item 12.5), and capital adequacy documentation.

COMPLIANCE SCORECARD

CATEGORY
SCORE
STATUS
Governance
88
Disclosure
72
Operational resilience
70
Authorisation conditions
95
CONFIDENTIAL · AUDITLENS · 1 / 47

Past reports

v2.4.1

2 days ago · Anna K.

↓ DOWNLOAD

v2.4.0

1 week ago · Marcus L.

↓ DOWNLOAD

v2.3.2

2 weeks ago · Layla H.

↓ DOWNLOAD

01 — 08 — CHECKLIST EDITOR
app.example.com/checklists/micar-v2.4.1/edit

MiCAR Compliance

v2.4.1 · PUBLISHED
History · v2.5.0 (Draft)

Item tree · 47 items

Governance (8)
Disclosure (12)
12.1 White paper publication
12.2 Marketing communications
12.3 Conflict of interest
12.4 Risk warnings
12.5 Record-keeping
Operational resilience (9)
Authorisation conditions (10)
Capital requirements (8)
12.2

Marketing communications

edited 3 days ago by Anna K.

Item name

Marketing communications

Weight

2.0

Regulatory reference

MiCAR Article 30, Para 1-3

Evaluation prompt template

# schema-strict
Evaluate whether the document operationalizes marketing
communication controls per MiCAR Art. 30.

Look for: approved channels, pre-publication review,
sign-off ownership, fair-clear-not-misleading principle.

Cite specific paragraphs. Min 1 source ref.

Scoring formula · deterministic · runs outside LLM

score = (coverage * 0.30
      + operationalization * 0.40
      + controls * 0.30) * 100 / 5
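A minimal runnable sketch of the formula above, as a pure function with the weights made explicit (factor names match the editor; everything else is an assumption):

```python
# Weights as configured for item 12.2; versioned per checklist version.
WEIGHTS = {"coverage": 0.30, "operationalization": 0.40, "controls": 0.30}

def deterministic_score(factors: dict[str, float]) -> int:
    """Map 0-5 factor values to a 0-100 item score.

    A pure function of its inputs: no LLM, no randomness, so the same
    factor values reproduce the same score on every re-run.
    """
    weighted = sum(WEIGHTS[name] * value for name, value in factors.items())
    return round(weighted * 100 / 5)
```

Keeping this outside the LLM means the model only rates factors; the arithmetic, and therefore the audit trail, is reproducible.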

Validation rules

Must cite ≥ 1 source paragraph
Score 0-100 only
Reasoning ≥ 50 chars
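The validation rules above can be enforced mechanically before a finding is stored. A sketch that returns all violations at once rather than failing on the first (field names are illustrative):

```python
def validate_finding(finding: dict) -> list[str]:
    """Check a raw finding against the checklist's validation rules.

    Returns a list of violation messages; an empty list means valid.
    """
    errors = []
    if not finding.get("source_refs"):
        errors.append("must cite at least one source paragraph")
    score = finding.get("score")
    if not isinstance(score, (int, float)) or not 0 <= score <= 100:
        errors.append("score must be in the range 0-100")
    if len(finding.get("reasoning", "")) < 50:
        errors.append("reasoning must be at least 50 characters")
    return errors
```

Collecting every violation in one pass gives the validation layer a complete error report to attach to the run's audit trail.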

AuditLens MVP

Multi-stage AI evaluation system with deterministic scoring and a full audit trail, built to the architecture defined above: parse, chunk, schema-constrained LLM evaluation, validation, deterministic scoring outside the LLM, traceable findings, re-runnable history, and audit-ready PDF reports.

⚙️

Tech Stack

Python 3.12 + FastAPI
Pydantic v2 (schema-strict LLM outputs)
Next.js 14 + React + TanStack Query
PostgreSQL 16 + pgvector
Redis + BullMQ
S3 (documents + generated PDFs)
OpenAI Structured Outputs / Claude JSON Schema
OpenTelemetry (per-run traces)
🧠

Core Technologies

  • FastAPI + Pydantic — API · schema-constrained LLM calls
  • Next.js 14 — Dashboard · server components
  • Postgres + pgvector — Documents, chunks, findings
  • BullMQ workers — Async pipeline · idempotent jobs
  • OpenAI / Claude — Strict JSON Schema · cached
  • OpenTelemetry — Audit trace per run
📦

V1 Deliverables

Document upload · PDF + Word ingestion
V1
Parser · structure & section detection
V1
Semantic chunking + pgvector embeddings
V1
Auto-map chunks to checklist items
V1
Schema-constrained LLM evaluation (strict JSON)
V1
Validation layer · consistency + citation checks
V1
Deterministic scoring engine (pure Python, outside LLM)
V1
Findings & gap analysis dashboard with traceability
V1
Versioned checklist authoring + re-run history
V1
Audit-ready PDF report generation
V1
Multi-tenant data model · tenant / project / run / finding
V1
REST API + workers + role-based access
V1
Checklist marketplace (ISO 27001, SOC2, GDPR, MiCAR, Annex 11)
V2
Integrations · Salesforce / SharePoint / Confluence
V2
White-label tenant branding
V2
Real-time collaborative review
V2
Multi-language document support
V2
On-prem / VPC deployment option
V2
🏛

Architecture Layers

UI
Next.js 14 · React · TanStack Query · Tailwind · shadcn/ui · server components
API GATEWAY
FastAPI · OpenAPI · JWT + tenant-scoped RBAC · rate limiting · idempotency keys
DOCUMENT PIPELINE
PDF/Word parser · semantic chunker · OpenAI embeddings · pgvector · cache-friendly chunk hashing for re-run efficiency
EVALUATION ENGINE
Multi-step orchestration · LLM caller with strict JSON Schema · retry + timeout · validation layer · scoring runs OUTSIDE the LLM in pure Python · scoring formula versioned per checklist version
AUDIT & TRACEABILITY
Every finding linked to source-section + checklist version + scoring formula version · OpenTelemetry trace per run · re-run any run against any checklist version
STORAGE
Postgres (tenant/project/run/finding/checklist) · pgvector (chunks) · S3 (documents + PDFs) · Redis (queue + dedup)
WORKERS
BullMQ (TS) or Celery (Python) · queue-based async · idempotent job design · backpressure handling
OPS
AWS ECS Fargate · GitHub Actions · Terraform · Sentry · DataDog · per-tenant cost attribution
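The "idempotent job design" called out in the workers layer can be reduced to a run-once guard keyed by an idempotency key. A sketch with an in-memory set standing in for the Redis dedup store (production would use an atomic `SET key NX EX`; names here are assumptions):

```python
# In-memory stand-in for the Redis-backed dedup set.
_seen: set[str] = set()

def run_once(idempotency_key: str, job) -> bool:
    """Execute the job only if this key has not been processed.

    Makes queue redelivery safe: a duplicate delivery of the same
    pipeline step becomes a no-op instead of a double evaluation.
    Note: check-then-add is not atomic across processes; the real
    implementation relies on Redis for that guarantee.
    """
    if idempotency_key in _seen:
        return False  # duplicate delivery: skip
    _seen.add(idempotency_key)
    job()
    return True
```

Keys would naturally be scoped per run and per step (e.g. run ID plus chunk index), so retries after a worker crash re-enter the pipeline without corrupting findings.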