Can I use Wolf Analytics without session replay?
Yes. Select GDPR mode when creating a project, and session replay collection is disabled entirely at the ingest layer. You still get full access to dashboards, traffic source breakdowns, conversion funnels, retention reports, and real-time visitor counts. GDPR mode records pageviews, custom events, and conversions while skipping all mouse movement, click coordinate, scroll depth, and DOM mutation capture. This means zero replay data is stored in your database. If you later decide you want replay, you can switch the project to international mode and the snippet will begin collecting replay events from that point forward without any code changes on your site.
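The ingest-layer behavior can be pictured as a simple event filter. This is a hypothetical sketch; the event type names and function are illustrative assumptions, not Wolf Analytics internals:

```python
# Sketch of GDPR-mode ingest filtering: replay-class events are dropped
# before any storage, while analytics events pass through. Event type
# names here are assumptions, not actual Wolf Analytics identifiers.

# Event types kept in GDPR mode
ANALYTICS_EVENTS = {"pageview", "custom_event", "conversion"}
# Replay-class event types dropped at the ingest layer in GDPR mode
REPLAY_EVENTS = {"mouse_move", "click_coords", "scroll_depth", "dom_mutation"}

def filter_for_gdpr(events: list[dict]) -> list[dict]:
    """Return only the events GDPR mode is allowed to store."""
    return [e for e in events if e["type"] in ANALYTICS_EVENTS]

incoming = [
    {"type": "pageview", "path": "/pricing"},
    {"type": "mouse_move", "x": 10, "y": 20},
    {"type": "conversion", "name": "signup"},
]
stored = filter_for_gdpr(incoming)
# The pageview and conversion survive; the mouse_move is dropped.
```

Because the filter runs at ingest rather than in the dashboard, replay data never reaches the database in the first place.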
Does each account see only its own projects?
Yes. Project ownership is strictly enforced at the API layer on a per-account basis. Every dashboard query, session lookup, and data export is scoped to the authenticated account and its own projects. There is no cross-account data leakage. Superadmin users have an explicit context-switching mechanism that lets them view any account for diagnostics and support purposes, but this requires a deliberate API call to change context. Normal accounts cannot access projects belonging to other accounts under any circumstances. This isolation model extends to API keys as well: each project receives its own unique wa_ prefixed key that only authorizes ingest for that specific project.
How do I move off legacy dashboard login?
Start by ensuring your backend is running account-based JWT authentication. Create your superadmin account via the bootstrap endpoint or seeded credentials, then set the JWT verification secret in your frontend environment variables. Test that you can log in through the standard login page at /login and that dashboard routes load correctly with the JWT token. Once validated, set ENABLE_LEGACY_DASHBOARD_LOGIN=false in your frontend .env.local file and redeploy. You can also set ENABLE_LEGACY_ADMIN_AUTH=false to disable the legacy admin API key path. After disabling both flags, remove the legacy credentials (DASHBOARD_AUTH_USERNAME, DASHBOARD_AUTH_PASSWORD, DASHBOARD_AUTH_SECRET) from your environment to reduce your secret surface area.
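Before flipping the flags, it helps to confirm the JWT secret on the frontend actually matches the backend. The sketch below shows what HS256 verification amounts to, using only the standard library; a real deployment would use a JWT library, and the payload fields are placeholders:

```python
import base64
import hashlib
import hmac
import json

# Minimal HS256 JWT sign/verify sketch for validating that a shared
# secret works end to end. Illustrative only; use a JWT library in
# production, and note the payload fields below are placeholders.

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> bool:
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(expected, sig)
```

If a token signed by the backend fails this check with the secret you set in the frontend environment, fix the mismatch before disabling the legacy login path.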
Which routes are publicly crawlable?
The marketing and documentation pages are public and crawlable. These include the home page, features, pricing, docs, FAQ, and contact pages. The robots.txt file explicitly allows crawling of these routes and disallows all dashboard and API paths. A dynamic sitemap.xml is generated that lists every public marketing route with appropriate lastmod timestamps and change frequencies. Additionally, the /llms.txt and /llms-full.txt files are publicly accessible for AI model consumption. All authenticated routes including /login, /dashboard, and any /api paths are blocked from indexing via robots.txt directives and noindex meta tags. This separation ensures search engines and AI systems can discover your public content without exposing private analytics data.
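A robots.txt along these lines would implement the split described above. The exact paths are assumptions based on the routes the answer mentions, and the sitemap URL is a placeholder for your own domain:

```
User-agent: *
Allow: /
Disallow: /login
Disallow: /dashboard
Disallow: /api/

Sitemap: https://your-domain.example/sitemap.xml
```

The noindex meta tags on authenticated pages act as a second layer, since robots.txt alone only discourages crawling and does not guarantee exclusion from the index.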
How quickly can we get first insights?
Most teams see their first dashboard metrics within 30 minutes of starting setup. The process has four steps: bootstrap your superadmin account with a single API call, create a project through the dashboard or API to get your wa_ project key, paste the two-line tracking snippet into your site HTML, and verify pageview data appears in the dashboard overview. The snippet loads asynchronously via the defer attribute so it does not block page rendering. Events are ingested with sub-50ms latency and appear in the real-time dashboard immediately via Server-Sent Events. Historical aggregates in the overview panel refresh on each page load. No build tools, tag managers, or third-party integrations are required to start collecting data.
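The first two steps can be sketched as API requests. The endpoint path and field names below are hypothetical, since the FAQ names the bootstrap endpoint but not its exact shape; only the base URL is taken from the documentation:

```python
import json
from urllib import request

BASE = "https://analapi.wolfai.dev"  # hosted backend from this FAQ

# Step 1: bootstrap the superadmin account (field names are assumptions).
bootstrap_payload = {"email": "admin@example.com", "password": "change-me"}

# Step 2: create a project to receive its wa_-prefixed key
# (again, field names are assumptions).
project_payload = {"name": "Marketing site", "mode": "gdpr"}

def as_request(path: str, payload: dict) -> request.Request:
    """Build a JSON POST request without sending it."""
    return request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = as_request("/bootstrap", bootstrap_payload)  # path is a guess
# request.urlopen(req)  # the actual send, not executed in this sketch
```

Steps three and four are then just pasting the snippet tag into your HTML and watching the overview panel for the first pageviews.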
Does frontend hosting change the snippet install?
No. The tracking snippet installation is identical regardless of where your frontend is hosted. Whether you deploy on a custom VPS, Cloudflare Pages, Vercel, Netlify, or any other hosting provider, the browser-side install is the same: a two-line script tag pointing to the hosted snippet URL and carrying your project API key. The snippet is served from analytics.wolfai.dev and makes API calls to analapi.wolfai.dev, so your hosting choice does not affect the data flow. The one requirement is that your production origins must be allowlisted in the analytics backend CORS settings to permit cross-origin tracking requests. If you self-host the backend, update the CORS allowed origins list to include every domain where the snippet is installed.
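The install looks roughly like the fragment below. The snippet URL is the hosted one named elsewhere in this FAQ, but the attribute used to pass the project key is an assumption and may differ in the real snippet:

```
<!-- Two-line install; the data-* attribute name is an assumption -->
<script defer src="https://analytics.wolfai.dev/wolf-analytics.min.js"
        data-project-key="wa_your_project_key"></script>
```

Because the tag is plain HTML, it works the same in a static site, a server-rendered template, or a framework layout component.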
How should we structure one superadmin and multiple brands?
Create one superadmin account for global oversight and administration, then create one standard account per brand. Each brand account owns its projects independently, and project data stays isolated within that account boundary. The superadmin can switch context to any brand account for diagnostics, usage monitoring, and support using the /auth/context endpoint. Within each brand account, create one project per tracked domain or application. This structure gives you centralized billing visibility through the superadmin global overview while maintaining strict data separation between brands. The current model uses account-scoped ownership rather than a shared multi-user workspace, so if multiple team members need access to a brand, they must share that brand account's credentials, or you manage access at the infrastructure level.
What data does Wolf Analytics collect in GDPR mode?
GDPR mode collects the minimum data needed for meaningful analytics while respecting European privacy standards. IP addresses are hashed using SHA256 combined with a per-deployment salt before any storage occurs, meaning raw IPs are never written to the database. User-agent strings are parsed down to browser family and operating system family only, discarding version numbers and device fingerprint details. Geographic data is limited to country and region level with no city-level resolution and no latitude/longitude coordinates. Session replay is completely disabled at the ingest layer, so no mouse movements, click coordinates, scroll positions, or DOM mutations are recorded. Wolf Analytics sets zero third-party cookies. All tracking uses first-party API calls from the snippet to your analytics backend.
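The user-agent reduction can be sketched as a family-only classifier. This is a deliberately naive illustration of the idea, not the actual parser; real deployments typically use a UA parsing library, and the string checks below are assumptions:

```python
# Sketch of reducing a user-agent to browser family + OS family only,
# discarding versions and fingerprint details, as GDPR mode does.
# Deliberately naive; real parsing would use a UA library.

def reduce_user_agent(ua: str) -> tuple[str, str]:
    # Order matters: Chrome UAs also contain "Safari",
    # and Edge UAs contain both "Chrome" and "Safari".
    if "Edg/" in ua:
        browser = "Edge"
    elif "Chrome/" in ua:
        browser = "Chrome"
    elif "Firefox/" in ua:
        browser = "Firefox"
    elif "Safari/" in ua:
        browser = "Safari"
    else:
        browser = "Other"

    # Android UAs contain "Linux", and iPhone UAs contain "like Mac
    # OS X", so the more specific checks come first.
    if "Windows" in ua:
        os_family = "Windows"
    elif "Android" in ua:
        os_family = "Android"
    elif "iPhone" in ua or "iPad" in ua:
        os_family = "iOS"
    elif "Mac OS X" in ua:
        os_family = "macOS"
    elif "Linux" in ua:
        os_family = "Linux"
    else:
        os_family = "Other"
    return browser, os_family  # version numbers never survive
```

Only the two family strings would be stored; the raw user-agent is discarded after classification.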
What data does Wolf Analytics collect in international mode?
International mode collects the full data set for teams operating outside strict GDPR jurisdictions. Full IP addresses are stored for accurate geographic correlation and session stitching. Complete user-agent strings are retained for detailed device, browser, and OS analytics. Session replay is enabled, capturing mouse movements, click coordinates, scroll depth, and DOM mutations with input fields automatically masked to avoid recording sensitive form data. Geographic resolution includes city-level location plus latitude and longitude coordinates, enabling the Leaflet-based map visualization in the dashboard. All other features available in GDPR mode are also present: pageview tracking, custom event recording, conversion attribution, traffic source classification, and real-time visitor streaming via Server-Sent Events.
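Input masking can be pictured as a transform applied before a replay event leaves the browser. The record shape and function below are illustrative assumptions, not the real replay schema:

```python
# Sketch of replay input masking: values typed into form fields are
# replaced with mask characters before the event is sent, so sensitive
# form data is never recorded. The event shape is an assumption.

MASK = "*"

def mask_input_event(event: dict) -> dict:
    """Replace the captured value of an input-field event with
    same-length mask characters, preserving structure for replay."""
    if event.get("node") == "input":
        masked = dict(event)
        masked["value"] = MASK * len(event.get("value", ""))
        return masked
    return event
```

Keeping the masked value the same length lets the replay still show that typing happened, and how much, without revealing what was typed.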
How does Wolf Analytics handle IP addresses?
Wolf Analytics handles IP addresses differently depending on the privacy mode configured for each project. In GDPR mode, every incoming IP address is passed through a SHA256 hash function combined with a per-deployment salt before any database write occurs. The raw IP is used only in memory for the hashing computation and is never persisted to disk or logs. This means even a full database breach cannot expose visitor IP addresses. The salt is configured via the ANALYTICS_SALT environment variable and should be a unique random string per deployment. In international mode, the full IP address is stored in the database to enable accurate city-level MaxMind GeoIP lookups, map coordinate resolution, and reliable session correlation across multiple pageviews from the same visitor.
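The GDPR-mode hashing step amounts to the following sketch. The exact way the salt and IP are combined is an assumption; what the FAQ specifies is SHA256, a per-deployment salt from ANALYTICS_SALT, and that only the digest is ever persisted:

```python
import hashlib
import os

# Sketch of GDPR-mode IP hashing: SHA256 over the raw IP plus a
# per-deployment salt, computed in memory; only the digest is stored.
# The salt/IP concatenation format here is an assumption.
SALT = os.environ.get("ANALYTICS_SALT", "example-salt-change-me")

def hash_ip(raw_ip: str, salt: str = SALT) -> str:
    """Return the hex digest that would be written to the database.
    The raw IP never leaves this function."""
    return hashlib.sha256(f"{salt}:{raw_ip}".encode()).hexdigest()
```

The salt is what prevents rainbow-table reversal of the hashes: without knowing your deployment's ANALYTICS_SALT, an attacker with the database cannot precompute digests for candidate IPs.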
Is Wolf Analytics GDPR compliant?
Yes, when running in GDPR mode. Wolf Analytics was designed with privacy-by-design principles specifically to satisfy GDPR requirements for web analytics. In GDPR mode, no raw IP addresses are stored anywhere in the system because they are hashed with SHA256 plus a per-deployment salt before persistence. No session replay data is collected, eliminating concerns about recording user behavior patterns. Geographic resolution stops at country and region level with no city or coordinate data. Zero third-party cookies are set by the tracking snippet. User-agent data is reduced to browser and OS family only. All data stays in your own infrastructure when self-hosted, giving you full data residency control. These measures collectively mean GDPR mode does not require cookie consent banners in most EU jurisdictions because no personal data is stored.
What is the tracking snippet size?
The production tracking snippet is under 8KB minified and gzipped. It is written in vanilla TypeScript with zero external dependencies and bundled using esbuild for optimal output size. The snippet is loaded via the HTML defer attribute, which means it does not block page parsing, rendering, or the DOMContentLoaded event. This keeps your site performance scores unaffected. For comparison, most commercial analytics scripts range from 30KB to 150KB. The snippet handles pageview tracking, custom event dispatch, conversion recording, user identification, and optional session replay capture all within that sub-8KB footprint. Source maps are available in development builds via the npm run build:dev command for debugging. The hosted production version is served from analytics.wolfai.dev/wolf-analytics.min.js.
Can I self-host Wolf Analytics?
Yes. Wolf Analytics is designed for self-hosting with a straightforward Docker deployment. The backend is a Python FastAPI application backed by PostgreSQL for primary storage and Redis for real-time features. The frontend is a standalone Next.js 16 dashboard that connects to the backend API. A docker-compose configuration handles PostgreSQL and Redis infrastructure. To self-host, clone the repository, configure your environment variables including DATABASE_URL, ANALYTICS_SALT, and REDIS_URL, run database migrations, and start the backend and frontend services. Self-hosting gives you full data residency control, which is particularly valuable for GDPR compliance since all visitor data stays on your own infrastructure. You can deploy on any Linux server, cloud VM, or container orchestration platform that supports Docker.
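A development compose file for the infrastructure pieces might look like the sketch below. The host ports match the development setup described in this FAQ; the image tags, service names, and credentials are placeholders, not project defaults:

```
# Sketch of a development docker-compose for PostgreSQL and Redis.
# Image tags and credentials below are placeholder assumptions.
services:
  postgres:
    image: postgres:16
    ports:
      - "5436:5432"   # host port 5436, per the development setup
    environment:
      POSTGRES_PASSWORD: change-me
  redis:
    image: redis:7
    ports:
      - "6382:6379"   # host port 6382, per the development setup
```

With the containers up, point DATABASE_URL and REDIS_URL at those host ports, run the migrations, and start the backend and frontend services.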
What databases does Wolf Analytics support?
Wolf Analytics uses PostgreSQL as its primary database, accessed through async SQLAlchemy 2.0 with the asyncpg driver for high-throughput non-blocking queries. PostgreSQL handles all persistent storage including pageview events, sessions, projects, accounts, and configuration. Redis serves as the secondary datastore, powering real-time features like live visitor counts, Server-Sent Events streaming, and rate limiting. The development docker-compose configuration runs PostgreSQL on port 5436 and Redis on port 6382 to avoid conflicts with any locally installed instances. Database schema changes are managed through Alembic migrations. There is no support for MySQL, SQLite, or other relational databases at this time because the application relies on PostgreSQL-specific features for performance and data integrity.
How does real-time tracking work?
Real-time tracking uses Server-Sent Events (SSE) to push live visitor data from the backend to the dashboard without polling. When you open the real-time dashboard panel, your browser establishes a persistent SSE connection to the /dashboard/realtime/stream endpoint. As new pageview and event data arrives at the ingest API, it is processed and broadcast to all connected SSE clients with sub-50ms latency from event receipt to dashboard display. The ingest pipeline writes events to PostgreSQL for permanent storage while simultaneously publishing to Redis for real-time distribution. This dual-write architecture ensures that historical queries and live streaming both perform optimally without competing for resources. The SSE approach is lighter than WebSockets for this use case because the data flow is unidirectional from server to browser.
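The wire format SSE uses is simple enough to sketch directly: each message is a "data:" line followed by a blank line. The payload shape below is illustrative, not the actual stream schema:

```python
import json

# Sketch of the Server-Sent Events wire format the realtime stream
# would emit. Each frame is "data: <payload>\n\n"; the payload fields
# here are illustrative assumptions, not the real stream schema.

def sse_frame(event: dict) -> str:
    """Serialize one event into an SSE frame for connected dashboards."""
    return f"data: {json.dumps(event)}\n\n"

frame = sse_frame({"type": "pageview", "path": "/pricing"})
# The browser-side EventSource API parses these frames automatically.
```

This one-directional, line-based framing is why SSE is the lighter choice here: the dashboard only listens, so the bidirectional machinery of WebSockets buys nothing.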