INTRODUCING JANET

The home assistant that runs in your home.

Janet is a local-first AI assistant that controls your smart home, understands context instead of commands, and runs entirely on hardware you own. Your data never leaves the house.

JANET
Local-first AI · No wake word · Private by default · Runs on your hardware · Open source

THE PROBLEM

Home assistants are stuck in 2016.

Cloud-tethered, command-driven, and built around a wake word you have to remember. The category hasn't moved. Janet has.

Cloud dependency

Most assistants route every request through remote servers. Your home shouldn't stop working when your internet does.

Wake-word fatigue

You shouldn't have to repeat a device's name every time you speak. Real conversations don't work like that.

No real context

Today's assistants treat each utterance as an isolated command. They don't track who's in the room, what just happened, or what you actually meant.

Privacy tradeoff

Your habits, routines, and voice are processed in the cloud and retained on someone else's terms. Convenience shouldn't cost ownership.

WHAT MAKES JANET DIFFERENT

Privacy-first. Local. Contextual. Yours.

Local processing, contextual understanding, genuine privacy, and the hardware to make it real.

Local-first AI

Core intelligence — speech, intent, reasoning — runs entirely on a device in your home. The cloud is optional, never required.

No wake word

Janet uses context — who's in the room, what just happened, how you're speaking — to know when you're talking to it. No "Hey Janet" required.

Private smart home control

Lights, locks, climate, scenes — controlled locally. Your routines, schedules, and voice never leave the house.

Supporting capabilities

Context awareness

Tracks conversation history, environment, and prior intent so follow-ups work the way you'd actually phrase them.

Audio intelligence

Filters background noise, detects speech, identifies who's speaking, and transcribes — all locally, with no raw audio retained.

Optional vision

Event-driven visual understanding for advanced features. Off by default, on only when you want it.

Edge hardware

Optimized for NVIDIA Jetson-class hardware so the AI core runs at conversational speed without a datacenter.

HOW IT WORKS

From sound to answer.

Audio in. Intent out. Reasoning in the middle. Vision when you want it.

AUDIO

Always listening, never recording.

Janet keeps a short rolling buffer of sound — never stored, never uploaded. It detects speech, filters out music and TV, identifies who's speaking, and turns it into text. The raw audio is gone the moment the transcript is ready.

Audio Pipeline: Listen → Detect and filter → Identify and transcribe → Output text.

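The pipeline above can be sketched in a few lines. This is an illustrative toy, not Janet's implementation: the rolling buffer is a bounded deque, and a simple energy threshold stands in for a real voice-activity model. The names `RollingAudioBuffer`, `is_speech`, and `pipeline` are assumptions for the sketch.

```python
from collections import deque

class RollingAudioBuffer:
    """Holds only the last few frames of audio; older frames fall off."""
    def __init__(self, max_frames: int):
        self.frames = deque(maxlen=max_frames)

    def push(self, frame):
        self.frames.append(frame)

    def drain(self):
        """Hand buffered frames to the transcriber, then discard them."""
        out = list(self.frames)
        self.frames.clear()
        return out

def is_speech(frame, threshold=0.1):
    """Toy energy-based voice activity detector (real systems use a model)."""
    energy = sum(s * s for s in frame) / len(frame)
    return energy > threshold

def pipeline(frames, buffer, transcribe):
    """Listen -> detect -> transcribe; raw audio is gone once drained."""
    transcripts = []
    for frame in frames:
        buffer.push(frame)
        if is_speech(frame):
            transcripts.append(transcribe(buffer.drain()))
    return transcripts
```

Note that `drain()` both yields the audio to the transcriber and empties the buffer, which is the property the privacy claim rests on: nothing persists but the transcript.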

ROUTING

Knows when you're talking to it.

Janet weighs what was said, who said it, what just happened, and where you're standing. If it's for Janet, the request gets routed to the right system. If not, it stays quiet.
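One way to picture that weighing is as a weighted score over weak signals. The weights, threshold, and feature names below are illustrative assumptions, not Janet's actual routing logic:

```python
def route(utterance_score, speaker_known, seconds_since_last, same_room,
          threshold=0.6):
    """Combine weak signals into one 'is this for Janet?' decision.

    utterance_score: 0..1, how addressed-to-Janet the text sounds.
    Weights and threshold are illustrative, not real tuned values.
    """
    score = 0.0
    score += 0.5 * utterance_score                     # what was said
    score += 0.2 if speaker_known else 0.0             # who said it
    score += 0.2 if seconds_since_last < 10 else 0.0   # follow-up window
    score += 0.1 if same_room else 0.0                 # where you're standing
    return score >= threshold
```

Below the threshold, nothing happens: staying quiet is the default outcome, not an error case.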

REASONING

Fast for simple, deep for hard.

A layered system: a quick intent classifier for everyday requests, a real-time action processor for smart-home commands, and a reasoning engine for the questions that actually need thinking. Simple stays fast. Complex goes deeper.
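The tiered dispatch can be sketched as a cascade that tries the cheap path first. Keyword sets stand in for the real intent classifier here; the categories and the `handle` function are assumptions for illustration:

```python
def handle(request: str) -> str:
    """Route a request to the cheapest tier that can answer it.

    Tier 1: fast intent classifier for everyday requests.
    Tier 2: real-time action processor for smart-home commands.
    Tier 3: reasoning engine for everything that needs actual thought.
    Keyword matching is a stand-in for the real classifier.
    """
    SIMPLE = {"time", "timer", "weather"}
    ACTIONS = {"lights", "lock", "thermostat", "scene"}
    words = set(request.lower().split())
    if words & SIMPLE:
        return "fast-path"
    if words & ACTIONS:
        return "action-path"
    return "reasoning-path"
```

The design point is the ordering: a request only pays for the deepest tier when the shallower ones can't claim it.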

VISION

Eyes only when you ask for them.

Vision is opt-in and event-driven. A low-power awareness mode runs only if you enable it; heavier visual processing fires on specific triggers, not continuously. Off is the default.
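In code, "opt-in and event-driven" reduces to two gates: an enabled flag that starts false, and a trigger allow-list. The class, trigger names, and return values below are hypothetical, for illustration only:

```python
class VisionModule:
    """Opt-in, event-driven vision: off by default, fires only on triggers."""

    TRIGGERS = {"doorbell", "package_detected"}  # illustrative trigger names

    def __init__(self):
        self.enabled = False  # off is the default

    def enable(self):
        self.enabled = True

    def on_event(self, event):
        """Run heavy visual processing only for enabled, allow-listed events."""
        if self.enabled and event in self.TRIGGERS:
            return f"analyzed:{event}"
        return None  # everything else is ignored, not recorded
```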

THE ROOM

Every mic in your home, one Janet.

Janet listens through every microphone you set up — kitchen, living room, bedroom. Wherever you are, it hears you. The signals converge on a single on-device intelligence.

JANET system diagram: signals from four microphone nodes converging on the central hub J
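A minimal sketch of that convergence: each microphone node streams frames to the hub, and the hub picks the strongest signal to act on. Energy-based selection is an illustrative heuristic; real multi-mic systems use beamforming or model-based fusion.

```python
def select_stream(node_frames):
    """Given {node_name: audio_frame}, pick the node hearing you best.

    Signal energy is a stand-in for a real quality metric.
    """
    def energy(frame):
        return sum(s * s for s in frame) / len(frame)
    return max(node_frames, key=lambda name: energy(node_frames[name]))
```

Whichever room you speak in, the same single intelligence answers; the nodes are ears, not separate assistants.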

PRIVACY

Your home data should stay home.

Local processing isn't a marketing term for us — it's the architecture. Here's what that actually means.

Local processing

Speech, intent, and reasoning run on hardware in your home.

No raw audio retention

Audio is processed in a rolling buffer and discarded — only transcripts persist.

Optional cloud

Cloud is something you opt into per feature, not a requirement to function.

You own it

The hardware is yours. The data is yours. The model weights live with the device.

                    JANET                         Typical assistant
Core processing     Runs locally                  Runs in the cloud
Wake word           Not required                  Required
Raw audio           Discarded after processing    Often stored or processed remotely
Data ownership      You control it                The platform controls it
Extensibility       Open by design                Closed ecosystem

WHY TRUST JANET

Built in the open. Built to be owned.

Janet is shipping in public — every decision, every commit, every dead end. You don't have to take our word on privacy or local-first. You can read the code.

Open source

The source is public. The architecture is auditable. Nothing about Janet's privacy story is taken on faith.

Edge-first architecture

Designed from the start to run on hardware you own, not adapted from a cloud product after the fact.

Transparent development

Roadmap, design notes, and the reasoning behind each system live alongside the code.

Hardware-backed AI

Real local inference on real local silicon — Jetson-class edge hardware, not a Raspberry Pi straining to keep up.

Build the assistant your home actually deserves.

Follow Janet as it evolves from concept to a real local AI system.