Assistive Real-Time Sensory Neuromodulation, Designed To Wear.
Neurodivergence isn't a problem to be fixed — it's a reality that deserves better support. Whether you stutter, navigate ADHD, or struggle with body-image distortion, we're building adaptive technology for the moments that matter.

PsyAIHance develops AI-driven sensory devices based on real-time perceptual transformation, on-device machine learning, and adaptive signal processing — in combination with decades of research into auditory-motor feedback loops, haptic processing, and neural adaptation. Patent-protected algorithms, production-grade audio pipelines, and ecologically valid experimental methods offer both a clinically rigorous intervention framework and a scalable platform for real-world deployment. Our approach opens new possibilities across speech therapy, attention support, body-perception disorders, and sensory-driven quality of life.
Our First Product · Now in Beta Access
FluxBuds®












Our team combines deep perception science, production-grade audio engineering, and clinical rigor — built over decades across academia, industry, and the arts.

Science as the Foundation
300+ Publications · 10,500+ Citations · Top 2% Scientists (Stanford/Elsevier) · European Academy of Sciences

Real-Time ML
Industry-Proven Audio ML · Sub-50ms Latency · On-Device Processing · Deployed at Scale

From Research to Life
Everyday Use Cases · Patent WO2024200875A1 · Continuous User Feedback · BamLiD 24-Hour Living Lab · Real-World Validation
From personal experience with stuttering to decades of research in cognitive science —
our founders bring both lived expertise and scientific rigor.





" By respecting the seriousness of the condition and the need for careful evaluation. We can be optimistic and still be precise about what we know and what we’re validating. The research and development effort is structured around defined milestones, controlled evaluations, and real-world usability—not just demos."

Multiple Conditions.
Many neurodivergent conditions share a common root: disrupted sensory feedback loops that distort how the brain processes perception in real time. PsyAIHance® targets this shared mechanism through Real-Time Perceptual Neuromodulation — delivered through neuroadaptive wearables, personalized to each user.

Wearable: clinically validated, AI-powered sensory device
→ AI-guided recalibration of sensory pathways
→ Restored fluency, perception & focus
Disrupted auditory-motor feedback loops distort how the brain processes its own voice — a core driver of stuttering. Our technology triggers alternative neural pathways in real time, restoring fluent speech processing at the source.
---------------------------------------------------
→ FluxBuds®
→ Persona™
Distorted perceptual feedback causes the brain to misrepresent the body — driving conditions like Body Dysmorphic Disorder. Guided visual recalibration realigns self-perception with reality, frame by frame.
---------------------------------------------------
→ ReMorph
Sensory overload destabilizes focus and cognitive regulation — a central challenge in ADHD. Adaptive sensory environments reduce neural noise in real time, supporting sustained attention where it matters.
---------------------------------------------------
→ FocusBuds

Each product addresses a different challenge — all powered by the same adaptive platform, validated independently.

Our Approach
From Research to Real-World Impact

How It Works
You speak. Earbud mics capture your natural voice. Anti-Voice renders a brain-credible other-speaker percept in real time — shifting both voice identity and 3D spatial origin. Your superior temporal gyrus (STG) registers it as "someone else," bypassing the disrupted self-monitoring loop. Algorithmically controlled, novelty-inducing cues prevent habituation and keep the fluency effect stable while worn.
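A minimal sketch of that block-based loop, in Python. Everything here is an illustrative stand-in, not the actual FluxBuds algorithm: the naive-resampling "identity shift" stands in for real voice conversion, and the interaural delay stands in for full 3D spatialization. The point is the structure — capture, transform, spatialize, render — with per-block processing well inside the sub-50 ms budget:

```python
import math
import time

BLOCK = 256      # samples per block
RATE = 16_000    # sample rate (Hz); one block spans 16 ms of audio

def transform_identity(block, ratio=1.25):
    """Toy voice-identity shift: naive resampling alters pitch/formants.
    (Stand-in for a real voice-conversion model.)"""
    n = len(block)
    return [block[min(int(i * ratio), n - 1)] for i in range(n)]

def spatialize(block, itd_samples=12):
    """Toy spatial cue: delay one ear (interaural time difference)
    to shift the perceived origin of the voice."""
    left = block
    right = [0.0] * itd_samples + block[:-itd_samples]
    return left, right

def process_block(block):
    """One pass through the pipeline, with its processing time in ms."""
    t0 = time.perf_counter()
    shifted = transform_identity(block)
    left, right = spatialize(shifted)
    latency_ms = (time.perf_counter() - t0) * 1000
    return (left, right), latency_ms

# one synthetic 16 ms block: a 200 Hz tone standing in for captured voice
block = [math.sin(2 * math.pi * 200 * t / RATE) for t in range(BLOCK)]
(left, right), ms = process_block(block)
```

In a real deployment the processing time per block, plus buffering and hardware I/O, is what has to stay under the perception threshold — which is why block size and model complexity are the key tuning knobs.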
Who It's For
The mechanism builds on one of the most replicated findings in stuttering research: during choral speech (speech-in-unison), fluency improves by 90–100%. FluxBuds is the first product designed to reproduce this effect through AI — backed by 30+ peer-reviewed studies and validated in a pilot study (N=12, University of Bamberg) with significant fluency gains and preserved naturalness at no cognitive load increase.
What Makes It Different
FluxBuds is software, not hardware — it runs on standard earbuds and hearing aids. Unlike traditional DAF/FAF devices (delayed or frequency-altered auditory feedback), it jointly converts voice identity and spatial origin cues, not just pitch or delay. The result is a fully brain-credible auditory illusion at perception-threshold latency, designed to be worn all day without listener awareness. No therapy sessions. No stigma.

Concept & Exploratory Research
Technology: Audio & Speech
Target: Human–Robot Interaction (HRI)
FluxPersona
Voice is the most natural interface for robots — and the hardest to deploy responsibly. In real-world Human–Robot Interaction, microphones are effectively always-on, conversations happen in shared spaces, and speech signals often flow through multiple components (wake word, ASR, intent, logging, analytics). Even when audio isn’t “stored,” raw voice can still function as a biometric identifier and can leak sensitive context through transcripts or downstream representations.
FluxPersona extends the FluxBuds real-time audio platform into robotics as an ultra-low-latency edge layer that reduces speaker identifiability at the source. Incoming speech is transformed on-device so it remains intelligible and timing-accurate for natural turn-taking — but becomes less linkable to the original speaker identity before it ever reaches cloud services, logs, or third-party tooling. This enables privacy-by-design voice interaction without changing how people naturally speak to a robot.
How we validate it
FluxPersona is built to be measurable: we evaluate speaker re-identification/verification risk before vs. after transformation, quantify any ASR accuracy impact, and report end-to-end latency (including p95) to ensure the interaction stays genuinely real-time.
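As a concrete illustration of the latency reporting mentioned above, a p95 figure can be computed over repeated end-to-end measurements with a nearest-rank percentile. The sample values below are made up for illustration; they are not measured FluxPersona numbers:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest value such that at least
    p% of the samples are at or below it (no interpolation)."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

# hypothetical end-to-end latencies (ms) from repeated round-trip runs
latencies_ms = [12.1, 14.8, 13.0, 41.2, 15.5, 13.9, 12.7, 14.2, 16.0, 13.3]

mean_ms = sum(latencies_ms) / len(latencies_ms)
p95_ms = percentile(latencies_ms, 95)
# the mean can look comfortable while p95 exposes occasional slow frames
```

Reporting p95 alongside the mean is what makes "genuinely real-time" a falsifiable claim: one slow outlier per conversation turn is enough to break natural turn-taking even when the average is fine.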
Bespoke / Co-development
Customized Sensory AI Solutions
Tailored systems for specific clinical or real-world challenges — built with partners, validated with evidence.
Concept stage • Platform extension
Technology: Cognition & Attention
Target: ADHD / attention support
FocusBuds
Attention-aligned AI audio for ADHD — designed to reduce sensory overload and help sustain focus in real-world environments.
Leads product strategy, intellectual property, and partnerships. Background in cognitive science and 20+ years in audio technology and sound design. Co-authored the Leder-Belke model of aesthetic processing — one of the most cited perception frameworks in the field. Lives with a stutter. Founded PsyAIHance to build the assistive technology that never existed.
Research Stage · Proof of Concept
Technology: Vision & Perception
Target: Body-image distress (BDD)
ReMorph
Guided visual feedback for people affected by body-image distress (BDD context) — supporting a steadier, more aligned self-perception.
Concept & Exploratory Research
Technology: Audio & Speech
Target: Stuttering support
ASMR, Whisper-Based Sensory Modulation
Low-intensity auditory stimulation to reduce speaking pressure and emotional load
Concept & Exploratory Research
Technology: Audio & Speech
Target: Human–Robot Interaction (HRI)
Persona™
Privacy-preserving, real-time voice transformation for human–robot interaction — reducing speaker identifiability at the source while keeping speech intelligible and timing-accurate.

Get the latest insights on sensory AI research, product updates, and partnership opportunities delivered to your inbox.
We respect your privacy. Unsubscribe at any time. Privacy Policy
