
Exploring AI-native workflows
We're exploring local LLMs, permission-aware shells, and human-centered desktop workflows (Blossom OS direction). The focus is on privacy, safety, and explainability — building AI assistants that preview before executing and explain every potentially dangerous operation.
What we're exploring
These experiments inform our production work across all platforms, from business apps to health trackers to developer tools.
Local LLM Assistants
Offline-capable, privacy-focused AI
Run language models locally without sending data to the cloud. Perfect for sensitive workflows, offline environments, and users who value privacy. Models stay on your device, queries never leave your machine.
Command Translation & Safety
Natural language → shell commands
Translate plain English into terminal commands with built-in safety layers. Preview what will happen before execution, explain dangerous operations, and suggest safer alternatives. Different modes for beginners vs. experts.
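As a minimal sketch of the preview-before-execute idea (the `confirm_and_run` function and `BLOSSOM_CONFIRM` variable are illustrative names, not the real Blossom Shell API):

```shell
#!/bin/sh
# Hypothetical preview-before-execute wrapper. Function and variable
# names (confirm_and_run, BLOSSOM_CONFIRM) are illustrative only.

preview() {
  # Describe the command instead of running it
  printf 'Would run: %s\n' "$*"
}

confirm_and_run() {
  # Destructive commands execute only after an explicit opt-in
  if [ "$BLOSSOM_CONFIRM" = "yes" ]; then
    "$@"
  else
    preview "$@"
  fi
}

confirm_and_run rm -rf /tmp/example-cache   # prints "Would run: rm -rf /tmp/example-cache"
BLOSSOM_CONFIRM=yes
confirm_and_run echo "cache cleared"        # prints "cache cleared"
```

The same gate can wrap any generated command, so the default path is always a dry run and execution is the exception the user opts into.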
Blossom OS Direction
Custom desktop environments
Experimental shells and desktop environments that integrate AI assistants at the OS level. Focus on human-centered UX, contextual help, and workflows that feel natural rather than forcing users into rigid patterns.
Blossom Shell Playground
Interactive demo of how an AI-native shell would work. Select a scenario to see how the assistant translates requests, explains operations, and prioritizes safety.
Try a scenario
Blossom Shell Assistant
Mode: Beginner
You
I'm running out of disk space. Help me clean up safely.
Assistant
I'll scan for large temporary files, old logs, and cached data. I'll show you what can be removed before actually deleting anything.
Safety measures
Runs dry-run first, shows file sizes and locations, requires explicit confirmation for deletion. Never touches user documents or system files.
Proposed commands
# First, let's see what's using space
du -sh ~/Library/Caches/* | sort -hr | head -10
# Show old log files (won't delete yet)
find /var/log -name "*.log" -mtime +30 -ls
# Preview what would be cleaned
du -sh ~/Downloads/* | sort -hr
# After your approval, we can run:
# rm -rf ~/Library/Caches/com.example.app
# (Only after explicit confirmation)
💡 This is a demonstration. No commands are actually executed.
Design principles
Core values that guide our AI and OS experiments.
Local-first by default
Models and data stay on your device. Cloud is optional, not required. Privacy isn't a feature—it's the foundation.
Preview before execute
Show what will happen before doing anything destructive. Dry runs, diffs, and explanations come first. No surprises.
Different modes for different users
Beginners get extra safety rails and explanations. Experts get advanced controls. The same tool adapts to skill level.
Explain every dangerous operation
Break down what each command does, why it might be risky, and suggest safer alternatives. Education over automation.
Integrate with the OS, not fight it
Work within existing system conventions. Enhance the shell, don't replace it. Respect user preferences and workflows.
Fail gracefully, recover cleanly
Create backups before changes. Provide rollback options. When things go wrong, make it easy to undo and understand what happened.
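One way the backup-before-change principle could look in a shell context (the `backup_then_edit` helper and paths below are hypothetical, not part of any shipped tool):

```shell
#!/bin/sh
# Hypothetical backup-then-modify helper; backup_then_edit is an
# illustrative name, not a real Blossom command.

backup_then_edit() {
  target="$1"
  stamp=$(date +%Y%m%d%H%M%S)
  cp "$target" "$target.bak.$stamp"        # snapshot before any change
  echo "Backup saved: $target.bak.$stamp"  # tell the user where the undo lives
  # ...apply the actual change to "$target" here...
  # Rollback is then just: cp "$target.bak.$stamp" "$target"
}

# Usage: snapshot a config file before editing it
# backup_then_edit ~/.config/example.conf
```

Because the snapshot is taken first and its location is reported, recovery never depends on the change having succeeded.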
From R&D to products
These experiments directly inform our shipping products. Patterns discovered here flow back into real-world applications.
Blossom OS
→ Desktop & shell design
Command translation and safety layers inform how we build developer tools and system utilities that feel natural and safe.
The Company App
→ Role-based access & permissions
Permission-aware workflows from OS experiments translate into business app access control and multi-company data boundaries.
Health & Habit Apps
→ AI coaching & behavior flows
Local LLM assistant patterns inform how we build habit coaching, meal suggestions, and personalized insights—all private and on-device.
Current experiments & case studies
Early-stage prototypes exploring AI-native patterns, privacy-first architecture, and human-centered OS interactions.
Blossom Shell
Natural language macOS automation with transparent permission requests. Uses local LLMs to generate shell scripts with human-in-the-loop safety checks.
🎯 Focus: Permission-aware command generation + safety-first execution
JustSurvive AI
iOS game combining Swift gameplay with local AI for dynamic narratives. Demonstrates on-device intelligence integration without server dependencies.
🎯 Focus: Local AI storytelling + SwiftUI/SpriteKit architecture
Desktop workflows
Early concepts for AI-assisted file management, context-aware shortcuts, and explainable system interactions across macOS and Linux environments.
🎯 Focus: Human-centered automation + transparent decision-making
What this means for clients
These R&D experiments aren’t just academic exercises—they directly inform how we build production systems for real businesses:
- Faster, local-first AI features that work offline and respect user privacy without requiring server infrastructure.
- Better privacy through on-device or self-hosted AI, keeping sensitive data under your control.
- Proven patterns for business apps and tools—command translation, workflow automation, and AI-assisted interfaces that actually help users get work done.
Example directions
- Local AI assistant shell for internal tools and automation workflows
- Privacy-first desktop workflows for teams handling sensitive data
- Command translation and automation layers for complex systems and operations
Interested in AI-native tools or custom OS workflows?
These are research directions, not finished products. If you're working on similar problems—local AI, permission-aware systems, or human-centered developer tools—let's talk.
Building AI-native experiences or exploring local LLM integration?
Our AI & OS experiments show what's possible when you prioritize privacy, safety, and user understanding. We can help you integrate similar patterns into your products.