Linux and virtualization engineer with a delivery-first portfolio focused on automation, platform reliability,
and practical enterprise AI implementation.
Role fit
Linux platform administration in enterprise environments
Infrastructure as code with Ansible + Git workflow discipline
Cross-team delivery from design through operational handoff
Top outcomes
Patch and lifecycle orchestration patterns for safer rollouts
RHEL modernization playbooks for controlled migrations
Applied AI pilots documented from prototype to operations
Ansible automation
Ansible + Git workflows for auditable rollouts, lifecycle changes, and safer operational execution.
RHEL modernization
RHEL 7/8 to 9 migration planning, Satellite lifecycle control, patch orchestration, and hardening.
Applied enterprise AI
Azure AI Foundry and document intelligence pilots translated into operationally viable workloads.
Choose Your View
Current: Recruiter Mode
Recruiter view prioritizes impact, role fit, and business outcomes first. Engineer view enables full live labs,
telemetry streams, and command simulations.
Engineer Labs
Recruiter Mode keeps this portfolio concise and impact-first. Switch to Engineer Mode to open live Linux telemetry,
command simulation, and network topology labs.
A practical context-management design for long conversations on constrained hardware: estimate tokens, detect fatigue, and fold history through a 4-level hierarchy.
Issue: As conversations grow, irrelevant middle context accumulates, token budgets are exceeded, and edge devices pay extra input-processing latency on every turn.
Solution: Implemented a context folding hierarchy (RAW → DETAILED → SUMMARY → CONCEPTS) with fatigue detection thresholds (85%/95%/98%) and fast character-based token estimation.
Used In: RADXA AI Suite (edge inference + RAG + multi-agent orchestration).
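The mechanics above can be sketched in a few lines of Python. This is a minimal illustration, not the RADXA implementation: the function and field names (`estimate_tokens`, `fatigue_level`, `fold_history`) are assumptions, while the 4-character heuristic, the 85%/95%/98% thresholds, and the 4-level hierarchy come from the design described.

```python
from enum import Enum

class Fold(Enum):
    RAW = 0        # full turn text kept
    DETAILED = 1   # lightly compressed
    SUMMARY = 2    # summarized
    CONCEPTS = 3   # only key concepts retained

def estimate_tokens(text: str) -> int:
    # Fast heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def fatigue_level(used_tokens: int, budget: int) -> int:
    # Thresholds from the design: 85% / 95% / 98% of the token budget.
    ratio = used_tokens / budget
    if ratio >= 0.98:
        return 3
    if ratio >= 0.95:
        return 2
    if ratio >= 0.85:
        return 1
    return 0

def fold_history(turns: list[dict], budget: int) -> list[dict]:
    """Tag every turn except the newest with the current fatigue level,
    so older context is folded harder as the budget fills up."""
    used = sum(estimate_tokens(t["text"]) for t in turns)
    level = fatigue_level(used, budget)
    for i, turn in enumerate(turns):
        age = len(turns) - 1 - i
        turn["fold"] = Fold(level if age > 0 else 0)
    return turns
```

The character heuristic trades accuracy for speed, which matters on edge hardware where running a real tokenizer just to count tokens would add latency.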
A concrete security module set for an edge AI backend: AES-256-GCM at rest, adaptive rate limiting, input validation, alerting, and automated scanning.
Issue: Without explicit controls, an AI API is vulnerable to abuse (burst traffic), unsafe inputs (command/path traversal), leaked secrets, and silent security regressions from dependencies.
Solution: Implemented five security modules: encryption at rest, enhanced rate limiting, advanced input validation, security monitoring + alerts, and vulnerability scanning with report generation.
Used In: The RADXA AI Suite TypeScript backend security package (`backend-ts`).
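Two of these controls, burst-limiting and path-traversal validation, can be sketched compactly. The actual suite is TypeScript; this Python sketch only illustrates the ideas, and the class and function names are assumptions:

```python
import os
import time

class RateLimiter:
    """Token-bucket limiter: sustained traffic is served at `rate`,
    bursts beyond `burst` requests are rejected until the bucket refills."""
    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = burst       # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def is_safe_path(base: str, user_path: str) -> bool:
    """Reject path-traversal inputs: the resolved path must stay under base."""
    full = os.path.realpath(os.path.join(base, user_path))
    return full.startswith(os.path.realpath(base) + os.sep)
```

Resolving with `realpath` before comparing prefixes also defeats `..` sequences hidden behind symlinks, which naive string checks miss.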
A documented deployment automation pattern for Firebase: scripted CLI operations, account profiles, environment routing, and Discord webhook alerts.
Issue: Manual Firebase deployments are easy to mis-target (wrong project/hosting target), hard to audit, and slow to coordinate without realtime status notifications.
Solution: Centralized deployment configuration into an `accounts.json` profile, added API endpoints for account switching, and integrated Discord webhooks for start/success/failure notifications with log snippets.
Used In: RADXA AI Suite deployment automation documentation and tooling.
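The profile-selection and notification pieces can be sketched without touching the network. The `accounts.json` shape and function names below are illustrative assumptions; the webhook body follows Discord's documented JSON format (`{"content": ...}`):

```python
import json

def select_profile(accounts_json: str, name: str) -> dict:
    """Pick a deployment profile (project id, hosting target) from accounts.json,
    failing loudly on an unknown name instead of deploying to the wrong project."""
    profiles = json.loads(accounts_json)
    if name not in profiles:
        raise KeyError(f"unknown account profile: {name}")
    return profiles[name]

def discord_payload(status: str, project: str, log_snippet: str = "") -> dict:
    """Build the JSON body POSTed to a Discord webhook URL for
    start/success/failure notifications, with an optional log snippet."""
    msg = f"Deploy {status} for `{project}`"
    if log_snippet:
        msg += f"\n```\n{log_snippet[:500]}\n```"
    return {"content": msg}
```

Centralizing the project/target mapping in one file means mis-targeted deploys fail at profile lookup rather than after Firebase has already pushed to the wrong site.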
A small, practical benchmark showing how quantizing the attention KV cache can materially reduce RAM usage on edge hardware.
Issue: Large models can be loadable, but the KV cache can still consume meaningful memory as context grows, limiting concurrency and increasing OOM risk.
Solution: Benchmarked KV cache quantization modes (default vs q8 vs q4) at a fixed context window and compared startup time, request latency, RSS, and KV cache footprint.
Used In: Engram AI benchmark runs for CPU GGUF inference (llama.cpp) on ARM64.
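Why quantization helps is plain arithmetic: the KV cache stores one key and one value per layer, per KV head, per position, so halving bytes-per-value halves the footprint. The model shapes below are illustrative, not the benchmarked model:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_value: float) -> float:
    # Factor of 2: one K tensor and one V tensor per layer.
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_value

# Shapes roughly in the range of a small grouped-query model (illustrative only).
f16 = kv_cache_bytes(32, 8, 128, 4096, 2.0)   # default f16 cache
q8  = kv_cache_bytes(32, 8, 128, 4096, 1.0)   # ~1 byte per value
q4  = kv_cache_bytes(32, 8, 128, 4096, 0.5)   # ~0.5 bytes per value
```

With these example shapes the f16 cache works out to 512 MiB at a 4096-token window, q8 to 256 MiB, and q4 to 128 MiB, which is why the savings matter on memory-constrained ARM64 boards even when the weights themselves fit.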
Benchmarking local LLM inference on RK3588 and why NPU acceleration (RKLLM) is the difference between real-time chat and unusable latency.
Issue: CPU-only inference on small models was too slow for interactive UX, and some NPU model runs initially failed for non-runtime reasons (corrupted downloads or wrong target platform conversions).
Solution: Benchmarked CPU (Ollama) vs NPU (RKLLM), applied system and inference parameter optimizations, and documented failure modes to distinguish model-file issues from NPU/runtime issues.
Used In: Engram AI (local-first Discord bot) running on RK3588.
How to use Ansible and adcli to safely remove a Linux server's computer object from Active Directory during decommissioning.
Issue: Decommissioned Linux servers left stale computer objects behind in Active Directory, and manual cleanup was inconsistent, error-prone, and hard to audit.
Solution: Built an Ansible-driven runbook around adcli that confirms the host's domain membership, deletes the computer object, and verifies removal before the server is retired.
Used In: Ansible playbook pipelines for provisioning, patching, and decommissioning with audit requirements.
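A task sketch of this pattern might look like the following. The variable names (`ad_domain`, `ad_admin_user`) are illustrative assumptions, not the actual playbook; `adcli testjoin` and `adcli delete-computer` are real subcommands:

```yaml
# Hypothetical Ansible tasks; adapt variables and error handling to your estate.
- name: Verify the host is currently joined before attempting removal
  ansible.builtin.command: adcli testjoin
  register: join_check
  failed_when: false
  changed_when: false

- name: Delete the computer object from Active Directory
  ansible.builtin.command: >
    adcli delete-computer {{ inventory_hostname_short }}
    --domain={{ ad_domain }} --login-user={{ ad_admin_user }}
  when: join_check.rc == 0
```

Gating the delete on a successful `testjoin` avoids credential prompts and spurious failures on hosts that were never joined in the first place.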
How to request 90Hz/120Hz rendering and implement static deep-linked app shortcuts to improve mobile application usability.
Issue: The app was locked to standard 60Hz rendering, causing sub-optimal scrolling experiences on devices capable of 90Hz or 120Hz. Additionally, users had to navigate through multiple screens to perform frequent actions.
Solution: Detected 90Hz+ display modes and configured window post-processing preferences for smoother rendering, then implemented static XML-based app shortcuts routed via deep links.
Used In: Modernization of an Android mobile application built with Jetpack Compose.
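Static shortcuts live in a resource file referenced from the manifest. The shortcut id, deep-link scheme, and package names below are placeholders, not the real app's values:

```xml
<!-- res/xml/shortcuts.xml: illustrative static shortcut routed via a deep link -->
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
    <shortcut
        android:shortcutId="new_entry"
        android:enabled="true"
        android:icon="@drawable/ic_add"
        android:shortcutShortLabel="@string/shortcut_new_entry">
        <intent
            android:action="android.intent.action.VIEW"
            android:data="myapp://entries/new"
            android:targetPackage="com.example.app"
            android:targetClass="com.example.app.MainActivity" />
    </shortcut>
</shortcuts>
```

Because the shortcut dispatches a plain VIEW intent with a deep-link URI, the same navigation code path serves long-press shortcuts, notifications, and external links.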
How to use a single, universal Ansible role to deploy static sites, PHP apps, or complex reverse proxies just by changing host variables.
Issue: Maintaining a separate Ansible role for every site type (static HTML, PHP applications, reverse proxies) duplicated logic and let configurations drift apart over time.
Solution: Built one universal role whose behavior is selected entirely through host variables, with safety checks and verification steps so a single tested code path covers every deployment type.
Used In: Linux platform engineering, middleware operations, and datacenter modernization projects in regulated environments.
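The idea is that only host variables change between deployments. The variable names below are hypothetical, not the role's real interface, but they show the shape of the pattern:

```yaml
# host_vars/static-site.example.com.yml — plain static site
site_type: static
server_name: example.com
document_root: /var/www/example.com

# host_vars/app.example.com.yml — same role, PHP application
# site_type: php
# server_name: app.example.com
# php_version: "8.2"

# host_vars/gateway.example.com.yml — same role, reverse proxy
# site_type: proxy
# proxy_backends:
#   - http://10.0.0.11:8080
```

Branching on a single `site_type` variable inside the role keeps all webserver logic, hardening, and handlers in one place instead of three diverging copies.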
How to package Ansible dependencies into a portable, containerized Execution Environment (EE) for consistent automation across runners.
Issue: Automation behaved differently across runners because collection, Python, and system dependency versions drifted between hosts, making failures hard to reproduce.
Solution: Declared all dependencies in an execution-environment spec and built a container image with ansible-builder, so every runner executes playbooks against an identical, versioned toolchain.
Used In: Linux platform engineering, middleware operations, and datacenter modernization projects in regulated environments.
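An execution environment is defined by a single spec file consumed by ansible-builder. The base image and dependency entries below are placeholders, not the actual EE contents:

```yaml
# execution-environment.yml — illustrative ansible-builder (version 3) spec
version: 3
images:
  base_image:
    name: quay.io/ansible/ansible-runner:latest
dependencies:
  galaxy:
    collections:
      - community.general
  python:
    - netaddr
  system:
    - adcli [platform:rpm]
```

Building it with `ansible-builder build -t my-ee:1 .` produces a container image that any runner (AWX, AAP, or a CI job) can pull, which is what makes the automation behave identically everywhere.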