
Memberships

AI Developer Accelerator

11.2k members • Free

45 contributions to AI Developer Accelerator
AI Developer Accelerator — Coaching Call - March 31
VIEW RECORDING - 121 mins (No highlights)

Meeting Purpose
Review AI projects, discuss industry trends, and share development challenges.

Key Takeaways
- AI Accelerates Scope Creep: AI's speed creates "app creep," where clients quickly get 80% of a solution and then demand the "impossible" 20%. Managing this requires strict focus on critical business needs and OKRs.
- New Skills for a New Era: The most valuable skills are shifting from coding syntax to higher-order thinking: asking the right questions, structuring workflows, and understanding business logic.
- Local Models for Data Sovereignty: Repurposing gaming PCs with virtualization (Proxmox) and secure remote access (TailScale/TwinGate) enables running local LLMs, solving client data privacy concerns.
- AI as a Personal Assistant: Developers are using AI to manage their own workflows, from creating voice interfaces for agents to rewriting emotionally charged emails with a professional tone.

Topics
The Challenge: Managing "App Creep"
- Jake's experience highlights AI's double-edged sword: rapid development leads to rapid client demands. Clients quickly get 80% of a solution and then demand the "impossible" 20%, often requiring new hardware or architecture. User feedback is volatile; initial enthusiasm for a new feature fades as expectations rise. Clients often build insecure prototypes (e.g., in OpenClaw) and then expect a secure, production-ready version.
- Proposed Solutions:
  - Strict Scope Management: Focus on critical business needs and OKRs.
  - Strategic Client Selection: Choose clients who respect a defined scope.
  - Reframing the Challenge: View client-created problems as opportunities for long-term engagement.
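The local-models recipe above (gaming PC + Proxmox + Tailscale) can be sketched roughly as follows. This is an illustrative assumption of how the pieces fit together, not the setup shown on the call; the model name and hostname are placeholders.

```shell
# Sketch: serving a local LLM from a repurposed gaming PC (e.g., a Proxmox VM
# with GPU passthrough), reachable only over a private Tailscale network.

# Join the tailnet so remote clients can reach the box without opening public ports
sudo tailscale up

# Pull a local model and serve it on all interfaces (model name is illustrative)
ollama pull llama3
OLLAMA_HOST=0.0.0.0:11434 ollama serve &

# From any other machine on the same tailnet:
curl http://<tailscale-hostname>:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```

Because traffic stays inside the tailnet, client data never transits a third-party inference API, which is the data-sovereignty point made on the call.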
RecapFlow : March 31st Coaching call analysis
šŸ“ SUMMARY A wide-ranging and practically dense call covering Claude Code plugins, self-hosted AI infrastructure, compliance for financial clients, scope creep management, and the long-term future of programming. The strongest recurring themes were the importance of asking good questions, the discipline required to manage fast-moving client expectations, and the growing viability of running AI workloads on local hardware for data-sovereign use cases. Everything Claude Code and the Codex plugin stand out as immediate priorities for anyone building seriously with Claude Code. šŸ’” KEY INSIGHTS Use Claude to rewrite emotionally charged emails before sending. Write what you actually want to say unfiltered, then ask Claude to rewrite it professionally. Patrick, Ryan, and Morgan all do this independently. Patrick described it as eliminating email-related stress entirely. The last mile problem is real and expensive. Getting to 80% quality is easy, 90% is hard, and 95% feels nearly impossible — especially as clients raise expectations after seeing early success. Scope creep has become app creep and system creep. Because AI enables rapid delivery, clients immediately want more. Asking "is this on the critical path?" and "does this affect an OKR?" is now more important than ever. When clients start building with Claude Code themselves, they bring partially-built, insecure repos and expect contractors to consolidate and productionize them — often without pausing their own development. The most valuable skill for working with AI is knowing how to ask good questions. Formulating precise, contextual questions is described as old-school BA-type knowledge that is now more valuable than ever. Curiosity is the single most important skill to cultivate. Patrick's direct recommendation for anyone entering the field. Patrick's mental model for AI adoption: the current moment mirrors the mainframe-to-PC transition. 
Today's AI interfaces are like dumb terminals connecting to centralized compute. A small group of specialists are building deeply now. Mass adoption and personalized local models will follow, just faster than before.
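The email-rewriting trick described in the first insight is straightforward to wire up. The sketch below is a minimal assumption of how it might look with the Anthropic Python SDK; the model name, prompt wording, and helper names are mine, not from the call.

```python
def build_rewrite_prompt(draft: str) -> str:
    """Wrap an unfiltered draft in an instruction to rewrite it professionally."""
    return (
        "Rewrite the following email so it is professional, calm, and concise, "
        "while keeping the core request intact:\n\n" + draft
    )

def rewrite_email(draft: str, model: str = "claude-sonnet-4-20250514") -> str:
    """Send the draft to Claude and return the rewritten version.

    Requires the `anthropic` package and an ANTHROPIC_API_KEY in the environment.
    """
    import anthropic  # third-party SDK; imported lazily so the helper above has no dependencies

    client = anthropic.Anthropic()
    response = client.messages.create(
        model=model,
        max_tokens=500,
        messages=[{"role": "user", "content": build_rewrite_prompt(draft)}],
    )
    return response.content[0].text
```

The workflow is exactly what Patrick described: type the email you actually want to send, then ship only the rewritten output.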
AI Developer Accelerator — Coaching Call - March 31st
Last week someone built a face authentication SDK, someone else accidentally landed a production contract through a GitHub search agent, and we ended the call still wondering whether ChatGPT and Claude will ever just *talk to each other.* If you missed it, the recap is worth a read.

šŸ“ž HOW THE CALLS WORK
The calls can run 2+ hours. We want to make sure we're respecting everyone's time. Especially those of you who actually show up. Here's the structure:
šŸ‘‰ Reply to this post with your questions before the call
šŸ‘‰ If you submit a question and you're on the call, you go first
šŸ‘‰ We work through questions in the order they came in
šŸ‘‰ Then we open it up for everyone else
If you can't make the call but want your question answered, drop it in the comments. We'll get to it. But priority goes to people who are there. The goal is simple: if you're taking the time to show up, you shouldn't have to wait behind questions from people who aren't even on the call.

There's plenty to pick back up from last week. Ty promised a live FaceGate demo where we actually try to *break* his biometric SDK — that should be fun. Elijah's competition win is finally coming off embargo. And Brandon was supposed to have his customer success manager agent built and ready to show. Drop your questions below and let's keep the momentum going.

šŸ”— ZOOM LINK (save this)
https://us06web.zoom.us/j/81995207847?pwd=Xe6u6LmIQOmCP5VTnOwWYjDBfZNKGB.1

šŸ“… WHEN
Tuesday March 31st at 6PM ET

Looking forward to seeing you on the call!
0 likes • 4d
@Haruki Saito The call is actually finished for this week, but I can carry your question over to next week's call if you want. Let me know. Otherwise, feel free to post the question openly in the forum if you want a faster answer.
RecapFlow : March 24th Coaching call analysis
šŸ“ SUMMARY A dense, high-signal call covering self-improving AI pipelines, governance-first agent architecture, Stripe best practices, biometric authentication, and mobile ideation workflows. The strongest through-line: the shift from using AI interactively to building systems that run autonomously — defining quality rubrics, letting agents evaluate and improve their own outputs, and removing the human from the loop wherever possible. Practical tool recommendations (CMux, Codex for autonomous tasks, Terraform for infrastructure, Discord over Telegram for agent memory) were grounded in real production experience. The IronClaw white paper, Ty's FaceGate SDK demo, and Patrick's RecapFlow auto-research experiment are the most concrete follow-ups to watch for in the coming week. šŸ’” KEY INSIGHTS Self-Improving AI Pipelines — The Most Actionable Framework Shared This Call Build systems that eliminate the human from the evaluation loop. Define explicit pass/fail criteria and a point-based rubric, build a representative input suite (e.g., 60 test cases), let the AI run experiments, grade its own outputs, identify failure modes, update its own system prompt, and iterate. Apply this at the individual pipeline step level first, then at the full system level. Brandon uses Codex for this because it runs autonomously for long periods without prompting for human confirmation. Expensive but produces measurable, compounding improvement. The Hardest Part Is Defining "Good" For mathematical outputs, scoring is straightforward. For language outputs, defining quality is the core challenge. Patrick's approach: use mechanical checks (did all URLs get extracted? is compression within bounds?) for the fast inner loop, and community feedback as the slow outer loop for subjective quality. Governance Before Features for AI Agents Prioritize governance before adding capabilities. 
Recommended architecture: read-only access to most systems, human-in-the-loop via Discord or Telegram for any state-changing action, full audit trail, and a smart router using local models (Ollama) for routine tasks and frontier models only for complex ones. This is the core principle behind the IronClaw framework.
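The rubric-then-iterate loop described above can be sketched in a few lines. This is a toy hill-climb under my own assumptions, not Brandon's actual Codex setup: `run_pipeline` stands in for a real model call, and the "update your own prompt" step is reduced to appending a hint for the first observed failure mode.

```python
def grade(output: str, required_keywords: list[str]) -> int:
    """Point-based rubric: one point per required keyword present in the output."""
    return sum(1 for kw in required_keywords if kw in output)

def run_pipeline(prompt: str, case: str) -> str:
    """Stand-in for a model call; a real pipeline would send prompt + case to an LLM."""
    return f"{prompt}: {case}"

def improve(prompt: str, suite: list[tuple[str, list[str]]], rounds: int = 5) -> tuple[str, int]:
    """Score the prompt against the test suite, patch in a fix for the first
    failure mode found, and repeat until every check passes (or rounds run out)."""
    score = 0
    for _ in range(rounds):
        score = sum(grade(run_pipeline(prompt, case), kws) for case, kws in suite)
        for case, kws in suite:
            missing = [kw for kw in kws if kw not in run_pipeline(prompt, case)]
            if missing:
                prompt = prompt + " " + missing[0]  # crude "update your own prompt" step
                break
        else:
            break  # no failure modes left; stop iterating
    return prompt, score
```

The structure mirrors the call's advice: an explicit scoring function, a representative input suite, and an automated revise-and-retest cycle with no human in the loop.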
1 like • 9d
@Juancho Torres Awesome, happy it helped! If you're searching for more info on the subject, it's called IaC (Infrastructure as Code). Here's a video I really like on the topic (a little old, but still very good): https://www.youtube.com/watch?v=fEIIxZUf4co
AI Developer Accelerator — Coaching Call - March 24
VIEW RECORDING - 151 mins (No highlights)

Meeting Purpose
A coaching call for AI developers to share progress and discuss advanced techniques.

Key Takeaways
- Self-improving AI systems are the new frontier. Brandon is building systems that eliminate human bottlenecks by using AI to self-evaluate and refine its own prompts against a clear scoring function.
- Agentic DevOps is a force multiplier. Juan is using AI agents with AWS CLI access to build infrastructure, while Scott and Patrick are collaborating on "Ironclaw," a secure, local-first assistant with audited human-in-the-loop governance.
- Biometric auth is solving real-world problems. Ty is developing "FaceGate," a web-based Face ID SDK for shared devices (e.g., kiosks), which eliminates password friction but introduces significant biometric data security risks.
- Community-led AI development is accelerating. Patrick is using community feedback on RecapFlow to train a weekly recap generator, demonstrating a powerful model for rapid, iterative improvement.

Topics
Self-Improving AI Systems
- Brandon's "Eliminate Myself" Approach: Building systems where the human is removed from the evaluation loop, forcing hyper-clarity on the scoring function.
  - Method: Use a tool like Codex to run infinite experiments, refining prompts based on a clear rubric.
  - Application: SOAP Narratives: AI generates narratives and follow-up questions, then grades its own output. SDLC: AI builds features in a sandbox; the goal is to automate the final evaluation step, currently a human bottleneck.
- Patrick's RecapFlow Experiment: Integrating an auto-research loop to improve a weekly recap generator, as a two-part loop:
  - Mechanical Validation: A fast, internal loop checks for objective criteria (e.g., correct link count, no markdown leakage).
  - Subjective Improvement: Community comments on the recap will be fed back as training data to refine the AI's output over time.
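The "mechanical validation" inner loop described above amounts to a handful of objective checks. Here is a minimal sketch under my own assumptions: the specific threshold, regex, and function name are illustrative, not RecapFlow's real configuration.

```python
import re

def validate_recap(source_text: str, source_urls: list[str], recap: str,
                   max_ratio: float = 0.2) -> list[str]:
    """Return a list of failed mechanical checks (empty list means the recap passes)."""
    failures = []
    # 1. Every URL from the source transcript must survive into the recap.
    missing = [u for u in source_urls if u not in recap]
    if missing:
        failures.append(f"missing URLs: {missing}")
    # 2. No raw markdown leakage (the recap is rendered as plain text).
    if re.search(r"\[.+?\]\(.+?\)|\*\*|##", recap):
        failures.append("markdown leakage detected")
    # 3. Compression within bounds: the recap must be much shorter than the source.
    if len(recap) > max_ratio * len(source_text):
        failures.append("recap too long: compression out of bounds")
    return failures
```

Because every check is objective, this loop can run fast and unattended; the subjective side (did the recap capture what mattered?) stays in the slower community-feedback loop.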
Patrick Chouinard
Level 5
331 points to level up
@patrick-chouinard-8756
AI strategist & IT generalist building local LLM stacks, RAG chatbots & automation pipelines. Pragmatic, future-focused, and debate-ready.

Active 2h ago
Joined Jun 27, 2025
Montreal, Quebec, Canada