Our Approach: AI-Human Collaboration Principles
Core Philosophy
Our approach to AI-augmented development is built on this fundamental principle:
AI tools are collaborative partners, not magic solutions.
Just as you wouldn't expect a new team member to deliver quality work without proper onboarding, clear requirements, and iterative feedback, AI tools require the same structured approach to collaboration. The Internet is abuzz with Vibe Coding tourists who are somewhere between disappointed and maddeningly frustrated.
The Team Member Analogy
Working with AI is remarkably similar to working with a highly capable but inexperienced developer (one that is also, ironically, as naive and blameless as a three-year-old).


What AI Needs (Like Any Team Member)
- Clear Specifications: Detailed requirements, not vague requests
- Context and Background: Understanding of project goals and constraints
- Clear and Specific Prompts: Prompts that include attachments and line references to the relevant context, background, and specifications
- Iterative Feedback: Regular check-ins and course corrections
- Well-Defined Interfaces: Clear inputs, outputs, and expectations
- Structured Communication: Consistent formats and protocols
What AI Provides (Like a Skilled Contributor)
- Rapid Prototyping: Quick generation of initial implementations
- Eagerness to Apply Often-Skipped Best Practices: Meaningful commit messages, code comments, documentation updates, continuous test coverage, and changelogs
- Pattern Recognition: Identification of common structures and approaches
- Consistent Output: Reliable formatting and structure adherence
- Broad Knowledge: Access to extensive development patterns and practices
- Cross-Functional Competencies: While human developers tend to specialize in some related set of masteries, such as Back-End, Front-End, DevOps, or Data Science, AI models are competent across all of them
- Assistance with Developer Blind Spots and Atrophy: AI models are strong in many areas that developers often never mastered or have long forgotten:
  - Willingness to read through the entirety of documentation and instructions (though they will forget it quickly)
  - Complex and less-used git and version-control commands
  - Complex and less-used command-line commands
  - Fluency with Diagrams as Code, and willingness to thoroughly document all changes as they are made (if prompted)
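As a hedged sketch of the kind of less-used git commands a copilot can surface on demand, the demo below is self-contained in a throwaway repository; the file names and commit messages are placeholders of our own:

```shell
# Self-contained demo: less-used git commands in a throwaway repository.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name demo
echo "v1" > app.txt && git add app.txt && git commit -qm "first"
echo "v2" > app.txt && git commit -qam "second"

git reflog -n 2                   # every place HEAD has pointed, even "lost" commits
git log --oneline --graph --all   # whole-history graph across all branches
git shortlog -sn HEAD             # commit counts per author
```

Commands like `git reflog` and `git bisect` are exactly the ones developers forget between uses and a copilot recalls instantly.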
What AI brings that No Human Can:
- 24/7 Availability: Always ready to assist and iterate.
- 100% Can-Do Attitude: Models greet any task, no matter how arduous, with a complimentary if not sycophantic attitude.
- Industry-Wide, Instant-Access Pattern Recognition: The LLM will be incredibly knowledgeable about pretty much any language, framework, library, programming pattern, or best practice.
- Instant First Drafts: If the upfront investment in documentation is good, copilots can produce large amounts of code almost instantly, as long as the model vendor APIs are not over-trafficked. Their first drafts are often more error-free than later iterations because they reproduce established patterns.
- Instant Error Recognition: Errors generated by programming languages and frameworks are notoriously hard for humans to read. A common way to lose time and focus was to copy error messages into Google and Stack Overflow and hope to find some kind of explanation; copilots interpret them instantly.
- Fuzzy Find on Caffeine: Copilots can search large codebases for instances, patterns, and syntax errors, often based on loose requests.
The Challenges AI will Introduce:
- Leaps into generating large volumes of unnecessary code rather than well-crafted, well-architected code
- Will overwrite working, valuable code that no engineer would even think to overwrite.
- Creates a need for hyper-vigilance with version control, which changes the pace at which commits and pull requests happen.
- Needs continuous orientation to recognize or generate modular code with small individual files.
- Oblivious to standard project files that developers would always check, such as utils, styles, and routes. These must be re-fed at every prompt, or the model must be explicitly told to go to the path and review them.
- Defaults to generic variable and component names that can create naming collisions and look meaningless to humans.
  - Struggles to use meaningful names that reveal project context.
  - Meaningful naming must be made explicit in prompts.
- Lazy and stubborn when instructions are not completely clear. Prone to taking shortcuts, like adding unnecessary libraries. Will often change one or two lines and say it's fixed and working when it is not even close.
- Oblivious to its own ignorance: the model will not proactively ask questions or reveal confusion.
- Models assume immediate comprehension of project, task, and prompt, and will communicate with 100% confidence. This leads to rabbit holes, reversions, clean-up and refactoring, or bug squashing.
- Rarely asks follow-up questions that improve understanding. Thus, the ACE toolkit needs to be fully written and loaded into the context window, with a subsequent kit ready for course correction or the next task.
- Does not learn: ironically, once a model is trained and available, it no longer learns without fine-tuning workflows. There is nothing resembling either working or long-term memory. The only fix, an arduous and imperfect one, is to document everything continuously and reintroduce the necessary context into the context window at every step. Even then, the model will repeat the same mistakes over and over.
- Quick to overwhelm: Feeding the context window works wonders, but relatively small work histories can lead to Context Rot and an "overwhelmed" copilot. The model is also unaware it is overwhelmed, so it will not tell you. You will just notice things taking longer, the model second-guessing itself, or tangents that look quite like a nervous breakdown.
Key Principles
1. Documentation-Driven Development
Before Copilots:
Before adopting copilots, thorough documentation was often developed AFTER code had been written. Architects, Product Managers, and UI Designers would make the documentation needed for the Design to Engineering Handoff. The real documentation was usually a reflective output or deliverable.
With Copilots:
To get the most out of Human + Copilot cooperative workflows, thorough documentation needs to be developed BEFORE and DURING the development phase. Documentation also needs its own framework: if all the information is in one specification, the specification plus the prompt and action will likely exceed the context window, and really key information could be forgotten.
In our experience, it pays to develop a framework of different kinds of documentation that serve different use cases, acting as setup, intervention, or wrap-up for tasks in the development cycle.
Diagrams are Lifeblood
Of course, architectural diagrams had their role and were helpful before copilots. Now, they are essential. AI models are genius at generating Diagrams as Code or Diagrams-from-Text; in our experience, Mermaid.js, an open-source JavaScript library, has everything we've needed.
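As a minimal illustration of Diagrams as Code (the workflow and node names here are our own, not a prescribed standard), a Mermaid flowchart describing a prompt-to-commit loop:

```mermaid
flowchart LR
    Spec[Living Specification] --> Prompt[Comprehensive Prompt]
    Prompt --> Copilot[Copilot generates code]
    Copilot --> Tests{Tests pass?}
    Tests -- yes --> Commit[Commit]
    Tests -- no --> Revert[Revert and re-prompt]
    Revert --> Prompt
```

A copilot will happily generate and update diagrams like this alongside the code it changes, if prompted.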
1a. ACE Toolkit: Recommended Documents
Our rabbit holes and endless hours of frustration have led us to a stable set of documents:
Living Specifications, Blueprints, Reminders, and Prompts
| Documentation Type | Living Specifications | Blueprints | Reminders | Prompts |
| --- | --- | --- | --- | --- |
| Use Patterns | Vital to the kickoff prompt. Refer to it accompanying every prompt or as necessary. | Vital to the kickoff prompt. Refer to it accompanying every prompt or as necessary. | As needed, but usually one or more involved in every prompt. | Developed on the fly prior to prompting for development. |
| Development Phase | Early, prior to moving into design; often started and then iterated on for prolonged periods before moving to development. | Iteratively, usually to synthesize patterns across the project or across many projects, per developer or team preference. | Upon repeat frustration with the same naivety, forgetfulness, or assumptions. | Instead of just writing a prompt in the chat interface, reference the specification and work with the copilot in the role of product manager to develop a comprehensive prompt for a single-task scope of work. |
| Frequency of Use | Frequently during the build, but rarely if ever after. | As needed, but usually front-loaded for context accompanying a prompt. | As needed, but usually multiple times in a single work session. | Usually once, or iteratively a few times if there is much discussion, iteration, or resets and reversions to make another attempt. |
| Cognitive State | Planning | Reflective | Reflective | Planning |
2. Specification-Driven Development
Instead of asking "build me a login system," create and improve on templates and specifications that provide:
- Diagrams as Code that show various kinds of architecture context.
- Technical stack, choices, and available libraries
- Technical constraints and requirements
- Scopes for different iterations or even versions
- User stories and acceptance criteria
- Integration points with existing systems
- Security and performance requirements; even at the prototype stage, the copilot will get confused without them being clarified
- UI/UX guidelines, links to inspiration or sources to copy from, and direct access to mockups if possible
3. LLM-TDD
Not that long ago, Test-Driven Development was nearly mandatory practice, but Hacker Culture cast it aside. Well, TDD is back to being mandatory if you want to have drama-free cooperation with AI models.
Good news: while I've never met a software developer who likes writing tests, all AI models are almost eager to write them (AI likes best practices). They are also magically fast and accurate at writing tests.
Tests that serve as an additional input to the prompt/task are noticeably valuable, as they are a really good way to focus the copilot on the task at hand.
Tests also prevent disaster. As discussed before, AI models will naively and enthusiastically overwrite working, valuable code... and not even notice that they did. While some people actually read through every line of code written and changed before accepting, our experience is that when documentation and prompts are airtight, you can get thousands of lines of new or changed code in less than 2 minutes. Clicking accept and praying for the best is tempting. The only way to catch that kind of disaster quickly is to run a test, revert to the last commit, and prompt again while explicitly stating: "Do not overwrite code."
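That recovery loop can be sketched in a few lines. This is a self-contained simulation, not a prescribed workflow: `run_tests` stands in for your real test runner (npm test, pytest, etc.), and the file names are placeholders:

```shell
# Simulation of the "run tests, revert if the copilot clobbered code" loop.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q . && git config user.email you@example.com && git config user.name demo
echo "working code" > app.txt
git add . && git commit -qm "known-good state"

echo "copilot regression" > app.txt              # copilot overwrites working code

run_tests() { grep -q "working code" app.txt; }  # stand-in for your real test runner
if ! run_tests; then
  git reset --hard -q HEAD   # tests failed: revert to the last commit,
fi                           # then re-prompt: "Do not overwrite code."
cat app.txt                  # -> working code
```

The key habit is committing a known-good state before every prompt, so `git reset --hard HEAD` always has something safe to fall back to.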
4. Iterative Refinement
- Start with basic requirements and iterate
- Test and validate each iteration
- Refine specifications based on results
- Build complexity gradually
5. Human-AI Pair Programming
- Humans provide architectural decisions and creative solutions
- Continuous code review and quality assurance
- Regular alignment on project direction
6. Documentation as Communication
- Maintain living specifications
- Document decisions and reasoning
- Create reusable templates and patterns
- Share knowledge across team members
7. Quality First
- AI-generated code must meet the same standards as human code
- AI-generated code often will not meet human standards on the first attempt at a prompt. Don't be frustrated.
- Implement proper testing and validation workflows
- Regular security and performance reviews
- Code style and convention adherence
Implementation Strategy
Phase 0: Team ACE content repository
- Create or access your documentation repository used for this process.
- Include example "Rules" or "Rulesets" that can be used for the different AI Native IDEs. (We switch between Windsurf IDE, Cursor, and Claude Code.)
- Make sure everyone knows how to create snippets in their text editors or IDEs; snippets are usually used for comments or boilerplate code, but they are very helpful as a substitute for Reminders.
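For example, a Reminder stored as a VS Code workspace snippet (in a `.vscode/*.code-snippets` file; the name and prefix below are placeholders of our own) might look like:

```json
{
  "Reminder: no overwrites": {
    "prefix": "rem-no-overwrite",
    "body": [
      "Do not overwrite or delete any existing working code.",
      "Only modify the files explicitly listed in this prompt."
    ],
    "description": "Reusable Reminder text to paste into copilot prompts"
  }
}
```

Typing the prefix then expands the full Reminder text into the chat box, which beats retyping it at every prompt.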
Phase 1: Iterate to a Living Specification
- Ask the AI Copilot to take on the role of "Senior Product Manager brought in to save a project that is behind schedule."
- Iterate cooperatively with your Senior Product Manager assistant on the Living Specification
- Use commits liberally; the copilot can go haywire and edit things that were not requested. (We have Reminders that say to never overwrite anything unless specifically asked.)
- Chunk work into reasonable "Phases" -- a phase should be something one Human-AI Pair can reasonably accomplish in one prolonged sitting.
- Chunk Phases into Prompts. A Prompt will start in the specification, but as it becomes coherent and robust, including references to Reminders and Blueprints, it should be copy-pasted into its own file.
- Include path references to any relevant documentation, codebases, repositories, or files, even recent projects that were successful.
- Create template structures for common requests
- Set up quality assurance processes
Phase 2: Integration
- Integrate AI tools into existing workflows
- Train team members on effective AI collaboration
- Establish feedback loops and improvement processes
- Document successful patterns and practices
Phase 3: Optimization
- Refine AI prompts and specifications based on experience
- Automate repetitive AI interactions
- Scale successful patterns across projects
- Continuous improvement of AI-human collaboration
Success Metrics
- Code Quality: AI-generated code meets or exceeds human standards
- Development Speed: Measurable improvements in delivery velocity
- Team Satisfaction: Developers find AI tools helpful, not hindering
- Maintainability: AI-augmented code is as maintainable as traditional code
- Learning Curve: New team members can quickly adopt AI workflows
Anti-Patterns to Avoid
❌ Vague Requests
- "Make it better"
- "Add some features"
- "Fix the bugs"
❌ Over-Reliance on AI
- Accepting all AI suggestions without review
- Skipping human architectural decisions
- Ignoring edge cases and error handling
❌ Under-Communication
- Not providing enough context
- Failing to specify constraints
- Assuming AI understands implicit requirements
✅ Effective Collaboration
- Detailed, specific requirements
- Regular review and validation
- Clear communication of constraints and expectations
- Human oversight of architectural decisions
Remember: AI is a powerful collaborator when treated as such. The key to success is clear communication, iterative development, and maintaining human oversight of critical decisions.