{"id":22037,"date":"2026-04-30T11:01:36","date_gmt":"2026-04-30T11:01:36","guid":{"rendered":"https:\/\/ideainthebox.com\/index.php\/2026\/04\/30\/audit-yourself-to-get-more-from-genai\/"},"modified":"2026-04-30T11:01:36","modified_gmt":"2026-04-30T11:01:36","slug":"audit-yourself-to-get-more-from-genai","status":"publish","type":"post","link":"https:\/\/ideainthebox.com\/index.php\/2026\/04\/30\/audit-yourself-to-get-more-from-genai\/","title":{"rendered":"Audit Yourself to Get More From GenAI"},"content":{"rendered":"<div>\n<figure class=\"article-inline\">\n<img decoding=\"async\" src=\"data:image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\" data-orig-src=\"https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2026\/04\/Gupta-1290x860-1.jpg\" alt=\"\" class=\"lazyload wp-image-126888\"><figcaption>\n<p class=\"attribution\">Carolyn Geason-Beissel\/MIT SMR | Getty Images<\/p>\n<\/figcaption><\/figure>\n<p><span class=\"smr-leadin\">More than a year into using<\/span> generative AI daily, I wondered whether I was getting the most out of my AI use. There was no benchmark or feedback loop, and no one was grading my sessions with ChatGPT and Claude \u2014 until I created a self-audit.<\/p>\n<p>I did what I\u2019ve always done when faced with a process that lacked measurement. I studied every method I could find \u2014 prompting guides, conversations with colleagues, my own session patterns. I used AI to help me use AI better. Over time, I built a single self-audit prompt \u2014 one that encapsulates more than 30 habits for getting the most from AI.<\/p>\n<p>Each time I ran the self-audit prompt, the output got sharper. The discipline became reflexive for me. That\u2019s the real value of the self-audit: It made me better at using AI, in every session.<\/p>\n<p>Now, at the end of any significant AI session, I simply prompt: \u201cReview this session and assess it against my AI habits guide. 
Score how I did, identify what I missed, and guide me to apply missed habits.\u201d Within a few minutes, I get a diagnostic that is uncomfortably specific about what I missed. I now have an answer to a key question: whether my <em>process<\/em> was good, not just the GenAI output. <\/p>\n<p>A recent field experiment confirmed what I found through my experience. A research team that included MIT Sloan professor Jackson Lu randomly assigned 250 employees at a technology consulting firm in China to either use ChatGPT to assist with their work or to work without it.<a id=\"reflink1\" class=\"reflink\" href=\"https:\/\/sloanreview.mit.edu\/article\/audit-yourself-to-get-more-from-genai\/#ref1\">1<\/a> The employees with ChatGPT access were judged as significantly more creative by both their supervisors and outside evaluators. But the gains showed up exclusively among employees with strong metacognitive strategies \u2014 those who reflected on their own thinking, recognized knowledge gaps, and refined their approach when results were weak. That finding underscores that metacognition \u2014 thinking about your thinking \u2014 is the missing link between simply using AI and using it well.<\/p>\n<p>AI widens the gap between disciplined and undisciplined professionals. People who skip the discipline generate more volume without more insight \u2014 a pattern consistent with what researchers at the University of California, Berkeley\u2019s Haas School of Business called \u201cunsustainable intensity\u201d in findings published in early 2026.<a id=\"reflink2\" class=\"reflink\" href=\"https:\/\/sloanreview.mit.edu\/article\/audit-yourself-to-get-more-from-genai\/#ref2\">2<\/a><\/p>\n<p>Knowing how to use AI is good \u2014 but to get the most value from the tool, you need to know whether you\u2019re using it well. 
The self-audit gives you that.<\/p>\n<h3>A Self-Audit That Measures Five Key Goals<\/h3>\n<p>My self-audit prompt is organized across five goals: set up, refine, verify, own, and systematize. These goals represent a practice that experienced professionals have instinctively followed for years, long before generative AI\u2019s arrival. You don\u2019t need technical training to score well on this audit. You need to replicate the thinking and brainstorming process that you are likely already good at when conducting competitive research, responding to requests for proposals (RFPs), engaging in acquisition analysis, and planning a sales presentation, for example. It is your skill in the application of AI, not the AI itself, that makes the difference.<\/p>\n<p>The self-audit assesses each generative AI session with five questions linked to each of the goals: <\/p>\n<ul>\n<li>Set up: Did you prepare the AI before asking it to work? <\/li>\n<li>Refine: Did you iterate on your own thinking, or just reprompt? <\/li>\n<li>Verify: Did you verify before trusting? <\/li>\n<li>Own: Did you make the output yours, or accept the default? <\/li>\n<li>Systematize: Did you build something reusable, or close the chat and start over?<\/li>\n<\/ul>\n<p>You won\u2019t score well on all five goals in every session \u2014 nor should you. But knowing which ones you missed, and why, enables you to change your next session. Think of it as AI holding a mirror to your own ability. It gets sharper every time you make it your own.<\/p>\n<p>To illustrate what strong performance looks like at each goal, and what the self-audit is measuring, I applied the audit to an actual competitive due diligence analysis on a $5 billion global services company. Details have been modified for confidentiality, but the habits, prompts, and results are drawn from actual chat sessions. I\u2019ll focus on the impact one goal at a time.<\/p>\n<h4>1. 
Set Up: Pass the Intern Test<\/h4>\n<p><strong>What the self-audit measures:<\/strong> Did you prepare the AI with sufficient role, context, constraints, and materials before asking it to work \u2014 or did you jump straight to a question?<\/p>\n<p>The most consequential decision in any AI interaction happens before the first prompt. It\u2019s the decision to prepare.<\/p>\n<p>I tell the AI who it should be, what it has to work with, and what I need it to produce. \u201cYou are an elite research analyst specializing in competitive intelligence. Here are the target company\u2019s last two annual reports and its most recent earnings-call transcript. Assess this company\u2019s ability to disrupt our core business within 18 months and recommend our strategic response.\u201d That prompt will produce far better output than \u201cTell me about this competitor.\u201d<\/p>\n<p>I call this the \u201cintern test.\u201d If you handed your prompt to a brand-new hire with no context about your company, your industry, or your priorities, would they know what to do? If not, why would you expect your AI to?<\/p>\n<p>Most readers will likely pass this test. Any GenAI prompting guide or video covers the basics of setup.<\/p>\n<p>What gets overlooked is making clear what setup should <em>not<\/em> do \u2014 the negative constraint. I specify what I do not want: \u201cDo not give me a generic SWOT. Do not hedge every statement. Do not define terms I already know.\u201d And upload your materials. The more context you provide, the more accurate the output. It\u2019s like telling a new team member \u201cFigure out our competitive position\u201d versus handing them your last three strategy decks and customer feedback.<\/p>\n<p>Two additional practices make setup more effective. Before a significant AI chat, I run a preflight check: \u201cWhat does a great outcome look like? 
What are the three most important things to get right?\u201d After the first good draft, I generate a bridge summary so context carries forward, especially when I\u2019ll be taking a long break between prompts or need to transition to a new chat. A bridge summary is especially valuable if you tend to have long, multipart exchanges over days or even weeks. (In one case, Claude suggested doing so at time intervals to avoid having the conversation get too complicated.)<\/p>\n<p>In the due diligence scenario, the difference in outputs before and after the self-audit was stark. While my first prompt was solid, the negative constraints and a preflight check were missing. The variable was me. What made the biggest difference? The negative constraint. Once I told the AI what not to do \u2014 no generic SWOT, no hedging, no defining terms I already know \u2014 the output became richer in insight and started reading like a briefing, not a book report.<\/p>\n<h4>2. Refine: Pass the Rethink Test<\/h4>\n<p><strong>What the self-audit measures:<\/strong> Did you truly iterate on your own instructions and thinking, or did you simply reprompt for a better answer?<\/p>\n<p>The first output from any AI session is a draft, not a deliverable. The real value comes from iteration. But the most productive iteration improves your own instructions, not the AI\u2019s answer.<\/p>\n<p>That\u2019s metacognition in action. The person who pauses to ask, \u201cWhat did I fail to specify? What assumption did the AI make that I should have preempted?\u201d is exercising exactly the reflective discipline that separates high performers from the rest. AI rewards those who rethink their own instructions \u2014 not those who rephrase the same request.<\/p>\n<p>I started catching my own patterns. 
Sometimes the output sounded right, but I couldn\u2019t explain <em>why<\/em> \u2014 so I\u2019d ask the AI to walk me through its reasoning, and the gaps would surface. Other times, I\u2019d catch myself reprompting the same request with slightly different words and realize that the real problem was that I hadn\u2019t broken the task down. The hardest one to admit: When I still couldn\u2019t get what I wanted, it was usually because I couldn\u2019t describe the desired goal clearly enough. Pasting in an example of output that showed what I was after worked better than trying to describe it.<\/p>\n<p>One of the most powerful refining habits is embarrassingly simple: Ask the AI what you should be asking. \u201cWhat question should I be asking that I am not currently asking?\u201d That one prompt has produced more valuable insights than any other, in my experience.<\/p>\n<p>When I applied these habits to the due diligence, they surfaced a critical insight I\u2019d overlooked: The competitor\u2019s employee sentiment data contradicted its public narrative of a thriving digital transformation. That disconnect between external messaging and internal reality changed my entire threat assessment. I never would have discovered that if I hadn\u2019t challenged my own assumptions.<\/p>\n<h4>3. Verify: Pass the Trust Test<\/h4>\n<p><strong>What the self-audit measures:<\/strong> Did you independently verify the AI\u2019s claims, check its sources, and stress-test its confidence \u2014 or did you trust fluent output at face value?<\/p>\n<p>AI output typically reads well \u2014 which can be a problem. It\u2019s linguistically fluent and structurally polished, even when the underlying claims are fabricated, outdated, or mathematically wrong. This is a new kind of quality risk, and it misleads experienced professionals more often than they\u2019d like to admit.<\/p>\n<p>I once asked AI to summarize the regulatory history of the credit card industry, which I know well. 
The response was beautifully written, logically structured, and completely wrong on two key regulatory revisions. It read like an A-minus term paper from a student who\u2019d skipped the reading. I almost didn\u2019t catch it \u2014 because it sounded right. That\u2019s what worried me. I knew the domain well, and I still nearly walked into a committee meeting with hallucinated data.<\/p>\n<p>Since then, I\u2019ve built verification into my routine. I ask the AI to surface and rank every assumption behind its answer. I request verifiable sources and note when the model can\u2019t provide them. For anything involving numbers, I ask for step-by-step calculations. I\u2019ve found two habits particularly effective: the temporal awareness check (\u201cWhat is the date of the most recent information you\u2019re drawing on?\u201d) and the confidence stress test (\u201cRate your confidence in each factual claim as high, medium, or low\u201d).<\/p>\n<p>It\u2019s the same discipline we\u2019ve always followed: Verify before you trust; trust before you share.<\/p>\n<p>During the due diligence, the AI flagged that its revenue figures were nine months old and rated its confidence in the regulatory settlement details as medium. When I verified the output independently, I discovered a $42 million enforcement action that the AI had understated. That single verification changed the risk profile of the entire analysis.<\/p>\n<h4>4. Own: Pass the Signature Test<\/h4>\n<p><strong>What the self-audit measures:<\/strong> Did you actively impose your voice, your position, and your audience on the output \u2014 or did you accept AI\u2019s default?<\/p>\n<p>The real work starts here. I used to stop too early. Most of us do.<\/p>\n<p>AI models default to hedged, tonally generic output. Left unguided, they produce content that is competent but indistinct \u2014 written by a smart person who seems to have an opinion about everything yet commits to nothing. 
That\u2019s fine for a rough research summary, but it doesn\u2019t reflect your voice or your style, and it\u2019s not something you\u2019d want to put your name on.<\/p>\n<p>The first complete draft was exactly that: well organized, factually grounded, and thoroughly researched. But it was hedged throughout and read like a report designed to avoid being wrong rather than to help someone make a decision. When I forced the AI to take a clear position on the competitive threat, pushed it for unconventional strategic responses, and asked it to apply champion-challenger lenses, the analysis became richer and something I would stake my reputation on.<\/p>\n<p>One technique I use at this stage is running a draft by a <a href=\"https:\/\/sloanreview.mit.edu\/article\/how-i-built-a-personal-board-of-directors-with-genai\/\">virtual personal board of directors<\/a> that I built. These distinct personas help push my thinking and the AI\u2019s analysis away from the default path toward the edges. I built AI-powered personas modeled on real personalities: v_SunTzu for power dynamics, v_Indra (Nooyi) for the human dimension, v_Mark (Cuban) for commercial realism, and v_Meg (Whitman) for operational rigor. What survives that gauntlet of virtual advisers is sharper and more defensible.<\/p>\n<p>The habit most people underuse is calibrating AI to their own personality: how they think, how they argue, and what they won\u2019t tolerate in a deliverable. Take ownership of the thinking, not just the editing. That\u2019s when the output starts sounding like you.<\/p>\n<h4>5. Systematize: Pass the Reuse Test<\/h4>\n<p><strong>What the self-audit measures:<\/strong> Did you build systems that make your next session better \u2014 or did you close the chat and leave yourself having to start from scratch next time?<\/p>\n<p>Nearly everyone treats each AI session as a stand-alone thread \u2014 which may be productive in isolation, but the value doesn\u2019t compound. 
Here, the discipline shifts from improving sessions to building systems.<\/p>\n<p>Building repeatable processes out of one-off successes is what I do. Yet, early on in my GenAI use, I spent two hours building a detailed competitive analysis that delivered exceptional output \u2014 and then I closed the chat. I\u2019d produced a great deliverable but captured none of the thinking that made it great. I should have known better. When I needed to run a similar analysis a month later, I had to start from scratch \u2014 the same role definition, the same constraints, the same verification steps, all rebuilt from memory. <\/p>\n<p>Three habits make the difference. These are not habits you apply at the end of the conversation but throughout \u2014 after every prompt, at every logical checkpoint, or after a break.<\/p>\n<p>First, maintain continuity. During any significant working session, I ask the AI to maintain a running summary of what we\u2019ve accomplished, what\u2019s still open, and what I will need to copy and paste to resume the conversation in another chat. This produces a bridge summary that makes it easy for you to pick up the discussion in a new session without losing continuity, especially if you run out of tokens on one chat.<\/p>\n<p>Second, be a coeditor. Review the AI\u2019s output after every prompt, or at logical break points, and feed your own judgment back in. You read what the AI produced. Some of it is good; some of it is wrong. Some of it is vague in ways you didn\u2019t notice until you tried to use it. You fix it, mark it up, and hand it back: \u201cHere\u2019s my revised version. Use this as our new baseline and continue from here.\u201d<\/p>\n<p>Third, &#8220;templatize&#8221; what works. Every time you craft a session that produces exceptional output \u2014 a due diligence workflow, an RFP evaluation, a customer analysis \u2014 convert it into a reusable template. 
Replace the specifics with [variable] placeholders and save the session as what I call a <em>macro-prompt<\/em> \u2014 a single structured prompt that combines the entire session\u2019s workflow so anyone can run it without having to start from scratch. Individual expertise becomes organizational capability.<\/p>\n<p>That single due diligence session became a reusable macro-prompt I\u2019ve now used for partnership evaluations, board position assessments, and acquisition analyses \u2014 each time just pasting it in the chat to start the conversation. From there, AI guides me step-by-step \u2014 instead of me guiding the AI \u2014 with all of the thinking intensity captured from the original session. After every use, I run a prompt to improve this macro-prompt for the next session.<\/p>\n<h3>How to Start Auditing and Improving<\/h3>\n<p>Below, I\u2019ve shared the self-audit macro-prompt that includes all 30 habits to audit oneself. Think of it as a companion resource. You can just copy and paste it into an existing conversation you\u2019ve been having with AI on a significant, extended topic. See what it tells you about your use of AI across all five goals and 30 habits. The self-audit will show you exactly where to refocus. <\/p>\n<p>Then, start practicing these habits in your GenAI conversations wherever you see the opportunity.<\/p>\n<p>Generative AI technology has already proved its capabilities and will keep getting better. The discipline is what unlocks real value \u2014 and that discipline will always be needed, regardless of which AI tool you use. <\/p>\n<p>There\u2019s one last thing I didn\u2019t expect when I started this journey: The better I got at working with AI, the better I got at thinking without it. <\/p>\n<p>Run the self-audit. 
See what it tells you about your critical thinking.<\/p>\n<div class=\"callout-highlight callout-highlight--transparent\">\n<aside class=\"l-content-wrap\">\n<article>\n<h4>Self-Audit Prompt<\/h4>\n<p>Copy and paste the prompt below during or after any significant AI working session. The AI will autonomously review your entire conversation, evaluate it against 30 habits spanning five goals, and deliver a structured diagnostic with scores, specific gaps it identified, and the exact prompts you should have used.<\/p>\n<p><strong>During or after a session:<\/strong> Paste it at any point in a conversation \u2014 midsession to course-correct, or at the end, to score what you did against the five goals.<\/p>\n<p><strong>Retroactively:<\/strong> Paste it into any past conversation you\u2019ve had with an AI to learn from your history.<\/p>\n<p>This macro-prompt includes micro-prompts or checks for every habit so the AI will know exactly what to look for and will be able to show you precisely what you should have said.<\/p>\n<div class=\"callout-toggle\">\n<figure class=\"copy-prompt\" role=\"region\" aria-labelledby=\"prompt-label-1\"><figcaption id=\"prompt-label-1\">SELF-AUDIT MACRO-PROMPT \u2014 COPY AND PASTE BELOW<\/figcaption><pre aria-label=\"Prompt text, use the copy button below to copy it\">\r\nSELF-AUDIT OF AI SESSION\r\n \r\nReview the entire conversation we just had. Evaluate how effectively I used AI in this session by assessing my performance against the 30 habits below.\r\n \r\nFor each goal, check whether I applied the habits listed. For each habit I missed, show me the EXACT PROMPT I should have used \u2014 written specifically for the content of this session, not as a generic template.\r\n \r\nWork through the five goals in order. 
After all five, deliver the scorecard.\r\n\r\n=================================  \r\nGOAL 1: SET UP \u2014 Did I prepare the AI before asking it to work?\r\n=================================\r\n \r\nHabit 1 \u2014 The preflight\r\nDid I define what a great outcome looks like before starting?\r\nMicro-prompt: \u201cBefore we begin, help me define: What does a great outcome look like for this task? What are the three most important things to get right? What mistakes do people typically make?\u201d\r\n \r\nHabit 2 \u2014 The mission\r\nDid I assign a clear role, context, and mission?\r\nMicro-prompt: \u201cYou are [specific expert role with years of experience in relevant domain]. Here is what I need: [specific deliverable]. Here is the context: [situation, constraints, timeline]. Your mission: [clear objective].\u201d\r\n \r\nHabit 3 \u2014 The negative constraint\r\nDid I state what I did NOT want?\r\nMicro-prompt: \u201cDo not [produce generic output]. Do not [hedge every statement]. Do not [define terms I already know]. Do not [give balanced, \u2018on the other hand\u2019 analysis].\u201d\r\n \r\nHabit 4 \u2014 The context upload\r\nDid I provide relevant documents, data, or prior work?\r\nMicro-prompt: \u201cHere are the attachments: [list files]. Use these as the primary basis for your analysis. Flag where you are drawing on general knowledge versus the documents I provided.\u201d\r\n \r\nHabit 5 \u2014 The session bridge\r\nDid I provide or request a bridge summary for continuity?\r\nMicro-prompt: \u201cThis is a continuation of our previous work on [topic]. Here is where we left off: [paste summary]. 
Confirm your understanding, flag anything unclear, and suggest where to pick up.\u201d\r\n \r\n================================= \r\nGOAL 2: REFINE \u2014 Did I iterate on my own thinking, not just reprompt?\r\n=================================\r\n\r\nHabit 6 \u2014 The iteration\r\nDid I challenge assumptions and explore alternative scenarios?\r\nMicro-prompt: \u201cYour analysis assumes [X]. Surface that assumption. What changes if [alternative scenario A]? What changes if [alternative scenario B]?\u201d\r\n \r\nHabit 7 \u2014 The reasoning request\r\nDid I ask the AI to show its reasoning step-by-step?\r\nMicro-prompt: \u201cThink step-by-step through your reasoning for [conclusion]. Show me the logic chain before restating your conclusion. I want to see how you got there, not just where you landed.\u201d\r\n \r\nHabit 8 \u2014 The prompt self-critique\r\nDid I ask the AI to critique or improve my prompt?\r\nMicro-prompt: \u201cHow would you improve my original prompt? Rate it 1-10 for clarity, specificity, and completeness. Show me what a 10 would look like.\u201d\r\n \r\nHabit 9 \u2014 The strategic question\r\nDid I ask what question I should be asking but haven\u2019t?\r\nMicro-prompt: \u201cStep back. What question should I be asking about [topic] that I haven\u2019t asked? What blind spots does my framing have?\u201d\r\n \r\nHabit 10 \u2014 The decomposition\r\nDid I break complex tasks into sequential subtasks?\r\nMicro-prompt: \u201cBefore writing the full [deliverable], (1) list the top three [dimensions], (2) rank them by [criteria], and (3) draft only the highest-priority one with supporting evidence.\u201d\r\n \r\nHabit 11 \u2014 The expert thinking\r\nDid I request an expert or alternative perspective?\r\nMicro-prompt: \u201cHow would a [specific expert role] evaluate this? 
What would they focus on that [my current perspective] might miss?\u201d\r\n \r\nHabit 12 \u2014 The few-shot example\r\nDid I provide concrete examples of desired output?\r\nMicro-prompt: \u201cHere is an example of the depth and structure I want: [paste excerpt]. Match this level of specificity and directness.\u201d\r\n \r\nHabit 13 \u2014 The diagnosis\r\nDid I diagnose and fix vague or generic responses?\r\nMicro-prompt: \u201cYour [section] feels generic. Identify the assumptions you made and the context that was missing. Then revise with more specificity about [specific aspect].\u201d\r\n \r\n\r\n================================= \r\nGOAL 3: VERIFY \u2014 Did I verify before trusting?\r\n=================================\r\n \r\nHabit 14 \u2014 The assumption surface\r\nDid I ask the AI to surface and rank its assumptions?\r\nMicro-prompt: \u201cList every assumption underlying your [analysis\/recommendation]. Which ones are weakest? Which would change your conclusion entirely if wrong?\u201d\r\n \r\nHabit 15 \u2014 The source demand\r\nDid I demand verifiable sources?\r\nMicro-prompt: \u201cProvide sources I can independently verify for [specific claims]. If you cannot provide a verifiable source, say so explicitly.\u201d\r\n \r\nHabit 16 \u2014 The counterargument\r\nDid I request the strongest opposing case?\r\nMicro-prompt: \u201cMake the strongest possible case that [opposite of your conclusion]. What evidence supports that view?\u201d\r\n \r\nHabit 17 \u2014 The math audit\r\nDid I ask for step-by-step math on calculations?\r\nMicro-prompt: \u201cRecalculate [specific figures]. Show your math step-by-step.\u201d\r\n \r\nHabit 18 \u2014 The confidence stress test\r\nDid I request confidence ratings on factual claims?\r\nMicro-prompt: \u201cFor each factual claim in this [output], rate your confidence as high, medium, or low. 
Flag anything below high and explain why.\u201d\r\n \r\nHabit 19 \u2014 The freshness check\r\nDid I check the recency of the data?\r\nMicro-prompt: \u201cWhat is the date of the most recent information you drew on? Flag anything that may be outdated.\u201d\r\n \r\nHabit 20 \u2014 The hallucination stress test\r\nDid I stress-test which claims are most likely wrong?\r\nMicro-prompt: \u201cWhich specific factual claims in this [output] are you least certain about? If I fact-checked every statement, which ones are most likely to be wrong?\u201d\r\n\r\n=================================  \r\nGOAL 4: OWN \u2014 Did I make this mine, or accept the AI\u2019s default?\r\n=================================\r\n \r\nHabit 21 \u2014 The position forcer\r\nDid I force a clear position rather than accepting hedged output?\r\nMicro-prompt: \u201cDo not hedge. Take a clear position: [specific question]. Defend your position, then address the strongest counterargument.\u201d\r\n \r\nHabit 22 \u2014 The originality push\r\nDid I push for unconventional or nonobvious angles?\r\nMicro-prompt: \u201cGenerate three unconventional [responses\/strategies\/angles] that most [consultants\/analysts\/writers] would not recommend. Label one as high risk, high reward.\u201d\r\n \r\nHabit 23 \u2014 The specificity demand\r\nDid I require specific data points instead of abstract claims?\r\nMicro-prompt: \u201cSupport every claim with a specific data point from the documents I provided or a verifiable source. Remove anything abstract.\u201d\r\n \r\nHabit 24 \u2014 The narrative shaper\r\nDid I shape output into narrative rather than accepting lists?\r\nMicro-prompt: \u201cRewrite this as a strategic narrative: What is the one thing [audience] needs to understand, why does it matter, and what is the decision we need to make now? No lists. 
End with a clear recommendation.\u201d\r\n \r\nHabit 25 \u2014 The audience calibration\r\nDid I calibrate output for a specific audience?\r\nMicro-prompt: \u201cRewrite this for [specific audience]. Assume they are [smart but not immersed in details]. Lead with [what matters to them].\u201d\r\n \r\nHabit 26 \u2014 The multi-persona workflow\r\nDid I use multiple perspectives to challenge the output?\r\nMicro-prompt: \u201cNow review this from three perspectives: (1) [strategist role]: What are we failing to anticipate? (2) [empathetic leader role]: What human factors are missing? (3) [editor role]: Tighten and cut.\u201d\r\n\r\n=================================  \r\nGOAL 5: SYSTEMATIZE \u2014 Did I build systems, not just outputs?\r\n=================================\r\n \r\nHabit 27 \u2014 The coeditor\r\nDid I feed my own edits back in as a coeditor THROUGHOUT the session?\r\nCheck: Did this happen at multiple points during the conversation \u2014 not just once at the end? Count how many times I revised and handed back my own version. More is better. Flag any stretch of three or more prompts where I accepted output without coediting.\r\nMicro-prompt: \u201cHere is my revised version with my edits. Use this as our new baseline. Incorporate my changes, flag anything you disagree with, and continue from here.\u201d\r\n \r\nHabit 28 \u2014 The session debrief\r\nDid I request bridge summaries THROUGHOUT the session?\r\nCheck: Did this happen at logical break points, before long breaks, or when approaching token limits \u2014 not just at the end? Count how many bridge summaries were requested. 
Flag any point where continuity was lost because a bridge summary was missing.\r\nMicro-prompt: \u201cSummarize what we accomplished, what\u2019s still open, and what I should bring to our next session to pick up where we left off.\u201d\r\n \r\nHabit 29 \u2014 The self-audit\r\nDid I run self-audit checkpoints THROUGHOUT the session?\r\nCheck: Did I pause at logical milestones to assess session quality before moving on \u2014 or did I audit only at the very end? Flag any major transition between goals or phases where a midsession audit would have caught a gap earlier.\r\n(You\u2019re running the final self-audit now.)\r\n \r\nHabit 30 \u2014 The macro maker\r\nDid I convert the session into a reusable macro-prompt?\r\nMicro-prompt: \u201cConvert this session into a reusable macro-prompt with [variable] placeholders. Format it so anyone can copy, paste, and follow the steps to produce [deliverable type].\u201d\r\n \r\n=================================  \r\nSCORECARD \u2014 Deliver this after evaluating all five goals\r\n=================================\r\n \r\nFor each goal (1-5), provide:\r\n- Score (1-5, where 5 = all habits demonstrated, 1 = none)\r\n- Habits demonstrated well (with specific examples from our conversation)\r\n- Habits missed (with the EXACT prompt I should have used, written for the specific content of THIS session)\r\n- How each missed prompt would have improved the output\r\n \r\nThen provide:\r\n- Overall session score (average of five goals)\r\n- The single highest-impact habit I missed\r\n- Top three habits to focus on in my next session\r\n \r\nBe specific and direct. 
Reference actual moments in our conversation.\r\nDo not soften the assessment.\r\n\r\n\r\n================================= \r\nSESSION CLOSE\r\n=================================\r\n\r\nAfter delivering the scorecard, ask me: \u201cWould you like me to (1) go back and apply the missed habits now to improve the work we just did, (2) generate a bridge summary for your next session, or (3) suggest improvements to this self-audit macro-prompt based on what we learned in this session?\u201d\r\n\r\n\r\n<\/pre>\n<\/figure>\n<\/div>\n<\/article>\n<\/aside>\n<\/div>\n<p>Apply these tips to get the most from the self-audit:<\/p>\n<ul>\n<li>Run it at the end of every significant AI session, not just occasionally. The habit of measuring is itself the discipline.<\/li>\n<li>Don\u2019t stop at the scorecard. When the AI asks, \u201cWould you like me to go back and apply the missed habits?\u201d say yes. Then run the self-audit again. Repeat until you\u2019re satisfied you\u2019ve extracted the most value from the session.<\/li>\n<li>Track your scores over time. You\u2019ll notice patterns \u2014 goals you consistently score well on and goals you consistently skip. Those patterns are your development road map.<\/li>\n<li>Improve the prompt itself. When the AI suggests improvements to this macro-prompt based on your session, review them and update your saved copy. The self-audit gets sharper each time you use it.<\/li>\n<li>Make it yours. Add habits that matter to your work, remove ones that don\u2019t, or build in your own techniques. The 30 habits here are a starting point, not a ceiling.<\/li>\n<li>Share it with your team. 
When everyone runs the same self-audit, you build a shared language for AI session quality across the organization.<\/li>\n<\/ul>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Carolyn Geason-Beissel\/MIT SMR | Getty Images More than a year  [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[194],"tags":[],"class_list":["post-22037","post","type-post","status-publish","format-standard","hentry","category-graphic-design"],"acf":[],"_links":{"self":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/22037","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/comments?post=22037"}],"version-history":[{"count":0,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/22037\/revisions"}],"wp:attachment":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/media?parent=22037"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/categories?post=22037"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/tags?post=22037"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}