Introduction
It started as an experiment. Just an idea — could we use pseudo code, written in plain human language, to define tasks for AI platforms in a structured, logical way? Not programming, exactly. Not scripting. But something between instruction and automation. And to my surprise — it worked. At least in early testing, platforms like Claude Sonnet 4 and Perplexity have been responding in consistently usable ways. This post outlines the method I’ve been testing, broken into three sections: Inputs, Task Logic, and Outputs. It’s early, but I think this structure has the potential to evolve into a kind of “prompt language” — a set of building blocks that could power a wide range of rule-based tools and reusable logic trees.
Section 1: Inputs
The first section of any pseudo code prompt needs to make the data sources explicit. In my experiments, that means spelling out exactly where the AI should look — URLs, APIs, or internal data sets. Being explicit in this section has two advantages: it limits hallucination by narrowing the AI’s attention, and it standardizes the process, so results are more repeatable across runs or across different models.
# --- INPUTS ---
Sources:
- DrudgeReport (https://drudgereport.com/)
- MSN News (https://www.msn.com/en-us/news)
- Yahoo News (https://news.yahoo.com/)
Each source is clearly named and linked, making the prompt both readable and machine-parseable by future tools. It’s not just about inputs — it’s about documenting the scope of trust and context for the model.
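To make "machine-parseable" concrete, here is a minimal Python sketch of how a future tool might lift the source list out of a prompt. The parse_inputs helper and its regex are hypothetical, written only for this illustration:

import re

# Hypothetical parser for the INPUTS section of a pseudo code prompt.
# Matches lines shaped like: "- DrudgeReport (https://drudgereport.com/)"
SOURCE_LINE = re.compile(r"^-\s*(?P<name>.+?)\s*\((?P<url>https?://\S+)\)\s*$")

def parse_inputs(prompt_text):
    """Return a {source name: URL} mapping from the Sources list."""
    sources = {}
    for line in prompt_text.splitlines():
        match = SOURCE_LINE.match(line.strip())
        if match:
            sources[match.group("name")] = match.group("url")
    return sources

prompt = """\
# --- INPUTS ---
Sources:
- DrudgeReport (https://drudgereport.com/)
- MSN News (https://www.msn.com/en-us/news)
- Yahoo News (https://news.yahoo.com/)
"""
print(parse_inputs(prompt))
# {'DrudgeReport': 'https://drudgereport.com/', 'MSN News': 'https://www.msn.com/en-us/news',
#  'Yahoo News': 'https://news.yahoo.com/'}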
Section 2: Task Logic
This is the core of the approach: breaking down what we want the AI to do in clear, sequential steps. No heavy syntax. Just numbered logic, indentation for subtasks, and simple conditional statements. Think of it as logic LEGO — modular, stackable, and understandable at a glance.
# --- TASK LOGIC ---
1. Scrape and parse front-page headlines and article URLs from all three sources.
2. For each headline:
a. Fetch full article text.
b. Extract named entities, events, dates, and facts using NER and event detection.
3. Deduplicate:
a. Group similar articles across sources using fuzzy matching or semantic similarity.
b. Merge shared facts; resolve minor contradictions based on majority or confidence weighting.
4. Prioritize and compress:
a. Reduce to the significant, non-redundant points that are informational and relevant.
b. Eliminate clickbait, vague, or purely opinion-based content unless it reflects significant sentiment shift.
5. Rate each item:
a. Assign sentiment as [Positive | Neutral | Negative].
b. Assign a probability of truthfulness based on:
- Agreement between sources
- Factual consistency
- Source credibility
- Known verification via primary sources or expert commentary
What’s emerging here is a flexible grammar of logic. Early tests show that platforms can follow this format surprisingly well, especially when the tasks are clearly modularized. Even more exciting, this structure hints at future libraries of reusable prompt modules: small logic trees that could plug into a larger system. To ground a couple of the steps above, the sketches below show how the deduplication and truth-scoring logic might translate into conventional code.
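Step 3a, for example, maps onto standard fuzzy-matching techniques. Here is a minimal sketch using word-overlap (Jaccard) similarity; the 0.5 threshold and the group_headlines helper are arbitrary choices for illustration, and a production version might swap in embedding-based semantic similarity instead:

import re

def words(text):
    """Lowercased word set for crude overlap comparison."""
    return set(re.findall(r"[a-z']+", text.lower()))

def similar(a, b, threshold=0.5):
    """Jaccard similarity of word sets against an arbitrary threshold."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb) >= threshold

def group_headlines(headlines):
    """Greedily cluster headlines that clear the similarity threshold."""
    groups = []
    for headline in headlines:
        for group in groups:
            if similar(headline, group[0]):
                group.append(headline)
                break
        else:
            groups.append([headline])
    return groups

headlines = [
    "Fed expected to cut rates in September",
    "Fed expected to cut interest rates in September",
    "Wildfires spread across southern Europe",
]
print(group_headlines(headlines))
# [['Fed expected to cut rates in September',
#   'Fed expected to cut interest rates in September'],
#  ['Wildfires spread across southern Europe']]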
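Step 5b reads the same way: a weighted blend of evidence signals. The sketch below is one possible interpretation, assuming each signal has already been scored on a 0-to-1 scale; the weights are uncalibrated placeholders, not values from my experiments:

# Hypothetical scoring for step 5b. Each signal is assumed to be in [0, 1].
WEIGHTS = {
    "source_agreement": 0.35,      # how strongly the sources agree
    "factual_consistency": 0.25,   # internal consistency of the claims
    "source_credibility": 0.25,    # prior trust in the outlets involved
    "primary_verification": 0.15,  # confirmed via primary sources or experts
}

def truth_probability(signals):
    """Weighted average of evidence signals, returned as a percentage."""
    score = sum(WEIGHTS[key] * signals.get(key, 0.0) for key in WEIGHTS)
    return round(100 * score)

print(truth_probability({
    "source_agreement": 1.0,       # all three sources carried the story
    "factual_consistency": 0.8,
    "source_credibility": 0.7,
    "primary_verification": 0.5,
}))  # 80, i.e. a "Truth Probability: 80%" line in the output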
Section 3: Outputs
The third section defines the structure of the expected output — not just format, but tone, scope, and filters for relevance. This ensures that different models produce consistent, actionable results, even when their internal mechanics differ.
# --- OUTPUT ---
Structured listicle format:
- [Headline or topic summary]
- Detail: [1–2 sentence summary of key point or development]
- Sentiment: [Positive | Neutral | Negative]
- Truth Probability: [XX%]
It’s not about precision so much as direction. The goal is to give the AI a shape to pour its answers into. This also makes post-processing or visualization easier, which I’ve started exploring using Perplexity Labs.
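As a sketch of that post-processing step, here is how the listicle output could be loaded back into structured records for charting or storage. The field labels mirror the template above, but the parser itself is hypothetical:

import re

# Hypothetical parser for the OUTPUT listicle format defined above.
ITEM = re.compile(
    r"-\s*(?P<headline>.+?)\s*\n"
    r"\s*Detail:\s*(?P<detail>.+?)\s*\n"
    r"\s*Sentiment:\s*(?P<sentiment>Positive|Neutral|Negative)\s*\n"
    r"\s*Truth Probability:\s*(?P<truth>\d+)%"
)

def parse_items(output_text):
    """Turn the model's listicle output into a list of records."""
    return [
        {
            "headline": m.group("headline"),
            "detail": m.group("detail"),
            "sentiment": m.group("sentiment"),
            "truth_probability": int(m.group("truth")) / 100,
        }
        for m in ITEM.finditer(output_text)
    ]

sample = """\
- Fed expected to cut rates in September
  Detail: Two of three sources report a likely quarter-point cut.
  Sentiment: Positive
  Truth Probability: 80%
"""
print(parse_items(sample))
# [{'headline': 'Fed expected to cut rates in September', ...,
#   'sentiment': 'Positive', 'truth_probability': 0.8}]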
Conclusion
The “aha” moment for me was realizing that you could build logic in natural language — and that current AI platforms could follow it. Not flawlessly, not yet. But well enough to sketch the blueprint of a new kind of rule-based system. If we keep pushing in this direction, we may end up with prompt grammars or libraries — logic that’s easy to write, easy to read, and portable across AI tools.
This is early-phase work, but the possibilities are massive. Whether you’re aiming for decision support, automation, research synthesis, or standardizing AI outputs, pseudo code prompts are a fascinating new tool in the kit. More experiments to come.
* AI tools were used as a research assistant for this content, but the moderation and writing are human. The included images are AI-generated.
