The Entrepreneur’s Guide to Overcoming Fear of Failure

Fear of failure is a common barrier that holds back many aspiring entrepreneurs. It’s a natural response to the uncertainty and risks involved in starting and running a business. However, overcoming this fear is crucial for success. Here are some insightful strategies and real-world examples to help you embrace risk and failure on your entrepreneurial journey.


 1. Understand That Failure is a Learning Opportunity

– Embrace a Growth Mindset: Entrepreneurs need to see failure not as a dead-end but as a stepping stone to success. Adopting a growth mindset helps you learn from mistakes and continuously improve.
– Example: Thomas Edison famously failed thousands of times before inventing the light bulb. He saw each failure as a lesson, saying, “I have not failed. I’ve just found 10,000 ways that won’t work.”

 2. Set Realistic Goals and Manage Expectations

– Break Down Big Goals: Setting smaller, achievable goals can reduce the fear of failure. It makes the larger objective seem more manageable and provides a sense of accomplishment along the way.
– Example: When launching a new product, start with a pilot project or a small market test. This approach allows you to gather feedback, make adjustments, and reduce the financial risk.

 3. Build a Support Network

– Seek Mentorship and Advice: Surround yourself with experienced entrepreneurs who can provide guidance and share their experiences of overcoming failure.
– Example: Sara Blakely, the founder of Spanx, credits much of her success to the advice and support she received from mentors. Their encouragement helped her persevere through the challenges of building her business.

 4. Prepare for Failure

– Have a Contingency Plan: Being prepared for potential setbacks can alleviate the fear of failure. A well-thought-out contingency plan can help you navigate difficulties with confidence.
– Example: Dropbox initially faced significant challenges with its business model and competition. By having backup strategies and being flexible, they were able to pivot and refine their product, eventually achieving massive success.

 5. Embrace Calculated Risks

– Evaluate Risks and Rewards: It’s essential to take risks, but they should be calculated. Assess the potential impact and benefits before making a decision.
– Example: Jeff Bezos took a calculated risk when he left his secure job to start Amazon. He weighed the potential rewards against the risks, deciding that the opportunity was worth pursuing despite the uncertainty.

 6. Focus on What You Can Control

– Control Your Effort and Attitude: While you can’t control every outcome, you can control your response and the effort you put in. This focus can reduce anxiety and increase resilience.
– Example: Elon Musk has faced numerous failures with SpaceX and Tesla. His relentless work ethic and positive attitude have helped him persist and ultimately succeed, despite setbacks. (Of course, he might also be insane…)

 7. Learn from Others’ Mistakes

– Study Successful Entrepreneurs: Understanding how others have navigated failure can provide valuable insights and strategies.
– Example: Richard Branson, founder of the Virgin Group, openly shares his business failures. By learning from his experiences, aspiring entrepreneurs can avoid similar pitfalls and adopt effective strategies.

 8. Reframe Failure as Feedback

– View Failure as Constructive Criticism: Instead of seeing failure as a negative outcome, treat it as feedback on what needs improvement.
– Example: The creators of Angry Birds, Rovio Entertainment, developed 51 unsuccessful games before achieving global success. Each failure provided valuable feedback that led to the creation of their hit game.

 9. Develop Resilience and Perseverance

– Cultivate a Resilient Mindset: Building mental toughness helps you bounce back from setbacks and stay committed to your goals.
– Example: J.K. Rowling faced numerous rejections before finding a publisher for Harry Potter. Her resilience and perseverance paid off, leading to one of the most successful book series in history. (Note: I am not a fan of Ms. Rowling, but she is a good example here…)

 10. Celebrate Small Wins

– Acknowledge Progress: Recognizing and celebrating small achievements can boost your confidence and motivation.
– Example: When launching a startup, celebrate milestones such as securing initial funding, launching a website, or gaining the first 100 customers. These small wins can keep you motivated through challenging times.

 Conclusion

Overcoming the fear of failure is essential for entrepreneurial success. By understanding that failure is part of the journey, setting realistic goals, building a support network, and embracing calculated risks, you can navigate the uncertainties of entrepreneurship with confidence. Learn from others, reframe failure as feedback, and develop resilience to keep moving forward. Remember, every successful entrepreneur has faced failure—what sets them apart is their ability to learn, adapt, and persist.

 

* AI tools were used as a research assistant for this content.

 

How to Use Mental Models to Save Cognitive Energy and Attention in Day-to-Day Life

In the hustle and bustle of modern existence, our minds are constantly inundated with a deluge of data. From sunrise to sunset, we’re faced with a barrage of choices, both monumental and minuscule, that sap our mental stamina. But fear not, for there is a solution: mental models. These nifty cognitive tools help streamline our thought processes, enabling us to tackle life’s daily obstacles with greater ease and efficiency. By harnessing the might of mental models, we can conserve our precious brainpower for the things that truly matter.


Unveiling the Enigma: What Exactly Are Mental Models?

Picture mental models as the scaffolding that supports our understanding and interpretation of the world around us. They take complex concepts and boil them down into a structured approach for tackling problems and making decisions. In essence, they’re like cognitive shortcuts that lighten the mental load required to process information. Mental models span a wide range of fields, from economics and psychology to physics and philosophy. When wielded effectively, they can dramatically enhance our decision-making and problem-solving prowess[1][2].

 Unleashing the Potential: Mental Models in Action

 1. The Pareto Principle: Doing More with Less

The Pareto Principle, also known as the 80/20 rule, suggests that 80% of results stem from a mere 20% of efforts. This principle can be a real game-changer when it comes to prioritizing tasks and zeroing in on what truly matters.

Real-World Application: Picture yourself as a project manager with a daunting to-do list of 20 tasks. Rather than trying to juggle everything at once, zero in on the top four tasks that will have the most profound impact on the project’s success. By focusing your energy on these critical tasks, you can achieve more substantial results with less effort[1].
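The prioritization described above can be sketched in a few lines of Python. The task names and impact scores below are invented purely for illustration:

```python
# Pareto-style prioritization: rank tasks by estimated impact and keep the
# top ~20% (here, 4 of 20). Scores are hypothetical placeholders.
def pareto_top_tasks(tasks, fraction=0.2):
    """Return the highest-impact slice of a list of (name, impact) pairs."""
    ranked = sorted(tasks, key=lambda t: t[1], reverse=True)
    cutoff = max(1, round(len(ranked) * fraction))
    return ranked[:cutoff]

impacts = [3, 9, 1, 7, 2, 8, 4, 6, 5, 2, 1, 3, 9, 2, 4, 1, 5, 2, 3, 6]
tasks = [(f"task-{i}", impact) for i, impact in enumerate(impacts)]

# The four tasks most likely to drive the bulk of the results.
focus = pareto_top_tasks(tasks)
```

The point is not the code itself but the discipline it encodes: rank by impact first, then spend your energy only on the short list that survives the cut.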

 2. Inversion: Flipping the Script

Inversion involves approaching problems from the opposite angle to pinpoint potential pitfalls and solutions. By considering what you want to avoid, you can unearth strategies to achieve your goals more effectively.

Real-World Application: Let’s say you’re orchestrating a major event. Instead of solely focusing on what needs to go right, ponder what could go wrong. By identifying potential snags, such as equipment malfunctions or scheduling snafus, you can take proactive steps to mitigate these risks and ensure a smoother event[1].

 3. First Principles Thinking: Breaking It Down

First principles thinking, a favorite of Elon Musk, involves deconstructing complex problems into their most basic elements. By grasping the core components, you can devise innovative solutions that aren’t shackled by conventional thinking.

Real-World Application: Imagine you’re trying to optimize your daily commute. Instead of accepting the usual traffic and route options, break down the problem: What’s the fundamental goal? To reduce travel time and stress. From there, you might explore alternative transportation methods, such as biking or carpooling, or even rearranging your work schedule to avoid peak traffic times[1].

 4. The Eisenhower Matrix: Mastering Time Management

The Eisenhower Matrix is a time management tool that categorizes tasks based on their urgency and importance. By sorting tasks into four quadrants—urgent and important, important but not urgent, urgent but not important, and neither urgent nor important—you can prioritize more effectively.

Real-World Application: Your email inbox is overflowing, and you’re drowning in messages. Use the Eisenhower Matrix to sort through your emails. Tackle urgent and important emails first, such as those from your boss or key clients. Important but not urgent emails can be scheduled for later, while urgent but not important ones (like promotional offers) can be quickly handled or delegated. Lastly, delete or archive those that are neither urgent nor important[1].
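The four-quadrant sort described above maps naturally onto a small function. The example inbox items are invented, and "urgent"/"important" are treated as simple boolean flags:

```python
# A minimal Eisenhower matrix: sort (task, urgent, important) triples into
# the four quadrants. Task names are hypothetical examples.
def eisenhower(tasks):
    quadrants = {"do": [], "schedule": [], "delegate": [], "drop": []}
    for name, urgent, important in tasks:
        if urgent and important:
            quadrants["do"].append(name)          # tackle first
        elif important:
            quadrants["schedule"].append(name)    # schedule for later
        elif urgent:
            quadrants["delegate"].append(name)    # handle quickly or delegate
        else:
            quadrants["drop"].append(name)        # delete or archive
    return quadrants

inbox = [("reply to boss", True, True),
         ("plan Q3 roadmap", False, True),
         ("promotional offer", True, False),
         ("old newsletter", False, False)]
sorted_inbox = eisenhower(inbox)
```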

 5. Confirmation Bias: Challenging Your Assumptions

Awareness of confirmation bias—the tendency to favor information that confirms our preexisting beliefs—can help us make more objective decisions. By actively seeking out diverse perspectives and challenging our assumptions, we can avoid narrow-minded thinking.

Real-World Application: You’re researching a new investment opportunity and already have a positive opinion about it. To counter confirmation bias, deliberately seek out critical reviews and analyses. By evaluating both positive and negative viewpoints, you can make a more informed decision and reduce the risk of overlooking potential downsides[1].

 Putting Mental Models into Practice: Tips and Tricks

 1. Create a Mental Models Toolbox

Assemble a personal collection of mental models that resonate with you. This could be a digital document, a notebook, or even a series of flashcards. Regularly review and update your toolbox to keep these models fresh in your mind[1].

 2. Start Small and Build Momentum

Begin by applying mental models to everyday decisions. For instance, use the Pareto Principle to prioritize your daily tasks or the Eisenhower Matrix to manage your time. With practice, these models will become second nature[1].

 3. Reflect, Refine, Repeat

After applying a mental model, take a moment to reflect on its effectiveness. Did it help simplify the decision-making process? What could you improve next time? Iterative reflection will help you fine-tune your use of mental models and amplify their impact[1].

 4. Learn from the Best

Study how successful individuals and organizations use mental models. Books like “Poor Charlie’s Almanack” by Charlie Munger and “Thinking, Fast and Slow” by Daniel Kahneman offer valuable insights into the practical application of mental models[1][2].

 5. Never Stop Exploring

Keep exploring new mental models and expanding your cognitive toolkit. The more models you have at your disposal, the better equipped you’ll be to handle a wide range of situations[1][2].

 The Bottom Line

Mental models are indispensable allies in our quest to conserve brainpower and navigate the complexities of daily life. By integrating these cognitive tools into our routines, we can make more informed decisions, solve problems more efficiently, and ultimately free up mental space for what truly matters. Whether you’re prioritizing tasks, managing time, or challenging your assumptions, mental models can help you streamline your thinking and unleash your full potential[1][2]. So, start building your mental models toolbox today and watch as your cognitive load lightens and your decision-making sharpens.

Remember, the goal isn’t to eliminate all cognitive effort but to use it more strategically. By leveraging mental models, you can focus your brainpower where it counts, leading to a more productive, balanced, and fulfilling life[1][2].

Citations:
[1] https://fronterabrands.com/mental-model-examples-and-their-explanations/
[2] https://nesslabs.com/mental-models
[3] https://commoncog.com/putting-mental-models-to-practice-part-5-skill-extraction/
[4] https://durmonski.com/self-improvement/how-to-use-mental-models/
[5] https://jamesclear.com/mental-models
[6] https://blog.hubspot.com/marketing/mental-models
[7] https://fs.blog/mental-models/
[8] https://jamesclear.com/feynman-mental-models
[9] http://cogsci.uwaterloo.ca/Articles/Thagard.brains-models.2010.pdf
[10] https://betterhumans.pub/4-lesser-known-mental-models-that-save-me-30-hours-every-week-efc60f88ec7a?gi=e3c8dbd3d48c
[11] https://www.julian.com/blog/mental-model-examples
[12] https://www.youtube.com/watch?v=hkL7S9cQLQM
[13] https://www.coleschafer.com/blog/ernest-hemingway-writing-style
[14] https://www.okayokapi.com/blog-post/why-your-writing-style-isnt-wrong-or-bad
[15] https://www.turnerstories.com/blog/2019/3/10/how-to-find-your-writing-style
[16] https://carnivas.com/writing-style-culture-7740ad03d7a6?gi=e15f15841156
[17] https://www.reddit.com/r/coolguides/comments/1bgdmp9/a_cool_guide_cheatsheet_to_mental_models_with/
[18] https://writersblockpartyblog.com/2018/04/05/finding-your-writing-style/
[19] https://www.slideshare.net/slideshow/reflection-sample-essay-reflection-essay-samples-template-business/266204999
[20] https://www.slideshare.net/slideshow/example-of-critique-paper-introduction-how-to-write/265714891

 

* AI tools were used as a research assistant for this content.

How to Use N-Shot and Chain of Thought Prompting

 

Imagine unlocking the hidden prowess of artificial intelligence by simply mastering the art of conversation. Within the realm of language processing, there lies a potent duo: N-Shot and Chain of Thought prompting. Many are still unfamiliar with these innovative approaches that help machines mimic human reasoning.


N-Shot prompting, a concept derived from few-shot learning, has shaken the very foundations of machine interaction with its promise of enhanced performance through iterations. Meanwhile, Chain of Thought Prompting emerges as a game-changer for complex cognitive tasks, carving logical pathways for AI to follow. Together, they redefine how we engage with language models, setting the stage for advancements in prompt engineering.

In this journey of discovery, we’ll delve into the intricacies of prompt engineering, learn how to navigate the sophisticated dance of N-Shot Prompts for intricate tasks, and harness the sequential clarity of Chain of Thought Prompting to unravel complexities. Let us embark on this illuminating odyssey into the heart of language model proficiency.

What is N-Shot Prompting?

N-shot prompting is a technique employed with language models, particularly advanced ones like GPT-3, GPT-4, Claude, and Gemini, to enhance the way these models handle complex tasks. The “N” in N-shot stands for a specific number, which reflects the number of input-output examples—or ‘shots’—provided to the model. By offering the model a series of examples, we establish a pattern for it to follow. This helps to condition the model to generate responses that are consistent with the provided examples.

The concept of N-shot prompting is crucial when dealing with domains or tasks that don’t have a vast supply of training data. It’s all about striking the perfect balance: too few examples could lead the model to overfit, limiting its ability to generalize its outputs to different inputs. On the flip side, generously supplying examples—sometimes a dozen or more—is often necessary for reliable and quality performance. In academia, it’s common to see the use of 32-shot or 64-shot prompts as they tend to lead to more consistent and accurate outputs. This method is about guiding and refining the model’s responses based on the demonstrated task examples, significantly boosting the quality and reliability of the outputs it generates.

Understanding the concept of few-shot prompting

Few-shot prompting is a subset of N-shot prompting where “few” indicates the limited number of examples a model receives to guide its output. This approach is tailored for large language models like GPT-3, which utilize these few examples to improve their responses to similar task prompts. By integrating a handful of tailored input-output pairs—as few as one, three, or five—the model engages in what’s known as “in-context learning,” which enhances its ability to comprehend various tasks more effectively and deliver accurate results.

Few-shot prompts are crafted to overcome the restrictions presented by zero-shot capabilities, where a model attempts to infer correct responses without any prior examples. By providing the model with even a few carefully selected demonstrations, the intention is to boost the model’s performance especially when it comes to complex tasks. The effectiveness of few-shot prompting can vary: depending on whether it’s a 1-shot, 3-shot, or 5-shot, these refined demonstrations can greatly influence the model’s ability to handle complex prompts successfully.
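In practice, a few-shot prompt is just a string that concatenates the example pairs before the new input. The sketch below is a minimal Python illustration; the sentiment-labeling examples and the Input/Output labels are invented for this example, not a prescribed format:

```python
# Build a few-shot (N-shot) prompt: N worked input-output pairs followed
# by the new input, left for the model to complete.
def build_n_shot_prompt(examples, query):
    """examples: list of (input, output) pairs; query: the new input."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"

examples = [
    ("The service was wonderful", "positive"),
    ("I waited an hour for cold food", "negative"),
    ("The decor was fine, nothing special", "neutral"),
]
prompt = build_n_shot_prompt(examples, "Best meal I've had all year")
# `prompt` is what you would send to the model; the three shots establish
# the pattern the model is expected to continue.
```

The same function covers the 1-shot, 3-shot, or 5-shot cases discussed above simply by varying how many pairs are passed in.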

Exploring the benefits and limitations of N-shot prompting

N-shot prompting has its distinct set of strengths and challenges. By offering the model an assortment of input-output pairs, it becomes better at pattern recognition within the context of those examples. However, if too few examples are on the table, the model might overfit, which could result in a downturn in output quality when it encounters a varied range of inputs. Academically speaking, using a higher number of shots, such as 32 or 64, in the prompting strategy often leads to better model outcomes.

Unlike fine-tuning methodologies, which actively teach the model new information, N-shot prompting instead directs the model toward generating outputs that align with learned patterns. This limits its adaptability when venturing into entirely new domains or tasks. While N-shot prompting can efficiently steer language models towards more desirable outputs, its efficacy is somewhat contingent on the quantity and relevance of the task-specific data it is provided with. Additionally, it might not always stand its ground against models that have undergone extensive fine-tuning in specific scenarios.

In conclusion, N-shot prompting serves a crucial role in the performance of language models, particularly in domain-specific tasks. However, understanding its scope and limitations is vital to apply this advanced prompt engineering technique effectively.

What is Chain of Thought (CoT) Prompting?

Chain of Thought (CoT) Prompting is a sophisticated technique used to enhance the reasoning capabilities of language models, especially when they are tasked with complex issues that require multi-step logic and problem-solving. CoT prompting is essentially about programming a language model to think aloud—breaking down problems into more manageable steps and providing a sequential narrative of its thought process. By doing so, the model articulates its reasoning path, from initial consideration to the final answer. This narrative approach is akin to the way humans tackle puzzles: analyzing the issue at hand, considering various factors, and then synthesizing the information to reach a conclusion.

The application of CoT prompting has been shown to be particularly impactful for language models dealing with intricate tasks that go beyond simple Q&A formats, like mathematical problems, scientific explanations, or even generating stories requiring logical structuring. It serves as an aid that navigates the model through the intricacies of the problem, ensuring each step is logically connected and making the thought process transparent.

Overview of CoT prompting and its role in complex reasoning tasks

In dealing with complex reasoning tasks, Chain of Thought (CoT) prompting plays a transformative role. Its primary function is to turn the somewhat opaque wheelwork of a language model’s “thinking” into a visible and traceable process. By employing CoT prompting, a model doesn’t just leap to conclusions; it instead mirrors human problem-solving behaviors by tackling tasks in a piecemeal fashion—each step building upon and deriving from the previous one.

This clearer narrative path fosters a deeper contextual understanding, enabling language models to provide not only accurate but also coherent responses. The step-by-step guidance serves as a more natural way for the model to learn and master the task at hand. Moreover, with the advent of larger language models, the effectiveness of CoT prompting becomes even more pronounced. These gargantuan neural networks—with their vast numbers of parameters—are better equipped to handle the sophisticated layering of prompts that CoT requires. This synergy between CoT and large models enriches the output, making them more apt for educational settings where clarity in reasoning is as crucial as the final answer.

Understanding the concept of zero-shot CoT prompting

Zero-shot Chain of Thought (CoT) prompting can be thought of as a language model’s equivalent of being thrown into the deep end without a flotation device—in this scenario, the “flotation device” being prior specific examples to guide its responses. In zero-shot CoT, the model is expected to undertake complex reasoning on the spot, crafting a step-by-step path to resolution without the benefit of hand-picked examples to set the stage.

This method is particularly valuable when addressing mathematical or logic-intensive problems that may befuddle language models. Here, providing additional context through CoT’s intermediate reasoning steps paves the way to more accurate outputs. The rationale behind zero-shot CoT relies on the model’s ability to create its own narrative of understanding, producing interim conclusions that ultimately lead to a coherent final answer.

Crucially, zero-shot CoT aligns with a dual-phase operation: reasoning extraction followed by answer extraction. With reasoning extraction, the model lays out its thought process, effectively setting its context. The subsequent phase utilizes this path of thought to derive the correct answer, thus rendering the overall task resolution more reliable and substantial. As advancements in artificial intelligence continue, techniques such as zero-shot CoT will only further bolster the quality and depth of language model outputs across various fields and applications.
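This dual-phase operation can be sketched as two prompt templates applied in sequence, in the style of the well-known “Let’s think step by step” approach. In the sketch below, `call_model` is a stand-in for whatever LLM API you use; it is an assumption of the example, not a real library call:

```python
# Phase 1 elicits the reasoning; phase 2 feeds that reasoning back and asks
# for the final answer. `call_model` is a hypothetical placeholder function.
REASONING_TEMPLATE = "Q: {question}\nA: Let's think step by step."
ANSWER_TEMPLATE = (
    "Q: {question}\nA: Let's think step by step. {reasoning}\n"
    "Therefore, the answer is"
)

def zero_shot_cot(question, call_model):
    # Phase 1: reasoning extraction -- draw out the step-by-step narrative.
    reasoning = call_model(REASONING_TEMPLATE.format(question=question))
    # Phase 2: answer extraction -- derive the final answer from that path.
    return call_model(
        ANSWER_TEMPLATE.format(question=question, reasoning=reasoning)
    )
```

No hand-picked examples appear anywhere in either template, which is exactly what makes this zero-shot: the model supplies its own intermediate steps.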

Importance of Prompt Engineering

Prompt engineering is the practice of deliberately constructing inputs to steer a language model’s reasoning, and it significantly influences the success of methods such as chain-of-thought (CoT) prompting. Careful prompt construction is vital to guiding language models through a logical sequence of thoughts, ensuring the delivery of coherent and correct answers to complex problems. For instance, in a CoT setup, sequential logic is of the essence, as each prompt is meticulously designed to build upon the previous one, much like constructing a narrative or solving a puzzle step by step.

Among the many prompting techniques in use, it’s important to distinguish between zero-shot and few-shot prompts. Zero-shot prompting shines with straightforward efficiency, allowing language models to process plain instructions without any additional context or pre-feeding with examples. This is particularly useful when there is a need for quick and general understanding. On the flip side, few-shot prompting provides the model with a set of examples to prime its “thought process,” thereby greatly improving its competency in handling more nuanced or complex tasks.

The art and science of prompt engineering cannot be overstated as it conditions these digital brains—the language models—to not only perform but excel across a wide range of applications. The ultimate goal is always to have a model that can interface seamlessly with human queries and provide not just answers, but meaningful interaction and understanding.

Exploring the role of prompt engineering in enhancing the performance of language models

The practice of prompt engineering serves as a master key for unlocking the potential of large language models. By strategically crafting prompts, engineers can significantly refine a model’s output, with direct effects on factors like consistency and specificity. A prime example is the temperature setting in the OpenAI API, which controls the randomness of the output and thus influences the precision and predictability of language model responses.

Furthermore, prompt engineers must often deconstruct complex tasks into a series of smaller, more manageable actions. These actions may include recognizing grammatical elements, generating specific types of sentences, or even performing grammatical correctness checks. Such detailed engineering allows language models to tackle a task step by step, mirroring human cognitive strategies.

Generated knowledge prompting is another technique indicative of the sophistication of prompt engineering. This tool enables a language model to venture into uncharted territories—answering questions on new or less familiar topics by generating knowledge from provided examples. As a direct consequence, the model becomes capable of offering informed responses even when it has not been directly trained on specific subject matter.
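As a rough sketch, generated knowledge prompting amounts to two model calls: one to produce relevant facts, and one to answer with those facts in context. Here `call_model` is again a hypothetical placeholder for a real LLM call, and the prompt wording is illustrative:

```python
# Generated knowledge prompting in two stages: generate facts first, then
# answer the question with those facts prepended as context.
def generated_knowledge_answer(question, call_model):
    # Stage 1: ask the model to surface what it knows about the topic.
    knowledge = call_model(
        f"Generate some facts relevant to the question: {question}"
    )
    # Stage 2: answer the question with the generated knowledge in context.
    return call_model(
        f"Knowledge: {knowledge}\nQuestion: {question}\nAnswer:"
    )
```

The design choice here is that the model's own generated statements act as the "provided examples" the passage describes, grounding the second call even on topics it was never explicitly prompted about before.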

Altogether, the potency of prompt engineering is seen in the tailored understanding it provides to language models, resulting in outputs that are not only accurate but also enriched with the seemingly intuitive grasp of the assigned tasks.

Techniques and strategies for effective prompt engineering

Masterful prompt engineering involves a symbiosis of strategies and tactics, all aiming to enhance the performance of language models. At the heart of these strategies lies the deconstruction of tasks into incremental, digestible steps that guide the model through the completion of each. For example, in learning a new concept, a language model might first be prompted to identify key information before synthesizing it into a coherent answer.

Among the arsenal of techniques is generated knowledge prompting, an approach that equips language models to handle questions about unfamiliar subjects by drawing on the context and structure of provided examples. This empowerment facilitates a more adaptable and resourceful AI capable of venturing beyond its training data.

Furthermore, the careful and deliberate design of prompts serves as a beacon for language models, illuminating the path to better understanding and more precise outcomes. As a strategy, the use of techniques like zero-shot prompting, few-shot prompting, delimiters, and detailed steps is not just effective but necessary for refining the quality of model performance.

Conditioning language models with specific instructions or context is akin to tuning an instrument; it ensures that the probabilistic engine within produces the desired melody of outputs. It is this level of calculated and thoughtful direction that empowers language models to answer not only with confidence but also with relevance and utility.


Table: Prompt Engineering Techniques for Language Models

Technique | Description | Application | Benefit
Zero-shot prompting | Providing plain instructions without additional context | General understanding of tasks | Quick and intuitively geared responses
Few-shot prompting | Conditioning the model with examples | Complex task specialization | Enhances model’s accuracy and depth of knowledge
Generated knowledge prompting | Learning to generate answers on new topics | Unfamiliar subject matter questions | Allows for broader topical engagement and learning
Use of delimiters | Structuring responses using specific markers | Task organization | Provides clear output segmentation for better comprehension
Detailed steps | Breaking down tasks into smaller chunks | Complex problem-solving | Facilitates easier model navigation through a problem

List: Strategies for Effective Prompt Engineering

  1. Dismantle complex tasks into smaller, manageable parts.
  2. Craft prompts to build on successive information logically.
  3. Adjust model parameters like temperature to fine-tune output randomness.
  4. Use few-shot prompts to provide context and frame model thinking.
  5. Implement generated knowledge prompts to enhance topic coverage.
  6. Design prompts to guide models through a clear thought process.
  7. Provide explicit output format instructions to shape model responses.

Utilizing N-Shot Prompting for Complex Tasks

N-shot prompting stands as an advanced technique within the realm of prompt engineering, where a sequence of input-output examples (N indicating the number) is presented to a language model. This method holds considerable value for specific domains or tasks where examples are scarce, carving out a pathway for the model to identify patterns and generalize its capabilities. More so, N-shot prompts can be pivotal for models to grasp complex reasoning tasks, offering them a rich tapestry of examples from which to learn and refine their outputs. It’s a facet of prompt engineering that empowers a language model with enhanced in-context learning, allowing for outputs that not only resonate with fluency but also with a deepened understanding of particular subjects or challenges.

Applying N-shot Prompting to Handle Complex Reasoning Tasks

N-shot prompting is particularly robust when applied to complex reasoning tasks. When a model is fed several examples before being asked to generate its own output, it learns the nuances and subtleties required for new tasks—gaining an added layer of instruction that goes beyond the learning from its training data. This variant of prompt engineering is a gateway to leveraging the latent potential of language models, catalyzing innovation and sophistication in a multitude of fields. Despite its power, N-shot prompting does come with caveats; the breadth of context offered may not always lead to consistent or predictable outcomes due to the intrinsic variability of model responses.

Breakdown of Reasoning Steps Using Few-Shot Examples

The use of few-shot prompting is an effective stratagem for dissecting and conveying large, complex tasks to language models. These prompts act as a guiding light, showcasing sample responses that the model can emulate. Beyond this, chain-of-thought (CoT) prompting serves to outline the series of logical steps required to understand and solve intricate problems. The synergy between few-shot examples and CoT prompting enhances the machine’s ability to produce not just any answer, but the correct one. This confluence of examples and sequencing provides a scaffold upon which the language model can climb to reach a loftier height of problem-solving proficiency.
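This combination of few-shot examples and CoT reasoning can be sketched as a prompt in which each worked example carries its own reasoning chain before its answer. The arithmetic example below is the classic tennis-ball problem often used to illustrate CoT; the exact formatting is an assumption of the sketch:

```python
# A few-shot chain-of-thought prompt: each example shows its reasoning, so
# the model is primed to reason step by step on the final question.
COT_EXAMPLES = [
    ("Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
     "How many balls does he have?",
     "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
     "11"),
]

def build_cot_prompt(examples, question):
    parts = []
    for q, reasoning, answer in examples:
        parts.append(f"Q: {q}\nA: {reasoning} The answer is {answer}.")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    COT_EXAMPLES,
    "A baker had 23 muffins and sold 9. How many are left?",
)
```

Because the worked example demonstrates both the answer and the path to it, the model's completion tends to include its own reasoning chain rather than a bare number.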

Incorporating Additional Context in N-shot Prompts for Better Understanding

In the tapestry of prompt engineering, the intricacy of N-shot prompting is woven with threads of context. Additional examples serve as a compass, orienting the model towards producing well-informed responses to tasks it has yet to encounter. The hierarchical progression from zero-shot through one-shot to few-shot prompting demonstrates a tangible elevation in model performance, underscoring the necessity for careful prompt structuring. The phenomenon of in-context learning further illuminates why the introduction of additional context in prompts can dramatically enrich a model’s comprehension and output.

Table: N-shot Prompting Examples and Their Impact

Number of Examples (N) | Type of Prompting | Impact on Performance
0 | Zero-shot | General baseline understanding
1 | One-shot | Some contextual learning increases
≥ 2 | Few-shot (N-shot) | Considerably improved in-context performance

List: Enhancing Model Comprehension through N-shot Prompting

  1. Determine the complexity of the task at hand and the potential number of examples required.
  2. Collect or construct a series of high-quality input-output examples.
  3. Introduce these examples sequentially to the model before the actual task.
  4. Ensure the examples are representative of the problem’s breadth.
  5. Observe the model’s outputs and refine the prompts as needed to improve consistency.

By thoughtfully applying these guidelines and considering the depth of the tasks, N-shot prompting can dramatically enhance the capabilities of language models to tackle a wide spectrum of complex problems.
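The table and list above can be sketched in code. This toy helper (the names are my own, not a standard API) builds a zero-, one-, or few-shot prompt from the same example pool depending on N:

```python
# Build an N-shot prompt: N = 0 yields a bare task (zero-shot),
# N = 1 one-shot, and N >= 2 few-shot. The examples are illustrative.

EXAMPLE_POOL = [
    ("Translate to French: cat", "chat"),
    ("Translate to French: dog", "chien"),
    ("Translate to French: house", "maison"),
]

def build_n_shot_prompt(n, task):
    """Prefix `task` with the first n input/output examples from the pool."""
    if not 0 <= n <= len(EXAMPLE_POOL):
        raise ValueError("n out of range for the example pool")
    lines = [f"Input: {q}\nOutput: {a}" for q, a in EXAMPLE_POOL[:n]]
    lines.append(f"Input: {task}\nOutput:")  # leave the output open
    return "\n\n".join(lines)

print(build_n_shot_prompt(0, "Translate to French: bird"))  # zero-shot
print(build_n_shot_prompt(2, "Translate to French: bird"))  # few-shot
```

Varying `n` against the same task is a cheap way to observe the performance progression the table describes: run the same query at several values of N and compare the outputs.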

Leveraging Chain of Thought Prompting for Complex Reasoning

Chain of Thought (CoT) prompting emerges as a game-changing prompt engineering technique that revolutionizes the way language models handle complex reasoning across various fields, including arithmetic, commonsense assessments, and even code generation. Where traditional approaches may lead to unsatisfactory results, embracing the art of CoT uncovers the model’s hidden layers of cognitive capabilities. This advanced method works by meticulously molding the model’s reasoning process, ushering it through a series of intelligently designed prompts that build upon one another. With each subsequent prompt, the entire narrative becomes clearer, akin to a teacher guiding a student to a eureka moment with a sequence of carefully chosen questions.

Utilizing CoT prompting to perform complex reasoning in manageable steps

The finesse of CoT prompting lies in its capacity to deconstruct convoluted reasoning tasks into discrete, logical increments, thereby making the incomprehensible, comprehensible. To implement this strategy, one must first dissect the overarching task into a series of smaller, interconnected subtasks. Next, one must craft specific, targeted prompts for each of these sub-elements, ensuring a seamless, logical progression from one prompt to the next. This consists not just of deploying the right language but also of establishing an unambiguous connection between the consecutive steps, setting the stage for the model to intuitively grasp and navigate the reasoning pathway. When CoT prompting is effectively employed, the outcomes are revealing: enhanced model accuracy and a demystified process that can be universally understood and improved upon.

Using intermediate reasoning steps to guide the language model

Integral to CoT prompting is the use of intermediate reasoning steps – a kind of intellectual stepping stone approach that enables the language model to traverse complex problem landscapes with grace. It is through these incremental contemplations that the model gauges various problem dimensions, enriching its understanding and decision-making prowess. Like a detective piecing together clues, CoT facilitates a step-by-step analysis that guides the model towards the most logical and informed response. Such a strategy not only elevates the precision of the outcomes but also illuminates the thought process for those who peer into the model’s inner workings, providing a transparent, logical narrative that underpins its resulting outputs.

Enhancing the output format to present complex reasoning tasks effectively

As underscored by research, such as Fu et al. 2023, the depth of reasoning articulated within the prompts – the number of steps in the chain – can directly amplify the effectiveness of a model’s response to multifaceted tasks. By prioritizing complex reasoning chains through consistency-based selection methods, one can distill a superior response from the model. This structured chain-like scaffolding not only helps large models better demonstrate their performance but also presents a logical progression that users can follow and trust. As CoT prompting forges ahead, it is becoming increasingly evident that it leads to more precise, coherent, and reliable outputs, particularly in handling sophisticated reasoning tasks. This approach not only augments the success rate of tackling such tasks but also ensures that the journey to the answer is just as informative as the conclusion itself.

Table: Impact of CoT Prompting on Language Model Performance

| Task Complexity | CoT Prompting Implementation | Model Performance Impact |
| --- | --- | --- |
| Low | Minimal CoT steps | Marginal improvement |
| Medium | Moderate CoT steps | Noticeable improvement |
| High | Extensive CoT steps | Significant improvement |

List: Steps to Implement CoT Prompting

  1. Identify the main task and break it down into smaller reasoning segments.
  2. Craft precise prompts for each segment, ensuring logical flow and clarity.
  3. Sequentially apply the prompts, monitoring the language model’s responses.
  4. Evaluate the coherence and accuracy of the model’s output, making iterative adjustments as necessary.
  5. Refine and expand the CoT prompt sequences for consistent results across various complexity levels.

By adhering to these detailed strategies and prompt engineering best practices, CoT prompting stands as a cornerstone for elevating the cognitive processing powers of language models to new, unprecedented heights.
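One way to sketch the five listed steps in code is a loop that applies sub-prompts in order and feeds each intermediate answer into the next step. The `ask_model` function here is a deterministic stand-in, since the text implies no particular model API; swap in a real LLM call:

```python
# Walk a model through a task as a sequence of CoT sub-prompts,
# accumulating a transcript so each step sees the prior reasoning.
# `ask_model` is a placeholder; replace it with a real LLM call.

def ask_model(prompt):
    # Stand-in for an LLM call; echoes the prompt's last line.
    return f"[model answer to: {prompt.splitlines()[-1]}]"

def run_cot_sequence(task, subprompts):
    """Apply sub-prompts in order, building up a reasoning transcript."""
    transcript = [f"Task: {task}"]
    for step in subprompts:
        context = "\n".join(transcript)
        answer = ask_model(f"{context}\n{step}")
        transcript.append(f"{step}\n{answer}")
    return "\n".join(transcript)

steps = [
    "Step 1: Restate the problem in your own words.",
    "Step 2: List the quantities involved.",
    "Step 3: Combine them to reach the final answer.",
]
print(run_cot_sequence("Split a $90 bill among 3 people with a 20% tip.", steps))
```

Monitoring the transcript between steps (point 3 of the list) is where the iterative refinement happens: if a step's answer is incoherent, you tighten that sub-prompt before moving on.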

Exploring Advanced Techniques in Prompting

In the ever-evolving realm of artificial intelligence, advanced techniques in prompting stand as critical pillars in mastering the complexity of language model interactions. Amongst these, Chain of Thought (CoT) prompting has been pivotal, facilitating Large Language Models (LLMs) to unravel intricate problems with greater finesse. Unlike the constrained scope of few-shot prompting, which provides only a handful of examples to nudge the model along, CoT prompting dives deeper, employing a meticulous breakdown of problems into digestible, intermediate steps. Echoing the subtleties of human cognition, this technique revolves around the premise of step-by-step logic descriptions, carving a pathway toward more reliable and nuanced responses.

While CoT excels in clarity and methodical progression, the art of Prompt Engineering breathes life into the model’s cold computations. Task decomposition becomes an orchestral arrangement where each cue and guidepost steers the conversation from ambiguity to precision. Directional Stimulus Prompting is one such maestro in the ensemble, offering context-specific cues to solicit the most coherent outputs, marrying the logical with the desired.

In this symphony of advanced techniques, N-shot and few-shot prompting play crucial roles. Few-shot prompting, with its example-laden approach, primes the language models for improved context learning—weaving the fabric of acquired knowledge with the threads of immediate context. As for N-shot prompting, the numeric flexibility allows adaptation based on the task at hand, infusing the model with a dose of experience that ranges from a minimalist sketch to a detailed blueprint of responses.

When harmonizing these advanced techniques in prompt engineering, one can tailor the conversations with LLMs to be as rich and varied as the tasks they are set to accomplish. By leveraging a combination of these sophisticated methods, prompt engineers can optimize the interaction with LLMs, ensuring each question not only finds an answer but does so through a transparent, intellectually rigorous journey.

Utilizing contextual learning to improve reasoning and response generation

Contextual learning is the cornerstone of effective reasoning in artificial intelligence. Chain-of-thought prompting epitomizes this principle by engineering prompts that lay out sequential reasoning steps akin to leaving breadcrumbs along the path to the ultimate answer. In this vein, a clear narrative emerges—each sentence unfurling the logic that naturally leads to the subsequent one, thereby improving both reasoning capabilities and response generation.

Multimodal CoT plays a particularly significant role in maintaining coherence between various forms of input and output. Whether it’s text generation for storytelling or a complex equation to be solved, linking prompts ensures a consistent thread is woven through the narrative. Through this, models can maintain a coherent chain of thought—a crucial ability for accurate question answering.

Moreover, few-shot prompting plays a pivotal role in honing the model’s aptitude by providing exemplary input-output pairs. This not only serves as a learning foundation for complex tasks but also embeds a nuance of contextual learning within the model. By conditioning models with a well-curated set of examples, we effectively leverage in-context learning, guiding the model to respond with heightened acumen. As implied by the term N-shot prompting, the number of examples (N) acts as a variable that shapes the model’s learning curve, with each additional example further enriching its contextual understanding.

Evaluating the performance of language models in complex reasoning tasks

The foray into complex reasoning has revealed disparities in language model capabilities. Smaller models tend to struggle to maintain logical thought chains, which can drag down accuracy and underscores the importance of properly structured prompts. The success of CoT prompting hinges on model capacity: larger LLMs show a marked performance improvement with CoT, an effect traceable directly to the size and complexity of the model itself.

The ascendancy of prompt-based techniques tells a tale of transformation: error rates fall as prompts become more precise and interpretable. Each prompt becomes a trial, and the model’s ability to respond with fewer errors becomes the measure of success. By incorporating a few well-chosen examples via few-shot prompting, we bolster the model’s understanding and thus its performance, particularly on tasks that demand complex reasoning.

Table: Prompting Techniques and Model Performance Evaluation

| Prompting Technique | Task Complexity | Impact on Model Performance |
| --- | --- | --- |
| Few-Shot Prompting | Medium | Moderately improves understanding |
| Chain of Thought Prompting | High | Significantly enhances accuracy |
| Directional Stimulus Prompting | Low to Medium | Ensures consistent output |
| N-Shot Prompting | Variable | Flexibly optimizes based on N |

The approaches outlined impact the model differentially, with the choice of technique being pivotal to the success of the outcome.
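To make such comparisons concrete, a minimal evaluation harness might score each technique's outputs by exact match. Everything below is invented for illustration: the tiny dataset, and a deterministic `run_prompt` stub standing in for real model calls:

```python
# Score prompting variants by exact-match accuracy on a tiny labeled set.
# `run_prompt` is a deterministic stub standing in for a real model call;
# its canned answers are fabricated purely to demonstrate the harness.

def run_prompt(style, question):
    canned = {
        ("cot", "2+2"): "4", ("cot", "3*3"): "9",
        ("zero", "2+2"): "4", ("zero", "3*3"): "6",  # one wrong answer
    }
    return canned[(style, question)]

def accuracy(style, dataset):
    """Fraction of questions answered exactly right under a prompt style."""
    correct = sum(run_prompt(style, q) == gold for q, gold in dataset)
    return correct / len(dataset)

dataset = [("2+2", "4"), ("3*3", "9")]
print("zero-shot:", accuracy("zero", dataset))  # 0.5
print("CoT:", accuracy("cot", dataset))         # 1.0
```

With a real model behind `run_prompt`, the same loop lets you measure whether a given technique actually moves accuracy on your task rather than trusting the qualitative table above.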

Understanding the role of computational resources in implementing advanced prompting techniques

Advanced prompting techniques hold the promise of precision, yet they do not stand without cost. Implementing such strategies as few-shot and CoT prompting incurs computational overhead. Retrieval processes become more complex as the model sifts through a larger array of information, evaluating and incorporating the database of examples it has been conditioned with.

The caliber of the retrieved information is proportional to the performance outcome. Hence, the computational investment often parallels the quality of the response. Exploiting the versatility of few-shot prompting can economize computational expenditure by allowing for experimentation with a multitude of prompt variations. This leads to performance enhancement without an excessive manual workload or human bias.

Breaking problems into successive steps for CoT prompting guides the language model through a task, placing additional demands on computational resources, yet ensuring a methodical approach to problem-solving. Organizations may find it necessary to engage in more extensive computational efforts, such as domain-specific fine-tuning of LLMs, particularly when precise model adaptation surpasses the reach of few-shot capabilities.

Thus, while the techniques offer immense upside, the interplay between the richness of prompts and available computational resources remains a pivotal aspect of their practical implementation.

Summary

In the evolving realm of artificial intelligence, Prompt Engineering has emerged as a crucial aspect. N-Shot prompting plays a key role by offering a language model a set of examples before requesting its own output, effectively priming the model for the task. This enhances the model’s context learning, essentially using few-shot prompts as a template for new input.

Chain-of-thought (CoT) prompting complements this by tackling complex tasks, guiding the model through a sequence of logical and intermediate reasoning steps. It dissects intricate problems into more manageable steps, promoting a structured approach that encourages the model to display complex reasoning tasks transparently.

When combined, these prompt engineering techniques yield superior results. Few-shot CoT prompting gives the model the dual benefit of example-driven context and logically parsed problem-solving. Even in the absence of examples, as with Zero-Shot CoT, step-by-step reasoning still helps language models perform better on complex tasks.
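Zero-Shot CoT needs no examples at all; the well-known trick, from Kojima et al.'s "Let's think step by step" finding, is simply a reasoning trigger appended to the question:

```python
# Zero-shot CoT: append a reasoning trigger instead of worked examples.
COT_TRIGGER = "Let's think step by step."

def zero_shot_cot(question):
    """Wrap a bare question with a step-by-step reasoning cue."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

print(zero_shot_cot("If I buy 4 notebooks at $3 each, what do I pay?"))
```

The model's continuation after the trigger is the reasoning extraction step; a second prompt over that reasoning then performs the answer extraction described below.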

CoT ultimately achieves two objectives: reasoning extraction and answer extraction. The former facilitates the generation of detailed context, while the latter utilizes said context for formulating correct answers, improving the performance of language models across a spectrum of complex reasoning tasks.

| Prompt Type | Aim | Example |
| --- | --- | --- |
| Few-Shot | Provides multiple training examples | N-shot prompts |
| Chain of Thought | Breaks tasks into steps | Sequence of prompts |
| Combined CoT | Enhances understanding with examples | Few-shot examples |


* AI tools were used as a research assistant for this content.


Cynefin For Everyday Life – A Use Case


Understanding the Cynefin framework

A brief overview of the Cynefin framework

The Cynefin framework is a problem-solving model developed to help leaders make decisions within the context of unique and complex situations. Created by Dave Snowden in the early 2000s, this framework emphasizes the idea that every problem is different and requires a tailored approach for resolution. The framework categorizes problems into five domains – Obvious, Complicated, Complex, Chaotic, and Disorder – each requiring a different strategy for addressing them effectively.

By understanding the characteristics of each domain and applying the corresponding approach, organizations can navigate through uncertainties, make sense of complex situations, and make informed decisions. The Cynefin framework provides a structured way to approach problems, analyze data, engage stakeholders, and determine the best course of action based on the nature of the problem at hand. This adaptive and flexible framework can be applied at all levels of an organization to enhance decision-making processes and achieve successful outcomes.

The relevance of Cynefin in everyday life

The Cynefin framework is not just a tool for consultants and senior management types, but it is also highly applicable in everyday life. By understanding the different domains within the framework, individuals can approach various situations with a more informed and strategic mindset. Whether it’s making decisions at work, solving personal problems, or navigating complex relationships, the Cynefin model offers a structured approach to sense-making and problem-solving in all aspects of life. Its relevance lies in its ability to help individuals adapt to the unique characteristics of each situation and make more effective decisions based on the context at hand.

Case Study: Applying Cynefin in decision-making

Using Cynefin to Evaluate Homeowner and Flood Insurance Options

When evaluating homeowner and flood insurance options, the Cynefin framework can be a helpful tool to navigate the complexity of choosing the right policy, provider, and options.

In the Simple domain, where the relationship between cause and effect is clear, you may consider basic homeowner insurance options that cover common risks such as fire or theft. These policies are straightforward and easy to understand, making them suitable for situations where the risks are well-known and easily mitigated.

Moving into the Complicated domain, where the relationship between cause and effect is less obvious, you may need to consult with experts or insurance agents to analyze and understand the different flood insurance options available. By using a “Sense-Analyze-Respond” approach, you can gather information, compare policies, and make an informed decision based on your specific needs and circumstances.

In the Complex domain, where the relationship between cause and effect is unpredictable, you may need to consider more comprehensive homeowner and flood insurance options that offer additional coverage for unforeseen events. This may involve looking at policies that include coverage for natural disasters, water damage, and other potential risks that are not easily mitigated.

Finally, in the Chaotic domain, where the situation is unstable and rapidly changing, you may need to act quickly to protect your home and assets in the event of a flood or other disaster. This could involve seeking immediate assistance from emergency services, contacting your insurance provider, and documenting any damage for future claims.

By applying the Cynefin framework to evaluate homeowner and flood insurance options, you can make a more informed decision that aligns with your unique circumstances and needs. This approach allows you to assess the complexity of the situation, consider different factors, and choose the best insurance options to protect your home and assets in the face of uncertainty.

Summary

The Cynefin framework is a valuable tool to navigate the complexity of choosing homeowner and flood insurance options. In the Simple domain, basic homeowner insurance options provide coverage for common risks. In the Complicated domain, consulting experts or insurance agents can help analyze and understand different flood insurance options. In the Complex domain, more comprehensive policies that provide coverage for unforeseen events may be necessary. In the Chaotic domain, acting quickly to protect your home and assets is key. By applying the Cynefin framework, you can make an informed decision that aligns with your unique circumstances and needs, ensuring that you have the best insurance options to protect your home and assets in the face of uncertainty.

Personal Use Example

Here’s a textual representation of the Cynefin model I used for evaluating homeowner and flood insurance options recently:

Cynefin Framework for Insurance Evaluation:

  1. Clear
    • Comparing premiums
    • Evaluating deductibles
    • Assessing policy limits
    • Simple choices with clear outcomes
  2. Complicated
    • Analyzing policy exclusions
    • Understanding endorsements
    • Reviewing specific coverage limits
    • Requires expert advice for thorough evaluation
  3. Complex
    • Assessing future flood risks
    • Considering climate change impacts
    • Evaluating long-term sustainability of insurance providers
    • Involves unpredictable factors and requires adaptive strategies
  4. Chaotic
    • Handling emergency responses
    • Managing immediate claims post-disaster
    • Making urgent decisions without clear information
    • Focuses on immediate action and resolution
  5. Disorder (Central Area)
    • Situations where it’s unclear which quadrant applies
    • Initial assessment phase before categorizing into appropriate quadrants

This framework helped me to categorize and address various aspects of insurance evaluation based on the nature and complexity of the factors involved.


Don’t Underestimate the Value of Progress

I love self-experimentation. I like trying to find ways to grow my skills, learn new things and optimize my life. I find learning new things not just rewarding, but enriching; that is, learning seems to make my life significantly richer and brighter with each new skill and insight. That said, I am a recovering type A personality, and like most type A personalities I’ve lived a life heavily focused on goal setting and goal achievement.

I suppose goal setting has always been a big part of my life. I was, after all, a high-stress, high-anxiety child. I can remember doing extensive goal setting exercises in middle and high school, and of course, I remember the stress of pushing myself to get decent grades and to stay focused on the learning at hand. As I transitioned into an adult, and an entrepreneur, goal setting became a huge part of my life. I came of age in the days of Stephen Covey, when carrying a paper planner and doing daily goal setting was part of the corporate mantra. Later, I joined Amway, where goal setting and so-called “dream building” were part of the business culture. I’ll save my Amway stories for another day, but suffice it to say, I was fully indoctrinated in goal setting by the time I opened the first of a string of businesses.

Fast forward 30 years and you get to today. Now, I’m a serial entrepreneur, a part-time expat and a recovering type A personality, with the emphasis on recovering. I still do goal setting regularly and I have a variety of daily practices which I follow closely. But, unlike the days of my entrepreneurial youth, I tend not to treat the end goal as a destination, but as a general direction in which to focus my efforts. This probably seems like a subtle change to many of you, but to me it is life-changing. You see, I’ve learned to stop tying my self-worth to achievement of a very large goal at the end of a usually complex and difficult struggle. Instead, I now endeavor, and judge myself, against the yardstick of improvement. Generally speaking, I ask myself “Did I make 1% improvement toward my goals today?”

I originally learned about the 1% better approach from James Altucher. He mentions it in many of his books and it is a frequent topic on his podcast. Basically, he asks himself a set of questions every day, such as “Did I make myself 1% better today?”, across a set of categories which he has defined. I adapted this technique, unaware of its historic tradition, to my life several years ago and have seen great benefit from it over the long run.

It turns out that the 1% approach to improvement has a pretty significant historic tradition. That tradition, called Kaizen, dates back much further than James. I’ve included a link, for those of you interested in learning about the history of this approach. No matter how it came into my life, or the history behind it, I am just thrilled with the difference it has made.

So, why am I telling you all this? It’s because I want to share with you a very simple but powerful insight that I gained from this approach. That insight is to value progress. For so many years, I could only see the value of reaching the end goal, and I realize now that I was blind to the joys of the progress along the way. Now I try to be better than I was, and to learn from my mistakes. To appreciate each step of the journey. To appreciate progress.

Let me give you a couple of examples of how powerful this concept has turned out to be in my life recently. One of the goals I’ve been pursuing this year is to raise my number of Kiva loans to 100. Today, my number of loans sits at 73. I still have a ways to go before the end of the year to hit my goal, but I have a plan to make that happen. Now, I could be down on myself because it is the middle of November and I’m still quite a few loans away from hitting my goal. However, in the last 30 days, thanks to help from some of my friends who donated gift cards to my Kiva account, I’ve made several loans and thus helped several families around the world. Every day that I make a loan, I improve the lives of distant entrepreneurs and their families, as well as those in their community in many cases. Thus, even without hitting my goal of 100 loans at this moment, I have managed to help people and improve people’s lives with the progress I’ve made. I’ve gotten notes and updates from a few of them, describing their progress and thanking me for my help. I celebrate those notes and my being fortunate enough to help others. I’m not stressed about hitting my goal, because I am grateful for the progress that I’ve made thus far and the help I’ve been able to give to those who need it.

Another example of appreciating progress came in the last few months in the area of exercise. At the beginning of the year, I set a goal for 2018 to restore my mobility and flexibility to prevent injuries. I had a concrete, measurable achievement that I used as a test of whether or not I had hit this goal. For personal reasons, I’ll leave that test out of this discussion. But, suffice it to say, I have a metric that I’m trying to hit. To accomplish that metric, I also defined a set of activities, and a frequency for performing them, as a subgoal toward achieving my overall goal of passing the test. Now, here we are in the middle of November and I am very close to passing the test. In fact, barring a physical illness or injury, I should be able to pass it in early December. But here’s the amazing thing: as I have been working toward the larger goal by accomplishing the subgoal on a week-to-week basis, the improvement has been amazing. Each week, I gain more mobility, and that leads to significantly improved comfort, balance, patience and an overall sense of just feeling “better”. I’ve also gained additional physical capabilities along the way. This has translated into increased comfort and mastery while practicing various physical activities that I enjoy. Once again, even though I haven’t hit my overall goal yet, the journey has offered its own rewards along the way. Nearly every day, I take a little time to appreciate that progress. I remind myself, with gratitude, of those changes and the progress they represent. Just that simple reframing and recognition of each step and its return has made a huge difference in my happiness and contentment.

I know. I know. By now, a lot of my type A personality friends have either stopped reading, or are saying to themselves that this seems like such a small and trite thing that it couldn’t possibly be useful. “I’d rather stay focused on the larger goal,” they are probably thinking. That’s okay. I don’t feel that I’ve lost track of my larger goals. I don’t feel lost, wayward or listless. I also don’t feel as much anxiety and trepidation as I used to. In fact, I feel like I get more done now than I used to. I feel like I hit my goals more readily and with more satisfaction than ever before in my adult life. What works for me might not work for you. I’m certainly not writing this to tell you that my way is the right way. Instead, if you are one of my type A personality friends, I just want you to be aware that there are other ways to think about the problem. That’s it, that’s the sole agenda I have.

Maybe someday, I’ll dig a little deeper into my time at Amway or I’ll tell a few more stories about all of the lessons I learned there. But for now, I hope this writing helped you and I look forward to hearing from many of you that have questions or that want to share their own stories about valuing progress. You know how to find me, I am @LBHUSTON on most of the socials. As always, thanks for reading and I look forward to hearing from you.

Sometimes The Best Answer is Better Questions…

Let’s face it – we live in a world of answers. Nearly all of us have a little box in our pocket that we can ask anything and get back some form of answer. Maybe it’s blogger content, scientific study results, news feeds, or a meme about our interest along with cats or tacos – but the ability to get answers is almost ubiquitous. You can ask about anything as well, from the mating habits of Sudanese moths to the ingredients in a Pina Colada and everything in between.
Even when we aren’t explicitly seeking answers, data is still everywhere. There’s the web, of course, and printed materials. There’s video and audio on billions of subjects that would take thousands of lifetimes to consume. Then there is social media – a never ending barrage of stream of consciousness from around the globe, expressed in short bursts or pictures (often with tacos). All of that data and all of that access to communication is driving down the value of answers each and every day. Why learn and memorize when you can find it in a few seconds?
What’s amazing is that the value of questions is actually rising, even as answers become trivial. Looking at a problem and being able to derive the questions you need to search for to find the most appropriate answer is a modern-day superpower. Identifying the right series of questions that lets someone link answers together and walk down a path of data to enlightenment is a mystifying, awe-inspiring superpower in these days of the information age.
However, not all questions are created equal, are they? Not all paths lead to compounding knowledge and insight. Many only lead in circles or back to pictures of cats, or tacos, or cats with tacos…
How can you get better at creating the right questions, and at knowing what questions to ask and in what order to ask them? The answer is, of course, repetition. Practice makes perfect, as the old saying goes. You have to do the work of generating questions (almost like mental pushups) on a daily basis, so that when you need them for real, you have the skill to build the right path to insight. To make that happen, you have to flex those question muscles and work them against a variety of topics every day. Like pushups, if every day you generate a set of 10-20 questions and follow them down a research path, you’ll find that you get better and better at the process, until one day – like pushups again – you can flex without pain, confusion or hesitation.
Give it a shot. Pick a random subject every day and try to ask 10-20 questions that you think are interesting about that subject via Google or your favorite search engine. It’s OK to follow the rabbit holes; keep pushing forward. You’ll be surprised what you learn, and even more surprised at how much easier generating good questions becomes.
Want better answers? Learn to ask better questions. It leads to a better life.

Why I Make Things

I spent the first 20 years of my career breaking things. I am good at breaking stuff. I am a decent hacker, a semi-talented reverse engineer and a very curious deconstructionist. I yearn to tear things down, to tweak, to iterate, to improve and to use things in ways that were entirely unintended by their makers. As long as I can remember, I have loved these engagements.

I grew up in a print shop with my Dad, tearing down presses, opening and rebuilding motors and playing with type. I absorbed the power of making something on a printing press and the amplification that it represented at a visceral level. One of my earliest mentors, Bob Gent, furthered this enthrallment with print throughout high school, and even threw in my basic electronics education to boot! My Mom, another major icon in my life, was a computer professional. She was working in mainframe shops doing operations, quality control, management & some light development. She let me cut my teeth in the tape library & keypunch rooms. I later worked there after high school and during college as a tape librarian and eventually a printer technician/junior operator.

It was there, on that first corporate job, that a few people like Jim & Su Klun, Mike Davis, Diane DeFallo, Gary Shank, Art Smith and others taught me about coding, scripting, PCs, communications, EDI and how to be more than a technician. Art, in particular, taught me that it was great to be smart, but that you could take those skills and make a life, a business and some joy. I was an attentive student, even if it didn’t seem like it at the time. I was paying attention. And, because of their lessons, I made things – software/scripts, a BBS, business processes, a HUGE ego :), and I started businesses. I started generating ideas, working on them, chasing them, building them. I made myself happy by pursuing them, even the ones that crashed and burned. I learned that making is a form of hope. It’s a way to put forth something that represents your will to change the world, even if it is in some small way (I suppose Aleister Crowley would be proud…).

Over the years since then, I have made many businesses and products, written part of a book, been a published poet several times, created several groups for different purposes, made a symposium that ran for 5 years, written hundreds of articles for a magazine, taught myself to be a speaker and presented at conferences around the world, and built processes and tools in use by thousands of people on a global scale. I have been and am – a maker. And I have loved every moment of it. I see making things – be it code, hardware or words – as a tribute to those mentors. I honor each of them with everything I do. I am a part of the reflection of the sum of their inputs. I make because that is exactly what they taught me to do…