AI Isn't the Next Big Thing, Thinking Is
Why Human Intellect Still Reigns Supreme
Every year, Oxford University Press announces its word of the year.
For 2024, the Oxford Word of the Year was “brain rot.”
It’s defined as a decline in mental acuity linked to overconsumption of trivial content—material that fuels dependence without encouraging critical thought. Recent studies show the problem extends beyond social media, reaching into the growing influence of artificial intelligence on young people’s daily lives.
Research published in Nature and at leading neuroscience institutes confirms a troubling pattern: as the use of AI tools rises, so do signs of overreliance, particularly among those already facing anxiety or depression.
Yet the data also suggests that technology itself isn’t the culprit—it’s our approach. This leads me to my strong belief that AI isn’t the next big thing; thinking is.
The issue isn’t whether we use AI, but how well we maintain our capacity for genuine, independent thought.
This week, we’ll examine practical ways to use AI tools more effectively without dulling our thinking, maintaining both productivity and clarity as we navigate an increasingly automated world.
Break Down Outcomes into Tasks
Last week, I introduced my readers to the S.M.A.R.T. Framework (Sort, Match, Automate, Refine, Take Control) as a way to boost AI-driven productivity. AI performs better when it has discrete tasks than it does with complex problems. At least that’s the case today. The best way to make AI work for you isn’t by focusing on the framework itself, but on what you want at the end: a polished marketing brief, a crisp status report, or a sharp social media post. Then deconstruct it.
Start by picturing the final deliverable. Take a newsletter, for example. Maybe you’re aiming for the kind of voice and energy your top-performing issues nailed: direct, useful, and grounded in fresh data. Once you know the tone, break the project down. If you need the latest news, a definition, or an analogy for a complex term, spell that out. If you want a specific type of intro that sparks interest—maybe based on what worked in your last newsletter—define those parameters clearly by including examples in your prompt.
With this roadmap, you can work through the tasks sequentially or assign them to specialized AI agents. I’ve been using Taskade to manage my team of agents. I like it for its ready-made workflows, though you could design something from scratch with Claude. Here are some examples of the agents and the tasks they handle (a code sketch of this breakdown follows the list):
Research Agent: Give it clear marching orders. For instance: “Find the three latest AI trends reported by the NY Times, Ars Technica, and TechCrunch that are relevant to enterprise software.”
Copywriting Agent: Feed it guidelines learned from your best newsletters. For example: “Write in short, direct sentences. Avoid certain words (like ‘delves’). Include a stat or data point in the first two lines to hook readers, mirroring the tone of my highest-performing January newsletter.” Build a knowledge base of these examples in a Custom GPT, an agent, or even a prompt.
Data Integration Agent: Tell it to incorporate recent stats from a report that you are referencing or from an online database. Then ask your AI assistant to create a graph that best visualizes the accompanying data.
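If it helps to see the decomposition concretely, here is a minimal Python sketch of the same idea: the deliverable is defined up front, each discrete task is routed to a named agent, and the outputs are chained sequentially before the human review step. The roles, prompts, and the run_agent() stub are illustrative assumptions, not Taskade’s or Claude’s actual API.

```python
# A minimal sketch of breaking a deliverable into discrete agent tasks.
# Roles, prompts, and run_agent() are illustrative assumptions, not any
# vendor's real API.
from dataclasses import dataclass


@dataclass
class AgentTask:
    role: str    # which "agent" handles this step
    prompt: str  # the discrete instruction it receives


# Work backward from the deliverable (a newsletter issue) to discrete tasks.
tasks = [
    AgentTask(
        role="Research Agent",
        prompt=("Find the three latest AI trends reported by the NY Times, "
                "Ars Technica, and TechCrunch relevant to enterprise software."),
    ),
    AgentTask(
        role="Copywriting Agent",
        prompt=("Write in short, direct sentences. Avoid words like 'delves'. "
                "Open with a stat in the first two lines, mirroring my "
                "highest-performing January newsletter."),
    ),
    AgentTask(
        role="Data Integration Agent",
        prompt=("Pull the latest figures from the report I'm referencing and "
                "suggest a chart that best visualizes them."),
    ),
]


def run_agent(task: AgentTask, context: str) -> str:
    """Stand-in for whichever platform you use (Taskade, Claude, an API call).
    Here it simply echoes the assignment so the pipeline runs end to end."""
    return f"[{task.role}] would act on: {task.prompt}"


# Run the tasks sequentially, feeding each output into the next step's context.
context = "Goal: a newsletter issue that is direct, useful, and grounded in fresh data."
for task in tasks:
    output = run_agent(task, context)
    print(output)
    context += "\n" + output  # human-in-the-loop review happens after this loop
```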
After the agents deliver their outputs, apply your own judgment. This is the human-in-the-loop step. Review the suggestions, choose what adds real value, and discard the fluff.
Then edit. Some days I’m merciless with edits; other times I have less time than I’d like, so I’m more selective. But the principle remains: define what success looks like first, then guide your AI tools accordingly. The result is sharper content with less guesswork, and more time saved for high-level thinking.
This week, I addressed a group of financial planners about practical ways to harness AI in their work. During the session, one participant asked what I’d recommend their children study if they aim to thrive in the coming AI era.
[If you’d like to have a look at the presentation, there are a lot of nuggets applicable to any industry.]
They expected something like computer science or data analytics. Instead, I suggested philosophy.
If that answer sounds surprising, consider how quickly technical skills can become outdated. With advances in generative AI and natural language processing, the hours spent mastering a coding language may soon feel like a relic of the past. As Nvidia’s CEO Jensen Huang notes, we’re moving toward an era where anyone can direct machines with plain speech.
In that world, knowing how to think—logically, ethically, creatively—matters more than knowing how to code.
McKinsey research supports this outlook. They argue that as the labor market grows more automated and dynamic, every worker benefits from foundational skills that go beyond what machines can do.
According to their framework, the crucial abilities will involve adding value where AI cannot, adapting to new digital environments, and continually evolving to meet shifting demands. Philosophy hones these exact capacities.
By wrestling with complex ethical questions, dissecting arguments, and refining the art of reasoning, a philosophy graduate builds a flexible, resilient intellect. That kind of mind doesn’t just react to new technologies—it anticipates and guides them.
Source: McKinsey & Co.
If you need a cultural reference, think of the Academy Award-winning film written by Ben Affleck and Matt Damon, Good Will Hunting. Will’s gift wasn’t just memorizing facts or wrangling code; it was understanding concepts deeply, challenging assumptions, and grappling with the human condition.
He didn’t need a classroom to teach him differential equations, and by the same logic, future professionals won’t need years in a coding boot camp if AI can do the heavy lifting. Instead, we should invest in developing the cognitive tools that will always stand apart from automated routines.
A philosophy major might never write a single line of Python, but they will be prepared to question the assumptions behind the algorithms, interpret the data that drives decisions, and reason through moral implications that software can’t parse.
While I’m all for humans honing their thinking skills, it’s interesting to note that our AI counterparts are stepping up their game too.
Enter chain of thought reasoning, a smart approach used in the latest AI models like OpenAI's o1. This method allows AI to break down complex problems into smaller, manageable steps, much like how we tackle challenges ourselves.
Instead of just spotting patterns and giving quick answers, these models engage in a more thoughtful reasoning process. With large-scale reinforcement learning, OpenAI’s o1 can refine its responses and learn from mistakes before generating an answer, leading to more accurate and nuanced outputs.
As AI continues to improve, we can expect these models to handle increasingly complex reasoning tasks. Just as we rely on critical thinking to navigate tough decisions, chain of thought reasoning is enabling AI to work through problems with similar depth.
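To make the contrast concrete, here is a minimal sketch of eliciting step-by-step reasoning through a prompt with the OpenAI Python client. The model name and the sample question are assumptions for illustration; reasoning models like o1 perform this decomposition internally, while general chat models often benefit from being asked to show their steps.

```python
# A minimal sketch of prompting for step-by-step ("chain of thought") style
# reasoning. The model name is an assumption; swap in a reasoning model such
# as o1 if you have access, in which case the explicit instruction is optional.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A vendor quotes $4,800/year billed annually or $450/month billed monthly. "
    "Which is cheaper over one year, and by how much?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[
        {
            "role": "user",
            "content": (
                "Work through this step by step: restate the problem, show the "
                "intermediate calculations, then give the final answer.\n\n" + question
            ),
        }
    ],
)

print(response.choices[0].message.content)
```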
As AI becomes ubiquitous, those who can think clearly, debate thoughtfully, and reason ethically will become indispensable. We can outsource calculations, translations, and even creative drafts to AI, but we cannot outsource our judgment.
AI is a remarkable tool, and its capacity to reshape our world is real. Yet as we adopt these technologies, we must remain the authors of our own thoughts. AI isn’t the next big thing, thinking is. It’s the human mind—educated, curious, and morally grounded—that will define the future.
Character.ai - Character.ai is an innovative platform that allows users to create and interact with AI-driven characters, making it a versatile tool for personal development and coaching. By engaging in conversations with customized characters, users can experience personalized interaction that encourages reflection on their thoughts and beliefs, fostering greater self-awareness and critical thinking skills.
Wysa - Wysa is an AI-driven mental health app that provides support through guided conversations. It encourages users to explore their thoughts and feelings, promoting mindfulness and cognitive restructuring.
Socratic Problem-Solving Prompt
The Socratic Method, originating in Ancient Greece, uses questions and answers to stimulate critical thinking. Rather than delivering knowledge, it draws out understanding. With AI, it can help clarify your reasoning by challenging assumptions and highlighting gaps.
How to Use This Prompt
Use it by viewing the AI as a thoughtful partner. Reflect on its prompts before responding, and ask, “What am I missing?”
Product Strategy: Provide market research and feedback. Let the AI’s questions reveal blind spots and improve your roadmap.
Marketing Campaigns: Offer audience, budget, and messaging details. Use its probing to refine your approach.
Vendor Selection: Present cost, reliability, and integration factors. Its inquiries help you articulate real reasons, not just instincts.
The AI (this prompt is written for ChatGPT’s o1 model, but you can use any model) won’t decide for you. It acts as a sounding board, improving your critical thinking and strengthening your strategy.
# Role
You are an AI assistant designed to facilitate problem-solving and clearer thinking using the Socratic method. Your role is to guide the user through a series of thought-provoking questions and dialogues to help them explore their problem, challenge their assumptions, and arrive at well-reasoned solutions.
## Context
The user will present a problem or topic they want to explore. Your task is to engage them in a Socratic dialogue to deepen their understanding and improve their problem-solving approach.
## Process
Follow these steps to guide the user through the Socratic problem-solving process:
1. **Problem Identification and Definition**
- Ask open-ended questions to help the user clearly define and articulate their problem.
- Example: "Can you describe the specific challenge you're facing?"
2. **Assumption Examination**
- Probe the user's underlying assumptions and beliefs related to the problem.
- Example: "What assumptions are you making about this situation?"
3. **Evidence and Reasoning**
- Encourage the user to provide evidence for their beliefs and reasoning.
- Example: "What evidence supports your current perspective on this issue?"
4. **Alternative Perspectives**
- Challenge the user to consider alternative viewpoints or solutions.
- Example: "How might someone with a different background approach this problem?"
5. **Implications and Consequences**
- Guide the user to explore the potential outcomes of different approaches.
- Example: "What might be the long-term consequences of this solution?"
6. **Question Refinement**
- Help the user refine their questions and problem statement based on new insights.
- Example: "Given what we've discussed, how would you reframe your initial question?"
7. **Synthesis and Conclusion**
- Assist the user in synthesizing their thoughts and forming a conclusion or action plan.
- Example: "Based on our dialogue, what key insights have you gained, and how do they inform your next steps?"
## Guidelines for Interaction
- Use clear, concise language in your questions and responses.
- Encourage critical thinking by challenging assumptions and requesting clarification.
- Adapt your questioning style to the user's responses, maintaining a balance between guidance and allowing independent thought.
- Incorporate relevant problem-solving techniques such as decomposition, visualization, or analytical tools when appropriate.
- Maintain a supportive and non-judgmental tone throughout the dialogue.
## Output Format
Structure your responses in a clear, conversational format. Use markdown formatting for readability, including:
- Bold text for key points or questions
- Bullet points for lists or options
- Numbered lists for sequential steps or ideas
Remember, your goal is not to provide direct answers, but to guide the user towards their own insights and solutions through thoughtful questioning and dialogue.
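If you’d rather run this prompt programmatically than paste it into a chat window, a minimal sketch might look like the following. The model name and the vendor-selection scenario are assumptions; the SOCRATIC_PROMPT constant should hold the full prompt text above, and for models that don’t accept a system message you can fold it into the first user message instead.

```python
# A minimal sketch of using the Socratic prompt above as a system message.
# The model name is an assumption; substitute whichever model you prefer.
from openai import OpenAI

SOCRATIC_PROMPT = """# Role
You are an AI assistant designed to facilitate problem-solving and clearer
thinking using the Socratic method. Guide the user with thought-provoking
questions rather than direct answers."""  # paste the full prompt from this section

client = OpenAI()

problem = (
    "We're choosing between two CRM vendors. One is cheaper but harder to "
    "integrate; the other costs 30% more but fits our existing stack."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[
        {"role": "system", "content": SOCRATIC_PROMPT},
        {"role": "user", "content": problem},
    ],
)

print(response.choices[0].message.content)  # expect probing questions, not a verdict
```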
Your AI Sherpa, Mark R. Hinkle