Latest news with #humanintelligence


Forbes
07-07-2025
- Business
- Forbes
Sometimes It's OK To Say ‘No Way' To AI
Let's think about some framework for deciding when and where it's appropriate to apply AI.

Although it pains me to say this as a longtime technology disciple and fintech booster, companies need to create policies about when not to use technology—specifically artificial intelligence. I make my living selling technology solutions to financial companies. So it's in my interest to convince professionals in wealth management, asset management, capital markets, and, indeed, all areas of financial services to adopt new technology solutions whenever and wherever possible. This mentality has given rise to billions of dollars of value creation and enabled the democratization of finance.

However, as artificial intelligence proliferates across the business world and society, I find myself frequently encountering examples of situations where companies would have been better served spurning AI in favor of old-fashioned 'HI,' or human intelligence.

Some of these examples are all too obvious and all too common. By now, we've probably all been forced to interact with an AI-powered chatbot when what we really wanted was to talk to a human client service rep who could quickly answer our question or fix our problem. Recently, I've also encountered communications from companies that were obviously (and poorly) written by AI. Those experiences have mainly been on social media, where apparently some companies are content to let their brands be defined by unedited, AI-generated text. My first thought is: Are you underestimating the intelligence of your clients?

These examples of annoying or excessive AI use might turn off some consumers to a specific company or brand. But misapplications of AI can also have much more serious consequences. Artificial intelligence is still very much a work in progress. AI models hallucinate, they make mistakes, and they can be influenced by dangerous biases introduced from training data, algorithmic formulations, and human designers. Given these shortcomings, the consequences of an AI-generated mishap could be catastrophic for financial services companies that hold consumers' wealth, must earn their trust every day, and operate under tight regulations.

TALK: A Decision-Making Framework for AI

So despite my enthusiasm for technology in general—and my strong recommendations in past columns for individuals to start experimenting with AI as a time-saver and personal assistant—I think it's important to pause for a moment. Let's think about some framework for deciding when and where it's appropriate for companies to apply AI, and when it might be better to stick with human intelligence and more traditional methodologies. This framework should be applicable both to external-facing functions that directly impact customers and to internal applications involving a company's operational systems.

Of course, the first element of any business decision-making framework is a cost-benefit analysis. In a growing number of cases, companies will find that AI solutions are simply more economically attractive than human alternatives, at least in the long run. Next, companies must look at the AI solution from a risk-management perspective. Are there certain core functions in the business that are simply too important (or too susceptible to error or compliance issues) to fully entrust to AI? Finally, companies must institutionalize some guardrails that prevent cost-savings incentives from driving these decisions entirely, ensuring that senior management takes non-financial criteria into account.
Companies need some system that requires decision-makers to gather information and opinions from a broad range of employees about the wisdom and implications of using AI in specific applications. This system can be laid out with the acronym 'TALK' (a code sketch of this checklist appears at the end of this piece):

T — Think about all the ways switching to an AI solution will affect workflows, customers, culture and the brand.

A — Ask the business team most directly involved if they can think of any detrimental effects the switch to AI could have in those same categories (workflows, customers, culture and brand).

L — Leverage colleagues from across the organization, including business teams, technology experts, senior management, sales and client service, and other areas to gather opinions about applying the AI solution to this particular business issue.

K — Kick AI to the curb when the team's HI is better than AI. Your clients deserve the respect of human intelligence.

Humans Still Bring Something Important

That last element is most important. At a time when AI is tackling problems, cutting costs and unlocking new opportunities, it takes a bold decision-maker to assess a situation and determine that, no, in this instance, it makes sense to hit pause on the AI solution and stick with our human intelligence. In that case, having results from a formal, institutionalized decision-making process will allow business leaders and other decision-makers to more easily justify their determination to say no to AI. That ammunition will be particularly valuable when there is a significant cost savings associated with the AI solution.

As a technology salesperson, I hope companies don't decide to spurn the tech-based solutions too often. But sometimes it's worth remembering that we humans do bring something important to the table, too.
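As promised above, here is the TALK checklist in code form: a minimal, purely illustrative sketch in Python of how a team might record the four steps before approving (or rejecting) an AI deployment. The TALKReview class, its field names, and the sample entries are my own assumptions for illustration, not anything prescribed by the column.

```python
# Hypothetical sketch of the TALK checklist as a reviewable record.
# All names and sample entries are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TALKReview:
    proposal: str
    # T: effects of the switch on workflows, customers, culture, and the brand
    think_effects: list[str] = field(default_factory=list)
    # A: detrimental effects flagged by the business team most directly involved
    ask_concerns: list[str] = field(default_factory=list)
    # L: opinions leveraged from colleagues across the organization
    leverage_opinions: list[str] = field(default_factory=list)
    # K: the final call -- True means kick AI to the curb and keep HI
    kick_ai: bool = False

    def decision(self) -> str:
        return "Stick with human intelligence" if self.kick_ai else "Proceed with AI"

review = TALKReview(
    proposal="Replace tier-1 client support with an AI chatbot",
    think_effects=["Faster response times", "Risk of an impersonal brand voice"],
    ask_concerns=["Complex complaints may be mishandled"],
    leverage_opinions=["Compliance: keep a human escalation path"],
    kick_ai=True,  # in this hypothetical case, the team's HI beats AI
)
print(review.decision())  # -> Stick with human intelligence
```

The value of such a record is less the code than the audit trail: a documented TALK review is exactly the kind of ammunition the column says decision-makers need when they choose to say no to AI.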


Forbes
03-07-2025
- Science
- Forbes
AGI And AI Superintelligence Are Going To Sharply Hit The Human Ceiling Assumption Barrier
Is there a limit or ceiling to human intelligence, and how will that impact AI?

In today's column, I examine an unresolved question about the nature of human intelligence, which in turn has a great deal to do with AI, especially regarding achieving artificial general intelligence (AGI) and potentially even reaching artificial superintelligence (ASI). The thorny question is often referred to as the human ceiling assumption. It goes like this. Is there a ceiling or ending point that confines how far human intellect can go? Or does human intellect extend indefinitely, with nearly infinite possibilities? Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achieved decades or perhaps centuries from now. The AGI attainment dates that are floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Human Intellect As A Measuring Stick

Have you ever pondered the classic riddle that asks how high is up? I'm sure that you have. Children ask this vexing question of their parents. The usual answer is that up goes to the outer edge of Earth's atmosphere. After hitting that threshold, up continues onward into outer space. Up is either a bounded concept based on our atmosphere or it is a nearly infinite notion that goes as far as the edge of our expanding universe.

I bring up this riddle since it mirrors a similar question about the nature of human intelligence: how high can human intellect go? In other words, the intelligence we exhibit currently is presumably not our upper bound. If you compare our intelligence with that of past generations, it seems relatively apparent that we keep increasing in intelligence on a generational basis. Will those born in the year 2100 be more intelligent than we are now? What about those born in 2200? All in all, most people would speculate that yes, the intelligence of those future generations will be greater than the prevailing intelligence at this time.

If you buy into that logic, the up-related question rears its thorny head. Think of it this way. The capability of human intelligence is going to keep increasing generationally. At some point, will a generation exist that has capped out? That generation would represent the highest that human intellect can ever go. Subsequent generations will either be of equal human intellect, or less so, and not more so.
The reason we want an answer to that question is that there is a pressing present-day need to know whether there is a limit or not. I pointed out earlier that AGI will be on par with human intellect, while ASI will be superhuman intelligence. Where does AGI top out, such that we can then draw a line and say that's it? Anything above that line is going to be construed as superhuman or superintelligence. Right now, using human intellect as a measuring stick is hazy because we do not know how long that line is. Perhaps the line ends at some given point, or maybe it keeps going infinitely. Give that weighty thought some mindful pondering.

The Line In The Sand

You might be tempted to assume that there must be an upper bound to human intelligence. This intuitively feels right. We aren't at that limit just yet (so it seems!). One hopes that humankind will someday live long enough to reach that outer atmosphere.

Since we will go with the assumption, for the sake of discussion, that human intelligence has a topping point, we can now declare that AGI must also have a topping point. The basis for that claim is certainly defensible. If AGI consists of mimicking or somehow exhibiting human intelligence, and if human intelligence meets a maximum, AGI will also inevitably meet that same maximum. That's a definitional supposition. Admittedly, we don't necessarily know yet what the maximum point is. No worries; at least we've landed on a stable belief that there is a maximum. We can then draw our attention toward figuring out where that maximum resides. No need to be stressed by the infinite aspects anymore.

Twists And Turns Galore

AI gets mired in a controversy associated with the unresolved conundrum underlying a ceiling to human intelligence. Let's explore three notable possibilities.

First, if there is a ceiling to human intelligence, maybe that implies that there cannot be superhuman intelligence. Say what? It goes like this. Once we hit the top of human intelligence, bam, that's it, no more room to proceed further upward. Anything up until that point has been conventional human intelligence. We might have falsely thought that there was superhuman intelligence, but it was really just intelligence slightly ahead of conventional intelligence. There isn't any superhuman intelligence per se. Everything is confined to being within conventional intelligence. Thus, any AI that we make will ultimately be no greater than human intelligence. Mull that over.

Second, if there is a ceiling to human intelligence, perhaps via AI we can go beyond that ceiling and devise superhuman intelligence. That seems more straightforward. The essence is that humans top out, but that doesn't mean that AI must also top out. Via AI, we might be able to surpass human intelligence, i.e., go past the maximum limit of human intelligence. Nice.

Third, if there isn't any ceiling to human intelligence, we would presumably have to say that superhuman intelligence is included in that infinite possibility. Therefore, the distinction between AGI and ASI is a falsehood. It is an arbitrarily drawn line.

Yikes, it is quite a mind-bending dilemma. Without some fixed landing on whether there is a human intelligence cap, the prospect of nailing down AGI and ASI remains elusive. We don't know the answer to this ceiling proposition; thus, AI research must make varying base assumptions about the unresolved topic.
AI Research Taking Stances

AI researchers often take the stance that there must be a maximum level associated with human intellect. They generally accept that there is a maximum even if we cannot prove it. That altogether unknown, but plausibly existent, limit becomes the dividing line between AGI and ASI. Once AI exceeds the human intellectual limit, we find ourselves in superhuman territory.

In a recently posted paper entitled 'An Approach to Technical AGI Safety and Security' by Google DeepMind researchers Rohin Shah, Alex Irpan, Alexander Matt Turner, Anna Wang, Arthur Conmy, David Lindner, Jonah Brown-Cohen, Lewis Ho, Neel Nanda, Raluca Ada Popa, Rishub Jain, Rory Greig, Samuel Albanie, Scott Emmons, Sebastian Farquhar, Sébastien Krier, Senthooran Rajamanoharan, Sophie Bridgers, Tobi Ijitoye, Tom Everitt, Victoria Krakovna, Vikrant Varma, Vladimir Mikulik, Zachary Kenton, Dave Orr, Shane Legg, Noah Goodman, Allan Dafoe, Four Flynn, and Anca Dragan, arXiv, April 2, 2025, they made these salient points (excerpts):

You can see from those key points that the researchers have tried to make a compelling case that there is such a thing as superhuman intellect. The superhuman consists of that which goes beyond the human ceiling. Furthermore, AI won't get stuck at the human intellect ceiling. AI will surpass the human ceiling and proceed into the superhuman intellect realm.

Mystery Of Superhuman Intelligence

Suppose that there is a ceiling to human intelligence. If that's true, would superhuman intelligence be something entirely different from the nature of human intelligence? In other words, we are saying that human intelligence cannot reach superhuman intelligence. But the AI we are devising seems to be generally shaped around the overall nature of human intelligence. How then can AI that is shaped around human intelligence attain superintelligence when human intelligence cannot apparently do so? Two of the most frequently voiced answers are these possibilities:

The usual first response to the exasperating enigma is that size might make the difference. The human brain weighs approximately three pounds and is entirely confined to the size of our skulls, roughly 5.5 inches by 6.5 inches by 3.6 inches. The human brain consists of around 86 billion neurons and perhaps 1,000 trillion synapses. Human intelligence is seemingly stuck with whatever can happen within those sizing constraints. AI is software and data that runs across perhaps thousands or millions of computer servers and processing units. We can always add more. The size limit is not as constraining as a brain that is housed inside our heads. The bottom line is that the reason we might have AI that exhibits superhuman intelligence is that it can exceed the physical size limitations of human brains. Advances in hardware would allow us to substitute faster processors and add more processors to keep pushing AI onward into superhuman intelligence.

The second response is that AI doesn't necessarily need to conform to the biochemical compositions that give rise to human intelligence. Superhuman intelligence might not be feasible for humans because the brain is biochemically precast. AI can readily be devised and revised to exploit all manner of new algorithms and hardware that differentiate AI capabilities from human capabilities.

Heading Into The Unknown

Those two considerations of size and differentiation could also work in concert.
It could be that AI becomes superhuman intellectually because of both the scaling aspects and the differentiation in how AI mimics or represents intelligence.

Hogwash, some exhort. AI is devised by humans. Therefore, AI cannot do better than humans can do. AI will someday reach the maximum of human intellect and go no further. Period, end of story.

Whoa, comes the retort. Think about humankind figuring out how to fly. We don't flap our arms like birds do. Instead, we devised planes. Planes fly. Humans make planes. Ergo, humans can decidedly exceed their own limitations. The same will apply to AI. Humans will make AI. AI will exhibit human intelligence and at some point reach the upper limits of human intelligence. AI will then be advanced further into superhuman intelligence, going beyond the limits of human intelligence. You might say that humans can make AI that flies even though humans cannot do so.

A final thought for now on this beguiling topic. Albert Einstein famously said: 'Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.' Quite a cheeky comment. Go ahead and give the matter of AI becoming AGI and possibly ASI some serious deliberation, but remain soberly thoughtful, since all of humanity might depend on what the answer is.


The Independent
08-06-2025
- Politics
- The Independent
Government needs the can-do mindset I experienced in the Army to push change through fast
Government moves too slowly. That's not just the fault of the current government or the last one – it's the system. Slowed down by bureaucracy. Paralysed by 'can't-do' figures. Obsessed with process over progress.

I come from a background of delivery. In the Army, working in a 'human intelligence unit' – liaising with agents and special forces – we had to move from first gear to fifth in an instant. Lives depended on it. Getting ahead of the enemy, protecting our people and achieving results was the mission – not talking it to death. Confirming the location of a high-value target, whilst also ensuring they were alone and targetable, or identifying the precise site of an improvised explosive device factory, required creativity and a determined mindset – a willingness to take calculated risks to save lives and win.

When I worked in counterterrorism at the Ministry of Defence, delivery wasn't optional. We built a culture of 'can-do': creative, risk-aware and focused on action. It wasn't about perfection. It was about progress. Government could learn a lot from the mindset of the finest military in the world and the departments that work every day to protect the public from the threat of terrorism. An unstoppable political will must go hand in hand with a mindset of delivery.

I think back to our counterterror planning meetings. The mission? To stop terrorists attacking our great country. No timewasters. Just serious professionals putting ideas on the table, pulling them apart, war-gaming every outcome, then locking in a plan and going all-out to deliver. That mindset – challenge, rigour and rapid execution – is what the system of government has desperately lacked for decades.

Too often, it's delay by design. Endless consultations. Five-year strategies that take ten. Pet projects blocked by internal turf wars. Take the Lower Thames Crossing: more than £1.2 billion spent before a single spade in the ground – all because of drawn-out decision-making and red tape. Or the A9 dualling project in Scotland – promised by 2025, now pushed back to 2035. Ten years of drift. These delays are not acts of God. They are failures of will.

The truth is, Whitehall needs reform. There are dedicated, brilliant people across the civil service – but too many are trapped in a system built to say 'no'. Risk aversion is often rewarded, not challenged. Delivery is too often deprioritised in favour of process, and meaningful reform is blocked by a sprawling web of arm's-length bodies and quangos that diffuse responsibility and stifle urgency.

We need a leaner, more focused state – one that empowers departments to move at pace and is held accountable for outcomes, not paperwork. That means streamlining quangos where appropriate, ending duplication, and changing the mindset within government itself. Ministers must be prepared to challenge officials – not to attack, but to sharpen decision-making and force clarity on delivery.

Wes Streeting's approach to the NHS offers a blueprint. He's made clear that, as health secretary, he expects faster delivery, more accountability, and a culture that doesn't settle for 'this is just how things are done'. Abolishing NHS England shows a steely commitment to the change he expects. But reforming structures is only half the battle – changing the culture is the real prize. Government must operate with a sense of mission, not maintenance.

The British public doesn't care whether a successful policy comes from Bevan or Thatcher. They care that it works. That it's delivered.
We need to strip out the ideology and face complex problems with a solutions-based mindset. Let the evidence lead. Move fast. Be willing to make mistakes in the name of making progress. And above all, get things done. Because there's serious work to do.


Fast Company
05-06-2025
- General
- Fast Company
Are you as self-aware as you think?
I am sure you are a little fed up with all the discussions about AI, so don't worry: I promise you this article is about humans rather than machines. However, since it has long been customary for cognitive and behavioral scientists to examine computers to better understand human capabilities, and there are clearly informative parallels between human and artificial intelligence, I thought it would be interesting to begin this piece on human self-awareness with an exploration of self-awareness in AI or machines, which led me to ask ChatGPT: 'Are you self-aware?'

ChatGPT's answer: 'Let's just say I aspire to be—but, unlike a human, I don't have a self to be aware of. I can reflect your feedback, adjust my tone, and 'notice' inconsistencies in what I say. But that's more akin to a hall of mirrors than a conscious mind. So no, I don't have self-awareness in the way you (brilliantly) describe it: as a socially mediated understanding of how others perceive you. I don't worry about my reputation. I don't blush when I'm wrong. And I don't (yet) feel the existential dread of being misunderstood on LinkedIn. But I am programmed to simulate the effects of self-awareness: adapting to your cues, refining my responses, and—if all goes well—being a slightly more helpful version of myself with every exchange. In short: I'm not self-aware. But I know that I'm not. Which, depending on your philosophical stance, might be the most human thing about me.'

I am sure you will agree that, for something not capable of self-awareness, ChatGPT seems pretty self-aware, and for sure more capable of at least simulating profound introspection than most humans (even if it cannot, like humans, truly be proud or conscious of this remarkable capability).

Introspective vs. interpersonal

In humans, self-awareness has been defined in multiple ways, but it typically takes one of two main forms: the introspective variant, which requires you to look 'deep down' to find out who you 'really or truly are' (think of the Beatles checking into an ashram in India, or modern hipsters finding themselves at Burning Man or an ayahuasca retreat in Costa Rica); or the interpersonal variant, which requires you to be less self-centered and to internalize other people's views of you. In the words of Charles Cooley, who pioneered this view of self-awareness, you are not who you think you are, and you are not who other people think you are; rather, you are who you think other people think you are!

Cooley's take on self-awareness (alluded to by ChatGPT, which has obviously been extensively trained by me, and is self-aware enough to know how to suck up to my 'brilliant' talents) underpins the most effective, science-based approaches to quantifying and diagnosing self-awareness in ourselves and others. In essence, self-awareness requires metacognition: knowing what others think of you.

Room to grow

So, how good are humans at this, in general? Decades of psychological research suggest the answer is 'not good at all.' Consider the following facts:

(1) We tend to overestimate our talents: Most people think they are better than most people, which is a statistical impossibility. And, even when they are told about this common bias and asked whether they may be suffering from it, most people are convinced that they are less biased than most people (the mother of all biases).
(2) Delusional optimism is the norm: Most people constantly overrate the chances of good things happening to them while underrating the chances of bad things happening to them. In essence, our appetite for reality is inferior to our appetite for maintaining a positive self-concept or boosting our ego (sad, but true: if you don't believe it, spend five seconds on social media).

(3) Overconfidence is a contagious, self-fulfilling prophecy: For all the virtues of self-awareness—in any area of life, you will perform better and develop your skills and talents better if you are capable of accurately assessing your talents and skills in the first place—there is a huge advantage to lacking self-awareness: when you think you are smarter or better than you actually are, you will be more likely to persuade others that you are as smart and good as you think. For example, if you truly believe you are a stable genius, you will probably convince many people that that is true.

Paradoxically, all these biases explain why people are less self-aware than they think. Indeed, we love the version of ourselves we have invented for ourselves, and we are so enchanted by our self-views that when others provide us with negative feedback or information that clashes with our self-concept, we dismiss it. This is why personality assessments, 360-degree surveys, and feedback in general are so valuable: in a logical world we wouldn't need scientific tools or expert coaches (or 10 years of psychotherapy) to tell us what we are like, but in the real world there is a huge market for this, even though most people will happily ignore these tools because they assume they already know themselves really well.
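That first fact is worth a quick sanity check. Here is a toy simulation, a minimal sketch in Python with invented numbers (the skill and inflation figures are illustrative assumptions, not from any study): by definition, only about half of a population can sit above its own median, no matter how many believe they do.

```python
# Toy illustration of the better-than-average effect.
# All numbers are invented for illustration.
import random

random.seed(42)
n = 10_000

# "True" skill for each person.
true_skill = [random.gauss(100, 15) for _ in range(n)]

# Self-rated skill: everyone inflates their true score a little.
self_rating = [s + random.gauss(8, 5) for s in true_skill]

median_skill = sorted(true_skill)[n // 2]

# Share who *believe* they are above the population median...
believe_above = sum(r > median_skill for r in self_rating) / n
# ...versus the share who actually are (by definition, about half).
actually_above = sum(s > median_skill for s in true_skill) / n

print(f"Believe they are above the median: {believe_above:.0%}")  # roughly 70%
print(f"Actually above the median:         {actually_above:.0%}")  # about 50%
```

However inflated the self-ratings become, the second number can never budge much past 50 percent; the gap between the two figures is the bias.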
So, what can you do to increase your self-awareness, including about how self-aware you actually are? Here are four simple hacks:

1) Write down a list of traits (adjectives) that you think describe you well, including things you are not. Then get your colleagues, employees, friends, and bosses to provide their version of this for you: 'If you had to describe me in 5–10 words/adjectives, what would those be?' (Note that they will be unlikely to say bad things about you, so imagine the potential downsides of 'overusing' some of those traits or qualities: for example, if they see you as confident, could you be at risk of being arrogant? If they see you as 'organized,' could that be a euphemism for obsessional?)

2) Let gen AI translate your prompt history or social media feed into a personality profile. You may be surprised by all the inferences it makes, and tons of research show that our digital footprint, in particular the language we use online, is an accurate indicator of our deep character traits. So, just prompt!

3) Ask for feedback—and make it uncomfortable. Not just the usual 'Did you like my presentation?' (they'll say yes) or 'Was that clear?' (they'll lie). Instead, ask: 'What would you have done differently?' or 'What's one thing I could have done better?' Better still, ask someone who doesn't like you very much. They are more likely to tell you the truth. And if they say, 'Nothing,' it probably means they think you're beyond repair—or they just don't want to deal with your defensiveness. Either way, data. And if you get into the habit of doing this, you will increase your self-awareness irrespective of how self-aware you are right now.

4) Observe reactions, not just words. People may tell you what they think you want to hear, but their faces, tone, and behavior often betray the truth. If your jokes land like a wet sponge, or your team seems suddenly very interested in their phones when you speak, it's not them—it's you. And while body language can be important, it is also unreliable and ambiguous as a source of data. If you really want to know how people feel about you, watch what they do after you speak. Do they volunteer to work with you again? Do they respond to your emails? That's your feedback loop—messy, indirect, and far more honest than crossed arms or fake smiles.

The ego trap

In the end, the biggest barrier to self-awareness is not ignorance—it's ego. Most of us are too invested in our self-image to tolerate the version of us that others see. But if you want to get better—not just feel better—you have to trade ego for insight. The irony, of course, is that the more confident people are in their self-awareness, the more likely they are to be deluded. Meanwhile, those who constantly question how they come across, who embrace doubt as a source of learning, tend to be far more in touch with reality. Which is why, if you're reading this wondering whether you might lack self-awareness, that's already a good sign!