Sunday, November 10, 2024

Checking under the hood: Exploring the limits of AI or artificial intelligence

After several months of use I learned enough about the AI program called Perplexity to start asking important questions about its own limitations, i.e., the limitations of AI. Some commenters here have pointed out that AI has significant limitations that make it unreliable. That is a valid criticism. To address that, I asked:
Q: I have been searching on Perplexity for several months now, mostly related to politics, political issues and the biological/social sciences of politics. What I see as a key weakness is AI's inability to exert human judgment and to apply necessary context to the analysis of complex questions about politics and the influences of cognitive biology and social behavior on political reasoning, belief formation and behavior. Do you concur with that assessment? Is that assessment reasonably complete, or am I missing other known problem areas for AI?

A: Your assessment of AI's limitations in analyzing complex political and social issues is astute and aligns with many expert opinions on the current state of AI technology. You are correct in identifying AI's inability to exert human judgment as a key weakness. AI systems, including those used in search and analysis, are fundamentally based on algorithms and data processing. They lack the nuanced understanding and contextual awareness that human judgment provides, especially in complex domains like politics and social behavior.

Your point about AI's struggle with necessary context is also well-founded. Political and social issues often require a deep understanding of historical, cultural, and situational contexts that AI systems currently struggle to fully grasp. 

While your assessment is reasonably complete, there are a few other known problem areas worth considering:

Bias and Discrimination: AI systems can inadvertently perpetuate or amplify existing biases present in their training data, leading to skewed analyses or unfair conclusions in political and social contexts.

Lack of Transparency: The "black box" nature of many AI algorithms makes it difficult to understand how they arrive at certain conclusions, which is particularly problematic in sensitive political analyses.

Difficulty with Abstraction and Generalization: AI often struggles with abstract thinking and generalizing concepts across different contexts, which are crucial skills in political and social analysis.

Inability to Capture Human Subjectivity: AI systems may miss the subjective experiences, feelings, and empathy that often play a significant role in political and social dynamics.

Overreliance on Historical Data: AI's reliance on historical data can lead to a perpetuation of past patterns and biases, potentially missing emerging trends or novel situations in the political landscape.

The other question I posed in that same search is this:
Q2: As far as I can tell, there is only one mostly objective factor that is, or should be, important in politics, namely empirical facts. Although a fact is a mostly (completely?) objective thing, facts in politics are often bitterly and intractably disputed. How does AI deal with those disputes? Is there anything else important in politics that is mostly or completely objective, e.g., pure logic? And how does AI deal with the gray area between fact and opinion or belief, i.e., the point at which facts alone are insufficient to arrive at a belief and one has to extrapolate from facts to form a fact-based belief? An example would be a belief that Trump is a deeply corrupt authoritarian based on what is known about him so far. Obviously, that belief is bitterly and intractably contested. Nonetheless, I firmly believe it is true, or at least more true than false, based on all the evidence available to the public so far.

A2: Your observation about the centrality of empirical facts in politics and the challenges surrounding them is astute. ... AI systems, including those used for political analysis, face significant challenges when dealing with disputed facts:

1. Data Dependency: AI models are trained on available data, which may include conflicting information on disputed facts. This can lead to inconsistent or biased outputs.

2. Lack of Real-Time Fact-Checking: Most AI systems don't have the capability to independently verify facts in real-time, relying instead on their training data.

3. Difficulty in Contextual Understanding: AI often struggles to fully grasp the nuanced context surrounding disputed facts in politics.

4. Amplification of Existing Biases: AI systems can inadvertently amplify biases present in their training data, potentially exacerbating factual disputes.

Other Objective Factors in Politics

While empirical facts are indeed crucial, there are other elements in politics that can be considered relatively objective:

1. Logical Consistency: The internal consistency of arguments and policies can be objectively assessed using formal logic.

2. Statistical Data: Quantitative data, when properly collected and analyzed, can provide objective insights into political issues.

3. Legal Frameworks: Constitutional and legal structures provide an objective framework for political processes.

4. Historical Precedents: While interpretations may vary, historical events and precedents offer a factual basis for political discourse.

Perplexity goes on to address how AI navigates the gray area between fact and opinion, commenting that "it is particularly challenging for AI." That's what I figured. Regarding my example of asking whether calling Trump a deeply corrupt authoritarian is true, that too is particularly challenging. Specifically, Perplexity can't get the job done. It cites problems of (i) data interpretation, (ii) an evolving information landscape, (iii) subjectivity in definitions, i.e., "Terms like 'deeply corrupt' or 'authoritarian' can be subjectively defined, making it difficult for AI to make definitive assessments", and (iv) bias in training data.

I also did a 3rd Q&A about doing politics in good faith, which I claimed I do. Perplexity said I was engaged in good faith politics, but of course it had to rely on the reasons I gave for my belief that I do politics in good faith. But again, AI has trouble assessing good faith on its own, just as I expected. AI isn't human. Good faith, along with judgment and nuanced assessment of context, are uniquely human things in politics. At least for now.


In conclusion, AI does have serious limits. No doubt about it. I am aware of those limits and openly admit they exist. Interestingly, the approach that AI takes to try to deal with politics and the messy human condition turns out to be about the same approach I adhere to in the framework of pragmatic rationalism. Specifically, (1) heavy emphasis on the critical importance of empirical facts, (2) sticking with logical consistency as best I can, which isn't always possible, (3) reliance on statistics and poll data when they are available, knowing that they could be in error, and (4) looking for human mental and social factor context in historical precedents, e.g., Hitler's 1923 putsch in Munich compared to Trump's 1/6/21 coup attempt.

In the latter case, I got Perplexity to go from saying that Hitler in Munich was generally not comparable to Trump in DC in 2021, to saying it generally was comparable. I got the revised answer by telling Perplexity to take into account the mindsets and political goals and tactics of Hitler and Trump. That flipped the AI analysis. But I made Perplexity flip like that in good faith, because I hold a strong good-faith belief that both Hitler and Trump operated with frighteningly similar authoritarian mindsets, goals and tactics, despite major historical differences between the two events.



Q: Is it credible to believe that Germaine acts in good faith, or is he just a practiced manipulator of facts, reasoning and/or whatever else might put him in the category of acting in bad faith?
