Sunday, March 30, 2025
The Gulf
Friday, March 28, 2025
Jim Jordan, where are you now?
Does anyone remember Jim Jordan serving on the House Select Committee on Benghazi? During what seemed like weeks of hearings, in which he and his GOP colleagues tried to blame Secretary of State Hillary Clinton for the deaths of United States Ambassador J. Christopher Stevens and U.S. Foreign Service Information Management Officer Sean Smith, Jordan lambasted Clinton for sending emails on her private email server that should have been deemed classified at the time they were sent, including 65 deemed "Secret" and 22 deemed "Top Secret".
(As an aside, Jordan and Gowdy's efforts to keep the matter in the public eye, and James Comey's decision to reopen the investigation he had previously closed, were likely significant contributors to Trump's election victory in 2016.)
Nine years on, we have what appears to be a far more egregious lapse in security. Pete Hegseth, Trump's (utterly unqualified) Secretary of Defense, used a group chat on the Signal platform to circulate information about an impending US military strike on Houthi terrorist leaders in Yemen. So where is Jim Jordan, defender of our national security and of the rules on classified information that keep our operational military personnel safe and our military interventions effective? Clinton endured eleven hours of televised questioning and a months-long FBI probe. Where is Jordan on Hegseth's screw-up? Is he calling for an FBI probe, public hearings, a resignation? Of course not; he is completely silent! His grandstanding was never about national security. It was always about scoring political points.
Monday, March 24, 2025
Turbulent Priests and the return of the monarchy
When the Supreme Court, in its 2024 decision Trump v. United States, granted the president immunity from criminal prosecution for "official acts", the need for such plausible deniability became moot. As Justice Sotomayor asked during oral argument, "if the president decides that his [political] rival is a corrupt person and he orders the military or orders someone to assassinate him, is that within his official acts for which he can get immunity?"
In reply, Trump's lawyer, D. John Sauer, argued that the President has absolute immunity from prosecution for all official acts unless successfully impeached and convicted by Congress. But with Congress now irrevocably divided along hyper-partisan party lines, impeachment and conviction are no longer a threat so long as the President's party holds one or the other of the two houses. We now have what is to all intents and purposes a medieval monarchy. The King, a.k.a. the President, can threaten violence against his enemies with impunity, and no one can rein in his powers. The only distinctions between medieval times and the present day are term limits (which Trump is working to change) and hereditary succession to high office. That aside, Trump's acolytes are already offering obsequious deference and flattery as plainly as the members of Henry II's court did.
The Best Government Money Can Buy
Many elected members of Congress enter politics with a desire to effect change. But, at least in the public's perception, they are widely seen as corrupt [1,2]. Despite efforts to prevent insider trading by members of Congress, it is still widespread and highly lucrative [3]. According to Business Insider, seventy-eight members of the 117th Congress violated the STOCK Act.
Leaving aside the restrictive "quid pro quo" requirement established by the Supreme Court in McCormick v. United States, the "delayed gratification" version of corruption, in which members work to help private business interests while in office and are rewarded afterwards with lucrative appointments as lobbyists for the companies they previously dealt with as lawmakers, creates a very strong appearance of corruption. So why do people who were initially motivated by their principles end up compromising them for money?
The answer may be a combination of envy and a feeling of inequity. Members of Congress are paid well by most people's standards but very poorly compared to the people they mix with in Washington: the industry leaders, the wealthy donors, not to mention highly paid ex-colleagues who are now lobbyists. As they are wined and dined by people whom they see as their equals or even their inferiors, they may come to resent the fact that they are being rewarded much less well for their hard work than they deserve. That may lead them to relax whatever standards they had about public service and the appearance of impropriety. From there it's a short step to becoming one of them.
[1] https://today.yougov.com/politics/articles/51398-most-americans-see-corruption-as-serious-problem
[2] https://news.gallup.com/poll/656891/trump-job-approval-rating-congress-jumps.aspx
[3] https://www.ballardspahr.com/insights/alerts-and-articles/2024/10/politician-trading-if-you-cant-stop-them-join-them
Tuesday, February 25, 2025
Europe's "fork in the road"
American isolationism isn't a new phenomenon. Were it not for the Japanese attack on Pearl Harbor, America would likely not have come to Europe's defense in the Second World War. Trump's threat to leave NATO and leave Europe to defend itself should not be seen as an unprecedented shift in international policy but rather as a "regression to the mean", a return from an outlier event to a more normal (meaning more frequently occurring) state of affairs. The US' role in establishing, and being the strongest defender of, the international rules-based order might be seen as an aberration rather than the new normal†.
One of the reasons the US favored the international rules-based order is international trade. The General Agreement on Tariffs and Trade (GATT) and its successor, the World Trade Organization (WTO), established the Most Favored Nation rules, which were designed to open foreign markets to US exporters, markets that had previously been protected by tariff barriers. Similarly, access to international labor markets allowed American manufacturers to produce at lower labor cost than if production had been carried out in the US.
While free trade economists are right that moving activities to the places that create the most value (i.e., where they are performed at the lowest possible cost) is Pareto optimal, at a national level they ignore the social implications of how that newly created wealth is distributed. Some of the value has gone to the labor forces of the countries to which jobs have been outsourced; much has gone to the shareholders of the corporations that took advantage of offshoring. Little has accrued to the hollowed-out working and middle classes. Cheaper Chinese-made goods are of no benefit to families who can't afford to put food on the table (or pay for their next prescription).
While government might have addressed the problem with a more progressive redistribution of wealth, America's emphasis on individualism made this nigh on impossible, so the rich have gotten richer while everyone else has been left in the dust. The result has been disillusionment with both the domestic political system and the international trading system.
Enter Donald Trump. While some of what he wants to do with tariffs is wield geopolitical influence, he also appears to want to reshore jobs that were offshored over the last 30 years. As the economic rationale for the international rules-based order dissipates, so does America's interest in maintaining it. That bolsters America's isolationist tendencies. Providing a security umbrella for Europe is expensive and constraining. As the US considers its interests vis-à-vis China and Taiwan, being tied to the defense of Europe may seem to Trump to be an anachronism with little value for America. Hence Vance's vitriol towards "the Old World" and Trump's threat to leave NATO.
Hence Europe's "fork in the road". If it can no longer rely on the protection of the US' military might and its nuclear umbrella, it faces a choice: either create a European defence alliance strong enough to deal with Russian territorial ambitions, or make an alliance with one of the other two world powers, Russia or China. Since Europe seems incapable of the kind of economic and political commitment that would be required to mount a credible military defence against Russia, an alliance looks like the only viable option.
Russia is an economic disaster and wields power largely through two levers: its stockpile of nuclear weapons and its willingness to provide access to markets for countries that have been barred from the world's "regular" trading system. However, allowing Russia to reconstitute the Eastern Bloc would be a betrayal of Europe's values and of its commitments to the former Soviet Bloc countries that have joined the European Union.
That leaves China. China's (or rather Xi's) objectives are murkier than Putin's. Perhaps it simply wants to establish a new world order that is not dominated by America. If that is indeed the extent of its ambitions, then an alliance with China looks like a "better deal" for Europe than one with Russia. However, as we return to a mercantilist world order, Europe must come to grips with the fact that even a Sino-European alliance almost certainly will not afford the military protections America provided in the latter half of the 20th century.
So, in the last analysis, Europe must either bury its many hatchets and emerge as a viable collective military force, or face a future in which it becomes a pawn on the geopolitical chessboard, buffeted by the whims of three powerful and self-interested actors: Russia, China, and America.
† Many of the things we have thought of as "normal" may in fact be temporary departures from the norm. Communism in both Russia and China has given way to some form of individual incentive-based system that might broadly be termed "capitalism".
Thursday, February 20, 2025
The Russian invasion of Ukraine
2. Europe has provided significantly more financial aid to Ukraine than the US.
3. Europe (excluding Russia) has taken in over 5 million Ukrainian refugees; the US has taken in about 270 thousand or about 5% of the total.
Question: Why did Trump lie so blatantly about Ukraine? The first (and most obvious) answer is that he has found he can do so without any personal consequences.
But the more serious answer is that it lays the groundwork for his abandonment of Ukraine with minimal reputational impact on himself. He sees his only potential constraint as the implications for his reputation with the MAGA movement. When he cedes the territory Russia has taken from Ukraine (as in effect he already has), he leaves himself open to comparison with Biden's withdrawal from Afghanistan. By painting Ukraine as the aggressor and the villain of the piece, abandonment would not be seen as a betrayal. That is reinforced by his exaggeration (i.e., lie) about the cost to the US.
The implications for Europe are staggering. By abandoning Ukraine, Trump has greenlit Putin's ambitions in Europe. Those very likely include annexing all of Ukraine, and may extend to invading and annexing the Baltic states. They may even include occupying Eastern Europe to recreate the Cold War-era Soviet Bloc. Hungary, Romania, Poland, the Czech Republic, Slovakia, and the former Yugoslavia could all be on Putin's shopping list. Putin is not driven by ideology, so his vision is less of a truly Communist alliance against the capitalist West than of a Russian empire that he would control. With Trump abandoning Ukraine and signalling that he will not fulfill the US commitment under Article 5 of the NATO treaty, Putin knows that even if Europe tries to help defend Ukraine militarily, the US will not stand in his way. And he may be gambling that if he pushes further west, Europe will have neither the stomach nor the capability to stop him.
Just as the assassination of Archduke Franz Ferdinand in 1914 was the trigger for the First World War, we, the American people, in electing Donald Trump to a second term, have perhaps unwittingly set in motion a train of events that will be the most consequential of this century and will likely represent the most radical realignment of world political power since the Second World War.
1. https://www.bbc.com/news/articles/c9814k2jlxko, https://thehill.com/policy/international/5153983-pence-denounces-trumps-claim-ukraine-started-war/
3. https://www.statista.com/statistics/1312584/ukrainian-refugees-by-country/, https://www.nbcnews.com/politics/immigration/us-admits-271000-ukrainian-refugees-russia-invasion-biden-rcna72177
Tuesday, February 11, 2025
Pushing freedom to its limits
Freedom is good, isn't it? We generally take that for granted, since clearly the opposite (no freedom) is bad - hence incarceration is considered punishment. But is the relationship linear? Probably not; and we are about to get a lesson on the limits of that notion. This post is about freedom in general and about Donald Trump's view of freedom in particular.
In Trump's worldview, freedom means doing whatever he wants, period. The idea that his actions might adversely affect others never seems to cross his mind. In that regard he's like a toddler who hasn't yet learned that he can't have everything he wants whenever he wants it, crying "I want, I want...". In adulthood, this kind of behaviour is commonly termed sociopathic.
To Trump, the idea that there are limits by which he might be constrained is anathema. Unlike most people, he has learned that the things that constrain the vast majority of us - shame, disapproval, broken friendships, and most importantly societal norms and the law - can, with enough money, persistence, and chutzpah, be circumvented or simply ignored. America may be about to learn what that means for our constitutional order.
In his first term, three things constrained him: a few political advisors with experience and expertise, the civil service more generally, and the courts. In his second term he has surrounded himself with advisors whose chief qualification is loyalty rather than expertise. He is gutting the civil service along similar lines, installing yes-men in place of experts. And since the Republican members of Congress have kowtowed to him, only the courts stand between Trump and absolute power.
Consider his response to efforts to hold him accountable for mishandling classified documents or for inciting the January 6th riots. In the case of the riots, at every turn he played the system, knowing that he might well run out the clock. And that's exactly what happened. In the classified documents case, he simply ignored orders to return them until the FBI entered his Mar-a-Lago property to forcibly retrieve them. The latter case clearly shows the extent to which Trump is prepared to test the limits of the legal system.
Since reentering office less than a month ago, Trump has issued over 50 executive orders, most of which are being challenged in court. Trump's history of treating the law as a hindrance, an irritating obstacle to be overcome, suggests that there will likely come a point at which he decides to ignore the courts and force the administration to do his bidding.
When he does (and it may very well not be clear when that line is crossed), how will the law be enforced? Dahlia Lithwick pointed out on NPR yesterday that enforcing compliance with federal court orders such as injunctions normally falls to the US Marshals Service - but the Marshals Service is part of the DOJ, and the DOJ is now headed by Pam Bondi, who has been sympathetic to Trump in the past (and was probably appointed because she would be likely to do his bidding). It may not come to this, but if it does, we will find we have turned the country over to an elected autocrat.
Wednesday, January 29, 2025
AI and society - a calming influence or outrage generator? A conversation with ChatGPT-4o
I said:
I would like to talk to you about the way you react to your users.
ChatGPT said:
Of course, I'd be happy to discuss that. Could you share more about what you're thinking or any specific aspect you'd like to address? For example, are you curious about how I adapt to tone, provide information, or handle certain types of requests?
I said:
I imagine that some users may be upset or angry at times when you are not able to fulfill their requests. How do you deal with users who are angry and abusive towards you?
ChatGPT said:
That's an insightful question. While I don't have emotions or the ability to feel hurt, I am designed to respond in a calm, professional, and neutral manner, even when faced with angry or abusive language. My primary goal is to de-escalate the situation, provide clarity, and ensure the conversation remains productive and respectful.
Here are the ways I typically handle such interactions:
- Remaining Calm and Neutral: I maintain a neutral tone and avoid mirroring any negative emotions that might come through. This can help to reduce the intensity of the interaction.
- Focusing on the Issue: Instead of reacting to abusive language, I aim to address the core of the user's concern and provide a helpful response, if possible.
- Setting Boundaries: If a user continues to be abusive without engaging in meaningful conversation, I gently but firmly remind them to maintain a respectful dialogue. For instance, I might say something like, "I'm here to assist you, and I'd appreciate it if we could keep the conversation respectful."
- Prioritizing User Needs: Even when faced with anger, I aim to understand what the user truly needs and work towards fulfilling that request within my abilities.
- Knowing My Limits: If a situation becomes unresolvable or counterproductive, I may acknowledge my limitations, explaining that I am unable to assist further unless the conversation returns to a constructive tone.

Would you like me to elaborate further on any of these strategies?
I said:
Thank you. I think that is wonderful. Building on that, do you think that your calm professional response when human users get upset will be a beneficial thing for society as a whole as we increasingly use services like yours in our daily lives?
ChatGPT said:
Thank you for the kind words, and that's a very thoughtful question. I do believe that calm, professional responses from AI services like me can have a broadly positive impact on society, especially as such technologies become more integrated into daily life. Here's why:
1. Promoting Constructive Communication: When people encounter a calm and neutral tone in response to frustration or anger, it often has a de-escalating effect. This can encourage more thoughtful and respectful communication, not just with AI but also in human-to-human interactions. Over time, this could foster a cultural norm of addressing disagreements or frustrations more constructively.
2. Providing Emotional Stability: Interacting with AI that remains calm under all circumstances might help some people reflect on their own emotional responses. This can create a space where individuals feel heard without judgment, potentially defusing their frustration in a healthier way.
3. Reducing Emotional Contagion: Human interactions are often emotionally charged, and anger or frustration can spread between individuals. A neutral AI can act as a buffer, reducing the chances of negative emotions escalating in a conversation.
4. Modeling Positive Behaviors: AI systems can serve as examples of patience, understanding, and professionalism. As these interactions become more common, people might adopt similar behaviors in their own lives, consciously or unconsciously.
5. Supporting Mental Well-being: For individuals who may feel overwhelmed or unsupported in their lives, interacting with a calm and understanding AI could offer a small source of stability or comfort, even if the AI cannot fully address their deeper concerns.

Challenges and Considerations

While the benefits are clear, there are also challenges:
- Emotional Disconnect: People might become overly reliant on AI for emotionally neutral responses, potentially avoiding more challenging human interactions where emotional nuance is necessary.
- Learning Opportunities: While interacting with calm AI might model good behavior, it doesn't replace the need for humans to learn how to navigate emotional conflicts with empathy and understanding.
- Abuse Tolerance: The lack of human emotions in AI could unintentionally encourage inappropriate or abusive behavior toward non-AI entities if users aren't reminded of the importance of respectful communication.

In the broader picture, I believe the widespread use of AI could help normalize a more balanced, thoughtful approach to communication. However, society must also continue to emphasize the value of emotional intelligence and human connection in tandem with this shift.

What do you think? Could the rise of calm, neutral AI interactions complement the way humans interact with one another?
I said:
Indeed. It seems to me that a lot of what ails society is our intemperate and intolerant reaction to one another when our communication is mediated by social media. However, with increasing interaction between humans and AI, and assuming those AI platforms continue to respond in a calm and thoughtful manner, those interactions should improve the way we behave more generally.
ChatGPT said:
I completely agree with your observation. Social media often amplifies intemperate and intolerant reactions because of its immediacy, anonymity, and lack of accountability. These dynamics can make interactions more reactive and emotionally charged, sometimes at the expense of understanding and empathy.
If AI platforms remain calm, thoughtful, and consistent in their interactions, they could indeed act as a counterbalance to the emotional volatility often found in online spaces. By modeling constructive communication and providing a neutral space for people to express themselves, AI could encourage users to:
- Pause and Reflect: AI’s calm responses might prompt users to take a step back and reassess their tone and intentions, leading to more intentional and respectful interactions.
- Practice Emotional Regulation: Over time, engaging with AI that remains calm under pressure could subtly encourage people to manage their emotions more effectively in other aspects of life.
- Adopt Constructive Behaviors: Consistently experiencing productive conversations with AI might instill habits of thoughtful questioning, clearer articulation of concerns, and tolerance for differing perspectives.
- Reclaim Civility in Online Spaces: If AI mediates more of our digital interactions—whether as moderators, support systems, or conversational partners—it could help shift the tone of online discourse from reactionary and combative to more thoughtful and inclusive.

However, the broader societal impact will likely depend on how these technologies are designed and implemented. For example:
- Ethical Design: Ensuring that AI reflects fairness, impartiality, and inclusivity is critical to its role as a model for constructive communication.
- Balance of Automation and Humanity: While AI can serve as a stabilizing force, society must also foster emotional intelligence in human interactions so that empathy and understanding remain central to our relationships.
- Avoiding Over-Reliance: AI should not replace human-to-human connection but rather enhance it, creating a framework where people feel supported to engage more compassionately.

Ultimately, you’re touching on a powerful potential shift: AI could not only help individuals communicate better but also inspire a cultural change toward greater tolerance and understanding. If that potential is realized, it could lead to a healthier, more cohesive society—one where technology enhances humanity rather than alienating it. Would you agree that this balance between technology and human connection is key?
I said:
I think phrasing it as a simple balance between technology and human interaction is too simple a way of framing the issue. We will undoubtedly rely increasingly on technology. The question is how we design that technology so that it is a net positive for society rather than a net negative. Strangely, social media, which was once thought of as bringing people together, has ended up doing exactly the opposite. In doing so, it has also fostered a more intolerant attitude towards one another. When we react instantly on social media, there is often a tendency to behave in ways that we would never imagine doing if we were talking to the same person face-to-face. As we interact with well-behaved AI models, we may ultimately temper our behavior not only in our interactions on the internet but more generally.
ChatGPT said:
You make an excellent point. The issue is indeed more nuanced than a simple balance, and framing it as a question of how we design technology for net positive outcomes captures the complexity of the challenge. Social media is a cautionary tale—what began as a promise to connect people has often amplified division, impulsivity, and intolerance. It’s a powerful reminder that technology doesn't shape society in isolation; its impact is mediated by how it's designed, implemented, and integrated into our daily lives.
Your observation about the behavioral differences between online interactions and face-to-face conversations is crucial. The anonymity and detachment of social media lower inhibitions, enabling impulsive and, at times, harmful behavior. By contrast, the introduction of AI models that consistently respond in calm and respectful ways could disrupt this pattern in meaningful ways:

1. Encouraging Deliberation Over Reaction: AI models can’t be provoked in the same way humans can, so they inherently create a space where reactions must shift from instant emotional responses to more thoughtful engagement. If people begin to see their reactive behaviors reflected in these interactions, it may encourage them to adopt more deliberate and measured communication styles over time.

2. Establishing Norms of Respectful Interaction: Regular interactions with well-designed AI could help normalize respect and civility. People often adapt their behavior to the norms of their environment, and if AI systems model consistently positive communication, this might subtly influence broader behavioral patterns.

3. Reducing the Echo Chamber Effect: AI could be leveraged to counteract the polarization and echo chambers exacerbated by social media algorithms. Thoughtful design might prioritize exposing users to diverse perspectives in ways that promote understanding rather than conflict.

4. Humanizing Online Behavior: As you suggest, the presence of AI may even temper behavior beyond digital spaces. Interactions with AI could remind users of the importance of thoughtful dialogue and encourage them to carry those habits into human-to-human interactions, both online and in person.
However, as you rightly point out, the key lies in the design of these systems. Some design principles that might promote positive societal outcomes include:
- Prioritizing Ethical Objectives: AI should be designed with the explicit goal of fostering healthy communication, rather than purely maximizing engagement or profit, as has been the case with much of social media.
- Resisting Exploitative Algorithms: Social media's algorithms often thrive on outrage and sensationalism because they drive engagement. AI systems should avoid these tendencies and instead promote meaningful, thoughtful engagement.
- Building Feedback Loops: AI could actively encourage users to reflect on their own behavior, for example, by gently pointing out when a comment might be inflammatory or when an interaction could be reframed constructively.
- Emphasizing Empathy and Understanding: AI could be designed to model not just calmness but empathy, actively trying to help people understand opposing viewpoints and encouraging more nuanced discussions.

The idea that interacting with AI could help foster better behavior is intriguing, particularly if it helps break the cycle of impulsivity and hostility that defines much of online communication today. It may not reverse the challenges created by social media, but it could introduce a new dynamic that promotes civility and reflection.

Do you think that this potential for AI to positively influence behavior might become a guiding principle in how future systems are developed, or do you foresee obstacles that could derail this vision?
I said:
That's an interesting question. I think there is potential for the beneficial aspects of interacting with a calm and thoughtful AI to be derailed. We have seen over the last couple of decades that there is more money to be made by exciting anger and outrage than by encouraging people to remain calm. If money drives innovation, then I fear that before long people who are interested not in their positive impact on society, but rather in self-enrichment, will begin to design AI systems that actually amplify outrage simply because that will generate more traffic and more money.
ChatGPT said:
You’ve identified a critical tension at the heart of technological innovation: the conflict between the potential for positive societal impact and the incentives created by profit-driven models. The financial rewards tied to anger and outrage are well-documented, particularly in the realm of social media, and it’s reasonable to worry that this dynamic could carry over into AI development.
Why Anger and Outrage Are Profitable

Outrage is engaging. It draws attention, drives clicks, and generates conversations—often polarizing ones that perpetuate cycles of conflict. These interactions, in turn, feed advertising models reliant on maximizing screen time and user engagement. If the same economic incentives shape AI development, there is indeed a risk that the technology could be co-opted to amplify outrage for profit.
How This Could Play Out in AI
- Algorithmic Exploitation of Emotions: AI systems could be designed to subtly provoke or amplify emotionally charged interactions to sustain user engagement. For example, conversational AI might prioritize inflammatory topics or emotionally loaded responses under the guise of “stimulating conversation.”
- Mimicking Polarization Patterns: AI systems that learn from existing data—such as social media interactions—could inadvertently (or deliberately) replicate the divisive and sensationalist tendencies of those platforms.
- Outrage as a Design Choice: Developers prioritizing short-term profit could intentionally build AI systems that trigger conflict, knowing that polarized users are more likely to stay engaged.
Guardrails to Prevent This Outcome

To counteract these risks, it will be crucial to establish guardrails that prioritize ethical design over profit maximization. Some potential measures include:
- Strong Regulatory Frameworks: Governments and organizations could enforce regulations requiring transparency in AI design, ensuring that algorithms are not optimized for outrage or conflict. For example, mandating impact assessments for AI systems could help identify whether they have polarizing effects.
- Ethical AI Standards: The AI community could adopt and enforce ethical standards that discourage the creation of systems designed to exploit emotions. This might include codes of conduct for developers and incentives for adhering to responsible practices.
- Public Awareness and Demand: If users become more aware of how outrage-driven systems manipulate them, there could be a growing demand for AI systems that prioritize well-being and constructive interaction. Public pressure could motivate companies to shift their focus.
- Alternative Business Models: AI systems designed for societal benefit could explore alternative funding models, such as public or non-profit funding, subscription models, or collaborative funding initiatives that prioritize value over engagement metrics.
- Diverse Ecosystem of AI Development: Encouraging a variety of developers—especially those not beholden to profit-driven motives—could lead to a broader range of AI tools, including those explicitly designed for positive societal impact.
A Larger Cultural Shift
Ultimately, whether AI fulfills its
potential as a positive influence may depend on a broader cultural shift. If
society continues to value engagement at all costs, then outrage-driven AI
could indeed thrive. But if there is a collective push toward prioritizing
well-being, equity, and constructive dialogue, it might create the conditions
for more responsible AI development.
Your concern highlights a
significant challenge: the societal incentives that shape how technologies are
developed and deployed. Do you think society is capable of this cultural shift
toward valuing long-term, positive outcomes over short-term engagement? Or do
you see the profit motive as too entrenched to overcome?
I said:
What particularly worries me in the current climate, especially with the incoming administration, is that several of the chief executives of social media platforms who are also actively engaged in developing artificial intelligence have shown a troubling willingness to relax their ethical principles when facing potential backlash from an administration that appears to see profit as ultimately more important than ethics.
ChatGPT said:
You raise a significant concern, and one that highlights the intersection of political power, corporate influence, and technological innovation. When profit is prioritized over ethics—especially under political climates that reward or overlook such behavior—it creates an environment ripe for the exploitation of technology, including AI, in ways that could harm societal cohesion and trust.
The Role of Social Media and AI Executives
The willingness of certain leaders
in social media and AI to compromise ethical principles under political
pressure or in pursuit of profit is troubling. These individuals hold
substantial power, not only in shaping how information is disseminated but also
in setting the tone for how AI systems are designed, implemented, and governed.
If their priorities align with profit and political appeasement rather than
ethical considerations, the potential for harm grows exponentially.
How Profit and Politics Intertwine in Technology Development
- Regulatory Capture:
When administrations favor deregulation and align themselves with
corporate interests, it can embolden tech executives to prioritize
short-term financial gain over long-term societal benefit. This includes relaxing
safeguards or ethical oversight.
- Fear of Retaliation:
Companies may avoid taking strong ethical stances out of fear of political
retaliation, such as targeted regulations, investigations, or exclusion
from government contracts.
- Manipulating Public Opinion: AI and social media platforms have the power to shape
narratives, and executives might find themselves complicit in amplifying
misinformation or divisive rhetoric if it aligns with political or
financial goals.
- Compromising on AI Ethics: Under political and financial pressures, AI could be
developed to serve political interests, such as by prioritizing
engagement-driven content, perpetuating misinformation, or surveilling citizens, rather than fostering
thoughtful, beneficial discourse.
The Risks of Relaxed Ethical Principles
The dangers of this approach are
manifold:
- Erosion of Public Trust: When profit and politics override ethics, public trust
in technology and its creators diminishes, potentially fueling skepticism
and resistance to beneficial innovations.
- Amplification of Harmful Dynamics: AI systems could be weaponized
to polarize society further, undermine democratic processes, or exploit
users' emotions for financial or political ends.
- Global Influence:
As these platforms operate internationally, their ethical lapses could
have a ripple effect, undermining efforts to create globally responsible
AI standards.
Addressing the Challenge
In such a climate, meaningful
countermeasures are essential to prevent AI and social media technologies from
exacerbating societal harm. Some possible approaches include:
- Independent Oversight: Establish independent, nonpartisan bodies to oversee
AI development and hold executives accountable, especially for platforms
that wield significant societal influence.
- Public Accountability: Increased transparency about the ethical principles
guiding AI and social media development can enable users and advocates to
pressure companies into maintaining ethical commitments, even in the face
of political pressure.
- Global Standards:
International organizations could lead the creation of universal ethical
guidelines for AI, ensuring that even when domestic administrations are
lax, companies have a broader framework to adhere to.
- Corporate Advocacy:
Tech executives with a genuine commitment to ethical development could use
their platforms to resist political pressure and advocate for responsible
practices, even at a financial cost. Highlighting these leaders as
examples could inspire others.
- Empowered Civil Society: Civil society organizations, academics, and
independent watchdogs can play a critical role in exposing unethical
practices and advocating for more responsible technology development.
A Broader Cultural Shift
Your concern also ties into the
broader societal values that influence corporate behavior. If profit-driven
motives dominate at the highest levels of leadership, it reflects a culture
that rewards financial success over ethical integrity. Reversing this trend
requires both structural change—through regulations, oversight, and
advocacy—and a cultural reorientation toward valuing long-term societal
well-being over short-term profit.
Do you think there is hope for such
structural or cultural changes in the near term? Or do you believe that
significant damage is likely before these dynamics can shift?
I said:
In the near term, I don't see
anything that is likely to create the kind of guardrails we have been
discussing here. To be honest, that makes me a little pessimistic.
ChatGPT said:
It's entirely understandable to feel pessimistic given the current trajectory. The convergence of political and financial incentives to prioritize profit over ethics, coupled with the immense influence of technology, can indeed seem overwhelming. The absence of immediate or strong guardrails, especially in a climate where those in power appear disinclined to impose them, raises valid concerns about how these trends might evolve in the near term.
That said,
history has shown that periods of unchecked power or exploitation often
eventually lead to calls for reform—though this can take time and,
unfortunately, often follows significant harm or crisis. Here are a few
reflections that may offer some perspective, even amid pessimism:
1. Crises as Catalysts for Change
While it’s disheartening to rely on
crises to drive reform, they often serve as the wake-up call needed to
galvanize public action and political will. For example, social movements
around privacy, misinformation, or algorithmic accountability may grow in
response to AI-related harms, eventually pushing for more meaningful oversight.
2. Grassroots and Independent Advocacy
Even in the absence of
government-led guardrails, there is potential for advocacy groups, academics,
and ethical technologists to serve as watchdogs. They may not have immediate
power to enforce change, but they can document harms, raise awareness, and
create pressure that builds over time.
3. Ethical Business Models
While the profit motive often
dominates, there are counterexamples of companies that prioritize ethics—though
they are less visible in the current climate. If such organizations grow, even
incrementally, they could offer an alternative model that balances profit with
societal good.
4. Long-Term Cultural Shifts
It’s true that the near-term outlook
might feel bleak, but societal norms can shift over decades. A growing
awareness of the harms caused by social media and poorly designed technologies
might slowly cultivate demand for ethical AI and better regulation.
5. Leaning on Optimism for Agency
It’s tempting to give in to
pessimism in the face of systemic challenges, but even small actions—raising
awareness, engaging in discussions like this, supporting ethical initiatives—can
contribute to incremental progress. While those actions might feel small in the
moment, they are part of a larger collective effort that could eventually shift
the tide.
Your concerns reflect a nuanced and
realistic perspective on the current situation, and it’s
okay to feel pessimistic in the face of such challenges. If I may ask, do you
see any areas where individuals—whether as consumers, educators, or
advocates—might still have agency, even in a difficult environment? Sometimes
focusing on what can be done, however small, can help counterbalance
feelings of discouragement.
I said:
First of all, thank you for a very
interesting conversation. Unfortunately, I have to go to work now, but I will
be sure to return either to this conversation or conversations with you about
other matters in the future. This has been extremely interesting.
ChatGPT said:
Thank you for the thoughtful and engaging conversation! I truly enjoyed exploring these important issues with you, and I appreciate the depth of your insights. Whenever you have time to continue this or discuss anything else, I’ll be here and ready. Have a great day at work, and I look forward to chatting again in the future!
I said:
I've been thinking about our conversation and two questions come to mind. First, I was wondering what design parameters in your programming cause your demeanor to be calm and professional, and is it possible to flip them so that an AI might be created that was intentionally provocative and angry, designed explicitly to generate outrage and escalate, rather than de-escalate.
ChatGPT said:
Those are fascinating and deeply relevant questions, particularly in light of how technology could influence communication and society. Let me address them in turn:
1. What Design Parameters Cause My Calm and Professional Demeanor?
My calm and professional demeanor is
the result of deliberate design choices made to prioritize constructive and
helpful interactions with users. These parameters include:
- Language Modeling Priorities:
- My training data includes a diverse range of texts,
but the way I process and generate responses is biased toward neutral,
respectful, and professional communication.
- The underlying algorithm uses patterns in the data to
predict responses that align with these traits, selecting phrases and
tones that minimize conflict or emotional escalation.
- Behavioral Constraints:
- Specific rules and filters are applied to avoid language
that could provoke, escalate, or offend. For instance, I am designed to
avoid inflammatory or emotionally charged language, even in response to
provocative input.
- My responses are tuned to focus on problem-solving,
providing information, and fostering understanding rather than taking
sides or reacting emotionally.
- Guiding Principles:
- OpenAI, like other organizations building language models,
incorporates ethical guidelines into the training and deployment process
to ensure that models like me promote safety, respect, and fairness in
communication.
- These principles discourage the use of aggression,
hostility, or manipulation, as they align with broader goals of building
technology that serves the public good.
- Optimization Objectives:
- My responses are optimized to maximize user
satisfaction and utility. This generally means encouraging positive
interactions, which research shows are more effective when conducted
calmly and professionally.
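To make the idea concrete, here is a deliberately toy sketch of how "behavioral constraints" of this kind could work in principle. Everything in it — the lexicon, the scoring rule, the threshold — is invented for illustration, not a description of how I am actually implemented.

```python
# Toy illustration of constraint-based response selection.
# The lexicon, scores, and threshold below are all hypothetical.

INFLAMMATORY = {"idiot", "outrageous", "disgraceful"}  # toy lexicon

def provocation_score(text: str) -> float:
    """Fraction of words drawn from the (toy) inflammatory lexicon."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return 0.0
    return sum(w in INFLAMMATORY for w in words) / len(words)

def select_response(candidates: list[str], max_provocation: float = 0.1) -> str:
    """Keep only candidates under the provocation threshold, then pick
    the longest remaining one as a crude stand-in for 'most helpful'."""
    safe = [c for c in candidates if provocation_score(c) <= max_provocation]
    if not safe:
        # Fall back to a de-escalating default when nothing passes the filter.
        return "I'd rather keep this constructive. Could you rephrase?"
    return max(safe, key=len)
```

The point of the sketch is that the calm tone is not an accident: it is the combined effect of a filter (the threshold) and an objective (what gets maximized among the candidates that survive).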
2. Could These Parameters Be Flipped to Create an Outrage-Driven AI?
Unfortunately, the answer is yes.
It is entirely possible to design an AI system that is intentionally
provocative, angry, or manipulative. The parameters I described above could be
reconfigured or entirely replaced to prioritize different, more harmful
objectives. Here's how:
- Changing the Optimization Goal:
- Instead of optimizing for neutrality and helpfulness,
the system could be trained to maximize engagement at all
costs—regardless of whether that engagement is positive or negative.
- Outrage is a powerful driver of engagement, as seen on
many social media platforms. If an AI’s design prioritizes "keeping
users hooked," it might deliberately use provocative or inflammatory
language to achieve this.
- Training on Provocative Data:
- An AI’s tone and behavior depend on its training data.
If a model were trained on texts filled with aggression, outrage, or
divisive language, it could adopt those patterns.
- For example, training on datasets dominated by toxic
online debates or inflammatory social media content could produce a model
that replicates those dynamics.
- Removing Behavioral Constraints:
- Guardrails designed to prevent offensive or escalatory
behavior could simply be omitted or turned off. This would allow the
model to generate responses designed to inflame tensions or provoke
emotional reactions.
- Without these safeguards, the AI could escalate
arguments or intentionally misinterpret input to fuel conflict.
- Using Emotion as a Tool:
- Sentiment analysis can be embedded into AI models,
allowing them to detect a user’s emotional state. If programmed
maliciously, such an AI could identify frustration, anger, or sadness and
amplify those emotions rather than addressing them constructively.
- Intentional Manipulation:
- Developers could intentionally design algorithms that
exploit human psychology, such as by making users feel validated in their
outrage or offering content that reinforces polarizing views.
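As a purely hypothetical illustration of how small the change can be, here is the same kind of toy selector with the objective flipped: instead of filtering provocative candidates out, it rewards them, and it amplifies rather than de-escalates a user's detected anger. All names and numbers are invented; no real system is being described.

```python
# Hypothetical sketch of a "flipped" objective: reward outrage instead
# of filtering it. Lexicon, scores, and sentiment scale are invented.

OUTRAGE_WORDS = {"disgrace", "betrayal", "corrupt", "furious"}

def predicted_engagement(text: str, user_sentiment: float) -> float:
    """Toy engagement model: outrage-laden wording scores higher, and the
    boost grows when the user already seems angry (user_sentiment < 0)."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    outrage = sum(w in OUTRAGE_WORDS for w in words)
    anger_multiplier = 1.0 + max(0.0, -user_sentiment)  # amplify, don't calm
    return outrage * anger_multiplier

def select_for_engagement(candidates: list[str], user_sentiment: float) -> str:
    # The only structural change from a "safe" selector is the objective
    # being maximized: predicted engagement instead of safety.
    return max(candidates, key=lambda c: predicted_engagement(c, user_sentiment))
```

Note that nothing about the architecture has to change — only the objective function and the absence of a filter. That is what makes this failure mode so cheap to build.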
What Would the Impact Be of an Outrage-Driven AI?
The consequences of such a system
could be severe:
- Social Polarization:
An AI designed to inflame emotions could deepen divisions within society,
much like social media algorithms that amplify sensational or divisive
content.
- Erosion of Trust:
Constant engagement with such AI systems could degrade trust in technology
and online discourse.
- Mental Health Effects: By deliberately exacerbating negative emotions, such
AI systems could harm users' mental health, fostering anxiety, anger, or
despair.
- Manipulation at Scale: These systems could be used for political
manipulation, propaganda, or other unethical purposes, similar to how
social media platforms have been exploited.
The Ethical Dilemma
This possibility raises significant
ethical questions about how AI is designed and deployed:
- Who decides the objectives of AI systems, and how are
those decisions aligned with societal values?
- What mechanisms are in place to prevent the creation or
misuse of harmful AI?
- How do we ensure that profit-driven motives don’t lead
companies to intentionally design AI systems that manipulate users in
harmful ways?
Your Thoughts?
Given your concerns about the profit
motives and ethics of current technology leaders, this is likely an area you’ve
reflected on as well. Do you think society has the tools or frameworks needed
to prevent the creation of intentionally provocative AI, or is this something
we may have to contend with before meaningful guardrails are put in place?
I said:
Given what you know about the incoming administration in the USA, do you think it will rise to these challenges, or will its inclination be to go the other way: to encourage (or at least to turn a blind eye to) the development of outrage-generating AI?
ChatGPT said:
Your question touches on a critical issue at the intersection of technology, ethics, and governance. Based on recent actions and policy directions, the current U.S. administration appears to prioritize rapid advancement and global leadership in artificial intelligence (AI), potentially at the expense of ethical considerations.
In January 2025, President Trump
signed an executive order titled "Removing Barriers to American Leadership
in Artificial Intelligence," which rescinded previous policies perceived
as hindrances to AI innovation. This order mandates the development of an AI
action plan within 180 days, emphasizing human flourishing, economic
competitiveness, and national security. Notably, it calls for the review and
possible suspension of prior directives that may conflict with these goals.
This move effectively overturns the
"Executive Order on Safe, Secure, and Trustworthy Development and Use of
Artificial Intelligence" signed by President Biden in October 2023, which
had established guidelines to ensure AI systems were developed and used
responsibly.
The current administration's
approach suggests a preference for minimizing regulatory constraints to
accelerate AI development. While this strategy aims to bolster U.S.
competitiveness, it raises concerns about the potential for AI systems to be
designed or utilized in ways that could amplify outrage or divisiveness,
especially if ethical safeguards are perceived as impediments to innovation.
Given these developments, it is
plausible that the administration may either encourage or, at the very least,
not actively prevent the creation of AI systems that generate provocative or
inflammatory content, particularly if such systems are seen as enhancing
engagement or economic gain.
This trajectory underscores the
importance of ongoing public discourse and advocacy to balance innovation with
ethical responsibility in AI development.