Preparing for a Pivotal Conversation: My Upcoming Call with Representative Derek Kilmer's Office

On April 19, 2024, I received a missed call from the office of Representative Derek Kilmer, who serves Washington’s 6th Congressional District. The voicemail was a polite invitation to reconnect regarding my previous correspondence about the Responsible Advanced Artificial Intelligence Act of 2024. As someone deeply embedded in the data analytics profession and committed to ethical AI advocacy through my foundation, Voice for Change, this call is more than a follow-up; it is an opportunity to influence policy directly.

Representative Kilmer is an active member of the New Democrat Coalition's AI Working Group, which focuses on the governance and societal implications of artificial intelligence as it becomes increasingly integrated into daily life. The group is particularly concerned with ensuring that AI technology supports economic growth and public welfare while addressing risks such as misinformation and job displacement. Recently, the working group, led by Kilmer, sent a letter (summarized below) requesting that leaders of prominent generative AI and social media platforms provide Congress with detailed reports on their efforts to monitor, identify, and disclose deceptive synthetic media content.

This initiative aligns closely with Meta's recent announcement that it plans to label AI-generated content across its platforms, including Facebook, to enhance transparency and user awareness. These labels are meant to inform users when they are viewing content that has been manipulated or generated by AI, with more prominent labels for content that poses a higher risk of deceiving the public. Meta's move is part of a broader industry response to growing concern over AI-generated misinformation, especially as deepfake technologies become more sophisticated.

In a statement, Congressman Gerry Connolly highlighted the urgent need for clear and effective AI governance to protect public trust and security, reinforcing the importance of initiatives like those undertaken by Kilmer and the New Democrat Coalition. Connolly’s insistence that AI advancements be accompanied by robust safeguards resonates with the proactive steps Kilmer’s group advocates, ensuring that AI technologies are developed and used responsibly.

“The use of Artificial Intelligence can provide us with valuable tools and cost-saving measures, but any developments must be accompanied by protection from abuse and malpractice. That is why I voted in favor of the CHIPS and Science Act of 2022, which highlighted the importance for democratizing access to national AI research cyberinfrastructure. Furthermore, we must also support policies that preserve the integrity of truth and provide transparency to users. That is why I joined a letter, led by Rep. Kilmer and the New Democrat Coalition's AI Working Group, that requests leaders of prominent generative AI and social media platforms provide information to Congress outlining their efforts to monitor, identify, and disclose deceptive synthetic media content.”

This Tuesday, April 23rd, I will have the chance to speak with Rep. Kilmer’s team. My goal is to ensure that the critical issues surrounding AI and automation are not only understood but urgently addressed. Here are the key questions and points I plan to bring up during the conversation:

  1. Details on AI Working Group Initiatives:

    • "Could you elaborate on the specific goals of the New Democrat Coalition's AI Working Group regarding synthetic media? What measures are being considered to enhance the detection and disclosure of AI-generated content, including deepfakes and propaganda, across various platforms?"

  2. Impact of Meta’s AI Labeling Plan:

    • "With Meta planning to start labeling AI-generated content, what role does Congress see for itself in standardizing such practices across all digital platforms? How do you propose to ensure these measures are uniformly applied, including smaller platforms that may lack the resources of a company like Meta, particularly in the context of the upcoming presidential elections?"

  3. Coordination with Social Media Giants:

    • "How is the New Democrat Coalition working with major tech companies like Meta to align federal legislative goals with corporate policies on AI content labeling? What steps are being taken to ensure these policies effectively mitigate the risk of misinformation and propaganda?"

  4. Legislative Actions Post-Meta's Announcement:

    • "Following Meta's announcement, what legislative actions are being considered to support or enforce the labeling of AI-generated content, including deepfakes? Are there plans to introduce new regulations or to modify existing laws to address these issues more comprehensively?"

  5. Regulation of Synthetic Media:

    • “What concrete steps should Congress take to ensure that all AI-generated media, particularly involving political figures, are accurately labeled to prevent misinformation, including deepfakes? How will these regulations evolve with advancing technologies and the challenges posed by upcoming presidential elections?”

  6. Specific Provisions of the AI Act:

    • “Could you discuss specific provisions that you consider crucial for the Responsible Advanced Artificial Intelligence Act of 2024, particularly those that protect the workforce and address societal impacts, including the threats posed by deepfakes and AI-generated propaganda?”

The answers to these questions will not only deepen our understanding of Rep. Kilmer’s positions but also help shape my advocacy efforts going forward. This conversation is a chance to influence AI policy that could potentially reshape our economy and society in profound ways.

Summary of the New Democrat Coalition's Letter on Deceptive Synthetic Media

Here is a bullet-point summary of the November 8, 2023 letter, led by New Democrat Coalition Artificial Intelligence Working Group Chair Derek Kilmer and addressed to the CEOs of leading AI and social media companies, concerning deceptive synthetic media:

  • Leadership and Initiative: Derek Kilmer, chair of the New Dem AI Working Group, spearheaded a letter co-signed by 30 members of the New Democrat Coalition directed at CEOs of major AI and social media companies.

  • Concerns Raised: The letter highlights the growing concerns regarding the ease of creating and viewing deceptive synthetic media, such as deepfakes and synthetic images, which pose risks of spreading misinformation and disinformation.

  • Impact on Public Discourse: The letter emphasizes the complications these technologies introduce in combating false narratives about critical current events across politics, culture, natural disasters, and more.

  • Call for Corporate Accountability: The letter urges these companies to disclose their efforts in identifying, monitoring, and controlling deceptive synthetic media on their platforms and to collaborate with lawmakers to devise effective solutions.

  • Democratic Implications: Kilmer stresses the importance of collaboration between AI content creators, distributors, and legislators to manage the risks associated with deceptive synthetic media, highlighting the necessity for tools that help users distinguish between authentic and AI-generated content to protect democracy.

  • Specific Requests: The letter requests detailed information on the companies' efforts to monitor, identify, and disclose deceptive media, and calls for a cooperative approach to developing educational tools and disclosure mechanisms, especially for content that could manipulate public opinion.

  • Election Security Concerns: Given the upcoming 2024 election cycle and the polarized political environment, the letter expresses concern about the potential for deceptive synthetic media to exacerbate misinformation, potentially influencing elections and causing civic unrest.

  • Deadline for Response: The companies are asked to respond by December 8, 2023, with information about their efforts and any progress in addressing these critical issues.

The letter reflects a significant effort by lawmakers to engage directly with technology leaders to ensure that the rapid advancement of AI technologies does not undermine public trust or democratic processes.

Meta's Multilayered Defense Against Indirect Election Interference and Ethical Concerns

Policy Rationale:

  • No comprehensive list can define all types of misinformation, because what is false changes constantly and depends on context and individual knowledge.

  • Policies categorize different types of misinformation and provide guidance for treatment.

  • The balance of values: expression, safety, dignity, authenticity, and privacy is considered.

  • Misinformation Removal:

    • Direct risk to physical harm: Content removed if it could lead to imminent violence or physical harm.

    • Political process interference: Removal of content likely to disrupt the functioning of political processes.

    • Expert partnership: Collaboration with experts to assess the truth of content and its potential harm.

  • Reduction and Dialogue:

    • For non-harmful misinformation, focus is on reducing prevalence and fostering productive dialogue.

    • Recognition that misinformation can be used harmlessly or represent personal beliefs.

    • Slowing the spread of hoaxes and viral misinformation is a priority.

Third-party Fact-Checking:

  • Partnership with fact-checkers to review and rate content accuracy.

  • Resources provided to increase media and digital literacy.

  • Labeling of potentially misleading content on important matters.

Overlapping Content and Behavior Prohibition:

  • Prohibition of fake accounts, fraud, and coordinated inauthentic behavior.

Policy Evolution:

  • Ongoing evolution of policies in response to changes in online and offline environments.

  • Consequences for entities repeatedly sharing misinformation: reduced distribution, advertising limitations, or removal from platforms.

Types of Removed Misinformation:

  • Physical Harm or Violence: Removal of content likely to contribute to imminent harm, based on expert consultation.

  • Harmful Health Misinformation: Removal of false health information determined to be directly harmful, including specific misinformation about vaccines and health emergencies.

  • Voter or Census Interference: Removal of false information that could interfere with voting or census participation processes.

  • Manipulated Media: Removal of videos edited or synthesized in ways that are not apparent to an average person and would likely mislead.

  • Content Digitally Created or Altered: Informative labels placed on content that may deceive the public on important matters without otherwise violating Community Standards.

Kevin:

See Meta's Policy Rationale above, current as of today. In the policy regarding "Voter or Census Interference," how does the platform plan to address and mitigate the risks of indirect forms of interference, such as external pressures or unethical influences from political entities that may not manifest as direct misinformation but could still impact the integrity of the electoral process?

ChatGPT:

Meta's policy does articulate a comprehensive stance against misinformation that directly risks interference with the functioning of political processes, such as voter and census participation. The policy, however, does not specifically address the scenario of indirect forms of interference, like potential unethical influences or pressures from political entities that may include bribery or other forms of manipulation which could sway public opinion or voter behavior.

Kevin:

In situations where indirect interference in the electoral process may occur, such as potential unethical influences from political figures that do not constitute direct misinformation but could nonetheless impact the integrity of elections, how does Meta plan to detect and address these subtler forms of manipulation? Does Meta have a strategy or policy in place to identify and counteract these indirect pressures to uphold election integrity?

See the extract above of the contentious recorded call between former President Donald Trump and Georgia Secretary of State Brad Raffensperger following the 2020 election, in which Trump was heard requesting that officials "find" 11,780 votes, the number that would have secured him a victory in the state. The call has raised concerns about practices bordering on bribery. Given that similar tactics may be employed in upcoming elections, potentially leveraging technology to influence voters, tech CEOs like Mark Zuckerberg and Elon Musk (CEO of X, formerly Twitter) must be subject to a thorough governmental review process. This is essential to ensure that the United States continues to uphold its democratic values and steers clear of any drift toward authoritarianism.

Stay tuned, as I will be sharing insights from the call and the outcomes of our discussion. Together, we can ensure that AI development benefits society as a whole, safeguarding against the risks while promoting technological advancements.

This blog post will be followed up with a detailed account of the call, aiming to keep all stakeholders informed and engaged in the ongoing dialogue about the responsible use of AI. Stay connected and advocate for change alongside us at Voice for Change Foundation.
