Message Recipients
Your U.S. Senators
Customize Your Message
Dear [Recipient's Name],
Senator Moody and Senator Scott,

The White House has released A National Policy Framework for Artificial Intelligence. This framework does not provide the protection minors need, and it will not allow Florida to strengthen regulations beyond the federal framework. Our legislative session ended without AI regulations being passed; our state representatives passed the ball to the federal level. There must be guardrails in place that protect minors, including parental opt-in for minor use of AI technology, and that strengthen parental rights and controls over the collection of minors' data. Our children are our priority and must be protected. Please ensure that the federal guardrails are strengthened, and that preemption does not handcuff Florida from enacting its own regulations to ensure the best protections are in place for Florida's children.

Some important information concerning AI is below.

According to Common Sense Media: 1 out of 3 children are choosing an AI companion over a human for serious conversations. According to the APA: adolescents are less likely than adults to question the accuracy and intent of information offered by a bot as compared with a human.

Fully 83 percent of likely voters say they are concerned about the development of AI. By an 81 percent to 10 percent margin, likely voters say AI companies should have safeguards to protect consumers and children rather than operate without restrictions. When given a choice between a candidate who supports AI innovation but wants safeguards to protect the public, especially minors, from harmful content and misinformation, and a candidate who says the U.S. cannot place restrictions on AI because it would allow China to get ahead, voters sided with the pro-safeguards candidate by a 77 percent to 13 percent margin.
When asked about specific policy proposals, voters overwhelmingly agree with the need to hold AI companies accountable:

- 88 percent of voters agree AI chatbots should be banned from discussing ways of committing suicide with users.
- 88 percent agree the government should hold companies liable if their AI technology gives inaccurate or dangerous information that leads to harm for its users.
- 87 percent agree AI companies and their leadership should be held liable if a court finds their technology was responsible for a child's suicide.
- 86 percent agree billionaires are buying off politicians to say AI can't be restricted.