Reposted from Taylor English Insights

Senate Pushing AI Inquiry and Oversight

Several members of the US Senate are asking for government watchdog reporting on the uses and threats of AI, and for a regulatory framework to govern its development. Their actions mirror similar efforts in other countries.

Why It Matters

Industries from fast food to advertising to medicine to law (infamously, a lawyer recently submitted an AI-written brief in which the AI tool simply made up supporting caselaw where there was none) are poised to use AI to cut costs and improve efficiency. Because AI is so new and so potentially powerful, however, many tech companies and regulators are calling for it to have guardrails in place before it is widely adopted.

AI often produces "information" that is wrong, unintentionally: it is only as good as the information used to train it, which is not always apparent to users. If a non-expert user gets a wrong result, and that result is not vetted by a human with expertise, the consequences could be comical (a recent campaign ad showed a woman with three arms) or grave (an incorrect AI-generated medical diagnosis). In addition, most information in the digital age does not go away once published. That means a widely distributed AI piece can worm its way into the consciousness of many people as "fact" with little effort or cost -- and correcting or retracting it can be nearly impossible.

These are only some of the issues that regulators and Big Tech want to contain as AI emerges onto the scene.

In their letter sent Friday, the senators asked the nonpartisan government agency to conduct a "detailed technology assessment" of the risks of generative AI tools and how to mitigate them.
