{"id":76,"date":"2024-09-27T11:02:31","date_gmt":"2024-09-27T15:02:31","guid":{"rendered":"https:\/\/ar.bu.edu\/2024\/?page_id=76"},"modified":"2024-11-18T14:21:16","modified_gmt":"2024-11-18T19:21:16","slug":"taking-a-scalpel-to-ai","status":"publish","type":"page","link":"https:\/\/ar.bu.edu\/2024\/ai\/taking-a-scalpel-to-ai\/","title":{"rendered":"Taking a Scalpel to AI"},"content":{"rendered":"<p><strong>A first-of-its-kind program is helping Boston University computer scientist <a href=\"https:\/\/www.bu.edu\/cds-faculty\/profile\/mark-crovella\/\">Mark Crovella<\/a> investigate AI\u2014asking, can we trust it? Should we trust it? Is it safe? Is it perpetrating biases and spreading misinformation?<\/strong><\/p>\n<p>The National Artificial Intelligence Research Resource (NAIRR) Pilot, backed by the National Science Foundation and Department of Energy, aims to bring a new level of scrutiny to AI\u2019s promise and peril by giving 35 projects, <a href=\"https:\/\/www.bu.edu\/articles\/2024\/ai-biased-spreading-misinformation\/\">including Crovella\u2019s<\/a>, access to advanced supercomputing resources and data at top national laboratories.<\/p>\n<p>A professor of computer science and chair of academic affairs in the Faculty of Computing &amp; Data Sciences, Crovella will audit a type of AI known as large language models (LLMs). These software tools help drive everything from ChatGPT to automated chatbots to smart speaker assistants. <a href=\"https:\/\/www.bu.edu\/cs\/profiles\/evimaria-terzi\/\"><strong>Evimaria Terzi<\/strong><\/a>, professor in the Department of Computer Science, will join Crovella on the project.<\/p>\n<figure id=\"attachment_488\" aria-describedby=\"caption-attachment-488\" style=\"width: 646px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" src=\"\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit-636x636.jpg\" alt=\"\" width=\"636\" height=\"636\" class=\"wp-image-488 size-medium\" srcset=\"https:\/\/ar.bu.edu\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit-636x636.jpg 636w, https:\/\/ar.bu.edu\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit-1024x1024.jpg 1024w, https:\/\/ar.bu.edu\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit-150x150.jpg 150w, https:\/\/ar.bu.edu\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit-768x768.jpg 768w, https:\/\/ar.bu.edu\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit-300x300.jpg 300w, https:\/\/ar.bu.edu\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit-600x600.jpg 600w, https:\/\/ar.bu.edu\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit-550x550.jpg 550w, https:\/\/ar.bu.edu\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit-710x710.jpg 710w, https:\/\/ar.bu.edu\/2024\/files\/2024\/10\/18-1593-MARKPOEM-017_edit.jpg 1032w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><figcaption id=\"caption-attachment-488\" class=\"wp-caption-text\">Computer scientist <strong>Mark Crovella<\/strong> received a novel federal grant to audit a type of AI known as large language models.<\/figcaption><\/figure>\n<p>The use of LLMs is spreading rapidly, Crovella says, finding uses in education, social settings, and research, among many other areas. Apple, Microsoft, and Meta have all announced integrations of LLMs into their product lines. 
In the near future, Crovella predicts, we will each have our own personalized LLM that will know a lot about us and help with tasks on a minute-to-minute basis.<\/p>\n<p>Therefore, it\u2019s critical to understand whether such models incorporate biases against protected groups, tendencies to propagate extreme or hateful views, or conversational patterns that steer users toward unreliable information.<\/p>\n<blockquote><p>\u201cHow will we know that these \u2018giant and inscrutable\u2019 systems are trustworthy and safe?\u201d<\/p>\n<p>\u2014MARK CROVELLA<\/p><\/blockquote>\n<p>The NAIRR grant means Crovella and Terzi can start analyzing the internals of modern LLMs, which contain a huge amount of knowledge, obtained from vast training data. They also possess an enormously complex system for generating output based on billions of parameters. As a result, the internal representations used in LLMs have been referred to as \u201cgiant and inscrutable.\u201d<\/p>\n<p>\u201cHow will we know that these \u2018giant and inscrutable\u2019 systems are trustworthy and safe?\u201d Crovella says. \u201cIn essence, we want to study an LLM the way a neuroscientist studies a brain in an fMRI machine.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A first-of-its-kind program is helping Boston University computer scientist Mark Crovella investigate AI\u2014asking, can we [&hellip;]<\/p>\n","protected":false},"author":1859,"featured_media":0,"parent":68,"menu_order":3,"comment_status":"closed","ping_status":"closed","template":"bu-landing","meta":[],"_links":{"self":[{"href":"https:\/\/ar.bu.edu\/2024\/wp-json\/wp\/v2\/pages\/76"}],"collection":[{"href":"https:\/\/ar.bu.edu\/2024\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ar.bu.edu\/2024\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ar.bu.edu\/2024\/wp-json\/wp\/v2\/users\/1859"}],"replies":[{"embeddable":true,"href":"https:\/\/ar.bu.edu\/2024\/wp-json\/wp\/v2\/comments?post=76"}],"version-history":[{"count":10,"href":"https:\/\/ar.bu.edu\/2024\/wp-json\/wp\/v2\/pages\/76\/revisions"}],"predecessor-version":[{"id":867,"href":"https:\/\/ar.bu.edu\/2024\/wp-json\/wp\/v2\/pages\/76\/revisions\/867"}],"up":[{"embeddable":true,"href":"https:\/\/ar.bu.edu\/2024\/wp-json\/wp\/v2\/pages\/68"}],"wp:attachment":[{"href":"https:\/\/ar.bu.edu\/2024\/wp-json\/wp\/v2\/media?parent=76"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}