Responsible AI Governance in Traditional and Emerging Ecosystems

Authors: Nina Bryant and Susannah Esteban
Date Published: 25 January 2024

Recent advances in artificial intelligence (AI) are developing at superhuman speed. As the AI era rapidly gains momentum in enterprise environments, organizations face a new data ecosystem that must be managed with robust, proactive governance. As with all new technologies, this will require careful thought about how to balance the advances in innovation with the potential impact and harm on individuals. Although this is not a wholly new concept (society has faced similar issues since the dawn of the industrial revolution), the difference now is the speed of change and the incredible power to analyze vast quantities of data, which stands to affect not only corporate revenue streams but also personal outcomes related to finance, insurance, health and careers, as well as businesses and society more broadly.

For example, research has identified key risks in AI algorithms, including racial, gender and socioeconomic bias as well as age-verification issues. Data protection risks are also significant. For example, what data is collected to train large language models, who owns the data outputs when individuals and employees communicate with tools such as ChatGPT, and what happens to personal data that may be submitted? There have already been data breaches in which users were able to access other users' chat histories, underscoring the importance of data privacy and security. There is also the issue of accuracy: this technology fundamentally predicts the most likely result based on all previous combinations of the language. The data used to train a model is therefore critical to ensuring accuracy, which can also degrade over time depending on how the system is architected.
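The point about accuracy can be made concrete with a toy sketch. This is not how any production language model works internally, but a bigram frequency table (built from an entirely made-up corpus) illustrates the same core mechanic: the "prediction" is only as good as the data the model was trained on, and anything outside that data simply cannot be predicted.

```python
from collections import Counter, defaultdict

# Illustrative toy only: a model that predicts the most likely next word
# from bigram counts. The training text below is invented for this example.
corpus = "data drives insight and data drives governance and governance drives trust".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("data"))   # "drives" — seen twice in training data
print(predict_next("trust"))  # None — never followed by anything in training
```

The second call shows the limitation: the model has no answer for anything its training data did not cover, which is why the provenance and currency of training data matter so much for accuracy.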

Faced with these and other concerns, privacy, legal, compliance and IT leaders need to move faster to establish, reinforce and adapt governance structures around AI applications and the new flows of data that stem from them. Although regulations are still being developed and organizations are awaiting more specific guidance, there are important steps that can be taken to ensure compliance with data privacy requirements and avoid potential ethical issues in new AI implementations.

Several groups are developing standards (both technical and governance standards), which provide a useful starting point for organizations to begin tackling this new landscape. These include the Organisation for Economic Co-operation and Development (OECD), the International Organization for Standardization (ISO), the US National Institute of Standards and Technology (NIST) and the European Data Protection Board. In addition to following ISO 27701 for privacy and ISO 31700 for privacy by design, organizations can look to the large number of standards ISO has developed to address the various system elements and functions of AI solutions, as well as standards around AI risk management, bias in AI, automated decision making and governance. Similarly, NIST has published an AI Risk Management Framework, which establishes practical measures for achieving strong governance, along with its Trustworthy and Responsible AI Resource Center. The recent US White House Executive Order on AI has also highlighted NIST as key to developing standards and guidance, and the NIST AI framework as a major enabler for organizations to ensure effective and proactive governance of AI solutions. Importantly, while global regulations are being developed, organizations should look to utilize early standards and known best practices to mitigate risk while allowing for innovation until further requirements are confirmed.

Data privacy best practices are also a crucial element of establishing AI governance structures. One of the lessons from implementing privacy programs is that enabling compliance with a complex set of global regulations requires holistic collaboration across an enterprise in partnership with key stakeholders. In addition, the fundamentals that are central to strong privacy, including creating cultures of compliance, gauging risk appetites across an organization and assessing the potential for individual harm, can be leveraged for AI governance.

High privacy standards for AI can and should be addressed at the development level. Privacy standards and best practices can be transposed into checkpoints and controls at each step of the software development lifecycle. To illustrate, in clinical trials and other research involving human subjects, there is always an ethical review of the proposed process and a need to identify current and emerging risk to assess the impact of a product, process or study. These standards are followed by many industries and provide a reasonable foundation for the development of AI and other impactful technologies. This approach can foster trust and reduce the likelihood of adverse impacts. In practice, privacy leads can partner with developers to ensure that steps such as AI model assessment, accounting for the lineage of data sources, the currency and accuracy of datasets, and risk categorization and assessments are all completed appropriately and reviewed by a committee of data privacy, security, compliance and ethics experts.
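One way to operationalize those checkpoints is as explicit release gates in the development pipeline. The sketch below is hypothetical (the checkpoint names and `AIModelRelease` class are illustrative, not part of any named framework); it shows the shape of the idea: a model cannot ship until every governance step is signed off.

```python
from dataclasses import dataclass, field

# Hypothetical governance gates mirroring the steps described above;
# names are illustrative, not drawn from any real framework.
REQUIRED_CHECKPOINTS = [
    "model_assessment",           # AI model evaluated before release
    "data_lineage_documented",    # provenance of each data source accounted for
    "dataset_accuracy_reviewed",  # currency and accuracy of datasets checked
    "risk_categorization",        # risk tier assigned and assessed
    "committee_review",           # privacy/security/compliance/ethics sign-off
]

@dataclass
class AIModelRelease:
    name: str
    completed: set = field(default_factory=set)

    def complete(self, checkpoint: str) -> None:
        """Record sign-off for one governance checkpoint."""
        if checkpoint not in REQUIRED_CHECKPOINTS:
            raise ValueError(f"Unknown checkpoint: {checkpoint}")
        self.completed.add(checkpoint)

    def ready_to_ship(self) -> bool:
        """Release only when every checkpoint has been signed off."""
        return set(REQUIRED_CHECKPOINTS) <= self.completed

release = AIModelRelease("demo-model")
release.complete("model_assessment")
print(release.ready_to_ship())  # False: remaining gates are not yet signed off
```

The design choice worth noting is that the gate is a hard precondition rather than advisory documentation, mirroring how ethical review boards block, not merely annotate, a study that has not cleared review.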

Emerging ecosystems such as blockchain and the metaverse are additional considerations, and there is significant overlap between these advances and AI. For example, blockchain can record actions and decisions made by layered AI agents and systems, with auditable logs of what has happened and when. This provides proof of where data came from and evidence of any tampering with underlying data, allowing results to be evaluated based on the reputation and authenticity of data sources. In addition, digital ownership can permit provable ownership and provenance of output models and generated content, recognizing contributions made by virtual or hidden AI systems. Using blockchain-based digital tokens, data provenance within AI implementations can be supported by enabling the tracking and authentication of the origin of AI-generated content. For example, a news article created by AI could be associated with a token, ensuring that the provenance of the content is verifiable and secure and making it easier to distinguish authentic information from forgeries.
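The tamper-evidence property described above comes from hash chaining, which can be sketched without any blockchain infrastructure at all. The following is a minimal illustration under assumed design choices (SHA-256 over canonical JSON; the record fields are invented): each log entry embeds the hash of the previous one, so altering any earlier record breaks every subsequent link.

```python
import hashlib
import json

# Minimal sketch of a hash-chained audit log for AI actions. This is an
# illustration of the tamper-evidence idea, not a real blockchain: field
# names and the two sample actions are invented for this example.
def add_record(chain, action, payload):
    """Append a record whose hash covers its content plus the prior hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"action": action, "payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash in order; any altered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"action": rec["action"], "payload": rec["payload"], "prev": prev}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
add_record(log, "generate", {"model": "news-writer", "article_id": 42})
add_record(log, "publish", {"article_id": 42})
print(verify(log))                      # True: chain is intact
log[0]["payload"]["article_id"] = 99    # tamper with underlying data
print(verify(log))                      # False: tampering is detectable
```

A real blockchain adds distributed replication and consensus on top of this, so no single party can quietly rewrite the chain, which is what makes the audit trail credible to outside verifiers.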

Ultimately, innovation almost always runs ahead of governance, and even further ahead of regulation. AI and machine learning tools have existed in various forms for many years, but only now is a concerted effort being made to embed governance, transparency, explainability and fairness in response to global concerns around the risk of harm and associated legislation. This is an inevitable conundrum: how to exploit the wealth of data within an organization while ensuring the trusted buy-in of consumers, customers, partners and governments. Such buy-in can only be achieved if privacy, risk and compliance leaders build trust and develop solutions based on appropriate algorithms and technology with accurate models and data, ensuring that outcomes are fair and transparent.

Editor's note: For more on this topic, read the authors' recent ISACA Journal article, "Data Is the Lifeblood of the Enterprise: To Thrive, Governance Must Drive Insights," ISACA Journal, vol. 3, 2023.
