TY - GEN
T1 - A Framework for Safe AI
T2 - 7th IEEE International Conference on Artificial Intelligence Testing, AITest 2025
AU - Tsai, Wei Tek
AU - Zhang, Li
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Large Language Models have become a foundational component of modern artificial intelligence, but their development is often hindered by inadequate data governance, resulting in challenges such as hallucinations, intellectual property violations, and security vulnerabilities. In light of emerging regulatory requirements, this paper presents a Collaborative Safe AI Framework (CSAIF) for building safe AI systems through robust data lifecycle management and ecosystem collaboration. This paper analyzes governance principles drawn from the U.S. Blueprint for an AI Bill of Rights and the EU AI Act, emphasizing transparency, traceability, explainability, and auditability. Existing industry practices are reviewed to identify current strengths and limitations. This paper then introduces an approach that treats data as verifiable digital assets and a Data Container architecture to encapsulate both content and governance metadata. This design enables version control, access management, data sovereignty, and usage logging across the AI model lifecycle. The CSAIF defines the responsibilities of data providers, validation entities, application developers, and regulatory actors, and outlines a process that ensures data integrity, lawful use, and accountability. By integrating technical safeguards with operational oversight, the proposed CSAIF establishes a trustworthy foundation for developing and deploying AI models in compliance with legal and ethical standards.
AB - Large Language Models have become a foundational component of modern artificial intelligence, but their development is often hindered by inadequate data governance, resulting in challenges such as hallucinations, intellectual property violations, and security vulnerabilities. In light of emerging regulatory requirements, this paper presents a Collaborative Safe AI Framework (CSAIF) for building safe AI systems through robust data lifecycle management and ecosystem collaboration. This paper analyzes governance principles drawn from the U.S. Blueprint for an AI Bill of Rights and the EU AI Act, emphasizing transparency, traceability, explainability, and auditability. Existing industry practices are reviewed to identify current strengths and limitations. This paper then introduces an approach that treats data as verifiable digital assets and a Data Container architecture to encapsulate both content and governance metadata. This design enables version control, access management, data sovereignty, and usage logging across the AI model lifecycle. The CSAIF defines the responsibilities of data providers, validation entities, application developers, and regulatory actors, and outlines a process that ensures data integrity, lawful use, and accountability. By integrating technical safeguards with operational oversight, the proposed CSAIF establishes a trustworthy foundation for developing and deploying AI models in compliance with legal and ethical standards.
KW - blockchain
KW - data governance
KW - large language models
KW - regulatory compliance
UR - https://www.scopus.com/pages/publications/105016250299
U2 - 10.1109/AITest66680.2025.00008
DO - 10.1109/AITest66680.2025.00008
M3 - Conference contribution
AN - SCOPUS:105016250299
T3 - Proceedings - 2025 IEEE International Conference on Artificial Intelligence Testing, AITest 2025
SP - 11
EP - 23
BT - Proceedings - 2025 IEEE International Conference on Artificial Intelligence Testing, AITest 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 21 July 2025 through 24 July 2025
ER -