In a recent technical paper, Apple detailed the inner workings of its Apple Intelligence AI features, set to debut soon on iOS, macOS, and iPadOS. The paper addresses concerns about data usage, reiterating that no private user data was used to train the models. Instead, Apple relied on a mix of publicly available, licensed, and open-source data.
The Apple Foundation Models (AFM) were trained on a massive dataset of 6.3 trillion tokens, including web data, licensed publisher content, and open-source code. Apple says it "license-filtered" the code so that only code with minimal usage restrictions was included. Math-focused data and human feedback were also used to refine the models and mitigate potential issues.
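The paper does not publish the filtering pipeline itself, but the idea behind license filtering is simple: keep only code released under permissive licenses. The Python sketch below illustrates the concept; the license allowlist and the repository metadata format are illustrative assumptions, not Apple's actual implementation.

# Hypothetical sketch of "license filtering" a code corpus: keep only
# repositories whose license imposes minimal usage restrictions.
# The allowlist and metadata schema are assumptions for illustration.
PERMISSIVE_LICENSES = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause"}

def license_filter(repos):
    """Yield only repositories published under a permissive license."""
    for repo in repos:
        license_id = (repo.get("license") or "").lower()
        if license_id in PERMISSIVE_LICENSES:
            yield repo

corpus = [
    {"name": "example-lib", "license": "MIT"},
    {"name": "copyleft-tool", "license": "GPL-3.0"},
]
print([r["name"] for r in license_filter(corpus)])  # prints ['example-lib']

A real pipeline would also have to handle repositories with missing, dual, or non-standard license declarations, which is where much of the practical difficulty lies.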
Ethical Considerations and Controversies
Apple's approach to AI training has sparked debate, particularly regarding the use of open-source code and web scraping.
Some question the legitimacy of training AI models on open-source code without explicit consent or adherence to licensing terms, even when that code is freely available. Webmasters can block Apple's crawler, Applebot, but individual creators whose work appears on sites they do not control may still be affected.
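For site owners, the blocking mechanism is the standard robots.txt protocol. A minimal example follows; Apple also documents a separate Applebot-Extended user agent that opts a site's content out of AI training while leaving ordinary search crawling intact.

# robots.txt: block Apple's crawler entirely
User-agent: Applebot
Disallow: /

# Or keep search crawling but opt out of AI training
User-agent: Applebot-Extended
Disallow: /

Opting out this way only protects domains the creator controls, which is precisely the gap critics point to: content republished elsewhere remains crawlable.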
Companies like Apple must navigate legal gray areas: the fair use doctrine may protect some data scraping practices, but the outcomes of ongoing lawsuits will ultimately decide the fate of generative AI models and how they can be trained.
By publishing this paper, Apple aims to demonstrate its commitment to responsible AI development, grounded in the company's stated values. However, potential legal exposure may limit how much companies are willing to disclose in papers like this one. As the AI industry expands, the evolving ethical landscape and ongoing legal battles will significantly shape the future of generative AI.